Vulnerability Analysis and Adversarial Learning

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Computer Science & Engineering".

Deadline for manuscript submissions: 30 June 2024 | Viewed by 3545

Special Issue Editors

Dr. Zhi Wang
College of Cyber Science, Nankai University, Tianjin 300350, China
Interests: binary rewriting; malware detection

Dr. Wanpeng Li
Department of Computing Science, University of Aberdeen, Aberdeen AB24 3UE, UK
Interests: web security; identity management; authentication; cryptography; machine learning

Special Issue Information

Dear Colleagues,

Traditionally, vulnerabilities have stemmed from flawed software design, non-standard programming practices, and similar causes. Recently, vulnerability analysis of intelligent systems, especially of machine learning algorithms, has been attracting increasing attention.

The vulnerabilities of machine-learning algorithms and models are the foundation of adversarial learning, which is a novel research area that lies at the intersection of machine learning and computer security. Adversarial learning aims at gaining a deeper understanding of the security properties of current machine-learning algorithms against carefully targeted attacks, and at developing suitable countermeasures for the design of more secure learning algorithms.

This Special Issue focuses on the vulnerability analysis of information systems, especially intelligent systems. This includes:

  1. Information system vulnerability analysis;
  2. Vulnerability analysis theory and methods;
  3. Machine learning for vulnerability analysis;
  4. Vulnerability analysis of AI algorithms, models, systems;
  5. Formal theory for adversarial learning;
  6. Evaluation metrics for adversarial learning;
  7. Program and binary analysis;
  8. Trustworthy machine learning and AI;
  9. Privacy-preserving machine learning.

Dr. Zhi Wang
Dr. Wanpeng Li
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, you can proceed to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • vulnerability analysis
  • adversarial learning
  • program analysis
  • trustworthy AI
  • privacy-preserving AI

Published Papers (4 papers)


Research

29 pages, 6804 KiB  
Article
Mouse Data Attack Technique Using Machine Learning in Image-Based User Authentication: Based on a Defense Technique Using the WM_INPUT Message
by Wontae Jung, Sejun Hong and Kyungroul Lee
Electronics 2024, 13(4), 710; https://doi.org/10.3390/electronics13040710 - 09 Feb 2024
Viewed by 529
Abstract
Recently, as the non-face-to-face society persists due to the coronavirus (COVID-19), the Internet usage rate continues to increase, and input devices, such as keyboards and mice, are mainly used to authenticate users in non-face-to-face environments. Due to the nature of the non-face-to-face environment, important personal data are processed, and since these personal data include authentication information, it is very important to protect them. As such, personal information, including authentication information, is entered mainly from the keyboard, and attackers use attack tools, such as keyloggers, to steal keyboard data in order to grab sensitive user information. Therefore, to prevent disclosure of sensitive keyboard input, various image-based user authentication technologies have emerged that allow sensitive information, such as authentication information, to be entered via mouse. To address mouse data stealing vulnerabilities via GetCursorPos() function or WM_INPUT message, which are representative mouse data attack techniques, a mouse data defense technique has emerged that prevents attackers from classifying real mouse data and fake mouse data by the defender generating fake mouse data. In this paper, we propose a mouse data attack technique using machine learning against a mouse data defense technique using the WM_INPUT message. The proposed technique uses machine learning models to classify fake mouse data and real mouse data in a scenario where the mouse data defense technique, utilizing the WM_INPUT message in image-based user authentication, is deployed. This approach is verified through experiments designed to assess its effectiveness in preventing the theft of real mouse data, which constitute the user’s authentication information. For verification purposes, a mouse data attack system was configured, and datasets for machine learning were established by collecting mouse data from the configured attack system. 
To enhance the performance of machine learning classification, evaluations were conducted on data organized according to various machine learning models, datasets, features, and generation cycles, and the results highlighting the highest performance in terms of features and datasets were derived. If the mouse data attack technique proposed in this paper is used, attackers can potentially steal the user’s authentication information from various websites or services, including software, systems, and servers that rely on authentication information. It is anticipated that attackers may exploit the stolen authentication information for further damage, such as voice phishing. In the future, we plan to conduct research on defense techniques aimed at securely protecting mouse data, even if the mouse data attack technique proposed in this paper is attempted. Full article
(This article belongs to the Special Issue Vulnerability Analysis and Adversarial Learning)
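The core of the attack described above is a classifier that separates defender-injected fake mouse events from real ones. A minimal sketch of that idea follows; the feature choices (inter-event interval, movement deltas) and the synthetic event generator are assumptions for illustration, not the paper's actual data or features.

```python
# Sketch: random-forest classification of real vs. fake mouse events.
# Assumption: injected fake events have near-constant timer intervals and
# scattered deltas, while human movement is bursty with small smooth deltas.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def synth_events(n, fake):
    if fake:
        interval = rng.uniform(0.001, 0.002, n)   # near-constant injection timer
        delta = rng.uniform(-50, 50, (n, 2))      # scattered jumps
    else:
        interval = rng.gamma(2.0, 0.01, n)        # bursty human timing
        delta = rng.normal(0, 5, (n, 2))          # small smooth moves
    return np.column_stack([interval, delta, np.linalg.norm(delta, axis=1)])

X = np.vstack([synth_events(500, fake=False), synth_events(500, fake=True)])
y = np.array([0] * 500 + [1] * 500)               # 1 = fake mouse data
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_tr, y_tr)
print(f"fake-vs-real accuracy: {clf.score(X_te, y_te):.2f}")
```

On cleanly separated synthetic distributions like these the classifier scores highly; the paper's contribution is showing that comparable separation is achievable on real WM_INPUT traces.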

16 pages, 493 KiB  
Article
BSFuzz: Branch-State Guided Hybrid Fuzzing
by Qi Hu, Weijia Chen, Zhi Wang, Shuaibing Lu, Yuanping Nie, Xiang Li and Xiaohui Kuang
Electronics 2023, 12(19), 4033; https://doi.org/10.3390/electronics12194033 - 25 Sep 2023
Viewed by 829
Abstract
Hybrid fuzzing is an automated software testing approach that synchronizes test cases between the fuzzer and the concolic executor to improve performance. The concolic executor solves path constraints to direct the fuzzer to explore the uncovered path. Despite many performance optimizations for hybrid fuzzing, we observe that the concolic executor often repeatedly performs constraint solving on branches with unsolvable constraints and branches covered by multiple test cases. This can cause significant computational redundancies. To be efficient, we propose BSFuzz, which keeps tracking the coverage state and solving state in a lightweight branch state map. BSFuzz synchronizes the current coverage state of all test cases from the fuzzer’s queue with the concolic executor in a timely manner to reduce constraint solving for high-frequency branches. It also records the branch-solving state during the concolic execution to reduce repeated solving of unsolvable branches. Guided by the coverage state and historical solving state, BSFuzz can efficiently discover and solve more branches. The experimental results with real-world programs demonstrate that BSFuzz can effectively increase the speed of the concolic executor and improve branch coverage. Full article
(This article belongs to the Special Issue Vulnerability Analysis and Adversarial Learning)
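The branch-state map at the heart of this approach can be sketched as a small filter in front of the constraint solver: skip branches that queued test cases already cover, and give up on branches whose constraints have repeatedly failed to solve. The class, method names, and retry threshold below are illustrative assumptions, not BSFuzz's actual implementation.

```python
# Sketch of a lightweight branch-state map in the spirit of BSFuzz.
UNSOLVABLE_LIMIT = 3  # assumed: give up after this many failed solve attempts

class BranchStateMap:
    def __init__(self):
        self.covered = set()   # branch ids covered by any queued test case
        self.failed = {}       # branch id -> count of failed solve attempts

    def sync_coverage(self, branch_ids):
        """Synchronize coverage state from the fuzzer's queue."""
        self.covered.update(branch_ids)

    def should_solve(self, branch_id):
        """Send only uncovered, not-yet-hopeless branches to the solver."""
        if branch_id in self.covered:
            return False
        return self.failed.get(branch_id, 0) < UNSOLVABLE_LIMIT

    def record_result(self, branch_id, solved):
        """Record the outcome of one concolic solving attempt."""
        if solved:
            self.covered.add(branch_id)
        else:
            self.failed[branch_id] = self.failed.get(branch_id, 0) + 1

state = BranchStateMap()
state.sync_coverage({"b1", "b2"})          # fuzzer already covers b1, b2
state.record_result("b3", solved=False)    # one failed attempt on b3
print(state.should_solve("b1"),            # → False (already covered)
      state.should_solve("b3"),            # → True  (1 failure < limit)
      state.should_solve("b4"))            # → True  (never seen)
```

Both redundancy sources named in the abstract map onto one `should_solve` check: the coverage set handles high-frequency branches, the failure counter handles unsolvable ones.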

17 pages, 654 KiB  
Article
Feature-Fusion-Based Abnormal-Behavior-Detection Method in Virtualization Environment
by Luxin Zheng, Jian Zhang, Faxin Lin and Xiangyi Wang
Electronics 2023, 12(16), 3386; https://doi.org/10.3390/electronics12163386 - 09 Aug 2023
Viewed by 813
Abstract
From general systems to mission-critical systems at financial and government institutions, the application scope of cloud computing services is continuously expanding. Therefore, there is a need for better methods to ensure the stability and security of the cloud data and services. Monitoring the abnormal behavior of virtual machines (VMs) is one of the most important means to identify the causes of security incidents related to the cloud. However, current traditional abnormal-behavior-detection methods for VMs on cloud platforms face multiple challenges such as privacy protection and the semantic gap. Virtualization technology plays a key role in cloud computing. Meanwhile, virtualization security is the core issue of cloud computing security as well. To address these issues, this paper proposes a feature-fusion-based abnormal-behavior-detection method (FFABD) in a virtualization environment. This method acquires the hardware features and syscalls of the VM at the physical machine level and the virtualization level, respectively. Therefore, this method is not limited by the operating system running on the VM. This makes our method more efficient and universally applicable compared to traditional abnormal-VM-detection methods. The ensemble learning model performs the best among all the models, achieving an Accuracy of 99.7%. Full article
(This article belongs to the Special Issue Vulnerability Analysis and Adversarial Learning)
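The feature-fusion idea above reduces to collecting two views of each observation window, hardware-level counters and VM-level syscall frequencies, concatenating them, and training an ensemble. The sketch below illustrates this with synthetic data; the specific counters, syscall dimensions, ensemble members, and numbers are assumptions, not the paper's FFABD pipeline.

```python
# Sketch: feature fusion by concatenation, then an ensemble classifier.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 400

# Physical-machine level: e.g. cache misses, branch mispredictions (assumed).
hw_normal = rng.normal(0.0, 1.0, (n, 4))
hw_abnormal = rng.normal(1.5, 1.0, (n, 4))

# Virtualization level: syscall-frequency histograms (assumed 8 syscalls).
sc_normal = rng.normal(0.0, 1.0, (n, 8))
sc_abnormal = rng.normal(1.0, 1.0, (n, 8))

# Feature fusion: concatenate the two views into one vector per window.
X = np.vstack([np.hstack([hw_normal, sc_normal]),
               np.hstack([hw_abnormal, sc_abnormal])])
y = np.array([0] * n + [1] * n)  # 1 = abnormal VM behavior

ensemble = VotingClassifier(
    [("rf", RandomForestClassifier(n_estimators=50, random_state=0)),
     ("lr", LogisticRegression(max_iter=1000))],
    voting="soft",
)
ensemble.fit(X, y)
print(f"training accuracy: {ensemble.score(X, y):.2f}")
```

Because both views are gathered outside the guest (at the host and hypervisor levels), nothing here depends on the VM's operating system, which is the property the abstract highlights.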

18 pages, 3299 KiB  
Article
IRC-CLVul: Cross-Programming-Language Vulnerability Detection with Intermediate Representations and Combined Features
by Tianwei Lei, Jingfeng Xue, Yong Wang and Zhenyan Liu
Electronics 2023, 12(14), 3067; https://doi.org/10.3390/electronics12143067 - 13 Jul 2023
Viewed by 949
Abstract
The most severe problem in cross-programming languages is feature extraction due to different tokens in different programming languages. To solve this problem, we propose a cross-programming-language vulnerability detection method in this paper, IRC-CLVul, based on intermediate representation and combined features. Specifically, we first converted programs in different programming languages into a unified LLVM intermediate representation (LLVM-IR) to provide a classification basis for different programming languages. Afterwards, we extracted the code sequences and control flow graphs of the samples, used the semantic model to extract the program semantic information and graph structure information, and concatenated them into semantic vectors. Finally, we used Random Forest to learn the concatenated semantic vectors and obtained the classification results. We conducted experiments on 85,811 samples from the Juliet test suite in C, C++, and Java. The results show that our method improved the accuracy by 7% compared with the two baseline algorithms, and the F1 score showed a 12% increase. Full article
(This article belongs to the Special Issue Vulnerability Analysis and Adversarial Learning)
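The classification basis the method relies on, lowering different source languages to one shared intermediate representation so that later feature extraction is language-independent, can be illustrated with a toy example. The token-level "IR" and the mapping table below are made-up stand-ins for LLVM-IR, purely to show why the unified form matters.

```python
# Toy illustration: lower C and Java tokens to a shared IR vocabulary,
# so the same features fall out regardless of the source language.
import re

# Map language-specific tokens to shared IR opcodes (assumed mapping).
TOKEN_MAP = {
    "malloc": "IR_ALLOC", "new": "IR_ALLOC",
    "free": "IR_FREE", "delete": "IR_FREE",
    "strcpy": "IR_COPY", "arraycopy": "IR_COPY",
}

def lower(source):
    """Lower a source snippet to the shared IR token sequence."""
    tokens = re.findall(r"[A-Za-z_]\w*", source)
    return [TOKEN_MAP[t] for t in tokens if t in TOKEN_MAP]

c_snippet = "char *p = malloc(16); strcpy(p, s); free(p);"
java_snippet = "int[] p = new int[16]; System.arraycopy(s, 0, p, 0, 16);"

print(lower(c_snippet))     # → ['IR_ALLOC', 'IR_COPY', 'IR_FREE']
print(lower(java_snippet))  # → ['IR_ALLOC', 'IR_COPY']
```

In the actual method these unified sequences (together with control flow graphs) are embedded into semantic vectors and classified with Random Forest; the toy above shows only the normalization step that makes cross-language features comparable.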
