Advanced Research on Information System Security and Privacy

A special issue of Mathematics (ISSN 2227-7390). This special issue belongs to the section "Mathematics and Computer Science".

Deadline for manuscript submissions: 31 August 2024 | Viewed by 3927

Special Issue Editors


Dr. Zhaoquan Gu
Guest Editor
School of Computer Science and Technology, Harbin Institute of Technology (Shenzhen), Shenzhen 518055, China
Interests: artificial intelligence security; cyber attack and defense; situation awareness analysis; big data analysis; intelligent connected vehicles; knowledge graphs

Dr. Jianxin Li
Guest Editor
School of Information Technology, Deakin University, Burwood, VIC 3125, Australia
Interests: social computing; query processing and optimization; big data analytics

Special Issue Information

Dear Colleagues,

We are pleased to announce this Special Issue of the journal Mathematics entitled “Advanced Research on Information System Security and Privacy”. As new technologies emerge and threats become more sophisticated, there is a constant need for advanced research and innovative solutions to safeguard sensitive information and ensure user privacy. The main topics of this Special Issue include, but are not limited to: information systems security, privacy-preserving technologies, cryptography and security, digital forensics, artificial intelligence security, and adversarial explainable AI.

With the pervasive presence of information technology and the Internet, understanding the complexities of information security and privacy has become a global concern. The escalating volume of digital information processed online has led to a significant rise in breaches and hacks, underscoring the need for a trustworthy and secure cyberspace. This Special Issue aims to address these challenges by providing a forum for work that mitigates information risk and tracks the latest developments, publishing a range of articles that cater to a broad cross-sectoral and multi-disciplinary readership. We encourage submissions that explore topics such as cryptography, machine learning, and other innovative approaches to achieving robust security measures.

Dr. Zhaoquan Gu
Dr. Jianxin Li
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for the submission of manuscripts are available on the Instructions for Authors page. Mathematics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • information systems security
  • privacy-preserving technologies of information systems
  • secure communication for information systems
  • cryptography and security
  • situation awareness analysis of information systems
  • threat detection and access control
  • digital forensics
  • artificial intelligence security for information systems
  • adversarial attacks and defenses in deep-learning-based information systems
  • adversarial explainable AI
  • security evaluation of information systems

Published Papers (7 papers)


Research

20 pages, 735 KiB  
Article
Efficient Large-Scale IoT Botnet Detection through GraphSAINT-Based Subgraph Sampling and Graph Isomorphism Network
by Lihua Yin, Weizhe Chen, Xi Luo and Hongyu Yang
Mathematics 2024, 12(9), 1315; https://doi.org/10.3390/math12091315 - 25 Apr 2024
Viewed by 271
Abstract
In recent years, with the rapid development of the Internet of Things, large-scale botnet attacks have occurred frequently and have become an important challenge to network security. As artificial intelligence technology continues to evolve, intelligent detection solutions for botnets are constantly emerging. Although graph neural networks are widely used for botnet detection, directly handling large-scale botnet data becomes inefficient and challenging as the number of infected hosts increases and the network scale expands. In particular, node-level learning and inference must process large numbers of nodes and edges, leading to a significant increase in computational complexity. To address these challenges, this paper presents a novel approach that accurately identifies diverse, intricate botnet architectures in large-scale IoT networks. By utilizing GraphSAINT to process large-scale IoT botnet graph data, efficient and unbiased subgraph sampling is achieved. In addition, a solution with enhanced information representation capability is developed based on the Graph Isomorphism Network (GIN) for botnet detection. Compared with five popular graph neural network (GNN) models, our approach achieves higher accuracy on the C2, P2P, and Chord datasets.
(This article belongs to the Special Issue Advanced Research on Information System Security and Privacy)
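The GIN update at the heart of such a detector can be sketched in a few lines. This is an illustrative single layer — the injective "MLP" is reduced to one linear transform plus ReLU, and numpy stands in for a GNN framework; it is not the paper's implementation:

```python
import numpy as np

def gin_layer(h, adj, W, eps=0.0):
    # GIN update: h_v' = MLP((1 + eps) * h_v + sum of neighbor features).
    # adj is a dense 0/1 adjacency matrix, so adj @ h sums each node's
    # neighbors; W is the layer's weight matrix.
    agg = (1.0 + eps) * h + adj @ h
    return np.maximum(agg @ W, 0.0)  # ReLU
```

Stacking several such layers and pooling node embeddings yields the graph-level representation used for classification; GraphSAINT's role is to feed the layers sampled subgraphs instead of the full large-scale graph.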

13 pages, 314 KiB  
Article
VTT-LLM: Advancing Vulnerability-to-Tactic-and-Technique Mapping through Fine-Tuning of Large Language Model
by Chenhui Zhang, Le Wang, Dunqiu Fan, Junyi Zhu, Tang Zhou, Liyi Zeng and Zhaohua Li
Mathematics 2024, 12(9), 1286; https://doi.org/10.3390/math12091286 - 24 Apr 2024
Viewed by 312
Abstract
Vulnerabilities are often accompanied by cyberattacks. CVE is the largest repository of open vulnerabilities and keeps expanding. ATT&CK models known multi-step attacks both tactically and technically and remains up to date. For active defense, it is valuable to correlate each vulnerability in CVE with the corresponding ATT&CK tactics and techniques that exploit it. Manual mapping is not only time-consuming but also difficult to keep up to date. Existing language-based automated mapping methods do not utilize information about attack behaviors outside of CVE and ATT&CK and are therefore ineffective. In this paper, we propose a novel framework named VTT-LLM for mapping Vulnerabilities to Tactics and Techniques based on Large Language Models, which consists of a generation model and a mapping model. To generate fine-tuning instructions for the LLM, we create a template to extract knowledge from CWE (a standardized list of common weaknesses) and CAPEC (a standardized list of common attack patterns). We train the generation model of VTT-LLM by fine-tuning the LLM according to these instructions. The generation model correlates vulnerabilities and attacks through their descriptions. The mapping model transforms the descriptions of ATT&CK tactics and techniques into vectors through text embedding and further associates them with attacks through semantic matching. By leveraging the knowledge of CWE and CAPEC, VTT-LLM can automate the process of linking vulnerabilities in CVE to the attack techniques and tactics of ATT&CK. Experiments on the latest public dataset, ChatGPT-VDMEval, show the effectiveness of VTT-LLM with an accuracy of 85.18%, which is 13.69% and 54.42% higher than the existing CVET and ChatGPT-based methods, respectively. In addition, compared to fine-tuning without outside knowledge, the accuracy of VTT-LLM with chain fine-tuning is 9.24% higher on average across different LLMs.
(This article belongs to the Special Issue Advanced Research on Information System Security and Privacy)
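The mapping model's semantic-matching step — embed the ATT&CK technique descriptions and pick the nearest one by cosine similarity — can be sketched as below. The vectors here are placeholders; in practice they would come from a text-embedding model applied to the actual descriptions:

```python
import numpy as np

def match_technique(vuln_vec, technique_vecs):
    # Cosine similarity between a vulnerability-description embedding and
    # each ATT&CK technique embedding; returns the best-matching index.
    def unit(v):
        return v / np.linalg.norm(v)
    sims = [float(unit(vuln_vec) @ unit(t)) for t in technique_vecs]
    return int(np.argmax(sims)), sims
```

The generation model's job, per the abstract, is to produce the attack-behavior description that this matching step then compares against the technique embeddings.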

19 pages, 11263 KiB  
Article
Inter-Channel Correlation Modeling and Improved Skewed Histogram Shifting for Reversible Data Hiding in Color Images
by Dan He, Zhanchuan Cai, Dujuan Zhou and Zhihui Chen
Mathematics 2024, 12(9), 1283; https://doi.org/10.3390/math12091283 - 24 Apr 2024
Viewed by 240
Abstract
Reversible data hiding (RDH) is an advanced data protection technology that allows additional information to be embedded into an original digital medium while maintaining its integrity. Color images are typical carriers because their rich data content makes them well suited for data embedding. Compared to grayscale images, the three color channels (RGB) of color images enhance data embedding capability while increasing algorithmic complexity. When implementing RDH in color images, researchers often exploit inter-channel correlation to enhance embedding efficiency and minimize the impact on visual quality. This paper proposes a novel RDH method for color images based on inter-channel correlation modeling and improved skewed histogram shifting. Initially, we construct an inter-channel correlation model based on the relationship among the RGB channels. Subsequently, an extended method for calculating the local complexity of pixels is proposed. Then, we adaptively select the pixel prediction context and design three types of extreme predictors. The improved skewed histogram shifting method is utilized for data embedding and extraction. Finally, experiments conducted on the USC-SIPI and Kodak datasets validate the superiority of the proposed method in terms of image fidelity.
(This article belongs to the Special Issue Advanced Research on Information System Security and Privacy)
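For background, classic peak/zero histogram shifting — the base scheme that skewed, inter-channel variants build on — works as in this minimal single-channel sketch (the paper's predictors, local-complexity measure, and skewed shifting are omitted; it also assumes an empty bin exists above the peak):

```python
import numpy as np

def hs_embed(pixels, bits):
    # peak = most frequent value; zero = first empty (or rarest) bin above it.
    hist = np.bincount(pixels, minlength=256)
    peak = int(np.argmax(hist))
    zero = peak + 1 + int(np.argmin(hist[peak + 1:]))
    out = pixels.copy()
    out[(out > peak) & (out < zero)] += 1    # shift to free the bin at peak + 1
    it = iter(bits)
    for i, v in enumerate(out):
        if v == peak:                        # embed one bit per peak pixel
            b = next(it, None)
            if b is None:
                break
            out[i] = peak + b                # '1' -> peak + 1, '0' -> peak
    return out, peak, zero

def hs_extract(marked, peak, zero):
    # Read bits from the {peak, peak + 1} bins, then undo the shift.
    bits = [int(v == peak + 1) for v in marked if v in (peak, peak + 1)]
    restored = marked.copy()
    restored[(restored > peak) & (restored <= zero)] -= 1
    return bits, restored
```

Reversibility is exact: extraction recovers both the payload bits and the original pixel values, which is the defining property the paper's improved scheme preserves while raising capacity and fidelity.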

13 pages, 2210 KiB  
Article
A Speech Adversarial Sample Detection Method Based on Manifold Learning
by Xiao Ma, Dongliang Xu, Chenglin Yang, Panpan Li and Dong Li
Mathematics 2024, 12(8), 1226; https://doi.org/10.3390/math12081226 - 19 Apr 2024
Viewed by 304
Abstract
Deep learning-based models have achieved impressive results across various practical fields. However, these models are susceptible to attacks. Recent research has demonstrated that adversarial samples can significantly decrease the accuracy of deep learning models. This susceptibility poses considerable challenges for their use in security applications. Various methods have been developed to enhance model robustness by training with more effective and generalized adversarial examples. However, these approaches tend to compromise model accuracy. Currently proposed detection methods mainly focus on speech adversarial samples generated by specified white-box attack models. In this study, leveraging manifold learning technology, a method is proposed to detect whether a speech input is an adversarial sample before feeding it into the recognition model. The method is designed to detect speech adversarial samples generated by black-box attack models and achieves a detection success rate of 84.73%. It identifies the low-dimensional manifold of training samples and measures the distance of a sample under investigation to this manifold to determine its adversarial nature. This technique enables the preprocessing detection of adversarial audio samples before their introduction into the deep learning model, thereby preventing adversarial attacks without affecting model robustness.
(This article belongs to the Special Issue Advanced Research on Information System Security and Privacy)
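The detection idea — learn the low-dimensional manifold of clean training samples and flag inputs that lie far from it — can be illustrated with PCA as a linear stand-in for the learned manifold (the paper's actual manifold-learning technique and threshold are not reproduced here):

```python
import numpy as np

def fit_manifold(X, k):
    # Fit a k-dimensional linear subspace (PCA) to clean training data X.
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:k]          # mean and top-k principal directions

def manifold_distance(x, mu, basis):
    # Distance from x to its projection onto the fitted subspace;
    # a large distance suggests an off-manifold (adversarial) input.
    proj = mu + (x - mu) @ basis.T @ basis
    return float(np.linalg.norm(x - proj))
```

A sample is then flagged as adversarial when its distance exceeds a threshold calibrated on held-out clean data.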

24 pages, 5348 KiB  
Article
Intrusion Detection Based on Adaptive Sample Distribution Dual-Experience Replay Reinforcement Learning
by Haonan Tan, Le Wang, Dong Zhu and Jianyu Deng
Mathematics 2024, 12(7), 948; https://doi.org/10.3390/math12070948 - 23 Mar 2024
Viewed by 637
Abstract
In order to cope with ever-evolving and increasing cyber threats, intrusion detection systems have become a crucial component of cyber security. Compared with signature-based intrusion detection methods, anomaly-based methods typically employ machine learning techniques to train detection models and possess the capability to discover unknown attacks. However, intrusion detection methods face the challenge of low detection rates for minority class attacks due to imbalanced data distributions. Traditional intrusion detection algorithms address this issue by resampling or generating synthetic data. Additionally, reinforcement learning, as a machine learning method that interacts with the environment to obtain feedback and improve performance, is gradually being considered for application in the field of intrusion detection. This paper proposes a reinforcement-learning-based intrusion detection method that innovatively uses adaptive sample distribution dual-experience replay to enhance a reinforcement learning algorithm, aiming to effectively address the issue of imbalanced sample distribution. We have also developed a reinforcement learning environment specifically designed for intrusion detection tasks. Experimental results demonstrate that the proposed model achieves favorable performance on the NSL-KDD, AWID, and CICIoT2023 datasets, effectively dealing with imbalanced data and showing better classification performance in detecting minority attacks.
(This article belongs to the Special Issue Advanced Research on Information System Security and Privacy)
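A dual-experience replay buffer of the kind described — separate majority and minority buffers with a sampling mix that adapts to the observed imbalance — might look like this sketch (the adaptation rule shown is illustrative, not the paper's exact formula):

```python
import random
from collections import deque

class DualReplay:
    # Two buffers: one for majority-class transitions, one for minority
    # (rare-attack) transitions; sample() adapts the mix to the imbalance.
    def __init__(self, capacity=10000):
        self.major = deque(maxlen=capacity)
        self.minor = deque(maxlen=capacity)

    def push(self, transition, is_minority):
        (self.minor if is_minority else self.major).append(transition)

    def sample(self, batch_size):
        total = len(self.major) + len(self.minor)
        minor_share = len(self.minor) / total if total else 0.0
        # The rarer the minority class, the larger its share of the batch.
        n_minor = min(len(self.minor),
                      max(1, round(batch_size * (1 - minor_share))))
        n_major = min(len(self.major), batch_size - n_minor)
        return (random.sample(list(self.minor), n_minor)
                + random.sample(list(self.major), n_major))
```

The effect is that minority-attack transitions appear in training batches far more often than their raw frequency, which is what lifts detection rates on rare attack classes.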

16 pages, 767 KiB  
Article
AGCN-Domain: Detecting Malicious Domains with Graph Convolutional Network and Attention Mechanism
by Xi Luo, Yixin Li, Hongyuan Cheng and Lihua Yin
Mathematics 2024, 12(5), 640; https://doi.org/10.3390/math12050640 - 22 Feb 2024
Viewed by 509
Abstract
The Domain Name System (DNS) plays an infrastructure role in providing the directory service that maps domains to IPs on the Internet. Given the foundational and open nature of DNS, it is not surprising that adversaries register massive numbers of domains to enable multiple malicious activities, such as spam, command and control (C&C), malware distribution, and click fraud. Therefore, detecting malicious domains is a significant topic in security research. Although a substantial quantity of research has been conducted, previous work has failed to fuse multiple relationship features to uncover the deep underlying relationships between domains, largely limiting performance. In this paper, we propose AGCN-Domain to detect malicious domains by combining various relations. The core concept behind our work is to analyze relations between domains according to their behaviors from multiple perspectives and fuse them intelligently. The AGCN-Domain model utilizes three relationships (client relation, resolution relation, and cname relation) to construct three relationship feature graphs, extracts features from each graph, and intelligently fuses the extracted features through an attention mechanism. After the relationship features are extracted from the domain names, they are fed into the trained classifier. Through our experiments, we have demonstrated the performance of the proposed AGCN-Domain model. With 10% initialized labels in the dataset, our AGCN-Domain model achieved an accuracy of 94.27% and an F1 score of 87.93%, significantly outperforming the other methods in our comparative experiments.
(This article belongs to the Special Issue Advanced Research on Information System Security and Privacy)
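Fusing the three per-relation embeddings with attention can be sketched as a softmax over learned view scores followed by a weighted sum — a minimal stand-in for the paper's attention module, where `w` is an assumed learned scoring vector:

```python
import numpy as np

def attention_fuse(features, w):
    # features: (n_views, d) — one embedding per relation graph (client,
    # resolution, cname) for a single domain; w: (d,) scoring vector.
    scores = features @ w
    attn = np.exp(scores - scores.max())
    attn /= attn.sum()                      # softmax over the views
    return attn, features.T @ attn          # view weights, fused embedding
```

The learned weights let the model lean on whichever relation graph is most informative for a given domain, instead of concatenating the three views with fixed importance.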

21 pages, 2259 KiB  
Article
Privacy-Enhanced Federated Learning for Non-IID Data
by Qingjie Tan, Shuhui Wu and Yuanhong Tao
Mathematics 2023, 11(19), 4123; https://doi.org/10.3390/math11194123 - 29 Sep 2023
Viewed by 870
Abstract
Federated learning (FL) allows the collaborative training of a collective model by a vast number of decentralized clients while ensuring that these clients’ data remain private and are not shared. In practical situations, the training data used in FL often exhibit non-IID characteristics, which diminishes the efficacy of FL. Our study presents a novel privacy-preserving FL algorithm, HW-DPFL, whose design leverages the similarity of data label distributions. The proposed approach achieves this without incurring any additional communication overhead. In this study, we provide theoretical and empirical evidence that our approach improves both the privacy guarantee and the convergence of FL.
(This article belongs to the Special Issue Advanced Research on Information System Security and Privacy)
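One plausible reading of "label distribution similarity" is weighting each client's contribution by how close its label histogram is to the global one, for example via the Hellinger distance. This is an illustrative sketch only; the paper's actual HW-DPFL rule (and its differential-privacy mechanism) may differ:

```python
import numpy as np

def hellinger(p, q):
    # Hellinger distance between two label distributions (0 = identical).
    return float(np.sqrt(0.5) * np.linalg.norm(np.sqrt(p) - np.sqrt(q)))

def similarity_weights(client_dists, global_dist):
    # Clients whose label distribution is closer to the global one get a
    # larger aggregation weight; weights are normalized to sum to 1.
    sims = np.array([1.0 - hellinger(p, global_dist) for p in client_dists])
    return sims / sims.sum()
```

Because each client only reports a small label histogram it would already share for aggregation bookkeeping, such a weighting adds no extra communication rounds, consistent with the abstract's no-overhead claim.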
