Advanced Machine Learning Applications for Security, Privacy, and Reliability

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Computer Science & Engineering".

Deadline for manuscript submissions: closed (15 February 2024) | Viewed by 11186

Special Issue Editors


College of Computer Science and Electronic Engineering, Hunan University, Changsha 410012, China
Interests: hardware/hardware-assisted security; artificial intelligence security; integrated circuit design; post-quantum cryptographic acceleration
School of Cyber Science and Engineering, Huazhong University of Science and Technology, Wuhan 430074, China
Interests: embedded system security; very large-scale integration design; vehicular ad-hoc network security
School of Electronic Science and Engineering, Southeast University, Nanjing 211189, China
Interests: reconfigurable computing; quantum information; quantum arithmetic; AI & hardware security
College of Electronic Science and Technology, National University of Defense Technology, Changsha 410073, China
Interests: satellite navigation; adaptive signal processing; artificial intelligence; intelligent algorithm

Special Issue Information

Dear Colleagues,

The approaching era of the Internet of Things (IoT) has brought great convenience to the public through the interconnection of all things. Although our lives benefit from these emerging techniques, we face severe security, privacy, and reliability concerns. With the growth of big data, there is an increasing need for access control and privacy protection. Machine learning provides a promising way to protect user data and to detect both known and unknown malicious attacks. Thus, advanced machine learning applications have been proposed to address the issues of security, privacy, and reliability in the IoT. Moreover, when machine learning is deployed in life-critical applications, such as autonomous driving, smart cities, and healthcare, security, privacy, and reliability must be the first and foremost concerns.

This Special Issue aims to solicit innovative perspectives that focus on two fundamental questions: 1) How can advanced machine learning applications be exploited to address issues of security, privacy, and reliability? 2) What security, privacy, and reliability concerns have advanced machine learning applications incurred?

In this Special Issue, original research articles and reviews are welcome. Research areas may include (but are not limited to) the following:

  • Security and privacy in smart city;
  • Advances in machine learning frameworks for intrusion detection;
  • Adversarial attacks against deep neural networks;
  • Reliability in computer vision systems;
  • Advanced machine learning for industrial internet;
  • Machine-learning-assisted side-channel attacks;
  • Security protocols in cyber-physical systems;
  • Trusted computing in machine learning;
  • Advanced machine learning for hardware security.

We look forward to receiving your contributions.

Prof. Dr. Jiliang Zhang 
Dr. Zhaojun Lu
Dr. He Li
Dr. Zukun Lu
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • security and privacy in smart city
  • advances in machine learning frameworks for intrusion detection
  • adversarial attacks against deep neural networks
  • reliability in computer vision systems
  • advanced machine learning for industrial internet
  • machine-learning-assisted side-channel attacks
  • security protocols in cyber-physical systems
  • trusted computing in machine learning
  • advanced machine learning for hardware security

Published Papers (8 papers)


Research

17 pages, 4701 KiB  
Article
Multiscale Feature Fusion and Graph Convolutional Network for Detecting Ethereum Phishing Scams
by Zhen Chen, Jia Huang, Shengzheng Liu and Haixia Long
Electronics 2024, 13(6), 1012; https://doi.org/10.3390/electronics13061012 - 07 Mar 2024
Viewed by 578
Abstract
With the emergence of blockchain technology, the cryptocurrency market has experienced significant growth in recent years, simultaneously fostering environments conducive to cybercrimes such as phishing scams. Phishing scams on blockchain platforms like Ethereum have become a grave economic threat. Consequently, there is a pressing demand for effective detection mechanisms for these phishing activities to establish a secure financial transaction environment. However, existing methods typically utilize only the most recent transaction record when constructing features, resulting in the loss of vast amounts of transaction data and failing to adequately reflect the characteristics of nodes. Addressing this need, this study introduces a multiscale feature fusion approach integrated with a graph convolutional network model to detect phishing scams on Ethereum. A basic node feature set comprising 12 features is initially designed based on the Ethereum transaction dataset in the basic feature module. Subsequently, in the edge embedding representation module, all transaction times and amounts between two nodes are sorted, and a gated recurrent unit (GRU) neural network is employed to capture the temporal features within this transaction sequence, generating a fixed-length edge embedding representation from variable-length input. In the time trading feature module, attention weights are allocated to all embedding representations surrounding a node, aggregating the edge embedding representations and structural relationships into the node. Finally, combining the basic and time trading features of each node, graph convolutional networks (GCNs), SAGEConv, and graph attention networks (GATs) are utilized to classify phishing nodes. The performance of these three graph convolution-based deep learning models is validated on a real Ethereum phishing scam dataset, demonstrating commendable efficiency. Among these, SAGEConv achieves an F1-score of 0.958, an AUC-ROC value of 0.956, and an AUC-PR value of 0.949, outperforming existing methods and baseline models.
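The attention-based aggregation step described above, in which the edge embeddings around a node are combined into a single node representation, can be sketched in plain Python. The function names and the source of the attention scores below are illustrative assumptions, not details taken from the paper:

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of raw attention scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def aggregate_node(edge_embeddings, edge_scores):
    """Attention-weighted sum of a node's edge embeddings.

    edge_embeddings: one fixed-length vector per incident edge
                     (e.g., produced by a GRU over the transaction sequence).
    edge_scores:     one raw attention score per edge.
    """
    weights = softmax(edge_scores)
    dim = len(edge_embeddings[0])
    return [
        sum(w * emb[i] for w, emb in zip(weights, edge_embeddings))
        for i in range(dim)
    ]
```

With equal scores this reduces to a plain mean of the edge embeddings; unequal scores let the node attend more to some transaction histories than others.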

27 pages, 8046 KiB  
Article
Enhancing Communication Security in an In-Vehicle Wireless Sensor Network
by Algimantas Venčkauskas, Marius Taparauskas, Šarūnas Grigaliūnas and Rasa Brūzgienė
Electronics 2024, 13(6), 1003; https://doi.org/10.3390/electronics13061003 - 07 Mar 2024
Viewed by 631
Abstract
Confronting the challenges of securing communication in in-vehicle wireless sensor networks demands innovative solutions, particularly as vehicles become more interconnected. This paper proposes a tailored communication security framework for in-vehicle wireless sensor networks, addressing both scientific and technical challenges through effective encryption methods. It segments the local vehicle network into independent subsystems communicating via encrypted and authenticated tunnels, enhancing automotive system safety and integrity. The authors introduce a process for periodic cryptographic key exchanges, ensuring secure communication and confidentiality in key generation without disclosing parameters. Additionally, an authentication technique utilizing the sender’s message authentication code secures communication tunnels, significantly advancing automotive cybersecurity and interconnectivity protection. Through a series of steps, including key generation, sending, and cryptographic key exchange, energy costs were investigated and compared with those of the DTLS and TLS methods. For cryptographic security, testing against brute-force attacks and analysis of potential vulnerabilities in the AES-CBC 128 encryption algorithm, HMAC authentication, and the HKDF key derivation function were carried out. Additionally, the memory resource consumption of the DTLS and TLS protocols was evaluated and compared with that of the proposed solution. This work is crucial for mitigating the risks associated with in-vehicle communication compromises within smart cities.
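The HKDF key derivation and HMAC authentication mentioned above can be illustrated with Python’s standard library. This is a generic RFC 5869-style sketch, not the paper’s actual protocol; the key material, salt, and output length are all illustrative assumptions:

```python
import hashlib
import hmac

def hkdf_sha256(ikm, salt, info, length=16):
    """RFC 5869 HKDF: extract a pseudorandom key, then expand it."""
    prk = hmac.new(salt, ikm, hashlib.sha256).digest()  # extract
    okm, t, counter = b"", b"", 1
    while len(okm) < length:                            # expand
        t = hmac.new(prk, t + info + bytes([counter]), hashlib.sha256).digest()
        okm += t
        counter += 1
    return okm[:length]

def tag_message(key, msg):
    """Sender-side message authentication code over a tunnel message."""
    return hmac.new(key, msg, hashlib.sha256).digest()

def verify(key, msg, tag):
    """Constant-time verification on the receiving side."""
    return hmac.compare_digest(tag_message(key, msg), tag)
```

Both endpoints derive the same session key from shared input key material without transmitting the key itself, and any tampering with a tunneled message invalidates its tag.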

17 pages, 2169 KiB  
Article
Collaborative Federated Learning-Based Model for Alert Correlation and Attack Scenario Recognition
by Hadeel K. Alkhpor and Faeiz M. Alserhani
Electronics 2023, 12(21), 4509; https://doi.org/10.3390/electronics12214509 - 02 Nov 2023
Cited by 1 | Viewed by 1182
Abstract
Planned and targeted attacks, such as the advanced persistent threat (APT), are highly sophisticated forms of attack. They involve numerous steps and are intended to remain within a system for an extended period of time before progressing to the next stage of action. Anticipating the next behaviors of attackers is a challenging and crucial task due to the stealthy nature of advanced attack scenarios, in addition to the possibly high volumes of false positive alerts generated by different security tools such as intrusion detection systems (IDSs). Intelligent models that are capable of establishing correlations between individual security alerts in order to reconstruct attack scenarios and to extract a holistic view of intrusion activities are required to exploit hidden links between different attack stages. Federated learning models performed in distributed settings have achieved successful and reliable implementations. Alerts from distributed security devices can be utilized in a collaborative manner based on several learning models to construct a federated model. Therefore, we propose an intelligent detection system that employs federated learning models to identify advanced attack scenarios such as APTs. Features extracted from alerts are preprocessed and engineered to produce a model with high accuracy and fewer false positives. We trained four machine learning models in a centralized learning setting: XGBoost, Random Forest, CatBoost, and an ensemble learning model. To maintain privacy and ensure the integrity of the global model, the proposed model has been implemented using convolutional neural network federated learning (CNN_FL) across several clients during the process of updating weights. The experimental findings indicate that ensemble learning achieved the highest accuracy, 88.15%, in the context of centralized learning, while CNN_FL demonstrated an accuracy of 90.18% in detecting various APT attacks while maintaining a low false alarm rate.
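As a rough illustration of the federated side, the server’s weight-update step in a typical FedAvg-style scheme can be sketched as follows. This is the standard federated averaging rule, assumed here for illustration; the paper’s CNN_FL implementation may differ in its details:

```python
def federated_average(client_weights, client_sizes):
    """Server-side FedAvg: average each parameter across clients,
    weighting every client by its share of the total training data.

    client_weights: one flat list of parameters per client.
    client_sizes:   number of local training samples per client.
    """
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    return [
        sum(w[i] * n / total for w, n in zip(client_weights, client_sizes))
        for i in range(n_params)
    ]
```

Only parameter vectors (not raw alert data) leave each client, which is what lets the scheme preserve privacy while still building a global detection model.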

19 pages, 3329 KiB  
Article
Enhancing Industrial Cyber Security, Focusing on Formulating a Practical Strategy for Making Predictions through Machine Learning Tools in Cloud Computing Environment
by Zaheer Abbas and Seunghwan Myeong
Electronics 2023, 12(12), 2650; https://doi.org/10.3390/electronics12122650 - 13 Jun 2023
Cited by 1 | Viewed by 2491
Abstract
Cloud computing has revolutionized how industries store, process, and access data. However, the increasing adoption of cloud technology has also raised concerns regarding data security. Machine learning (ML) is a promising technique for enhancing cloud computing security. This paper focuses on utilizing ML techniques (Support Vector Machines, XGBoost, and Artificial Neural Networks) to advance cloud computing security in industry. The selection of 11 important features for the ML study satisfies the study’s objectives. This study identifies gaps in the utilization of ML techniques in cloud cyber security. Moreover, it aims to develop a practical strategy for predicting the employment of machine learning in an industrial cloud environment with regard to trust and privacy issues. The efficiency of the employed models is assessed by applying the validation metrics of precision, accuracy, and recall, as well as F1 scores, ROC curves, and confusion matrices. The results demonstrated that the XGBoost model outperformed the others in terms of all metrics, with an accuracy of 97.50%, a precision of 97.60%, a recall of 97.60%, and an F1 score of 97.50%. This research highlights the potential of ML algorithms to enhance cloud computing security for industries. It emphasizes the need for continued research and development to create more advanced and efficient security solutions for cloud computing.

16 pages, 2357 KiB  
Article
Adversarial Perturbation Elimination with GAN Based Defense in Continuous-Variable Quantum Key Distribution Systems
by Xun Tang, Pengzhi Yin, Zehao Zhou and Duan Huang
Electronics 2023, 12(11), 2437; https://doi.org/10.3390/electronics12112437 - 27 May 2023
Cited by 2 | Viewed by 1290
Abstract
Machine learning is being applied to continuous-variable quantum key distribution (CVQKD) systems as a defense countermeasure for attack classification. However, recent studies have demonstrated that most of these detection networks are not immune to adversarial attacks. In this paper, we implement typical adversarial attack strategies against the CVQKD system and introduce a generalized defense scheme. Adversarial attacks essentially generate data points located near decision boundaries, which are linearized based on iterations of the classifier, to cause misclassification. Using the DeepFool attack as an example, we test it on four different CVQKD detection networks and demonstrate that an adversarial attack can fool most of them. To solve this problem, we propose an improved adversarial perturbation elimination with a generative adversarial network (APE-GAN) scheme that generates samples with a distribution similar to that of the original samples to defend against adversarial attacks. The results show that the proposed scheme can effectively defend against adversarial attacks, including DeepFool, and significantly improve the security of communication systems.

23 pages, 3057 KiB  
Article
Cluster-Based Secure Aggregation for Federated Learning
by Jien Kim, Gunryeong Park, Miseung Kim and Soyoung Park
Electronics 2023, 12(4), 870; https://doi.org/10.3390/electronics12040870 - 08 Feb 2023
Cited by 3 | Viewed by 1671
Abstract
In order to protect each node’s local learning parameters from model inversion attacks, secure aggregation has become an essential technique for federated learning, ensuring that the federated learning server knows only the combined result of all local parameters. In this paper, we introduce a novel cluster-based secure aggregation model that effectively deals with dropout nodes while reducing communication and computation overheads. Specifically, we consider a federated learning environment with heterogeneous devices deployed across the country. The computing power of each node and the amount of training data can be heterogeneous. Because of this, each node has a different local processing time, and its response time to the server also differs. To clearly determine the dropout nodes in this environment, our model clusters nodes with similar response times based on each node’s local processing time and location and then performs the aggregation on a per-cluster basis. In addition, we propose a new practical additive sharing-based masking protocol to hide the actual local updates of nodes during aggregation. The new masking protocol makes it easy to remove the shares of dropout nodes from the aggregation without using a (t, n) threshold scheme, and updates from dropout nodes remain secure even if they are delivered to the server after the dropout shares have been revealed. In addition, our model provides mask verification for reliable aggregation: nodes can publicly verify the correctness and integrity of the masks received from others using a discrete logarithm problem before the aggregation. As a result, the proposed aggregation model is robust to dropout nodes and ensures the privacy of local updates as long as at least three honest nodes are alive in each cluster. Since the masking process is performed on a cluster basis, our model effectively reduces the overhead of generating and sharing the masking values. For an average cluster size C and a total number of nodes N, the computation and communication cost of each node is O(C), the computation cost of the server is O(N), and its communication cost is O(NC). We analyzed the security and efficiency of our protocol by simulating diverse dropout scenarios. The simulated results showed that our cluster-based secure aggregation achieves about 91% learning accuracy, regardless of the dropout rate, with four clusters for one hundred nodes.
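The core cancellation idea behind additive masking can be shown in a few lines of Python. This is only the generic pairwise-masking trick; it omits the paper’s clustering, share recovery for dropout nodes, and discrete-logarithm-based mask verification, and the pseudorandom mask agreement here is a simplification:

```python
import random

def masked_updates(updates, seed=0):
    """Each pair of nodes (i, j), i < j, agrees on a random mask:
    node i adds it to its update, node j subtracts it. Individually
    masked updates look random, but the masks cancel in the sum."""
    rng = random.Random(seed)  # stands in for a pairwise-agreed PRG seed
    n = len(updates)
    masked = list(updates)
    for i in range(n):
        for j in range(i + 1, n):
            m = rng.random()
            masked[i] += m
            masked[j] -= m
    return masked

updates = [0.5, 1.25, -0.75]
masked = masked_updates(updates)
# The server sums the masked values; every +m is cancelled by a -m.
assert abs(sum(masked) - sum(updates)) < 1e-9
```

The dropout problem the paper addresses is visible here: if one node never sends its masked value, its pairwise masks no longer cancel, which is why the protocol needs a way to reconstruct or remove the missing shares.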

22 pages, 631 KiB  
Article
ARMOR: Differential Model Distribution for Adversarially Robust Federated Learning
by Yanting Zhang, Jianwei Liu, Zhenyu Guan, Bihe Zhao, Xianglun Leng and Song Bian
Electronics 2023, 12(4), 842; https://doi.org/10.3390/electronics12040842 - 07 Feb 2023
Viewed by 1311
Abstract
In this work, we formalize the concept of differential model robustness (DMR), a new property for ensuring model security in federated learning (FL) systems. In most conventional FL frameworks, all clients receive the same global model. If there exists a Byzantine client who maliciously generates adversarial samples against the global model, the attack is immediately transferred to all other benign clients. To address this attack transferability concern and improve the DMR of FL systems, we propose the notion of differential model distribution (DMD), in which the server distributes different models to different clients. As a concrete instantiation of DMD, we propose the ARMOR framework, which utilizes differential adversarial training to prevent a corrupted client from launching white-box adversarial attacks against other clients, since the local model received by the corrupted client differs from those of the benign clients. Through extensive experiments, we demonstrate that ARMOR can significantly reduce both the attack success rate (ASR) and the average adversarial transfer rate (AATR) across different FL settings. For instance, for a 35-client FL system, the ASR and AATR can be reduced by as much as 85% and 80%, respectively, on the MNIST dataset.

19 pages, 1396 KiB  
Article
Parity Check Based Fault Detection against Timing Fault Injection Attacks
by Maoshen Zhang, He Li, Peijing Wang and Qiang Liu
Electronics 2022, 11(24), 4082; https://doi.org/10.3390/electronics11244082 - 08 Dec 2022
Cited by 2 | Viewed by 1258
Abstract
Fault injection technologies can be utilized to steal secret information inside integrated circuits (ICs) and thus pose serious information security threats. Parity check has been adopted as an efficient method against fault injection attacks. However, the contradiction between security and overhead restricts the further development and application of parity check in fault injection detection. This paper proposes two methods based on parity check, mixed-grained parity check and word recombination parity check, to trade off security against overhead. The efficiency of the proposed approaches is verified on RC5, AES, and DES encryption implementations under clock glitch attacks. Compared with the traditional parity check, the fault coverage rate of the mixed-grained approach can be increased by up to 53.69% at the cost of 13.2% more registers, while that of the word recombination approach can be increased by up to 47.16% using only 2.35% more registers. The proposed approaches provide IC designers with countermeasure options targeting different design skills and design specifications.
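The basic principle behind parity-based fault detection, that a single-bit fault flips the stored parity, can be sketched as follows. The mixed-grained and word recombination variants proposed in the paper change how bits are grouped into parity domains, which this minimal sketch does not model:

```python
def parity_bit(word):
    """Even parity over the bits of an integer word."""
    p = 0
    while word:
        p ^= word & 1
        word >>= 1
    return p

def detect_fault(word, stored_parity):
    """Any odd number of bit flips changes the parity,
    so a mismatch with the stored bit flags a fault."""
    return parity_bit(word) != stored_parity

w = 0b10110010          # a register value protected by one parity bit
p = parity_bit(w)
faulty = w ^ (1 << 3)   # a clock-glitch-style single-bit fault
assert not detect_fault(w, p)
assert detect_fault(faulty, p)
```

The security/overhead tension the abstract describes follows directly: one parity bit per wide word is cheap but misses even-numbered multi-bit faults, while finer-grained parity domains catch more faults at the cost of more registers.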
