Security and Privacy Evaluation of Machine Learning in Networks

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Microwave and Wireless Communications".

Deadline for manuscript submissions: 31 May 2024

Special Issue Editors


Guest Editor
Dr. Xianmin Wang
Institute of Artificial Intelligence and Blockchain, Guangzhou University, Guangzhou 510555, China
Interests: machine learning; image processing; information security

Guest Editor
Dr. Jing Li
School of Computer Science and Cyber Engineering, Guangzhou University, Guangzhou 510006, China
Interests: information security; cryptography; blockchain

Guest Editor
Prof. Dr. Di Wu
College of Computer and Information Science, Southwest University, Chongqing 400715, China
Interests: data mining; machine learning; recommendation systems

Guest Editor
Dr. Mingliang Zhou
College of Computer Science, Chongqing University, Chongqing 400044, China
Interests: computer vision; deep learning; video coding

Special Issue Information

Dear Colleagues,

Machine learning algorithms play an increasingly important role in many practical computing systems, such as autonomous driving, medical diagnosis, spam detection, and person re-identification. At the same time, these algorithms are often confronted with critical threats in open and adversarial settings, including evasion attacks, backdoor attacks, model stealing, and data leakage, which significantly undermine the effectiveness and security of machine learning systems.

To better protect machine learning algorithms, their security risks need to be evaluated from multiple angles. Robust, certified evaluation methods are therefore critically needed in both industrial and academic communities. Although recent studies have provided many insightful works in this field (e.g., empirical evaluation of model robustness), comprehensive and sophisticated security-oriented evaluation of machine learning algorithms remains largely unexplored.

This Special Issue aims to help the research community identify challenges and to disseminate the latest methodologies and solutions for security and privacy evaluation in machine learning. The ultimate objective is to publish high-quality articles that present open issues and deliver algorithms, protocols, frameworks, and solutions for security evaluation in machine learning. Relevant topics include, but are not limited to, the following:

  • Adversarial machine learning;
  • Model robustness boosting and evaluation;
  • Secure federated machine learning;
  • Secure neural network inference;
  • Certified evaluation of intelligent systems;
  • Attack and defense for machine learning systems;
  • Semi-supervised adversarial learning;
  • Hardware/software co-design for data security;
  • Verification mechanisms for neural networks;
  • Security evaluation of generative models;
  • Machine-learning-based security and privacy design;
  • Security protocols for communication networks;
  • Information-theoretic foundations for advanced security and privacy techniques;
  • Encryption and decryption algorithms for machine learning systems and networks;
  • Security and privacy design for intelligent vehicle networks;
  • Blockchain-based solutions for communication networks;
  • Anonymity in data transmission;
  • Prototype and testbed for security and privacy solutions;
  • Challenges of security and privacy in node–edge–cloud computation.

Dr. Xianmin Wang
Dr. Jing Li
Prof. Dr. Di Wu
Dr. Mingliang Zhou
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com after registering and logging in to the website. Once registered, authors can proceed to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • adversarial machine learning
  • security evaluation
  • certified evaluation
  • privacy-preserving
  • federated machine learning
  • robustness of machine learning
  • blockchain

Published Papers (7 papers)


Research

21 pages, 4121 KiB  
Article
Comparative Analysis of Machine Learning Techniques for Non-Intrusive Load Monitoring
by Noman Shabbir, Kristina Vassiljeva, Hossein Nourollahi Hokmabad, Oleksandr Husev, Eduard Petlenkov and Juri Belikov
Electronics 2024, 13(8), 1420; https://doi.org/10.3390/electronics13081420 - 09 Apr 2024
Abstract
Non-intrusive load monitoring (NILM) has emerged as a pivotal technology in energy management applications, enabling precise monitoring of individual appliance energy consumption without intrusive sensors or smart meters. In this technique, load disaggregation for individual devices is achieved by recognizing their current signals with machine learning (ML) methods. This paper conducts a comprehensive comparative analysis of ML techniques applied to NILM, aiming to identify the most effective methodologies for accurate load disaggregation. The study employs a diverse dataset of high-resolution electricity consumption data collected from an Estonian household. The ML algorithms, including deep neural networks based on long short-term memory (LSTM) networks, extreme gradient boosting (XGBoost), logistic regression (LR), and dynamic time warping with K-nearest neighbors (DTW-KNN), are implemented and evaluated for their load disaggregation performance. Key evaluation metrics, such as accuracy, precision, recall, and F1 score, are used to assess how well each technique captures the nuanced energy consumption patterns of diverse appliances. Results indicate that the XGBoost-based model performs best at identifying and disaggregating individual loads from aggregated energy consumption data. Insights from this research contribute to the optimization of NILM techniques for real-world applications, facilitating improved energy efficiency and informed decision-making in smart grid environments.
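As a rough sketch of the evaluation loop described in this abstract (not the authors' pipeline: the data, features, and labels below are random placeholders, and scikit-learn's gradient boosting stands in for XGBoost), an appliance-state classifier can be scored with the same four metrics:

```python
# Hypothetical sketch: appliance ON/OFF classification from windowed load features,
# scored with accuracy, precision, recall, and F1. Data and model are placeholders.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier  # stand-in for XGBoost
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 32))          # placeholder: windowed current/power features
y = rng.integers(0, 2, size=2000)        # placeholder: appliance ON/OFF labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = GradientBoostingClassifier(n_estimators=200, max_depth=3).fit(X_tr, y_tr)
pred = clf.predict(X_te)

print("accuracy ", accuracy_score(y_te, pred))
print("precision", precision_score(y_te, pred))
print("recall   ", recall_score(y_te, pred))
print("F1       ", f1_score(y_te, pred))
```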

21 pages, 1264 KiB  
Article
Practical and Malicious Multiparty Private Set Intersection for Small Sets
by Ji Zhou, Zhusen Liu, Luyao Wang, Chuan Zhao, Zhe Liu and Lu Zhou
Electronics 2023, 12(23), 4851; https://doi.org/10.3390/electronics12234851 - 30 Nov 2023
Abstract
Private set intersection (PSI) is a pivotal topic in privacy-preserving computation. Most research has concentrated on settings with very large or imbalanced sets, and few existing PSI protocols are tailored for small sets; those that exist are either restricted to two parties or require resource-intensive homomorphic operations. To provide practical multiparty PSI solutions for small sets, we present two multiparty PSI protocols based on Oblivious Key–Value Stores (OKVSs), polynomials, and garbled cuckoo tables. Our security analysis shows that these protocols remain secure in the malicious model and resist collusion attacks. Experimental evaluations establish that, compared with related work, our protocols excel in small-set settings, particularly in low-bandwidth wide area network (WAN) environments.
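For readers unfamiliar with the OKVS building block named in this abstract, the toy sketch below shows its simplest instance: a polynomial over a prime field interpolated through (key, value) pairs, so decoding at an inserted key returns its value. The modulus and items are illustrative placeholders; this is a sketch of the building block, not the paper's protocol.

```python
# Toy polynomial OKVS: encode interpolates a polynomial through (key, value) pairs
# over a prime field; decode evaluates it. Parameters are illustrative placeholders.
P = 2**61 - 1  # prime field modulus (placeholder choice)

def poly_mul(a, b, p):
    # multiply two polynomials given as low-to-high coefficient lists, mod p
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % p
    return out

def encode(pairs, p=P):
    # Lagrange interpolation: the unique degree-(n-1) polynomial through all pairs
    xs = [x % p for x, _ in pairs]
    ys = [y % p for _, y in pairs]
    coeffs = [0] * len(xs)
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        num, denom = [1], 1
        for j, xj in enumerate(xs):
            if j != i:
                num = poly_mul(num, [(-xj) % p, 1], p)
                denom = denom * (xi - xj) % p
        scale = yi * pow(denom, -1, p) % p
        for d, c in enumerate(num):
            coeffs[d] = (coeffs[d] + scale * c) % p
    return coeffs

def decode(coeffs, x, p=P):
    # Horner evaluation of the encoded structure at key x
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % p
    return acc

store = encode([(11, 1001), (42, 2002), (97, 3003)])
assert decode(store, 42) == 2002     # an inserted key decodes to its value
print(decode(store, 12345))          # any other key yields an unrelated field element
```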

12 pages, 467 KiB  
Article
Fast and Accurate SNN Model Strengthening for Industrial Applications
by Deming Zhou, Weitong Chen, Kongyang Chen and Bing Mi
Electronics 2023, 12(18), 3845; https://doi.org/10.3390/electronics12183845 - 11 Sep 2023
Cited by 1
Abstract
Spiking neural networks (SNNs) face emerging security threats, such as adversarial samples and poisoned data samples, which degrade overall model performance. Eliminating the influence of malicious data samples on a trained model is therefore an important problem. A naive solution is to delete all malicious data samples and retrain on the entire dataset, but in the era of large models this is impractical due to the enormous computational cost. To address this problem, we present a novel SNN model strengthening method that supports fast and accurate removal of malicious data from a trained model. Specifically, we use untrained data drawn from the same distribution as the training data. Since the untrained data has no effect on the initial model, and the malicious data should have no effect on the final refined model, we can use the initial model's output on the untrained data to guide the final refined model. On this basis, we present a stochastic gradient descent method that iteratively determines the final model. We perform a comprehensive performance evaluation on two industrial steel surface datasets. Experimental results show that our model strengthening method provides accurate malicious data elimination at speeds 11.7× to 27.2× faster than the baseline method.
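One possible reading of the guidance step in this abstract is sketched below. It is an assumption-laden illustration, not the authors' implementation: a plain feed-forward network stands in for the SNN, and the initial model's outputs on held-out data from the training distribution steer SGD refinement.

```python
# Hypothetical sketch of guidance-based refinement: outputs of a frozen snapshot of the
# model on held-out ("untrained") data steer SGD updates of the refined model.
import copy
import torch
import torch.nn as nn

torch.manual_seed(0)
# placeholder standing in for the trained model; in practice this would be the SNN
model = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 6))
initial_model = copy.deepcopy(model)              # frozen snapshot used only for guidance
for p in initial_model.parameters():
    p.requires_grad_(False)

untrained_x = torch.randn(512, 64)                # placeholder: data from the training distribution
opt = torch.optim.SGD(model.parameters(), lr=1e-2)

for step in range(100):                           # iterative SGD refinement
    x = untrained_x[torch.randint(0, 512, (32,))]
    target = initial_model(x).softmax(dim=-1)     # guidance: initial model's output on untrained data
    loss = nn.functional.kl_div(model(x).log_softmax(dim=-1), target, reduction="batchmean")
    opt.zero_grad()
    loss.backward()
    opt.step()
```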

20 pages, 806 KiB  
Article
Vertical Federated Unlearning on the Logistic Regression Model
by Zihao Deng, Zhaoyang Han, Chuan Ma, Ming Ding, Long Yuan, Chunpeng Ge and Zhe Liu
Electronics 2023, 12(14), 3182; https://doi.org/10.3390/electronics12143182 - 22 Jul 2023
Cited by 1
Abstract
Vertical federated learning is designed to protect user privacy by building local models over disparate datasets and transferring intermediate parameters without directly revealing the underlying data. However, the intermediate parameters uploaded by participants may memorize information about the training data. With recent legislation on the “right to be forgotten”, it is crucial for vertical federated learning systems to be able to forget or remove the previous training information of any client. This work fills this research gap by proposing, for the first time, a vertical federated unlearning method for the logistic regression model. The proposed method imposes constraints on intermediate parameters during training and then subtracts the target client's updates from the global model. It requires no new clients for training and only one extra round of updates to recover the performance of the previous model. Moreover, data-poisoning attacks are introduced to evaluate the effectiveness of the unlearning process. The effectiveness of the method is demonstrated through experiments on four benchmark datasets. Compared with conventional unlearning by retraining from scratch, the proposed method incurs a negligible decrease in accuracy while improving training efficiency by over 400%.
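A heavily simplified sketch of the core idea, under assumptions not taken from the paper (a NumPy logistic regression whose weight vector is partitioned by feature across clients), is given below: the target client's accumulated updates are subtracted from the global model, followed by one corrective round.

```python
# Highly simplified unlearning sketch: per-client update contributions are tracked during
# training, the target client's contribution is subtracted, then one corrective round runs.
# Feature partition, step size, and training loop are placeholders, not the paper's protocol.
import numpy as np

rng = np.random.default_rng(0)
n, d = 1000, 12
X = rng.normal(size=(n, d))
y = (X @ rng.normal(size=d) > 0).astype(float)

slices = [slice(0, 4), slice(4, 8), slice(8, 12)]   # feature partition across 3 clients
w = np.zeros(d)
contrib = [np.zeros(d) for _ in slices]             # each client's accumulated updates

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(200):                                # simplified "federated" training
    grad = X.T @ (sigmoid(X @ w) - y) / n
    for c, s in enumerate(slices):
        upd = 0.5 * grad[s]
        w[s] -= upd
        contrib[c][s] -= upd                        # remember this client's contribution

target = 2                                          # client to forget
w -= contrib[target]                                # subtract its accumulated updates
X_rem = X.copy()
X_rem[:, slices[target]] = 0.0                      # its features are no longer available
grad = X_rem.T @ (sigmoid(X_rem @ w) - y) / n       # one corrective round on remaining features
w -= 0.5 * grad
```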

13 pages, 2812 KiB  
Article
Effects of Different Full-Reference Quality Assessment Metrics in End-to-End Deep Video Coding
by Weizhi Xian, Bin Chen, Bin Fang, Kunyin Guo, Jie Liu, Ye Shi and Xuekai Wei
Electronics 2023, 12(14), 3036; https://doi.org/10.3390/electronics12143036 - 11 Jul 2023
Cited by 1
Abstract
Visual quality assessment is often used as a key performance indicator (KPI) to evaluate the performance of electronic devices, and there is a significant association between visual quality assessment and electronic devices. In this paper, we bring attention to alternative choices of perceptual loss function for end-to-end deep video coding (E2E-DVC), which can be used to reduce the amount of data generated by electronic sensors and other sources. To this end, we analyze the effects of different full-reference quality assessment (FR-QA) metrics on E2E-DVC. First, we select five optimization-suitable FR-QA metrics as perceptual objectives, which are differentiable and thus support back-propagation, and use them to optimize an E2E-DVC model. Second, we analyze the rate–distortion (R-D) behavior of the E2E-DVC model under the different loss-function optimizations. Third, we carry out subjective human perceptual tests on the reconstructed videos to show how the different FR-QA optimizations affect subjective visual quality. This study reveals the effects of competing FR-QA metrics on E2E-DVC and provides a guide for future work on perceptual loss function design for E2E-DVC.
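The way a perceptual objective enters the rate–distortion loss can be pictured with the hedged sketch below; the rate estimate, the dummy reconstruction, and the MSE placeholder metric are assumptions rather than the authors' E2E-DVC model, and any differentiable FR-QA metric could be plugged in instead.

```python
# Sketch of a rate-distortion objective with a swappable differentiable FR-QA metric.
import torch
import torch.nn.functional as F

def mse_distortion(x, x_hat):
    # simplest differentiable FR-QA stand-in; swap in any differentiable metric here
    return F.mse_loss(x_hat, x)

def rd_loss(x, x_hat, bits, lam=0.01, distortion_fn=mse_distortion):
    # rate-distortion objective: perceptual distortion + lambda * bits per element
    return distortion_fn(x, x_hat) + lam * bits / x.numel()

# toy usage with a dummy reconstruction and a dummy rate estimate
x = torch.rand(1, 3, 64, 64)
x_hat = (x + 0.05 * torch.randn_like(x)).clamp(0, 1).requires_grad_(True)
bits = torch.tensor(5000.0)
loss = rd_loss(x, x_hat, bits)
loss.backward()   # gradients reach the reconstruction, and hence a codec's parameters in training
```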

23 pages, 10195 KiB  
Article
A Secure Data-Sharing Scheme for Privacy-Preserving Supporting Node–Edge–Cloud Collaborative Computation
by Kaifa Zheng, Caiyang Ding and Jinchen Wang
Electronics 2023, 12(12), 2737; https://doi.org/10.3390/electronics12122737 - 19 Jun 2023
Cited by 1
Abstract
The node–edge–cloud collaborative computation paradigm has introduced new security challenges to data sharing. Existing data-sharing schemes suffer from limitations such as low efficiency and inflexibility and are not easily integrated with the node–edge–cloud environment. Additionally, they do not provide hierarchical access control or dynamic changes to access policies for data privacy preservation, leading to a poor user experience and lower security. To address these issues, we propose a data-sharing scheme using attribute-based encryption (ABE) that supports node–edge–cloud collaborative computation (DS-ABE-CC). Our scheme incorporates access policies into the ciphertext, achieving fine-grained access control and data privacy preservation. Firstly, considering node–edge–cloud collaborative computation, it outsources the significant computational overhead of data sharing from the data owner and user to the edge nodes and the cloud. Secondly, integrating deeply with the node–edge–cloud scenario, key distribution and agreement among all entities are embedded in the encryption and decryption process together with a data privacy-preserving mechanism, improving efficiency and security. Finally, our scheme supports flexible and dynamic access control policies and realizes hierarchical access control, thereby enhancing the user experience of data sharing. The theoretical analysis confirms the security of our scheme, while comparison experiments with other schemes demonstrate its practical feasibility and efficiency in node–edge–cloud collaborative computation.

17 pages, 29294 KiB  
Article
Human Pose Estimation via an Ultra-Lightweight Pose Distillation Network
by Shihao Zhang, Baohua Qiang, Xianyi Yang, Xuekai Wei, Ruidong Chen and Lirui Chen
Electronics 2023, 12(12), 2593; https://doi.org/10.3390/electronics12122593 - 08 Jun 2023
Cited by 2
Abstract
Most current pose estimation methods have a resource cost too high for resource-limited devices. To address this problem, we propose an ultra-lightweight end-to-end pose distillation network that applies several techniques to balance the number of parameters against predictive accuracy. First, we design a lightweight one-stage pose estimation network that learns from an increasingly refined sequential expert network in an online knowledge distillation manner. Then, we construct an ultra-lightweight re-parameterized pose estimation subnetwork that uses a multi-module design with weight sharing to improve the multi-scale feature acquisition capability of the single-module design. Once training is complete, the first re-parameterized module is used as the deployment network to retain a simple architecture. Finally, extensive experimental results demonstrate the detection precision and low parameter count of our method.
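A minimal sketch of an online heatmap-distillation loss of the kind described in this abstract is given below; the architectures, keypoint count, and loss weighting are placeholders, not the authors' network.

```python
# Sketch of online heatmap distillation: a lightweight student regresses keypoint heatmaps
# toward both the ground truth and a stronger expert trained alongside it.
import torch
import torch.nn as nn
import torch.nn.functional as F

student = nn.Conv2d(3, 17, kernel_size=3, padding=1)   # placeholder lightweight pose head (17 keypoints)
expert = nn.Conv2d(3, 17, kernel_size=3, padding=1)    # placeholder expert network trained online

img = torch.rand(2, 3, 64, 48)
gt_heatmaps = torch.rand(2, 17, 64, 48)                # placeholder ground-truth keypoint heatmaps

s_out, e_out = student(img), expert(img)
alpha = 0.5                                            # balance between supervision and distillation
loss = F.mse_loss(s_out, gt_heatmaps) + alpha * F.mse_loss(s_out, e_out.detach())
loss.backward()
```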
