Security, Privacy and Application in New Intelligence Techniques

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Computing and Artificial Intelligence".

Deadline for manuscript submissions: 20 November 2024 | Viewed by 3352

Special Issue Editors


Prof. Dr. Jian Xu
Guest Editor
Software College, Northeastern University, Shenyang 110169, China
Interests: network security; information security; cryptography; privacy computing

Dr. Ruijin Wang
Guest Editor
School of Information and Software Engineering, University of Electronic Science and Technology, Chengdu 611731, China
Interests: federated learning; information security; privacy computing; blockchain

Special Issue Information

Dear Colleagues,

In recent years, intelligence techniques have attracted extensive attention from research, industry, and other fields. They have greatly expanded humanity's ability to perceive, understand, and control the physical world, and they are profoundly reshaping how we live and work. Nevertheless, their rapid and widespread deployment, along with their role in provisioning potentially critical services, raises numerous issues concerning the security and privacy of the operations performed and the services provided. Every day, we use intelligence techniques to collect and analyze our personal, financial, and health information. Because these techniques are often open and complex, they can be subjected to malicious attacks from both insiders and outsiders, so protecting their security and privacy has become a critical issue. Consequently, research and development efforts in academia and industry are increasingly focusing on security and privacy in intelligence techniques.

Although recent advances in the security and privacy protection of intelligence techniques, such as fully homomorphic encryption, secure multiparty computation, and adversarial machine learning, are promising, more work is still needed to transform these theoretical techniques into practical solutions that can be implemented efficiently. This Special Issue is dedicated to the most recent developments and research outcomes addressing the theoretical and practical aspects of security, privacy, and application in new intelligence techniques. Its goal is to provide researchers and practitioners worldwide with an ideal platform for developing new solutions to the corresponding key challenges. Original, unpublished, high-quality research results are solicited on challenging topics that include, but are not limited to, the following:

  • Intelligence techniques in cybersecurity;
  • New cryptographic techniques for intelligence techniques;
  • Privacy-preserving machine learning;
  • Adversarial machine learning;
  • Deep learning in security and privacy;
  • Big data intelligence in security and privacy;
  • Security and privacy in new intelligent computing technologies;
  • Security and privacy in intelligent data sharing, integration, and storage;
  • Security and privacy in the Internet of Things;
  • Blockchain in intelligent applications and services;
  • Intelligent data processing, storage, and sharing;
  • Intelligent applications;
  • Security and privacy in graph neural networks;
  • Risk assessment and prediction;
  • Prediction and early warning of security risks in intelligent systems;
  • Secure federated learning.

Prof. Dr. Jian Xu
Dr. Ruijin Wang
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, proceed to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • security
  • privacy
  • intelligence techniques
  • adversarial machine learning
  • blockchain
  • federated learning
  • Internet of Things

Published Papers (4 papers)


Research

19 pages, 7921 KiB  
Article
A Dynamic Parameter Tuning Strategy for Decomposition-Based Multi-Objective Evolutionary Algorithms
by Jie Zheng, Jiaxu Ning, Hongfeng Ma and Ziyi Liu
Appl. Sci. 2024, 14(8), 3481; https://doi.org/10.3390/app14083481 - 20 Apr 2024
Viewed by 258
Abstract
The penalty-based boundary intersection (PBI) method is a common decomposition approach in the MOEA/D algorithm, but using a fixed penalty parameter in the PBI aggregation function hinders the convergence of the population to a certain extent and is not conducive to maintaining the diversity of boundary solutions. To address these problems, this paper proposes a dynamic penalty adjustment (DPA) strategy for MOEA/D that adapts the penalty parameter during the search. The strategy adjusts the penalty parameter according to how uniformly solutions are distributed around the weight vectors in the current iteration period, thereby helping the optimization process balance convergence and diversity. In the experiments, we tested the MOEA/D-DPA algorithm against several improved MOEA/D variants on classical test suites. The results show that MOEA/D with DPA outperforms MOEA/D with the other decomposition strategies.
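For context, the standard PBI scalarization that the abstract criticizes for its fixed penalty parameter can be sketched as follows (our notation, following the usual MOEA/D formulation; theta is the parameter that DPA tunes adaptively):

```latex
% Standard PBI aggregation for a solution x, weight vector w, and ideal point z*
\begin{align}
  g^{\mathrm{pbi}}(x \mid w, z^{*}) &= d_{1} + \theta\, d_{2}, \\
  d_{1} &= \frac{\lVert (F(x) - z^{*})^{\top} w \rVert}{\lVert w \rVert},
  \qquad
  d_{2} = \left\lVert F(x) - \Bigl( z^{*} + d_{1} \frac{w}{\lVert w \rVert} \Bigr) \right\rVert .
\end{align}
```

Here d1 measures progress toward the ideal point along the weight vector (convergence) and d2 measures the deviation from that vector (diversity); a larger theta emphasizes diversity, a smaller one emphasizes convergence, which is why fixing it is a compromise.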

16 pages, 1419 KiB  
Article
IG-Based Method for Voiceprint Universal Adversarial Perturbation Generation
by Meng Bi, Xianyun Yu, Zhida Jin and Jian Xu
Appl. Sci. 2024, 14(3), 1322; https://doi.org/10.3390/app14031322 - 5 Feb 2024
Viewed by 538
Abstract
In this paper, we propose an Iterative Greedy Universal Adversarial Perturbation (IG-UAP) approach, based on an iterative greedy algorithm, for creating universal adversarial perturbations against voiceprint models. A thorough account of the IG-UAP method is provided, outlining its framework and approach. The method uses greedy iteration to formulate an optimization problem for solving universal adversarial perturbations on audio, with a new objective function designed both to minimize the perceptibility of the adversarial perturbation and to increase the attack success rate. The perturbation generation process is described in detail, and the resulting universal adversarial perturbation is evaluated in both targeted-attack and untargeted-attack scenarios. Experimental analysis and testing were carried out against comparable techniques and different target models. The findings reveal that the universal adversarial perturbation produced by IG-UAP achieves effective attacks even when the audio training sample size is minimal, i.e., one sample per category. Moreover, the human ear finds it difficult to detect the loss of original information or the added adversarial perturbation (for targeted attacks on the small-sample dataset, ASR values range from 82.4% to 90.2%). The success rates for untargeted and targeted attacks average 85.8% and 84.9%, respectively.
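To make the general idea concrete, here is a minimal sketch of a UAP-style iterative loop in the spirit described above. It is not the authors' exact IG-UAP algorithm: the function name, the signed-gradient step, and all parameter values are our illustrative assumptions, and grad_fn (the gradient of an attack loss with respect to the input, supplied by the caller for the target speaker model) is a placeholder.

```python
# Hypothetical sketch: accumulate one shared perturbation over all training
# clips and project it back into an L-infinity ball so it stays imperceptible.
import numpy as np

def universal_perturbation(clips, grad_fn, eps=0.05, lr=0.01, epochs=10):
    """clips: list of equal-length 1-D audio arrays.
    grad_fn(x): gradient of the attack loss w.r.t. input x (assumed given)."""
    v = np.zeros_like(clips[0])          # the universal perturbation
    for _ in range(epochs):
        for x in clips:
            g = grad_fn(x + v)           # gradient at the perturbed clip
            v = v - lr * np.sign(g)      # greedy signed-gradient step
            v = np.clip(v, -eps, eps)    # keep the perturbation small
    return v
```

The key property such a loop targets is universality: a single v is updated against every clip, so it must fool the model on all of them at once rather than on one input.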

18 pages, 865 KiB  
Article
FLGQM: Robust Federated Learning Based on Geometric and Qualitative Metrics
by Shangdong Liu, Xi Xu, Musen Wang, Fei Wu, Yimu Ji, Chenxi Zhu and Qurui Zhang
Appl. Sci. 2024, 14(1), 351; https://doi.org/10.3390/app14010351 - 30 Dec 2023
Viewed by 744
Abstract
Federated learning is a distributed learning method that trains a shared global model by aggregating contributions from multiple clients while ensuring that each client's local data are never shared with others. However, research has revealed that federated learning is vulnerable to poisoning attacks launched by compromised or malicious clients. Many defense mechanisms have been proposed to mitigate the impact of poisoning attacks, but limitations remain. Existing defenses either remove malicious models from a geometric perspective, by measuring the direction of model updates, or add an auxiliary dataset to the server for verifying local models. The former is prone to failure against advanced poisoning attacks, while the latter runs counter to the original intention of federated learning because it requires an independent dataset. To address these problems, we propose a robust federated learning method based on geometric and qualitative metrics (FLGQM). Specifically, FLGQM evaluates local models in both geometric and qualitative terms for comprehensive defense. First, FLGQM assesses all local models in terms of both direction and magnitude, using cosine similarity and Euclidean distance; we refer to these as geometric metrics. Next, we introduce a union client set that assesses the quality of all local models using the union clients' local datasets; we refer to these as qualitative metrics. By combining the results of the two metrics, FLGQM uses information from multiple views to identify poisoning attacks accurately. We evaluated FLGQM experimentally on the MNIST and CIFAR-10 datasets. The results demonstrate that, under different kinds of poisoning attacks, FLGQM achieves performance similar to that of FedAvg in non-adversarial environments, demonstrating strong robustness and effective defense against poisoning attacks.
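The "geometric metric" half of such a defense can be illustrated with a short sketch. This is our own minimal rendering under assumed details, not the paper's exact FLGQM rules: each client update is scored by its cosine similarity to a robust reference direction (here, the coordinate-wise median) and by how far its norm deviates, and only updates passing both checks are averaged.

```python
# Minimal sketch of direction-and-magnitude filtering before aggregation.
import numpy as np

def geometric_filter(updates, cos_thresh=0.0, norm_ratio=2.0):
    """updates: (n_clients, n_params) array of flattened model updates."""
    ref = np.median(updates, axis=0)                 # robust reference update
    ref_norm = np.linalg.norm(ref) + 1e-12
    kept = []
    for u in updates:
        cos = u @ ref / ((np.linalg.norm(u) + 1e-12) * ref_norm)
        size_ok = np.linalg.norm(u) <= norm_ratio * ref_norm
        if cos > cos_thresh and size_ok:             # direction and size checks
            kept.append(u)
    return np.mean(kept, axis=0) if kept else ref    # aggregate the survivors
```

A purely geometric check like this is exactly what the abstract argues is insufficient on its own, which is why FLGQM pairs it with a quality assessment on the union clients' data.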

15 pages, 4371 KiB  
Article
RepVGG-SimAM: An Efficient Bad Image Classification Method Based on RepVGG with Simple Parameter-Free Attention Module
by Zengyu Cai, Xinyang Qiao, Jianwei Zhang, Yuan Feng, Xinhua Hu and Nan Jiang
Appl. Sci. 2023, 13(21), 11925; https://doi.org/10.3390/app132111925 - 31 Oct 2023
Cited by 1 | Viewed by 1070
Abstract
With the rapid development of Internet technology, the number of global Internet users is increasing rapidly and the scale of the Internet is expanding. This vast system has accelerated the spread of harmful information, including bad images. Bad images reflect the vulgar side of Internet culture: they pollute the online environment, erode society's core culture, and endanger the physical and mental health of young people. In addition, some criminals use bad images to induce users to download software containing computer viruses, which further endangers the security of cyberspace. Cyberspace governance therefore faces enormous challenges. Most existing methods for classifying bad images suffer from low classification accuracy and long inference times, limitations that are not conducive to curbing the spread of bad images and reducing their harm. To address this issue, this paper proposes a classification method (RepVGG-SimAM) based on RepVGG and a simple parameter-free attention module (SimAM). The method uses RepVGG as the backbone network and embeds the SimAM attention mechanism so that the network obtains more effective information and suppresses useless information. We constructed our experimental dataset from pornographic images publicly released by data scientist Alexander Kim and violent images collected from the Internet. The experimental results show that the proposed method reaches a classification accuracy of 94.5% on bad images with a false positive rate of only 4.3%, and that its inference speed is double that of the ResNet101 network. The proposed method can effectively identify bad images and provide efficient, powerful support for cyberspace governance.
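Because SimAM is parameter-free, it is compact enough to sketch in full. The following PyTorch module follows the standard published SimAM formulation (an energy-based weight per neuron passed through a sigmoid); where exactly it is embedded inside RepVGG is this paper's contribution and is not reproduced here, and the class and variable names are ours.

```python
# Parameter-free SimAM attention: reweight each activation by an inverse
# energy derived from its squared deviation from the channel mean.
import torch
import torch.nn as nn

class SimAM(nn.Module):
    def __init__(self, e_lambda: float = 1e-4):
        super().__init__()
        self.e_lambda = e_lambda  # regularizer in the energy function

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, height, width)
        n = x.shape[2] * x.shape[3] - 1
        d = (x - x.mean(dim=(2, 3), keepdim=True)).pow(2)   # squared deviation
        v = d.sum(dim=(2, 3), keepdim=True) / n             # per-channel variance
        e_inv = d / (4 * (v + self.e_lambda)) + 0.5         # inverse energy
        return x * torch.sigmoid(e_inv)                     # reweighted features
```

Since the module adds no learnable parameters, dropping it into a backbone such as RepVGG costs almost nothing at inference time, which is consistent with the speed results reported in the abstract.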
