Advances in Secure AI: Technology and Applications

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Computing and Artificial Intelligence".

Deadline for manuscript submissions: closed (30 November 2022) | Viewed by 22266

Special Issue Editors


Dr. Sangkyun Lee
Guest Editor
School of Cybersecurity, Korea University, Seoul 02841, Republic of Korea
Interests: trustworthy AI; AI model stealing; XAI; applications in cybersecurity

Prof. Dr. Yunheung Paek
Guest Editor
Department of Electrical and Computer Engineering, Seoul National University, Seoul 08826, Korea
Interests: software and systems security; hardware support for the security of computing devices; intelligent security; secure AI

Special Issue Information

Dear Colleagues,

Artificial intelligence (AI) is a technology that enables us to identify solutions to complex problems using relatively simple learning mechanisms in a data-driven fashion. Owing to the recent success of and advances in AI techniques such as computer vision and natural language processing, many intelligent services now integrate AI, especially mission-critical applications dealing with complex systems such as autonomous vehicles, environmental monitoring, and cybersecurity. However, we still do not fully understand the characteristics of AI models and learning techniques; it has therefore become urgent for both theoreticians and practitioners to investigate robust, timely, explainable, and trustworthy AI to avoid unforeseen malfunctions of AI-based services.

This Special Issue aims to address the latest advances in the techniques and applications of secure AI. Potential topics include but are not limited to the following:

  • Adversarial attack and defense techniques;
  • AI model stealing attack and defense techniques;
  • Data poisoning (AI backdoor/trojan) attack, detection, and defense;
  • Explainable AI (XAI) techniques and applications;
  • XAI for human-AI cooperation;
  • H/W enclaves for secure AI;
  • AI model verification;
  • Data privacy;
  • Statistical bias and fairness;
  • Learning provenance of data;
  • Learning causality;
  • Robust training;
  • Robust decisions in dynamic environments;
  • Perspectives of secure AI in real-world applications;
  • Sharing confidential data: confidential federated learning, differential privacy, and homomorphic encryption.

Dr. Sangkyun Lee
Prof. Dr. Yunheung Paek
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • adversarial attack
  • AI model stealing attack
  • AI model inversion attack
  • data poisoning (AI backdoor/trojan) attack
  • attack detection
  • AI model verification
  • data privacy
  • bias/fairness issues
  • robust training
  • explainable AI (XAI) techniques and applications
  • H/W enclaves for secure AI
  • confidential federated learning
  • differential privacy
  • homomorphic encryption

Published Papers (6 papers)

Editorial

2 pages, 158 KiB  
Editorial
Foreword to the Special Issue on Advances in Secure AI: Technology and Applications
by Sangkyun Lee
Appl. Sci. 2022, 12(19), 10015; https://doi.org/10.3390/app121910015 - 05 Oct 2022
Viewed by 895
Abstract
I am pleased to introduce the Special Issue on “Advances in Secure AI: Technology and Applications” [...] Full article

Research

13 pages, 4684 KiB  
Article
Activation Fine-Tuning of Convolutional Neural Networks for Improved Input Attribution Based on Class Activation Maps
by Sungmin Han, Jeonghyun Lee and Sangkyun Lee
Appl. Sci. 2022, 12(24), 12961; https://doi.org/10.3390/app122412961 - 16 Dec 2022
Cited by 1 | Viewed by 1559
Abstract
Model induction is one of the most popular methods to extract information to better understand AI’s decisions by estimating the contribution of input features for a class of interest. However, we found a potential issue: most model induction methods, especially those that compute class activation maps, rely on arbitrary thresholding to mute some of their computed attribution scores, which can cause the severe quality degradation of model induction. Therefore, we propose a new threshold fine-tuning (TFT) procedure to enhance the quality of input attribution based on model induction. Our TFT replaces arbitrary thresholding with an iterative procedure to find the optimal cut-off threshold value of input attribution scores using a new quality metric. Furthermore, to remove the burden of computing optimal threshold values on a per-input basis, we suggest an activation fine-tuning (AFT) framework using a tuner network attached to the original convolutional neural network (CNN), retraining the tuner-attached network with auxiliary data produced by TFT. The purpose of the tuner network is to make the activations of the original CNN less noisy and thus better suited for computing input attribution scores based on class activation maps from the activations. In our experiments, we show that the per-input optimal thresholding of attribution scores using TFT can significantly improve the quality of input attribution, and CNNs fine-tuned with our AFT can be used to produce improved input attribution matching the quality of TFT-tuned input attribution without requiring costly per-input threshold optimization. Full article
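
To make the thresholding idea above concrete, here is a minimal sketch (not the authors' implementation) of a TFT-style threshold search: it assumes a precomputed class activation map given as a 2D array of attribution scores and uses a simple, hypothetical quality metric (attribution coverage plus mask sparsity) as a stand-in for the metric proposed in the paper.

```python
import numpy as np

def attribution_quality(mask: np.ndarray, cam: np.ndarray) -> float:
    """Hypothetical stand-in for the paper's quality metric: attribution mass
    captured by the mask plus a reward for keeping the mask small."""
    if mask.sum() == 0:
        return float("-inf")
    coverage = cam[mask].sum() / (cam.sum() + 1e-12)
    sparsity = 1.0 - mask.mean()
    return float(coverage + sparsity)

def tune_threshold(cam: np.ndarray, num_candidates: int = 50):
    """TFT-style search: scan candidate cut-off values and keep the one that
    maximizes the quality metric, instead of muting attribution scores with a
    fixed, arbitrary threshold."""
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-12)  # normalize to [0, 1]
    best_t, best_q = 0.0, float("-inf")
    for t in np.linspace(0.0, 1.0, num_candidates):
        mask = cam >= t
        q = attribution_quality(mask, cam)
        if q > best_q:
            best_t, best_q = t, q
    return best_t, cam >= best_t

# Example with a random array standing in for a real class activation map:
rng = np.random.default_rng(0)
threshold, mask = tune_threshold(rng.random((14, 14)))
```

The AFT framework then amortizes this per-input search by training a tuner network on the thresholds found by TFT, so that deployment does not require the loop above for every input.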

9 pages, 428 KiB  
Article
Backdoor Attacks on Deep Neural Networks via Transfer Learning from Natural Images
by Yuki Matsuo and Kazuhiro Takemoto
Appl. Sci. 2022, 12(24), 12564; https://doi.org/10.3390/app122412564 - 08 Dec 2022
Cited by 3 | Viewed by 1660
Abstract
Backdoor attacks are a serious security threat to open-source and outsourced development of computational systems based on deep neural networks (DNNs). In particular, the transferability of backdoors is remarkable; that is, they can remain effective after transfer learning is performed. Given that transfer learning from natural images is widely used in real-world applications, the question of whether backdoors can be transferred from neural models pretrained on natural images involves considerable security implications. However, this topic has not been evaluated rigorously in prior studies. Hence, in this study, we configured backdoors in 10 representative DNN models pretrained on a natural image dataset, and then fine-tuned the backdoored models via transfer learning for four real-world applications, including pneumonia classification from chest X-ray images, emergency response monitoring from aerial images, facial recognition, and age classification from images of faces. Our experimental results show that the backdoors generally remained effective after transfer learning from natural images, except for small DNN models. Moreover, the backdoors were difficult to detect using a common method. Our findings indicate that backdoor attacks can exhibit remarkable transferability in more realistic transfer learning processes, and highlight the need for the development of more advanced security countermeasures in developing systems using DNN models for sensitive or mission-critical applications. Full article
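
The threat model can be sketched roughly as follows (Python/PyTorch; this is not the authors' pipeline). The weights loaded here are ordinary ImageNet weights standing in for a hypothetically backdoored upstream model, the trigger is a simple corner patch, and the downstream task size is a placeholder; the victim fine-tunes on clean data, and the attack is then evaluated by stamping the trigger onto test inputs.

```python
import torch
import torch.nn as nn
from torchvision import models

def add_trigger(images: torch.Tensor, patch_size: int = 4) -> torch.Tensor:
    """Stamp a small white square into the bottom-right corner of each image,
    a simple stand-in for a backdoor trigger pattern."""
    poisoned = images.clone()
    poisoned[:, :, -patch_size:, -patch_size:] = 1.0
    return poisoned

# The victim fine-tunes a pretrained model on a clean downstream task.
# (Standard ImageNet weights are loaded here; in the attack scenario these
# would already contain a backdoor planted during upstream training.)
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 4)  # placeholder: four downstream classes
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()

def fine_tune_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One ordinary fine-tuning step on clean (un-poisoned) downstream data."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

@torch.no_grad()
def attack_success_rate(images: torch.Tensor, target_class: int = 0) -> float:
    """Fraction of triggered test inputs classified as the attacker's target
    class after fine-tuning: a common measure of backdoor persistence."""
    preds = model(add_trigger(images)).argmax(dim=1)
    return (preds == target_class).float().mean().item()
```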

12 pages, 803 KiB  
Article
A Neuro-Symbolic Classifier with Optimized Satisfiability for Monitoring Security Alerts in Network Traffic
by Darian Onchis, Codruta Istin and Eduard Hogea
Appl. Sci. 2022, 12(22), 11502; https://doi.org/10.3390/app122211502 - 12 Nov 2022
Cited by 1 | Viewed by 1261
Abstract
We introduce in this paper a neuro-symbolic predictive model based on Logic Tensor Networks, capable of discriminating and at the same time of explaining the bad connections, called alerts or attacks, and the normal connections. The proposed classifier incorporates both the ability of deep neural networks to improve on their own through learning from experience and the interpretability of the results provided by the symbolic artificial intelligence approach. Compared to other existing solutions, we advance in the discovery of potential security breaches from a cognitive perspective. By introducing the reasoning in the model, our aim is to further reduce the human staff needed to deal with the cyber-threat hunting problem. To justify the need for shifting towards hybrid systems for this task, the design, the implementation, and the comparison of the dense neural network and the neuro-symbolic model is performed in detail. While in terms of standard accuracy, both models demonstrated similar precision, we further introduced for our model the concept of interactive accuracy as a way of querying the model results at any time coupled with deductive reasoning over data. By applying our model on the CIC-IDS2017 dataset, we reached an accuracy of 0.95, with levels of satisfiability around 0.85. Other advantages such as overfitting mitigation and scalability issues are also presented. Full article
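
As a rough, self-contained sketch of the neuro-symbolic idea (this is neither the Logic Tensor Networks library nor the authors' code), the example below pairs a plain dense classifier with product fuzzy-logic connectives and turns one illustrative rule, "flagged connections are attacks", into a differentiable satisfiability score that can be monitored or added to the loss; the feature size, labels, and "flagged" predicate are all made-up placeholders.

```python
import torch
import torch.nn as nn

class AlertClassifier(nn.Module):
    """Plain dense network producing P(attack | connection features)."""
    def __init__(self, n_features: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 64), nn.ReLU(),
            nn.Linear(64, 1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.net(x).squeeze(-1)

# Product fuzzy-logic connectives, the usual building blocks behind
# Logic-Tensor-Network-style satisfiability:
def fuzzy_not(a): return 1.0 - a
def fuzzy_or(a, b): return a + b - a * b
def fuzzy_implies(a, b): return fuzzy_or(fuzzy_not(a), b)

def satisfiability(p_attack: torch.Tensor, is_flagged: torch.Tensor) -> torch.Tensor:
    """Mean truth degree of the rule 'flagged connections are attacks' over a
    batch; values near 1 mean the rule is (fuzzily) satisfied."""
    return fuzzy_implies(is_flagged, p_attack).mean()

# Usage sketch: monitor satisfiability alongside the usual classification loss.
model = AlertClassifier(n_features=20)          # placeholder feature size
x = torch.randn(8, 20)
labels = torch.randint(0, 2, (8,)).float()      # placeholder attack labels
flagged = torch.randint(0, 2, (8,)).float()     # placeholder symbolic predicate
p = model(x)
sat = satisfiability(p, flagged)
loss = nn.functional.binary_cross_entropy(p, labels) + (1.0 - sat)
```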

15 pages, 528 KiB  
Article
Communication-Efficient Secure Federated Statistical Tests from Multiparty Homomorphic Encryption
by Meenatchi Sundaram Muthu Selva Annamalai, Chao Jin and Khin Mi Mi Aung
Appl. Sci. 2022, 12(22), 11462; https://doi.org/10.3390/app122211462 - 11 Nov 2022
Cited by 1 | Viewed by 1040
Abstract
The power and robustness of statistical tests are strongly tied to the amount of data available for testing. However, much of the collected data today is siloed amongst various data owners due to privacy concerns, thus limiting the utility of the collected data. While frameworks for secure multiparty computation enable functions to be securely evaluated on federated datasets, they depend on protocols over secret shared data, which result in high communication costs even in the semi-honest setting. In this paper, we present methods for securely evaluating statistical tests, specifically the Welch’s t-test and the χ²-test, in the semi-honest setting using multiparty homomorphic encryption (MHE). We tested and evaluated our methods against real world datasets and found that our method for computing the Welch’s t-test and χ²-test statistics required 100× less communication than equivalent protocols implemented using secure multiparty computation (SMPC), resulting in up to 10× improvement in runtime. Lastly, we designed and implemented a novel protocol to perform a table lookup from a secret shared index and use it to build a hybrid protocol that switches between MHE and SMPC representations in order to calculate the p-value of the statistics efficiently. This hybrid protocol is 1.5× faster than equivalent protocols implemented using SMPC alone. Full article
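
To see what is being computed under the encryption, the sketch below (plain Python, no cryptography) evaluates Welch's t-test from each party's local sufficient statistics, i.e. the count, sum, and sum of squares, which is exactly the kind of additive aggregate that MHE lets the parties combine without exposing raw records; the party data are invented and the homomorphic layer is omitted.

```python
import math

def local_stats(xs):
    """Each data owner computes only these aggregates on its local data."""
    n = len(xs)
    return n, sum(xs), sum(x * x for x in xs)

def combine(parties):
    """Sum the per-party aggregates; under MHE this addition is carried out on
    ciphertexts, so no party reveals its raw values."""
    return tuple(sum(p[i] for p in parties) for i in range(3))

def welch_t(group_a, group_b):
    """Welch's t statistic and Welch-Satterthwaite degrees of freedom from the
    combined (count, sum, sum of squares) of each group."""
    (na, sa, ssa), (nb, sb, ssb) = combine(group_a), combine(group_b)
    mean_a, mean_b = sa / na, sb / nb
    var_a = (ssa - sa * sa / na) / (na - 1)   # sample variance from aggregates
    var_b = (ssb - sb * sb / nb) / (nb - 1)
    se2 = var_a / na + var_b / nb
    t = (mean_a - mean_b) / math.sqrt(se2)
    df = se2 ** 2 / ((var_a / na) ** 2 / (na - 1) + (var_b / nb) ** 2 / (nb - 1))
    return t, df

# Two (invented) data owners each hold part of the treatment and control groups:
treated = [local_stats([5.1, 4.9, 5.4]), local_stats([5.0, 5.2])]
control = [local_stats([4.2, 4.4, 4.1]), local_stats([4.3, 4.5, 4.0])]
t_stat, dof = welch_t(treated, control)
```

Converting the statistic into a p-value requires a table lookup or special-function evaluation, which is where the paper's hybrid MHE/SMPC protocol comes in.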

Review

21 pages, 7107 KiB  
Review
Blockchain Technology and Artificial Intelligence Together: A Critical Review on Applications
by Hamed Taherdoost
Appl. Sci. 2022, 12(24), 12948; https://doi.org/10.3390/app122412948 - 16 Dec 2022
Cited by 18 | Viewed by 14886
Abstract
It is undeniable that the adoption of blockchain- and artificial intelligence (AI)-based paradigms is proceeding at lightning speed. Both paradigms provide something new to the market, but the degree of novelty and complexity of each is different. In the age of digital money, blockchains may automate installments to allow for the safe, decentralized exchange of personal data, information, and logs. AI and blockchains are two of the most talked about technologies right now. Using a decentralized, secure, and trustworthy system, blockchain technology can automate bitcoin payments and provide users access to a shared ledger of records, transactions, and data. Through the use of smart contracts, a blockchain may also regulate user interactions without the need for a central authority. As an alternative, AI provides robots with the ability to reason and make decisions and human-level intellect. This revelation led to a thorough assessment of the AI and blockchain combo created between 2012 and 2022. This critical review contains 121 articles from the recent decade that investigate the present situation and rationale of the AI and blockchain combination. The integration’s practical application is the emphasis of this overview’s meatiest portion. In addition, the gaps and problems of this combination in the linked literature have been studied, with a focus on the constraints. Full article
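
As a toy illustration of the shared-ledger idea the review refers to (not tied to any particular blockchain platform), the sketch below chains records by hashing, so altering an earlier entry invalidates every later link; the "provenance" events are invented placeholders.

```python
import hashlib
import json
import time

def block_hash(block: dict) -> str:
    """Hash of the block contents, excluding the stored hash itself."""
    payload = {k: block[k] for k in ("timestamp", "data", "prev_hash")}
    return hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()

def make_block(data: dict, prev_hash: str) -> dict:
    """Append-only ledger entry whose identity depends on the previous block."""
    block = {"timestamp": time.time(), "data": data, "prev_hash": prev_hash}
    block["hash"] = block_hash(block)
    return block

def verify_chain(chain: list) -> bool:
    """Recompute every hash and check each link; tampering with any earlier
    record invalidates all records that follow it."""
    if any(block_hash(b) != b["hash"] for b in chain):
        return False
    return all(cur["prev_hash"] == prev["hash"] for prev, cur in zip(chain, chain[1:]))

# Toy ledger of invented AI-model provenance events:
genesis = make_block({"event": "model registered"}, prev_hash="0" * 64)
update = make_block({"event": "weights updated"}, prev_hash=genesis["hash"])
assert verify_chain([genesis, update])
```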
