Privacy and Security in Machine Learning and Artificial Intelligence

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Computing and Artificial Intelligence".

Deadline for manuscript submissions: 20 October 2024

Special Issue Editor


Guest Editor
Department of Computer Science, Open University, 6401 DL Heerlen, The Netherlands
Interests: privacy and security; machine learning

Special Issue Information

Dear Colleagues,

The development of Artificial Intelligence (AI) and its learning techniques, such as Machine Learning (ML) and Deep Learning (DL), has revolutionized data processing and analysis. This transformation is rapidly changing human life and has enabled many practical AI-based applications, including the Internet of Things/Vehicles (IoT/IoV), smart grids and energy saving, fog/edge computing, face/image recognition, text/sentiment analysis, attack detection, and healthcare.

However, the potential benefits of AI are hindered by issues such as insecurity, bias, unreliability, and privacy violations in data processing and communication, which harm both AI applications and, as a consequence, society. To address these concerns, this Special Issue seeks novel ideas and findings that envision the future of private and secure machine learning and AI.

The Special Issue welcomes submissions on topics such as:

  • privacy-preserving machine learning, deep learning, and federated learning
  • trustworthy machine learning
  • metrics in private, secure, and trustworthy AI
  • adversarial attacks against AI models
  • cryptography and security protocols in AI
  • privacy by design in AI-based systems
  • applications of private, secure, and trustworthy AI

Dr. Mina Sheikhalishahi
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • privacy
  • trustworthy AI
  • federated learning
  • AI security

Published Papers (2 papers)


Research

16 pages, 383 KiB  
Article
Structure Estimation of Adversarial Distributions for Enhancing Model Robustness: A Clustering-Based Approach
by Bader Rasheed, Adil Khan and Asad Masood Khattak
Appl. Sci. 2023, 13(19), 10972; https://doi.org/10.3390/app131910972 - 05 Oct 2023
Abstract
In this paper, we propose an advanced method for adversarial training that focuses on leveraging the underlying structure of adversarial perturbation distributions. Unlike conventional adversarial training techniques that consider adversarial examples in isolation, our approach employs clustering algorithms in conjunction with dimensionality reduction techniques to group adversarial perturbations, effectively constructing a more intricate and structured feature space for model training. Our method incorporates density- and boundary-aware clustering mechanisms to capture the inherent spatial relationships among adversarial examples. Furthermore, we introduce a strategy for utilizing adversarial perturbations to enhance the delineation between clusters, leading to the formation of more robust and compact clusters. To substantiate the method's efficacy, we performed a comprehensive evaluation using well-established benchmarks, including the MNIST and CIFAR-10 datasets. The performance metrics employed for the evaluation encompass the trade-off between adversarial and clean accuracy, demonstrating a significant improvement in both robust and standard test accuracy over traditional adversarial training methods. Through empirical experiments, we show that the proposed clustering-based adversarial training framework not only enhances the model's robustness against a range of adversarial attacks, such as FGSM and PGD, but also improves generalization on clean data.
(This article belongs to the Special Issue Privacy and Security in Machine Learning and Artificial Intelligence)
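
The pipeline sketched in the abstract (generate adversarial perturbations, reduce their dimensionality, cluster them) can be illustrated with a short example. The following sketch is not the authors' implementation: the toy model, the one-step FGSM attack, and the PCA/k-means choices are stand-in assumptions for the paper's density- and boundary-aware machinery.

    # Illustrative sketch: cluster adversarial perturbations after
    # dimensionality reduction. The toy model and the FGSM/PCA/k-means
    # choices are assumptions, not the paper's method.
    import torch
    import torch.nn as nn
    from sklearn.decomposition import PCA
    from sklearn.cluster import KMeans

    torch.manual_seed(0)

    # Toy classifier and random data standing in for an MNIST-scale setup.
    model = nn.Sequential(nn.Flatten(), nn.Linear(784, 10))
    x = torch.rand(256, 1, 28, 28)            # images in [0, 1]
    y = torch.randint(0, 10, (256,))
    loss_fn = nn.CrossEntropyLoss()

    def fgsm_perturbations(model, x, y, eps=0.1):
        """One-step FGSM: perturbation = eps * sign(d loss / d x)."""
        x = x.clone().requires_grad_(True)
        loss_fn(model(x), y).backward()
        return eps * x.grad.sign()

    delta = fgsm_perturbations(model, x, y)   # shape (256, 1, 28, 28)
    flat = delta.view(delta.size(0), -1).numpy()

    # Reduce dimensionality, then group the perturbations into clusters;
    # the paper's density- and boundary-aware clustering would replace
    # the plain k-means step here.
    z = PCA(n_components=20).fit_transform(flat)
    labels = KMeans(n_clusters=5, n_init=10).fit_predict(z)
    print("cluster sizes:", [int((labels == k).sum()) for k in range(5)])

The recovered cluster structure is what an adversarial training loop would then exploit; the sketch only shows where the clustering step slots into the data preparation.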

19 pages, 7360 KiB  
Article
Federated Learning for Clients’ Data Privacy Assurance in Food Service Industry
by Hamed Taheri Gorji, Mahdi Saeedi, Erum Mushtaq, Hossein Kashani Zadeh, Kaylee Husarik, Seyed Mojtaba Shahabi, Jianwei Qin, Diane E. Chan, Insuck Baek, Moon S. Kim, Alireza Akhbardeh, Stanislav Sokolov, Salman Avestimehr, Nicholas MacKinnon, Fartash Vasefi and Kouhyar Tavakolian
Appl. Sci. 2023, 13(16), 9330; https://doi.org/10.3390/app13169330 - 17 Aug 2023
Abstract
The food service industry must ensure that service facilities are free of foodborne pathogens hosted by organic residues and biofilms. Foodborne diseases put customers at risk and compromise the reputations of service providers. Fluorescence imaging, empowered by state-of-the-art artificial intelligence (AI) algorithms, can detect invisible residues. However, using AI requires large datasets that are most effective when collected from actual users, raising concerns about data privacy and possible leakage of sensitive information. In this study, we employed a decentralized privacy-preserving technology to address client data privacy issues. When federated learning (FL) is used, there is no need for data sharing across clients or data centralization on a server. We combined FL with a new fluorescence imaging technology and applied two deep learning models, MobileNetV3 and DeepLabv3+, to identify and segment invisible residues on food preparation equipment and surfaces. We used FedML as our FL framework and FedAvg as the aggregation algorithm. The model achieved training and testing accuracies of 95.83% and 94.94%, respectively, for classification of clean and contaminated frames, and intersection over union (IoU) scores of 91.23% and 89.45% for training and testing, respectively, for segmentation of the contaminated areas. The results demonstrate that combining federated learning with fluorescence imaging and deep learning algorithms can improve the performance of cleanliness auditing systems while assuring client data privacy.
(This article belongs to the Special Issue Privacy and Security in Machine Learning and Artificial Intelligence)
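
The FedAvg aggregation step the abstract refers to can be sketched in a few lines: clients train locally and send back only their model weights, and the server averages them, weighted by local dataset size, so raw data never leaves a client. The toy PyTorch model below is a placeholder assumption, not the paper's FedML/MobileNetV3 setup.

    # Illustrative sketch of FedAvg aggregation; the toy model is an
    # assumption, not the paper's FedML/MobileNetV3 configuration.
    import torch.nn as nn

    def make_model():
        return nn.Sequential(nn.Linear(16, 8), nn.ReLU(), nn.Linear(8, 2))

    def fedavg(client_states, client_sizes):
        """Average client state_dicts, weighted by local dataset size."""
        total = sum(client_sizes)
        return {
            key: sum(state[key] * (n / total)
                     for state, n in zip(client_states, client_sizes))
            for key in client_states[0]
        }

    # Three clients train locally (training omitted) and share only their
    # model parameters with the server -- never their raw images.
    clients = [make_model() for _ in range(3)]
    states = [c.state_dict() for c in clients]
    sizes = [120, 300, 80]                    # local dataset sizes

    global_model = make_model()
    global_model.load_state_dict(fedavg(states, sizes))
    print({k: tuple(v.shape) for k, v in global_model.state_dict().items()})

Weighting each client's contribution by its dataset size is the standard FedAvg choice; after aggregation, the server broadcasts the averaged weights back to the clients for the next round.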
