Artificial Intelligence and Database Security

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Artificial Intelligence".

Deadline for manuscript submissions: 31 December 2024

Special Issue Editors


Guest Editor
Department of Computing, The Hong Kong Polytechnic University, Hong Kong 999077, China
Interests: database security; AI security

Guest Editor
School of Cyberspace Science and Technology, Beijing Institute of Technology, Beijing 100081, China
Interests: applied cryptography; mobile crowdsourcing

Guest Editor
School of Computer Science, Nanjing University of Information Science and Technology, Nanjing 210044, China
Interests: applied cryptography

Guest Editor
Frontier Research Center, Peng Cheng Laboratory, Shenzhen 518000, China
Interests: 6G networks; network intelligence; network virtualization

Special Issue Information

Dear Colleagues,

In recent years, databases and Artificial Intelligence (AI) have played pivotal roles in reshaping modern computing. Databases serve as foundational repositories for structured and unstructured data, while AI, with its advanced algorithms and machine learning capabilities, has revolutionized data analysis, pattern recognition, and decision making. The integration of databases with AI has emerged as a transformative force, enabling data-driven insights, predictive analytics, and intelligent automation across various domains. However, this convergence introduces a new dimension of security challenges that demand focused attention.

Security is paramount in both the database and AI domains, as data confidentiality, integrity, and availability are fundamental. Database security ensures the protection of sensitive information, while AI security guards against vulnerabilities in machine learning models and algorithms. As these two domains converge to create AI-driven database systems, the security challenges become even more critical. Ensuring that AI-empowered decisions are based on accurate and secure data, while safeguarding against novel threats that exploit the dynamic interactions between AI and databases, presents a multifaceted challenge that necessitates specialized research and solutions.

This Special Issue is dedicated to investigating innovative approaches, methodologies, and frameworks that address the unique security concerns encountered in AI-driven database environments. It aims to provide a platform for researchers, practitioners, and experts in the fields of artificial intelligence and database security to share their latest findings, exchange ideas, and advance the state of the art in safeguarding databases in the era of AI-driven computing. By scrutinizing the intricate security challenges arising from the interplay of AI and databases, this Special Issue seeks to contribute significantly to the development of secure and reliable AI-powered systems, ultimately bolstering trust in the broader adoption of artificial intelligence technologies across various domains. High-quality original research and review articles in this area are welcome. We will also publish selected papers from the second International Workshop on Future Mobile Computing and Networking for Internet of Things (FMobile, see https://fmobile2023.github.io/FMobile/). The Special Issue encompasses a broad spectrum of topics related to the security of databases and AI, including but not limited to:

  • Database security;
  • AI security;
  • Secure AI–database integration;
  • Machine learning for anomaly detection;
  • Applied cryptography and privacy-preserving protocols;
  • Privacy-preserving AI techniques;
  • Adversarial attacks and defenses;
  • Access control and authorization;
  • Blockchain and integrity assurance;
  • Ethical considerations in AI–database security.

Dr. Jinwen Liang
Dr. Chuan Zhang
Dr. Fuyuan Song
Dr. Wen Wu
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • database
  • artificial intelligence (AI)
  • privacy preservation
  • integrity
  • blockchain
  • applied cryptography

Published Papers (4 papers)


Research

16 pages, 16315 KiB  
Article
An Abnormal Account Identification Method by Topology Feature Analysis for Blockchain-Based Transaction Network
by Yuyu Yue, Jixin Zhang, Mingwu Zhang and Jia Yang
Electronics 2024, 13(8), 1416; https://doi.org/10.3390/electronics13081416 - 09 Apr 2024
Abstract
Cryptocurrency, as one of the most successful applications of blockchain technology, has played a vital role in promoting the development of the digital economy. However, its anonymity, the large scale of cryptographic transactions, and decentralization have also brought new challenges in identifying abnormal accounts and preventing abnormal transaction behaviors, such as money laundering, extortion, and market manipulation. Recently, some researchers have proposed efficient and accurate abnormal transaction detection based on machine learning. In reality, however, abnormal accounts and transactions are far rarer than normal ones, so previous methods trained on such imbalanced data struggle to detect abnormal accounts. To address these issues, in this paper, we propose a method for identifying abnormal accounts using topology analysis of cryptographic transactions. We model the accounts and transactions in the blockchain as graph nodes and edges. Since abnormal accounts may exhibit distinctive topology features, we extract topology features from the transaction graph. By analyzing these features, we find that the high-dimensional sparse topology features can be compressed using singular value decomposition for feature dimension reduction. Subsequently, we use a generative adversarial network to synthesize abnormal-account samples, which are added to the training dataset to balance the abnormal/normal classes. Finally, we utilize several machine learning techniques to detect abnormal accounts in the blockchain. Our experimental results demonstrate that our method significantly improves the accuracy and recall of detecting abnormal accounts in the blockchain compared with state-of-the-art methods.
(This article belongs to the Special Issue Artificial Intelligence and Database Security)
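
The detection pipeline described in the abstract lends itself to a compact illustration. The sketch below assumes a precomputed account-feature matrix with hypothetical data; it uses scikit-learn's TruncatedSVD for the dimension-reduction step and substitutes simple random oversampling for the paper's GAN-based sample generation, which is not reproduced here.

```python
# Minimal sketch of the detection pipeline: SVD feature compression,
# class balancing, and a standard classifier. The paper uses a GAN to
# synthesize abnormal-account samples; here, random oversampling stands
# in for that step to keep the sketch self-contained.
import numpy as np
from sklearn.decomposition import TruncatedSVD
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)

# Hypothetical data: high-dimensional topology features per account,
# with abnormal accounts (label 1) far rarer than normal ones (label 0).
X = rng.random((2000, 500))
y = (rng.random(2000) < 0.05).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

# Compress the sparse topology features with truncated SVD.
svd = TruncatedSVD(n_components=32, random_state=0)
Z_train = svd.fit_transform(X_train)
Z_test = svd.transform(X_test)

# Balance the training set (the paper generates GAN samples instead).
minority = Z_train[y_train == 1]
n_extra = (y_train == 0).sum() - (y_train == 1).sum()
extra = minority[rng.integers(0, len(minority), n_extra)]
Z_bal = np.vstack([Z_train, extra])
y_bal = np.concatenate([y_train, np.ones(n_extra, dtype=int)])

clf = RandomForestClassifier(random_state=0).fit(Z_bal, y_bal)
print(classification_report(y_test, clf.predict(Z_test)))
```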

13 pages, 3190 KiB  
Article
RadiantVisions: Illuminating Low-Light Imagery with a Multi-Scale Branch Network
by Yu Zhang, Shan Jiang and Xiangyun Tang
Electronics 2024, 13(4), 788; https://doi.org/10.3390/electronics13040788 - 17 Feb 2024
Abstract
In the realms of the Internet of Things (IoT) and artificial intelligence (AI) security, ensuring the integrity and quality of visual data becomes paramount, especially under low-light conditions, where low-light image enhancement emerges as a crucial technology. However, current methods for enhancing images under low-light conditions still face challenging issues, including the inability to effectively handle uneven illumination distribution, suboptimal denoising performance, and insufficient correlation among branch networks. Addressing these issues, the Multi-Scale Branch Network is proposed. It utilizes multi-scale feature extraction to handle uneven illumination distribution, introduces denoising functions to mitigate the noise arising from image enhancement, and establishes correlations between network branches to enhance information exchange. Additionally, our approach incorporates a vision transformer to enhance feature extraction and context understanding. The process begins with capturing raw RGB data, which are then optimized through image signal processor (ISP) techniques, resulting in a refined visual output. This method significantly improves image brightness and reduces noise compared to similar low-light enhancement methods. On the LOL-V2-real dataset, we achieved improvements of 0.255 in PSNR and 0.23 in SSIM, with decreases of 0.003 in MAE and 0.009 in LPIPS, compared to state-of-the-art methods. Rigorous experimentation confirmed the reliability of this approach in enhancing image quality under low-light conditions.
(This article belongs to the Special Issue Artificial Intelligence and Database Security)
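
As a rough analogue of the multi-scale branching idea, the following toy PyTorch module processes an image at three resolutions and fuses the branches at full resolution. It is a minimal sketch, not the authors' RadiantVisions architecture; every layer choice here is an assumption made for illustration.

```python
# Toy sketch of multi-scale branch feature extraction with cross-branch
# fusion: process the image at several scales, then let a fusion layer
# exchange information between branches and predict a residual correction.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiScaleBranchNet(nn.Module):
    def __init__(self, channels=16):
        super().__init__()
        # One small conv branch per scale (full, 1/2, 1/4 resolution).
        self.branches = nn.ModuleList([
            nn.Sequential(nn.Conv2d(3, channels, 3, padding=1), nn.ReLU())
            for _ in range(3)
        ])
        # Fusion layer correlates the branches after upsampling.
        self.fuse = nn.Conv2d(3 * channels, 3, 3, padding=1)

    def forward(self, x):
        feats = []
        for i, branch in enumerate(self.branches):
            scale = 2 ** i
            xs = F.avg_pool2d(x, scale) if scale > 1 else x
            f = branch(xs)
            if scale > 1:  # bring every branch back to full resolution
                f = F.interpolate(f, size=x.shape[-2:], mode="bilinear",
                                  align_corners=False)
            feats.append(f)
        # Residual enhancement: predict a brightness/denoise correction.
        return x + self.fuse(torch.cat(feats, dim=1))

net = MultiScaleBranchNet()
low_light = torch.rand(1, 3, 64, 64)  # dummy low-light image
enhanced = net(low_light)
print(enhanced.shape)  # torch.Size([1, 3, 64, 64])
```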

21 pages, 2691 KiB  
Article
Enabling Privacy-Preserving Data Sharing with Bilateral Access Control for Cloud
by Tong Wu, Xiaochen Ma and Hailun Yan
Electronics 2023, 12(23), 4798; https://doi.org/10.3390/electronics12234798 - 27 Nov 2023
Abstract
Cloud computing plays an essential role in various fields. However, existing cloud services face a severe challenge: how to share data securely among a large number of devices. In this paper, we introduce a cloud-based privacy-preserving data sharing scheme derived from identity-based matchmaking encryption. In our scheme, the access policies are specified by both the sender and the receiver, supporting bilateral access control. To improve efficiency, we delegate the match algorithm to the cloud server, reducing the computation cost and communication overhead on end devices without revealing the users' privacy. Through formal security analysis, we show that our scheme satisfies security, authenticity, and privacy. Finally, we evaluate our scheme with extensive experiments, indicating that it is more efficient than other ME-based data-sharing schemes on a real-world dataset.
(This article belongs to the Special Issue Artificial Intelligence and Database Security)
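
To make the bilateral access control idea concrete, the plaintext sketch below shows the condition the cloud's delegated match step evaluates: delivery succeeds only when the sender's policy covers the receiver and vice versa. All cryptographic machinery of matchmaking encryption is omitted; in the actual scheme the cloud matches over ciphertexts and learns only the match outcome, never the attributes. The identities and API here are hypothetical.

```python
# Conceptual sketch of bilateral access control as used in matchmaking
# encryption: both sender and receiver specify a policy, and the cloud's
# match step succeeds only when each side satisfies the other's policy.
from dataclasses import dataclass

@dataclass
class Party:
    identity: str       # the party's own identity/attribute
    target_policy: set  # identities it is willing to exchange data with

def cloud_match(sender: Party, receiver: Party) -> bool:
    """Delegated match: deliver the ciphertext only if the policies
    align in both directions (sender accepts receiver AND vice versa)."""
    return (receiver.identity in sender.target_policy and
            sender.identity in receiver.target_policy)

alice = Party("alice@hospital", {"bob@lab"})
bob = Party("bob@lab", {"alice@hospital"})
eve = Party("eve@unknown", {"alice@hospital"})

print(cloud_match(alice, bob))  # True: bilateral policies satisfied
print(cloud_match(alice, eve))  # False: alice's policy excludes eve
```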

28 pages, 1595 KiB  
Article
Joint AP Selection and Task Offloading Based on Deep Reinforcement Learning for Urban-Micro Cell-Free UAV Network
by Chunyu Pan, Jincheng Wang, Xinwei Yue, Linyan Guo and Zhaohui Yang
Electronics 2023, 12(23), 4777; https://doi.org/10.3390/electronics12234777 - 25 Nov 2023
Abstract
The flexible mobility of unmanned aerial vehicles (UAVs) leads to frequent handovers and serious inter-cell interference in UAV-assisted cellular networks. Establishing a cell-free UAV (CF-UAV) network without cell boundaries effectively alleviates the frequent handovers and interference, and has been an important topic of 6G research. However, in existing CF-UAV networks, a large amount of backhaul data increases the computational pressure on the central processing unit (CPU), which also increases system delay. Meanwhile, the mobility of UAVs leads to time-varying channel conditions. Designing dynamic resource allocation schemes with the help of edge computing can effectively alleviate these problems. Thus, targeting partial network breakdown in an urban-micro (UMi) environment, this paper proposes an urban-micro CF-UAV (UMCF-UAV) network architecture. A delay minimization problem is formulated, and a dynamic task offloading (DTO) strategy that jointly optimizes access point (AP) selection and task offloading is proposed to reduce system delay. Considering the coupling of the various resources and the non-convexity of the formulated problem, a dynamic resource cooperative allocation (DRCA) algorithm based on deep reinforcement learning (DRL) is proposed, which flexibly places each UAV's tasks between edge and local execution while selecting its AP. Simulation results show that the proposed algorithm converges faster than classical reinforcement learning and achieves lower system delay than other baseline resource allocation schemes, with a maximum improvement of 53%.
(This article belongs to the Special Issue Artificial Intelligence and Database Security)
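
The joint decision the paper optimizes, picking an access point and an offloading target to minimize delay, can be illustrated with a toy tabular Q-learning loop. The sketch below is an assumption-laden stand-in: the paper's DRCA algorithm uses deep networks over a far richer state, whereas this toy uses a small discrete state space and a synthetic delay model so the joint (AP, offload) action structure is easy to see.

```python
# Minimal tabular Q-learning sketch of joint AP selection and task
# offloading: each step, a UAV picks an access point and an offloading
# choice (local vs. edge); the reward is negative delay.
import numpy as np

rng = np.random.default_rng(0)
N_STATES, N_APS = 8, 4
N_ACTIONS = N_APS * 2            # joint (AP index, offload flag) pairs
Q = np.zeros((N_STATES, N_ACTIONS))
alpha, gamma, eps = 0.1, 0.9, 0.1

def step(state, action):
    """Hypothetical environment: synthetic delay depending on the AP
    and on whether the task runs locally or at the edge."""
    ap, offload = divmod(action, 2)
    delay = rng.random() + 0.2 * ap + (0.5 if not offload else 0.1)
    return rng.integers(N_STATES), -delay

state = 0
for _ in range(5000):
    if rng.random() < eps:                     # explore
        action = int(rng.integers(N_ACTIONS))
    else:                                      # exploit
        action = int(np.argmax(Q[state]))
    next_state, reward = step(state, action)
    # Standard Q-learning update toward the bootstrapped target.
    Q[state, action] += alpha * (
        reward + gamma * Q[next_state].max() - Q[state, action])
    state = next_state

best_ap, best_offload = divmod(int(np.argmax(Q[0])), 2)
print(f"state 0 -> AP {best_ap}, offload to edge: {bool(best_offload)}")
```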