Parallel and Distributed Computing: Theory and Applications

A special issue of Mathematics (ISSN 2227-7390). This special issue belongs to the section "Computational and Applied Mathematics".

Deadline for manuscript submissions: 31 August 2024 | Viewed by 1976

Special Issue Editors


Guest Editor
School of Software Engineering, Sun Yat-sen University, No. 132, Outer Ring East Road, Guangzhou 510275, China
Interests: distributed computing; federated computing; artificial intelligence

Guest Editor
School of Computer Science and Engineering, Faculty of Innovation Engineering, Macau University of Science and Technology, Avenida Wai Long, Taipa, Macau, China
Interests: adversarial machine learning; multimedia security; subspace learning

Guest Editor
College of Systems Engineering, National University of Defense Technology, No. 109 Deya Road, Kaifu District, Changsha 410073, China
Interests: high performance computing; distributed systems; evolutionary algorithms; artificial intelligence

Special Issue Information

Dear Colleagues,

With the rapid development of emerging technologies such as big data, artificial intelligence, and the metaverse, intelligent systems and applications have become commonplace in daily life. Traditional centralized computing can no longer support the computing demands of such applications. The simultaneous growth of complex intelligent algorithms, massive multi-modal data, and large-scale applications has placed unprecedented pressure on computing architectures. The development of parallel and distributed computing is essential for overcoming the limits of centralized computing power, and its social and economic impact is considerable.

Parallel and distributed computing arises in many areas of computer science, including algorithms, computer architecture, networking, operating systems, and software engineering. With the emergence of new infrastructures and computing paradigms (e.g., cloud computing, edge computing, smart cities, and the Internet of Vehicles), existing parallel and distributed computing techniques face new challenges, and there remains considerable room for improvement. The development of powerful and efficient parallel computing capabilities that ensure computing efficiency and security across intelligent applications is therefore increasingly urgent in both academia and industry. Applying parallel and distributed computing technology to practical problems in various fields is another key issue.

The purpose of this Special Issue is to present current developments, share theoretical and technical knowledge, and discuss emerging challenges and trends in the field of parallel and distributed computing. Original research and review articles are welcome.

Topics of interest include, but are not limited to:

  • The theory of parallel and distributed computing;
  • Architectures, algorithms, and models of parallel and distributed computing;
  • High-performance computing architectures for big data, artificial intelligence, and the metaverse;
  • Parallel and distributed computing for machine learning/data mining;
  • Novel algorithms and applications in all fields of parallel and distributed computing;
  • The management and analytics of applications based on parallel or distributed computing.

Dr. Jianguo Chen
Dr. Jinyu Tian
Dr. Ji Wang
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, you can proceed to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Mathematics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • parallel and distributed computing
  • high-performance computing
  • artificial intelligence
  • machine learning
  • data mining
  • evolutionary algorithm
  • novel algorithms and applications

Published Papers (2 papers)

Research

28 pages, 1764 KiB  
Article
Intelligent Vehicle Computation Offloading in Vehicular Ad Hoc Networks: A Multi-Agent LSTM Approach with Deep Reinforcement Learning
by Dingmi Sun, Yimin Chen and Hao Li
Mathematics 2024, 12(3), 424; https://doi.org/10.3390/math12030424 - 28 Jan 2024
Viewed by 592
Abstract
As distributed computing evolves, edge computing has become increasingly important. It decentralizes resources such as computation, storage, and bandwidth, making them more accessible to users, particularly in dynamic Telematics environments. However, these environments are marked by high levels of dynamic uncertainty due to frequent changes in vehicle location, network status, and edge server workload. This complexity poses substantial challenges in rapidly and accurately handling computation offloading, resource allocation, and delivering low-latency services. To address these challenges, this paper introduces a "Cloud–Edge–End" collaborative model for Telematics edge computing. Building upon this model, we develop a novel distributed service offloading method, LSTM Multi-Agent Deep Reinforcement Learning (L-MADRL), which integrates deep learning with deep reinforcement learning. This method includes a predictive model capable of forecasting the future demands on intelligent vehicles and edge servers. Furthermore, we conceptualize the computational offloading problem as a Markov decision process and employ the Multi-Agent Deep Deterministic Policy Gradient (MADDPG) approach for autonomous, distributed offloading decision-making. Our empirical results demonstrate that the L-MADRL algorithm reduces service latency and energy consumption by 5–20% compared to existing algorithms, while maintaining a balanced load across edge servers in diverse Telematics edge computing scenarios.
(This article belongs to the Special Issue Parallel and Distributed Computing: Theory and Applications)
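
To make the paper's MDP framing concrete, here is a minimal sketch of a vehicular offloading environment. Everything in it, the state variables, the latency and energy model, the reward weights, and the threshold policy standing in for the MADDPG agents, is an illustrative assumption rather than the authors' actual model.

```python
import numpy as np

# A minimal, hypothetical sketch of the offloading decision as a Markov
# decision process (MDP), in the spirit of the abstract above. All
# quantities and dynamics below are assumptions for exposition only.

class OffloadingEnv:
    """Toy vehicular edge-computing environment.

    State:  task size (Mbit), uplink rate (Mbit/s), one load level in
            [0, 1] per edge server.
    Action: 0 = execute locally, k >= 1 = offload to edge server k.
    Reward: negative weighted sum of latency and energy.
    """

    def __init__(self, n_servers=3, seed=0):
        self.n_servers = n_servers
        self.rng = np.random.default_rng(seed)

    def reset(self):
        self.state = np.concatenate([
            self.rng.uniform([1.0, 5.0], [10.0, 50.0]),  # size, rate
            self.rng.uniform(0.0, 1.0, self.n_servers),  # server loads
        ])
        return self.state

    def step(self, action):
        size, rate, loads = self.state[0], self.state[1], self.state[2:]
        if action == 0:                       # local execution
            latency, energy = size / 2.0, 0.5 * size
        else:                                 # transmit + queueing delay
            latency = size / rate + size * (0.5 + loads[action - 1])
            energy = 0.1 * size
        reward = -(latency + 0.2 * energy)    # assumed weighting
        return self.reset(), reward           # i.i.d. toy dynamics

# A threshold policy as a stand-in for learned agents: offload to the
# least-loaded server only when the uplink is fast enough.
env = OffloadingEnv()
s, total = env.reset(), 0.0
for _ in range(1000):
    a = 0 if s[1] < 10.0 else 1 + int(np.argmin(s[2:]))
    s, r = env.step(a)
    total += r
print(f"average reward: {total / 1000:.3f}")
```

In the paper's setting, each vehicle would be one MADDPG agent replacing this threshold rule, with an LSTM predictor supplying forecasts of server load as part of the observation.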

20 pages, 2375 KiB  
Article
CVFL: A Chain-like and Verifiable Federated Learning Scheme with Computational Efficiency Based on Lagrange Interpolation Functions
by Mengnan Wang, Chunjie Cao, Xiangyu Wang, Qi Zhang, Zhaoxing Jing, Haochen Li and Jingzhang Sun
Mathematics 2023, 11(21), 4547; https://doi.org/10.3390/math11214547 - 04 Nov 2023
Viewed by 887
Abstract
Data privacy and security concerns have attracted significant attention, leading to the frequent occurrence of data silos in deep learning. To address this issue, federated learning (FL) has emerged. However, simple federated learning frameworks still face two security risks during training. First, sharing local gradients instead of private datasets does not completely eliminate the possibility of data leakage. Second, a malicious server could produce inaccurate aggregation parameters by forging or simplifying the aggregation process, ultimately leading to model training failures. To address these issues and achieve high-performance training models, we design a verifiable federated learning scheme called CVFL, in which users are arranged in a serial (chain-like) manner to resist inference attacks, further protecting the privacy of user datasets through serial encryption. We ensure the secure aggregation of models through a verification protocol based on Lagrange interpolation functions. The serial transmission of local gradients effectively reduces the communication burden on cloud servers, and our verification protocol avoids the computational overhead caused by a large number of encryption and decryption operations without sacrificing model accuracy. Experimental results on the MNIST dataset demonstrate that, after 10 epochs of training with 100 users, our scheme achieves a model accuracy of 90.63% for an MLP architecture under an IID data distribution and 87.47% under a non-IID distribution; for a CNN architecture, it achieves 96.72% under IID and 93.53% under non-IID data distributions. These evaluations corroborate the practical performance of the presented scheme, with high accuracy and efficiency.
(This article belongs to the Special Issue Parallel and Distributed Computing: Theory and Applications)
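
To illustrate how Lagrange interpolation can underpin verifiable aggregation, here is a small self-contained sketch. It uses Shamir-style additive sharing with a spare share as a consistency check; this is a classical construction chosen for exposition, not CVFL's actual protocol, and the share points and degrees are arbitrary assumptions.

```python
from fractions import Fraction
import random

# Sketch: each user splits its local "gradient" into evaluations of a
# random polynomial whose constant term is the secret. The server sums
# shares pointwise; Lagrange interpolation at x = 0 recovers the total,
# and a spare share checks that the aggregator did not tamper.

def share(secret, degree, xs, rng):
    """Split `secret` into evaluations of a random degree-`degree` poly."""
    coeffs = [Fraction(secret)] + [Fraction(rng.randint(-100, 100))
                                   for _ in range(degree)]
    return [sum(c * Fraction(x) ** k for k, c in enumerate(coeffs))
            for x in xs]

def lagrange_at(points, x0):
    """Evaluate the interpolating polynomial of `points` at x0."""
    total = Fraction(0)
    for xj, yj in points:
        term = yj
        for xm, _ in points:
            if xm != xj:
                term *= Fraction(x0 - xm, xj - xm)
        total += term
    return total

rng = random.Random(42)
gradients = [3, -1, 7]          # toy scalar "gradients" from three users
degree, xs = 1, [1, 2, 3]       # degree-1 sharing at three points

# Pointwise sum of all users' shares (done by the aggregation server).
shares = [share(g, degree, xs, rng) for g in gradients]
agg = [sum(col) for col in zip(*shares)]
points = list(zip(xs, agg))

# Any degree+1 shares recover the aggregate at x = 0 ...
recovered = lagrange_at(points[:degree + 1], 0)
# ... and the spare share must lie on the same polynomial.
assert lagrange_at(points[:degree + 1], xs[-1]) == agg[-1], "tampering detected"
assert recovered == sum(gradients)
print("verified aggregate:", recovered)
```

Because the pointwise sum of the users' degree-t polynomials is itself a degree-t polynomial, any t + 1 aggregated shares determine the total at x = 0, and a share that fails to lie on the interpolated polynomial exposes tampering by the aggregator.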
