Advanced Theories, Applications and Techniques in Cloud and Distributed Computing

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Computer Science & Engineering".

Deadline for manuscript submissions: 15 September 2024

Special Issue Editor


Prof. Dr. Dazhao Cheng
Guest Editor
School of Computer Science, Wuhan University, Wuhan 430072, China
Interests: parallel and cloud computing; distributed computing; big data platforms; artificial intelligence architecture

Special Issue Information

Dear Colleagues,

We currently live in a world where data and computing play a vital role. With the widespread use of the Internet, computing has evolved from single machines to cluster, grid, cloud, and distributed computing. Cloud and distributed computing are topics of great interest in both academia and industry: they offer powerful, scalable computing capabilities that accelerate big data and artificial intelligence applications.

Despite their technical advantages, cloud and distributed computing also pose challenges in resource management, job scheduling, and application performance. Imbalanced resource utilization across the nodes of a cloud computing system can hinder application performance, and a centralized job distribution strategy may likewise harm big data applications. While cloud and distributed computing platforms offer aggregation and cooperation features that address some of these issues, many challenges must still be resolved before the full potential of these platforms can be realized.

This Special Issue aims to collect advanced theories, applications, and techniques that address the currently unresolved challenges in cloud and distributed computing. The topics of interest include, but are not limited to:

  • Resource allocation and management;
  • Job scheduling;
  • Heterogeneous task management;
  • Task partitioning and assignment;
  • Cooperation mechanisms;
  • Storage optimization;
  • Scalability issues;
  • Reliability and dependability;
  • Energy management;
  • Development and optimization of architectures;
  • Data-resilient, fault-tolerant techniques;
  • Parallel computing;
  • Advanced algorithms for applications (e.g., AI, big data);
  • Testbeds and prototypes.

We welcome contributions that provide insights and solutions to these challenges.

Prof. Dr. Dazhao Cheng
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, you can access the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and a short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • cloud and distributed computing
  • parallel computing
  • resource management
  • job scheduling
  • performance optimization
  • energy management
  • scalability, reliability and dependability
  • testbeds and prototypes

Published Papers (3 papers)


Research

28 pages, 845 KiB  
Article
SLA-Adaptive Threshold Adjustment for a Kubernetes Horizontal Pod Autoscaler
by Olesia Pozdniakova, Dalius Mažeika and Aurimas Cholomskis
Electronics 2024, 13(7), 1242; https://doi.org/10.3390/electronics13071242 - 27 Mar 2024
Abstract
Kubernetes is an open-source container orchestration system that provides a built-in module for dynamic resource provisioning named the Horizontal Pod Autoscaler (HPA). The HPA determines the number of resources to be provisioned by calculating the ratio between the current and target utilisation metrics. The target utilisation metric, or threshold, directly impacts how many resources will be provisioned and how quickly. However, determining a threshold that satisfies performance-based Service Level Objectives (SLOs) is a long, error-prone process because the HPA is based on the static-threshold principle and requires manual configuration. This can result in underprovisioning or overprovisioning, leading to the inadequate allocation of computing resources or to SLO violations. Numerous autoscaling solutions have been introduced as alternatives to the HPA to simplify the process, yet the HPA remains the most widely used solution owing to its ease of setup and operation and its seamless integration with other Kubernetes functionalities. The present study proposes a method that uses exploratory data analysis techniques together with moving-average smoothing to identify the target utilisation threshold for the HPA. The objective is to ensure that the system operates without exceeding the maximum number of events that violate the response time defined in the SLO. A prototype was created to adjust the threshold values dynamically using the proposed method, enabling the evaluation and comparison of the proposed method against an HPA configured with the highest threshold that still meets the performance-based SLOs. The experimental results show that the suggested method adjusts the thresholds to the desired service level with a 1–2% accuracy rate and only 4–10% resource overprovisioning, depending on the type of workload.
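The HPA's ratio-based scaling rule described in the abstract can be sketched as follows. The replica formula (desiredReplicas = ceil(currentReplicas × currentMetric / targetMetric), with a tolerance band) follows the documented Kubernetes behaviour; the `smoothed_threshold` helper is a hypothetical illustration of the paper's moving-average idea, not the authors' actual algorithm.

```python
import math

def desired_replicas(current_replicas: int,
                     current_metric: float,
                     target_metric: float,
                     tolerance: float = 0.1) -> int:
    """Sketch of the HPA rule: scale by the ratio between the current
    and target utilisation metrics, within a tolerance band."""
    ratio = current_metric / target_metric
    if abs(ratio - 1.0) <= tolerance:
        return current_replicas  # within tolerance: no scaling action
    return max(1, math.ceil(current_replicas * ratio))

def smoothed_threshold(samples: list[float], window: int = 5) -> float:
    """Hypothetical moving-average smoothing of observed utilisation,
    loosely following the paper's dynamic-threshold idea."""
    tail = samples[-window:]
    return sum(tail) / len(tail)
```

For example, four pods at 90% utilisation against a 50% target scale to ceil(4 × 1.8) = 8 pods, while 52% against 50% falls inside the 10% tolerance band and triggers no change.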

16 pages, 502 KiB  
Article
Asynchronous Consensus Quorum Read: Pioneering Read Optimization for Asynchronous Consensus Protocols
by He Dong and Shengyun Liu
Electronics 2024, 13(3), 481; https://doi.org/10.3390/electronics13030481 - 23 Jan 2024
Abstract
In the era of cloud computing, the reliability and efficiency of distributed systems, particularly cloud-based databases and applications, are paramount. State Machine Replication (SMR), which underpins these distributed architectures, commonly uses consensus protocols to ensure linearizable operations. These protocols are critical in cloud environments because they maintain data consistency across geographically dispersed data centers. However, the inherent latency of cloud infrastructures poses a challenge to the performance of consensus-based systems, especially for read operations, which do not alter the system state and are executed frequently. This paper addresses the challenge by proposing "Asynchronous Consensus Quorum Read" (ACQR), a novel read optimization method specifically designed for asynchronous consensus protocols in cloud computing scenarios. We incorporated ACQR into Rabia, an advanced asynchronous consensus protocol, to show its effectiveness. The experimental results are encouraging: ACQR improves Rabia's performance, achieving up to a 1.7× increase in throughput and a 40% reduction in optimal latency. This advancement represents a critical step in enhancing the efficiency of read operations in asynchronous consensus protocols within cloud computing environments.
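The abstract does not spell out ACQR's protocol, so the sketch below shows only the general idea behind quorum reads that such optimizations build on: instead of routing a read through full consensus, the client queries a majority of the n replicas and returns the value attached to the highest committed log slot. All names here are illustrative, and ACQR's actual mechanism differs in its details.

```python
from dataclasses import dataclass

@dataclass
class ReplicaReply:
    value: str
    slot: int  # highest committed log slot known to the replying replica

def quorum_read(replies: list["ReplicaReply"], n: int) -> str:
    """Generic majority quorum read: given replies from at least a
    majority of the n replicas, return the value from the reply with
    the highest committed slot. Illustrative only, not ACQR itself."""
    majority = n // 2 + 1
    if len(replies) < majority:
        raise RuntimeError("not enough replies for a read quorum")
    return max(replies, key=lambda r: r.slot).value
```

Because any two majorities intersect, at least one reply reflects the latest committed write, which is why the slot-maximum is safe to return without running the full consensus path.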

20 pages, 1178 KiB  
Article
µFuncCache: A User-Side Lightweight Cache System for Public FaaS Platforms
by Bao Li, Zhe Li, Jun Luo, Yusong Tan and Pingjing Lu
Electronics 2023, 12(12), 2649; https://doi.org/10.3390/electronics12122649 - 13 Jun 2023
Abstract
Building cloud-native applications on public "Function as a Service" (FaaS) platforms has become an attractive way to improve business roll-out speed and elasticity and to reduce cloud usage costs. FaaS-based applications are usually decomposed into multiple cloud functions according to their functionality, with call relationships between the functions. At the same time, each cloud function may depend on other services provided by the cloud provider, such as object storage, database, and file storage services. Every call between cloud functions, or between a cloud function and another service, incurs a delay, and this delay grows with the length of the call chain, degrading application service quality and user experience. We therefore introduce μFuncCache, a user-side lightweight caching mechanism that speeds up data access for public FaaS services. It exploits the container delay-destruction mechanism and the over-booked memory commonly found on public FaaS platforms to reduce function call latency, without needing to perceive or modify the internal architecture of the public cloud. Experiments in different application scenarios show that μFuncCache can effectively improve the performance of FaaS applications while consuming only a small amount of additional resources, achieving a latency reduction of up to 97%.
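The core idea the abstract describes, keeping a cache alive in the function container between invocations while the platform delays destroying it, can be sketched as below. This is a minimal illustration, not μFuncCache's actual API: the function name, TTL, and loader callback are all assumptions, and real FaaS handlers would wrap platform-specific entry points.

```python
import time

# Module-level state survives across invocations for as long as the FaaS
# platform keeps this container warm (the "delay destruction" behaviour
# the paper exploits). Cache layout and TTL are illustrative choices.
_CACHE: dict[str, tuple[float, object]] = {}
_TTL_SECONDS = 60.0

def fetch_with_cache(key: str, load_from_backend) -> object:
    """Return a cached value if it is still fresh; otherwise call the
    slow backend loader (e.g. an object-storage or database read) and
    cache the result for subsequent invocations in the same container."""
    now = time.monotonic()
    hit = _CACHE.get(key)
    if hit is not None and now - hit[0] < _TTL_SECONDS:
        return hit[1]  # warm-container cache hit: no backend round trip
    value = load_from_backend(key)
    _CACHE[key] = (now, value)
    return value
```

The design relies on the observation that public FaaS platforms reuse warm containers, so a second invocation shortly after the first can skip the backend round trip entirely; a cold start simply repopulates the cache.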
