Special Issue "Parallel and Distributed Cloud, Edge and Fog Computing: Latest Advances and Prospects"

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Computer Science & Engineering".

Deadline for manuscript submissions: 31 July 2023

Special Issue Editors

Department of Systems and Computer Engineering, Carleton University, 1125 Colonel By Dr, Ottawa, ON K1S 5B6, Canada
Interests: computer networking; IoT; cloud and edge computing; computational offloading; resource allocation; service function chain placement; 6G
School of Electrical & Computer Engineering, National Technical University of Athens, Zografou Campus, 9, Iroon Polytechniou Str., 15780 Zografou, Greece
Interests: cloud computing; edge computing; control theory; resource allocation; IoT; trust management
School of Electrical & Computer Engineering, National Technical University of Athens, Zografou Campus, 9, Iroon Polytechniou Str., 15780 Zografou, Greece
Interests: software-defined networks; cognitive radio networks; IoT; big data; social network analysis; recommender systems
School of Electrical & Computer Engineering, National Technical University of Athens, Zografou Campus, 9, Iroon Polytechniou Str., 15780 Zografou, Greece
Interests: big data; caching; social network analysis; recommender systems; information diffusion

Special Issue Information

Dear Colleagues,

Over the course of the 21st century, cloud computing has established itself as a breakthrough paradigm that provides utility computing at large scale and has been adopted across numerous application domains. At the same time, the dawn of the 5G era in networking has paved the way for the next generation of cloud technologies, namely edge and fog computing, which move computational resources closer to the user.

However, as often happens with new technologies, several challenges remain to be resolved. An unbalanced workload across the nodes of a cloud infrastructure can hamper its performance. The centralized management and processing of information can also have a negative impact, especially when big data applications are deployed. These are some of the problems that parallel and distributed techniques can solve when applied in the context of cloud, edge and fog computing, by enabling the aggregation and sharing of an increasing variety of distributed computational resources at large scale. Still, given further challenges such as security, increased infrastructure complexity, resilient low-latency communication, and efficient orchestration and synchronization, there is ample room for improvement.

To this end, this Special Issue is soliciting conceptual, theoretical, and experimental contributions to a set of currently unresolved challenges in the area of parallel and distributed cloud, edge and fog computing. The topics of interest include, but are not limited to:

  • Distributed resource allocation and scheduling in cloud, edge and fog computing;
  • Optimization algorithms for distributed and parallel computing at network infrastructures;
  • Network routing for distributed and parallel computing;
  • Management and orchestration of distributed computational resources;
  • Middleware and libraries for parallel and distributed computing at the cloud, edge and fog layer;
  • Development of architectures for parallel and distributed computing;
  • Scalability issues in parallel and distributed cloud computing;
  • Applications of parallel and distributed computing in next-generation networking infrastructures;
  • Security issues during network-enabled parallel and distributed computing;
  • Data-resilient, fault-tolerant techniques for intra-infrastructure communication in distributed computing;
  • Advanced algorithms for parallelization and distribution of network applications (AI, control theory, etc.).

Dr. Marios Avgeris
Dr. Dimitrios Dechouniotis
Dr. Konstantinos Tsitseklis
Dr. Margarita Vitoropoulou
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2000 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • cloud computing
  • edge computing
  • fog computing
  • parallel and distributed computing
  • resource management optimization
  • system architecture optimization

Published Papers (3 papers)


Research

Article
Clustering Algorithms for Enhanced Trustworthiness on High-Performance Edge-Computing Devices
Electronics 2023, 12(7), 1689; https://doi.org/10.3390/electronics12071689 - 03 Apr 2023
Abstract
Trustworthiness is a critical concern in edge-computing environments, as edge devices often operate in challenging conditions and are prone to failures or external attacks. Despite significant progress, many solutions remain unexplored. An effective approach to this problem is the use of clustering algorithms, which are powerful machine-learning tools that can discover correlations within vast amounts of data. In the context of edge computing, clustering algorithms have become increasingly relevant, as they can be employed to improve trustworthiness by classifying edge devices based on their behaviors or by detecting attack patterns from insecure domains. In this context, we develop a new hybrid clustering algorithm for computing devices that is suitable for edge-computing infrastructures and can categorize nodes based on their trustworthiness. This algorithm is thoroughly assessed and compared on two computing systems equipped with high-end GPU devices with respect to performance and energy consumption. The evaluation results highlight the feasibility of designing intelligent sensor networks that make decisions at the data-collection points, thereby enhancing trustworthiness and preventing attacks from unauthorized sources.
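The abstract does not disclose the hybrid algorithm itself. As a loose illustration of the underlying idea only, a plain k-means over hypothetical behavioral features (the device names, the latency and failure-rate features, and the feature values below are all invented for this sketch) can group devices into more and less trustworthy clusters:

```python
import random

def kmeans(points, k=2, iters=50, seed=0):
    """Plain k-means over small feature vectors (pure Python)."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    assign = [0] * len(points)
    for _ in range(iters):
        # Assignment step: nearest centroid by squared Euclidean distance.
        assign = [
            min(range(k),
                key=lambda c: sum((p[d] - centroids[c][d]) ** 2
                                  for d in range(len(p))))
            for p in points
        ]
        # Update step: move each centroid to the mean of its members.
        for c in range(k):
            members = [p for p, a in zip(points, assign) if a == c]
            if members:
                centroids[c] = tuple(sum(x) / len(members)
                                     for x in zip(*members))
    return assign, centroids

# Hypothetical behavioral features per edge device:
# (mean response latency in ms, observed failure rate).
devices = {
    "edge-01": (12.0, 0.01), "edge-02": (15.0, 0.02),    # well-behaved
    "edge-03": (220.0, 0.35), "edge-04": (180.0, 0.40),  # suspicious
}
labels, _ = kmeans(list(devices.values()), k=2)
```

Devices whose behavior diverges sharply from the majority end up in their own cluster, which a trust-management layer could then treat with suspicion; the paper's hybrid algorithm and its GPU evaluation go well beyond this sketch.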

Article
Dynamic Load Balancing in Stream Processing Pipelines Containing Stream-Static Joins
Electronics 2023, 12(7), 1613; https://doi.org/10.3390/electronics12071613 - 29 Mar 2023
Abstract
Data stream processing systems are used to continuously run mission-critical applications for real-time monitoring and alerting. These systems require high throughput and low latency to process incoming data streams in real time. However, changes in the distribution of incoming data streams over time can cause partition skew, defined as an unequal distribution of data partitions among workers, resulting in sub-optimal processing due to an unbalanced load. This paper presents the first solution designed specifically to address partition skew in the context of joining streaming and static data. Our solution uses state-of-the-art principles to monitor processing load, detect load imbalance, and dynamically redistribute partitions to achieve optimal load balance. To accomplish this, it leverages the collocation of streaming and static data while considering the processing load of the join and the subsequent stream processing operations. Finally, we present the results of an experimental evaluation in which we compared the throughput and latency of four stream processing pipelines containing such a join. The results show that our solution achieved significantly higher throughput and lower latency than the competing approaches.
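The monitor-detect-redistribute loop the abstract describes can be sketched roughly as follows. This is not the paper's algorithm: the worker and partition names, the load figures, and the skew threshold are all invented for illustration.

```python
def rebalance(worker_partitions, partition_load, threshold=1.5):
    """One monitor-detect-redistribute step for partition skew.

    worker_partitions: {worker: [partition, ...]}
    partition_load:    {partition: observed records/sec}
    Moves the hottest partition off the most loaded worker when that
    worker's load exceeds `threshold` times the mean worker load.
    """
    # Monitor: aggregate observed partition loads per worker.
    loads = {w: sum(partition_load[p] for p in ps)
             for w, ps in worker_partitions.items()}
    mean = sum(loads.values()) / len(loads)
    hot = max(loads, key=loads.get)
    # Detect: is the skew large enough to justify a migration?
    if loads[hot] <= threshold * mean or not worker_partitions[hot]:
        return None  # balanced enough: no migration
    # Redistribute: move the hottest partition to the coldest worker.
    cold = min(loads, key=loads.get)
    victim = max(worker_partitions[hot], key=lambda p: partition_load[p])
    worker_partitions[hot].remove(victim)
    worker_partitions[cold].append(victim)
    return victim, hot, cold

assignments = {"w1": ["p1", "p2", "p3"], "w2": ["p4"]}
load = {"p1": 900.0, "p2": 100.0, "p3": 80.0, "p4": 120.0}
move = rebalance(assignments, load)  # p1 migrates from w1 to w2
```

A real system would run this loop continuously and, as the abstract notes, also account for static-data collocation and the cost of downstream operators before choosing which partition to move.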

Article
A Generic Preprocessing Architecture for Multi-Modal IoT Sensor Data in Artificial General Intelligence
Electronics 2022, 11(22), 3816; https://doi.org/10.3390/electronics11223816 - 20 Nov 2022
Abstract
A main barrier for autonomous and general learning systems is their inability to understand and adapt to new environments, that is, to apply previously learned abstract solutions to new problems. Supervised learning functions such as classification require data labeling from an external source and cannot learn feature representations autonomously. This research details an unsupervised learning method for multi-modal feature detection and evaluation to be used for preprocessing in general learning systems. The method comprises a clustering algorithm that can be applied to any generic IoT sensor data, and a seeded stimulus-labeling algorithm impacted and evolved by cross-modal input. The method is implemented and tested in two agents consuming audio and image data, each with varying innate stimulus criteria. Their run-time stimuli change over time depending on their experiences, while newly experienced features become meaningful without preprogrammed labeling of distinct attributes. The architecture provides interfaces for higher-order cognitive processes to be built on top of the unsupervised preprocessor. This method is unsupervised and modular, in contrast to existing highly constrained and pretrained learning systems, making it extensible and well suited for use in artificial general intelligence.
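The paper's clustering and stimulus-labeling algorithms are not given in the abstract. As a minimal sketch of one way generic, unlabeled sensor vectors can be clustered in a single pass, the following uses simple "leader" clustering; the radius value and the example feature vectors are assumptions made up for this illustration:

```python
def leader_cluster(stream, radius):
    """Single-pass ('leader') clustering for generic sensor feature vectors.

    Each incoming vector joins the first cluster whose seed lies within
    `radius` (Euclidean distance); otherwise it seeds a new cluster.
    No labels or a preset cluster count are needed.
    """
    centres, labels = [], []
    for v in stream:
        for i, c in enumerate(centres):
            if sum((a - b) ** 2 for a, b in zip(v, c)) ** 0.5 <= radius:
                labels.append(i)
                break
        else:
            # No existing cluster is close enough: this vector seeds a new one.
            centres.append(v)
            labels.append(len(centres) - 1)
    return centres, labels

# Hypothetical normalized feature vectors arriving from a sensor stream.
readings = [(0.1, 0.2), (0.12, 0.19), (0.9, 0.8), (0.88, 0.82)]
centres, labels = leader_cluster(readings, radius=0.2)  # two clusters emerge
```

Clusters discovered this way carry no human-assigned meaning; in the architecture the abstract describes, significance would instead accrue from cross-modal stimulus signals over the agent's experience.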
