Edge Computing Communications

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Computing and Artificial Intelligence".

Deadline for manuscript submissions: closed (20 September 2022) | Viewed by 15312

Special Issue Editors


Guest Editor
School of Computer Science and Engineering, Central South University, Changsha 410083, China
Interests: wireless network; edge computing; artificial intelligence; big data processing; software engineering

Guest Editor
1. School of Artificial Intelligence, Guilin University of Electronic Technology, Guilin, China.
2. Department of Electrical Engineering, Universidad de Chile, Santiago, Chile.
Interests: edge computing; green computing; artificial intelligence

Guest Editor
Department of Computer Science, School of Engineering and Computing Sciences, New York Institute of Technology, New York, NY 10023, USA
Interests: security and trust; mobile and wireless systems; IoT; cyber physical systems

Special Issue Information

Dear Colleagues,

Advances in science and technology have given rise to new mobile applications such as face recognition, augmented reality, and autonomous driving. These applications are delay-sensitive and demand substantial computation resources. However, mobile terminals have limited computation and storage resources and, in particular, short battery life, so running such applications locally incurs high latency and energy consumption. Edge computing is an important technology for these computation-intensive applications. Its main idea is to deploy servers at the edge of the network, close to the mobile terminals, extending the computation services originally provided by the remote cloud to a position nearer the terminals. Consequently, mobile terminals can transmit data to edge servers with lower network delay and offload computation tasks to them for efficient computing services, thereby enhancing the QoE (quality of experience) and QoS (quality of service) of computation-intensive applications.
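
The latency/energy trade-off behind the offloading decision can be illustrated with a toy cost model. Everything below (CPU frequency, uplink rate, energy per cycle, weighting) is an illustrative assumption, not drawn from any particular system:

```python
def local_cost(cycles, cpu_freq_hz, energy_per_cycle_j):
    """Latency (s) and energy (J) when the task runs on the terminal."""
    latency = cycles / cpu_freq_hz
    energy = cycles * energy_per_cycle_j
    return latency, energy

def edge_cost(cycles, data_bits, uplink_bps, tx_power_w, edge_freq_hz):
    """Latency and terminal-side energy when the task is offloaded."""
    tx_time = data_bits / uplink_bps      # time to upload the input data
    compute_time = cycles / edge_freq_hz  # execution on the edge server
    latency = tx_time + compute_time
    energy = tx_time * tx_power_w         # terminal only pays for transmission
    return latency, energy

def should_offload(cycles, data_bits, *, cpu=1e9, epc=1e-9,
                   uplink=20e6, tx_power=0.5, edge=10e9, w_latency=0.5):
    """Offload if the weighted latency/energy cost is lower at the edge."""
    l_lat, l_en = local_cost(cycles, cpu, epc)
    e_lat, e_en = edge_cost(cycles, data_bits, uplink, tx_power, edge)
    local = w_latency * l_lat + (1 - w_latency) * l_en
    remote = w_latency * e_lat + (1 - w_latency) * e_en
    return remote < local

# A compute-heavy task with a small input favours the edge server:
print(should_offload(cycles=5e9, data_bits=1e6))  # → True
```

The same model shows the reverse case: a data-heavy, compute-light task (large `data_bits`, few `cycles`) is cheaper to run locally because the upload dominates.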

Currently, edge computing is receiving increasing attention from both academia and industry. A large body of research spans task offloading, resource management, caching strategies, and security and privacy protection, and work over the past two to three years has sought to improve the performance of edge computing in areas such as mobile computation, industrial data processing, and vehicular applications. Additionally, edge computing provides important computing support for artificial intelligence (AI) applications at the network edge, making AI feasible in areas where it was previously difficult to apply. Edge intelligence has therefore become an important research direction within edge computing.

This Special Issue invites contributions broadly across advanced models, algorithms, and technologies for emerging edge computing. Specific topics include, but are not limited to:

  • Frameworks and models for edge computing applications;
  • Algorithms for resource management and computation offloading in edge computing;
  • Game-based analysis for the participants in edge computing;
  • Machine learning, deep learning and federated learning for edge computing;
  • Offloading and scheduling strategy for edge intelligence;
  • Energy-efficient and green computing for edge computing;
  • Security and privacy for edge computing.

Prof. Dr. Feng Zeng
Prof. Dr. Jinsong Wu
Dr. Wenjia Li
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website and proceeding to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • edge computing
  • computation offloading
  • resource management
  • edge intelligence

Published Papers (5 papers)


Research

Jump to: Review

22 pages, 565 KiB  
Article
Designing a Deep Q-Learning Model with Edge-Level Training for Multi-Level Task Offloading in Edge Computing Networks
by Ahmad Zendebudi and Salimur Choudhury
Appl. Sci. 2022, 12(20), 10664; https://doi.org/10.3390/app122010664 - 21 Oct 2022
Viewed by 1252
Abstract
Even though small portable devices are becoming increasingly powerful in terms of processing power and power efficiency, some workloads still require more computational capacity than these devices offer. Examples include real-time sensory input processing, video game streaming, and workloads relating to IoT devices. Some of these workloads, such as virtual reality, also require very low latency, so they cannot be offloaded to a cloud service. To tackle this issue, edge devices, which are closer to the user, are used instead of cloud servers. In this study, we explore the problem of assigning tasks from mobile devices to edge devices in order to minimize task response latency and the power consumption of mobile devices, which have limited power capacity. A deep Q-learning model handles the task offloading decision process in mobile and edge devices. This study makes two main contributions. First, since training a deep Q-learning model on a mobile device is a computational burden, a solution is proposed to move the training to the connected edge devices. Second, a routing protocol is proposed to deliver task results to mobile devices when a mobile device connects to a new edge device and is therefore no longer connected to the edge device to which previous tasks were offloaded.
(This article belongs to the Special Issue Edge Computing Communications)
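
The offloading decision process this paper frames as reinforcement learning can be sketched with a toy tabular Q-learner; the paper itself uses a deep Q-network trained on edge devices, so the state space, rewards, and dynamics below are made-up simplifications for illustration only:

```python
import random

random.seed(0)
STATES = range(4)   # discretised queue length at the edge server
ACTIONS = (0, 1)    # 0 = run locally, 1 = offload
Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1

def reward(state, action):
    # Offloading pays off when the edge queue is short; running
    # locally has a fixed moderate cost (toy values).
    return -(1 + state) if action == 1 else -3

def step(state, action):
    # The queue grows when we offload and drains otherwise (toy dynamics).
    return min(state + 1, 3) if action == 1 else max(state - 1, 0)

state = 0
for _ in range(5000):
    if random.random() < EPS:               # epsilon-greedy exploration
        a = random.choice(ACTIONS)
    else:
        a = max(ACTIONS, key=lambda x: Q[(state, x)])
    nxt = step(state, a)
    best_next = max(Q[(nxt, x)] for x in ACTIONS)
    Q[(state, a)] += ALPHA * (reward(state, a) + GAMMA * best_next - Q[(state, a)])
    state = nxt

# With a short edge queue, offloading should end up with the higher value:
print(Q[(0, 1)] > Q[(0, 0)])
```

A deep Q-network replaces the lookup table `Q` with a neural network so the agent can generalise over continuous state features such as channel quality and battery level.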

5191 KiB  
Article
A Network Traffic Prediction Method for AIOps Based on TDA and Attention GRU
by Kun Wang, Yuan Tan, Lizhong Zhang, Zhigang Chen and Jinghong Lei
Appl. Sci. 2022, 12(20), 10502; https://doi.org/10.3390/app122010502 - 18 Oct 2022
Cited by 1 | Viewed by 1346
Abstract
Fault early warning is a challenge in the field of operation and maintenance. Given rising accuracy and real-time requirements, as well as the explosive growth of operation and maintenance data, traditional manual experience and static thresholds can no longer meet production requirements. This research examines the difficulties of fault early warning and provides targeted solutions in several respects: difficulty in feature extraction, insufficient prediction accuracy, and difficulty in determining alarm thresholds. The TCAG model proposed in this paper combines the spatiotemporal and topological characteristics of time series data for time series prediction and derives a recommended dynamic threshold interval for fault early warning from the predicted value. A comparison experiment on data from a core router of Ningxia Electric Power Co., Ltd. shows that the combination of topological data analysis (TDA) and a convolutional neural network (CNN) gives the TCAG model superior feature extraction capability, and the attention mechanism improves its prediction accuracy relative to the benchmark models.
(This article belongs to the Special Issue Edge Computing Communications)
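
The idea of a dynamic threshold interval derived from a prediction can be sketched simply; a common construction (an assumption here, not the paper's exact method) centres an alarm band on the predicted value, sized by the standard deviation of recent prediction errors:

```python
from statistics import stdev

def dynamic_threshold(prediction, recent_errors, k=3.0):
    """Return (lower, upper) alarm bounds around the predicted value."""
    sigma = stdev(recent_errors)
    return prediction - k * sigma, prediction + k * sigma

# Made-up residuals from a hypothetical traffic predictor:
errors = [0.2, -0.1, 0.4, -0.3, 0.1, 0.0, -0.2, 0.3]
low, high = dynamic_threshold(prediction=100.0, recent_errors=errors, k=3.0)
# An observed value of 150 falls outside the band and would raise an alarm:
print(not (low <= 150.0 <= high))  # → True
```

Because the band tracks the prediction and recent error spread, it adapts to traffic level and volatility, unlike a static threshold.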

16 pages, 2518 KiB  
Article
Edge Intelligence Service Orchestration with Process Mining
by Yong Zhu, Zhihui Hu and Zhenyu He
Appl. Sci. 2022, 12(20), 10436; https://doi.org/10.3390/app122010436 - 16 Oct 2022
Cited by 1 | Viewed by 1585
Abstract
In the post-cloud computing era, edge computing, as a distributed computing paradigm integrating the core capabilities of computing, storage, networking, and applications, provides edge intelligence services (EIS) such as real-time business, data optimization, intelligent applications, security, and privacy protection. EIS has become the core value driver for promoting the IoE (Internet of Everything), mining data value deeply, and creating a new ecology of application scenarios. With the emergence of new business processes, EIS orchestration has become a hot topic in academic research. A design methodology based on a complete "describe-synthesize-verify-evaluate" process was established to explore executable design specifications for EIS by means of model validation and running instances. As proof of concept, a CPN (colored Petri net) prototype was simulated, and its operational processes were discovered through process mining of event data available in EIS for behavior verification. The instances running on WISE-PaaS demonstrate the feasibility of the research methodology, which aims to optimize EIS through service orchestration.
(This article belongs to the Special Issue Edge Computing Communications)

23 pages, 7565 KiB  
Article
A New One-Dimensional Compound Chaotic System and Its Application in High-Speed Image Encryption
by Shenli Zhu, Xiaoheng Deng, Wendong Zhang and Congxu Zhu
Appl. Sci. 2021, 11(23), 11206; https://doi.org/10.3390/app112311206 - 25 Nov 2021
Cited by 18 | Viewed by 1811
Abstract
In edge computing and network communication environments, important image data need to be transmitted and stored securely. Under conditions of limited computing resources, effective and fast image encryption algorithms are particularly necessary. One-dimensional (1D) chaotic maps provide an effective solution for real-time image encryption, but most 1D chaotic maps have only one parameter and a narrow chaotic interval, which is a security weakness. In this paper, a new compound 1D chaotic map composed of a logistic map and a tent map is proposed. The new system has two system parameters and an arbitrarily large chaotic parameter interval, and its chaotic signal is evenly distributed across the whole value space, improving security in information encryption applications. Furthermore, based on the new chaotic system, a fast image encryption algorithm is proposed. The algorithm takes the image row (or column) as the cyclic encryption unit, greatly reducing the time overhead compared with algorithms that take the pixel as the encryption unit. In addition, a mechanism of intermediate keys associated with the image content is introduced to improve the algorithm's resistance to chosen-plaintext and differential attacks. Experiments show that the proposed image encryption algorithm has clear speed advantages and good cryptographic performance, demonstrating excellent application potential in secure network communication.
(This article belongs to the Special Issue Edge Computing Communications)
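
A compound logistic-tent map can be sketched as follows; the exact composition and parameterisation used by the authors is not reproduced here, so treat this particular combination (sum of the two maps, folded back into the unit interval) as an assumption for illustration:

```python
def logistic(x, r=4.0):
    """Classic logistic map on (0, 1)."""
    return r * x * (1 - x)

def tent(x, mu=2.0):
    """Classic tent map on (0, 1)."""
    return mu * x if x < 0.5 else mu * (1 - x)

def compound_map(x, r=4.0, mu=2.0):
    """Compose the two maps and fold the result back into [0, 1)."""
    return (logistic(x, r) + tent(x, mu)) % 1.0

# Iterate the orbit and quantise it into keystream bytes, as a fast
# row-oriented image cipher might:
x = 0.345678
stream = []
for _ in range(8):
    x = compound_map(x)
    stream.append(int(x * 256) & 0xFF)
print(len(stream))  # → 8 keystream bytes derived from the chaotic orbit
```

Because the orbit depends sensitively on the initial value `x` (the key), two nearby keys diverge to unrelated keystreams after a few iterations, which is the property such ciphers rely on.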

Review

Jump to: Research

36 pages, 617 KiB  
Review
Federated Learning for Edge Computing: A Survey
by Alexander Brecko, Erik Kajati, Jiri Koziorek and Iveta Zolotova
Appl. Sci. 2022, 12(18), 9124; https://doi.org/10.3390/app12189124 - 11 Sep 2022
Cited by 22 | Viewed by 7855
Abstract
New technologies bring opportunities to deploy AI and machine learning at the edge of the network, allowing edge devices to train simple models that can then be deployed in practice. Federated learning (FL) is a distributed machine learning technique that creates a global model by learning from multiple decentralized edge clients. Although FL methods offer several advantages, including scalability and data privacy, they also introduce risks and drawbacks in terms of computational complexity in the case of heterogeneous devices. Internet of Things (IoT) devices may have limited computing resources, poorer connection quality, or different operating systems. This paper provides an overview of the methods used in FL, with a focus on edge devices with limited computational resources. It also presents currently popular FL frameworks that provide communication between clients and servers. Various topics are described, including contributions and trends in the literature: basic models and system architecture designs, possibilities for application in practice, privacy and security, and resource management. Challenges related to the computational requirements of edge devices, such as hardware heterogeneity, communication overload, and limited device resources, are discussed.
(This article belongs to the Special Issue Edge Computing Communications)
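
The aggregation rule at the heart of most FL methods the survey covers is federated averaging (FedAvg). A minimal round can be sketched with clients fitting a one-dimensional linear model locally and a server averaging the parameters weighted by local dataset size; the data and model are toy assumptions for the sketch:

```python
def local_fit(xs, ys):
    """Least-squares slope through the origin: w = sum(x*y) / sum(x*x)."""
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

def fedavg(client_data):
    """One communication round: average client models by sample count."""
    total = sum(len(xs) for xs, _ in client_data)
    return sum(len(xs) / total * local_fit(xs, ys) for xs, ys in client_data)

# Both clients hold data drawn from y = 2x; the raw data never leaves the
# clients, yet the aggregated global model recovers w = 2:
clients = [
    ([1.0, 2.0, 3.0], [2.0, 4.0, 6.0]),
    ([4.0, 5.0], [8.0, 10.0]),
]
print(round(fedavg(clients), 6))  # → 2.0
```

In practice each "model" is a full parameter vector and clients run several local gradient steps before each aggregation, but the weighted-average server step is the same.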
