Deep Reinforcement Learning in IoT Networks

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Computing and Artificial Intelligence".

Deadline for manuscript submissions: closed (29 February 2024) | Viewed by 1872

Special Issue Editors

Guest Editor
Faculty of Engineering and Physical Sciences, University of Leeds, Leeds LS2 9JT, UK
Interests: AI for computing and networking; Internet of Things; multimedia networking

Guest Editor
Department of Computer Science, Faculty of Environment, Science and Economy, University of Exeter, Exeter EX4 4PY, UK
Interests: edge–cloud computing; resource optimization; applied machine learning; network security

Guest Editor
School of Big Data and Software Engineering, Chongqing University, Chongqing 400000, China
Interests: network optimization; mobile edge computing and caching; network virtualization; machine learning

Guest Editor
Key Lab of Data Engineering and Knowledge Engineering, Renmin University of China, Beijing 100872, China
Interests: next-generation internet architecture; Internet of Things; big data

Guest Editor
School for Computing and Mathematical Sciences, University of Leicester, Leicester LE1 7RH, UK
Interests: data science; AI; sustainable computing; Internet of Things

Special Issue Information

Dear Colleagues,

The Internet of Things (IoT) connects numerous smart devices around the globe to the internet, and has achieved great success in creating better enterprise solutions, building smarter homes and cities, modernizing agriculture, upgrading supply chain management, transforming healthcare, and more. However, IoT networks involve diverse protocols (e.g., WLAN, 4G/5G/6G, 6LoWPAN, and LPWAN) and heterogeneous devices with intrinsic resource constraints (e.g., limited computing capability, storage, and battery capacity), which significantly increase the complexity of network management. Fortunately, advances in deep reinforcement learning (DRL) show great potential for tackling such high-dimensional and complex problems, as a DRL agent learns to make better decisions by observing the outcomes of its past decisions, without requiring assumptions about the environment. Nevertheless, significant gaps and technical challenges remain in applying DRL to large-scale IoT systems in a way that enables robust, real-time, and secure network management.
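
As a deliberately simplified, single-state illustration of this learn-from-observed-feedback principle (not a full DRL agent), the sketch below shows a tabular agent choosing among hypothetical radio channels using only the outcomes of its past transmissions; the channel success rates and parameters are invented for illustration and are not tied to any specific IoT deployment.

```python
import random

# Simplified illustration: the agent has no model of the (hypothetical) channels
# and improves its choices purely from the observed outcomes of past decisions.

N_CHANNELS = 4                       # hypothetical number of radio channels
TRUE_SUCCESS = [0.2, 0.5, 0.7, 0.9]  # unknown to the agent

q = [0.0] * N_CHANNELS               # estimated value of each channel
alpha, epsilon = 0.1, 0.1            # learning rate, exploration rate

for step in range(10_000):
    # epsilon-greedy: mostly exploit the best-known channel, sometimes explore
    if random.random() < epsilon:
        a = random.randrange(N_CHANNELS)
    else:
        a = max(range(N_CHANNELS), key=lambda i: q[i])
    # the environment only reveals the reward of the chosen action
    r = 1.0 if random.random() < TRUE_SUCCESS[a] else 0.0
    # move the value estimate toward the observed outcome
    q[a] += alpha * (r - q[a])

print("learned channel values:", [round(v, 2) for v in q])
```

Deep reinforcement learning replaces the table above with a neural network so that the same principle scales to the high-dimensional state and action spaces found in IoT network management.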

The aim of this Special Issue is to bring together researchers in the fields of IoT, AI, cloud/edge computing, and networking to address new challenges in DRL for IoT networks by soliciting original, previously unpublished empirical, experimental, and theoretical research works at the intersection of these technologies.

Dr. Xu Zhang
Dr. Jia Hu
Dr. Qilin Fan
Dr. Tong Li
Prof. Dr. Lu Liu
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • real-time decision making based on DRL
  • edge intelligence for IoT networks
  • multi-agent deep reinforcement learning for IoT networks
  • privacy-preserving mechanism for deep reinforcement learning
  • DRL-based task offloading and resource management in IoT networks
  • design, validation and optimization of DRL in IoT networks
  • scalable DRL for IoT systems with increased complexity

Published Papers (1 paper)


Research

27 pages, 1966 KiB  
Article
Time-Sensitive and Resource-Aware Concurrent Workflow Scheduling for Edge Computing Platforms Based on Deep Reinforcement Learning
by Jiaming Zhang, Tao Wang and Lianglun Cheng
Appl. Sci. 2023, 13(19), 10689; https://doi.org/10.3390/app131910689 - 26 Sep 2023
Viewed by 698
Abstract
Workflow scheduling on edge computing platforms in industrial scenarios aims to efficiently utilize the computing resources of edge platforms to meet user service requirements. Compared to ordinary task scheduling, tasks in workflow scheduling come with predecessor and successor constraints. Solutions to scheduling problems typically include traditional heuristic methods and modern deep reinforcement learning approaches. For heuristic methods, an increase in constraints complicates the design of scheduling rules, making it challenging to devise suitable algorithms; additionally, whenever the environment is updated, the scheduling algorithms must be redesigned. Existing deep reinforcement learning-based scheduling methods often face challenges related to training difficulty and computation time: the addition of constraints makes it difficult for neural networks to make decisions while satisfying those constraints, and previous methods mainly relied on RNNs and their variants to construct neural network models, lacking a computation-time advantage. In response to these issues, this paper introduces a novel workflow scheduling method based on reinforcement learning, which utilizes neural networks for direct decision-making. On the one hand, this approach leverages deep reinforcement learning, eliminating the need for researchers to define complex scheduling rules. On the other hand, it separates the parsing of the workflow and constraint handling from the scheduling decisions, allowing the neural network model to focus on learning how to schedule without having to learn how to handle workflow definitions and constraints among sub-tasks. The method takes resource utilization and response time as its optimization objectives; the network is trained using the PPO algorithm combined with Self-Critic, and a parameter transfer strategy is utilized to find the balance point for multi-objective optimization. Leveraging the advantages of reinforcement learning, the network can be trained and tested using randomly generated datasets. The experimental results indicate that the proposed method can generate different scheduling outcomes to meet various scenario requirements without modifying the neural network. Furthermore, when compared to other deep reinforcement learning methods, the proposed approach demonstrates certain advantages in scheduling performance and computation time.
(This article belongs to the Special Issue Deep Reinforcement Learning in IoT Networks)
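
As a rough, hypothetical sketch of the kind of training loop described in the abstract, the snippet below pairs a small policy network with a greedy self-critic rollout used as a baseline; for brevity it uses a plain REINFORCE update rather than full PPO clipping, and the task features, reward, and network sizes are placeholders rather than the authors' implementation.

```python
import torch
import torch.nn as nn

N_TASKS, FEAT = 8, 4   # hypothetical: 8 sub-tasks, 4 features each

class Policy(nn.Module):
    """Scores schedulable tasks; a schedule is built by repeatedly picking one."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(FEAT, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, feats):                 # feats: (n_remaining, FEAT)
        return self.net(feats).squeeze(-1)    # one score per remaining task

def rollout(policy, feats, greedy=False):
    """Builds a full task ordering; returns its log-probability and a toy reward."""
    remaining = list(range(N_TASKS))
    log_prob, order = torch.tensor(0.0), []
    while remaining:
        scores = policy(feats[remaining])
        dist = torch.distributions.Categorical(logits=scores)
        idx = torch.argmax(scores) if greedy else dist.sample()
        log_prob = log_prob + dist.log_prob(idx)
        order.append(remaining.pop(int(idx)))
    # placeholder reward: schedule "urgent" tasks (feature 0) as early as possible
    reward = sum(feats[t, 0].item() * (N_TASKS - i) for i, t in enumerate(order))
    return log_prob, reward

policy = Policy()
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

for step in range(200):
    feats = torch.rand(N_TASKS, FEAT)                      # random synthetic instance
    log_p, r_sample = rollout(policy, feats)               # stochastic rollout
    with torch.no_grad():
        _, r_greedy = rollout(policy, feats, greedy=True)  # self-critic baseline
    loss = -(r_sample - r_greedy) * log_p                  # REINFORCE with baseline
    opt.zero_grad()
    loss.backward()
    opt.step()
```

In the paper's setting the reward would instead reflect resource utilization and response time, and the policy update would use PPO's clipped surrogate objective.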