AI Enabled Communication on IoT Edge Computing

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Computer Science & Engineering".

Deadline for manuscript submissions: closed (31 December 2020) | Viewed by 20466

Special Issue Editors


Prof. Dr. Subhas Mukhopadhyay
Guest Editor
School of Engineering, Macquarie University, Sydney, NSW 2109, Australia
Interests: smart sensors; sensing technology; WSN; IoT; ICT; smart grid; energy harvesting

Prof. Dr. Arun Kumar Sangaiah
Guest Editor

Special Issue Information

Dear Colleagues,

Artificial intelligence (AI) has recently been applied with success to a range of research domains, including computer vision, natural language processing, and voice recognition. Combining AI with edge computing in the Internet of Things (IoT) has likewise produced significant breakthroughs, delivering high efficiency and adaptability in a variety of new applications, such as smart wearable devices in healthcare, the smart automotive industry, recommender systems, and financial analysis. AI is now also emerging in the edge networking and IoT application domain. The design and application of AI techniques for edge IoT network management, operations, and automation can improve the way we address networking today, in tasks such as topology discovery, network measurement, network monitoring, network modeling, and network control. On the other hand, network design and optimization for AI applications addresses a complementary topic, namely the support of AI-based systems through novel networking techniques, including new architectures as well as performance models for IoT edge computing. The networking research community regards these challenges as opportunities of the machine learning era, with edge computing applications in the IoT as a showcase.

The main aim of this Special Issue is to bring together novel approaches to AI-enabled communication on IoT edge computing, with a focus on performance evaluation and comparison against existing solutions. Both theoretical and experimental studies of AI-enabled edge computing architectures, frameworks, platforms, and protocols for the IoT are encouraged. High-quality review and survey papers are also welcome.

Papers considered for possible publication may focus on, but are not necessarily limited to, the following areas:

  • AI-enabled edge computing architectures, frameworks, platforms, and protocols for IoT;
  • Machine learning techniques in edge computing for IoT;
  • Edge network architecture and optimization for AI applications at scale;
  • AI algorithms for dynamic and large-scale topology discovery;
  • AI for wireless network resource management and medium access control;
  • Energy-efficient edge network operations via AI algorithms;
  • Deep learning and reinforcement learning in network control and management;
  • Self-learning and adaptive networking protocols and algorithms;
  • Novel applications and case studies with edge computing for IoT;
  • AI modeling and performance analysis in edge computing for IoT.

Prof. Dr. Subhas Mukhopadhyay
Prof. Dr. Arun Kumar Sangaiah
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Artificial intelligence
  • Sensor networks
  • IoT-enabled sensors
  • Edge computing
  • Machine learning

Published Papers (5 papers)


Research

19 pages, 1957 KiB  
Article
Learning-Based Task Offloading for Marine Fog-Cloud Computing Networks of USV Cluster
by Kuntao Cui, Bin Lin, Wenli Sun and Wenqiang Sun
Electronics 2019, 8(11), 1287; https://doi.org/10.3390/electronics8111287 - 05 Nov 2019
Cited by 18 | Viewed by 2972
Abstract
In recent years, unmanned surface vehicles (USVs) have made important advances in civil, maritime, and military applications. With the continuous improvement of autonomy, the increasing complexity of tasks, and the emergence of various types of advanced sensors, higher requirements are imposed on the computing performance of USV clusters, especially for latency-sensitive tasks. However, during marine operations, the relative movement of USV cluster nodes and the cluster's network topology cause the wireless channel states to change rapidly, and the computing resources of cluster nodes may become available or unavailable at any time, which is difficult to predict accurately in advance. Therefore, we propose an optimized offloading mechanism based on classic multi-armed bandit (MAB) theory. This mechanism enables USV cluster nodes to make offloading decisions dynamically by learning the potential computing performance of their neighboring team nodes, so as to minimize the average computation task offloading delay. The optimized algorithm, named the Adaptive Upper Confidence Boundary (AUCB) algorithm, enables the USV cluster to adapt effectively to marine vehicular fog computing networks, balancing the trade-off between exploration and exploitation, and corresponding simulations are designed to evaluate its performance. The simulation results show that the proposed algorithm quickly converges to the optimal computation task offloading combination strategy under both heavy and light input data loads.
(This article belongs to the Special Issue AI Enabled Communication on IoT Edge Computing)
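As a rough illustration of the bandit formulation described above, the short Python sketch below applies a UCB1-style rule to pick an offloading target with the lowest estimated delay minus an exploration bonus. The node names, delay model, and exploration weight are illustrative assumptions; the paper's AUCB algorithm refines this basic scheme rather than matching it exactly.

```python
import math
import random

# Illustrative sketch: UCB1-style selection of an offloading target.
# Candidate nodes, their (hidden) mean delays, and the exploration weight
# are assumptions for demonstration only.
NODES = ["usv_1", "usv_2", "usv_3"]
TRUE_MEAN_DELAY = {"usv_1": 0.8, "usv_2": 0.5, "usv_3": 1.2}  # seconds, unknown to the learner

counts = {n: 0 for n in NODES}        # times each node was chosen
avg_delay = {n: 0.0 for n in NODES}   # empirical mean offloading delay

def observe_delay(node):
    """Simulated feedback: noisy task-completion delay from the chosen node."""
    return max(0.05, random.gauss(TRUE_MEAN_DELAY[node], 0.1))

def choose_node(t, c=1.0):
    """Pick the node minimizing (mean delay - exploration bonus), i.e. a lower confidence bound."""
    for n in NODES:                    # try every node once first
        if counts[n] == 0:
            return n
    return min(NODES, key=lambda n: avg_delay[n] - c * math.sqrt(2 * math.log(t) / counts[n]))

for t in range(1, 501):
    node = choose_node(t)
    d = observe_delay(node)
    counts[node] += 1
    avg_delay[node] += (d - avg_delay[node]) / counts[node]  # incremental mean update

print({n: round(avg_delay[n], 3) for n in NODES}, counts)
```

After a few hundred tasks the counts concentrate on the node with the lowest true mean delay, which is the exploration/exploitation balance the abstract refers to.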

17 pages, 1742 KiB  
Article
Post Text Processing of Chinese Speech Recognition Based on Bidirectional LSTM Networks and CRF
by Li Yang, Ying Li, Jin Wang and Zhuo Tang
Electronics 2019, 8(11), 1248; https://doi.org/10.3390/electronics8111248 - 31 Oct 2019
Cited by 17 | Viewed by 3847
Abstract
With the rapid development of Internet of Things technology, speech recognition has been applied more and more widely. Chinese speech recognition is a complex process: owing to the influence of dialect, environmental noise, and context, the accuracy of speech-to-text conversion in multi-round dialogues and specific contexts is still not high. After general speech recognition, detecting and correcting the recognized text within its specific context helps improve the robustness of text comprehension and is a beneficial supplement to speech recognition technology. In this paper, a post-processing model for text after Chinese speech recognition is proposed, which combines a bidirectional long short-term memory (Bi-LSTM) network with a conditional random field (CRF) model. The task is divided into two stages, text error detection and text error correction, and the Bi-LSTM network and CRF are used in these two stages, respectively. Through verification and system tests on the SIGHAN 2013 Chinese Spelling Check (CSC) dataset, the experimental results show that the model can effectively improve the accuracy of text after speech recognition.
(This article belongs to the Special Issue AI Enabled Communication on IoT Edge Computing)
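For readers unfamiliar with the model described above, the following PyTorch sketch shows a minimal bidirectional LSTM token classifier for the error-detection stage. The vocabulary size, dimensions, and tag set are assumptions, and the CRF decoding layer used in the paper is omitted for brevity.

```python
import torch
import torch.nn as nn

class BiLSTMErrorDetector(nn.Module):
    """Minimal Bi-LSTM token classifier: labels each character as correct (0) or erroneous (1).
    A CRF layer, as in the paper, would normally replace the per-token classification head."""
    def __init__(self, vocab_size=5000, embed_dim=128, hidden_dim=128, num_tags=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * hidden_dim, num_tags)   # 2x for forward + backward states

    def forward(self, token_ids):
        x = self.embed(token_ids)          # (batch, seq_len, embed_dim)
        h, _ = self.lstm(x)                # (batch, seq_len, 2 * hidden_dim)
        return self.fc(h)                  # per-token tag scores

# Toy forward pass on a batch of two 6-character sentences (token IDs are placeholders).
model = BiLSTMErrorDetector()
batch = torch.randint(0, 5000, (2, 6))
scores = model(batch)
print(scores.shape)  # torch.Size([2, 6, 2])
```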

16 pages, 6256 KiB  
Article
Satellite IoT Edge Intelligent Computing: A Research on Architecture
by Junyong Wei, Jiarong Han and Suzhi Cao
Electronics 2019, 8(11), 1247; https://doi.org/10.3390/electronics8111247 - 31 Oct 2019
Cited by 50 | Viewed by 6077
Abstract
As the number of satellites continues to increase, satellites are becoming an important part of the IoT and of 5G/6G communications. How to handle the data of the satellite Internet of Things is therefore a problem worth attention. Owing to current on-board processing capability and the limits of the inter-satellite communication rate, acquiring data from a satellite incurs high delay and the data utilization rate is low. In order to use the data generated by the satellite IoT more effectively, we propose a satellite IoT edge intelligent computing architecture. In this article, we analyze current methods of satellite data processing, consider the development trend of future satellites, and use the characteristics of edge computing and machine learning to describe the satellite IoT edge intelligent computing architecture. Finally, we verify that the architecture can speed up the processing of satellite data. By demonstrating the performance of different neural network models in the satellite edge intelligent computing architecture, we find that lightweight neural networks can promote the development of satellite IoT edge intelligent computing.
(This article belongs to the Special Issue AI Enabled Communication on IoT Edge Computing)
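The sketch below illustrates, in highly simplified form, the on-board filtering idea behind such an edge architecture: a lightweight model scores each captured frame and only the useful frames are downlinked, instead of transmitting all raw data to the ground station. The scoring function, threshold, and data are placeholders, not elements of the paper's architecture.

```python
import numpy as np

def lightweight_score(frame):
    """Placeholder for an on-board lightweight neural network (e.g. a pruned or quantized CNN);
    here it is mocked with a simple image statistic."""
    return float(frame.mean())

def onboard_filter(frames, threshold=0.5):
    """Keep only frames whose score clears the threshold, reducing downlink volume."""
    return [f for f in frames if lightweight_score(f) >= threshold]

frames = [np.random.rand(64, 64) for _ in range(100)]   # mock sensor imagery
selected = onboard_filter(frames)
print(f"downlinking {len(selected)} of {len(frames)} frames")
```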

20 pages, 723 KiB  
Article
Multi-Source Reliable Multicast Routing with QoS Constraints of NFV in Edge Computing
by Shiming He, Kun Xie, Xuhui Zhou, Thabo Semong and Jin Wang
Electronics 2019, 8(10), 1106; https://doi.org/10.3390/electronics8101106 - 01 Oct 2019
Cited by 9 | Viewed by 4558
Abstract
Edge Computing (EC) allows processing to take place near the user, hence ensuring scalability and low latency. Network Function Virtualization (NFV) greatly simplifies network layout and reduces service operation costs in EC and the data center. Current work on NFV layout focuses on one-to-one communication, which is costly when applied directly to multicast or group services. Furthermore, many artificial intelligence applications and services in the cloud and EC generally communicate through groups and have special Quality of Service (QoS) and reliability requirements. We therefore address the problem of reliable Virtual Network Function (VNF) layout with various deployment costs in multi-source multicast. To guarantee QoS, we take into account bandwidth, latency, and reliability constraints. Additionally, a heuristic algorithm, named Multi-Source Reliable Multicast Tree Construction (RMTC), is proposed. The algorithm finds a common link on which to place the Service Function Chain (SFC) in the multilevel overlay directed (MOD) network derived from the original network, so that the deployed SFC can be shared by all users, thereby improving resource utilization; a Steiner tree is then constructed to find the reliable multicast tree. Two real topologies are used to evaluate the performance of the proposed algorithm. Simulation results indicate that, compared to other heuristic algorithms, our scheme effectively reduces the cost of reliable services while satisfying the QoS requirements.
(This article belongs to the Special Issue AI Enabled Communication on IoT Edge Computing)
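As a minimal illustration of the Steiner-tree step mentioned above, the sketch below uses networkx's approximate steiner_tree to connect multicast sources and receivers over a latency-weighted topology. The topology, weights, and terminal set are assumptions; the paper's RMTC heuristic additionally places SFCs in the MOD network and enforces bandwidth, latency, and reliability constraints.

```python
import networkx as nx
from networkx.algorithms.approximation import steiner_tree

# Build a small latency-weighted topology (illustrative values only).
G = nx.Graph()
edges = [("s1", "a", 2), ("s2", "a", 3), ("a", "b", 1),
         ("b", "r1", 2), ("b", "r2", 4), ("a", "r2", 6)]
for u, v, latency in edges:
    G.add_edge(u, v, weight=latency)

# Terminals are the multicast sources and receivers that the tree must span.
terminals = ["s1", "s2", "r1", "r2"]
tree = steiner_tree(G, terminals, weight="weight")
print(sorted(tree.edges(data="weight")))
```

The approximation connects all terminals while keeping the total edge weight (here, latency) low, which is the role the Steiner tree plays in building the reliable multicast tree.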

12 pages, 1394 KiB  
Article
Non-Cooperative Spectrum Access Strategy Based on Impatient Behavior of Secondary Users in Cognitive Radio Networks
by Zhen Zeng, Meng Liu, Jin Wang and Dongping Lan
Electronics 2019, 8(9), 995; https://doi.org/10.3390/electronics8090995 - 05 Sep 2019
Cited by 6 | Viewed by 1996
Abstract
In a cognitive radio network (CRN), secondary users (SUs) compete for limited spectrum resources, so the spectrum access process of SUs can be regarded as a non-cooperative game. With sufficient artificial intelligence (AI), SUs can adopt spectrum access strategies through their learning ability so as to improve their own benefit. Taking into account the impatience of SUs with the waiting time to access the spectrum, and the fact that primary users (PUs) have preemptive priority to use the licensed spectrum in the CRN, this paper proposes a repairable queueing model with balking and reneging to investigate spectrum access. Based on a utility function from an economic perspective, the relationship between the Nash equilibrium and the socially optimal spectrum access strategy of SUs is studied through analysis of the system model. A reasonable spectrum pricing scheme is then proposed to maximize the social benefit. Simulation results show that the proposed access mechanism aligns the Nash equilibrium strategy with the socially optimal strategy, maximizing the benefit of the whole cognitive system.
(This article belongs to the Special Issue AI Enabled Communication on IoT Edge Computing)
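To give a feel for the equilibrium-versus-social-optimum gap discussed above, the sketch below uses the classic observable M/M/1 (Naor) queue as a simplified stand-in for the paper's repairable queue with balking and reneging. All parameters (R, C, lam, mu) are assumed values, and the pricing scheme itself is not modeled; the point is only that the individually optimal (Nash) joining threshold exceeds the socially optimal one.

```python
import math

R, C = 10.0, 1.0      # reward for completing service, waiting cost per unit time
lam, mu = 0.8, 1.0    # arrival and service rates (rho = lam / mu < 1)

def social_welfare(n, lam=lam, mu=mu, R=R, C=C):
    """Expected net benefit per unit time when SUs join only if fewer than n are in the system
    (the system then behaves as an M/M/1/n queue)."""
    rho = lam / mu
    p_block = (rho ** n) * (1 - rho) / (1 - rho ** (n + 1))
    lam_eff = lam * (1 - p_block)                                   # effective joining rate
    mean_in_system = rho / (1 - rho) - (n + 1) * rho ** (n + 1) / (1 - rho ** (n + 1))
    return lam_eff * R - C * mean_in_system

n_nash = math.floor(R * mu / C)                                      # individually optimal threshold
n_social = max(range(1, n_nash + 1), key=social_welfare)             # socially optimal threshold
print(f"Nash threshold: {n_nash}, socially optimal threshold: {n_social}")
```

Because self-interested users ignore the congestion cost they impose on others, n_social comes out smaller than n_nash; an admission price can shift the equilibrium toward the social optimum, which is the role of the pricing scheme in the paper.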
