Deep Learning and Edge Computing for Internet of Things

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Computing and Artificial Intelligence".

Deadline for manuscript submissions: 31 August 2024 | Viewed by 13147

Special Issue Editors


Prof. Dr. Shaohua Wan
Guest Editor
Shenzhen Institute for Advanced Study, University of Electronic Science and Technology of China, Shenzhen 518110, China
Interests: deep learning; internet of things; edge computing

Dr. Yirui Wu
Guest Editor
College of Computer and Information, Hohai University, Nanjing 211100, China
Interests: computer vision; artificial intelligence; multimedia computing; intelligent water conservancy

Special Issue Information

Dear Colleagues,

The evolution of 5G and Internet of Things (IoT) technologies is leading to ubiquitous connections between humans and their environment, with applications in autonomous transportation, mobile e-commerce, unmanned vehicles, and healthcare bringing revolutionary changes to our daily lives. Moreover, these applications are reshaping the computing environment, which must now support an increasing range of functionality: multi-sensory data processing and analysis, control strategies for complex systems, and, ultimately, artificial intelligence. After several years of development, edge computing for deep learning has shown incomparable practical value in the IoT environment. Pushing computing resources to the edge, in closer proximity to devices, enables low-latency service delivery for both safety-critical and general applications. However, edge computing still has abundant untapped potential for deep learning. Systems should leverage awareness of the surrounding environment and attach more importance to edge–edge intelligence collaboration and edge–cloud communication. Additionally, computing systems should provide more support for services such as edge AI in order to optimize the computing process, including smart scheduling, privacy protection, and environment awareness. Overall, edge computing is predicted to be one of the most useful tools for the ubiquitous IoT environment. This Special Issue aims to explore recent advances in edge computing technologies.

The topics of interest for this Special Issue include, but are not limited to:

  • Hardware-software design approaches for edge computing and processing.
  • Frameworks and models for edge-computing-enabled IoT.
  • Multi-agent planning and coordination for edge computing.
  • Testbed and simulation tools for smart edge computing.
  • Intelligent computation allocation and offloading.
  • Application case studies for the smart edge computing environment.
  • Security and privacy for smart edge computing systems.
  • Energy-efficient and green computing for edge-computing-enabled IoT.
  • Network optimization and communication protocols for edge AI.
  • Edge machine-learning architectures dealing with sensor and signal variabilities.

Prof. Dr. Shaohua Wan
Dr. Yirui Wu
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • edge computing
  • deep learning
  • Internet of Things
  • computational intelligence
  • computing systems

Published Papers (9 papers)


Research


22 pages, 993 KiB  
Article
Connection-Aware Heuristics for Scheduling and Distributing Jobs under Dynamic Dew Computing Environments
by Pablo Sanabria, Sebastián Montoya, Andrés Neyem, Rodrigo Toro Icarte, Matías Hirsch and Cristian Mateos
Appl. Sci. 2024, 14(8), 3206; https://doi.org/10.3390/app14083206 - 11 Apr 2024
Viewed by 343
Abstract
Due to the widespread use of mobile and IoT devices, coupled with their continually expanding processing capabilities, dew computing environments have become a significant focus for researchers. These environments enable resource-constrained devices to contribute computing power to a local network. One major challenge within these environments revolves around task scheduling, specifically determining the optimal distribution of jobs across the available devices in the network. This challenge becomes particularly pronounced in dynamic environments where network conditions constantly change. This work proposes integrating the “reliability” concept into cutting-edge human-designed job distribution heuristics named ReleSEAS and RelBPA as a means of adapting to dynamic and ever-changing network conditions caused by nodes’ mobility. Additionally, we introduce a reinforcement learning (RL) approach, embedding both the notion of reliability and real-time network status into the RL agent. Our research rigorously contrasts our proposed algorithms’ throughput and job completion rates with their predecessors. Simulation results reveal a marked improvement in overall throughput, with our algorithms potentially boosting the environment’s performance. They also show a significant enhancement in job completion within dynamic environments compared to baseline findings. Moreover, when RL is applied, it surpasses the job completion rate of human-designed heuristics. Our study emphasizes the advantages of embedding inherent network characteristics into job distribution algorithms for dew computing. Such incorporation gives them a profound understanding of the network’s diverse resources. Consequently, this insight enables the algorithms to manage resources more adeptly and effectively. Full article
(This article belongs to the Special Issue Deep Learning and Edge Computing for Internet of Things)
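
For readers unfamiliar with reliability-aware job distribution, the following minimal Python sketch illustrates the general idea of discounting a device's ranking score by an estimated connection reliability before assigning jobs. The `Device` attributes, the scoring formula, and the greedy assignment loop are illustrative assumptions for this listing, not the ReleSEAS/RelBPA heuristics or the RL agent from the paper.

```python
from dataclasses import dataclass

@dataclass
class Device:
    name: str
    flops: float           # available compute capacity (ops/s)
    battery: float         # remaining battery fraction (0..1)
    reliability: float     # estimated probability the node stays reachable (0..1)
    queued_ops: float = 0.0  # work already assigned

def score(device: Device, job_ops: float) -> float:
    """Rank devices by expected useful throughput for this job.

    Capacity is discounted by the connection-reliability estimate, so an
    intermittently connected node is chosen only when it is clearly faster.
    """
    expected_time = (device.queued_ops + job_ops) / device.flops
    return device.reliability * device.battery / expected_time

def assign(jobs, devices):
    """Greedy assignment: each job goes to the currently best-scoring device."""
    plan = []
    for job_ops in jobs:
        best = max(devices, key=lambda d: score(d, job_ops))
        best.queued_ops += job_ops
        plan.append((job_ops, best.name))
    return plan

if __name__ == "__main__":
    cluster = [
        Device("phone-a", flops=2e9, battery=0.9, reliability=0.95),
        Device("tablet-b", flops=4e9, battery=0.5, reliability=0.60),
    ]
    print(assign([1e9, 3e9, 5e8], cluster))
```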

19 pages, 10987 KiB  
Article
Hybrid Deep Convolutional Generative Adversarial Network (DCGAN) and Xtreme Gradient Boost for X-ray Image Augmentation and Detection
by Ahmad Hoirul Basori, Sharaf J. Malebary and Sami Alesawi
Appl. Sci. 2023, 13(23), 12725; https://doi.org/10.3390/app132312725 - 27 Nov 2023
Viewed by 664
Abstract
The COVID-19 pandemic has exerted a widespread influence on a global scale, leading numerous nations to prepare for the endemicity of COVID-19. The polymerase chain reaction (PCR) swab test has emerged as the prevailing technique for identifying viral infections within the current pandemic. Following this, the application of chest X-ray imaging in individuals provides an alternate approach for evaluating the existence of viral infection. However, it is imperative to further boost the quality of collected chest pictures via additional data augmentation. The aim of this paper is to provide a technique for the automated analysis of X-ray pictures using server processing with a deep convolutional generative adversarial network (DCGAN). The proposed methodology aims to improve the overall image quality of X-ray scans. The integration of deep learning with Xtreme Gradient Boosting in the DCGAN technique aims to improve the quality of X-ray pictures processed on the server. The training model employed in this work is based on the Inception V3 learning model, which is combined with XGradient Boost. The results obtained from the training procedure were quite interesting: the training model had an accuracy rate of 98.86%, a sensitivity score of 99.1%, and a recall rate of 98.7%. Full article
(This article belongs to the Special Issue Deep Learning and Edge Computing for Internet of Things)
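
A hedged sketch of the general "pretrained CNN features plus gradient boosting" pattern described in the abstract: a pretrained Inception V3 backbone produces image embeddings and an XGBoost classifier is trained on top of them. The DCGAN augmentation stage is not shown, and the dataset handling and hyperparameters are placeholders rather than the authors' configuration.

```python
import numpy as np
from tensorflow.keras.applications.inception_v3 import InceptionV3, preprocess_input
from xgboost import XGBClassifier

def extract_features(images: np.ndarray) -> np.ndarray:
    """images: float array of shape (N, 299, 299, 3) with values in [0, 255]."""
    backbone = InceptionV3(include_top=False, weights="imagenet", pooling="avg")
    return backbone.predict(preprocess_input(images), verbose=0)

def train_classifier(images: np.ndarray, labels: np.ndarray) -> XGBClassifier:
    """Fit an XGBoost classifier on CNN embeddings (illustrative hyperparameters)."""
    features = extract_features(images)
    clf = XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1)
    clf.fit(features, labels)
    return clf
```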

16 pages, 3248 KiB  
Article
Hash Based DNA Computing Algorithm for Image Encryption
by Hongming Li, Lilai Zhang, Hao Cao and Yirui Wu
Appl. Sci. 2023, 13(14), 8509; https://doi.org/10.3390/app13148509 - 23 Jul 2023
Viewed by 895
Abstract
Deoxyribonucleic Acid (DNA) computing has demonstrated great potential in data encryption due to its capability of parallel computation, minimal storage requirement, and unbreakable cryptography. Focusing on high-dimensional image data for encryption with DNA computing, we propose a novel hash encoding-based DNA computing algorithm, which consists of a DNA hash encoding module and content-aware encrypting module. Inspired by the significant properties of the hash function, we build a quantity of hash mappings from image pixels to DNA computing bases, properly integrating the advantages of the hash function and DNA computing to boost performance. Considering the correlation relationship of pixels and patches for modeling, a content-aware encrypting module is proposed to reorganize the image data structure, resisting the crack with non-linear and high dimensional complexity originating from the correlation relationship. The experimental results suggest that the proposed method performs better than most comparative methods in key space, histogram analysis, pixel correlation, information entropy, and sensitivity measurements. Full article
(This article belongs to the Special Issue Deep Learning and Edge Computing for Internet of Things)
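
A toy sketch of the hash-keyed DNA-encoding idea: a SHA-256 digest of the key selects, per pixel, one of several 2-bit-to-base coding rules. The rule table below is an illustrative set of permutations (the DNA-cryptography literature usually restricts these to complement-respecting rules), and the paper's content-aware encryption module is not reproduced here.

```python
import hashlib
import numpy as np

# Illustrative base-ordering rules; each maps the 2-bit values 00, 01, 10, 11
# to the bases at positions 0..3 of the string.
RULES = ["ACGT", "AGCT", "CATG", "CTAG", "GATC", "GTAC", "TCGA", "TGCA"]

def dna_encode(pixels: np.ndarray, key: bytes) -> list[str]:
    """Encode a uint8 pixel array as DNA strings, one 4-base string per byte."""
    digest = hashlib.sha256(key).digest()
    out = []
    for i, byte in enumerate(pixels.flatten()):
        rule = RULES[digest[i % len(digest)] % len(RULES)]
        # split the byte into four 2-bit groups, most significant first
        bases = [rule[(int(byte) >> shift) & 0b11] for shift in (6, 4, 2, 0)]
        out.append("".join(bases))
    return out

if __name__ == "__main__":
    img = np.array([[17, 200], [3, 255]], dtype=np.uint8)
    print(dna_encode(img, b"secret-key"))
```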

16 pages, 1218 KiB  
Article
A Combined Multi-Classification Network Intrusion Detection System Based on Feature Selection and Neural Network Improvement
by Yunhui Wang, Zifei Liu, Weichu Zheng, Jinyan Wang, Hongjian Shi and Mingyu Gu
Appl. Sci. 2023, 13(14), 8307; https://doi.org/10.3390/app13148307 - 18 Jul 2023
Viewed by 1066
Abstract
Feature loss in IoT scenarios is a common problem. This situation poses a greater challenge in terms of real-time and accuracy for the security of intelligent edge computing systems, which also includes network security intrusion detection systems (NIDS). Losing some packet information can easily confuse NIDS and cause an oversight of security systems. We propose a novel network intrusion detection framework based on an improved neural network. The new framework uses 23 subframes and a mixer for multi-classification work, which improves the parallelism of NIDS and is more adaptable to edge networks. We also incorporate the K-Nearest Neighbors (KNN) algorithm and Genetic Algorithm (GA) for feature selection, reducing parameters, communication, and memory overhead. We named the above system as Combinatorial Multi-Classification-NIDS (CM-NIDS). Experiments demonstrate that our framework can be more flexible in terms of the parameters of binary classification, has a fairly high accuracy in multi-classification, and is less affected by feature loss. Full article
(This article belongs to the Special Issue Deep Learning and Edge Computing for Internet of Things)
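
The abstract mentions GA-driven feature selection scored with KNN; the compact sketch below shows that generic combination, with a genetic algorithm searching binary feature masks and cross-validated KNN accuracy as the fitness. Population size, operators, and data are illustrative choices, not the CM-NIDS configuration.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

def fitness(mask: np.ndarray, X: np.ndarray, y: np.ndarray) -> float:
    """Cross-validated KNN accuracy on the selected feature subset."""
    if not mask.any():
        return 0.0
    knn = KNeighborsClassifier(n_neighbors=5)
    return cross_val_score(knn, X[:, mask.astype(bool)], y, cv=3).mean()

def ga_select(X, y, n_gen=20, pop_size=16, p_mut=0.05):
    n_feat = X.shape[1]
    pop = rng.integers(0, 2, size=(pop_size, n_feat))
    for _ in range(n_gen):
        scores = np.array([fitness(ind, X, y) for ind in pop])
        parents = pop[np.argsort(scores)][-pop_size // 2:]  # keep the best half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, n_feat)                    # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            flip = rng.random(n_feat) < p_mut                # bit-flip mutation
            child[flip] ^= 1
            children.append(child)
        pop = np.vstack([parents, children])
    best = max(pop, key=lambda ind: fitness(ind, X, y))
    return best.astype(bool)
```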

18 pages, 4294 KiB  
Article
Faster RCNN Target Detection Algorithm Integrating CBAM and FPN
by Wenshun Sheng, Xiongfeng Yu, Jiayan Lin and Xin Chen
Appl. Sci. 2023, 13(12), 6913; https://doi.org/10.3390/app13126913 - 07 Jun 2023
Cited by 3 | Viewed by 1423
Abstract
In the process of image shooting, due to the influence of angle, distance, complex scenes, illumination intensity, and other factors, small targets and occluded targets will inevitably appear in the image. These targets have few effective pixels, few features, and no obvious features, which makes it difficult to extract their effective features and easily leads to false detection, missed detection, and repeated detection, thus affecting the performance of target detection models. To solve this problem, an improved faster region convolutional neural network (RCNN) algorithm integrating the convolutional block attention module (CBAM) and feature pyramid network (FPN) (CF-RCNN) is proposed to improve the detection and recognition accuracy of small-sized, occluded, or truncated objects in complex scenes. Firstly, it incorporates the CBAM attention mechanism in the feature extraction network in combination with the information filtered by spatial and channel attention modules, focusing on local efficient information of the feature image, which improves the detection ability in the face of obscured or truncated objects. Secondly, it introduces the FPN feature pyramid structure, and links high-level and bottom-level feature data to obtain high-resolution and strong semantic data to enhance the detection effect for small-sized objects. Finally, it optimizes non-maximum suppression (NMS) to compensate for the shortcomings of conventional NMS that mistakenly eliminates overlapping detection frames. The experimental results show that the mean average precision (MAP) of target detection of the improved algorithm on PASCAL VOC2012 public datasets is improved to 76.2%, which is 13.9 percentage points higher than those of the commonly used Faster RCNN and other algorithms. It is better than the commonly used small-sample target detection algorithm. Full article
(This article belongs to the Special Issue Deep Learning and Edge Computing for Internet of Things)
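
As background for the attention mechanism the abstract refers to, here is a minimal PyTorch sketch of a CBAM-style block: channel attention followed by spatial attention. The reduction ratio and kernel size are conventional illustrative values, and wiring the block into Faster RCNN's feature extractor is not shown.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )

    def forward(self, x):
        avg = self.mlp(x.mean(dim=(2, 3)))   # global average pooling
        mx = self.mlp(x.amax(dim=(2, 3)))    # global max pooling
        scale = torch.sigmoid(avg + mx).unsqueeze(-1).unsqueeze(-1)
        return x * scale

class SpatialAttention(nn.Module):
    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        avg = x.mean(dim=1, keepdim=True)    # per-pixel channel average
        mx = x.amax(dim=1, keepdim=True)     # per-pixel channel max
        scale = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * scale

class CBAM(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.channel = ChannelAttention(channels)
        self.spatial = SpatialAttention()

    def forward(self, x):
        return self.spatial(self.channel(x))
```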

14 pages, 3241 KiB  
Article
Rainfall Similarity Search Based on Deep Learning by Using Precipitation Images
by Yufeng Yu, Xingu He, Yuelong Zhu and Dingsheng Wan
Appl. Sci. 2023, 13(8), 4883; https://doi.org/10.3390/app13084883 - 13 Apr 2023
Viewed by 952
Abstract
Precipitation images play an important role in meteorological forecasting and flood forecasting, but how to characterize precipitation images and conduct rainfall similarity analysis is challenging and meaningful work. This paper proposes a rainfall similarity research method based on deep learning by using precipitation images. The algorithm first extracts regional precipitation, precipitation distribution, and precipitation center of the precipitation images and defines the similarity measures, respectively. Additionally, an ensemble weighting method of Normalized Discounted Cumulative Gain-Improved Particle Swarm Optimization (NDCG-IPSO) is proposed to weigh and fuse the three extracted features as the similarity measure of the precipitation image. During the experiment on similarity search for daily precipitation images in the Jialing River basin, the NDCG@10 of the search results reached 0.964, surpassing other methods. This indicates that the method proposed in this paper can better characterize the spatiotemporal characteristics of the precipitation image, thereby discovering similar rainfall processes and providing new ideas for hydrological forecasting. Full article
(This article belongs to the Special Issue Deep Learning and Edge Computing for Internet of Things)
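
Two generic ingredients named in the abstract are a weighted fusion of several per-feature similarity scores and evaluation of the ranked results with NDCG@k. The sketch below shows both in their standard form; the weights and relevance labels are made up for illustration, and the IPSO weight search is not reproduced.

```python
import numpy as np

def fused_similarity(scores: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted combination of, e.g., regional-precipitation, distribution,
    and precipitation-centre similarities."""
    return sum(weights[k] * scores[k] for k in scores)

def ndcg_at_k(relevances: list[float], k: int) -> float:
    """relevances: graded relevance of retrieved items, in retrieved order."""
    rel = np.asarray(relevances, dtype=float)[:k]
    discounts = 1.0 / np.log2(np.arange(2, rel.size + 2))
    dcg = float((rel * discounts).sum())
    ideal = np.sort(np.asarray(relevances, dtype=float))[::-1][:k]
    idcg = float((ideal * discounts[: ideal.size]).sum())
    return dcg / idcg if idcg > 0 else 0.0

if __name__ == "__main__":
    w = {"regional": 0.5, "distribution": 0.3, "centre": 0.2}  # illustrative weights
    print(fused_similarity({"regional": 0.8, "distribution": 0.6, "centre": 0.9}, w))
    print(ndcg_at_k([3, 2, 3, 0, 1, 2], k=10))
```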

14 pages, 1290 KiB  
Article
Attention-Based Active Learning Framework for Segmentation of Breast Cancer in Mammograms
by Xianjun Fu, Hao Cao, Hexuan Hu, Bobo Lian, Yansong Wang, Qian Huang and Yirui Wu
Appl. Sci. 2023, 13(2), 852; https://doi.org/10.3390/app13020852 - 07 Jan 2023
Cited by 1 | Viewed by 2471
Abstract
Breast cancer is one of the most serious malignant tumors that affect women’s health. To carry out the early screening of breast cancer, mammography provides breast cancer images for doctors’ efficient diagnosis. However, breast cancer lumps can vary in size and shape, bringing difficulties for the accurate recognition of both humans and machines. Moreover, the annotation of such images requires expert medical knowledge, which increases the cost of collecting datasets to boost the performance of deep learning methods. To alleviate these problems, we propose an attention-based active learning framework for breast cancer segmentation in mammograms; the framework consists of a basic breast cancer segmentation model, an attention-based sampling scheme and an active learning strategy for labelling. The basic segmentation model performs multi-scale feature fusion and enhancement on the basis of UNet, thus improving the distinguishing representation capability of the extracted features for further segmentation. Afterwards, the proposed attention-based sampling scheme assigns different weights for unlabeled breast cancer images by evaluating their uncertainty with the basic segmentation model. Finally, the active learning strategy selects unlabeled images with the highest weights for manual labeling, thus boosting the performance of the basic segmentation model via retraining with new labeled samples. Testing on four datasets, experimental results show that the proposed framework could greatly improve segmentation accuracy by about 15% compared with an existing method, while largely decreasing the cost of data annotation. Full article
(This article belongs to the Special Issue Deep Learning and Edge Computing for Internet of Things)
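
A bare-bones sketch of the uncertainty-driven selection step behind active learning for segmentation: the current model scores each unlabeled image by the mean per-pixel entropy of its predicted mask, and the highest-scoring images are sent for manual annotation. The model interface and scoring rule are assumptions for illustration, not the paper's attention-based sampling scheme.

```python
import numpy as np

def pixel_entropy(prob_mask: np.ndarray, eps: float = 1e-8) -> float:
    """prob_mask: predicted foreground probabilities in [0, 1], shape (H, W)."""
    p = np.clip(prob_mask, eps, 1.0 - eps)
    ent = -(p * np.log(p) + (1.0 - p) * np.log(1.0 - p))
    return float(ent.mean())

def select_for_labeling(unlabeled, predict_fn, budget: int) -> list[int]:
    """Return the indices of the `budget` most uncertain unlabeled images.

    predict_fn(image) must return an (H, W) probability mask.
    """
    scores = [pixel_entropy(predict_fn(img)) for img in unlabeled]
    return list(np.argsort(scores)[::-1][:budget])
```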

16 pages, 4456 KiB  
Article
An Adaptive Dynamic Channel Allocation Algorithm Based on a Temporal–Spatial Correlation Analysis for LEO Satellite Networks
by Juan Wang, Lijuan Sun, Jian Zhou and Chong Han
Appl. Sci. 2022, 12(21), 10939; https://doi.org/10.3390/app122110939 - 28 Oct 2022
Viewed by 1056
Abstract
Low Earth orbit (LEO) satellites that can be used as computing nodes are an important part of future communication networks. However, growing user demands, scarce channel resources and unstable satellite–ground links result in the challenge to design an efficient channel allocation algorithm for the LEO satellite network. Edge computing (EC) provides sufficient computing power for LEO satellite networks and makes the application of reinforcement learning possible. In this paper, an adaptive dynamic channel allocation algorithm based on a temporal–spatial correlation analysis for LEO satellite networks is proposed. First, according to the user mobility model, the temporal–spatial correlation of handoff calls is analyzed. Second, the dynamic channel allocation process in the LEO satellite network is formally described as a Markov decision process. Third, according to the temporal–spatial correlation, a policy for different call events is designed and online reinforcement learning is used to solve the channel allocation problem. Finally, the simulation results under different traffic distributions and different traffic intensities show that the proposed algorithm can greatly reduce the rejection probability of the handoff call and then improve the total performance of the LEO satellite network. Full article
(This article belongs to the Special Issue Deep Learning and Edge Computing for Internet of Things)
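
To make the "channel allocation as a Markov decision process" formulation concrete, here is a schematic tabular Q-learning sketch for granting channels. The state encoding, action set, and hyperparameters are illustrative assumptions and do not reproduce the paper's temporal–spatial policy.

```python
import numpy as np

N_STATES = 64       # e.g., a discretized (cell load, handoff intensity) index
N_CHANNELS = 8      # actions: which channel to grant

q_table = np.zeros((N_STATES, N_CHANNELS))

def choose_channel(state: int, free: list[int], epsilon: float = 0.1) -> int:
    """Epsilon-greedy choice among the currently free channels."""
    if np.random.random() < epsilon:
        return int(np.random.choice(free))
    return max(free, key=lambda a: q_table[state, a])

def update(state: int, action: int, reward: float, next_state: int,
           alpha: float = 0.1, gamma: float = 0.9) -> None:
    """One-step Q-learning backup; e.g., reward +1 for a carried call,
    -1 for a blocked handoff (illustrative reward scheme)."""
    target = reward + gamma * q_table[next_state].max()
    q_table[state, action] += alpha * (target - q_table[state, action])
```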

Review


27 pages, 3394 KiB  
Review
UAV Detection and Tracking in Urban Environments Using Passive Sensors: A Survey
by Xiaochen Yan, Tingting Fu, Huaming Lin, Feng Xuan, Yi Huang, Yuchen Cao, Haoji Hu and Peng Liu
Appl. Sci. 2023, 13(20), 11320; https://doi.org/10.3390/app132011320 - 15 Oct 2023
Cited by 2 | Viewed by 2786
Abstract
Unmanned aerial vehicles (UAVs) have gained significant popularity across various domains, but their proliferation also raises concerns about security, public safety, and privacy. Consequently, the detection and tracking of UAVs have become crucial. Among the UAV-monitoring technologies, those suitable for urban Internet-of-Things (IoT) environments primarily include radio frequency (RF), acoustic, and visual technologies. In this article, we provide a comprehensive review of passive UAV surveillance technologies, encompassing RF-based, acoustic-based, and vision-based methods for UAV detection, localization, and tracking. Our research reveals that certain lightweight UAV depth detection models have been effectively downsized for deployment on edge devices, facilitating the integration of edge computing and deep learning. In the city-wide anti-UAV, the integration of numerous urban infrastructure monitoring facilities presents a challenge in achieving a centralized computing center due to the large volume of data. To address this, calculations can be performed on edge devices, enabling faster UAV detection. Currently, there is a wide range of anti-UAV systems that have been deployed in both commercial and military sectors to address the challenges posed by UAVs. In this article, we provide an overview of the existing military and commercial anti-UAV systems. Furthermore, we propose several suggestions for developing general-purpose UAV-monitoring systems tailored for urban environments. These suggestions encompass considering the specific requirements of the application scenario, integrating detection and tracking mechanisms with appropriate countermeasures, designing for scalability and modularity, and leveraging advanced data analytics and machine learning techniques. To promote further research in the field of UAV-monitoring systems, we have compiled publicly available datasets comprising visual, acoustic, and radio frequency data. These datasets can be employed to evaluate the effectiveness of various UAV-monitoring techniques and algorithms. All of the datasets mentioned are linked in the text or in the references. Most of these datasets have been validated in multiple studies, and researchers can find more specific information in the corresponding papers or documents. By presenting this comprehensive overview and providing valuable insights, we aim to advance the development of UAV surveillance technologies, address the challenges posed by UAV proliferation, and foster innovation in the field of UAV monitoring and security. Full article
(This article belongs to the Special Issue Deep Learning and Edge Computing for Internet of Things)
