Edge Computing for Urban Internet of Things

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Networks".

Deadline for manuscript submissions: closed (31 January 2023) | Viewed by 30630

Special Issue Editors


Dr. Zhiwei Zhao
Leading Guest Editor
School of Computer Science and Engineering, University of Electronic Science and Technology of China (UESTC), Chengdu 610054, China
Interests: edge computing; IoT; low power networked systems; embedded intelligence

Dr. Jorge Ortiz
Co-Guest Editor
Department of Electrical and Computer Engineering, Rutgers University, New Brunswick, NJ 08901, USA
Interests: machine learning; sensor-based systems; intelligent infrastructure

Dr. Guohao Lan
Co-Guest Editor
Faculty of Electrical Engineering, Mathematics and Computer Science, Delft University of Technology, 2628 XE Delft, The Netherlands
Interests: mobile sensing; wearable computing

Special Issue Information

Dear Colleagues,

Recent years have witnessed increasing research interest in edge computing, a novel computing paradigm that enables delay-sensitive and data-intensive computing for resource-limited Internet of Things (IoT) systems. Rather than being a tailored version of cloud computing, edge computing is envisioned as the pervasive computing engine for a wide range of IoT systems, such as smart buildings and smart transportation.

When edge computing meets the urban IoT, both face new challenges. On the one hand, the heterogeneity of IoT applications and systems requires edge computing to be implemented in customized ways, and edge resources will be placed at diverse network positions (WiFi APs, cellular base stations, smart vehicles, bus stops, satellite Internet constellations, etc.). On the other hand, the computing capability brought by edge computing will transform the traditional computing and communication paradigms of IoT systems, including numerous embedded systems, low-power sensor networks, smartphone applications, smart vehicles, etc.

In this Special Issue, we are particularly interested in finding, defining, quantifying and solving the new issues that arise from the combination of edge computing and the urban IoT, including new edge frameworks, system deployments, sensing systems, service management, low-power wireless/AI, resource sharing, performance modeling, prototypes, system experiences, edge-IoT applications, and use cases in IoT/IoV/LEO networks/blockchain/digital twins.

Dr. Zhiwei Zhao
Dr. Jorge Ortiz
Dr. Guohao Lan
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Edge computing
  • Urban Internet of Things
  • IoT sensing systems
  • Machine learning for IoT
  • Low-power networking/computing
  • Novel edge computing paradigms for IoT
  • System architecture for edge–IoT
  • Edge-assisted mobile and sensing systems
  • 5G/6G edge computing frameworks/systems/prototypes
  • Satellite/LEO edge computing
  • D2D/mobile computing in urban IoT
  • Computation offloading in edge systems
  • Reliable data transfer in LoRa/NB-IoT networks
  • Network models/protocols in LoRa/NB-IoT networks
  • Lightweight machine learning schemes
  • Distributed learning in edge networks
  • Performance modeling of edge–IoT systems
  • Resource management in edge networks
  • Service orchestration and embedding
  • Edge–IoT use cases and application systems
  • Intelligent transportation systems
  • Smart buildings/cities
  • Internet of Vehicles/edge-blockchain/digital twin

Published Papers (13 papers)


Research

16 pages, 1091 KiB  
Article
LACE: Low-Cost Access Control Based on Edge Computing for Smart Buildings
by Haifeng Huang, Hongmin Tan, Xianyang Xu, Jianfei Zhang and Zhiwei Zhao
Electronics 2023, 12(2), 412; https://doi.org/10.3390/electronics12020412 - 13 Jan 2023
Cited by 1 | Viewed by 1465
Abstract
With the explosive growth in personal mobile devices, offloading computation to nearby mobile user devices acting as opportunistic edge servers, so as to support complex applications with limited computation resources, is receiving increasing attention. In this paper, we first formulate the optimal opportunistic offloading problem using the statistical properties of user movement speed and the CPU load of mobile edge servers. We then determine the amount of computation to be offloaded to individual mobile edge servers. Moreover, we design a PID-based adaptive mechanism that resumes large data transfers from breakpoints and automatically adjusts the packet size; by using the latency deviation as the variable of the gain function to estimate the packet size, it efficiently avoids data loss and reduces resource costs. Finally, an access control system based on edge computing is designed and implemented to make full use of the mobile phones of nearby users. It addresses, to some extent, the high latency of traditional schemes, achieving lower latency and higher data reliability.
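
The PID-based packet-size adaptation can be pictured with a short sketch. The following Python snippet is illustrative only (the class name, gains and size bounds are assumptions, not the authors' implementation): it grows or shrinks the packet size according to the deviation of the observed latency from a target, which plays the role of the gain-function variable mentioned in the abstract.

```python
# Illustrative sketch (not the authors' code): a PID-style controller that
# adapts the transfer packet size from the deviation between observed and
# target latency. Gains, bounds and names are hypothetical.

class PacketSizePID:
    def __init__(self, target_latency_ms, kp=0.8, ki=0.05, kd=0.2,
                 min_size=256, max_size=8192):
        self.target = target_latency_ms
        self.kp, self.ki, self.kd = kp, ki, kd
        self.min_size, self.max_size = min_size, max_size
        self.integral = 0.0
        self.prev_error = 0.0

    def next_packet_size(self, current_size, observed_latency_ms):
        # Positive error => latency below target => packets can grow.
        error = self.target - observed_latency_ms
        self.integral += error
        derivative = error - self.prev_error
        self.prev_error = error
        adjustment = self.kp * error + self.ki * self.integral + self.kd * derivative
        return int(min(self.max_size, max(self.min_size, current_size + adjustment)))
```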

20 pages, 2113 KiB  
Article
Link-Aware Frame Selection for Efficient License Plate Recognition in Dynamic Edge Networks
by Jiaxin Liu, Rong Cong, Xiong Wang and Yaxin Zhou
Electronics 2022, 11(19), 3186; https://doi.org/10.3390/electronics11193186 - 04 Oct 2022
Cited by 2 | Viewed by 1364
Abstract
With the booming development of the Internet of Things (IoT) and computer vision technology, running vision-based applications on IoT devices has become an overwhelming trend. Among vision-based applications, Automatic License Plate Recognition (ALPR) is one of the fundamental services for smart-city applications such as traffic control, autonomous driving and safety monitoring. However, existing work on ALPR usually assumes that IoT devices have sufficient power to transmit the whole captured stream to edge servers via stable network links. Considering the limited resources of IoT devices and highly dynamic wireless links, this assumption does not hold for an efficient ALPR service on low-power IoT devices in real wireless edge networks. In this paper, we propose a link-aware frame selection scheme for ALPR services in dynamic edge networks, aiming to reduce the transmission energy consumption of IoT devices. Specifically, we select a few key frames instead of the whole stream and transmit them under good links. We propose a two-layer recognition frame selection algorithm that optimizes the frame selection by exploiting both the video content variation and the real-time link quality. Extensive results show that, by carefully selecting the frames offloaded to edge servers, our algorithm can reduce the energy consumption of devices by 46.71% and achieve 97.95% recognition accuracy under highly dynamic wireless links in the edge network.
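
The two-layer selection logic can be sketched as follows; the frame-difference metric, thresholds and link-quality measure below are assumptions for illustration, not the paper's algorithm.

```python
# Hedged sketch of the two-layer idea (content layer + link layer).
import numpy as np

def select_frames(frames, link_quality, content_thresh=0.15, link_thresh=0.6):
    """frames: list of grayscale arrays; link_quality: per-frame value in [0, 1]."""
    selected, last_sent = [], None
    for i, frame in enumerate(frames):
        # Layer 1: content variation -- skip frames similar to the last offloaded one.
        if last_sent is not None:
            diff = np.mean(np.abs(frame.astype(float) - last_sent.astype(float))) / 255.0
            if diff < content_thresh:
                continue
        # Layer 2: link awareness -- transmit only when the link is good enough.
        if link_quality[i] >= link_thresh:
            selected.append(i)
            last_sent = frame
    return selected
```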

22 pages, 3493 KiB  
Article
Educational 5G Edge Computing: Framework and Experimental Study
by Qingyong Chen, Zhe Wang, Yu Su, Luwei Fu and Yuanlun Wei
Electronics 2022, 11(17), 2727; https://doi.org/10.3390/electronics11172727 - 30 Aug 2022
Cited by 10 | Viewed by 2263
Abstract
Benefiting from the large-scale commercial use of 5G, smart campuses have attracted increasing research attention in recent years and are expected to revolutionize traditional campus activities. However, several obstacles hinder the practical deployment of MEC (multi-access edge computing). First, traditional information infrastructures on campus cannot support latency-sensitive and computing-intensive smart applications, such as AR/VR, live interactive lectures and digital twin experiments. In addition, the mixture of old and new applications, isolated data islands and heterogeneous equipment management introduce further challenges. Moreover, the existing MEC framework proposed by ETSI and 3GPP cannot meet the specific deployment requirements of smart campuses, e.g., educational data security, real-time interactive applications and heterogeneous connections. In this paper, we propose a 5G-based architecture for smart education information infrastructure and define a new dedicated cloud architecture, eMEC (educational multi-access edge computing). It consists of a UGW (universal access gateway) and an eMEP (educational multi-access edge computing platform), making it possible to satisfy education-specific requirements and support long-term evolution. Furthermore, we implement the framework and conduct real-world field tests of eMEC on a university campus. Based on the framework and field tests, we also conduct a measurement study to unveil the spatio-temporal characteristics of mobile users on the smart campus and discuss exploiting them for better network performance. The experimental results show that the system achieves satisfactory performance in terms of both throughput and latency.

16 pages, 4251 KiB  
Article
A DASKL Descriptor via Encoding the Information of Keypoints and a 3D Local Surface for 3D Matching
by Yuanhao Wu, Chunyang Wang, Xuelian Liu, Chunhao Shi and Xuemei Li
Electronics 2022, 11(15), 2328; https://doi.org/10.3390/electronics11152328 - 27 Jul 2022
Viewed by 1236
Abstract
Three-dimensional matching is widely used in 3D vision tasks, such as 3D reconstruction, target recognition and 3D model retrieval. The description of local features is the fundamental task in 3D matching; however, existing descriptors only encode the surfaces surrounding keypoints, and thus they cannot distinguish between similar local surfaces of objects. Therefore, we propose a novel local feature descriptor called deviation angle statistics of keypoints from local points and adjacent keypoints (DASKL). To encode a local surface fully, we first calculate a multiscale local reference axis (LRA); second, a local consistency strategy is used to redirect the normal directions, and a Poisson-disk sampling strategy is used to eliminate redundancy in the data. Finally, the local surface is subdivided according to two kinds of spatial features, and a histogram of the deviation angles between the LRA and the point normals is generated in each subdivision space. For the coding between keypoints, we calculate the LRA deviation angles between the three nearest keypoints and the adjacent keypoint. The performance of our DASKL descriptor is evaluated on several datasets (i.e., B3R, UWAOR and LIDAR) with respect to Gaussian noise, varying mesh resolutions, clutter and occlusion. The results show that our DASKL descriptor achieves excellent performance in terms of descriptiveness, robustness and efficiency. Moreover, we further evaluate the generalization ability of the DASKL descriptor on a real-scene LIDAR dataset.
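
The core statistic, a histogram of deviation angles between the LRA and neighboring point normals, might look as follows in outline; the LRA estimation, spatial subdivision and keypoint-coding steps are omitted, and the bin count is an assumption.

```python
# Minimal sketch of the deviation-angle histogram described in the abstract.
import numpy as np

def deviation_angle_histogram(lra, normals, n_bins=12):
    """lra: unit 3-vector; normals: (N, 3) array of unit point normals."""
    cos_angles = np.clip(normals @ lra, -1.0, 1.0)
    angles = np.arccos(cos_angles)           # deviation angles in [0, pi]
    hist, _ = np.histogram(angles, bins=n_bins, range=(0.0, np.pi))
    return hist / max(hist.sum(), 1)         # normalize to form a descriptor bin
```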

15 pages, 2795 KiB  
Article
Interference Signal Feature Extraction and Pattern Classification Algorithm Based on Deep Learning
by Jiangyi Qin, Fei Zhang, Kai Wang, Yuan Zuo and Chenxi Deng
Electronics 2022, 11(14), 2251; https://doi.org/10.3390/electronics11142251 - 19 Jul 2022
Cited by 2 | Viewed by 2194
Abstract
Aiming at the scarcity of Low Earth Orbit (LEO) satellite spectrum resources, this paper proposes an interference signal feature extraction and pattern classification algorithm based on deep learning to further improve the stability of satellite–ground communication links. The algorithm can successfully predict the interference signal pattern, start–stop time, frequency variation range and other parameters, and it offers excellent interference detection performance, high detection accuracy and small parameter prediction errors. It can be applied to channel monitoring of satellite-to-ground communication links, enabling the repeated and efficient utilization of spectrum resources. Experiments show that the precision and recall of the algorithm for detecting five kinds of interference signals are all close to 100%, the prediction error of the start and stop times is less than 4 ms, and the prediction error of the start and stop frequencies is less than 6 kHz.
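
As a rough illustration only (this is an assumed architecture, not the paper's network), a compact CNN over spectrogram patches could serve as the classifier for the five interference patterns; the regression branches for start/stop time and frequency range are omitted.

```python
# Generic sketch of a spectrogram-patch interference classifier.
import torch
import torch.nn as nn

class InterferenceClassifier(nn.Module):
    def __init__(self, n_patterns=5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, n_patterns)   # five interference patterns

    def forward(self, spectrogram):             # (B, 1, T, F) spectrogram patches
        return self.head(self.features(spectrogram).flatten(1))
```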

19 pages, 505 KiB  
Article
Exploiting Duplications for Efficient Task Offloading in Multi-User Edge Computing
by Chang Shu, Yinhui Luo and Fang Liu
Electronics 2022, 11(14), 2244; https://doi.org/10.3390/electronics11142244 - 18 Jul 2022
Cited by 2 | Viewed by 1398
Abstract
The proliferation of IoT applications has pushed the horizon of edge computing, which provides processing ability at the edge of networks. Task offloading is one of the most important issues in edge computing and has attracted continuous research attention in recent years. With task offloading, end devices can offload an entire task, or only subtasks, to edge servers to meet delay and energy requirements. Most existing offloading schemes are limited by the increasing complexity of task topologies, as considerable time is wasted while local/edge subtasks wait for their preceding subtasks to finish executing at the edge/local device. This problem becomes even worse when the dependencies among subtasks grow complex and the number of end users increases. To address this problem, our key methodology is to exploit subtask duplications to reduce the inter-subtask delay and shorten the task completion time. Based on this, we propose a Duplication-based and Energy-aware Task Offloading scheme (DETO), which duplicates critical subtasks that have a large impact on the completion time and thus enhances the parallelism between local and edge computing. In addition, among the numerous possible subtask duplications, DETO evaluates the gain/cost ratio of each candidate and chooses the most efficient ones, greatly reducing the extra resources spent on duplication. We also design a distributed DETO algorithm to support multi-user, multi-server edge computing. Extensive evaluation results show that DETO effectively reduces the task completion time (by 12.22%) and improves resource utilization (by 15.17%), in particular for multi-user edge computing networks.
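
The gain/cost selection idea admits a simple greedy sketch; the gain and cost estimators below are placeholders, not DETO's actual models.

```python
# Hedged sketch: among candidate subtask duplications, greedily keep those
# with the best ratio of completion-time reduction (gain) to extra resource
# use (cost), under a resource budget.
def select_duplications(candidates, estimate_gain, estimate_cost, budget):
    """candidates: iterable of subtask IDs considered for duplication."""
    scored = sorted(candidates,
                    key=lambda s: estimate_gain(s) / max(estimate_cost(s), 1e-9),
                    reverse=True)
    chosen, spent = [], 0.0
    for s in scored:
        cost = estimate_cost(s)
        if spent + cost <= budget and estimate_gain(s) > 0:
            chosen.append(s)
            spent += cost
    return chosen
```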

25 pages, 2072 KiB  
Article
Voting-Based Scheme for Leader Election in Lead-Follow UAV Swarm with Constrained Communication
by Yuan Zuo, Wen Yao, Qiang Chang, Xiaozhou Zhu, Jianjun Gui and Jiangyi Qin
Electronics 2022, 11(14), 2143; https://doi.org/10.3390/electronics11142143 - 08 Jul 2022
Cited by 3 | Viewed by 2005
Abstract
Recent advances in unmanned aerial vehicles (UAVs) have enormously improved their utility and expanded their application scope. UAVs and swarm implementations further prevail in Smart City practices with the aid of edge computing and the urban Internet of Things. The lead–follow formation is an important means of organizing UAV swarms and has been adopted in diverse exercises for its efficiency and ease of control. However, its reliance on centralization puts the entire swarm system at risk of collapse and instability if a fatal fault occurs in the leader. Our motivation is to build a mechanism that helps the distributed swarm recover from such failures. Existing approaches, such as assigning designated backups, temporary clustering and traversal-based selection of a new leader, lack flexibility and adaptability. In this article, we propose a voting-based leader election scheme inspired by the Raft method for consensus in distributed computation. We further discuss the impact of communication conditions on the decentralized voting process by implementing a network resource pool. To dynamically evaluate UAV individuals, we outline measurement design principles and provide a realizable calculation example. Finally, empirical simulation results show better performance than the Raft-based method. Our voting-based approach exhibits clear advantages and is a promising way to achieve quick regrouping and fault recovery in lead–follow swarms.
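
A single score-aware election round in the spirit of this scheme might be sketched as follows; the fitness function and reachability model are assumptions, not the paper's measurement design.

```python
# Illustrative sketch: each reachable follower votes for the candidate with
# the highest fitness score it can see; a candidate wins with a majority.
def elect_leader(followers, candidates, fitness, reachable):
    """followers/candidates: node IDs; fitness: id -> score;
    reachable(a, b): True if node a can communicate with node b."""
    votes = {c: 0 for c in candidates}
    for f in followers:
        visible = [c for c in candidates if reachable(f, c)]
        if visible:
            votes[max(visible, key=fitness)] += 1
    winner = max(votes, key=votes.get)
    return winner if votes[winner] > len(followers) // 2 else None  # no majority
```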

15 pages, 2976 KiB  
Article
Research on Terminal-Side Computing Force Network Based on Massive Terminals
by Jianhua Gu, Jianhua Feng, Huiyang Xu and Ting Zhou
Electronics 2022, 11(13), 2108; https://doi.org/10.3390/electronics11132108 - 05 Jul 2022
Cited by 3 | Viewed by 1904
Abstract
With the explosive growth of the demand for computing power in the era of the digital economy and the continuous enhancement of the computing power of terminals, how to provide high-bandwidth, low-latency and low-cost services by leveraging user devices' computing, storage and network resources has become a research hotspot. However, due to differences in magnitude, architecture and performance, existing cloud computing and edge computing technologies must overcome many challenges, such as Android-based devices not supporting container technologies. In this paper, a terminal-side computing force network (TSCFN) architecture is proposed, which realizes unified computing power management of massive user devices through a layered, distributed architecture with highly dynamic and domain-federated deployments. At the same time, we propose a cloud-native container resource scheduling scheme based on the Android system to enhance the scalability of TSCFN. Taking a CDN service as a use case, the experimental results show that services provided by TSCFN can reduce latency and improve resource utilization, especially under unstable network conditions. Compared with a traditional CDN, the delay of HomeCDN based on TSCFN is reduced by 96% in a poor network environment.

15 pages, 13390 KiB  
Article
Depth Image Denoising Algorithm Based on Fractional Calculus
by Tingsheng Huang, Chunyang Wang and Xuelian Liu
Electronics 2022, 11(12), 1910; https://doi.org/10.3390/electronics11121910 - 19 Jun 2022
Cited by 4 | Viewed by 2004
Abstract
Depth images are often accompanied by unavoidable and unpredictable noise. Depth image denoising algorithms mainly attempt to fill hole data and optimise edges. In this paper, we study in detail the problem of effectively filtering the data of depth images under noise interference. Classical filtering algorithms tend to blur edge and texture information, whereas the fractional integral operator can retain more of it. In this paper, the Grünwald–Letnikov-type fractional integral denoising operator is introduced into the depth image denoising process, and the convolution template of this operator is studied and improved upon to build a fractional integral denoising model and algorithm for depth images. Depth images from the Redwood dataset were corrupted with noise, and the mask constructed from the fractional integral denoising operator was used to denoise the images by convolution. The experimental results show that the fractional integration order with the best denoising effect was −0.4 ≤ ν ≤ −0.3 and that the peak signal-to-noise ratio was improved by +3 to +6 dB. Under the same conditions, median filter denoising introduced −15 to −30 dB of distortion. The filtered depth images were converted to point cloud images, from which the denoising effect was subjectively evaluated. Overall, the results prove that the fractional integral denoising operator can effectively handle noise in depth images while preserving their edge and texture information, and it thus has an excellent denoising effect.
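
A minimal sketch of a Grünwald–Letnikov-style mask is given below; the paper's improved two-dimensional template and parameter choices are not reproduced, and the separable row/column application and normalization are assumptions.

```python
# Hedged sketch: build 1-D Grünwald-Letnikov coefficients for a fractional
# order v (v < 0 gives the fractional integral used for denoising) and apply
# them as a separable convolution mask.
import numpy as np
from scipy.ndimage import convolve1d

def gl_coefficients(v, n_terms=5):
    # w_0 = 1, w_k = w_{k-1} * (k - 1 - v) / k  (equals (-1)^k * C(v, k))
    w = [1.0]
    for k in range(1, n_terms):
        w.append(w[-1] * (k - 1 - v) / k)
    return np.array(w)

def gl_denoise(depth, v=-0.35, n_terms=5):
    """depth: 2-D array; v in [-0.4, -0.3] per the abstract's best range."""
    w = gl_coefficients(v, n_terms)
    w = w / w.sum()                        # normalize so flat regions are preserved
    out = convolve1d(depth.astype(float), w, axis=0, mode='nearest')
    return convolve1d(out, w, axis=1, mode='nearest')
```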

17 pages, 4128 KiB  
Article
A Vibration Fault Signal Identification Method via SEST
by Xuemei Li, Chunyang Wang, Xuelian Liu, Bo Xiao and Zishuo Wang
Electronics 2022, 11(9), 1300; https://doi.org/10.3390/electronics11091300 - 20 Apr 2022
Cited by 3 | Viewed by 1344
Abstract
(1) Background: With the development of intelligent transportation, effectively collecting and identifying the working state of vehicles is conducive to the analysis and processing of vehicle information by the Internet of Vehicles, thereby reducing the occurrence of traffic accidents. Aiming at the low identification accuracy of mechanical vibration fault signals, a signal identification method based on time-frequency detection is introduced. (2) Methods: This paper constructs a parameter model of the synchroextracting S transform (SEST) to remedy the poor time-frequency concentration of the original S transform. (3) Results: For SNR = −5 to +30 dB, compared with other transforms, the Rényi entropy of the SEST is the smallest, reaching 0.5246 at SNR = +22 dB. (4) Conclusions: Simulation comparison and analysis highlight the excellent time-frequency concentration and noise resistance of the SEST, and rotor vibration fault signals such as rotor misalignment, unbalance and bearing wear are identified using the SEST.
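
For reference, the standard Stockwell (S) transform on which the SEST builds is shown below; the synchroextracting operator applied on top of it is method-specific and not reproduced here.

```latex
% Standard S transform of a signal x(t); the SEST extracts only the
% time-frequency points where the estimated instantaneous frequency
% matches f.
S(\tau, f) = \int_{-\infty}^{\infty} x(t)\,
             \frac{|f|}{\sqrt{2\pi}}\,
             e^{-\frac{(\tau - t)^{2} f^{2}}{2}}\,
             e^{-i 2 \pi f t}\, \mathrm{d}t
```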

19 pages, 1457 KiB  
Article
Adversarial Attack and Defense: A Survey
by Hongshuo Liang, Erlu He, Yangyang Zhao, Zhe Jia and Hao Li
Electronics 2022, 11(8), 1283; https://doi.org/10.3390/electronics11081283 - 18 Apr 2022
Cited by 19 | Viewed by 9153
Abstract
In recent years, artificial intelligence technology represented by deep learning has achieved remarkable results in image recognition, semantic analysis, natural language processing and other fields. In particular, deep neural networks have been widely used in security-sensitive tasks in fields such as facial payment, smart healthcare and autonomous driving, which accelerate the construction of smart cities. Meanwhile, in order to fully unleash the potential of edge big data, there is an urgent need to push the AI frontier to the network edge. Edge AI, the combination of artificial intelligence and edge computing, supports the deployment of deep learning algorithms to the edge devices that generate the data, and it has become a key driver of smart city development. However, recent research shows that deep neural networks are vulnerable to adversarial examples and can be made to output wrong results. This type of attack, called an adversarial attack, greatly limits the adoption of deep neural networks in tasks with extremely high security requirements. In response, researchers have also begun to pay attention to the field of adversarial defense. In the ongoing contest between adversarial attacks and defenses, both attack and defense technologies have developed rapidly. This article first introduces the principles and characteristics of adversarial attacks and summarizes and analyzes the adversarial example generation methods of recent years. It then introduces adversarial example defense technologies in detail from the three directions of model, data and additional networks. Finally, combining the current status of adversarial example generation and defense technology development, we put forward the challenges and prospects of this field.
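
As one concrete example of the gradient-based generation methods such surveys cover, the classic fast gradient sign method (FGSM) can be sketched in a few lines (PyTorch; the epsilon value is illustrative):

```python
# FGSM: perturb the input by a single signed-gradient step that increases
# the model's loss, producing an adversarial example.
import torch

def fgsm_attack(model, loss_fn, x, y, epsilon=0.03):
    x = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x), y)
    loss.backward()
    x_adv = x + epsilon * x.grad.sign()    # step in the loss-increasing direction
    return x_adv.clamp(0.0, 1.0).detach()  # keep pixels in a valid range
```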

17 pages, 625 KiB  
Article
Lightweight Path Recovery in IPv6 Internet-of-Things Systems
by Zhuoliu Liu, Luwei Fu, Maojun Pan and Zhiwei Zhao
Electronics 2022, 11(8), 1220; https://doi.org/10.3390/electronics11081220 - 12 Apr 2022
Cited by 1 | Viewed by 1237
Abstract
In an Internet-of-Things system supported by Internet Protocol version 6 (IPv6), the Routing Protocol for Low-Power and Lossy Networks (RPL) has extensive applications in various network scenarios. In these novel scenarios, characterized by the access of massive numbers of devices, path recovery, which reconstructs the complete path of a packet transmission, plays a vital role in network measurement, topology inference and information security. This paper proposes a Lightweight Path recovery algorithm (LiPa) for multi-hop point-to-point communication. The core idea of LiPa is to make full use of the spatial and temporal information of the network topology to recover unknown paths iteratively. Specifically, spatial and temporal information refer to the potential correlations between different paths within a time slot and the path status across different time slots, respectively. To verify the effect of our proposal, we separately analyze the performance of leveraging temporal information, spatial information and their combination through extensive simulations. We also compare LiPa with two state-of-the-art methods in terms of recovery accuracy and the gain–loss ratio. The experimental results show that LiPa significantly outperforms both of its counterpart algorithms in different network settings. LiPa can thus be considered a promising approach for packet-level path recovery with minor loss and great adaptability.
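
The iterative use of spatial and temporal information can be caricatured as follows for a tree-like RPL topology (where paths toward the root share suffixes); the data structures and look-back window are assumptions, not LiPa's actual representation.

```python
# Heavily hedged sketch of the spatial-then-temporal recovery idea.
def recover_path(node, slot, history, recovered_in_slot):
    """Recover the node -> root path in a tree-like RPL topology."""
    # Spatial step: if another recovered path in this slot passes through
    # `node`, the suffix from `node` onward is this node's path.
    for path in recovered_in_slot.values():
        if node in path:
            return path[path.index(node):]
    # Temporal step: fall back to this node's path from a recent slot,
    # assuming the topology changes slowly.
    for past_slot in range(slot - 1, max(slot - 5, -1), -1):
        if (node, past_slot) in history:
            return list(history[(node, past_slot)])
    return None  # unresolved; a later iteration may fill it in
```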

18 pages, 4791 KiB  
Article
A Lightweight Remote Sensing Image Super-Resolution Method and Its Application in Smart Cities
by Nenghuan Zhang, Yongbin Wang and Shuang Feng
Electronics 2022, 11(7), 1050; https://doi.org/10.3390/electronics11071050 - 27 Mar 2022
Cited by 4 | Viewed by 1662
Abstract
With the growth of urban populations, a series of urban problems have emerged, and how to speed up smart city construction has received extensive attention. Remote sensing images have the advantages of wide spatial coverage and rich information, making them suitable research data for smart cities. However, due to limitations in imaging sensor conditions and complex weather, remote sensing images suffer from insufficient resolution and cloud occlusion, and thus cannot meet the resolution requirements of smart city tasks. Remote sensing image super-resolution (SR) can improve detail and texture information without upgrading the imaging sensor system, making it a feasible solution to the above problems. In this paper, we propose a novel remote sensing image super-resolution method that leverages texture features from internal and external references to aid SR reconstruction. We introduce a transformer attention mechanism to select and extract the texture features with high reference value, keeping the network lightweight, effective and easier to deploy on edge computing devices. In addition, our network can automatically learn and adjust the alignment angles and scales of texture features for better SR results. Extensive comparison experiments show that our proposed method achieves superior performance compared with several state-of-the-art SR methods. We also evaluate the application value of our SR method for urban region function recognition in smart cities, where the available dataset is of low quality. A comparative experiment between the original dataset and the SR dataset generated by our method indicates that our method can effectively improve recognition accuracy.
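
The reference-texture selection can be sketched with plain scaled dot-product attention plus a top-k cut to stay lightweight; the shapes, top-k value and fusion rule are assumptions, not the paper's architecture.

```python
# Hedged sketch: low-resolution query features attend over reference texture
# features, and only the top-scoring references are kept and fused.
import torch
import torch.nn.functional as F

def select_reference_textures(query, ref_keys, ref_values, top_k=4):
    """query: (N, d); ref_keys/ref_values: (M, d) reference texture features."""
    scores = query @ ref_keys.T / ref_keys.shape[1] ** 0.5   # (N, M) relevance
    top_scores, top_idx = scores.topk(top_k, dim=1)          # keep high-value refs
    weights = F.softmax(top_scores, dim=1)                   # (N, top_k)
    gathered = ref_values[top_idx]                           # (N, top_k, d)
    return (weights.unsqueeze(-1) * gathered).sum(dim=1)     # fused texture features
```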
