Topic Editors

Dr. Peiying Zhang, Qingdao Institute of Software, College of Computer Science and Technology, China University of Petroleum (East China), Qingdao 266580, China
Dr. Haotong Cao, Department of Computing, The Hong Kong Polytechnic University, Hong Kong SAR, China
Dr. Keping Yu, Graduate School of Science and Engineering, Hosei University, Tokyo, Japan

Future Internet Architecture: Difficulties and Opportunities

Abstract submission deadline: 31 March 2024
Manuscript submission deadline: 30 June 2024
Viewed by 20864

Topic Information

Dear Colleagues,

Since the formal deployment of the TCP/IP protocol in the 1980s, and with the continued development of network technology, Internet-based applications (including email, cloud computing, online shopping, social networks, etc.) have had a growing effect on people's work and lives. New applications in areas such as smart transportation, smart cities, the industrial Internet, telemedicine, and holographic communications have quietly emerged, and people are rapidly entering an intelligent world where everything is perceived and connected. Countless IoT-based applications have been developed to date; consider smart cities, smart highways, remote robotic surgery, unmanned driving, drones, VR games, etc. As the Internet enters the second stage of its development, designing the architecture and capabilities it will need in the future poses a huge challenge.

This topic aims to bring together researchers from industry and academia to share their latest discoveries and developments in the field of Internet architecture. Topics of interest include, but are not limited to, the following:

  1. Advanced communication network infrastructures for future Internet;
  2. Architecture and protocol system design for future Internet;
  3. Resource management, allocation, orchestration, and optimization for future Internet;
  4. Internet of Things technologies for future Internet;
  5. Multiple access and transmission control technologies for future Internet;
  6. Software-defined network functions and network virtualization technologies for future Internet;
  7. Spectrum-sharing technologies for future Internet;
  8. Big data and security issues for future Internet;
  9. Cloud computing, fog computing, and edge computing technologies for future Internet;
  10. Digital-twin technologies and applications for future Internet;
  11. Intelligent transportation system technologies for future Internet;
  12. System interoperability and flexible service composition for future Internet;
  13. Smart systems for public security and safety for future Internet;
  14. Social network analysis and mining for future Internet;
  15. Discovery and identification of false and bad information for future Internet;
  16. Intelligent analysis and processing of multimodal data for future Internet;
  17. Application of artificial intelligence for future Internet;
  18. Test platform and prototype deployment;
  19. Other application aspects for future Internet.

Dr. Peiying Zhang
Dr. Haotong Cao
Dr. Keping Yu
Topic Editors

 

Keywords

  • future Internet architecture
  • advanced technologies
  • network management and optimization
  • future communication technology
  • application of artificial intelligence

Participating Journals

Journal Name      Impact Factor   CiteScore   Launched Year   First Decision (median)   APC
Drones            4.8             6.1         2017            14.8 days                 CHF 2600
Electronics       2.9             4.7         2012            15.8 days                 CHF 2200
Future Internet   3.4             6.7         2009            13.6 days                 CHF 1600
Information       3.1             5.8         2010            22.6 days                 CHF 1600
Mathematics       2.4             3.5         2013            17.7 days                 CHF 2600

Preprints is a platform dedicated to making early versions of research outputs permanently available and citable. MDPI journals allow posting on preprint servers such as Preprints.org prior to publication. For more details about preprints, please visit https://www.preprints.org.

Published Papers (22 papers)

Article
Flow-Based Joint Programming of Time Sensitive Task and Network
Electronics 2023, 12(19), 4103; https://doi.org/10.3390/electronics12194103 - 30 Sep 2023
Viewed by 179
Abstract
Owing to the application of artificial intelligence and big data analysis in industry, automobiles, aerospace, and other fields, the high-bandwidth candidate, time-sensitive networking (TSN), has been introduced into data communication networks. Beyond maintaining safety-critical and real-time guarantees, TSN faces the challenge of carrying large traffic volumes, such as sampled video for computer vision. In this paper, we consider task scheduling and the time-sensitive network together and formalize them as a first-order-constraint satisfiability modulo theories (SMT) problem. Based on the solver's result, we build flow-level scheduling based on IEEE 802.1Qbv. By splitting flows properly, the model requires fewer constraint inequalities than the traditional frame-based programming model as traffic grows, and achieves near-100% utilization. It can serve as a general model for deterministic task and network scheduling design.
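To make the SMT formulation concrete, here is a minimal sketch of IEEE 802.1Qbv-style gate scheduling posed as a satisfiability problem, written with the z3-solver Python bindings. The flow periods, durations, and single-link topology are invented for illustration and are not the paper's actual model.

```python
# Toy SMT model of 802.1Qbv-style gate scheduling on a single link.
# Flow parameters are illustrative, not taken from the paper.
from z3 import Int, Solver, Or, sat

flows = [(100, 10), (200, 25), (400, 40)]  # (period_us, transmit_duration_us)
hyperperiod = 400                          # lcm of the periods

s = Solver()
offsets = [Int(f"offset_{i}") for i in range(len(flows))]

# Each flow's transmission must fit inside its own period.
for (period, dur), off in zip(flows, offsets):
    s.add(off >= 0, off + dur <= period)

# No two frame instances may overlap anywhere in the hyperperiod.
for i, (pi, di) in enumerate(flows):
    for j, (pj, dj) in enumerate(flows):
        if i >= j:
            continue
        for a in range(hyperperiod // pi):
            for b in range(hyperperiod // pj):
                si, sj = offsets[i] + a * pi, offsets[j] + b * pj
                s.add(Or(si + di <= sj, sj + dj <= si))

if s.check() == sat:
    m = s.model()
    for i, off in enumerate(offsets):
        print(f"flow {i}: offset {m[off]} us")
```

Splitting a large flow into smaller frames shrinks each duration term, which loosens the pairwise non-overlap constraints; that is the intuition behind the flow-splitting result described in the abstract.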
Article
Image Inpainting Based on Multi-Level Feature Aggregation Network for Future Internet
Electronics 2023, 12(19), 4065; https://doi.org/10.3390/electronics12194065 - 28 Sep 2023
Viewed by 196
Abstract
(1) Background: In the future Internet era, clarity and structural rationality are important factors in image inpainting. Currently, image inpainting techniques based on generative adversarial networks have made great progress; however, in practical applications, there are still problems of unreasonable or blurred inpainting results for high-resolution images and images with complex structures. (2) Methods: In this work, we designed a lightweight multi-level feature aggregation network that extracts features from convolutions with different dilation rates, enabling the network to obtain more feature information and recover more reasonable missing image content. Fast Fourier convolution was designed and used in the generative network, enabling the generator to consider the global context at a shallow level, making it easier to perform high-resolution image inpainting tasks. (3) Results: The experiment shows that the method designed in this paper performs well in geometrically complex and high-resolution image inpainting tasks, providing a more reasonable and clearer inpainting image. Compared with the most advanced image inpainting methods, our method outperforms them in both subjective and objective evaluations. (4) Conclusions: The experimental results indicate that the method proposed in this paper has better clarity and more reasonable structural features.

Article
A Spatio-Temporal Spotting Network with Sliding Windows for Micro-Expression Detection
Electronics 2023, 12(18), 3947; https://doi.org/10.3390/electronics12183947 - 19 Sep 2023
Viewed by 205
Abstract
Micro-expressions reveal underlying emotions and are widely applied in political psychology, lie detection, law enforcement and medical care. Micro-expression spotting aims to detect the temporal locations of facial expressions from video sequences and is a crucial task in micro-expression recognition. In this study, the problem of micro-expression spotting is formulated as micro-expression classification per frame. We propose an effective spotting model with sliding windows called the spatio-temporal spotting network. The method involves a sliding window detection mechanism, combines the spatial features from the local key frames and the global temporal features and performs micro-expression spotting. The experiments are conducted on the CAS(ME)² database and the SAMM Long Videos database, and the results demonstrate that the proposed method outperforms the state-of-the-art method by 30.58% for the CAS(ME)² and 23.98% for the SAMM Long Videos according to overall F-scores.

Article
A Collaborative Inference Algorithm in Low-Earth-Orbit Satellite Network for Unmanned Aerial Vehicle
Drones 2023, 7(9), 575; https://doi.org/10.3390/drones7090575 - 11 Sep 2023
Viewed by 385
Abstract
In recent years, the low-Earth-orbit (LEO) satellite network has achieved considerable development. Moreover, it is necessary to introduce edge computing into LEO networks, which can provide high-quality services, such as worldwide seamless low-delay computation offloading for unmanned aerial vehicles (UAVs) or user terminals and nearby remote-sensing data processing for UAVs or satellites. However, because the computation resource of the satellite is relatively scarce compared to the ground server, it is hard for a single satellite to complete massive deep neural network (DNN) inference tasks in a short time. Consequently, in this paper, we focus on the multi-satellite collaborative inference problem and propose a novel COllaborative INference algorithm for LEO edge computing called COIN-LEO. COIN-LEO manages to split the complete DNN model into several submodels consisting of some consecutive layers and deploy these submodels to several satellites for inference. We innovatively leverage deep reinforcement learning (DRL) to efficiently split the model and use a neural network (NN) to predict the time required for inference tasks of a specific submodel on a specific satellite. By implementing COIN-LEO and evaluating its performance in a highly realistic satellite-network-emulation platform, we find that our COIN-LEO outperforms baseline algorithms in terms of inference throughput, time consumed and network traffic overhead.
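As a rough illustration of the splitting decision COIN-LEO has to make, the sketch below partitions a sequential DNN's layers into consecutive stages across satellites, using dynamic programming to minimize the bottleneck stage time (a proxy for inference throughput). The layer timings and link delay are invented; the paper instead learns the split with DRL and predicts per-satellite latencies with a neural network.

```python
# DP sketch: split consecutive DNN layers across k satellites to minimize
# the bottleneck stage time. Timings are invented; COIN-LEO learns this
# split with DRL and an NN latency predictor instead.
from functools import lru_cache

layer_ms = [4.0, 7.0, 3.0, 9.0, 5.0, 2.0]  # per-layer inference time
link_ms = 1.5                              # inter-satellite transfer per cut

def best_split(times, k):
    n = len(times)
    prefix = [0.0]
    for t in times:
        prefix.append(prefix[-1] + t)

    @lru_cache(None)
    def solve(start, parts):
        """Min achievable bottleneck covering layers[start:] with `parts` stages."""
        if parts == 1:
            return prefix[n] - prefix[start]
        best = float("inf")
        for cut in range(start + 1, n - parts + 2):
            stage = prefix[cut] - prefix[start] + link_ms  # compute + transfer
            best = min(best, max(stage, solve(cut, parts - 1)))
        return best

    return solve(0, k)

print(f"bottleneck stage time with 3 satellites: {best_split(layer_ms, 3):.1f} ms")
```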

Article
Delay-Sensitive Service Provisioning in Software-Defined Low-Earth-Orbit Satellite Networks
Electronics 2023, 12(16), 3474; https://doi.org/10.3390/electronics12163474 - 16 Aug 2023
Viewed by 336
Abstract
With the advancement of space technology and satellite communications, low-Earth-orbit (LEO) satellite networks have experienced rapid development in the past decade and, in the vision of 6G, will play an important role in future 6G networks. At the same time, a variety of applications, including many delay-sensitive applications, are continuously emerging. Due to the highly dynamic nature of LEO satellite networks, supporting time-deterministic services in such networks is challenging. However, we can provide latency guarantees for most delay-sensitive applications through data-plane traffic shaping and control-plane routing optimization. This paper addresses the routing optimization problem for time-sensitive (TS) flows in software-defined LEO satellite networks. We model the problem as an integer linear programming (ILP) model aiming to minimize path handovers and maximum link utilization while meeting TS flow latency constraints. Since this problem is NP-hard, we design an efficient longest continuous path (LCP) approximation algorithm. LCP selects the longest valid path satisfying the delay constraints in each topology snapshot; an auxiliary graph then determines the routing sequence with minimized handovers. We implement an LEO satellite network testbed with Open vSwitch (OVS) and the Open Network Operating System (ONOS) controller to evaluate LCP. The results show that LCP reduces the number of path handovers by up to 31.7% and keeps the maximum link utilization lowest for more than 75% of the time compared to benchmark algorithms. In summary, LCP achieves excellent path handover optimization and load balancing performance under TS flow latency constraints.
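One reading of the LCP idea can be sketched in a few lines: enumerate delay-feasible paths per topology snapshot, then greedily keep whichever path remains valid for the most consecutive snapshots, so handovers happen only when forced. The adjacency-dict topology format, the function names, and the assumption that every snapshot contains at least one feasible path are mine, not the paper's.

```python
# Sketch of longest-continuous-path selection over topology snapshots.
# Topologies are adjacency dicts {node: {neighbor: delay_ms}}; all data
# is illustrative.

def valid_paths(snapshot, src, dst, delay_bound):
    """All simple src->dst paths whose total delay meets the bound (DFS)."""
    paths, stack = set(), [(src, (src,), 0.0)]
    while stack:
        node, path, delay = stack.pop()
        if node == dst:
            paths.add(path)
            continue
        for nxt, d in snapshot.get(node, {}).items():
            if nxt not in path and delay + d <= delay_bound:
                stack.append((nxt, path + (nxt,), delay + d))
    return paths

def lcp_route(snapshots, src, dst, delay_bound):
    """Per segment, pick the path that stays feasible for the most snapshots."""
    feasible = [valid_paths(s, src, dst, delay_bound) for s in snapshots]
    routes, i = [], 0
    while i < len(snapshots):
        def lifetime(p):  # consecutive snapshots from i that keep p feasible
            n = 0
            while i + n < len(snapshots) and p in feasible[i + n]:
                n += 1
            return n
        best = max(feasible[i], key=lifetime)
        routes.append((i, i + lifetime(best) - 1, best))
        i += lifetime(best)
    return routes

snaps = [
    {"s": {"a": 3, "b": 6}, "a": {"d": 4}, "b": {"d": 2}},  # t0
    {"s": {"a": 3}, "a": {"d": 4}},                          # t1: link s-b gone
]
print(lcp_route(snaps, "s", "d", delay_bound=10.0))  # keeps s-a-d across both
```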

Article
An Explainable Fake News Analysis Method with Stance Information
Electronics 2023, 12(15), 3367; https://doi.org/10.3390/electronics12153367 - 07 Aug 2023
Viewed by 485
Abstract
The high level of technological development has enabled fake news to spread faster than real news in cyberspace, leading to significant impacts on the balance and sustainability of current and future social systems. Collecting fake news data and using artificial intelligence to detect fake news therefore play an important role in building a more sustainable and resilient society. Existing methods for detecting fake news have two main limitations: they focus only on the classification of news authenticity, neglecting the semantics between stance information and news authenticity, and they involve no cognition-related information, with insufficient data available for joint stance classification and true-false classification. Therefore, we propose a fake news analysis method based on stance information for explainable fake news detection. To make better use of news data, we construct a fake news dataset built on cognitive information, consisting primarily of stance labels along with true-false labels. We also introduce stance information to further improve news falsity analysis. To better explain the relationship between fake news and stance, we use propensity score matching for causal inference to calculate the correlation between stance information and true-false classification. The experimental results show that propensity score matching for causal inference yielded a negative correlation between stance consistency and fake news classification.
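Propensity score matching itself is standard and easy to sketch: fit a model of the probability of "treatment" (here, stance consistency) given covariates, match each treated article to the nearest-scored control, and compare outcomes across the matched pairs. The sketch below uses scikit-learn on synthetic data; all variable names and the data-generating process are illustrative, not the paper's.

```python
# Propensity score matching sketch: effect of a binary "stance consistency"
# treatment on a binary fake-news label. Data is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
X = rng.normal(size=(n, 5))                 # article covariates
treat = X[:, 0] + rng.normal(size=n) > 0    # stance-consistent or not
y = (X[:, 1] - 0.8 * treat + rng.normal(size=n) > 0).astype(int)  # fake label

# 1. Propensity: P(treated | covariates).
ps = LogisticRegression().fit(X, treat).predict_proba(X)[:, 1]

# 2. Nearest-neighbour matching on the propensity score.
treated = np.where(treat)[0]
control = np.where(~treat)[0]
matches = control[np.abs(ps[control][None, :] - ps[treated][:, None]).argmin(axis=1)]

# 3. Average treatment effect on the treated (ATT).
att = y[treated].mean() - y[matches].mean()
print(f"ATT of stance consistency on fake label: {att:+.3f}")
```

On data generated with a negative treatment effect, the printed ATT comes out negative, mirroring the direction of the paper's reported finding.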

Article
Network Resource Allocation Algorithm Using Reinforcement Learning Policy-Based Network in a Smart Grid Scenario
Electronics 2023, 12(15), 3330; https://doi.org/10.3390/electronics12153330 - 03 Aug 2023
Viewed by 405
Abstract
The exponential growth in user numbers has resulted in an overwhelming surge in data that the smart grid must process. To tackle this challenge, edge computing emerges as a vital solution. However, current heuristic resource scheduling approaches often suffer from resource fragmentation and consequently get stuck in locally optimal solutions. This paper introduces a novel network resource allocation method for multi-domain virtual networks with the support of edge computing. First, the edge network is modeled as a multi-domain virtual network model and resource constraints specific to the edge computing network are formulated. Second, a policy network is constructed for reinforcement learning (RL) and an optimal resource allocation strategy is obtained under the premise of ensuring resource requirements. In the experimental section, our algorithm is compared with three other algorithms. The experimental results show that the algorithm achieves average improvements of 5.30%, 8.85%, 15.47% and 22.67% in long-term average revenue–cost ratio, virtual network request acceptance ratio, long-term average revenue and CPU resource utilization, respectively.

Article
A Novel Localization Technology Based on DV-Hop for Future Internet of Things
Electronics 2023, 12(15), 3220; https://doi.org/10.3390/electronics12153220 - 25 Jul 2023
Viewed by 362
Abstract
In recent years, localization has become a hot issue in many applications of the Internet of Things (IoT). The distance vector-hop (DV-Hop) algorithm is accepted in many fields because it is simple, low-cost, and runs on common hardware, but it suffers from low positioning accuracy. To address this issue, an improved DV-Hop algorithm, TWGDV-Hop, is put forward in this article. First, anchor positions are broadcast using three communication radii, hops are subdivided, and a hop-difference correction coefficient is introduced to make inter-node hop counts more accurate. Then, a squared-error fitness function is used to calculate the average distance per hop (ADPH), and a distance weighting factor is added to jointly refine the ADPH. Finally, a good-point-set initialization and a Lévy flight strategy are introduced into the grey wolf optimizer (GWO) to enhance its ergodicity and its ability to escape local optima, and the improved GWO is used to refine the position of each node to be located, further improving localization accuracy. Simulation results show that the proposed positioning algorithm improves positioning accuracy by 51.5%, 40.35%, and 66.8% compared to the original DV-Hop in square, X-shaped, and O-shaped random distribution environments, respectively, with a modest increase in time complexity.
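For readers unfamiliar with the baseline, classic DV-Hop has three steps: flood anchor positions to obtain minimum hop counts, convert hops to distances via the average distance per hop (ADPH), and multilaterate by least squares. The sketch below implements that baseline only, on an illustrative connected topology; TWGDV-Hop's refinements (multi-radius broadcasting, weighted ADPH, the improved GWO) are omitted.

```python
# Baseline DV-Hop sketch: hop-count flooding, ADPH estimation, least-squares
# multilateration. Network data is illustrative.
import numpy as np
from collections import deque

def min_hops(adj, src):
    """BFS hop counts from one node to all reachable others."""
    hops, q = {src: 0}, deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in hops:
                hops[v] = hops[u] + 1
                q.append(v)
    return hops

def dv_hop(adj, anchors, unknown):
    """anchors: {node: (x, y)}; returns estimated (x, y) of `unknown`."""
    hop_tables = {a: min_hops(adj, a) for a in anchors}
    ids = list(anchors)
    # ADPH per anchor: sum of true inter-anchor distances / sum of hops.
    adph = {}
    for a in ids:
        num = sum(np.hypot(*np.subtract(anchors[a], anchors[b]))
                  for b in ids if b != a)
        den = sum(hop_tables[a][b] for b in ids if b != a)
        adph[a] = num / den
    # Estimated distance from each anchor to the unknown node.
    d = np.array([adph[a] * hop_tables[a][unknown] for a in ids])
    P = np.array([anchors[a] for a in ids], dtype=float)
    # Linearize ||p - P_i||^2 = d_i^2 against the last anchor, then solve.
    A = 2 * (P[:-1] - P[-1])
    b = d[-1]**2 - d[:-1]**2 + (P[:-1]**2).sum(1) - (P[-1]**2).sum()
    return np.linalg.lstsq(A, b, rcond=None)[0]

adj = {1: [2], 2: [1, 3, 4], 3: [2, 5], 4: [2, 5], 5: [3, 4]}
anchors = {1: (0.0, 0.0), 3: (10.0, 0.0), 4: (0.0, 10.0)}
print(dv_hop(adj, anchors, unknown=5))
```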

Review
UAV Ad Hoc Network Routing Algorithms in Space–Air–Ground Integrated Networks: Challenges and Directions
Drones 2023, 7(7), 448; https://doi.org/10.3390/drones7070448 - 06 Jul 2023
Viewed by 1031
Abstract
With the rapid development of 5G and 6G communications in recent years, there has been significant interest in space–air–ground integrated networks (SAGINs), which aim to achieve seamless all-area, all-time coverage. As a key component of SAGINs, flying ad hoc networks (FANETs) have been widely used in the agriculture and transportation sectors in recent years. Reliable communication in SAGINs requires efficient routing algorithms to support it. In this study, we analyze the unique communication architecture of FANETs in SAGINs. At the same time, existing routing protocols are presented and clustered. In addition, we review the latest research advances in routing algorithms over the last five years. Finally, we clarify the future research trends of FANET routing algorithms in SAGINs by discussing the algorithms and comparing the routing experiments with the characteristics of unmanned aerial vehicles.

Article
Performance Analysis of a Drone-Assisted FSO Communication System over Málaga Turbulence under AoA Fluctuations
Drones 2023, 7(6), 374; https://doi.org/10.3390/drones7060374 - 03 Jun 2023
Viewed by 726
Abstract
Future wireless communications have been envisaged to benefit from integrating drones and free space optical (FSO) communications, which would provide links with line-of-sight propagation and large communication capacity. This paper investigates the theoretical performance of a drone-assisted downlink FSO system. It utilizes the Málaga distribution to characterize the effect of atmospheric turbulence on the optical signal for the drone–terrestrial user link, taking into account atmospheric attenuation, pointing errors, and angle-of-arrival fluctuations. The probability density function and cumulative distribution function are then expressed in closed form under the heterodyne detection and intensity modulation/direct detection techniques, respectively. Thereafter, analytical expressions for the average bit error rate (BER) and the ergodic capacity are given. In particular, the asymptotic behavior of the average BER of the considered system at high optical power is presented for heterodyne detection. Monte Carlo simulation results certify the theoretical analytical results. Correspondingly, the field of view of the receiver is analyzed for optimal communication performance.

Article
Node Selection Algorithm for Federated Learning Based on Deep Reinforcement Learning for Edge Computing in IoT
Electronics 2023, 12(11), 2478; https://doi.org/10.3390/electronics12112478 - 31 May 2023
Viewed by 832
Abstract
The Internet of Things (IoT) and edge computing technologies have been rapidly developing in recent years, leading to the emergence of new challenges in privacy and security. Personal privacy and data leakage have become major concerns in IoT edge computing environments. Federated learning has been proposed as a solution to address these privacy issues, but the heterogeneity of devices in IoT edge computing environments poses a significant challenge to the implementation of federated learning. To overcome this challenge, this paper proposes a novel node selection strategy based on deep reinforcement learning to optimize federated learning in heterogeneous device IoT environments. Additionally, a metric model for IoT devices is proposed to evaluate the performance of different devices. The experimental results demonstrate that the proposed method can improve training accuracy by 30% in a heterogeneous device IoT environment.

Article
Layerwise Adversarial Learning for Image Steganography
Electronics 2023, 12(9), 2080; https://doi.org/10.3390/electronics12092080 - 01 May 2023
Viewed by 823
Abstract
Image steganography is a subfield of pattern recognition. It involves hiding secret data in a cover image and extracting the secret data from the stego image (also described as a container image) when needed. Existing image steganography methods based on deep neural networks (DNNs) usually have a strong embedding capacity, but the appearance of container images is easily altered by visual watermarks of the secret data. One reason for this is that the location information of the visual watermarks changes during the end-to-end training of the hiding network. In this paper, we propose a layerwise adversarial training method to address this limitation. Specifically, unlike other methods, we add a single-layer subnetwork and a discriminator behind each layer to capture their representational power. The representational power serves two purposes: first, it updates the weights of each layer, which alleviates memory requirements; second, it updates the weights of the shared discriminator, which guarantees that the location information of the visual watermarks remains unchanged. Experiments on two datasets show that the proposed method significantly outperforms the most advanced methods.

Article
DCEC: D2D-Enabled Cost-Aware Cooperative Caching in MEC Networks
Electronics 2023, 12(9), 1974; https://doi.org/10.3390/electronics12091974 - 24 Apr 2023
Viewed by 551
Abstract
Various kinds of powerful intelligent mobile devices (MDs) need to access multimedia content anytime and anywhere, which places enormous pressure on mobile wireless networks. Fetching content from remote sources may introduce overly long access delays, resulting in a poor quality of experience (QoE). In this article, we consider the advantages of combining mobile/multi-access edge computing (MEC) with device-to-device (D2D) technologies. We propose a D2D-enabled cooperative edge caching (DCEC) architecture to reduce content access delay. We design the DCEC caching management scheme through the maximization of a monotone submodular function under matroid constraints. The DCEC scheme includes a proactive cache placement algorithm and a reactive cache replacement algorithm, yielding optimal content caching and content updates that minimize the average delay cost of fetching content files. Finally, simulations compare the DCEC network architecture with the MEC and D2D networks, and the DCEC caching management scheme with the least-frequently-used and least-recently-used schemes. The numerical results verify that the proposed DCEC scheme is effective at improving the cache hit ratio and the average delay cost, thereby improving users' QoE.
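Greedy selection is the standard workhorse for maximizing a monotone submodular function under a matroid constraint (it guarantees a 1/2 approximation), so the placement step can be sketched generically as below. The per-cache capacity matroid, the toy popularity-based gain function, and all names are illustrative, not the paper's exact DCEC formulation.

```python
# Greedy maximization of a monotone submodular gain under a partition
# matroid (each cache holds at most `capacity` items). Toy model only;
# greedy yields a 1/2 approximation for this problem class.

def greedy_cache_placement(caches, items, capacity, gain):
    """gain(placement, cache, item) -> marginal delay saving (submodular)."""
    placement = {c: set() for c in caches}
    candidates = {(c, x) for c in caches for x in items}
    while candidates:
        best = max(candidates, key=lambda cx: gain(placement, *cx))
        if gain(placement, *best) <= 0:
            break
        c, x = best
        placement[c].add(x)
        candidates.discard(best)
        if len(placement[c]) >= capacity:        # matroid: per-cache budget
            candidates -= {(c, y) for y in items}
    return placement

# Toy marginal gain: caching a popular file saves delay, with diminishing
# returns as more caches already hold a copy (popularity is assumed data).
popularity = {"a": 5.0, "b": 3.0, "c": 1.0}

def toy_gain(placement, cache, item):
    if item in placement[cache]:
        return 0.0
    copies = sum(item in held for held in placement.values())
    return popularity[item] / (1 + copies)

print(greedy_cache_placement(["edge1", "edge2"], list(popularity), 2, toy_gain))
```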

Article
Image Inpainting with Parallel Decoding Structure for Future Internet
Electronics 2023, 12(8), 1872; https://doi.org/10.3390/electronics12081872 - 15 Apr 2023
Viewed by 724
Abstract
Image inpainting benefits much from the future Internet, but the memory and computational cost of encoding image features in deep learning methods poses great challenges to this field. In this paper, we propose a parallel decoding structure based on GANs for image inpainting, which comprises a single encoding network and a parallel decoding network. By adding a diet parallel extended-decoder path for semantic inpainting (Diet-PEPSI) unit to the encoder network, we can employ a new rate-adaptive dilated convolutional layer that shares weights to dynamically generate feature maps for a given dilation rate, effectively decreasing the number of convolutional-layer parameters. For the decoding network, composed of a rough path and an inpainting path, we propose an improved CAM for reconstruction in the decoder, which yields a smooth transition at the borders of defective areas. For the discriminator, we substitute the local discriminator with a region ensemble discriminator, which removes the restriction of traditional methods to recovering only square-like regions and enables robust training with a new loss function. Experiments on CelebA and CelebA-HQ verify the significance of the proposed method regarding both resource overhead and recovery performance.

Article
Facial Pose and Expression Transfer Based on Classification Features
Electronics 2023, 12(8), 1756; https://doi.org/10.3390/electronics12081756 - 07 Apr 2023
Viewed by 888
Abstract
Transferring facial pose and expression features from one face to another is a challenging problem and an interesting topic in pattern recognition, but one of great importance with many applications. However, existing models usually learn to transfer pose and expression features with classification labels, which cannot capture all the differences in shape and size between conditional faces and source faces. To solve this problem, we propose a generative adversarial network model based on classification features for facial pose and facial expression transfer. We first construct a two-stage classifier to capture the high-dimensional classification features of each face. Then, the proposed generation model transfers pose and expression features using these classification features. In addition, we successfully combine two cost functions with different convergence speeds to learn pose and expression features. Compared to state-of-the-art models, the proposed model achieves leading scores for facial pose and expression transfer on two datasets.

Article
Two-Stage Generator Network for High-Quality Image Inpainting in Future Internet
Electronics 2023, 12(6), 1490; https://doi.org/10.3390/electronics12061490 - 22 Mar 2023
Viewed by 872
Abstract
Sharpness is an important factor for image inpainting in the future Internet, but the massive model parameters involved may produce insufficient edge consistency and reduce image quality. In this paper, we propose a two-stage transformer-based high-resolution image inpainting method to address this issue. The model consists of a coarse and a fine generator network. A self-attention mechanism is introduced to guide the transformation of higher-order semantics across the network layers, accelerate forward propagation, and reduce the computational cost. An adaptive multi-head attention mechanism is applied to the fine network to control the input of the features and reduce redundant computations during training. The pyramid loss and perceptual loss are fused as the loss function of the generator network to improve the efficiency of the model. Comparisons with Pennet, GapNet, and Partial show the significance of the proposed method in reducing parameter scale and improving the resolution and texture details of the inpainted image.

Article
Non-Euclidean Graph-Convolution Virtual Network Embedding for Space–Air–Ground Integrated Networks
Drones 2023, 7(3), 165; https://doi.org/10.3390/drones7030165 - 27 Feb 2023
Cited by 1 | Viewed by 882
Abstract
For achieving seamless global coverage and real-time communications while providing intelligent applications with increased quality of service (QoS), AI-enabled space–air–ground integrated networks (SAGINs) have attracted widespread attention from all walks of life. However, high-intensity interactions pose fundamental challenges for resource orchestration and security. Meanwhile, virtual network embedding (VNE) is applied to the function decoupling of various physical networks due to its flexibility. Inspired by the above, for SAGINs with non-Euclidean structures, we propose a graph-convolution virtual network embedding algorithm. Specifically, based on the excellent decision-making properties of deep reinforcement learning (DRL), we design an orchestration network combined with graph convolution to calculate the embedding probability of nodes. It fuses neighborhood structure information, fully fits the original characteristics of the physical network, and uses a specified reward mechanism to guide positive learning. Moreover, by imposing security-level constraints on physical nodes, it restricts resource access. Comprehensive and rigorous experiments are carried out in a simulation environment. Results on long-term average revenue, virtual network request (VNR) acceptance ratio, and long-term revenue–cost ratio show that the proposed algorithm outperforms advanced baselines.

Article
Collaborative Storage and Resolution Method between Layers in Hierarchical ICN Name Resolution Systems
Future Internet 2023, 15(2), 74; https://doi.org/10.3390/fi15020074 - 13 Feb 2023
Viewed by 595
Abstract
The name resolution system is an important infrastructure in Information-Centric Networking (ICN) architectures that follow the identifier–locator separation model. In the Local Name Resolution System (LNMRS), a hierarchical name resolution system for latency-sensitive scenarios, higher-level resolution nodes serve more users and bear more storage pressure, which leads to an unbalanced storage load between layers and calls for inter-layer collaborative storage under deterministic service-latency constraints. In this paper, we use the constraints required for inter-layer collaborative resolution to construct an index neighbor structure and perform collaborative storage based on this structure. This method relieves the storage pressure on high-level resolution nodes. Experimental results show that the increase in total storage load brought by the proposed method is 57.1% of that of the MGreedy algorithm, 8.1% of that of the Greedy algorithm, and 0.8% of that of the K-Medoids algorithm when relieving the same storage load for high-level resolution nodes. Meanwhile, the deterministic service-latency property is preserved when our proposed method is used for collaborative resolution.

Article
Design of A Smart Tourism Management System through Multisource Data Visualization-Based Knowledge Discovery
Electronics 2023, 12(3), 642; https://doi.org/10.3390/electronics12030642 - 28 Jan 2023
Viewed by 2176
Abstract
Nowadays, tourism management is a universal concern around the world. Generating tourism characteristics for travelers is important for digitally facilitating tourism business scheduling, yet there is still a lack of technologies competent at managing tourism business affairs. Therefore, in this paper a smart tourism management system is designed through multisource data visualization-based knowledge discovery. First, this work presents the overall architecture of the tourism management system in terms of three modules: data collection, data visualization, and knowledge discovery. Then, multisource business data are processed with visualization techniques to output statistical analysis results for different individuals. On this basis, characterized knowledge can be discovered from the visualization results and presented to travelers or administrators. In addition, a case study on real data is conducted to test the running performance of the proposed tourism management system. Note that the main body of public service tourism is the government or other social organizations that do not regard profit as the main purpose; public service tourism is a general term for products and services with an obviously public nature. The testing results show that user preferences can be mined and corresponding travel plans suggested via multisource data visualization-based knowledge discovery.

Article
HetSev: Exploiting Heterogeneity-Aware Autoscaling and Resource-Efficient Scheduling for Cost-Effective Machine-Learning Model Serving
Electronics 2023, 12(1), 240; https://doi.org/10.3390/electronics12010240 - 03 Jan 2023
Viewed by 1049
Abstract
To accelerate the inference of machine-learning (ML) model serving, clusters of machines require the use of expensive hardware accelerators (e.g., GPUs) to reduce execution time. Advanced inference serving systems are needed to satisfy latency service-level objectives (SLOs) in a cost-effective manner. Novel autoscaling mechanisms that greedily minimize the number of service instances while ensuring SLO compliance are helpful. However, we find that such mechanisms are not adequate to guarantee cost effectiveness across heterogeneous GPU hardware, nor do they maximize resource utilization. In this paper, we propose HetSev to address these challenges by incorporating heterogeneity-aware autoscaling and resource-efficient scheduling to achieve cost effectiveness. We develop an autoscaling mechanism which accounts for SLO compliance and GPU heterogeneity, thus provisioning the appropriate type and number of instances to guarantee cost effectiveness. We leverage multi-tenant inference to improve GPU resource utilization, while alleviating inter-tenant interference by avoiding the co-location of identical ML instances on the same GPU during placement decisions. HetSev is integrated into Kubernetes and deployed onto a heterogeneous GPU cluster. We evaluated the performance of HetSev using several representative ML models. Compared with default Kubernetes, HetSev reduces resource cost by up to 2.15× while meeting SLO requirements.
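The heterogeneity-aware part of the autoscaling decision can be boiled down to a small cost minimization: given each GPU type's SLO-compliant throughput and hourly price, choose instance counts that cover the offered load at minimum cost. This brute-force sketch uses invented throughput and price figures, and ignores the placement and multi-tenant interference aspects that HetSev also handles.

```python
# Choose per-GPU-type instance counts that cover a target load at minimum
# hourly cost. Throughput/price figures are invented, not HetSev profiles.
from itertools import product

gpu_types = {            # type: (SLO-compliant requests/s, $/hour)
    "t4":   (120, 0.35),
    "a10g": (300, 1.00),
    "a100": (900, 3.00),
}

def cheapest_fleet(load_rps, max_per_type=8):
    names = list(gpu_types)
    best = None
    for counts in product(range(max_per_type + 1), repeat=len(names)):
        rps = sum(n * gpu_types[t][0] for n, t in zip(counts, names))
        if rps < load_rps:
            continue  # fleet cannot meet the SLO-compliant load
        cost = sum(n * gpu_types[t][1] for n, t in zip(counts, names))
        if best is None or cost < best[0]:
            best = (cost, dict(zip(names, counts)))
    return best

print(cheapest_fleet(1000))  # cheapest mix covering 1000 requests/s
```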

Article
InfoMax Classification-Enhanced Learnable Network for Few-Shot Node Classification
Electronics 2023, 12(1), 239; https://doi.org/10.3390/electronics12010239 - 03 Jan 2023
Viewed by 1201
Abstract
Graph neural networks have a wide range of applications, such as citation networks, social networks, and knowledge graphs. Among various graph analyses, node classification has garnered much attention. While many of the recent network embedding models achieve promising performance, they usually require sufficient labeled nodes for training, which does not meet the reality that only a few labeled nodes are available in novel classes. While few-shot learning is commonly employed in the vision and language domains to address the problem of insufficient training samples, there are still two characteristics of the few-shot node classification problem in the non-Euclidean domain that require investigation: (1) how to extract the most informative knowledge for a class and use it on testing data and (2) how to thoroughly explore the limited number of support sets and maximize the amount of information transferred to the query set. We propose an InfoMax Classification-Enhanced Learnable Network (ICELN) to address these issues, motivated by Deep Graph InfoMax (DGI), which adapts the InfoMax principle to the summary representation of a graph and the patch representation of a node. By increasing the amount of information that is shared between the query nodes and the class representation, an ICELN can transfer the maximum amount of information to unlabeled data and enhance the graph representation potential. The whole model is trained using an episodic method, which simulates the actual testing environment to ensure the meta-knowledge learned from previous experience may be used for entirely new classes that have not been studied before. Extensive experiments are conducted on five real-world datasets to demonstrate the advantages of an ICELN over the existing few-shot node classification methods.

Review
Human-Computer Interaction System: A Survey of Talking-Head Generation
Electronics 2023, 12(1), 218; https://doi.org/10.3390/electronics12010218 - 01 Jan 2023
Cited by 5 | Viewed by 4295
Abstract
Virtual humans are widely employed in various industries, including personal assistance, intelligent customer service, and online education, thanks to the rapid development of artificial intelligence. An anthropomorphic digital human can quickly connect with people and enhance the user experience in human–computer interaction. Hence, we design a human–computer interaction system framework that includes speech recognition, text-to-speech, dialogue systems, and virtual human generation. Next, we classify talking-head video generation models within the virtual human deep generation framework. Meanwhile, we systematically review the past five years of technological advancements and trends in talking-head video generation, highlight the critical works, and summarize the datasets.
