Topic Editors

Qingdao Institute of Software, College of Computer Science and Technology, China University of Petroleum (East China), Qingdao 266580, China
Department of Computing, The Hong Kong Polytechnic University, Hong Kong SAR, China
Graduate School of Science and Engineering, Hosei University, Tokyo, Japan

Future Internet Architecture: Difficulties and Opportunities

Abstract submission deadline
closed (31 March 2024)
Manuscript submission deadline
30 June 2024

Topic Information

Dear Colleagues,

Since the formal deployment of the TCP/IP protocol suite in the 1980s, and with the development of network technology, Internet applications (including email, cloud computing, online shopping, social networks, etc.) have increasingly shaped people's work and lives. New applications in areas such as smart transportation, smart cities, the industrial Internet, telemedicine, and holographic communications have quietly emerged, and people are rapidly entering an intelligent world in which everything is perceived and connected. Countless IoT-based applications have been developed to date: smart cities, smart highways, remote robotic surgery, autonomous driving, drones, VR games, and more. As the Internet officially ushers in the second stage of its development, the architecture and capabilities it will require pose enormous challenges.

This topic aims to bring together relevant researchers from industry and academia to share their latest discoveries and developments in the fields of Internet architecture. The topics of interest include, but are not limited to, the following:

  1. Advanced communication network infrastructures for future Internet;
  2. Architecture and protocol system design for future Internet;
  3. Resource management, allocation, orchestration, and optimization for future Internet;
  4. Internet of Things technologies for future Internet;
  5. Multiple access and transmission control technologies for future Internet;
  6. Software-defined network functions and network virtualization technologies for future Internet;
  7. Spectrum-sharing technologies for future Internet;
  8. Big data and security issues for future Internet;
  9. Cloud computing, fog computing, and edge computing technologies for future Internet;
  10. Digital-twin technologies and applications for future Internet;
  11. Intelligent transportation system technologies for future Internet;
  12. System interoperability and flexible service composition for future Internet;
  13. Smart systems for public security and safety for future Internet;
  14. Social network analysis and mining for future Internet;
  15. Discovery and identification of false and bad information for future Internet;
  16. Intelligent analysis and processing of multimodal data for future Internet;
  17. Application of artificial intelligence for future Internet;
  18. Test platform and prototype deployment;
  19. Other application aspects for future Internet.

Dr. Peiying Zhang
Dr. Haotong Cao
Dr. Keping Yu
Topic Editors


Keywords

  • future Internet architecture
  • advanced technologies
  • network management and optimization
  • future communication technology
  • application of artificial intelligence

Participating Journals

Journal          Impact Factor   CiteScore   Launched   First Decision (median)   APC
Drones           4.8             6.1         2017       17.9 days                 CHF 2600
Electronics      2.9             4.7         2012       15.6 days                 CHF 2400
Future Internet  3.4             6.7         2009       11.8 days                 CHF 1600
Information      3.1             5.8         2010       18 days                   CHF 1600
Mathematics      2.4             3.5         2013       16.9 days                 CHF 2600

Preprints.org is a multidisciplinary platform providing preprint services, dedicated to sharing your research from the start and empowering your research journey.

MDPI Topics is cooperating with Preprints.org and has built a direct connection between MDPI journals and Preprints.org. Authors are encouraged to enjoy the benefits by posting a preprint at Preprints.org prior to publication:

  1. Immediately share your ideas ahead of publication and establish your research priority;
  2. Protect your ideas with a time-stamped preprint record;
  3. Enhance the exposure and impact of your research;
  4. Receive feedback from your peers in advance;
  5. Have it indexed in Web of Science (Preprint Citation Index), Google Scholar, Crossref, SHARE, PrePubMed, Scilit and Europe PMC.

Published Papers (29 papers)

21 pages, 5735 KiB  
Article
Advancing Borehole Imaging: A Classification Database Developed via Adaptive Ring Segmentation
by Zhaopeng Deng, Shuangyang Han, Zeqi Liu, Jian Wang and Haoran Zhao
Electronics 2024, 13(6), 1107; https://doi.org/10.3390/electronics13061107 - 18 Mar 2024
Abstract
The use of in-hole imaging to investigate geological structure characteristics is one of the crucial methods for the study of rock mass stability and rock engineering design. The in-hole images are usually influenced by the lighting and imaging characteristics, resulting in the presence of interference noise regions in the images and consequently impacting the classification accuracy. To enhance the analytical efficacy of in-hole images, this paper employs the proposed optimal non-concentric ring segmentation method to establish a new database. This method establishes the transformation function based on the Ansel Adams Zone System and the fluctuation values of the grayscale mean, adjusting the gray-level distribution of images to extract two visual blind spots of different scales. Thus, the inner and outer circles are located with these blind spots to achieve the adaptive acquisition of the optimal ring. Finally, we use the optimal non-concentric ring segmentation method to traverse all original images to obtain the borehole image classification database. To validate the effectiveness of this method, we conduct experiments using various segmentation and classification evaluation metrics. The results show that the Jaccard and Dice of the optimal non-concentric ring segmentation approach are 88.43% and 98.55%, respectively, indicating superior segmentation performance compared to other methods. Furthermore, after employing four commonly used classification models to validate the performance of the new classification database, the results demonstrate a significant improvement in accuracy and macro-average compared to the original database, with the highest increase in accuracy reaching 4.2%. These results fully demonstrate the effectiveness of the proposed optimal non-concentric ring segmentation method. Full article
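The Jaccard and Dice scores reported above are standard overlap metrics for binary segmentation masks. A minimal sketch of how they are computed (illustrative only, not the paper's implementation):

```python
import numpy as np

def jaccard(a, b):
    # intersection over union of two binary masks
    a, b = np.asarray(a).astype(bool), np.asarray(b).astype(bool)
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 1.0

def dice(a, b):
    # 2 * intersection over the sum of mask sizes
    a, b = np.asarray(a).astype(bool), np.asarray(b).astype(bool)
    total = a.sum() + b.sum()
    return 2 * np.logical_and(a, b).sum() / total if total else 1.0
```

Dice weights the intersection more heavily than Jaccard, which is why Dice values tend to be higher for the same segmentation.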

24 pages, 9678 KiB  
Article
Fairness-Aware Dynamic Ride-Hailing Matching Based on Reinforcement Learning
by Yuan Liang
Electronics 2024, 13(4), 775; https://doi.org/10.3390/electronics13040775 - 16 Feb 2024
Abstract
The core issue in ridesharing is designing reasonable algorithms to match drivers and passengers. The ridesharing matching problem, influenced by various constraints such as weather, traffic, and supply–demand dynamics in real-world scenarios, requires optimization of multiple objectives like total platform revenue and passenger waiting time. Due to its complexity in terms of constraints and optimization goals, the ridesharing matching problem becomes a central issue in the field of mobile transportation. However, the existing research lacks exploration into the fairness of driver income, and some algorithms are not practically applicable in the industrial context. To address these shortcomings, we have developed a fairness-oriented dynamic matching algorithm for ridesharing, effectively optimizing overall platform efficiency (expected total driver income) and income fairness among drivers (entropy of weighted amortization fairness information between drivers). Firstly, we introduced a temporal dependency of matching outcomes on subsequent matches in the scenario setup and used reinforcement learning to predict these temporal dependencies, overcoming the limitation of traditional matching algorithms that rely solely on historical data and current circumstances for order allocation. Then, we implemented a series of optimization solutions, including the introduction of a time window matching model, pruning operations, and metric representation adjustments, to enhance the algorithm’s adaptability and scalability for large datasets. These solutions also ensure the algorithm’s efficiency. Finally, experiments conducted on real datasets demonstrate that our fairness-oriented algorithm based on reinforcement learning achieves improvements of 81.4%, 28.5%, and 79.7% over traditional algorithms in terms of fairness, platform utility, and matching efficiency, respectively. Full article

26 pages, 17847 KiB  
Article
Distributed Mobility Management Support for Low-Latency Data Delivery in Named Data Networking for UAVs
by Mohammed Bellaj, Najib Naja and Abdellah Jamali
Future Internet 2024, 16(2), 57; https://doi.org/10.3390/fi16020057 - 10 Feb 2024
Abstract
Named Data Networking (NDN) has emerged as a promising architecture to overcome the limitations of the conventional Internet Protocol (IP) architecture, particularly in terms of mobility, security, and data availability. However, despite the advantages it offers, producer mobility management remains a significant challenge for NDN, especially for moving vehicles and emerging technologies such as Unmanned Aerial Vehicles (UAVs), known for their high-speed and unpredictable movements, which makes it difficult for NDN to maintain seamless communication. To solve this mobility problem, we propose a Distributed Mobility Management Scheme (DMMS) to support UAV mobility and ensure low-latency content delivery in NDN architecture. DMMS utilizes decentralized Anchors to forward proactively the consumer’s Interest packets toward the producer’s predicted location when handoff occurs. Moreover, it introduces a new forwarding approach that combines the standard and location-based forwarding strategy to improve forwarding efficiency under producer mobility without changing the network structure. Using a realistic scenario, DMMS is evaluated and compared against two well-known solutions, namely MAP-ME and Kite, using the ndnSIM simulations. We demonstrate that DMMS achieves better results compared to Kite and MAP-ME solutions in terms of network cost and consumer quality-of-service metrics. Full article

20 pages, 1128 KiB  
Article
Service Function Chain Deployment Algorithm Based on Deep Reinforcement Learning in Space–Air–Ground Integrated Network
by Xu Feng, Mengyang He, Lei Zhuang, Yanrui Song and Rumeng Peng
Future Internet 2024, 16(1), 27; https://doi.org/10.3390/fi16010027 - 16 Jan 2024
Abstract
SAGIN is formed by the fusion of ground networks and aircraft networks. It overcomes the coverage limitations of terrestrial communication, which cannot span the whole world, bringing new opportunities for network communication in remote areas. However, the many heterogeneous devices in SAGIN pose significant challenges for end-to-end resource management, and the limited regional heterogeneous resources also threaten the QoS delivered to users. In this regard, this paper proposes a hierarchical resource management structure for SAGIN, named SAGIN-MEC, based on SDN, NFV, and MEC, aiming to facilitate the systematic management of heterogeneous network resources. Furthermore, to minimize operator deployment costs while ensuring QoS, this paper formulates a resource scheduling optimization model tailored to SAGIN scenarios that minimizes energy consumption. Additionally, we propose a deployment algorithm, named DRL-G, based on heuristics and DRL, to allocate heterogeneous network resources within SAGIN effectively. Experimental results show that SAGIN-MEC can reduce end-to-end delay by 6–15 ms compared to the terrestrial edge network, and that, compared to other algorithms, the DRL-G algorithm can improve the service request reception rate by up to 20%. In terms of energy consumption, it reduces average energy consumption by 4.4% compared to the PG algorithm.

18 pages, 1239 KiB  
Article
Utilizing User Bandwidth Resources in Information-Centric Networking through Blockchain-Based Incentive Mechanism
by Qiang Liu, Rui Han and Yang Li
Future Internet 2024, 16(1), 11; https://doi.org/10.3390/fi16010011 - 28 Dec 2023
Abstract
Idle bandwidth resources are inefficiently distributed among different users. Currently, the utilization of user bandwidth resources mostly relies on traditional IP networks, implementing relevant techniques at the application layer, which creates scalability issues and brings additional system overheads. Information-Centric Networking (ICN), based on the idea of separating identifiers and locators, offers the potential to aggregate idle bandwidth resources from a network layer perspective. This paper proposes a method for utilizing user bandwidth resources in ICN; specifically, we treat the use of user bandwidth resources as a service and assign service IDs (identifiers), and when network congestion (the network nodes are overloaded) occurs, the traffic can be routed to the user side for forwarding through the ID/NA (Network Address) cooperative routing mechanism of ICN, thereby improving the scalability of ICN transmission and the utilization of underlying network resources. To enhance the willingness of users to contribute idle bandwidth resources, we establish a secure and trustworthy bandwidth trading market using blockchain technology. We also design an incentive mechanism based on the Proof-of-Network-Contribution (PoNC) consensus algorithm; users can “mine” by forwarding packets. The experimental results show that utilizing idle bandwidth can significantly improve the scalability of ICN transmission under experimental conditions, bringing a maximum throughput improvement of 19.4% and reducing the packet loss rate. Compared with existing methods, using ICN technology to aggregate idle bandwidth for network transmission will have a more stable and lower latency, and it brings a maximum utilization improvement of 13.7%. Full article

17 pages, 3490 KiB  
Article
Improving Audio Classification Method by Combining Self-Supervision with Knowledge Distillation
by Xuchao Gong, Hongjie Duan, Yaozhong Yang, Lizhuang Tan, Jian Wang and Athanasios V. Vasilakos
Electronics 2024, 13(1), 52; https://doi.org/10.3390/electronics13010052 - 21 Dec 2023
Abstract
Current audio single-modality self-supervised classification mainly adopts strategies based on audio spectrogram reconstruction. Overall, this self-supervised approach is relatively narrow and cannot fully mine key semantic information in the time and frequency domains. In this regard, this article proposes a self-supervised method combined with knowledge distillation to further improve the performance of audio classification tasks. Firstly, considering the particularities of the two-dimensional audio spectrogram, self-supervised strategies are constructed both along the individual time and frequency dimensions and along the joint time-frequency dimension, allowing the model to learn spectrogram details and key discriminative information through information reconstruction, contrastive learning, and other methods. Secondly, for feature-level self-supervision, two teacher-student learning strategies are constructed, one internal to the model and one based on knowledge distillation; fitting the teacher model's feature representation ability further enhances the generalization of audio classification. Comparative experiments were conducted on the AudioSet, ESC50, and VGGSound datasets. The results show that the algorithm proposed in this paper improves recognition accuracy by 0.5% to 1.3% over the best audio single-modality method.
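Teacher-student knowledge distillation of the kind described above is commonly implemented as a temperature-scaled KL divergence between teacher and student outputs. A minimal NumPy sketch under that standard formulation (not the paper's exact loss):

```python
import numpy as np

def softmax(z, T=1.0):
    # numerically stable temperature-scaled softmax
    z = np.asarray(z, dtype=float) / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=2.0):
    # KL(teacher || student) on softened distributions, scaled by T^2
    # so gradients keep a comparable magnitude across temperatures
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    kl = (p * (np.log(p + 1e-12) - np.log(q + 1e-12))).sum(axis=-1)
    return float(kl.mean() * T * T)
```

In training, this term is usually mixed with the ordinary cross-entropy on ground-truth labels via a weighting coefficient.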

18 pages, 2524 KiB  
Article
A Routing Strategy Based Genetic Algorithm Assisted by Ground Access Optimization for LEO Satellite Constellations
by Peiying Zhang, Chong Lv, Guanjun Xu, Haoyu Wang, Lizhuang Tan and Kostromitin Konstantin Igorevich
Electronics 2023, 12(23), 4762; https://doi.org/10.3390/electronics12234762 - 24 Nov 2023
Abstract
Large-scale low-Earth-orbit satellite networks (LSNs) have attracted increasing attention in recent years. These systems offer advantages such as low latency, high-bandwidth communication, and all-terrain coverage. However, the main challenge faced by LSNs is the calculation and maintenance of routing strategies, primarily due to the large scale and dynamic network topology of LSN constellations. As the number of satellites in constellations continues to rise, the feasibility of the centralized routing strategy, which computes all shortest routes between every pair of satellites, becomes increasingly limited by space and time constraints. This approach is also not suitable for the Walker Delta formation, which is becoming more popular for giant constellations. To find an effective routing strategy, this paper formulates the satellite routing problem as a mixed-integer linear programming (MILP) problem, proposes a routing strategy based on a genetic algorithm (GA), and comprehensively considers the efficiency with which source or destination ground stations access satellite constellations. The routing strategy integrates ground station ingress and egress policies with inter-satellite packet forwarding policies and reduces the cost of routing decisions. The experimental results show that, compared with traditional satellite routing algorithms, the proposed routing strategy achieves better link capacity utilization, a lower round-trip communication time, and an improved traffic reception rate.

22 pages, 976 KiB  
Article
Flow-Based Joint Programming of Time Sensitive Task and Network
by Yingying Chi, Huayu Zhang, Yong Liu, Ning Chen, Zhe Zheng, Hailong Zhu, Peiying Zhang and Haotian Zhan
Electronics 2023, 12(19), 4103; https://doi.org/10.3390/electronics12194103 - 30 Sep 2023
Cited by 1
Abstract
Owing to the application of artificial intelligence and big data analysis in industry, automobiles, aerospace, and other fields, the high-bandwidth candidate, time-sensitive networking (TSN), is being introduced into data communication networks. Apart from keeping its safety-critical and real-time guarantees, it faces the challenge of satisfying large traffic transmissions, such as sampled video for computer vision. In this paper, we consider task scheduling and the time-sensitive network together and formalize them as a first-order satisfiability modulo theories (SMT) problem. Based on the solver's result, we build flow-level scheduling based on IEEE 802.1Qbv. By splitting flows properly, the approach grows the number of constraint inequalities more slowly with traffic than the traditional frame-based programming model does and achieves near 100% utilization. It can serve as a general model for deterministic task and network scheduling design.

15 pages, 829 KiB  
Article
Image Inpainting Based on Multi-Level Feature Aggregation Network for Future Internet
by Dong Wang, Liuqing Hu, Qing Li, Guanyi Wang and Hongan Li
Electronics 2023, 12(19), 4065; https://doi.org/10.3390/electronics12194065 - 28 Sep 2023
Abstract
(1) Background: In the future Internet era, clarity and structural rationality are important factors in image inpainting. Currently, image inpainting techniques based on generative adversarial networks have made great progress; however, in practical applications, there are still problems of unreasonable or blurred inpainting results for high-resolution images and images with complex structures. (2) Methods: In this work, we designed a lightweight multi-level feature aggregation network that extracts features from convolutions with different dilation rates, enabling the network to obtain more feature information and recover more reasonable missing image content. Fast Fourier convolution was designed and used in the generative network, enabling the generator to consider the global context at a shallow level, making it easier to perform high-resolution image inpainting tasks. (3) Results: The experiment shows that the method designed in this paper performs well in geometrically complex and high-resolution image inpainting tasks, providing a more reasonable and clearer inpainting image. Compared with the most advanced image inpainting methods, our method outperforms them in both subjective and objective evaluations. (4) Conclusions: The experimental results indicate that the method proposed in this paper has better clarity and more reasonable structural features. Full article

14 pages, 1843 KiB  
Article
A Spatio-Temporal Spotting Network with Sliding Windows for Micro-Expression Detection
by Wenwen Fu, Zhihong An, Wendong Huang, Haoran Sun, Wenjuan Gong and Jordi Gonzàlez
Electronics 2023, 12(18), 3947; https://doi.org/10.3390/electronics12183947 - 19 Sep 2023
Cited by 1
Abstract
Micro-expressions reveal underlying emotions and are widely applied in political psychology, lie detection, law enforcement and medical care. Micro-expression spotting aims to detect the temporal locations of facial expressions from video sequences and is a crucial task in micro-expression recognition. In this study, the problem of micro-expression spotting is formulated as micro-expression classification per frame. We propose an effective spotting model with sliding windows called the spatio-temporal spotting network. The method involves a sliding window detection mechanism, combines the spatial features from the local key frames and the global temporal features and performs micro-expression spotting. The experiments are conducted on the CAS(ME)2 database and the SAMM Long Videos database, and the results demonstrate that the proposed method outperforms the state-of-the-art method by 30.58% for the CAS(ME)2 and 23.98% for the SAMM Long Videos according to overall F-scores. Full article
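Framing spotting as per-frame classification with a sliding window can be illustrated by a simplified routine that smooths per-frame scores over a window and reports contiguous above-threshold runs as spotted intervals (a hypothetical smoothing-and-threshold scheme, not the paper's network):

```python
import numpy as np

def spot_intervals(frame_scores, win=5, thresh=0.5):
    # moving-average smoothing of per-frame probabilities
    scores = np.convolve(frame_scores, np.ones(win) / win, mode="same")
    active = scores >= thresh
    # collect contiguous runs of active frames as (start, end) intervals
    intervals, start = [], None
    for i, a in enumerate(active):
        if a and start is None:
            start = i
        elif not a and start is not None:
            intervals.append((start, i - 1))
            start = None
    if start is not None:
        intervals.append((start, len(active) - 1))
    return intervals
```

Predicted intervals would then be scored against ground-truth onset/offset annotations, e.g. with the overall F-scores used above.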

21 pages, 5755 KiB  
Article
A Collaborative Inference Algorithm in Low-Earth-Orbit Satellite Network for Unmanned Aerial Vehicle
by Zhengqian Xu, Peiying Zhang, Chengcheng Li, Hailong Zhu, Guanjun Xu and Chenhua Sun
Drones 2023, 7(9), 575; https://doi.org/10.3390/drones7090575 - 11 Sep 2023
Abstract
In recent years, the low-Earth-orbit (LEO) satellite network has achieved considerable development. Moreover, it is necessary to introduce edge computing into LEO networks, which can provide high-quality services, such as worldwide seamless low-delay computation offloading for unmanned aerial vehicles (UAVs) or user terminals and nearby remote-sensing data processing for UAVs or satellites. However, because the computation resource of the satellite is relatively scarce compared to the ground server, it is hard for a single satellite to complete massive deep neural network (DNN) inference tasks in a short time. Consequently, in this paper, we focus on the multi-satellite collaborative inference problem and propose a novel COllaborative INference algorithm for LEO edge computing called COIN-LEO. COIN-LEO manages to split the complete DNN model into several submodels consisting of some consecutive layers and deploy these submodels to several satellites for inference. We innovatively leverage deep reinforcement learning (DRL) to efficiently split the model and use a neural network (NN) to predict the time required for inference tasks of a specific submodel on a specific satellite. By implementing COIN-LEO and evaluating its performance in a highly realistic satellite-network-emulation platform, we find that our COIN-LEO outperforms baseline algorithms in terms of inference throughput, time consumed and network traffic overhead. Full article
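Splitting a DNN into submodels of consecutive layers, as COIN-LEO does, can be illustrated with a brute-force search over cut points that minimizes the slowest stage (a toy sketch assuming per-layer cost estimates; the paper learns the split with DRL rather than exhaustive search):

```python
from itertools import combinations

def split_layers(layer_costs, n_parts):
    # choose n_parts-1 cut points so that the most expensive
    # contiguous submodel (the pipeline bottleneck) is minimized
    n = len(layer_costs)
    best, best_bounds = float("inf"), None
    for cuts in combinations(range(1, n), n_parts - 1):
        bounds = (0,) + cuts + (n,)
        stages = [sum(layer_costs[a:b]) for a, b in zip(bounds, bounds[1:])]
        if max(stages) < best:
            best, best_bounds = max(stages), bounds
    return best_bounds, best
```

For realistic model depths and satellite counts the search space explodes, which motivates learned policies such as COIN-LEO's DRL splitter over exhaustive enumeration.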

22 pages, 6789 KiB  
Article
Delay-Sensitive Service Provisioning in Software-Defined Low-Earth-Orbit Satellite Networks
by Feihu Dong, Yasheng Zhang, Guiyu Liu, Hongzhi Yu and Chenhua Sun
Electronics 2023, 12(16), 3474; https://doi.org/10.3390/electronics12163474 - 16 Aug 2023
Abstract
With the advancement of space technology and satellite communications, low-Earth-orbit (LEO) satellite networks have experienced rapid development in the past decade. In the vision of 6G, LEO satellite networks play an important role in future 6G networks. On the other hand, a variety of applications, including many delay-sensitive applications, are continuously emerging. Due to the highly dynamic nature of LEO satellite networks, supporting time-deterministic services in such networks is challenging. However, we can provide latency guarantees for most delay-sensitive applications through data plane traffic shaping and control plane routing optimization. This paper addresses the routing optimization problem for time-sensitive (TS) flows in software-defined low-Earth-orbit (LEO) satellite networks. We model the problem as an integer linear programming (ILP) model aiming to minimize path handovers and maximum link utilization while meeting TS flow latency constraints. Since this problem is NP-hard, we design an efficient longest continuous path (LCP) approximation algorithm. LCP selects the longest valid path in each topology snapshot that satisfies delay constraints. An auxiliary graph then determines the routing sequence with minimized handovers. We implement an LEO satellite network testbed with Open vSwitch (OVS) and an open-network operating system (ONOS) controller to evaluate LCP. The results show that LCP reduces the number of path handovers by up to 31.7% and keeps the maximum link utilization lowest for more than 75% of the time compared to benchmark algorithms. In summary, LCP achieves excellent path handover optimization and load balancing performance under TS flow latency constraints. Full article
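The core idea of LCP, keeping a feasible path as long as possible across topology snapshots to minimize handovers, can be sketched with a greedy routine (a simplification of the algorithm; `snapshot_paths`, a per-snapshot set of feasible path identifiers, is a hypothetical input):

```python
def min_handover_schedule(snapshot_paths):
    # snapshot_paths: list over time snapshots, each a set of
    # path ids that satisfy the delay constraint in that snapshot.
    # At each switch point, pick the path that stays feasible for
    # the longest run of upcoming snapshots.
    schedule, i, n = [], 0, len(snapshot_paths)
    while i < n:
        best_path, best_len = None, 0
        for p in snapshot_paths[i]:
            j = i
            while j < n and p in snapshot_paths[j]:
                j += 1
            if j - i > best_len:
                best_path, best_len = p, j - i
        schedule.append((best_path, i, i + best_len - 1))
        i += best_len
    return schedule
```

Each entry in the returned schedule is one path held over a run of snapshots, so the number of entries minus one is the number of handovers.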

21 pages, 2292 KiB  
Article
An Explainable Fake News Analysis Method with Stance Information
by Lu Yuan, Hao Shen, Lei Shi, Nanchang Cheng and Hangshun Jiang
Electronics 2023, 12(15), 3367; https://doi.org/10.3390/electronics12153367 - 07 Aug 2023
Cited by 2
Abstract
The high level of technological development has enabled fake news to spread faster than real news in cyberspace, with significant impacts on the balance and sustainability of current and future social systems. Collecting fake news data and using artificial intelligence to detect fake news therefore matter for building a more sustainable and resilient society. Existing methods for detecting fake news have two main limitations: they focus only on the classification of news authenticity, neglecting the semantics between stance information and news authenticity, and no cognition-related information is involved, with insufficient data available for joint stance classification and true-false classification. Therefore, we propose a fake news analysis method based on stance information for explainable fake news detection. To make better use of news data, we construct a fake news dataset built on cognitive information, consisting primarily of stance labels along with true-false labels. We also introduce stance information to further improve news falsity analysis. To better explain the relationship between fake news and stance, we use propensity score matching for causal inference to calculate the correlation between stance information and true-false classification. The experimental results show that propensity score matching yielded a negative correlation between stance consistency and fake news classification.
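Propensity score matching, used above for causal inference, pairs treated samples with control samples of similar propensity scores before comparing outcomes. A greedy nearest-neighbor sketch (illustrative only; practical PSM often adds calipers or optimal matching):

```python
def propensity_match(treated, control):
    # treated, control: lists of (sample_id, propensity_score).
    # Greedy 1:1 nearest-neighbor matching without replacement.
    pairs, used = [], set()
    for tid, ts in treated:
        best, best_d = None, float("inf")
        for cid, cs in control:
            if cid in used:
                continue
            d = abs(ts - cs)
            if d < best_d:
                best, best_d = cid, d
        if best is not None:
            used.add(best)
            pairs.append((tid, best))
    return pairs
```

After matching, the effect of interest (here, the association between stance consistency and the true-false label) is estimated on the matched pairs rather than the full, confounded sample.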

22 pages, 1121 KiB  
Article
Network Resource Allocation Algorithm Using Reinforcement Learning Policy-Based Network in a Smart Grid Scenario
by Zhe Zheng, Yu Han, Yingying Chi, Fusheng Yuan, Wenpeng Cui, Hailong Zhu, Yi Zhang and Peiying Zhang
Electronics 2023, 12(15), 3330; https://doi.org/10.3390/electronics12153330 - 03 Aug 2023
Cited by 1 | Viewed by 947
Abstract
The exponential growth in user numbers has resulted in an overwhelming surge in data that the smart grid must process. To tackle this challenge, edge computing emerges as a vital solution. However, current heuristic resource scheduling approaches often suffer from resource fragmentation and consequently become stuck in locally optimal solutions. This paper introduces a novel network resource allocation method for multi-domain virtual networks with the support of edge computing. First, the approach models the edge network as a multi-domain virtual network model and formulates resource constraints specific to the edge computing network. Second, a policy network is constructed for reinforcement learning (RL), and an optimal resource allocation strategy is obtained under the premise of ensuring resource requirements. In the experimental section, our algorithm is compared with three other algorithms. The experimental results show that the algorithm achieves average improvements of 5.30%, 8.85%, 15.47%, and 22.67% in long-term average revenue–cost ratio, virtual network request acceptance ratio, long-term average revenue, and CPU resource utilization, respectively. Full article
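The policy-network idea can be sketched as a linear softmax policy over candidate substrate nodes trained with a REINFORCE-style update (a minimal illustration, not the paper's algorithm; the node features, demand, and reward are invented):

```python
import math
import random

def softmax(zs):
    m = max(zs)
    es = [math.exp(z - m) for z in zs]
    s = sum(es)
    return [e / s for e in es]

class PolicyAllocator:
    """Linear softmax policy over candidate substrate nodes, REINFORCE-style."""

    def __init__(self, n_features, lr=0.05):
        self.w = [0.0] * n_features
        self.lr = lr

    def probs(self, nodes):
        return softmax([sum(wi * fi for wi, fi in zip(self.w, f)) for f in nodes])

    def choose(self, nodes):
        p = self.probs(nodes)
        r, acc = random.random(), 0.0
        for i, pi in enumerate(p):
            acc += pi
            if r <= acc:
                return i, p
        return len(nodes) - 1, p

    def update(self, nodes, action, p, reward):
        # REINFORCE gradient for a linear softmax policy: w += lr * r * (f_a - E_p[f])
        for k in range(len(self.w)):
            expected = sum(pi * f[k] for pi, f in zip(p, nodes))
            self.w[k] += self.lr * reward * (nodes[action][k] - expected)

# Toy episode loop: only node 0 has enough free CPU for the request.
random.seed(0)
nodes = [(0.9, 0.5), (0.2, 0.8), (0.5, 0.5)]  # (free CPU, free bandwidth), normalized
demand = 0.6
alloc = PolicyAllocator(n_features=2)
for _ in range(2000):
    a, p = alloc.choose(nodes)
    reward = 1.0 if nodes[a][0] >= demand else -1.0
    alloc.update(nodes, a, p, reward)
p_final = alloc.probs(nodes)
print([round(pi, 3) for pi in p_final])
```

After training, the policy concentrates its probability mass on the only node whose free CPU satisfies the demand, which is the behavior a VNE reward signal is meant to induce.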

23 pages, 6268 KiB  
Article
A Novel Localization Technology Based on DV-Hop for Future Internet of Things
by Xiaoying Yang, Wanli Zhang, Chengfang Tan and Tongqing Liao
Electronics 2023, 12(15), 3220; https://doi.org/10.3390/electronics12153220 - 25 Jul 2023
Viewed by 765
Abstract
In recent years, localization has become a hot issue in many applications of the Internet of Things (IoT). The distance vector-hop (DV-Hop) algorithm is accepted in many fields because it is uncomplicated, low-budget, and runs on common hardware, but it suffers from low positioning accuracy. To solve this issue, an improved DV-Hop algorithm, TWGDV-Hop, is put forward in this article. Firstly, anchor positions are broadcast using three communication radii, hops are subdivided, and a hop difference correction coefficient is introduced to correct hop counts between nodes and make them more accurate. Then, a squared-error fitness function is used to calculate the average distance per hop (ADPH), and a distance weighting factor is added to further refine the ADPH. Finally, a good point set and a Lévy flight strategy are introduced into the grey wolf optimizer (GWO) to enhance its ergodicity and its ability to escape local optima. The improved GWO is then used to refine the position of each node to be located, further improving location accuracy. Simulation results show that the proposed positioning algorithm improves positioning accuracy by 51.5%, 40.35%, and 66.8% compared to the original DV-Hop in square, X-shaped, and O-shaped random distribution environments, respectively, with a modest increase in time complexity. Full article
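The baseline DV-Hop estimate that TWGDV-Hop refines can be sketched as follows: anchors flood hop counts, each anchor derives an average distance per hop (ADPH) from known anchor-to-anchor distances, and an unknown node's distance to an anchor is hops times ADPH. The topology and coordinates below are invented for illustration.

```python
import math
from collections import deque

def hop_counts(adj, src):
    """BFS hop counts from src over the connectivity graph."""
    hops = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in hops:
                hops[v] = hops[u] + 1
                q.append(v)
    return hops

def dv_hop_distances(pos, adj, anchors, node):
    """Estimate node-to-anchor distances as hops x average distance per hop (ADPH)."""
    table = {a: hop_counts(adj, a) for a in anchors}
    est = {}
    for a in anchors:
        span = sum(math.dist(pos[a], pos[b]) for b in anchors if b != a)
        hops = sum(table[a][b] for b in anchors if b != a)
        est[a] = table[a][node] * (span / hops)
    return est

# Demo: a chain 0-1-2-3-4 on the x-axis, anchors at nodes 0 and 4, unknown node 2.
pos = {i: (float(i), 0.0) for i in range(5)}
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
est = dv_hop_distances(pos, adj, anchors=[0, 4], node=2)
print(est)
```

On this chain the ADPH is exactly 1, so both estimated distances equal the true value of 2.0; on irregular topologies the hop-count quantization is what the paper's correction coefficients and weighted ADPH target.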

26 pages, 2034 KiB  
Review
UAV Ad Hoc Network Routing Algorithms in Space–Air–Ground Integrated Networks: Challenges and Directions
by Yuxi Lu, Wu Wen, Kostromitin Konstantin Igorevich, Peng Ren, Hongxia Zhang, Youxiang Duan, Hailong Zhu and Peiying Zhang
Drones 2023, 7(7), 448; https://doi.org/10.3390/drones7070448 - 06 Jul 2023
Cited by 3 | Viewed by 2817
Abstract
With the rapid development of 5G and 6G communications in recent years, there has been significant interest in space–air–ground integrated networks (SAGINs), which aim to achieve seamless all-area, all-time coverage. As a key component of SAGINs, flying ad hoc networks (FANETs) have been widely used in the agriculture and transportation sectors in recent years. Reliable communication in SAGINs requires efficient routing algorithms to support them. In this study, we analyze the unique communication architecture of FANETs in SAGINs. At the same time, existing routing protocols are presented and clustered. In addition, we review the latest research advances in routing algorithms over the last five years. Finally, we clarify the future research trends of FANET routing algorithms in SAGINs by discussing the algorithms and comparing the routing experiments with the characteristics of unmanned aerial vehicles. Full article

15 pages, 570 KiB  
Article
Performance Analysis of a Drone-Assisted FSO Communication System over Málaga Turbulence under AoA Fluctuations
by Bing Shen, Jiajia Chen, Guanjun Xu, Qiushi Chen and Jian Wang
Drones 2023, 7(6), 374; https://doi.org/10.3390/drones7060374 - 03 Jun 2023
Cited by 1 | Viewed by 1354
Abstract
Future wireless communications are envisaged to benefit from integrating drones and free space optical (FSO) communications, which would provide links with line-of-sight propagation and large communication capacity. This paper investigates the theoretical performance of a drone-assisted downlink FSO system. It utilizes the Málaga distribution to characterize the effect of atmospheric turbulence on the optical signal for the drone–terrestrial user link, taking into account atmospheric attenuation, pointing errors, and angle-of-arrival fluctuations. The probability density function and cumulative distribution function are then expressed in closed form for the heterodyne detection and indirect modulation/direct detection techniques, respectively. Thereafter, analytical expressions for the average bit error rate (BER) and the ergodic capacity are given. In particular, the asymptotic behavior of the average BER of the considered system under heterodyne detection at high optical power is presented. Monte Carlo simulation results verify the theoretical analysis. Correspondingly, the field-of-view of the receiver is analyzed for optimal communication performance. Full article

14 pages, 772 KiB  
Article
Node Selection Algorithm for Federated Learning Based on Deep Reinforcement Learning for Edge Computing in IoT
by Shuai Yan, Peiying Zhang, Siyu Huang, Jian Wang, Hao Sun, Yi Zhang and Amr Tolba
Electronics 2023, 12(11), 2478; https://doi.org/10.3390/electronics12112478 - 31 May 2023
Cited by 2 | Viewed by 1532
Abstract
The Internet of Things (IoT) and edge computing technologies have been rapidly developing in recent years, leading to the emergence of new challenges in privacy and security. Personal privacy and data leakage have become major concerns in IoT edge computing environments. Federated learning has been proposed as a solution to address these privacy issues, but the heterogeneity of devices in IoT edge computing environments poses a significant challenge to the implementation of federated learning. To overcome this challenge, this paper proposes a novel node selection strategy based on deep reinforcement learning to optimize federated learning in heterogeneous device IoT environments. Additionally, a metric model for IoT devices is proposed to evaluate the performance of different devices. The experimental results demonstrate that the proposed method can improve training accuracy by 30% in a heterogeneous device IoT environment. Full article
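The node-selection idea can be sketched as a simple bandit over heterogeneous devices (an illustrative stand-in for the paper's deep-reinforcement-learning agent; the per-device utilities below are invented): the selector learns which devices contribute most to training and prefers them in later rounds.

```python
import random

class NodeSelector:
    """Epsilon-greedy choice over IoT nodes; value = running mean of observed utility."""

    def __init__(self, n_nodes, eps=0.1):
        self.values = [0.0] * n_nodes
        self.counts = [0] * n_nodes
        self.eps = eps

    def select(self):
        if random.random() < self.eps:                  # explore
            return random.randrange(len(self.values))
        return max(range(len(self.values)), key=lambda i: self.values[i])  # exploit

    def feedback(self, node, utility):
        self.counts[node] += 1
        self.values[node] += (utility - self.values[node]) / self.counts[node]

# Toy run: three heterogeneous devices with different (noisy) training utility.
random.seed(1)
true_utility = [0.2, 0.8, 0.5]      # hypothetical contribution to global accuracy
sel = NodeSelector(n_nodes=3)
for _ in range(1000):
    n = sel.select()
    sel.feedback(n, true_utility[n] + random.uniform(-0.1, 0.1))
best = max(range(3), key=lambda i: sel.values[i])
print("preferred device:", best)
```

In a federated round, the utility signal would come from the paper's device metric model (compute, bandwidth, data quality) rather than the synthetic values used here.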

14 pages, 10502 KiB  
Article
Layerwise Adversarial Learning for Image Steganography
by Bin Chen, Lei Shi, Zhiyi Cao and Shaozhang Niu
Electronics 2023, 12(9), 2080; https://doi.org/10.3390/electronics12092080 - 01 May 2023
Cited by 2 | Viewed by 1245
Abstract
Image steganography is a subfield of pattern recognition. It involves hiding secret data in a cover image and extracting the secret data from the stego image (described as a container image) when needed. Existing image steganography methods based on Deep Neural Networks (DNN) usually have a strong embedding capacity, but the appearance of container images is easily altered by visual watermarks of the secret data. One reason for this is that the location information of the visual watermarks changes during the end-to-end training of the Hiding Network. In this paper, we propose a layerwise adversarial training method to address this limitation. Specifically, unlike other methods, we add a single-layer subnetwork and a discriminator behind each layer to capture their representational power. The representational power serves two purposes: first, it updates the weights of each layer, which alleviates memory requirements; second, it updates the weights of the shared discriminator, which guarantees that the location information of the visual watermarks remains unchanged. Experiments on two datasets show that the proposed method significantly outperforms the most advanced methods. Full article

20 pages, 1413 KiB  
Article
DCEC: D2D-Enabled Cost-Aware Cooperative Caching in MEC Networks
by Jingyan Wu, Jiawei Zhang and Yuefeng Ji
Electronics 2023, 12(9), 1974; https://doi.org/10.3390/electronics12091974 - 24 Apr 2023
Viewed by 1016
Abstract
Various kinds of powerful intelligent mobile devices (MDs) need to access multimedia content anytime and anywhere, which places enormous pressure on mobile wireless networks. Fetching content from remote sources may introduce overly long accessing delays, which will result in a poor quality of experience (QoE). In this article, we considered the advantages of combining mobile/multi-access edge computing (MEC) with device-to-device (D2D) technologies. We propose a D2D-enabled cooperative edge caching (DCEC) architecture to reduce the delay of accessing content. We designed the DCEC caching management scheme through the maximization of a monotone submodular function under matroid constraints. The DCEC scheme includes a proactive cache placement algorithm and a reactive cache replacement algorithm. Thus, we obtained an optimal content caching and content update, which minimized the average delay cost of fetching content files. Finally, simulations compared the DCEC network architecture with the MEC and D2D networks and the DCEC caching management scheme with the least-frequently used and least-recently used scheme. The numerical results verified that the proposed DCEC scheme was effective at improving the cache hit ratio and the average delay cost. Therefore, the users’ QoE was improved. Full article
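The placement step rests on greedy maximization of a monotone submodular function under a matroid constraint, which can be sketched as follows (an illustrative toy, not the paper's DCEC algorithm; the caches, files, coverage sets, and popularities are invented):

```python
def greedy_placement(caches, files, popularity, coverage, capacity):
    """Greedy 1/2-approximation for maximizing a monotone submodular objective
    under a partition matroid: each cache stores at most `capacity` files.
    A (user, file) demand is served once any cache covering that user stores the file.
    """
    placed = {c: set() for c in caches}
    served = set()

    def gain(c, f):
        return sum(popularity.get((u, f), 0.0)
                   for u in coverage[c] if (u, f) not in served)

    while True:
        best, best_gain = None, 0.0
        for c in caches:
            if len(placed[c]) >= capacity:
                continue
            for f in files:
                if f not in placed[c]:
                    g = gain(c, f)
                    if g > best_gain:
                        best, best_gain = (c, f), g
        if best is None:        # no remaining placement adds value
            break
        c, f = best
        placed[c].add(f)
        served.update((u, f) for u in coverage[c])
    return placed

# Toy instance: two helper caches with overlapping user coverage.
caches = ["A", "B"]
files = ["f1", "f2"]
coverage = {"A": {"u1", "u2"}, "B": {"u2", "u3"}}
popularity = {("u1", "f1"): 3.0, ("u2", "f1"): 1.0,
              ("u2", "f2"): 1.0, ("u3", "f2"): 2.0}
placed = greedy_placement(caches, files, popularity, coverage, capacity=1)
print(placed)
```

The greedy rule always places the file with the largest marginal delay saving, which is what makes the standard 1/2-approximation guarantee for matroid-constrained submodular maximization apply.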

18 pages, 5552 KiB  
Article
Image Inpainting with Parallel Decoding Structure for Future Internet
by Peng Zhao, Bowei Chen, Xunli Fan, Haipeng Chen and Yongxin Zhang
Electronics 2023, 12(8), 1872; https://doi.org/10.3390/electronics12081872 - 15 Apr 2023
Viewed by 1122
Abstract
Image inpainting benefits much from the future Internet, but the memory and computational cost of encoding image features in deep learning methods pose great challenges to this field. In this paper, we propose a parallel decoding structure based on GANs for image inpainting, which comprises a single encoding network and a parallel decoding network. By adding a diet parallel extended-decoder path for semantic inpainting (Diet-PEPSI) unit to the encoder network, we employ a new rate-adaptive dilated convolutional layer that shares weights to dynamically generate feature maps at the given dilation rate, which effectively decreases the number of convolutional layer parameters. For the decoding network, composed of rough paths and inpainting paths, we propose an improved CAM for reconstruction in the decoder, which results in a smooth transition at the border of defective areas. For the discriminator, we substitute the local discriminator with a region ensemble discriminator, which, together with robust training under a new loss function, removes the restriction of traditional methods to recovering only square-like areas. The experiments on CelebA and CelebA-HQ verify the significance of the proposed method regarding both resource overhead and recovery performance. Full article

14 pages, 5005 KiB  
Article
Facial Pose and Expression Transfer Based on Classification Features
by Zhiyi Cao, Lei Shi, Wei Wang and Shaozhang Niu
Electronics 2023, 12(8), 1756; https://doi.org/10.3390/electronics12081756 - 07 Apr 2023
Viewed by 1549
Abstract
Transferring facial pose and expression features from one face to another is a challenging problem and an interesting topic in pattern recognition, but is one of great importance with many applications. However, existing models usually learn to transfer pose and expression features with classification labels, which cannot hold all the differences in shape and size between conditional faces and source faces. To solve this problem, we propose a generative adversarial network model based on classification features for facial pose and facial expression transfer. We constructed a two-stage classifier to capture the high-dimensional classification features for each face first. Then, the proposed generation model attempts to transfer pose and expression features with classification features. In addition, we successfully combined two cost functions with different convergence speeds to learn pose and expression features. Compared to state-of-the-art models, the proposed model achieved leading scores for facial pose and expression transfer on two datasets. Full article

14 pages, 6333 KiB  
Article
Two-Stage Generator Network for High-Quality Image Inpainting in Future Internet
by Peng Zhao, Dan Zhang, Shengling Geng and Mingquan Zhou
Electronics 2023, 12(6), 1490; https://doi.org/10.3390/electronics12061490 - 22 Mar 2023
Viewed by 1259
Abstract
Sharpness is an important factor for image inpainting in the future Internet, but the massive model parameters involved may produce insufficient edge consistency and reduce image quality. In this paper, we propose a two-stage transformer-based high-resolution image inpainting method to address this issue. The model consists of a coarse and a fine generator network. A self-attention mechanism is introduced to guide the transformation of higher-order semantics across the network layers, accelerate forward propagation, and reduce the computational cost. An adaptive multi-head attention mechanism is applied to the fine network to control the input of features in order to reduce redundant computations during training. Pyramid and perceptual losses are fused as the loss function of the generator network to improve the efficiency of the model. Comparisons with Pennet, GapNet, and Partial show the significance of the proposed method in reducing parameter scale and improving the resolution and texture details of the inpainted image. Full article

20 pages, 1052 KiB  
Article
Non-Euclidean Graph-Convolution Virtual Network Embedding for Space–Air–Ground Integrated Networks
by Ning Chen, Shigen Shen, Youxiang Duan, Siyu Huang, Wei Zhang and Lizhuang Tan
Drones 2023, 7(3), 165; https://doi.org/10.3390/drones7030165 - 27 Feb 2023
Cited by 5 | Viewed by 1297
Abstract
For achieving seamless global coverage and real-time communications while providing intelligent applications with increased quality of service (QoS), AI-enabled space–air–ground integrated networks (SAGINs) have attracted widespread attention from all walks of life. However, high-intensity interactions pose fundamental challenges for resource orchestration and security issues. Meanwhile, virtual network embedding (VNE) is applied to the function decoupling of various physical networks due to its flexibility. Inspired by the above, for SAGINs with non-Euclidean structures, we propose a graph-convolution virtual network embedding algorithm. Specifically, based on the excellent decision-making properties of deep reinforcement learning (DRL), we design an orchestration network combined with graph convolution to calculate the embedding probability of nodes. It fuses the information of the neighborhood structure, fully fits the original characteristics of the physical network, and utilizes the specified reward mechanism to guide positive learning. Moreover, by imposing security-level constraints on physical nodes, it restricts resource access. All-around and rigorous experiments are carried out in a simulation environment. Finally, results on long-term average revenue, VNR acceptance ratio, and long-term revenue–cost ratio show that the proposed algorithm outperforms advanced baselines. Full article

17 pages, 1026 KiB  
Article
Collaborative Storage and Resolution Method between Layers in Hierarchical ICN Name Resolution Systems
by Yanxia Li and Yang Li
Future Internet 2023, 15(2), 74; https://doi.org/10.3390/fi15020074 - 13 Feb 2023
Viewed by 943
Abstract
The name resolution system is an important piece of infrastructure in Information Centric Networking (ICN) architectures that follow the identifier–locator separation mode. In the Local Name Resolution System (LNMRS), a hierarchical name resolution system for latency-sensitive scenarios, higher-level resolution nodes serve more users and suffer more storage pressure. This causes an unbalanced storage load between layers and calls for inter-layer collaborative storage under the constraint of deterministic service latency. In this paper, we use the constraints required for inter-layer collaborative resolution to construct an index neighbor structure and perform collaborative storage based on this structure. This method relieves storage pressure on high-level resolution nodes. Experimental results show that, when relieving the same storage load for high-level resolution nodes, the increase in total storage load brought by the proposed method is 57.1% of that of the MGreedy algorithm, 8.1% of that of the Greedy algorithm, and 0.8% of that of the K-Mediod algorithm. Meanwhile, the deterministic service latency property is still maintained when our proposed method is used for collaborative resolution. Full article

14 pages, 2101 KiB  
Article
Design of A Smart Tourism Management System through Multisource Data Visualization-Based Knowledge Discovery
by Zhicong Qin and Younghwan Pan
Electronics 2023, 12(3), 642; https://doi.org/10.3390/electronics12030642 - 28 Jan 2023
Viewed by 3507
Abstract
Nowadays, tourism management is a universal concern around the world. Generating tourism characteristics for travelers is important for digitally facilitating tourism business scheduling, yet there is still a lack of technologies competent at managing tourism business affairs. Therefore, in this paper a smart tourism management system is designed through multisource data visualization-based knowledge discovery. Firstly, this work presents the total architecture of a tourism management system with respect to three modules: data collection, data visualization, and knowledge discovery. Then, multisource business data are processed with the use of visualization techniques so as to output statistical analysis results for different individuals. On this basis, characterized knowledge can be found from previous visualization results and demonstrated for travelers or administrators. In addition, a case study on real data is conducted to test the running performance of the proposed tourism management system. The main body of public service tourism is the government or other social organizations that do not regard profit as their main purpose; public service tourism is a general term for products and services with an obviously public nature. The testing results show that user preferences can be mined and corresponding travel plans can be suggested via multisource data visualization-based knowledge discovery. Full article

18 pages, 553 KiB  
Article
HetSev: Exploiting Heterogeneity-Aware Autoscaling and Resource-Efficient Scheduling for Cost-Effective Machine-Learning Model Serving
by Hao Mo, Ligu Zhu, Lei Shi, Songfu Tan and Suping Wang
Electronics 2023, 12(1), 240; https://doi.org/10.3390/electronics12010240 - 03 Jan 2023
Viewed by 1587
Abstract
To accelerate the inference of machine-learning (ML) model serving, clusters of machines require the use of expensive hardware accelerators (e.g., GPUs) to reduce execution time. Advanced inference serving systems are needed to satisfy latency service-level objectives (SLOs) in a cost-effective manner. Novel autoscaling mechanisms that greedily minimize the number of service instances while ensuring SLO compliance are helpful. However, we find that this is not adequate to guarantee cost effectiveness across heterogeneous GPU hardware, nor does it maximize resource utilization. In this paper, we propose HetSev to address these challenges by incorporating heterogeneity-aware autoscaling and resource-efficient scheduling to achieve cost effectiveness. We develop an autoscaling mechanism which accounts for SLO compliance and GPU heterogeneity, thus provisioning the appropriate type and number of instances to guarantee cost effectiveness. We leverage multi-tenant inference to improve GPU resource utilization, while alleviating inter-tenant interference by avoiding the co-location of identical ML instances on the same GPU during placement decisions. HetSev is integrated into Kubernetes and deployed onto a heterogeneous GPU cluster. We evaluated the performance of HetSev using several representative ML models. Compared with default Kubernetes, HetSev reduces resource cost by up to 2.15× while meeting SLO requirements. Full article
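The heterogeneity-aware provisioning decision can be sketched as a cost-per-SLO-capacity comparison (a toy model, not HetSev's actual mechanism; the GPU names, per-instance SLO-safe throughputs, and hourly prices below are invented):

```python
import math

def provision(gpu_types, demand_qps):
    """Pick the (type, count) that serves demand_qps at minimum hourly cost.
    gpu_types: name -> (SLO-safe queries/s per instance, cost per hour)."""
    best = None
    for name, (qps, cost) in gpu_types.items():
        count = math.ceil(demand_qps / qps)
        total = count * cost
        if best is None or total < best[2]:
            best = (name, count, total)
    return best

# Hypothetical profile: throughput each type sustains within the latency SLO, and price.
gpu_types = {"t4": (100, 0.35), "v100": (400, 2.48), "a10": (250, 1.00)}
name, count, total = provision(gpu_types, demand_qps=900)
print(name, count, round(total, 2))
```

The point of the comparison is that the fastest GPU is not necessarily the cheapest way to meet an SLO once per-instance throughput and price are both accounted for; in this invented profile, many small instances win.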

19 pages, 5633 KiB  
Article
InfoMax Classification-Enhanced Learnable Network for Few-Shot Node Classification
by Xin Xu, Junping Du, Jie Song and Zhe Xue
Electronics 2023, 12(1), 239; https://doi.org/10.3390/electronics12010239 - 03 Jan 2023
Cited by 1 | Viewed by 1590
Abstract
Graph neural networks have a wide range of applications, such as citation networks, social networks, and knowledge graphs. Among various graph analyses, node classification has garnered much attention. While many of the recent network embedding models achieve promising performance, they usually require sufficient labeled nodes for training, which does not meet the reality that only a few labeled nodes are available in novel classes. While few-shot learning is commonly employed in the vision and language domains to address the problem of insufficient training samples, there are still two characteristics of the few-shot node classification problem in the non-Euclidean domain that require investigation: (1) how to extract the most informative knowledge for a class and use it on testing data and (2) how to thoroughly explore the limited number of support sets and maximize the amount of information transferred to the query set. We propose an InfoMax Classification-Enhanced Learnable Network (ICELN) to address these issues, motivated by Deep Graph InfoMax (DGI), which adapts the InfoMax principle to the summary representation of a graph and the patch representation of a node. By increasing the amount of information that is shared between the query nodes and the class representation, an ICELN can transfer the maximum amount of information to unlabeled data and enhance the graph representation potential. The whole model is trained using an episodic method, which simulates the actual testing environment to ensure the meta-knowledge learned from previous experience may be used for entirely new classes that have not been studied before. Extensive experiments are conducted on five real-world datasets to demonstrate the advantages of an ICELN over the existing few-shot node classification methods. Full article

19 pages, 1337 KiB  
Review
Human-Computer Interaction System: A Survey of Talking-Head Generation
by Rui Zhen, Wenchao Song, Qiang He, Juan Cao, Lei Shi and Jia Luo
Electronics 2023, 12(1), 218; https://doi.org/10.3390/electronics12010218 - 01 Jan 2023
Cited by 9 | Viewed by 7160
Abstract
Virtual humans are widely employed in various industries, including personal assistance, intelligent customer service, and online education, thanks to the rapid development of artificial intelligence. An anthropomorphic digital human can quickly connect with people and enhance the user experience in human–computer interaction. Hence, we design a human–computer interaction system framework that includes speech recognition, text-to-speech, dialogue systems, and virtual human generation. Next, we classify models of talking-head video generation according to the virtual human deep generation framework. Meanwhile, we systematically review the past five years of technological advancements and trends in talking-head video generation, highlight critical works, and summarize the datasets. Full article