Topic Editors

1. Department of Digital Systems, University of Piraeus, Piraeus, Greece
2. Department of Electrical and Computer Engineering, University of Western Macedonia, 50100 Kozani, Greece
3. Department of Electrical and Computer Engineering, University of Western Macedonia, 50100 Kozani, Greece
4. Department of Computer Science, International Hellenic University, 65404 Kavala, Greece
5. Department of Networks and Digital Media, School of Computer Science and Mathematics, SEC, Kingston University, London KT1 2EE, UK
6. Department of Electrical and Computer Engineering, University of Western Macedonia, 50100 Kozani, Greece

Next Generation Intelligent Communications and Networks

Abstract submission deadline: 31 May 2024
Manuscript submission deadline: 31 August 2024
Viewed by 24106

Topic Information

Dear Colleagues,

In response to the spectrum scarcity caused by the aggressive proliferation of wireless devices and of quality-of-service (QoS)- and quality-of-experience (QoE)-hungry services, which are expected to support a broad range of diverse multi-scale and multi-environment applications, sixth-generation (6G) wireless networks have adopted higher frequency bands, such as the millimeter wave (mmWave), terahertz (THz), and optical bands. High-frequency wireless communications are recognized as a technological enabler of a varied set of use cases, from in-body nano-scale networks to indoor and outdoor wireless personal/local area and fronthaul/backhaul networks. Nano-scale applications require compact transceiver designs and self-organized ad hoc network topologies. Macro-scale applications, on the other hand, demand flexibility, sustainability, adaptability, and security in an ever-changing heterogeneous environment. Moreover, the ability to support data rates of up to 1 Tb/s and energy-efficient massive connectivity are only some of the key demands. To address these requirements, artificial intelligence (AI), in combination with novel structures capable of altering the wireless environment, has been regarded as a complementary pillar of 6G wireless THz systems. AI is expected to enable a series of new features in next-generation networks, including, but not limited to, self-aggregation, context awareness, self-configuration, and opportunistic deployment. In addition, integrating AI into wireless networks is predicted to transform conventional cognitive radio systems into intelligent platforms by unlocking the full potential of radio signals and exploiting new degrees of freedom. In this context, this Topic aims to present papers investigating AI-empowered and/or AI-enabled next-generation wireless systems and networks. Potential topics include, but are not limited to, the following:

  • Identification of communication system and network requirements that call for the use of AI approaches.
  • AI-enabled architectures, with an emphasis on open radio-access networks, software-defined (SD) fabric, and verticals such as agriculture, self-driving vehicles, automation, Industry 4.0, etc.
  • Semantic and task-oriented communications beyond Shannon performance.
  • Topics related to AI-empowered physical layers, such as: machine learning channel modeling and/or estimation approaches based on point cloud ray tracing algorithms or similar schemes, as well as channel prediction, involving reconfigurable-intelligent-surface-enabled wireless systems; modulation recognition and signal detection in complex wireless environments; and analog, digital, hybrid and reconfigurable intelligent surface (RIS) beamforming design.
  • Medium and multiple access control: 3D radio resource management, channel allocation, power management, blockage avoidance schemes, localization approaches, pro-active and predictive mobility management, intelligent routing, etc.
  • Novel AI deployment schemes for next-generation networks.

Dr. Alexandros-Apostolos Boulogeorgos
Dr. Panagiotis Sarigiannidis
Dr. Thomas Lagkas
Prof. Dr. Vasileios Argyriou
Prof. Dr. Pantelis Angelidis
Topic Editors

Keywords

  • artificial intelligence
  • explainable AI
  • federated learning
  • machine learning
  • medium and multiple access control
  • physical layer
  • radio resource management
  • reinforcement learning
  • transfer learning
  • semantic communications

Participating Journals

Journal Name (abbreviation) | Impact Factor | CiteScore | Launched Year | First Decision (median) | APC
Applied Sciences (applsci) | 2.7 | 4.5 | 2011 | 15.8 days | CHF 2300
Digital (digital) | - | - | 2021 | 24.1 days | CHF 1000
Electronics (electronics) | 2.9 | 4.7 | 2012 | 15.8 days | CHF 2200
Sensors (sensors) | 3.9 | 6.8 | 2001 | 16.4 days | CHF 2600
Telecom (telecom) | - | 3.1 | 2020 | 16.9 days | CHF 1000

Preprints is a platform dedicated to making early versions of research outputs permanently available and citable. MDPI journals allow posting on preprint servers such as Preprints.org prior to publication. For more details about preprints, please visit https://www.preprints.org.

Published Papers (17 papers)

Article
CSI Feedback Model Based on Multi-Source Characterization in FDD Systems
Sensors 2023, 23(19), 8139; https://doi.org/10.3390/s23198139 - 28 Sep 2023
Viewed by 118
Abstract
In wireless communication, to fully utilize the spectrum and energy efficiency of the system, it is necessary to obtain the channel state information (CSI) of the link. However, in Frequency Division Duplexing (FDD) systems, CSI feedback wastes part of the spectrum resources. In order to save spectrum resources, the CSI needs to be compressed. However, many current deep-learning algorithms have complex structures and a large number of model parameters. When the computational and storage resources are limited, the large number of model parameters will decrease the accuracy of CSI feedback, which cannot meet the application requirements. In this paper, we propose a neural network-based CSI feedback model, Mix_Multi_TransNet, which considers both the spatial characteristics and temporal sequence of the channel, aiming to provide higher feedback accuracy while reducing the number of model parameters. Through experiments, it is found that Mix_Multi_TransNet achieves higher accuracy than the traditional CSI feedback network in both indoor and outdoor scenes. In the indoor scene, the NMSE gains of Mix_Multi_TransNet are 4.06 dB, 4.92 dB, 4.82 dB, and 6.47 dB for compression ratio η = 1/8, 1/16, 1/32, 1/64, respectively. In the outdoor scene, the NMSE gains of Mix_Multi_TransNet are 3.63 dB, 6.24 dB, 4.71 dB, 4.60 dB, and 2.93 dB for compression ratio η = 1/4, 1/8, 1/16, 1/32, 1/64, respectively.
(This article belongs to the Topic Next Generation Intelligent Communications and Networks)
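
The headline metric above is the NMSE gain in dB. For reference, here is a minimal sketch of how the normalized mean square error of a CSI reconstruction is typically computed; the function name and array layout are assumptions of this sketch, not taken from the paper.

```python
import numpy as np

def nmse_db(h_true: np.ndarray, h_est: np.ndarray) -> float:
    """NMSE in dB, averaged over a batch of channel samples.

    h_true, h_est: arrays of shape (batch, ...) holding the true and
    reconstructed (decompressed) channel matrices, real or complex.
    """
    axes = tuple(range(1, h_true.ndim))
    err = np.sum(np.abs(h_true - h_est) ** 2, axis=axes)
    ref = np.sum(np.abs(h_true) ** 2, axis=axes)
    return float(10.0 * np.log10(np.mean(err / ref)))

# A reported "NMSE gain" of, e.g., 4.06 dB is then the baseline
# network's NMSE (dB) minus Mix_Multi_TransNet's NMSE (dB).
```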

Article
Solving Load Balancing Problems in Routing and Limiting Traffic at the Network Edge
Appl. Sci. 2023, 13(17), 9489; https://doi.org/10.3390/app13179489 - 22 Aug 2023
Viewed by 384
Abstract
This study focuses on creating and investigating models that optimize load balancing in communication networks by managing routing and traffic limitations. The purpose is to use these models to optimize the network's routing and traffic limitations while ensuring predictable quality of service levels and adhering to traffic engineering requirements for routing and limiting traffic at the network edge. To achieve this aim, a mathematical optimization model was developed based on a chosen optimality criterion. Two modifications of traffic engineering routing were created, namely the linear limitation model (TER-LLM) and the traffic engineering limitation model (TER-TEL), each considering the main features of a packet flow: intensity and priority. The proposed solutions were compared by analyzing various data inputs, including the ratio of flow parameters and the intensity with which packets are limited at the border router. The study presents recommendations on the optimal use of the proposed solutions based on their respective features and advantages.
(This article belongs to the Topic Next Generation Intelligent Communications and Networks)
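
As context for the traffic-engineering objective, the classical load-balancing formulation minimizes the maximum link utilization subject to demand and capacity constraints. The toy linear program below illustrates the idea with SciPy; the capacities and demand are invented here, and this is not the paper's TER-LLM/TER-TEL model.

```python
from scipy.optimize import linprog

# Two parallel paths with capacities c1, c2 must carry a demand d.
# Decision variables x = [x1, x2, u], where u is the maximum link
# utilization to be minimized (the classical TE objective).
c1, c2, d = 10.0, 6.0, 12.0

res = linprog(
    c=[0.0, 0.0, 1.0],                      # minimize u
    A_ub=[[1.0, 0.0, -c1],                  # x1 <= u * c1
          [0.0, 1.0, -c2]],                 # x2 <= u * c2
    b_ub=[0.0, 0.0],
    A_eq=[[1.0, 1.0, 0.0]],                 # x1 + x2 = d
    b_eq=[d],
    bounds=[(0, None), (0, None), (0, None)],
)
x1, x2, u = res.x
print(f"split: {x1:.2f} / {x2:.2f}, max utilization u = {u:.2f}")
```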

Article
Physical-Layer Security with Irregular Reconfigurable Intelligent Surfaces for 6G Networks
Sensors 2023, 23(4), 1881; https://doi.org/10.3390/s23041881 - 07 Feb 2023
Viewed by 1735
Abstract
The goal of 6G is to make far-reaching changes in communication systems with stricter demands, such as high throughput, extremely low latency, stronger security, and ubiquitous connectivity. Several promising techniques, such as reconfigurable intelligent surfaces (RISs), have been introduced to achieve these goals. An RIS is a 2D low-cost array of reflecting elements that can adjust the electromagnetic properties of an incident signal. In this paper, we guarantee secrecy by using an irregular RIS (IRIS). The main idea of an IRIS is to irregularly activate reflecting elements for a given number of RIS elements. In this work, we consider a communication scenario in which, with the aid of an IRIS, a multi-antenna base station establishes a secure link with a legitimate single-antenna user in the presence of a single-antenna eavesdropper. To this end, we formulate a topology-and-precoding optimization problem to maximize the secrecy rate. We then propose a Tabu search-based algorithm to jointly optimize the RIS topology and the precoding design. Finally, we present simulation results to validate the proposed algorithm, which highlights the performance gain of the IRIS in improving secure transmissions compared to an RIS. Our results show that exploiting an IRIS can allow additional spatial diversity to be achieved, resulting in secrecy performance improvement and overcoming the limitations of conventional RIS-assisted systems (e.g., a large number of active elements).
(This article belongs to the Topic Next Generation Intelligent Communications and Networks)
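
For orientation, the secrecy rate being maximized is the gap between the legitimate user's and the eavesdropper's achievable rates. A minimal sketch follows, assuming single-stream transmission and a diagonal RIS response; all names, and the convention that zero entries of theta model deactivated (irregular) elements, are assumptions of this sketch.

```python
import numpy as np

def secrecy_rate(h_b, h_e, g, theta, p_tx=1.0, n0=1.0):
    """Achievable secrecy rate of a RIS-assisted link (bits/s/Hz).

    h_b, h_e: RIS->Bob and RIS->Eve channel vectors (complex, length N).
    g:        BS->RIS channel vector (complex, length N), single stream.
    theta:    per-element RIS response; 0 entries model deactivated
              elements, exp(1j*phi) entries active ones.
    """
    eff_b = np.abs(np.sum(h_b * theta * g)) ** 2   # cascaded gain to Bob
    eff_e = np.abs(np.sum(h_e * theta * g)) ** 2   # cascaded gain to Eve
    r_b = np.log2(1.0 + p_tx * eff_b / n0)
    r_e = np.log2(1.0 + p_tx * eff_e / n0)
    return max(0.0, r_b - r_e)
```

A Tabu-search loop in the spirit of the paper would then perturb which entries of theta are nonzero (the topology) and their phases, keeping a tabu list of recent moves, to maximize this quantity.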

Article
Machine Learning Based Recommendation System for Web-Search Learning
Telecom 2023, 4(1), 118-134; https://doi.org/10.3390/telecom4010008 - 01 Feb 2023
Cited by 1 | Viewed by 2073
Abstract
Nowadays, e-learning and web-based learning are the most integrated new learning methods in schools, colleges, and higher educational institutions. The recent web-search-based learning methodological approach has helped online users (learners) to search for the required topics from the available online resources. The learners extracted knowledge from textual, video, and image formats through web searching. This research analyzes the learner's significant attention to searching for the required information online and develops a new recommendation system using machine learning (ML) to perform web searching. The learner's navigation and eye movements are recorded using sensors. The proposed model automatically analyzes the learners' interests while performing online searches and the origin of the acquired and learned information. The ML model maps the text and video contents and obtains a better recommendation. The proposed model analyzes and tracks online resource usage and comprises the following steps: information logging, information processing, and word mapping operations. The learner's knowledge of the captured online resources using the sensors is analyzed to enhance the response time, selectivity, and sensitivity. On average, the learners spent more hours accessing the video and the textual information and fewer hours accessing the images. The percentage of participants addressing the two different subject quizzes, Q1 and Q2, increased when the learners attempted the quiz after the web search; 43.67% of the learners addressed the quiz Q1 before completing the web search, and 75.92% addressed the quiz Q2 after the web search. The average word counts analysis corresponding to text, videos, overlapping text or video, and comprehensive resources indicates that the proposed model can also be applied to a continuous multi-session online search learning environment. The experimental analysis indicates that better measures are obtained for the proposed recommender using sensors and ML compared with other methods in terms of recall, ranking score, and precision. The proposed model achieves a precision of 27% when the recommendation size becomes 100. The root mean square error (RMSE) lies between 8% and 16% when the number of learners is below 500, and the maximum value of RMSE is 21% when the number of learners reaches 1500. The proposed recommendation model achieves better results than the state-of-the-art methods.
(This article belongs to the Topic Next Generation Intelligent Communications and Networks)
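
The evaluation metrics quoted above are standard recommender-system measures. A minimal sketch of the two headline ones, with function names and data layout assumed here rather than taken from the paper:

```python
import numpy as np

def precision_at_k(recommended, relevant, k):
    """Fraction of the top-k recommended items the learner actually found
    relevant (one way to read 'precision of 27% at recommendation size 100')."""
    hits = len(set(recommended[:k]) & set(relevant))
    return hits / k

def rmse(predicted, observed):
    """Root mean square error between predicted and observed ratings."""
    predicted, observed = np.asarray(predicted), np.asarray(observed)
    return float(np.sqrt(np.mean((predicted - observed) ** 2)))
```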

Article
Cooperative Transmission Mechanism Based on Revenue Learning for Vehicular Networks
Appl. Sci. 2022, 12(24), 12651; https://doi.org/10.3390/app122412651 - 09 Dec 2022
Viewed by 793
Abstract
With the rapid development of science and technology and the improvement of people's living standards, vehicles have gradually become the main means of travel. The increase in vehicles has also brought about an increasing incidence of car accidents. In order to reduce traffic accidents, many researchers have proposed the use of vehicular networks to quickly transmit information. As long as these vehicles can receive information from other vehicles or nearby buildings in a timely manner, they can avoid accidents. In vehicular networks, the traditional dual-connection technique, through an interference coordination scheduling strategy based on graph theory, can ensure the fairness of vehicles and obtain suitable neighborhood interference resistance with limited computing resources. However, when a base station transmits data to a vehicular user, the nearby base station and the vehicular network user may be in a state of suspended communication. Thus, the resource utilization of the above dual-connection vehicular network is not sufficient, resulting in a waste of resources. To solve this issue, this paper presents a revenue-learning-based multi-point cooperative transmission mechanism for vehicular networks, in which vehicular network users cooperate with the surrounding transmission points. We use the Q-learning algorithm in the reinforcement learning process to enable vehicular network users to learn from each other and make cooperative decisions in different environments. In reinforcement learning, the agent makes a decision and changes the state of the environment. Then, the environment feeds back the benefit to the agent through the related algorithm so that the agent gradually learns the optimal decision. Simulation results demonstrate the superiority of the proposed revenue-learning model over the benchmark schemes.
(This article belongs to the Topic Next Generation Intelligent Communications and Networks)
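
The learning loop described above (agent acts, environment feeds back revenue, estimates improve) reduces to the standard tabular Q-learning update. A minimal sketch, with the state/action encoding and hyperparameters assumed here rather than taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def select_action(Q, s, eps=0.1):
    """Epsilon-greedy choice among cooperative-transmission actions."""
    if rng.random() < eps:
        return int(rng.integers(Q.shape[1]))
    return int(np.argmax(Q[s]))

def q_update(Q, s, a, revenue, s_next, alpha=0.1, gamma=0.9):
    """One Q-learning step: move Q[s, a] toward the observed revenue
    plus the discounted value of the best follow-up action."""
    td_target = revenue + gamma * np.max(Q[s_next])
    Q[s, a] += alpha * (td_target - Q[s, a])
    return Q
```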

Article
Augmented Lagrangian-Based Reinforcement Learning for Network Slicing in IIoT
Electronics 2022, 11(20), 3385; https://doi.org/10.3390/electronics11203385 - 19 Oct 2022
Cited by 1 | Viewed by 980
Abstract
Network slicing enables the multiplexing of independent logical networks on the same physical network infrastructure to provide different network services for different applications. The resource allocation problem involved in network slicing is typically a decision-making problem, falling within the scope of reinforcement learning. The advantage of adapting to dynamic wireless environments makes reinforcement learning a good candidate for problem solving. In this paper, to tackle the constrained mixed integer nonlinear programming problem in network slicing, we propose an augmented Lagrangian-based soft actor–critic (AL-SAC) algorithm. In this algorithm, a hierarchical action selection network is designed to handle the hybrid action space. More importantly, inspired by the augmented Lagrangian method, both neural networks for Lagrange multipliers and a penalty item are introduced to deal with the constraints. Experiment results show that the proposed AL-SAC algorithm can strictly satisfy the constraints, and achieve better performance than other benchmark algorithms.
(This article belongs to the Topic Next Generation Intelligent Communications and Networks)
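
The augmented-Lagrangian idea referenced above can be seen in scalar form: a violation of a constraint g(x) <= 0 enters the objective through a multiplier term plus a quadratic penalty, and the multiplier is raised by dual ascent. The paper learns both pieces with neural networks; the sketch below is only the textbook scalar analogue.

```python
def augmented_lagrangian_step(g_val, lam, rho):
    """Penalty for one inequality constraint g(x) <= 0, plus dual update.

    L_aug = lam * g_plus + (rho / 2) * g_plus**2, with g_plus = max(0, g(x)).
    In an AL-SAC-style learner the penalty is subtracted from the reward,
    and lam is increased whenever the constraint is violated.
    """
    g_plus = max(0.0, g_val)
    penalty = lam * g_plus + 0.5 * rho * g_plus ** 2
    lam_next = max(0.0, lam + rho * g_plus)   # dual ascent
    return penalty, lam_next
```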

Article
DDS: A Delay-Based Differentiated Service Virtual Network Embedding Algorithm
Appl. Sci. 2022, 12(19), 9897; https://doi.org/10.3390/app12199897 - 01 Oct 2022
Viewed by 772
Abstract
Network virtualization (NV) is considered a promising technology that may solve the problem of Internet rigidity. The resource competition of multiple virtual networks for shared substrate network resources is a challenging problem in NV called virtual network embedding (VNE). Existing approaches do not consider the differences between multi-tenant requests and adopt a single embedding method, resulting in poor performance. This paper proposes a virtual network embedding algorithm that distinguishes the network types requested by tenants. This method divides virtual network requests into ordinary requests and delay-sensitive requests according to the delay constraints, provides personalized mapping strategies for different networks, and flexibly responds to the resource requirements and quality of service (QoS) requirements of the virtual network. The simulation results show that, compared with other algorithms, the proposed algorithm improves the request acceptance ratio by about 2% to 15% and the substrate network resources are more effectively utilized.
(This article belongs to the Topic Next Generation Intelligent Communications and Networks)
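
The differentiated-service idea above boils down to classifying each request by its delay constraint and routing it to a different embedding strategy. A minimal sketch; the threshold value, dictionary fields, and strategy stubs are all illustrative, not the paper's.

```python
def embed_delay_sensitive(req):
    # Placeholder for a latency-first node/link mapping that would pick
    # substrate paths minimizing end-to-end delay.
    return ("delay-sensitive strategy", req["id"])

def embed_ordinary(req):
    # Placeholder for a resource-efficiency-first mapping that would
    # maximize acceptance ratio and revenue-to-cost.
    return ("ordinary strategy", req["id"])

def dispatch_request(req, delay_threshold_ms=50.0):
    """Split incoming virtual-network requests by their delay constraint,
    mirroring the ordinary vs. delay-sensitive distinction."""
    if req["max_delay_ms"] <= delay_threshold_ms:
        return embed_delay_sensitive(req)
    return embed_ordinary(req)

print(dispatch_request({"id": 1, "max_delay_ms": 20.0}))
```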

Article
QoS-Aware Downlink Traffic Scheduling for Cellular Networks with Dual Connectivity
Electronics 2022, 11(19), 3085; https://doi.org/10.3390/electronics11193085 - 27 Sep 2022
Cited by 2 | Viewed by 825
Abstract
In a cellular network, how to preserve users' quality of service (QoS) demands is an important issue. To provide better data services, researchers and industry have discussed the deployment of small cells in cellular networks to support dual connectivity enhancement for user equipments (UEs). By such an enhancement, a base station can dispatch downlink data to its surrounding small cells, and UEs that are located in the overlapping areas of the base station and small cells can receive downlink data from both sides simultaneously. We observe that previous works do not jointly consider QoS requirements and system capabilities when making scheduling decisions. Therefore, in this work, we design a QoS traffic scheduling scheme for dual connectivity networks. The designed scheme contains two parts. First, we propose a data dispatching decision scheme for the base station to decide how much data should be dispatched to small cells. When making a dispatching decision, the proposed scheme aims to maximize throughput and ensure that data flows can be processed in time. Second, we design a radio resource scheduling method, which aims to reduce dropping ratios of high-priority QoS data flows, while avoiding wasting radio resources. In this work, we verify our design using simulation programs. The experimental results show that compared to the existing methods, the proposed scheme can effectively increase system throughput and decrease packet drop ratios.
(This article belongs to the Topic Next Generation Intelligent Communications and Networks)
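
As a toy illustration of the dispatching decision described above, the split below sends a UE's downlink backlog to the macro cell and small cells in proportion to what each link can serve within one scheduling horizon. This is only a sketch of the idea; the paper's scheme additionally accounts for QoS priorities and deadlines, and all names and numbers here are illustrative.

```python
def dispatch_to_small_cells(queue_bits, capacities_bps, horizon_s=0.01):
    """Split a backlog across macro/small-cell links, proportional to the
    bits each link can drain within the scheduling horizon."""
    servable = [c * horizon_s for c in capacities_bps]  # bits per link
    total = sum(servable)
    return [min(queue_bits * s / total, s) for s in servable]

# Macro cell at 100 Mb/s plus two small cells at 50 Mb/s each.
print(dispatch_to_small_cells(1.2e6, [100e6, 50e6, 50e6]))
```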

Article
A Learning Automaton-Based Algorithm for Maximizing the Transfer Data Rate in a Biological Nanonetwork
Appl. Sci. 2022, 12(19), 9499; https://doi.org/10.3390/app12199499 - 22 Sep 2022
Viewed by 834
Abstract
Biological nanonetworks have been envisaged to be the most appropriate alternatives to classical electromagnetic nanonetworks for applications in biological environments. Due to the diffusional method of the message exchange process, transfer data rates are not comparable to those of their electromagnetic counterparts. In addition, the molecular channel has memory affecting the reception of a message, as the molecules from previously transmitted messages remain in the channel and alter the number of information molecules required for a node to perceive a transmitted message. As a result, the ability of a node to receive a message is directly connected to the transmission rate from the transmitter. In this work, a learning automaton approach has been followed as a way to provide the receiver nodes with an algorithm that could firstly enhance their reception capability and secondly boost the performance of the transfer data rate between the biological communication parties. To this end, a complete set of simulation scenarios has been devised, simulating different distances between nodes and various input signal distributions. Most of the operational parameters, such as the speed of convergence for different numbers of ascension and descension steps and the number of information molecules per message, have been tested pertaining to the performance characteristics of the biological nanonetwork. The applied analysis revealed that the proposed protocol manages to adapt to the communication channel changes, such as the number of remaining information molecules, and can be successfully employed at nanoscale dimensions as a tool for pursuing an increased transfer data rate, even with time-variant channel characteristics.
(This article belongs to the Topic Next Generation Intelligent Communications and Networks)
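
The learning-automaton mechanics behind the "ascension and descension steps" mentioned above can be summarized by the classical linear reward-penalty update over a set of candidate actions (e.g., transmission rates). This is a generic sketch; the step sizes and action set are assumptions, not the paper's parameters.

```python
import numpy as np

def la_update(p, chosen, rewarded, a=0.05, b=0.02):
    """One linear reward-penalty (L_RP) step for a learning automaton.

    p:        probability vector over candidate actions.
    chosen:   index of the action just tried.
    rewarded: True if the transmitted message was correctly perceived.
    """
    p = p.astype(float).copy()
    r = len(p)
    if rewarded:                       # reinforce the chosen action
        p = (1.0 - a) * p
        p[chosen] += a
    else:                              # shift mass away from it
        pc = p[chosen]
        p = b / (r - 1) + (1.0 - b) * p
        p[chosen] = (1.0 - b) * pc
    return p                           # sums to 1 in both branches
```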

Article
MAToC: A Novel Match-Action Table Architecture on Corundum for 8 × 25G Networking
Appl. Sci. 2022, 12(17), 8734; https://doi.org/10.3390/app12178734 - 31 Aug 2022
Viewed by 959
Abstract
Packet processing offloads are increasingly needed by high-speed networks. This paper proposes a high-throughput, low-latency, scalable and reconfigurable Match-Action Table (MAT) architecture based on the open-source FPGA-based NIC Corundum. The flexibility and capability of this scheme are demonstrated by an example implementation of IP-layer forwarding offload. It makes the NIC work as a router that can forward packets for different subnets and virtual local area networks (VLANs). Experiments are performed on a Zynq MPSoC device with two QSFPs, and the results show that it can work at a line rate of 8 × 25 Gbps (200 Gbps) with a maximum latency of 76 nanoseconds. In addition, a high-performance MAT pipeline with a full-featured, resource-efficient TCAM and a compact frame-merging deparser are presented.
(This article belongs to the Topic Next Generation Intelligent Communications and Networks)
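
To make the match-action abstraction concrete, here is a tiny software model of an IP-forwarding MAT with longest-prefix matching; the paper's pipeline realizes this lookup in TCAM on the FPGA NIC at line rate. The class layout and entry format are assumptions of this sketch.

```python
import ipaddress

class MatchActionTable:
    """Software model of a MAT: (prefix, VLAN) match keys map to
    forwarding actions, resolved by longest-prefix match."""
    def __init__(self):
        self.entries = []  # (network, vlan, action)

    def add(self, prefix, vlan, action):
        self.entries.append((ipaddress.ip_network(prefix), vlan, action))

    def lookup(self, dst_ip, vlan):
        dst = ipaddress.ip_address(dst_ip)
        matches = [(net.prefixlen, act) for net, v, act in self.entries
                   if v == vlan and dst in net]
        return max(matches)[1] if matches else "drop"

mat = MatchActionTable()
mat.add("10.0.0.0/8", 100, "port0")
mat.add("10.1.0.0/16", 100, "port1")
print(mat.lookup("10.1.2.3", 100))  # -> port1 (longest prefix wins)
```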

Article
Spectrum Sensing Based on STFT-ImpResNet for Cognitive Radio
Electronics 2022, 11(15), 2437; https://doi.org/10.3390/electronics11152437 - 04 Aug 2022
Cited by 1 | Viewed by 1098
Abstract
Spectrum sensing is a crucial technology for cognitive radio. The existing spectrum sensing methods generally suffer from certain problems, such as insufficient signal feature representation, low sensing efficiency, high sensitivity to noise uncertainty, and drastic degradation in deep networks. In view of these challenges, we propose a spectrum sensing method based on the short-time Fourier transform and an improved residual network (STFT-ImpResNet) in this work. Specifically, the STFT transforms the received signal into a two-dimensional time-frequency matrix, which is normalized to a gray image and used as the input of the network. An improved residual network is designed to classify the signal samples, and a dropout layer is added to the residual block to mitigate over-fitting effectively. We conducted comprehensive evaluations on the proposed spectrum sensing method, which demonstrate that—compared with other current spectrum sensing algorithms—STFT-ImpResNet exhibits higher accuracy and lower computational complexity, as well as strong robustness to noise uncertainty, and it can meet the needs of real-time detection.
(This article belongs to the Topic Next Generation Intelligent Communications and Networks)
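
The STFT front end described above is straightforward to prototype: compute the time-frequency magnitude matrix and rescale it to [0, 1] as a gray image. A minimal sketch using SciPy; the window length, test signal, and function name are assumptions of this sketch.

```python
import numpy as np
from scipy.signal import stft

def signal_to_gray_image(x, fs=1.0, nperseg=64):
    """Turn a received baseband signal into the normalized time-frequency
    'gray image' that feeds the classification network."""
    _, _, Z = stft(x, fs=fs, nperseg=nperseg)
    mag = np.abs(Z)
    return (mag - mag.min()) / (mag.max() - mag.min() + 1e-12)

# Example: a noisy tone as a stand-in for a sensed primary-user signal.
t = np.arange(4096)
x = np.cos(2 * np.pi * 0.1 * t) + 0.5 * np.random.randn(t.size)
print(signal_to_gray_image(x).shape)
```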

Article
Node-Based QoS-Aware Security Framework for Sinkhole Attacks in Mobile Ad-Hoc Networks
Telecom 2022, 3(3), 407-432; https://doi.org/10.3390/telecom3030022 - 29 Jun 2022
Cited by 2 | Viewed by 1470
Abstract
Most networks strive to provide good security and an acceptable level of performance. Quality of service (QoS) plays an important role in the performance of a network. Mobile ad hoc networks (MANETs) are a decentralized and self-configuring type of wireless network. MANETs are generally challenging environments, and the joint provision of security and QoS becomes a huge challenge. Many researchers in the literature have proposed parallel mechanisms that investigate either security or QoS. This paper presents a QoS-aware security framework for MANETs using a network protocol called the optimized link state routing protocol (OLSR). Security and QoS targets may not necessarily be similar, but this framework seeks to bridge the gap for the provision of an optimally functioning MANET. The framework is evaluated for throughput, jitter, and delay against a sinkhole attack presented in the network. The contributions of this paper are (a) the implementation of a sinkhole attack using OLSR, (b) the design and implementation of a lightweight intrusion detection system using OLSR, and (c) a framework that removes fake routes and optimizes bandwidth. The simulation results revealed that the QoS-aware framework increased the performance of the network by more than 70% in terms of network throughput. Delay and jitter levels were reduced by close to 85% compared to when the network was under attack.
(This article belongs to the Topic Next Generation Intelligent Communications and Networks)

Article
The Smart in Smart Cities: A Framework for Image Classification Using Deep Learning
Sensors 2022, 22(12), 4390; https://doi.org/10.3390/s22124390 - 10 Jun 2022
Cited by 4 | Viewed by 1227
Abstract
The need for a smart city is more pressing today due to the recent pandemic, lockdowns, climate change, population growth, and limitations on the availability of and access to natural resources. However, these challenges can be better faced with the utilization of new technologies. The zoning design of smart cities can mitigate these challenges. This work identifies the main components of a new smart city and then proposes a general framework for designing a smart city that tackles these elements. Then, we propose a technology-driven model to support this framework. A mapping between the proposed general framework and the proposed technology model is then introduced. To highlight the importance and usefulness of the proposed framework, we designed and implemented a smart image-handling system targeted at non-technical personnel. High cost, security, and inconvenience issues may limit cities' abilities to adopt such solutions. Therefore, this work also proposes to design and implement a generalized image processing model using deep learning. The proposed model accepts images from users, then performs self-tuning operations to select the best deep network, and finally produces the required insights without any human intervention. This helps in automating the decision-making process without the need for a specialized data scientist.
(This article belongs to the Topic Next Generation Intelligent Communications and Networks)

Article
A Reinforcement Learning Based Data Caching in Wireless Networks
Appl. Sci. 2022, 12(11), 5692; https://doi.org/10.3390/app12115692 - 03 Jun 2022
Viewed by 1652
Abstract
Data caching has emerged as a promising technique to handle growing data traffic and backhaul congestion of wireless networks. However, there is a concern regarding how and where to place contents to optimize data access by the users. Data caching can be exploited close to users by deploying cache entities at Small Base Stations (SBSs). In this approach, SBSs cache contents through the core network during off-peak traffic hours. Then, SBSs provide cached contents to content-demanding users during peak traffic hours with low latency. In this paper, we exploit the potential of data caching at the SBS level to minimize data access delay. We propose an intelligence-based data caching mechanism inspired by an artificial intelligence approach known as Reinforcement Learning (RL). Our proposed RL-based data caching mechanism adapts through dynamic learning and tracks network states to capture users' diverse and varying data demands. Our proposed approach optimizes data caching at the SBS level by observing users' data demands and locations to efficiently utilize the limited cache resources of the SBS. Extensive simulations are performed to evaluate the performance of the proposed caching mechanism based on various factors such as caching capacity, data library size, etc. The obtained results demonstrate that our proposed caching mechanism achieves 4% performance gain in terms of delay vs. contents, 3.5% performance gain in terms of delay vs. users, 2.6% performance gain in terms of delay vs. cache capacity, 18% performance gain in terms of percentage traffic offloading vs. popularity skewness (γ), and 6% performance gain in terms of backhaul saving vs. cache capacity.
(This article belongs to the Topic Next Generation Intelligent Communications and Networks)
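
An RL-flavored caching policy of the kind sketched above can be prototyped with an epsilon-greedy rule over running per-content demand estimates. This is only an illustrative stand-in for the paper's mechanism; all names and step sizes are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def update_demand(q, content, requested, alpha=0.05):
    """Track a (possibly non-stationary) demand estimate per content."""
    q[content] += alpha * (float(requested) - q[content])
    return q

def choose_cache_set(q, cache_size, eps=0.1):
    """Epsilon-greedy selection of which contents the SBS should cache."""
    if rng.random() < eps:                              # explore
        return rng.choice(len(q), size=cache_size, replace=False)
    return np.argsort(q)[-cache_size:]                  # exploit top demand

q = np.zeros(20)                 # 20 contents, cache holds 5
q = update_demand(q, content=3, requested=True)
print(choose_cache_set(q, cache_size=5))
```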

Article
Deep Learning for Joint Pilot Design and Channel Estimation in MIMO-OFDM Systems
Sensors 2022, 22(11), 4188; https://doi.org/10.3390/s22114188 - 31 May 2022
Cited by 8 | Viewed by 1704
Abstract
In MIMO-OFDM systems, pilot design and estimation algorithm jointly determine the reliability and effectiveness of pilot-based channel estimation methods. In order to improve the channel estimation accuracy with less pilot overhead, a deep learning scheme for joint pilot design and channel estimation is proposed. This new hybrid network structure is named CAGAN, which is composed of a concrete autoencoder (concrete AE) and a conditional generative adversarial network (cGAN). We first use the concrete AE to find and select the most informative positions in the time-frequency grid to achieve pilot optimization design and then input the optimized pilots to the cGAN to complete channel estimation. Simulation experiments show that the CAGAN scheme outperforms the traditional LS and MMSE estimation methods with fewer pilots, and has good robustness to environmental noise.
(This article belongs to the Topic Next Generation Intelligent Communications and Networks)

Article
B5GEMINI: AI-Driven Network Digital Twin
Sensors 2022, 22(11), 4106; https://doi.org/10.3390/s22114106 - 28 May 2022
Cited by 6 | Viewed by 4204
Abstract
Network Digital Twin (NDT) is a new technology that builds on the concept of Digital Twins (DT) to create a virtual representation of the physical objects of a telecommunications network. NDT bridges physical and virtual spaces to enable coordination and synchronization of physical parts while eliminating the need to directly interact with them. There is broad consensus that Artificial Intelligence (AI) and Machine Learning (ML) are among the key enablers of this technology. In this work, we present B5GEMINI, an NDT for 5G and beyond networks that makes extensive use of AI and ML. First, we present the infrastructural and architectural components that support B5GEMINI. Next, we explore four paradigmatic applications where AI/ML can leverage B5GEMINI for building new AI-powered applications. In addition, we identify the main components of the AI ecosystem of B5GEMINI, outlining emerging research trends and identifying the open challenges that must be solved along the way. Finally, we present two relevant use cases in the application of NDTs with an extensive use of ML. The first use case lies in the cybersecurity domain and proposes the use of B5GEMINI to facilitate the design of ML-based attack detectors, and the second addresses the design of energy-efficient ML components and introduces the modular development of NDTs, adopting the Digital Map concept as a novelty.
(This article belongs to the Topic Next Generation Intelligent Communications and Networks)

Article
A 24-to-30 GHz Ultra-High-Linearity Down-Conversion Mixer for 5G Applications Using a New Linearization Method
Sensors 2022, 22(10), 3802; https://doi.org/10.3390/s22103802 - 17 May 2022
Cited by 1 | Viewed by 1400
Abstract
The linearity of active mixers is usually determined by the input transistors, and many works have been proposed to improve it with modified input stages at the cost of a more complex structure or more power consumption. A new linearization method for active mixers is proposed in this paper; the input 1 dB compression point (IP1dB) and output 1 dB compression point (OP1dB) are greatly improved by exploiting the "reverse uplift" phenomenon. Compared with other linearization methods, the proposed one is simpler, more efficient, and sacrifices less conversion gain. Using this method, an ultra-high-linearity double-balanced down-conversion mixer with a wide IF bandwidth is designed and fabricated in a 130 nm SiGe BiCMOS process. The proposed mixer includes a Gilbert cell, a pair of phase-adjusting inductors, and a Marchand-balun-based output network. Under a 1.6 V supply voltage, the measurement results show that the mixer exhibits an excellent IP1dB of +7.2 to +10.1 dBm and an average OP1dB of +5.4 dBm, which is state-of-the-art linearity performance among mixers in silicon-based processes, whether active or passive. Moreover, a wide IF bandwidth of 8 GHz, from 3 GHz to 11 GHz, was achieved. The circuit consumes 19.8 mW and occupies 0.48 mm², including all pads. The use of the "reverse uplift" phenomenon allows us to implement high-linearity circuits more efficiently, which is helpful for the design of 5G high-speed communication transceivers.
(This article belongs to the Topic Next Generation Intelligent Communications and Networks)
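
The two compression points quoted above are linked through the conversion gain: at the 1 dB compression point the effective gain is (gain - 1) dB, so OP1dB = IP1dB + gain - 1. A quick sanity check in code; note the gain value below is inferred from the reported figures, not stated in the abstract.

```python
def op1db_from_ip1db(ip1db_dbm: float, gain_db: float) -> float:
    """OP1dB = IP1dB + gain - 1 (all quantities in dBm/dB), since the
    gain has dropped by 1 dB at the compression point."""
    return ip1db_dbm + gain_db - 1.0

# An IP1dB of about +7.2 dBm with an average OP1dB of +5.4 dBm implies
# a conversion gain near 5.4 - 7.2 + 1 = -0.8 dB (inferred, not given).
print(op1db_from_ip1db(7.2, -0.8))  # -> 5.4
```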
