Topic Editors

Prof. Dr. Yichuang Sun
School of Engineering and Technology, University of Hertfordshire, Hatfield, Hertfordshire AL10 9AB, UK

Dr. Haeyoung Lee
Department of Electrical and Electronic Engineering, University of Hertfordshire, Hatfield AL10 9EU, UK

Dr. Oluyomi Simpson
School of Engineering and Technology, University of Hertfordshire, Hatfield, Hertfordshire AL10 9AB, UK

Machine Learning in Communication Systems and Networks

Abstract submission deadline: 20 June 2023
Manuscript submission deadline: 20 August 2023

Topic Information

Dear Colleagues,

Recent advances in machine learning, together with the availability of powerful computing platforms, have attracted considerable attention from the academic, research, and industry communities. Machine learning is considered a promising tool for tackling the challenges of increasingly complex, heterogeneous, and dynamic communication environments. It can contribute to the intelligent management and optimization of communication systems and networks by enabling us to predict changes, identify patterns of uncertainty in the communication environment, and make data-driven decisions.

This Topic will focus on machine learning-based solutions to manage complex issues in communication systems and networks across various layers and within various ranges of communication applications. The objective of the Topic is to share and discuss recent advances and future trends of machine learning for intelligent communication. Original studies (unpublished and not currently under review by another journal) are welcome in relevant areas, including (but not limited to) the following:

  • Fundamental limits of machine learning in communication.
  • Design and implementation of advanced machine learning algorithms (including distributed learning) in communication.
  • Machine learning for physical layer and cross-layer processing (e.g., channel modeling and estimation, interference avoidance, beamforming and antenna configuration, etc.).
  • Machine learning for adaptive radio resource allocation and optimization.
  • Machine learning for network slicing, virtualization and software defined networking.
  • Service performance optimization and evaluation of machine learning based solutions in various vertical applications (e.g., healthcare, transport, aquaculture, farming, etc.).
  • Machine learning for anomaly detection in communication systems and networks.
  • Security, privacy and trust of machine learning over communication systems and networks.

Prof. Dr. Yichuang Sun
Dr. Haeyoung Lee
Dr. Oluyomi Simpson
Topic Editors

Keywords

  • wireless communications
  • mobile communications
  • vehicular communications
  • 5G/6G systems and networks
  • artificial intelligence
  • machine learning
  • deep learning

Participating Journals

Journal Name | Impact Factor | CiteScore | Launched Year | First Decision (median) | APC
Applied Sciences (applsci) | 2.838 | 3.7 | 2011 | 14.9 days | 2300 CHF
Sensors (sensors) | 3.847 | 6.4 | 2001 | 15 days | 2400 CHF
Electronics (electronics) | 2.690 | 3.7 | 2012 | 14.4 days | 2000 CHF
Photonics (photonics) | 2.536 | 2.3 | 2014 | 13 days | 1800 CHF
Journal of Sensor and Actuator Networks (jsan) | - | 6.9 | 2012 | 18.4 days | 1600 CHF

Preprints is a platform dedicated to making early versions of research outputs permanently available and citable. MDPI journals allow posting on preprint servers such as Preprints.org prior to publication. For more details about preprints, please visit https://www.preprints.org.

Published Papers (10 papers)

Article
Fast-Convergence Reinforcement Learning for Routing in LEO Satellite Networks
Sensors 2023, 23(11), 5180; https://doi.org/10.3390/s23115180 - 29 May 2023
Abstract
Fast-convergence routing is a critical issue for Low Earth Orbit (LEO) constellation networks, because these networks undergo dynamic topology changes and their transmission requirements can vary over time. However, most previous research has focused on the Open Shortest Path First (OSPF) routing algorithm, which is not well suited to handling the frequent link-state changes of a LEO satellite network. In this regard, we propose a Fast-Convergence Reinforcement Learning Satellite Routing Algorithm (FRL–SR) for LEO satellite networks, in which a satellite can quickly obtain the network link status and adjust its routing strategy accordingly. In FRL–SR, each satellite node is considered an agent, and the agent selects the appropriate port for packet forwarding based on its routing policy. Whenever the satellite network state changes, the agent sends "hello" packets to the neighboring nodes to update their routing policies. Compared to traditional reinforcement learning algorithms, FRL–SR can perceive network information and converge faster. Additionally, FRL–SR can mask the dynamics of the satellite network topology and adaptively adjust the forwarding strategy based on the link state. The experimental results demonstrate that the proposed FRL–SR algorithm outperforms the Dijkstra algorithm in terms of average delay, packet arrival ratio, and network load balance.
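
As an illustration of the per-node learning loop described above, here is a minimal Q-routing-style sketch in Python, assuming a toy topology; the class and method names (`QRoutingNode`, `on_hello`) are hypothetical and not taken from the FRL–SR implementation.

```python
import random

class QRoutingNode:
    """Toy per-satellite agent: learns which neighbor (port) minimizes
    the estimated delivery delay to each destination (Q-routing style)."""

    def __init__(self, node_id, neighbors, n_nodes, alpha=0.5, eps=0.1):
        self.node_id = node_id
        self.neighbors = list(neighbors)
        # Q[dest][nbr] = estimated delay to dest when forwarding via nbr
        self.Q = {d: {n: 0.0 for n in self.neighbors} for d in range(n_nodes)}
        self.alpha, self.eps = alpha, eps

    def choose_port(self, dest):
        # epsilon-greedy selection of the forwarding neighbor
        if random.random() < self.eps:
            return random.choice(self.neighbors)
        return min(self.Q[dest], key=self.Q[dest].get)

    def on_hello(self, nbr, dest, nbr_best_estimate, link_delay):
        # "hello" packets from neighbors carry fresh link-state info,
        # letting the agent re-converge quickly after topology changes:
        # target = delay over the chosen link + neighbor's own best estimate.
        target = link_delay + nbr_best_estimate
        self.Q[dest][nbr] += self.alpha * (target - self.Q[dest][nbr])

# Usage: one agent per satellite, forwarding toward destination 5.
node = QRoutingNode(node_id=0, neighbors=[1, 2, 3], n_nodes=8)
port = node.choose_port(dest=5)
```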

Article
A REM Update Methodology Based on Clustering and Random Forest
Appl. Sci. 2023, 13(9), 5362; https://doi.org/10.3390/app13095362 - 25 Apr 2023
Abstract
In this paper, we propose a radio environment map (REM) update methodology based on clustering and machine learning for indoor coverage. We use real measurements collected by the TurtleBot3 mobile robot, using the received signal strength indicator (RSSI) as a measure of link quality between transmitter and receiver. We propose a practical framework for timely REM updates in dynamic wireless communication environments, where we need to deal with variations in physical element distributions, environmental factors, movements of people and devices, and so on. In the proposed approach, we first rely on a historical dataset from the area of interest, which is used to determine the number of clusters via the K-means algorithm. Next, we divide the samples from the historical dataset into clusters, and we train one random forest (RF) model on the corresponding historical data from each cluster. Then, when new measurements are collected, each new sample is assigned to one cluster for a timely update of that cluster's RF model. Simulation results validate the superior performance of the proposed scheme compared with several well-known machine learning algorithms and a baseline scheme without clustering.
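
The cluster-then-update pipeline lends itself to a compact sketch. The following Python example, assuming synthetic RSSI measurements and an arbitrary cluster count, mirrors the three steps (K-means clustering, one random forest per cluster, cluster-local refits on new samples) with scikit-learn; it is an illustration, not the authors' code.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Hypothetical historical data: (x, y) positions and measured RSSI (dBm),
# here generated from a simple log-distance model plus noise.
X_hist = rng.uniform(0, 20, size=(500, 2))
y_hist = (-40 - 20 * np.log10(np.linalg.norm(X_hist - [10, 10], axis=1) + 1)
          + rng.normal(0, 2, 500))

# 1) Cluster the area of interest (k chosen here arbitrarily; the paper
#    determines it from the historical dataset).
k = 4
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X_hist)

# 2) Train one RF per cluster on that cluster's historical samples.
forests = {}
for c in range(k):
    mask = km.labels_ == c
    forests[c] = RandomForestRegressor(n_estimators=100, random_state=0)
    forests[c].fit(X_hist[mask], y_hist[mask])

# 3) New measurements are assigned to a cluster, and only that cluster's
#    RF is refit -- a timely, local update of the REM.
X_new = rng.uniform(0, 20, size=(20, 2))
y_new = -40 - 20 * np.log10(np.linalg.norm(X_new - [10, 10], axis=1) + 1)
c_new = km.predict(X_new)
for c in np.unique(c_new):
    sel = c_new == c
    X_c = np.vstack([X_hist[km.labels_ == c], X_new[sel]])
    y_c = np.concatenate([y_hist[km.labels_ == c], y_new[sel]])
    forests[c].fit(X_c, y_c)
```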

Article
A Cascade Network for Blind Recognition of LDPC Codes
Electronics 2023, 12(9), 1979; https://doi.org/10.3390/electronics12091979 - 24 Apr 2023
Abstract
Blind recognition of channel coding plays a vital role in non-cooperative communication. Most existing algorithms for blind recognition of Low-Density Parity-Check (LDPC) codes are difficult to apply in practice and suffer from high time and space complexity. Inspired by deep learning, we propose an architecture for blind recognition of LDPC codes that concatenates a Transformer-based network with a convolutional neural network (CNN). The CNN suppresses the noise in real time, and the Transformer-based network then identifies the rate and length of the LDPC code. To train high-performing denoising and recognition networks, we build our own datasets and define dedicated loss functions for the denoising network. Simulation results show that this architecture achieves better performance than the traditional method at low signal-to-noise ratio (SNR). Compared with existing methods, this approach is more flexible and can therefore be deployed quickly.
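
A minimal PyTorch sketch of such a cascade follows, assuming toy input sizes and eight hypothetical (rate, length) classes; the layer sizes and the `DenoiseCNN`/`CodeClassifier` names are illustrative, not the architecture from the paper.

```python
import torch
import torch.nn as nn

class DenoiseCNN(nn.Module):
    """1-D CNN that maps a noisy soft-bit sequence to a cleaned one."""
    def __init__(self, ch=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, ch, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(ch, ch, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(ch, 1, kernel_size=5, padding=2),
        )
    def forward(self, x):          # x: (batch, 1, n)
        return self.net(x)

class CodeClassifier(nn.Module):
    """Transformer encoder over the denoised sequence, followed by a
    classification head over candidate (rate, length) LDPC classes."""
    def __init__(self, n_classes, d_model=64, n_layers=2):
        super().__init__()
        self.embed = nn.Conv1d(1, d_model, kernel_size=1)
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, n_classes)
    def forward(self, x):          # x: (batch, 1, n)
        z = self.embed(x).transpose(1, 2)   # (batch, n, d_model)
        z = self.encoder(z).mean(dim=1)     # pool over the sequence
        return self.head(z)

# Cascade: denoise first, then identify the code among the classes.
denoiser, classifier = DenoiseCNN(), CodeClassifier(n_classes=8)
noisy = torch.randn(4, 1, 128)              # toy noisy received soft bits
logits = classifier(denoiser(noisy))
```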

Communication
Optical Encoding Model Based on Orbital Angular Momentum Powered by Machine Learning
Sensors 2023, 23(5), 2755; https://doi.org/10.3390/s23052755 - 2 Mar 2023
Abstract
Based on the orbital angular momentum (OAM) properties of Laguerre–Gaussian beams LG(p, ℓ), a robust optical encoding model for efficient data transmission applications is designed. This paper presents an optical encoding model based on an intensity profile generated by a coherent superposition of two OAM-carrying Laguerre–Gaussian modes, together with a machine learning detection method. In the encoding process, the intensity profile for data encoding is generated based on the selection of the p and ℓ indices, while the decoding process is performed using a support vector machine (SVM) algorithm. Two different decoding models based on an SVM algorithm are tested to verify the robustness of the optical encoding model, with one of the SVM models achieving a bit error rate (BER) of 10^-9 at a signal-to-noise ratio of 10.2 dB.
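
The encoding/decoding idea can be reproduced in miniature. The sketch below, assuming a small symbol alphabet and additive Gaussian noise, generates petal-shaped intensity profiles from superpositions of LG(p, ±ℓ) modes and trains a scikit-learn SVM to recover the symbol; it is a toy illustration, not the paper's model.

```python
import numpy as np
from scipy.special import genlaguerre
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

def lg_field(p, l, r, phi, w=1.0):
    """Laguerre-Gaussian field (unnormalized) at polar coordinates (r, phi)."""
    rho = 2 * r**2 / w**2
    radial = (r * np.sqrt(2) / w) ** abs(l) * genlaguerre(p, abs(l))(rho)
    return radial * np.exp(-r**2 / w**2) * np.exp(1j * l * phi)

# Grid and symbol alphabet: each symbol is a (p, l) pair whose superposition
# with the opposite-charge mode yields a distinct petal intensity pattern.
n = 32
x = np.linspace(-3, 3, n)
X, Y = np.meshgrid(x, x)
R, PHI = np.hypot(X, Y), np.arctan2(Y, X)
alphabet = [(0, 1), (0, 2), (1, 1), (1, 2)]

rng = np.random.default_rng(0)
samples, labels = [], []
for k, (p, l) in enumerate(alphabet):
    field = lg_field(p, l, R, PHI) + lg_field(p, -l, R, PHI)
    I = np.abs(field) ** 2
    I /= I.max()
    for _ in range(100):               # noisy realizations of each symbol
        samples.append((I + rng.normal(0, 0.05, I.shape)).ravel())
        labels.append(k)

Xtr, Xte, ytr, yte = train_test_split(np.array(samples), np.array(labels),
                                      test_size=0.25, random_state=0)
clf = SVC(kernel="rbf").fit(Xtr, ytr)
print("symbol accuracy:", clf.score(Xte, yte))
```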

Article
A Study on the Impact of Integrating Reinforcement Learning for Channel Prediction and Power Allocation Scheme in MISO-NOMA System
Sensors 2023, 23(3), 1383; https://doi.org/10.3390/s23031383 - 26 Jan 2023
Abstract
In this study, we inspect the influence of adopting Reinforcement Learning (RL) to predict the channel parameters of user devices in a Power Domain Multi-Input Single-Output Non-Orthogonal Multiple Access (MISO-NOMA) system. In the RL-based channel prediction approach, a Q-learning algorithm is developed and incorporated into the NOMA system so that the developed Q-model can predict the channel coefficients for every user device. The purpose of adopting the developed Q-learning procedure is to maximize the received downlink sum rate and decrease the estimation loss. To this end, the developed Q-algorithm is initialized using different channel statistics and then updated through interaction with the environment in order to approximate the channel coefficients for each device. The predicted parameters are utilized at the receiver side to recover the desired data. Furthermore, by maximizing the sum rate of the examined user devices, the power factors can be deduced analytically, allocating the optimal power factor to every user device in the system. In addition, this work inspects how channel prediction based on the developed Q-learning model and the power allocation policy can be jointly incorporated for multiuser recognition in the examined MISO-NOMA system. Simulation results over several performance metrics demonstrate that the developed Q-learning algorithm is competitive for channel estimation when compared with benchmark schemes such as deep learning-based long short-term memory (LSTM), the RL-based actor-critic algorithm, the RL-based state-action-reward-state-action (SARSA) algorithm, and a standard channel estimation scheme based on the minimum mean square error procedure.
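
As a loose, toy illustration of Q-learning-based channel prediction (not the authors' algorithm), the sketch below discretizes a synthetic time-varying channel gain into levels and learns, for each current level, which level to predict next; the first-order autoregressive channel model and all hyperparameters are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy time-varying channel gain, discretized into a small number of
# levels so that a tabular Q-learning agent applies.
n_levels, lr, gamma, eps = 8, 0.1, 0.6, 0.1
edges = np.linspace(0, 2.5, n_levels + 1)

def level(g):
    return int(np.clip(np.digitize(g, edges) - 1, 0, n_levels - 1))

Q = np.zeros((n_levels, n_levels))   # Q[state, predicted-next-level]
g = 1.0
for t in range(20000):
    s = level(g)
    # epsilon-greedy choice of the predicted next channel level
    a = rng.integers(n_levels) if rng.random() < eps else int(Q[s].argmax())
    g = 0.9 * g + 0.1 * abs(rng.normal(1.0, 0.5))    # channel evolves (AR(1))
    s_next = level(g)
    reward = -abs(a - s_next)        # penalize prediction error
    Q[s, a] += lr * (reward + gamma * Q[s_next].max() - Q[s, a])

# After training, Q[s].argmax() is the predicted next channel level for
# state s; its bin center can stand in for the channel coefficient.
```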

Article
Performance Enhancement in Federated Learning by Reducing Class Imbalance of Non-IID Data
Sensors 2023, 23(3), 1152; https://doi.org/10.3390/s23031152 - 19 Jan 2023
Abstract
Due to the distributed data collection and learning in federated learning, many clients conduct local training with non-independent and identically distributed (non-IID) datasets. Training on these datasets results in severe performance degradation. We propose an efficient algorithm for enhancing the performance of federated learning by overcoming the negative effects of non-IID datasets. First, the intra-client class imbalance is reduced by rendering the class distribution of each client close to a uniform distribution. Second, the clients participating in federated learning are selected so as to make their aggregated class distribution close to a uniform distribution, mitigating the inter-client class imbalance, i.e., the difference in class distribution among clients. In addition, the amount of local training data for the selected clients is finely adjusted. Finally, in order to increase the efficiency of federated learning, the batch size and learning rate of local training for the selected clients are dynamically controlled to reflect the effective size of each client's local dataset. In performance evaluations on the CIFAR-10 and MNIST datasets, the proposed algorithm achieves 20% higher accuracy than existing federated learning algorithms. Moreover, in achieving this accuracy improvement, the proposed algorithm uses fewer computation and communication resources than existing algorithms in terms of the amount of data used and the number of clients participating in training.
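
The inter-client selection step can be sketched directly. The hypothetical `select_clients` routine below greedily adds the client whose label counts bring the aggregated class distribution closest to uniform, measured by KL divergence; the greedy criterion is an assumption, since the paper's exact selection rule is not reproduced here.

```python
import numpy as np

def kl_to_uniform(counts):
    """KL divergence from the empirical class distribution to uniform."""
    p = np.clip(counts / counts.sum(), 1e-12, None)
    u = 1.0 / len(counts)
    return float(np.sum(p * np.log(p / u)))

def select_clients(client_counts, n_select):
    """Greedy selection: repeatedly add the client whose data brings the
    aggregated class distribution closest to uniform."""
    selected = []
    total = np.zeros_like(client_counts[0], dtype=float)
    remaining = list(range(len(client_counts)))
    for _ in range(n_select):
        best = min(remaining,
                   key=lambda c: kl_to_uniform(total + client_counts[c]))
        selected.append(best)
        total += client_counts[best]
        remaining.remove(best)
    return selected, total

# Toy example: 10 clients, 5 classes, heavily non-IID label counts.
rng = np.random.default_rng(0)
clients = [rng.multinomial(200, rng.dirichlet(np.ones(5) * 0.3))
           for _ in range(10)]
chosen, agg = select_clients(clients, n_select=4)
print("selected clients:", chosen, "aggregate counts:", agg)
```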

Article
Vehicular Environment Identification Based on Channel State Information and Deep Learning
Sensors 2022, 22(22), 9018; https://doi.org/10.3390/s22229018 - 21 Nov 2022
Abstract
This paper presents a novel vehicular environment identification approach based on deep learning. It exploits the vehicular wireless channel characteristics, in the form of Channel State Information (CSI), at the receiver side of a connected vehicle in order to identify the type of environment in which the vehicle is driving, without any need for dedicated sensors such as cameras or radars. We treat environment identification as a classification problem and propose a new convolutional neural network (CNN) architecture to address it. The estimated CSI is used as the input feature to train the model. To perform the identification process, the model is targeted for implementation in an autonomous vehicle connected to a vehicular network (VN). The proposed model is extensively evaluated, showing that it can reliably recognize the surrounding environment with high accuracy (96.48%). Our model is compared with related approaches and state-of-the-art classification architectures, and the experiments show that it yields favorable performance compared to all other considered methods.
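
A minimal PyTorch sketch of a CSI-based environment classifier follows, assuming CSI arranged as a two-channel (real/imaginary) subcarrier-by-time grid and four placeholder environment classes; the `CsiEnvNet` architecture is illustrative and not the paper's 96.48%-accuracy model.

```python
import torch
import torch.nn as nn

class CsiEnvNet(nn.Module):
    """Small CNN that classifies the driving environment from a CSI
    'image' (subcarriers x time snapshots, real/imag as two channels)."""
    def __init__(self, n_envs=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, n_envs)

    def forward(self, csi):                 # csi: (batch, 2, Nsc, Nt)
        z = self.features(csi).flatten(1)
        return self.classifier(z)

# Toy forward pass: 64 subcarriers x 32 snapshots, 4 environment classes
# (e.g., highway, rural, urban, tunnel -- these labels are placeholders).
model = CsiEnvNet(n_envs=4)
fake_csi = torch.randn(8, 2, 64, 32)
print(model(fake_csi).shape)                # -> torch.Size([8, 4])
```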

Article
Federated Deep Reinforcement Learning for Joint AeBSs Deployment and Computation Offloading in Aerial Edge Computing Network
Electronics 2022, 11(21), 3641; https://doi.org/10.3390/electronics11213641 - 7 Nov 2022
Abstract
In the 6G aerial network, all aerial communication nodes have computing and storage functions and can perform real-time wireless signal processing and resource management. In order to make full use of the computing resources of aerial nodes, this paper studies a mobile edge computing (MEC) system based on aerial base stations (AeBSs). It formulates the joint optimization problem of computation offloading and AeBS deployment control, with the goals of minimizing task processing delay and energy consumption, and designs a deployment and computation offloading scheme based on federated deep reinforcement learning. Specifically, each low-altitude AeBS agent simultaneously trains two neural networks that generate the deployment and offloading strategies, respectively, while a high-altitude global node aggregates the local model parameters uploaded by each low-altitude platform. The agents can be trained offline, updated quickly online in response to changes in the environment, and can rapidly generate optimal deployment and offloading strategies. The simulation results show that our method achieves good performance in a very short time.
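
The aggregation role of the high-altitude global node corresponds to federated averaging. The sketch below, assuming small placeholder policy networks, shows a plain FedAvg step over local state dicts; the `fed_avg` helper and network shapes are hypothetical.

```python
import copy
import torch
import torch.nn as nn

def fed_avg(local_states, weights=None):
    """Aggregate local model state_dicts by (weighted) averaging --
    the role of the high-altitude global node described above."""
    n = len(local_states)
    weights = weights or [1.0 / n] * n
    global_state = copy.deepcopy(local_states[0])
    for key in global_state:
        global_state[key] = sum(w * s[key].float()
                                for w, s in zip(weights, local_states))
    return global_state

# Toy setup: each low-altitude AeBS agent trains two small policy nets
# (deployment and offloading); here we average one of them for brevity.
def make_policy():
    return nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 4))

agents = [make_policy() for _ in range(3)]
avg_state = fed_avg([a.state_dict() for a in agents])
for a in agents:                       # broadcast the aggregated model back
    a.load_state_dict(avg_state)
```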

Article
Sightless but Not Blind: A Non-Ideal Spectrum Sensing Algorithm Countering Intelligent Jamming for Wireless Communication
Electronics 2022, 11(20), 3402; https://doi.org/10.3390/electronics11203402 - 20 Oct 2022
Abstract
Existing intelligent anti-jamming communication methods fail to consider that sensing may be inaccurate. This paper therefore puts forward an intelligent anti-jamming method for wireless communication under non-ideal spectrum sensing (NISS). In a malicious jamming environment, the wireless communication system uses Q-learning (QL) to learn how the jamming changes, accounts for the false-alarm and missed-detection probabilities of jamming sensing, and in each time slot selects the channel with the best long-term reward for communication. The simulation results show that, under linear sweep jamming and intelligent blocking jamming, the proposed algorithm converges faster than QL at the same decision accuracy. Compared with wide-band spectrum sensing (WBSS), an algorithm that does not consider non-ideal spectrum sensing, the proposed algorithm achieves higher decision accuracy at the same convergence rate.
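
One way to fold non-ideal sensing into the learning loop is to replace the raw sensing result with its posterior given the false-alarm and missed-detection probabilities. The bandit-style sketch below illustrates that idea under a toy sweep jammer; the posterior-reward formulation and all parameters are assumptions, not the NISS algorithm itself.

```python
import numpy as np

rng = np.random.default_rng(0)
n_ch = 8                       # candidate channels
p_fa, p_md = 0.1, 0.15         # false-alarm / missed-detection probabilities
lr, eps = 0.2, 0.1
prior_jam = 1.0 / n_ch         # assumed prior that a given channel is jammed

def posterior_jammed(sensed_busy):
    """P(channel jammed | sensing outcome) under non-ideal sensing."""
    if sensed_busy:            # true detection or false alarm
        num = (1 - p_md) * prior_jam
        den = num + p_fa * (1 - prior_jam)
    else:                      # truly idle or missed detection
        num = p_md * prior_jam
        den = num + (1 - p_fa) * (1 - prior_jam)
    return num / den

Q = np.zeros(n_ch)             # long-term value of transmitting on each channel
for t in range(5000):
    jammed = t % n_ch          # toy linear sweep jammer
    ch = rng.integers(n_ch) if rng.random() < eps else int(Q.argmax())
    if ch == jammed:           # sensing is imperfect in both directions
        sensed_busy = rng.random() > p_md
    else:
        sensed_busy = rng.random() < p_fa
    # Use the sensing posterior, not the raw (possibly wrong) result,
    # as the expected reward of transmitting on this channel.
    reward = 1.0 - posterior_jammed(sensed_busy)
    Q[ch] += lr * (reward - Q[ch])    # bandit-style value update
```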

Communication
High Speed Decoding for High-Rate and Short-Length Reed–Muller Code Using Auto-Decoder
Appl. Sci. 2022, 12(18), 9225; https://doi.org/10.3390/app12189225 - 14 Sep 2022
Abstract
In this paper, we show that applying a machine learning technique called the auto-decoder (AD) to decoding high-rate, short-length Reed–Muller (RM) codes achieves maximum likelihood decoding (MLD) performance with a faster decoding speed than the fast Hadamard transform (FHT) in additive white Gaussian noise (AWGN) channels. The decoding is approximately 1.8 times and 125 times faster than FHT decoding for R(1,4) and R(2,4), respectively. Unlike in a conventional auto-encoder (AE), the number of nodes in the hidden layer of the AD is larger than that of the input layer. Two ADs are combined in parallel and merged, then cascaded with one fully connected layer to improve the bit error rate (BER) performance of the code.
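
The described structure (over-complete hidden layers, two parallel ADs merged into one fully connected layer) can be sketched directly in PyTorch. The sizes below assume R(2,4) with n = 16 and k = 11; everything else (widths, activations) is illustrative rather than the paper's exact design.

```python
import torch
import torch.nn as nn

class AutoDecoder(nn.Module):
    """'Auto-decoder' branch: unlike a conventional auto-encoder, the
    hidden layer is *wider* than the input (over-complete)."""
    def __init__(self, n=16, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n, hidden), nn.ReLU(),
            nn.Linear(hidden, n), nn.ReLU(),
        )
    def forward(self, x):
        return self.net(x)

class ParallelADDecoder(nn.Module):
    """Two ADs in parallel, merged, then one fully connected layer that
    outputs the k information bits -- mirroring the described design."""
    def __init__(self, n=16, k=11):       # e.g., R(2,4): n=16, k=11
        super().__init__()
        self.ad1, self.ad2 = AutoDecoder(n), AutoDecoder(n)
        self.fc = nn.Linear(2 * n, k)
    def forward(self, y):                 # y: noisy channel observations
        merged = torch.cat([self.ad1(y), self.ad2(y)], dim=-1)
        return torch.sigmoid(self.fc(merged))   # per-bit soft estimates

decoder = ParallelADDecoder()
y = torch.randn(32, 16)                   # toy AWGN-corrupted codewords
bits_hat = decoder(y)                     # (32, 11) soft bit estimates
```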
