Topic Editors

Prof. Dr. Yichuang Sun
School of Engineering and Technology, University of Hertfordshire, Hatfield AL10 9AB, UK
Dr. Haeyoung Lee
Department of Electrical and Electronic Engineering, University of Hertfordshire, Hatfield AL10 9EU, UK
Dr. Oluyomi Simpson
School of Engineering and Technology, University of Hertfordshire, Hatfield AL10 9AB, UK

Machine Learning in Communication Systems and Networks

Abstract submission deadline: closed (31 October 2023)
Manuscript submission deadline: closed (31 December 2023)
Viewed by 34773

A printed edition is available here.

Topic Information

Dear Colleagues,

Recent advances in machine learning, together with the availability of powerful computing platforms, have attracted considerable attention from the academic, research, and industry communities. Machine learning is considered a promising tool for tackling the challenges of increasingly complex, heterogeneous, and dynamic communication environments. It can contribute to the intelligent management and optimization of communication systems and networks by enabling us to predict changes, find patterns in the uncertainties of the communication environment, and make data-driven decisions.

This Topic focuses on machine learning-based solutions for managing complex issues in communication systems and networks, across various layers and a wide range of communication applications. Its objective is to share and discuss recent advances and future trends in machine learning for intelligent communication. Original studies (unpublished and not currently under review by another journal) are welcome in relevant areas, including (but not limited to) the following:

  • Fundamental limits of machine learning in communication.
  • Design and implementation of advanced machine learning algorithms (including distributed learning) in communication.
  • Machine learning for physical layer and cross-layer processing (e.g., channel modeling and estimation, interference avoidance, beamforming and antenna configuration, etc.).
  • Machine learning for adaptive radio resource allocation and optimization.
  • Machine learning for network slicing, virtualization and software defined networking.
  • Service performance optimization and evaluation of machine learning-based solutions in various vertical applications (e.g., healthcare, transport, aquaculture, farming, etc.).
  • Machine learning for anomaly detection in communication systems and networks.
  • Security, privacy and trust of machine learning over communication systems and networks.

Prof. Dr. Yichuang Sun
Dr. Haeyoung Lee
Dr. Oluyomi Simpson
Topic Editors

Keywords

  • wireless communications
  • mobile communications
  • vehicular communications
  • 5G/6G systems and networks
  • artificial intelligence
  • machine learning
  • deep learning

Participating Journals

Journal Name | Impact Factor | CiteScore | Launched Year | First Decision (median) | APC
Applied Sciences (applsci) | 2.7 | 4.5 | 2011 | 16.9 days | CHF 2400
Sensors (sensors) | 3.9 | 6.8 | 2001 | 17 days | CHF 2600
Electronics (electronics) | 2.9 | 4.7 | 2012 | 15.6 days | CHF 2400
Photonics (photonics) | 2.4 | 2.3 | 2014 | 15.5 days | CHF 2400
Journal of Sensor and Actuator Networks (jsan) | 3.5 | 7.6 | 2012 | 20.4 days | CHF 2000
Telecom (telecom) | – | 3.1 | 2020 | 26.1 days | CHF 1200

Preprints.org is a multidisciplinary platform providing a preprint service dedicated to sharing your research from the start and empowering your research journey.

MDPI Topics is cooperating with Preprints.org and has built a direct connection between MDPI journals and Preprints.org. Authors are encouraged to take advantage of the following benefits by posting a preprint at Preprints.org prior to publication:

  1. Immediately share your ideas ahead of publication and establish your research priority;
  2. Protect your ideas from plagiarism with a time-stamped preprint article;
  3. Enhance the exposure and impact of your research;
  4. Receive feedback from your peers in advance;
  5. Have it indexed in Web of Science (Preprint Citation Index), Google Scholar, Crossref, SHARE, PrePubMed, Scilit and Europe PMC.

Published Papers (21 papers)

6 pages, 163 KiB  
Editorial
Machine Learning in Communication Systems and Networks
by Yichuang Sun, Haeyoung Lee and Oluyomi Simpson
Sensors 2024, 24(6), 1925; https://doi.org/10.3390/s24061925 - 17 Mar 2024
Viewed by 597
Abstract
The landscape of communication environments is undergoing a revolutionary transformation, driven by the relentless evolution of technology and the growing demands of an interconnected world [...] Full article
(This article belongs to the Topic Machine Learning in Communication Systems and Networks)
20 pages, 668 KiB  
Article
Beamforming Optimization with the Assistance of Deep Learning in a Rate-Splitting Multiple-Access Simultaneous Wireless Information and Power Transfer System with a Power Beacon
by Mario R. Camana, Carla E. Garcia and Insoo Koo
Electronics 2024, 13(5), 872; https://doi.org/10.3390/electronics13050872 - 23 Feb 2024
Viewed by 569
Abstract
This study examined the implementation of rate-splitting multiple access (RSMA) in a multiple-input single-output system using simultaneous wireless information and power transfer (SWIPT) technology. The coexistence of a base station and a power beacon was considered, aiming to transmit information and energy to two sets of users. One set comprises users who solely harvest energy, whereas the other can decode information and energy using a power-splitting (PS) structure. The main objective of this optimization was to minimize the total transmit power of the system while satisfying the rate requirements for PS users and ensuring minimum energy harvesting (EH) for both PS and EH users. The non-convex problem was addressed by dividing it into two subproblems. The first subproblem was solved using a deep learning-based scheme, combining principal component analysis and a deep neural network. The semidefinite relaxation method was used to solve the second subproblem. The proposed method offers lower computational complexity compared to traditional iterative-based approaches. The simulation results demonstrate the superior performance of the proposed scheme compared to traditional methods such as non-orthogonal multiple access and space-division multiple access. Furthermore, the ability of the proposed method to generalize was validated by assessing its effectiveness across several challenging scenarios. Full article
(This article belongs to the Topic Machine Learning in Communication Systems and Networks)
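
To illustrate the flavor of the first-stage learning scheme, here is a minimal sketch that compresses channel features with principal component analysis and regresses beamforming parameters with a small deep network. All dimensions, the synthetic data, and the training setup are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch: PCA compression of channel features followed by a small DNN regressor.
# Shapes, layer sizes, and the synthetic data are illustrative assumptions.
import numpy as np
import torch
import torch.nn as nn
from sklearn.decomposition import PCA

n_samples, n_features, n_outputs = 2000, 64, 8   # hypothetical dimensions
X = np.random.randn(n_samples, n_features).astype(np.float32)   # stacked channel coefficients
y = np.random.randn(n_samples, n_outputs).astype(np.float32)    # stand-in for optimal beam parameters

pca = PCA(n_components=16)                       # dimensionality reduction before the DNN
X_red = pca.fit_transform(X).astype(np.float32)

model = nn.Sequential(
    nn.Linear(16, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, n_outputs),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
for epoch in range(50):                          # supervised training against precomputed solutions
    opt.zero_grad()
    loss = loss_fn(model(torch.from_numpy(X_red)), torch.from_numpy(y))
    loss.backward()
    opt.step()
```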

21 pages, 878 KiB  
Article
Service-Aware Hierarchical Fog–Cloud Resource Mapping for e-Health with Enhanced-Kernel SVM
by Alaa AlZailaa, Hao Ran Chi, Ayman Radwan and Rui L. Aguiar
J. Sens. Actuator Netw. 2024, 13(1), 10; https://doi.org/10.3390/jsan13010010 - 01 Feb 2024
Viewed by 1189
Abstract
Fog–cloud-based hierarchical task-scheduling methods face significant challenges in supporting e-Health applications due to the large number of users, high task diversity, and stringent service-level requirements. Addressing the challenges of fog–cloud integration, this paper proposes a new service/network-aware fog–cloud hierarchical resource-mapping scheme, which achieves optimized resource utilization efficiency and minimized latency for service-level critical tasks in e-Health applications. Concretely, we develop a service/network-aware task classification algorithm. We adopt a support vector machine as the backbone, for its fast computational speed, to support real-time task scheduling, and we develop a new kernel, fusing convolution, cross-correlation, and auto-correlation, to gain enhanced specificity and sensitivity. Based on task classification, we propose task priority assignment and resource-mapping algorithms, which aim to achieve minimized overall latency for critical tasks and improved resource utilization efficiency. Simulation results show that the proposed algorithm achieves average execution times for critical/non-critical tasks of 0.23/0.50 ms in diverse networking setups, surpassing the benchmark scheme by 73.88%/52.01%, respectively. Full article
(This article belongs to the Topic Machine Learning in Communication Systems and Networks)
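
As a rough illustration of an enhanced-kernel SVM for task classification, the sketch below plugs a custom fused similarity into scikit-learn's SVC. The particular fusion (a cosine-style cross-correlation term combined with an RBF term), the weights, and the synthetic task features are assumptions for illustration; the paper's exact kernel is not reproduced.

```python
# Minimal sketch of a custom fused SVM kernel in the spirit of the paper's enhanced kernel.
# The terms and weights below are illustrative assumptions, not the authors' formulation.
import numpy as np
from sklearn.svm import SVC

def fused_kernel(X, Y):
    # Normalized cross-correlation term (cosine-like similarity).
    Xn = X / (np.linalg.norm(X, axis=1, keepdims=True) + 1e-12)
    Yn = Y / (np.linalg.norm(Y, axis=1, keepdims=True) + 1e-12)
    cross = Xn @ Yn.T
    # RBF term standing in for the convolution component.
    sq = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    rbf = np.exp(-0.5 * sq)
    return 0.5 * cross + 0.5 * rbf              # convex combination keeps the Gram matrix PSD

# Hypothetical task-feature matrix with critical/non-critical labels.
X_train = np.random.randn(200, 10)
y_train = np.random.randint(0, 2, size=200)
clf = SVC(kernel=fused_kernel).fit(X_train, y_train)
print(clf.predict(X_train[:5]))
```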

21 pages, 3793 KiB  
Article
Simplified Deep Reinforcement Learning Approach for Channel Prediction in Power Domain NOMA System
by Mohamed Gaballa and Maysam Abbod
Sensors 2023, 23(21), 9010; https://doi.org/10.3390/s23219010 - 06 Nov 2023
Viewed by 1168
Abstract
In this work, the impact of implementing Deep Reinforcement Learning (DRL) to predict the channel parameters for user devices in a Power Domain Non-Orthogonal Multiple Access (PD-NOMA) system is investigated. In the channel prediction process, DRL based on the deep Q-network (DQN) algorithm is developed and incorporated into the NOMA system so that the developed DQN model can be employed to estimate the channel coefficients for each user device. The DQN scheme is structured as a simplified approach to efficiently predict the channel parameters for each user in order to maximize the downlink sum rate of all users in the system. To approximate the channel parameters for each user device, the proposed DQN approach is first initialized using random channel statistics, and the model is then dynamically updated based on interaction with the environment. The predicted channel parameters are utilized at the receiver side to recover the desired data. Furthermore, this work inspects how the channel estimation process based on the simplified DQN algorithm and the power allocation policy can be integrated for the purpose of multiuser detection in the examined NOMA system. Simulation results, based on several performance metrics, demonstrate that the proposed simplified DQN algorithm is competitive for channel parameter estimation when compared to benchmark schemes such as deep neural network (DNN)-based long short-term memory (LSTM), the RL-based Q algorithm, and channel estimation based on the minimum mean square error (MMSE) procedure. Full article
(This article belongs to the Topic Machine Learning in Communication Systems and Networks)
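
The following is a minimal sketch of a DQN-style prediction loop of the kind described above: the agent starts from a random channel estimate and iteratively adjusts it through discrete actions. The state/action design, the error-based reward proxy, and the hypothetical channel gains are illustrative assumptions, not the authors' exact formulation.

```python
# Minimal sketch of a DQN-style loop for channel-coefficient prediction.
# States, actions, and the reward (an estimation-error proxy) are illustrative assumptions.
import random
import numpy as np
import torch
import torch.nn as nn

n_users, n_actions = 2, 5                       # actions: discrete adjustments to the estimate
deltas = np.linspace(-0.1, 0.1, n_actions)
qnet = nn.Sequential(nn.Linear(n_users, 32), nn.ReLU(), nn.Linear(32, n_actions))
opt = torch.optim.Adam(qnet.parameters(), lr=1e-3)

true_h = np.array([0.9, 0.3])                   # hypothetical channel gains to track
est_h = np.random.rand(n_users)                 # initialized from random channel statistics
eps, gamma = 0.2, 0.9
for step in range(500):
    s = torch.tensor(est_h, dtype=torch.float32)
    q = qnet(s)
    a = random.randrange(n_actions) if random.random() < eps else int(q.argmax())
    est_h = np.clip(est_h + deltas[a], 0.0, 1.0)
    reward = -float(np.sum((est_h - true_h) ** 2))   # proxy for estimation loss
    s2 = torch.tensor(est_h, dtype=torch.float32)
    with torch.no_grad():
        target = reward + gamma * qnet(s2).max()
    loss = (q[a] - target) ** 2                 # one-step TD update (no replay, for brevity)
    opt.zero_grad(); loss.backward(); opt.step()
```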

35 pages, 1550 KiB  
Article
Recent Advances in Machine Learning for Network Automation in the O-RAN
by Mutasem Q. Hamdan, Haeyoung Lee, Dionysia Triantafyllopoulou, Rúben Borralho, Abdulkadir Kose, Esmaeil Amiri, David Mulvey, Wenjuan Yu, Rafik Zitouni, Riccardo Pozza, Bernie Hunt, Hamidreza Bagheri, Chuan Heng Foh, Fabien Heliot, Gaojie Chen, Pei Xiao, Ning Wang and Rahim Tafazolli
Sensors 2023, 23(21), 8792; https://doi.org/10.3390/s23218792 - 28 Oct 2023
Cited by 1 | Viewed by 2458
Abstract
The evolution of network technologies has witnessed a paradigm shift toward open and intelligent networks, with the Open Radio Access Network (O-RAN) architecture emerging as a promising solution. O-RAN introduces disaggregation and virtualization, enabling network operators to deploy multi-vendor and interoperable solutions. However, managing and automating the complex O-RAN ecosystem presents numerous challenges. To address this, machine learning (ML) techniques have gained considerable attention in recent years, offering promising avenues for network automation in O-RAN. This paper presents a comprehensive survey of current research efforts on network automation using ML in O-RAN. We begin by providing an overview of the O-RAN architecture and its key components, highlighting the need for automation. Subsequently, we delve into O-RAN support for ML techniques. The survey then explores challenges in network automation using ML within the O-RAN environment, followed by the existing research studies discussing the application of ML algorithms and frameworks for network automation in O-RAN. The survey further discusses research opportunities by identifying important aspects where ML techniques can be beneficial. Full article
(This article belongs to the Topic Machine Learning in Communication Systems and Networks)

17 pages, 4307 KiB  
Article
Applying an Adaptive Neuro-Fuzzy Inference System to Path Loss Prediction in a Ruby Mango Plantation
by Supachai Phaiboon and Pisit Phokharatkul
J. Sens. Actuator Netw. 2023, 12(5), 71; https://doi.org/10.3390/jsan12050071 - 07 Oct 2023
Viewed by 1116
Abstract
The application of wireless sensor networks (WSNs) in smart agriculture requires accurate path loss prediction to determine the coverage area and system capacity. However, fast fading from environmental changes, such as leaf movement, unsymmetrical tree structures, and near-ground effects, makes path loss prediction inaccurate. Artificial intelligence (AI) technologies can facilitate this task by learning from the real environment. In this study, we performed path loss measurements in a Ruby mango plantation at a frequency of 433 MHz. Then, an adaptive neuro-fuzzy inference system (ANFIS) was applied to path loss prediction. The ANFIS required two inputs: the distance and the antenna height corresponding to the tree level (i.e., trunk and bottom, middle, and top canopies). We evaluated the performance of the ANFIS by comparing it with empirical path loss models widely used in the literature. The ANFIS demonstrated superior prediction accuracy with high sensitivity compared to the empirical models, although its performance was affected by the tree level. Full article
(This article belongs to the Topic Machine Learning in Communication Systems and Networks)
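
To make the ANFIS mechanism concrete, here is a minimal numpy sketch of first-order Sugeno fuzzy inference over the two stated inputs (distance and antenna height). The membership centers, widths, and per-rule linear consequents are invented placeholders; in ANFIS they would be fitted to the measured 433 MHz data.

```python
# Minimal numpy sketch of first-order Sugeno fuzzy inference, the mechanism behind ANFIS.
# All rule parameters below are illustrative assumptions, not fitted values.
import numpy as np

def gauss(x, c, s):
    return np.exp(-0.5 * ((x - c) / s) ** 2)

# Two inputs: distance (m) and antenna height (m, mapped to tree level).
rules = [
    # (dist_center, dist_sigma, h_center, h_sigma, linear coeffs [a, b, bias])
    (10.0, 5.0, 0.5, 0.5, np.array([0.8, -2.0, 40.0])),
    (30.0, 10.0, 1.5, 0.5, np.array([0.5, -1.0, 55.0])),
    (60.0, 20.0, 3.0, 1.0, np.array([0.3, -0.5, 70.0])),
]

def predict_path_loss(dist, height):
    w = np.array([gauss(dist, dc, ds) * gauss(height, hc, hs)
                  for dc, ds, hc, hs, _ in rules])        # rule firing strengths
    w = w / w.sum()                                       # normalization layer
    outs = np.array([coeffs @ np.array([dist, height, 1.0])
                     for *_, coeffs in rules])            # per-rule linear consequents
    return float(w @ outs)                                # weighted defuzzified output (dB)

print(predict_path_loss(25.0, 1.5))
```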

22 pages, 4842 KiB  
Article
Automatic Modulation Classification with Deep Neural Networks
by Clayton A. Harper, Mitchell A. Thornton and Eric C. Larson
Electronics 2023, 12(18), 3962; https://doi.org/10.3390/electronics12183962 - 20 Sep 2023
Cited by 1 | Viewed by 1153
Abstract
Automatic modulation classification is an important component in many modern aeronautical communication systems to achieve efficient spectrum usage in congested wireless environments and other communications systems applications. In recent years, numerous convolutional deep learning architectures have been proposed for automatically classifying the modulation used on observed signal bursts. However, a comprehensive analysis of these differing architectures and the importance of each design element has not been carried out. Thus, it is unclear what trade-offs the differing designs of these convolutional neural networks might have. In this research, we investigate numerous architectures for automatic modulation classification and perform a comprehensive ablation study to investigate the impacts of varying hyperparameters and design elements on automatic modulation classification accuracy. We show that a new state-of-the-art accuracy can be achieved using a subset of the studied design elements, particularly as applied to modulation classification over intercepted bursts of varying time duration. In particular, we show that a combination of dilated convolutions, statistics pooling, and squeeze-and-excitation units results in the strongest performing classifier achieving 98.9% peak accuracy and 63.7% overall accuracy on the RadioML 2018.01A dataset. We further investigate this best performer according to various other criteria, including short signal bursts of varying length, common misclassifications, and performance across differing modulation categories and modes. Full article
(This article belongs to the Topic Machine Learning in Communication Systems and Networks)
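
Two of the design elements named above, squeeze-and-excitation units and statistics pooling, are easy to sketch in PyTorch. The channel counts, reduction ratio, and toy input length below are illustrative assumptions; only the 24-class output matches RadioML 2018.01A.

```python
# Minimal PyTorch sketch of a squeeze-and-excitation (SE) unit and statistics pooling,
# two of the design elements the paper combines. Sizes are illustrative assumptions.
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )
    def forward(self, x):                    # x: (batch, channels, time)
        w = self.fc(x.mean(dim=2))           # squeeze: global average over time
        return x * w.unsqueeze(2)            # excite: per-channel reweighting

class StatsPooling(nn.Module):
    def forward(self, x):                    # x: (batch, channels, time)
        return torch.cat([x.mean(dim=2), x.std(dim=2)], dim=1)

net = nn.Sequential(
    nn.Conv1d(2, 32, kernel_size=3, dilation=2, padding=2),  # dilated convolution on I/Q
    nn.ReLU(),
    SEBlock(32),
    StatsPooling(),
    nn.Linear(64, 24),                       # 24 modulation classes in RadioML 2018.01A
)
print(net(torch.randn(4, 2, 1024)).shape)    # -> torch.Size([4, 24])
```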

19 pages, 1297 KiB  
Article
Low-Latency Collaborative Predictive Maintenance: Over-the-Air Federated Learning in Noisy Industrial Environments
by Ali Bemani and Niclas Björsell
Sensors 2023, 23(18), 7840; https://doi.org/10.3390/s23187840 - 12 Sep 2023
Viewed by 884
Abstract
The emergence of Industry 4.0 has revolutionized the industrial sector, enabling the development of compact, precise, and interconnected assets. This transformation has not only generated vast amounts of data but also facilitated the migration of learning and optimization processes to edge devices. Consequently, modern industries can effectively leverage this paradigm through distributed learning to define product quality and implement predictive maintenance (PM) strategies. While computing speeds continue to advance rapidly, the latency in communication has emerged as a bottleneck for fast edge learning, particularly in time-sensitive applications such as PM. To address this issue, we explore Federated Learning (FL), a privacy-preserving framework. FL entails updating a global AI model on a parameter server (PS) through aggregation of locally trained models from edge devices. We propose an innovative approach: analog aggregation over-the-air of updates transmitted concurrently over wireless channels. This leverages the waveform-superposition property in multi-access channels, significantly reducing communication latency compared to conventional methods. However, it is vulnerable to performance degradation due to channel properties like noise and fading. In this study, we introduce a method to mitigate the impact of channel noise in FL over-the-air communication and computation (FLOACC). We integrate a novel tracking-based stochastic approximation scheme into a standard federated stochastic variance reduced gradient (FSVRG). This effectively averages out channel noise’s influence, ensuring robust FLOACC performance without increasing transmission power gain. Numerical results confirm our approach’s superior communication efficiency and scalability in various FL scenarios, especially when dealing with noisy channels. Simulation experiments also highlight significant enhancements in prediction accuracy and loss function reduction for analog aggregation in over-the-air FL scenarios. Full article
(This article belongs to the Topic Machine Learning in Communication Systems and Networks)
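
The core of over-the-air aggregation can be shown in a few lines: concurrent transmissions superpose in the multi-access channel, so the parameter server receives the sum of local updates plus noise. The device count, update dimension, and noise level below are illustrative assumptions.

```python
# Minimal numpy sketch of over-the-air analog aggregation: the channel itself sums the
# concurrently transmitted updates, and the server sees that sum plus noise.
import numpy as np

n_devices, dim, noise_std = 10, 100, 0.05
local_updates = [np.random.randn(dim) for _ in range(n_devices)]   # per-device gradients

# Waveform superposition: the multi-access channel performs the sum.
received = np.sum(local_updates, axis=0) + noise_std * np.random.randn(dim)
global_update = received / n_devices                               # noisy average at the PS

ideal = np.mean(local_updates, axis=0)
print("aggregation error:", np.linalg.norm(global_update - ideal))
```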

26 pages, 3747 KiB  
Article
An Adaptive Bandwidth Management Algorithm for Next-Generation Vehicular Networks
by Chenn-Jung Huang, Kai-Wen Hu and Hao-Wen Cheng
Sensors 2023, 23(18), 7767; https://doi.org/10.3390/s23187767 - 08 Sep 2023
Viewed by 945
Abstract
Video services such as video calls and video on demand have become indispensable in people's daily lives. It can be anticipated that the explosive growth of vehicular communication, owing to the widespread use of in-vehicle video infotainment applications, will result in increasing fragmentation and congestion of the wireless transmission spectrum. Accordingly, effective bandwidth management algorithms are needed to achieve efficient communication and stable scalability in next-generation vehicular networks. To the best of our knowledge, a noticeable gap remains in the existing literature regarding the application of the latest advancements in network communication technologies, specifically regarding how cutting-edge technologies can be effectively employed to optimize bandwidth allocation for video service applications in forthcoming vehicular networks. In light of this void, this paper presents a seamless integration of cutting-edge 6G communication technologies, such as terahertz (THz) and visible light communication (VLC), with existing 5G millimeter-wave and sub-6 GHz base stations. This integration facilitates a network environment characterized by high transmission rates and extensive coverage. Our primary aim is to ensure the uninterrupted playback of real-time video applications, including video conferencing, live video, and on-demand video services, for vehicle users. The simulation outcomes convincingly indicate that the proposed strategy adeptly addresses the challenge of bandwidth competition among vehicle users, boosts the utilization of bandwidth from less crowded base stations, optimizes the fulfillment of bandwidth requirements for various video applications, and elevates the overall video quality experienced by users. These findings validate the practicality and effectiveness of the proposed methodology. Full article
(This article belongs to the Topic Machine Learning in Communication Systems and Networks)

18 pages, 5968 KiB  
Article
A Multi-Task Classification Method for Application Traffic Classification Using Task Relationships
by Ui-Jun Baek, Boseon Kim, Jee-Tae Park, Jeong-Woo Choi and Myung-Sup Kim
Electronics 2023, 12(17), 3597; https://doi.org/10.3390/electronics12173597 - 25 Aug 2023
Viewed by 909
Abstract
As IT technology advances, the number and types of applications, such as SNS, content, and shopping, have increased across various fields, leading to the emergence of complex and diverse application traffic. As a result, the demand for effective network operation, management, and analysis has increased. In particular, research on service and application traffic classification is an important area of study in network management. Web services are composed of a combination of multiple applications, and one or more application traffic flows can be mixed within service traffic. However, most existing research only classifies application traffic at the service unit, resulting in high misclassification rates and making detailed management impossible. To address this issue, this paper proposes three multitask learning methods for application traffic classification using the relationships among tasks composed of browser, protocol, service, and application units. The proposed methods aim to improve classification performance under the assumption that there are relationships between tasks. Experimental results demonstrate that by utilizing relationships between various tasks, the proposed method can classify applications with 4.4%p higher accuracy. Furthermore, the proposed methods can provide network administrators with information from multiple perspectives with high confidence, and the generalized multitask methods are freely portable to other backbone networks. Full article
(This article belongs to the Topic Machine Learning in Communication Systems and Networks)
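
A minimal sketch of this kind of multitask setup, one shared encoder with separate heads for browser, protocol, service, and application, is given below. The feature dimension, class counts, and the summed joint loss are illustrative assumptions, not the paper's exact architecture.

```python
# Minimal PyTorch sketch of multitask traffic classification with a shared encoder.
# Feature size and class counts are illustrative assumptions.
import torch
import torch.nn as nn

class MultiTaskClassifier(nn.Module):
    def __init__(self, in_dim=64, n_browser=4, n_protocol=6, n_service=10, n_app=30):
        super().__init__()
        self.shared = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU())
        self.heads = nn.ModuleDict({
            "browser": nn.Linear(128, n_browser),
            "protocol": nn.Linear(128, n_protocol),
            "service": nn.Linear(128, n_service),
            "application": nn.Linear(128, n_app),
        })
    def forward(self, x):
        z = self.shared(x)
        return {task: head(z) for task, head in self.heads.items()}

model = MultiTaskClassifier()
logits = model(torch.randn(8, 64))              # a batch of flow features
# Joint loss: summing over tasks lets related tasks regularize one another.
targets = {t: torch.randint(0, head.out_features, (8,)) for t, head in model.heads.items()}
loss = sum(nn.functional.cross_entropy(logits[t], targets[t]) for t in logits)
loss.backward()
```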

20 pages, 2103 KiB  
Article
Joint Optimization of Bandwidth and Power Allocation in Uplink Systems with Deep Reinforcement Learning
by Chongli Zhang, Tiejun Lv, Pingmu Huang, Zhipeng Lin, Jie Zeng and Yuan Ren
Sensors 2023, 23(15), 6822; https://doi.org/10.3390/s23156822 - 31 Jul 2023
Viewed by 1111
Abstract
Efficient utilization of wireless resources is a central concern for future communication systems, as it alleviates the degradation in communication quality caused by growing interference as the number of users increases, especially inter-cell interference in multi-cell, multi-user systems. To tackle this interference and improve the resource utilization rate, we propose a joint-priority-based reinforcement learning (JPRL) approach to jointly optimize bandwidth and transmit power allocation. This method aims to maximize the average throughput of the system while suppressing co-channel interference and guaranteeing the quality of service (QoS) constraint. Specifically, we decoupled the joint problem into two sub-problems: bandwidth assignment and power allocation. A multi-agent double deep Q-network (MADDQN) was developed to solve the bandwidth allocation sub-problem for each user, and a prioritized multi-agent deep deterministic policy gradient (P-MADDPG) algorithm, deploying a prioritized replay buffer, was designed to handle the transmit power allocation sub-problem. Numerical results show that the proposed JPRL method accelerates model training and outperforms the alternative methods in terms of throughput. For example, the average throughput was approximately 10.4–15.5% better than the homogeneous-learning-based benchmarks and about 17.3% higher than the genetic algorithm. Full article
(This article belongs to the Topic Machine Learning in Communication Systems and Networks)
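
The prioritized replay buffer underlying P-MADDPG can be sketched compactly: transitions are sampled with probability proportional to their (exponentiated) TD error. The capacity, priority exponent, and toy transitions below are illustrative assumptions.

```python
# Minimal sketch of prioritized experience replay: sampling probability is proportional
# to |TD error|^alpha. Sizes and parameters are illustrative assumptions.
import numpy as np

class PrioritizedReplayBuffer:
    def __init__(self, capacity=10000, alpha=0.6):
        self.capacity, self.alpha = capacity, alpha
        self.data, self.priorities = [], []
    def add(self, transition, td_error):
        if len(self.data) >= self.capacity:     # drop the oldest entry when full
            self.data.pop(0); self.priorities.pop(0)
        self.data.append(transition)
        self.priorities.append((abs(td_error) + 1e-6) ** self.alpha)
    def sample(self, batch_size):
        p = np.array(self.priorities); p = p / p.sum()
        idx = np.random.choice(len(self.data), size=batch_size, p=p)
        return [self.data[i] for i in idx], idx

buf = PrioritizedReplayBuffer()
for i in range(100):                            # fake transitions with random TD errors
    buf.add(("state", "action", "reward", "next_state"), np.random.rand())
batch, indices = buf.sample(16)
```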

16 pages, 754 KiB  
Article
Fast-Convergence Reinforcement Learning for Routing in LEO Satellite Networks
by Zhaolong Ding, Huijie Liu, Feng Tian, Zijian Yang and Nan Wang
Sensors 2023, 23(11), 5180; https://doi.org/10.3390/s23115180 - 29 May 2023
Viewed by 1789
Abstract
Fast-convergence routing is a critical issue for Low Earth Orbit (LEO) constellation networks because these networks have dynamic topology changes, and transmission requirements can vary over time. However, most previous research has focused on the Open Shortest Path First (OSPF) routing algorithm, which is not well suited to handling the frequent changes in the link state of a LEO satellite network. In this regard, we propose a Fast-Convergence Reinforcement Learning Satellite Routing Algorithm (FRL–SR) for LEO satellite networks, where a satellite can quickly obtain the network link status and adjust its routing strategy accordingly. In FRL–SR, each satellite node is considered an agent, and the agent selects the appropriate port for packet forwarding based on its routing policy. Whenever the satellite network state changes, the agent sends "hello" packets to the neighboring nodes to update their routing policies. Compared to traditional reinforcement learning algorithms, FRL–SR can perceive network information and converge faster. Additionally, FRL–SR can mask the dynamics of the satellite network topology and adaptively adjust the forwarding strategy based on the link state. The experimental results demonstrate that the proposed FRL–SR algorithm outperforms the Dijkstra algorithm in terms of average delay, packet arrival ratio, and network load balance. Full article
(This article belongs to the Topic Machine Learning in Communication Systems and Networks)
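
A minimal sketch of the per-node learning loop is shown below: each satellite keeps Q-values over (destination, out-port) pairs and refreshes them when a neighbor's "hello" packet reports its link state. The reward structure, the Q-value carried in the hello packet, and the port layout are illustrative assumptions, not the FRL–SR specification.

```python
# Minimal sketch of per-node Q-learning for packet forwarding, in the spirit of FRL-SR.
# Rewards, message contents, and topology are illustrative assumptions.
import random
from collections import defaultdict

class SatelliteAgent:
    def __init__(self, ports, alpha=0.5, gamma=0.9, eps=0.1):
        self.q = defaultdict(float)             # key: (destination, port)
        self.ports, self.alpha, self.gamma, self.eps = ports, alpha, gamma, eps
    def choose_port(self, dest):
        if random.random() < self.eps:
            return random.choice(self.ports)
        return max(self.ports, key=lambda p: self.q[(dest, p)])
    def on_hello(self, dest, port, link_delay, neighbor_best_q):
        # A neighbor's hello packet carries its best Q toward dest; update toward it.
        target = -link_delay + self.gamma * neighbor_best_q
        self.q[(dest, port)] += self.alpha * (target - self.q[(dest, port)])

agent = SatelliteAgent(ports=[0, 1, 2, 3])
agent.on_hello(dest="sat42", port=1, link_delay=12.0, neighbor_best_q=-30.0)
print(agent.choose_port("sat42"))
```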

17 pages, 17873 KiB  
Article
A REM Update Methodology Based on Clustering and Random Forest
by Mario R. Camana, Carla E. Garcia, Taewoong Hwang and Insoo Koo
Appl. Sci. 2023, 13(9), 5362; https://doi.org/10.3390/app13095362 - 25 Apr 2023
Cited by 3 | Viewed by 1228
Abstract
In this paper, we propose a radio environment map (REM) update methodology based on clustering and machine learning for indoor coverage. We use real measurements collected by the TurtleBot3 mobile robot using the received signal strength indicator (RSSI) as a measure of link quality between transmitter and receiver. We propose a practical framework for timely updates to the REM for dynamic wireless communication environments where we need to deal with variations in physical element distributions, environmental factors, movements of people and devices, and so on. In the proposed approach, we first rely on a historical dataset from the area of interest, which is used to determine the number of clusters via the K-means algorithm. Next, we divide the samples from the historical dataset into clusters, and we train one random forest (RF) model with the corresponding historical data from each cluster. Then, when new data measurements are collected, these new samples are assigned to one cluster for a timely update of the RF model. Simulation results validate the superior performance of the proposed scheme, compared with several well-known ML algorithms and a baseline scheme without clustering. Full article
(This article belongs to the Topic Machine Learning in Communication Systems and Networks)
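
The proposed pipeline maps naturally onto scikit-learn, as the sketch below shows: K-means clusters the historical measurements, one random forest is trained per cluster, and each new sample is routed to its cluster's model. The synthetic RSSI data and the choice of k are illustrative assumptions.

```python
# Minimal scikit-learn sketch of the REM update pipeline: K-means clustering of the
# historical dataset, one random forest per cluster, cluster-routed updates/queries.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestRegressor

# Historical dataset: (x, y) positions -> measured RSSI (synthetic stand-in).
X_hist = np.random.rand(500, 2) * 20.0
y_hist = -40.0 - 20.0 * np.log10(np.linalg.norm(X_hist, axis=1) + 1.0)

k = 4
km = KMeans(n_clusters=k, n_init=10).fit(X_hist)
models = {c: RandomForestRegressor(n_estimators=50).fit(X_hist[km.labels_ == c],
                                                        y_hist[km.labels_ == c])
          for c in range(k)}

# New measurement: assign it to one cluster and query (or refit) that model only.
x_new = np.array([[5.0, 7.5]])
c_new = int(km.predict(x_new)[0])
print("predicted RSSI:", models[c_new].predict(x_new)[0])
```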

16 pages, 16819 KiB  
Article
A Cascade Network for Blind Recognition of LDPC Codes
by Xiang Zhang and Wei Zhang
Electronics 2023, 12(9), 1979; https://doi.org/10.3390/electronics12091979 - 24 Apr 2023
Viewed by 1127
Abstract
Blind recognition of coding plays a vital role in non-cooperative communication. Most existing algorithms for the blind recognition of Low-Density Parity-Check (LDPC) codes are difficult to apply in practice and suffer from high time and space complexity. Inspired by deep learning, we propose an architecture for the blind recognition of LDPC codes that concatenates a Transformer-based network with a convolutional neural network (CNN). The CNN is used to suppress noise in real time, followed by a Transformer-based neural network that identifies the rate and length of the LDPC codes. In order to train denoising and recognition networks with high performance, we build our own datasets and define loss functions for the denoising networks. Simulation results show that this architecture achieves better performance than the traditional method at low signal-to-noise ratio (SNR). Compared with existing methods, this approach is more flexible and can therefore be quickly deployed. Full article
(This article belongs to the Topic Machine Learning in Communication Systems and Networks)
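
A minimal PyTorch sketch of the described cascade, a CNN denoising stage followed by a Transformer encoder that classifies the code parameters, is given below. The sequence length, embedding width, and number of rate/length classes are illustrative assumptions.

```python
# Minimal PyTorch sketch of a CNN-denoiser -> Transformer-classifier cascade.
# Sizes and the class count are illustrative assumptions.
import torch
import torch.nn as nn

class CascadeRecognizer(nn.Module):
    def __init__(self, seq_len=512, d_model=64, n_classes=10):
        super().__init__()
        self.denoise = nn.Sequential(               # CNN stage: suppress channel noise
            nn.Conv1d(1, 16, 5, padding=2), nn.ReLU(),
            nn.Conv1d(16, 1, 5, padding=2),
        )
        self.embed = nn.Linear(1, d_model)
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, n_classes)   # predicts the code's rate/length class
    def forward(self, x):                           # x: (batch, seq_len) noisy soft bits
        z = self.denoise(x.unsqueeze(1)).squeeze(1) # (batch, seq_len)
        z = self.embed(z.unsqueeze(-1))             # (batch, seq_len, d_model)
        z = self.encoder(z).mean(dim=1)             # pool over the sequence
        return self.head(z)

model = CascadeRecognizer()
print(model(torch.randn(4, 512)).shape)             # -> torch.Size([4, 10])
```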

13 pages, 30538 KiB  
Communication
Optical Encoding Model Based on Orbital Angular Momentum Powered by Machine Learning
by Erick Lamilla, Christian Sacarelo, Manuel S. Alvarez-Alvarado, Arturo Pazmino and Peter Iza
Sensors 2023, 23(5), 2755; https://doi.org/10.3390/s23052755 - 02 Mar 2023
Cited by 6 | Viewed by 1938
Abstract
Based on the orbital angular momentum (OAM) properties of Laguerre–Gaussian beams LG(p, ℓ), a robust optical encoding model for efficient data transmission applications is designed. This paper presents an optical encoding model based on an intensity profile generated by a coherent superposition of two OAM-carrying Laguerre–Gaussian modes and a machine learning detection method. In the encoding process, the intensity profile for data encoding is generated based on the selection of the p and ℓ indices, while the decoding process is performed using a support vector machine (SVM) algorithm. Two different decoding models based on an SVM algorithm are tested to verify the robustness of the optical encoding model, finding a BER of 10⁻⁹ at 10.2 dB of signal-to-noise ratio in one of the SVM models. Full article
(This article belongs to the Topic Machine Learning in Communication Systems and Networks)
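
The decoding stage reduces to a supervised classification problem, which the sketch below illustrates with scikit-learn's SVC on synthetic stand-ins for the received intensity profiles. The symbol count, pixel dimension, and noise level are illustrative assumptions.

```python
# Minimal scikit-learn sketch of SVM decoding: classify noisy intensity profiles back to
# the transmitted symbol (i.e., the chosen LG-mode superposition). Data is synthetic.
import numpy as np
from sklearn.svm import SVC

n_symbols, samples_per_symbol, n_pixels = 8, 40, 256
rng = np.random.default_rng(0)
prototypes = rng.normal(size=(n_symbols, n_pixels))        # stand-in intensity patterns

X = np.vstack([p + 0.3 * rng.normal(size=(samples_per_symbol, n_pixels))
               for p in prototypes])                       # noisy received profiles
y = np.repeat(np.arange(n_symbols), samples_per_symbol)    # transmitted symbol labels

clf = SVC(kernel="rbf").fit(X, y)
test = prototypes[3] + 0.3 * rng.normal(size=n_pixels)
print("decoded symbol:", clf.predict(test.reshape(1, -1))[0])   # expect 3
```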

28 pages, 3576 KiB  
Article
A Study on the Impact of Integrating Reinforcement Learning for Channel Prediction and Power Allocation Scheme in MISO-NOMA System
by Mohamed Gaballa, Maysam Abbod and Ammar Aldallal
Sensors 2023, 23(3), 1383; https://doi.org/10.3390/s23031383 - 26 Jan 2023
Cited by 2 | Viewed by 2349
Abstract
In this study, the influence of adopting Reinforcement Learning (RL) to predict the channel parameters for user devices in a Power Domain Multi-Input Single-Output Non-Orthogonal Multiple Access (MISO-NOMA) system is inspected. In the channel prediction-based RL approach, the Q-learning algorithm is developed and incorporated into the NOMA system so that the developed Q-model can be employed to predict the channel coefficients for every user device. The purpose of adopting the developed Q-learning procedure is to maximize the received downlink sum rate and decrease the estimation loss. To this end, the developed Q-algorithm is initialized using different channel statistics and then updated based on interaction with the environment in order to approximate the channel coefficients for each device. The predicted parameters are utilized at the receiver side to recover the desired data. Furthermore, based on maximizing the sum rate of the examined user devices, the power factors for each user can be deduced analytically, allocating the optimal power factor to every user device in the system. In addition, this work inspects how channel prediction based on the developed Q-learning model and the power allocation policy can be incorporated together for the purpose of multiuser recognition in the examined MISO-NOMA system. Simulation results, based on several performance metrics, demonstrate that the developed Q-learning algorithm can be competitive for channel estimation when compared to benchmark schemes such as deep learning-based long short-term memory (LSTM), the RL-based actor–critic algorithm, the RL-based state–action–reward–state–action (SARSA) algorithm, and a standard channel estimation scheme based on the minimum mean square error procedure. Full article
(This article belongs to the Topic Machine Learning in Communication Systems and Networks)
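
In contrast to the DQN-based study listed earlier, this paper uses tabular Q-learning; a minimal sketch over discretized channel levels follows. The discretization, the error-based reward proxy, and the hypothetical channel gain are illustrative assumptions.

```python
# Minimal sketch of tabular Q-learning for channel-coefficient prediction: states and
# actions are discretized channel levels; the reward penalizes estimation error.
import random
import numpy as np

levels = np.linspace(0.0, 1.0, 21)      # discretized channel magnitudes (states)
actions = [-1, 0, +1]                   # move the estimate down, hold, or up one level
Q = np.zeros((len(levels), len(actions)))
alpha, gamma, eps = 0.3, 0.9, 0.1
true_h = 0.62                           # hypothetical channel gain to track

s = random.randrange(len(levels))       # initialize from random channel statistics
for step in range(3000):
    a = random.randrange(3) if random.random() < eps else int(Q[s].argmax())
    s2 = int(np.clip(s + actions[a], 0, len(levels) - 1))
    reward = -abs(levels[s2] - true_h)  # proxy for estimation loss
    Q[s, a] += alpha * (reward + gamma * Q[s2].max() - Q[s, a])
    s = s2

print("predicted coefficient:", levels[int(np.argmax(Q.max(axis=1)))])
```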

16 pages, 3990 KiB  
Article
Performance Enhancement in Federated Learning by Reducing Class Imbalance of Non-IID Data
by Mihye Seol and Taejoon Kim
Sensors 2023, 23(3), 1152; https://doi.org/10.3390/s23031152 - 19 Jan 2023
Cited by 5 | Viewed by 2200
Abstract
Due to the distributed data collection and learning in federated learning, many clients conduct local training with non-independent and identically distributed (non-IID) datasets. Training on these datasets results in severe performance degradation. We propose an efficient algorithm for enhancing the performance of federated learning by overcoming the negative effects of non-IID datasets. First, the intra-client class imbalance is reduced by rendering the class distribution of each client close to a uniform distribution. Second, the clients participating in federated learning are selected to make their integrated class distribution close to a uniform distribution, mitigating the inter-client class imbalance, which represents the difference in class distributions among clients. In addition, the amount of local training data for the selected clients is finely adjusted. Finally, in order to increase the efficiency of federated learning, the batch size and learning rate of local training for the selected clients are dynamically controlled, reflecting the effective size of each client's local dataset. In the performance evaluation on the CIFAR-10 and MNIST datasets, the proposed algorithm achieves 20% higher accuracy than existing federated learning algorithms. Moreover, in achieving this accuracy improvement, it uses fewer computation and communication resources than existing algorithms in terms of the amount of data used and the number of clients participating in training. Full article
(This article belongs to the Topic Machine Learning in Communication Systems and Networks)
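
The inter-client selection step can be illustrated with a simple greedy heuristic: repeatedly pick the client whose histogram brings the combined class distribution closest to uniform. The client histograms, counts, and L1 uniformity gap below are illustrative assumptions, not the authors' exact selection rule.

```python
# Minimal sketch of greedy client selection toward a uniform integrated class distribution.
# Client data is synthetic; the L1 gap to uniform is an illustrative criterion.
import numpy as np

def uniformity_gap(hist):
    p = hist / hist.sum()
    return np.abs(p - 1.0 / len(p)).sum()      # L1 distance to the uniform distribution

rng = np.random.default_rng(1)
client_hists = [rng.integers(0, 200, size=10) for _ in range(30)]   # 10-class counts

selected, combined = [], np.zeros(10)
for _ in range(10):                            # pick 10 of 30 clients
    best = min((i for i in range(len(client_hists)) if i not in selected),
               key=lambda i: uniformity_gap(combined + client_hists[i]))
    selected.append(best)
    combined += client_hists[best]

print("selected clients:", selected)
print("gap to uniform:", round(uniformity_gap(combined), 3))
```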

15 pages, 1028 KiB  
Article
Vehicular Environment Identification Based on Channel State Information and Deep Learning
by Soheyb Ribouh, Rahmad Sadli, Yassin Elhillali, Atika Rivenq and Abdenour Hadid
Sensors 2022, 22(22), 9018; https://doi.org/10.3390/s22229018 - 21 Nov 2022
Viewed by 1574
Abstract
This paper presents a novel vehicular environment identification approach based on deep learning. It consists of exploiting the vehicular wireless channel characteristics in the form of Channel State Information (CSI) in the receiver side of a connected vehicle in order to identify the environment type in which the vehicle is driving, without any need to implement specific sensors such as cameras or radars. We consider environment identification as a classification problem, and propose a new convolutional neural network (CNN) architecture to deal with it. The estimated CSI is used as the input feature to train the model. To perform the identification process, the model is targeted for implementation in an autonomous vehicle connected to a vehicular network (VN). The proposed model is extensively evaluated, showing that it can reliably recognize the surrounding environment with high accuracy (96.48%). Our model is compared to related approaches and state-of-the-art classification architectures. The experiments show that our proposed model yields favorable performance compared to all other considered methods. Full article
(This article belongs to the Topic Machine Learning in Communication Systems and Networks)
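
A minimal PyTorch sketch of a CNN classifier over estimated CSI follows. The CSI layout (real/imaginary channels over 52 subcarriers) and the five environment classes are illustrative assumptions, not the paper's architecture.

```python
# Minimal PyTorch sketch: classify the driving environment from estimated CSI.
# Input dimensions and class count are illustrative assumptions.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv1d(2, 32, kernel_size=3, padding=1), nn.ReLU(),   # input: (batch, 2, 52) CSI
    nn.Conv1d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(),
    nn.Linear(64, 5),                                        # hypothetical environment classes
)
csi = torch.randn(8, 2, 52)               # a batch of estimated CSI vectors
print(model(csi).shape)                   # -> torch.Size([8, 5])
```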

17 pages, 1136 KiB  
Article
Federated Deep Reinforcement Learning for Joint AeBSs Deployment and Computation Offloading in Aerial Edge Computing Network
by Lei Liu, Yikun Zhao, Fei Qi, Fanqin Zhou, Weiliang Xie, Haoran He and Hao Zheng
Electronics 2022, 11(21), 3641; https://doi.org/10.3390/electronics11213641 - 07 Nov 2022
Cited by 2 | Viewed by 1827
Abstract
In the 6G aerial network, all aerial communication nodes have computing and storage functions and can perform real-time wireless signal processing and resource management. In order to make full use of the computing resources of aerial nodes, this paper studies a mobile edge computing (MEC) system based on aerial base stations (AeBSs), formulates the joint optimization problem of computation offloading and deployment control of AeBSs with the goals of minimizing task processing delay and energy consumption, and designs a deployment and computation offloading scheme based on federated deep reinforcement learning. Specifically, each low-altitude AeBS agent simultaneously trains two neural networks to handle the generation of the deployment and offloading strategies, respectively, and a high-altitude global node aggregates the local model parameters uploaded by each low-altitude platform. The agents can be trained offline, updated quickly online according to changes in the environment, and can rapidly generate the optimal deployment and offloading strategies. The simulation results show that our method achieves good performance in a very short time. Full article
(This article belongs to the Topic Machine Learning in Communication Systems and Networks)
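
The federated step at the high-altitude node can be sketched in a few lines: the global node averages the parameters uploaded by the low-altitude agents and broadcasts the result back. The model size and agent count are illustrative, and the local DRL training between rounds is omitted.

```python
# Minimal numpy sketch of the federated aggregation round at the high-altitude global node.
# Model shape and agent count are illustrative assumptions; local training is omitted.
import numpy as np

n_agents, n_params = 5, 1000
local_params = [np.random.randn(n_params) for _ in range(n_agents)]  # per-AeBS weights

global_params = np.mean(local_params, axis=0)                        # aggregate at the global node
local_params = [global_params.copy() for _ in range(n_agents)]       # broadcast back to agents
print("round complete, parameter norm:", np.linalg.norm(global_params))
```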

13 pages, 3098 KiB  
Article
Sightless but Not Blind: A Non-Ideal Spectrum Sensing Algorithm Countering Intelligent Jamming for Wireless Communication
by Ziming Pu, Yingtao Niu, Peng Xiang and Guoliang Zhang
Electronics 2022, 11(20), 3402; https://doi.org/10.3390/electronics11203402 - 20 Oct 2022
Viewed by 1116
Abstract
Existing intelligent anti-jamming communication methods fail to consider that sensing can be inaccurate; to address this, this paper puts forward an intelligent anti-jamming method for wireless communication under non-ideal spectrum sensing (NISS). In a malicious jamming environment, the wireless communication system uses Q-learning (QL) to learn the jamming pattern, takes the false alarm and missed detection probabilities of jamming sensing into account, and selects the channel with the best long-term reward in each time slot for communication. The simulation results show that under linear sweep jamming and intelligent blocking jamming, the proposed algorithm converges faster than QL with the same decision accuracy. Compared with wide-band spectrum sensing (WBSS), an algorithm that does not consider non-ideal spectrum sensing, the decision accuracy of the proposed algorithm is higher at the same convergence rate. Full article
(This article belongs to the Topic Machine Learning in Communication Systems and Networks)
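
The sketch below illustrates Q-learning channel selection under non-ideal sensing: the observed jamming outcome is corrupted with false-alarm and missed-detection probabilities before the update. The channel count, probabilities, jammer pattern, and reward values are illustrative assumptions.

```python
# Minimal sketch of Q-learning channel selection when sensing is non-ideal: the agent
# learns from observations flipped by false-alarm / missed-detection probabilities.
import random

n_channels, p_fa, p_md = 8, 0.1, 0.1
q = [0.0] * n_channels
alpha, eps = 0.3, 0.1
jam_pattern = [0, 1, 2]                         # hypothetical sweep over three channels

for t in range(2000):
    jammed = jam_pattern[t % len(jam_pattern)]
    ch = random.randrange(n_channels) if random.random() < eps \
        else max(range(n_channels), key=lambda c: q[c])
    hit = (ch == jammed)                        # true outcome
    # Non-ideal sensing: corrupt the observation before learning from it.
    if hit:
        observed_hit = random.random() >= p_md  # missed detection hides a real hit
    else:
        observed_hit = random.random() < p_fa   # false alarm reports a phantom hit
    reward = -1.0 if observed_hit else 1.0
    q[ch] += alpha * (reward - q[ch])

print("preferred channel:", max(range(n_channels), key=lambda c: q[c]))  # expect 3..7
```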

9 pages, 1242 KiB  
Communication
High Speed Decoding for High-Rate and Short-Length Reed–Muller Code Using Auto-Decoder
by Hyun Woo Cho and Young Joon Song
Appl. Sci. 2022, 12(18), 9225; https://doi.org/10.3390/app12189225 - 14 Sep 2022
Viewed by 1261
Abstract
In this paper, we show that applying a machine learning technique called the auto-decoder (AD) to high-rate, short-length Reed–Muller (RM) decoding enables it to achieve maximum likelihood decoding (MLD) performance and a faster decoding speed than fast Hadamard transform (FHT) decoding in additive white Gaussian noise (AWGN) channels. The decoding speed is approximately 1.8 times and 125 times faster than FHT decoding for R(1,4) and R(2,4), respectively. The number of nodes in the hidden layer of the AD is larger than that of the input layer, unlike in a conventional auto-encoder (AE). Two ADs are combined in parallel and merged together, and then cascaded to one fully connected layer to improve the bit error rate (BER) performance of the code. Full article
(This article belongs to the Topic Machine Learning in Communication Systems and Networks)
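
A minimal PyTorch sketch of the described topology, two parallel auto-decoders whose hidden layers are wider than their inputs, merged and cascaded into one fully connected layer, is given below. The hidden width and the mapping to R(1,4) (n = 16, k = 5) are illustrative assumptions.

```python
# Minimal PyTorch sketch: two parallel auto-decoders (hidden layer wider than the input,
# unlike a conventional auto-encoder), merged and fed to one fully connected layer.
import torch
import torch.nn as nn

class ParallelAutoDecoder(nn.Module):
    def __init__(self, n=16, hidden=64, k=5):   # n received symbols -> k message bits
        super().__init__()
        self.ad1 = nn.Sequential(nn.Linear(n, hidden), nn.ReLU(), nn.Linear(hidden, n))
        self.ad2 = nn.Sequential(nn.Linear(n, hidden), nn.ReLU(), nn.Linear(hidden, n))
        self.out = nn.Linear(2 * n, k)          # merge, then one fully connected layer
    def forward(self, r):                       # r: (batch, n) noisy codeword
        return torch.sigmoid(self.out(torch.cat([self.ad1(r), self.ad2(r)], dim=1)))

decoder = ParallelAutoDecoder()
print(decoder(torch.randn(4, 16)).shape)        # -> torch.Size([4, 5])
```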