AI Used in Mobile Communications and Networks

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Artificial Intelligence".

Deadline for manuscript submissions: 16 May 2024 | Viewed by 3563

Special Issue Editors


Guest Editor
School of Information and Communication Engineering, Beijing University of Posts and Telecommunications, Beijing 100876, China
Interests: mobile communications and networks; artificial intelligence

Guest Editor
School of Information and Communication Engineering, Beijing University of Posts and Telecommunications, Beijing 100876, China
Interests: mobile communications and networks; intelligent signal detection

Guest Editor
School of Information and Communication Engineering, Beijing University of Posts and Telecommunications, Beijing 100876, China
Interests: mobile communications and networks; intelligent UAV communications

Special Issue Information

Dear Colleagues,

The continuous development of mobile communications and networks is connecting people and things into an intelligent network of everything. With the advent of the 5G/B5G era, mobile communication networks have grown more sophisticated, driving a critical need for intelligent network management, efficient resource allocation, and ultra-reliable low-latency communication. Artificial intelligence (AI) techniques, well known in computer science as powerful tools for solving complex problems, are beginning to be applied in wireless communications to enable intelligent communication and spawn many new services. Driven jointly by big data, deep learning, and growing computing power, applying AI to mobile communication networks has therefore opened up new research opportunities. In this Special Issue, we are particularly interested in the problems and challenges of applying AI to mobile communication networks, including optimization methods, solutions, new technologies, theories, and network security.

Dr. Yiming Liu
Prof. Dr. Zhi Zhang
Dr. Yue Meng
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • artificial intelligence
  • mobile communication networks
  • intelligent network management
  • deep learning
  • reinforcement learning

Published Papers (3 papers)

Research

33 pages, 12756 KiB  
Article
Exploring the Landscape of AI-SDN: A Comprehensive Bibliometric Analysis and Future Perspectives
by Firdaus Sahran, Hamza H. M. Altarturi and Nor Badrul Anuar
Electronics 2024, 13(1), 26; https://doi.org/10.3390/electronics13010026 - 20 Dec 2023
Viewed by 1049
Abstract
The rising influence of artificial intelligence (AI) enables widespread adoption of the technology in every aspect of computing, including Software-Defined Networking (SDN). Technological adoption leads to the convergence of AI and SDN, producing solutions that overcome limitations present in traditional networking architecture. Although numerous review articles discuss the convergence of these technologies, there is a lack of bibliometric trace in this field, which is important for identifying trends, new niches, and future directions. Therefore, this study aims to fill the gap by presenting a thorough bibliometric analysis of AI-related SDN studies, referred to as AI-SDN. The study begins by identifying 474 unique documents in the Web of Science (WoS) database published from 2009 until recently. The study uses bibliometric analysis to identify the general information, countries, authorship, and content of the selected articles, thereby providing insights into the geographical and institutional landscape shaping AI-SDN research. The findings provide a robust roadmap for further investigation in this field, including the background and taxonomy of the AI-SDN field. Finally, the article discusses several challenges and the future of AI-SDN in academic research.
(This article belongs to the Special Issue AI Used in Mobile Communications and Networks)
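
As a rough sketch of the kind of counting a bibliometric study like this performs, the Python snippet below tallies publications per year and author-keyword frequencies from a small, invented record set. The field tags PY (publication year) and DE (author keywords) mirror common Web of Science export conventions but are assumptions here, not details taken from the article.

```python
import pandas as pd

# Hypothetical Web of Science export; "PY" (publication year) and "DE"
# (author keywords, ";"-separated) are assumed field tags, not the
# article's actual data.
records = pd.DataFrame({
    "PY": [2019, 2020, 2020, 2021, 2022, 2022, 2023],
    "DE": [
        "software-defined networking; deep learning",
        "SDN; reinforcement learning",
        "SDN; traffic classification; machine learning",
        "software-defined networking; intrusion detection",
        "SDN; deep learning; routing",
        "network security; machine learning",
        "SDN; federated learning",
    ],
})

# Publications per year: the basic trend line of a bibliometric study.
per_year = records.groupby("PY").size()
print(per_year)

# Keyword frequencies: a simple proxy for the field's thematic structure.
keywords = (
    records["DE"]
    .str.split(";")
    .explode()
    .str.strip()
    .str.lower()
    .value_counts()
)
print(keywords.head(10))
```

In a full analysis, the same aggregations would be run over the 474-document WoS export and complemented by country, authorship, and content mappings such as those the article reports.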

18 pages, 980 KiB  
Article
Predictive Modeling of Signal Degradation in Urban VANETs Using Artificial Neural Networks
by Bappa Muktar, Vincent Fono and Meyo Zongo
Electronics 2023, 12(18), 3928; https://doi.org/10.3390/electronics12183928 - 18 Sep 2023
Viewed by 790
Abstract
In urban Vehicular Ad Hoc Network (VANET) environments, buildings play a crucial role as they can act as obstacles that attenuate the transmission signal between vehicles. Such obstacles lead to multipath effects, which could substantially impact data transmission due to fading. Therefore, quantifying the impact of buildings on transmission quality is a key parameter of the propagation model, especially in critical scenarios involving emergency vehicles where reliable communication is of utmost importance. In this research, we propose a supervised learning approach based on Artificial Neural Networks (ANNs) to develop a predictive model capable of estimating the level of signal degradation, represented by the Bit Error Rate (BER), based on the obstacles perceived by moving emergency vehicles. By establishing a relationship between the level of signal degradation and the encountered obstacles, our proposed mechanism enables efficient routing decisions to be made prior to the transmission process. Consequently, data packets are routed through paths that exhibit the lowest BER. To collect the training data, we employed Network Simulator 3 (NS-3) in conjunction with the Simulation of Urban MObility (SUMO) simulator, leveraging real-world data sourced from the OpenStreetMap (OSM) geographic database. OSM enabled us to gather geospatial data related to the Two-Dimensional (2D) geometric structure of buildings, which served as input for our Artificial Neural Network (ANN). To determine the most suitable algorithm for our ANN, we assessed the accuracy of ten learning algorithms in MATLAB, utilizing five key metrics: Mean Squared Error (MSE), Root Mean Squared Error (RMSE), Mean Absolute Error (MAE), Correlation Coefficient (R), and Maximum Prediction Error (MaxPE). For each algorithm, we conducted fifteen iterations based on ten hidden neurons and gauged its accuracy against the aforementioned metrics. Our analysis highlighted that the ANN underpinned by the Conjugate Gradient with Powell/Beale Restarts (CGB) learning algorithm exhibited superior performance in terms of MSE, RMSE, MAE, R, and MaxPE compared to other algorithms such as Levenberg–Marquardt (LM), Bayesian Regularization (BR), BFGS Quasi-Newton (BFG), Resilient Backpropagation (RP), Scaled Conjugate Gradient (SCG), Fletcher–Powell Conjugate Gradient (CGF), Polak–Ribiére Conjugate Gradient (CGP), One-Step Secant (OSS), and Variable Learning Rate Backpropagation (GDX). The BER prediction by our ANN incorporates the Two-Ray Ground (TRG) propagation model, an adjustable parameter within NS-3. When subjected to 300 new samples, the trained ANN's simulation outcomes illustrated its capability to learn, generalize, and successfully predict the BER for a new data instance. Overall, our research contributes to enhancing the performance and reliability of communication in urban VANET environments, especially in critical scenarios involving emergency vehicles, by leveraging supervised learning and artificial neural networks to predict signal degradation levels and optimize routing decisions accordingly.
(This article belongs to the Special Issue AI Used in Mobile Communications and Networks)
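
To make the evaluation pipeline concrete, here is a minimal Python/scikit-learn sketch of the same idea: a small feed-forward network with ten hidden neurons regresses BER from obstacle features and is scored with the five metrics the authors report. The feature definitions and data are invented placeholders, and the L-BFGS solver merely stands in for the paper's MATLAB training algorithms (scikit-learn does not provide CGB).

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, mean_absolute_error

rng = np.random.default_rng(0)

# Synthetic stand-ins for obstacle features (e.g., building count, obstructed
# path length, mean building height along the link) and a BER target; the
# real study derives inputs from OSM building geometry and targets from
# NS-3/SUMO simulations.
X = rng.uniform(0.0, 1.0, size=(1000, 3))
ber = (1e-5
       + 0.01 * (0.5 * X[:, 0] + 0.3 * X[:, 1] + 0.2 * X[:, 2])
       + rng.normal(0.0, 5e-4, size=1000))

X_train, X_test, y_train, y_test = train_test_split(X, ber, random_state=0)

# Ten hidden neurons, as in the paper; L-BFGS replaces the CGB algorithm
# used there, purely for illustration.
model = MLPRegressor(hidden_layer_sizes=(10,), solver="lbfgs",
                     max_iter=5000, random_state=0)
model.fit(X_train, y_train)
y_pred = model.predict(X_test)

mse = mean_squared_error(y_test, y_pred)
rmse = np.sqrt(mse)
mae = mean_absolute_error(y_test, y_pred)
r = np.corrcoef(y_test, y_pred)[0, 1]        # correlation coefficient R
max_pe = np.max(np.abs(y_test - y_pred))     # maximum prediction error
print(f"MSE={mse:.3e} RMSE={rmse:.3e} MAE={mae:.3e} R={r:.3f} MaxPE={max_pe:.3e}")
```

In the paper's setting, the predicted BER would then be used to rank candidate routes so that packets are forwarded along the path with the lowest expected degradation.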

23 pages, 1033 KiB  
Article
Deep Learning-Based Attack Detection and Classification in Android Devices
by Alfonso Gómez and Antonio Muñoz
Electronics 2023, 12(15), 3253; https://doi.org/10.3390/electronics12153253 - 28 Jul 2023
Cited by 10 | Viewed by 1276
Abstract
The increasing proliferation of Android-based devices, which currently dominate the market with a staggering 72% global market share, has made them a prime target for attackers. Consequently, the detection of Android malware has emerged as a critical research area. Both academia and industry have explored various approaches to develop robust and efficient solutions for Android malware detection and classification, yet it remains an ongoing challenge. In this study, we present a supervised learning technique that demonstrates promising results in Android malware detection. The key to our approach lies in the creation of a comprehensive labeled dataset, comprising over 18,000 samples classified into five distinct categories: Adware, Banking, SMS, Riskware, and Benign applications. The effectiveness of our proposed model is validated using well-established datasets such as CICMalDroid2020, CICMalDroid2017, and CICAndMal2017. Comparing our results with state-of-the-art techniques in terms of precision, recall, efficiency, and other relevant factors, our approach outperforms other semi-supervised methods on specific parameters. However, we acknowledge that our model does not exhibit significant deviations from alternative approaches in certain aspects. Overall, our research contributes to the ongoing efforts in the development of advanced techniques for Android malware detection and classification. We believe that our findings will inspire further investigations, leading to enhanced security measures and protection for Android devices in the face of evolving threats.
(This article belongs to the Special Issue AI Used in Mobile Communications and Networks)
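
As an illustrative sketch only, the snippet below trains a small neural-network classifier over the same five categories on synthetic feature vectors; the features, dataset size, and architecture are placeholders and do not reproduce the authors' model or the CICMalDroid/CICAndMal preprocessing.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
classes = ["Adware", "Banking", "SMS", "Riskware", "Benign"]

# Synthetic stand-ins for per-app feature vectors (e.g., permission and
# API-call counts); the real study uses 18,000+ labelled samples.
X = rng.uniform(0.0, 1.0, size=(2000, 40))
y = rng.integers(0, len(classes), size=2000)
# Nudge each class's first few features so the toy problem is learnable.
X[:, :5] += 0.5 * np.eye(len(classes))[y]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, random_state=0)

# A small multilayer perceptron standing in for the paper's deeper model.
clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test), target_names=classes))
```

The classification report surfaces per-class precision and recall, the same kind of metrics the authors use to compare against state-of-the-art techniques.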
