Machine Learning for Mobile Networks

A special issue of Future Internet (ISSN 1999-5903). This special issue belongs to the section "Network Virtualization and Edge/Fog Computing".

Deadline for manuscript submissions: closed (20 January 2023) | Viewed by 4412

Special Issue Editors


Guest Editor
Department of Computer Science and Engineering, Kyung Hee University, Yongin-si 17104, Gyeonggi-do, Korea
Interests: edge computing; machine learning; networking intelligence

Guest Editor
Department of Computer Science and Engineering, Kyung Hee University, Yongin-si 17104, Gyeonggi-do, Korea
Interests: federated learning; game theory; wireless resource management; edge computing

Special Issue Information

Dear Colleagues,

Recent years have witnessed a tremendous increase in interest, from both academia and industry, in smart wireless applications such as intelligent transportation systems, digital healthcare, and Industry 4.0. The end devices of these applications generate a significant amount of data. Furthermore, many mobile network functions cannot be modeled mathematically. To address these limitations, machine learning can be used to model mobile network functions and make them smarter: models are trained on the data generated by mobile networks, thereby improving network performance. Machine learning approaches fall mainly into two categories: centralized machine learning and federated learning. Centralized machine learning trains the model at a third-party centralized location, which requires migrating device data to a centralized server and thus raises privacy concerns. Federated learning, in contrast, enables on-device training without migrating the device's data to a centralized server, but it faces data heterogeneity issues. The purpose of this Special Issue is to cover both centralized machine learning and federated learning for wireless systems. Suggested topics include, but are not limited to, the following:

  • Federated-learning-based smart applications (e.g., healthcare, intelligent transportation systems);
  • Communication-efficient federated learning models;
  • Incentive mechanisms for federated learning;
  • Personalized federated learning;
  • Deep-learning-based applications;
  • Applications of centralized and federated learning in the optimization of wireless systems, including radio resource management, caching, and edge computing resource management;
  • Privacy-aware design for machine learning;
  • Robust architecture for federated learning.
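To make the distinction between the two categories concrete, the sketch below shows one federated learning round in the standard FedAvg style: each client takes a local gradient step on data that never leaves the device, and the server only averages the resulting model weights. This is a minimal, illustrative toy (the least-squares objective and all function names are our own, not from any submission):

```python
def local_update(weights, data, lr=0.1):
    # One gradient step of a least-squares model, computed on-device;
    # data is a list of (feature_vector, target) pairs that never leaves the client.
    grad = [0.0] * len(weights)
    for x, y in data:
        err = sum(w * xi for w, xi in zip(weights, x)) - y
        for i, xi in enumerate(x):
            grad[i] += 2.0 * err * xi / len(data)
    return [w - lr * g for w, g in zip(weights, grad)]

def fed_avg(client_weights, client_sizes):
    # Server-side aggregation: average client models weighted by local dataset size.
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [sum(m[i] * n for m, n in zip(client_weights, client_sizes)) / total
            for i in range(dim)]

def federated_round(global_weights, client_data):
    # One communication round: each client refines the global model locally,
    # then only the model updates (not the raw data) are averaged at the server.
    locals_ = [local_update(list(global_weights), d) for d in client_data]
    sizes = [len(d) for d in client_data]
    return fed_avg(locals_, sizes)
```

Only model parameters cross the network, which is what separates this setting from centralized training; handling heterogeneous (non-IID) client data well is the open issue noted above.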

Prof. Dr. Choong Seon Hong
Dr. Latif U. Khan
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Future Internet is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • machine learning
  • federated learning
  • centralized machine learning
  • mobile network
  • wireless systems
  • radio resource management
  • caching
  • edge computing resource management

Published Papers (2 papers)


Research

27 pages, 1093 KiB  
Article
On the Use of Knowledge Transfer Techniques for Biomedical Named Entity Recognition
by Tahir Mehmood, Ivan Serina, Alberto Lavelli, Luca Putelli and Alfonso Gerevini
Future Internet 2023, 15(2), 79; https://doi.org/10.3390/fi15020079 - 17 Feb 2023
Cited by 2 | Viewed by 1623
Abstract
Biomedical named entity recognition (BioNER) is a preliminary task for many other tasks, e.g., relation extraction and semantic search. Extracting the text of interest from biomedical documents becomes more demanding as the availability of online data increases. Deep learning models have been adopted for BioNER because deep learning has proven very successful in many other tasks. Nevertheless, the complex structure of biomedical text data remains a challenge for deep learning models. Limited annotated biomedical text data make it more difficult to train deep learning models with millions of trainable parameters. A single-task model, which focuses on learning one specific task, struggles to learn complex feature representations from a limited quantity of annotated data. Moreover, manually constructing annotated data is time-consuming. It is therefore vital to exploit more efficient ways of training deep learning models on the available annotated data. This work enhances the performance of the BioNER task by taking advantage of two knowledge transfer techniques: multitask learning and transfer learning. It presents two multitask models (MTMs) that learn shared features and task-specific features through shared and task-specific layers. In addition, each trained MTM is fine-tuned on each specific dataset, tailoring it from a general feature representation to a specialized one. The presented empirical results and statistical analysis illustrate that the proposed techniques significantly enhance the performance of the corresponding single-task model (STM).
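The shared/task-specific layering the abstract describes is commonly realized as hard parameter sharing. The toy sketch below is our own illustration (class, task names, and initialization are hypothetical, not taken from the paper): one shared layer produces a common representation, and each task keeps its own head.

```python
import random

random.seed(0)  # deterministic toy initialization

def linear(inputs, weights):
    # Dot product: a single linear unit.
    return sum(i * w for i, w in zip(inputs, weights))

class MultiTaskModel:
    """Hard parameter sharing: one shared layer feeds a separate head per task."""

    def __init__(self, in_dim, hidden_dim, tasks):
        # Shared layer: trained jointly on all tasks, reused by every task.
        self.shared = [[random.uniform(-1.0, 1.0) for _ in range(in_dim)]
                       for _ in range(hidden_dim)]
        # Task-specific heads: one weight vector per task.
        self.heads = {t: [random.uniform(-1.0, 1.0) for _ in range(hidden_dim)]
                      for t in tasks}

    def forward(self, x, task):
        # The hidden representation is identical for all tasks (shared features);
        # only the final head is task-specific.
        hidden = [max(0.0, linear(x, w)) for w in self.shared]  # ReLU
        return linear(hidden, self.heads[task])
```

Fine-tuning, as in the paper, would then continue training such a model on a single dataset so the shared representation specializes to that task.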
(This article belongs to the Special Issue Machine Learning for Mobile Networks)

14 pages, 347 KiB  
Article
Distributed Bandwidth Allocation Strategy for QoE Fairness of Multiple Video Streams in Bottleneck Links
by Yazhi Liu, Dongyu Wei, Chunyang Zhang and Wei Li
Future Internet 2022, 14(5), 152; https://doi.org/10.3390/fi14050152 - 18 May 2022
Cited by 2 | Viewed by 2197
Abstract
In QoE fairness optimization of multiple video streams, a distributed video stream fairness scheduling strategy based on federated deep reinforcement learning is designed to address the low bandwidth utilization caused by unfair bandwidth allocation, as well as the problematic convergence of distributed algorithms in the cooperative control of multiple video streams. The proposed strategy predicts a reasonable bandwidth allocation weight for the current video stream according to its player state and the global characteristics provided by the server. The congestion control protocol then allocates to each video stream in the bottleneck link a proportion of the available bandwidth matching its bandwidth allocation weight. The strategy trains a local predictive model on each client and periodically performs federated aggregation to generate the optimal global scheme. In addition, the proposed strategy constructs global parameters containing information about the overall state of the video system to improve the performance of the distributed scheduling algorithm. The experimental results show that introducing the global parameters improves the algorithm's QoE fairness and overall QoE efficiency by 10% and 8%, respectively, and that QoE fairness and overall QoE efficiency improve by 8% and 7%, respectively, compared with the latest scheme.
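The weight-proportional allocation step the abstract describes reduces to a simple rule once each stream's weight has been predicted. The sketch below is our own illustration (function names are hypothetical, not the paper's code), assuming every predicted weight is positive:

```python
def allocate_bandwidth(total_bw, weights):
    # Split the bottleneck-link capacity among streams in proportion
    # to their (model-predicted) allocation weights.
    total_weight = sum(weights)
    return [total_bw * w / total_weight for w in weights]

# Example: with predicted weights 1 and 3, a 100 Mbps bottleneck is split
# so the second stream receives three times the first stream's share.
shares = allocate_bandwidth(100.0, [1.0, 3.0])
```

In the paper's setting, the congestion control protocol enforces these proportions on the bottleneck link, while federated aggregation keeps the per-client weight predictors consistent.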
(This article belongs to the Special Issue Machine Learning for Mobile Networks)
