Proceeding Paper

Brief Survey: Machine Learning in Handover Cellular Network †

by Viviana Párraga-Villamar 1,*, Pablo Lupera-Morillo 1, Felipe Grijalva 2 and Henry Carvajal 3

1 Departamento de Electrónica, Telecomunicaciones y Redes de Información (DETRI), Escuela Politécnica Nacional, Quito 170525, Ecuador
2 Colegio de Ciencias e Ingenierías “El Politécnico”, Universidad San Francisco de Quito USFQ, Quito 170901, Ecuador
3 Faculty of Engineering and Applied Sciences (FICA), Telecommunications Engineering, Universidad de Las Américas (UDLA), Quito 170503, Ecuador
* Author to whom correspondence should be addressed.
Presented at the XXXI Conference on Electrical and Electronic Engineering, Quito, Ecuador, 29 November–1 December 2023.
Eng. Proc. 2023, 47(1), 2; https://doi.org/10.3390/engproc2023047002
Published: 26 September 2023
(This article belongs to the Proceedings of XXXI Conference on Electrical and Electronic Engineering)

Abstract: This work offers a concise review of the application of machine learning (ML) to cellular network handover (HO) via the Systematic Mapping Study (SMS) methodology, emphasizing the problem areas and requirements. The key points include the paramount role of high-quality data, with meticulous data acquisition and preprocessing as vital steps in ML dataset construction. The article identifies prevalent parameters for HO enhancement and underscores the diversity of ML algorithms, aligning them with specific data inputs and tasks. This study establishes a robust basis for forthcoming research in applying ML to cellular network HOs.

1. Introduction

Cellular wireless technology is rapidly advancing due to the growing demand for faster and more efficient communication systems. Within these systems, effective mobility management is crucial, particularly in the context of HO mechanisms. HO, defined as the process of transferring an ongoing connection from one base station to another to prevent connection loss [1], plays a pivotal role in ensuring seamless communication.
ML is an automated learning approach that extracts knowledge from data by uncovering hidden patterns. These acquired patterns are leveraged to predict, optimize, or automate tasks, offering solutions to diverse problems across various domains [2]. Numerous researchers have embraced ML techniques and algorithms to enhance HO mechanisms within cellular networks.
The application of ML to HO mechanisms in cellular networks stems from the realization that mobile cellular networks have evolved into massive generators and carriers of data. ML can analyze these data to address issues such as failed HOs. Current research in this field is geared toward enhancing the quality of service provided to subscribers [3]. Consequently, many studies have gathered data from cellular networks for analysis. For instance, telecom companies utilize data on voice and SMS usage patterns to predict user churn [4], while others have examined signal measurements, network performance logs, and heterogeneous data to develop interference management algorithms and predict HOs [5,6].
However, processing such extensive datasets requires suitable methods and tools. Some researchers employ statistical methods to select the most significant data features [4], but the use of ML techniques and algorithms for large-scale data analysis has gained prominence in this domain. The ML process hinges on selecting the appropriate algorithm, including supervised, unsupervised, or reinforcement learning. In supervised learning, a labeled analysis variable is essential, such as distinguishing between soft HOs (ensuring no interruption during the process) and hard HOs (characterized by an actual interruption of the connection during the transition between base stations) [7]. Unsupervised learning, in contrast, clusters data without relying on labels, as seen in the co-clustering algorithm based on the Latent Block Model (LBM) for grouping similar Long-Term Evolution (LTE) cells according to Key Performance Indicators (KPIs) for congestion prediction [8]. Reinforcement learning is also applicable, where rewards and penalties guide decision-making based on input data. For instance, a Q-Table modeled algorithm may evaluate the received link beam power using Reference Signal Received Power (RSRP), cell ID, and reward values during HO [1].
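The tabular Q-learning idea mentioned above (a Q-table over RSRP and cell ID [1]) can be sketched as follows. The state encoding, reward, and parameter values here are illustrative assumptions, not the exact design of [1]:

```python
# Minimal tabular Q-learning sketch for a HO decision. States combine the
# serving cell with a quantized RSRP bucket; actions are candidate target
# cells; the reward reflects HO success (e.g., +1 good link after HO).

def rsrp_bucket(rsrp_dbm, step_db=10):
    """Quantize an RSRP measurement (dBm) into a coarse bucket index."""
    return int(rsrp_dbm // step_db)

def q_update(q, state, action, reward, next_state, actions, alpha=0.5, gamma=0.9):
    """One Q-learning step: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(q.get((next_state, a), 0.0) for a in actions)
    old = q.get((state, action), 0.0)
    q[(state, action)] = old + alpha * (reward + gamma * best_next - old)

def choose_action(q, state, actions):
    """Greedy policy: pick the candidate cell with the highest Q-value."""
    return max(actions, key=lambda a: q.get((state, a), 0.0))
```

In [1] the reward is derived from the received link beam power after the HO; a +1/−1 reward for a successful/failed HO, as assumed here, plays the same role.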
Furthermore, within the ML process, defining the specific task the algorithm aims to solve is crucial. In the context of ML in cellular network HOs, tasks may include self-organizing, which automates the HO process through self-organization of connections [9]; the management of HO control parameters (HCP), where parameters are automatically configured to enhance efficiency [10]; and self-optimizing networks, which achieve automatic system optimization [6].
In summary, this article provides a concise review of the state-of-the-art application of machine learning to cellular network HOs. Emphasis is placed on the prerequisites for applying machine learning to cellular network data to enhance HO performance.

2. Methodology

The literature review was conducted using the Systematic Mapping Study (SMS) methodology, which delineates the steps for identifying available academic evidence and areas necessitating further research on a given topic. This methodology involves the analysis of an extensive collection of primary studies, encompassing articles and publications, to carry out an initial examination of them [11]. Subsequently, the studies enabling this review were selected and organized in accordance with the framework illustrated in Figure 1.
Articles pertaining to ML in HO cellular networks have underscored that the principal resource for an ML process is the input data. Within this context, two pivotal processes can be distinguished. Firstly, the data acquisition process, which concerns the manner in which HO data are obtained within cellular networks for subsequent analysis. Secondly, the construction of the dataset that will be incorporated into the modeling process, wherein the data are subject to cleansing and new variables are generated in accordance with the study’s requirements. Following the delineation of the data input, the ML process advances to encompass the algorithms employed, contingent upon the chosen learning approach and the modeling objectives referred to as the task of the ML process.

3. Results

The results derived from this study primarily center around the prerequisites for ML applications aimed at enhancing HO processes within cellular networks. Beginning with the foundational step of defining input data for modeling, the focus extends to encompass data acquisition, delineating the sources from which data are procured, and dataset construction, which governs data suitability and novel data generation. Further exploration delves into the ML process concerning handovers, encompassing algorithms employed in modeling and the task specification directed at handover improvement.

3.1. Input Data

3.1.1. Data Acquisition

In some instances, researchers turn to simulators such as MATLAB to assess the effectiveness of their developed models. In [12], the mobility patterns of cellular service users were extracted to predict trajectories using statistical models and deep learning. The objective was to optimize HO, considering signal strength and network load balancing. The model's performance was then evaluated by simulating the HO using the mobility predictions, showing the lowest prediction error compared with existing methods [12].
Another study utilized an Automatic Tuning Optimization (ATO) algorithm with MATLAB to evaluate its performance. This algorithm leverages the user rate and the reference power of the received signal to adapt HCPs in LTE-A and 5G Heterogeneous Networks (HetNets). The ATO algorithm was assessed with numerous macro eNBs (MeNBs) and small eNBs (SeNBs) based on 3GPP specifications [6].
In pursuit of improving HO in 5G multi-level intra-Radio Access Technology (RAT) scenarios, a HO decision algorithm was developed. MATLAB simulations were used, and entry conditions were defined for each event. If these conditions were met for a certain Time-to-Trigger (TTT), the simulated base station (BS) would initiate the HO process. However, if the RSRP of the user equipment (UE) fell below the exit condition or failed to meet the entry condition within the TTT, the UE remained connected to the current BS [13].
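The TTT-gated trigger just described can be sketched as follows. The hysteresis margin, per-sample evaluation, and TTT length are assumed values for illustration, not the parameters used in [13]:

```python
# Sketch of a TTT-gated HO trigger: the entry condition (neighbor RSRP
# exceeds serving RSRP by a hysteresis margin) must hold for a number of
# consecutive measurement samples before the HO is initiated.

def ho_decision(serving_rsrp, neighbor_rsrp, hysteresis_db=3.0, ttt_samples=4):
    """Return the sample index at which HO is triggered, or None if never."""
    held = 0
    for i, (s, n) in enumerate(zip(serving_rsrp, neighbor_rsrp)):
        if n > s + hysteresis_db:
            held += 1
            if held >= ttt_samples:
                return i  # entry condition held through the TTT window
        else:
            held = 0  # condition broken: restart the TTT window
    return None
```

A UE whose neighbor cell only briefly looks better therefore stays on the serving BS, which is the ping-pong protection the TTT provides.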
Actual mobile network data collection has been conducted directly by cellular operators or through mobile devices using cellular applications. In [1], the serving BS collected reports containing RSRP measurements from the mobile stations for a centralized Contextual Multi-Armed Bandit (CMAB) agent to make HO decisions. Each BS ran an algorithm to determine whether a HO was needed. Furthermore, a telecommunications company provided real LTE traffic data to extract network characteristics for predicting 5G traffic [14].
HO prediction is a significant area of research in 5G systems. To predict bandwidth and HO in 5G or 4G/5G networks, data captured through cellular applications, including parameters such as bandwidth, LTE neighbors, Received Signal Strength Indicator (RSSI), Reference Signal Reception Quality (RSRQ), and speed, among others, were utilized [15].
In [16], the history of radio base HOs was used to predict HO occurrences through two methods: one in vehicles with defined topologies, and another in areas with mobile people. Data were collected from the real world using Android applications installed in vehicles.

3.1.2. Dataset Construction

In ML, it is crucial that researchers have a clear understanding of the characteristics that will be fed into the model to achieve the desired results. Researchers in the field of HO and ML have identified various factors influencing HO behavior. These factors include UE measurements such as RSSI, RSRQ, user speed, distance to the BS, and signal-to-noise ratio (SNR), as well as data obtained from the base station, such as network traffic, channel capacity, error rate, and HO requests. These measurements give rise to characteristics such as HO failure (HOF), Unnecessary HO (UHO), Radio Link Failure (RLF), Too Early HO (TEHO), and Too Late HO (TLHO).
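As an illustration of how such characteristics might be derived during dataset construction, the following hypothetical rule assigns one of the HO outcome classes to a single HO event; the time thresholds are invented for illustration and are not taken from the surveyed papers:

```python
# Hypothetical labeling rule for HO outcome classes (HOF, TEHO, TLHO, UHO)
# based on when a Radio Link Failure (RLF) or a return to the source cell
# occurs relative to the HO completion time. Thresholds are illustrative.

def label_ho(ho_time_s, rlf_time_s=None, returned_to_source_s=None):
    """Assign a HO outcome label to one HO event from its timeline."""
    if rlf_time_s is not None:
        if rlf_time_s < ho_time_s:
            return "TLHO"   # link failed before HO completed: too late
        if rlf_time_s - ho_time_s < 1.0:
            return "TEHO"   # failure right after HO: triggered too early
        return "HOF"        # HO completed but the link later failed
    if returned_to_source_s is not None and returned_to_source_s - ho_time_s < 5.0:
        return "UHO"        # quick ping-pong back to the source cell
    return "OK"
```

Applying such a rule to every HO record yields the labeled variable that a supervised model needs.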
The primary decision parameters for inter-cell HO can be categorized based on speed, RSS or RSRP, cost, energy efficiency, and interference [17].
Additionally, some authors contend that reasons for HOs include a reduced signal strength insufficient to maintain the call, the user equipment moving away from the base station, the exhaustion of all the BS’s capacity with pending calls, the assignment of a channel to another BS’s call in the overlapping range, and interference avoidance when one user equipment item uses a channel that interferes with another user equipment item using the same channel in another cell [18].

3.2. Machine Learning in Handover

In the context of HO in cellular networks, the application of ML involves two essential components: “Algorithms” and “Task.”

3.2.1. Algorithms

ML algorithms play a pivotal role in the modeling process. These algorithms are selected based on the learning approach and the objectives of the HO optimization. They can be categorized into three primary types.
Supervised
The choice of ML algorithms in HO research is closely tied to the available data and the desired outcomes. Supervised learning algorithms are employed when working with labeled data to predict specific behaviors.
To mitigate call losses in cellular networks stemming from suboptimal HO performance or channel allocation, researchers have applied supervised algorithms, including neural networks. In one instance, an adaptive HO threshold, based on signal-to-interference ratio and available channels, was used in a decision matrix to label data and determine HO decisions [19].
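As a toy illustration of this supervised approach (not the actual neural network of [19]), a minimal perceptron can learn a HO decision from labeled signal-to-interference ratio (SIR) and free-channel samples; the training data below are invented:

```python
# Minimal perceptron learning a binary HO decision from two features:
# (SIR in dB, number of free channels). Label 1 = perform HO.

def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """Classic perceptron rule: nudge weights by the prediction error."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), y in zip(samples, labels):
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = y - pred
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# Invented labeled history: low SIR -> HO performed (1), high SIR -> no HO (0).
samples = [(-5, 3), (10, 3), (-8, 5), (12, 4)]
labels = [1, 0, 1, 0]
w, b = train_perceptron(samples, labels)
```

A real study would replace the perceptron with the neural network and decision matrix of [19], but the data-to-decision pipeline is the same.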
Unsupervised
The SIM-Known semantic information model aids in HO decisions by considering various cellular network contexts, such as network, application, user, device, and HO. Through semantic analysis, it categorizes HOs as network-controlled, mobile-controlled, network-assisted, or mobile-assisted [10].
Another unsupervised approach employed for predicting cellular network traffic, including HOs, is the graph convolutional network. This approach extracts network features from real LTE traffic data and has been instrumental in predicting traffic patterns in 5G networks, utilizing auxiliary features such as time and day [14].
Reinforcement
Intelligent decision making in HO optimization leverages historical user trajectory information, often employing double deep reinforcement learning algorithms. These algorithms consider network, channel, BS antenna beamforming, and Signal-to-Interference-plus-Noise Ratio (SINR) aspects to optimize the number of HOs in dense base station deployments [20].
In the pursuit of optimizing HO and power allocation for maximizing throughput in 5G HetNet systems, researchers have proposed models utilizing reinforcement learning techniques. Each mobile device contributes base radio selection information and power requirements to an incentive-based system, which periodically updates selection policies based on results obtained [21].

3.2.2. Task

The “Task” in ML for HO optimization comprises specific objectives or problems that ML models aim to solve.
Self-organizing
Self-organizing networks (SON) represent a key application of machine learning in HO optimization. SON aims to optimize cellular networks through three distinct phases: detection of failures; diagnosis of their causes (including issues such as hardware problems and parameter misconfigurations); and recovery, to establish a fault-free network [9].
HO Control Parameters
In the realm of HO management (HM), ML models are applied to optimize HCPs. Research efforts focus on developing algorithms for the adaptive adjustment of HCPs based on various network factors, including RSRP, SINR, and UE speed. Performance indicators such as the HO failure rate (HOFR) are also considered [10].
Incorrect HCP configurations can lead to an increased RLF rate. Therefore, frequent adjustments based on UE mobility help minimize UHOs. The adjustment process often involves considering TTT based on the number of HOs Performed (HOPP) within a measurement interval [6].
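An adaptive adjustment of this kind might be sketched as follows; the TTT grid is a 3GPP-style subset and the HOPP thresholds are assumptions, not the rule used in [6]:

```python
# Illustrative TTT adaptation: raise TTT when too many HOs were performed in
# the last measurement interval (ping-pong symptom), lower it when HOs are
# rare. Step grid is a subset of 3GPP-style TTT values in milliseconds.

TTT_STEPS_MS = [40, 64, 80, 100, 128, 160, 256, 320, 480, 512]

def adapt_ttt(current_ttt_ms, hopp, high=10, low=2):
    """Return the next TTT value given the HOs Performed (HOPP) count."""
    i = TTT_STEPS_MS.index(current_ttt_ms)
    if hopp > high and i < len(TTT_STEPS_MS) - 1:
        return TTT_STEPS_MS[i + 1]  # too many HOs: trigger more strictly
    if hopp < low and i > 0:
        return TTT_STEPS_MS[i - 1]  # too few HOs: react faster
    return current_ttt_ms
```

Running this after every measurement interval gives the frequent, mobility-driven adjustment described above.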
Self-optimizing network
Self-optimizing networks represent another significant application of machine learning in cellular networks. Their processes include self-configuration, self-optimization, and self-healing. The process initiates by collecting Key Performance Indicators (KPIs) such as retainability, HO success rate, RSRP, RSRQ, SINR, distance, and average throughput [9].
In a study focusing on 5G HetNets, an auto-tuning self-optimization approach was proposed to address mobility challenges in LTE. The auto-tuning optimization (ATO) algorithm utilizes the user's speed and the reference power of the received signal to adjust the HO margin and activation time. HCPs are adapted after each measurement, reducing the number of HOs performed (HOPP), HO delay, and the call drop rate (CDR) [6].

4. Discussion

The proposed study outlines the prerequisites for conducting an ML investigation applied to the HO process within cellular networks. The fundamental resource for building ML models lies in the data, which enable the model to acquire knowledge and forecast future behavior for process enhancement, such as HO. Through a comprehensive review of the literature concerning the utilization of machine learning in cellular network HOs, key parameters commonly employed in this context have been identified.
To undertake a machine learning modeling for the HO process in cellular networks, we propose a structured approach, as depicted in Table 1. Firstly, it entails the collection of relevant parameters associated with HO, contingent upon the specific scenario and the accessibility of these parameters. Access channels can vary, including retrieval via the base radio for access to wireless network data or wireless network measurements. Alternatively, access may be facilitated through user devices to acquire wireless network data or user-specific information. In cases where direct access is unavailable, simulation tools such as MATLAB can be employed.
Secondly, the acquired data necessitate preprocessing, which involves data cleansing to eliminate outliers and the creation of new variables to capture data behavior during HOs, thereby enhancing their utility. Finally, this process culminates in the construction of a comprehensive dataset.
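A minimal sketch of this cleansing and derivation step, assuming a series of RSRP samples and a 3-sigma outlier rule (an illustrative choice, not a prescription from the surveyed works):

```python
import statistics

# Preprocessing sketch: drop RSRP samples more than 3 standard deviations
# from the mean, then derive a new per-sample variable (RSRP delta between
# consecutive kept samples) that captures signal behavior around a HO.

def clean_and_derive(rsrp):
    mu = statistics.fmean(rsrp)
    sd = statistics.pstdev(rsrp)
    kept = [x for x in rsrp if sd == 0 or abs(x - mu) <= 3 * sd]
    deltas = [b - a for a, b in zip(kept, kept[1:])]  # derived variable
    return kept, deltas
```

Real pipelines would add per-cell normalization and gap handling, but the outlier-removal-plus-feature-derivation pattern is the core of the step described above.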
Within the machine learning process, it is imperative to precisely define the intended task for the model. In the context of this case study, the objective is to identify parameters related to HOs that can be inferred through machine learning modeling, while recognizing their reliance on input data.
Furthermore, after specifying input parameters and expected outcomes, machine learning algorithms that align with the study’s requirements are selected. The choice of algorithms is intrinsically linked to the characteristics of the data, encompassing both input and output, and a range of algorithms are available to accommodate these data-specific needs.
In accordance with the steps, the application of machine learning to HO process data within cellular networks serves the purpose of discovering features that enhance the quality of service provided to users when their mobile devices necessitate a HO process. Following the development of the machine learning model, a comprehensive evaluation is imperative to gauge the effectiveness and efficiency of the proposed model.

5. Conclusions

The challenges posed by HOs in cellular networks revolve around their impact on increasing signaling load within the mobile network. This not only diminishes the quality of service delivered to users but also incurs a computational cost for the network. Detecting HO failures is both essential and intricate, given that these failures can stem from a wide array of causes, including hardware issues and suboptimal parameter configurations in cellular networks.
The application of machine learning to cellular network HOs offers the prospect of context-aware HO decisions. Such decisions are rooted in the consideration and correlation of criteria drawn from diverse data collection scenarios, encompassing perspectives from users, wireless networks, and base radio stations. Emphasis has been consistently placed on the pivotal role of high-quality data as the bedrock for training effective machine learning models. Rigorous data collection and preprocessing are critical precursors to constructing datasets that underpin specific HO-related tasks. The literature frequently underscores signal strength as a paramount parameter for enhancing the HO process, encompassing metrics such as RSSI or RSRP, alongside RSRQ, distance, speed, and network traffic. The wide spectrum of machine learning algorithms available for use is intricately intertwined with the nature of the data and the specific tasks at hand.
Furthermore, it is crucial to recognize that the successful integration of machine learning into cellular network HOs can yield substantial enhancements in the quality of service furnished to users. The predictive and optimization capabilities of machine learning can effectively reduce connection interruptions, ultimately enhancing the user experience within cellular networks. Advancements in machine learning within this domain have facilitated the conceptualization and development of solutions such as self-organizing and self-optimizing HOs. These innovations represent a forward-looking vision for self-optimizing HOs that encompass the realms of self-configuration, self-optimization, and self-healing.
In conclusion, we underscore the imperative need for comprehensive evaluations of the models devised to gauge their real-world effectiveness and efficiency. This serves as a foundational stepping stone for future research endeavors in the realm of applying self-optimizing machine learning in the context of cellular network HOs. As communication technology continues its inexorable evolution, machine learning stands as an indispensable tool for the optimization and enhancement of service quality within these pivotal networks. It is our sincere hope that this work will inspire and guide researchers and practitioners alike in their quest for innovative solutions in this ever-evolving field.

Author Contributions

Conceptualization, methodology, investigation, writing—original draft preparation, writing—review and editing, V.P.-V.; supervision, P.L.-M., F.G. and H.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

We are grateful for the support of the Internal Research Project without Funding PII-DETRI-2022-02 entitled “Analysis of the performance of cellular networks based on field measurements and Machine Learning techniques (Case study city of Quito)” of the Escuela Politécnica Nacional.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Yajnanarayana, V.; Rydén, H.; Hévizi, L. 5G Handover using Reinforcement Learning. In Proceedings of the 2020 IEEE 3rd 5G World Forum (5GWF), Bangalore, India, 10–12 September 2020; pp. 349–354.
  2. Boutaba, R.; Salahuddin, M.A.; Limam, N.; Ayoubi, S.; Shahriar, N.; Estrada-Solano, F.; Caicedo, O.M. A comprehensive survey on machine learning for networking: Evolution, applications and research opportunities. J. Internet Serv. Appl. 2018, 9, 16.
  3. Hafez, H. Mining Big Data in Telecommunications Industry: Challenges, Techniques, and Revenue Opportunity. Int. J. Comput. Electr. Autom. Control Inf. Eng. 2016, 10, 183–190.
  4. Azeem, M.; Usman, M.; Fong, A. A churn prediction model for prepaid customers in telecom using fuzzy classifiers. Telecommun. Syst. 2017, 66, 603–614.
  5. He, Y.; Yu, F.; Zhao, N.; Yin, H.; Yao, H.; Qiu, R. Big Data Analytics in Mobile Cellular Networks. IEEE Access 2016, 4, 1985–1996.
  6. Alhammadi, A.; Roslee, M.; Alias, M.; Shayea, I.; Alquhali, A. Velocity-Aware Handover Self-Optimization Management for Next Generation Networks. Appl. Sci. 2020, 10, 1354.
  7. Sreejith, S.; Rajak, A. Study on optimization of handoff process using fuzzy logic for mobile communication. J. Phys. Conf. Ser. 2020, 1706, 012161.
  8. Kassan, S.; Hadj-Kacem, I.; Jemaa, S.B.; Allio, S. A Hybrid machine learning based model for congestion prediction in mobile networks. In Proceedings of the 2022 IEEE 33rd Annual International Symposium on Personal, Indoor and Mobile Radio Communications (PIMRC), Kyoto, Japan, 12–15 September 2022; pp. 583–588.
  9. Chen, M.; Zhu, K.; Chen, B. Root Cause Analysis for Self-organizing Cellular Network: An Active Learning Approach. Mob. Netw. Appl. 2020, 25, 2506–2516.
  10. Vivas, F.; Caicedo, O.; Nieves, J. A Semantic and Knowledge-Based Approach for Handover Management. Sensors 2021, 21, 4234.
  11. Zaldumbide, J.; Parraga, V. Systematic Mapping Study of Literature on Educational Data Mining to Determine Factors That Affect School Performance. In Proceedings of the 2018 International Conference on Information Systems and Computer Science (INCISCOS), Quito, Ecuador, 13–15 November 2018; pp. 239–245.
  12. Bahra, N.; Pierre, S. A Hybrid User Mobility Prediction Approach for Handover Management in Mobile Networks. Telecom 2021, 2, 199–212.
  13. Kapadia, P.; Seet, B.-C. Multi-Tier Cellular Handover with Multi-Access Edge Computing and Deep Learning. Telecom 2021, 2, 446–471.
  14. Zhao, S.; Jiang, X.; Jacobson, G.; Jana, R.; Hsu, W.-L.; Rustamov, R.; Talasila, M.; Aftab, S.A.; Chen, Y.; Borcea, C. Cellular Network Traffic Prediction Incorporating Handover: A Graph Convolutional Approach. In Proceedings of the 2020 17th Annual IEEE International Conference on Sensing, Communication, and Networking (SECON), Como, Italy, 22–25 June 2020; pp. 1–9.
  15. Mei, L.; Gou, J.; Cai, Y.; Cao, H.; Liu, Y. Realtime Mobile Bandwidth and Handoff Predictions in 4G/5G Networks. arXiv 2021, arXiv:2104.12959. Available online: http://arxiv.org/abs/2104.12959 (accessed on 15 February 2023).
  16. Abdah, H.; Barraca, J.; Aguiar, R. Handover Prediction Integrated with Service Migration in 5G Systems. In Proceedings of the ICC 2020—2020 IEEE International Conference on Communications (ICC), Dublin, Ireland, 7–11 June 2020; pp. 1–7.
  17. Achhab, T.; Abboud, F.; Assalem, A. A Robust Self-Optimization Algorithm Based on Idiosyncratic Adaptation of Handover Parameters for Mobility Management in LTE-A Heterogeneous Networks. IEEE Access 2021, 9, 154237–154264.
  18. Dhake, T.; Jain, E.; Kabra, P.; Khan, A.; Joglekar, C. Effect of Power Adjustment on Handover in Communication Network. Eng. Technol. 2020, 8, 7.
  19. Islam, M.; Hasan, M.; Begum, A. Improvement of the Handover Performance and Channel Allocation Scheme using Fuzzy Logic, Artificial Neural Network and Neuro-Fuzzy System to Reduce Call Drop in Cellular Network. J. Eng. Adv. 2020, 1, 130–138.
  20. Mollel, M.S.; Abubakar, A.I.; Ozturk, M.; Kaijage, S.F.; Kisangiri, M.; Hussain, S.; Imran, M.A.; Abbasi, Q.H. Intelligent handover decision scheme using double deep reinforcement learning. Phys. Commun. 2020, 42, 101133.
  21. Guo, D.; Tang, L.; Zhang, X.; Liang, Y. Joint Optimization of Handover Control and Power Allocation Based on Multi-Agent Deep Reinforcement Learning. IEEE Trans. Veh. Technol. 2020, 69, 13124–13138.
Figure 1. Paper structure.
Table 1. Requirements to apply machine learning in HO of cellular networks.

Data
  Data acquisition
    Wireless network: RSSI, RSNR, SINR, bandwidth, RSRQ, channel capacity, traffic
    User equipment: speed, mobility patterns, distance to the BS
    Base station: HO history, call set-up success rate, field intensity measurements, voice quality measurements
  Dataset construction
    HO failure (HOF), too early HO (TEHO), too late HO (TLHO), unnecessary HO (UHO), Radio Link Failure (RLF)
Machine Learning
  Algorithms
    Supervised: support vector machine (SVM), artificial neural network (ANN), decision tree, random forest, graph convolutional network
    Unsupervised: k-means, Principal Component Analysis (PCA), expectation maximization, independent component analysis
    Reinforcement: Q-learning, SARSA, Proximal Policy Optimization, Markov chain
  Task
    Self-organization: HRQ, PPHO, HO type
    HO control parameters: TTT, CDR, HM, HO cost
    Network self-optimization: UHO, TEHO, TLHO, HOSR, HO cost
