Edge-Cloud Computing and Federated-Split Learning in the Internet of Things

A special issue of Future Internet (ISSN 1999-5903). This special issue belongs to the section "Internet of Things".

Deadline for manuscript submissions: closed (31 January 2024) | Viewed by 24231

Special Issue Editors


Dr. Qiang Duan
Guest Editor
Information Sciences and Technology Department, Pennsylvania State University, Abington, PA 19001, USA
Interests: network virtualization; cloud-native networking; edge-cloud computing; federated-split learning; internet of things; internet of intelligence

Prof. Dr. Zhihui Lu
Guest Editor
School of Computer Science, Fudan University, Shanghai 200433, China
Interests: edge-cloud computing; service computing; big data architecture; internet of things; distributed systems

Special Issue Information

Dear Colleagues,

The wide deployment of the Internet of Things (IoT) calls for new machine learning (ML) methods and distributed computing paradigms to enable various ML-based IoT applications to effectively process the huge amount of data in IoT. Federated Learning (FL) is a new collaborative learning method that allows multiple data owners to cooperate in ML model training without exposing private data. Split Learning (SL) is an emerging collaborative learning method that splits an ML model into multiple portions that are trained collaboratively by different entities. FL and SL, each with unique advantages and limitations, may complement each other to facilitate effective collaborative learning in an IoT environment. On the other hand, the rapid development of edge-cloud computing technologies enables a distributed computing platform in IoT upon which the FL and SL frameworks can be deployed. Therefore, FL and SL upon an edge-cloud platform in an IoT environment have formed an active research area that attracts interest from academia and industry.
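To make the two paradigms concrete for prospective authors, the short sketch below contrasts them: in FL, each client trains a complete local model and a server averages the parameters (FedAvg-style), whereas in SL the model is cut so that only activations at the cut layer leave the device. The class and function names (SmallNet, fed_avg) are illustrative only and are not tied to any paper in this issue.

```python
# Minimal sketch contrasting FL-style aggregation with an SL-style model split.
import torch.nn as nn


class SmallNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.client_part = nn.Sequential(nn.Linear(16, 32), nn.ReLU())  # runs on the IoT device
        self.server_part = nn.Sequential(nn.Linear(32, 2))              # runs on the edge/cloud

    def forward(self, x):
        # In split learning, only the activations at the cut layer cross the network.
        smashed = self.client_part(x)
        return self.server_part(smashed)


def fed_avg(client_states, weights):
    """Federated averaging: weighted mean of client state_dicts (the FL side)."""
    total = sum(weights)
    return {key: sum(w * s[key] for s, w in zip(client_states, weights)) / total
            for key in client_states[0]}


if __name__ == "__main__":
    clients = [SmallNet() for _ in range(3)]
    # Weights would typically be each client's number of local training samples.
    global_state = fed_avg([c.state_dict() for c in clients], weights=[100, 50, 50])
    print({k: v.shape for k, v in global_state.items()})
```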

This Special Issue aims to present the latest research advances in this interdisciplinary field of edge-cloud computing and federated-split learning. The Special Issue covers, but is not limited to, the following topics:

  • Algorithms for federated learning in edge-cloud computing;
  • Model aggregation for federated learning;
  • Communication-efficient federated learning;
  • Client incentive and selection in federated learning;
  • Decentralized framework architecture of federated learning;
  • Split learning algorithms and frameworks;
  • Split learning upon an edge-cloud computing platform;
  • Split learning performance evaluation and improvement;
  • Combining federated learning and split learning;
  • Hybrid federated-split learning frameworks upon an edge-cloud computing platform;
  • Privacy and security issues of federated and split learning;
  • Applications of federated and split learning in IoT (e.g., in industrial IoT, smart city, smart transportation, and smart health environments);
  • Blockchain-assisted federated and split learning;
  • Resource management in edge-cloud computing for supporting federated and split learning;
  • Unified computation-network virtualization in edge-cloud computing for supporting federated and split learning;
  • Service orchestration in edge-cloud computing for supporting federated and split learning.

Dr. Qiang Duan
Prof. Dr. Zhihui Lu
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Future Internet is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • federated learning
  • split learning
  • hybrid federated-split learning
  • edge-cloud computing
  • internet of things

Published Papers (12 papers)


Research


13 pages, 361 KiB  
Article
Secure Data Sharing in Federated Learning through Blockchain-Based Aggregation
by Bowen Liu and Qiang Tang
Future Internet 2024, 16(4), 133; https://doi.org/10.3390/fi16040133 - 15 Apr 2024
Viewed by 476
Abstract
In this paper, we explore the realm of federated learning (FL), a distributed machine learning (ML) paradigm, and propose a novel approach that leverages the robustness of blockchain technology. FL, a concept introduced by Google in 2016, allows multiple entities to collaboratively train an ML model without the need to expose their raw data. However, it faces several challenges, such as privacy concerns and malicious attacks (e.g., data poisoning attacks). Our paper examines the existing EIFFeL framework, a secure aggregation protocol that verifies the integrity of client updates in federated learning, and introduces an enhanced scheme that leverages the trustworthy nature of blockchain technology. Our scheme eliminates the need for a central server and any other third party, such as a public bulletin board, thereby mitigating the risks associated with the compromise of such third parties. Full article
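As background for the blockchain-assisted aggregation theme, the toy sketch below shows the general idea of recording model-update digests on a hash-chained ledger so that tampering is detectable. It is a generic illustration, not the scheme proposed in the paper above; ToyLedger and its fields are invented for this example.

```python
# A toy, hash-chained record of model-update digests; illustrative only.
import hashlib
import json


class ToyLedger:
    def __init__(self):
        self.blocks = []

    def append(self, payload: dict) -> str:
        prev_hash = self.blocks[-1]["hash"] if self.blocks else "0" * 64
        body = json.dumps({"prev": prev_hash, "payload": payload}, sort_keys=True)
        block_hash = hashlib.sha256(body.encode()).hexdigest()
        self.blocks.append({"prev": prev_hash, "payload": payload, "hash": block_hash})
        return block_hash

    def verify(self) -> bool:
        prev = "0" * 64
        for block in self.blocks:
            body = json.dumps({"prev": prev, "payload": block["payload"]}, sort_keys=True)
            if hashlib.sha256(body.encode()).hexdigest() != block["hash"] or block["prev"] != prev:
                return False
            prev = block["hash"]
        return True


ledger = ToyLedger()
ledger.append({"round": 1, "client": "A", "update_digest": "digest-of-client-A-update"})
ledger.append({"round": 1, "client": "B", "update_digest": "digest-of-client-B-update"})
print(ledger.verify())  # True: the chain is intact; editing any block would break it
```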

24 pages, 2749 KiB  
Article
UP-SDCG: A Method of Sensitive Data Classification for Collaborative Edge Computing in Financial Cloud Environment
by Lijun Zu, Wenyu Qi, Hongyi Li, Xiaohua Men, Zhihui Lu, Jiawei Ye and Liang Zhang
Future Internet 2024, 16(3), 102; https://doi.org/10.3390/fi16030102 - 18 Mar 2024
Viewed by 795
Abstract
The digital transformation of banks has led to a paradigm shift, promoting the open sharing of data and services with third-party providers through APIs, SDKs, and other technological means. While data sharing brings personalized, convenient, and enriched services to users, it also introduces security risks, including sensitive data leakage and misuse, highlighting the importance of data classification and grading as the foundational pillar of security. This paper presents a cloud-edge collaborative banking data open application scenario, focusing on the critical need for an accurate and automated sensitive data classification and categorization method. The regulatory outpost module addresses this requirement, aiming to enhance the precision and efficiency of data classification. Firstly, regulatory policies impose strict requirements concerning data protection. Secondly, the sheer volume of business and the complexity of the work situation make it impractical to rely on manual experts, as they incur high labor costs and are unable to guarantee sufficient accuracy. Therefore, we propose UP-SDCG, a scheme for automatically classifying and grading financially sensitive structured data. We developed a financial data hierarchical classification library. Additionally, we employed library augmentation technology and implemented a synonym discrimination model. We conducted an experimental analysis using simulation datasets, where UP-SDCG achieved precision surpassing 95%, outperforming the other three comparison models. Moreover, we performed real-world testing in financial institutions, achieving good detection results on customer data, regulatory data, and other personally sensitive information, in line with the application goals. Our ongoing work will extend the model’s capabilities to encompass unstructured data classification and grading, broadening the scope of application. Full article
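For readers unfamiliar with rule-library-based classification, the sketch below shows a much-simplified version of the general idea: a hierarchical rule library keyed by category, matched against column names (including synonyms) and sample values. The field names, patterns, and sensitivity levels are hypothetical and far simpler than UP-SDCG.

```python
# A simplified rule-library classifier for structured records; illustrative only.
import re

RULE_LIBRARY = {
    "phone_number": {"pattern": r"^1\d{10}$", "level": 3,
                     "synonyms": {"phone", "mobile", "tel", "contact_number"}},
    "id_card":      {"pattern": r"^\d{17}[\dXx]$", "level": 4,
                     "synonyms": {"id_no", "identity_card", "citizen_id"}},
}


def classify_field(column_name: str, sample_value: str):
    """Return (category, sensitivity_level), or (None, 0) if no rule matches."""
    name = column_name.lower()
    for category, rule in RULE_LIBRARY.items():
        name_hit = category == name or name in rule["synonyms"]
        value_hit = re.match(rule["pattern"], sample_value or "") is not None
        if name_hit or value_hit:
            return category, rule["level"]
    return None, 0


print(classify_field("mobile", "13812345678"))  # ('phone_number', 3)
print(classify_field("remark", "hello"))        # (None, 0)
```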

19 pages, 3172 KiB  
Article
Multi-Level Split Federated Learning for Large-Scale AIoT System Based on Smart Cities
by Hanyue Xu, Kah Phooi Seng, Jeremy Smith and Li Minn Ang
Future Internet 2024, 16(3), 82; https://doi.org/10.3390/fi16030082 - 28 Feb 2024
Viewed by 1272
Abstract
In the context of smart cities, the integration of artificial intelligence (AI) and the Internet of Things (IoT) has led to the proliferation of AIoT systems, which handle vast amounts of data to enhance urban infrastructure and services. However, the collaborative training of deep learning models within these systems encounters significant challenges, chiefly due to data privacy concerns and dealing with communication latency from large-scale IoT devices. To address these issues, multi-level split federated learning (multi-level SFL) has been proposed, merging the benefits of split learning (SL) and federated learning (FL). This framework introduces a novel multi-level aggregation architecture that reduces communication delays, enhances scalability, and addresses system and statistical heterogeneity inherent in large AIoT systems with non-IID data distributions. The architecture leverages the Message Queuing Telemetry Transport (MQTT) protocol to cluster IoT devices geographically and employs edge and fog computing layers for initial model parameter aggregation. Simulation experiments validate that the multi-level SFL outperforms traditional SFL by improving model accuracy and convergence speed in large-scale, non-IID environments. This paper delineates the proposed architecture, its workflow, and its advantages in enhancing the robustness and scalability of AIoT systems in smart cities while preserving data privacy. Full article
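The following sketch illustrates the hierarchical aggregation idea in its simplest form: device updates are averaged within each edge cluster first, and the edge-level results are averaged again at a higher layer. The cluster layout and sample weights are invented for illustration and do not reproduce the paper's exact workflow.

```python
# Minimal sketch of multi-level (device -> edge -> fog/cloud) parameter aggregation.
import numpy as np


def weighted_average(updates, weights):
    weights = np.asarray(weights, dtype=float)
    return sum(w * u for w, u in zip(weights, updates)) / weights.sum()


# Device-level updates grouped by edge node (e.g., by geographic MQTT topic).
edge_clusters = {
    "edge-1": {"updates": [np.ones(4) * 1.0, np.ones(4) * 2.0], "samples": [100, 300]},
    "edge-2": {"updates": [np.ones(4) * 4.0], "samples": [200]},
}

# Level 1: aggregate inside each edge cluster.
edge_models = {name: weighted_average(c["updates"], c["samples"])
               for name, c in edge_clusters.items()}
edge_weights = {name: sum(c["samples"]) for name, c in edge_clusters.items()}

# Level 2: aggregate the edge results at the fog/cloud layer.
global_model = weighted_average(list(edge_models.values()), list(edge_weights.values()))
print(global_model)  # pulled toward edge-1's result because it holds more samples
```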

15 pages, 2529 KiB  
Article
A Lightweight Neural Network Model for Disease Risk Prediction in Edge Intelligent Computing Architecture
by Feng Zhou, Shijing Hu, Xin Du, Xiaoli Wan and Jie Wu
Future Internet 2024, 16(3), 75; https://doi.org/10.3390/fi16030075 - 26 Feb 2024
Viewed by 959
Abstract
In the current field of disease risk prediction research, many methods rely on centralized computing on servers to train prediction models and run inference. However, this centralized computing method increases storage requirements, the load on network bandwidth, and the computing pressure on the central server. In this article, we design an image preprocessing method and propose a lightweight neural network model called Linge (Lightweight Neural Network Models for the Edge). We propose a distributed intelligent edge computing technology based on the federated learning algorithm for disease risk prediction. The intelligent edge computing method we propose performs prediction model training and inference directly at the edge without increasing storage space, while also reducing the load on network bandwidth and the computing pressure on the server. The lightweight neural network model we designed has only 7.63 MB of parameters and takes up only 155.28 MB of memory. In experiments comparing the Linge model with the EfficientNetV2 model, accuracy and precision increased by 2%, the recall rate increased by 1%, specificity increased by 4%, the F1 score increased by 3%, and the AUC (Area Under the Curve) value increased by 2%. Full article
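Parameter footprints such as the 7.63 MB figure above can be estimated directly from a model definition. The helper below shows one common way to do so in PyTorch; the toy network is a stand-in, not the Linge architecture.

```python
# Quick estimate of a model's parameter footprint for edge deployment.
import torch.nn as nn


def parameter_megabytes(model: nn.Module) -> float:
    """Size of all parameters in MB, assuming 32-bit floats (4 bytes each)."""
    n_params = sum(p.numel() for p in model.parameters())
    return n_params * 4 / (1024 ** 2)


toy_edge_model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, 2),
)
print(f"{parameter_megabytes(toy_edge_model):.3f} MB of parameters")
```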

23 pages, 14269 KiB  
Article
Implementation and Evaluation of a Federated Learning Framework on Raspberry PI Platforms for IoT 6G Applications
by Lorenzo Ridolfi, David Naseh, Swapnil Sadashiv Shinde and Daniele Tarchi
Future Internet 2023, 15(11), 358; https://doi.org/10.3390/fi15110358 - 31 Oct 2023
Cited by 3 | Viewed by 1651
Abstract
With the advent of 6G technology, the proliferation of interconnected devices necessitates a robust, fully connected intelligence network. Federated Learning (FL) stands as a key distributed learning technique, showing promise in recent advancements. However, the integration of novel Internet of Things (IoT) applications and virtualization technologies has introduced diverse and heterogeneous devices into wireless networks. This diversity encompasses variations in computation, communication, storage resources, training data, and communication modes among connected nodes. In this context, our study presents a pivotal contribution by analyzing and implementing FL processes tailored for 6G standards. Our work defines a practical FL platform, employing Raspberry Pi devices and virtual machines as client nodes, with a Windows PC serving as a parameter server. We tackle the image classification challenge, implementing the FL model via PyTorch, augmented by the specialized FL library, Flower. Notably, our analysis delves into the impact of computational resources, data availability, and heating issues across heterogeneous device sets. Additionally, we address knowledge transfer and employ pre-trained networks in our FL performance evaluation. This research underscores the indispensable role of artificial intelligence in IoT scenarios within the 6G landscape, providing a comprehensive framework for FL implementation across diverse and heterogeneous devices. Full article
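For orientation, a Flower client running on a Raspberry Pi typically follows the skeleton below. This is a generic sketch rather than the authors' code: the model and data loading are omitted, the server address is a placeholder, and the API shown (roughly the Flower 1.x NumPyClient interface) differs slightly between releases.

```python
# Minimal Flower client skeleton of the kind that could run on a Raspberry Pi node.
import flwr as fl
import numpy as np


class PiClient(fl.client.NumPyClient):
    def __init__(self):
        # Placeholder for the real model weights (e.g., a PyTorch state_dict as arrays).
        self.weights = [np.zeros((10, 10), dtype=np.float32)]

    def get_parameters(self, config):
        return self.weights

    def fit(self, parameters, config):
        self.weights = parameters          # local training on the Pi would go here
        return self.weights, 1, {}         # (updated weights, num local examples, metrics)

    def evaluate(self, parameters, config):
        return 0.0, 1, {"accuracy": 0.0}   # (loss, num examples, metrics)


if __name__ == "__main__":
    # The parameter server (e.g., the Windows PC in the paper) runs the Flower server side.
    fl.client.start_numpy_client(server_address="192.168.0.10:8080", client=PiClient())
```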

23 pages, 4077 KiB  
Article
Task Scheduling for Federated Learning in Edge Cloud Computing Environments by Using Adaptive-Greedy Dingo Optimization Algorithm and Binary Salp Swarm Algorithm
by Weihong Cai and Fengxi Duan
Future Internet 2023, 15(11), 357; https://doi.org/10.3390/fi15110357 - 30 Oct 2023
Cited by 1 | Viewed by 1494
Abstract
With the development of computationally intensive applications, the demand for edge cloud computing systems has increased, creating significant challenges for edge cloud computing networks. In this paper, we consider a simple three-tier computational model for multiuser mobile edge computing (MEC) and introduce two major task scheduling problems for federated learning in MEC environments: (1) the transmission power allocation (PA) problem, and (2) the dual decision-making problems of joint request offloading and computational resource scheduling (JRORS). At the same time, we factor in server pricing and task completion, in order to improve the user-friendliness and fairness of scheduling decisions. Solving these problems simultaneously ensures both scheduling efficiency and system quality of service (QoS), achieving a balance between efficiency and user satisfaction. We then propose an adaptive greedy dingo optimization algorithm (AGDOA) based on greedy policies and parameter adaptation to solve the PA problem, and construct a binary salp swarm algorithm (BSSA) that introduces binary coding to solve the discrete JRORS problem. Finally, simulations were conducted to verify the improved performance compared with traditional algorithms. In terms of scheduling efficiency, the proposed algorithms improved convergence speed and system response rate and found solutions with lower energy consumption. In terms of system quality of service, the solutions found also achieved higher fairness and system welfare. Full article
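To make the binary encoding concrete, the sketch below represents a JRORS-style offloading decision as one bit per task and evaluates it under a deliberately simple cost model. Plain random search stands in for the BSSA/AGDOA metaheuristics, and all costs and capacities are invented for illustration.

```python
# One bit per task: 0 = run locally, 1 = offload to the edge server.
import random

LOCAL_COST = [5.0, 9.0, 3.0, 7.0]  # hypothetical per-task local execution cost
EDGE_COST = [2.0, 4.0, 4.0, 3.0]   # hypothetical per-task offloading cost (incl. transmission)
EDGE_CAPACITY = 2                  # at most two tasks fit on the edge server


def fitness(bits):
    if sum(bits) > EDGE_CAPACITY:  # infeasible: penalize
        return float("inf")
    return sum(EDGE_COST[i] if b else LOCAL_COST[i] for i, b in enumerate(bits))


def random_search(iterations=1000):
    best, best_cost = None, float("inf")
    for _ in range(iterations):
        candidate = [random.randint(0, 1) for _ in LOCAL_COST]
        cost = fitness(candidate)
        if cost < best_cost:
            best, best_cost = candidate, cost
    return best, best_cost


print(random_search())  # converges to ([0, 1, 0, 1], 15.0) for this toy instance
```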

15 pages, 2011 KiB  
Article
Latency-Aware Semi-Synchronous Client Selection and Model Aggregation for Wireless Federated Learning
by Liangkun Yu, Xiang Sun, Rana Albelaihi and Chen Yi
Future Internet 2023, 15(11), 352; https://doi.org/10.3390/fi15110352 - 26 Oct 2023
Viewed by 1377
Abstract
Federated learning (FL) is a collaborative machine-learning (ML) framework particularly suited for ML models requiring numerous training samples, such as Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), and Random Forest, in the context of various applications, e.g., next-word prediction and eHealth. FL involves various clients participating in the training process by uploading their local models to an FL server in each global iteration. The server aggregates these models to update a global model. The traditional FL process may encounter bottlenecks, known as the straggler problem, where slower clients delay the overall training time. This paper introduces the Latency-awarE Semi-synchronous client Selection and mOdel aggregation for federated learNing (LESSON) method. LESSON allows clients to participate at different frequencies: faster clients contribute more frequently, therefore mitigating the straggler problem and expediting convergence. Moreover, LESSON provides a tunable trade-off between model accuracy and convergence rate by setting varying deadlines. Simulation results show that LESSON outperforms two baseline methods, namely FedAvg and FedCS, in terms of convergence speed and maintains higher model accuracy compared to FedCS. Full article
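The sketch below illustrates the semi-synchronous idea in a simplified form: instead of dropping slow clients, each client is assigned a participation interval that grows with its estimated round latency. The tiering rule and the latency figures are hypothetical and not the LESSON algorithm itself.

```python
# Latency-aware, semi-synchronous participation: slow clients join less frequently.

DEADLINE_S = 30.0                 # per-round deadline chosen by the operator

clients = {                       # hypothetical per-client estimated round latency (seconds)
    "c1": 12.0, "c2": 25.0, "c3": 40.0, "c4": 95.0,
}


def participation_interval(latency: float, deadline: float) -> int:
    """A slow client joins every k-th global round, where k grows with its latency."""
    k = 1
    while latency > k * deadline:
        k += 1
    return k


schedule = {cid: participation_interval(lat, DEADLINE_S) for cid, lat in clients.items()}
print(schedule)   # {'c1': 1, 'c2': 1, 'c3': 2, 'c4': 4}

current_round = 3
selected = [cid for cid, k in schedule.items() if current_round % k == 0]
print(selected)   # ['c1', 'c2'] -- only clients whose interval divides the round number
```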

17 pages, 1823 KiB  
Article
E-SAWM: A Semantic Analysis-Based ODF Watermarking Algorithm for Edge Cloud Scenarios
by Lijun Zu, Hongyi Li, Liang Zhang, Zhihui Lu, Jiawei Ye, Xiaoxia Zhao and Shijing Hu
Future Internet 2023, 15(9), 283; https://doi.org/10.3390/fi15090283 - 22 Aug 2023
Cited by 2 | Viewed by 932
Abstract
With the growing demand for data sharing file formats in financial applications driven by open banking, the use of the OFD (open fixed-layout document) format has become widespread. However, ensuring data security, traceability, and accountability poses significant challenges. To address these concerns, we propose E-SAWM, a dynamic watermarking service framework designed for edge cloud scenarios. This framework incorporates dynamic watermark information at the edge, allowing for precise tracking of data leakage throughout the data-sharing process. By utilizing semantic analysis, E-SAWM generates highly realistic pseudostatements that exploit the structural characteristics of documents within OFD files. These pseudostatements are strategically distributed to embed redundant bits into the structural documents, ensuring that the watermark remains resistant to removal or complete destruction. Experimental results demonstrate that our algorithm has a minimal impact on the original file size, with the watermarked text occupying less than 15%, indicating a high capacity for carrying the watermark. Additionally, compared to existing explicit watermarking schemes for OFD files based on annotation structure, our proposed watermarking scheme is suitable for the technical requirements of complex dynamic watermarking in edge cloud scenario deployment. It effectively overcomes vulnerabilities associated with easy deletion and tampering, providing high concealment and robustness. Full article

26 pages, 1726 KiB  
Article
Towards Efficient Resource Allocation for Federated Learning in Virtualized Managed Environments
by Fotis Nikolaidis, Moysis Symeonides and Demetris Trihinas
Future Internet 2023, 15(8), 261; https://doi.org/10.3390/fi15080261 - 31 Jul 2023
Cited by 4 | Viewed by 1532
Abstract
Federated learning (FL) is a transformative approach to Machine Learning that enables the training of a shared model without transferring private data to a central location. This decentralized training paradigm has found particular applicability in edge computing, where IoT devices and edge nodes often possess limited computational power, network bandwidth, and energy resources. While various techniques have been developed to optimize the FL training process, an important question remains unanswered: how should resources be allocated in the training workflow? To address this question, it is crucial to understand the nature of these resources. In physical environments, the allocation is typically performed at the node level, with the entire node dedicated to executing a single workload. In contrast, virtualized environments allow for the dynamic partitioning of a node into containerized units that can adapt to changing workloads. Consequently, the new question that arises is: how can a physical node be partitioned into virtual resources to maximize the efficiency of the FL process? To answer this, we investigate various resource allocation methods that consider factors such as computational and network capabilities, the complexity of datasets, as well as the specific characteristics of the FL workflow and ML backend. We explore two scenarios: (i) running FL over a finite number of testbed nodes and (ii) hosting multiple parallel FL workflows on the same set of testbed nodes. Our findings reveal that the default configurations of state-of-the-art cloud orchestrators are sub-optimal when orchestrating FL workflows. Additionally, we demonstrate that different libraries and ML models exhibit diverse computational footprints. Building upon these insights, we discuss methods to mitigate computational interferences and enhance the overall performance of the FL pipeline execution. Full article

27 pages, 2836 KiB  
Article
FedCO: Communication-Efficient Federated Learning via Clustering Optimization
by Ahmed A. Al-Saedi, Veselka Boeva and Emiliano Casalicchio
Future Internet 2022, 14(12), 377; https://doi.org/10.3390/fi14120377 - 13 Dec 2022
Cited by 5 | Viewed by 1916
Abstract
Federated Learning (FL) provides a promising solution for preserving privacy in learning shared models on distributed devices without sharing local data on a central server. However, most existing work shows that FL incurs high communication costs. To address this challenge, we propose a clustering-based federated solution, entitled Federated Learning via Clustering Optimization (FedCO), which optimizes model aggregation and reduces communication costs. In order to reduce the communication costs, we first divide the participating workers into groups based on the similarity of their model parameters and then select only one representative, the best performing worker, from each group to communicate with the central server. Then, in each successive round, we apply the Silhouette validation technique to check whether each representative is still made tight with its current cluster. If not, the representative is either moved into a more appropriate cluster or forms a cluster singleton. Finally, we use split optimization to update and improve the whole clustering solution. The updated clustering is used to select new cluster representatives. In that way, the proposed FedCO approach updates clusters by repeatedly evaluating and splitting clusters if doing so is necessary to improve the workers’ partitioning. The potential of the proposed method is demonstrated on publicly available datasets and LEAF datasets under the IID and Non-IID data distribution settings. The experimental results indicate that our proposed FedCO approach is superior to the state-of-the-art FL approaches, i.e., FedAvg, FedProx, and CMFL, in reducing communication costs and achieving a better accuracy in both the IID and Non-IID cases. Full article
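A rough sketch of the clustering step is given below: flattened client model parameters are grouped with k-means, per-client silhouette scores indicate how well each client fits its cluster, and one representative per cluster is kept. The toy data, the use of scikit-learn's KMeans, and the representative-selection rule are illustrative, not the paper's exact procedure.

```python
# Cluster clients by parameter similarity and pick one representative per cluster.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_samples

rng = np.random.default_rng(0)
# 8 clients, parameters flattened to 20-dim vectors, drawn around two distinct modes.
params = np.vstack([rng.normal(0.0, 0.1, (4, 20)), rng.normal(1.0, 0.1, (4, 20))])

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(params)
scores = silhouette_samples(params, labels)  # how tightly each client sits in its cluster

representatives = {}
for cluster in np.unique(labels):
    members = np.where(labels == cluster)[0]
    representatives[int(cluster)] = int(members[np.argmax(scores[members])])

print(labels, representatives)  # only the representatives would talk to the central server
```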

Review


54 pages, 1251 KiB  
Review
Federated Learning for Intrusion Detection Systems in Internet of Vehicles: A General Taxonomy, Applications, and Future Directions
by Jadil Alsamiri and Khalid Alsubhi
Future Internet 2023, 15(12), 403; https://doi.org/10.3390/fi15120403 - 14 Dec 2023
Viewed by 6503
Abstract
In recent years, the Internet of Vehicles (IoV) has garnered significant attention from researchers and automotive industry professionals due to its expanding range of applications and services aimed at enhancing road safety and driver/passenger comfort. However, the massive amount of data spread across this network makes securing it challenging. The IoV network generates, collects, and processes vast amounts of valuable and sensitive data that intruders can manipulate. An intrusion detection system (IDS) is the most typical method to protect such networks. An IDS monitors activity on the road to detect any sign of a security threat and generates an alert if a security anomaly is detected. Applying machine learning methods to large datasets helps detect anomalies, which can be utilized to discover potential intrusions. However, traditional centralized learning algorithms require gathering data from end devices and centralizing it for training on a single device. Vehicle makers and owners may not readily share the sensitive data necessary for training the models. Granting a single device access to enormous volumes of personal information raises significant privacy concerns, as any system-related problems could result in massive data leaks. To alleviate these problems, more secure options, such as Federated Learning (FL), must be explored. A decentralized machine learning technique, FL allows model training on client devices while maintaining user data privacy. Although FL for IDS has made significant progress, to our knowledge, there has been no comprehensive survey specifically dedicated to exploring the applications of FL for IDS in the IoV environment, similar to successful systems research in deep learning. To address this gap, we undertake a well-organized literature review on IDSs based on FL in an IoV environment. We introduce a general taxonomy to describe the FL systems to ensure a coherent structure and guide future research. Additionally, we identify the relevant state of the art in FL-based intrusion detection within the IoV domain, covering the years from FL’s inception in 2016 through 2023. Finally, we identify challenges and future research directions based on the existing literature. Full article

25 pages, 813 KiB  
Review
Exploring Homomorphic Encryption and Differential Privacy Techniques towards Secure Federated Learning Paradigm
by Rezak Aziz, Soumya Banerjee, Samia Bouzefrane and Thinh Le Vinh
Future Internet 2023, 15(9), 310; https://doi.org/10.3390/fi15090310 - 13 Sep 2023
Cited by 4 | Viewed by 3214
Abstract
The trend of the next generation of the internet has already been scrutinized by top analytics enterprises. According to Gartner investigations, it is predicted that, by 2024, 75% of the global population will have their personal data covered under privacy regulations. This alarming statistic necessitates the orchestration of several security components to address the enormous challenges posed by federated and distributed learning environments. Federated learning (FL) is a promising technique that allows multiple parties to collaboratively train a model without sharing their data. However, even though FL is seen as a privacy-preserving distributed machine learning method, recent works have demonstrated that FL is vulnerable to some privacy attacks. Homomorphic encryption (HE) and differential privacy (DP) are two promising techniques that can be used to address these privacy concerns. HE allows secure computations on encrypted data, while DP provides strong privacy guarantees by adding noise to the data. This paper first presents consistent attacks on privacy in federated learning and then provides an overview of HE and DP techniques for secure federated learning in next-generation internet applications. It discusses the strengths and weaknesses of these techniques in different settings as described in the literature, with a particular focus on the trade-off between privacy and convergence, as well as the computation overheads involved. The objective of this paper is to analyze the challenges associated with each technique and identify potential opportunities and solutions for designing a more robust, privacy-preserving federated learning framework. Full article
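As a minimal illustration of the DP side of this survey, the sketch below clips a client's update to bound its sensitivity and adds Gaussian noise before it is shared. Calibrating the noise scale to a target (epsilon, delta) and the HE side (e.g., Paillier) are beyond this sketch; the constants below are arbitrary.

```python
# Clip-and-noise treatment of a local model update before it leaves the client.
import numpy as np


def privatize_update(update: np.ndarray, clip_norm: float = 1.0, sigma: float = 0.5,
                     seed: int = 0) -> np.ndarray:
    rng = np.random.default_rng(seed)
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / (norm + 1e-12))  # bound the update's sensitivity
    return clipped + rng.normal(0.0, sigma * clip_norm, size=update.shape)


raw = np.array([0.8, -2.3, 0.1, 1.7])
print(privatize_update(raw))  # a noisy, norm-bounded version of the local update
```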
