Joint Client Selection and CPU Frequency Control in Wireless Federated Learning Networks with Power Constraints

School of Electrical and Information Engineering, Zhengzhou University, Zhengzhou 450001, China
* Author to whom correspondence should be addressed.
Entropy 2023, 25(8), 1183; https://doi.org/10.3390/e25081183
Submission received: 30 June 2023 / Revised: 30 July 2023 / Accepted: 5 August 2023 / Published: 9 August 2023
(This article belongs to the Section Signal and Data Analysis)

Abstract

Federated learning (FL) is a distributed machine learning approach that eliminates the need to transmit privacy-sensitive local training samples. However, in wireless FL networks, resource heterogeneity introduces straggler clients that decelerate the learning process, and the non-independent and identically distributed (non-IID) nature of local training samples slows it further. Coupled with resource constraints during the learning process, these challenges make it imperative to optimize client selection and resource allocation. While numerous studies have made strides in this regard, few have considered the joint optimization of client selection and computational power (i.e., CPU frequency) for both the clients and the edge server in each global iteration. In this paper, we first define a cost function encompassing learning latency and non-IID characteristics. We then pose a joint client selection and CPU frequency control problem that minimizes the time-averaged cost function subject to long-term power constraints. Using Lyapunov optimization theory, the long-term optimization problem is transformed into a sequence of short-term problems. Finally, an algorithm is proposed to determine the optimal client selection decision and the corresponding optimal CPU frequencies of the selected clients and the server. Theoretical analysis provides performance guarantees, and our simulation results substantiate that the proposed algorithm outperforms comparative algorithms in terms of test accuracy while maintaining low power consumption.

1. Introduction

As the rapid expansion of Internet of Things (IoT) communication unfolds, an immense volume of data generated by massive machine-type devices circulates via wireless access technology. This advancement instigates the pervasive utilization of machine-learning-based applications in various aspects of people’s everyday life, including smart transportation and smart healthcare. In the traditional centralized learning paradigm, the raw data of each device are initially uploaded to the edge server via a wireless channel, facilitating aggregation for subsequent processing and analysis. This methodology, however, potentially leads to privacy data leakage. Consequently, the exploration of machine learning mechanisms that protect user data privacy is of paramount significance. FL was proposed as a solution to address the limitations of traditional centralized machine learning methods in ensuring user data privacy [1]. This approach permits each device to partake in the collaborative training of a shared model, negating the need to share the device’s proprietary data. Since its inception, federated learning has garnered considerable interest from both academia and industry, finding broad applications in fields such as mobile cloud computing [2], the industrial Internet of Things [3], and device-to-device communication [4].
Despite the significant advantages offered by federated learning, its application in wireless networks presents certain challenges. Firstly, the scarcity of wireless resources, such as channel bandwidth, limits the number of clients capable of participating in each learning iteration; client selection for each iteration therefore needs to be guided by the real-time state of the channel. Secondly, the computational power and power budgets of the devices and the edge server are finite, necessitating the optimization of computational power and power allocation throughout the multi-iteration learning process in FL, which is essential both for extending battery life and for advancing green communication. A third challenge arises from the latency incurred during the federated learning process, which constrains its use in latency-sensitive scenarios. This latency is determined by the time required for straggler clients to train and upload their local models in each learning iteration and by the time spent aggregating the models on the edge server, both of which are influenced by the transmission power and CPU frequencies in each learning iteration. Furthermore, the heterogeneity of client behavior and the variable dynamics of wireless environments may lead to the acquisition of non-independent and identically distributed (non-IID) training data [5]. Some clients may hold data that deviate significantly from independent and identically distributed (IID) training data, making it challenging for the model to generalize effectively across all clients.

1.1. Related Work

Ever since federated learning was proposed, extensive research effort has been devoted to improving its performance in wireless networks, primarily by designing appropriate client selection schemes [6,7] or optimizing resource allocation [8,9,10]. For instance, a client scheduling strategy based on channel and learning qualities was proposed in [7]. Ref. [9] investigated a joint CPU frequency and transmission power control strategy for all IoT clients to minimize energy consumption under latency requirements. The studies presented in [11,12,13] optimized client selection and resource allocation simultaneously, with the objective of improving federated learning performance. Specifically, Ref. [11] delved into a joint client selection and resource allocation problem, seeking to optimize the trade-off between the number of selected clients and the total energy consumption, while [12] focused on minimizing training loss under delay and energy consumption constraints. Additionally, Ref. [13] explored client selection and resource allocation under non-IID data distributions.
The aforementioned studies were formulated in the context of individual global iterations, overlooking the interdependence between different iterations. This neglects the cumulative learning effect over multiple iterations, potentially yielding less effective learning models and limiting overall system performance. Hence, the need for long-term optimization that accounts for the interconnectedness of global iterations becomes evident. Numerous research efforts [14,15,16,17,18] have targeted long-term optimization in federated learning, focusing on various aspects of the problem. For instance, Ref. [14] sought to optimize the client selection process in each learning round with the aim of minimizing training latency under fairness constraints. The study presented in [15] introduced a dynamic scheduling mechanism that optimizes the federated learning process, striking a balance between enhancing learning performance and reducing training latency. Ref. [16] focused on the optimization of radio transmission parameters and computation resources, attempting to minimize power consumption while upholding learning performance and latency constraints. Refs. [17,18] focused on client selection and bandwidth allocation under energy constraints in wireless FL networks. Specifically, the study in [17] aimed to maximize the weighted sum of selected clients, whereas [18] focused on minimizing a cost combining time and accuracy. However, none of these works jointly optimized the latency and the impact of the non-IID nature of data under long-term power constraints on both the clients and the server.

1.2. Contribution

In this paper, we consider a client selection and CPU frequency control problem in wireless FL networks. Different from the extant literature, our approach concurrently optimizes the selection of clients and CPU frequency for both clients and the edge server. The objective of the proposed problem is to minimize a predefined cost function, which incorporates latency and model robustness, under long-term power constraints. The main contributions of our work are as follows:
(1)
We develop a comprehensive framework for the long-term client selection and CPU frequency control problem, taking into account the interdependence of different global iterations and long-term power consumption constraints for both clients and the server. The aim is to expedite the learning process by incorporating client and server latency, as well as the effect of the non-IID distribution of local training samples.
(2)
Leveraging Lyapunov optimization theory, we transform the long-term problem into a set of per-iteration problems. We introduce an algorithm to tackle the per-iteration problem, accompanied by a theoretical performance guarantee.
(3)
We conduct extensive experiments, inclusive of several comparative experiments. Simulation results demonstrate that our proposed algorithm can yield superior test accuracy while maintaining low power consumption.
The remainder of this paper is structured as follows. Our proposed framework’s system model, along with the optimization problem formulation, is elucidated in Section 2. The solution via Lyapunov optimization theory is laid out in Section 3. Section 4 comprises the simulation results, showcasing the superiority of our proposed scheme. Finally, Section 5 concludes the paper and discusses future directions.

2. System Model and Problem Formulation

The proposed federated learning framework is shown in Figure 1. It consists of a set of clients $\mathcal{K} = \{1, \ldots, K\}$ and a server, with $K$ indicating the total number of clients. Each client $k \in \mathcal{K}$ possesses a local dataset $D_k = \{(x_i, y_i)\}_{i=1}^{d_k}$, wherein $x_i$ and $y_i$ denote the $i$-th sample of client $k$ and its associated ground-truth label, respectively, and $d_k$ stands for the dataset size, originating from $q_k$ label classes.

2.1. Learning Model

Assuming $T$ global iterations, we adopt $a_k^t = 1$ to represent the selection of client $k$ in global iteration $t = 0, \ldots, T-1$, with $a_k^t = 0$ otherwise. The client selection decisions are denoted by $a^t = (a_1^t, \ldots, a_K^t)$. The server aims to construct a global model by minimizing the following global loss function:
$$ w^* = \arg\min_{w} \frac{\sum_{k=1}^{K} d_k f_k(w)}{\sum_{k=1}^{K} d_k}, \tag{1} $$
where $f_k(w)$ is the local loss function at client $k$. For instance, the loss function for linear regression is given by:
$$ f_k(w) = \frac{1}{2}\left(x_i^{\mathrm{T}} w - y_i\right)^2. \tag{2} $$
The goal of the training process is to find the optimal model $w^*$ through iteration. A global iteration $t$ consists of four steps:
(1)
Each client shares its side information. Subsequently, the server selects a group of clients and broadcasts the current global model $w^t$ to them.
(2)
The selected clients execute a local iteration to update their local models $w_k^t$ based on their respective datasets.
(3)
The selected clients upload their newly updated local models to the server.
(4)
The server aggregates all the received local models to establish a new global model, as represented by $w^{t+1} = \sum_{k=1}^{K} (a_k^t w_k^t) \big/ \sum_{k=1}^{K} a_k^t$.
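To make step (4) concrete, a minimal PyTorch-style sketch of the aggregation is given below; the function and variable names are our own illustrative choices, not the authors' implementation:

```python
import torch

def aggregate(local_models, a):
    """Server aggregation, step (4): w^{t+1} = sum_k a_k^t w_k^t / sum_k a_k^t.

    local_models: list of model state_dicts, one per client k
    a:            list of selection decisions a_k^t in {0, 1}
    """
    selected = [m for m, a_k in zip(local_models, a) if a_k == 1]
    new_global = {}
    for name in selected[0]:
        # Equal-weight average of the selected clients' parameters.
        new_global[name] = torch.stack([m[name] for m in selected]).mean(dim=0)
    return new_global
```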

2.2. Power Consumption Model

In each global iteration, the selected clients engage in training and uploading models while the server aggregates the received models, and this process consumes power. We represent the overall CPU frequency control decisions of the clients as $f^t = (f_1^t, \ldots, f_K^t)$, where $f_k^t$ indicates the CPU frequency of client $k$ in global iteration $t$. Notably, if $a_k^t = 0$, then $f_k^t$ also equals zero. The computation power for training the model can be expressed as $P_k^{t,\mathrm{tr}} = \gamma_1 (f_k^t)^3 a_k^t$, where $\gamma_1$ denotes the capacitance coefficient of the clients [18]. Let $P_k^{t,\mathrm{up}} = p_k^t a_k^t$ denote the power spent uploading the model; the total power consumption of client $k$ during global iteration $t$ is then given by:
$$ P_k^t = P_k^{t,\mathrm{tr}} + P_k^{t,\mathrm{up}}. \tag{3} $$
On the other hand, the CPU frequency of the server during global iteration $t$ is represented by $f_r^t$, and the server’s capacitance coefficient is denoted as $\gamma_2$. Consequently, the power consumption of the server during global iteration $t$ can be formulated as follows:
$$ P_r^t = \gamma_2 (f_r^t)^3. \tag{4} $$
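A minimal sketch of Equations (3) and (4); the default capacitance coefficients follow the values later used in Section 4 and are a simplifying assumption here:

```python
def client_power(f_k, p_k, a_k, gamma1=1e-28):
    # Eq. (3): training power gamma1*(f_k^t)^3*a_k^t plus upload power p_k^t*a_k^t.
    return (gamma1 * f_k ** 3 + p_k) * a_k

def server_power(f_r, gamma2=1e-28):
    # Eq. (4): aggregation power gamma2*(f_r^t)^3.
    return gamma2 * f_r ** 3
```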

2.3. Latency Model

Let $m$ represent the number of local iterations in each global iteration, and $c_k$ the number of CPU cycles necessary to process one sample of client $k$. The local training latency of a selected client $k$ in global iteration $t$ can then be calculated as $\tau_k^{t,\mathrm{tr}} = m c_k d_k a_k^t / f_k^t$, which decreases linearly as the allocated local computing power $f_k^t$ increases.
When local training is finished, the selected clients upload their models to the server via orthogonal frequency-division multiple access (OFDMA). The total available bandwidth is denoted as $B$, and it is assumed that this bandwidth is equally allocated to the selected clients during global iteration $t$. Consequently, the bandwidth allocated to a selected client $k$ in global iteration $t$ can be represented as
$$ b^t = B \Big/ \sum_{k=1}^{K} a_k^t. \tag{5} $$
The model size is represented as $s$; therefore, the latency for model uploading is given by
$$ \tau_k^{t,\mathrm{up}} = \frac{s\, a_k^t}{b^t \log_2\!\left(1 + \dfrac{h_k^t p_k^t}{N_0 b^t}\right)}, \tag{6} $$
where $h_k^t$ denotes the channel gain between client $k$ and the server during global iteration $t$, which is assumed to be available at the transmitter side, and $N_0$ denotes the power spectral density of the noise. The total latency of client $k$ can be formulated as:
$$ \tau_k^t = \tau_k^{t,\mathrm{tr}} + \tau_k^{t,\mathrm{up}}. \tag{7} $$
At the server side, let $\tau_r^t$ denote the latency of the server in global iteration $t$, which can be written as:
$$ \tau_r^t = \frac{\phi \sum_{k=1}^{K} a_k^t}{f_r^t}, \tag{8} $$
where $\phi$ is the number of processing cycles required to carry out a single summation operation [16].
We assume that the server starts aggregating after receiving all the local models of the selected clients. Therefore, the learning latency of global iteration $t$ is bottlenecked by the straggler clients and can be derived as:
$$ \tau^t = \max_{k = 1, \ldots, K}(\tau_k^t) + \tau_r^t. \tag{9} $$
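Putting Equations (5)–(9) together, the per-iteration learning latency could be computed as in the following sketch; the default values mirror the experiment settings of Section 4, and the list-based data layout is an assumption on our part:

```python
import math

def iteration_latency(a, f, f_r, c, d, h, p, m=1, s=1e6, B=100e6,
                      N0=10 ** (-174 / 10) * 1e-3, phi=1e6):
    """Learning latency of one global iteration, Eq. (9).

    a, f, c, d, h, p: per-client lists (selection decision, CPU frequency,
    cycles/sample, dataset size, channel gain, upload power in W).
    Assumes at least one client is selected. N0 is -174 dBm/Hz in W/Hz.
    """
    n_sel = sum(a)
    b = B / n_sel                                    # Eq. (5): equal bandwidth split
    tau = []
    for k in range(len(a)):
        if a[k] == 0:
            continue
        tau_tr = m * c[k] * d[k] / f[k]              # local training latency
        rate = b * math.log2(1 + h[k] * p[k] / (N0 * b))
        tau_up = s / rate                            # Eq. (6): upload latency
        tau.append(tau_tr + tau_up)                  # Eq. (7): total client latency
    tau_r = phi * n_sel / f_r                        # Eq. (8): server aggregation
    return max(tau) + tau_r                          # Eq. (9): straggler-bound
```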

2.4. Cost Model

The non-IID nature of the data introduces biases into the training process, which significantly impacts the accuracy of FL. As noted in [13], a larger number of label classes tends to yield a more robust trained model, and the degree of non-IIDness decreases when clients possess more label classes. In this paper, we use the number of label classes $q_k$ to quantify the non-IID nature, with the aim of minimizing both the learning latency and the accuracy degradation caused by non-IID data. However, reducing the latter could potentially increase the former. Therefore, we propose a cost objective function $U^t$ to balance the two goals during global iteration $t$:
$$ U^t(a^t, f^t, f_r^t) = \tau^t - \mu \sum_{k=1}^{K} a_k^t q_k, \tag{10} $$
where $\mu$ is a price parameter that turns the label classes into a cost form [19].
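The cost of Equation (10) then combines the iteration latency above with a price-weighted label-class reward; a one-function sketch, with `mu` defaulted to the value used later in Section 4:

```python
def cost(tau_t, a, q, mu=1.6e-3):
    # Eq. (10): latency minus price-weighted label classes of selected clients.
    return tau_t - mu * sum(a_k * q_k for a_k, q_k in zip(a, q))
```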

2.5. Problem Formulation

From the aforementioned discussion, we consider an optimization problem that minimizes the time-averaged cost function through joint client selection and CPU frequency control as follows:
$$ \mathcal{P}1: \quad \min_{G^0, \ldots, G^{T-1}} \ \frac{1}{T} \sum_{t=0}^{T-1} U^t(a^t, f^t, f_r^t) \tag{11} $$
$$ \text{s.t.} \quad f_k^{\min} \le f_k^t \le f_k^{\max}, \ \forall k, t, \tag{12} $$
$$ f_r^{\min} \le f_r^t \le f_r^{\max}, \ \forall t, \tag{13} $$
$$ a_k^t \in \{0, 1\}, \ \forall k, t, \tag{14} $$
$$ \frac{1}{T} \sum_{t=0}^{T-1} P_k^t \le \bar{P}_k, \ \forall k, \tag{15} $$
$$ \frac{1}{T} \sum_{t=0}^{T-1} P_r^t \le \bar{P}_r, \tag{16} $$
where $G^t = (a^t, f^t, f_r^t)$ collects the optimization variables in global iteration $t$, $t = 0, 1, \ldots, T-1$. Constraints (12) and (13) specify the CPU frequency range of each client and of the server, respectively. Constraint (14) defines whether each client is selected or not. Constraint (15) guarantees that the average power consumption of each client $k$ is limited by $\bar{P}_k$, while constraint (16) guarantees that the average power consumption of the server is limited by $\bar{P}_r$. For clarity, in the following sections we succinctly refer to the cost function introduced in Equation (10) as $U^t$.

3. Problem Solution and Algorithm Design

A direct resolution of problem $\mathcal{P}1$ is not viable due to the time-averaged optimization objective and the long-term power constraints. Therefore, in this paper, problem $\mathcal{P}1$ is first transformed into a per-iteration problem by utilizing Lyapunov optimization theory. Subsequently, this per-iteration problem is decomposed into two distinct subproblems: a CPU frequency control problem, which assumes fixed client selection decisions, and a client selection problem that operates under the optimal CPU frequency setting.

3.1. Problem Transformation via Stochastic Optimization Theory

The resolution of problem $\mathcal{P}1$ necessitates comprehensive information, such as channel gains, pertaining to all $T$ global iterations. However, the unavailability of future information in the present moment presents a formidable challenge. To circumvent this issue, $\mathcal{P}1$ is converted into a series of subproblems whose solutions do not rely on knowledge of future iterations. This transformation is achieved through the application of Lyapunov optimization theory [20] and the introduction of virtual queue techniques. For each client, a virtual power deficit queue $Z_k^t$ is established, with the initial condition $Z_k^0 = 0$, and updated at the end of each global iteration as follows:
$$ Z_k^{t+1} = \max\{P_k^t - \bar{P}_k + Z_k^t,\ 0\}, \tag{17} $$
where $Z_k^t$ encapsulates the disparity between the power consumption and the long-term power constraint of client $k$ over the first $t$ iterations. A similar approach can be used to construct a virtual power deficit queue $Y_r^t$ for the server, as depicted:
$$ Y_r^{t+1} = \max\{P_r^t - \bar{P}_r + Y_r^t,\ 0\}. \tag{18} $$
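The queue updates (17) and (18) reduce to two lines of code; a sketch under the notation above:

```python
def update_queues(Z, Y, P, P_bar, P_r, P_r_bar):
    """Per-iteration virtual power deficit queue updates, Eqs. (17)-(18)."""
    Z_next = [max(P_k - Pb_k + Z_k, 0.0) for P_k, Pb_k, Z_k in zip(P, P_bar, Z)]
    Y_next = max(P_r - P_r_bar + Y, 0.0)
    return Z_next, Y_next
```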
To maintain the mean rate stability of the queues, we first establish a Lyapunov function of the following form:
$$ L(\Theta^t) = \frac{1}{2}\left[\sum_{k=1}^{K} (Z_k^t)^2 + (Y_r^t)^2\right], \tag{19} $$
where $\Theta^t$ collects all the virtual deficit queues. We then formulate the Lyapunov drift to measure the expected increase of $L(\Theta^t)$:
$$ \Delta(\Theta^t) = \mathbb{E}\left[L(\Theta^{t+1}) - L(\Theta^t) \mid \Theta^t\right]. \tag{20} $$
With the objective of restricting the growth of the virtual deficit queues while minimizing the cost function, the objective function is integrated into the Lyapunov drift. Consequently, the drift-plus-cost function is defined as follows:
$$ \Delta(\Theta^t) + V \mathbb{E}\left[U^t \mid \Theta^t\right], \tag{21} $$
where $V$ serves as a control parameter that balances the trade-off between minimizing the objective function and adhering to the power constraints. An inspection of (21) shows that it involves only the current iteration $t$, signifying that the original problem $\mathcal{P}1$ can be transformed into a real-time problem solved on a per-iteration basis. The application of Lyapunov optimization theory provides the following theorem regarding the upper bound of the drift-plus-cost function:
Theorem 1.
Assume $P_k^t \le P_k^{\max}$ for each client $k$ and $P_r^t \le P_r^{\max}$ for the server in every global iteration $t$. The drift-plus-cost function satisfies:
$$ \Delta(\Theta^t) + V \mathbb{E}[U^t \mid \Theta^t] \le C_1 + \sum_{k=1}^{K} Z_k^t\, \mathbb{E}[P_k^t - \bar{P}_k \mid \Theta^t] + Y_r^t\, \mathbb{E}[P_r^t - \bar{P}_r \mid \Theta^t] + V \mathbb{E}[U^t \mid \Theta^t], \tag{22} $$
where $C_1$ is a finite constant satisfying $C_1 \ge \frac{1}{2}\sum_{k=1}^{K}(P_k^{\max} - \bar{P}_k)^2 + \frac{1}{2}(P_r^{\max} - \bar{P}_r)^2$.
Proof. 
The proof is given in Appendix A. □
By minimizing the upper bound in Equation (22), virtual deficit queue stability is achieved concurrently with cost function minimization. Upon excluding all constant terms (i.e., $C_1$, $\bar{P}_k Z_k^t$, $\bar{P}_r Y_r^t$), problem $\mathcal{P}1$ can be transformed into a per-iteration problem $\mathcal{P}2$:
$$ \mathcal{P}2: \quad \min_{G^t} \ \sum_{k=1}^{K} P_k^t Z_k^t + P_r^t Y_r^t + V U^t \quad \text{s.t.} \ (12)\text{–}(14). $$

3.2. Problem Solution

To reduce complexity, $U^t$ in Equation (21) is substituted with the upper bound $\tilde{U}^t = \sum_{k=1}^{K} \tau_k^t + \tau_r^t - \mu \sum_{k=1}^{K} a_k^t q_k$, derivable through the application of $\max_{k = 1, \ldots, K}(\tau_k^t) \le \sum_{k=1}^{K} \tau_k^t$. Consequently, the resolution of $\mathcal{P}2$ can be reoriented towards the following problem:
$$ \mathcal{P}3: \quad \min_{G^t} \ \sum_{k=1}^{K} P_k^t Z_k^t + P_r^t Y_r^t + V \tilde{U}^t \quad \text{s.t.} \ (12)\text{–}(14). $$
Problem $\mathcal{P}3$ is a mixed-integer problem and poses a significant challenge for direct resolution. However, given any $a^t$, the objective function of $\mathcal{P}3$ becomes a convex function of the CPU frequencies of the selected clients and the server, i.e., $f_k^t$ and $f_r^t$. Consequently, the optimal CPU frequencies of the selected clients and the server can be efficiently obtained as
$$ (f_k^t)^* = \begin{cases} f_k^{\min}, & \text{if } \sqrt[4]{\dfrac{V m c_k d_k}{3 Z_k^t \gamma_1}} < f_k^{\min}, \\[6pt] f_k^{\max}, & \text{if } \sqrt[4]{\dfrac{V m c_k d_k}{3 Z_k^t \gamma_1}} > f_k^{\max}, \\[6pt] \sqrt[4]{\dfrac{V m c_k d_k}{3 Z_k^t \gamma_1}}, & \text{otherwise}, \end{cases} $$
and
$$ (f_r^t)^* = \begin{cases} f_r^{\min}, & \text{if } \sqrt[4]{\dfrac{V \phi \sum_{k=1}^{K} a_k^t}{3 Y_r^t \gamma_2}} < f_r^{\min}, \\[6pt] f_r^{\max}, & \text{if } \sqrt[4]{\dfrac{V \phi \sum_{k=1}^{K} a_k^t}{3 Y_r^t \gamma_2}} > f_r^{\max}, \\[6pt] \sqrt[4]{\dfrac{V \phi \sum_{k=1}^{K} a_k^t}{3 Y_r^t \gamma_2}}, & \text{otherwise}, \end{cases} $$
respectively.
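A sketch of these clamped fourth-root solutions follows; the guard for an empty deficit queue is our addition (there the unconstrained minimizer diverges, so the clamp returns the maximum frequency):

```python
def optimal_client_freq(V, m, c_k, d_k, Z_k, gamma1, f_min, f_max):
    """Client CPU frequency: the unconstrained minimizer
    (V*m*c_k*d_k / (3*Z_k*gamma1))^(1/4), clamped to [f_min, f_max]."""
    if Z_k == 0:
        return f_max  # empty deficit queue: the latency term dominates
    f_star = (V * m * c_k * d_k / (3 * Z_k * gamma1)) ** 0.25
    return min(max(f_star, f_min), f_max)

def optimal_server_freq(V, phi, n_selected, Y_r, gamma2, f_min, f_max):
    """Server CPU frequency, clamped analogously."""
    if Y_r == 0:
        return f_max
    f_star = (V * phi * n_selected / (3 * Y_r * gamma2)) ** 0.25
    return min(max(f_star, f_min), f_max)
```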
With the optimal CPU frequencies established, the objective of problem $\mathcal{P}2$ becomes a function of $a^t$ alone and can consequently be rewritten as:
$$ \mathcal{P}4: \quad \min_{a^t} \ \sum_{k=1}^{K} P_k^t Z_k^t + P_r^t Y_r^t + V U^t \quad \text{s.t.} \ f_k^t = (f_k^t)^*\ \forall k, \ f_r^t = (f_r^t)^*, \ (14). $$
A straightforward strategy to solve $\mathcal{P}4$ is to traverse all possible client selection scenarios and pick the scheme that minimizes the objective function. However, the complexity of this approach escalates rapidly with the total number of clients. Therefore, we introduce an efficient algorithm for $\mathcal{P}4$ in Algorithm 1. In the proposed algorithm, during each global iteration, clients with $I_k^t = P_k^t Z_k^t - V \mu q_k \le 0$ are included in the initial set $X_0^t$. Thereafter, considering that the learning latency is determined by the straggler clients, these $|X_0^t|$ clients are incorporated one by one into the auxiliary selection set $X_a^t$ in ascending order of their total latency $\tau_k^t$, thereby generating $|X_0^t|$ auxiliary selection sets; here $|\cdot|$ denotes the number of elements in a set. These $|X_0^t|$ auxiliary selection sets are accumulated in the client selection set $X^t$. We then compute the value of the objective function of $\mathcal{P}4$ for each auxiliary selection set in $X^t$ and select the optimal auxiliary selection set $(X_a^t)^*$ that minimizes it. With the proposed algorithm, only $|X_0^t|$ evaluations of the objective function are required per global iteration to attain the optimal solution, a significantly lower complexity than the exhaustive traversal method.
Algorithm 1 Client Selection Algorithm
1: Input: virtual queues $Z_k^t, \forall k$, and $Y_r^t$ (initialized as $Z_k^0 = 0$, $Y_r^0 = 0$)
2: Set $X_0^t = \emptyset$, $X_a^t = \emptyset$, $X^t = \emptyset$
3: for $k \in \mathcal{K}$ do
4:     Calculate $I_k^t = P_k^t Z_k^t - V \mu q_k$
5:     if $I_k^t \le 0$ then
6:         $X_0^t = X_0^t \cup \{k\}$
7:     end if
8: end for
9: Rank the clients in $X_0^t$ in ascending order of $\tau_k^t$, so that $\tau_1^t \le \tau_2^t \le \cdots \le \tau_{|X_0^t|}^t$
10: for $x \in X_0^t$ do
11:     Update $X_a^t = X_a^t \cup \{x\}$
12:     Add $X_a^t$ to $X^t$, i.e., $X^t = X^t \cup \{X_a^t\}$
13:     Calculate $J(X_a^t) = \sum_{k \in X_a^t} P_k^t Z_k^t + P_r^t Y_r^t + V U^t$
14: end for
15: Find $(X_a^t)^* = \arg\min_{X_a^t \in X^t} J(X_a^t)$
16: Return $(a^t)^*$, where $(a_k^t)^* = \mathbb{1}\{k \in (X_a^t)^*\}, \forall k$
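For illustration, a compact Python rendering of Algorithm 1 is given below; the callable `J` evaluating the objective of $\mathcal{P}4$, the list-based inputs, and the empty-set guard are our own assumptions rather than the authors' code:

```python
def select_clients(K, I, tau, J):
    """Algorithm 1: search over nested candidate sets ordered by latency.

    K:   total number of clients
    I:   list of indicators I_k^t = P_k^t Z_k^t - V*mu*q_k
    tau: list of total client latencies tau_k^t
    J:   callable mapping a candidate set (tuple of client ids) to the
         objective value of problem P4
    """
    # Steps 3-8: keep clients whose indicator is non-positive.
    X0 = [k for k in range(K) if I[k] <= 0]
    if not X0:
        return [0] * K  # no client qualifies this iteration (our guard)
    # Step 9: ascending order of total latency.
    X0.sort(key=lambda k: tau[k])
    # Steps 10-14: grow the auxiliary set one client at a time.
    candidates = [tuple(X0[:i + 1]) for i in range(len(X0))]
    # Steps 15-16: pick the candidate set minimizing the objective.
    best = min(candidates, key=J)
    return [1 if k in best else 0 for k in range(K)]
```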

3.3. Analysis of the Proposed Optimization Scheme’s Optimality

Given the trade-off between minimizing the time-averaged cost and reducing power consumption violations, the analysis of the proposed optimization strategy’s optimality is provided herein.
Theorem 2.
The average cost function satisfies:
$$ \frac{1}{T} \sum_{t=0}^{T-1} \mathbb{E}[U^t \mid \Theta^t] \le \frac{C_2}{V} + \varphi^*, $$
where $C_2 \ge C_1 + \sum_{k=1}^{K} Z_k^{t,\max} (P_k^{\max} - \bar{P}_k) + Y_r^{t,\max} (P_r^{\max} - \bar{P}_r)$, with $Z_k^{t,\max}$ and $Y_r^{t,\max}$ denoting the maximum backlogs of the corresponding virtual queues, and $\varphi^*$ is the optimal solution of problem $\mathcal{P}1$.
Proof. 
The proof is given in Appendix B. □
Theorem 3.
Assume $\mathbb{E}[U^t \mid \Theta^t] \ge \varphi^{\min}$. The total power consumption of each client $k$ and of the server over $T$ iterations is bounded by $T \bar{P}_k + \sqrt{2 T C_2 + 2 T V \varphi^* - 2 T V \varphi^{\min}}$ and $T \bar{P}_r + \sqrt{2 T C_2 + 2 T V \varphi^* - 2 T V \varphi^{\min}}$, respectively.
Proof. 
The proof is given in Appendix C. □
Theorem 2 elucidates that the discrepancy between the objective value yielded by the proposed algorithm and the original optimal value is at most $O(1/V)$. This suggests that the cost attained by the proposed optimization scheme can approximate the original optimal value to an arbitrary degree through augmentation of the control parameter $V$. In accordance with Theorem 3, the energy deficit queues of all clients and the server adhere to an upper limit of $O(V)$ at any iteration, a limit that escalates with the control parameter $V$. Nonetheless, an excessively large value of $V$ may result in an unduly large upper bound on the virtual power deficit queue backlog, which could lead to power consumption surpassing the power budget. In summary, the proposed algorithm delivers an $[O(1/V), O(V)]$ trade-off between cost and power consumption, a balance that can be managed by adjusting the parameter $V$.
Theorem 4.
The virtual queue of each client $k$ and that of the server satisfy:
$$ \lim_{T \to \infty} \frac{Z_k^T}{T} = 0, \qquad \lim_{T \to \infty} \frac{Y_r^T}{T} = 0. $$
Proof. 
The proof is given in Appendix D. □
Theorem 4 indicates that the virtual power deficit queue backlogs grow sublinearly as the number of global iterations approaches infinity, i.e., all virtual queues remain mean rate stable across the FL iterations.

4. Experiment Result and Analysis

4.1. Experiment Settings

In the conducted experiment, FL was implemented using PyTorch, considering a system setup in which $K$ clients are randomly positioned within a circular area of 500 m radius centered at the server. The path loss model is $128.1 + 37.6 \log_{10} i + \psi$ (in dB), where $i$ represents the distance between a client and the server in kilometers, and $\psi$ is a Gaussian random variable with a variance of 8 dB. The total bandwidth $B$ is set to 100 MHz, with the noise power spectral density $N_0 = -174$ dBm/Hz.
The power used for uploading the local model is randomly assigned between 10 and 20 dBm. The model size $s$ is set to 1 Mbit. For all clients, the number of local iterations per global iteration is $m = 1$. The number of CPU cycles necessary to process a sample is randomly distributed within $[1, 3] \times 10^4$ cycles/sample across clients. The average power constraints are set to $\bar{P}_k = 100$ mW and $\bar{P}_r = 500$ mW. The control parameter $V$ is assigned the value of 10, with a justification provided later. The CPU frequency ranges of the clients and the central server, $f_k^t$ and $f_r^t$, span 0.1–2.5 GHz and 0.1–3.3 GHz, respectively. Furthermore, the capacitance coefficients of the clients and the server, the price parameter of the cost function, and the number of CPU cycles needed to perform a single summation are set to $\gamma_1 = \gamma_2 = 10^{-28}$, $\mu = 1.6 \times 10^{-3}$, and $\phi = 10^6$, respectively.
The MNIST dataset [21] was employed for the experiment, consisting of 60,000 training samples and 10,000 test samples with 10 label classes (digits 0–9). Each client’s local dataset was assembled by randomly selecting one or two label classes from the MNIST dataset, with $d_k = 100$ samples. A multi-layer perceptron (MLP) model with a single hidden layer of 64 nodes was utilized, with ReLU as the activation function. The learning rate was set to 0.01 and the batch size to 10.
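The training setup described above corresponds to a small PyTorch model along the following lines; the input dimension of 784 (flattened 28 × 28 MNIST images) follows the dataset, while the exact layer composition is our reading of the text rather than the authors' released code:

```python
import torch.nn as nn
import torch.optim as optim

class MLP(nn.Module):
    """Single-hidden-layer MLP: 784 -> 64 (ReLU) -> 10, as in Section 4.1."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),        # 28x28 MNIST image -> 784-dim vector
            nn.Linear(784, 64),
            nn.ReLU(),
            nn.Linear(64, 10),
        )

    def forward(self, x):
        return self.net(x)

model = MLP()
optimizer = optim.SGD(model.parameters(), lr=0.01)  # learning rate from the paper
```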
To demonstrate the advantage of our proposed algorithm, we introduce the following three algorithms as comparison benchmarks:
  • Selected All: In this algorithm, all the clients are selected in each global iteration. The CPU frequency for both the clients and the central server is consistently set at their maximum values in every global iteration.
  • Greedy: For a fair comparison with our proposed algorithm, the long-term average number of clients selected per round is tuned to match that of our proposed algorithm. To this end, we establish a client-selection latency threshold: clients are chosen one by one in ascending order of their total latency $\tau_k^t$ until the learning latency $\tau^t$ surpasses the preset threshold. Furthermore, subject to the CPU frequency constraints, all participating clients and the server maintain a constant power level identical to the long-term power constraint.
  • Random: In this comparative algorithm, clients are randomly selected in each round. The number of clients selected is maintained at a constant value, which is equal to the average number per round in our proposed algorithm. Aside from this variation, all other configurations align with those of the Greedy algorithm.

4.2. Analysis of Experimental Results

Conceptually, reducing the time required for each global iteration and mitigating the impact of the non-IID nature on the model convergence speed enables the training model to reach a given accuracy more rapidly within a fixed learning time. Figure 2 shows how the test accuracy of our proposed algorithm and the comparative algorithms varies with the learning time for $K = 100$ clients. The proposed algorithm exhibits a convergence speed almost equivalent to that of the Selected All algorithm. Although all clients participate in each global iteration under Selected All, fostering swift convergence, the effects of the non-IID nature of each client’s dataset cannot be negated, which undermines its performance. Conversely, in our proposed scheme, clients with more label classes in their datasets inherently tend to have a higher selection priority, while the mean rate stability of the virtual queues ensures fairness for clients with fewer label classes. Our proposed scheme significantly outperforms both the Greedy and Random algorithms. The Greedy algorithm mitigates the impact of straggler clients but does not address the influence of the non-IID nature. Since neither the non-IID nature nor straggler clients are taken into account, the convergence speed of the Random algorithm is impeded.
Figure 3 shows the corresponding average power consumption on the client side and the server side in each global iteration. The Selected All algorithm exhibits substantially larger power consumption than the other methods, primarily because it lacks a power constraint. In contrast, owing to the mean rate stability of the virtual queues established in Theorem 4, the power consumption under our proposed algorithm adheres to the long-term power constraint. Notably, the average power consumption under our proposed algorithm is close to that of the Greedy and Random algorithms.
To further validate the proposed optimization scheme, its performance is examined under a varying total number of clients $K$, as depicted in Figure 4 and Table 1. In this experiment, we report the average test accuracy over the concluding half second of a 30-second learning time. As the total number of clients increases, the average number of clients selected per iteration also increases, which, under normal circumstances, should enhance test accuracy. Nonetheless, a larger number of clients may lengthen each iteration’s training duration; as a consequence, the number of iterations that can be carried out within a fixed time may decrease, potentially reducing accuracy. Hence, the test accuracy does not bear a linear relationship to the total number of clients, as can be observed from Figure 4. Nevertheless, our proposed algorithm continues to surpass the comparative algorithms, as it adeptly manages non-IID characteristics and straggler clients. Furthermore, our proposed algorithm consistently maintains low power consumption as the total number of clients increases. This finding aligns with our previous analysis, demonstrating that the virtual power deficit queues are mean rate stable under our proposed algorithm.
Figure 5 depicts the variation of the average power consumption and the average cost with the control parameter $V$ in our proposed optimization scheme. A clear observation is that the average cost decreases while the average power consumption increases as $V$ grows. This aligns with the $[O(1/V), O(V)]$ cost–power trade-off indicated by Theorems 2 and 3. Figure 6 illustrates the variation of the optimal average number of selected clients per iteration with the control parameter $V$. As previously stated, a control parameter value of $V = 10$ was selected for the experiment, since it yields an appropriate average number of selected clients to effectively counter the accuracy degradation caused by non-IID characteristics, along with a low average cost and power consumption.

5. Discussion

In this paper, we explored a problem involving the selection of clients and the concurrent control of the CPU frequencies of both the selected clients and the server in wireless FL networks. Lyapunov optimization theory was applied to transform the original problem into a per-iteration problem, which facilitated the design of an algorithm for its resolution. Theoretical analysis offers performance guarantees, wherein controlling the parameter $V$ allows the cost to be reduced while keeping power consumption low. Simulation results demonstrated that the proposed algorithm outperforms the benchmark algorithms in terms of test accuracy by mitigating the impact of non-IID characteristics and straggler clients. By managing the virtual queues, the proposed algorithm was able to adhere to the long-term power constraints. Furthermore, the simulation results verified that our proposed algorithm realizes the $[O(1/V), O(V)]$ cost–power trade-off.
It is noteworthy that this study is currently confined to a simple star network topology. Expanding our analysis to encompass more intricate network structures such as hierarchical networks and multi-base station networks would undoubtedly enhance its applicability. Additionally, in practical wireless networks, client participation in learning can be affected by factors such as mobility, network congestion, or power availability fluctuations, potentially leading to client dropouts from the FL process and thereby impacting overall learning performance. Hence, the implications of client dropouts merit further investigation.

Author Contributions

Conceptualization, Z.Z.; methodology, Z.Z., Y.L. and F.W.; software, Z.Z. and S.S.; validation, Z.Z., Y.L. and S.S.; formal analysis, Z.Z., Y.L. and F.W.; investigation, Z.Z., Y.L. and Y.Z.; writing—original draft preparation, Z.Z.; writing—review and editing, Y.L. and F.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded in part by the National Natural Science Foundation of China under Grant 61801433 and Grant 62101505, and in part by the Science and Technology Planning Project of Henan Province under Grant 222102210003 and Grant 222102210278.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
FL        Federated learning
IoT       Internet of Things
non-IID   Non-independent and identically distributed
IID       Independent and identically distributed
OFDMA     Orthogonal frequency-division multiple access
MLP       Multi-layer perceptron
ReLU      Rectified linear unit

Appendix A

Since $\Delta(\Theta^t)$ plays a crucial role in the proof, we first bound $\Delta(\Theta^t)$. Plugging (17)–(19) into $\Delta(\Theta^t)$, we have:
$$ \begin{aligned} \Delta(\Theta^t) &= \frac{1}{2} \mathbb{E}\left[ \sum_{k=1}^{K} (Z_k^{t+1})^2 + (Y_r^{t+1})^2 - \sum_{k=1}^{K} (Z_k^t)^2 - (Y_r^t)^2 \,\Big|\, \Theta^t \right] \\ &= \frac{1}{2} \mathbb{E}\left[ \sum_{k=1}^{K} \big(\max\{P_k^t - \bar{P}_k + Z_k^t, 0\}\big)^2 - \sum_{k=1}^{K} (Z_k^t)^2 + \big(\max\{P_r^t - \bar{P}_r + Y_r^t, 0\}\big)^2 - (Y_r^t)^2 \,\Big|\, \Theta^t \right] \\ &\le \frac{1}{2} \mathbb{E}\left[ \sum_{k=1}^{K} (P_k^t - \bar{P}_k + Z_k^t)^2 - \sum_{k=1}^{K} (Z_k^t)^2 + (P_r^t - \bar{P}_r + Y_r^t)^2 - (Y_r^t)^2 \,\Big|\, \Theta^t \right] \\ &\le C_1 + \sum_{k=1}^{K} Z_k^t\, \mathbb{E}[P_k^t - \bar{P}_k \mid \Theta^t] + Y_r^t\, \mathbb{E}[P_r^t - \bar{P}_r \mid \Theta^t], \end{aligned} \tag{A1} $$
where the first inequality is due to $(\max\{a, 0\})^2 \le a^2$. Thus, we have:
$$ \Delta(\Theta^t) + V \mathbb{E}[U^t \mid \Theta^t] \le C_1 + \sum_{k=1}^{K} Z_k^t\, \mathbb{E}[P_k^t - \bar{P}_k \mid \Theta^t] + Y_r^t\, \mathbb{E}[P_r^t - \bar{P}_r \mid \Theta^t] + V \mathbb{E}[U^t \mid \Theta^t]. \tag{A2} $$
This concludes the proof.

Appendix B

According to Appendix A, we have:
$$ \Delta(\Theta^t) + V \mathbb{E}[U^t \mid \Theta^t] \le C_1 + \sum_{k=1}^{K} Z_k^{t,\max}\, \mathbb{E}[P_k^{\max} - \bar{P}_k \mid \Theta^t] + Y_r^{t,\max}\, \mathbb{E}[P_r^{\max} - \bar{P}_r \mid \Theta^t] + V \varphi^* \le C_2 + V \varphi^*. \tag{A3} $$
Summing (A3) over $T$ global iterations, we obtain:
$$ \sum_{t=0}^{T-1} V \mathbb{E}[U^t \mid \Theta^t] \le T C_2 + T V \varphi^* - \sum_{t=0}^{T-1} \Delta(\Theta^t). \tag{A4} $$
Next, we sum $\Delta(\Theta^t)$ over the $T$ global iterations. The following telescoping identity can be derived from (20):
$$ \sum_{t=0}^{T-1} \Delta(\Theta^t) = L(\Theta^T) - L(\Theta^0). \tag{A5} $$
Plugging (A5) into (A4), we have:
$$ \sum_{t=0}^{T-1} V \mathbb{E}[U^t \mid \Theta^t] \le T C_2 + T V \varphi^* - L(\Theta^T) + L(\Theta^0). \tag{A6} $$
Dividing by $TV$ and using the facts that $L(\Theta^T) \ge 0$ and $L(\Theta^0) = 0$, we have:
$$ \frac{1}{T} \sum_{t=0}^{T-1} \mathbb{E}[U^t \mid \Theta^t] \le \frac{C_2}{V} + \varphi^*. \tag{A7} $$
This concludes the proof.

Appendix C

Rearranging (A6), we obtain:
$$ L(\Theta^T) - L(\Theta^0) \le T C_2 + T V \varphi^* - \sum_{t=0}^{T-1} V \mathbb{E}[U^t \mid \Theta^t]. \tag{A8} $$
Substituting $\mathbb{E}[U^t \mid \Theta^t] \ge \varphi^{\min}$ into (A8), we have:
$$ L(\Theta^T) \le T C_2 + T V \varphi^* - T V \varphi^{\min}. \tag{A9} $$
Due to (19), we obtain:
$$ (Z_k^T)^2 + (Y_r^T)^2 \le \sum_{k=1}^{K} (Z_k^T)^2 + (Y_r^T)^2 \le 2 T C_2 + 2 T V \varphi^* - 2 T V \varphi^{\min}. \tag{A10} $$
Thus, we have:
$$ (Z_k^T)^2 \le 2 T C_2 + 2 T V \varphi^* - 2 T V \varphi^{\min}, \qquad (Y_r^T)^2 \le 2 T C_2 + 2 T V \varphi^* - 2 T V \varphi^{\min}. \tag{A11} $$
Next, we bound the virtual power deficit queues. From (17) and (18), we have:
$$ Z_k^{t+1} = Z_k^t - \min\{\bar{P}_k - P_k^t,\ Z_k^t\}, \qquad Y_r^{t+1} = Y_r^t - \min\{\bar{P}_r - P_r^t,\ Y_r^t\}. \tag{A12} $$
Rearranging terms and summing both sides over $t$ global iterations, we have:
$$ Z_k^t - Z_k^0 = -\sum_{\tau=0}^{t-1} \min\{\bar{P}_k - P_k^\tau,\ Z_k^\tau\} \ge \sum_{\tau=0}^{t-1} (P_k^\tau - \bar{P}_k) = \sum_{\tau=0}^{t-1} P_k^\tau - t \bar{P}_k, \tag{A13} $$
$$ Y_r^t - Y_r^0 = -\sum_{\tau=0}^{t-1} \min\{\bar{P}_r - P_r^\tau,\ Y_r^\tau\} \ge \sum_{\tau=0}^{t-1} (P_r^\tau - \bar{P}_r) = \sum_{\tau=0}^{t-1} P_r^\tau - t \bar{P}_r. \tag{A14} $$
Thus, we have:
$$ Z_k^T \ge \sum_{\tau=0}^{T-1} P_k^\tau - T \bar{P}_k, \qquad Y_r^T \ge \sum_{\tau=0}^{T-1} P_r^\tau - T \bar{P}_r. \tag{A15} $$
Thus far, we have bounded the virtual power deficit queues. Plugging (A15) into (A11), we have:
$$ \sum_{\tau=0}^{T-1} P_k^\tau \le T \bar{P}_k + \sqrt{2 T C_2 + 2 T V \varphi^* - 2 T V \varphi^{\min}}, \tag{A16} $$
$$ \sum_{\tau=0}^{T-1} P_r^\tau \le T \bar{P}_r + \sqrt{2 T C_2 + 2 T V \varphi^* - 2 T V \varphi^{\min}}. \tag{A17} $$
This concludes the proof.

Appendix D

Rearranging terms of (A11), we have:
$$ Z_k^T \le \sqrt{2 T C_2 + 2 T V \varphi^* - 2 T V \varphi^{\min}}, \qquad Y_r^T \le \sqrt{2 T C_2 + 2 T V \varphi^* - 2 T V \varphi^{\min}}. \tag{A18} $$
Dividing by $T$ and taking limits on both sides, we have:
$$ \lim_{T \to \infty} \frac{Z_k^T}{T} = 0, \qquad \lim_{T \to \infty} \frac{Y_r^T}{T} = 0. \tag{A19} $$
This concludes the proof.

References

  1. McMahan, B.; Moore, E.; Ramage, D.; Hampson, S.; y Arcas, B.A. Communication-efficient learning of deep networks from decentralized data. In Proceedings of the Artificial Intelligence and Statistics, Fort Lauderdale, FL, USA, 20–22 April 2017.
  2. Wang, S.; Tuor, T.; Salonidis, T.; Leung, K.K.; Makaya, C.; He, T.; Chan, K. Adaptive federated learning in resource constrained edge computing systems. IEEE J. Sel. Areas Commun. 2019, 37, 1205–1221.
  3. Gao, W.; Zhao, Z.; Min, G.; Ni, Q.; Jiang, Y. Resource allocation for latency-aware federated learning in industrial internet of things. IEEE Trans. Ind. Inform. 2021, 17, 8505–8513.
  4. Hu, R.; Guo, Y.; Gong, Y. Energy-efficient distributed machine learning at wireless edge with device-to-device communication. In Proceedings of the ICC 2022—IEEE International Conference on Communications, Seoul, Republic of Korea, 16–20 May 2022.
  5. Zhao, Z.; Feng, C.; Hong, W.; Jiang, J.; Jia, C.; Quek, T.Q.; Peng, M. Federated learning with non-IID data in wireless networks. IEEE Trans. Wirel. Commun. 2021, 21, 1927–1942.
  6. Nishio, T.; Yonetani, R. Client selection for federated learning with heterogeneous resources in mobile edge. In Proceedings of the ICC 2019—IEEE International Conference on Communications (ICC), Shanghai, China, 20–24 May 2019.
  7. Leng, J.; Lin, Z.; Ding, M.; Wang, P.; Smith, D.; Vucetic, B. Client scheduling in wireless federated learning based on channel and learning qualities. IEEE Wirel. Commun. Lett. 2022, 11, 732–735.
  8. Yang, Z.; Chen, M.; Saad, W.; Hong, C.S.; Shikh-Bahaei, M. Energy efficient federated learning over wireless communication networks. IEEE Trans. Wirel. Commun. 2020, 20, 1935–1949.
  9. Yao, J.; Ansari, N. Enhancing federated learning in fog-aided IoT by CPU frequency and wireless power control. IEEE Internet Things J. 2020, 8, 3438–3445.
  10. Zhao, Z.; Xia, J.; Fan, L.; Lei, X.; Karagiannidis, G.K.; Nallanathan, A. System optimization of federated learning networks with a constrained latency. IEEE Trans. Veh. Technol. 2021, 71, 1095–1100.
  11. Yu, L.; Albelaihi, R.; Sun, X.; Ansari, N.; Devetsikiotis, M. Jointly optimizing client selection and resource management in wireless federated learning for internet of things. IEEE Internet Things J. 2021, 9, 4385–4395.
  12. Chen, M.; Yang, Z.; Saad, W.; Yin, C.; Poor, H.V.; Cui, S. A joint learning and communications framework for federated learning over wireless networks. IEEE Trans. Wirel. Commun. 2020, 20, 269–283.
  13. Liu, S.; Yu, G.; Chen, X.; Bennis, M. Joint user association and resource allocation for wireless hierarchical federated learning with IID and non-IID data. IEEE Trans. Wirel. Commun. 2022, 21, 7852–7866.
  14. Huang, T.; Lin, W.; Wu, W.; He, L.; Li, K.; Zomaya, A.Y. An efficiency-boosting client selection scheme for federated learning with fairness guarantee. IEEE Trans. Parallel Distrib. Syst. 2020, 32, 1552–1564.
  15. Guo, K.; Chen, Z.; Yang, H.H.; Quek, T.Q. Dynamic scheduling for heterogeneous federated learning in private 5G edge networks. IEEE J. Sel. Top. Signal Process. 2021, 16, 26–40.
  16. Battiloro, C.; Di Lorenzo, P.; Merluzzi, M.; Barbarossa, S. Lyapunov-based optimization of edge resources for energy-efficient adaptive federated learning. IEEE Trans. Green Commun. Netw. 2022, 7, 265–280.
  17. Xu, J.; Wang, H. Client selection and bandwidth allocation in wireless federated learning networks: A long-term perspective. IEEE Trans. Wirel. Commun. 2020, 20, 1188–1200.
  18. Ji, Y.; Kou, Z.; Zhong, X.; Li, H.; Yang, F.; Zhang, S. Client selection and bandwidth allocation for federated learning: An online optimization perspective. In Proceedings of the GLOBECOM 2022—IEEE Global Communications Conference, Rio de Janeiro, Brazil, 4–8 December 2022.
  19. Ji, X.; Tian, J.; Zhang, H.; Wu, D.; Li, T. Joint device selection and bandwidth allocation for cost-efficient federated learning in industrial internet of things. IEEE Internet Things J. 2023, 10, 9148–9160.
  20. Neely, M.J. Stochastic network optimization with application to communication and queueing systems. Synth. Lect. Commun. Netw. 2010, 3, 1–211.
  21. LeCun, Y.; Bottou, L.; Bengio, Y.; Haffner, P. Gradient-based learning applied to document recognition. Proc. IEEE 1998, 86, 2278–2324.
Figure 1. Federated learning framework in wireless networks.
Figure 2. Test accuracy versus learning latency with the number of clients $K = 100$.
Figure 3. Average power consumption of each client and the server. (a) Each client. (b) Server.
Figure 4. Average test accuracy versus total number of clients.
Figure 5. The impact of $V$. (a) Average power consumption of the clients and average cost versus control parameter $V$. (b) Average power consumption of the server and average cost versus control parameter $V$.
Figure 6. Average number of selected clients versus control parameter $V$.
Table 1. Average power consumption of clients and the server versus total number of clients.

| Average Power Consumption | Total Number of Clients | Proposed | Selected All | Greedy | Random |
|---|---|---|---|---|---|
| Clients | 70 | 7016.21 mW | 113,212.66 mW | 5616.25 mW | 5600.00 mW |
| | 80 | 8020.03 mW | 129,386.61 mW | 6401.84 mW | 6400.00 mW |
| | 90 | 9033.43 mW | 145,559.47 mW | 7229.53 mW | 7200.01 mW |
| | 100 | 10,036.40 mW | 161,733.26 mW | 7743.76 mW | 7800.00 mW |
| | 110 | 11,030.69 mW | 177,907.72 mW | 8439.06 mW | 8400.01 mW |
| | 120 | 12,053.63 mW | 194,080.37 mW | 9336.41 mW | 9300.01 mW |
| | 130 | 13,047.53 mW | 210,254.14 mW | 9940.70 mW | 9900.01 mW |
| Server | 70 | 499.86 mW | 3593.70 mW | 500.00 mW | 500.00 mW |
| | 80 | 499.94 mW | 3593.70 mW | 500.00 mW | 500.00 mW |
| | 90 | 499.94 mW | 3593.70 mW | 500.00 mW | 500.00 mW |
| | 100 | 499.99 mW | 3593.70 mW | 500.00 mW | 500.00 mW |
| | 110 | 500.03 mW | 3593.70 mW | 500.00 mW | 500.00 mW |
| | 120 | 500.04 mW | 3593.70 mW | 500.00 mW | 500.00 mW |
| | 130 | 500.24 mW | 3593.70 mW | 500.00 mW | 500.00 mW |
