Article

Event-Triggered Optimal Consensus of Heterogeneous Nonlinear Multi-Agent Systems

1 Institute of Machine Intelligence, University of Shanghai for Science and Technology, Shanghai 200093, China
2 Department of Control Science and Engineering, University of Shanghai for Science and Technology, Shanghai 200093, China
* Author to whom correspondence should be addressed.
Mathematics 2022, 10(23), 4622; https://doi.org/10.3390/math10234622
Submission received: 6 November 2022 / Revised: 5 December 2022 / Accepted: 5 December 2022 / Published: 6 December 2022
(This article belongs to the Section Dynamical Systems)

Abstract: This paper addresses the optimal consensus problem for a general class of heterogeneous nonlinear multi-agent systems. A novel filter is proposed for each agent by integrating local gradients with neighboring output information. Using this filter and introducing an appropriate auxiliary variable, an event-triggered control algorithm is obtained within the framework of prescribed performance control. A remarkable property of the proposed algorithm is that it saves resources by updating control signals only when necessary, rather than periodically, while still achieving optimal consensus. The exclusion of Zeno behavior is established theoretically and verified in simulation. Guidelines are also provided for selecting control parameters so that the residual errors can be kept as small as desired.

1. Introduction

Owing to broad applications in quadrotors, mobile robots, and resource allocation over networks, research on distributed control of multi-agent systems has grown extensively over the past two decades. Consensus control, a critical distributed feature of networks in which all agents use only local perception to reach a common state, is now a subject of active research [1,2]. Consensus problems for multi-agent systems have yielded many interesting results, ranging from simple integrators to complex nonlinear dynamics and from fixed communication topologies to switched topologies; see [3,4,5,6,7] and the references therein. In practice, resource allocation in computer networks and collision avoidance in multi-robot systems can be modeled as distributed optimization problems. Recently, so-called optimal consensus techniques have been proposed, in which consensus problems and distributed optimization problems are solved simultaneously [8].
By combining set-valued stability analysis with convex analysis, continuous-time distributed optimization of sums of convex functions was realized under a weight-balanced digraph in [9]. By requiring the local cost functions to be twice differentiable, an average consensus strategy was presented in [10] such that each agent asymptotically tends towards the minimum of the global cost function. This requirement was removed in [11], where gradient-based distributed algorithms were proposed under both directed and undirected graphs. A common feature of the above works [9,10,11] is that the inherent nonlinearities and possible disturbances of the agents are not considered. In the presence of external disturbances, an internal-model-based approach was developed to handle dynamic optimization problems in [12]. The disturbances must satisfy the matching condition, meaning that the disturbance can only enter through the same equation as the control input. These requirements limit the application of the developed methods to a relatively specific class of systems. Recently, by applying three types of disturbance estimators, distributed optimization strategies were extended to second-order systems with mismatched and matched disturbances in [13]. In practice, this extension, which considers only second-order systems and time-dependent perturbations, may not be sufficient owing to ubiquitous higher-order dynamics and state-dependent uncertainty. With few exceptions, however, distributed optimal consensus control of more general uncertain higher-order nonlinear systems has received little attention. By constructing an optimal consensus proportional-integral variable, optimal consensus problems of pure-feedback systems were solved in [14]. Most of the above results rely on the classic time-triggered control paradigm, in which the control signal is updated periodically even when the system is performing well. This can waste computing and communication resources, as remarked in [15,16].
In this work, we study the optimal consensus problem for nonlinear systems, focusing on saving resources by updating control signals only when necessary rather than periodically. In particular, a more general heterogeneous multi-agent system is investigated, in which mismatched uncertainties and possibly nonidentical dynamical orders across agents are considered. A novel filter is proposed for each agent that combines local gradients with neighboring output information. Subsequently, we complete the optimal consensus control law design by introducing an event-triggered condition and adopting the backstepping design procedure. Using the integrating factor method and a Lyapunov function, we rigorously demonstrate that all agents achieve approximate optimal consensus under the proposed protocol and that Zeno behavior is ruled out. The feasibility of the algorithm is demonstrated on a group of single-link manipulators. Compared with existing results, the main contributions are as follows:
  • Different from the classical time-triggered setting, the proposed event-triggered method significantly reduces unnecessary control input updates while ensuring approximate optimal consensus.
  • In contrast to studies of first- and second-order multi-agent systems, where external disturbances are assumed to be bounded a priori, we study more general nonlinear dynamics in which the uncertainty is allowed to grow with the agent states and higher-order heterogeneous dynamics are involved.
The rest of this manuscript is organized as follows. In Section 2, the preliminary knowledge and the optimal consensus problem to be addressed are introduced. Section 3 introduces our novel filter system and event-triggered control algorithm. Section 4 illustrates simulation results of a group of manipulators, while Section 5 draws conclusions and discusses recommendations and outlook.
Notation: Let $\mathbf{0}_\ell$ and $\mathbf{1}_\ell$ denote, respectively, the vectors of zeros and ones of length $\ell$. For a function $\phi(t)$, we write $\phi \in \mathcal{L}_\infty[0, t_f)$ if $\sup_{0 \le t < t_f} |\phi(t)| < \infty$.

2. Problem Formulation and Preliminaries

2.1. Preliminaries

Let $\mathcal{G} = \{\mathcal{V}, \mathcal{E}, \mathcal{A}\}$ represent the graph describing the communication topology, where $\mathcal{V} = \{1, \ldots, n\}$, $\mathcal{E} \subseteq \mathcal{V} \times \mathcal{V}$, and $\mathcal{A} = [a_{ij}] \in \mathbb{R}^{n \times n}$ denote the node set, edge set, and adjacency matrix, respectively. An edge $(i, j) \in \mathcal{E}$ indicates that node $j$ can obtain information from node $i$, in which case node $i$ is a neighbor of node $j$; for an undirected graph, $(i, j) \in \mathcal{E}$ implies $(j, i) \in \mathcal{E}$. The set of all neighbors of node $i$ is denoted by $N_i$. An undirected graph $\mathcal{G}$ is connected if, for each distinct pair of nodes $i$ and $j$, there is a path from $i$ to $j$. Let $a_{ij} = 1$ if $(j, i) \in \mathcal{E}$ and $a_{ij} = 0$ otherwise. The Laplacian matrix $\mathcal{L} = [l_{ij}] \in \mathbb{R}^{n \times n}$ associated with $\mathcal{G}$ is defined by $l_{ii} = \sum_{j \in N_i} a_{ij}$ and $l_{ij} = -a_{ij}$ for $i \neq j$.
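As a concrete illustration (not part of the original development), the following sketch builds the adjacency and Laplacian matrices of an undirected graph from an edge list; the four-node path graph is a hypothetical example, not the topology of Figure 2.

```python
import numpy as np

def laplacian(n, edges):
    """Build the adjacency matrix A and Laplacian L = D - A of an undirected graph.

    n: number of nodes; edges: iterable of (i, j) pairs with 0-based node indices.
    """
    A = np.zeros((n, n))
    for i, j in edges:
        A[i, j] = A[j, i] = 1.0            # a_ij = 1 iff nodes i and j are connected
    L = np.diag(A.sum(axis=1)) - A          # l_ii = sum_j a_ij, l_ij = -a_ij
    return A, L

# Hypothetical 4-node path graph 1-2-3-4 (0-based indices)
A, L = laplacian(4, [(0, 1), (1, 2), (2, 3)])
print(L)
```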
Lemma 1.
([17]). If the undirected graph $\mathcal{G}$ is connected, then there exists a nonsingular matrix $\bar{P} = [p_1, \ldots, p_n] \in \mathbb{R}^{n \times n}$ such that
$$\mathcal{L} = \bar{P} \begin{bmatrix} 0 & \mathbf{0}_{n-1}^T \\ \mathbf{0}_{n-1} & \Lambda \end{bmatrix} \bar{P}^T = P \Lambda P^T$$
where $\Lambda = \mathrm{diag}\{\lambda_2, \ldots, \lambda_n\}$ with $\lambda_\ell$, $\ell = 2, \ldots, n$, the positive real eigenvalues of $\mathcal{L}$, $p_\ell$ the right eigenvectors of $\mathcal{L}$ associated with $\lambda_\ell$, $p_1 = \mathbf{1}_n/\sqrt{n}$, and $P = [p_2, \ldots, p_n]$.
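Lemma 1 can be checked numerically (an illustrative sketch only, using the same hypothetical path graph as above); it relies on the fact that a real symmetric Laplacian admits an orthonormal eigendecomposition.

```python
import numpy as np

# Laplacian of the hypothetical 4-node path graph used above.
A = np.zeros((4, 4))
for i, j in [(0, 1), (1, 2), (2, 3)]:
    A[i, j] = A[j, i] = 1.0
L = np.diag(A.sum(axis=1)) - A

eigvals, eigvecs = np.linalg.eigh(L)    # ascending eigenvalues, orthonormal eigenvectors
print(np.round(eigvals, 6))             # one zero eigenvalue, the rest positive (graph is connected)
print(np.round(eigvecs[:, 0], 6))       # entries all equal to +/- 1/sqrt(n)
P, Lam = eigvecs[:, 1:], np.diag(eigvals[1:])
print(np.allclose(P @ Lam @ P.T, L))    # confirms L = P Lambda P^T as in Lemma 1
```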

2.2. Problem Formulation

Consider a multi-agent system composed of n nonlinear agents. Each agent can be described by the following strict-feedback dynamics [18]:
$$\dot{x}_{i,m} = f_{i,m}(\bar{x}_{i,m}) + g_{i,m}(\bar{x}_{i,m})\, x_{i,m+1} + d_{i,m}(t), \quad m = 1, \ldots, q_i - 1$$
$$\dot{x}_{i,q_i} = f_{i,q_i}(\bar{x}_{i,q_i}) + g_{i,q_i}(\bar{x}_{i,q_i})\, u_i + d_{i,q_i}(t) \qquad (1)$$
in which $\bar{x}_{i,\ell} = [x_{i,1}, \ldots, x_{i,\ell}]^T \in \mathbb{R}^\ell$, $\ell = 1, \ldots, q_i$, are the state vectors, and $y_i = x_{i,1} \in \mathbb{R}$ and $u_i \in \mathbb{R}$ represent the agent's output and control input, respectively. $f_{i,\ell}(\bar{x}_{i,\ell}), g_{i,\ell}(\bar{x}_{i,\ell}): \mathbb{R}^\ell \to \mathbb{R}$ are nonlinear dynamics functions with unknown analytical expressions, and $d_{i,\ell}(t): [0, \infty) \to \mathbb{R}$ denotes an uncertain disturbance. Moreover, $f_{i,\ell}(\bar{x}_{i,\ell})$ and $g_{i,\ell}(\bar{x}_{i,\ell})$ are locally Lipschitz in $\bar{x}_{i,\ell}$, and $d_{i,\ell}(t)$ is piecewise continuous in $t$. Agent $i$ has a differentiable local cost function $h_i(y_i): \mathbb{R} \to \mathbb{R}$, which is strongly convex and has a Lipschitz gradient, i.e., there is a constant $M_i > 0$ such that $|\nabla h_i(a) - \nabla h_i(b)| \le M_i |a - b|$ for all $a, b \in \mathbb{R}$.
Our control goal is to develop a distributed event-triggered controller $u_i$ in (1) such that the agent outputs achieve approximate optimal consensus, i.e., $\limsup_{t \to \infty} |y_i(t) - y^*| \le \varepsilon$ for all $i = 1, \ldots, n$, where $y^*$ is the optimal solution of $\min_{s \in \mathbb{R}} h(s) = \sum_{i=1}^n h_i(s)$, and $\varepsilon$ is a positive constant that can be made arbitrarily small. In addition, all closed-loop signals must remain bounded.
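For later reference, note the standard optimality condition implied by this formulation (a routine consequence of strong convexity, spelled out here for clarity rather than stated in the original): since each $h_i$ is strongly convex and differentiable, so is $h$, and the global minimizer $y^*$ is the unique zero of the summed local gradients,
$$\nabla h(y^*) = \sum_{i=1}^{n} \nabla h_i(y^*) = 0.$$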
To achieve the control goal, we make the following standard assumptions about the agent (1).
Assumption 1.
The communication graph G is undirected and connected.
Assumption 2.
The unknown disturbances $d_{i,\ell}(t)$, $i = 1, \ldots, n$, $\ell = 1, \ldots, q_i$, are bounded. There exist strictly positive, continuous functions $\underline{g}_{i,\ell}(\bar{x}_{i,\ell})$ such that $\underline{g}_{i,\ell}(\bar{x}_{i,\ell}) \le |g_{i,\ell}(\bar{x}_{i,\ell})|$ for all $\bar{x}_{i,\ell} \in \mathbb{R}^\ell$. Furthermore, the sign of $g_{i,\ell}(\bar{x}_{i,\ell})$ is known.

3. Main Result

3.1. Controller Design

To realize the optimal consensus goal in a distributed manner, we propose a novel third-order filter for agent i utilizing locally available information. Inspired by [5,11], the filter is defined as follows:
$$\dot{z}_{i,0} = y_i - z_{i,1} \qquad (2)$$
$$\dot{z}_{i,1} = -\alpha \nabla h_i(y_i) - \beta \sum_{j \in N_i} a_{ij}(y_i - y_j) - z_{i,2}, \qquad \dot{z}_{i,2} = \alpha\beta \sum_{j \in N_i} a_{ij}(y_i - y_j) \qquad (3)$$
where α > 0 and β > 0 are constants. Furthermore, the auxiliary variable is defined as
$$e_{i,1} = z_{i,0} + \dot{z}_{i,0}.$$
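To make the filter concrete, here is a minimal sketch of the right-hand side of (2)-(3) and the auxiliary variable for one agent, assuming the local gradient and the neighboring outputs are available; the function and variable names are illustrative, not from the paper, and the signs follow the reconstruction of (2)-(3) above.

```python
import numpy as np

def filter_rhs(y_i, y_neighbors, a_i, z, alpha, beta, grad_h):
    """Right-hand side of the filter (2)-(3) for agent i.

    y_i: agent output; y_neighbors: outputs y_j of its neighbors;
    a_i: adjacency weights a_ij for those neighbors; z = (z0, z1, z2);
    grad_h: callable returning the gradient of the local cost h_i.
    """
    z0, z1, z2 = z
    consensus = sum(a_ij * (y_i - y_j) for a_ij, y_j in zip(a_i, y_neighbors))
    dz0 = y_i - z1                                        # (2)
    dz1 = -alpha * grad_h(y_i) - beta * consensus - z2    # (3)
    dz2 = alpha * beta * consensus                        # (3)
    return np.array([dz0, dz1, dz2])

def auxiliary_e1(z0, dz0):
    """Auxiliary variable e_{i,1} = z_{i,0} + dz_{i,0}/dt."""
    return z0 + dz0
```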
Subsequently, a backstepping design method is proposed for each agent by adopting the prescribed performance control technique presented in [19,20]. Specifically, let us define the error variables:
$$e_{i,m} = x_{i,m} - v_{i,m-1}, \quad m = 2, \ldots, q_i$$
where $v_{i,m-1}$ are the virtual controls. A performance function is introduced as $\rho_{i,\ell}(t) = (\rho_{i,\ell,0} - \rho_{i,\ell,\infty}) e^{-\gamma_{i,\ell} t} + \rho_{i,\ell,\infty}$, $\ell = 1, \ldots, q_i$, where $\gamma_{i,\ell}$, $\rho_{i,\ell,\infty}$, and $\rho_{i,\ell,0}$ are positive constants satisfying $\rho_{i,\ell,\infty} < \rho_{i,\ell,0}$ and $|e_{i,\ell}(0)| < \rho_{i,\ell,0}$. Without loss of generality, let us assume $g_{i,\ell}(\bar{x}_{i,\ell}) > 0$. The virtual control laws $v_{i,\ell}$ are proposed as
$$v_{i,\ell} = -\varpi_{i,\ell}\, \phi_{i,\ell}$$
where $\varpi_{i,\ell}$ are positive gains, $\phi_{i,\ell} = \ln\left(\frac{1 + \zeta_{i,\ell}}{1 - \zeta_{i,\ell}}\right)$, and $\zeta_{i,\ell} = e_{i,\ell}/\rho_{i,\ell}$. We propose the event-triggered distributed controller for the $i$th agent as
$$u_i(t) = v_{i,q_i}(t_{i,k}), \quad t \in [t_{i,k}, t_{i,k+1}), \qquad t_{i,k+1} = \inf\{t > t_{i,k} : |\delta_i(t)| \ge \eta_i\} \qquad (6)$$
where $t_{i,0} = 0$, $k = 0, 1, 2, \ldots$, $\delta_i(t) = v_{i,q_i}(t) - v_{i,q_i}(t_{i,k})$, and $\eta_i$ is a positive design parameter.
The strategy behind (3) is to use only relative output measurements and local gradients to generate the reference output. The purpose of introducing (2) is to include an integral term for the tracking error between the output $y_i$ and the reference output $z_{i,1}$, which facilitates the stability analysis; similar constructions can be found in time-triggered consensus control methods [5,14].
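A minimal sketch of the prescribed-performance virtual control and the trigger rule (6) is given below for a second-order agent ($q_i = 2$), using illustrative names; the clipping of $\zeta$ is a purely numerical safeguard and not part of the design.

```python
import numpy as np

def perf(t, rho0, rho_inf, gamma):
    """Performance function rho(t) = (rho0 - rho_inf) * exp(-gamma t) + rho_inf."""
    return (rho0 - rho_inf) * np.exp(-gamma * t) + rho_inf

def virtual_control(e, rho, varpi):
    """v = -varpi * ln((1 + zeta) / (1 - zeta)) with zeta = e / rho."""
    zeta = np.clip(e / rho, -0.999999, 0.999999)   # numerical safeguard only
    return -varpi * np.log((1.0 + zeta) / (1.0 - zeta))

class EventTrigger:
    """Hold the last broadcast value of v_{i,q_i}; update it when |v - v_last| >= eta."""
    def __init__(self, eta):
        self.eta, self.u, self.updates = eta, None, 0

    def __call__(self, v):
        if self.u is None or abs(v - self.u) >= self.eta:
            self.u, self.updates = v, self.updates + 1
        return self.u
```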

3.2. Stability Analysis

Theorem 1.
Consider a group of $n$ uncertain heterogeneous nonlinear agents (1) controlled by the distributed event-triggered controller (6) with the filter (3). Under Assumptions 1 and 2, each agent achieves approximate optimal consensus while all signals in the closed-loop system remain bounded. Furthermore, Zeno behavior is avoided, i.e., there is a constant $\tau_i^* > 0$ satisfying $t_{i,k+1} - t_{i,k} \ge \tau_i^*$ for all $k = 0, 1, 2, \ldots$.
Proof. 
The state variables $x_{i,1}, \ldots, x_{i,q_i}$ of agent $i$ in (1) can be expressed in terms of $v_{i,\ell}$, $z_{i,0}$, $z_{i,1}$, $z_{i,2}$, and $t$ as
$$x_{i,1} = \zeta_{i,1}\rho_{i,1}(t) - z_{i,0} + z_{i,1}, \qquad x_{i,m} = \zeta_{i,m}\rho_{i,m}(t) + v_{i,m-1}(\zeta_{i,m-1}), \quad m = 2, \ldots, q_i. \qquad (7)$$
Note that $g_{i,\ell}$ and $f_{i,\ell}$ are functions of $\bar{x}_{i,\ell}$; hence they can be expressed as $g_{i,1}(x_{i,1}) = g_{i,1}(\zeta_{i,1}\rho_{i,1} - z_{i,0} + z_{i,1})$, $f_{i,1}(x_{i,1}) = f_{i,1}(\zeta_{i,1}\rho_{i,1} - z_{i,0} + z_{i,1})$, $g_{i,m}(\bar{x}_{i,m}) = g_{i,m}(\zeta_{i,1}\rho_{i,1} - z_{i,0} + z_{i,1}, \ldots, \zeta_{i,m}\rho_{i,m} + v_{i,m-1}(\zeta_{i,m-1}))$, and $f_{i,m}(\bar{x}_{i,m}) = f_{i,m}(\zeta_{i,1}\rho_{i,1} - z_{i,0} + z_{i,1}, \ldots, \zeta_{i,m}\rho_{i,m} + v_{i,m-1}(\zeta_{i,m-1}))$. Accordingly, the dynamics of $z_{i,0}$, $z_{i,1}$, and $z_{i,2}$ in (2) and (3) can be rewritten as
$$\dot{z}_{i,0} = \zeta_{i,1}\rho_{i,1} - z_{i,0}$$
$$\dot{z}_{i,1} = -\alpha \nabla h_i(\zeta_{i,1}\rho_{i,1} - z_{i,0} + z_{i,1}) - \beta \sum_{j \in N_i} a_{ij}\big((\zeta_{i,1}\rho_{i,1} - z_{i,0} + z_{i,1}) - (\zeta_{j,1}\rho_{j,1} - z_{j,0} + z_{j,1})\big) - z_{i,2}$$
$$\dot{z}_{i,2} = \alpha\beta \sum_{j \in N_i} a_{ij}\big((\zeta_{i,1}\rho_{i,1} - z_{i,0} + z_{i,1}) - (\zeta_{j,1}\rho_{j,1} - z_{j,0} + z_{j,1})\big). \qquad (8)$$
Noting from (6) that $|v_{i,q_i}(t) - u_i(t)| \le \eta_i$, the time derivatives of $\zeta_{i,\ell}$, $\ell = 1, \ldots, q_i$, are given by
$$\dot{\zeta}_{i,1} = \frac{1}{\rho_{i,1}}\big((\dot{z}_{i,0} + \dot{x}_{i,1} - \dot{z}_{i,1}) - \zeta_{i,1}\dot{\rho}_{i,1}\big) = \frac{1}{\rho_{i,1}}\Big(\dot{z}_{i,0} - \dot{z}_{i,1} - \zeta_{i,1}\dot{\rho}_{i,1} + f_{i,1}(\zeta_{i,1}\rho_{i,1} - z_{i,0} + z_{i,1}) + g_{i,1}(\zeta_{i,1}\rho_{i,1} - z_{i,0} + z_{i,1})\big(\zeta_{i,2}\rho_{i,2} + v_{i,1}(\zeta_{i,1})\big) + d_{i,1}\Big) \qquad (9)$$
$$\dot{\zeta}_{i,m} = \frac{1}{\rho_{i,m}}\big(\dot{e}_{i,m} - \zeta_{i,m}\dot{\rho}_{i,m}\big) = \frac{1}{\rho_{i,m}}\Big(f_{i,m}(\zeta_{i,1}\rho_{i,1} - z_{i,0} + z_{i,1}, \ldots, \zeta_{i,m}\rho_{i,m} + v_{i,m-1}(\zeta_{i,m-1})) + g_{i,m}(\zeta_{i,1}\rho_{i,1} - z_{i,0} + z_{i,1}, \ldots, \zeta_{i,m}\rho_{i,m} + v_{i,m-1}(\zeta_{i,m-1}))\big(\zeta_{i,m+1}\rho_{i,m+1} + v_{i,m}(\zeta_{i,m})\big) + d_{i,m} + \frac{2\varpi_{i,m-1}}{1 - \zeta_{i,m-1}^2}\dot{\zeta}_{i,m-1} - \zeta_{i,m}\dot{\rho}_{i,m}\Big) \qquad (10)$$
$$\dot{\zeta}_{i,q_i} = \frac{1}{\rho_{i,q_i}}\big(\dot{e}_{i,q_i} - \zeta_{i,q_i}\dot{\rho}_{i,q_i}\big) = \frac{1}{\rho_{i,q_i}}\Big(f_{i,q_i}(\zeta_{i,1}\rho_{i,1} - z_{i,0} + z_{i,1}, \ldots, \zeta_{i,q_i}\rho_{i,q_i} + v_{i,q_i-1}(\zeta_{i,q_i-1})) + g_{i,q_i}(\zeta_{i,1}\rho_{i,1} - z_{i,0} + z_{i,1}, \ldots, \zeta_{i,q_i}\rho_{i,q_i} + v_{i,q_i-1}(\zeta_{i,q_i-1}))\, v_{i,q_i}(\zeta_{i,q_i}) + \eta_i + d_{i,q_i} + \frac{2\varpi_{i,q_i-1}}{1 - \zeta_{i,q_i-1}^2}\dot{\zeta}_{i,q_i-1} - \zeta_{i,q_i}\dot{\rho}_{i,q_i}\Big) \qquad (11)$$
where $m = 2, \ldots, q_i - 1$. Let us define the column vector $\omega = [\zeta_{1,1}, \ldots, \zeta_{1,q_1}, \ldots, \zeta_{n,1}, \ldots, \zeta_{n,q_n}, z_{1,0}, z_{1,1}, z_{1,2}, \ldots, z_{n,0}, z_{n,1}, z_{n,2}]^T$ and the nonempty open set $\Omega = \{\omega \in \mathbb{R}^{q_1 + \cdots + q_n + 3n} \mid -1 < \zeta_{i,\ell} < 1, \ i = 1, \ldots, n, \ \ell = 1, \ldots, q_i\}$. Since $-\rho_{i,\ell,0} < e_{i,\ell}(0) < \rho_{i,\ell,0}$, $\ell = 1, \ldots, q_i$, it holds that $\omega(0) \in \Omega$. According to (8)-(11), the map $f$ of the closed-loop dynamics $\dot{\omega} = f(\omega, t)$ over the set $\Omega$ is piecewise continuous in $t$ and locally Lipschitz in $\omega$. Therefore, a unique maximal solution $\omega$ of (8)-(11) over $\Omega$ exists on $[0, t_f)$, i.e., $\omega(t) \in \Omega$ for all $t \in [0, t_f)$.
Now we show that z i , 0 ( t ) , z i , 1 ( t ) , and z i , 2 ( t ) are bounded on [ 0 , t f ) . Applying the integrating factor method to (2) yields
$$z_{i,0}(t) = e^{-t} z_{i,0}(0) + \int_0^t e^{-(t-\sigma)} e_{i,1}(\sigma)\, d\sigma. \qquad (12)$$
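For completeness, the integrating-factor step behind (12) can be written out explicitly (a routine expansion of the argument in the text): rearranging the auxiliary variable definition gives $\dot{z}_{i,0} = -z_{i,0} + e_{i,1}$, so
$$\frac{d}{dt}\big(e^{t} z_{i,0}(t)\big) = e^{t}\big(\dot{z}_{i,0}(t) + z_{i,0}(t)\big) = e^{t} e_{i,1}(t), \qquad\text{hence}\qquad z_{i,0}(t) = e^{-t} z_{i,0}(0) + \int_0^t e^{-(t-\sigma)} e_{i,1}(\sigma)\, d\sigma.$$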
It follows from (12) and $e_{i,1} = \rho_{i,1}\zeta_{i,1}$ that $z_{i,0}$ is bounded whenever $\zeta_{i,1}$ is bounded; thus, $z_{i,0} \in \mathcal{L}_\infty[0, t_f)$. To analyze the boundedness of $z_{i,1}$ and $z_{i,2}$, define $\bar{z}_1 = \mathbf{1}_n y^*$ and $\bar{z}_2 = -\frac{1}{\beta}\nabla H(\bar{z}_1)$, where $\nabla H(s) = [\nabla h_1(s_1), \ldots, \nabla h_n(s_n)]^T$ for $s = [s_1, \ldots, s_n]^T$. Denote the regulation errors as $\tilde{z}_1 = z_1 - \bar{z}_1$ and $\tilde{z}_2 = z_2 - \bar{z}_2$, whose dynamics are
$$\dot{\tilde{z}}_1 = -\beta \mathcal{L} e_1 - \beta \mathcal{L} \tilde{z}_1 - \alpha\beta z_2 - \alpha \nabla H(e_1 - z_0 + z_1), \qquad \dot{\tilde{z}}_2 = \mathcal{L} z_1 \qquad (13)$$
where $z_\ell = [z_{1,\ell}, \ldots, z_{n,\ell}]^T$, $\ell = 0, 1, 2$, and $e_1 = [e_{1,1}, \ldots, e_{n,1}]^T$. Motivated by [14], let us consider the following Lyapunov function
$$V = \frac{1}{2} r^T \begin{bmatrix} I_n & \vartheta I_n \\ \vartheta I_n & \alpha\beta P \Lambda^{-1} P^T + \vartheta\beta I_n \end{bmatrix} r \qquad (14)$$
where $r = [\tilde{z}_1^T, \tilde{z}_2^T]^T$, $\vartheta$ is a positive constant with $\vartheta < \beta$ introduced only for the purpose of the stability analysis, and $P$ and $\Lambda$ are defined in Lemma 1. A straightforward computation similar to [14] gives
$$\dot{V} \le -\kappa_1 V + \kappa_2 \|z_0\|^2 + \kappa_3 \|e_1\|^2. \qquad (15)$$
Here $\kappa_1$, $\kappa_2$, and $\kappa_3$ are positive constants whose values depend only on $\alpha$, $\beta$, $\vartheta$, the Laplacian matrix $\mathcal{L}$, and the Lipschitz constants $M_i$. Since $e_1$ and $z_0$ are bounded on $[0, t_f)$, we conclude from (15) that $V$ is bounded on $[0, t_f)$. As a result, $z_{i,1}, z_{i,2} \in \mathcal{L}_\infty[0, t_f)$. It then follows from (8) that $\dot{z}_{i,0}, \dot{z}_{i,1}, \dot{z}_{i,2} \in \mathcal{L}_\infty[0, t_f)$.
Step 1. Consider the Lyapunov function $V_{i,1} = \frac{1}{2}\phi_{i,1}^2$. Differentiating with respect to $t$ and using (9), we have
$$\dot{V}_{i,1} = \frac{2\phi_{i,1}}{(1 - \zeta_{i,1}^2)\rho_{i,1}}\Big(\dot{z}_{i,0} - \dot{z}_{i,1} - \zeta_{i,1}\dot{\rho}_{i,1} + f_{i,1}(\zeta_{i,1}\rho_{i,1} - z_{i,0} + z_{i,1}) + g_{i,1}(\zeta_{i,1}\rho_{i,1} - z_{i,0} + z_{i,1})\big(\zeta_{i,2}\rho_{i,2} - \varpi_{i,1}\phi_{i,1}\big) + d_{i,1}\Big).$$
Since $\zeta_{i,\ell} \in (-1, 1)$, $\ell = 1, \ldots, q_i$, on $[0, t_f)$, $z_{i,0}, z_{i,1}, \dot{z}_{i,0}, \dot{z}_{i,1} \in \mathcal{L}_\infty[0, t_f)$, and $\rho_{i,1}$, $\dot{\rho}_{i,1}$, $\rho_{i,2}$, and $d_{i,1}$ are always bounded, the continuity of $f_{i,1}$ and $g_{i,1}$, the extreme value theorem, and Assumption 2 yield $|\dot{z}_{i,0} - \dot{z}_{i,1} - \zeta_{i,1}\dot{\rho}_{i,1} + f_{i,1}(\zeta_{i,1}\rho_{i,1} - z_{i,0} + z_{i,1}) + g_{i,1}(\zeta_{i,1}\rho_{i,1} - z_{i,0} + z_{i,1})\zeta_{i,2}\rho_{i,2} + d_{i,1}| \le \bar{f}_{i,1}^*$ and $\underline{g}_{i,1}^* \le g_{i,1}(\zeta_{i,1}\rho_{i,1} - z_{i,0} + z_{i,1})$ for some positive constants $\bar{f}_{i,1}^*$ and $\underline{g}_{i,1}^*$ and all $t \in [0, t_f)$. This, together with the fact that $\frac{2}{(1 - \zeta_{i,1}^2)\rho_{i,1}} > 0$, gives
$$\dot{V}_{i,1} \le \frac{2}{(1 - \zeta_{i,1}^2)\rho_{i,1}}\big(-\underline{g}_{i,1}^* \varpi_{i,1}\phi_{i,1}^2 + \bar{f}_{i,1}^*|\phi_{i,1}|\big), \quad t \in [0, t_f).$$
Therefore, $\dot{V}_{i,1}(t) < 0$ whenever $|\phi_{i,1}| > \frac{\bar{f}_{i,1}^*}{\underline{g}_{i,1}^* \varpi_{i,1}}$, $t \in [0, t_f)$. We obtain
$$|\phi_{i,1}(t)| \le \phi_{i,1}^* = \max\Big\{|\phi_{i,1}(0)|, \frac{\bar{f}_{i,1}^*}{\underline{g}_{i,1}^* \varpi_{i,1}}\Big\}, \quad t \in [0, t_f).$$
Hence, the virtual control $v_{i,1}(\zeta_{i,1}(t))$ is bounded on $[0, t_f)$. Inverting the logarithmic function in $\phi_{i,1}$, we obtain
$$-1 < \underline{\zeta}_{i,1} \le \zeta_{i,1}(t) \le \bar{\zeta}_{i,1} < 1, \quad t \in [0, t_f) \qquad (18)$$
where $\underline{\zeta}_{i,1} = -(e^{\phi_{i,1}^*} - 1)/(e^{\phi_{i,1}^*} + 1)$ and $\bar{\zeta}_{i,1} = (e^{\phi_{i,1}^*} - 1)/(e^{\phi_{i,1}^*} + 1)$. In addition, noting (9) and $\dot{v}_{i,1}(t) = -\frac{2\varpi_{i,1}}{1 - \zeta_{i,1}^2}\dot{\zeta}_{i,1}$, we conclude that $\dot{\zeta}_{i,1}(t)$ and $\dot{v}_{i,1}(t)$ are bounded on $[0, t_f)$.
Step m ($m = 2, \ldots, q_i$). Applying the method of Step 1 recursively to the remaining steps and selecting $V_{i,m} = \frac{1}{2}\phi_{i,m}^2$, we obtain
$$\dot{V}_{i,m} \le \frac{2}{(1 - \zeta_{i,m}^2)\rho_{i,m}}\big(-\underline{g}_{i,m}^* \varpi_{i,m}\phi_{i,m}^2 + \bar{f}_{i,m}^*|\phi_{i,m}|\big), \quad t \in [0, t_f) \qquad (19)$$
where $\bar{f}_{i,m}^* > 0$ and $\underline{g}_{i,m}^* > 0$ are constants satisfying
$$\underline{g}_{i,m}^* \le g_{i,m}(\zeta_{i,1}\rho_{i,1} - z_{i,0} + z_{i,1}, \ldots, \zeta_{i,m}\rho_{i,m} + v_{i,m-1}(\zeta_{i,m-1})), \quad m = 2, \ldots, q_i$$
$$\Big|f_{i,\ell}(\zeta_{i,1}\rho_{i,1} - z_{i,0} + z_{i,1}, \ldots, \zeta_{i,\ell}\rho_{i,\ell} + v_{i,\ell-1}(\zeta_{i,\ell-1})) + d_{i,\ell} + \frac{2\varpi_{i,\ell-1}}{1 - \zeta_{i,\ell-1}^2}\dot{\zeta}_{i,\ell-1} - \zeta_{i,\ell}\dot{\rho}_{i,\ell} + g_{i,\ell}(\zeta_{i,1}\rho_{i,1} - z_{i,0} + z_{i,1}, \ldots, \zeta_{i,\ell}\rho_{i,\ell} + v_{i,\ell-1}(\zeta_{i,\ell-1}))\,\zeta_{i,\ell+1}\rho_{i,\ell+1}\Big| \le \bar{f}_{i,\ell}^*, \quad \ell = 2, \ldots, q_i - 1$$
$$\Big|f_{i,q_i}(\zeta_{i,1}\rho_{i,1} - z_{i,0} + z_{i,1}, \ldots, \zeta_{i,q_i}\rho_{i,q_i} + v_{i,q_i-1}(\zeta_{i,q_i-1})) + \eta_i + d_{i,q_i} + \frac{2\varpi_{i,q_i-1}}{1 - \zeta_{i,q_i-1}^2}\dot{\zeta}_{i,q_i-1} - \zeta_{i,q_i}\dot{\rho}_{i,q_i}\Big| \le \bar{f}_{i,q_i}^*.$$
It then follows from (19) that
$$|\phi_{i,m}(t)| \le \phi_{i,m}^* = \max\Big\{|\phi_{i,m}(0)|, \frac{\bar{f}_{i,m}^*}{\underline{g}_{i,m}^* \varpi_{i,m}}\Big\}, \quad t \in [0, t_f).$$
Consequently, the virtual control $v_{i,m}(\zeta_{i,m}(t))$ is bounded on $[0, t_f)$ and $\zeta_{i,m}$ satisfies
$$-1 < \underline{\zeta}_{i,m} \le \zeta_{i,m}(t) \le \bar{\zeta}_{i,m} < 1, \quad t \in [0, t_f) \qquad (21)$$
where $\underline{\zeta}_{i,m} = -(e^{\phi_{i,m}^*} - 1)/(e^{\phi_{i,m}^*} + 1)$ and $\bar{\zeta}_{i,m} = (e^{\phi_{i,m}^*} - 1)/(e^{\phi_{i,m}^*} + 1)$. In addition, noting (10) and (11) and $\dot{v}_{i,m}(t) = -\frac{2\varpi_{i,m}}{1 - \zeta_{i,m}^2}\dot{\zeta}_{i,m}$, we conclude that $\dot{\zeta}_{i,m}(t)$, $v_{i,m}(t)$, and $\dot{v}_{i,m}(t)$ are bounded on $[0, t_f)$. Notice from (18), (21), and $z_{i,0}, z_{i,1}, z_{i,2} \in \mathcal{L}_\infty[0, t_f)$ that $\omega(t) \in \Omega^*$ for all $t \in [0, t_f)$, where $\Omega^*$ is a nonempty compact subset of $\Omega$. Therefore, the solution is global, i.e., $t_f = \infty$.
Next, we show that all agents can realize the approximate optimal consensus. Invoking (12) and (18) leads to
$$|z_{i,0}(t)| \le e^{-t}|z_{i,0}(0)| + \int_0^t e^{-(t-\sigma)} |e_{i,1}(\sigma)|\, d\sigma \le e^{-t}|z_{i,0}(0)| + \int_0^t e^{-(t-\sigma)} \rho_{i,1}(\sigma)\, d\sigma.$$
This indicates that
$$|z_{i,0}(t)| \le \begin{cases} e^{-t}|z_{i,0}(0)| + (\rho_{i,1,0} - \rho_{i,1,\infty})\, t e^{-t} + \rho_{i,1,\infty}(1 - e^{-t}), & \text{if } \gamma_{i,1} = 1 \\ e^{-t}|z_{i,0}(0)| + \dfrac{\rho_{i,1,0} - \rho_{i,1,\infty}}{1 - \gamma_{i,1}}\big(e^{-\gamma_{i,1} t} - e^{-t}\big) + \rho_{i,1,\infty}(1 - e^{-t}), & \text{otherwise.} \end{cases} \qquad (22)$$
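The piecewise form in (22) comes from evaluating the convolution integral explicitly, a step compressed in the text:
$$\int_0^t e^{-(t-\sigma)} \rho_{i,1}(\sigma)\, d\sigma = (\rho_{i,1,0} - \rho_{i,1,\infty})\, e^{-t}\int_0^t e^{(1-\gamma_{i,1})\sigma}\, d\sigma + \rho_{i,1,\infty}(1 - e^{-t}) = \begin{cases} (\rho_{i,1,0} - \rho_{i,1,\infty})\, t e^{-t} + \rho_{i,1,\infty}(1 - e^{-t}), & \gamma_{i,1} = 1 \\ \dfrac{\rho_{i,1,0} - \rho_{i,1,\infty}}{1 - \gamma_{i,1}}\big(e^{-\gamma_{i,1} t} - e^{-t}\big) + \rho_{i,1,\infty}(1 - e^{-t}), & \gamma_{i,1} \neq 1. \end{cases}$$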
In addition, by (15), we further have
$$V(t) \le e^{-\kappa_1 t} V(0) + \int_0^t e^{-\kappa_1(t-\sigma)}\big(\kappa_2 \|z_0(\sigma)\|^2 + \kappa_3 \|e_1(\sigma)\|^2\big)\, d\sigma. \qquad (23)$$
Noting that $|e_{i,1}(t)| \le (\rho_{i,1,0} - \rho_{i,1,\infty}) e^{-\gamma_{i,1} t} + \rho_{i,1,\infty}$ and (22), we conclude that $z_0$ and $e_1$ exponentially converge to the compact sets $\Psi_0 = \{z_0 \in \mathbb{R}^n \mid \|z_0\|^2 \le \sum_{i=1}^n \rho_{i,1,\infty}^2\}$ and $\Psi_1 = \{e_1 \in \mathbb{R}^n \mid \|e_1\|^2 \le \sum_{i=1}^n \rho_{i,1,\infty}^2\}$, respectively. Furthermore, we conclude from (23) that $V$ also converges exponentially to the compact set $\Psi_2 = \{V \in \mathbb{R} \mid V \le \frac{(\kappa_2 + \kappa_3)\sum_{i=1}^n \rho_{i,1,\infty}^2}{\kappa_1}\}$, which can be kept arbitrarily small by reducing $\rho_{i,1,\infty}$. By the definition of $V$ in (14), $\tilde{z}_{i,1}$ and $\tilde{z}_{i,2}$ tend towards an arbitrarily small neighborhood of zero. Recalling $\bar{z}_1 = \mathbf{1}_n y^*$ and $\tilde{z}_1 = z_1 - \bar{z}_1$, we conclude that all agents reach approximate optimal consensus, i.e., $\limsup_{t \to \infty} |y_i(t) - y^*| \le \varepsilon$ for all $i = 1, \ldots, n$, where $\varepsilon$ is a positive constant that can be made arbitrarily small by decreasing $\rho_{i,1,\infty}$.
We are now in a position to show that there is a constant $\tau_i^* > 0$ such that $t_{i,k+1} - t_{i,k} \ge \tau_i^*$ for all $k = 0, 1, 2, \ldots$. Recalling that $\delta_i(t) = v_{i,q_i}(t) - v_{i,q_i}(t_{i,k})$ for all $t \in [t_{i,k}, t_{i,k+1})$, we have $|\dot{\delta}_i(t)| = |\dot{v}_{i,q_i}(t)|$. Since $\dot{v}_{i,q_i}$ is bounded, there exists a positive constant $\varsigma_i$ such that $|\dot{\delta}_i(t)| \le \varsigma_i$. Considering that $\delta_i(t_{i,k}) = 0$ and $\lim_{t \to t_{i,k+1}^-} |\delta_i(t)| = \eta_i$, it can be inferred that $t_{i,k+1} - t_{i,k} \ge \tau_i^* = \eta_i/\varsigma_i$. Thus, the resulting control input updates are free from Zeno behavior. □
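The final step can be made fully explicit (an expansion of the argument above, with no additional assumptions): since $\delta_i(t_{i,k}) = 0$ and $|\dot{\delta}_i| \le \varsigma_i$ on $[t_{i,k}, t_{i,k+1})$,
$$\eta_i = \lim_{t \to t_{i,k+1}^-} |\delta_i(t)| = \Big|\int_{t_{i,k}}^{t_{i,k+1}} \dot{\delta}_i(\sigma)\, d\sigma\Big| \le \varsigma_i\,(t_{i,k+1} - t_{i,k}), \qquad\text{so}\qquad t_{i,k+1} - t_{i,k} \ge \frac{\eta_i}{\varsigma_i} = \tau_i^*.$$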

4. Simulation Results

We illustrate the application of our method with a numerical example involving six single-link robotic manipulators, as shown in Figure 1. The ith manipulator is described by the following dynamics:
$$\dot{x}_{i,1} = x_{i,2}, \qquad \dot{x}_{i,2} = \frac{1}{J_i}\big(u_i - \xi_i x_{i,2} - M_i g \chi_i \sin(x_{i,1})\big), \quad i = 1, \ldots, 6$$
where $\bar{x}_i = [x_{i,1}, x_{i,2}]^T$, with $x_{i,1} \in \mathbb{R}$ and $x_{i,2} \in \mathbb{R}$ representing the link angle and the angular velocity, respectively, and $u_i \in \mathbb{R}$ denotes the control torque. $g$, $J_i$, $\xi_i$, $M_i$, and $\chi_i$ are uncertain physical parameters whose definitions can be found in [21]. The communication topology between the manipulators is shown in Figure 2. The parameters of the manipulator models are $g = 9.81$, $J_i = 0.1 + 0.03i$, $\xi_i = 0.2 + 0.01i$, $M_i = 0.5 + 0.02i$, and $\chi_i = 0.3 + 0.15i$. The local cost functions $h_i$ are chosen as
$$h_1(x_{1,1}) = (x_{1,1} - 0.5)^2, \quad h_2(x_{2,1}) = x_{2,1}^2 + e^{0.1 x_{2,1}}, \quad h_3(x_{3,1}) = (x_{3,1} + 0.1)^2,$$
$$h_4(x_{4,1}) = 1.5 x_{4,1}^2 + 2 x_{4,1} + 1, \quad h_5(x_{5,1}) = 1.3 x_{5,1}^2 - 1, \quad h_6(x_{6,1}) = x_{6,1}^2 - x_{6,1} + 4.$$
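As a sanity check (not part of the original text), the minimizer of the global cost can be computed by finding the zero of the summed local gradients; with the cost functions above this gives $y^* \approx -0.022$, matching the value reported below.

```python
import numpy as np
from scipy.optimize import brentq

# Gradients of the six local cost functions h_i listed above.
grads = [
    lambda y: 2 * (y - 0.5),
    lambda y: 2 * y + 0.1 * np.exp(0.1 * y),
    lambda y: 2 * (y + 0.1),
    lambda y: 3 * y + 2,
    lambda y: 2.6 * y,
    lambda y: 2 * y - 1,
]

def total_grad(y):
    """Gradient of h(y) = sum_i h_i(y); strictly increasing, so it has a unique zero."""
    return sum(g(y) for g in grads)

y_star = brentq(total_grad, -1.0, 1.0)
print(round(y_star, 3))   # approximately -0.022
```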
The initial configurations of the manipulators are $\bar{x}_1(0) = [0.8, 1]^T$, $\bar{x}_2(0) = [0.8, 1.5]^T$, $\bar{x}_3(0) = [0.8, 1]^T$, $\bar{x}_4(0) = [0.8, 1]^T$, $\bar{x}_5(0) = [0.8, 1]^T$, and $\bar{x}_6(0) = [1.5, 1]^T$. The filters (2) and (3) and the controller (6) are implemented with the parameters $\varpi_{i,1} = 1$, $\varpi_{i,2} = 5$, $\eta_i = 0.6$, $\alpha = 1$, and $\beta = 1$. The performance functions are set to $\rho_{i,1}(t) = (5 - 0.01) e^{-0.35 t} + 0.01$ and $\rho_{i,2}(t) = (10 - 0.01) e^{-0.35 t} + 0.01$.
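For readers who wish to reproduce the experiment, a heavily simplified closed-loop skeleton is sketched below; it reuses the illustrative filter_rhs, perf, virtual_control, and EventTrigger sketches from Section 3. The forward Euler integration, the 1 ms step, and the 15 s horizon (consistent with the 15,000 periodic updates in Table 1) are assumptions; the paper does not specify its solver.

```python
import numpy as np

def manipulator_rhs(x, u, J, xi, M, chi, g=9.81):
    """Single-link manipulator dynamics: x[0] is the link angle, x[1] the angular velocity."""
    return np.array([x[1], (u - xi * x[1] - M * g * chi * np.sin(x[0])) / J])

dt, T = 1e-3, 15.0   # assumed step size and horizon
# Illustrative closed-loop outline (per step t = k*dt and per agent i):
#   1. evaluate filter_rhs(...) with the neighbors' outputs and the gradient of h_i;
#   2. form e_{i,1}, e_{i,2}, the zeta variables via perf(...), and v_{i,1}, v_{i,2} via virtual_control(...);
#   3. pass v_{i,2} through EventTrigger(eta=0.6) to obtain the held torque u_i;
#   4. advance the manipulator state with manipulator_rhs and the filter states by one Euler step.
```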
Figures 3-10 show the performance of the proposed event-triggered controller. The manipulator angles and velocities are exhibited in Figure 3, from which it can be seen that a satisfactory consensus has been reached. Furthermore, the minimizer of the global cost function $h(y) = \sum_{i=1}^6 h_i(y)$ is $y^* = -0.022$. Figure 3 also reveals that each manipulator angle converges to $y^*$, although only neighboring output information and local gradients are available. In addition, as optimal consensus is approached, the manipulator velocities decrease and converge to zero. The evolution of the errors $e_{i,1}$ and $e_{i,2}$ is shown in Figure 4, together with the corresponding performance functions. Clearly, $e_{i,1}$ and $e_{i,2}$ always fulfill the predefined performance specification, as predicted by the theoretical analysis, despite the event-triggered control inputs. The control input of each manipulator is pictured in Figures 5-10. We can observe from these figures that the manipulator control values are updated aperiodically, only when the triggering condition in (6) is met; otherwise, the control retains the value of the last update. To show the advantages of the proposed event-triggered method, a comparative simulation is also carried out in which the control signal is updated periodically with a period of 1 ms. Figures 11-17 show the performance of the time-triggered controller. Furthermore, Table 1 summarizes the corresponding numbers of control signal updates for the six manipulators under the event-triggered and the time-triggered controllers. These results confirm that our event-triggered control method achieves approximate optimal consensus similar to the time-triggered controller, while greatly reducing the number of control signal updates and thus saving computation and communication resources.

5. Conclusions

A novel event-triggered control framework has been presented in this work to realize optimal consensus control of nonlinear agents. It saves resources without sacrificing consensus convergence by proposing new filters and introducing event-triggered rules that reduce unnecessary control signal updates. Furthermore, we confirmed the Zeno-free nature of the method by demonstrating that there is a positive lower bound on the time between two consecutive updates of the control input. Simulation results for single-link manipulators validated our theoretical findings. Some open questions are worthy of further investigation, including the extension of the current results to more general switching topologies and robustness analysis with respect to communication delays.

Author Contributions

Conceptualization, G.W.; Investigation, Y.J.; Writing — review and editing, Y.J., Q.L. and C.W.; Project administration, G.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Natural Science Foundation of Shanghai under Grant 22ZR1443600 and the Shanghai Artificial Intelligence Innovation and Development Special Support Project under Grant 2019RGZN01041.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Psillakis, H.E. Consensus in networks of agents with unknown high-frequency gain signs and switching topology. IEEE Trans. Autom. Control 2017, 62, 3993–3998. [Google Scholar] [CrossRef]
  2. Abdessameud, A.; Tayebi, A. Distributed consensus algorithms for a class of high-order multi-agent systems on directed graphs. IEEE Trans. Autom. Control 2018, 63, 3464–3470. [Google Scholar] [CrossRef]
  3. Ren, W. On consensus algorithms for double-integrator dynamics. IEEE Trans. Autom. Control 2008, 53, 1503–1509. [Google Scholar] [CrossRef]
  4. Wang, G. Distributed control of higher-order nonlinear multi-agent systems with unknown non-identical control directions under general directed graphs. Automatica 2019, 110, 108559. [Google Scholar] [CrossRef]
  5. Wang, G. Consensus control in heterogeneous nonlinear multiagent systems with position feedback and switching topologies. IEEE Trans. Netw. Sci. Eng. 2022, 9, 3546–3557. [Google Scholar] [CrossRef]
  6. Wang, Q.; Psillakis, H.E.; Sun, C. Cooperative control of multiple agents with unknown high-frequency gain signs under unbalanced and switching topologies. IEEE Trans. Autom. Control 2019, 64, 2495–2501. [Google Scholar] [CrossRef] [Green Version]
  7. Mei, J.; Ren, W.; Chen, J. Distributed consensus of second-order multi-agent systems with heterogeneous unknown inertias and control gains under a directed graph. IEEE Trans. Autom. Control 2016, 61, 2019–2034. [Google Scholar] [CrossRef]
  8. Yang, F.; Yu, Z.; Huang, D.; Jiang, H. Distributed optimization for second-order multi-agent systems over directed networks. Mathematics 2022, 10, 3803. [Google Scholar] [CrossRef]
  9. Gharesifard, B.; Cortés, J. Distributed continuous-time convex optimization on weight-balanced digraphs. IEEE Trans. Autom. Control 2014, 59, 781–786. [Google Scholar] [CrossRef] [Green Version]
  10. Zanella, F.; Varagnolo, D.; Cenedese, A.; Pillonetto, G.; Schenato, L. Newton-Raphson consensus for distributed convex optimization. In Proceedings of the 2011 50th IEEE Conference on Decision and Control and European Control Conference, Orlando, FL, USA, 12–15 December 2011; pp. 5917–5922. [Google Scholar]
  11. Kia, S.S.; Cortés, J.; Martínez, S. Distributed convex optimization via continuous-time coordination algorithms with discrete-time communication. Automatica 2015, 55, 254–264. [Google Scholar] [CrossRef]
  12. Wang, X.; Yi, P.; Hong, Y. Dynamic optimization for multi-agent systems with external disturbances. Control Theory Technol. 2014, 12, 132–138. [Google Scholar] [CrossRef]
  13. Wang, X.; Li, S.; Wang, G. Distributed optimization for disturbed second-order multiagent systems based on active antidisturbance control. IEEE Trans. Neural Netw. Learn. Syst. 2019, 31, 2104–2117. [Google Scholar] [CrossRef] [PubMed]
  14. Gkesoulis, A.K.; Psillakis, H.E.; Lagos, A.R. Optimal consensus via OCPI regulation for unknown pure-feedback agents with disturbances and state-delays. IEEE Trans. Autom. Control 2022, 67, 4338–4345. [Google Scholar] [CrossRef]
  15. Xing, L.; Wen, C.; Liu, Z.; Su, H.; Cai, J. Event-triggered adaptive control for a class of uncertain nonlinear systems. IEEE Trans. Autom. Control 2017, 62, 2071–2076. [Google Scholar] [CrossRef]
  16. Zhuang, J.; Li, Z.; Hou, Z.; Yang, C. Event-triggered consensus control of nonlinear strict feedback multi-agent systems. Mathematics 2022, 10, 1596. [Google Scholar] [CrossRef]
  17. Bai, H.; Arcak, M. Instability mechanisms in cooperative control. IEEE Trans. Autom. Control 2009, 55, 258–263. [Google Scholar] [CrossRef]
  18. Wang, G.; Wang, C.; Ding, Z.; Ji, Y. Distributed consensus of nonlinear multi-agent systems with mismatched uncertainties and unknown high-frequency gains. IEEE Trans. Circuits Syst. II Express Briefs 2021, 68, 938–942. [Google Scholar] [CrossRef]
  19. Bechlioulis, C.P.; Rovithakis, G.A. A low-complexity global approximation-free control scheme with prescribed performance for unknown pure feedback systems. Automatica 2014, 50, 1217–1226. [Google Scholar] [CrossRef]
  20. Bechlioulis, C.P.; Rovithakis, G.A. Robust adaptive control of feedback linearizable MIMO nonlinear systems with prescribed performance. IEEE Trans. Autom. Control 2008, 53, 2090–2099. [Google Scholar] [CrossRef]
  21. Spong, M.W.; Hutchinson, S.; Vidyasagar, M. Robot Modeling and Control; John Wiley & Sons: Hoboken, NJ, USA, 2020. [Google Scholar]
Figure 1. Single-link manipulator.
Figure 2. Communication topology.
Figure 3. Trajectories of the angles $x_{i,1}$ and the velocities $x_{i,2}$ under the event-triggered control.
Figure 4. Trajectories of the errors $e_{i,1}$ and $e_{i,2}$ under the event-triggered control.
Figure 5. Event-triggered control input $u_1$.
Figure 6. Event-triggered control input $u_2$.
Figure 7. Event-triggered control input $u_3$.
Figure 8. Event-triggered control input $u_4$.
Figure 9. Event-triggered control input $u_5$.
Figure 10. Event-triggered control input $u_6$.
Figure 11. Trajectories of the angles $x_{i,1}$ and the velocities $x_{i,2}$ under the time-triggered control.
Figure 12. Time-triggered control input $u_1$.
Figure 13. Time-triggered control input $u_2$.
Figure 14. Time-triggered control input $u_3$.
Figure 15. Time-triggered control input $u_4$.
Figure 16. Time-triggered control input $u_5$.
Figure 17. Time-triggered control input $u_6$.
Table 1. Number of control signal updates under the event-triggered and the periodic time-triggered controllers.

Controller | Manipulator 1 | Manipulator 2 | Manipulator 3 | Manipulator 4 | Manipulator 5 | Manipulator 6
Event-triggered control | 1130 | 716 | 630 | 949 | 551 | 171
Periodic time-triggered control | 15,000 | 15,000 | 15,000 | 15,000 | 15,000 | 15,000