Article

Precise Dynamic Consensus under Event-Triggered Communication

by
Irene Perez-Salesa
*,†,
Rodrigo Aldana-Lopez
*,† and
Carlos Sagues
Departamento de Informática e Ingeniería de Sistemas (DIIS), Instituto de Investigación en Ingeniería de Aragón (I3A), Universidad de Zaragoza, María de Luna 1, 50018 Zaragoza, Spain
*
Authors to whom correspondence should be addressed.
These authors contributed equally to this work.
Machines 2023, 11(2), 128; https://doi.org/10.3390/machines11020128
Submission received: 21 December 2022 / Revised: 9 January 2023 / Accepted: 11 January 2023 / Published: 17 January 2023
(This article belongs to the Special Issue Advanced Motion Control of Multiple Robots)

Abstract:
This work addresses the problem of dynamic consensus, which consists of estimating the dynamic average of a set of time-varying signals distributed across a communication network of multiple agents. This problem has many applications in robotics, with formation control and target tracking being some of the most prominent ones. In this work, we propose a consensus algorithm to estimate the dynamic average in a distributed fashion, where discrete sampling and event-triggered communication are adopted to reduce the communication burden. Compared to other linear methods in the state of the art, our proposal can obtain exact convergence under continuous communication even when the dynamic average signal is persistently varying. Contrary to other sliding-mode approaches, our method reduces chattering in the discrete-time setting. The proposal is based on the discretization of established exact dynamic consensus results that use high-order sliding modes. The convergence of the protocol is verified through formal analysis, based on homogeneity properties, as well as through several numerical experiments. Concretely, we numerically show that an advantageous trade-off exists between the maximum steady-state consensus error and the communication rate. As a result, our proposal can outperform other state-of-the-art approaches, even when event-triggered communication is used in our protocol.

1. Introduction

The problem of dynamic consensus, also referred to as dynamic average tracking, consists of computing the average of a set of time-varying signals distributed across a communication network. This problem has recently attracted a lot of attention due to its applications in general cyber-physical systems and robotics. For instance, dynamic consensus algorithms can be used in multi-robot coordination tasks such as formation control and target tracking, as in [1,2]. A more detailed overview of common problems in the field of multi-robot coordination can be found in [3].
Many dynamic consensus approaches in continuous time exist in the literature [4]. For example, a high-order linear dynamic consensus approach is proposed in [5], which is capable of achieving arbitrarily small steady-state errors when persistently varying signals are used by tuning a protocol parameter. The protocol in [6] adopts a similar linear approach and analyses its performance against bounded cyber-attacks in the communication network. In [7], another linear consensus protocol is proposed to address the problem of having active and passive sensing nodes with time-varying roles. While these approaches obtain good theoretical results in continuous time, only discrete-time communication is possible in practice. In contrast, other works are designed directly in a discrete-time setting where local time-varying signals are sampled with a fixed sampling step shared across the network. For example, a high-order discrete-time linear dynamic consensus algorithm is proposed in [8] with similar performance to those in continuous time. The proposal in [9] extends the previous work to ensure it is robust to initialization errors. Similar ideas are applied to distributed convex optimization in [10].
Besides having discrete communication, maintaining a low communication burden in the network is desirable, both from power consumption and communication bandwidth usage perspectives. For this purpose, an event-triggered approach can be used to decide when agents should communicate instead of doing so at every sampling instant. Several works use event-triggered conditions in which the estimate for the average signal for a particular agent is only shared when it changes sufficiently with respect to the last transmitted value. For example, an event-triggered communication rule is applied to a continuous-time dynamic consensus problem in [11]. In [12], an event-triggered dynamic consensus approach is analysed for homogeneous and heterogeneous multi-agent systems. These ideas have shown to be helpful in distributed state estimation. For instance, deterministic [13] and stochastic [14] event-triggering rules have been used to construct distributed Kalman filters. The proposal in [15] borrows these ideas to construct an extended Kalman filter. Distributed set-membership estimators have also been studied in the context of event-triggered communication [16]. Event-triggered schemes for estimation typically result in a trade-off between the quality of the estimates and the number of triggered communication events, as is pointed out in [17]. This allows the user to tune the parameters of the event-triggering condition to reduce communication while achieving a desired estimation error for the consensus estimates. A more in-depth look into event-triggered strategies can be found in [18].
However, most event-triggered and discrete-time approaches for dynamic consensus are based on linear techniques, even in recent works [15]. This means that the estimations cannot be exact when the dynamic average is persistently varying, e.g., when the local signals are sinusoidal. Concretely, these approaches typically attain a bounded steady-state error, which can be made arbitrarily small by increasing some parameters of the algorithm [4]. Despite this, as is typical with high-gain arguments, increasing such parameters can reduce the algorithm's robustness when only imperfect measurements or noisy local signals are available [19]. In contrast, exact dynamic consensus algorithms can achieve exact convergence towards the dynamic average under reasonable assumptions on the local signals, including the persistently varying case. For example, a discontinuous First-Order Sliding Mode (FOSM) approach is used in [20] to achieve exact convergence in this setting. Similar ideas are applied in [21] for continuous-time state estimation. The work in [22] proposes a High-Order Sliding Mode (HOSM) approach for exact dynamic consensus, which was then extended in [23] to account for initialization errors. Note that these approaches rely on sliding modes to achieve such exact convergence. As a result, mostly continuous-time analyses are available for these protocols.
Unfortunately, the performance of sliding modes deteriorates in the discrete-time setting due to the so-called chattering ([24], Chapter 3). In particular, the FOSM approaches for dynamic consensus in [20,21] see their performance degrade in proportion to the time step in the discrete-time setting. A standard solution to attenuate chattering is to use HOSM [24]. To the best of our knowledge, the only methods that use HOSM for dynamic consensus are [22,23]. However, discrete-time analyses or event-triggered extensions of these approaches have not yet been discussed in the literature.
As a result of this discussion, we contribute a novel distributed dynamic consensus algorithm with the following features. First, the algorithm uses discrete-time samples of the local signal. We also propose the adoption of event-triggered communication between agents to reduce the communication burden. Our proposal is consistently exact, which means that the exact estimation of the average signal is obtained when the time step approaches zero and when events are triggered at each sampling step. We show the advantage of our method against linear approaches, in which the permanent consensus error cannot be eliminated. Moreover, in the general discrete-time case, we show that our proposal reduces chattering when compared to exact FOSM approaches.
This article is organized as follows. Section 2 provides a description of the problem of interest and our proposal to solve the consensus problem on a discrete-time setting using event-triggered communication. A formal analysis of the convergence for our protocol can be found in Section 3. In Section 4, we validate our proposal and compare it to other methods through simulation experiments. Furthermore, Section 5 provides a qualitative discussion comparing our proposal to other related works. The conclusions and future work are drawn in Section 6.

2. Problem Statement and Protocol Proposal

In this Section, we describe the problem of interest and our proposed consensus protocol to solve it using event-triggered communication between the agents.

2.1. Notation

Let $x = [x_1, \dots, x_n]^{\top} \in \mathbb{R}^n$; then $\|x\| := \max\{|x_i|\}_{i=1}^{n}$. Let $\mathbb{1} = [1, \dots, 1]^{\top} \in \mathbb{R}^n$, let $U = \mathbb{1}\mathbb{1}^{\top}$ be the matrix with unit elements, and let $I_n \in \mathbb{R}^{n \times n}$ be the identity matrix. Let $\operatorname{sign}(x) = 1$ if $x > 0$, $\operatorname{sign}(x) = -1$ if $x < 0$, and $\operatorname{sign}(0) = 0$. Moreover, if $x \in \mathbb{R}$, let $\lceil x \rfloor^{\alpha} := |x|^{\alpha}\operatorname{sign}(x)$ for $\alpha > 0$ and $\lceil x \rfloor^{0} := \operatorname{sign}(x)$. When $x = [x_1, \dots, x_n]^{\top} \in \mathbb{R}^n$, then $\lceil x \rfloor^{\alpha} := [\lceil x_1 \rfloor^{\alpha}, \dots, \lceil x_n \rfloor^{\alpha}]^{\top}$ for $\alpha \geq 0$. Let $x^{(n)}$ represent the $n$th-order derivative of $x$. The notation $c[-1,1]$, for some $c \in \mathbb{R}$, refers to the interval $[-c, c]$.

2.2. Problem Statement

Consider a multi-agent system distributed over a network $\mathcal{G} = (\mathcal{V}, \mathcal{E})$ with node set $\mathcal{V} = \{1, \dots, N\}$ and edge set $\mathcal{E} \subseteq \mathcal{V} \times \mathcal{V}$, modelling the communication links between agents. In the following, $\mathcal{G}$ is an undirected connected graph of $N$ agents, characterised by an adjacency matrix $A$ and equally described by an incidence matrix $D$ for an arbitrary choice of orientation ([25], Chapter 8). Let $\mathcal{N}_i$ denote the index set of neighbours of an agent $i \in \mathcal{V}$.
We consider that each agent $i \in \mathcal{V}$ has access to a local time-varying signal $z_i(t) \in \mathbb{R}$ at fixed sampling instants $t \in \{k\Delta\}_{k=0}^{\infty}$, where $\Delta$ is the sampling period. We write $z_i[k] := z_i(k\Delta)$ for simplicity. The goal is for all the agents to estimate the dynamic average signal [4]:
$$\bar{z}(t) = \frac{1}{N}\sum_{i=1}^{N} z_i(t) \tag{1}$$
in a distributed fashion, by sharing information between them.
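For illustration, the target quantity above can be computed centrally from sampled signals as follows; this is only a reference computation (the function name and array layout are our own), since in the distributed setting no single agent has access to all local signals:

```python
import numpy as np

def dynamic_average(z):
    """z: array of shape (N, K) with samples z_i[k] of the N local signals.
    Returns the dynamic average z_bar[k] = (1/N) * sum_i z_i[k] for each k."""
    z = np.asarray(z, dtype=float)
    return z.mean(axis=0)
```

In the experiments of Section 4, this would be applied to sinusoidal signals $z_i(t) = A_i\cos(\omega_i t + \phi_i)$ sampled at $t = k\Delta$.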

2.3. Protocol Proposal

Our proposal to solve the multi-agent consensus problem is a protocol that adopts event-triggered communication to reduce the communication burden in the network of agents. It also uses High-Order Sliding Modes to achieve precise consensus results.
Event-triggered communication: To reduce the communication burden in the network, an event-triggered approach is used to decide when an agent communicates with its neighbours. Denote by $\hat{z}_i[k]$ the estimation that agent $i \in \mathcal{V}$ has of the signal $\bar{z}(t)$ at $t = k\Delta$. Then, the communication link $(i,j) \in \mathcal{E}$ is updated at the sequence of instants $\{\ell_k^{ij}\}_{k=0}^{\infty}$ provided by the recursive triggering rule:
$$\ell_k^{ij} = \begin{cases} k & \text{if } |\hat{z}_i[k] - \hat{z}_i[\ell_{k-1}^{ij}]| \geq \varepsilon \ \text{ or } \ |\hat{z}_j[k] - \hat{z}_j[\ell_{k-1}^{ij}]| \geq \varepsilon \\ \ell_{k-1}^{ij} & \text{otherwise} \end{cases} \tag{2}$$
where $\varepsilon \geq 0$ is a design parameter, setting $\ell_0^{ij} = 0$ without loss of generality. The interpretation of (2) is that both agents $i, j$ of a link $(i,j) \in \mathcal{E}$ store local copies of each other's estimations $\hat{z}_i[\ell_{k-1}^{ij}], \hat{z}_j[\ell_{k-1}^{ij}]$ from the last time the link was updated. If agent $i$ detects that its current estimation $\hat{z}_i[k]$ satisfies $|\hat{z}_i[k] - \hat{z}_i[\ell_{k-1}^{ij}]| \geq \varepsilon$, it sends $\hat{z}_i[k]$ to agent $j$, which responds by sending its current estimation $\hat{z}_j[k]$ back. Subsequently, both agents update $\ell_k^{ij} = k$. A similar exchange occurs when agent $j$ detects $|\hat{z}_j[k] - \hat{z}_j[\ell_{k-1}^{ij}]| \geq \varepsilon$.
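The triggering rule for one link can be sketched in a few lines of Python; the function and the stored-state dictionary are illustrative choices of ours, not part of the formal protocol:

```python
def update_link(z_hat_i, z_hat_j, link_state, eps):
    """Event-triggering rule for one link (i, j).
    link_state stores the estimates exchanged at the last update instant;
    returns True (and refreshes both stored copies) when the link triggers."""
    if abs(z_hat_i - link_state["i"]) >= eps or abs(z_hat_j - link_state["j"]) >= eps:
        link_state["i"] = z_hat_i  # agent i transmits its current estimate ...
        link_state["j"] = z_hat_j  # ... and agent j replies with its own
        return True
    return False
```

Both stored copies are refreshed together because, as described above, a trigger by either agent causes a bidirectional exchange.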
The proposed algorithm to estimate $\bar{z}(t)$ in this setting has the following form. Each agent $i \in \{1, \dots, N\}$ has $m+1$ internal states $\chi_{i,0}[k], \dots, \chi_{i,m}[k]$, from which the local estimation $\hat{z}_i[k]$ is computed according to:
$$\begin{aligned} \text{Internal state update:}\quad & \chi_{i,\mu}[k+1] = \sum_{\nu=0}^{m-\mu}\frac{\Delta^{\nu}}{\nu!}\chi_{i,\mu+\nu}[k] + \Delta g_{\mu}\sum_{j\in\mathcal{N}_i}F_{\mu}\big(\hat{z}_i[\ell_k^{ij}] - \hat{z}_j[\ell_k^{ij}]\big), \quad \mu \in \{0, \dots, m-1\} \\ & \chi_{i,m}[k+1] = \chi_{i,m}[k] + \Delta g_{m}\sum_{j\in\mathcal{N}_i}F_{m}\big(\hat{z}_i[\ell_k^{ij}] - \hat{z}_j[\ell_k^{ij}]\big) \\ \text{Estimation output:}\quad & \hat{z}_i[k] := z_i[k] - \chi_{i,0}[k] \end{aligned} \tag{3}$$
with gains $g_0, \dots, g_m > 0$ and $F_{\mu}(\bullet) = \lceil\bullet\rfloor^{\frac{m-\mu}{m+1}}$. The proposal for the exact dynamic consensus algorithm with event-triggered communication between the agents is summarized in Algorithm 1. Figure 1 illustrates the flow of information for each agent $i \in \mathcal{V}$.
Algorithm 1: Exact dynamic consensus with event-triggered communication
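As a sketch of how the internal state update in (3) could be implemented for a single agent (the array layout, names, and explicit Taylor sum are our own reading of the equations, not code from the paper):

```python
import numpy as np
from math import factorial

def F(u, mu, m):
    """F_mu(u) = |u|^((m - mu)/(m + 1)) * sign(u); for mu = m this is sign(u)."""
    return np.sign(u) * np.abs(u) ** ((m - mu) / (m + 1))

def update_agent(chi, diffs, g, dt, m):
    """One step of (3) for agent i.
    chi: internal states chi_{i,0..m}; diffs: differences
    z_hat_i[l_k^{ij}] - z_hat_j[l_k^{ij}] over the neighbours j."""
    new = np.empty_like(chi)
    for mu in range(m):
        # Taylor-like propagation of the higher-order states ...
        taylor = sum(dt**nu / factorial(nu) * chi[mu + nu] for nu in range(m - mu + 1))
        # ... plus the sliding-mode correction over the neighbour differences.
        new[mu] = taylor + dt * g[mu] * sum(F(d, mu, m) for d in diffs)
    new[m] = chi[m] + dt * g[m] * sum(F(d, m, m) for d in diffs)
    return new
```

The local output would then be $\hat{z}_i[k] = z_i[k] - \chi_{i,0}[k]$, shared with neighbours only when the triggering rule fires.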
To ensure the convergence of (3), we require the following assumption for the signals z i ( t ) , regardless of the sampling period Δ :
Assumption A1.
Given $L > 0$, it follows that:
$$\left|\bar{z}^{(m+1)}(t) - z_i^{(m+1)}(t)\right| \leq L, \quad \forall t \geq 0, \ \forall i \in \mathcal{V}$$
Theorem 1.
Let $\mathcal{G}$ be a connected graph and let Assumption 1 hold for some fixed $L > 0$. Then, for any $\Delta, \varepsilon > 0$, there exist sufficiently high gains $g_0, \dots, g_m$ and some $K \in \mathbb{N}$ such that, if $\sum_{i=1}^{N}\chi_{i,\mu}[0] = 0, \ \forall \mu \in \{0, \dots, m\}$, then:
$$|\hat{z}_i[k] - \bar{z}[k]| \leq c\max\{\Delta^{m+1}, \varepsilon N\}, \quad \forall k \geq K$$
for some constant c > 0 .
The condition $\sum_{i=1}^{N}\chi_{i,\mu}[0] = 0$ in the previous theorem has been used in other works in the literature [4,22] and is trivially satisfied by setting $\chi_{i,\mu}[0] = 0$. The proof of Theorem 1 is provided in Section 3.1.
In addition, if samples of the derivatives $z_i^{(\mu)}[k], \ \mu \in \{0, \dots, m\}$, are also available, either by assumption or through a differentiation algorithm [26], then additional outputs can be obtained as:
$$\hat{z}_{i,\mu}[k] := z_i^{(\mu)}[k] - \chi_{i,\mu}[k] \tag{4}$$
which estimate the derivatives $\{\bar{z}^{(\mu)}(t)\}_{\mu=0}^{m}$ at $t = k\Delta$. The following result regarding the outputs (4) is obtained as a consequence of the proof of Theorem 1 in Section 3.1.
Corollary 1.
Consider the assumptions of Theorem 1 and the outputs in (4). Then, in addition to the conclusion of Theorem 1, there also exist $c_0, \dots, c_m > 0$ such that:
$$\left|\hat{z}_{i,\mu}[k] - \bar{z}^{(\mu)}[k]\right| \leq c_{\mu}\max\left\{\Delta^{m-\mu+1}, (\varepsilon N)^{\frac{m-\mu+1}{m+1}}\right\}, \quad \forall k \geq K$$
Remark 1.
While the performance of our proposal in (3) is not exact as a result of Theorem 1, we say that it is consistently exact. This means that exact convergence is obtained as $(\Delta, \varepsilon) \to (0, 0)$, even under persistently varying local signals $\{z_i(t)\}_{i=1}^{N}$, recovering the performance of the continuous-time protocol in [22].
Remark 2.
Note that if $m = 0$, our proposal in (3) is similar to the approaches in [20,21], which use FOSM. In this case, considering $\varepsilon = 0$, the error bound of the protocol is proportional to $\Delta$ as a result of Theorem 1, as expected due to chattering. Moreover, by increasing the order $m > 0$ of the sliding modes in (3), the effect of chattering is attenuated, since the error bound becomes proportional to $\Delta^{m+1}$, which is smaller than $\Delta$ for small sampling steps.
Remark 3.
In light of Theorem 1 and Corollary 1, it can be concluded that the event-triggering threshold $\varepsilon$ affects the maximum error bound. Hence, its value can be tuned for a particular application according to the magnitude of the admissible consensus error. Moreover, recalling (2), the parameter $\varepsilon$ represents the admissible error one node can have in its knowledge of its neighbours' estimates. For this reason, $\varepsilon$ should be small enough to guarantee that the magnitude of the difference $|\hat{z}_i[k] - \hat{z}_i[\ell_{k-1}^{ij}]|$ is acceptable in comparison to the amplitude of the signal $\bar{z}(t)$, according to the application requirements.
Remark 4.
Theorem 1 ensures the existence of sufficiently high gains g 0 , , g m such that the convergence of (3) is obtained. However, the proof of the theorem reveals that these gains can be extracted precisely from a continuous-time EDCHO protocol. Hence, we refer the reader to [22], where a design procedure for the EDCHO gains g 0 , , g m is described in detail based on well-established techniques.

3. Protocol Convergence

In this Section, we provide a formal analysis of the convergence of our consensus protocol. To do so, we introduce three technical lemmas before providing the proof of Theorem 1 in Section 3.1.
First, define $\chi_{\mu}[k] = [\chi_{1,\mu}[k], \dots, \chi_{N,\mu}[k]]^{\top}$ and $z(t) = [z_1(t), \dots, z_N(t)]^{\top}$. Moreover, define:
$$\hat{z}_{i,\mu}[k] = z_i^{(\mu)}[k] - \chi_{i,\mu}[k]$$
where $\hat{z}_i[k] \equiv \hat{z}_{i,0}[k]$, as well as $\hat{z}_{\mu}[k] = [\hat{z}_{1,\mu}[k], \dots, \hat{z}_{N,\mu}[k]]^{\top}$. In addition, note that $\hat{z}_i[k] = \hat{z}_i[\ell_k^{ij}] + \varepsilon_i[k]$, where $|\varepsilon_i[k]| \leq \varepsilon$ by construction from (2). Hence, $\hat{z}_0[\ell_k^{ij}] = \hat{z}_0[k] - \varepsilon[k]$ with $\varepsilon[k] = [\varepsilon_1[k], \dots, \varepsilon_N[k]]^{\top}$ and $\|\varepsilon[k]\| \leq \varepsilon N$.
Thus, the protocol in (3) can be written in vector form as:
$$\begin{aligned} \chi_{\mu}[k+1] &= \sum_{\nu=0}^{m-\mu}\frac{\Delta^{\nu}}{\nu!}\chi_{\mu+\nu}[k] + \Delta g_{\mu}D\left\lceil D^{\top}\big(\hat{z}_{0}[k] - \varepsilon[k]\big)\right\rfloor^{\frac{m-\mu}{m+1}}, \quad \mu \in \{0, \dots, m-1\} \\ \chi_{m}[k+1] &= \chi_{m}[k] + \Delta g_{m}D\left\lceil D^{\top}\big(\hat{z}_{0}[k] - \varepsilon[k]\big)\right\rfloor^{0} \end{aligned} \tag{5}$$
for which the condition $\sum_{i=1}^{N}\chi_{i,\mu}[0] = 0$ in Theorem 1 can be written as $\mathbb{1}^{\top}\chi_{\mu}[0] = 0$.
The purpose of the internal variables $\chi_{\mu}[k]$ is to approximate the disagreement between the local signals, expressed as $Pz^{(\mu)}[k]$ with $P = I_{N} - (1/N)U$, recalling that $U = \mathbb{1}\mathbb{1}^{\top}$, such that if $\chi_{\mu}[k] = Pz^{(\mu)}[k]$ exactly, then all the agents reach consensus towards:
$$\hat{z}_{\mu}[k] = z^{(\mu)}[k] - \chi_{\mu}[k] = (1/N)Uz^{(\mu)}[k] = \mathbb{1}\bar{z}^{(\mu)}[k]$$
The first step to show that $\chi_{\mu}[k]$ approximates $Pz^{(\mu)}[k]$ is to show that it is always orthogonal to the consensus direction, as established in the following result.
Lemma 1.
Let $\mathcal{G}$ be connected and let $\mathbb{1}^{\top}\chi_{\mu}[0] = 0$ hold $\forall \mu \in \{0, \dots, m\}$. Then, (5) satisfies $\mathbb{1}^{\top}\chi_{\mu}[k] = 0, \ \forall k \geq 0, \ \forall \mu \in \{0, \dots, m\}$.
Proof. 
The proof follows by strong induction. Note that $\mathbb{1}^{\top}\chi_{m}[k+1] = \mathbb{1}^{\top}\chi_{m}[k]$ from (5), since $\mathbb{1}^{\top}D = 0$. Hence, $\mathbb{1}^{\top}\chi_{m}[k] = 0, \ \forall k \geq 0$, due to the initial condition $\mathbb{1}^{\top}\chi_{m}[0] = 0$. Now, use this as an induction base for the rest of the $\mu \in \{0, \dots, m-1\}$. Assume from the induction hypothesis that $\mathbb{1}^{\top}\chi_{\mu+\nu}[k] = 0, \ \forall k \geq 0$, with $\nu \in \{1, \dots, m-\mu\}$ and $\mu < m$. Then, from (5):
$$\mathbb{1}^{\top}\chi_{\mu}[k+1] = \mathbb{1}^{\top}\chi_{\mu}[k] + \sum_{\nu=1}^{m-\mu}\frac{\Delta^{\nu}}{\nu!}\mathbb{1}^{\top}\chi_{\mu+\nu}[k] = \mathbb{1}^{\top}\chi_{\mu}[k]$$
from which it follows that $\mathbb{1}^{\top}\chi_{\mu}[k] = 0, \ \forall k \geq 0$, similarly as before, completing the proof. □
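Lemma 1 can also be checked numerically. The following sketch (graph, gains, and inputs chosen arbitrarily by us for illustration) iterates the vector form (5) with m = 1 and verifies that the sums over agents of each internal state remain zero, since the columns of the incidence matrix sum to zero:

```python
import numpy as np

# Path graph with N = 4 agents: incidence matrix D for an arbitrary orientation.
D = np.array([[ 1,  0,  0],
              [-1,  1,  0],
              [ 0, -1,  1],
              [ 0,  0, -1]], dtype=float)
N, m, dt, g = 4, 1, 0.01, [2.0, 1.0]

def sgn_pow(u, a):
    """Elementwise |u|^a * sign(u), reducing to sign(u) when a = 0."""
    return np.sign(u) if a == 0 else np.sign(u) * np.abs(u) ** a

rng = np.random.default_rng(0)
chi = rng.normal(size=(m + 1, N))
chi -= chi.mean(axis=1, keepdims=True)      # enforce sum_i chi_{i,mu}[0] = 0

for k in range(100):
    z0 = rng.normal(size=N)                 # stand-in for z_hat_0[k] - eps[k]
    new = np.empty_like(chi)
    new[0] = chi[0] + dt * chi[1] + dt * g[0] * D @ sgn_pow(D.T @ z0, m / (m + 1))
    new[1] = chi[1] + dt * g[1] * D @ sgn_pow(D.T @ z0, 0)
    chi = new

assert np.abs(chi.sum(axis=1)).max() < 1e-10   # invariance of the consensus direction
```

Because every update enters through D, whose columns sum to zero, the sums are preserved regardless of the (here random) input, mirroring the induction argument above.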
Now, let the error be $\tilde{\chi}_{\mu}[k] = \chi_{\mu}[k] - Pz^{(\mu)}[k]$ and recall from Corollary A2 in Appendix B that $\tilde{z}^{(\mu)}[k] = Pz^{(\mu)}[k]$ can be expanded as:
$$\tilde{z}^{(\mu)}[k+1] = \sum_{\nu=0}^{m-\mu}\frac{\Delta^{\nu}}{\nu!}\tilde{z}^{(\mu+\nu)}[k] + r_{\mu}[k] \tag{6}$$
where $r_{\mu}[k] \in \big(\Delta^{m-\mu+1}/(m-\mu+1)!\big)[-L,L]^{N}$. Therefore, combining (5) and (6), the error dynamics for $\tilde{\chi}_{\mu}[k]$ can be written as:
$$\begin{aligned} \tilde{\chi}_{\mu}[k+1] &= \sum_{\nu=0}^{m-\mu}\frac{\Delta^{\nu}}{\nu!}\tilde{\chi}_{\mu+\nu}[k] - \Delta g_{\mu}D\left\lceil D^{\top}\big(\tilde{\chi}_{0}[k] + \varepsilon[k]\big)\right\rfloor^{\frac{m-\mu}{m+1}} + r_{\mu}[k] \\ \tilde{\chi}_{m}[k+1] &= \tilde{\chi}_{m}[k] - \Delta g_{m}D\left\lceil D^{\top}\big(\tilde{\chi}_{0}[k] + \varepsilon[k]\big)\right\rfloor^{0} + r_{m}[k] \end{aligned} \tag{7}$$
using $D^{\top}(\hat{z}_{0}[k] - \varepsilon[k]) = -D^{\top}(\tilde{\chi}_{0}[k] + \varepsilon[k])$ due to $D^{\top} = D^{\top}P$. Rearranging (7) leads to:
$$\begin{aligned} \tilde{\chi}_{\mu}[k+1] &= \tilde{\chi}_{\mu}[k] + \Delta\left(\sum_{\nu=1}^{m-\mu}\frac{\Delta^{\nu-1}}{\nu!}\tilde{\chi}_{\mu+\nu}[k] - g_{\mu}D\left\lceil D^{\top}\big(\tilde{\chi}_{0}[k] + \varepsilon[k]\big)\right\rfloor^{\frac{m-\mu}{m+1}} + r_{\mu}[k]/\Delta\right) \\ \tilde{\chi}_{m}[k+1] &= \tilde{\chi}_{m}[k] + \Delta\left(-g_{m}D\left\lceil D^{\top}\big(\tilde{\chi}_{0}[k] + \varepsilon[k]\big)\right\rfloor^{0} + r_{m}[k]/\Delta\right) \end{aligned} \tag{8}$$
Hence, all trajectories of (8) satisfy the inclusions:
$$\begin{aligned} \tilde{\chi}_{\mu}[k+1] &\in \tilde{\chi}_{\mu}[k] + \Delta F_{\mu}(\tilde{\chi}[k]; \rho) \\ \tilde{\chi}_{m}[k+1] &\in \tilde{\chi}_{m}[k] + \Delta F_{m}(\tilde{\chi}[k]; \rho) \end{aligned} \tag{9}$$
with $\tilde{\chi}[k] = [\tilde{\chi}_{0}[k]^{\top}, \dots, \tilde{\chi}_{m}[k]^{\top}]^{\top}$, $\rho = \max\{\Delta, (\varepsilon N)^{1/(m+1)}\}$, and the set-valued functions:
$$\begin{aligned} F_{\mu}(\tilde{\chi}[k]; \rho) &= \sum_{\nu=1}^{m-\mu}\frac{\rho^{\nu-1}[-1,1]}{\nu!}\tilde{\chi}_{\mu+\nu}[k] - g_{\mu}D\left\lceil D^{\top}\big(\tilde{\chi}_{0}[k] + \rho^{m+1}[-1,1]^{N}\big)\right\rfloor^{\frac{m-\mu}{m+1}} + \frac{L\rho^{m-\mu}}{(m-\mu+1)!}[-1,1]^{N} \\ F_{m}(\tilde{\chi}[k]; \rho) &= -g_{m}D\left\lceil D^{\top}\big(\tilde{\chi}_{0}[k] + \rho^{m+1}[-1,1]^{N}\big)\right\rfloor^{0} + [-L,L]^{N} \end{aligned} \tag{10}$$
To show the convergence of the error system in (9), we write an equivalent continuous-time system as follows. First, we extend the solutions of (7) to $\tilde{\chi}_{\mu}(t)$ for any $t \in [k\Delta, (k+1)\Delta]$ according to:
$$\begin{aligned} \tilde{\chi}_{\mu}(t) &= \tilde{\chi}_{\mu}[k] + (t - k\Delta)v_{\mu}[k] \\ \tilde{\chi}_{m}(t) &= \tilde{\chi}_{m}[k] + (t - k\Delta)v_{m}[k] \end{aligned} \tag{11}$$
where $v_{\mu}[k]$ is an arbitrary vector $v_{\mu}[k] \in F_{\mu}(\tilde{\chi}[k]; \rho)$ such that (11) complies with (9) at $t = (k+1)\Delta$. Hence, it follows that:
$$\dot{\tilde{\chi}}_{\mu}(t) \in F_{\mu}(\tilde{\chi}[k]; \rho), \quad \forall \mu \in \{0, \dots, m\}, \ \forall t \in [k\Delta, (k+1)\Delta) \tag{12}$$
Finally, note that $\delta(t) = t - k\Delta$ satisfies $0 \leq \delta(t) \leq \Delta$ and hence $\tilde{\chi}[k] = \tilde{\chi}(t - \delta(t)) \in \tilde{\chi}(t - \Delta[0,1]) \subseteq \tilde{\chi}(t - \rho[0,1])$. Thus, we have:
$$\dot{\tilde{\chi}}_{\mu}(t) \in F_{\mu}\big(\tilde{\chi}(t - \rho[0,1]); \rho\big), \quad \forall \mu \in \{0, \dots, m\}, \ \forall t \geq 0 \tag{13}$$
In the following, we derive some important properties for (13).
Lemma 2.
Given any $\eta > 0$, the differential inclusion (13) is invariant under the transformation:
$$\tilde{\chi}'_{\mu}(t') = \eta^{m-\mu+1}\tilde{\chi}_{\mu}(t), \quad \rho' = \eta\rho, \quad t' = \eta t \tag{14}$$
Proof. 
Write $F_{\mu}(\tilde{\chi}; \rho) = F_{\mu}(M_{\eta}\tilde{\chi}'; \rho'/\eta)$ with $M_{\eta} = \operatorname{diag}(\eta^{-m-1}\mathbb{1}, \dots, \eta^{-1}\mathbb{1})$ and note that:
$$\begin{aligned} F_{\mu}(M_{\eta}\tilde{\chi}'; \rho'/\eta) &= \sum_{\nu=1}^{m-\mu}\frac{\eta^{1-\nu}(\rho')^{\nu-1}[-1,1]}{\nu!}\eta^{-m+\mu+\nu-1}\tilde{\chi}'_{\mu+\nu} - g_{\mu}D\left\lceil D^{\top}\big(\eta^{-m-1}\tilde{\chi}'_{0} + \eta^{-m-1}(\rho')^{m+1}[-1,1]^{N}\big)\right\rfloor^{\frac{m-\mu}{m+1}} + \frac{L\eta^{-m+\mu}(\rho')^{m-\mu}}{(m-\mu+1)!}[-1,1]^{N} \\ &= \eta^{\mu-m}\sum_{\nu=1}^{m-\mu}\frac{(\rho')^{\nu-1}[-1,1]}{\nu!}\tilde{\chi}'_{\mu+\nu} - \eta^{\mu-m}g_{\mu}D\left\lceil D^{\top}\big(\tilde{\chi}'_{0} + (\rho')^{m+1}[-1,1]^{N}\big)\right\rfloor^{\frac{m-\mu}{m+1}} + \eta^{\mu-m}\frac{L(\rho')^{m-\mu}}{(m-\mu+1)!}[-1,1]^{N} \\ &= \eta^{\mu-m}F_{\mu}(\tilde{\chi}'; \rho') \end{aligned}$$
for all $\mu \in \{0, \dots, m-1\}$. The same reasoning applies to the case $\mu = m$. Now, compute the dynamics of the transformed variables in the new time $t'$:
$$\frac{\mathrm{d}\tilde{\chi}'_{\mu}}{\mathrm{d}t'} = \frac{\mathrm{d}\tilde{\chi}'_{\mu}}{\mathrm{d}t}\frac{\mathrm{d}t}{\mathrm{d}t'} = \eta^{m-\mu+1}\dot{\tilde{\chi}}_{\mu}\eta^{-1} \in \eta^{m-\mu}F_{\mu}\big(\tilde{\chi}(t - \rho[0,1]); \rho\big) = F_{\mu}\big(\tilde{\chi}'(t' - \rho'[0,1]); \rho'\big)$$
which completes the proof. □
Lemma 3.
Let Assumption 1 hold. Then, if $\rho = 0$, there exist sufficiently high gains $g_0, \dots, g_m > 0$ such that (13) is finite-time stable towards the origin.
Proof. 
The proof follows by noting that:
$$F_{\mu}(\tilde{\chi}(t); 0) = \tilde{\chi}_{\mu+1}(t) - g_{\mu}D\left\lceil D^{\top}\tilde{\chi}_{0}(t)\right\rfloor^{\frac{m-\mu}{m+1}}$$
for $\mu \in \{0, \dots, m-1\}$ and:
$$F_{m}(\tilde{\chi}(t); 0) = -g_{m}D\left\lceil D^{\top}\tilde{\chi}_{0}(t)\right\rfloor^{0} + [-L,L]^{N}$$
Hence, (13) with ρ = 0 is equivalent to (A2) in Appendix A, which is finite-time stable towards the origin according to Corollary A1, completing the proof. □
With these results, we are ready to show Theorem 1.

3.1. Proof of Theorem 1

We use similar reasoning as in Theorem 2 of [27], where the asymptotics in Theorem 1 are obtained through an argument of continuity of solutions of differential inclusions with respect to the parameter $\rho = \max\{\Delta, (\varepsilon N)^{1/(m+1)}\}$ [28]. First, apply the transformation:
$$\tilde{\chi}'_{\mu}(t') = \eta^{m-\mu+1}\tilde{\chi}_{\mu}(t), \quad \rho' = \eta\rho, \quad t' = \eta t$$
and note that the dynamics of the new variables comply with:
$$\frac{\mathrm{d}\tilde{\chi}'_{\mu}}{\mathrm{d}t'} \in F_{\mu}\big(\tilde{\chi}'(t' - \rho'[0,1]); \rho'\big), \quad \forall \mu \in \{0, \dots, m\} \tag{15}$$
according to Lemma 2. Lemma 3 implies that if $\rho' = 0$, then (15) complies with $\tilde{\chi}'_{\mu}(t') = 0, \ \forall t' \geq T'$ for some $T' > 0$. Hence, by the continuity of the solutions of the inclusion (15), there exists a sufficiently small $\rho' > 0$ such that (15) complies with $\|\tilde{\chi}'_{\mu}(t')\| \leq c'_{\mu}, \ \forall t' \geq T'$ for some $T', c'_{\mu} > 0$.
Using $\eta = \rho'/\rho$, the original variables comply, for $t \geq T = T'/\eta$, with:
$$\|\tilde{\chi}_{\mu}(t)\| = \eta^{-(m-\mu+1)}\|\tilde{\chi}'_{\mu}(t')\| \leq c'_{\mu}\eta^{-(m-\mu+1)} = c_{\mu}\rho^{m-\mu+1}$$
with $c_{\mu} = c'_{\mu}(\rho')^{-(m-\mu+1)}$, which is a constant since $\rho'$ is fixed. Therefore, $\|\tilde{\chi}_{0}(t)\| \leq c_{0}\rho^{m+1}$. This means that:
$$\hat{z}_{0}(t) = z(t) - \chi_{0}(t) = z(t) - \tilde{\chi}_{0}(t) - Pz(t) = \mathbb{1}\bar{z}(t) - \tilde{\chi}_{0}(t)$$
and, thus, $\|\hat{z}_{0}(t) - \mathbb{1}\bar{z}(t)\| \leq c_{0}\rho^{m+1}$. Consequently, evaluating the previous inequality component-wise at $t = k\Delta$ yields $|\hat{z}_i[k] - \bar{z}[k]| \leq c\rho^{m+1} = c\max\{\Delta^{m+1}, \varepsilon N\}, \ \forall k \geq K := T/\Delta$ for some $c > 0$, where we used $\hat{z}_i[k] \equiv \hat{z}_{i,0}[k]$, completing the proof of Theorem 1. A similar argument shows Corollary 1 for the additional outputs in (4).

4. Numerical Experiments

To validate our proposal, we show experiments in a simulation scenario. The communication network for the $N = 10$ agents is described by the graph $\mathcal{G}$ shown in Figure 2. As an example, we consider $m = 3$. For this case, we set the gains $\{g_{\mu}\}_{\mu=0}^{m} = \{48, 200, 280, 160\}$ for all agents, chosen as established in Remark 4. We have arbitrarily chosen the initial conditions $\{\chi_{i,0}[0]\}_{i=1}^{N-1} = \{7.27, 2.52, 1.63, 7.68, 8.86, 9.60, 4.30, 1.90, 9.24\}$ and $\chi_{N,0}[0] = -\sum_{i=1}^{N-1}\chi_{i,0}[0]$, as well as $\chi_{i,\mu}[0] = 0$ for $\mu > 0$, to ensure $\sum_{i=1}^{N}\chi_{i,\mu}[0] = 0, \ \forall \mu \in \{0, \dots, m\}$. The local reference signal of each agent is $z_i(t) = A_i\cos(\omega_i t + \phi_i)$. For the sake of generality, we have arbitrarily chosen the following amplitudes $A_i$, frequencies $\omega_i$, and phases $\phi_i$, respectively:
$$\begin{aligned} \{A_i\}_{i=1}^{N} &= \{33.61, 21.58, 34.72, 12.84, 0.49, 26.61, 13.97, 47.31, 45.32, 19.63\} \\ \{\omega_i\}_{i=1}^{N} &= \{1.52, 1.15, 1.50, 1.29, 0.25, 1.01, 0.69, 0.18, 0.30, 0.40\} \\ \{\phi_i\}_{i=1}^{N} &= \{0.02, 0.67, 0.84, 0.97, 0.06, 0.45, 0.58, 0.69, 0.72, 0.65\} \end{aligned}$$
Our goal is to show that we can achieve consistently exact dynamic consensus on the average of the reference signals, z ¯ ( t ) , as defined in Remark 1. For our protocol, the error due to the discretization and event-triggered communication is in an arbitrarily small neighbourhood of zero. The desired neighbourhood can be tuned with Δ and ε , according to Theorem 1. Thus, we have tested our proposal with different sampling times Δ and event thresholds ε .
Figure 3 shows the consensus results for our protocol with a sampling period of $\Delta = 10^{-4}$ and no event-triggered communication ($\varepsilon = 0$) as an approximation of the ideal continuous-time case with full communication. For the figures, we define $\hat{z}_i(t) = \hat{z}_i[k]$ for $t \in [k\Delta, (k+1)\Delta)$. The desired dynamic average of the signals, $\bar{z}(t)$, is plotted as a dashed red line. It can be observed that, after some time, the consensus error falls close to zero.
When we add event-triggered communication, the consensus error increases with the value of the triggering threshold $\varepsilon$. This can be observed in Figure 4 and Figure 5, which show the results with $\varepsilon = 0.01$ and $\varepsilon = 1$, respectively, keeping $\Delta = 10^{-4}$. Note that, especially for a small $\varepsilon$, the event-triggered error is more apparent at the instants where the measured signal changes its slope. This is due to the shape of the event-triggering condition (2), since events are triggered when the current estimate differs from the last transmitted one by more than the threshold $\varepsilon$. Around a change in slope, the signal is locally flat, so this difference remains below the triggering threshold for a longer time; hence, a more noticeable trigger-induced error appears.
The sampling period $\Delta$ also affects the magnitude of the steady-state consensus error. Figure 6 shows the consensus results with $\Delta = 10^{-3}$, $\varepsilon = 0.01$. Compared to Figure 4, it is apparent that the consensus error increases with the sampling period. Note that the oscillating shape of the error is due to the sinusoidal shape of the signal of interest.
The increase in error due to ε is common in event-triggered setups where there is a trade-off between the number of triggered events and the quality of the results. When we set a higher value of ε , we achieve a reduction in communication between agents since information is only exchanged at event instants in the corresponding links, at the cost of an increase in the resulting error. To further analyse the said trade-off, Figure 7 shows the effect that the reduction in communication has on the consensus error. This figure shows that the communication through the network of agents can be significantly reduced with respect to the full communication case without producing a high increase in the steady-state consensus error. The communication rate has been computed as follows:
$$\text{Communication rate}\ (\%) = \frac{\text{messages sent between agents}}{\text{maximum possible messages}} \times 100$$
A communication rate of 100% represents full communication, where each agent transmits to all its neighbours at every sampling time. For our proposal, note that when an event is triggered in the link ( i , j ) E , a message is sent from i to its neighbour j and agent j replies with another message to agent i. Thus, for every event that is triggered, two messages are sent through the network. We have considered that when events are triggered in the links ( i , j ) and ( j , i ) simultaneously, only one message is sent in each direction.
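The message accounting just described can be written as a small helper (our own sketch of the computation, not code from the paper):

```python
def communication_rate(num_events, num_steps, num_links):
    """Each triggered event on a link costs two messages (the estimate and the
    reply); full communication would send two messages per link at every
    sampling step, which is the 100% reference."""
    sent = 2 * num_events
    possible = 2 * num_links * num_steps
    return 100.0 * sent / possible
```

For instance, 5 events over 100 sampling steps on a 10-link network would correspond to a 0.5% communication rate.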
A similar trade-off is found for the sampling time. Figure 7 also shows the evolution of the consensus error for different values of the sampling period. Note that, if event-triggered communication is used ($\varepsilon > 0$), the value of $\varepsilon$ has a higher impact on the steady-state error than the sampling period as $\Delta \to 0$. In that case, the steady-state error does not asymptotically tend to zero but rather remains in a neighbourhood of zero whose size is bounded according to $\varepsilon$.
To compare with other approaches in the literature, we have obtained consensus results for the same simulation scenario using a linear protocol and a First-Order Sliding Mode (FOSM) protocol, two existing options for dynamic consensus in the state of the art, as described in the following. Note that, with some adaptations, the linear and FOSM protocols can be obtained as particular cases of our protocol. To implement the linear protocol, we use (3) with $F_{\mu}(\bullet) = \bullet$, i.e., replacing the sliding-mode nonlinearity by the identity, which is a similar setting as in [4,9]. For the FOSM protocol, we use the equations in (3) setting $m = 0$, which corresponds to one of the proposals in [20]. For both cases, we set $\Delta = 10^{-4}$ and $\varepsilon = 0$ to simulate the case with full communication.
Figure 8 compares the consensus error for the linear and FOSM protocols against our proposal. The FOSM protocol can eliminate the steady-state consensus error in continuous time implementations, but the discretized version suffers from chattering. In contrast, we can see that our method reduces chattering, improving the steady-state error. The linear approach cannot eliminate the permanent consensus error in the estimates of the dynamic average z ¯ ( t ) , regardless of the value of Δ (note that the shape of the error is due to the sinusoidal nature of the signals used in the experiment).
We show the numerical results in Table 1 to summarize our comparison. We include the maximum steady-state error obtained for the linear and FOSM protocols and for our method. We include the results with full communication ( ε = 0 ) for comparison and the values obtained with our event-triggered setup ( ε = 0.01 ). To highlight the advantage of event-triggered communication, we include the level of communication in the network of each approach. The results from Table 1 show that our proposal can significantly reduce the steady-state error compared to the linear and FOSM protocols. Moreover, including event-triggered communication instead of having each agent transmit to all neighbours at every sampling instant produces a drastic reduction in the amount of communication through the network while still performing better than the linear and FOSM protocols with full communication.
Finally, we showed in Corollary 1 that our proposal can also recover the derivatives of the signal of interest with some bounded error. We compare the results obtained for the derivatives against those of a linear protocol. Recall that the FOSM protocol uses $m = 0$ in (3) and, therefore, only computes the internal state $\chi_{i,0}[k]$, without the corresponding internal states for the derivatives. Although the linear approach can recover the derivatives of the desired average signal, our proposal is more accurate, as shown in Figure 9 and Figure 10. With our protocol, the derivatives are recovered with some bounded error due to the sampling and event-triggered communication. It can be seen in Figure 10 that this error is more apparent in high-order derivatives and that its magnitude can be tuned through the parameters $\Delta, \varepsilon$, as shown in Corollary 1. Hence, the improvement of our protocol with respect to the linear one is also evident.

5. Comparison to Related Work

In this section, we offer a qualitative comparison of our work with other methods in the literature. Our work is closely related to the EDCHO protocol presented in [22], which provides a continuous-time formulation of an exact dynamic consensus algorithm. Both works use HOSM to achieve exact convergence. However, since communication is discrete in practice, our work contributes a discrete-time formulation whose error vanishes as the sampling step approaches zero. Moreover, we have included event-triggered communication to alleviate the communication load in the network of agents.
Other exact dynamic consensus approaches appear in [20,21], which use FOSM. In the discrete-time consensus problem, FOSM protocols suffer from chattering, which induces a steady-state error proportional to the sampling step Δ. Using HOSM, we have shown in Theorem 1 that our protocol diminishes the effect of chattering: the error bound is proportional to Δ^{m+1}, with m being the order of the sliding modes. We showcased this improvement in the experimental results of Figure 8 and Table 1 in Section 4.
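To illustrate the chattering scaling in isolation, consider the following minimal sketch (a hypothetical one-dimensional FOSM tracker following a ramp under explicit Euler discretization, not the consensus protocol itself); the steady-state error band shrinks roughly linearly with Δ, in line with the FOSM behaviour described above:

```python
def fosm_tracking_error(delta, g=2.0, L=1.0, T=5.0):
    """Euler-discretized FOSM tracker x' = g*sign(z - x) of the ramp z(t) = L*t.
    Returns the maximum tracking error over the second half of the run,
    i.e., once the sliding phase has been reached."""
    sgn = lambda v: (v > 0) - (v < 0)
    steps = int(T / delta)
    x, worst = 0.0, 0.0
    for k in range(steps):
        z = L * k * delta
        x += delta * g * sgn(z - x)  # discretized sliding-mode update
        if k > steps // 2:
            worst = max(worst, abs(L * (k + 1) * delta - x))
    return worst

e_coarse = fosm_tracking_error(1e-3)  # chattering band on the order of delta
e_fine = fosm_tracking_error(1e-4)
print(e_coarse, e_fine)  # the error shrinks roughly 10x when delta shrinks 10x
```

Under an HOSM protocol, the analogous experiment would instead show the band shrinking as Δ^{m+1}, which is the improvement quantified in Theorem 1.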
Even though sliding modes provide exact convergence of the consensus algorithm in continuous-time implementations, several works still rely on linear protocols. A high-order, discrete-time implementation of linear dynamic consensus protocols appears in [9], whose performance, unlike ours, does not depend on the sampling step. Nonetheless, that approach requires the agents to communicate at every sampling instant, a feature our approach improves through event-triggered communication.
Many recent works featuring event-triggered schemes to reduce communication among the agents [13,15,29] attain a similar trade-off between consensus quality and communication rate. However, they rely on linear consensus protocols, which, for persistently varying signals, always results in a non-zero steady-state error that cannot be eliminated even in continuous-time implementations. Hence, even though our event-triggering mechanism is similar to those of other works, our HOSM approach achieves a vanishing error as Δ, ε become arbitrarily small.
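For concreteness, a send-on-delta trigger of the kind used by these schemes can be sketched as follows; the signal and names here are hypothetical stand-ins, chosen only to expose the trade-off between the threshold ε and the communication rate:

```python
import math

def simulate_trigger(eps, delta=1e-3, T=10.0):
    """Send-on-delta event trigger: transmit the local estimate only when it
    deviates from the last transmitted value by more than eps. Returns the
    fraction of sampling instants at which a transmission occurs."""
    steps = int(T / delta)
    last_sent = None
    transmissions = 0
    for k in range(steps):
        z_hat = math.sin(math.pi * k * delta)  # stand-in local estimate
        if last_sent is None or abs(z_hat - last_sent) > eps:
            last_sent = z_hat  # event: broadcast to the neighbours
            transmissions += 1
    return transmissions / steps

print(simulate_trigger(0.01), simulate_trigger(0.1))  # larger eps, fewer events
```

With a linear protocol, lowering the rate this way directly inflates the permanent error; with the HOSM protocol, the residual error can instead be driven down by shrinking Δ and ε together.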

6. Conclusions

We have proposed a dynamic consensus algorithm to estimate the dynamic average of a set of time-varying signals in a distributed fashion, where discrete sampling and event-triggered communication are adopted by construction to reduce the communication burden. The algorithm uses high-order sliding modes, resulting in exact convergence under continuous communication and sampling. We have shown that chattering is reduced in the discrete-time setting compared to the first-order sliding-mode approach. We have also shown the advantage of our proposal over linear protocols, since our protocol can recover the derivatives of the average signal. Finally, we have highlighted the substantial communication savings achieved by using event-triggered communication between the network agents while maintaining a good trade-off between error and communication. A formal convergence analysis and multiple numerical examples verify these advantages. For future work, we consider removing the constraint on the initial conditions, which would make the proposal robust to the connection and disconnection of agents from the network.

Author Contributions

Conceptualization, R.A.-L.; data curation, validation and experimentation, I.P.-S.; investigation and software, I.P.-S. and R.A.-L.; writing, review and editing, I.P.-S., R.A.-L. and C.S.; project administration, supervision and funding acquisition, C.S. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported via projects PID2021-124137OBI00 and TED2021-130224B-I00 funded by MCIN/AEI/10.13039/501100011033, by ERDF A way of making Europe and by the European Union NextGenerationEU/PRTR, by the Gobierno de Aragón under Project DGA T45-20R, by the Universidad de Zaragoza and Banco Santander, by the Consejo Nacional de Ciencia y Tecnología (CONACYT-Mexico) with grant number 739841, and by Spanish grant FPU20/03134.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data sharing is not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. The EDCHO Protocol

The EDCHO protocol, presented in [22], is a distributed algorithm designed to compute the average $\bar{z}(t) = (1/N)\sum_{i=1}^{N} s_i(t)$ and its first $m$ derivatives at each agent, where $s_i(t)$ are local time-varying signals. EDCHO can be written as:
$$
\begin{aligned}
\dot{x}_{i,\mu}(t) &= g_{\mu} \sum_{j \in \mathcal{N}_i} \left\lceil y_{i,0}(t) - y_{j,0}(t) \right\rfloor^{\frac{m-\mu}{m+1}} + x_{i,\mu+1}(t), && 0 \le \mu < m, \\
\dot{x}_{i,m}(t) &= g_{m} \sum_{j \in \mathcal{N}_i} \left\lceil y_{i,0}(t) - y_{j,0}(t) \right\rfloor^{0}, \\
y_{i,\mu}(t) &= s_i^{(\mu)}(t) - x_{i,\mu}(t),
\end{aligned} \tag{A1}
$$
where $\mathcal{N}_i$ is the index set of neighbours of agent $i$ and $\lceil v \rfloor^{a} := |v|^{a}\operatorname{sign}(v)$ denotes the signed power.
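As an illustration only (hypothetical gains, signals, and graph, not the tuned design of [22]), a forward-Euler discretization of (A1) can be sketched as follows. Over an undirected graph, the signed-power coupling terms are antisymmetric, so a zero-sum initialization $\sum_i x_{i,\mu}(0) = 0$ is preserved by the update:

```python
import math

def sig(v, a):
    """Signed power: |v|^a * sign(v), with sig(0, a) = 0."""
    return math.copysign(abs(v) ** a, v) if v != 0 else 0.0

def edcho_step(x, s_now, adj, g, dt):
    """One forward-Euler step of the EDCHO dynamics (A1).
    x[i][mu]  : internal states x_{i,mu} of agent i, mu = 0..m
    s_now[i]  : current sample of the local signal s_i(t)
    adj       : neighbour lists of an undirected communication graph
    g         : gains g_0..g_m"""
    N, m = len(x), len(g) - 1
    y0 = [s_now[i] - x[i][0] for i in range(N)]  # outputs y_{i,0}
    new = [row[:] for row in x]
    for i in range(N):
        coup = lambda a: sum(sig(y0[i] - y0[j], a) for j in adj[i])
        for mu in range(m):  # 0 <= mu < m
            new[i][mu] += dt * (g[mu] * coup((m - mu) / (m + 1)) + x[i][mu + 1])
        new[i][m] += dt * g[m] * coup(0.0)  # top state, mu = m
    return new

# Tiny example: 3 agents on a path graph, m = 1, illustrative gains.
adj = {0: [1], 1: [0, 2], 2: [1]}
g, dt = [4.0, 2.0], 1e-3
x = [[0.0, 0.0] for _ in range(3)]
for k in range(2000):
    x = edcho_step(x, [math.sin(k * dt + i) for i in range(3)], adj, g, dt)

# With zero-sum initial conditions, sum_i x_{i,mu} remains (numerically) zero,
# which is the invariant behind exact average tracking.
print(sum(row[0] for row in x), sum(row[1] for row in x))
```

The gains and step size above are placeholders; the actual design rules for $g_0, \dots, g_m$ are those of Section 6 in [22].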
Proposition A1
([22], adapted from Theorem 7). Let Assumption 1 hold for a given $L$, let $\sum_{i=1}^{N} x_{i,\mu}(0) = 0$ for $\mu \in \{0, \dots, m\}$, and let the communication network $\mathcal{G}$ be fixed and connected. Then, there exists a time $T > 0$, depending on the initial conditions $y_{i,\mu}(0)$ and on the gains $g_0, \dots, g_m > 0$, such that the trajectories of (A1) comply with:
$$
y_{i,\mu}(t) = \bar{z}^{(\mu)}(t), \quad \forall t \ge T, \ \mu \in \{0, \dots, m\}, \ i \in \{1, \dots, N\}
$$
Given a value of $L > 0$, the design rules for the gains $g_0, \dots, g_m > 0$ are detailed in Section 6 of [22]. Now, let $\chi_{\mu}(t) = [\chi_{1,\mu}(t), \dots, \chi_{N,\mu}(t)]^{\top}$, $\hat{z}_{\mu}(t) = [\hat{z}_{1,\mu}(t), \dots, \hat{z}_{N,\mu}(t)]^{\top}$ and $z(t) = [z_1(t), \dots, z_N(t)]^{\top}$. Hence, Proposition A1 implies that $\hat{z}_{\mu}(t) = \mathbb{1}\bar{z}^{(\mu)}(t), \forall t \ge T$, which, combined with $\hat{z}_{\mu}(t) = z^{(\mu)}(t) - \chi_{\mu}(t)$, implies $\chi_{\mu}(t) = P z^{(\mu)}(t), \forall t \ge T$ with $P = I - (1/N)U$, recalling that $U = \mathbb{1}\mathbb{1}^{\top}$. This means that $\tilde{\chi}_{\mu}(t) = \chi_{\mu}(t) - P z^{(\mu)}(t)$ reaches the origin in finite time and has dynamics:
$$
\begin{aligned}
\dot{\tilde{\chi}}_{\mu}(t) &= \tilde{\chi}_{\mu+1}(t) - g_{\mu} D^{\top} \left\lceil D \tilde{\chi}_{0}(t) \right\rfloor^{\frac{m-\mu}{m+1}}, && 0 \le \mu \le m-1, \\
\dot{\tilde{\chi}}_{m}(t) &\in [-L, L]^{N} - g_{m} D^{\top} \left\lceil D \tilde{\chi}_{0}(t) \right\rfloor^{0},
\end{aligned} \tag{A2}
$$
written as a differential inclusion since it applies to any $P z^{(m+1)}(t) \in [-L, L]^{N}$ under Assumption 1, and since $D \hat{z}_{0}(t) = D(z(t) - \chi_{0}(t)) = -D(\chi_{0}(t) - P z(t)) = -D \tilde{\chi}_{0}(t)$ due to $DP = D$. In this case, the solutions to (A2) are understood in the sense of Filippov [28], where we set $\lceil 0 \rfloor^{0} = [-1, 1]$. This leads to the following result.
Corollary A1.
Consider that the assumptions of Proposition A1 hold. Then, all trajectories of the differential inclusion (A2) reach the origin in finite time.

Appendix B. Taylor Theorem with Integral Remainder

Proposition A2
([30], Theorem 7.3.18, p. 217). Suppose that $f^{(1)}(t), \dots, f^{(n+1)}(t)$ exist for $t \in [\tau, \tau + \Delta]$ for some $\tau, \Delta > 0$, and that $f^{(n+1)}(t)$ is Riemann integrable on the same interval. Then, we have:
$$
f(\tau + \Delta) = \sum_{\nu=0}^{n} \frac{\Delta^{\nu}}{\nu!} f^{(\nu)}(\tau) + R \tag{A3}
$$
where the remainder is provided by:
$$
R = \frac{1}{n!} \int_{\tau}^{\tau + \Delta} f^{(n+1)}(t) \, (\tau + \Delta - t)^{n} \, dt \tag{A4}
$$
Corollary A2.
Consider an integer $m$ such that the assumptions of Proposition A2 hold for each component of a vector signal $\tilde{z}(t) \in \mathbb{R}^{N}$ with $n = m$. Moreover, assume that $\|\tilde{z}^{(m+1)}(t)\|_{\infty} \le L$ for all $t \in [k\Delta, (k+1)\Delta]$ and some $L > 0$. Then, we have:
$$
\tilde{z}^{(\mu)}[k+1] = \sum_{\nu=0}^{m-\mu} \frac{\Delta^{\nu}}{\nu!} \, \tilde{z}^{(\mu+\nu)}[k] + r_{\mu}[k] \tag{A5}
$$
where the remainder satisfies:
$$
r_{\mu}[k] \in \frac{L \, \Delta^{m-\mu+1}}{(m-\mu+1)!} \, [-1, 1]^{N} \tag{A6}
$$
Proof. 
First, let $f(t)$ be any component of $\tilde{z}^{(\mu)}(t)$ and set $n = m - \mu$, $\tau = k\Delta$ in Proposition A2 to recover (A3) with the vector of remainders:
$$
r_{\mu}[k] = \frac{1}{(m-\mu)!} \int_{k\Delta}^{(k+1)\Delta} \tilde{z}^{(m+1)}(t) \, \big((k+1)\Delta - t\big)^{m-\mu} \, dt \tag{A7}
$$
Next:
$$
\begin{aligned}
\left\| r_{\mu}[k] \right\|_{\infty} &= \frac{1}{(m-\mu)!} \left\| \int_{k\Delta}^{(k+1)\Delta} \tilde{z}^{(m+1)}(t) \, \big((k+1)\Delta - t\big)^{m-\mu} \, dt \right\|_{\infty} \\
&\le \frac{1}{(m-\mu)!} \int_{k\Delta}^{(k+1)\Delta} \left\| \tilde{z}^{(m+1)}(t) \right\|_{\infty} \big((k+1)\Delta - t\big)^{m-\mu} \, dt \\
&\le \frac{L}{(m-\mu)!} \int_{k\Delta}^{(k+1)\Delta} \big((k+1)\Delta - t\big)^{m-\mu} \, dt = \frac{L \, \Delta^{m-\mu+1}}{(m-\mu+1)!}
\end{aligned}
$$
which completes the proof. □
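The bound just proven is easy to check numerically. As a scalar sanity check ($N = 1$, with the hypothetical choice $\tilde{z}(t) = \sin t$, so every derivative is bounded by $L = 1$), the one-step Taylor prediction error stays below $L\Delta^{m-\mu+1}/(m-\mu+1)!$ for every $\mu$:

```python
import math

def taylor_step_error(mu, m, delta, t=0.7):
    """Error of predicting the mu-th derivative of sin at t + delta from its
    Taylor expansion of order m - mu around t, as in Corollary A2 (scalar case)."""
    d = lambda order, tt: math.sin(tt + order * math.pi / 2)  # k-th derivative of sin
    pred = sum(delta ** nu / math.factorial(nu) * d(mu + nu, t)
               for nu in range(m - mu + 1))
    return abs(d(mu, t + delta) - pred)

m, delta, L = 3, 0.05, 1.0
for mu in range(m + 1):
    bound = L * delta ** (m - mu + 1) / math.factorial(m - mu + 1)
    print(mu, taylor_step_error(mu, m, delta), bound)  # error <= bound for all mu
```

As expected from (A6), the error for low-order derivatives (small $\mu$) is much tighter, since the remainder exponent $m - \mu + 1$ is larger.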

References

  1. Aldana-Lopez, R.; Gomez-Gutierrez, D.; Aragues, R.; Sagues, C. Dynamic Consensus with Prescribed Convergence Time for Multileader Formation Tracking. IEEE Control Syst. Lett. 2022, 6, 3014–3019.
  2. Chen, F.; Ren, W. A Connection Between Dynamic Region-Following Formation Control and Distributed Average Tracking. IEEE Trans. Cybern. 2018, 48, 1760–1772.
  3. Ren, W.; Beard, R.W.; Atkins, E.M. A survey of consensus problems in multi-agent coordination. In Proceedings of the American Control Conference, Portland, OR, USA, 8–10 June 2005; Volume 3, pp. 1859–1864.
  4. Kia, S.S.; Van Scoy, B.; Cortes, J.; Freeman, R.A.; Lynch, K.M.; Martinez, S. Tutorial on Dynamic Average Consensus: The Problem, Its Applications, and the Algorithms. IEEE Control Syst. Mag. 2019, 39, 40–72.
  5. Sen, A.; Sahoo, S.R.; Kothari, M. Distributed Average Tracking With Incomplete Measurement Under a Weight-Unbalanced Digraph. IEEE Trans. Autom. Control 2022, 67, 6025–6037.
  6. Iqbal, M.; Qu, Z.; Gusrialdi, A. Resilient Dynamic Average-Consensus of Multiagent Systems. IEEE Control Syst. Lett. 2022, 6, 3487–3492.
  7. Peterson, J.D.; Yucelen, T.; Sarangapani, J.; Pasiliao, E.L. Active-Passive Dynamic Consensus Filters with Reduced Information Exchange and Time-Varying Agent Roles. IEEE Trans. Control Syst. Technol. 2020, 28, 844–856.
  8. Zhu, M.; Martínez, S. Discrete-time dynamic average consensus. Automatica 2010, 46, 322–329.
  9. Montijano, E.; Montijano, J.I.; Sagüés, C.; Martínez, S. Robust discrete time dynamic average consensus. Automatica 2014, 50, 3131–3138.
  10. Kia, S.S.; Cortés, J.; Martínez, S. Distributed convex optimization via continuous-time coordination algorithms with discrete-time communication. Automatica 2015, 55, 254–264.
  11. Kia, S.S.; Cortés, J.; Martínez, S. Distributed Event-Triggered Communication for Dynamic Average Consensus in Networked Systems. Automatica 2015, 59, 112–119.
  12. Zhao, Y.; Xian, C.; Wen, G.; Huang, P.; Ren, W. Design of Distributed Event-Triggered Average Tracking Algorithms for Homogeneous and Heterogeneous Multiagent Systems. IEEE Trans. Autom. Control 2022, 67, 1269–1284.
  13. Battistelli, G.; Chisci, L.; Selvi, D. A distributed Kalman filter with event-triggered communication and guaranteed stability. Automatica 2018, 93, 75–82.
  14. Yu, D.; Xia, Y.; Li, L.; Zhai, D.H. Event-triggered distributed state estimation over wireless sensor networks. Automatica 2020, 118, 109039.
  15. Rezaei, H.; Ghorbani, M. Event-triggered resilient distributed extended Kalman filter with consensus on estimation. Int. J. Robust Nonlinear Control 2022, 32, 1303–1315.
  16. Ge, X.; Han, Q.L.; Wang, Z. A dynamic event-triggered transmission scheme for distributed set-membership estimation over wireless sensor networks. IEEE Trans. Cybern. 2019, 49, 171–183.
  17. Wu, J.; Jia, Q.S.; Johansson, K.H.; Shi, L. Event-based sensor data scheduling: Trade-off between communication rate and estimation quality. IEEE Trans. Autom. Control 2013, 58, 1041–1046.
  18. Peng, C.; Li, F. A survey on recent advances in event-triggered communication and control. Inf. Sci. 2018, 457–458, 113–125.
  19. Vasiljevic, L.K.; Khalil, H.K. Error bounds in differentiation of noisy signals by high-gain observers. Syst. Control Lett. 2008, 57, 856–862.
  20. George, J.; Freeman, R.A. Robust Dynamic Average Consensus Algorithms. IEEE Trans. Autom. Control 2019, 64, 4615–4622.
  21. Ren, W.; Al-Saggaf, U.M. Distributed Kalman–Bucy Filter With Embedded Dynamic Averaging Algorithm. IEEE Syst. J. 2018, 12, 1722–1730.
  22. Aldana-López, R.; Aragüés, R.; Sagüés, C. EDCHO: High order exact dynamic consensus. Automatica 2021, 131, 109750.
  23. Aldana-López, R.; Aragüés, R.; Sagüés, C. REDCHO: Robust Exact Dynamic Consensus of High Order. Automatica 2022, 141, 110320.
  24. Perruquetti, W.; Barbot, J.P. Sliding Mode Control in Engineering; Marcel Dekker, Inc.: New York, NY, USA, 2002.
  25. Godsil, C.; Royle, G. Algebraic Graph Theory; Graduate Texts in Mathematics; Springer: Berlin/Heidelberg, Germany, 2001; Volume 207.
  26. Levant, A. Higher-order sliding modes, differentiation and output-feedback control. Int. J. Control 2003, 76, 924–941.
  27. Levant, A. Homogeneity approach to high-order sliding mode design. Automatica 2005, 41, 823–830.
  28. Filippov, A.F. Differential Equations with Discontinuous Righthand Sides; Mathematics and Its Applications (Soviet Series); Kluwer Academic Publishers: Dordrecht, The Netherlands, 1988; Volume 18.
  29. Tan, X.; Cao, J.; Li, X. Consensus of Leader-Following Multiagent Systems: A Distributed Event-Triggered Impulsive Control Strategy. IEEE Trans. Cybern. 2019, 49, 792–801.
  30. Rudin, W. Principles of Mathematical Analysis, 3rd ed.; McGraw-Hill: New York, NY, USA, 1976.
Figure 1. Block diagram for each agent i V . The internal states that the agent computes are updated using the last transmitted estimates of the agent and its neighbours j N i , which are denoted by z ^ i [ k i j ] and z ^ j [ k i j ] , respectively. The current estimate z ^ i [ k ] is computed using the internal state χ i , 0 [ k ] and the local signal z i [ k ] . This estimate is evaluated by the event trigger, which decides if the current value should be transmitted to the neighbours.
Figure 2. Graph G describing the communication network of the N = 10 agents.
Figure 3. Consensus results for our protocol with Δ = 10⁻⁴, ε = 0 (full communication). By using a small time step and communicating at every step, our protocol obtains highly precise results. (a) Consensus estimates of the average signal using our proposal. The desired value z̄(t) is plotted as a dashed red line. The estimates quickly converge to the average signal. (b) Consensus error of the estimates with respect to the average signal. The error converges to a neighbourhood of zero, which can be made arbitrarily small by tuning the sampling period.
Figure 4. Consensus error for our protocol with Δ = 10⁻⁴, ε = 0.01. The addition of event-triggered communication increases the steady-state consensus error with respect to the full communication case (see Figure 3), and the error can be tuned with ε.
Figure 5. Consensus results for our protocol with Δ = 10⁻⁴, ε = 1. Due to the increase in the event-triggering threshold, the error is higher than in the case with ε = 0.01, shown in Figure 4. (a) Consensus estimates of the average signal using our proposal. The desired value z̄(t) is plotted as a dashed red line. The estimates quickly converge to the average signal, but with an increased steady-state error due to the event-triggered communication. (b) Consensus error of the estimates with respect to the average signal. Increasing the event-triggering threshold ε causes a higher steady-state error.
Figure 6. Consensus error for our protocol with Δ = 10⁻³, ε = 0.01. Compared to the case with Δ = 10⁻⁴ shown in Figure 4, the consensus error has increased.
Figure 7. Trade-off between the consensus error and the communication rate (with fixed Δ = 10⁻⁴), and trade-off with the sampling period (with fixed ε = 0.01). As the triggering threshold ε increases, communication through the network is reduced at the cost of a higher consensus error. However, with the event-triggering mechanism, communication can be significantly reduced with respect to the full communication case (communication rate of 100%) for a relatively small increase in the steady-state error. The consensus error also increases with the sampling period Δ. When Δ → 0, the parameter ε has a higher impact than Δ on the magnitude of the steady-state error.
Figure 8. Comparison of the consensus error for the linear protocol, FOSM, and our protocol under full communication (Δ = 10⁻⁴, ε = 0). With the linear protocol, there is a permanent steady-state error, higher than for the other protocols. Both the FOSM and our protocol can eliminate the error in a continuous-time implementation, but in the discretized setup our method reduces the chattering-induced error with respect to the FOSM protocol.
Figure 9. Consensus results for the linear protocol with Δ = 10⁻⁴, ε = 0, showing the estimates of the derivatives of first (a), second (b), and third (c) order. The derivatives z̄^(1)(t), z̄^(2)(t), z̄^(3)(t) are plotted as dashed red lines. High-order derivatives are not accurately computed using a linear protocol.
Figure 10. Consensus results for our protocol with Δ = 10⁻⁴, ε = 0.001, showing the estimates of the derivatives of first (a), second (b), and third (c) order. The derivatives z̄^(1)(t), z̄^(2)(t), z̄^(3)(t) are plotted as dashed red lines. The error induced by discretization and event-triggered communication is more apparent in higher-order derivatives, but the estimates are greatly improved with respect to the linear protocol (compare to Figure 9).
Table 1. Comparison of numerical results for different protocols. Our proposal reduces the estimation error with respect to linear and FOSM protocols in the full communication case. Moreover, adding event-triggered communication to our proposal produces a significant reduction in communication, while still achieving better error values than other protocols.

|                            | Linear [4,9] | FOSM [20] | Ours (ε = 0) | Ours (ε = 0.01) |
|----------------------------|--------------|-----------|--------------|-----------------|
| Maximum steady-state error | 0.144        | 0.027     | 0.005        | 0.015           |
| Communication rate         | 100%         | 100%      | 100%         | 7.61%           |