Article

Mean Square Consensus of Nonlinear Multi-Agent Systems under Markovian Impulsive Attacks

1 School of Automation, Guangdong University of Technology, Guangzhou 510006, China
2 School of Intelligent Engineering, Shaoguan University, Shaoguan 512026, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2021, 11(9), 3926; https://doi.org/10.3390/app11093926
Submission received: 1 April 2021 / Revised: 22 April 2021 / Accepted: 22 April 2021 / Published: 26 April 2021
(This article belongs to the Topic Dynamical Systems: Theory and Applications)

Abstract

This paper focuses on the mean square consensus problem for a class of nonlinear multi-agent systems subject to stochastic impulsive deception attacks. The attacks are modeled as completely stochastic destabilizing impulses: the impulsive gains may follow arbitrary distributions, and the impulsive instants are governed by a Markovian process. Existing methods, which typically assume that only the gains are stochastic, have difficulty handling systems that involve both types of random variables, so estimating the joint influence of the gains and instants on the consensus problem is a key point of this paper. Based on the properties of stochastic processes, some sufficient conditions for solving the consensus problem are derived and several special cases are considered. Finally, a numerical example is given to illustrate the main results. Our results show that consensus can be obtained if impulsive attacks do not occur too frequently, and that the attacks can even promote system stability if their gains are below the defined threshold.

1. Introduction

With the development of hardware equipment and communication technology, the collaborative control problem has recently become the focus of studies of complex systems [1,2,3]. The consensus of multi-agent systems (MASs) plays an important role in the field of collective behaviors, owing to its excellent model assumptions and extensive application scenarios such as control of the formation of unmanned aerial vehicles [4], restoration of a power system [5], intelligent and sustainable supplier selection of supply chains [6], etc.
In real-world application scenarios, communication among agents may be affected by the environment or by various attacks, especially human-made malicious attacks, which are likely to destabilize the system. Attack scenarios can generally be classified into two main categories: denial-of-service (DoS) attacks [7,8,9,10] and deception attacks [11,12,13] (for more details, refer to [14]). DoS attacks make the data of sensors and controllers unavailable, resulting in packet delays or packet drops in the signal transmission. In contrast, deception attacks obtain and tamper with the transmitted data and the commands from controllers. DoS attacks can destroy communication topologies and damage the stability of the systems. Compared with DoS attacks, deception attacks are more moderate and harder to detect, quietly driving the system toward instability. Thus, deception attacks have attracted a lot of interest and have become one of our main concerns. Much research on deception attacks has been conducted in recent years [15,16,17,18,19]. For instance, considering the existence of false data injection attacks, the authors in [16] investigated the security issue in the state estimation problem of a networked control system. Similarly, in the presence of deception attacks, the authors of [19] studied secure synchronization of MASs by means of impulsive control. However, the above research relies on linearity or additivity assumptions and on specific attack models, resulting in narrow scopes of application.
Taking this restriction into account, models with greater simulation significance and practical application value were established by means of stochastic processes, especially the Bernoulli distribution [20,21,22,23,24], in which the gains of the attacks were assumed to be stochastic Bernoulli variables. For example, the authors in [21] investigated the consensus of linear MASs under deception attacks with stochastic Bernoulli variables. It is worth noting that the attacks mentioned above were described as discrete events. When the state of the system suffers instantaneous disturbances and experiences sudden changes at certain moments, the system may be under a so-called impulsive attack, that is, an attack that causes instantaneous changes to the state of the system. Models of impulsive attacks with stochastic gains may seem sufficient, but they neglect the attack instants. In actuality, the instants are also vital to the stability of the system, as we do not know a priori when attacks occur. Supposing that both the gains and the instants of the impulsive attacks are stochastic, the author of [25] first considered these two characteristics of an attack and derived some sufficient conditions for the almost sure stability of general Lipschitz-type nonlinear systems. However, the work in [25] still restricted the gains to the Bernoulli distribution and assumed that the influence of the attacks decreases monotonically, which are restrictive assumptions.
On the other hand, to capture the variability of the environment, a model relying on a Markovian chain is generally appropriate. With this in mind, consensus under Markovian switching topologies has become a hot issue regarding the performance of multi-agent systems with external disturbances. However, most existing methods for consensus under attacks, such as the works in [26,27], only considered cases where the topologies or the systems, rather than the attacks, possess Markovian properties. Although some authors did model attacks in a Markovian form, such as the works in [9,28], most of these works were concerned with DoS attacks, with the choice of random variable (such as the gains), or with the combined effects of attacks rather than with deception attacks. As a result, the works above do not fully consider the characteristics of the attack, and a general treatment of both the gains and the instants of deception attacks has not yet been provided.
With the discussion above, this paper aims to establish a more universal model for deception attacks and provides some sufficient conditions for the consensus of multi-agent systems. The contributions of this paper can be summed up as follows:
  • Taking destabilizing impulse into account, and based on the Markovian properties, a general model for impulsive deception attacks is established, where the gains do not obey any specific distribution and instants are assumed to obey the Markovian chain.
  • In addition, some special cases including both the characteristics of attacks and Markovian properties are considered.
The remainder of this paper is organized as follows. The main results about the consensus for system (1) are derived in Section 3 based on the novel model for the impulsive deception attacks introduced in Section 2, and a corresponding numerical example is provided in Section 4. The conclusion and some interesting topics of future work are given in Section 5.
Notation 1.
Let $\mathbb{R}$ be the set of real numbers and $\mathbb{N}$ be the set of positive integers. Let $\mathbb{R}^n$ be the $n$-dimensional Euclidean space; furthermore, $I_n$ is the $n$-dimensional identity matrix and $\mathbf{1}_n$ is the $n$-dimensional column vector whose entries are all one. Let $M$ be a matrix or vector, whose transpose is denoted by $M^T$ and whose induced norm (for a matrix) or Euclidean norm (for a vector) is denoted by $\|M\|$. Let $\lambda_{\max}(A)$ and $\lambda_{\min}(A)$ denote the largest and the smallest nonzero eigenvalue of the matrix $A$, respectively. A matrix $B$ is said to be symmetric positive definite if $B = B^T$ and $x^T B x > 0$ for any nonzero vector $x$. For any function $h: \mathbb{R} \to \mathbb{R}$, we define $h(t^+) = \lim_{\Delta t \to 0^+} h(t + \Delta t)$ and $h(t^-) = \lim_{\Delta t \to 0^-} h(t + \Delta t)$. In addition, let $(\Omega, \mathcal{F}, \mathcal{F}_{t_0}, \mathbb{P})$ be a complete probability space with the filtration $\mathcal{F}_{t_0}$ satisfying the usual conditions, i.e., it is right continuous and $\mathcal{F}_0$ contains all $\mathbb{P}$-null sets. Let $\mathbb{E}[\cdot]$ ($\mathbb{E}[\cdot \mid \cdot]$) denote the (conditional) expectation of a stochastic process.

2. Preliminaries

Before presenting the system models and the problem statement, we first give some necessary background on graph theory, which is used to describe the communication topology among the MASs. In this paper, we consider MASs with $N$ agents, whose topology can be represented by a digraph $\mathcal{G} = (\mathcal{V}, \mathcal{E}, \mathcal{A})$, where $\mathcal{V}$ is the set of vertices, $\mathcal{E} \subseteq \mathcal{V} \times \mathcal{V}$ is the set of edges, and $\mathcal{A} = [a_{ij}]_{N \times N}$ is called the adjacency matrix. When the $j$th agent sends information to the $i$th agent, there exists an edge between them and $a_{ij} = 1$; otherwise, $a_{ij} = 0$. Let the diagonal matrix $D = \mathrm{diag}\{\bar{d}_i\}$ be the in-degree matrix with $\bar{d}_i = \sum_{j=1}^{N} a_{ij}$, $i = 1, 2, \ldots, N$. Then, the Laplacian matrix $L = [l_{ij}]_{N \times N}$ can be defined as $L = D - \mathcal{A}$, where $l_{ij} = -a_{ij}$ for $i \neq j$ and $l_{ii} = \sum_{j \neq i}^{N} a_{ij}$.
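To make the construction concrete, the minimal sketch below builds the in-degree and Laplacian matrices from an adjacency matrix; the adjacency values are only a hypothetical placeholder, since the topology actually used later is the one in Figure 2.

```python
import numpy as np

# Hypothetical 4-agent digraph: a_ij = 1 means agent j sends information to agent i.
# (Placeholder values; the topology used in Section 4 is shown in Figure 2.)
A_adj = np.array([[0, 1, 0, 1],
                  [1, 0, 1, 0],
                  [0, 1, 0, 1],
                  [1, 0, 0, 0]])

D_in = np.diag(A_adj.sum(axis=1))   # in-degree matrix, d_i = sum_j a_ij
L = D_in - A_adj                    # Laplacian: l_ii = sum_{j!=i} a_ij, l_ij = -a_ij (i != j)

assert np.allclose(L.sum(axis=1), 0)  # each row of a graph Laplacian sums to zero
print(L)
```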
Let us consider general nonlinear multi-agent systems (NMASs) consisting of $N$ agents, each of which can be described as follows:
$\dot{x}_i(t) = A x_i(t) + B f(x_i(t)) + C u_i(t), \quad t \geq t_0,$
where $x_i(t) \in \mathbb{R}^n$ and $u_i(t) \in \mathbb{R}^n$ are the system state and the external control input, respectively. The function $f: \mathbb{R}^n \to \mathbb{R}^n$ denotes the nonlinear term and satisfies the well-known Lipschitz condition. $A$, $B$, and $C$ are constant matrices with appropriate dimensions.
Before presenting the controller considered in this paper, we first provide some explanations of the impulsive deception attacks so that readers can understand the design of the controller more clearly. The configuration of NMASs under impulsive deception attacks is shown in Figure 1. Generally, controllers generate control signals according to the states measured by sensors and then send them to the actuators. For MASs, data transmission is heavily dependent on the topology and on wireless transmission technology. If attackers launch data-modification or false-data attacks on the topology in a discrete-time manner, the so-called impulsive attacks, they cause instantaneous jumps in the transmitted signal.
Under the configuration reflected in Figure 1, in this paper, we consider the following common linear feedback controller under Markovian impulsive attacks:
$u_i(t) = K\sum_{j=1}^{N} a_{ij}\big(x_j(t) - x_i(t)\big) + \sum_{k=1}^{+\infty} d_k x_i(t)\,\delta(t - t_k),$
where $K \in \mathbb{R}$ is the feedback gain of the designed controller, $\mathcal{I} := \{t_k\}_{k=1}^{+\infty}$ denotes the impulsive sequence in Dirac form (i.e., through the function $\delta$), and $d_k$ denotes the impulsive gains. In addition, we assume that $x_i(t)$ is right-hand continuous at $t = t_k$, meaning that $x_i(t_k^+) = x_i(t_k)$. Since the destabilizing impulse is adopted to model the impulsive attacks here, one has $d_k > 0$ or $d_k < -2$. Note that the term $\sum_{k=1}^{+\infty} d_k x_i(t)\,\delta(t - t_k)$ represents the impulsive attacks.
Remark 1.
Let $T_i(t, t_0)$ and $\bar{N}_i(t, t_0)$ denote the total length of the impulsive intervals and the number of occurrences of the $i$th mode of impulses over the interval $[t_0, t]$, respectively. Hence, we have $\bar{N}(t, t_0) = \sum_{i=1}^{p} \bar{N}_i(t, t_0)$, which denotes the total number of impulses over the interval $[t_0, t]$.
Recall that converting the consensus problem into the stabilization problem of an error system with ideal dynamics is a common approach for multi-agent systems. Define the error state as $e_i(t) := x_i(t) - \bar{x}(t) = x_i(t) - \frac{1}{N}\sum_{i=1}^{N} x_i(t)$, which can also be written in stacked form as $e(t) = [e_1^T(t), e_2^T(t), \ldots, e_N^T(t)]^T = E x(t) = \big((I_N - \frac{1}{N}\mathbf{1}_N\mathbf{1}_N^T) \otimes I_n\big) x(t)$, where $x(t) = [x_1^T(t), x_2^T(t), \ldots, x_N^T(t)]^T$ is the stacked state vector. Then, we can rewrite system (1) with controller (2) as the following error system in compact form:
$\dot{e}(t) = H e(t) + \bar{B} F(e(t)), \quad t \neq t_k,\; t \geq t_0,$
$e(t_k) = \eta_k\, e(t_k^-), \quad t = t_k,$
where $H = I_N \otimes A - K(L \otimes C)$ and $\bar{B} = I_N \otimes B$, and the function $F(e(t)) = E f(x(t))$ with $f(x(t)) = [f^T(x_1(t)), f^T(x_2(t)), \ldots, f^T(x_N(t))]^T$. Moreover, $\eta_k = 1 + d_k$, and hence $|\eta_k| > 1$ according to the values of $d_k$.
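As an illustration of this compact form, the sketch below assembles $E$, $H$, and $\bar{B}$ with Kronecker products under the reading $H = I_N \otimes A - K(L \otimes C)$ and $\bar{B} = I_N \otimes B$; the matrices $A$, $B$, $C$, the gain $K$, and the Laplacian $L$ are illustrative placeholders, not the exact data of Section 4.

```python
import numpy as np

N, n = 4, 2
A = np.diag([0.1, 0.1])                      # placeholder system matrices
B = np.array([[0.4, 0.02], [1.0, 0.6]])
C = np.array([[0.2, 0.1], [0.0, 0.2]])
K = 0.1
L = np.array([[ 2, -1,  0, -1],              # placeholder Laplacian of a 4-agent digraph
              [-1,  2, -1,  0],
              [ 0, -1,  2, -1],
              [-1,  0,  0,  1]])

E = np.kron(np.eye(N) - np.ones((N, N)) / N, np.eye(n))  # error/averaging projection
H = np.kron(np.eye(N), A) - K * np.kron(L, C)            # drift matrix of the error system
B_bar = np.kron(np.eye(N), B)                            # lifted gain of the nonlinearity

print(E.shape, H.shape, B_bar.shape)   # (8, 8) (8, 8) (8, 8)
```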
Before ending this section, we present some assumptions and definitions used in this paper.
Assumption 1.
Suppose that there exist p types of impulses and that both the gains and the instants of the impulsive sequence are stochastic. The gains $\eta_k$ take values from a finite set $\mathcal{G} = \{\mu_i\}_{i \in \Gamma}$ satisfying $\mathbb{E}[\mu_i^2] = \upsilon_i < +\infty$ and $\vartheta = \max_{i \in \Gamma}\{\upsilon_i\}$, where $\Gamma := \{1, 2, \ldots, p\}$. The impulsive sequence $\{t_k\}_{k=1}^{+\infty}$ is governed by a right-continuous homogeneous Markovian chain with p modes, i.e., for $t \in [t_k, t_{k+1})$, $r_k := r(t) = i \in \Gamma$, with the generator $\Lambda = [\lambda_{ij}]_{p \times p}$, $i, j \in \Gamma$, given by
$P(r_{t+h} = j \mid r_t = i) = \begin{cases} \lambda_{ij} h + o(h), & i \neq j, \\ 1 + \lambda_{ii} h + o(h), & i = j, \end{cases}$
where $h > 0$, $\lim_{h \to 0} \frac{o(h)}{h} = 0$, and $\lambda_{ij}$ denotes the transition rate from mode i to mode j, satisfying $\lambda_{ij} \geq 0$ for $i \neq j$ and $\lambda_{ii} = -\sum_{j \neq i} \lambda_{ij}$.
According to the properties of the Markovian chain, the impulsive interval $\tau(k) = t_k - t_{k-1}$ follows an exponential distribution, i.e., $\tau(k) \sim \mathrm{Exp}(\lambda_i)$ when $r_k = i$, where $\lambda_i = -\lambda_{ii}$. Moreover, we obtain the transition probabilities $p_{ij} = \lambda_{ij}/\lambda_i$ for $i \neq j$ and $p_{ij} = 0$ for $i = j$.
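A minimal sketch of how such a Markovian impulsive sequence can be generated is given below: holding times are drawn from $\mathrm{Exp}(\lambda_i)$ and mode jumps use $p_{ij} = \lambda_{ij}/\lambda_i$. The three-mode generator is a hypothetical example, not the one used in Section 4.

```python
import numpy as np

rng = np.random.default_rng(0)

Lam = np.array([[-3.0,  2.0,  1.0],     # hypothetical generator of a 3-mode chain
                [ 1.0, -4.0,  3.0],
                [ 2.0,  2.0, -4.0]])
lam = -np.diag(Lam)                     # rates lambda_i = -lambda_ii
P = Lam / lam[:, None]
np.fill_diagonal(P, 0.0)                # embedded transition probabilities p_ij

t, mode, horizon = 0.0, 0, 10.0
instants, modes = [], []
while True:
    t += rng.exponential(1.0 / lam[mode])        # impulsive interval ~ Exp(lambda_i)
    if t > horizon:
        break
    instants.append(t)
    modes.append(mode)
    mode = rng.choice(len(lam), p=P[mode])       # next mode drawn with probability p_ij

print(f"{len(instants)} impulsive instants generated over [0, {horizon}]")
```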
Based on Assumption 1, we further assume that $\eta_k$ is a stochastic process on $(\Omega_1, \mathcal{F}_1, \mathcal{F}_1^k, \mathbb{P}_1)$, where $\mathcal{F}_1^k = \sigma(\eta_1, \eta_2, \ldots, \eta_k)$, $k \in \mathbb{N}$, is the filtration of $\mathcal{F}_1$. On the other hand, the impulsive instants follow the Markovian chain, meaning that $r_1, r_2, \ldots, r_k$ are random variables on a probability space $(\Omega_2, \mathcal{F}_2, \mathcal{F}_2^k, \mathbb{P}_2)$; similarly, $\mathcal{F}_2^k = \sigma(r_1, r_2, \ldots, r_k)$ is the filtration of $\mathcal{F}_2$. Hence, denote $\Omega := \Omega_1 \times \Omega_2$, $\mathcal{F} := \mathcal{F}_1 \times \mathcal{F}_2$, $\mathbb{P} := \mathbb{P}_1 \times \mathbb{P}_2$, and $\mathcal{F}_k := \sigma(\mathcal{F}_1^k \times \mathcal{F}_2^k)$. As a result, the joint stochastic process $\{(r_k, \eta_k)\}_{k \in \mathbb{N}}$ can be defined on the probability space $(\Omega, \mathcal{F}, \mathcal{F}_k, \mathbb{P})$.
Assumption 2.
The nonlinear function $f: \mathbb{R}^n \to \mathbb{R}^n$ is said to satisfy the Lipschitz condition if there exists a constant $\kappa > 0$ such that
$\|f(x_1) - f(x_2)\| \leq \kappa \|x_1 - x_2\|.$
Definition 1.
System (1) is said to achieve mean square consensus under Markovian impulsive attacks for any given initial value $x(t_0)$ of the system states if
$\lim_{t \to +\infty} \mathbb{E}\|e_i(t)\|^2 = 0, \quad i = 1, 2, \ldots, N.$

3. Main Results

Theorem 1.
Under Assumptions 1 and 2, the mean square consensus of system (1) with controller (2) under Markovian impulsive attacks can be reached if there exist a symmetric positive definite matrix P and a feedback gain K such that
$\sum_{j=1}^{p} \max_{i \in \Gamma}\{p_{ij}\} \frac{\lambda_j \upsilon_j}{\lambda_j - \gamma} < 1,$
where $\gamma = \rho\beta < \min_{j \in \Gamma}\{\lambda_j\}$, $\rho = \frac{\lambda_{\max}(P)}{\lambda_{\min}(P)}$, $\beta = \lambda_{\max}(\Pi)$, and $\Pi = (H^T + \kappa\bar{B}^T)P + P(H + \kappa\bar{B})$.
Proof. 
Consider the following Lyapunov function candidate
$V(t, e(t)) = e^T(t) P e(t).$
For $t \in [t_{k-1}, t_k)$, one can obtain the derivative of $V(t, e(t))$:
$D^+ V(t, e(t)) = \dot{e}^T(t) P e(t) + e^T(t) P \dot{e}(t) \leq e^T(t)\big(H^T P + \kappa \bar{B}^T P + P H + \kappa P \bar{B}\big) e(t) \leq \gamma V(t, e(t)),$
where $\gamma = \rho\beta$.
For $t = t_k$, owing to the impulsive attacks, we have
$V(t_k, e(t_k)) = e^T(t_k) P e(t_k) = \eta_k^2\, e^T(t_k^-) P e(t_k^-) \leq \eta_k^2 e^{\gamma \tau_k} V(t_{k-1}, e(t_{k-1})).$
Since the terms $e^{\gamma\tau_k}$ and $V(t_{k-1}, e(t_{k-1}))$ are nonnegative random variables, by taking the expectation and utilizing the properties of conditional expectation, we obtain
$\mathbb{E}[V(t_k, e(t_k))] \leq \mathbb{E}\big[\mathbb{E}[\eta_k^2 e^{\gamma\tau_k} V(t_{k-1}, e(t_{k-1})) \mid \mathcal{F}_{k-1}]\big] = \mathbb{E}\big[\mathbb{E}_2[\mathbb{E}_1[\eta_k^2 e^{\gamma\tau_k} V(t_{k-1}, e(t_{k-1})) \mid \mathcal{F}_{k-1}^1] \mid \mathcal{F}_{k-1}^2]\big] = \mathbb{E}[V(t_{k-1}, e(t_{k-1}))]\, \mathbb{E}\big[\mathbb{E}[\eta_k^2 e^{\gamma\tau_k} \mid r_{k-1}]\big].$
For $t \in [t_0, t_1)$, it follows from (8) that
$\mathbb{E}[V(t, e(t))] \leq \mathbb{E}\big[\mathbb{E}[e^{\gamma\tau_1} \mid r_0]\big]\, \mathbb{E}[V(t_0, e(t_0))],$
from which one can get that, for $t = t_1$,
$\mathbb{E}[V(t_1, e(t_1))] \leq \mathbb{E}\big[\mathbb{E}[\eta_1^2 e^{\gamma\tau_1} \mid r_0]\big]\, \mathbb{E}[V(t_0, e(t_0))].$
For $t \in [t_1, t_2)$, we obtain a similar result from (8) and (10):
$\mathbb{E}[V(t, e(t))] \leq \mathbb{E}\big[\mathbb{E}[e^{\gamma\tau_2} \mid r_1]\big]\, \mathbb{E}[V(t_1, e(t_1))],$
from which one can get that, for $t = t_2$,
$\mathbb{E}[V(t_2, e(t_2))] \leq \prod_{l=1}^{2} \mathbb{E}\big[\mathbb{E}[\eta_l^2 e^{\gamma\tau_l} \mid r_{l-1}]\big]\, \mathbb{E}[V(t_0, e(t_0))].$
By mathematical induction, one can finally obtain that, for $t = t_k$,
$\mathbb{E}[V(t_k, e(t_k))] \leq \mathbb{E}[V(t_0, e(t_0))] \prod_{l=1}^{k} \mathbb{E}\big[\mathbb{E}[\eta_l^2 e^{\gamma\tau_l} \mid r_{l-1}]\big] = \mathbb{E}[V(t_0, e(t_0))]\, \mathbb{E}\Big[\prod_{l=1}^{k} \mathbb{E}_2\big[\mathbb{E}_1[\eta_l^2]\, e^{\gamma\tau_l} \mid r_{l-1}\big]\Big].$
Let $i_l \in \Gamma$, $l \in \mathbb{N}$. Since the sequence $\{(\eta_k, r_k)\}_{k \in \mathbb{N}^+}$ associated with the instants $t_k$ is a Markovian chain and the impulsive interval satisfies $\tau(k) \sim \mathrm{Exp}(\lambda_i)$, we have
$\mathbb{E}\Big[\prod_{l=1}^{k} \mathbb{E}_2\big[\mathbb{E}_1[\eta_l^2]\, e^{\gamma\tau_l} \mid r_{l-1}\big]\Big] = \sum_{i_1=1}^{p}\sum_{i_2=1}^{p}\cdots\sum_{i_k=1}^{p} \prod_{m=1}^{k} \mathbb{E}_2\big[\mathbb{E}_1[\eta_{i_m}^2]\, e^{\gamma\tau_{i_m}} \mid r_{i_{m-1}}\big]\, P(r_{i_1} = i_1, r_{i_2} = i_2, \ldots, r_{i_k} = i_k) \leq \sum_{i_1=1}^{p}\sum_{i_2=1}^{p}\cdots\sum_{i_k=1}^{p} \prod_{m=1}^{k} \Big(\int_{0}^{+\infty} e^{\gamma s}\lambda_{i_m} e^{-\lambda_{i_m} s}\, ds\Big) \max_{i \in \Gamma}\{p_{i i_m}\}\, \mathbb{E}_1[\eta_{i_m}^2] = \sum_{i_1=1}^{p}\sum_{i_2=1}^{p}\cdots\sum_{i_k=1}^{p} \prod_{m=1}^{k} \frac{\lambda_{i_m} \mathbb{E}_1[\eta_{i_m}^2]}{\lambda_{i_m} - \gamma}\, \max_{i \in \Gamma}\{p_{i i_m}\} = \theta^k,$
where $\theta = \sum_{j=1}^{p} \max_{i \in \Gamma}\{p_{ij}\} \frac{\lambda_j \upsilon_j}{\lambda_j - \gamma} < 1.$
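For the reader's convenience, the factor $\frac{\lambda_j}{\lambda_j - \gamma}$ appearing in θ comes from the exponential moment of an impulsive interval drawn in mode $i$ (a one-line computation, valid only when $\gamma < \lambda_i$):
$\mathbb{E}\big[e^{\gamma\tau} \mid \text{mode } i\big] = \int_{0}^{+\infty} e^{\gamma s}\,\lambda_i e^{-\lambda_i s}\, ds = \frac{\lambda_i}{\lambda_i - \gamma},$
and this integral diverges when $\gamma \geq \lambda_i$, which is why the restriction $\gamma < \min_{j \in \Gamma}\{\lambda_j\}$ is imposed (see also Remark 2 below).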
Substituting (12) into (11), we get that
$\mathbb{E}[V(t_k, e(t_k))] \leq \theta^k\, \mathbb{E}[V(t_0, e(t_0))],$
from which, together with condition (6), we obtain
$\lim_{k \to +\infty} \mathbb{E}[V(t_k, e(t_k))] = 0,$
which indicates that $\lim_{k \to +\infty} \mathbb{E}[\|e(t_k)\|^2] = 0$. Hence, the consensus of system (1) can be achieved, and the proof of Theorem 1 is completed. □
Remark 2.
Note that the parameters γ and $\lambda_j$ must satisfy $\gamma < \min_{j \in \Gamma}\{\lambda_j\}$. If we chose $\gamma \geq \min_{j \in \Gamma}\{\lambda_j\}$, we could easily see that $\int_{0}^{+\infty} e^{\gamma s}\lambda_{i_l} e^{-\lambda_{i_l} s}\, ds = +\infty$ for some mode, so the condition $\gamma < \min_{j \in \Gamma}\{\lambda_j\}$ is necessary. Moreover, from (6), we can see that the nonlinear function has a negative effect on the consensus of system (1), and high-frequency impulsive attacks cannot be handled. If the considered system is linear, then condition (6) can be relaxed to cover more general cases.
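A minimal numerical sketch of how condition (6) can be checked is given below; it mirrors the quantities of Theorem 1 (with $\Pi = (H^T + \kappa\bar{B}^T)P + P(H + \kappa\bar{B})$) for user-supplied matrices and is only illustrative, not the verification script used for the example in Section 4.

```python
import numpy as np

def check_theorem1(P, H, B_bar, kappa, Lam, upsilon):
    """Illustrative check of condition (6); all inputs are user-supplied.

    P        : symmetric positive definite matrix
    H, B_bar : error-system matrices
    kappa    : Lipschitz constant of the nonlinearity
    Lam      : generator of the Markovian chain (p x p)
    upsilon  : array with upsilon_j = E[mu_j^2] for each mode j
    """
    rho = np.linalg.eigvalsh(P).max() / np.linalg.eigvalsh(P).min()
    Pi = (H.T + kappa * B_bar.T) @ P + P @ (H + kappa * B_bar)
    beta = np.linalg.eigvalsh((Pi + Pi.T) / 2).max()   # Pi is symmetric; symmetrize for safety
    gamma = rho * beta

    lam = -np.diag(Lam)
    if gamma >= lam.min():                 # gamma < min_j lambda_j is required
        return False, gamma, None
    Q = Lam / lam[:, None]
    np.fill_diagonal(Q, 0.0)               # p_ij = lambda_ij / lambda_i
    total = sum(Q[:, j].max() * lam[j] * upsilon[j] / (lam[j] - gamma)
                for j in range(len(lam)))
    return total < 1.0, gamma, total
```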
Corollary 1.
Under Assumptions 1 and 2, if $\kappa = 0$ (i.e., the nonlinear term vanishes and the system is linear), then the consensus of system (1) with controller (2) under Markovian impulsive attacks can be reached if there exist a symmetric positive definite matrix P and a feedback gain K such that
$\sum_{j=1}^{p} \max_{i \in \Gamma}\{p_{ij}\} \frac{\lambda_j \upsilon_j}{\lambda_j - \gamma} < 1,$
where $\gamma = \rho\beta < \min_{j \in \Gamma}\{\lambda_j\}$, $\rho = \frac{\lambda_{\max}(P)}{\lambda_{\min}(P)}$, $\beta = \lambda_{\max}(\Pi)$, and $\Pi = H^T + H$.
Remark 3.
The impulsive attacks in this paper are assumed to be stochastic, and since we do not further restrict the distribution of the impulsive gains, the explicit relation between consensus (or system stability) and the impulsive gains is not transparent, as $\mathbb{E}[\eta_k]$ is replaced by $\mathbb{E}[\eta_k^2]$ in the conditions. For this reason, it may be hard to obtain $\mathbb{E}[\eta_k^2]$ when the distribution to which $\eta_k$ is subjected is unknown; in that case, $\mathbb{E}[\eta_k^2]$ needs to be known a priori. On the other hand, it is common and suitable to assume that the attack gains follow a Bernoulli distribution, as in the work of [25], representing the occurrence of attacks. Moreover, condition (6) shows that a large feedback gain is needed if the strength of the impulsive attacks is strong. If the gains are deterministic (i.e., $\eta_k = \eta > 1$, $k \in \mathbb{N}$), then we obtain the following corollary, which is a direct result of model simplification, so the proof is omitted here.
Corollary 2.
Under Assumptions 1 and 2, the consensus of system (1) with controller (2) under Markovian impulsive attacks can be reached if there exist a symmetric positive definite matrix P and a feedback gain K such that
$\sum_{j=1}^{p} \max_{i \in \Gamma}\{p_{ij}\} \frac{\lambda_j \eta^2}{\lambda_j - \gamma} < 1,$
where $\gamma = \rho\beta < 0$, $\rho = \frac{\lambda_{\max}(P)}{\lambda_{\min}(P)}$, $\beta = \lambda_{\max}(\Pi)$, and $\Pi = H^T + H + \kappa\bar{B}^T + \kappa\bar{B}$.
Remark 4.
From condition (16), one can observe that the dynamics of the considered NMASs must be stable in the absence of impulsive attacks. The reason for this restrictive condition is that the impulsive effects considered in this paper are destabilizing, which means that the parameter η is always bigger than one. As a result, even though the condition $\gamma < \min_{j \in \Gamma}\{\lambda_j\}$ is satisfied, inequality (16) never holds when $\gamma \geq 0$. Therefore, the restrictive condition $\gamma < 0$ in (16) is required.
In the following, we further assume that the Markovian chain governing the impulsive sequence $\{t_k\}_{k=1}^{+\infty}$ is irreducible and that it admits a stationary distribution $\bar{\varepsilon} = (\bar{\varepsilon}_1, \bar{\varepsilon}_2, \ldots, \bar{\varepsilon}_p)$.
Theorem 2.
Under Assumptions 1 and 2, the consensus of system (1) with controller (2) under Markovian impulsive attacks can be reached almost surely (a.s.) if there exist a symmetric positive definite matrix P and a feedback gain K such that
$\sum_{i \in \Gamma} \lambda_i \bar{\varepsilon}_i \ln\!\big(\mathbb{E}[\eta_i^2]\big) + \gamma < 0,$
where $\gamma = \rho\beta$, $\rho = \frac{\lambda_{\max}(P)}{\lambda_{\min}(P)}$, and $\beta = \lambda_{\max}(\Pi)$ with $\Pi = H^T + H + \kappa\bar{B}^T + \kappa\bar{B}$, and $\bar{\varepsilon}_i$, $i \in \Gamma$, is the ith element of the stationary distribution admitted by the impulsive sequence.
Proof. 
From (11), we get that, for $t \geq t_0$,
$V(t, e(t)) \leq \prod_{l=1}^{k} \eta_l^2\; e^{\gamma(t - t_0)} V(t_0, e(t_0)) = \prod_{k=1}^{\bar{N}(t, t_0)} \eta_k^2\; e^{\gamma(t - t_0)} V(t_0, e(t_0)).$
Note that the impulsive interval satisfies $\tau(k) \sim \mathrm{Exp}(\lambda_i)$, $i \in \Gamma$. Then, using the ergodic theorem [29], the strong law of large numbers, and the properties of the homogeneous Markovian chain, for $i \in \Gamma$ we obtain
$\lim_{t \to +\infty} \frac{\bar{N}_i(t, t_0)}{t - t_0} = \lim_{t \to +\infty} \frac{\bar{N}_i(t, t_0)}{T_i(t, t_0)}\cdot\frac{T_i(t, t_0)}{t - t_0} = \lim_{t \to +\infty} \frac{\bar{N}_i(t, t_0)}{\sum_{k=1}^{\bar{N}_i(t, t_0)} \tau_k}\; \bar{\varepsilon}_i = \lambda_i \bar{\varepsilon}_i, \quad \mathrm{a.s.}$
Then, it follows from (17) and (19) that
$\mathbb{E}[V(t, e(t))] \leq \prod_{k=1}^{\bar{N}(t, t_0)} \mathbb{E}[\eta_k^2]\; e^{\gamma(t - t_0)}\, \mathbb{E}[V(t_0, e(t_0))] = \prod_{i=1}^{p} \big(\mathbb{E}[\eta_i^2]\big)^{\bar{N}_i(t, t_0)} e^{\gamma(t - t_0)}\, \mathbb{E}[V(t_0, e(t_0))] = e^{\big(\sum_{i \in \Gamma} \lambda_i \bar{\varepsilon}_i \ln(\mathbb{E}[\eta_i^2]) + \gamma\big)(t - t_0)}\, \mathbb{E}[V(t_0, e(t_0))] = \bar{\omega}_1\big(\mathbb{E}[V(t_0, e(t_0))], t - t_0\big) \quad \mathrm{a.s.},$
where $\bar{\omega}_1(r, s) := e^{\big(\sum_{i \in \Gamma} \lambda_i \bar{\varepsilon}_i \ln(\mathbb{E}[\eta_i^2]) + \gamma\big) s}\, r$. Define $\bar{\omega}_2(r, s) := \bar{\omega}_1(r, s)/\epsilon$ for an arbitrarily small $\epsilon \in (0, 1)$. By applying Markov's inequality [30] (p. 111, (18.1)) to (20), we have, for $t \geq t_0$,
$P\big\{V(t, e(t)) \leq \bar{\omega}_2\big(\mathbb{E}[V(t_0, e(t_0))], t - t_0\big)\big\} \geq 1 - \frac{\mathbb{E}[V(t, e(t))]}{\bar{\omega}_2\big(\mathbb{E}[V(t_0, e(t_0))], t - t_0\big)} \geq 1 - \epsilon.$
From (20), we know that $\lim_{t \to +\infty} \mathbb{E}[V(t, e(t))] = 0$. Hence, the consensus of system (1) can be achieved almost surely, and the proof of Theorem 2 is completed. □
Remark 5.
Compared with the parameters in Theorem 1, one can observe from (17) that the condition $\gamma < \min_{j \in \Gamma}\{\lambda_j\}$ is removed by employing the properties of the stationary distribution of the Markovian chain.
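A small sketch of how condition (17) can be evaluated is shown below: the stationary distribution is obtained by solving $\bar{\varepsilon}\Lambda = 0$ with its entries summing to one. The two-mode generator and the moments are hypothetical inputs chosen only for illustration.

```python
import numpy as np

def check_theorem2(Lam, E_eta_sq, gamma):
    """Illustrative check of condition (17) for an irreducible generator Lam."""
    p = Lam.shape[0]
    M = np.vstack([Lam.T, np.ones(p)])               # eps_bar @ Lam = 0 and sum(eps_bar) = 1
    b = np.concatenate([np.zeros(p), [1.0]])
    eps_bar = np.linalg.lstsq(M, b, rcond=None)[0]   # stationary distribution
    lam = -np.diag(Lam)
    lhs = float(np.sum(lam * eps_bar * np.log(E_eta_sq)) + gamma)
    return lhs < 0, eps_bar, lhs

# Hypothetical two-mode example.
Lam = np.array([[-2.0,  2.0],
                [ 3.0, -3.0]])
print(check_theorem2(Lam, E_eta_sq=np.array([1.2, 1.5]), gamma=-1.0))
```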

4. Numerical Examples

In this section, a numerical example with four different cases is presented. The first three cases illustrate the results established in the previous section, and the last case illustrates the situation in which the gains of the attacks satisfy $|\eta_k| \leq 1$, $k \in \mathbb{N}$. Consider the NMASs with four agents whose communication topology is described in Figure 2. The dynamics of each agent can be described as follows:
$\dot{x}_i(t) = A x_i(t) + B f(x_i(t)) + C u_i(t),$
where the system state $x_i(t) = [x_{i1}(t), x_{i2}(t)]^T \in \mathbb{R}^2$, $u_i(t)$ is given in (2), and the nonlinear function $f(x_i(t)) = (\tanh(x_{i1}(t)), \tanh(x_{i2}(t)))^T$. The constant matrices A, B, and C are given as follows:
$A = \begin{bmatrix} 0.1 & 0 \\ 0 & 0.1 \end{bmatrix}, \quad B = \begin{bmatrix} 0.4 & 0.02 \\ 1 & 0.6 \end{bmatrix}, \quad C = \begin{bmatrix} 0.2 & 0.1 \\ 0 & 0.2 \end{bmatrix},$
which implies $\kappa = 0.505$ and $\beta = 1.234$ with $K = 0.1$.
Choose $P = I_2$; then $\rho = 1$ and $\gamma = \rho\beta = 1.234$. Suppose the generator of the considered Markovian chain is given by the matrix
$\Lambda = \begin{bmatrix} -4 & 2 & 1 & 1 \\ 1 & -5 & 2 & 2 \\ 2 & 1 & -4 & 1 \\ 2 & 2 & 1 & -5 \end{bmatrix},$
from which we know that $\gamma = 1.234 < \min_{j \in \Gamma}\{\lambda_j\} = 4$ and from which the transition probability matrix Q can also be derived as
$Q = \begin{bmatrix} 0 & 0.5 & 0.25 & 0.25 \\ 0.2 & 0 & 0.4 & 0.4 \\ 0.5 & 0.25 & 0 & 0.25 \\ 0.4 & 0.4 & 0.2 & 0 \end{bmatrix};$
the corresponding Markovian chain is depicted in Figure 3, where we only present a local part of the whole chain.
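As a quick sanity check, the rates $\lambda_i$ and the matrix Q above can be reproduced from Λ; a minimal sketch:

```python
import numpy as np

Lam = np.array([[-4.0,  2.0,  1.0,  1.0],
                [ 1.0, -5.0,  2.0,  2.0],
                [ 2.0,  1.0, -4.0,  1.0],
                [ 2.0,  2.0,  1.0, -5.0]])
lam = -np.diag(Lam)          # (4, 5, 4, 5), so min_j lambda_j = 4
Q = Lam / lam[:, None]
np.fill_diagonal(Q, 0.0)     # p_ij = lambda_ij / lambda_i for i != j
print(Q)
# [[0.   0.5  0.25 0.25]
#  [0.2  0.   0.4  0.4 ]
#  [0.5  0.25 0.   0.25]
#  [0.4  0.4  0.2  0.  ]]
```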
In addition, set the initial values of the system state as
$x(t_0) = \big[x_1^T(t_0), x_2^T(t_0), x_3^T(t_0), x_4^T(t_0)\big]^T = \big[1.5,\; 0.6,\; 0.5,\; 1.2,\; -1.5,\; -0.6,\; -0.5,\; -1.2\big]^T,$
which could also be generated randomly. Moreover, we can verify that the average value of $x(t_0)$ is $x_{\mathrm{ave}} = 0$. Let the error measure shown in the figures be defined as $e_i(t) = x_{i1}^2(t) + x_{i2}^2(t)$. As shown in Figure 4, the error trajectories of the agents do not reach an agreement in the absence of a controller; in other words, system (22) is unstable without control.
In the following, we consider four cases of stochastic attacks. The first three cases illustrate the results proposed in the previous section, and the last case illustrates the situation in which the gains of the attacks are below the defined threshold, i.e., $|\eta_k| \leq 1$, $k \in \mathbb{N}$. The time step of all simulations in this section is $\mathrm{step} = 0.01$.
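To show how such a simulation can be organized, the sketch below runs a forward-Euler loop with Markovian impulsive instants and Case-1-style Bernoulli gains. The communication topology, the signs of the printed system matrices, and the initial state are assumptions made only for illustration, so the resulting trajectories are not expected to reproduce Figure 5 exactly.

```python
import numpy as np

rng = np.random.default_rng(1)

N, dt, T = 4, 0.01, 10.0
A = np.diag([0.1, 0.1])
B = np.array([[0.4, 0.02], [1.0, 0.6]])
C = np.array([[0.2, 0.1], [0.0, 0.2]])
K = 0.1
A_adj = np.array([[0, 1, 0, 1],                  # placeholder adjacency (Figure 2 assumed)
                  [1, 0, 1, 0],
                  [0, 1, 0, 1],
                  [1, 0, 0, 0]])
Lam = np.array([[-4.0,  2.0,  1.0,  1.0],
                [ 1.0, -5.0,  2.0,  2.0],
                [ 2.0,  1.0, -4.0,  1.0],
                [ 2.0,  2.0,  1.0, -5.0]])
lam = -np.diag(Lam)
Q = Lam / lam[:, None]
np.fill_diagonal(Q, 0.0)
amp = np.array([1.1, 1.2, 1.1, 1.2])             # Case-1 gains: mu_i = amp_i * xi_i
p_bern = np.array([0.5, 0.1, 0.2, 0.1])          # E[xi_i]

x = np.array([[1.5, 0.6], [0.5, 1.2], [-1.5, -0.6], [-0.5, -1.2]])  # assumed x(t0)
mode = 0
next_tk = rng.exponential(1.0 / lam[mode])
for step in range(int(T / dt)):
    # controller (2), continuous part: u_i = K * sum_j a_ij (x_j - x_i)
    u = K * (A_adj @ x - A_adj.sum(axis=1, keepdims=True) * x)
    x = x + dt * (x @ A.T + np.tanh(x) @ B.T + u @ C.T)   # Euler step of (22)
    t = (step + 1) * dt
    while t >= next_tk:
        # impulsive deception attack: x_i(t_k) = eta_k x_i(t_k^-), eta_k = mu_{r_k}
        eta = amp[mode] * rng.binomial(1, p_bern[mode])
        x = eta * x
        mode = rng.choice(len(lam), p=Q[mode])
        next_tk += rng.exponential(1.0 / lam[mode])

print("final error norms:", np.linalg.norm(x - x.mean(axis=0), axis=1))
```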
Remark 6.
As is well known, the Markovian process reflects random jump phenomena in the real world well and is widely used to model many production activities. The cases given here simulate different kinds of malicious human-made network attacks as well as possible attack phenomena in engineering applications, such as transformers in the power grid that suffer cyber attacks and consequently fail.
Case 1:   Stochastic gains and instants.
In this case, based on the condition that $|\eta_k| > 1$, $k \in \mathbb{N}$, we assume that the gains of the impulsive deception attacks are $\mu_1 = 1.1\xi_1$, $\mu_2 = 1.2\xi_2$, $\mu_3 = 1.1\xi_3$, and $\mu_4 = 1.2\xi_4$, where $\xi_i$, $i = 1, 2, 3, 4$, follow Bernoulli distributions with expectations $\mathbb{E}[\xi_1] = 0.5$, $\mathbb{E}[\xi_2] = 0.1$, $\mathbb{E}[\xi_3] = 0.2$, and $\mathbb{E}[\xi_4] = 0.1$, respectively. Then, we can verify that $\upsilon_1 = 0.605$, $\upsilon_2 = 0.144$, $\upsilon_3 = 0.242$, and $\upsilon_4 = 0.144$. In addition, the instants of the attacks obey the Markovian chain whose generator is presented above.
Based on the given conditions, one can compute that $\sum_{j=1}^{4} \max_{i \in \Gamma}\{p_{ij}\} \frac{\lambda_j \upsilon_j}{\lambda_j - \gamma} = 0.682 < 1$. Thus, by employing Theorem 1, the consensus of system (22) can be achieved, as shown in Figure 5. Compared with Figure 4, Figure 5 shows that the error states of system (22) converge to zero with controller (2), and according to Definition 1, the consensus of system (22) is achieved.
Case 2:   The Markovian chain admits a stationary distribution.
From the given generator, the stationary distribution $\bar{\varepsilon} = (0.273, 0.227, 0.273, 0.227)$ can be obtained. In addition, we assume that the gains of the impulsive deception attacks take values given by $\mu_1 = 1.5\xi_1$, $\mu_2 = 2\xi_2$, $\mu_3 = 1.5\xi_3$, and $\mu_4 = 1.2\xi_4$, where $\xi_1$ and $\xi_2$ are discrete random variables with the following distributions: $\xi_1$ takes the values 0.7, −0.7, and 0.8 with probabilities 0.5, 0.4, and 0.1, respectively, and $\xi_2$ takes the values 0.6, −0.6, and 0.8 with probabilities 0.6, 0.2, and 0.2, respectively.
In contrast, $\xi_3$ and $\xi_4$ follow Bernoulli distributions with expectations $\mathbb{E}[\xi_3] = 0.5$ and $\mathbb{E}[\xi_4] = 0.1$, respectively. Thus, we know that $\upsilon_1 = 1.136$, $\upsilon_2 = 1.664$, $\upsilon_3 = 1.125$, and $\upsilon_4 = 0.144$; therefore, the condition in Theorem 2 can be checked: $\sum_{i \in \Gamma} \lambda_i \bar{\varepsilon}_i \ln(\mathbb{E}[\eta_i^2]) + \gamma = -0.117 < 0$ with $K = 0.1$ and $\beta = 1.234$. As a result, the consensus of system (22) can be achieved by employing Theorem 2, as shown in Figure 6. Similarly, the error states of system (22) converge to zero in Figure 6 with controller (2), and according to Definition 1, the consensus of system (22) is achieved.
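The second moments used above follow directly from the stated distributions; a short sketch of the computation (for a Bernoulli variable ξ, $\mathbb{E}[\xi^2] = \mathbb{E}[\xi]$):

```python
def second_moment(values, probs):
    return sum(p * v * v for v, p in zip(values, probs))

E_xi_sq = [second_moment([0.7, -0.7, 0.8], [0.5, 0.4, 0.1]),   # discrete xi_1
           second_moment([0.6, -0.6, 0.8], [0.6, 0.2, 0.2]),   # discrete xi_2
           0.5,                                                # Bernoulli xi_3: E[xi^2] = 0.5
           0.1]                                                # Bernoulli xi_4: E[xi^2] = 0.1

amp = [1.5, 2.0, 1.5, 1.2]                                     # mu_i = amp_i * xi_i
upsilon = [a * a * e for a, e in zip(amp, E_xi_sq)]
print([round(u, 3) for u in upsilon])                          # [1.136, 1.664, 1.125, 0.144]
```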
Case 3:   Determined gains and stochastic instants.
In this case, we further assume that the gains of the impulsive deception attacks are a constant, $\eta_k \equiv \eta = 1.2$, and that the instants are the same as in Case 1. Note that the gains in this case are always bigger than one; according to Remark 4, system (22) must then be stable without impulsive deception attacks. In view of condition (16), we reselect the system matrix A as
$A = \begin{bmatrix} -4 & 0 \\ 0 & -4 \end{bmatrix}.$
With the parameters given or calculated above, one can obtain $\gamma = \rho\beta = -7.479 < 0$ and $\sum_{j=1}^{4} \max_{i \in \Gamma}\{p_{ij}\} \frac{\lambda_j \eta^2}{\lambda_j - \gamma} = 0.978 < 1$. Therefore, the consensus of system (22) can be achieved by employing Corollary 2, and the result is shown in Figure 7. Note that, in this case, the parameter γ needs to be less than zero; the reason can be found in Remark 4. Similarly, from Figure 7 and Definition 1, we obtain consensus as the error states converge to zero.
Case 4:   Stochastic gains and instants with $d_k \in (-2, 0)$.
In this case, we further assume that $d_k \in (-2, 0)$, $k \in \mathbb{N}$, and that the gains of the impulsive deception attacks $\mu_i$, $i = 1, 2, 3, 4$, take the values $\mu_1 = 0.9\xi_1$, $\mu_2 = 0.9\xi_2$, $\mu_3 = 0.8\xi_3$, and $\mu_4 = 0.8\xi_4$, where $\xi_i$, $i = 1, 2, 3, 4$, follow Bernoulli distributions with expectations $\mathbb{E}[\xi_1] = 0.1$, $\mathbb{E}[\xi_2] = 0.1$, $\mathbb{E}[\xi_3] = 0.1$, and $\mathbb{E}[\xi_4] = 0.1$, respectively.
Then, we can verify that $\upsilon_1 = 0.081$, $\upsilon_2 = 0.081$, $\upsilon_3 = 0.064$, and $\upsilon_4 = 0.064$. In addition, the instants of the attacks obey the Markovian chain whose generator is presented above. Hence, based on the given conditions, one can obtain $\sum_{j=1}^{4} \max_{i \in \Gamma}\{p_{ij}\} \frac{\lambda_j \upsilon_j}{\lambda_j - \gamma} = 0.180 < 1$. Thus, by employing Theorem 1, the consensus of system (22) can be achieved, as shown in Figure 8.
Remark 7.
From Case 4, we find that impulsive attacks can stabilize an unstable system when the gains $d_k$ belong to $(-2, 0)$. In fact, the theoretical support can be found in the error system (3): when $d_k \in (-2, 0)$, one gets $|\eta_k| = |d_k + 1| < 1$. As is well known, impulses can stabilize an unstable system when the magnitudes of the impulsive gains are less than one. Thus, the error system (3) can be stabilized, with the impulses acting as an impulsive control.
Remark 8.
In order to further compare with the results obtained under the controller, we provide the following simulations: one with attacks but without control, and one with control but without attacks. For system (3) affected by the attacks with no controller added, the error trajectories of the agents are depicted in Figure 9, from which we can see that the error states of system (3) grow without bound, meaning that consensus cannot be reached under attacks without the controller. In addition, for the system that is not attacked, the error trajectories of the agents under control are depicted in Figure 10, from which we know that system (22) achieves consensus under the common controller $\hat{u}_i(t) = K\sum_{j=1}^{N} a_{ij}(x_j(t) - x_i(t))$, i.e., without the influence of the attack term $\sum_{k=1}^{+\infty} d_k x_i(t)\delta(t - t_k)$.

5. Conclusions

In this paper, a more general model for impulsive deception attacks, in which both the gains and the instants are stochastic, was established, and the consensus problem of nonlinear multi-agent systems under such stochastic impulsive attacks was considered. Sufficient conditions were derived with the help of properties of stochastic processes, and some special cases with more or fewer assumptions, including deterministic gains, linear dynamics, and stationary distributions, were also considered. It was found that the convergence rate of the NMASs and the transition rates are crucial for the stabilization of the considered systems. Moreover, consensus under impulsive attacks can be maintained if the frequency of attacks is small, i.e., if the probabilities of the attack-occurrence variables are not too large. Finally, a numerical example with four cases was provided to verify the effectiveness of the obtained results. Future work will attempt to generalize the Markovian assumption to a semi-Markovian process and to take continuous disturbances into account, since impulsive intervals under the semi-Markov assumption can follow arbitrary distributions instead of only the exponential distribution, while continuous disturbances are likely to exist in many practical cases.

Author Contributions

Conceptualization, H.L.; methodology, H.L. and Y.W.; software, H.L. and H.W.; validation, Y.W., X.Z., and P.G.; resources, X.Z.; data curation, P.G.; writing—original draft preparation, H.L.; writing—review and editing, H.L. and Y.W.; supervision, H.W.; project administration, H.L.; funding acquisition, X.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research work was funded by the Key-Area Research and Development Program of Guangdong Province (2019B010140002), and the National Natural Science Foundation of China (61803104 and 61673120).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
MASs   Multi-agent systems
NMASs   Nonlinear multi-agent systems
DoS   Denial-of-service

References

  1. Boccaletti, S.; Latora, V.; Moreno, Y.; Chavez, M.; Hwang, D.U. Complex networks: Structure and dynamics. Phys. Rep. 2006, 424, 175–308. [Google Scholar] [CrossRef]
  2. Ge, X.; Yang, F.; Han, Q.L. Distributed networked control systems: A brief overview. Inf. Sci. 2017, 380, 117–131. [Google Scholar] [CrossRef]
  3. Ren, Y.; Wang, A.; Wang, H. Fault diagnosis and tolerant control for discrete stochastic distribution collaborative control systems. IEEE Trans. Syst. Man Cybern. Syst. 2014, 45, 462–471. [Google Scholar] [CrossRef]
  4. Oh, K.K.; Park, M.C.; Ahn, H.S. A survey of multi-agent formation control. Automatica 2015, 53, 424–440. [Google Scholar] [CrossRef]
  5. González-Briones, A.; Chamoso, P.; Prieto, J.; Corchado, J.M.; Yoe, H. Reuse of wasted thermal energy in power plants for agricultural crops by means of multi-agent approach. In Proceedings of the 2018 International Conference on Smart Energy Systems and Technologies (SEST), Seville, Spain, 10–12 September 2018; pp. 1–6. [Google Scholar]
  6. Ghadimi, P.; Wang, C.; Lim, M.K.; Heavey, C. Intelligent sustainable supplier selection using multi-agent technology: Theory and application for Industry 4.0 supply chains. Comput. Ind. Eng. 2019, 127, 588–600. [Google Scholar] [CrossRef]
  7. Xu, Y.; Fang, M.; Wu, Z.G.; Pan, Y.J.; Chadli, M.; Huang, T. Input-based event-triggering consensus of multiagent systems under denial-of-service attacks. IEEE Trans. Syst. Man Cybern. Syst. 2018, 50, 1455–1464. [Google Scholar] [CrossRef]
  8. Lu, A.Y.; Yang, G.H. Distributed consensus control for multi-agent systems under denial-of-service. Inf. Sci. 2018, 439, 95–107. [Google Scholar] [CrossRef]
  9. Xu, W.; Ho, D.W.; Zhong, J.; Chen, B. Event/self-triggered control for leader-following consensus over unreliable network with DoS attacks. IEEE Trans. Neural Netw. Learn. Syst. 2019, 30, 3137–3149. [Google Scholar] [CrossRef] [PubMed]
  10. Wu, X.; Tang, Y.; Zhang, W. Stability analysis of stochastic delayed systems with an application to multi-agent systems. IEEE Trans. Autom. Control 2016, 61, 4143–4149. [Google Scholar] [CrossRef]
  11. Yang, W.; Lei, L.; Yang, C. Event-based distributed state estimation under deception attack. Neurocomputing 2017, 270, 145–151. [Google Scholar] [CrossRef]
  12. Li, X.M.; Zhou, Q.; Li, P.; Li, H.; Lu, R. Event-triggered consensus control for multi-agent systems against false data-injection attacks. IEEE Trans. Cybern. 2019, 50, 1856–1866. [Google Scholar] [CrossRef]
  13. Liu, H.; Niu, B.; Li, Y. False-Data-Injection Attacks on Remote Distributed Consensus Estimation. IEEE Trans. Cybern. 2020, 1–11. [Google Scholar] [CrossRef]
  14. Ding, D.; Han, Q.L.; Xiang, Y.; Ge, X.; Zhang, X.M. A survey on security control and attack detection for industrial cyber-physical systems. Neurocomputing 2018, 275, 1674–1683. [Google Scholar] [CrossRef]
  15. Guo, Z.; Shi, D.; Johansson, K.H.; Shi, L. Optimal linear cyber-attack on remote state estimation. IEEE Trans. Control. Netw. Syst. 2016, 4, 4–13. [Google Scholar] [CrossRef]
  16. Hu, L.; Wang, Z.; Han, Q.L.; Liu, X. State estimation under false data injection attacks: Security analysis and system protection. Automatica 2018, 87, 176–183. [Google Scholar] [CrossRef] [Green Version]
  17. Jin, X.; Haddad, W.M.; Yucelen, T. An adaptive control architecture for mitigating sensor and actuator attacks in cyber-physical systems. IEEE Trans. Autom. Control 2017, 62, 6058–6064. [Google Scholar] [CrossRef]
  18. An, L.; Yang, G.H. Improved adaptive resilient control against sensor and actuator attacks. Inf. Sci. 2018, 423, 145–156. [Google Scholar] [CrossRef]
  19. He, W.; Gao, X.; Zhong, W.; Qian, F. Secure impulsive synchronization control of multi-agent systems under deception attacks. Inf. Sci. 2018, 459, 354–368. [Google Scholar] [CrossRef]
  20. Zhang, D.; Liu, L.; Feng, G. Consensus of heterogeneous linear multiagent systems subject to aperiodic sampled-data and DoS attack. IEEE Trans. Cybern. 2018, 49, 1501–1511. [Google Scholar] [CrossRef] [PubMed]
  21. Ding, D.; Wang, Z.; Ho, D.W.; Wei, G. Observer-based event-triggering consensus control for multiagent systems with lossy sensors and cyber-attacks. IEEE Trans. Cybern. 2016, 47, 1936–1947. [Google Scholar] [CrossRef] [PubMed]
  22. He, W.; Mo, Z.; Han, Q.L.; Qian, F. Secure impulsive synchronization in Lipschitz-type multi-agent systems subject to deception attacks. IEEE/CAA J. Autom. Sin. 2020, 7, 1326–1334. [Google Scholar]
  23. Liu, J.; Yin, T.; Yue, D.; Karimi, H.R.; Cao, J. Event-based secure leader-following consensus control for multiagent systems with multiple cyber attacks. IEEE Trans. Cybern. 2020, 51, 162–173. [Google Scholar] [CrossRef] [PubMed]
  24. Wen, G.; Zhai, X.; Peng, Z.; Rahmani, A. Fault-tolerant secure consensus tracking of delayed nonlinear multi-agent systems with deception attacks and uncertain parameters via impulsive control. Commun. Nonlinear Sci. Numer. Simul. 2020, 82, 105043. [Google Scholar] [CrossRef]
  25. He, W.; Qian, F.; Han, Q.L.; Chen, G. Almost sure stability of nonlinear systems under random and impulsive sequential attacks. IEEE Trans. Autom. Control 2020, 65, 3879–3886. [Google Scholar] [CrossRef]
  26. Sun, H.; Peng, C.; Yang, T.; Zhang, H.; He, W. Resilient control of networked control systems with stochastic denial of service attacks. Neurocomputing 2017, 270, 170–177. [Google Scholar] [CrossRef]
  27. Xu, Z.; Ni, H.; Reza Karimi, H.; Zhang, D. A Markovian jump system approach to consensus of heterogeneous multiagent systems with partially unknown and uncertain attack strategies. Int. J. Robust Nonlinear Control 2020, 30, 3039–3053. [Google Scholar] [CrossRef]
  28. Befekadu, G.K.; Gupta, V.; Antsaklis, P.J. Risk-sensitive control under Markov modulated denial-of-service (DoS) attack strategies. IEEE Trans. Autom. Control 2015, 60, 3299–3304. [Google Scholar] [CrossRef]
  29. Norris, J.R. Markov Chains; Number 2; Cambridge University Press: Cambridge, UK, 1998. [Google Scholar]
  30. Rogers, L.C.G.; Williams, D. Diffusions, Markov Processes and Martingales: Foundations; Cambridge University Press: Cambridge, UK, 2000; Volume 1. [Google Scholar]
Figure 1. Configuration of NMASs under impulsive deception attacks (where $A_i$ and $S_i$ denote the actuator and sensor of the ith agent, respectively, and $l_i$ denotes the $l_i$th agent, $i = 1, 2, \ldots, N$).
Figure 2. The communication topology of the NMASs (22).
Figure 3. The Markovian chain generated by the probability transition matrix Q.
Figure 4. The error trajectories of agents in the absence of a controller.
Figure 5. The error trajectories of agents in Case 1 with stochastic gains and Markovian instants.
Figure 6. The error trajectories of agents in Case 2 with Markovian instants adopting a stationary distribution.
Figure 7. The error trajectories of agents in Case 3 with determined gains and Markovian instants.
Figure 8. The error trajectories of agents in Case 4 with stochastic gains and Markovian instants.
Figure 9. The error trajectories of agents affected by an attack without the controller.
Figure 10. The error trajectories of agents with control but without an attack.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
