Article

Adaptive Consensus of the Stochastic Leader-Following Multi-Agent System with Time Delay

School of Mathematics and Statistics, Suzhou University, Suzhou 234000, China
*
Author to whom correspondence should be addressed.
Mathematics 2023, 11(16), 3517; https://doi.org/10.3390/math11163517
Submission received: 27 July 2023 / Revised: 9 August 2023 / Accepted: 11 August 2023 / Published: 14 August 2023

Abstract

For a multi-agent system subject to time delay and noise, the adaptive consensus tracking control problem is analyzed via a Lyapunov function. The main purpose of this study is to design an adaptive control protocol such that, even if there is time delay among the agents, the protocol still ensures the consensus of the stochastic system. The main contribution is to revise protocols that were previously applicable only to systems without time delay. Because the system is inevitably disrupted by time delay and noise during the interaction process, achieving coordination and consensus is difficult. To enable the followers to track the leader, a novel adaptive law depending on the Riccati equation is first proposed; this law differs completely from previous mandatory control laws, which depend entirely on a known function. The ability to be adjusted online based on the state of the system is the major feature of the adaptive law. When there are interaction noise and time delay between the followers and the leader, a special Lyapunov function is constructed to prove the adaptive consensus, and an upper bound on the time delay is obtained using Itô integral theory. Finally, if the time delay of the system approaches zero, it is shown that the adaptive law still ensures that each follower tracks the leader under simpler conditions.
MSC:
93A16; 68T42

1. Introduction

A multi-agent system can complete a complex task through mutual coordination among agents, and it has become a hotspot in current academic research. Centralized control and distributed control are the two main directions of current research on multi-agent applications. The focus is on distributed control, since it is more fault-tolerant and has lower cost requirements than centralized control. The applications of distributed control in multi-agent systems include unmanned aerial vehicles, smart grids, target tracking, traffic control and other fields [1,2,3]. The core task in many distributed control systems is to seek a suitable control protocol that drives all agents to the same state, which is called the consensus of the system. Currently, research on consensus focuses on random disturbance control, finite-time control, event-triggered control, distributed optimal control and so on.
Since Vicsek et al. [4] proposed a special mathematical model and discovered that all agents ultimately reach the same state under specific conditions, the multi-agent system has quickly attracted the attention of a large number of scholars. Recently, Qin et al. [5] and Amirkhani et al. [6] reviewed the theoretical progress on consensus and introduced some difficulties in the field. In order to achieve consensus, it is often necessary to constrain the topology of the system and construct an appropriate control protocol: an undirected graph is usually required to be connected, while a directed graph is usually required to be balanced. This paper mainly studies adaptive consensus on a directed graph. Our goal is to build an adaptive protocol that enables the followers to track a certain objective, where the problem is disturbed by noise and exhibits a lag (time-delay) effect. Up to now, numerous academics have investigated the leader-following consensus from various angles. Jiang et al. [7] discussed the tracking problem when the equations of state contain time-varying matrices. A similar consensus problem was analyzed under the event-triggered mechanism [8,9,10]. Zhang et al. [11] extended the tracking problem to stochastic systems and utilized mathematical expectation to analyze it. The multi-agent systems mentioned in these references all have definite models. However, the internal structure of a system is often uncertain in complex environments, so adaptive control methods are proposed to continuously update the structure of the system.
Adaptive control is a method that automatically adjusts its own control parameters as the environment changes in order to achieve the best performance. Adaptive consensus can be defined as the state of all agents finally becoming consistent under adaptive control. The adaptive law can be seen as the updating rule of the control parameters, and it is usually represented by a differential equation. Adaptive algorithms are usually characterized by information and intelligence; the information in this paper mainly comes from the state of the system, and the intelligence is determined by the adaptive laws. Such algorithms are often combined with machine learning theory and applied to game scenarios. Adaptive control was initially applied in the aerospace field, and Whitaker's work was crucial to the advancement of the method. Currently, this technology has found extensive use in fields such as aerospace, power, transportation, robotics, etc. The creation of a suitable adaptive law is the crux of the challenge for this technology. For a multi-agent system, when the mandatory gain is independent of the states of output and input, Li et al. [12] and Cheng et al. [13] respectively analyzed the average consensus. Zong et al. [14] investigated the random weak consensus under a mandatory gain. For the adaptive gain that can be dynamically modified according to the current state, Knotek et al. [15] established an adaptive control law with decaying gain, and edge-based adaptive techniques for nonlinear multi-agent systems were considered by Yu et al. [16]. Luo et al. [17] analyzed a gradient-descent-based adaptive law and gave a scheme for the optimal control problem of uncertain multi-agent systems. Li et al. [18] proposed a value iteration strategy and used the gradient descent method to update the weights.
For a self-organized system, some important self-organized models were discussed in [19], and a self-organized interlimb coordination control was analyzed in [20]. For the optimal control problem of discrete systems, Peng et al. [2] designed a strategy for the adaptive adjustment of weight vectors based on neural network approximation. Nevertheless, these studies did not consider the effects of noise and time delay. Since the system is inevitably disturbed by time delay and noise at the same time, it is necessary to study the adaptive consensus under noise and time delay.
Currently, there have been many results on consensus in noisy environments, but less research has been conducted on adaptive consensus. In fact, the interaction network among agents is subject to noise, so a stochastic multi-agent system should be considered. Itô integral theory provides an important tool for the adaptive problems of stochastic systems. When agents have noise perturbations during communication, Duan et al. [21] designed an adaptive control protocol and proved that the tracking error of the problem is bounded. Huang [22] discussed the adaptive consensus of uncertain systems and proved that agents reach average consensus in the almost sure sense. Xiao et al. [23] proposed adaptive finite-time control protocols for a leaderless system and proved that similar properties hold for systems with a leader. The bipartite adaptive consensus of stochastic systems was taken into account in [24,25]. However, these references did not consider the interference of time delay. Time delay often degrades the performance of a control system and disrupts its stability. Furthermore, the presence of time delay causes great difficulties in the analysis and synthesis of the control system.
When the system is jointly disturbed by time delay and noise during the interaction process, the dynamical model of the system has a more complex form. Only a few papers have considered the adaptive consensus in this situation. When the adaptive gain is mandatory, Zong et al. [14] analyzed the tracking problem under the joint disturbance of noise and time delay. Also, a neural network approach was employed to analyze the topic for a mandatory gain [26]. In practical applications, the mandatory gain has to be accurately selected based on the actual situation, which is often quite difficult. This paper considers an adaptive law that can dynamically adjust on the basis of the state. For the tracking problem of multi-agent systems, we first propose an adaptive control protocol and design a novel adaptive law; then a Lyapunov function is used to prove the adaptive consensus of the system. Finally, when the time delay tends to zero, we simplify the conditions for the system to attain adaptive consensus. The significance of this paper is to revise control protocols that were previously applicable only to systems without time delay. Our proposed adaptive control protocol can ensure the consensus of a system under the interference of noise and time delay. The contributions are as follows:
(1)
For a stochastic multi-agent system, a novel adaptive law is proposed for the first time for the case where there is a lag phenomenon in the interaction process. The control laws in [12,13,14] were all mandatory and often required precise selection to determine their specific form. The adaptive control proposed in this paper can be dynamically adjusted based on the current state of the system, thus avoiding the difficulty of precise selection.
(2)
Whether or not the stochastic multi-agent system has time delay, the adaptive law can ensure consensus. In contrast, the adaptive laws in [15,16] were only applied to multi-agent systems without delay and noise. Additionally, the sufficient conditions for consensus in this paper are simpler in the delay-free case.
(3)
Compared with some early references [21,24], the final tracking error in this paper has a smaller value under the adaptive law. Furthermore, when the intensity of the noise approaches zero, the final dynamic error tends to zero, whereas many previous conclusions only converge to a non-zero constant.

2. Theoretical Basis

The system in this work includes one leader and $N$ followers, denoted by $v_0, v_1, \dots, v_N$, respectively. $G = (V, \mathcal{N}, A)$ represents a digraph among the followers, where $V = \{v_1, v_2, \dots, v_N\}$ and $\mathcal{N} \subseteq V \times V$ are the sets of followers and edges, respectively. $A = [e_{ij}] \in \mathbb{R}^{N \times N}$ is called the adjacency matrix; its elements satisfy $e_{ij} = 1$ if and only if $(v_i, v_j) \in \mathcal{N}$, and $e_{ij} = 0$ otherwise. $N_i = \{v_j \in V : (v_j, v_i) \in \mathcal{N}\}$ is the neighbor set, and $L_G = [l_{ij}]$ is the Laplacian matrix, where $l_{ii} = \sum_{j \neq i} e_{ij}$ and $l_{ij} = -e_{ij}$ for $i \neq j$. In addition, assuming $\tilde{G}$ is the digraph composed of all agents, the matrix $L_{\tilde{G}}$ is defined by
$$L_{\tilde{G}} = \begin{pmatrix} 0 & 0_{1 \times N} \\ -E_0 \cdot 1_N & L_G + E_0 \end{pmatrix},$$
where $E_0 = \operatorname{diag}\{e_{10}, e_{20}, \dots, e_{N0}\}$ and $1_N = [1, 1, \dots, 1]^T$. The difference between the two digraphs is that $\tilde{G}$ contains the leader node, while $G$ does not.
Suppose the leader $v_0$ is globally reachable in this paper, which means a directed path from each follower $v_i$ to the leader $v_0$ can be found. When all elements of the adjacency matrix $A$ satisfy $\sum_{j=1}^{N} e_{ij} = \sum_{j=1}^{N} e_{ji}$, the digraph is balanced. The following lemmas are introduced.
Lemma 1 
([27]). Assuming $\tilde{G}$ is a digraph, the following three properties hold:
(1) 
The node $v_0$ is globally reachable.
(2) 
For the matrix $H = L_G + E_0$, the real parts of all eigenvalues are positive.
(3) 
If, further, the digraph $G$ is balanced, then $H + H^T$ is positive definite.
Lemma 2 
([28]). The Kronecker product of two matrices is represented by the symbol $\otimes$. Assuming the matrices $M_1$, $M_2$, $M_3$ and $M_4$ have appropriate dimensions, the following properties hold:
(1) 
$M_1 \otimes (M_2 + M_3) = M_1 \otimes M_2 + M_1 \otimes M_3$.
(2) 
$(M_1 \otimes M_2) \otimes M_3 = M_1 \otimes (M_2 \otimes M_3)$.
(3) 
$(M_1 \otimes M_2)(M_3 \otimes M_4) = M_1 M_3 \otimes M_2 M_4$.
(4) 
$(M_1 \otimes M_2)^T = M_1^T \otimes M_2^T$.
(5) 
$\operatorname{tr}(M_1 \otimes M_2) = \operatorname{tr}(M_1)\,\operatorname{tr}(M_2)$.
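The Kronecker product identities of Lemma 2 can be checked numerically. The following sketch uses NumPy's `kron` with random matrices of our own choosing (not tied to any data in the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
M1, M2, M3, M4 = (rng.standard_normal((2, 2)) for _ in range(4))

# (1) Distributivity over addition
assert np.allclose(np.kron(M1, M2 + M3), np.kron(M1, M2) + np.kron(M1, M3))
# (2) Associativity
assert np.allclose(np.kron(np.kron(M1, M2), M3), np.kron(M1, np.kron(M2, M3)))
# (3) Mixed-product property
assert np.allclose(np.kron(M1, M2) @ np.kron(M3, M4), np.kron(M1 @ M3, M2 @ M4))
# (4) Transpose
assert np.allclose(np.kron(M1, M2).T, np.kron(M1.T, M2.T))
# (5) Trace
assert np.isclose(np.trace(np.kron(M1, M2)), np.trace(M1) * np.trace(M2))
print("all Kronecker identities hold")
```

Property (3), the mixed-product rule, is the one used most heavily in the proofs below, e.g. when bounding $[S(t)H \otimes PBK][S(t)H \otimes PBK]^T$.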

3. The Adaptive Consensus

Considering a multi-agent system, its dynamic behavior can be expressed as
$$\dot{x}_i(t) = A x_i(t) + B u_i(t), \quad i = 1, 2, \dots, N. \qquad (1)$$
In the equation, $u_i(t) \in \mathbb{R}^p$ denotes the input to be designed, and $x_i(t) \in \mathbb{R}^n$ represents the position state. $A$ is an $n \times n$ constant matrix, $B$ is an $n \times p$ constant matrix, and both matrices are known. The model of the leader is represented as
$$\dot{x}_0(t) = A x_0(t). \qquad (2)$$
In order to obtain the adaptive consensus of system (1), the key issue is to construct a control protocol $u_i(t)$ containing an adaptive gain based on the communication graph among agents, and then use the state $x_i(t)$ to design the adjustment mechanism of the adaptive gain. The adaptive method can rely on relatively little prior knowledge about the model. If the system (1) is not disturbed by time delay or noise, a general control protocol can be represented as $u_i(t) = cK\sum_{j \in N_i} e_{ij}(x_j(t) - x_i(t))$, where $c$ is a coupling weight and $K$ is a feedback gain matrix. This protocol was investigated in [29,30], which pointed out that the constant $c$ is related to the global information of the system. When there exist noise interference and time delay in the communication process, this paper proposes an adaptive control protocol, designs a novel adaptive law, and analyzes the impact of the time delay on the system.
For the $n$-dimensional probability space $(\Omega, \mathcal{F}, P)$, the standard Brownian motions in the space are denoted by $W_i(t) \in \mathbb{R}^n$, and the standard white noises are written as $\eta_i(t) \in \mathbb{R}^n$ and satisfy $\int_0^t \eta_i(s)\,ds = W_i(t)$. For the system (1), the control protocol perturbed by noise and time delay is designed as
$$u_i(t) = s_i(t) K \Big[ \sum_{j \in N_i} e_{ij}\big(x_j(t-\tau) - x_i(t-\tau)\big) + e_{i0}\big(x_0(t-\tau) - x_i(t-\tau)\big) + e_{i0}\sigma_{0i}\eta_i(t) \Big]. \qquad (3)$$
In the protocol, $\tau > 0$ is the time delay, $\sigma_{0i}$ is the noise intensity, the constants $e_{ij}$ and $e_{i0}$ are the digraph weights of the multi-agent system, and the matrix $K \in \mathbb{R}^{p \times n}$ is called the feedback gain matrix. The adaptive gain $s_i(t)$ satisfies $\underline{\theta} \le s_i(t) \le \bar{\theta}$, where $\underline{\theta}$ and $\bar{\theta}$ are two positive constants. The difficulty of solving adaptive control problems lies in designing an appropriate adaptive law. For the control protocol (3), in order to obtain the adaptive consensus of the system, the main difficulty is to construct a differential equation that the gain $s_i(t)$ satisfies.
When the control protocol (3) contains neither time delay nor noise, many scholars have already studied the adaptive consensus. Li et al. [31] considered the adaptive tracking problem of a system with a leader. Adaptive event-triggering theory was discussed for a linear time-varying system in [32]. Deng et al. [33] analyzed the adaptive tracking problem of high-order systems. However, time delay and noise are inevitable in the process of agent interaction. For leaderless multi-agent systems, Wu et al. [34] designed an adaptive control protocol in noisy environments. The adaptive consensus with multiplicative noise was analyzed in [35]. Duan et al. [21] discussed a first-order leader-following system with noise in the absence of time delay. In this section, the adaptive problem of systems (1) and (2) is studied under the control protocol (3), which considers not only the impact of noise but also the effect of time delay, so it is more in line with real scenarios.
If the adaptive gain is mandatory, such as $s_i(t) = s(t) = \frac{1}{1+t}$ or $\frac{\log(1+t)}{1+t}$, there have been many results. The mean square consensus was achieved in [12,13]. Zong et al. [14] investigated the adaptive protocol of the system under time delay and noise. Nevertheless, the mandatory gain has to be accurately selected in order to satisfy the limiting conditions, which is often quite difficult. Therefore, an adaptive gain that can be dynamically adjusted according to the state has obvious advantages in practical applications. In order to solve the consensus problem for the system (1)–(3), we construct a novel adaptive law as
$$\dot{s}_i(t) = \varepsilon_i(t)^T \sum_{j=1}^{N} h_{ij} \Gamma \varepsilon_j(t) - \big(s_i(t) - \delta\big), \qquad (4)$$
where the constant $\delta > \frac{1}{\lambda_{\min}(H^T + H)}$, the dynamic error $\varepsilon_i(t) = x_i(t) - x_0(t)$, and $h_{ij}$ is the $(i,j)$ element of $H$. The adaptive law (4) can continuously improve the structure of the model by extracting the model's information, thereby making the model more and more accurate. It is worth noting that the adaptive laws proposed in the literature differ from one another, such as the mandatory adaptive law [13,14], the decaying adaptive law [15], the edge-based adaptive law [16], etc. The advantage of the adaptive law (4) is that it can be applied to multi-agent systems with noise and time delay. In order to prove the consensus of the system, the solution of the algebraic Riccati equation is used to build the matrix $\Gamma$. Let $K = B^T P$; the matrix $\Gamma = PBK$ is called the adaptive gain matrix in (4), and $P$ is a positive definite matrix satisfying the algebraic Riccati equation
$$A^T P + P A - P B B^T P + k I = 0, \quad (k > 0). \qquad (5)$$
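Given $(A, B)$ and $k$, the matrices $P$, $K = B^T P$ and $\Gamma = PBK$ can be computed with a standard Riccati solver. A minimal sketch, with illustrative matrices $A$ and $B$ that are not from the paper, assuming SciPy's `solve_continuous_are` (which solves $A^T P + PA - PBR^{-1}B^TP + Q = 0$, so $R = I$ and $Q = kI$ recover Equation (5)):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Illustrative matrices (not from the paper)
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
B = np.array([[0.0], [1.0]])
k = 1.0

# With R = I and Q = k I this is exactly Equation (5)
P = solve_continuous_are(A, B, k * np.eye(2), np.eye(1))

K = B.T @ P            # feedback gain K = B^T P
Gamma = P @ B @ K      # adaptive gain matrix Γ = P B K used in the law (4)

# Verify the Riccati residual and positive definiteness of P
residual = A.T @ P + P @ A - P @ B @ B.T @ P + k * np.eye(2)
assert np.allclose(residual, 0, atol=1e-8)
assert np.all(np.linalg.eigvalsh(P) > 0)
```

The assertions simply re-check that the returned $P$ solves (5) and is positive definite, which is what the Lyapunov constructions below rely on.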
The above equation has been widely applied to prove the stability of systems since it was proposed. Generally, the matrix $P$ can be used to construct Lyapunov functions; combined with the special form of the Riccati equation, it is then easy to verify the conditions of the stability theorem.
Remark 1. 
The adaptive law (4) has a simple structure and can be rewritten as
$$\begin{pmatrix} \dot{s}_1(t) \\ \dot{s}_2(t) \\ \vdots \\ \dot{s}_N(t) \end{pmatrix} = \begin{pmatrix} \varepsilon_1(t)^T & 0 & \cdots & 0 \\ 0 & \varepsilon_2(t)^T & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & \varepsilon_N(t)^T \end{pmatrix} (H \otimes PBK) \begin{pmatrix} \varepsilon_1(t) \\ \varepsilon_2(t) \\ \vdots \\ \varepsilon_N(t) \end{pmatrix} - \begin{pmatrix} s_1(t) - \delta \\ s_2(t) - \delta \\ \vdots \\ s_N(t) - \delta \end{pmatrix}.$$
Although many different forms of adaptive laws have been proposed, most cannot be represented by Kronecker products, which makes previous adaptive laws appear more complex. In addition, for the mandatory gain $s(t)$ proposed in much of the literature, the two constraints $\int_0^{\infty} s(t)\,dt = \infty$ and $\int_0^{\infty} s^2(t)\,dt < \infty$ need to be used, such as for the continuous mandatory gain in [13,14] and the discrete mandatory gain in [34]. The adaptive gain proposed in this paper automatically adjusts according to the current state.
Let $\varepsilon(t) = [(x_1(t)-x_0(t))^T, (x_2(t)-x_0(t))^T, \dots, (x_N(t)-x_0(t))^T]^T$; the dynamic error equation can be abbreviated as
$$d\varepsilon(t) = \big[(I_N \otimes A)\varepsilon(t) - (S(t)H \otimes BK)\varepsilon(t-\tau)\big]dt + (S(t)E_0 C_0 \otimes BK)\,dW, \qquad (6)$$
where $S(t) = \operatorname{diag}\{s_1(t), s_2(t), \dots, s_N(t)\}$ is a diagonal matrix, $I_N$ is the identity matrix, $C_0 = \operatorname{diag}\{\sigma_{01}, \sigma_{02}, \dots, \sigma_{0N}\}$ is the noise-intensity matrix, $dW$ is an $nN$-dimensional standard Brownian motion, and $E_0 = \operatorname{diag}\{e_{10}, e_{20}, \dots, e_{N0}\}$ reflects the interaction with the leader. Equation (6) is a stochastic differential equation, which includes a drift part and a diffusion part; the diffusion part reflects the changes of the disturbance. The following theorem demonstrates the adaptive consensus of the system (1)–(3) when the adaptive law adopts Equation (4).
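To make the interplay of the protocol (3), the adaptive law (4) and the error dynamics concrete, the following Euler–Maruyama sketch simulates a scalar-state instance ($n = p = 1$, so the Riccati equation (5) reduces to $2aP - P^2 + k = 0$). All parameters ($a$, $k$, $\tau$, $\sigma_{0i}$, the matrix $H$) are illustrative choices of our own, not the paper's simulation:

```python
import numpy as np

rng = np.random.default_rng(1)

# Scalar-state sketch (n = p = 1) with illustrative parameters
a, k = 0.2, 4.0                      # follower: x_dot = a x + u
P = a + np.sqrt(a * a + k)           # positive root of 2 a P - P^2 + k = 0
K = P                                # K = B^T P with B = 1
Gamma = P * K                        # Γ = P B K

H = np.array([[2., 0., -1.], [-1., 1., 0.], [0., -1., 2.]])
e0 = np.array([1., 0., 1.])          # leader weights e_{i0}
sigma0 = 0.05                        # noise intensity
delta = 1.2 / min(np.linalg.eigvalsh(H + H.T))   # δ > 1/λ_min(H + Hᵀ)

dt, tau, T = 1e-3, 0.02, 10.0
lag, steps = int(tau / dt), int(T / dt)

x0 = 1.0                             # leader: x0_dot = a x0
x = rng.standard_normal(3)           # follower states
s = np.ones(3)                       # adaptive gains s_i(0)
eps_hist = [x - x0] * (lag + 1)      # delay buffer of tracking errors

for _ in range(steps):
    eps = x - x0
    eps_d = eps_hist.pop(0)          # ε(t - τ)
    # protocol (3): delayed relative errors plus leader-channel noise
    u = s * K * (-(H @ eps_d) + e0 * sigma0 * rng.standard_normal(3) / np.sqrt(dt))
    x = x + (a * x + u) * dt
    x0 = x0 + a * x0 * dt
    # adaptive law (4): s_i' = ε_i Γ Σ_j h_ij ε_j - (s_i - δ)
    s = s + (eps * Gamma * (H @ eps) - (s - delta)) * dt
    eps_hist.append(x - x0)

print("final mean-square tracking error:", np.mean((x - x0) ** 2))
```

The final mean-square error stays small but nonzero, which is consistent with the mean square *bounded* consensus proved in Theorem 1: the residual error is driven by the noise intensity.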
Theorem 1. 
Assuming that the digraph $\tilde{G} = (\tilde{V}, \tilde{\mathcal{N}}, \tilde{A})$ of the $N+1$ agents is made up of $N$ followers and one leader, and that its subgraph $G$ over all followers is balanced. For the multi-agent system determined by Equations (1)–(3), if there exists a positive constant $\xi$ satisfying
$$k > \xi\bar{\theta}^2\lambda_{\max}(HH^T)\lambda_{\max}^4(P)\lambda_{\max}^2(BB^T) + 4\xi^{-1}\tau^2\bar{\theta}^2\lambda_{\max}(HH^T)\lambda_{\max}^2(P)\lambda_{\max}^2(BB^T) + 4\xi^{-1}\tau^2\lambda_{\max}(AA^T), \qquad (7)$$
then the mean square bounded consensus can be attained under the adaptive law (4), i.e.,
$$\limsup_{t \to +\infty} E\,|x_i(t) - x_0(t)|^2 \le \epsilon_1,$$
where $E$ denotes the expectation and $\epsilon_1$ is a small constant independent of the time $t$.
Proof. 
The Lyapunov function is chosen as follows,
$$V_1(t) = V_{11}(t) + V_{12}(t) = \varepsilon(t)^T (I_N \otimes P)\varepsilon(t) + w_1\int_{t-\tau}^{t}|\varepsilon(s)|^2 ds + w_2\int_{-\tau}^{0}\!\!\int_{t+\theta}^{t}|\varepsilon(s)|^2 ds\,d\theta + w_3\int_{-\tau}^{0}\!\!\int_{t+\theta}^{t}|\varepsilon(s-\tau)|^2 ds\,d\theta + V_{12}(t),$$
where $V_{12}(t) = \sum_{i=1}^{N}(s_i(t)-\delta)^2$. The Lyapunov function consists of three parts: the first part $\varepsilon(t)^T(I_N \otimes P)\varepsilon(t)$ is similar to the Lyapunov functions constructed in most references; the special integral part is referred to as a degenerate functional and was used by Kolmanovskii et al. [36]; the last part $V_{12}(t)$ is a form commonly used when discussing adaptive consensus, and after taking its derivative, the adaptive law can be used to eliminate some unnecessary terms in the following calculations. If the time $t$ is less than $\tau$ in the double integral $\int_{-\tau}^{0}\int_{t+\theta}^{t}|\varepsilon(s-\tau)|^2 ds\,d\theta$, we let $\varepsilon_i(t)$ equal the initial value $\varepsilon_i(0)$.
Applying the Itô formula to the error closed-loop system (6), the stochastic differential is expressed as
$$dV_1(t) = \mathcal{L}_1 V_1(t)\,dt + 2\varepsilon(t)^T\big[S(t)E_0C_0 \otimes PBK\big]dW,$$
where the generator term is defined as
$$\begin{aligned} \mathcal{L}_1 V_1(t) = {} & \varepsilon(t)^T\big[I_N \otimes (A^T P + PA)\big]\varepsilon(t) - 2\varepsilon(t)^T\big[S(t)H \otimes PBK\big]\varepsilon(t-\tau) + \operatorname{tr}\big(S^2(t)E_0^2C_0^2 \otimes K^T B^T P B K\big) \\ & + w_1|\varepsilon(t)|^2 - w_1|\varepsilon(t-\tau)|^2 + w_2\tau|\varepsilon(t)|^2 - w_2\int_{t-\tau}^{t}|\varepsilon(s)|^2 ds + w_3\tau|\varepsilon(t-\tau)|^2 - w_3\int_{t-\tau}^{t}|\varepsilon(s-\tau)|^2 ds + \dot{V}_{12}(t). \end{aligned}$$
Using the adaptive law (4) and the chain rule, we obtain
$$\begin{aligned} \dot{V}_{12}(t) &= 2\sum_{i=1}^{N}(s_i(t)-\delta)\dot{s}_i(t) = 2\sum_{i=1}^{N}(s_i(t)-\delta)\,\varepsilon_i(t)^T\sum_{j=1}^{N}h_{ij}PBK\varepsilon_j(t) - 2\sum_{i=1}^{N}(s_i(t)-\delta)^2 \\ &= 2\sum_{i=1}^{N}s_i(t)\,\varepsilon_i(t)^T\sum_{j=1}^{N}h_{ij}PBK\varepsilon_j(t) - 2\delta\sum_{i=1}^{N}\varepsilon_i(t)^T\sum_{j=1}^{N}h_{ij}PBK\varepsilon_j(t) - 2\sum_{i=1}^{N}(s_i(t)-\delta)^2 \\ &= 2\varepsilon(t)^T\big[S(t)H \otimes PBK\big]\varepsilon(t) - \delta\varepsilon(t)^T\big[(H^T+H) \otimes PBK\big]\varepsilon(t) - 2\sum_{i=1}^{N}(s_i(t)-\delta)^2. \end{aligned}$$
By Lemma 1, the matrix $H + H^T$ is positive definite, which means all its eigenvalues are greater than zero. Thus, we obtain $\delta\lambda_{\min}(H^T+H) > 1$ from the condition $\delta > \frac{1}{\lambda_{\min}(H^T+H)}$. From the Riccati equation, we have
$$\begin{aligned} \mathcal{L}_1 V_1(t) = {} & \varepsilon(t)^T\big[I_N \otimes (A^TP+PA)\big]\varepsilon(t) - \delta\varepsilon(t)^T\big[(H^T+H)\otimes PBK\big]\varepsilon(t) + 2\varepsilon(t)^T\big[S(t)H\otimes PBK\big]\big(\varepsilon(t)-\varepsilon(t-\tau)\big) \\ & + \operatorname{tr}\big(S^2(t)E_0^2C_0^2\otimes K^TB^TPBK\big) + w_1|\varepsilon(t)|^2 - w_1|\varepsilon(t-\tau)|^2 + w_2\tau|\varepsilon(t)|^2 - w_2\int_{t-\tau}^t|\varepsilon(s)|^2ds \\ & + w_3\tau|\varepsilon(t-\tau)|^2 - w_3\int_{t-\tau}^t|\varepsilon(s-\tau)|^2ds - 2\sum_{i=1}^N(s_i(t)-\delta)^2 \\ \le {} & \varepsilon(t)^T\big[I_N\otimes(A^TP+PA-PBK)\big]\varepsilon(t) + 2\varepsilon(t)^T\big[S(t)H\otimes PBK\big]\big(\varepsilon(t)-\varepsilon(t-\tau)\big) + \operatorname{tr}\big(S^2(t)E_0^2C_0^2\otimes K^TB^TPBK\big) \\ & + w_1|\varepsilon(t)|^2 - w_1|\varepsilon(t-\tau)|^2 + w_2\tau|\varepsilon(t)|^2 - w_2\int_{t-\tau}^t|\varepsilon(s)|^2ds + w_3\tau|\varepsilon(t-\tau)|^2 - w_3\int_{t-\tau}^t|\varepsilon(s-\tau)|^2ds - 2\sum_{i=1}^N(s_i(t)-\delta)^2 \\ = {} & -k|\varepsilon(t)|^2 + 2\varepsilon(t)^T\big[S(t)H\otimes PBK\big]\big(\varepsilon(t)-\varepsilon(t-\tau)\big) + \operatorname{tr}\big(S^2(t)E_0^2C_0^2\otimes K^TB^TPBK\big) \\ & + w_1|\varepsilon(t)|^2 - w_1|\varepsilon(t-\tau)|^2 + w_2\tau|\varepsilon(t)|^2 - w_2\int_{t-\tau}^t|\varepsilon(s)|^2ds + w_3\tau|\varepsilon(t-\tau)|^2 - w_3\int_{t-\tau}^t|\varepsilon(s-\tau)|^2ds - 2\sum_{i=1}^N(s_i(t)-\delta)^2. \end{aligned}$$
Noting the inequality $2ab \le \xi a^2 + \xi^{-1}b^2$ for any positive constant $\xi$, we have
$$\begin{aligned} 2\varepsilon(t)^T\big[S(t)H\otimes PBK\big]\big(\varepsilon(t)-\varepsilon(t-\tau)\big) &\le \xi\varepsilon(t)^T\big[S(t)H\otimes PBK\big]\big[S(t)H\otimes PBK\big]^T\varepsilon(t) + \xi^{-1}\big|\varepsilon(t)-\varepsilon(t-\tau)\big|^2 \\ &\le \xi\bar{\theta}^2\lambda_{\max}(HH^T)\lambda_{\max}^4(P)\lambda_{\max}^2(BB^T)\,|\varepsilon(t)|^2 + \xi^{-1}\big|\varepsilon(t)-\varepsilon(t-\tau)\big|^2. \end{aligned}$$
Now, we can obtain from the above inequality
$$\begin{aligned} \mathcal{L}_1V_1(t) \le {} & -k|\varepsilon(t)|^2 + \xi\bar{\theta}^2\lambda_{\max}(HH^T)\lambda_{\max}^4(P)\lambda_{\max}^2(BB^T)|\varepsilon(t)|^2 + \xi^{-1}|\varepsilon(t)-\varepsilon(t-\tau)|^2 + \bar{\theta}^2\lambda_{\max}^3(P)\lambda_{\max}^2(BB^T)\max_i(e_{i0}\sigma_{0i})^2 \\ & + w_1|\varepsilon(t)|^2 - w_1|\varepsilon(t-\tau)|^2 + w_2\tau|\varepsilon(t)|^2 - w_2\int_{t-\tau}^t|\varepsilon(s)|^2ds + w_3\tau|\varepsilon(t-\tau)|^2 - w_3\int_{t-\tau}^t|\varepsilon(s-\tau)|^2ds - 2\sum_{i=1}^N(s_i(t)-\delta)^2 \\ = {} & -\Lambda_1|\varepsilon(t)|^2 - \Lambda_2|\varepsilon(t-\tau)|^2 + \xi^{-1}|\varepsilon(t)-\varepsilon(t-\tau)|^2 + \bar{\theta}^2\lambda_{\max}^3(P)\lambda_{\max}^2(BB^T)\max_i(e_{i0}\sigma_{0i})^2 \\ & - w_2\int_{t-\tau}^t|\varepsilon(s)|^2ds - w_3\int_{t-\tau}^t|\varepsilon(s-\tau)|^2ds - 2\sum_{i=1}^N(s_i(t)-\delta)^2, \end{aligned}$$
where the two constants in the above inequality are denoted as
$$\Lambda_1 = k - \xi\bar{\theta}^2\lambda_{\max}(HH^T)\lambda_{\max}^4(P)\lambda_{\max}^2(BB^T) - w_1 - w_2\tau \quad \text{and} \quad \Lambda_2 = w_1 - w_3\tau.$$
According to the error closed-loop equation (6) and the Hölder inequality, it follows that
$$\begin{aligned} |\varepsilon(t)-\varepsilon(t-\tau)|^2 &= \Big|\int_{t-\tau}^t d\varepsilon(s)\Big|^2 = \Big|\int_{t-\tau}^t\big[(I_N\otimes A)\varepsilon(s) - (S(s)H\otimes BK)\varepsilon(s-\tau)\big]ds + \int_{t-\tau}^t(S(s)E_0C_0\otimes BK)\,dW\Big|^2 \\ &\le 4\Big|\int_{t-\tau}^t(I_N\otimes A)\varepsilon(s)\,ds\Big|^2 + 4\Big|\int_{t-\tau}^t(S(s)H\otimes BK)\varepsilon(s-\tau)\,ds\Big|^2 + 4\Big|\int_{t-\tau}^t(S(s)E_0C_0\otimes BK)\,dW\Big|^2 \\ &\le 4\tau\lambda_{\max}(AA^T)\int_{t-\tau}^t|\varepsilon(s)|^2ds + 4\tau\bar{\theta}^2\lambda_{\max}(HH^T)\lambda_{\max}^2(P)\lambda_{\max}^2(BB^T)\int_{t-\tau}^t|\varepsilon(s-\tau)|^2ds \\ &\quad + 4\bar{\theta}^2\lambda_{\max}^2(P)\lambda_{\max}^2(BB^T)\max_i(e_{i0}\sigma_{0i})^2\Big|\int_{t-\tau}^t dW\Big|^2. \end{aligned}$$
So, we obtain
$$\begin{aligned} \mathcal{L}_1V_1(t) \le {} & -\frac{\Lambda_1}{\lambda_{\max}(P)}\varepsilon(t)^T(I_N\otimes P)\varepsilon(t) - \Lambda_2|\varepsilon(t-\tau)|^2 - \big[w_2 - 4\xi^{-1}\tau\lambda_{\max}(AA^T)\big]\int_{t-\tau}^t|\varepsilon(s)|^2ds \\ & - \big[w_3 - 4\xi^{-1}\tau\bar{\theta}^2\lambda_{\max}^2(BB^T)\lambda_{\max}^2(P)\lambda_{\max}(HH^T)\big]\int_{t-\tau}^t|\varepsilon(s-\tau)|^2ds \\ & - \alpha_1\Big[w_1\int_{t-\tau}^t|\varepsilon(s)|^2ds + w_2\int_{-\tau}^0\!\!\int_{t+\theta}^t|\varepsilon(s)|^2ds\,d\theta + w_3\int_{-\tau}^0\!\!\int_{t+\theta}^t|\varepsilon(s-\tau)|^2ds\,d\theta + V_{12}(t)\Big] \\ & + \alpha_1\Big[w_1\int_{t-\tau}^t|\varepsilon(s)|^2ds + w_2\int_{-\tau}^0\!\!\int_{t+\theta}^t|\varepsilon(s)|^2ds\,d\theta + w_3\int_{-\tau}^0\!\!\int_{t+\theta}^t|\varepsilon(s-\tau)|^2ds\,d\theta + V_{12}(t)\Big] \\ & + \bar{\theta}^2\lambda_{\max}^3(P)\lambda_{\max}^2(BB^T)\max_i(e_{i0}\sigma_{0i})^2 - 2\sum_{i=1}^N(s_i(t)-\delta)^2 + 4\xi^{-1}\bar{\theta}^2\lambda_{\max}^2(P)\lambda_{\max}^2(BB^T)\max_i(e_{i0}\sigma_{0i})^2\Big|\int_{t-\tau}^t dW\Big|^2, \end{aligned}$$
where $\alpha_1$ is a positive constant that will be determined later.
From condition (7), we have
$$\frac{k - \xi\bar{\theta}^2\lambda_{\max}(HH^T)\lambda_{\max}^4(P)\lambda_{\max}^2(BB^T)}{\tau} > 4\xi^{-1}\tau\bar{\theta}^2\lambda_{\max}(HH^T)\lambda_{\max}^2(P)\lambda_{\max}^2(BB^T) + 4\xi^{-1}\tau\lambda_{\max}(AA^T).$$
We can select $w_2$ and $w_3$ to satisfy
$$w_2 > 4\xi^{-1}\tau\lambda_{\max}(AA^T), \qquad w_3 > 4\xi^{-1}\tau\bar{\theta}^2\lambda_{\max}(HH^T)\lambda_{\max}^2(P)\lambda_{\max}^2(BB^T)$$
and
$$w_2 + w_3 < \frac{k - \xi\bar{\theta}^2\lambda_{\max}(HH^T)\lambda_{\max}^4(P)\lambda_{\max}^2(BB^T)}{\tau}.$$
From the above inequalities, we have
$$k - \xi\bar{\theta}^2\lambda_{\max}(HH^T)\lambda_{\max}^4(P)\lambda_{\max}^2(BB^T) - w_2\tau > w_3\tau.$$
Now, we can select $w_1$ to satisfy
$$k - \xi\bar{\theta}^2\lambda_{\max}(HH^T)\lambda_{\max}^4(P)\lambda_{\max}^2(BB^T) - w_2\tau > w_1 > w_3\tau,$$
which implies $\Lambda_1 = k - \xi\bar{\theta}^2\lambda_{\max}(HH^T)\lambda_{\max}^4(P)\lambda_{\max}^2(BB^T) - w_1 - w_2\tau > 0$ and $\Lambda_2 = w_1 - w_3\tau > 0$.
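The feasibility of this selection chain can be checked with stand-in numbers; the constants `A1`, `B1`, `C1` below are hypothetical placeholders for the spectral quantities appearing in (7), not values computed from the paper:

```python
# Hypothetical constants standing in for the spectral quantities in (7):
# A1 ~ θ̄² λmax(HHᵀ) λmax⁴(P) λmax²(BBᵀ),  B1 ~ θ̄² λmax(HHᵀ) λmax²(P) λmax²(BBᵀ),
# C1 ~ λmax(AAᵀ)
k, xi, tau = 10.0, 1.0, 0.05
A1, B1, C1 = 2.0, 3.0, 1.5

# Condition (7): k > ξ A1 + 4 ξ⁻¹ τ² (B1 + C1)
assert k > xi * A1 + 4 / xi * tau**2 * (B1 + C1)

# Choose w2, w3 strictly above their lower bounds
w2 = 4 / xi * tau * C1 + 0.1
w3 = 4 / xi * tau * B1 + 0.1
assert w2 + w3 < (k - xi * A1) / tau

# Pick w1 in the middle of its admissible interval
w1 = 0.5 * ((k - xi * A1 - w2 * tau) + w3 * tau)
assert (k - xi * A1) - w2 * tau > w1 > w3 * tau

# Both Lyapunov coefficients come out positive, as claimed
Lam1 = k - xi * A1 - w1 - w2 * tau
Lam2 = w1 - w3 * tau
assert Lam1 > 0 and Lam2 > 0
```

The point of the sketch is only that condition (7) leaves the interval for $w_1$ nonempty, so $\Lambda_1 > 0$ and $\Lambda_2 > 0$ can always be achieved simultaneously.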
On the other hand, the positive constant $\alpha_1$ is selected to satisfy
$$\alpha_1 \le 2, \quad \alpha_1 \le \frac{\Lambda_1}{\lambda_{\max}(P)}, \quad \alpha_1 \le \frac{w_2 - 4\xi^{-1}\tau\lambda_{\max}(AA^T)}{w_1 + w_2\tau}, \quad \alpha_1 \le \frac{w_3 - 4\xi^{-1}\tau\bar{\theta}^2\lambda_{\max}^2(BB^T)\lambda_{\max}^2(P)\lambda_{\max}(HH^T)}{w_3\tau}.$$
Noting that $\int_{-\tau}^0\int_{t+\theta}^t|\varepsilon(s)|^2ds\,d\theta \le \tau\int_{t-\tau}^t|\varepsilon(s)|^2ds$ and $\int_{-\tau}^0\int_{t+\theta}^t|\varepsilon(s-\tau)|^2ds\,d\theta \le \tau\int_{t-\tau}^t|\varepsilon(s-\tau)|^2ds$, the following inequality can be obtained from the previous bound:
$$\begin{aligned} \mathcal{L}_1V_1(t) \le {} & -\alpha_1 V_1(t) - \Lambda_2|\varepsilon(t-\tau)|^2 - \big[w_2 - 4\xi^{-1}\tau\lambda_{\max}(AA^T) - \alpha_1(w_1+w_2\tau)\big]\int_{t-\tau}^t|\varepsilon(s)|^2ds \\ & - \big[w_3 - 4\xi^{-1}\tau\bar{\theta}^2\lambda_{\max}^2(BB^T)\lambda_{\max}^2(P)\lambda_{\max}(HH^T) - \alpha_1 w_3\tau\big]\int_{t-\tau}^t|\varepsilon(s-\tau)|^2ds - (2-\alpha_1)\sum_{i=1}^N(s_i(t)-\delta)^2 \\ & + \bar{\theta}^2\lambda_{\max}^3(P)\lambda_{\max}^2(BB^T)\max_i(e_{i0}\sigma_{0i})^2 + 4\xi^{-1}\bar{\theta}^2\lambda_{\max}^2(P)\lambda_{\max}^2(BB^T)\max_i(e_{i0}\sigma_{0i})^2\Big|\int_{t-\tau}^t dW\Big|^2 \\ \le {} & -\alpha_1 V_1(t) + \bar{\theta}^2\lambda_{\max}^3(P)\lambda_{\max}^2(BB^T)\max_i(e_{i0}\sigma_{0i})^2 + 4\xi^{-1}\bar{\theta}^2\lambda_{\max}^2(P)\lambda_{\max}^2(BB^T)\max_i(e_{i0}\sigma_{0i})^2\Big|\int_{t-\tau}^t dW\Big|^2. \end{aligned}$$
By using $d\big(e^{\gamma t}V_1(t)\big) = \gamma e^{\gamma t}V_1(t)\,dt + e^{\gamma t}dV_1(t)$ and integrating both sides, it follows that
$$\begin{aligned} e^{\gamma t}EV_1(t) &= EV_1(0) + \gamma E\int_0^t e^{\gamma s}V_1(s)\,ds + E\int_0^t e^{\gamma s}dV_1(s) \\ &\le EV_1(0) - (\alpha_1-\gamma)E\int_0^t e^{\gamma s}V_1(s)\,ds + \big[\bar{\theta}^2\lambda_{\max}^3(P)\lambda_{\max}^2(BB^T)\max_i(e_{i0}\sigma_{0i})^2 + 4\xi^{-1}\tau nN\bar{\theta}^2\lambda_{\max}^2(P)\lambda_{\max}^2(BB^T)\max_i(e_{i0}\sigma_{0i})^2\big]\frac{e^{\gamma t}-1}{\gamma} \\ &\le EV_1(0) + \frac{\mu_1}{\gamma}e^{\gamma t}, \end{aligned}$$
where $\gamma$ is chosen to satisfy $\gamma < \alpha_1$, the symbol $E$ represents the expectation of a random variable, and the positive constant $\mu_1$ is defined as
$$\mu_1 = \bar{\theta}^2\lambda_{\max}^3(P)\lambda_{\max}^2(BB^T)\max_i(e_{i0}\sigma_{0i})^2 + 4\xi^{-1}\tau nN\bar{\theta}^2\lambda_{\max}^2(P)\lambda_{\max}^2(BB^T)\max_i(e_{i0}\sigma_{0i})^2.$$
So, we have
$$EV_1(t) \le EV_1(0)e^{-\gamma t} + \frac{\mu_1}{\gamma}.$$
Noting that $|\varepsilon(t)|^2 \le \frac{\varepsilon(t)^T(I_N\otimes P)\varepsilon(t)}{\lambda_{\min}(P)} \le \frac{V_1(t)}{\lambda_{\min}(P)}$, we can obtain
$$\limsup_{t\to+\infty}E\,|x_i(t)-x_0(t)|^2 \le \epsilon_1,$$
where $\epsilon_1$ is a small constant independent of the time $t$. □
Remark 2. 
Under random noise disturbance, few papers discuss the adaptive consensus of multi-agent systems in the presence of time delay. Theorem 1 indicates that the adaptive law (4) ensures that the dynamic error between the followers and the leader converges to a small number $\epsilon_1$ in the mean square sense. Looking back at the above proof, it can be found that $\epsilon_1 = \frac{\mu_1}{\gamma\lambda_{\min}(P)} = \Xi\max_i(e_{i0}\sigma_{0i})^2$, where $\Xi$ is a constant. So the bound $\epsilon_1$ tends to zero when the noise intensity of the system approaches zero.
Remark 3. 
Formula (7) can be transformed into
$$\tau^2 < \frac{k\xi - \xi^2\bar{\theta}^2\lambda_{\max}(HH^T)\lambda_{\max}^4(P)\lambda_{\max}^2(BB^T)}{4\bar{\theta}^2\lambda_{\max}(HH^T)\lambda_{\max}^2(P)\lambda_{\max}^2(BB^T) + 4\lambda_{\max}(AA^T)}.$$
So, maximizing the right-hand side over $\xi$, the upper limit of the time delay can be obtained as
$$\tau < \frac{k}{4\bar{\theta}\lambda_{\max}^2(P)\lambda_{\max}(BB^T)\sqrt{\lambda_{\max}(HH^T)}\sqrt{\bar{\theta}^2\lambda_{\max}(HH^T)\lambda_{\max}^2(P)\lambda_{\max}^2(BB^T) + \lambda_{\max}(AA^T)}}.$$
Although the constant time delay in this paper cannot be directly extended to time-varying delay, the above formula gives the range of time delay, which can provide some reference for future work.
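The delay bound of Remark 3 can be evaluated numerically. The spectral constants below are hypothetical stand-ins of our own choosing (not values from the paper); `xi_star` denotes the choice of $\xi$ that maximizes the admissible $\tau^2$:

```python
import numpy as np

# Hypothetical spectral constants (stand-ins for the λmax terms in the paper)
k        = 10.0
theta_up = 1.5      # θ̄
lamHHT   = 4.0      # λmax(H Hᵀ)
lamP     = 1.2      # λmax(P)
lamBBT   = 2.0      # λmax(B Bᵀ)
lamAAT   = 1.0      # λmax(A Aᵀ)

A1 = theta_up**2 * lamHHT * lamP**4 * lamBBT**2   # coefficient of ξ in (7)
B1 = theta_up**2 * lamHHT * lamP**2 * lamBBT**2
C1 = lamAAT

# τ² < (kξ - ξ² A1) / (4 (B1 + C1)); the numerator is maximal at ξ* = k / (2 A1)
xi_star = k / (2 * A1)
tau_max = k / (4 * np.sqrt(A1) * np.sqrt(B1 + C1))

# The optimal ξ* indeed dominates nearby choices of ξ
for xi in (0.5 * xi_star, 2 * xi_star):
    assert k * xi - xi**2 * A1 <= k * xi_star - xi_star**2 * A1
print("upper bound on the admissible delay:", tau_max)
```

The closed form `tau_max` matches the displayed bound, since $\sqrt{A_1} = \bar{\theta}\lambda_{\max}^2(P)\lambda_{\max}(BB^T)\sqrt{\lambda_{\max}(HH^T)}$.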
Now, we further analyze the adaptive law (4) and investigate whether the multi-agent system can still achieve consensus when $\tau = 0$. In this case, the adaptive law is kept unchanged, and the control protocol is constructed as
$$u_i(t) = s_i(t)K\Big[\sum_{j\in N_i}e_{ij}\big(x_j(t)-x_i(t)\big) + e_{i0}\big(x_0(t)-x_i(t)\big) + e_{i0}\sigma_{0i}\eta_i(t)\Big], \qquad (16)$$
where $\eta_i(t)\in\mathbb{R}^n$ is an $n$-dimensional standard white noise. The abbreviated form of the error dynamic equation is
$$d\varepsilon(t) = \big[I_N\otimes A - S(t)H\otimes BK\big]\varepsilon(t)\,dt + (S(t)E_0C_0\otimes BK)\,dW.$$
Theorem 2. 
Assuming the digraph $\tilde{G} = (\tilde{V},\tilde{\mathcal{N}},\tilde{A})$ has the same properties as in Theorem 1. If the control protocol of the multi-agent system (1) and (2) is given by (16) and the adaptive law by (4), then the system achieves the mean square bounded consensus, i.e.,
$$\limsup_{t\to+\infty}E\,|x_i(t)-x_0(t)|^2 \le \epsilon_2,$$
where $\epsilon_2$ is a small constant independent of the time $t$.
Proof. 
The Lyapunov function is chosen as
$$V_2(t) = V_{21}(t) + V_{22}(t) = \varepsilon(t)^T(I_N\otimes P)\varepsilon(t) + \sum_{i=1}^N(s_i(t)-\delta)^2,$$
where $P$ satisfies Equation (5). We can obtain from the Itô formula
$$dV_2(t) = \mathcal{L}_2V_2(t)\,dt + 2\varepsilon(t)^T\big[S(t)E_0C_0\otimes PBK\big]dW,$$
and the operator $\mathcal{L}_2$ satisfies
$$\mathcal{L}_2V_2(t) = \varepsilon(t)^T\big[I_N\otimes(PA+A^TP)\big]\varepsilon(t) - 2\varepsilon(t)^T\big[S(t)H\otimes PBK\big]\varepsilon(t) + \dot{V}_{22}(t) + \operatorname{tr}\big(S^2(t)E_0^2C_0^2\otimes K^TB^TPBK\big).$$
Using a similar method, we can obtain the following equality from the adaptive law (4):
$$\dot{V}_{22}(t) = 2\varepsilon(t)^T\big[S(t)H\otimes PBK\big]\varepsilon(t) - \delta\varepsilon(t)^T\big[(H^T+H)\otimes PBK\big]\varepsilon(t) - 2\sum_{i=1}^N(s_i(t)-\delta)^2.$$
Lemma 1 indicates that the minimum eigenvalue of the matrix $H^T+H$ satisfies $\lambda_{\min}(H^T+H) > 0$. Using the condition $\delta > \frac{1}{\lambda_{\min}(H^T+H)}$, we have
$$\begin{aligned} \mathcal{L}_2V_2(t) &= \varepsilon(t)^T\big[I_N\otimes(PA+A^TP)\big]\varepsilon(t) - \delta\varepsilon(t)^T\big[(H+H^T)\otimes PBK\big]\varepsilon(t) + \operatorname{tr}\big(S^2(t)E_0^2C_0^2\otimes K^TB^TPBK\big) - 2\sum_{i=1}^N(s_i(t)-\delta)^2 \\ &\le \varepsilon(t)^T\big[I_N\otimes(PA+A^TP)\big]\varepsilon(t) - \delta\lambda_{\min}(H+H^T)\varepsilon(t)^T\big[I_N\otimes PBK\big]\varepsilon(t) + \operatorname{tr}\big(S^2(t)E_0^2C_0^2\otimes K^TB^TPBK\big) - 2\sum_{i=1}^N(s_i(t)-\delta)^2 \\ &\le -k|\varepsilon(t)|^2 - 2\sum_{i=1}^N(s_i(t)-\delta)^2 + \bar{\theta}^2\max_i\{(e_{i0}\sigma_{0i})^2\}\lambda_{\max}^3(P)\lambda_{\max}^2(BB^T) \\ &\le -\frac{k}{\lambda_{\max}(P)}\varepsilon(t)^T(I_N\otimes P)\varepsilon(t) - 2\sum_{i=1}^N(s_i(t)-\delta)^2 + \bar{\theta}^2\max_i\{(e_{i0}\sigma_{0i})^2\}\lambda_{\max}^3(P)\lambda_{\max}^2(BB^T) \\ &\le -\min\Big\{\frac{k}{\lambda_{\max}(P)}, 2\Big\}V_2(t) + \bar{\theta}^2\max_i\{(e_{i0}\sigma_{0i})^2\}\lambda_{\max}^3(P)\lambda_{\max}^2(BB^T). \end{aligned}$$
From the formula $d\big(e^{\gamma t}V_2(t)\big)=\gamma e^{\gamma t}V_2(t)\,dt+e^{\gamma t}\,dV_2(t)$ and the inequality (19), we obtain
$$\begin{aligned}
e^{\gamma t}\mathbb{E}V_2(t)&=\mathbb{E}V_2(0)+\gamma\,\mathbb{E}\int_0^te^{\gamma s}V_2(s)\,ds+\mathbb{E}\int_0^te^{\gamma s}\,dV_2(s)\\
&\le\mathbb{E}V_2(0)-\Big(\min\Big\{\frac{k}{\lambda_{\max}(P)},\,2\Big\}-\gamma\Big)\mathbb{E}\int_0^te^{\gamma s}V_2(s)\,ds+\bar{\theta}^2\max\{(e_{i0}\sigma_{0i})^2\}\lambda_{\max}^3(P)\lambda_{\max}^2(BB^T)\,\frac{e^{\gamma t}-1}{\gamma}\\
&\le\mathbb{E}V_2(0)+\frac{\mu_2}{\gamma}\,e^{\gamma t},
\end{aligned}$$
where $\gamma<\min\big\{\frac{k}{\lambda_{\max}(P)},\,2\big\}$ and $\mu_2=\bar{\theta}^2\max\{(e_{i0}\sigma_{0i})^2\}\lambda_{\max}^3(P)\lambda_{\max}^2(BB^T)$. Dividing the inequality by $e^{\gamma t}$ gives
$$\mathbb{E}V_2(t)\le\mathbb{E}V_2(0)\,e^{-\gamma t}+\frac{\mu_2}{\gamma}.$$
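Letting $t\to+\infty$ in the last inequality makes the residual bound explicit. Since $V_2(t)\ge\lambda_{\min}(P)\sum_i|\varepsilon_i(t)|^2$, one may take (a sketch of the final step, with $\epsilon^2$ identified up to the constants above):

```latex
\limsup_{t\to+\infty}\mathbb{E}\,|x_i(t)-x_0(t)|^2
\le\frac{1}{\lambda_{\min}(P)}\limsup_{t\to+\infty}\mathbb{E}V_2(t)
\le\frac{\mu_2}{\gamma\,\lambda_{\min}(P)}=:\epsilon^2.
```

In particular, $\epsilon^2$ is proportional to $\mu_2$ and hence vanishes as the noise intensities $\sigma_{0i}$ tend to zero.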
Finally, the mean square bounded consensus follows:
$$\limsup_{t\to+\infty}\mathbb{E}\,|x_i(t)-x_0(t)|^2\le\epsilon^2,$$
where $\epsilon^2$ is a small constant independent of the time $t$. □
Remark 4. 
Under the same adaptive law (4), the conditions of Theorem 2 are much simpler than those of Theorem 1, which greatly expands the applicability of the adaptive law. Moreover, Theorems 1 and 2 show that the adaptive law (4) ensures that the followers track the leader in the mean square sense, regardless of whether the stochastic multi-agent system has a time delay.
Remark 5. 
Hu et al. [37] designed a dynamic output-feedback controller using relative state information and achieved consensus by adjusting the internal state of the controller. In contrast, the consensus in this paper is achieved by adjusting the adaptive gain of the system. Although both control strategies achieve consensus, [37] did not consider the impact of time delay, and its control gain is a prescribed (mandatory) function rather than an adaptive one.

4. Simulation

To verify the main conclusions, we assume the system consists of one leader and three followers, conduct numerical simulations in one- and two-dimensional space, respectively, and show that the adaptive control law of this paper makes all followers track the target regardless of whether the system has a time delay.
Example 1. 
For the system in one-dimensional space, let the leader be globally reachable and the digraph $G_1$ formed by the followers be balanced, with adjacency matrix
$$\mathcal{A}=\begin{pmatrix}0&0&1\\1&0&0\\0&1&0\end{pmatrix}.$$
Using the definition of the Laplacian matrix $L_{G_1}$, we obtain the matrix
$$H=L_{G_1}+E_0=\begin{pmatrix}2&0&-1\\-1&1&0\\0&-1&2\end{pmatrix},\qquad
E_0=\begin{pmatrix}1&0&0\\0&0&0\\0&0&1\end{pmatrix},$$
where $E_0$ is the communication matrix between the leader and the followers. The leader-following multi-agent system is represented by
$$\dot{x}_i(t)=-0.3\,x_i(t)+0.4\,u_i(t),\qquad \dot{x}_0(t)=-0.3\,x_0(t).$$
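The feedback design used in Example 1 can be reproduced numerically. The following sketch (our code, not the paper's) solves the scalar Riccati equation $A^TP+PA-PBB^TP+kI=0$ with $A=-0.3$, $B=0.4$, and $k=0.8$, the data of this example:

```python
import math

# Scalar Riccati equation: 2*A*P - (B**2)*P**2 + k = 0.
# Its positive root is P = (A + sqrt(A**2 + B**2 * k)) / B**2.
A, B, k = -0.3, 0.4, 0.8

P = (A + math.sqrt(A**2 + B**2 * k)) / B**2
Gamma = P * B * B * P            # adaptive gain matrix Gamma = P B B^T P
K = B * P                        # feedback gain K = B^T P

print(round(P, 4), round(Gamma, 4))   # 1.0432 0.1741
```

The printed values match the $P=1.0432$ and $\Gamma=0.1741$ quoted in the example, which also confirms that the drift coefficient of the system must be $-0.3$ (with $A=+0.3$ the Riccati solution would be $P\approx 4.79$ instead).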
For the above one-dimensional multi-agent system, taking $k=0.8$, the matrix $P=1.0432$ is obtained from the Riccati equation $A^TP+PA-PBB^TP+kI=0$. A simple calculation gives the adaptive gain matrix $\Gamma=PBB^TP=0.1741$. Since the minimum eigenvalue of $H^T+H$ is 1, we take the constant $\delta=1.02$ to ensure $\delta>\frac{1}{\lambda_{\min}(H^T+H)}$, so the adaptive law can be written as
$$\dot{s}_i(t)=0.1741\,\big(x_i(t)-x_0(t)\big)^T\sum_{j=1}^N h_{ij}\big(x_j(t)-x_0(t)\big)-\big(s_i(t)-1.02\big).$$
For this system, if $\tau=0.13$, the noise intensity is $0.23$, the constant $\bar{\theta}$ is $1.4$, and the constant $\xi$ is $0.052$, then the condition of Theorem 1 holds:
$$k=0.8>\xi\bar{\theta}^2\lambda_{\max}(HH^T)\lambda_{\max}^4(P)\lambda_{\max}^2(BB^T)+4\xi^{-1}\tau^2\bar{\theta}^2\lambda_{\max}(HH^T)\lambda_{\max}^2(P)\lambda_{\max}^2(BB^T)+4\xi^{-1}\tau^2\lambda_{\max}(AA^T)=0.6447.$$
Figure 1 shows the trend of the tracking error over time, and Figure 2 shows the trajectory of the adaptive gain. Under the combined effects of noise and time delay, the state errors eventually converge to a small range. For the system with $\tau=0$, we keep the topological structure and dynamic equations of the problem unchanged, so the conditions of Theorem 2 hold. Under the same adaptive control law, even when the noise intensity is increased to $0.9$, the system can still attain mean square bounded consensus. Figure 3 and Figure 4 show the trajectories of the dynamic error and the adaptive gain of each agent.
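The qualitative behavior in Figures 1 and 2 can be reproduced with a crude Euler-Maruyama simulation of the delayed, noisy error dynamics under the adaptive law. The sketch below is ours: the step size, horizon, initial conditions, and seed are illustrative choices, and the noise enters through the leader links $e_{i0}$ as a simplification of the protocol in the text.

```python
import numpy as np

# Euler-Maruyama sketch of Example 1 (three followers, one leader).
rng = np.random.default_rng(0)
A, B, P = -0.3, 0.4, 1.0432
BK = B * B * P                      # B K with K = B^T P
Gamma, delta = 0.1741, 1.02         # adaptive gain and constant delta
tau, sigma = 0.13, 0.23             # time delay and noise intensity
H = np.array([[2.0, 0.0, -1.0],
              [-1.0, 1.0, 0.0],
              [0.0, -1.0, 2.0]])
e0 = np.array([1.0, 0.0, 1.0])      # diagonal of E_0 (leader links)

dt, T = 1e-3, 20.0
lag = int(tau / dt)
eps = np.array([2.0, -1.5, 1.0])    # tracking errors x_i - x_0 at t = 0
s = np.zeros(3)                     # adaptive gains s_i(0) = 0
buf = [eps.copy()] * (lag + 1)      # constant initial history on [-tau, 0]

for _ in range(int(T / dt)):
    eps_tau = buf.pop(0)            # eps(t - tau), the delayed error
    dW = rng.normal(0.0, np.sqrt(dt), 3)
    drift = A * eps - s * BK * (H @ eps_tau)
    eps = eps + drift * dt - s * e0 * sigma * BK * dW
    s = s + (Gamma * eps * (H @ eps) - (s - delta)) * dt
    buf.append(eps.copy())

print(np.abs(eps).max())            # errors settle in a small residual range
print(s)                            # gains settle near delta = 1.02
```

As in the figures, the errors shrink to a small neighborhood of zero (they cannot vanish exactly because of the persistent noise), and each adaptive gain converges to a non-zero constant near $\delta$.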
Example 2. 
In the two-dimensional space, assume the leader is globally reachable and the digraph $G_2$ composed of the followers is balanced, with
$$H=L_{G_2}+E_0=\begin{pmatrix}2&-1&0\\-1&1&0\\0&0&2\end{pmatrix}.$$
The system is represented by
$$\dot{x}_i(t)=\begin{pmatrix}0&-0.8\\0.8&0\end{pmatrix}x_i(t)+\begin{pmatrix}0.4&0.1\\0.1&0.4\end{pmatrix}u_i(t),\qquad
\dot{x}_0(t)=\begin{pmatrix}0&-0.8\\0.8&0\end{pmatrix}x_0(t).$$
For the two-dimensional system with time delay, taking the constant $k$ in the Riccati equation as $0.4$, we obtain
$$P=\begin{pmatrix}1.4388&-0.0350\\-0.0350&1.6538\end{pmatrix},\qquad
\Gamma=PBB^TP=\begin{pmatrix}0.3441&0.1721\\0.1721&0.4559\end{pmatrix}.$$
Since $\lambda_{\min}(H^T+H)=0.7639$, the constant $\delta$ in the adaptive control law is taken as $1.35$ in order to satisfy $\delta>\frac{1}{\lambda_{\min}(H^T+H)}$. Let the time delay $\tau=0.0041$, the noise intensity $\sigma_{0i}=0.33$, the constant $\bar{\theta}=3.4$, and $\xi=0.0041$; then the condition of Theorem 1 holds:
$$k=0.4>\xi\bar{\theta}^2\lambda_{\max}(HH^T)\lambda_{\max}^4(P)\lambda_{\max}^2(BB^T)+4\xi^{-1}\tau^2\bar{\theta}^2\lambda_{\max}(HH^T)\lambda_{\max}^2(P)\lambda_{\max}^2(BB^T)+4\xi^{-1}\tau^2\lambda_{\max}(AA^T)=0.3881.$$
Figure 5 and Figure 6 show the trajectories of the dynamic error and adaptive gain of the system with time delay in a noisy environment; all components of the three followers track the target. When the time delay disappears, we keep the above adaptive law unchanged, so the conditions of Theorem 2 hold. Let the noise intensity $\sigma_{0i}=0.24$; the trends of the dynamic error and the adaptive gain of the system are shown in Figure 7 and Figure 8.
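The matrix data of Example 2 can be checked numerically. The sketch below (our code, using the four-decimal rounded matrices quoted above) verifies that $P$ approximately solves the Riccati equation and re-evaluates the Theorem 1 bound:

```python
import numpy as np

# Data of Example 2, rounded to four decimals as quoted in the text.
A = np.array([[0.0, -0.8], [0.8, 0.0]])
B = np.array([[0.4, 0.1], [0.1, 0.4]])
P = np.array([[1.4388, -0.0350], [-0.0350, 1.6538]])
H = np.array([[2.0, -1.0, 0.0], [-1.0, 1.0, 0.0], [0.0, 0.0, 2.0]])
k, tau, theta_bar, xi = 0.4, 0.0041, 3.4, 0.0041

# P should (approximately) solve A^T P + P A - P B B^T P + k I = 0;
# the residual is nonzero only because of the four-decimal rounding.
residual = A.T @ P + P @ A - P @ B @ B.T @ P + k * np.eye(2)
assert np.abs(residual).max() < 1e-3

lmax = lambda M: np.linalg.eigvalsh(M).max()   # largest eigenvalue, symmetric M
lP, lBB, lHH, lAA = lmax(P), lmax(B @ B.T), lmax(H @ H.T), lmax(A @ A.T)

# Right-hand side of the Theorem 1 condition.
bound = (xi * theta_bar**2 * lHH * lP**4 * lBB**2
         + 4 / xi * tau**2 * theta_bar**2 * lHH * lP**2 * lBB**2
         + 4 / xi * tau**2 * lAA)
print(round(bound, 4))   # close to the quoted 0.3881, and below k = 0.4
```

The same check confirms the sign reconstruction of the system matrices: only with the skew-symmetric drift above does $P$ satisfy the Riccati equation.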
To compare the adaptive control protocol proposed in this paper with those of some previous papers, we again simulate the one-dimensional multi-agent system of Example 1 and take the noise intensity as $0.2$. Under the mandatory gain $a_i(t)=\frac{\log(1+t)}{1+t}$ and under the adaptive law (4), we simulate the dynamic error and the gain in the two situations, as shown in Figure 9 and Figure 10. The black curve represents the mandatory gain; the other colors represent the three agents under the control law (4). The two figures show that the adaptive control protocol proposed in this paper has a faster rate of convergence, so the three followers track the leader in a shorter time. In addition, the mandatory gain eventually converges to zero, while the adaptive gain (4) converges to a non-zero constant.
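The comparison in Figures 9 and 10 can be reproduced qualitatively. The sketch below is ours and omits the noise so that the two gain schemes can be compared deterministically; it integrates the Example 1 error dynamics under the mandatory gain $a(t)=\log(1+t)/(1+t)$ and under the adaptive law (4):

```python
import numpy as np

# Deterministic (noise-free) comparison of mandatory vs adaptive gain.
A, BK = -0.3, 0.16 * 1.0432          # BK = B B^T P for Example 1
Gamma, delta = 0.1741, 1.02
H = np.array([[2.0, 0.0, -1.0],
              [-1.0, 1.0, 0.0],
              [0.0, -1.0, 2.0]])

dt, T = 1e-3, 15.0
eps_m = np.array([2.0, -1.5, 1.0])   # errors under the mandatory gain
eps_a = eps_m.copy()                  # errors under the adaptive law (4)
s = np.zeros(3)                       # adaptive gains, s_i(0) = 0
t = 0.0
for _ in range(int(T / dt)):
    a = np.log(1.0 + t) / (1.0 + t)                  # mandatory gain a(t)
    eps_m = eps_m + (A * eps_m - a * BK * (H @ eps_m)) * dt
    eps_a = eps_a + (A * eps_a - s * BK * (H @ eps_a)) * dt
    s = s + (Gamma * eps_a * (H @ eps_a) - (s - delta)) * dt
    t += dt

print(np.abs(eps_a).max(), np.abs(eps_m).max())   # adaptive error is smaller
print(a, s)   # mandatory gain decays toward 0; adaptive gains stay near delta
```

The run reflects both observations above: the adaptive loop injects more feedback than the decaying mandatory gain for essentially the whole horizon, so its error is smaller at any fixed time, and the two gain trajectories end near zero and near $\delta$, respectively.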

5. Conclusions

For the tracking problem, adaptive control is analyzed in cases both with and without time delay. First, the adaptive control protocol of the stochastic system is given in the presence of time delay, and the adaptive law is designed. The adaptive control law depends on the solution of the Riccati equation and can be abbreviated into matrix form by the Kronecker product. It is then proved that the followers track the target in the mean square sense, and the dynamic error converges to a very small constant. Compared with previous references, the final dynamic error is smaller, and when the noise intensity converges to zero, this dynamic error also tends to zero. It should be noted that the method of proof cannot be directly extended to the case of variable delay. In the future, it would be meaningful to further explore the adaptive consensus of multi-agent systems with variable delay; output feedback control with time delay also needs additional investigation.

Author Contributions

Conceptualization, methodology, writing—original draft preparation, S.J.; writing—review and editing, G.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Major Projects of Natural Science Research in Anhui Universities (2022AH040207 and KJ2021A1101).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Wang, Y.; He, L.; Huang, C.Q. Adaptive time-varying formation tracking control of unmanned aerial vehicles with quantized input. ISA Trans. 2019, 85, 76–83.
2. Peng, Z.; Luo, R.; Hu, J.; Shi, K.; Ghosh, B.K. Distributed optimal tracking control of discrete-time multiagent systems via event-triggered reinforcement learning. IEEE Trans. Circuits Syst. I Regul. Pap. 2022, 69, 3689–3700.
3. Wan, Y.; Qin, J.; Li, F.; Yu, X.; Kang, Y. Game theoretic-based distributed charging strategy for PEVs in a smart charging station. IEEE Trans. Smart Grid 2021, 12, 538–547.
4. Vicsek, T.; Czirók, A.; Ben-Jacob, E.; Cohen, I.; Shochet, O. Novel type of phase transition in a system of self-driven particles. Phys. Rev. Lett. 1995, 75, 1226–1229.
5. Qin, J.; Ma, Q.; Shi, Y.; Wang, L. Recent advances in consensus of multi-agent systems: A brief survey. IEEE Trans. Ind. Electron. 2017, 64, 4972–4983.
6. Amirkhani, A.; Barshooi, A.H. Consensus in multi-agent systems: A review. Artif. Intell. Rev. 2022, 55, 3897–3935.
7. Jiang, J.H.; Jiang, Y.Y. Leader-following consensus of linear time-varying multi-agent systems under fixed and switching topologies. Automatica 2020, 113, 108804.
8. Wang, X.X.; Liu, Z.X.; Chen, Z.Q. Event-triggered fault-tolerant consensus control with control allocation in leader-following multi-agent systems. Sci. China 2021, 64, 879–889.
9. Jeong, J.; Lim, Y.; Parivallal, A. An asymmetric Lyapunov-Krasovskii functional approach for event-triggered consensus of multi-agent systems with deception attacks. Appl. Math. Comput. 2023, 439, 127584.
10. Murugesan, S.; Liu, Y.C. Resilient finite-time distributed event-triggered consensus of multi-agent systems with multiple cyber-attacks. Commun. Nonlinear Sci. Numer. Simul. 2023, 116, 106876.
11. Zhang, R.H.; Zhang, Y.Y.; Zong, X.F. Stochastic leader-following consensus of discrete-time nonlinear multi-agent systems with multiplicative noises. J. Frankl. Inst. 2022, 359, 7753–7774.
12. Li, T.; Zhang, J.F. Mean square average-consensus under measurement noises and fixed topologies: Necessary and sufficient conditions. Automatica 2009, 45, 1929–1936.
13. Cheng, L.; Hou, Z.G.; Tan, M. Necessary and sufficient conditions for consensus of double-integrator multi-agent systems with measurement noises. IEEE Trans. Autom. Control 2011, 56, 1958–1963.
14. Zong, X.F.; Li, T.; Zhang, J.F. Consensus conditions of continuous-time multi-agent systems with time-delays and measurement noises. Automatica 2019, 99, 412–419.
15. Knotek, T.; Hengster-Movric, K.; Ebek, M. Distributed adaptive consensus protocol with decaying gains. Int. J. Robust Nonlinear Control 2020, 30, 6166–6188.
16. Yu, Z.; Huang, D.; Jiang, H.; Hu, C. Consensus of second-order multi-agent systems with nonlinear dynamics via edge-based distributed adaptive protocols. J. Frankl. Inst. 2016, 353, 4821–4844.
17. Luo, R.; Peng, Z.N.; Hu, J.P. On model identification based optimal control and its applications to multi-agent learning and control. Mathematics 2023, 11, 906.
18. Li, M.; Qin, J.; Ma, Q.; Zheng, W.X.; Kang, Y. Hierarchical optimal synchronization for linear systems via reinforcement learning: A Stackelberg-Nash game perspective. IEEE Trans. Neural Netw. Learn. Syst. 2020, 99, 1–12.
19. Duarte, A.; Weissing, F.J.; Pen, I.; Keller, L. An evolutionary perspective on self-organized division of labor in social insects. Annu. Rev. 2011, 42, 91–110.
20. Larsen, A.D.; Büscher, T.H.; Chuthong, T.; Pairam, T.; Bethge, H.; Gorb, S.N.; Manoonpong, P. Self-organized stick insect-like locomotion under decentralized adaptive neural control: From biological investigation to robot simulation. Adv. Theory Simulations 2023, 228, 2300228.
21. Duan, Y.B.; Yang, Z.W. Design and analysis of the consensus gain for stochastic multi-agent systems. Control Theory Appl. 2019, 36, 629–635.
22. Huang, Y.X. Adaptive consensus for uncertain multi-agent systems with stochastic measurement noises. Commun. Nonlinear Sci. Numer. Simul. 2023, 120, 107156.
23. Xiao, G.L.; Wang, J.R.; Meng, D.Y. Adaptive finite-time consensus for stochastic multi-agent systems with uncertain actuator faults. IEEE Trans. Control. Netw. Syst. 2023, 10, 1–12.
24. Wu, Y.Z.; Liang, Q.P.; Zhao, Y.Y. Adaptive bipartite consensus control of general linear multi-agent systems using noisy measurements. Eur. J. Control 2021, 59, 123–128.
25. Ma, C.Q.; Qin, Z.Y.; Zhao, Y.B. Bipartite consensus of integrator multi-agent systems with measurement noise. IET Control Theory Appl. 2017, 11, 3313–3320.
26. Wen, G.; Chen, C.P.; Liu, Y.J.; Liu, Z. Neural network-based adaptive leader-following consensus control for a class of nonlinear multiagent state-delay systems. IEEE Trans. Cybern. 2016, 47, 1–10.
27. Hu, J.P.; Hong, Y.G. Leader-following coordination of multi-agent systems with coupling time delays. Phys. A 2007, 374, 853–863.
28. Horn, R.A.; Johnson, C.R. Matrix Analysis; Cambridge University Press: Cambridge, UK, 2012.
29. Li, Z.; Duan, Z.; Chen, G.; Huang, L. Consensus of multi-agent systems and synchronization of complex networks: A unified viewpoint. IEEE Trans. Circuits Syst. I Regul. Pap. 2010, 57, 213–224.
30. Zhang, H.; Lewis, F.; Das, A. Optimal design for synchronization of cooperative systems: State feedback, observer, and output feedback. IEEE Trans. Autom. Control 2011, 56, 1948–1952.
31. Li, Z.; Wen, G.; Duan, Z.; Ren, W. Designing fully distributed consensus protocols for linear multi-agent systems with directed graphs. IEEE Trans. Autom. Control 2015, 60, 1152–1157.
32. Zhang, W.; Abuzar Hussein Mohammed, A.; Bao, J.; Liu, Y. Adaptive event-triggering consensus for multi-agent systems with linear time-varying dynamics. J. Syst. Sci. Complex. 2022, 35, 1700–1718.
33. Deng, C.; Wen, C.Y.; Li, X.Y. Distributed adaptive tracking control for high-order nonlinear multiagent systems over event-triggered communication. IEEE Trans. Autom. Control 2023, 68, 1176–1183.
34. Wu, Z.H.; Fang, H.J. Delayed-state-derivative feedback for improving consensus performance of second-order delayed multi-agent systems. Int. J. Syst. Sci. 2012, 43, 140–152.
35. Jin, S.B.; Yu, Q.J. Construction of adaptive consensus gains for multi-agent systems with multiplicative noise. Complexity 2021, 2021, 4425511.
36. Kolmanovskii, V.; Myshkis, A. Applied Theory of Functional Differential Equations; Kluwer Academic Publishers: Dordrecht, The Netherlands, 1992.
37. Hu, J.; Wu, Y.; Li, T.; Ghosh, B.K. Consensus control of general linear multi-agent systems with antagonistic interactions and communication noises. IEEE Trans. Autom. Control 2019, 64, 2122–2127.
Figure 1. Dynamic error of the one-dimensional system with time delay.
Figure 2. Adaptive gain of the one-dimensional system with time delay.
Figure 3. Dynamic error of the one-dimensional system without time delay.
Figure 4. Adaptive gain of the one-dimensional system without time delay.
Figure 5. Dynamic error of the two-dimensional system with time delay.
Figure 6. Adaptive gain of the two-dimensional system with time delay.
Figure 7. Dynamic error of the two-dimensional system without time delay.
Figure 8. Adaptive gain of the two-dimensional system without time delay.
Figure 9. Comparison of two different gains.
Figure 10. Comparison of dynamic errors under two different gains.