Article

Adaptive Self-Triggered Control for Multi-Agent Systems with Actuator Failures and Time-Varying State Constraints

1 School of Mechanical and Electrical Engineering, Guangzhou University, Guangzhou 510006, China
2 College of Mathematics and Systems Science, Shandong University of Science and Technology, Qingdao 266590, China
* Authors to whom correspondence should be addressed.
Actuators 2023, 12(9), 364; https://doi.org/10.3390/act12090364
Submission received: 26 August 2023 / Revised: 12 September 2023 / Accepted: 15 September 2023 / Published: 19 September 2023

Abstract:
This work focuses on the consensus problem for multi-agent systems (MASs) with actuator failures and time-varying state constraints, and presents a fixed-time self-triggered consensus control protocol. The use of time-varying asymmetrical barrier Lyapunov functions (BLF) avoids the violation of time-varying state constraints in MASs, ensuring stability and safety. Meanwhile, the system’s performance is further enhanced by leveraging the proposed adaptive neural networks (NNs) control method to mitigate the effects of actuator failures and nonlinear disturbances. Moreover, a self-triggered mechanism based on a fixed-time strategy is proposed to reach rapid convergence and conserve bandwidth resources in MASs. The mechanism achieves consensus within a predefined fixed time, irrespective of the system’s initial states, while conserving communication resources. Finally, the proposed method’s effectiveness is confirmed through two simulation examples, encompassing diverse actuator failure scenarios.

1. Introduction

Consensus control in multi-agent systems (MASs) is currently a prominent issue in the field of control. It refers to the continuous updating of their own states through mutual communication and local collaboration among agents in MASs, so that each agent reaches the same state [1,2,3]. Based on graph theory, MASs can achieve distributed communication by utilizing the state information and connectivity between corresponding agents and their neighbors. Distributed topology structures have advantages such as easy scalability and high fault tolerance, which have attracted extensive research [4,5,6,7,8]. In Ref. [6], a consensus constraint control method for MASs with random disturbances was proposed based on an undirected graph. Ref. [7] investigated a two-layer distributed control strategy for nonlinear MASs with unknown control directions. This strategy only requires relatively weak communication conditions. Ref. [8] proposed a consensus tracking protocol for a class of incommensurate fractional-order MAS with directed switching topologies.
The convergence speed is a key focus for the consensus control of MASs, and the aforementioned references mostly employed finite-time consensus control methods. Specifically, these methods facilitate the attainment of consensus and the fulfillment of the desired control objectives by MASs within a finite time period. Such methods offer notable advantages, including robustness, rapid convergence, and high precision in control [9,10]. However, it is important to note that the convergence time upper bound of finite-time control methods is contingent upon the initial states of the system. In situations where the initial states are unknown or exceedingly large, guaranteeing the convergence time becomes challenging [11]. Meanwhile, in practical applications, MASs may require achieving consensus within a specific time or achieving predictive convergence when the initial states are unknown. In response to these demands, fixed-time consensus control methods have been developed [12]. These methods alleviate the limitation of convergence time being dependent on the system’s initial values in MASs and allow for designing the desired fixed convergence time according to specific needs. At present, fixed-time control methods have been extensively utilized in the control of MASs [1,13,14,15,16]. In Ref. [13] and Ref. [14], fixed-time consensus control methods were developed for leaderless and leader-following MASs, respectively. In Ref. [15], a practical fixed-time consensus control method was introduced, which can reduce the state disagreement at the settling time by fine-tuning the parameters of time base generator. Ref. [16] presented a fixed-time control method for heterogeneous MASs, which was based upon only local information.
It is well known that MASs are evolving towards precise and efficient intelligence. Consequently, there is an imminent need to explore and address various factors that impact system control performance, with actuator failures being one such issue [17]. The effectiveness of control relies heavily on the normal operation of actuators in active control systems. However, in practical operations of industrial systems, actuators may fail due to various reasons [18]. If proper compensation is not employed when actuator failures occur, it may result in a range of consequences from decreased control performance to catastrophic system failures [19]. Consequently, numerous compensatory methods have been suggested to address the issue of actuator failures in MASs [20,21,22]. Ref. [20] developed a fault-tolerant consensus control protocol based on NNs and observer framework to prevent disturbances caused by actuator failures. In Ref. [21], an adaptive output-feedback consensus strategy was developed that took into account both actuator failures and unmatched actuator redundancy. In Ref. [22], the compensations for actuator faults, including gain fault, bias fault and unknown control direction, were achieved by using the Nussbaum function.
Furthermore, constraints that are frequently encountered in practical systems pose a significant influence on system performance. Constraints related concerns arise due to inherent system limitations or other factors, demanding careful consideration during the controller design process. Mishandling constraint issues may potentially result in safety hazards and economic losses [19]. In Ref. [23], the actuator input saturation constraints in the spacecraft formation were considered, and a distributed Lyapunov-based model predictive controller that was resilient against actuation attacks was proposed, eliminating the impact of input saturation constraints on the spacecraft formation control system. However, the design of the controller became more challenging when considering full-state constraints. Presently, research in the field of state constraints control primarily revolves around the incorporation of appropriate barrier Lyapunov functions (BLFs) to prevent the system from violating the corresponding constraint conditions [24,25,26,27]. To list a few, Ref. [24] utilized logarithmic asymmetric BLF (ABLF) to avoid the influence of full-state constraints in nonlinear systems. In Ref. [25], a higher-order tan-type BLF was formulated to address the limitation of traditional BLF in accommodating higher-order systems. In Ref. [26], a novel obstacle function was introduced to accommodate stochastic higher-order nonlinear systems with a non-triangular structure. Ref. [27] proposed a robust adaptive control scheme that can eliminate the restrictive conditions required for BLF-based control methods. It should be noted that the BLFs proposed in the aforementioned references were only applicable to constant constraints and cannot be directly applied to time-varying constraints. In order to achieve generality, this work constructs time-varying ABLFs based on the methodologies presented in Refs. [28,29].
On the other hand, the intermittent and missing data during the data transmission process, caused by the limited processing and storage capacity of individual agents, as well as the limited system network bandwidth, have attracted the attention of scholars. In commonly used time-triggered control, agents activate controllers at given time intervals [30]. Compared to continuous trigger methods, this approach can save communication resources to some extent. However, it still has many unnecessary triggers. In contrast, the event-triggered mechanism (ETM) does not need to adhere to a fixed trigger interval. It is based on a set of trigger conditions, and the controller communication is only updated when the agent meets those trigger conditions [10]. It is evident that, by designing appropriate trigger conditions, the sampling frequency can be effectively reduced, thereby conserving communication resources. Nevertheless, in practical terms, ETM requires a significant amount of detection hardware, increasing hardware costs. In Ref. [31], a sample-based ETM was proposed, which eliminated the need for the continuous detection of trigger conditions by only detecting them at sampling moments. However, detection hardware is still required in practical applications. Conversely, the self-triggered mechanism (STM) relies solely on information from the current time to calculate the next trigger instant [32,33,34]. They do not require continuous monitoring of error values, as in the case of ETM, to determine if the trigger conditions have been met. Specifically, the STM proposed in Ref. [32] only measures the states of itself and all neighboring agents at the triggering moment and calculates the next triggering moment according to the average measured states. The STMs proposed in Refs. [33,34] utilize the rate of change of control signals to calculate the next triggered time. These methods can effectively save communication resources. 
However, when the temporal derivative of the control signal is too drastic, such triggering rules may lead to system instability.
Guided by the above discussions, this work investigates the issue of consensus control in MASs with actuator failures and time-varying state constraints, and proposes a fixed-time self-triggered control strategy. This work makes contributions to the following aspects when compared to previous literature:
  • In contrast to the finite-time control methods ([8,9,10,12,18,19]), a fixed-time STM is implemented, whose convergence time is unaffected by initial states. Additionally, the conservation of bandwidth resources makes the mechanism more practical. In comparison to prior studies ([33,34]), the proposed STM addresses the issues of system instability caused by rapid changes in the control signal.
  • Different from the control schemes ([20,21,22]), this paper presents an adaptive NN control approach that effectively addresses both actuator failures and time-varying state constraints within a fixed-time framework. By combining radial basis function NNs with barrier Lyapunov functions, effective compensation can be achieved for actuator failures while ensuring that the system’s states do not violate time-varying constraints.
The outline of this article is as follows. Section 2 presents the system model and lemmas. Section 3 describes the design process and stability proof of the fixed-time self-triggered controller. To validate the effectiveness of the proposed method, Section 4 conducts two sets of simulation experiments and two sets of actuator failure cases. Section 5 provides a summary conclusion of this article.

2. System Modelling and Problem Formulation

2.1. Model Description

Consider a class of MASs as follows:
$$\begin{cases}\dot{x}_{i,\bar{g}}=x_{i,\bar{g}+1}+\bar{n}_{i,\bar{g}}(\bar{x}_{i,\bar{g}}),\quad \bar{g}=1,\dots,n-1,\\[2pt]\dot{x}_{i,n}=\sum_{j=1}^{r}g_{i,j}\,\ell_{i,j}(\bar{x}_{i,j})\,u_{i,j}+\bar{n}_{i,n}(\bar{x}_{i,n}),\\[2pt]y_{i}=x_{i,1}.\end{cases}\tag{1}$$
The MASs consist of one virtual leader and $l$ agents. In (1), $n$ and $r\,({>}1)$ represent the numbers of system states and actuators, respectively; $x_{i,g}\ (g=1,\dots,n)$ and $\bar{x}_{i,g}=[x_{i,1},\dots,x_{i,g}]^{T}$ denote the state vectors of the $i$-th agent for $i=1,\dots,l$; $u_{i,j}$ represents the output of the $j$-th actuator; $y_{i}$ denotes the agent output; $\bar{n}_{i,g}(\bar{x}_{i,g})$ and $\ell_{i,j}(\bar{x}_{i,j})$ are respectively unknown and known nonlinear functions; and $g_{i,j}$ represents an unknown constant.
In practical applications, actuators may experience failures. The fault model for the j-th actuator is defined as
$$u_{i,j}=\sigma_{i,j}\bar{u}_{i,j}+\kappa_{i,j},\quad t\ge t_{j},\quad \sigma_{i,j}\kappa_{i,j}=0,\tag{2}$$
where $\bar{u}_{i,j}$ is the input of the $j$-th actuator, $\sigma_{i,j}\in[0,1]$ represents the fault severity, $\kappa_{i,j}$ represents the output when the $j$-th actuator fails completely, and $t_{j}$ represents the time at which the $j$-th actuator fails. Based on Equation (2), the operation of actuators can be classified into the following three cases:
Case I: $\sigma_{i,j}=1$, indicating normal operation of the $j$-th actuator; in this case, $u_{i,j}=\bar{u}_{i,j}$.
Case II: $\sigma_{i,j}\in(0,1)$, indicating partial failure of the $j$-th actuator; in this case, $u_{i,j}=\sigma_{i,j}\bar{u}_{i,j}$.
Case III: $\sigma_{i,j}=0$, indicating complete failure of the $j$-th actuator; in this case, $u_{i,j}=\kappa_{i,j}$.
To facilitate further analysis, we establish the following definitions for sets $F_{a}$ and $F_{b}$. Set $F_{a}$ represents the collection of completely failed actuators, while set $F_{b}$ represents the ensemble of actuators that are functioning normally or experiencing partial failures. Based on these definitions, we can deduce $F_{a}\cup F_{b}=\{1,\dots,r\}$.
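As an illustration, the three operating cases of fault model (2) can be sketched in a few lines of Python (the function name and argument names are ours, not from the paper):

```python
def actuator_output(u_bar: float, sigma: float, kappa: float) -> float:
    """Fault model (2): u = sigma * u_bar + kappa, with sigma * kappa = 0."""
    assert 0.0 <= sigma <= 1.0, "fault severity must lie in [0, 1]"
    assert sigma * kappa == 0.0, "model (2) requires sigma * kappa = 0"
    # Case I  (sigma = 1): healthy actuator passes the command through.
    # Case II (0 < sigma < 1): partial failure scales the command.
    # Case III (sigma = 0): complete failure ignores the command, outputs kappa.
    return sigma * u_bar + kappa
```

For example, a half-effective actuator delivers only half of the commanded input, while a stuck actuator delivers the constant $\kappa_{i,j}$ regardless of the command.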
Assumption 1 
([21]). During the operation, at most $r-1$ actuators completely fail.
Based on the aforementioned analysis, the MAS (1) can be transformed into the following form:
$$\begin{cases}\dot{x}_{i,\bar{g}}=x_{i,\bar{g}+1}+\bar{n}_{i,\bar{g}}(\bar{x}_{i,\bar{g}}),\quad \bar{g}=1,\dots,n-1,\\[2pt]\dot{x}_{i,n}=\sum_{j=1}^{r}g_{i,j}\,\ell_{i,j}(\bar{x}_{i})\,(\sigma_{i,j}\bar{u}_{i,j}+\kappa_{i,j})+\bar{n}_{i,n}(\bar{x}_{i,n}),\\[2pt]y_{i}=x_{i,1}.\end{cases}\tag{3}$$
Furthermore, considering the constraints in practical applications, the system state $x_{i,g}$ needs to remain between the time-varying barrier functions $\underline{k}_{c_{ig}}(t)$ and $\bar{k}_{c_{ig}}(t)$, i.e., $\underline{k}_{c_{ig}}(t)<x_{i,g}<\bar{k}_{c_{ig}}(t)$.
Assumption 2 
([29]). $y_{0}(t)$ is known and bounded, i.e., $|y_{0}(t)|\le Y_{0}$ with $\bar{k}_{c_{i1}}(t)>Y_{0}$ and $\underline{k}_{c_{i1}}(t)<-Y_{0}$. Additionally, the $j\,(j=1,\dots,n)$-th order derivative of $y_{0}(t)$ satisfies $|y_{0}^{(j)}(t)|\le Y_{j}$, where $Y_{0},Y_{1},\dots,Y_{j}$ represent positive constants.
Assumption 3 
([11]). There exist constants $\bar{L}_{i,gj}$ and $\underline{L}_{i,gj}\ (j=0,\dots,n)$ such that $|\bar{k}_{c_{ig}}|\le \bar{L}_{i,g0}$, $|\underline{k}_{c_{ig}}|\le \underline{L}_{i,g0}$, $|\bar{k}_{c_{ig}}^{(j)}|\le \bar{L}_{i,gj}$, and $|\underline{k}_{c_{ig}}^{(j)}|\le \underline{L}_{i,gj}$, where $\bar{k}_{c_{ig}}^{(j)}$ and $\underline{k}_{c_{ig}}^{(j)}$ represent the $j$-th order derivatives of $\bar{k}_{c_{ig}}$ and $\underline{k}_{c_{ig}}$, respectively.

2.2. Graph Theory

The communication topology structure between the leader and the rest of the agents in an MAS can be described using a directed graph $G_{d}=\{P,F\}$, where $P=\{1,2,\dots,l\}$ denotes the set of agents in the system and $F\subseteq P\times P$ stands for the set of edges, which are the connections between the agents. $A=[a_{ij}]\in\mathbb{R}^{l\times l}$ represents the adjacency matrix: when $a_{ij}>0$, the information from agent $j$ can be transmitted to agent $i$; when $a_{ij}=0$, agent $j$'s information cannot be transmitted to agent $i$. Furthermore, by defining $d_{i}=\sum_{j=1}^{l}a_{ij}\ (i=1,2,\dots,l)$, we can obtain the degree matrix $D=\mathrm{diag}(d_{1},d_{2},\dots,d_{l})$ and Laplacian matrix $L=D-A$ of the system. In addition, define $b_{i}\,({\ge}0)$ to represent communication between the leader and the agents: when direct communication exists, $b_{i}>0$; when direct communication does not exist, $b_{i}=0$.
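The degree and Laplacian matrices described above follow mechanically from the adjacency matrix; a minimal sketch (function names are ours):

```python
import numpy as np

def degree_and_laplacian(A: np.ndarray):
    """Given adjacency matrix A = [a_ij], return D = diag(d_i) and L = D - A,
    where d_i = sum_j a_ij is the in-degree of agent i."""
    d = A.sum(axis=1)      # row sums d_i
    D = np.diag(d)
    return D, D - A
```

By construction every row of $L$ sums to zero, which is the property exploited in consensus analysis.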
Assumption 4 
([13]). b i ( i = 1 , 2 , , l ) cannot be all zeros, which means that at least one agent is capable of receiving information from the leader.

2.3. Preliminaries

Lemma 1 
([22]). For a system
$$\dot{x}(t)=f(x,t),\quad x(0)=x_{0},\tag{4}$$
where f ( · ) is a continuous smooth nonlinear function, if there exists a positive definite, continuously differentiable Lyapunov function V ( x ) such that V ( 0 ) = 0 , and the derivative of V ( x ) satisfies
$$\dot{V}(x)\le -A V(x)^{d}-B V(x)^{p}+C,\tag{5}$$
where $A>0$, $B>0$, $C>0$, $1<d<\infty$ and $0<p<1$, then the trajectory of the system converges to the residual set
$$V(x)\le \min\left\{\left(\frac{C}{(1-\nu)A}\right)^{\frac{1}{d}},\ \left(\frac{C}{(1-\nu)B}\right)^{\frac{1}{p}}\right\},\tag{6}$$
where 0 < ν < 1 . Furthermore, the system is considered to be practically fixed-time stable and its convergence time T will not exceed
$$T\le \bar{T}=\frac{1}{A\nu(d-1)}+\frac{1}{B\nu(1-p)}.\tag{7}$$
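Because the bound (7) contains no initial state, it can be evaluated directly from the design parameters; a minimal sketch (the function name is ours):

```python
def settling_time_bound(A: float, B: float, d: float, p: float, nu: float) -> float:
    """Fixed-time bound T_bar = 1/(A*nu*(d-1)) + 1/(B*nu*(1-p)) from Lemma 1.
    Note that x0 does not appear: the bound is independent of initial states."""
    assert A > 0 and B > 0 and d > 1 and 0 < p < 1 and 0 < nu < 1
    return 1.0 / (A * nu * (d - 1)) + 1.0 / (B * nu * (1 - p))
```

For instance, $A=B=1$, $d=2$, $p=0.5$, $\nu=0.5$ gives $\bar{T}=2+4=6$.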
Lemma 2 
([29]). Given arbitrary $\varsigma>0$, any $o\in\mathbb{R}$ satisfies
$$0\le |o|-o\tanh\!\left(\frac{o}{\varsigma}\right)\le 0.2785\,\varsigma.\tag{8}$$
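The bound in Lemma 2 can be checked numerically over a grid of points (a verification sketch, not part of the paper):

```python
import numpy as np

def tanh_gap(o: np.ndarray, varsigma: float) -> np.ndarray:
    """Gap |o| - o*tanh(o/varsigma); Lemma 2 says it lies in [0, 0.2785*varsigma]."""
    return np.abs(o) - o * np.tanh(o / varsigma)
```

Sampling $o$ over a wide range confirms the gap is nonnegative and never exceeds $0.2785\varsigma$; this constant is what later produces the $0.557\varsigma\,(=2\times 0.2785\varsigma)$ terms in Step $n$.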
Lemma 3 
([11]). If $0<\Omega_{\tau}<\infty$, $\tau=1,2,\dots,l$, $\delta_{1}>1$ and $0<\delta_{2}\le 1$, then we have
$$\left(\sum_{\tau=1}^{l}\Omega_{\tau}\right)^{\delta_{1}}\le l^{\,\delta_{1}-1}\sum_{\tau=1}^{l}\Omega_{\tau}^{\delta_{1}},\qquad \left(\sum_{\tau=1}^{l}\Omega_{\tau}\right)^{\delta_{2}}\le \sum_{\tau=1}^{l}\Omega_{\tau}^{\delta_{2}}.\tag{9}$$
Lemma 4 
([28]). For arbitrary $|\kappa|<1$ and a constant $\varepsilon>0$, it can be stated that
$$\ln\frac{1}{1-\kappa^{2\varepsilon}}<\frac{\kappa^{2\varepsilon}}{1-\kappa^{2\varepsilon}}.\tag{10}$$
Lemma 5 
([19]). For any real numbers $h_{1}$ and $h_{2}$, we have
$$h_{1}h_{2}\le \frac{\beta_{3}^{\beta_{2}}}{\beta_{2}}|h_{1}|^{\beta_{2}}+\frac{1}{\beta_{1}\beta_{3}^{\beta_{1}}}|h_{2}|^{\beta_{1}},\tag{11}$$
where $\beta_{1}$, $\beta_{2}$ and $\beta_{3}$ are positive parameters satisfying $(\beta_{1}-1)(\beta_{2}-1)=1$.
Lemma 6 
([6]). Let $Y_{0}-Y$ denote the set of tracking errors, where $Y_{0}=[y_{0},y_{0},\dots,y_{0}]\in\mathbb{R}^{l}$ and $Y=[y_{1},y_{2},\dots,y_{l}]$; then it follows that
$$\|Y_{0}-Y\|\le \frac{\|Z\|}{\xi_{\min}},\tag{12}$$
where $Z=[z_{1,1},z_{2,1},\dots,z_{l,1}]$ represents the set of synchronization errors and $\xi_{\min}$ denotes the minimum singular value of the matrix $L+B$.
Lemma 7 
([18]). For $f_{1}\in\mathbb{R}$ and $f_{2}\in\mathbb{R}$, the following inequality holds:
$$f_{1}^{w_{1}}f_{2}^{w_{2}}\le \frac{w_{1}}{w_{1}+w_{2}}\,w_{3}\,|f_{1}|^{w_{1}+w_{2}}+\frac{w_{2}}{w_{1}+w_{2}}\,w_{3}^{-\frac{w_{1}}{w_{2}}}\,|f_{2}|^{w_{1}+w_{2}},\tag{13}$$
where $w_{1}>0$, $w_{2}>0$ and $w_{3}>0$.
Lemma 8 
([8]). For an arbitrary continuous function $\bar{h}(G)$ on a compact set $\Omega_{G}\subseteq\mathbb{R}^{k}$, with a sufficiently large node number $r$ there exists an NN approximator $\vartheta^{*T}\varpi(G)$ such that
$$\bar{h}(G)=\vartheta^{*T}\varpi(G)+\varepsilon(G),\tag{14}$$
where $G=[g_{1},g_{2},\dots,g_{k}]^{T}\in\mathbb{R}^{k}$ denotes the input vector, $\vartheta^{*}=[\vartheta_{1},\vartheta_{2},\dots,\vartheta_{r}]^{T}\in\mathbb{R}^{r}$ represents the ideal NN weight vector, and $\varepsilon(G)$ is the approximation error satisfying $|\varepsilon(G)|\le\bar{\varepsilon}$. $\varpi(G)=[\varpi_{1}(G),\varpi_{2}(G),\dots,\varpi_{r}(G)]^{T}$ denotes the radial basis function vector, whose Gaussian components $\varpi_{i}(G)\ (i=1,2,\dots,r)$ are expressed as
$$\varpi_{i}(G)=\exp\left(-\frac{(G-\eta_{i})^{T}(G-\eta_{i})}{o_{i}}\right),\tag{15}$$
where $o_{i}$ and $\eta_{i}$ refer to the width and center of the Gaussian function, respectively. The ideal weight vector $\vartheta^{*}$ is given as
$$\vartheta^{*}=\arg\min_{\vartheta\in\mathbb{R}^{r}}\left\{\sup_{G\in\Omega_{G}}\left|\bar{h}(G)-\vartheta^{T}\varpi(G)\right|\right\}.\tag{16}$$
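The RBF approximator of Lemma 8 is straightforward to evaluate for given centers and widths; a minimal sketch (names and the example centers are ours):

```python
import numpy as np

def rbf_vector(G: np.ndarray, centers: np.ndarray, widths: np.ndarray) -> np.ndarray:
    """Gaussian RBF vector: varpi_i(G) = exp(-||G - eta_i||^2 / o_i).
    centers has shape (r, k), widths has shape (r,)."""
    diff = centers - G
    return np.exp(-np.sum(diff * diff, axis=1) / widths)

def nn_approx(theta_hat: np.ndarray, G: np.ndarray,
              centers: np.ndarray, widths: np.ndarray) -> float:
    """NN output theta_hat^T varpi(G), approximating an unknown h(G)."""
    return float(theta_hat @ rbf_vector(G, centers, widths))
```

At a center, the corresponding basis function evaluates to 1 and decays with squared distance elsewhere; in the controller the weights are not ideal but are updated online by the adaptive laws.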

3. Design and Stability Analysis of Fixed-Time Self-Triggered Consensus Controller

3.1. Fixed-Time Self-Triggered Consensus Controller Design

In this section, controllers and adaptive laws will be constructed using the backstepping technique for MASs with actuator faults and time-varying constraints. First, consider the following error systems:
$$z_{i,1}=\sum_{j=1}^{l}a_{ij}(y_{i}-y_{j})+b_{i}(y_{i}-y_{0}),\qquad z_{i,g}=x_{i,g}-\varphi_{i,g-1},\quad g=2,\dots,n,\tag{17}$$
where z i , 1 and z i , g , respectively, denote the synchronization error and virtual error of the i-th agent, φ i , g 1 represents the virtual controller.
Step 1: By considering (3) and (17), it can be deduced that
$$\dot{z}_{i,1}=-b_{i}\dot{y}_{0}-\sum_{k=1}^{l}a_{ik}(x_{k,2}+\bar{n}_{k,1})+(b_{i}+d_{i})\bar{n}_{i,1}+(b_{i}+d_{i})(z_{i,2}+\varphi_{i,1}).\tag{18}$$
Given the function $\bar{h}_{i,1}(G_{i,1})=-b_{i}\dot{y}_{0}-\sum_{k=1}^{l}a_{ik}(x_{k,2}+\bar{n}_{k,1})+(b_{i}+d_{i})\bar{n}_{i,1}$, according to Lemma 8, the continuous function $\bar{h}_{i,1}(G_{i,1})$ can be approximated in the following form:
$$\bar{h}_{i,1}(G_{i,1})=\vartheta_{i,1}^{*T}\varpi_{i,1}(G_{i,1})+\varepsilon_{i,1}(G_{i,1}),\tag{19}$$
where $G_{i,1}=[x_{i,1},x_{k,1},x_{k,2},\dot{y}_{0}]$ and $\varepsilon_{i,1}(G_{i,1})$ is the bounded approximation error.
Then, Equation (18) can be rewritten as
$$\dot{z}_{i,1}=\vartheta_{i,1}^{*T}\varpi_{i,1}(G_{i,1})+\varepsilon_{i,1}(G_{i,1})+(b_{i}+d_{i})(z_{i,2}+\varphi_{i,1}).\tag{20}$$
Formulate the ABLF as follows:
$$V_{i,1}=\frac{1-s_{i,1}(z_{i,1})}{2}\ln\frac{k_{a_{i1}}^{2}(t)}{k_{a_{i1}}^{2}(t)-z_{i,1}^{2}}+\frac{s_{i,1}(z_{i,1})}{2}\ln\frac{k_{b_{i1}}^{2}(t)}{k_{b_{i1}}^{2}(t)-z_{i,1}^{2}}+\frac{1}{2}\tilde{\vartheta}_{i,1}^{T}W_{i,1}^{-1}\tilde{\vartheta}_{i,1},\tag{21}$$
where $\tilde{\vartheta}_{i,1}=\hat{\vartheta}_{i,1}-\vartheta_{i,1}^{*}$ represents the error between the ideal weight matrix $\vartheta_{i,1}^{*}$ and its estimated value $\hat{\vartheta}_{i,1}$, and $W_{i,1}=W_{i,1}^{T}$ is a positive-definite constant gain matrix. The barrier functions $k_{a_{i1}}(t)$ and $k_{b_{i1}}(t)$ satisfy $k_{a_{i1}}(t)=\xi_{\min}\bar{k}_{a_{i1}}(t)$ and $k_{b_{i1}}(t)=\xi_{\min}\bar{k}_{b_{i1}}(t)$, where $\bar{k}_{a_{i1}}(t)=y_{0}-\underline{k}_{c_{i1}}(t)$ and $\bar{k}_{b_{i1}}(t)=\bar{k}_{c_{i1}}(t)-y_{0}$, and $\xi_{\min}$ represents the minimum singular value of $L+B$. The function $s_{i,g}(z_{i,g})\ (g=1,2,\dots,n)$ is defined as
$$s_{i,g}(z_{i,g})=\begin{cases}1,& z_{i,g}\ge 0,\\ 0,& z_{i,g}<0.\end{cases}\tag{22}$$
Remark 1.
From the definition of s i , g ( z i , g ) , it can be observed that the designed ABLF can handle asymmetric barrier constraints.
For the purpose of simplifying the analysis procedure, the following transformation is performed:
$$\zeta_{a_{i,g}}(t)=\frac{z_{i,g}}{k_{a_{ig}}(t)},\qquad \zeta_{b_{i,g}}(t)=\frac{z_{i,g}}{k_{b_{ig}}(t)},\qquad \zeta_{i,g}(t)=(1-s_{i,g}(z_{i,g}))\frac{z_{i,g}}{k_{a_{ig}}(t)}+s_{i,g}(z_{i,g})\frac{z_{i,g}}{k_{b_{ig}}(t)}.\tag{23}$$
By combining (21) and (23), the derivative of V i , 1 is given by
$$\dot{V}_{i,1}=\left[\frac{(1-s_{i,1}(z_{i,1}))\zeta_{a_{i,1}}}{k_{a_{i1}}(t)(1-\zeta_{a_{i,1}}^{2})}+\frac{s_{i,1}(z_{i,1})\zeta_{b_{i,1}}}{k_{b_{i1}}(t)(1-\zeta_{b_{i,1}}^{2})}\right]\dot{z}_{i,1}+\tilde{\vartheta}_{i,1}^{T}W_{i,1}^{-1}\dot{\hat{\vartheta}}_{i,1}-\frac{(1-s_{i,1}(z_{i,1}))\dot{k}_{a_{i1}}(t)\zeta_{a_{i,1}}^{2}}{k_{a_{i1}}(t)(1-\zeta_{a_{i,1}}^{2})}-\frac{s_{i,1}(z_{i,1})\dot{k}_{b_{i1}}(t)\zeta_{b_{i,1}}^{2}}{k_{b_{i1}}(t)(1-\zeta_{b_{i,1}}^{2})}.\tag{24}$$
Substituting (18) into (24), we have
$$\dot{V}_{i,1}=\left[\frac{(1-s_{i,1}(z_{i,1}))\zeta_{a_{i,1}}}{k_{a_{i1}}(t)(1-\zeta_{a_{i,1}}^{2})}+\frac{s_{i,1}(z_{i,1})\zeta_{b_{i,1}}}{k_{b_{i1}}(t)(1-\zeta_{b_{i,1}}^{2})}\right]\left[\vartheta_{i,1}^{*T}\varpi_{i,1}(G_{i,1})+\varepsilon_{i,1}(G_{i,1})+(b_{i}+d_{i})(z_{i,2}+\varphi_{i,1})\right]+\tilde{\vartheta}_{i,1}^{T}W_{i,1}^{-1}\dot{\hat{\vartheta}}_{i,1}-\frac{(1-s_{i,1}(z_{i,1}))\dot{k}_{a_{i1}}(t)\zeta_{a_{i,1}}^{2}}{k_{a_{i1}}(t)(1-\zeta_{a_{i,1}}^{2})}-\frac{s_{i,1}(z_{i,1})\dot{k}_{b_{i1}}(t)\zeta_{b_{i,1}}^{2}}{k_{b_{i1}}(t)(1-\zeta_{b_{i,1}}^{2})}.\tag{25}$$
Then, construct the following virtual controller:
$$\varphi_{i,1}=\frac{1}{b_{i}+d_{i}}\Big[-c_{i,1,1}\chi_{i,1}z_{i,1}^{3}-c_{i,2,1}\chi_{i,1}^{p-1}z_{i,1}^{2p-1}-\frac{1}{2\gamma_{i,1}}\chi_{i,1}z_{i,1}-\frac{1}{2}(b_{i}+d_{i})^{2}z_{i,1}-\Lambda_{i,1}(t)z_{i,1}-\hat{\vartheta}_{i,1}^{T}\varpi_{i,1}(G_{i,1})\Big],\tag{26}$$
where $c_{i,1,1}$, $c_{i,2,1}$, and $\gamma_{i,1}$ are positive gain constants, and the design parameter $p$ satisfies $p\in(0.5,1)$. The definitions of $\chi_{i,g}\ (g=1,2,\dots,n)$ and $\Lambda_{i,g}(t)$ are given by Equations (27) and (28), respectively.
$$\chi_{i,g}=\frac{1-s_{i,g}(z_{i,g})}{k_{a_{ig}}^{2}(t)-z_{i,g}^{2}}+\frac{s_{i,g}(z_{i,g})}{k_{b_{ig}}^{2}(t)-z_{i,g}^{2}},\tag{27}$$
$$\Lambda_{i,g}(t)=\sqrt{\left(\frac{\dot{k}_{a_{ig}}(t)}{k_{a_{ig}}(t)}\right)^{2}+\left(\frac{\dot{k}_{b_{ig}}(t)}{k_{b_{ig}}(t)}\right)^{2}}+\iota_{g},\tag{28}$$
where ι g is a positive constant.
By further deriving from (28), we can obtain
$$\Lambda_{i,1}(t)\chi_{i,1}z_{i,1}^{2}+\frac{(1-s_{i,1}(z_{i,1}))\dot{k}_{a_{i1}}(t)\zeta_{a_{i,1}}^{2}}{k_{a_{i1}}(t)(1-\zeta_{a_{i,1}}^{2})}+\frac{s_{i,1}(z_{i,1})\dot{k}_{b_{i1}}(t)\zeta_{b_{i,1}}^{2}}{k_{b_{i1}}(t)(1-\zeta_{b_{i,1}}^{2})}\ge 0.\tag{29}$$
The construction of the adaptive law is as follows:
$$\dot{\hat{\vartheta}}_{i,1}=W_{i,1}\left(\chi_{i,1}z_{i,1}\varpi(G_{i,1})-\delta_{i,1}\hat{\vartheta}_{i,1}\right),\tag{30}$$
where δ i , 1 > 0 is a constant.
By simultaneously considering (25) to (30), we have
$$\dot{V}_{i,1}\le \chi_{i,1}z_{i,1}\Big[-c_{i,1,1}\chi_{i,1}z_{i,1}^{3}-c_{i,2,1}\chi_{i,1}^{p-1}z_{i,1}^{2p-1}-\frac{1}{2\gamma_{i,1}}\chi_{i,1}z_{i,1}-\frac{1}{2}(b_{i}+d_{i})^{2}z_{i,1}+(b_{i}+d_{i})z_{i,2}+\varepsilon_{i,1}(G_{i,1})\Big]-\delta_{i,1}\tilde{\vartheta}_{i,1}^{T}\hat{\vartheta}_{i,1}.\tag{31}$$
Based on Lemma 5, inequalities (32) and (33) can be derived:
$$\chi_{i,1}z_{i,1}\varepsilon_{i,1}(G_{i,1})\le \frac{1}{2\gamma_{i,1}}(\chi_{i,1}z_{i,1})^{2}+\frac{\gamma_{i,1}}{2}\bar{\varepsilon}_{i,1}^{2},\tag{32}$$
$$\chi_{i,1}(b_{i}+d_{i})z_{i,1}z_{i,2}\le \chi_{i,1}\left(\frac{1}{2}(b_{i}+d_{i})^{2}z_{i,1}^{2}+\frac{1}{2}z_{i,2}^{2}\right),\tag{33}$$
where ε ¯ i , 1 represents an upper bound for ε i , 1 ( G i , 1 ) .
Obviously, from (32) and (33), we can obtain
$$\dot{V}_{i,1}\le -c_{i,1,1}\chi_{i,1}^{2}z_{i,1}^{4}-c_{i,2,1}\chi_{i,1}^{p}z_{i,1}^{2p}+\frac{\gamma_{i,1}}{2}\bar{\varepsilon}_{i,1}^{2}+\frac{1}{2}\chi_{i,1}z_{i,2}^{2}-\delta_{i,1}\tilde{\vartheta}_{i,1}^{T}\hat{\vartheta}_{i,1}.\tag{34}$$
Step $g\ (2\le g\le n-1)$: By considering (3) and (17), it can be deduced that
$$\dot{z}_{i,g}=\varphi_{i,g}+z_{i,g+1}+\bar{n}_{i,g}-\dot{\varphi}_{i,g-1},\tag{35}$$
where
$$\dot{\varphi}_{i,g-1}=\sum_{j=0}^{g-1}\frac{\partial\varphi_{i,g-1}}{\partial y_{0}^{(j)}}y_{0}^{(j+1)}+\sum_{j=1}^{g-1}\frac{\partial\varphi_{i,g-1}}{\partial x_{i,j}}\dot{x}_{i,j}+\sum_{j=1}^{g-1}\frac{\partial\varphi_{i,g-1}}{\partial\hat{\vartheta}_{i,j}}\dot{\hat{\vartheta}}_{i,j}+\sum_{j=1}^{g-1}\sum_{k=1}^{l}\frac{\partial\varphi_{i,g-1}}{\partial x_{k,j}}\dot{x}_{k,j}+\sum_{j=0}^{g-1}\frac{\partial\varphi_{i,g-1}}{\partial\phi_{i,g-1}^{(j)}}\phi_{i,g-1}^{(j+1)},$$
with $\phi_{i,g-1}=[k_{a_{i1}},\dots,k_{a_{ig-1}},k_{b_{i1}},\dots,k_{b_{ig-1}}]^{T}$.
Defining the function $\bar{h}_{i,g}(G_{i,g})=\bar{n}_{i,g}-\dot{\varphi}_{i,g-1}$, according to Lemma 8, there exist an NN approximator $\vartheta_{i,g}^{*T}\varpi_{i,g}(G_{i,g})$ and a bounded approximation error $\varepsilon_{i,g}(G_{i,g})$ such that $\bar{h}_{i,g}(G_{i,g})$ can be expressed as
$$\bar{h}_{i,g}(G_{i,g})=\vartheta_{i,g}^{*T}\varpi_{i,g}(G_{i,g})+\varepsilon_{i,g}(G_{i,g}),\tag{36}$$
where $G_{i,g}=[\bar{x}_{i,g}^{T},\bar{x}_{k,g}^{T},\dot{y}_{0}]$.
Therefore, z ˙ i , g can be rewritten as
$$\dot{z}_{i,g}=\vartheta_{i,g}^{*T}\varpi_{i,g}(G_{i,g})+\varepsilon_{i,g}(G_{i,g})+z_{i,g+1}+\varphi_{i,g}.\tag{37}$$
The construction of the ABLF is given as
$$V_{i,g}=V_{i,g-1}+\frac{1-s_{i,g}(z_{i,g})}{2}\ln\frac{k_{a_{ig}}^{2}(t)}{k_{a_{ig}}^{2}(t)-z_{i,g}^{2}}+\frac{s_{i,g}(z_{i,g})}{2}\ln\frac{k_{b_{ig}}^{2}(t)}{k_{b_{ig}}^{2}(t)-z_{i,g}^{2}}+\frac{1}{2}\tilde{\vartheta}_{i,g}^{T}W_{i,g}^{-1}\tilde{\vartheta}_{i,g},\tag{38}$$
where $\tilde{\vartheta}_{i,g}=\hat{\vartheta}_{i,g}-\vartheta_{i,g}^{*}$ represents the error between the ideal weight matrix $\vartheta_{i,g}^{*}$ and its estimated value $\hat{\vartheta}_{i,g}$, and $W_{i,g}=W_{i,g}^{T}$ is a positive-definite constant gain matrix.
Taking the derivative of (38), we obtain:
$$\dot{V}_{i,g}=\dot{V}_{i,g-1}+\left[\frac{(1-s_{i,g}(z_{i,g}))\zeta_{a_{i,g}}}{k_{a_{ig}}(t)(1-\zeta_{a_{i,g}}^{2})}+\frac{s_{i,g}(z_{i,g})\zeta_{b_{i,g}}}{k_{b_{ig}}(t)(1-\zeta_{b_{i,g}}^{2})}\right]\dot{z}_{i,g}+\tilde{\vartheta}_{i,g}^{T}W_{i,g}^{-1}\dot{\hat{\vartheta}}_{i,g}-\frac{(1-s_{i,g}(z_{i,g}))\dot{k}_{a_{ig}}(t)\zeta_{a_{i,g}}^{2}}{k_{a_{ig}}(t)(1-\zeta_{a_{i,g}}^{2})}-\frac{s_{i,g}(z_{i,g})\dot{k}_{b_{ig}}(t)\zeta_{b_{i,g}}^{2}}{k_{b_{ig}}(t)(1-\zeta_{b_{i,g}}^{2})}.\tag{39}$$
Substituting (37) into (39), it yields
$$\dot{V}_{i,g}=\dot{V}_{i,g-1}+\left[\frac{(1-s_{i,g}(z_{i,g}))\zeta_{a_{i,g}}}{k_{a_{ig}}(t)(1-\zeta_{a_{i,g}}^{2})}+\frac{s_{i,g}(z_{i,g})\zeta_{b_{i,g}}}{k_{b_{ig}}(t)(1-\zeta_{b_{i,g}}^{2})}\right]\left[\vartheta_{i,g}^{*T}\varpi_{i,g}(G_{i,g})+\varepsilon_{i,g}(G_{i,g})+z_{i,g+1}+\varphi_{i,g}\right]+\tilde{\vartheta}_{i,g}^{T}W_{i,g}^{-1}\dot{\hat{\vartheta}}_{i,g}-\frac{(1-s_{i,g}(z_{i,g}))\dot{k}_{a_{ig}}(t)\zeta_{a_{i,g}}^{2}}{k_{a_{ig}}(t)(1-\zeta_{a_{i,g}}^{2})}-\frac{s_{i,g}(z_{i,g})\dot{k}_{b_{ig}}(t)\zeta_{b_{i,g}}^{2}}{k_{b_{ig}}(t)(1-\zeta_{b_{i,g}}^{2})}.\tag{40}$$
Designing the virtual controller and adaptive law as
$$\varphi_{i,g}=-c_{i,1,g}\chi_{i,g}z_{i,g}^{3}-c_{i,2,g}\chi_{i,g}^{p-1}z_{i,g}^{2p-1}-\frac{1}{2\gamma_{i,g}}\chi_{i,g}z_{i,g}-\frac{1}{2}z_{i,g}-\frac{\chi_{i,g-1}z_{i,g}}{2\chi_{i,g}}-\Lambda_{i,g}(t)z_{i,g}-\hat{\vartheta}_{i,g}^{T}\varpi_{i,g}(G_{i,g}),\tag{41}$$
$$\dot{\hat{\vartheta}}_{i,g}=W_{i,g}\left(\chi_{i,g}z_{i,g}\varpi(G_{i,g})-\delta_{i,g}\hat{\vartheta}_{i,g}\right),\tag{42}$$
where c i , 1 , g , c i , 2 , g , γ i , g and δ i , g are positive design parameters.
According to (41) and (42), we gain
$$\dot{V}_{i,g}\le \dot{V}_{i,g-1}-c_{i,1,g}\chi_{i,g}^{2}z_{i,g}^{4}-c_{i,2,g}\chi_{i,g}^{p}z_{i,g}^{2p}+\chi_{i,g}z_{i,g}\varepsilon_{i,g}(G_{i,g})+\chi_{i,g}z_{i,g}z_{i,g+1}-\frac{1}{2}\chi_{i,g}z_{i,g}^{2}-\frac{1}{2}\chi_{i,g-1}z_{i,g}^{2}-\frac{1}{2\gamma_{i,g}}\chi_{i,g}^{2}z_{i,g}^{2}-\delta_{i,g}\tilde{\vartheta}_{i,g}^{T}\hat{\vartheta}_{i,g}.\tag{43}$$
Based on Lemma 5, inequalities (44) and (45) can be derived:
$$\chi_{i,g}z_{i,g}\varepsilon_{i,g}(G_{i,g})\le \frac{1}{2\gamma_{i,g}}(\chi_{i,g}z_{i,g})^{2}+\frac{\gamma_{i,g}}{2}\bar{\varepsilon}_{i,g}^{2},\tag{44}$$
$$\chi_{i,g}z_{i,g}z_{i,g+1}\le \chi_{i,g}\left(\frac{1}{2}z_{i,g}^{2}+\frac{1}{2}z_{i,g+1}^{2}\right),\tag{45}$$
where ε ¯ i , g represents an upper bound for ε i , g ( G i , g ) .
Obviously, from the above inequality, we can obtain
$$\dot{V}_{i,g}\le \dot{V}_{i,g-1}-c_{i,1,g}\chi_{i,g}^{2}z_{i,g}^{4}-c_{i,2,g}\chi_{i,g}^{p}z_{i,g}^{2p}+\frac{\gamma_{i,g}}{2}\bar{\varepsilon}_{i,g}^{2}-\frac{1}{2}\chi_{i,g-1}z_{i,g}^{2}+\frac{1}{2}\chi_{i,g}z_{i,g+1}^{2}-\delta_{i,g}\tilde{\vartheta}_{i,g}^{T}\hat{\vartheta}_{i,g}.\tag{46}$$
Through analysis, it can be deduced in step $g-1$ that:
$$\dot{V}_{i,g-1}\le -\sum_{j\le g-1}c_{i,1,j}\chi_{i,j}^{2}z_{i,j}^{4}-\sum_{j\le g-1}c_{i,2,j}\chi_{i,j}^{p}z_{i,j}^{2p}+\frac{1}{2}\sum_{j\le g-1}\gamma_{i,j}\bar{\varepsilon}_{i,j}^{2}+\frac{1}{2}\chi_{i,g-1}z_{i,g}^{2}-\sum_{j\le g-1}\delta_{i,j}\tilde{\vartheta}_{i,j}^{T}\hat{\vartheta}_{i,j}.\tag{47}$$
Hence, V ˙ i , g can be written as
$$\dot{V}_{i,g}\le -\sum_{j\le g}c_{i,1,j}\chi_{i,j}^{2}z_{i,j}^{4}-\sum_{j\le g}c_{i,2,j}\chi_{i,j}^{p}z_{i,j}^{2p}+\frac{1}{2}\sum_{j\le g}\gamma_{i,j}\bar{\varepsilon}_{i,j}^{2}+\frac{1}{2}\chi_{i,g}z_{i,g+1}^{2}-\sum_{j\le g}\delta_{i,j}\tilde{\vartheta}_{i,j}^{T}\hat{\vartheta}_{i,j}.\tag{48}$$
Step $n$: To conserve system communication resources, an STM is established as shown below:
$$\bar{u}_{i,j}=\omega_{i,j}(t_{g}),\quad t\in[t_{g},t_{g+1}),\qquad t_{g+1}=\begin{cases}t_{g}+t^{*}, & |\dot{w}_{i,j}(t)|>v_{2},\\[4pt] t_{g}+\dfrac{\eta|\bar{u}_{i,j}|+\eta_{1}}{\max\{v_{1},|w_{i,j}(t)|\}}, & |\dot{w}_{i,j}(t)|\le v_{2},\end{cases}\tag{49}$$
where $w_{i,j}(t)=\dot{\omega}_{i,j}(t)\big|_{t=t_{g}}$, $\omega_{i,j}(t)$ represents the intermediate control signal for the STM, which will be defined next, $t_{g}\ (g\in\mathbb{N}^{+})$ represents the triggering moment, $t^{*}$ denotes an arbitrarily small positive number, and $\eta_{1}$, $\eta$, $v_{1}$, and $v_{2}$ are positive design parameters.
Remark 2.
From (49), it can be observed that the designed STM calculates the next triggering moment at each trigger instant. The next triggering moment is determined by the values of $\eta$, $\eta_{1}$, $v_{2}$, $t^{*}$, and $\max\{v_{1},|w_{i,j}(t)|\}$. Accordingly, when the intermediate control signal undergoes significant changes, the next triggering moment occurs earlier. Additionally, by introducing $v_{1}$, the system ensures that the signals are updated within a reasonable time frame, avoiding prolonged periods without updates. $v_{2}$ represents the maximum tolerated rate of change for $w_{i,j}(t)$. Properly selecting the parameters $v_{1}$ and $v_{2}$ guarantees both efficient utilization of communication resources and system stability.
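The triggering logic of Remark 2 can be sketched as follows. This is only an illustration under the assumption that the non-saturated inter-event interval takes the form $(\eta|\bar{u}_{i,j}|+\eta_{1})/\max\{v_{1},|w_{i,j}|\}$; the authoritative rule is (49), and all names here are ours:

```python
def next_trigger_time(t_g: float, w: float, w_dot: float, u_bar: float,
                      eta: float, eta1: float, v1: float, v2: float,
                      t_star: float) -> float:
    """Sketch of the self-triggered rule: if the intermediate control signal's
    rate exceeds the tolerance v2, fall back to the minimal dwell time t*;
    otherwise the interval shrinks as max{v1, |w|} grows (assumed form)."""
    if abs(w_dot) > v2:            # control signal changing too fast
        return t_g + t_star        # trigger again after the minimal interval t*
    return t_g + (eta * abs(u_bar) + eta1) / max(v1, abs(w))
```

The sketch reproduces the two qualitative properties stated in Remark 2: a rapidly changing signal triggers earlier, and $v_{1}$ caps the interval so updates never stall indefinitely.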
The definition of the intermediate control signal is as provided below:
$$\omega_{i,j}(t)=-(1+\eta)\left[\mu_{i,j}\tanh\!\left(\frac{z_{i,n}\,\mathrm{sign}(g_{i,j})\,\ell_{i,j}(\bar{x}_{i})\,\mu_{i,j}}{\varsigma}\right)+\bar{\eta}_{1}\tanh\!\left(\frac{z_{i,n}\,\mathrm{sign}(g_{i,j})\,\ell_{i,j}(\bar{x}_{i})\,\bar{\eta}_{1}}{\varsigma}\right)\right],\tag{50}$$
where ς denotes a positive constant and μ i , j represents the control signal of the j-th actuator. Its design is as follows:
$$\mu_{i,j}=\frac{\mathrm{sign}(g_{i,j})\,\theta_{i,j}^{T}\rho_{i}}{\ell_{i,j}(\bar{x}_{i})},\tag{51}$$
where $\theta_{i,j}=\Big[\frac{1}{\sum_{j\in F_{b}}g_{i,j}\sigma_{i,j}},\,\theta_{i,j,1},\dots,\theta_{i,j,r}\Big]^{T}$, $\rho_{i}=[\varphi_{i,n},\rho_{i,1},\dots,\rho_{i,r}]^{T}$ with $\rho_{i,\bar{j}}=\ell_{i,\bar{j}}(\bar{x}_{i})$, and $\varphi_{i,n}$ denotes the virtual controller. In particular, if $\bar{j}\in F_{a}$, then $\theta_{i,j,\bar{j}}=-\frac{g_{i,\bar{j}}\kappa_{i,\bar{j}}}{\sum_{j\in F_{b}}g_{i,j}\sigma_{i,j}}$; otherwise, $\theta_{i,j,\bar{j}}=0$.
Consequently, it can be inferred that
$$\sum_{j\in F_{b}}g_{i,j}\sigma_{i,j}\theta_{i,j}^{T}\rho_{i}=\varphi_{i,n}-\sum_{j\in F_{a}}g_{i,j}\,\ell_{i,j}(\bar{x}_{i})\,\kappa_{i,j}.\tag{52}$$
Since $\theta_{i,j}$ cannot be obtained in advance, $\hat{\theta}_{i,j}$ is used to estimate the value of $\theta_{i,j}$. The estimation error is represented by $\tilde{\theta}_{i,j}=\theta_{i,j}-\hat{\theta}_{i,j}$. Therefore, the expression for $\mu_{i,j}$ should be:
$$\mu_{i,j}=\frac{\mathrm{sign}(g_{i,j})\,\hat{\theta}_{i,j}^{T}\rho_{i}}{\ell_{i,j}(\bar{x}_{i})}.\tag{53}$$
By considering (3) and (17), it can be deduced that
$$\dot{z}_{i,n}=\sum_{j=1}^{r}g_{i,j}\,\ell_{i,j}(\bar{x}_{i})(\sigma_{i,j}\bar{u}_{i,j}+\kappa_{i,j})+\bar{n}_{i,n}(\bar{x}_{i,n})-\dot{\varphi}_{i,n-1},\tag{54}$$
where
$$\dot{\varphi}_{i,n-1}=\sum_{j=0}^{n-1}\frac{\partial\varphi_{i,n-1}}{\partial y_{0}^{(j)}}y_{0}^{(j+1)}+\sum_{j=1}^{n-1}\frac{\partial\varphi_{i,n-1}}{\partial x_{i,j}}\dot{x}_{i,j}+\sum_{j=1}^{n-1}\frac{\partial\varphi_{i,n-1}}{\partial\hat{\vartheta}_{i,j}}\dot{\hat{\vartheta}}_{i,j}+\sum_{j=1}^{n-1}\sum_{k=1}^{l}\frac{\partial\varphi_{i,n-1}}{\partial x_{k,j}}\dot{x}_{k,j}+\sum_{j=0}^{n-1}\frac{\partial\varphi_{i,n-1}}{\partial\phi_{i,n-1}^{(j)}}\phi_{i,n-1}^{(j+1)}.$$
Given the function $\bar{h}_{i,n}(G_{i,n})=\bar{n}_{i,n}-\dot{\varphi}_{i,n-1}+0.557\,\varsigma\sum_{j=1}^{r}g_{i,j}\sigma_{i,j}$, according to Lemma 8, the continuous function $\bar{h}_{i,n}(G_{i,n})$ can be approximated in the following form:
$$\bar{h}_{i,n}(G_{i,n})=\vartheta_{i,n}^{*T}\varpi_{i,n}(G_{i,n})+\varepsilon_{i,n}(G_{i,n}),\tag{55}$$
where $G_{i,n}=[\bar{x}_{i,n}^{T},\bar{x}_{k,n}^{T},\dot{y}_{0}]$ and $\varepsilon_{i,n}(G_{i,n})$ is the bounded approximation error.
Then, z ˙ i , n can be rewritten as
$$\dot{z}_{i,n}=\sum_{j=1}^{r}g_{i,j}\,\ell_{i,j}(\bar{x}_{i})(\sigma_{i,j}\bar{u}_{i,j}+\kappa_{i,j})+\vartheta_{i,n}^{*T}\varpi_{i,n}(G_{i,n})+\varepsilon_{i,n}(G_{i,n})-0.557\,\varsigma\sum_{j=1}^{r}g_{i,j}\sigma_{i,j}.\tag{56}$$
In this step, the ABLF is chosen as
$$V_{i,n}=V_{i,n-1}+\frac{1-s_{i,n}(z_{i,n})}{2}\ln\frac{k_{a_{in}}^{2}(t)}{k_{a_{in}}^{2}(t)-z_{i,n}^{2}}+\frac{s_{i,n}(z_{i,n})}{2}\ln\frac{k_{b_{in}}^{2}(t)}{k_{b_{in}}^{2}(t)-z_{i,n}^{2}}+\frac{1}{2}\tilde{\vartheta}_{i,n}^{T}W_{i,n}^{-1}\tilde{\vartheta}_{i,n}+\frac{1}{2}\sum_{j\in F_{b}}g_{i,j}\sigma_{i,j}\tilde{\theta}_{i,j}^{T}J_{i,j}^{-1}\tilde{\theta}_{i,j},\tag{57}$$
where $\tilde{\vartheta}_{i,n}=\hat{\vartheta}_{i,n}-\vartheta_{i,n}^{*}$ represents the error between the ideal weight matrix $\vartheta_{i,n}^{*}$ and its estimated value $\hat{\vartheta}_{i,n}$, and $W_{i,n}=W_{i,n}^{T}$ and $J_{i,j}=J_{i,j}^{T}$ are positive-definite constant gain matrices.
Differentiating Equation (57), we have
$\dot{V}_{i,n}=\dot{V}_{i,n-1}+\Big[\frac{(1-s_{i,n}(z_{i,n}))\zeta_{a_{i,n}}}{k_{a_{in}}(t)(1-\zeta_{a_{i,n}}^{2})}+\frac{s_{i,n}(z_{i,n})\zeta_{b_{i,n}}}{k_{b_{in}}(t)(1-\zeta_{b_{i,n}}^{2})}\Big]\dot{z}_{i,n}+\tilde{\vartheta}_{i,n}^{T}W_{i,n}^{-1}\dot{\hat{\vartheta}}_{i,n}+\sum_{j\in F_b}g_{i,j}\sigma_{i,j}\tilde{\theta}_{i,j}^{T}J_{i,j}^{-1}\dot{\hat{\theta}}_{i,j}-\frac{(1-s_{i,n}(z_{i,n}))\dot{k}_{a_{in}}(t)\zeta_{a_{i,n}}^{2}}{k_{a_{in}}(t)(1-\zeta_{a_{i,n}}^{2})}-\frac{s_{i,n}(z_{i,n})\dot{k}_{b_{in}}(t)\zeta_{b_{i,n}}^{2}}{k_{b_{in}}(t)(1-\zeta_{b_{i,n}}^{2})}.$
Substituting (56) into (58), we obtain:
$\dot{V}_{i,n}=\dot{V}_{i,n-1}+\tilde{\vartheta}_{i,n}^{T}W_{i,n}^{-1}\dot{\hat{\vartheta}}_{i,n}+\sum_{j\in F_b}g_{i,j}\sigma_{i,j}\tilde{\theta}_{i,j}^{T}J_{i,j}^{-1}\dot{\hat{\theta}}_{i,j}+\chi_{i,n}z_{i,n}\Big[\sum_{j=1}^{r}g_{i,j}\hbar_{i,j}(\bar{x}_i)(\sigma_{i,j}\bar{u}_{i,j}+\kappa_{i,j})+\vartheta_{i,n}^{*T}\varpi_{i,n}(G_{i,n})+\varepsilon_{i,n}(G_{i,n})-0.557\varsigma\sum_{j=1}^{r}g_{i,j}\sigma_{i,j}\Big]-\frac{(1-s_{i,n}(z_{i,n}))\dot{k}_{a_{in}}(t)\zeta_{a_{i,n}}^{2}}{k_{a_{in}}(t)(1-\zeta_{a_{i,n}}^{2})}-\frac{s_{i,n}(z_{i,n})\dot{k}_{b_{in}}(t)\zeta_{b_{i,n}}^{2}}{k_{b_{in}}(t)(1-\zeta_{b_{i,n}}^{2})}.$
According to (49), we have $\bar{u}_{i,j}(t)=\frac{\omega_{i,j}(t)-o_{1}\eta_{1}}{1+o_{2}\eta}$, where $|o_{1}|\le 1$ and $|o_{2}|\le 1$. Further derivation leads to:
$z_{i,n}\mathrm{sign}(g_{i,j})\hbar_{i,j}(\bar{x}_i)\bar{u}_{i,j}=z_{i,n}\mathrm{sign}(g_{i,j})\hbar_{i,j}(\bar{x}_i)\times\Big[-\frac{1+\eta}{1+o_{i,2}\eta}\Big(\mu_{i,j}\tanh\Big(\frac{z_{i,n}\mathrm{sign}(g_{i,j})\hbar_{i,j}(\bar{x}_i)\mu_{i,j}}{\varsigma}\Big)+\bar{\eta}_{1}\tanh\Big(\frac{z_{i,n}\mathrm{sign}(g_{i,j})\hbar_{i,j}(\bar{x}_i)\bar{\eta}_{1}}{\varsigma}\Big)\Big)-\frac{o_{i,1}\eta_{1}}{1+o_{i,2}\eta}\Big].$
According to (60) and Lemma 2, it can be given as follows:
$z_{i,n}\mathrm{sign}(g_{i,j})\hbar_{i,j}(\bar{x}_i)\bar{u}_{i,j}\le-z_{i,n}\mathrm{sign}(g_{i,j})\hbar_{i,j}(\bar{x}_i)\mu_{i,j}\tanh\Big(\frac{z_{i,n}\mathrm{sign}(g_{i,j})\hbar_{i,j}(\bar{x}_i)\mu_{i,j}}{\varsigma}\Big)-z_{i,n}\mathrm{sign}(g_{i,j})\hbar_{i,j}(\bar{x}_i)\bar{\eta}_{1}\tanh\Big(\frac{z_{i,n}\mathrm{sign}(g_{i,j})\hbar_{i,j}(\bar{x}_i)\bar{\eta}_{1}}{\varsigma}\Big)+\big|z_{i,n}\mathrm{sign}(g_{i,j})\hbar_{i,j}(\bar{x}_i)\big|\frac{\eta_{1}}{1-\eta}\le\Big(\big|z_{i,n}\mathrm{sign}(g_{i,j})\hbar_{i,j}(\bar{x}_i)\mu_{i,j}\big|-z_{i,n}\mathrm{sign}(g_{i,j})\hbar_{i,j}(\bar{x}_i)\mu_{i,j}\tanh\Big(\frac{z_{i,n}\mathrm{sign}(g_{i,j})\hbar_{i,j}(\bar{x}_i)\mu_{i,j}}{\varsigma}\Big)\Big)-z_{i,n}\mathrm{sign}(g_{i,j})\hbar_{i,j}(\bar{x}_i)\mu_{i,j}+\Big(\big|z_{i,n}\mathrm{sign}(g_{i,j})\hbar_{i,j}(\bar{x}_i)\bar{\eta}_{1}\big|-z_{i,n}\mathrm{sign}(g_{i,j})\hbar_{i,j}(\bar{x}_i)\bar{\eta}_{1}\tanh\Big(\frac{z_{i,n}\mathrm{sign}(g_{i,j})\hbar_{i,j}(\bar{x}_i)\bar{\eta}_{1}}{\varsigma}\Big)\Big)-\big|z_{i,n}\mathrm{sign}(g_{i,j})\hbar_{i,j}(\bar{x}_i)\big|\bar{\eta}_{1}+\big|z_{i,n}\mathrm{sign}(g_{i,j})\hbar_{i,j}(\bar{x}_i)\big|\frac{\eta_{1}}{1-\eta}\le-z_{i,n}\mathrm{sign}(g_{i,j})\hbar_{i,j}(\bar{x}_i)\mu_{i,j}+0.557\varsigma.$
Along with (53), (59) and (61), one gets
$\dot{V}_{i,n}\le\dot{V}_{i,n-1}+\Big[\sum_{j=1}^{r}g_{i,j}\sigma_{i,j}\hat{\theta}_{i,j}^{T}\rho_i+\sum_{j=1}^{r}g_{i,j}\hbar_{i,j}(\bar{x}_i)\kappa_{i,j}+\vartheta_{i,n}^{*T}\varpi_{i,n}(G_{i,n})+\varepsilon_{i,n}(G_{i,n})\Big]\chi_{i,n}z_{i,n}+\tilde{\vartheta}_{i,n}^{T}W_{i,n}^{-1}\dot{\hat{\vartheta}}_{i,n}+\sum_{j\in F_b}g_{i,j}\sigma_{i,j}\tilde{\theta}_{i,j}^{T}J_{i,j}^{-1}\dot{\hat{\theta}}_{i,j}-\frac{(1-s_{i,n}(z_{i,n}))\dot{k}_{a_{in}}(t)\zeta_{a_{i,n}}^{2}}{k_{a_{in}}(t)(1-\zeta_{a_{i,n}}^{2})}-\frac{s_{i,n}(z_{i,n})\dot{k}_{b_{in}}(t)\zeta_{b_{i,n}}^{2}}{k_{b_{in}}(t)(1-\zeta_{b_{i,n}}^{2})}.$
Combining $\tilde{\theta}_{i,j}=\theta_{i,j}-\hat{\theta}_{i,j}$ with (52) yields
$\dot{V}_{i,n}\le\dot{V}_{i,n-1}+\chi_{i,n}z_{i,n}\varphi_{i,n}+\chi_{i,n}z_{i,n}\sum_{j=1}^{r}g_{i,j}\sigma_{i,j}\tilde{\theta}_{i,j}^{T}\rho_i+\chi_{i,n}z_{i,n}\vartheta_{i,n}^{*T}\varpi_{i,n}(G_{i,n})+\chi_{i,n}z_{i,n}\varepsilon_{i,n}(G_{i,n})+\tilde{\vartheta}_{i,n}^{T}W_{i,n}^{-1}\dot{\hat{\vartheta}}_{i,n}+\sum_{j\in F_b}g_{i,j}\sigma_{i,j}\tilde{\theta}_{i,j}^{T}J_{i,j}^{-1}\dot{\hat{\theta}}_{i,j}-\frac{(1-s_{i,n}(z_{i,n}))\dot{k}_{a_{in}}(t)\zeta_{a_{i,n}}^{2}}{k_{a_{in}}(t)(1-\zeta_{a_{i,n}}^{2})}-\frac{s_{i,n}(z_{i,n})\dot{k}_{b_{in}}(t)\zeta_{b_{i,n}}^{2}}{k_{b_{in}}(t)(1-\zeta_{b_{i,n}}^{2})}.$
Designing the virtual controller and adaptive laws as
$\varphi_{i,n}=-c_{i,1,n}\chi_{i,n}z_{i,n}^{3}-c_{i,2,n}\chi_{i,n}^{p-1}z_{i,n}^{2p-1}-\Lambda_{i,n}(t)z_{i,n}-\frac{1}{2\gamma_{i,n}}\chi_{i,n}z_{i,n}-\frac{\chi_{i,n-1}z_{i,n}}{2\chi_{i,n}}-\hat{\vartheta}_{i,n}^{T}\varpi_{i,n}(G_{i,n}),$
$\dot{\hat{\vartheta}}_{i,n}=W_{i,n}\big(\chi_{i,n}z_{i,n}\varpi_{i,n}(G_{i,n})-\delta_{i,n}\hat{\vartheta}_{i,n}\big),$
$\dot{\hat{\theta}}_{i,j}=J_{i,j}\big(\chi_{i,n}z_{i,n}\rho_i-\upsilon_{i,j}\hat{\theta}_{i,j}\big),$
where c i , 1 , n , c i , 2 , n , γ i , n , δ i , n and υ i , j are positive design parameters.
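The adaptive laws (65) and (66) follow the σ-modification pattern: a correction term driven by the weighted error plus a leakage term ($-\delta\hat{\vartheta}$, resp. $-\upsilon\hat{\theta}$) that keeps the estimates bounded without knowing the ideal weights. A forward-Euler sketch, where the gains echo Table 1 but the driving signals are invented purely for illustration:

```python
import numpy as np

W, delta, dt = 0.8, 0.1, 1e-3      # gain, leakage, integration step
theta_hat = np.zeros(3)

for step in range(20000):
    t = step * dt
    chi_z = np.sin(3.0 * t)                        # stand-in for chi_{i,n} * z_{i,n}
    varpi = np.array([np.sin(t), np.cos(t), 1.0])  # stand-in regressor varpi(G)
    # sigma-modification: theta_hat_dot = W * (chi*z*varpi - delta*theta_hat)
    theta_hat += dt * W * (chi_z * varpi - delta * theta_hat)

final_norm = float(np.linalg.norm(theta_hat))
```

The leakage term caps the estimate at roughly $\sup\|\chi z\,\varpi\|/\delta$ no matter how persistent the excitation is, which is what makes the boundedness argument in the stability proof go through.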
Substituting (64)–(66) into (63), we obtain
$\dot{V}_{i,n}\le\dot{V}_{i,n-1}-c_{i,1,n}\chi_{i,n}^{2}z_{i,n}^{4}-c_{i,2,n}\chi_{i,n}^{p}z_{i,n}^{2p}-\frac{1}{2\gamma_{i,n}}(\chi_{i,n}z_{i,n})^{2}+\chi_{i,n}z_{i,n}\varepsilon_{i,n}(G_{i,n})-\frac{1}{2}\chi_{i,n-1}z_{i,n}^{2}-\delta_{i,n}\tilde{\vartheta}_{i,n}^{T}\hat{\vartheta}_{i,n}-\sum_{j\in F_b}g_{i,j}\sigma_{i,j}\upsilon_{i,j}\tilde{\theta}_{i,j}^{T}\hat{\theta}_{i,j}.$
Applying Lemma 5, one has
$\chi_{i,n}z_{i,n}\varepsilon_{i,n}(G_{i,n})\le\frac{1}{2\gamma_{i,n}}(\chi_{i,n}z_{i,n})^{2}+\frac{\gamma_{i,n}}{2}\bar{\varepsilon}_{i,n}^{2},$
where ε ¯ i , n represents an upper bound for ε i , n ( G i , n ) .
Now, by inserting (68) into (67), one has
$\dot{V}_{i,n}\le\dot{V}_{i,n-1}-c_{i,1,n}\chi_{i,n}^{2}z_{i,n}^{4}-c_{i,2,n}\chi_{i,n}^{p}z_{i,n}^{2p}+\frac{\gamma_{i,n}}{2}\bar{\varepsilon}_{i,n}^{2}-\frac{1}{2}\chi_{i,n-1}z_{i,n}^{2}-\delta_{i,n}\tilde{\vartheta}_{i,n}^{T}\hat{\vartheta}_{i,n}-\sum_{j\in F_b}g_{i,j}\sigma_{i,j}\upsilon_{i,j}\tilde{\theta}_{i,j}^{T}\hat{\theta}_{i,j}.$
The following inequality was obtained at step $n-1$:
$\dot{V}_{i,n-1}\le-\sum_{j=1}^{n-1}c_{i,1,j}\chi_{i,j}^{2}z_{i,j}^{4}-\sum_{j=1}^{n-1}c_{i,2,j}\chi_{i,j}^{p}z_{i,j}^{2p}+\frac{1}{2}\sum_{j=1}^{n-1}\gamma_{i,j}\bar{\varepsilon}_{i,j}^{2}+\frac{1}{2}\chi_{i,n-1}z_{i,n}^{2}-\sum_{j=1}^{n-1}\delta_{i,j}\tilde{\vartheta}_{i,j}^{T}\hat{\vartheta}_{i,j}.$
Substituting (70) into (69) leads to
$\dot{V}_{i,n}\le-\sum_{j=1}^{n}c_{i,1,j}\chi_{i,j}^{2}z_{i,j}^{4}-\sum_{j=1}^{n}c_{i,2,j}\chi_{i,j}^{p}z_{i,j}^{2p}+\frac{1}{2}\sum_{j=1}^{n}\gamma_{i,j}\bar{\varepsilon}_{i,j}^{2}-\sum_{j=1}^{n}\delta_{i,j}\tilde{\vartheta}_{i,j}^{T}\hat{\vartheta}_{i,j}-\sum_{j\in F_b}g_{i,j}\sigma_{i,j}\upsilon_{i,j}\tilde{\theta}_{i,j}^{T}\hat{\theta}_{i,j}.$
Applying Lemma 3, it follows that
$\sum_{j=1}^{n}c_{i,1,j}(\chi_{i,j}z_{i,j}^{2})^{2}\ge\sum_{j=1}^{n}c_{i,1,j}\Big[s_{i,j}(z_{i,j})\ln\frac{k_{b_{ij}}^{2}}{k_{b_{ij}}^{2}-z_{i,j}^{2}}+(1-s_{i,j}(z_{i,j}))\ln\frac{k_{a_{ij}}^{2}}{k_{a_{ij}}^{2}-z_{i,j}^{2}}\Big]^{2},$
$\sum_{j=1}^{n}c_{i,2,j}(\chi_{i,j}z_{i,j}^{2})^{p}\ge\sum_{j=1}^{n}c_{i,2,j}\Big[s_{i,j}(z_{i,j})\ln\frac{k_{b_{ij}}^{2}}{k_{b_{ij}}^{2}-z_{i,j}^{2}}+(1-s_{i,j}(z_{i,j}))\ln\frac{k_{a_{ij}}^{2}}{k_{a_{ij}}^{2}-z_{i,j}^{2}}\Big]^{p},$
According to Lemma 7, two inequalities can be obtained as
$-\sum_{j=1}^{n}\delta_{i,j}\tilde{\vartheta}_{i,j}^{T}\hat{\vartheta}_{i,j}-\sum_{j\in F_b}g_{i,j}\sigma_{i,j}\upsilon_{i,j}\tilde{\theta}_{i,j}^{T}\hat{\theta}_{i,j}=-\sum_{j=1}^{n}\delta_{i,j}\tilde{\vartheta}_{i,j}^{T}(\tilde{\vartheta}_{i,j}+\vartheta_{i,j}^{*})-\sum_{j\in F_b}g_{i,j}\sigma_{i,j}\upsilon_{i,j}\tilde{\theta}_{i,j}^{T}(\tilde{\theta}_{i,j}+\theta_{i,j}^{*})\le-\sum_{j=1}^{n}\frac{\delta_{i,j}}{2}\|\tilde{\vartheta}_{i,j}\|^{2}-\frac{1}{2}\sum_{j\in F_b}g_{i,j}\sigma_{i,j}\upsilon_{i,j}\|\tilde{\theta}_{i,j}\|^{2}+\sum_{j=1}^{n}\frac{\delta_{i,j}}{2}\|\vartheta_{i,j}^{*}\|^{2}+\frac{1}{2}\sum_{j\in F_b}g_{i,j}\sigma_{i,j}\upsilon_{i,j}\|\theta_{i,j}^{*}\|^{2}.$
Along with (71)–(74), one gets
$\dot{V}_{i,n}\le-\sum_{j=1}^{n}c_{i,1,j}\Big[s_{i,j}(z_{i,j})\ln\frac{k_{b_{ij}}^{2}}{k_{b_{ij}}^{2}-z_{i,j}^{2}}+(1-s_{i,j}(z_{i,j}))\ln\frac{k_{a_{ij}}^{2}}{k_{a_{ij}}^{2}-z_{i,j}^{2}}\Big]^{2}-\sum_{j=1}^{n}c_{i,2,j}\Big[s_{i,j}(z_{i,j})\ln\frac{k_{b_{ij}}^{2}}{k_{b_{ij}}^{2}-z_{i,j}^{2}}+(1-s_{i,j}(z_{i,j}))\ln\frac{k_{a_{ij}}^{2}}{k_{a_{ij}}^{2}-z_{i,j}^{2}}\Big]^{p}+\frac{1}{2}\sum_{j=1}^{n}\gamma_{i,j}\bar{\varepsilon}_{i,j}^{2}-\sum_{j=1}^{n}\frac{\delta_{i,j}}{2}\|\tilde{\vartheta}_{i,j}\|^{2}+\sum_{j=1}^{n}\frac{\delta_{i,j}}{2}\|\vartheta_{i,j}^{*}\|^{2}-\frac{1}{2}\sum_{j\in F_b}g_{i,j}\sigma_{i,j}\upsilon_{i,j}\|\tilde{\theta}_{i,j}\|^{2}+\frac{1}{2}\sum_{j\in F_b}g_{i,j}\sigma_{i,j}\upsilon_{i,j}\|\theta_{i,j}^{*}\|^{2}.$
By applying Lemma 7, one can obtain
$\Big(\sum_{j=1}^{n}\frac{\|\tilde{\vartheta}_{i,j}\|^{2}}{2}\Big)^{p}\le(1-p)p^{\frac{p}{1-p}}+\sum_{j=1}^{n}\frac{\|\tilde{\vartheta}_{i,j}\|^{2}}{2},$
$\Big(\sum_{j\in F_b}g_{i,j}\sigma_{i,j}\frac{\|\tilde{\theta}_{i,j}\|^{2}}{2}\Big)^{p}\le(1-p)p^{\frac{p}{1-p}}+\sum_{j\in F_b}g_{i,j}\sigma_{i,j}\frac{\|\tilde{\theta}_{i,j}\|^{2}}{2}.$
According to (76), (77) and Lemma 3, (75) can be rewritten as
$\dot{V}_{i,n}\le-\alpha_{i,1}\Big[\sum_{j=1}^{n}s_{i,j}(z_{i,j})\ln\frac{k_{b_{ij}}^{2}}{k_{b_{ij}}^{2}-z_{i,j}^{2}}+\sum_{j=1}^{n}(1-s_{i,j}(z_{i,j}))\ln\frac{k_{a_{ij}}^{2}}{k_{a_{ij}}^{2}-z_{i,j}^{2}}\Big]^{2}-\beta_{i,1}\Big[\sum_{j=1}^{n}s_{i,j}(z_{i,j})\ln\frac{k_{b_{ij}}^{2}}{k_{b_{ij}}^{2}-z_{i,j}^{2}}+\sum_{j=1}^{n}(1-s_{i,j}(z_{i,j}))\ln\frac{k_{a_{ij}}^{2}}{k_{a_{ij}}^{2}-z_{i,j}^{2}}\Big]^{p}-\alpha_{i,2}\Big(\sum_{j=1}^{n}\frac{\|\tilde{\vartheta}_{i,j}\|^{2}}{2}\Big)^{2}+\alpha_{i,2}\Big(\sum_{j=1}^{n}\frac{\|\tilde{\vartheta}_{i,j}\|^{2}}{2}\Big)^{2}-\beta_{i,2}\Big(\sum_{j=1}^{n}\frac{\|\tilde{\vartheta}_{i,j}\|^{2}}{2}\Big)^{p}-\alpha_{i,3}\Big(\sum_{j\in F_b}g_{i,j}\sigma_{i,j}\frac{\|\tilde{\theta}_{i,j}\|^{2}}{2}\Big)^{2}+\alpha_{i,3}\Big(\sum_{j\in F_b}g_{i,j}\sigma_{i,j}\frac{\|\tilde{\theta}_{i,j}\|^{2}}{2}\Big)^{2}-\beta_{i,2}\Big(\sum_{j\in F_b}g_{i,j}\sigma_{i,j}\frac{\|\tilde{\theta}_{i,j}\|^{2}}{2}\Big)^{p}+\frac{1}{2}\sum_{j=1}^{n}\gamma_{i,j}\bar{\varepsilon}_{i,j}^{2}+2\beta_{i,2}(1-p)p^{\frac{p}{1-p}}+\sum_{j=1}^{n}\frac{\delta_{i,j}}{2}\|\vartheta_{i,j}^{*}\|^{2}+\frac{1}{2}\sum_{j\in F_b}g_{i,j}\sigma_{i,j}\upsilon_{i,j}\|\theta_{i,j}^{*}\|^{2},$
where $\alpha_{i,1}=\frac{\min\{c_{i,1,1},\ldots,c_{i,1,n}\}}{n}$, $\alpha_{i,2}=\frac{\min\{\delta_{i,1}\lambda_{\min}(W_{i,1}),\delta_{i,2}\lambda_{\min}(W_{i,2}),\ldots,\delta_{i,n}\lambda_{\min}(W_{i,n})\}}{n}$, $\alpha_{i,3}=\frac{\min\{\upsilon_{i,1}\lambda_{\min}(J_{i,1}),\upsilon_{i,2}\lambda_{\min}(J_{i,2}),\ldots,\upsilon_{i,j}\lambda_{\min}(J_{i,j})\}}{j}$, $\beta_{i,1}=\min\{c_{i,2,1},\ldots,c_{i,2,n}\}$, and $\beta_{i,2}=\min\{\delta_{i,1}\lambda_{\min}(W_{i,1}),\ldots,\delta_{i,n}\lambda_{\min}(W_{i,n}),\upsilon_{i,j}\lambda_{\min}(J_{i,j})\}$.
Then, based on (57), (78) and Lemma 3, it holds that
$\dot{V}_{i,n}\le-A_{i}V_{i,n}^{2}-B_{i}V_{i,n}^{p}+C_{i},$
where $A_{i}=\frac{\min\{\alpha_{i,1},\alpha_{i,2},\alpha_{i,3}\}}{2}$, $B_{i}=\min\{\beta_{i,1},\beta_{i,2}\}$, and $C_{i}=\alpha_{i,2}\Big(\sum_{j=1}^{n}\frac{\|\tilde{\vartheta}_{i,j}\|^{2}}{2}\Big)^{2}+2\beta_{i,2}(1-p)p^{\frac{p}{1-p}}+\alpha_{i,3}\Big(\sum_{j\in F_b}g_{i,j}\sigma_{i,j}\frac{\|\tilde{\theta}_{i,j}\|^{2}}{2}\Big)^{2}+\frac{1}{2}\sum_{j=1}^{n}\gamma_{i,j}\bar{\varepsilon}_{i,j}^{2}+\sum_{j=1}^{n}\frac{\delta_{i,j}}{2}\|\vartheta_{i,j}^{*}\|^{2}+\frac{1}{2}\sum_{j\in F_b}g_{i,j}\sigma_{i,j}\upsilon_{i,j}\|\theta_{i,j}^{*}\|^{2}$.
Remark 3.
This paper proposes an adaptive NN control method based on radial basis function NNs to deal with uncertain nonlinearities among agents. The structure of the method is shown in Figure 1; it only requires online updating of the NN weights. In combination with adaptive control, the ideal weights $\vartheta_{i,g}^{*}$ $(g=1,\ldots,n)$ are estimated by $\hat{\vartheta}_{i,g}$. The adaptive backstepping technique is then employed to construct the virtual controllers $\varphi_{i,g}$ and the adaptive laws $\dot{\hat{\vartheta}}_{i,g}$, which ensure the stability of the system.

3.2. System Stability Analysis

Theorem 1.
Under Assumptions 1–4, the proposed method using virtual controllers (26), (41), (64), adaptive laws (30), (42), (65), (66), STM (49), control signal (51), and intermediate control signal (50) guarantees the following properties for MASs (3) with actuator faults and time-varying state constraints:
  • The system states will not violate the specified constraints and all system signals are bounded.
  • Zeno-behavior will not occur.
Proof. 
1. Construct the Lyapunov function V for the system as follows:
$V=\sum_{i=1}^{l}V_{i,n}.$
According to (80) and Lemma 3, it yields
$\dot{V}\le-\sum_{i=1}^{l}A_{i}V_{i,n}^{2}-\sum_{i=1}^{l}B_{i}V_{i,n}^{p}+\sum_{i=1}^{l}C_{i}\le-AV^{2}-BV^{p}+C,$
where $A=\frac{\min\{A_{1},\ldots,A_{l}\}}{l}$, $B=\min\{B_{1},\ldots,B_{l}\}$, and $C=\sum_{i=1}^{l}C_{i}$.
According to Lemma 1, it can be concluded that the MASs (3) are practically fixed-time stable, and the convergence time T satisfies
$T\le\frac{1}{A\nu}+\frac{1}{B\nu(1-p)},$
where $0<\nu<1$. Furthermore, the solution of the MASs (3) converges to the residual set $\{V\le\Theta\}$, with
$\Theta=\min\Big\{\Big(\frac{C}{(1-\nu)A}\Big)^{\frac{1}{2}},\ \Big(\frac{C}{(1-\nu)B}\Big)^{\frac{1}{p}}\Big\}.$
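The practical fixed-time property expressed by (82) and (83) can be checked on the scalar comparison system $\dot{V}=-AV^{2}-BV^{p}+C$: trajectories started from wildly different initial values all enter the residual set before the same bound $T$. The constants below are illustrative choices, not values from the paper:

```python
A, B, C, p, nu = 2.0, 2.0, 0.01, 0.5, 0.5
T_bound = 1.0 / (A * nu) + 1.0 / (B * nu * (1.0 - p))                       # Eq. (82)
radius = max((C / ((1 - nu) * A)) ** 0.5, (C / ((1 - nu) * B)) ** (1 / p))  # Eq. (83)

def settle_time(V0, dt=1e-5, t_max=10.0):
    # Forward-Euler integration of the comparison system until V enters
    # the residual set {V <= radius}.
    V, t = V0, 0.0
    while V > radius and t < t_max:
        V += dt * (-A * V ** 2 - B * V ** p + C)
        t += dt
    return t

t_small, t_large = settle_time(0.5), settle_time(100.0)
```

Both settling times fall below the bound $T$ and depend only weakly on the initial value: the $-AV^{2}$ term dominates far from the origin and the $-BV^{p}$ term near it, which is exactly the fixed-time mechanism.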
Remark 4.
According to (82), the convergence time T of the MASs (3) is independent of the initial states and depends only on the design parameters. Therefore, the MASs are practically fixed-time stable.
Since each term in V is positive, it can be deduced that
$\frac{1}{2}\ln\frac{1}{1-\zeta_{i,g}^{2}}\le V\le\Theta.$
By further inference, we can derive the following inequalities:
$-\Delta_{i,g}\le z_{i,g}\le\bar{\Delta}_{i,g},$
where $\Delta_{i,g}=k_{a_{ig}}(t)\big(1-e^{-2\Theta}\big)^{\frac{1}{2}}$ and $\bar{\Delta}_{i,g}=k_{b_{ig}}(t)\big(1-e^{-2\Theta}\big)^{\frac{1}{2}}$.
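The map from the Lyapunov bound $\Theta$ to the error bound in (85) is monotone and always lands strictly inside the constraint function. A quick numerical check (the values 1.2 and 0.3 below are illustrative, not taken from the paper):

```python
import numpy as np

def error_bound(k_bound, Theta):
    # Delta = k(t) * sqrt(1 - exp(-2*Theta)) from (85): the tracking-error
    # bound implied by V <= Theta, strictly smaller than the constraint k(t).
    return k_bound * np.sqrt(1.0 - np.exp(-2.0 * Theta))

delta = error_bound(1.2, 0.3)
```

Since $1-e^{-2\Theta}<1$ for any finite $\Theta$, the error bound never reaches the constraint boundary, which is how boundedness of $V$ translates into constraint satisfaction.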
Then, according to Lemma 6, we can deduce
$y-Y_{0}\ge-\frac{\Delta_{i,1}}{\xi_{\min}}\ge-\frac{k_{a_{i1}}(t)}{\xi_{\min}},\qquad y-Y_{0}\le\frac{\bar{\Delta}_{i,1}}{\xi_{\min}}\le\frac{k_{b_{i1}}(t)}{\xi_{\min}}.$
By the definitions of $\bar{k}_{a_{i1}}(t)$ and $\bar{k}_{b_{i1}}(t)$, it follows that
$x_{i,1}\ge-\bar{k}_{a_{i1}}(t)+y_{0}(t),\qquad x_{i,1}\le\bar{k}_{b_{i1}}(t)+y_{0}(t).$
Therefore, the system state $x_{i,1}$ satisfies
$k_{c_{i1}}(t)\le x_{i,1}\le\bar{k}_{c_{i1}}(t).$
From Equation (88), it can be concluded that the output state of the system is bounded. Additionally, from (85), the error $z_{i,1}$ is also bounded. $\dot{k}_{a_{i1}}(t)$ and $\dot{k}_{b_{i1}}(t)$ are composed of $y_{0}^{(1)}(t)$, $\bar{k}_{c_{i1}}^{(1)}$ and $k_{c_{i1}}^{(1)}$, and it can be inferred from Assumptions 2 and 3 that these are all bounded. Therefore, $\dot{k}_{a_{i1}}(t)$ and $\dot{k}_{b_{i1}}(t)$ are also bounded. Then, according to the definition of $\hat{\vartheta}_{i,1}$, $\hat{\vartheta}_{i,1}$ is bounded. Since $\varphi_{i,1}$ is composed of $\dot{k}_{a_{i1}}(t)$, $\dot{k}_{b_{i1}}(t)$, $\hat{\vartheta}_{i,1}$ and $y_{0}^{(1)}(t)$, it has an upper bound $\bar{\varphi}_{i,1}$. According to $z_{i,2}=x_{i,2}-\varphi_{i,1}$ and by selecting $k_{c_{i2}}(t)=-\big(\Delta_{i,2}(t)+\bar{\varphi}_{i,1}\big)$ and $\bar{k}_{c_{i2}}(t)=\bar{\Delta}_{i,2}(t)+\bar{\varphi}_{i,1}$, it can be obtained that $k_{c_{i2}}(t)\le x_{i,2}\le\bar{k}_{c_{i2}}(t)$, which means $x_{i,2}$ remains within the constraint range.
In view of the above analysis, using the same approach, we can determine that all states of the system will always remain within the constraints and, furthermore, all signals of the system are bounded.
2. According to (49), when $w_{i,j}(t)>v_{2}$, we have $t_{g+1}-t_{g}=t^{*}>0$. When $w_{i,j}(t)\le v_{2}$, since $\bar{u}_{i,j}$ is bounded and $w_{i,j}(t)=\dot{\omega}_{i,j}(t)|_{t=t_{g}}$, the quantity $\eta\bar{u}_{i,j}+\eta_{1}\max\{v_{1},w_{i,j}(t)\}$ is also bounded. Therefore, there exists a constant $\bar{t}>0$ such that $t_{g+1}-t_{g}\ge\bar{t}>0$. Evidently, the occurrence of Zeno behavior is precluded. □
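The Zeno-freeness argument boils down to: every inter-event interval is lower-bounded, either by the explicit dwell time $t^{*}$ or by a positive constant built from bounded signals. A schematic stand-in for the interval selection of STM (49), with the parameter values from Table 1 (the formula in the second branch is a simplified illustration, since (49) itself is defined earlier in the paper):

```python
def trigger_interval(w, u_bar, eta=0.1, eta1=0.01, v1=20.0, v2=50.0, t_star=0.01):
    # Large triggering signal: fall back to the fixed dwell time t*.
    if abs(w) > v2:
        return t_star
    # Otherwise the interval is bounded below by a positive constant, because
    # eta*|u_bar| is bounded and max(v1, |w|) is bounded on this branch.
    return eta1 / (eta * abs(u_bar) + eta1 * max(v1, abs(w)))

intervals = [trigger_interval(w, u) for w in (0.0, 30.0, 100.0) for u in (0.5, 5.0)]
```

Every returned interval is strictly positive, which is exactly the property that rules out Zeno behavior: triggering instants cannot accumulate in finite time.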

4. Simulation Results

In this section, two sets of experiments are employed to substantiate the efficacy of the proposed fixed-time self-triggered control method.

4.1. Example 1

Consider a class of MASs with a communication structure as presented in Figure 2, where node 0 represents the leader, and 1–4 represent the followers. The model for each follower is given below.
$\dot{x}_{i,1}=x_{i,2}+\bar{n}_{i,1}(\bar{x}_{i,1}),\qquad \dot{x}_{i,2}=\sum_{j=1}^{2}g_{i,j}\hbar_{i,j}(\bar{x}_i)(\sigma_{i,j}\bar{u}_{i,j}+\kappa_{i,j})+\bar{n}_{i,2}(\bar{x}_{i,2}),\qquad y_{i}=x_{i,1},$
where $i=1,\ldots,4$. The nonlinear functions are chosen as $\hbar_{i,j}(\bar{x}_i)=3+0.1\sin(x_{i,1})$, $\bar{n}_{i,1}(\bar{x}_{i,1})=0.1\sin(x_{i,1})$ and $\bar{n}_{i,2}(\bar{x}_{i,2})=0.2\sin(x_{i,1}x_{i,2}^{2})$. The system's constraint functions and the leader's signal are chosen as $\bar{k}_{c_{i1}}(t)=0.9+0.4\sin(t)$, $k_{c_{i1}}(t)=-0.8+0.3\sin(t)$, $\bar{k}_{c_{i2}}(t)=2.9+0.7\cos(t)$, $k_{c_{i2}}(t)=-3+0.2\cos(t)$ and $y_{0}=\sin(t)$, respectively. The other system and controller parameters are listed in Table 1.
Furthermore, we account for two types of actuator failures:
Case I: At time t = 15 s, actuator 1 experiences a failure rate of 20% for each agent, while actuator 2 experiences a failure rate of 40%.
Case II: At time t = 15 s, actuator 1 continues to function normally for each agent, while actuator 2 completely fails.
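In the plant model, a fault enters through the term $\sigma_{i,j}\bar{u}_{i,j}+\kappa_{i,j}$: $\sigma$ is the remaining actuator effectiveness and $\kappa$ a bias. The two cases above correspond to the effectiveness values below ($\kappa=0$ is assumed here purely for illustration):

```python
import numpy as np

def apply_faults(u, t, case, t_fault=15.0):
    # Loss-of-effectiveness fault model sigma * u (+ kappa, taken as 0 here).
    u = np.asarray(u, dtype=float)
    if t < t_fault:
        return u                        # both actuators healthy before t = 15 s
    if case == 1:
        sigma = np.array([0.8, 0.6])    # Case I: 20% / 40% effectiveness loss
    else:
        sigma = np.array([1.0, 0.0])    # Case II: actuator 2 fails completely
    return sigma * u
```

Case II is the harder scenario: with one actuator dead, the entire control effort must be rerouted through the remaining channel, which is what the compensation terms in the controller are designed to do.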
The simulation results for Example 1 are shown in Figure 3, Figure 4, Figure 5, Figure 6, Figure 7, Figure 8 and Figure 9. The results show that the proposed method is effective for MASs with actuator faults and time-varying state constraints. Figure 3 and Figure 4 depict the tracking performance of each agent with respect to the leader's trajectory, together with the associated synchronization errors. Under both instances of actuator failure, the agents achieve rapid convergence and maintain small tracking errors, and the time-varying constraints on the output states are satisfied. Figure 5, Figure 6, Figure 7 and Figure 8 depict the input signals of the STM and the system's input signals. The trigger intervals are irregular, and the signal update frequency is reduced by the constructed STM. Notably, despite encountering actuator failures of different magnitudes at 15 s, the system still attains the intended control objectives. Figure 9 compares the number of triggering events of actuator 1 under a conventional continuous-time trigger mechanism and under the developed STM. The STM significantly decreases the number of triggering events, leading to substantial savings in system communication resources; the specific saving rates are provided in Table 2.
To validate the fixed-time convergence property, three different sets of initial states were selected to compare the convergence times of the synchronization errors. The initial states are listed in Table 3, and the corresponding simulation results are illustrated in Figure 10. Regardless of the initial states, the system's convergence time remains around 0.1 s, which aligns with our expectations.

4.2. Example 2

This section considers a class of MASs composed of multiple single-link robotic arms. The dynamics model of the system is as follows:
$\dot{x}_{i,1}=x_{i,2},\qquad \dot{x}_{i,2}=\frac{1}{J}\Big[\sum_{j=1}^{2}g_{i,j}\hbar_{i,j}(\bar{x}_i)l(\sigma_{i,j}\bar{u}_{i,j}+\kappa_{i,j})-Dx_{i,2}-MGL\sin x_{i,1}\Big],\qquad y_{i}=x_{i,1},$
where $G=9.8\,\mathrm{m/s^{2}}$ and $D=1\,\mathrm{kg/s}$ represent the gravitational acceleration and the damping factor, $L=1\,\mathrm{m}$ and $M=2\,\mathrm{kg}$ represent the length and mass of the robotic arm, respectively, and $J$ represents the moment of inertia. The system's constraint functions and the leader's signal are chosen as $\bar{k}_{c_{i1}}(t)=1.2+0.4\sin(t)$, $k_{c_{i1}}(t)=-1.2+0.3\sin(t)$, $\bar{k}_{c_{i2}}(t)=2.9+0.7\cos(t)$, $k_{c_{i2}}(t)=-3+0.2\cos(t)$ and $y_{0}=\sin(t)$, respectively. The other system and controller parameters are listed in Table 4. Similarly, we adopt the two actuator failure cases from Example 1.
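With the stated parameters, the single-link arm model is a damped pendulum driven by the net actuator torque. A direct transcription of the dynamics ($J$ has no numerical value in this excerpt, so $J=1\,$kg·m² is an assumed placeholder, and the faulted actuator channels are collapsed into a single torque input):

```python
import numpy as np

# Gravity, damping, length, mass from Example 2; inertia J is assumed.
G, D, L, M, J = 9.8, 1.0, 1.0, 2.0, 1.0

def arm_dynamics(x1, x2, tau):
    # x1: joint angle, x2: angular velocity, tau: net torque from the
    # sum of g_{i,j} * hbar_{i,j} * l * (sigma*u + kappa) channels.
    x1_dot = x2
    x2_dot = (tau - D * x2 - M * G * L * np.sin(x1)) / J
    return x1_dot, x2_dot
```

At the horizontal position the gravity torque is $MGL=19.6\,$N·m, so any fault that caps the achievable torque below this value makes the tracking task infeasible; this is why the fault magnitudes in Cases I and II matter.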
The corresponding simulation results are illustrated in Figure 11, Figure 12, Figure 13, Figure 14, Figure 15, Figure 16 and Figure 17. Consistent with the findings from Example 1, employing the proposed control method ensures that the MAS can still achieve consensus within a fixed time even in the presence of actuator failures and time-varying constraints. Furthermore, while ensuring fast convergence and small tracking errors, the system effectively saves communication resources (see Table 5).

5. Conclusions

Based on the STM, this work presents a fixed-time consensus control method that tackles the challenges of actuator failures and time-varying constraints in MASs. The ABLFs and the adaptive backstepping technique are employed to ensure that various types of full-state constraints (time-varying, constant, symmetric, and asymmetric) are satisfied. Additionally, a fixed-time STM is constructed to achieve rapid stabilization of the MASs while alleviating the communication burden among the agents. Example 2 shows that, as the tracking signal becomes more complex, the amount of communication resources saved by the STM decreases. In future work, we will refine the STM so that it remains efficient in such scenarios.

Author Contributions

Conceptualization, J.W. and F.W.; Methodology, Z.H. and R.T.; Software, J.L. and Y.Z.; Writing—original draft preparation, J.W. and R.T.; Writing—review and editing, Y.G. and W.H.; Supervision, R.T. and F.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Guangzhou Yangcheng Scholars Research Project under grant number 202235199, the Special Funds for the Cultivation of Guangdong College Students’ Scientific and Technological Innovation (Climbing Program Special Funds) under grant number pdjh2022a0404, the Research project at Guangzhou University under grant RC2023007 and the College Students’ Innovative Entrepreneurial Training Plan Program under grant s202311078031.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Figure 1. Schematic diagram of the adaptive NN control method.
Figure 2. Communication structure graph.
Figure 3. Example 1 (Case I): (a) Trajectories of leader y 0 and agents y i ; (b) The synchronization errors z i , 1 .
Figure 4. Example 1 (Case II): (a) Trajectories of leader y 0 and agents y i ; (b) The synchronization errors z i , 1 .
Figure 5. Example 1 (Case I): Trajectories of ω i , 1 and u i , 1 .
Figure 6. Example 1 (Case I): Trajectories of ω i , 2 and u i , 2 .
Figure 7. Example 1 (Case II): Trajectories of ω i , 1 and u i , 1 .
Figure 8. Example 1 (Case II): Trajectories of ω i , 2 and u i , 2 .
Figure 9. Example 1: (a) Comparison of trigger frequencies in Case I; (b) Comparison of trigger frequencies in Case II.
Figure 10. Example 1 (Case I): (a) The synchronization errors of State I; (b) The synchronization errors of State II; (c) The synchronization errors of State III.
Figure 11. Example 2 (Case I): (a) Trajectories of leader y 0 and agents y i ; (b) The synchronization errors z i , 1 .
Figure 12. Example 2 (Case II): (a) Trajectories of leader y 0 and agents y i ; (b) The synchronization errors z i , 1 .
Figure 13. Example 2 (Case I): Trajectories of ω i , 1 and u i , 1 .
Figure 14. Example 2 (Case I): Trajectories of ω i , 2 and u i , 2 .
Figure 15. Example 2 (Case II): Trajectories of ω i , 1 and u i , 1 .
Figure 16. Example 2 (Case II): Trajectories of ω i , 2 and u i , 2 .
Figure 17. Example 2: (a) Comparison of trigger frequencies in Case I; (b) Comparison of trigger frequencies in Case II.
Table 1. The other system parameters and controller parameters of Example 1.
The initial values of system [ x 1 , 1 ( 0 ) , x 2 , 1 ( 0 ) , x 3 , 1 ( 0 ) , x 4 , 1 ( 0 ) ] T = [ 0.2 , 0.2 , 0.3 , 0.4 ] T ,
[ x 1 , 2 ( 0 ) , x 2 , 2 ( 0 ) , x 3 , 2 ( 0 ) , x 4 , 2 ( 0 ) ] T = [ 0 , 0 , 0 , 0 ] T ,
ϑ ^ i , 1 ( 0 ) = ϑ ^ i , 2 ( 0 ) = 0 , θ ^ i , 1 ( 0 ) = θ ^ i , 2 ( 0 ) = [ 0.4 , 0 , 0 ] T ,
[ u ¯ 1 , 1 , u ¯ 1 , 2 ] T = [ u ¯ 2 , 1 , u ¯ 2 , 2 ] T = [ u ¯ 3 , 1 , u ¯ 3 , 2 ] T = [ u ¯ 4 , 1 , u ¯ 4 , 2 ] T = [ 0 , 0 ] T .
The parameters of controllers [ c 1 , 1 , 1 , c 2 , 1 , 1 , c 3 , 1 , 1 , c 4 , 1 , 1 ] T = [ 44 , 42 , 47 , 46 ] T , ι 1 = 25 , ι 2 = 30 ,
[ c 1 , 2 , 1 , c 2 , 2 , 1 , c 3 , 2 , 1 , c 4 , 2 , 1 ] T = [ 1 , 3 , 1 , 1 ] T , γ i , 1 = 4 , γ i , 2 = 2 ,
[ c 1 , 1 , 2 , c 2 , 1 , 2 , c 3 , 1 , 2 , c 4 , 1 , 2 ] T = [ 4 , 4 , 5 , 2 ] T , W i , 1 = 0.8 I 15 × 15 ,
[ c 1 , 2 , 2 , c 2 , 2 , 2 , c 3 , 2 , 2 , c 4 , 2 , 2 ] T = [ 8 , 4 , 3 , 3 ] T , W i , 2 = 0.6 I 15 × 15 ,
J i , 1 = J i , 2 = 0.1 I 3 × 3 , δ i , 1 = δ i , 2 = 0.1 , υ i , 1 = υ i , 2 = 0.01 .
The parameters of STM η = 0.1 , η 1 = 0.01 , ς = 0.5 , v 1 = 20 , v 2 = 50 , t * = 0.01 .
Table 2. The bandwidth saving rate of Example 1.
CaseAgent 1Agent 2Agent 3Agent 4
Case I81.57%66.03%65.57%73.60%
Case II80.50%66.17%65.23%73.23%
Table 3. Three different initial conditions of Example 1 Case I.
State x 1 , 1 ( 0 ) x 2 , 1 ( 0 ) x 3 , 1 ( 0 ) x 4 , 1 ( 0 )
State I0.20.20.30.4
State II0.40.30.20.1
State III−0.200.3−0.1
Table 4. The other system parameters and controller parameters of Example 2.
The initial values of system [ x 1 , 1 ( 0 ) , x 2 , 1 ( 0 ) , x 3 , 1 ( 0 ) , x 4 , 1 ( 0 ) ] T = [ 0.3 , 0.4 , 0.2 , 0.5 ] T ,
[ x 1 , 2 ( 0 ) , x 2 , 2 ( 0 ) , x 3 , 2 ( 0 ) , x 4 , 2 ( 0 ) ] T = [ 0 , 0 , 0 , 0 ] T ,
ϑ ^ i , 1 ( 0 ) = ϑ ^ i , 2 ( 0 ) = 0 , θ ^ i , 1 ( 0 ) = θ ^ i , 2 ( 0 ) = [ 0.4 , 0 , 0 ] T ,
[ u ¯ 1 , 1 , u ¯ 1 , 2 ] T = [ u ¯ 2 , 1 , u ¯ 2 , 2 ] T = [ u ¯ 3 , 1 , u ¯ 3 , 2 ] T = [ u ¯ 4 , 1 , u ¯ 4 , 2 ] T = [ 0 , 0 ] T .
The parameters of controllers [ c 1 , 1 , 1 , c 2 , 1 , 1 , c 3 , 1 , 1 , c 4 , 1 , 1 ] T = [ 38 , 27 , 33 , 34 ] T , ι 1 = 30 , ι 2 = 30 ,
[ c 1 , 2 , 1 , c 2 , 2 , 1 , c 3 , 2 , 1 , c 4 , 2 , 1 ] T = [ 2 , 3 , 2 , 1 ] T , γ i , 1 = 2 , γ i , 2 = 2 ,
[ c 1 , 1 , 2 , c 2 , 1 , 2 , c 3 , 1 , 2 , c 4 , 1 , 2 ] T = [ 2 , 2 , 5 , 4 ] T , W i , 1 = 0.8 I 15 × 15 ,
[ c 1 , 2 , 2 , c 2 , 2 , 2 , c 3 , 2 , 2 , c 4 , 2 , 2 ] T = [ 6 , 4 , 3 , 3 ] T , W i , 2 = 0.8 I 15 × 15 ,
J i , 1 = J i , 2 = 0.1 I 3 × 3 , δ i , 1 = δ i , 1 = 0.1 , υ i , 1 = υ i , 2 = 0.01 .
The parameters of STM η = 0.1 , η 1 = 0.01 , ς = 0.5 , v 1 = 20 , v 2 = 50 , t * = 0.01 .
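The STM parameters above feed the triggering rule that precomputes each agent's next transmission instant, lower-bounded by the minimum inter-event time $t^* = 0.01$ to exclude Zeno behavior. The sketch below illustrates this generic structure only; the interval formula and the control magnitude `u_now` are hypothetical stand-ins, not the paper's exact triggering condition, and only the parameter values come from the tables:

```python
# Parameter values taken from the STM rows of Tables 1 and 4.
eta, eta1, varsigma, t_star = 0.1, 0.01, 0.5, 0.01

def next_interval(u_abs: float) -> float:
    """Illustrative inter-event time: larger control effort triggers more
    frequent updates, clamped below by the minimum inter-event time t_star."""
    tau = (varsigma * eta + eta1) / (eta * (abs(u_abs) + 1.0))
    return max(t_star, tau)

t_k = 0.0          # current trigger instant
u_now = 4.2        # hypothetical control magnitude at t_k
t_next = t_k + next_interval(u_now)   # next transmission is known in advance
print(t_next)
```

Because the agent computes `t_next` at the current trigger instant, it needs no continuous monitoring of the triggering condition between events, which is where the bandwidth savings in Tables 2 and 5 come from.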
Table 5. The bandwidth saving rate of Example 2.

| Case    | Agent 1 | Agent 2 | Agent 3 | Agent 4 |
| Case I  | 54.33%  | 32.73%  | 32.93%  | 31.53%  |
| Case II | 54.57%  | 36.90%  | 36.70%  | 34.53%  |

Share and Cite

MDPI and ACS Style

Wang, J.; Hu, Z.; Liu, J.; Zhang, Y.; Gu, Y.; Huang, W.; Tang, R.; Wang, F. Adaptive Self-Triggered Control for Multi-Agent Systems with Actuator Failures and Time-Varying State Constraints. Actuators 2023, 12, 364. https://doi.org/10.3390/act12090364