Article

State Estimation for Complex-Valued Inertial Neural Networks with Multiple Time Delays

College of Mathematics and Systems Science, Shandong University of Science and Technology, Qingdao 266590, China
* Author to whom correspondence should be addressed.
Mathematics 2022, 10(10), 1725; https://doi.org/10.3390/math10101725
Submission received: 18 March 2022 / Revised: 7 May 2022 / Accepted: 16 May 2022 / Published: 18 May 2022

Abstract

In this paper, the problem of state estimation for complex-valued inertial neural networks with leakage, additive and distributed delays is considered. By means of the Lyapunov–Krasovskii functional method, the Jensen inequality, and the reciprocally convex approach, a delay-dependent criterion based on linear matrix inequalities (LMIs) is derived. At the same time, the network state is estimated by observing the output measurements to ensure the global asymptotic stability of the error system. Finally, two examples are given to verify the effectiveness of the proposed method.

1. Introduction

Over the past decades, various types of neural network models have been studied extensively. Because of the complex dynamic characteristics of neurons, it is often necessary to describe the neural response using derivative information of the state variables. This gives neural networks good application prospects in various fields [1,2], and the dynamical behaviors of such systems have received considerable attention [3,4,5,6]. In addition, inertial neural networks are described by second-order differential equations, which differ from first-order differential models such as bidirectional associative memory neural networks and Cohen–Grossberg neural networks. The inertial neural network model is widely used in biology and engineering due to its richer dynamical phenomena and superior characteristics, as in [7,8]. As a result, researchers have begun to focus on this class of systems, and many instructive results have emerged [9,10,11]. For example, ref. [10] analyzed the global exponential stability problem of fuzzy inertial neural networks with time-varying delays, and ref. [11] discussed the global exponential stabilization and lag synchronization control problems of inertial neural networks with time delays.
It is well known that, in the design of neural network state feedback controllers, obtaining the state information of the networks is of great significance. Unfortunately, in practice, due to the strong coupling of large-scale neural networks, it is difficult to obtain accurate and complete neuron state information from the networks' output. Therefore, reasonable measurement methods and effective estimators are needed to estimate the states of the neurons, so as to further describe and simulate their complex responses. On this basis, much research has been devoted to the state estimation of neural networks [12,13,14,15,16,17,18]. It is worth emphasizing that the state estimation problem of neural networks was first discussed in [12]; subsequently, many important results followed [13,14,15,16,17,18]. However, there are still few works on the state estimation of inertial neural networks, appearing only in [17,18]. It is therefore worthwhile to explore the state estimation problem of inertial neural networks.
On the other hand, neural networks with real-valued weights, outputs, states, and activation functions are called real-valued neural networks. Although real-valued neural networks perform excellently in many application fields, they also have certain limitations. To surmount these limitations, researchers naturally put forward complex-valued neural networks, and more and more studies [19,20,21,22,23,24,25,26,27,28,29,30,31,32] on the dynamical behaviors of complex-valued neural networks have been carried out in recent years. At the same time, many scholars also pay close attention to the analysis of complex-valued inertial neural networks [33,34,35,36,37,38]. From the perspective of research methods, most of them transform the second-order differential systems into first-order ones through variable transformation and then study the addressed systems based on the first-order forms [34,35]. However, this method doubles the number of state variables and the dimension of the systems. In order to maintain the original characteristics of the systems and keep the theoretical analysis simple, refs. [36,37,38,39] used a non-reduced-order method to analyze exponential and adaptive synchronization and finite/fixed-time synchronization for complex-valued inertial neural networks, respectively.
In reality, the evolution of many systems depends not only on the current state but also on past states, which naturally involves time delays. As one of the main sources of system instability, oscillation, or performance deterioration, time delays inevitably affect many systems in different forms, including leakage delay, additive delays, distributed delay, and so on. For bidirectional associative memory neural networks, the finite-time stability problem with distributed delays [40] and the state estimation problem with additive delays [41] have been considered, respectively. In [42,43,44], the state estimation problem of quaternion-valued neural networks and event-triggered exponential stabilization for inertial complex-valued neural networks with multiple time delays were studied. At present, based on the influence of time delays and the characteristics of inertial neural networks, some scholars have combined the inertia term, time delays, and the most classical first-order differential models to explore the synchronization and anti-synchronization of complex-valued inertial neural networks with time-varying delays, such as [45]. Unfortunately, for complex-valued inertial neural networks, there is no literature in which the above three kinds of time delays are present simultaneously, let alone research on state estimation, so we need to fill this gap.
Inspired by the above discussion, the state estimation problem of a class of complex-valued inertial neural networks with multiple time delays is studied by using a non-reduced-order method. The main challenges and contributions of this paper are summarized below: (1) Different from [27,28,31], which separate the complex-valued neural networks into two equivalent real-valued subsystems, the state estimation problem of complex-valued inertial neural networks with multiple time delays is studied via the nonseparation method. (2) To maintain the characteristics of the original system and increase the flexibility and generality of the theoretical results, the considered second-order differential model is treated as a whole instead of being reduced in order through variable substitution. (3) To achieve our aim, an appropriate Lyapunov–Krasovskii functional that fully accounts for the information of the leakage delay, additive delays, and distributed delay is constructed, in which the reciprocally convex inequality in the complex domain is adopted.

2. Preliminaries and Model Descriptions

Consider the following complex-valued inertial neural networks with leakage and additive and distributed delays:
$$\ddot{u}(t) = -A\dot{u}(t) - Bu(t-\delta) + Cf(u(t)) + Df(u(t-\tau_1(t)-\tau_2(t))) + E\int_{t-\beta}^{t} f(u(s))\,ds \qquad (1)$$
where $u(t) = (u_1(t), u_2(t), \ldots, u_n(t))^T \in \mathbb{C}^n$ is the state vector of the $n$ neurons, and its second derivative is called the inertia term; $A = \mathrm{diag}(a_1, a_2, \ldots, a_n) \in \mathbb{R}^{n \times n}$ with $a_p > 0$ $(p = 1, 2, \ldots, n)$ and $B = \mathrm{diag}(b_1, b_2, \ldots, b_n) \in \mathbb{R}^{n \times n}$ with $b_p > 0$ indicate the self-feedback connection weight matrices; $C, D, E \in \mathbb{C}^{n \times n}$ are connection weight matrices; $f(u(t)) = (f_1(u_1(t)), f_2(u_2(t)), \ldots, f_n(u_n(t)))^T \in \mathbb{C}^n$ represents the complex-valued activation functions; $\delta \geq 0$ is the leakage delay; $\tau_q(t)$ $(q = 1, 2)$ denotes the time-varying delays with $0 \leq \tau_q(t) \leq \tau_q$, where the $\tau_q$ are given real constants; and $\beta$ stands for the distributed delay.
Let $\tau(t) = \tau_1(t) + \tau_2(t)$ and $\tau = \tau_1 + \tau_2$. The initial conditions of System (1) are
$$u(s) = \eta(s), \quad \dot{u}(s) = \rho(s), \quad -h \leq s \leq 0 \qquad (2)$$
where $\eta(s)$ and $\rho(s)$ are continuous and $h = \max\{\delta, \tau, \beta\}$.
Moreover, the measurement output of System (1) is assumed to be
$$m(t) = Fu(t) + Gg(u(t)) \qquad (3)$$
where $m(t) \in \mathbb{C}^m$ is the measurement output of System (1); $F \in \mathbb{C}^{m \times n}$ and $G \in \mathbb{C}^{m \times m}$ are the output weighting matrices; and $g(u(t)) \in \mathbb{C}^m$ denotes the neuron-dependent nonlinear disturbance signals.
Further, Equations (1)–(3) can be combined as
$$\begin{cases} \ddot{u}(t) = -A\dot{u}(t) - Bu(t-\delta) + Cf(u(t)) + Df(u(t-\tau_1(t)-\tau_2(t))) + E\int_{t-\beta}^{t} f(u(s))\,ds \\ m(t) = Fu(t) + Gg(u(t)) \\ u(s) = \eta(s), \quad \dot{u}(s) = \rho(s), \quad s \in [-h, 0]. \end{cases} \qquad (4)$$
Next, we construct a full-order state estimator for System (4):
$$\begin{cases} \ddot{v}(t) = -A\dot{v}(t) - Bv(t-\delta) + Cf(v(t)) + Df(v(t-\tau_1(t)-\tau_2(t))) + E\int_{t-\beta}^{t} f(v(s))\,ds \\ \qquad\qquad + K(\dot{m}(t) - \dot{n}(t)) + H(m(t) - n(t)) \\ n(t) = Fv(t) + Gg(v(t)) \\ v(s) = \tilde{\eta}(s), \quad \dot{v}(s) = \tilde{\rho}(s), \quad s \in [-h, 0] \end{cases} \qquad (5)$$
where $v(t)$ is the estimate of $u(t)$, $n(t)$ is the estimated output, and $K, H \in \mathbb{C}^{n \times m}$ are the state estimator gain matrices to be designed.
Now, define $e(t) = u(t) - v(t)$, $\tilde{f}(e(t)) = f(u(t)) - f(v(t))$, and $\tilde{g}(e(t)) = g(u(t)) - g(v(t))$; the estimation error system can then be derived from (4) and (5) as follows:
$$\begin{cases} \ddot{e}(t) = -(A+KF)\dot{e}(t) - HFe(t) - Be(t-\delta) + C\tilde{f}(e(t)) + D\tilde{f}(e(t-\tau_1(t)-\tau_2(t))) \\ \qquad\qquad + E\int_{t-\beta}^{t} \tilde{f}(e(s))\,ds - KG\tilde{g}(\dot{e}(t)) - HG\tilde{g}(e(t)) \\ e(s) = \varrho(s), \quad \dot{e}(s) = \varsigma(s), \quad s \in [-h, 0] \end{cases} \qquad (6)$$
where $\varrho(s) = \eta(s) - \tilde{\eta}(s)$ and $\varsigma(s) = \rho(s) - \tilde{\rho}(s)$.
The flow chart of the complex-valued inertial neural network and the designed estimator is illustrated in Figure 1.
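To make the model concrete, the following is a minimal forward-Euler simulation sketch of the plant in System (1). The coefficient matrices and activation are borrowed from Example 1 below; the step size and the constant initial function are our own illustrative assumptions, and this is not the authors' simulation code.

```python
import numpy as np

# Minimal forward-Euler sketch of the inertial network (1) with leakage,
# additive time-varying, and distributed delays (illustrative parameters).
n, dt, T = 2, 1e-3, 10.0
delta, beta = 0.1, 0.1
tau = lambda t: 0.2 * np.sin(t) ** 2 + 0.5 * np.cos(t) ** 2   # tau1(t) + tau2(t)

A = np.diag([0.8, 2.8])
B = np.diag([0.9, 1.1])
C = np.array([[0.5 + 1.0j, 0.3 + 1.2j], [0.3 - 0.2j, 1.0 - 0.2j]])
D = np.array([[0.3 + 0.7j, 1.1 + 0.5j], [0.3 + 0.2j, 0.5 + 0.5j]])
E = np.array([[0.2 + 0.8j, 0.5 - 0.8j], [0.3 + 0.2j, 0.2 - 0.8j]])
f = lambda u: (1 - np.exp(-u)) / (1 + np.exp(-u))             # activation

h = 0.7                                     # h = max{delta, tau, beta}
hist, steps = int(h / dt), int(T / dt)
u = np.zeros((steps + hist, n), dtype=complex)
u[:hist + 1] = np.array([4.7 - 3.0j, 9.0 + 8.9j])             # eta(s) on [-h, 0]
w = np.zeros(n, dtype=complex)                                # w(t) = du/dt

for k in range(hist, steps + hist - 1):
    t = (k - hist) * dt
    # distributed-delay term: Riemann sum of f(u(s)) over [t - beta, t]
    dist = dt * f(u[k - int(beta / dt):k]).sum(axis=0)
    dw = (-A @ w - B @ u[k - int(delta / dt)]
          + C @ f(u[k]) + D @ f(u[k - int(tau(t) / dt)]) + E @ dist)
    w = w + dt * dw                         # integrate the second-order system
    u[k + 1] = u[k] + dt * w
```

The estimator (5) can be simulated in the same loop by duplicating the state for $v(t)$ and adding the two injection terms.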
Remark 1.
In real control networks, when a signal is transmitted from one point to the next, it passes through physical equipment, controllers, sensors, and actuators. In this process, several different types of time delays may occur due to changes in the network transmission conditions, such as leakage delay, additive delays, and distributed delay. On the other hand, in large-scale neural networks, the network output contains only partial information about the neuron states. In order to make better use of neural networks, it is usually necessary to estimate the states of the neurons through output measurements and then use the estimated neuron states to complete some actual performance objectives of the system. Hence, it is very important to discuss the state estimation problem for neural network models with multiple time delays. Here, we focus on this problem for complex-valued inertial neural networks with multiple time delays.
The following preliminaries will be used in the derivation of the next section.
Assumption 1.
For any $\kappa \in \{1, 2, \ldots, n\}$, there exists a constant $l_\kappa \in \mathbb{R}$ such that
$$|f_\kappa(x) - f_\kappa(y)| \leq l_\kappa |x - y| \qquad (7)$$
for all $x, y \in \mathbb{C}$.
Assumption 2.
For any $\kappa \in \{1, 2, \ldots, m\}$, there exists a constant vector $N_\kappa \in \mathbb{R}^n$ such that
$$|g_\kappa(u) - g_\kappa(v)| \leq |N_\kappa^T (u - v)| \qquad (8)$$
for all $u, v \in \mathbb{C}^n$.
For convenience, we define $L = \mathrm{diag}(l_1, l_2, \ldots, l_n)$ and $N = (N_1, N_2, \ldots, N_m)$.
Lemma 1
([21]). For a positive definite Hermitian matrix $P \in \mathbb{C}^{n \times n}$ and a vector function $u(s): [c, d] \to \mathbb{C}^n$ with scalars $c < d$,
$$\left( \int_c^d u(s)\,ds \right)^* P \left( \int_c^d u(s)\,ds \right) \leq (d - c) \int_c^d u^*(s) P u(s)\,ds. \qquad (9)$$
Lemma 2
([45]). For any given vectors $\xi_1, \xi_2 \in \mathbb{C}^n$, any scalar $0 < \rho < 1$, any positive definite Hermitian matrix $P \in \mathbb{C}^{n \times n}$, and any matrix $Q \in \mathbb{C}^{n \times n}$ such that $\begin{bmatrix} P & Q \\ Q^* & P \end{bmatrix} > 0$, the following inequality holds:
$$\frac{1}{\rho}\, \xi_1^* P \xi_1 + \frac{1}{1-\rho}\, \xi_2^* P \xi_2 \geq \begin{bmatrix} \xi_1 \\ \xi_2 \end{bmatrix}^* \begin{bmatrix} P & Q \\ Q^* & P \end{bmatrix} \begin{bmatrix} \xi_1 \\ \xi_2 \end{bmatrix}. \qquad (10)$$
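As a quick sanity check (not part of the paper's argument), both lemmas can be spot-checked numerically on random complex data. The quadrature grid, the choice $Q = 0.5P$, and the tolerances below are our own illustrative assumptions.

```python
import numpy as np

# Numerical spot-check of Lemmas 1 and 2 on random complex data.
rng = np.random.default_rng(0)
n, m = 3, 400                                # dimension, quadrature points

# random positive definite Hermitian P
X = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
P = X @ X.conj().T + n * np.eye(n)

# Lemma 1 (Jensen) on [c, d] = [0, 1], integrals replaced by Riemann sums
ds = 1.0 / m
u = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))
Iu = u.sum(axis=0) * ds                      # approximates the integral of u
lhs = (Iu.conj() @ P @ Iu).real
rhs = (1.0 - 0.0) * sum((ui.conj() @ P @ ui).real for ui in u) * ds
assert lhs <= rhs + 1e-9

# Lemma 2 (reciprocally convex) with Q = 0.5 P, so [[P, Q], [Q*, P]] > 0
Q, rho = 0.5 * P, 0.3
xi1 = rng.standard_normal(n) + 1j * rng.standard_normal(n)
xi2 = rng.standard_normal(n) + 1j * rng.standard_normal(n)
lhs2 = (xi1.conj() @ P @ xi1).real / rho + (xi2.conj() @ P @ xi2).real / (1 - rho)
xi = np.concatenate([xi1, xi2])
big = np.block([[P, Q], [Q.conj().T, P]])
assert lhs2 >= (xi.conj() @ big @ xi).real - 1e-9
print("both lemma inequalities hold on this sample")
```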

3. Main Results

In this section, the state estimation problem of complex-valued inertial neural networks is analyzed, and a sufficient condition for the global asymptotic stability of the error state system (6) is proposed.
Theorem 1.
Suppose Assumptions 1 and 2 hold; the error system (6) is globally asymptotically stable if there exist real positive diagonal matrices $\Lambda_1, \Lambda_2 \in \mathbb{R}^{n \times n}$, $\Lambda_3, \Lambda_4 \in \mathbb{R}^{m \times m}$, positive definite Hermitian matrices $P_1, P_4, P_6, Q_1, Q_2, S_1, S_3, U_1, U_2, W_1, W_2, R \in \mathbb{C}^{n \times n}$, and any matrices $P_2, P_3, P_5, S_2, \tilde{W}_1, \tilde{W}_2, M \in \mathbb{C}^{n \times n}$, $\bar{K}, \bar{H} \in \mathbb{C}^{n \times m}$, such that the following LMIs hold:
$$P = \begin{bmatrix} P_1 & P_2 & P_3 \\ * & P_4 & P_5 \\ * & * & P_6 \end{bmatrix} > 0 \qquad (11)$$
$$S = \begin{bmatrix} S_1 & S_2 \\ * & S_3 \end{bmatrix} > 0 \qquad (12)$$
$$\Theta_1 = \begin{bmatrix} W_1 & \tilde{W}_1 \\ * & W_1 \end{bmatrix} > 0 \qquad (13)$$
$$\Theta_2 = \begin{bmatrix} W_2 & \tilde{W}_2 \\ * & W_2 \end{bmatrix} > 0 \qquad (14)$$
$$\Xi = \begin{bmatrix} \tilde{\Xi}_1 & \tilde{\Xi}_2 \\ * & \tilde{\Xi}_3 \end{bmatrix} < 0 \qquad (15)$$
where
$$\tilde{\Xi}_1 = \begin{bmatrix}
\Delta_{11} & \Delta_{12} & \Delta_{13} & -P_3 - MB & \tilde{W}_1^* & \tilde{W}_2^* & W_1 - \tilde{W}_1^* & W_2 - \tilde{W}_2^* \\
* & \Delta_{22} & \Delta_{23} & -P_5 - MB & 0 & 0 & 0 & 0 \\
* & * & -M - M^* & -MB & 0 & 0 & 0 & 0 \\
* & * & * & -Q_1 & 0 & 0 & 0 & 0 \\
* & * & * & * & -W_1 & 0 & W_1 - \tilde{W}_1 & 0 \\
* & * & * & * & * & -W_2 & 0 & W_2 - \tilde{W}_2 \\
* & * & * & * & * & * & \Delta_{77} & 0 \\
* & * & * & * & * & * & * & \Delta_{88}
\end{bmatrix}$$
$$\tilde{\Xi}_2 = \begin{bmatrix}
P_6 & S_1 + S_2^* & S_2 + S_3 & MC & MD & ME & -\bar{H}G & -\bar{K}G \\
P_3 & 0 & 0 & MC & MD & ME & -\bar{H}G & -\bar{K}G \\
P_5 & 0 & 0 & MC & MD & ME & -\bar{H}G & -\bar{K}G \\
-P_6 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & -S_1 & -S_2 & 0 & 0 & 0 & 0 & 0 \\
0 & -S_2^* & -S_3 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0
\end{bmatrix}$$
$$\tilde{\Xi}_3 = \mathrm{diag}\left(-Q_2,\; -U_1,\; -U_2,\; \beta^2 R - \Lambda_1,\; -\Lambda_2,\; -R,\; -\Lambda_3,\; -\Lambda_4\right)$$
with
$\Delta_{11} = P_3 + P_3^* + Q_1 + \delta^2 Q_2 + \tau_1 U_1 + \tau U_2 - W_1 - W_2 + L\Lambda_1 L + N\Lambda_3 N^T - \bar{H}F - F^*\bar{H}^*$,
$\Delta_{12} = P_1 + P_5^* - MA - \bar{K}F - F^*\bar{H}^*$,
$\Delta_{13} = P_2 - M - F^*\bar{H}^*$,
$\Delta_{22} = P_2 + P_2^* + \tau_1^2 W_1 + \tau^2 W_2 - MA - \bar{K}F - F^*\bar{K}^* - A^T M^* + N\Lambda_4 N^T$,
$\Delta_{23} = P_4 - M - F^*\bar{K}^* - A^T M^*$,
$\Delta_{77} = -2W_1 + \tilde{W}_1 + \tilde{W}_1^*$,
$\Delta_{88} = -2W_2 + \tilde{W}_2 + \tilde{W}_2^* + L\Lambda_2 L$.
In this case, the estimator gain matrices are given by $K = M^{-1}\bar{K}$ and $H = M^{-1}\bar{H}$.
Proof. 
Consider the following Lyapunov functional:
$$V(t) = \sum_{n=1}^{6} V_n(t) \qquad (16)$$
where
$$\begin{aligned}
V_1(t) &= \begin{bmatrix} e(t) \\ \dot{e}(t) \\ \int_{t-\delta}^{t} e(s)\,ds \end{bmatrix}^* \begin{bmatrix} P_1 & P_2 & P_3 \\ * & P_4 & P_5 \\ * & * & P_6 \end{bmatrix} \begin{bmatrix} e(t) \\ \dot{e}(t) \\ \int_{t-\delta}^{t} e(s)\,ds \end{bmatrix} \\
V_2(t) &= \int_{t-\delta}^{t} e^*(s) Q_1 e(s)\,ds + \delta \int_{-\delta}^{0} \int_{t+\epsilon}^{t} e^*(s) Q_2 e(s)\,ds\,d\epsilon \\
V_3(t) &= \begin{bmatrix} \int_{t-\tau_1}^{t} e(s)\,ds \\ \int_{t-\tau}^{t} e(s)\,ds \end{bmatrix}^* \begin{bmatrix} S_1 & S_2 \\ * & S_3 \end{bmatrix} \begin{bmatrix} \int_{t-\tau_1}^{t} e(s)\,ds \\ \int_{t-\tau}^{t} e(s)\,ds \end{bmatrix} \\
V_4(t) &= \int_{-\tau_1}^{0} \int_{t+\epsilon}^{t} e^*(s) U_1 e(s)\,ds\,d\epsilon + \int_{-\tau}^{0} \int_{t+\epsilon}^{t} e^*(s) U_2 e(s)\,ds\,d\epsilon \\
V_5(t) &= \tau_1 \int_{-\tau_1}^{0} \int_{t+\epsilon}^{t} \dot{e}^*(s) W_1 \dot{e}(s)\,ds\,d\epsilon + \tau \int_{-\tau}^{0} \int_{t+\epsilon}^{t} \dot{e}^*(s) W_2 \dot{e}(s)\,ds\,d\epsilon \\
V_6(t) &= \beta \int_{-\beta}^{0} \int_{t+\epsilon}^{t} \tilde{f}^*(e(s)) R \tilde{f}(e(s))\,ds\,d\epsilon. 
\end{aligned} \qquad (17)$$
Taking the derivative of $V(t)$ along the trajectories of (6) and applying Lemma 1, we can obtain
$$\begin{aligned}
\dot{V}_1(t) ={} & e^*(t)(P_3 + P_3^*)e(t) + e^*(t)(P_1 + P_5^*)\dot{e}(t) + e^*(t)P_2\ddot{e}(t) - e^*(t)P_3 e(t-\delta) \\
& + \dot{e}^*(t)(P_1 + P_5)e(t) + \dot{e}^*(t)(P_2 + P_2^*)\dot{e}(t) + \dot{e}^*(t)P_4\ddot{e}(t) - \dot{e}^*(t)P_5 e(t-\delta) \\
& + \ddot{e}^*(t)P_2^* e(t) + \ddot{e}^*(t)P_4\dot{e}(t) - e^*(t-\delta)P_3^* e(t) - e^*(t-\delta)P_5^*\dot{e}(t) \\
& + e^*(t)P_6 \int_{t-\delta}^{t} e(s)\,ds + \dot{e}^*(t)P_3 \int_{t-\delta}^{t} e(s)\,ds + \ddot{e}^*(t)P_5 \int_{t-\delta}^{t} e(s)\,ds \\
& - e^*(t-\delta)P_6 \int_{t-\delta}^{t} e(s)\,ds + \left(\int_{t-\delta}^{t} e(s)\,ds\right)^* P_3^* \dot{e}(t) + \left(\int_{t-\delta}^{t} e(s)\,ds\right)^* P_5^* \ddot{e}(t) \\
& + \left(\int_{t-\delta}^{t} e(s)\,ds\right)^* P_6 e(t) - \left(\int_{t-\delta}^{t} e(s)\,ds\right)^* P_6 e(t-\delta) \qquad (18)
\end{aligned}$$
$$\begin{aligned}
\dot{V}_2(t) ={} & e^*(t)Q_1 e(t) - e^*(t-\delta)Q_1 e(t-\delta) + \delta^2 e^*(t)Q_2 e(t) - \delta \int_{t-\delta}^{t} e^*(s)Q_2 e(s)\,ds \\
\leq{} & e^*(t)(Q_1 + \delta^2 Q_2)e(t) - e^*(t-\delta)Q_1 e(t-\delta) - \left(\int_{t-\delta}^{t} e(s)\,ds\right)^* Q_2 \left(\int_{t-\delta}^{t} e(s)\,ds\right) \qquad (19)
\end{aligned}$$
$$\begin{aligned}
\dot{V}_3(t) ={} & e^*(t)(S_1 + S_2^*)\int_{t-\tau_1}^{t} e(s)\,ds + e^*(t)(S_2 + S_3)\int_{t-\tau}^{t} e(s)\,ds \\
& - e^*(t-\tau_1)S_1 \int_{t-\tau_1}^{t} e(s)\,ds - e^*(t-\tau_1)S_2 \int_{t-\tau}^{t} e(s)\,ds \\
& - e^*(t-\tau)S_2^* \int_{t-\tau_1}^{t} e(s)\,ds - e^*(t-\tau)S_3 \int_{t-\tau}^{t} e(s)\,ds \\
& + \left(\int_{t-\tau_1}^{t} e(s)\,ds\right)^* (S_1 + S_2) e(t) + \left(\int_{t-\tau}^{t} e(s)\,ds\right)^* (S_2^* + S_3) e(t) \\
& - \left(\int_{t-\tau_1}^{t} e(s)\,ds\right)^* S_1 e(t-\tau_1) - \left(\int_{t-\tau_1}^{t} e(s)\,ds\right)^* S_2 e(t-\tau) \\
& - \left(\int_{t-\tau}^{t} e(s)\,ds\right)^* S_2^* e(t-\tau_1) - \left(\int_{t-\tau}^{t} e(s)\,ds\right)^* S_3 e(t-\tau) \qquad (20)
\end{aligned}$$
$$\begin{aligned}
\dot{V}_4(t) ={} & e^*(t)(\tau_1 U_1 + \tau U_2)e(t) - \tau_1 \int_{t-\tau_1}^{t} e^*(s)U_1 e(s)\,ds - \tau \int_{t-\tau}^{t} e^*(s)U_2 e(s)\,ds \\
\leq{} & e^*(t)(\tau_1 U_1 + \tau U_2)e(t) - \left(\int_{t-\tau_1}^{t} e(s)\,ds\right)^* U_1 \left(\int_{t-\tau_1}^{t} e(s)\,ds\right) \\
& - \left(\int_{t-\tau}^{t} e(s)\,ds\right)^* U_2 \left(\int_{t-\tau}^{t} e(s)\,ds\right) \qquad (21)
\end{aligned}$$
$$\dot{V}_5(t) = \dot{e}^*(t)(\tau_1^2 W_1 + \tau^2 W_2)\dot{e}(t) - \tau_1 \int_{t-\tau_1}^{t} \dot{e}^*(s)W_1\dot{e}(s)\,ds - \tau \int_{t-\tau}^{t} \dot{e}^*(s)W_2\dot{e}(s)\,ds \qquad (22)$$
$$\begin{aligned}
\dot{V}_6(t) ={} & \beta^2 \tilde{f}^*(e(t)) R \tilde{f}(e(t)) - \beta \int_{t-\beta}^{t} \tilde{f}^*(e(s)) R \tilde{f}(e(s))\,ds \\
\leq{} & \beta^2 \tilde{f}^*(e(t)) R \tilde{f}(e(t)) - \left(\int_{t-\beta}^{t} \tilde{f}(e(s))\,ds\right)^* R \left(\int_{t-\beta}^{t} \tilde{f}(e(s))\,ds\right). \qquad (23)
\end{aligned}$$
On the basis of Lemmas 1 and 2, we further obtain that
Let $\zeta_1(t) = [e^*(t),\; e^*(t-\tau_1),\; e^*(t-\tau_1(t))]^*$. Then
$$\begin{aligned}
-\tau_1 \int_{t-\tau_1}^{t} \dot{e}^*(s)W_1\dot{e}(s)\,ds ={} & -\tau_1 \int_{t-\tau_1}^{t-\tau_1(t)} \dot{e}^*(s)W_1\dot{e}(s)\,ds - \tau_1 \int_{t-\tau_1(t)}^{t} \dot{e}^*(s)W_1\dot{e}(s)\,ds \\
\leq{} & -\frac{\tau_1}{\tau_1 - \tau_1(t)} \left(\int_{t-\tau_1}^{t-\tau_1(t)} \dot{e}(s)\,ds\right)^* W_1 \left(\int_{t-\tau_1}^{t-\tau_1(t)} \dot{e}(s)\,ds\right) \\
& - \frac{\tau_1}{\tau_1(t)} \left(\int_{t-\tau_1(t)}^{t} \dot{e}(s)\,ds\right)^* W_1 \left(\int_{t-\tau_1(t)}^{t} \dot{e}(s)\,ds\right) \\
\leq{} & -\zeta_1^*(t) \begin{bmatrix} 0 & -I & I \\ I & 0 & -I \end{bmatrix}^* \begin{bmatrix} W_1 & \tilde{W}_1 \\ \tilde{W}_1^* & W_1 \end{bmatrix} \begin{bmatrix} 0 & -I & I \\ I & 0 & -I \end{bmatrix} \zeta_1(t) \\
={} & \zeta_1^*(t) \begin{bmatrix} -W_1 & \tilde{W}_1^* & W_1 - \tilde{W}_1^* \\ * & -W_1 & W_1 - \tilde{W}_1 \\ * & * & -2W_1 + \tilde{W}_1 + \tilde{W}_1^* \end{bmatrix} \zeta_1(t), \qquad (24)
\end{aligned}$$
where the first inequality follows from Lemma 1 and the second from Lemma 2 with $\rho = (\tau_1 - \tau_1(t))/\tau_1$.
Similarly to (24), with $\zeta_2(t) = [e^*(t),\; e^*(t-\tau),\; e^*(t-\tau(t))]^*$ we have
$$-\tau \int_{t-\tau}^{t} \dot{e}^*(s)W_2\dot{e}(s)\,ds \leq \zeta_2^*(t) \begin{bmatrix} -W_2 & \tilde{W}_2^* & W_2 - \tilde{W}_2^* \\ * & -W_2 & W_2 - \tilde{W}_2 \\ * & * & -2W_2 + \tilde{W}_2 + \tilde{W}_2^* \end{bmatrix} \zeta_2(t). \qquad (25)$$
Substituting (24) and (25) into (22), we then have
$$\begin{aligned}
\dot{V}_5(t) \leq{} & \dot{e}^*(t)(\tau_1^2 W_1 + \tau^2 W_2)\dot{e}(t) + \zeta_1^*(t) \begin{bmatrix} -W_1 & \tilde{W}_1^* & W_1 - \tilde{W}_1^* \\ * & -W_1 & W_1 - \tilde{W}_1 \\ * & * & -2W_1 + \tilde{W}_1 + \tilde{W}_1^* \end{bmatrix} \zeta_1(t) \\
& + \zeta_2^*(t) \begin{bmatrix} -W_2 & \tilde{W}_2^* & W_2 - \tilde{W}_2^* \\ * & -W_2 & W_2 - \tilde{W}_2 \\ * & * & -2W_2 + \tilde{W}_2 + \tilde{W}_2^* \end{bmatrix} \zeta_2(t). \qquad (26)
\end{aligned}$$
In addition, by means of Assumptions 1 and 2, we can obtain
$$\begin{aligned}
0 &\leq e^*(t) L \Lambda_1 L e(t) - \tilde{f}^*(e(t)) \Lambda_1 \tilde{f}(e(t)) \\
0 &\leq e^*(t-\tau(t)) L \Lambda_2 L e(t-\tau(t)) - \tilde{f}^*(e(t-\tau(t))) \Lambda_2 \tilde{f}(e(t-\tau(t))) \\
0 &\leq e^*(t) N \Lambda_3 N^T e(t) - \tilde{g}^*(e(t)) \Lambda_3 \tilde{g}(e(t)) \\
0 &\leq \dot{e}^*(t) N \Lambda_4 N^T \dot{e}(t) - \tilde{g}^*(\dot{e}(t)) \Lambda_4 \tilde{g}(\dot{e}(t)). \qquad (27)
\end{aligned}$$
Then, introducing the free-weighting matrix $M$, it follows from (6) that
$$0 = \left[ M^*\big(e(t) + \dot{e}(t) + \ddot{e}(t)\big) \right]^* \mathcal{P}(t) + \mathcal{P}^*(t) \left[ M^*\big(e(t) + \dot{e}(t) + \ddot{e}(t)\big) \right] \qquad (28)$$
where
$$\mathcal{P}(t) = -\ddot{e}(t) - (A + KF)\dot{e}(t) - HFe(t) - Be(t-\delta) + C\tilde{f}(e(t)) + D\tilde{f}(e(t-\tau_1(t)-\tau_2(t))) + E\int_{t-\beta}^{t} \tilde{f}(e(s))\,ds - KG\tilde{g}(\dot{e}(t)) - HG\tilde{g}(e(t)).$$
Noting that $K = M^{-1}\bar{K}$ and $H = M^{-1}\bar{H}$, from (18)–(21), (23), and (26)–(28), we can obtain
$$\dot{V}(t) \leq \xi^*(t)\, \Xi\, \xi(t) \qquad (29)$$
where
$$\begin{aligned}
\xi(t) = \Big[\, & e^*(t),\; \dot{e}^*(t),\; \ddot{e}^*(t),\; e^*(t-\delta),\; e^*(t-\tau_1),\; e^*(t-\tau),\; e^*(t-\tau_1(t)),\; e^*(t-\tau(t)), \\
& \left(\textstyle\int_{t-\delta}^{t} e(s)\,ds\right)^*,\; \left(\textstyle\int_{t-\tau_1}^{t} e(s)\,ds\right)^*,\; \left(\textstyle\int_{t-\tau}^{t} e(s)\,ds\right)^*, \\
& \tilde{f}^*(e(t)),\; \tilde{f}^*(e(t-\tau(t))),\; \left(\textstyle\int_{t-\beta}^{t} \tilde{f}(e(s))\,ds\right)^*,\; \tilde{g}^*(e(t)),\; \tilde{g}^*(\dot{e}(t)) \,\Big]^*.
\end{aligned}$$
Then, according to (15), one has $\dot{V}(t) < 0$ for any $\xi(t) \neq 0$.
Thus, the error system (6) is globally asymptotically stable. The proof is complete. □
Remark 2.
In the process of obtaining the desired result, the original second-order system is treated directly rather than reduced in order. As a result, the constructed Lyapunov functional (16) includes the state derivatives, which is different from the existing ones in [42,43]. This is an important feature of this paper. Moreover, as far as we know, this is the first time that the state estimation of complex-valued inertial neural networks with multiple time delays has been discussed.
Remark 3.
In order to maintain the original characteristics of the system without increasing its complexity, besides adopting the non-reduced-order method, we also apply the nonseparation method; that is, we regard System (1) as a whole in the complex domain when studying the state estimation problem. Thus, our result is more universal and flexible.

4. Numerical Examples

In this section, we give two numerical examples to verify the effectiveness of the above theoretical result.
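Before the examples, it may help to note how complex Hermitian LMIs such as (11)–(15) can be checked numerically: a Hermitian matrix $X$ is positive definite if and only if its real embedding $\begin{bmatrix} \mathrm{Re}\,X & -\mathrm{Im}\,X \\ \mathrm{Im}\,X & \mathrm{Re}\,X \end{bmatrix}$ is positive definite, which turns each complex condition into an ordinary real one. The sketch below is our own illustration of this standard fact, not the paper's MATLAB LMI toolbox code.

```python
import numpy as np

# Verify candidate solutions of Hermitian LMIs such as (11)-(14): X > 0 in
# C^{n x n} iff its real embedding is positive definite in R^{2n x 2n}.
def real_embedding(X):
    return np.block([[X.real, -X.imag], [X.imag, X.real]])

def is_pos_def(X, tol=1e-9):
    # X is assumed Hermitian, so the real embedding is symmetric and
    # eigvalsh applies; each eigenvalue of X appears twice in the embedding.
    return np.linalg.eigvalsh(real_embedding(X)).min() > tol

# usage with the block matrix P of condition (11), given blocks P1, ..., P6:
# P = np.block([[P1, P2, P3],
#               [P2.conj().T, P4, P5],
#               [P3.conj().T, P5.conj().T, P6]])
# print(is_pos_def(P))
```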
Example 1.
Consider System (4) with the following parameters:
$$A = \begin{bmatrix} 0.8 & 0 \\ 0 & 2.8 \end{bmatrix}, \quad B = \begin{bmatrix} 0.9 & 0 \\ 0 & 1.1 \end{bmatrix}, \quad C = \begin{bmatrix} 0.5 + i & 0.3 + 1.2i \\ 0.3 - 0.2i & 1 - 0.2i \end{bmatrix},$$
$$D = \begin{bmatrix} 0.3 + 0.7i & 1.1 + 0.5i \\ 0.3 + 0.2i & 0.5 + 0.5i \end{bmatrix}, \quad E = \begin{bmatrix} 0.2 + 0.8i & 0.5 - 0.8i \\ 0.3 + 0.2i & 0.2 - 0.8i \end{bmatrix}.$$
In addition, we select the activation functions as $f_q(u_q(t)) = \frac{1 - e^{-u_q}}{1 + e^{-u_q}}$ $(q = 1, 2)$, $\delta = 0.1$, $\tau_1(t) = 0.2\sin^2(t)$, $\tau_2(t) = 0.5\cos^2(t)$, and $\beta = 0.1$, which means that $\tau_1 = 0.2$ and $\tau_2 = 0.5$. It is easily verified that Assumptions 1 and 2 are satisfied with $L = \mathrm{diag}(0.5, 0.5)$ (the derivative of this activation is bounded by $1/2$; a quick numerical check appears below) and
$$N = \begin{bmatrix} 0 & 0.1 \\ 0.2 & 0 \end{bmatrix}.$$
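The following one-off check (an illustration we added, sampling real arguments only, whereas Assumption 1 is stated over complex arguments) supports the Lipschitz constant $0.5$ used in $L$:

```python
import numpy as np

# Finite-difference estimate of the Lipschitz constant of
# f(x) = (1 - e^{-x}) / (1 + e^{-x}) on a real grid (illustrative only).
x = np.linspace(-10, 10, 2001)
f = (1 - np.exp(-x)) / (1 + np.exp(-x))
print(np.abs(np.diff(f) / np.diff(x)).max())   # approx 0.5 = l_q
```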
Moreover, choose the initial values as $\eta(s) = [4.7 - 3i,\; 9 + 8.9i]^T$, $\tilde{\eta}(s) = [6 + 4.9i,\; 2.2 + 2.1i]^T$, $\rho(s) = [2.1 + 7.2i,\; 2.9 + 5.1i]^T$, and $\tilde{\rho}(s) = [1.3 + 3.5i,\; 6.6 - 3.2i]^T$ for $s \in [-0.7, 0]$. We select the output weighting matrices and neuron-dependent nonlinear disturbance signals as
$$F = \begin{bmatrix} 0.2 + 0.5i & 0.2 - 0.4i \\ 0.1 - 0.2i & 0.4 - 0.3i \end{bmatrix}, \quad G = \begin{bmatrix} 0.5 - 0.2i & 0.2 - 0.2i \\ 0.3 + 0.1i & 0.3 + 0.1i \end{bmatrix},$$
and $g_q(u_q(t)) = u_q$ $(q = 1, 2)$.
Then, by means of the MATLAB LMI toolbox, feasible solutions of (11)–(15) can be obtained, such as $\Lambda_1 = \mathrm{diag}(0.9615, 0.9615)$, $\Lambda_2 = \mathrm{diag}(0.6049, 0.6049)$, $\Lambda_3 = \mathrm{diag}(6.4005, 6.4005)$, $\Lambda_4 = \mathrm{diag}(10.9396, 10.9396)$, and
$$P_1 = \begin{bmatrix} 1.3341 + 0.0000i & 0.0015 + 0.0015i \\ 0.0015 - 0.0015i & 4.0161 + 0.0000i \end{bmatrix}, \quad
P_2 = \begin{bmatrix} 0.4473 - 0.0006i & 0.0008 + 0.0156i \\ 0.0008 + 0.0156i & 1.2160 + 0.0236i \end{bmatrix},$$
$$P_3 = \begin{bmatrix} 0.3097 - 0.0029i & 0.0226 - 0.0176i \\ 0.0226 - 0.0176i & 0.8992 - 0.0027i \end{bmatrix}, \quad
P_4 = \begin{bmatrix} 0.8317 + 0.0000i & 0.0093 + 0.0156i \\ 0.0093 - 0.0156i & 2.4997 + 0.0000i \end{bmatrix},$$
$$P_5 = \begin{bmatrix} 0.1311 - 0.0024i & 0.0243 - 0.0361i \\ 0.0243 - 0.0361i & 0.4271 - 0.0211i \end{bmatrix}, \quad
P_6 = \begin{bmatrix} 0.6252 + 0.0000i & 0.0058 - 0.0029i \\ 0.0058 + 0.0029i & 1.3612 + 0.0000i \end{bmatrix},$$
$$S_1 = \begin{bmatrix} 1.0883 + 0.0000i & 0.0019 + 0.8317i \\ 0.0019 - 0.8317i & 3.0778 + 0.0000i \end{bmatrix}, \quad
S_2 = \begin{bmatrix} 0.0013 - 0.0018i & 0.0015 - 0.0027i \\ 0.0015 - 0.0027i & 0.0330 + 0.0213i \end{bmatrix},$$
$$S_3 = \begin{bmatrix} 0.1133 + 0.0000i & 0.0005 - 0.0243i \\ 0.0005 + 0.0243i & 0.4461 + 0.0000i \end{bmatrix}, \quad
Q_1 = \begin{bmatrix} 0.3635 + 0.0000i & 0.0186 - 0.0361i \\ 0.0186 + 0.0361i & 1.1520 + 0.0000i \end{bmatrix},$$
$$Q_2 = \begin{bmatrix} 3.9982 + 0.0000i & 0.1264 + 0.0058i \\ 0.1264 - 0.0058i & 5.3500 + 0.0000i \end{bmatrix}, \quad
W_1 = \begin{bmatrix} 3.1818 + 0.0000i & 0.0089 + 0.0330i \\ 0.0089 - 0.0330i & 4.8098 + 0.0000i \end{bmatrix},$$
$$W_2 = \begin{bmatrix} 0.7340 + 0.0000i & 0.0039 + 0.0213i \\ 0.0039 - 0.0213i & 1.8923 + 0.0000i \end{bmatrix}, \quad
R = \begin{bmatrix} 3.3062 + 0.0000i & 0.4961 + 0.4461i \\ 0.4961 - 0.4461i & 3.8736 + 0.0000i \end{bmatrix},$$
$$\tilde{W}_1 = \begin{bmatrix} 0.9734 + 0.0002i & 0.0027 + 0.2611i \\ 0.0059 - 0.2518i & 1.7210 + 0.0003i \end{bmatrix}, \quad
\tilde{W}_2 = \begin{bmatrix} 0.0745 + 0.0001i & 0.0009 + 0.0266i \\ 0.0011 - 0.0380i & 0.3236 - 0.0001i \end{bmatrix},$$
$$M = \begin{bmatrix} 0.1121 + 0.0007i & 0.0321 - 0.0026i \\ 0.0327 + 0.0113i & 0.3273 + 0.0046i \end{bmatrix}, \quad
\bar{K} = \begin{bmatrix} 0.1284 - 1.3291i & 1.0282 + 0.0685i \\ 0.6352 + 1.0851i & 1.7107 + 2.4569i \end{bmatrix},$$
$$\bar{H} = \begin{bmatrix} 0.0382 - 0.7650i & 0.6966 + 0.0670i \\ 0.4376 + 0.7508i & 1.2550 + 1.7039i \end{bmatrix}.$$
Consequently, it can be obtained from $K = M^{-1}\bar{K}$ and $H = M^{-1}\bar{H}$ that
$$K = \begin{bmatrix} 1.8353 - 13.1449i & 7.7034 - 1.4274i \\ 1.7354 + 4.6670i & 4.5110 + 7.3204i \end{bmatrix}, \quad
H = \begin{bmatrix} 0.8037 - 7.6728i & 5.1293 - 0.8042i \\ 1.1953 + 3.0710i & 3.3657 + 5.0622i \end{bmatrix}.$$
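The gain computation itself is a single linear solve. The snippet below is a small cross-check we added, using the Example 1 values of $M$, $\bar{K}$, and $\bar{H}$ as transcribed above (treat the numbers as illustrative):

```python
import numpy as np

# Recover the estimator gains K = M^{-1} K_bar and H = M^{-1} H_bar
# from the LMI solution of Example 1 (values transcribed from the text).
M = np.array([[0.1121 + 0.0007j, 0.0321 - 0.0026j],
              [0.0327 + 0.0113j, 0.3273 + 0.0046j]])
K_bar = np.array([[0.1284 - 1.3291j, 1.0282 + 0.0685j],
                  [0.6352 + 1.0851j, 1.7107 + 2.4569j]])
H_bar = np.array([[0.0382 - 0.7650j, 0.6966 + 0.0670j],
                  [0.4376 + 0.7508j, 1.2550 + 1.7039j]])
K = np.linalg.solve(M, K_bar)   # solves M @ K = K_bar, i.e. K = M^{-1} K_bar
H = np.linalg.solve(M, H_bar)   # solves M @ H = H_bar
```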
The simulation results are shown in Figure 2 and Figure 3. Figure 2 plots the trajectories of the real and imaginary parts of the true state $u(t)$ of System (4) and of the estimated state $v(t)$; Figure 3 shows the state trajectory of the error system (6). Clearly, the simulation results agree with our theoretical analysis.
Example 2.
Consider System (4) with the following parameters:
$$A = \begin{bmatrix} 0.7 & 0 \\ 0 & 2.6 \end{bmatrix}, \quad B = \begin{bmatrix} 0.3 & 0 \\ 0 & 0.6 \end{bmatrix}, \quad C = \begin{bmatrix} 0.5 - 0.2i & 0.3 - 0.4i \\ 0.3 + 0.2i & 1 - 0.2i \end{bmatrix},$$
$$D = \begin{bmatrix} 0.3 + 0.2i & 1.1 + 0.3i \\ 0.3 - 0.2i & 0.5 + 0.2i \end{bmatrix}, \quad E = \begin{bmatrix} 0.3 + 0.8i & 0.8 - 0.8i \\ 0.5 + 0.2i & 1 - 0.8i \end{bmatrix}.$$
In addition, we select the same activation functions $f_q(u_q(t))$ as in Example 1, $\delta = 0.1$, $\tau_1(t) = 0.1|\sin(t)|$, $\tau_2(t) = 0.2|\cos(3t)|$, and $\beta = 0.1$, which means that $\tau_1 = 0.1$ and $\tau_2 = 0.2$. It is easily verified that Assumptions 1 and 2 are satisfied with $L = \mathrm{diag}(0.5, 0.5)$ and
$$N = \begin{bmatrix} 0 & 0.1 \\ 0.2 & 0 \end{bmatrix}.$$
Moreover, choose the initial values as $\eta(s) = [5.6 - 1.1i,\; 1.9 + 1.2i]^T$, $\tilde{\eta}(s) = [0.2 + 0.3i,\; 0.3 + 0.9i]^T$, $\rho(s) = [0.2 + 0.6i,\; 1.3 + 0.6i]^T$, and $\tilde{\rho}(s) = [2.4 - 1.1i,\; 1.6 - 1.1i]^T$ for $s \in [-0.3, 0]$. We select the output weighting matrices and neuron-dependent nonlinear disturbance signals as
$$F = \begin{bmatrix} 0.3 + 0.5i & 0.2 - 0.4i \\ 0.1 - 0.2i & 0.5 - 0.3i \end{bmatrix}, \quad G = \begin{bmatrix} 0.5 - 0.2i & 0.2 - 0.2i \\ 0.3 + 0.1i & 0.3 + 0.1i \end{bmatrix},$$
and $g_q(u_q(t)) = u_q$ $(q = 1, 2)$.
Then, by means of the MATLAB LMI toolbox, feasible solutions of (11)–(15) can be obtained, such as $\Lambda_1 = \mathrm{diag}(64.5684, 64.5684)$, $\Lambda_2 = \mathrm{diag}(55.5390, 55.5390)$, $\Lambda_3 = \mathrm{diag}(223.3662, 223.3662)$, $\Lambda_4 = \mathrm{diag}(227.1920, 227.1920)$, and
$$P_1 = 10^2 \times \begin{bmatrix} 0.6949 + 0.0000i & 0.0128 + 0.0128i \\ 0.0128 - 0.0128i & 1.6288 + 0.0000i \end{bmatrix}, \quad
P_2 = \begin{bmatrix} 23.8813 + 0.1538i & 1.5234 + 0.0078i \\ 1.5234 + 0.0078i & 50.3531 - 1.6239i \end{bmatrix},$$
$$P_3 = \begin{bmatrix} 14.7464 - 0.6364i & 0.8368 - 1.1136i \\ 0.8368 - 1.1136i & 30.9705 + 0.5937i \end{bmatrix}, \quad
P_4 = 10^2 \times \begin{bmatrix} 0.4104 + 0.0000i & 0.0286 + 0.0001i \\ 0.0286 - 0.0001i & 1.0553 + 0.0000i \end{bmatrix},$$
$$P_5 = \begin{bmatrix} 8.1536 - 0.7565i & 0.0348 - 1.4125i \\ 0.0348 - 1.4125i & 16.2009 + 1.4115i \end{bmatrix}, \quad
P_6 = \begin{bmatrix} 31.3945 + 0.0000i & 1.2803 - 0.6364i \\ 1.2803 + 0.6364i & 49.7008 + 0.0000i \end{bmatrix},$$
$$S_1 = 10^2 \times \begin{bmatrix} 0.7587 + 0.0000i & 0.0323 + 0.4104i \\ 0.0323 - 0.4104i & 1.2279 + 0.0000i \end{bmatrix}, \quad
S_2 = \begin{bmatrix} 0.0192 - 0.0506i & 0.1381 - 0.3017i \\ 0.1381 - 0.3017i & 2.5451 + 0.5451i \end{bmatrix},$$
$$S_3 = \begin{bmatrix} 19.7653 + 0.0000i & 1.1483 + 0.0348i \\ 1.1483 - 0.0348i & 29.9047 + 0.0000i \end{bmatrix}, \quad
Q_1 = \begin{bmatrix} 17.5145 + 0.0000i & 0.2291 - 1.4125i \\ 0.2291 + 1.4125i & 42.3430 + 0.0000i \end{bmatrix},$$
$$Q_2 = 10^2 \times \begin{bmatrix} 1.6937 + 0.0000i & 0.0149 - 0.0128i \\ 0.0149 + 0.0128i & 1.8482 + 0.0000i \end{bmatrix}, \quad
W_1 = 10^2 \times \begin{bmatrix} 1.3501 + 0.0000i & 0.0060 + 0.0255i \\ 0.0060 - 0.0255i & 1.4794 + 0.0000i \end{bmatrix},$$
$$W_2 = \begin{bmatrix} 93.4446 + 0.0000i & 1.0153 + 0.5451i \\ 1.0153 - 0.5451i & 96.3752 + 0.0000i \end{bmatrix}, \quad
R = 10^2 \times \begin{bmatrix} 1.3938 + 0.0000i & 0.0229 + 0.2990i \\ 0.0229 - 0.2990i & 1.6984 + 0.0000i \end{bmatrix},$$
$$\tilde{W}_1 = \begin{bmatrix} 30.7114 - 47.2040i & 19.3940 + 27.8159i \\ 5.8570 + 24.8126i & 61.2309 + 33.2133i \end{bmatrix}, \quad
\tilde{W}_2 = \begin{bmatrix} 18.5470 - 41.1490i & 1.4340 + 39.2840i \\ 1.0151 + 26.5256i & 75.1281 + 35.4684i \end{bmatrix},$$
$$M = \begin{bmatrix} 14.5224 + 1.0902i & 1.8188 + 5.9497i \\ 0.3254 - 4.5011i & 22.3077 - 0.6569i \end{bmatrix}, \quad
\bar{K} = \begin{bmatrix} 39.3566 + 0.0305i & 0.2758 + 8.9574i \\ 0.2483 - 8.4581i & 45.8361 - 0.0401i \end{bmatrix},$$
$$\bar{H} = \begin{bmatrix} 17.4796 + 0.0089i & 0.3011 + 1.2179i \\ 0.3341 - 1.3773i & 17.7157 + 0.0188i \end{bmatrix}.$$
Consequently, it can be obtained from $K = M^{-1}\bar{K}$ and $H = M^{-1}\bar{H}$ that
$$K = \begin{bmatrix} 2.6277 - 3.6505i & 2.5831 + 0.9480i \\ 0.9898 + 1.6184i & 2.5294 + 2.0984i \end{bmatrix}, \quad
H = \begin{bmatrix} 1.7394 - 2.9958i & 1.4102 + 1.6175i \\ 0.5398 + 1.5122i & 3.0035 + 1.9865i \end{bmatrix}.$$
The simulation results are shown in Figure 4 and Figure 5. Figure 4 plots the trajectories of the real and imaginary parts of the true state $u(t)$ of System (4) and of the estimated state $v(t)$; Figure 5 shows the state trajectory of the error system (6). Clearly, the simulation results agree with our theoretical analysis.

5. Conclusions

In this paper, the state estimation for complex-valued inertial neural networks with multiple time delays was studied. By constructing a suitable Lyapunov–Krasovskii functional and using some inequalities such as the Jensen inequality and the reciprocally convex inequality, a delay-dependent criterion based on linear matrix inequalities (LMIs) was obtained, which ensures the global asymptotic stability of the error system and that the network state can be estimated by observing the output measurements. Finally, two examples were given to verify the effectiveness of the proposed method. In the future, for various types of complex-valued neural network models with multiple delays, we will consider some synchronization phenomena, such as quasi-projective synchronization, polynomial synchronization, and so on.

Author Contributions

All authors contributed equally to the writing of the manuscript. All authors have read and approved the final manuscript.

Funding

This work was supported in part by the National Natural Science Foundation of China (62173214), in part by the Natural Science Foundation of Shandong Province of China (ZR2021MF100), in part by the Research Fund for the Taishan Scholar Project of Shandong Province of China, in part by the Science and Technology Support Plan for Youth Innovation of Colleges and Universities of Shandong Province of China (2019KJI005), and in part by the SDUST Research Fund.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Sharma, S.; Lingras, P.; Xu, F.; Kilburn, P. Application of neural networks to estimate AADT on low-volume roads. J. Transp. Eng. 2001, 127, 426–432. [Google Scholar] [CrossRef]
  2. Balla, K.; Sevilla, R.; Hassan, O.; Morgan, K. An application of neural networks to the prediction of aerodynamic coefficients of aerofoils and wings. Appl. Math. Model. 2021, 96, 456–479. [Google Scholar] [CrossRef]
  3. Jia, Q.; Mwanandiye, E.S.; Tang, W.K.S. Master-slave synchronization of delayed neural networks with time-varying control. IEEE Trans. Neural Netw. Learn. Syst. 2021, 32, 2292–2298. [Google Scholar] [CrossRef]
  4. Wang, X.; Cao, J.D.; Wang, J.T.; Qi, J. A novel fixed-time stability strategy and its application to fixed-time synchronization control of semi-Markov jump delayed neural networks. Neurocomputing 2021, 452, 284–293. [Google Scholar] [CrossRef]
  5. Shi, C.Y.; Hoi, K.; Vong, S. Free-weighting-matrix inequality for exponential stability for neural networks with time-varying delay. Neurocomputing 2021, 466, 221–228. [Google Scholar] [CrossRef]
  6. Yang, B.; Hao, M.N.; Cao, J.J.; Zhao, X. Delay-dependent global exponential stability for neural networks with time-varying delay. Neurocomputing 2019, 338, 172–180. [Google Scholar] [CrossRef]
  7. Babcock, K.L.; Westervelt, R.M. Dynamics of simple electronic neural networks. Phys. D Nonlinear Phenom. 1987, 28, 305–316. [Google Scholar] [CrossRef]
  8. Angelaki, D.; Correia, M. Models of membrane resonance in pigeon semicircular canal type II hair cells. Biol. Cybern. 1991, 65, 1–10. [Google Scholar] [CrossRef]
  9. Wang, J.F.; Tian, L.X. Global Lagrange stability for inertial neural networks with mixed time-varying delays. Neurocomputing 2017, 235, 140–146. [Google Scholar] [CrossRef]
  10. Chen, D.D.; Kong, F.C. Delay-dependent criteria for global exponential stability of time-varying delayed fuzzy inertial neural networks. Neural Comput. Appl. 2021, 53, 49–68. [Google Scholar] [CrossRef]
  11. Shi, J.C.; Zeng, Z.G. Global exponential stabilization and lag synchronization control of inertial neural networks with time delays. Neural Netw. 2020, 126, 11–20. [Google Scholar] [CrossRef] [PubMed]
  12. Wang, Z.D.; Ho, D.W.C.; Liu, X.H. State estimation for delayed neural networks. IEEE Trans. Neural Netw. 2005, 16, 279–284. [Google Scholar] [CrossRef] [PubMed]
  13. Syed, A.M.; Saravanan, S.; Zhu, Q.X. Non-fragile finite-time $H_\infty$ state estimation of neural networks with distributed time-varying delay. J. Frankl. Inst. 2017, 354, 7566–7584. [Google Scholar] [CrossRef]
  14. Ren, J.; Liu, X.; Zhu, H.; Zhong, S.; Shi, K. State estimation of neural networks with two Markovian jumping parameters and multiple time delays. J. Frankl. Inst. 2017, 354, 812–833. [Google Scholar] [CrossRef]
  15. Tan, G.Q.; Wang, J.D.; Wang, Z.S. A new result on $L_2$–$L_\infty$ performance state estimation of neural networks with time-varying delay. Neurocomputing 2020, 398, 166–171. [Google Scholar] [CrossRef]
  16. Tan, G.Q.; Wang, J.D.; Wang, Z.S. Extended dissipativity state estimation for generalized neural networks with time-varying delay via delay-product-type functionals and integral inequality. Neurocomputing 2021, 455, 78–87. [Google Scholar] [CrossRef]
  17. Sun, L.; Su, L.; Wang, J. Non-fragile dissipative state estimation for semi-Markov jump inertial neural networks with reaction-diffusion. Appl. Math. Comput. 2021, 411, 126404. [Google Scholar] [CrossRef]
  18. Wang, J.; Hu, X.H.; Cao, J.D.; Park, J.H.; Shen, H. $H_\infty$ state estimation for switched inertial neural networks with time-varying delays: A persistent dwell-time scheme. IEEE Trans. Neural Netw. Learn. Syst. 2021, 228, 1–11. [Google Scholar] [CrossRef]
  19. Gong, W.Q.; Liang, J.L.; Kan, X.; Nie, X. Robust state estimation for delayed complex-valued neural networks. Neural Process. Lett. 2017, 46, 1009–1029. [Google Scholar] [CrossRef]
  20. Zhang, Z.Q.; Zheng, T. Global asymptotic stability of periodic solutions for delayed complex-valued Cohen–Grossberg neural networks by combining coincidence degree theory with LMI method. Neurocomputing 2018, 289, 220–230. [Google Scholar] [CrossRef]
  21. Liang, J.; Li, K.L.; Song, Q.K.; Zhao, Z.; Liu, Y.; Alsaadi, F.E. State estimation of complex-valued neural networks with two additive time-varying delays. Neurocomputing 2018, 309, 54–61. [Google Scholar] [CrossRef]
  22. Gong, W.Q.; Liang, J.L.; Kan, X.; Wang, L.; Dobaie, A.M. Robust state estimation for stochastic complex-valued neural networks with sampled-data. Neural Comput. Appl. 2019, 31, 523–542. [Google Scholar] [CrossRef]
  23. Wan, P.; Jian, J.G. Global Mittag-Leffler boundedness for fractional-order complex-valued Cohen–Grossberg neural networks. Neural Process. Lett. 2019, 49, 121–139. [Google Scholar] [CrossRef]
  24. Wang, S.Z.; Zhang, Z.Y.; Lin, C.; Chen, J. Fixed-time synchronization for complex-valued BAM neural networks with time-varying delays via pinning control and adaptive pinning control. Chaos Soliton. Fractals 2021, 153, 111583. [Google Scholar] [CrossRef]
  25. Gunasekaran, N.; Zhai, G.S. Sampled-data state-estimation of delayed complex-valued neural networks. Int. J. Syst. Sci 2020, 51, 303–312. [Google Scholar] [CrossRef]
  26. Hu, B.X.; Song, Q.K.; Zhao, Z.Q. Robust state estimation for fractional-order complex-valued delayed neural networks with interval parameter uncertainties: LMI approach. Appl. Math. Comput. 2020, 373, 1–12. [Google Scholar] [CrossRef]
  27. Aouiti, C.; Bessifi, M. Sliding mode control for finite-time and fixed-time synchronization of delayed complex-valued recurrent neural networks with discontinuous activation functions and nonidentical parameters. Eur. J. Control 2021, 59, 109–122. [Google Scholar] [CrossRef]
  28. Kumar, A.; Das, S.; Yadav, V.K. Global quasi-synchronization of complex-valued recurrent neural networks with time-varying delay and interaction terms. Chaos Soliton. Frac. 2021, 152, 111323. [Google Scholar] [CrossRef]
  29. Ding, Z.X.; Zhang, H.; Zeng, Z.; Yang, L.; Li, S. Global dissipativity and quasi-Mittag-Leffler synchronization of fractional-order discontinuous complex-valued neural networks. IEEE Trans. Neural Netw. Learn. Syst. 2021, 1–14. [Google Scholar] [CrossRef]
  30. Liu, X.; Yu, Y.G. Synchronization analysis for discrete fractional-order complex-valued neural networks with time delays. Neural Comput. Appl. 2021, 33, 10503–10514. [Google Scholar] [CrossRef]
  31. Zhang, Z.Y.; Guo, R.N.; Liu, X.P.; Zhong, M.; Lin, C.; Chen, B. Fixed-time synchronization for complex-valued BAM neural networks with time delays. Asian J. Control 2021, 23, 298–314. [Google Scholar] [CrossRef]
  32. Qiu, B.; Liao, X.F.; Zhou, B. State estimation for complex-valued neural networks with time-varying delays. In Proceedings of the Sixth International Conference on Intelligent Control and Information Processing (ICICIP), Wuhan, China, 26–28 November 2015; pp. 531–536. [Google Scholar]
  33. Guo, R.N.; Lu, J.W.; Li, Y.M.; Lv, W. Fixed-time synchronization of inertial complex-valued neural networks with time delays. Nonlinear Dyn. 2021, 105, 1643–1656. [Google Scholar] [CrossRef]
  34. Tang, Q.; Jian, J.G. Global exponential convergence for impulsive inertial complex-valued neural networks with time-varying delays. Math. Comput. Simul. 2019, 159, 39–56. [Google Scholar] [CrossRef]
  35. Li, X.F.; Huang, T.W. Adaptive synchronization for fuzzy inertial complex-valued neural networks with state-dependent coefficients and mixed delays. Fuzzy Sets Syst. 2021, 411, 174–189. [Google Scholar] [CrossRef]
  36. Yu, J.; Hu, C.; Jiang, H.J.; Wang, L. Exponential and adaptive synchronization of inertial complex-valued neural networks: A non-reduced order and non-separation approach. Neural Netw. 2021, 124, 50–59. [Google Scholar] [CrossRef]
  37. Yu, Y.N.; Zhang, Z.Y.; Zhong, M.Y.; Wang, Z. Pinning synchronization and adaptive synchronization of complex-valued inertial neural networks with time-varying delays in fixed-time interval. J. Frankl. Inst. 2022, 359, 1434–1456. [Google Scholar] [CrossRef]
  38. Guo, R.N.; Xu, S.Y.; Ma, Q.; Zhang, Z. Fixed-time synchronization of complex-valued inertial neural networks via nonreduced-order method. IEEE Syst. J. 2021, 1–9. [Google Scholar] [CrossRef]
  39. Long, C.Q.; Zhang, G.D.; Zeng, Z.G.; Hu, J. Finite-time stabilization of complex-valued neural networks with proportional delays and inertial terms: A non-separation approach. Neural Netw. 2022, 148, 86–95. [Google Scholar] [CrossRef]
  40. Du, F.F.; Lu, J.G. New approach to finite-time stability for fractional-order BAM neural networks with discrete and distributed delays. Chaos Soliton. Frac. 2021, 151, 111225. [Google Scholar] [CrossRef]
  41. Nagamani, G.; Rajan, G.S.; Zhu, Q.X. Exponential state estimation for memristor-based discrete-time BAM neural networks with additive delay components. IEEE Trans. Cybern. 2020, 50, 4281–4292. [Google Scholar] [CrossRef]
  42. Chen, X.F.; Song, Q.K. State estimation for quaternion-valued neural networks with multiple time delays. IEEE Trans. Syst. Man Cybern. Syst. 2019, 49, 2278–2287. [Google Scholar] [CrossRef]
  43. Liu, L.B.; Chen, X.F. State estimation of quaternion-valued neural networks with leakage time delay and mixed two additive time-varying delays. Neural Process. Lett. 2020, 51, 2155–2178. [Google Scholar] [CrossRef]
  44. Li, X.F.; Fang, J.A.; Huang, T.W. Event-triggered exponential stabilization for state-based switched inertial complex-valued neural networks with multiple delays. IEEE Trans. Cybern. 2020. [Google Scholar] [CrossRef] [PubMed]
  45. Wei, X.F.; Zhang, Z.Y.; Lin, C.; Chen, J. Synchronization and anti-synchronization for complex-valued inertial neural networks with time-varying delays. Appl. Math. Comput. 2021, 403, 126194. [Google Scholar] [CrossRef]
Figure 1. Flow chart of the estimator.
Figure 2. The curves of the state $u(t)$ and its estimate $v(t)$ in Example 1.
Figure 3. The dynamics of the error system $e(t)$ in Example 1.
Figure 4. The curves of the state $u(t)$ and its estimate $v(t)$ in Example 2.
Figure 5. The dynamics of the error system $e(t)$ in Example 2.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
