Article

Global Exponential Stability of Fractional Order Complex-Valued Neural Networks with Leakage Delay and Mixed Time Varying Delays

1 Department of Mathematics, Thiruvalluvar University, Vellore 632115, Tamil Nadu, India
2 Department of Mathematics, Faculty of Science, University of Tabuk, P.O. Box 741, Tabuk 71491, Saudi Arabia
3 Department of Information Technology, College of Computers and Information Technology, Taif University, P.O. Box 11099, Taif 21944, Saudi Arabia
4 Computational Intelligence Laboratory, Toyota Technological Institute, Nagoya 468-8511, Japan
5 Department of Mathematics, Faculty of Science and Technology, Phuket Rajabhat University, Phuket 83000, Thailand
* Author to whom correspondence should be addressed.
Fractal Fract. 2022, 6(3), 140; https://doi.org/10.3390/fractalfract6030140
Submission received: 22 November 2021 / Revised: 8 February 2022 / Accepted: 16 February 2022 / Published: 2 March 2022
(This article belongs to the Special Issue Frontiers in Fractional-Order Neural Networks)

Abstract: This paper investigates the global exponential stability of fractional-order complex-valued neural networks with leakage delay and mixed time-varying delays. By constructing a proper Lyapunov functional, we establish sufficient conditions that ensure global exponential stability of the fractional-order complex-valued neural networks. The stability conditions are expressed in terms of linear matrix inequalities (LMIs). Finally, two numerical examples are given to illustrate the effectiveness of the obtained results.

1. Introduction

Over the past decades, fractional-order systems have become an active research field, and stability is the primary requirement for any such system. Fractional calculus [1,2,3,4,5,6,7,8,9,10,11], an extension of classical integer-order calculus, was first discussed by Leibniz and L'Hospital around 300 years ago, and for a long period its development was very slow. Compared with integer-order models, fractional-order models offer a more accurate instrument for describing the memory and hereditary properties of many processes. In parallel with these developments, neural networks have emerged as a valuable modeling tool within fractional calculus, and several researchers have introduced fractional-order derivatives into neural networks. Fractional-order neural networks can replicate neurons in the brain and describe the dynamical characteristics of real network systems precisely. They have a wide range of applications, including parameter estimation in statistical theory, the description of quantum motion in physics, and secure network communications [12,13,14,15,16,17,18].
In recent years, neural networks have been extensively investigated because of their wide applications in pattern recognition, associative memories, automatic control, optimization, image processing and other areas. The stability of various classes of neural network models, such as Hopfield neural networks, Cohen–Grossberg neural networks, cellular neural networks, and bidirectional associative memory neural networks, has been studied intensively, and global exponential stability has been established for several kinds of neural networks through different methods.
Over the previous decades, complex-valued neural networks have become an important research topic. The stability and synchronization analysis of fractional-order systems is difficult, since calculating the fractional-order derivative of a Lyapunov functional is complicated; stability analysis methods for integer-order systems, such as the Lyapunov functional method, cannot be easily generalized to fractional-order systems. Complex-valued neural networks have been applied to electromagnetic waves, light waves, quantum waves and so on. As far as we know, there are only a few results on complex-valued neural networks, although important and interesting results have been proposed [19,20,21]. For instance, in [22,23] the authors dealt with the existence, uniqueness and global stability of the equilibrium point of fractional-order complex-valued neural networks with time delays, while in [24,25] the synchronization of such networks is investigated.
In reality, time delays inevitably exist in various systems. Time delay is an inherent property of a neural network, caused by the finite speed of signal propagation and the information processing time among neurons. Although time delays arise frequently in practical applications, it is difficult to measure them precisely, and in most situations the delays are variable and even unbounded, so it is necessary to study complex-valued neural networks with mixed time delays. Some researchers have pointed out that a representative time delay, called leakage delay, may exist in the negative feedback term of a neural system and greatly affect its dynamics; indeed, the leakage delay extensively affects the dynamical behavior of such systems [26,27,28,29,30,31,32]. So far, only a few studies have considered fractional-order complex-valued neural networks with leakage delay [33,34,35].
Motivated by the above discussions, in the present work we focus on the stability of fractional-order complex-valued neural networks with leakage and mixed time-varying delays. The purpose of this paper is to establish criteria which guarantee the global exponential stability of complex-valued neural networks with a given convergence rate by using Lyapunov function techniques. The main contributions can be listed as follows:
(i)
In this paper, by making use of a delay differential inequality, we present a new sufficient condition which guarantees global exponential stability of the unique equilibrium point of complex valued neural networks with time-varying delays.
(ii)
It is difficult to calculate fractional-order derivatives of Lyapunov functionals. In order to overcome the difficulty, we constructed a suitable functional including fractional derivative terms, and calculated its derivative to derive the stability conditions.
(iii)
By using a Lyapunov functional, we obtain sufficient conditions for the global exponential stability of neural networks with delays in terms of LMIs, which can be easily solved with the MATLAB LMI control toolbox.
(iv)
Numerical examples are given to illustrate the effectiveness of the derived methods.

2. Preliminaries

Definition 1
([36]). The Caputo fractional derivative of order α for a function h ( t ) is defined as follows:
$$D^{\alpha}h(t)=\frac{1}{\Gamma(n-\alpha)}\int_{0}^{t}(t-s)^{n-\alpha-1}h^{(n)}(s)\,ds,$$
where $n$ is the positive integer such that $n-1<\alpha<n$.
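As a numerical illustration (added here; not part of the original paper), the Caputo derivative of order $0<\alpha<1$ can be approximated with the standard L1 discretization. The helper `caputo_l1` below is a hypothetical sketch; for $h(t)=t$ it reproduces the known value $D^{\alpha}t=t^{1-\alpha}/\Gamma(2-\alpha)$.

```python
import math

def caputo_l1(h, alpha, t, n):
    """Approximate the Caputo derivative D^alpha h(t), 0 < alpha < 1,
    with the L1 scheme on a uniform grid of n steps."""
    dt = t / n
    acc = 0.0
    for k in range(n):
        # weights b_k = (k+1)^(1-alpha) - k^(1-alpha)
        b_k = (k + 1) ** (1 - alpha) - k ** (1 - alpha)
        # backward difference h(t_{n-k}) - h(t_{n-k-1})
        acc += b_k * (h((n - k) * dt) - h((n - k - 1) * dt))
    return acc * dt ** (-alpha) / math.gamma(2 - alpha)

# For h(t) = t the exact Caputo derivative is t^(1-alpha)/Gamma(2-alpha).
approx = caputo_l1(lambda s: s, 0.5, 1.0, 400)
exact = 1.0 / math.gamma(1.5)
```

For a linear function the telescoping sum makes the L1 scheme exact up to rounding, which makes this a convenient self-check.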
Definition 2
([37]). The zero solution of (1) is said to be globally exponentially stable in $PC$ if there exist constants $\alpha>0$ and $K\ge 0$ such that, for any solution $y(t,t_{0},\phi)$ with initial condition $\phi\in PC$,
$$\|y(t,t_{0},x(t_{0}))\|\le K\|\phi\|_{\rho}\,e^{-\alpha(t-t_{0})},\quad t\ge t_{0}.$$
Lemma 1
([38]). If $S\in\mathbb{C}^{n\times n}$ is a positive definite Hermitian matrix and $\mu(x):[a,b]\to\mathbb{C}^{n}$ is a vector function with scalars $a<b$, then
$$\Bigl(\int_{a}^{b}\mu(x)\,dx\Bigr)^{T}S\Bigl(\int_{a}^{b}\mu(x)\,dx\Bigr)\le(b-a)\int_{a}^{b}\mu^{T}(x)S\,\mu(x)\,dx.$$
Lemma 2
([39]). Given constant matrices $S_{1},S_{2},S_{3}$, where $S_{1}=S_{1}^{T}$, $S_{2}=S_{2}^{T}$ and $S_{2}>0$, then
$$\begin{pmatrix}S_{1} & S_{3}^{T}\\ S_{3} & -S_{2}\end{pmatrix}<0$$
if and only if
$$S_{1}+S_{3}^{T}S_{2}^{-1}S_{3}<0.$$
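The Schur complement equivalence of Lemma 2 can be checked numerically on a concrete instance; the sketch below (an illustration added here, not from the paper) verifies that the block matrix and the Schur complement are negative definite together.

```python
import numpy as np

S1 = -np.eye(2)                           # S1 = S1^T
S2 = np.eye(2)                            # S2 = S2^T > 0
S3 = np.array([[0.3, 0.1], [0.0, 0.2]])   # arbitrary coupling block

# Block matrix [[S1, S3^T], [S3, -S2]]
block = np.block([[S1, S3.T], [S3, -S2]])
# Schur complement S1 + S3^T S2^{-1} S3
schur = S1 + S3.T @ np.linalg.solve(S2, S3)

# negative definiteness <=> largest eigenvalue is negative
block_nd = np.linalg.eigvalsh(block).max() < 0
schur_nd = np.linalg.eigvalsh(schur).max() < 0
```

Because the singular values of this $S_3$ are below 1, both tests come out negative definite, matching the lemma.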
Assumption 1.
$\mu(t)$, $\varphi(t)$, $\delta(t)$ are delays satisfying
$$0\le\delta(t)\le\delta,\quad\dot{\delta}(t)\le\delta_{1},\qquad 0\le\varphi(t)\le\varphi,\quad\dot{\varphi}(t)\le\varphi_{1},\qquad 0\le\mu(t)\le\mu,\quad\dot{\mu}(t)\le\mu_{1}.$$
Assumption 2.
In the complex field, the activation functions f ( · ) , h ( · ) satisfy the Lipschitz conditions:
$$|f_{k}(p)-f_{k}(q)|\le l_{k}|p-q|,\qquad |h_{k}(p)-h_{k}(q)|\le m_{k}|p-q|,$$
where $p,q\in\mathbb{C}$ and $l_{k}>0$, $m_{k}>0$ $(k=1,2,\ldots,n)$ are Lipschitz constants.
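For instance (an illustration added here, not from the paper), the split-type activation $f(z)=\tanh(\operatorname{Re}z)+i\tanh(\operatorname{Im}z)$ satisfies Assumption 2 with Lipschitz constant $l_{k}=1$, since $\tanh$ is 1-Lipschitz in each real coordinate; the sketch samples random pairs to confirm the bound.

```python
import math, random

def f(z):
    # split-type complex activation: 1-Lipschitz in each real coordinate
    return complex(math.tanh(z.real), math.tanh(z.imag))

random.seed(0)
ok = True
for _ in range(1000):
    p = complex(random.uniform(-5, 5), random.uniform(-5, 5))
    q = complex(random.uniform(-5, 5), random.uniform(-5, 5))
    # |f(p) - f(q)| <= 1 * |p - q|  (small slack for rounding)
    ok = ok and abs(f(p) - f(q)) <= abs(p - q) + 1e-12
```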
In this paper, we consider a class of fractional-order complex-valued neural networks,
$$D^{\alpha}z_{j}(t)=-l_{j}z_{j}(t-\mu(t))+\sum_{\varsigma=1}^{n}b_{j\varsigma}g_{\varsigma}(z_{\varsigma}(t))+\sum_{\varsigma=1}^{n}c_{j\varsigma}g_{\varsigma}(z_{\varsigma}(t-\varphi_{\varsigma}(t)))+\sum_{\varsigma=1}^{n}e_{j\varsigma}\int_{t-\delta_{\varsigma}(t)}^{t}g_{\varsigma}(z_{\varsigma}(s))\,ds+I_{j}(t),\quad z_{j}(s)=\phi_{j}(s),\ s\in[-\varphi,0],$$
or
$$D^{\alpha}z(t)=-Lz(t-\mu(t))+Bg(z(t))+Cg(z(t-\varphi(t)))+E\int_{t-\delta(t)}^{t}g(z(s))\,ds+I(t),\quad z(s)=\phi(s),\ s\in[-\varphi,0],$$
where $0<\alpha<1$ denotes the order of the fractional derivative, $\mu>0$ is the leakage delay, $z(t)\in\mathbb{C}^{n}$ stands for the neural state, $L=\mathrm{diag}(l_{1},\ldots,l_{n})\in\mathbb{R}^{n\times n}$ is the self-feedback connection weight matrix, $B\in\mathbb{C}^{n\times n}$, $C\in\mathbb{C}^{n\times n}$ and $E\in\mathbb{C}^{n\times n}$ are connection weight matrices, and $g(\cdot)\in\mathbb{C}^{n}$ represents the complex-valued neuron activation function.
A constant point z * = ( z 1 * , z 2 * , , z n * ) is said to be an equilibrium point of (1) if z * is a solution of (1), that is,
$$0=-l_{j}z_{j}^{*}+\sum_{\varsigma=1}^{n}b_{j\varsigma}g_{\varsigma}(z_{\varsigma}^{*})+\sum_{\varsigma=1}^{n}c_{j\varsigma}g_{\varsigma}(z_{\varsigma}^{*})+\sum_{\varsigma=1}^{n}e_{j\varsigma}\int_{t-\delta_{\varsigma}(t)}^{t}g_{\varsigma}(z_{\varsigma}^{*})\,ds+I_{j}.$$
Let us shift the equilibrium of system (1) to the origin through the change of coordinates $\tilde{\pi}_{j}(t)=z_{j}(t)-z_{j}^{*}$; then we obtain
$$D^{\alpha}\tilde{\pi}_{j}(t)=-l_{j}\tilde{\pi}_{j}(t-\mu(t))+\sum_{\varsigma=1}^{n}b_{j\varsigma}\bigl[g_{\varsigma}(\tilde{\pi}_{\varsigma}(t)+z_{\varsigma}^{*})-g_{\varsigma}(z_{\varsigma}^{*})\bigr]+\sum_{\varsigma=1}^{n}c_{j\varsigma}\bigl[g_{\varsigma}(\tilde{\pi}_{\varsigma}(t-\varphi_{\varsigma}(t))+z_{\varsigma}^{*})-g_{\varsigma}(z_{\varsigma}^{*})\bigr]+\sum_{\varsigma=1}^{n}e_{j\varsigma}\int_{t-\delta_{\varsigma}(t)}^{t}\bigl[g_{\varsigma}(\tilde{\pi}_{\varsigma}(s)+z_{\varsigma}^{*})-g_{\varsigma}(z_{\varsigma}^{*})\bigr]\,ds,$$
that is,
$$D^{\alpha}\tilde{\pi}_{j}(t)=-l_{j}\tilde{\pi}_{j}(t-\mu(t))+\sum_{\varsigma=1}^{n}b_{j\varsigma}h_{\varsigma}(\tilde{\pi}_{\varsigma}(t))+\sum_{\varsigma=1}^{n}c_{j\varsigma}f_{\varsigma}(\tilde{\pi}_{\varsigma}(t-\varphi_{\varsigma}(t)))+\sum_{\varsigma=1}^{n}e_{j\varsigma}\int_{t-\delta_{\varsigma}(t)}^{t}h_{\varsigma}(\tilde{\pi}_{\varsigma}(s))\,ds,$$
where
$$g_{\varsigma}(\tilde{\pi}_{\varsigma}(t)+z_{\varsigma}^{*})-g_{\varsigma}(z_{\varsigma}^{*})=h_{\varsigma}(\tilde{\pi}_{\varsigma}(t)),\qquad g_{\varsigma}(\tilde{\pi}_{\varsigma}(t-\varphi_{\varsigma}(t))+z_{\varsigma}^{*})-g_{\varsigma}(z_{\varsigma}^{*})=f_{\varsigma}(\tilde{\pi}_{\varsigma}(t-\varphi_{\varsigma}(t))).$$
It is obvious that the functions $f(\cdot)$, $h(\cdot)$ also satisfy Assumption 2.

3. Main Results

Theorem 1.
Let $\alpha_{j}$ $(j=1,2,\ldots,n)$ be positive constants and set
$$\Lambda_{1}=\Bigl[-2l_{j}+3\sum_{\varsigma=1}^{n}L_{\varsigma}^{2}\Bigr]+b_{j\varsigma}^{2}\,\frac{\alpha_{j}}{\alpha_{\varsigma}},\qquad \Lambda_{2}=\frac{\alpha_{j}}{\alpha_{\varsigma}}\bigl[c_{j\varsigma}^{2}+e_{j\varsigma}^{2}\bigr].$$
If $-(\Lambda_{1}+\Lambda_{2})$ is an M-matrix, then the equilibrium point $z^{*}$ of system (1) is globally exponentially stable.
Proof. 
Let $y_{j}(t)=\frac{1}{2}\alpha_{j}\tilde{\pi}_{j}^{2}(t)$ and calculate $D^{\alpha}y_{j}(t)$:
D α y j ( t ) = α j π ˜ j ( t ) [ D α π ˜ j ( t ) ] = α j π ˜ j ( t ) [ l j π ˜ j ( t μ ( t ) ) + ς = 1 n b j ς h ς ( π ˜ ς ( t ) ) + ς = 1 n c j ς f ς ( π ˜ ς ( t φ ς ( t ) ) ) + ς = 1 n e j ς t δ ς ( t ) t h ς ( π ˜ ς ( s ) ) d s ] = α j l j π ˜ j ( t ) π ˜ j ( t μ ( t ) ) + ς = 1 n α j π ˜ j ( t ) b j ς h ς ( π ˜ ς ( t ) ) + ς = 1 n α j π ˜ j ( t ) c j ς f ς ( π ˜ ς ( t φ ς ( t ) ) ) + ς = 1 n α j π ˜ j ( t ) e j ς t δ ς ( t ) t h ς ( π ˜ ς ( s ) ) d s α j π ˜ j 2 ( t ) l j + ς = 1 n α j | π ˜ j ( t ) | b j ς | L ς | π ˜ ς ( t ) | + ς = 1 n α j | π ˜ j ( t ) | | c j ς | | L ς | | π ˜ ¯ ς ( t ) | + ς = 1 n α j | π ˜ j ( t ) | | e j ς | | L ς | | π ˜ ¯ ς ( t ) | α j π ˜ j 2 ( t ) l j + 1 2 ς = 1 n α j [ L ς 2 π ˜ j 2 ( t ) + b j ς 2 π ˜ ς 2 ( t ) ] + 1 2 ς = 1 n α j [ L ς 2 π ˜ j 2 ( t ) + c j ς 2 π ˜ ¯ ς 2 ( t ) ] + 1 2 ς = 1 n α j [ | π ˜ j 2 ( t ) L ς 2 + e j ς 2 π ˜ ¯ ς 2 ( t ) ] = [ α j l j + 3 2 ς = 1 n α j L ς 2 ] π ˜ j 2 ( t ) + 1 2 [ α j b j ς 2 ] π ˜ ς 2 ( t ) + 1 2 [ ς = 1 n α j c j ς 2 + ς = 1 n α j e j ς 2 ] π ˜ ¯ ς 2 = ς = 1 n { [ 2 l j + 3 ς = 1 n L ς 2 ] + b j ς 2 α j α ς } 1 2 α ς π ˜ ς 2 ( t ) + ς = 1 n { α j α ς c j ς 2 + α j α ς e j ς 2 } 1 2 α ς π ˜ ¯ ς 2 ( t ) .
With $\Lambda_{1}$ and $\Lambda_{2}$ as defined in the statement of the theorem, the above estimate can be rewritten as
$$D^{\alpha}y_{j}(t)\le\Lambda_{1}y(t)+\Lambda_{2}\bar{y}(t).$$
If the matrix $\Lambda=-(\Lambda_{1}+\Lambda_{2})$ is an M-matrix, then there exist constants $\lambda>0$ and $s_{j}>0$ $(j=1,2,\ldots,n)$ such that
1 2 α min π ˜ j 2 ( t ) y j ( t ) = 1 2 α j π ˜ j 2 ( t ) s j ς = 1 n y ¯ ς ( t 0 ) η λ ( t t 0 ) = s j ς = 1 n 1 2 α ς π ˜ ¯ ς 2 ( t 0 ) η λ ( t t 0 ) 1 2 s j α max ς = 1 n π ˜ ¯ ς 2 ( t 0 ) η λ ( t t 0 ) π ˜ j 2 ( t ) α max α min s j ς = 1 n π ˜ ¯ ς 2 ( t 0 ) η λ ( t t 0 )
that is,
$$\|z_{j}(t)-z_{j}^{*}\|\le\Bigl(\frac{\alpha_{\max}}{\alpha_{\min}}\,s_{j}\Bigr)^{\frac{1}{2}}\|\bar{z}_{j}(t_{0})-z_{j}^{*}\|\,\eta^{-\frac{\lambda(t-t_{0})}{2}}.$$
This implies that the unique equilibrium point of Equation (1) is globally exponentially stable. □
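The M-matrix condition used in the proof can be tested mechanically. A hedged sketch (the helper `is_nonsingular_m_matrix` is ours, not the paper's): a real square matrix with non-positive off-diagonal entries is a nonsingular M-matrix exactly when all of its eigenvalues have positive real part.

```python
import numpy as np

def is_nonsingular_m_matrix(A):
    """Check the Z-matrix sign pattern plus positive spectral abscissa."""
    A = np.asarray(A, dtype=float)
    off_diag = A - np.diag(np.diag(A))
    if (off_diag > 0).any():          # off-diagonal entries must be <= 0
        return False
    return bool(np.linalg.eigvals(A).real.min() > 0)

# diagonally dominant Z-matrix -> M-matrix; the second example is not
good = is_nonsingular_m_matrix([[2.0, -1.0], [-1.0, 2.0]])
bad  = is_nonsingular_m_matrix([[1.0, -2.0], [-2.0, 1.0]])
```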
Remark 1.
$$D^{\alpha}\tilde{\pi}_{j}(t)=-l_{j}\tilde{\pi}_{j}(t-\mu(t))+\sum_{\varsigma=1}^{n}b_{j\varsigma}h_{\varsigma}(\tilde{\pi}_{\varsigma}(t))+\sum_{\varsigma=1}^{n}c_{j\varsigma}f_{\varsigma}(\tilde{\pi}_{\varsigma}(t-\varphi_{\varsigma}(t)))+\sum_{\varsigma=1}^{n}e_{j\varsigma}\int_{t-\delta_{\varsigma}(t)}^{t}h_{\varsigma}(\tilde{\pi}_{\varsigma}(s))\,ds+u_{\varsigma}(t)$$
or equivalently
$$D^{\alpha}\tilde{\pi}(t)=-L\tilde{\pi}(t-\mu(t))+Bh(\tilde{\pi}(t))+Cf(\tilde{\pi}(t-\varphi(t)))+E\int_{t-\delta(t)}^{t}h(\tilde{\pi}(s))\,ds+U(t).$$
To investigate the global exponential stability of the system, a simple sliding motion is proposed as
$$s(t)=D^{\alpha-1}\tilde{\pi}(t).$$
It is worth noting that the first-order derivative of the sliding surface equals the right-hand side of the system equation; that is, differentiating both sides of the above equation gives
$$\dot{s}(t)=D^{\alpha}\tilde{\pi}(t).$$
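The identity $\dot{s}(t)=D^{\alpha}\tilde{\pi}(t)$ for $s(t)=D^{\alpha-1}\tilde{\pi}(t)$ can be sanity-checked on monomials, for which the Riemann–Liouville integral and Caputo derivative have the textbook closed forms $D^{\alpha-1}t^{p}=\frac{\Gamma(p+1)}{\Gamma(p+2-\alpha)}t^{p+1-\alpha}$ and $D^{\alpha}t^{p}=\frac{\Gamma(p+1)}{\Gamma(p+1-\alpha)}t^{p-\alpha}$. A small sketch of ours, under these standard formulas:

```python
import math

def frac_int_monomial(p, alpha, t):
    # order-(1-alpha) Riemann-Liouville integral of t^p, i.e. s(t) = D^{alpha-1} t^p
    return math.gamma(p + 1) / math.gamma(p + 2 - alpha) * t ** (p + 1 - alpha)

def caputo_monomial(p, alpha, t):
    # Caputo derivative of t^p for 0 < alpha < 1, i.e. D^alpha t^p
    return math.gamma(p + 1) / math.gamma(p + 1 - alpha) * t ** (p - alpha)

# the numerical derivative of s(t) should match D^alpha t^p
p, alpha, t, h = 2.0, 0.5, 1.3, 1e-6
s_dot = (frac_int_monomial(p, alpha, t + h) - frac_int_monomial(p, alpha, t - h)) / (2 * h)
```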
A state feedback sliding mode controller for the system is constructed as
$$U(t)=-Ks(t),$$
where K is the state feedback gain. Then,
$$D^{\alpha}\tilde{\pi}(t)=-L\tilde{\pi}(t-\mu(t))+Bh(\tilde{\pi}(t))+Cf(\tilde{\pi}(t-\varphi(t)))+E\int_{t-\delta(t)}^{t}h(\tilde{\pi}(s))\,ds-Ks(t).$$
Theorem 2.
The system (11) is globally exponentially stable if there exist positive definite Hermitian matrices R 1 , R 2 , R 3 , R 4 , R 5 , R 6 , R 7 , R 8 , R 9 , R 10 , positive diagonal matrices T 1 , T 2 , T 3 , T 4 and a positive scalar α, satisfying the following condition:
Ω = ( Ω i , j ) 13 × 13 < 0 ,
where
Ω 1 , 1 = R 4 + δ R 6 + φ R 8 + R 9 + R 10 H 1 T 1 H 2 L 1 T 3 L 2 , Ω 1 , 2 = T 1 ( H 1 + H 2 ) , Ω 1 , 3 = T 3 ( L 1 + L 2 ) , Ω 2 , 2 = R 2 + φ R 5 T 1 , Ω 3 , 3 = R 3 + δ R 7 T 3 , Ω 3 , 13 = B T R 1 , Ω 4 , 4 = η 2 α φ R 9 ( 1 φ 1 ) H 1 T 2 H 2 , Ω 4 , 7 = T 2 ( H 1 + H 2 ) , Ω 5 , 5 = η 2 α μ R 4 ( 1 μ 1 ) , Ω 5 , 13 = L T R 1 , Ω 6 , 6 = η 2 α δ R 10 ( 1 δ 1 ) L 1 T 4 L 2 , Ω 6 , 8 = T 4 ( L 1 + L 2 ) , Ω 7 , 7 = R 2 ( 1 φ 1 ) η 2 α φ T 2 , Ω 7 , 13 = C T R 1 , Ω 8 , 8 = η 2 α δ R 3 ( 1 δ 1 ) T 4 , Ω 9 , 9 = R 6 δ , Ω 10 , 10 = R 7 δ , Ω 10 , 13 = E T R 1 , Ω 11 , 11 = R 8 φ , Ω 12 , 12 = R 5 φ , Ω 13 , 13 = 2 α R 1 2 R 1 K .
Proof. 
Consider the following Lyapunov function
V = V 1 + V 2 + V 3 + V 4 + V 5 + V 6 + V 7 + V 8 + V 9 + V 10 ,
where
V 1 ( t , π ˜ ( t ) ) = η 2 α t s T ( t ) R 1 s ( t ) , V 2 ( t , π ˜ ( t ) ) = t φ ( t ) t η 2 α s f T ( π ˜ ( s ) ) R 2 f ( π ˜ ( s ) ) d s , V 3 ( t , π ˜ ( t ) ) = t δ ( t ) t η 2 α s h T ( π ˜ ( s ) ) R 3 h ( π ˜ ( s ) ) d s , V 4 ( t , π ˜ ( t ) ) = t μ ( t ) t η 2 α s π ˜ T ( s ) R 4 π ˜ ( s ) d s , V 5 ( t , π ˜ ( t ) ) = φ ( t ) 0 t + θ t η 2 α s f T ( π ˜ ( s ) ) R 5 f ( π ˜ ( s ) ) d s d θ , V 6 ( t , π ˜ ( t ) ) = δ ( t ) 0 t + θ t η 2 α s π ˜ T ( s ) R 6 π ˜ ( s ) d s d θ , V 7 ( t , π ˜ ( t ) ) = δ ( t ) 0 t + θ t η 2 α s h T ( π ˜ ( s ) ) R 7 h ( π ˜ ( s ) ) d s d θ , V 8 ( t , π ˜ ( t ) ) = φ ( t ) 0 t + θ t η 2 α s π ˜ T ( s ) R 8 π ˜ ( s ) d s d θ ,
V 9 ( t , π ˜ ( t ) ) = t φ ( t ) t η 2 α s π ˜ T ( s ) R 9 π ˜ ( s ) d s , V 10 ( t , π ˜ ( t ) ) = t δ ( t ) t η 2 α s π ˜ T ( s ) R 10 π ˜ ( s ) d s .
Now, we can calculate the time derivative of V along the trajectories of system (11), then we have
V ˙ 1 ( t , π ˜ ( t ) ) = 2 α η 2 α t s T ( t ) R 1 s ( t ) + 2 η 2 α t s T ( t ) R 1 D α π ˜ ( t ) , V ˙ 2 ( t , π ˜ ( t ) ) η 2 α t f T ( π ˜ ( t ) ) R 2 f ( π ˜ ( t ) ) η 2 α ( t φ ) f T ( π ˜ ( t φ ( t ) ) ) R 2 f ( π ˜ ( t φ ( t ) ) ) ( 1 φ 1 ) , V ˙ 3 ( t , π ˜ ( t ) ) η 2 α t h T ( π ˜ ( t ) ) R 3 h ( π ˜ ( t ) ) η 2 α ( t δ ) h T ( π ˜ ( t δ ( t ) ) ) R 3 h ( π ˜ ( t δ ( t ) ) ) ( 1 δ 1 ) , V ˙ 4 ( t , π ˜ ( t ) ) η 2 α t π ˜ T ( t ) R 4 π ˜ ( t ) η 2 α ( t μ ) π ˜ T ( t μ ( t ) ) R 4 π ˜ ( t μ ( t ) ) ( 1 μ 1 ) , V ˙ 5 ( t , π ˜ ( t ) ) = φ η 2 α t f T ( π ˜ ( t ) ) R 5 f ( π ˜ ( t ) ) t φ ( t ) t η 2 α s f T ( π ˜ ( s ) ) R 5 f ( π ˜ ( s ) ) d s , V ˙ 6 ( t , π ˜ ( t ) ) = δ η 2 α t π ˜ T ( t ) R 6 π ˜ ( t ) t δ ( t ) t η 2 α s π ˜ T ( s ) R 6 π ˜ ( s ) d s , V ˙ 7 ( t , π ˜ ( t ) ) = δ η 2 α t h T ( π ˜ ( t ) ) R 7 h ( π ˜ ( t ) ) t δ ( t ) t η 2 α s h T ( π ˜ ( s ) ) R 7 h ( π ˜ ( s ) ) d s , V ˙ 8 ( t , π ˜ ( t ) ) = φ η 2 α t π ˜ T ( t ) R 8 π ˜ ( t ) t φ ( t ) t η 2 α s π ˜ T ( s ) R 8 π ˜ ( s ) d s , V ˙ 9 ( t , π ˜ ( t ) ) η 2 α t π ˜ T ( t ) R 9 π ˜ ( t ) η 2 α ( t φ ( t ) ) π ˜ T ( t φ ( t ) ) R 9 π ˜ ( t φ ( t ) ) ( 1 φ 1 ) , V ˙ 10 ( t , π ˜ ( t ) ) η 2 α t π ˜ T ( t ) R 10 π ˜ ( t ) η 2 α ( t δ ( t ) ) π ˜ T ( t δ ( t ) ) R 10 π ˜ ( t δ ( t ) ) ( 1 δ 1 ) .
By Lemma 1, we obtain the following inequalities:
t φ ( t ) t η 2 α s f T ( π ˜ ( s ) ) R 5 f ( π ˜ ( s ) ) d s 1 φ η 2 α t [ t φ ( t ) t f ( π ˜ ( s ) ) d s ] T R 5 [ t φ ( t ) t f ( π ˜ ( s ) ) d s ] , t δ ( t ) t η 2 α s π ˜ T ( s ) R 6 π ˜ ( s ) d s 1 δ η 2 α t [ t δ ( t ) t π ˜ ( s ) d s ] T R 6 [ t δ ( t ) t π ˜ ( s ) d s ] ,
t δ ( t ) t η 2 α s h T ( π ˜ ( s ) ) R 7 h ( π ˜ ( s ) ) d s 1 δ η 2 α t [ t δ ( t ) t h ( π ˜ ( s ) ) d s ] T R 7 [ t δ ( t ) t h ( π ˜ ( s ) ) d s ] , t φ ( t ) t η 2 α s π ˜ T ( s ) R 8 π ˜ ( s ) d s 1 φ η 2 α t [ t φ ( t ) t π ˜ ( s ) d s ] T R 8 [ t φ ( t ) t π ˜ ( s ) d s ] .
Since $T_{1}$, $T_{2}$, $T_{3}$ and $T_{4}$ are positive diagonal matrices, we can get from Assumption 2 that
π ˜ T ( t ) H 1 T 1 H 2 π ˜ ( t ) π ˜ T ( t ) T 1 ( H 1 + H 2 ) f ( π ˜ ( t ) ) + f T ( π ˜ ( t ) ) T 1 f ( π ˜ ( t ) ) 0 , π ˜ T ( t φ ( t ) ) H 1 T 2 H 2 π ˜ ( t φ ( t ) ) π ˜ T ( t φ ( t ) ) T 2 ( H 1 + H 2 ) f ( π ˜ ( t φ ( t ) ) ) + f T ( π ˜ ( t φ ( t ) ) ) T 2 f ( π ˜ ( t φ ( t ) ) ) 0 , π ˜ T ( t ) L 1 T 3 L 2 π ˜ ( t ) π ˜ T ( t ) T 3 ( L 1 + L 2 ) h ( π ˜ ( t ) ) + h T ( π ˜ ( t ) ) T 3 h ( π ˜ ( t ) ) 0 , π ˜ T ( t δ ( t ) ) L 1 T 4 L 2 π ˜ ( t δ ( t ) ) π ˜ T ( t δ ( t ) ) T 4 ( L 1 + L 2 ) h ( π ˜ ( t δ ( t ) ) ) + h T ( π ˜ ( t δ ( t ) ) ) T 4 h ( π ˜ ( t δ ( t ) ) ) 0 .
Combining the above, we obtain the following estimate for the derivative of (13):
V ˙ ( t , π ˜ ( t ) ) η 2 α t [ s T ( t ) 2 α R 1 s ( t ) + 2 s T ( t ) R 1 D α π ˜ ( t ) + f T ( π ˜ ( t ) ) R 2 f ( π ˜ ( t ) ) η 2 α φ f T ( π ˜ ( t φ ( t ) ) ) R 2 f ( π ˜ ( t φ ( t ) ) ) ( 1 φ 1 ) + h T ( π ˜ ( t ) ) R 3 h ( π ˜ ( t ) ) η 2 α δ h T ( π ˜ ( t δ ( t ) ) ) R 3 h ( π ˜ ( t δ ( t ) ) ) ( 1 δ 1 ) + π ˜ T ( t ) R 4 π ˜ ( t ) η 2 α μ π ˜ T ( t μ ( t ) ) R 4 π ˜ ( t μ ( t ) ) ( 1 μ 1 ) + φ f T ( π ˜ ( t ) ) R 5 f ( π ˜ ( t ) ) t φ ( t ) t f ( π ˜ ( s ) ) d s T R 5 φ ( t φ ( t ) t f ( π ˜ ( s ) ) d s ) d s + δ π ˜ T ( t ) R 6 π ¯ ( t ) t δ ( t ) t π ˜ ( s ) d s T R 6 δ t δ ( t ) t π ˜ ( s ) d s + δ h T ( π ˜ ( t ) ) R 7 h ( π ˜ ( t ) ) t δ ( t ) t h ( π ˜ ( s ) ) d s T R 7 δ t δ ( t ) t h ( π ˜ ( s ) ) d s + φ π ˜ T ( t ) R 8 π ˜ ( t ) t φ t π ˜ ( s ) d s T R 8 φ t φ t π ˜ ( s ) d s + π ˜ T ( t ) R 9 π ( t ) ˜ η 2 α φ π ˜ T ( t φ ( t ) ) R 9 ζ ( t φ ( t ) ) ( 1 φ 1 ) + π ˜ T ( t ) R 10 π ˜ ( t ) η 2 α δ π ˜ T ( t δ ( t ) ) R 10 π ˜ ( t δ ( t ) ) ( 1 δ 1 ) ] ,
V ˙ ( t , π ˜ ( t ) ) = η 2 α t β T ( t ) Ω β ( t ) < 0 ,
where
β ( t ) = [ π ˜ T ( t ) f T ( π ˜ ( t ) ) h T ( π ˜ ( t ) ) π ˜ T ( t φ ( t ) ) π ˜ T ( t μ ( t ) ) π ˜ T ( t δ ( t ) ) f T ( π ˜ ( t φ ( t ) ) h T ( π ˜ ( t δ ( t ) ) ) ( t δ ( t ) t π ˜ ( s ) d s ) T ( t δ ( t ) t h ( π ˜ ( s ) ) d s ) T ( t φ ( t ) t π ˜ ( s ) d s ) T ( t φ ( t ) t f ( π ˜ ( s ) ) d s ) T s T ( t ) ] T .
From above condition, we have
V ˙ ( t , π ˜ ( t ) ) 0 .
Based on the Lyapunov method, system (11) is asymptotically stable. Furthermore, we prove the exponential stability of the neural networks. We know that $V(t)$ is monotone non-increasing in $t$ for $t\in[t_{0},\infty)$, i.e., $V(t,\tilde{\pi}(t))\le V(t_{0},\tilde{\pi}(t_{0}))$.
On the other hand,
V ( t 0 , π ˜ ( t 0 ) ) = η 2 α t 0 s T ( t 0 ) R 1 s ( t 0 ) + t 0 φ ( t 0 ) t 0 η 2 α s f T ( π ˜ ( s ) ) R 2 f ( π ˜ ( s ) ) d s + t 0 δ ( t 0 ) t 0 η 2 α s h T ( π ˜ ( s ) ) R 3 h ( π ˜ ( s ) ) d s + t 0 μ ( t 0 ) t η 2 α s π ˜ T ( s ) R 4 π ˜ ( s ) d s + φ ( t 0 ) 0 t 0 + θ t 0 η 2 α s f T ( π ˜ ( s ) ) R 5 f ( π ˜ ( s ) ) d s d θ + δ ( t 0 ) 0 t 0 + θ t 0 η 2 α s π ˜ T ( s ) R 6 π ˜ ( s ) d s d θ + δ ( t 0 ) 0 t 0 + θ t 0 η 2 α s h T ( π ˜ ( s ) ) R 7 h ( π ˜ ( s ) ) d s d θ + φ ( t 0 ) 0 t 0 + θ t 0 η 2 α s π ˜ T ( s ) R 8 π ˜ ( s ) d s d θ + t 0 φ ( t 0 ) t η 2 α s π ˜ T ( s ) R 9 π ˜ ( s ) d s + t 0 δ ( t 0 ) t 0 η 2 α s π ˜ T ( s ) R 10 π ˜ ( s ) d s , η 2 α t 0 { λ M ( R 1 ) + [ λ M ( R 2 ) β 1 + λ M ( R 3 ) β 2 + λ M ( R 4 ) β 3 + λ M ( R 9 ) β 1 + λ M ( R 10 ) β 2 ] l R 2 + [ ( λ M ( R 5 ) + λ M ( R 8 ) ) β 4 + ( λ M ( R 6 ) + λ M ( R 7 ) ) β 5 ] } | | ϕ | | 2 ,
where
β 1 = 1 η 2 α φ 2 α , β 2 = 1 η 2 α δ 2 α , β 3 = 1 η 2 α μ 2 α , β 4 = 1 2 α φ 1 η 2 α φ 2 α , β 5 = 1 2 α δ 1 η 2 α δ 2 α .
For any $t$, from the chosen Lyapunov functional we can get the following inequalities:
$$V(t,\tilde{\pi}(t))\ge\lambda_{m}(R_{1})\,\eta^{2\alpha t}\|s(t)\|^{2},\qquad V(t,\tilde{\pi}(t))\le V(t_{0},\tilde{\pi}(t_{0})).$$
That is,
$$\|s(t)\|\le\sqrt{\frac{M_{1}}{\lambda_{m}(R_{1})}}\,\|\phi\|\,\eta^{-\alpha t},\qquad\|D^{\alpha-1}\tilde{\pi}(t)\|\le\sqrt{\frac{M_{1}}{\lambda_{m}(R_{1})}}\,\|\phi\|\,\eta^{-\alpha t}.$$
For the $i$-th component of the vector $\tilde{\pi}(t)$, we have
M 1 λ m ( R 1 ) | | ϕ | | η α t s γ 1 π ˜ i ( s ) M 1 λ m ( R 1 ) | | ϕ | | η α t .
Taking Laplace transform and Laplace inverse transformation, we have the following inequality
| π ˜ ( t ) | M 1 λ m ( R 1 ) | | ϕ | | 1 t 1 γ E 1 , γ ( α t ) .
Thereupon,
| | π ˜ ( t ) | | M 1 λ m ( R 1 ) | | ϕ | | 1 t 1 γ E 1 , γ ( α t ) .
For γ = 1 , we have
$$\|\tilde{\pi}(t)\|\le\sqrt{\frac{M_{1}}{\lambda_{m}(R_{1})}}\,\|\phi\|\,\eta^{-\alpha t},$$
where,
M 1 = η 2 α t 0 { ( λ M ( R 1 ) + ( λ M ( R 2 ) β 1 + λ M ( R 3 ) β 2 + λ M ( R 4 ) β 3 + λ M ( R 9 ) β 1 + λ M ( R 10 ) β 2 ) l R 2 + ( λ M ( R 5 ) + λ M ( R 8 ) ) β 4 + ( λ M ( R 6 ) + λ M ( R 7 ) ) β 5 } ,
| | π ˜ ( t ) | | M 2 λ m ( R 1 ) | | ϕ | | η α ( t t 0 ) ,
and
M 2 = ( λ M ( R 1 ) + ( λ M ( R 2 ) β 1 + λ M ( R 3 ) β 2 + λ M ( R 4 ) β 3 + λ M ( R 9 ) β 1 + λ M ( R 10 ) β 2 ) l R 2 + ( λ M ( R 5 ) + λ M ( R 8 ) ) β 4 + ( λ M ( R 6 ) + λ M ( R 7 ) ) β 5 .
From the inequalities (15) and (16), the sliding mode surface s ( t ) and the system state trajectory π ˜ ( t ) converge exponentially to zero. Thus, system (11) is globally exponentially stable. □
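The step above for $\gamma=1$ uses the identity $E_{1,1}(z)=e^{z}$. A small sketch of ours confirming this special case of the two-parameter Mittag-Leffler series $E_{a,b}(z)=\sum_{k\ge 0}z^{k}/\Gamma(ak+b)$:

```python
import math

def mittag_leffler(a, b, z, terms=80):
    # partial sum of the two-parameter Mittag-Leffler series,
    # adequate for moderate |z| since Gamma(a k + b) grows super-fast
    return sum(z ** k / math.gamma(a * k + b) for k in range(terms))

val = mittag_leffler(1.0, 1.0, -0.7)   # should equal exp(-0.7)
```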
Remark 2.
In Theorem 2, the LMI involves the Hermitian matrices $R_{1},\ldots,R_{10}$, the positive diagonal matrices $T_{1},\ldots,T_{4}$ and a positive scalar $\alpha$. Theorem 3 below gives a related condition involving a scalar $0\le\delta<1$.
Theorem 3.
The system (11) is globally exponentially stable if there exist positive definite Hermitian matrices R 1 , R 2 , R 3 , R 4 , R 5 , R 6 , R 7 , R 8 , R 9 , R 10 , positive diagonal matrices T 1 , T 2 , T 3 , T 4 and positive scalars 0 δ < 1 , satisfying the following LMI:
ζ = ( ζ i , j ) 13 × 13 < 0 ,
where
ζ 1 , 1 = R 4 + δ R 6 + φ R 8 + R 9 + R 10 H 1 T 1 H 2 L 1 T 3 L 2 , ζ 1 , 2 = T 1 ( H 1 + H 2 ) , Ω 1 , 3 = T 3 ( L 1 + L 2 ) , ζ 2 , 2 = R 2 + φ R 5 T 1 , ζ 3 , 3 = R 3 + δ R 7 T 3 , ζ 3 , 13 = B T R 1 , ζ 4 , 4 = M 9 ( 1 φ 1 ) H 1 T 2 H 2 , ζ 4 , 7 = T 2 ( H 1 + H 2 ) , ζ 5 , 5 = M 4 ( 1 μ 1 ) , ζ 5 , 13 = L T R 1 , ζ 6 , 6 = M 10 ( 1 δ 1 ) L 1 T 4 L 2 , ζ 6 , 8 = T 4 ( L 1 + L 2 ) , ζ 7 , 7 = M 2 ( 1 φ 1 ) T 2 , ζ 7 , 13 = C T R 1 , ζ 8 , 8 = M 3 ( 1 δ 1 ) T 4 , ζ 9 , 9 = R 6 δ , ζ 10 , 10 = R 7 δ , ζ 10 , 13 = E T R 1 , ζ 11 , 11 = R 8 φ , ζ 12 , 12 = R 5 φ , ζ 13 , 13 = M 1 2 R 1 K .
Proof. 
First we consider the following terms, where 0 < δ = η 2 α ρ < 1
0 < 2 α R 1 = M 1 ,
0 < M 2 < δ R 2 = η 2 α ρ R 2 η 2 α φ R 2 ,
0 < M 3 < δ R 3 = η 2 α ρ R 3 η 2 α δ R 3 ,
0 < M 4 < δ R 4 = η 2 α ρ R 4 η 2 α μ R 4 ,
0 < M 9 < δ R 9 = η 2 α ρ R 9 η 2 α φ R 9 ,
0 < M 10 < δ R 10 = η 2 α ρ R 10 η 2 α δ R 10 .
Using (14) and (18)–(23), we get
V ˙ ( t , π ˜ ( t ) ) η 2 α t [ s T ( t ) ( M 1 2 R 1 K ) s ( t ) + s T ( t ) ( 2 R 1 L ) π ˜ ( t μ ( t ) ) + s T ( t ) 2 R 1 B h ( π ˜ ( t ) ) + s T ( t ) 2 R 1 C f ( π ˜ ( t φ ( t ) ) + s T ( t ) 2 R 1 E t φ ( t ) t h ( π ˜ ( s ) ) d s + f T ( π ˜ ( t ) ) R 2 f ( π ˜ ( t ) ) f T ( π ˜ ( t φ ( t ) ) ) M 2 f ( π ˜ ( t φ ( t ) ) ) ( 1 φ 1 ) + h T ( π ˜ ( t ) ) R 3 h ( π ˜ ( t ) ) h T ( π ˜ ( t δ ( t ) ) ) M 3 h ( π ˜ ( t δ ( t ) ) ) ( 1 δ 1 ) + π ˜ T ( t ) R 4 π ˜ ( t ) π ˜ T ( t μ ( t ) ) M 4 π ˜ ( t μ ( t ) ) ( 1 μ 1 ) + φ f T ( π ˜ ( t ) ) R 5 f ( π ˜ ( t ) ) + δ π ˜ T ( t ) R 6 π ˜ ( t ) R 5 φ t φ ( t ) t f ( π ˜ ( s ) ) d s T t φ ( t ) t f T ( π ˜ ( s ) ) d s R 6 δ t δ ( t ) t π ˜ ( s ) d s T t δ ( t ) t π ˜ ( s ) d s + δ h T ( π ˜ ( t ) ) R 7 h ( π ˜ ( t ) ) R 7 δ t δ ( t ) t h ( π ˜ ( s ) ) d s T t δ ( t ) t h ( π ˜ ( s ) ) d s + φ π ˜ ( t ) R 8 π ˜ ( t ) R 8 φ t φ ( t ) t π ˜ ( s ) d s T t φ ( t ) t π ˜ ( s ) d s + π ˜ T ( t ) R 9 π ˜ ( t ) M 9 π ˜ T ( t φ ( t ) ) ( 1 φ 1 ) π ˜ ( t φ ( t ) ) + π ˜ T ( t ) R 10 π ˜ ( t ) M 10 π ˜ T ( t δ ( t ) ) ( 1 δ 1 ) π ˜ ( t δ ( t ) ) ]
V ˙ ( t , π ˜ ( t ) ) η 2 α t β T ( t ) ζ β ( t ) .
The remaining part of the proof is similar to that of Theorem 2.
This implies that the equilibrium point of the system (11) is globally exponentially stable. The proof is completed. □
Remark 3.
When $E=0$, the fractional-order complex-valued neural network (11) reduces to
$$D^{\alpha}\tilde{\pi}(t)=-L\tilde{\pi}(t-\mu(t))+Bh(\tilde{\pi}(t))+Cf(\tilde{\pi}(t-\varphi(t)))-Ks(t).$$
Theorem 4.
The system (25) is globally exponentially stable if there exist Hermitian matrices R 1 , R 2 , R 3 , R 4 , R 9 , R 10 , positive diagonal matrices T 1 , T 2 , T 3 , T 4 satisfying the following LMI:
ϱ = ( ϱ i , j ) 9 × 9 < 0 ,
where
ϱ 1 , 1 = R 4 + R 9 + R 10 H 1 T 1 H 2 L 1 T 3 L 2 , ϱ 1 , 2 = T 3 ( L 1 + L 2 ) , ϱ 1 , 3 = T 1 ( H 1 + H 2 ) , ϱ 1 , 6 = H 3 U H 3 ϱ 2 , 2 = R 2 T 3 , ϱ 2 , 9 = B T R 1 , ϱ 3 , 3 = R 2 T 1 , ϱ 4 , 4 = η 2 α φ R 2 ( 1 φ 1 ) T 2 , ϱ 4 , 7 = ( H 1 + H 2 ) T T 2 , ϱ 4 , 9 = C T R 1 , ϱ 5 , 5 = η 2 α δ R 3 ( 1 δ 1 ) T 4 , ϱ 5 , 8 = ( L 1 + L 2 ) T T 4 ϱ 6 , 6 = η 2 α μ R 4 ( 1 μ 1 ) , ϱ 6 , 9 = L T R 1 , ϱ 7 , 7 = η 2 α φ R 9 ( 1 φ 1 ) H 1 T 2 H 2 , ϱ 8 , 8 = η 2 α δ R 10 ( 1 δ 1 ) L 1 T 4 L 2 , ϱ 9 , 9 = 2 α R 1 2 R 1 K .
Proof. 
Consider the following Lyapunov function
V = V 1 + V 2 + V 3 + V 4 + V 5 + V 6 ,
where
V 1 ( t , π ˜ ( t ) ) = η 2 α t s T ( t ) R 1 s ( t ) , V 2 ( t , π ˜ ( t ) ) = t φ ( t ) t η 2 α s f T ( π ˜ ( s ) ) R 2 f ( π ˜ ( s ) ) d s , V 3 ( t , π ˜ ( t ) ) = t δ ( t ) t η 2 α s h T ( π ˜ ( s ) ) R 3 h ( π ˜ ( s ) ) d s , V 4 ( t , π ˜ ( t ) ) = t μ ( t ) t η 2 α s π ˜ T ( s ) R 4 π ˜ ( s ) d s , V 5 ( t , π ˜ ( t ) ) = t φ ( t ) t η 2 α s π ˜ T ( s ) R 9 π ˜ ( s ) d s , V 6 ( t , π ˜ ( t ) ) = t δ ( t ) t η 2 α s π ˜ T ( s ) R 10 π ˜ ( s ) d s .
Now, we can calculate the time derivative of V along the trajectories of system (25), then we have
V ˙ 1 ( t , π ˜ ( t ) ) = 2 α η 2 α t s T ( t ) R 1 s ( t ) + 2 η 2 α t s T ( t ) R 1 D α π ˜ ( t ) , V ˙ 2 ( t , π ˜ ( t ) ) η 2 α t f T ( π ˜ ( t ) ) R 2 f ( π ˜ ( t ) ) η 2 α ( t φ ) f T ( π ˜ ( t φ ( t ) ) R 2 f ( π ˜ ( t φ ( t ) ) ) ( 1 φ 1 ) , V ˙ 3 ( t , π ˜ ( t ) ) η 2 α t h T ( π ˜ ( t ) ) R 3 h ( π ˜ ( t ) ) η 2 α ( t δ ) h T ( π ˜ ( t δ ( t ) ) R 3 h ( π ˜ ( t δ ( t ) ) ) ( 1 δ 1 ) , V ˙ 4 ( t , π ˜ ( t ) ) η 2 α t π ˜ T ( t ) R 4 π ˜ ( t ) η 2 α ( t μ ) π ˜ T ( t μ ( t ) ) R 4 π ˜ ( t μ ( t ) ) ( 1 μ 1 ) , V ˙ 5 ( t , π ˜ ( t ) ) η 2 α t π ˜ T ( t ) R 9 π ˜ ( t ) η 2 α ( t φ ) π ˜ T ( t φ ( t ) ) R 9 π ˜ ( t φ ( t ) ) ( 1 φ 1 ) , V ˙ 6 ( t , π ˜ ( t ) ) η 2 α t π ˜ T ( t ) R 10 π ˜ ( t ) η 2 α ( t δ ) π ˜ T ( t δ ( t ) ) R 10 π ˜ ( t δ ( t ) ) ( 1 δ 1 ) .
Since $T_{1}$, $T_{2}$, $T_{3}$ and $T_{4}$ are positive diagonal matrices, we can get from Assumption 2 that
π ˜ T ( t ) H 1 T 1 H 2 π ˜ ( t ) π ˜ T ( t ) T 1 ( H 1 + H 2 ) f ( π ˜ ( t ) ) + f T ( π ˜ ( t ) ) T 1 f ( π ˜ ( t ) ) 0 , π ˜ T ( t φ ( t ) ) H 1 T 2 H 2 π ˜ ( t φ ( t ) ) π ˜ T ( t φ ( t ) ) T 2 ( H 1 + H 2 ) f ( π ˜ ( t φ ( t ) ) ) + f T ( π ˜ ( t φ ( t ) ) ) T 2 f ( π ˜ ( t φ ( t ) ) ) 0 , π ˜ T ( t ) L 1 T 3 L 2 π ˜ ( t ) π ˜ T ( t ) T 3 ( L 1 + L 2 ) h ( π ˜ ( t ) ) + h T ( π ˜ ( t ) ) T 3 h ( π ˜ ( t ) ) 0 , π ˜ T ( t δ ( t ) ) L 1 T 4 L 2 π ˜ ( t δ ( t ) ) π ˜ T ( t δ ( t ) ) T 4 ( L 1 + L 2 ) h ( π ˜ ( t δ ( t ) ) ) + h T ( π ˜ ( t δ ( t ) ) ) T 4 h ( π ˜ ( t δ ( t ) ) ) 0 .
Adding the above, we obtain the following estimate for the derivative of (27):
V ˙ ( t , π ˜ ( t ) ) η 2 α t δ T ( t ) ϱ δ ( t )
δ ( t ) = [ π ˜ T ( t ) , h T ( π ˜ ( t ) ) , f T ( π ˜ ( t ) ) , f T ( π ˜ ( t φ ( t ) ) ) , h T ( π ˜ ( t δ ( t ) ) ) , π ˜ T ( t μ ( t ) ) , π ˜ T ( t φ ( t ) , π ˜ T ( t δ ( t ) ) , s T ( t ) ] T .
From above condition, we have
V ˙ ( t , π ˜ ( t ) ) 0 .
Based on the Lyapunov method, system (25) is asymptotically stable. Furthermore, we prove the exponential stability of the neural networks. We know that $V(t)$ is monotone non-increasing in $t$ for $t\in[t_{0},\infty)$, i.e.,
V ( t , π ˜ ( t ) ) V ( t 0 , π ˜ ( t 0 ) ) .
From condition (29) we have $\dot{V}\le 0$, so $V(t,\tilde{\pi}(t))$ is bounded along any solution. On the other hand,
V ( t 0 , π ˜ ( t 0 ) ) = η 2 α t 0 s T ( t 0 ) R 1 s ( t 0 ) + t 0 φ ( t 0 ) t 0 η 2 α s f T ( π ˜ ( s ) ) R 2 f ( π ˜ ( s ) ) d s + t 0 δ ( t 0 ) t 0 η 2 α s h T ( π ˜ ( s ) ) R 3 h ( π ˜ ( s ) ) d s + t 0 μ ( t 0 ) t η 2 α s π ˜ T ( s ) R 4 π ˜ ( s ) d s + t 0 φ ( t 0 ) t 0 η 2 α s π ˜ T ( s ) R 9 π ˜ ( s ) d s + t 0 δ ( t 0 ) t 0 η 2 α s π ˜ T ( s ) R 10 π ˜ ( s ) d s , η 2 α t 0 { λ M ( R 1 ) + λ M ( R 2 ) β 1 l R 2 + λ M ( R 3 ) β 2 l R 2 + λ M ( R 4 ) β 3 l R 2 + λ M ( R 9 ) β 1 l R 2 + λ M ( R 10 ) β 2 l R 2 } | | ϕ | | 2 ,
where
β 1 = 1 η 2 α φ 2 α , β 2 = 1 η 2 α δ 2 α , β 3 = 1 η 2 α μ 2 α .
$$\|s(t)\|\le\sqrt{\frac{\sigma_{1}}{\lambda_{m}(R_{1})}}\,\|\phi\|\,\eta^{-\alpha t};$$
replacing $s(t)$ by $D^{\alpha-1}\tilde{\pi}(t)$, we get
$$\|D^{\alpha-1}\tilde{\pi}(t)\|\le\sqrt{\frac{\sigma_{1}}{\lambda_{m}(R_{1})}}\,\|\phi\|\,\eta^{-\alpha t};$$
for the $i$-th component of the vector $\tilde{\pi}(t)$, we have
σ 1 λ m ( R 1 ) | | ϕ | | η α t s γ 1 π ˜ i ( s ) σ 1 λ m ( R 1 ) | | ϕ | | η α t .
Taking Laplace transform and Laplace inverse transformation, we have the following inequality:
| π ˜ ( t ) | σ 1 λ m ( R 1 ) | | ϕ | | 1 t 1 γ E 1 , γ ( α t ) .
Thereupon,
| | π ˜ ( t ) | | σ 1 λ m ( R 1 ) | | ϕ | | 1 t 1 γ E 1 , γ ( α t ) .
For γ = 1 , we have
$$\|\tilde{\pi}(t)\|\le\sqrt{\frac{\sigma_{1}}{\lambda_{m}(R_{1})}}\,\|\phi\|\,\eta^{-\alpha t},$$
where,
σ 1 = η 2 α t 0 ( λ M ( R 1 ) + λ M ( R 2 ) β 1 l R 2 + λ M ( R 3 ) β 2 l R 2 + λ M ( R 4 ) β 3 l R 2 + λ M ( R 9 ) β 1 l R 2 + λ M ( R 10 ) β 2 l R 2 )
| | π ˜ ( t ) | | σ 2 λ m ( R 1 ) | | ϕ | | η α ( t t 0 )
and
σ 2 = ( λ M ( R 1 ) + λ M ( R 2 ) β 1 l R 2 + λ M ( R 3 ) β 2 l R 2 + λ M ( R 4 ) β 3 l R 2 + λ M ( R 9 ) β 1 l R 2 + λ M ( R 10 ) β 2 l R 2 ) .
From the inequalities (30) and (31), the sliding mode surface s ( t ) and the system state trajectory π ˜ ( t ) converge exponentially to zero. Thus, system (25) is globally exponentially stable. □

4. Numerical Examples

Example 1.
Consider the fractional order complex-valued neural networks (11), with parameters
L = 0.94 0 0 0.94 , B = 0.9 + 0.9 i 0.9 + 0.8 i 0.8 + 0.4 i 0.2 + 0.7 i , C = 0.7 + 0.4 i 0.5 0.6 i 0.6 0.5 i 0.5 + 0.6 i , E = 0.2 + 0.6 i 0.5 0.7 i 0.8 + 0.8 i 0.6 0.7 i .
Choose the scalars $\delta_{1}=0.88$, $\varphi_{1}=0.28$, $\alpha=0.27$, $\delta=0.36$, $\mu_{1}=0.65$, $\varphi=0.87$. Utilizing MATLAB, the LMI (12) of Theorem 2 has the following feasible solutions:
R 1 = 3.6748 + 0.0000 i 11.4759 4.4816 i 11.4759 + 4.4816 i 51.7224 + 0.0000 i , R 2 = 24.2135 + 0.0000 i 10.4759 4.4816 i 10.4759 + 4.4816 i 42.7224 + 0.0000 i , R 3 = 8.0048 + 0.0000 i 5.4759 1.6143 i 5.4759 + 1.6143 i 17.7224 + 0.0000 i , R 4 = 23.6748 + 0.0000 i 8.4759 2.9916 i 8.4759 + 2.9916 i 37 : 7224 + 0.0000 i , R 5 = 16.3740 + 0.0000 i 5.2878 1.8567 i 5.2878 + 1.8567 i 37.8758 + 0.0000 i , R 6 = 10.5478 + 0.0000 i 2.7546 0.7654 i 2.7546 + 0.7654 i 15.6774 + 0.0000 i , R 7 = 13.0757 + 0.0000 i 6.9876 4.7658 i 6.9876 + 4.7658 i 20.7764 + 0.0000 i , R 8 = 3.6748 + 0.0000 i 1.4759 0.6356 i 1.4759 + 0.6356 i 7.4554 + 0.0000 i , R 9 = 19.6748 + 0.0000 i 2.7756 0.6816 i 2.7756 + 0.6816 i 25.9864 + 0.0000 i , R 10 = 17.3498 + 0.0000 i 3.4759 0.4816 i 3.4759 + 0.4816 i 22.7624 + 0.0000 i , T 1 = 43.6748 0 0 54.6543 , T 2 = 34.1745 0 0 38.7224 , T 3 = 78.5777 0 0 87.6767 , T 4 = 36.6748 0 0 56.7667 .
The control gain matrix is obtained as follows:
$$K=\begin{pmatrix}65.3498 & 7.6585-0.566i\\ 7.6585+0.566i & 68.7624\end{pmatrix}.$$
The above results show that the fractional-order complex-valued neural network (11) is globally exponentially stable.
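Full verification of LMI (12) requires the Matlab LMI toolbox, but a partial sanity check of a reported solution matrix is possible in plain Python: each $R_k$ must be Hermitian positive definite, which for a 2×2 Hermitian matrix reduces, by Sylvester's criterion, to a positive leading diagonal entry and a positive determinant. The sketch below checks $R_1$ from this example:

```python
# Partial sanity check of the reported R1 from Example 1: a 2x2 Hermitian
# matrix [[a, b], [conj(b), d]] is positive definite iff a > 0 and
# det = a*d - |b|^2 > 0 (positive leading principal minors).
a = 3.6748
b = 11.4759 - 4.4816j
d = 51.7224

det = a * d - abs(b) ** 2   # real-valued by Hermitian symmetry
assert a > 0 and det > 0, "R1 is not positive definite"
print(f"det(R1) = {det:.4f} > 0, so R1 is Hermitian positive definite")
```

The same two-line test applies to every 2×2 block returned by the solver; it is only a necessary check, not a substitute for feasibility of the full LMI.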
Example 2.
Consider the fractional-order complex-valued neural network (25) with the parameters
$$L=\begin{pmatrix}0.67 & 0\\ 0 & 0.67\end{pmatrix},\qquad B=\begin{pmatrix}0.6+0.9i & 0.1+0.8i\\ 0.8+0.2i & 0.6+0.7i\end{pmatrix},\qquad C=\begin{pmatrix}0.1+0.6i & 0.3-0.2i\\ 0.6+0.8i & 0.7-0.7i\end{pmatrix}.$$
Choose the scalars δ 1 = 0.84 , φ 1 = 0.18 , α = 0.27 , δ = 0.35 , μ 1 = 0.15 , φ = 0.67 . Using Matlab, the LMI (26) of Theorem 4 has the following feasible solutions:
$$R_1=\begin{pmatrix}1.6678 & 13.8769-1.4816i\\ 13.8769+1.4816i & 11.7224\end{pmatrix},\qquad R_2=\begin{pmatrix}14.6595 & 11.4439-28.4876i\\ 11.4439+28.4876i & 28.7224\end{pmatrix},$$
$$R_3=\begin{pmatrix}8.1148 & 1.6789-1.6183i\\ 1.6789+1.6183i & 17.7224\end{pmatrix},\qquad R_4=\begin{pmatrix}23.6748 & 6.1259-8.9565i\\ 6.1259+8.9565i & 37.1224\end{pmatrix},$$
$$R_9=\begin{pmatrix}16.6748 & 2.2356-0.5516i\\ 2.2356+0.5516i & 43.9787\end{pmatrix},\qquad R_{10}=\begin{pmatrix}18.3788 & 2.4759-0.8965i\\ 2.4759+0.8965i & 22.7624\end{pmatrix},$$
$$T_1=\begin{pmatrix}27.1148 & 0\\ 0 & 44.1233\end{pmatrix},\quad T_2=\begin{pmatrix}67.4445 & 0\\ 0 & 76.8724\end{pmatrix},\quad T_3=\begin{pmatrix}54.5877 & 0\\ 0 & 76.6546\end{pmatrix},\quad T_4=\begin{pmatrix}86.5332 & 0\\ 0 & 97.5678\end{pmatrix}.$$
The control gain matrix is obtained as follows:
$$K=\begin{pmatrix}6.3498 & 1.5685-0.566i\\ 1.5685+0.566i & 6.7624\end{pmatrix}.$$
The above results show that the fractional-order complex-valued neural network (25) is globally exponentially stable.
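For readers without the Matlab LMI toolbox, the qualitative decay behaviour claimed in these examples can be explored numerically. The sketch below (a minimal illustration, not the authors' method) simulates a scalar fractional-order system $D^{\alpha}x(t) = -a\,x(t)$ with the explicit Grünwald–Letnikov scheme; its trajectory decays toward zero, mirroring the exponential stability established for system (25). All parameter values here are illustrative.

```python
# Grunwald-Letnikov (GL) explicit scheme for D^alpha x = -a*x, a toy
# scalar stand-in illustrating the decay behaviour of the full system.
alpha, a = 0.9, 1.0      # fractional order and decay coefficient (illustrative)
h, steps = 0.01, 500     # step size and number of steps
x = [1.0]                # initial condition x(0) = 1

# GL binomial weights: w_0 = 1, w_j = w_{j-1} * (1 - (alpha + 1)/j)
w = [1.0]
for j in range(1, steps + 1):
    w.append(w[-1] * (1.0 - (alpha + 1.0) / j))

for k in range(1, steps + 1):
    # x_k = -sum_{j=1}^{k} w_j x_{k-j} + h^alpha * f(x_{k-1}),  f(x) = -a*x
    memory = sum(w[j] * x[k - j] for j in range(1, k + 1))
    x.append(-memory - h ** alpha * a * x[-1])

assert 0.0 < x[-1] < x[0]   # the trajectory decays toward zero
```

The memory sum is what distinguishes the fractional dynamics from an ordinary ODE step: every past state contributes through the slowly decaying weights $w_j$.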

5. Conclusions

In this paper, we have investigated the global exponential stability of fractional-order complex-valued neural networks with leakage delay and mixed time-varying delays. By constructing a suitable Lyapunov functional and using a fractional-order differential inequality together with other inequality techniques, we derived sufficient conditions that ensure global exponential stability of the discussed fractional-order systems. Based on a Lyapunov functional containing novel single- and double-integral terms, the stability conditions are expressed as LMIs, which can be solved efficiently with the Matlab LMI control toolbox. Finally, illustrative examples have been provided to show the effectiveness of the obtained results.

Author Contributions

Conceptualization, M.H., G.M. and M.S.A.; methodology, M.H.; software, J.F.A.-A.; validation, M.H., G.M., M.S.A., J.F.A.-A., N.G. and R.V.; formal analysis, G.M. and M.S.A.; investigation, J.F.A.-A., N.G. and R.V. All authors have read and agreed to the published version of the manuscript.

Funding

This project was funded by the Taif University Researchers Supporting Projects at Taif University, Kingdom of Saudi Arabia, under grant number TURSP-2020/211.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

No data were used to support the study.

Conflicts of Interest

The authors declare no conflict of interest.


Hymavathi, M.; Muhiuddin, G.; Syed Ali, M.; Al-Amri, J.F.; Gunasekaran, N.; Vadivel, R. Global Exponential Stability of Fractional Order Complex-Valued Neural Networks with Leakage Delay and Mixed Time Varying Delays. Fractal Fract. 2022, 6, 140. https://doi.org/10.3390/fractalfract6030140
