Article

Uniform Stability of a Class of Fractional-Order Fuzzy Complex-Valued Neural Networks in Infinite Dimensions

College of Mathematics and Systems Science, Shandong University of Science and Technology, Qingdao 266590, China
* Author to whom correspondence should be addressed.
Fractal Fract. 2022, 6(5), 281; https://doi.org/10.3390/fractalfract6050281
Submission received: 5 May 2022 / Revised: 21 May 2022 / Accepted: 22 May 2022 / Published: 23 May 2022

Abstract

In this paper, the problem of the uniform stability of a class of fractional-order fuzzy impulsive complex-valued neural networks with mixed delays in infinite dimensions is discussed for the first time. By utilizing fixed-point theory and the theory of differential inclusions and set-valued mappings, the uniqueness of the solution of the above complex-valued neural networks is derived. Subsequently, criteria for the uniform stability of the above complex-valued neural networks are established. In comparison with related results, we do not need to construct a complex Lyapunov function, which reduces the computational complexity. Finally, an example is given to show the validity of the main results.

1. Introduction

Fractional calculus is an effective tool for dealing with derivatives and integrals of arbitrary (non-integer) order. Since fractional operators are non-local, they are an excellent tool for describing materials and processes with memory and hereditary properties. In addition, because fractional-order models have more degrees of freedom and unlimited memory, some scholars have introduced fractional operators into neural networks. Fractional-order neural networks provide single neurons with general computing power for signal processing, frequency-independent phase shifting of oscillatory neuronal firing, and simulation of the human brain.
Currently, neural networks are widely used for their applications in optoelectronics, imaging, remote sensing, quantum neural devices and systems, spatiotemporal analysis of physiological nervous systems, artificial neural information processing, and so on [1]. These applications strongly depend on the dynamic behavior of the network. Therefore, the stability of neural networks has been analyzed in a number of existing works [2,3]. In recent years, research on the dynamic performance of fractional-order neural networks has achieved remarkable results [4,5]. For instance, Lundstrom et al. have shown that neural network approximation at fractional scale leads to higher approximation rates [6]. It should also be noted that fractional-order neural networks may play an important role in parameter estimation. Therefore, it is extremely important to study the stability of fractional-order neural networks.
Note that all the references mentioned above deal with real-valued systems. In recent years, due to the successful applications of complex-valued neural networks (CVNNs) in optoelectronics, remote sensing and artificial neural information processing, research on the dynamic behavior of CVNNs has attracted more and more attention [7,8,9,10]. Some problems can be described and solved better by complex-valued systems than by real-valued ones. For example, the XOR problem and the detection-of-symmetry problem cannot be solved by a single real-valued neuron, but they can be solved by a single complex-valued neuron with an orthogonal decision boundary [11]. The main difficulty faced by CVNNs is the choice of activation functions: if an activation function is both bounded and analytic on the whole complex plane, then, according to Liouville's theorem [12,13], it must be constant. In general, CVNNs have more complex properties than real-valued neural networks. Therefore, it is desirable to study the dynamics of CVNNs intensively.
In many dynamical systems, processes and phenomena mutate at certain stages of development. For instance, sudden noise can cause an instantaneous disturbance of the state in an electronic network. These sudden changes are much shorter than the whole process; such an instantaneous change is called an impulsive phenomenon [14,15,16,17]. On the other hand, the fractional derivative has attracted considerable research interest and has been incorporated into CVNN models by numerous academics, stirring a surge of interest in fractional-order CVNNs. Therefore, it is necessary to analyze the influence of impulses on fractional-order complex-valued neural networks (FOCVNNs). In addition to impulses, several other factors, such as complexity, uncertainty or vagueness, can be considered when modeling neural network problems, and these can be studied through the application of fuzzy set theory [18]. In this context, owing to applications in image processing, pattern recognition, etc. [19], it is worthwhile to study the dynamics of impulsive fuzzy FOCVNNs, which are challenging from both theoretical and application perspectives.
However, it is important to note that the presence of time delays, whether in the control input or in the state, can cause a system to go from stable to unstable. There have been many interesting results on the stability of delayed systems [20,21,22,23,24,25], most of which were obtained by use of the Lyapunov method. However, this method is only applicable when an appropriate Lyapunov function can be constructed; otherwise, the stability of the system cannot be proved. Therefore, analyzing the stability of impulsive fuzzy FOCVNNs with delays without applying the Lyapunov method is both necessary and interesting. For example, in [26], the finite-time stability of delayed memristor-based fractional-order neural networks was discussed using an iterative method. In [27], the dynamic stability of genetic regulatory networks was studied using fixed-point theory.
Previous studies show that there is very little existing work on existence and uniform stability for fuzzy infinite-dimensional FOCVNNs with impulses and mixed delays, which is valuable for enabling FOCVNNs for remote sensing, pattern recognition and artificial intelligence. This constitutes the motivation for the present research. Inspired by the aforementioned discussions, we discuss the uniform stability of a class of fuzzy fractional-order infinite-dimensional complex-valued neural networks with mixed delays. The main contributions of this paper lie in the following aspects:
(1) Based on fixed-point theory and the theory of differential inclusions and set-valued mappings, several novel criteria for the existence and uniqueness of the solution and the uniform stability of the addressed infinite-dimensional FOCVNNs are derived for the first time.
(2) A numerical example is provided to demonstrate the effectiveness and feasibility of the proposed results. Compared with [28], our model is more general.
(3) Our approach avoids constructing a Lyapunov function directly and reduces the computational complexity.

2. Preliminaries

Notations: in this paper, $\mathbb{R}$ and $\mathbb{C}$ denote the sets of real and complex numbers, respectively, and $i$ denotes the imaginary unit, i.e., $i^2=-1$. Let $l_c$ be the space of all complex sequences and $l^1$ the space of all absolutely summable sequences. For $z\in l_c$, $|z|$ denotes the modulus of $z$, and $z=(z_1,z_2,\ldots)^T\in l_c$ with $z_j=x_j+iy_j\in\mathbb{C}$, $x_j,y_j\in\mathbb{R}$, $x=(x_1,x_2,\ldots)^T\in l^1$ and $y=(y_1,y_2,\ldots)^T\in l^1$, where $T$ denotes the transpose, and $\|z\|=\sum_{j=1}^{\infty}(|x_j|+|y_j|)$. $C([-\tau,+\infty),l^1)$ stands for the space of all continuous functions from $[-\tau,+\infty)$ to $l^1$.
Now we consider the space
$$\mathrm{PC}=\Big\{z(t)=x(t)+iy(t):[-\tau,+\infty)\to l_c,\ x(t),y(t):[-\tau,+\infty)\to l^1,\ \sum_{j=1}^{\infty}\big(|x_j(s)|+|y_j(s)|\big)\ \text{is bounded with respect to}\ s\in[-\tau,+\infty);\ \text{there exist}\ 0=t_0<t_1<\cdots\ \text{such that}\ z\in C\big((t_{k-1},t_k],l_c\big),\ z(t_k^+)=\lim_{t\to t_k^+}z(t)\ \text{and}\ z(t_k^-)=z(t_k)=\lim_{t\to t_k^-}z(t);\ \text{and}\ z(t)=\zeta(t)+i\eta(t)\ \text{for}\ t\le 0,\ \text{with}\ \zeta(t),\eta(t)\in C([-\tau,0],l^1)\Big\}.$$
If the space PC is endowed with the norm
$$\|z\|=\|z\|_1+\|\varphi\|,\qquad z\in\mathrm{PC},$$
where $\|z\|_1=\sup_{s\in[0,\infty)}\sum_{j=1}^{\infty}\big(|x_j(s)|+|y_j(s)|\big)$ and $\|\varphi\|=\sup_{s\in[-\tau,0]}\sum_{j=1}^{\infty}\big(|\zeta_j(s)|+|\eta_j(s)|\big)$, then $(\mathrm{PC},\|\cdot\|)$ is a Banach space.
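To make the norm above concrete, the following sketch (an illustration, not part of the paper) evaluates a truncated version of $\|z\|=\|z\|_1+\|\varphi\|$ for finitely many neurons and sampled time points; the names `pc_norm`, `z_pos` and `phi_init` are hypothetical.

```python
import numpy as np

def pc_norm(z_pos, phi_init):
    """Truncated PC-norm ||z|| = ||z||_1 + ||phi||.

    z_pos:    complex array of shape (T, N), samples of z(t) on a grid of t >= 0
    phi_init: complex array of shape (S, N), samples of the initial function on [-tau, 0]
    Each row is one time sample; the l^1-type sum runs over the N neurons.
    """
    # sum_j (|x_j(s)| + |y_j(s)|) at each sampled time s, then take the sup over s
    z1 = np.max(np.sum(np.abs(z_pos.real) + np.abs(z_pos.imag), axis=1))
    ph = np.max(np.sum(np.abs(phi_init.real) + np.abs(phi_init.imag), axis=1))
    return z1 + ph
```

With finer time grids and more neurons retained, this quantity approaches the supremum norm used throughout the paper.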
We consider infinite-dimensional fractional-order impulsive fuzzy CVNNs with mixed delays, as follows:
$$\begin{cases} {}_{t_{k-1}}^{C}D_t^{\lambda}z_i(t)=-c_iz_i(t)+\sum_{j=1}^{\infty}\tilde a_{ij}(z_i(t))f_j(z_j(t))+\sum_{j=1}^{\infty}\tilde b_{ij}(z_i(t))h_j(z_j(t-\tau_j))+\bigwedge_{j=1}^{\infty}\alpha_{ij}h_j(z_j(t))+\bigvee_{j=1}^{\infty}\beta_{ij}h_j(z_j(t))+\sum_{j=1}^{\infty}d_{ij}\int_{t-l(t)}^{t}h_j(z_j(s))\,ds+I_i(t), & t\in(t_{k-1},t_k],\\[2pt] \Delta z_i(t_k)=z_i(t_k^+)-z_i(t_k^-)=G_{ik}(z_i(t_k)), & i,k\in J=\{1,2,\ldots\}, \end{cases} \tag{1}$$
where ${}^{C}D^{\lambda}$ is the Caputo fractional derivative of order $0<\lambda<1$; $z_j(t)=x_j(t)+iy_j(t)\in\mathbb{C}$ and $z_j(t-\tau_j)=x_j(t-\tau_j)+iy_j(t-\tau_j)\in\mathbb{C}$ denote the state of the $j$th neuron at times $t$ and $t-\tau_j$, respectively; $\tau_j>0$ is the time delay; $c_i=c_i^R+ic_i^I$ represents the self-feedback connection weight of the $i$th neuron, where $c_i^R,c_i^I\in\mathbb{R}$; $\tilde a_{ij}(z_i(t)):\mathbb{C}\to\mathbb{C}$ and $\tilde b_{ij}(z_i(t)):\mathbb{C}\to\mathbb{C}$ are memristor-based connection weights; $\bigwedge$ and $\bigvee$ denote the fuzzy AND and fuzzy OR operations, respectively; $\alpha_{ij}$ and $\beta_{ij}$ are the elements of the fuzzy feedback MIN template and the fuzzy feedback MAX template, respectively; $d_{ij}=d_{ij}^R+id_{ij}^I$, where $d_{ij}^R,d_{ij}^I\in\mathbb{R}$; $l(t)\in\mathbb{R}$ is the distributed time delay, which satisfies $0\le l(t)\le l$; $f_j(z_j(t)):\mathbb{C}\to\mathbb{C}$ and $h_j(z_j(t)):\mathbb{C}\to\mathbb{C}$ are the state-activation functions; $I_i(t)=I_i^R(t)+iI_i^I(t)$ is the external input bias, where $I_i^R,I_i^I\in\mathbb{R}$; $G_{ik}(z_i(t_k)):\mathbb{C}\to\mathbb{C}$, $z_i(t_k^+)=\lim_{t\to t_k^+}z_i(t)$ and $z_i(t_k^-)=z_i(t_k)=\lim_{t\to t_k^-}z_i(t)$; $\Delta z_i(t_k)$ stands for the impulsive jump at $t_k$.
Additionally, the state z j satisfies the initial condition as follows:
$$z_j(t)=\varphi_j(t)=\zeta_j(t)+i\eta_j(t),\qquad t\in[-\tau,0],$$
where φ j ( t ) C ( [ τ , 0 ] , C ) .
Assumption 1.
Let $\vartheta=\mu+i\nu$; the state-activation functions of the $i$th neuron can be represented as follows:
$$f_i(\vartheta)=f_i^R(\mu,\nu)+if_i^I(\mu,\nu),\qquad h_i(\vartheta)=h_i^R(\mu,\nu)+ih_i^I(\mu,\nu),$$
where $f_i^R(\mu,\nu),f_i^I(\mu,\nu),h_i^R(\mu,\nu),h_i^I(\mu,\nu)\in\mathbb{R}$, and for any $\mu,\nu,\tilde\mu,\tilde\nu$, there are positive constants $F_i^{RR},F_i^{RI},F_i^{IR},F_i^{II},H_i^{RR},H_i^{RI},H_i^{IR},H_i^{II}$ such that
$$\begin{aligned} |f_i^R(\mu,\nu)-f_i^R(\tilde\mu,\tilde\nu)|&\le F_i^{RR}|\mu-\tilde\mu|+F_i^{RI}|\nu-\tilde\nu|,\\ |f_i^I(\mu,\nu)-f_i^I(\tilde\mu,\tilde\nu)|&\le F_i^{IR}|\mu-\tilde\mu|+F_i^{II}|\nu-\tilde\nu|,\\ |h_i^R(\mu,\nu)-h_i^R(\tilde\mu,\tilde\nu)|&\le H_i^{RR}|\mu-\tilde\mu|+H_i^{RI}|\nu-\tilde\nu|,\\ |h_i^I(\mu,\nu)-h_i^I(\tilde\mu,\tilde\nu)|&\le H_i^{IR}|\mu-\tilde\mu|+H_i^{II}|\nu-\tilde\nu|. \end{aligned}$$
Assumption 2.
Let $\vartheta=\mu+i\nu$, and let $G_{ik}(\vartheta)=G_{ik}^R(\mu,\nu)+iG_{ik}^I(\mu,\nu)$, where $G_{ik}^R(\mu,\nu),G_{ik}^I(\mu,\nu)\in\mathbb{R}$. For any $\mu,\nu,\tilde\mu,\tilde\nu$, there are positive constants $G_{ik}^{RR},G_{ik}^{RI},G_{ik}^{IR},G_{ik}^{II}$ such that the following inequalities hold:
$$\begin{aligned} |G_{ik}^R(\mu,\nu)-G_{ik}^R(\tilde\mu,\tilde\nu)|&\le G_{ik}^{RR}|\mu-\tilde\mu|+G_{ik}^{RI}|\nu-\tilde\nu|,\\ |G_{ik}^I(\mu,\nu)-G_{ik}^I(\tilde\mu,\tilde\nu)|&\le G_{ik}^{IR}|\mu-\tilde\mu|+G_{ik}^{II}|\nu-\tilde\nu|. \end{aligned}$$
Under Assumptions 1 and 2, system (1) can be expressed as follows:
$$\begin{cases} {}^{c}D^{\lambda}x_i(t)=-c_i^Rx_i(t)+c_i^Iy_i(t)+\sum_{j=1}^{\infty}\tilde a_{ij}^Rf_j^R(x_j(t),y_j(t))-\sum_{j=1}^{\infty}\tilde a_{ij}^If_j^I(x_j(t),y_j(t))+\sum_{j=1}^{\infty}\tilde b_{ij}^Rh_j^R(x_j(t-\tau_j),y_j(t-\tau_j))-\sum_{j=1}^{\infty}\tilde b_{ij}^Ih_j^I(x_j(t-\tau_j),y_j(t-\tau_j))+\bigwedge_{j=1}^{\infty}\alpha_{ij}h_j^R(x_j(t),y_j(t))+\bigvee_{j=1}^{\infty}\beta_{ij}h_j^R(x_j(t),y_j(t))+\sum_{j=1}^{\infty}d_{ij}^R\int_{t-l(t)}^{t}h_j^R(x_j(s),y_j(s))\,ds-\sum_{j=1}^{\infty}d_{ij}^I\int_{t-l(t)}^{t}h_j^I(x_j(s),y_j(s))\,ds+I_i^R(t), & t\in(t_{k-1},t_k],\\[2pt] \Delta x_i(t_k)=x_i(t_k^+)-x_i(t_k^-)=G_{ik}^R(x_i(t_k),y_i(t_k)), \end{cases} \tag{2}$$
$$\begin{cases} {}^{c}D^{\lambda}y_i(t)=-c_i^Ry_i(t)-c_i^Ix_i(t)+\sum_{j=1}^{\infty}\tilde a_{ij}^If_j^R(x_j(t),y_j(t))+\sum_{j=1}^{\infty}\tilde a_{ij}^Rf_j^I(x_j(t),y_j(t))+\sum_{j=1}^{\infty}\tilde b_{ij}^Ih_j^R(x_j(t-\tau_j),y_j(t-\tau_j))+\sum_{j=1}^{\infty}\tilde b_{ij}^Rh_j^I(x_j(t-\tau_j),y_j(t-\tau_j))+\bigwedge_{j=1}^{\infty}\alpha_{ij}h_j^I(x_j(t),y_j(t))+\bigvee_{j=1}^{\infty}\beta_{ij}h_j^I(x_j(t),y_j(t))+\sum_{j=1}^{\infty}d_{ij}^R\int_{t-l(t)}^{t}h_j^I(x_j(s),y_j(s))\,ds+\sum_{j=1}^{\infty}d_{ij}^I\int_{t-l(t)}^{t}h_j^R(x_j(s),y_j(s))\,ds+I_i^I(t), & t\in(t_{k-1},t_k],\\[2pt] \Delta y_i(t_k)=y_i(t_k^+)-y_i(t_k^-)=G_{ik}^I(x_i(t_k),y_i(t_k)). \end{cases} \tag{3}$$
According to the memristor feature, denote
$$\tilde a_{ij}^R(x_i(t))=\begin{cases}\hat{\tilde a}_{ij}^R,&|x_i(t)|\le T_j,\\ \check{\tilde a}_{ij}^R,&|x_i(t)|>T_j,\end{cases}\qquad \tilde b_{ij}^R(x_i(t))=\begin{cases}\hat{\tilde b}_{ij}^R,&|x_i(t)|\le T_j,\\ \check{\tilde b}_{ij}^R,&|x_i(t)|>T_j,\end{cases}$$
$$\tilde a_{ij}^I(y_i(t))=\begin{cases}\hat{\tilde a}_{ij}^I,&|y_i(t)|\le T_j,\\ \check{\tilde a}_{ij}^I,&|y_i(t)|>T_j,\end{cases}\qquad \tilde b_{ij}^I(y_i(t))=\begin{cases}\hat{\tilde b}_{ij}^I,&|y_i(t)|\le T_j,\\ \check{\tilde b}_{ij}^I,&|y_i(t)|>T_j,\end{cases}$$
where $T_j>0$ is the switching jump, and $\hat{\tilde a}_{ij}^R,\check{\tilde a}_{ij}^R,\hat{\tilde a}_{ij}^I,\check{\tilde a}_{ij}^I,\hat{\tilde b}_{ij}^R,\check{\tilde b}_{ij}^R,\hat{\tilde b}_{ij}^I,\check{\tilde b}_{ij}^I$ are constants determined by the memristances.
Let $co[m,n]$ denote the convex closure generated by $m$ and $n$. Then
$$co[\tilde a_{ij}^R(x_i(t))]=\begin{cases}\hat{\tilde a}_{ij}^R,&|x_i(t)|<T_j,\\ co\{\hat{\tilde a}_{ij}^R,\check{\tilde a}_{ij}^R\},&|x_i(t)|=T_j,\\ \check{\tilde a}_{ij}^R,&|x_i(t)|>T_j,\end{cases}\qquad co[\tilde b_{ij}^R(x_i(t))]=\begin{cases}\hat{\tilde b}_{ij}^R,&|x_i(t)|<T_j,\\ co\{\hat{\tilde b}_{ij}^R,\check{\tilde b}_{ij}^R\},&|x_i(t)|=T_j,\\ \check{\tilde b}_{ij}^R,&|x_i(t)|>T_j,\end{cases}$$
$$co[\tilde a_{ij}^I(y_i(t))]=\begin{cases}\hat{\tilde a}_{ij}^I,&|y_i(t)|<T_j,\\ co\{\hat{\tilde a}_{ij}^I,\check{\tilde a}_{ij}^I\},&|y_i(t)|=T_j,\\ \check{\tilde a}_{ij}^I,&|y_i(t)|>T_j,\end{cases}\qquad co[\tilde b_{ij}^I(y_i(t))]=\begin{cases}\hat{\tilde b}_{ij}^I,&|y_i(t)|<T_j,\\ co\{\hat{\tilde b}_{ij}^I,\check{\tilde b}_{ij}^I\},&|y_i(t)|=T_j,\\ \check{\tilde b}_{ij}^I,&|y_i(t)|>T_j.\end{cases}$$
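The switching rule and its convex closure can be sketched as follows (an illustrative sketch with hypothetical helper names, using scalar weights for simplicity):

```python
def memristive_weight(x, a_hat, a_check, T):
    """State-dependent connection weight: switches between two memristance
    values at the switching jump |x| = T."""
    return a_hat if abs(x) <= T else a_check

def weight_hull(x, a_hat, a_check, T):
    """Convex closure co[a(x)] returned as an interval (lo, hi).
    Away from the switching surface it is a single point; on it, the
    whole segment between the two memristance values."""
    if abs(x) < T:
        return (a_hat, a_hat)
    if abs(x) > T:
        return (a_check, a_check)
    return (min(a_hat, a_check), max(a_hat, a_check))
```

A Filippov solution then replaces the discontinuous weight by any measurable selection from this interval, which is exactly what the differential inclusions below formalize.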
Since the right-hand sides of systems (2) and (3) are discontinuous, their solutions cannot be understood in the classical sense. Thus, the solutions of these systems should be considered in the sense of Filippov [29]. In light of the theories of differential inclusions and set-valued mappings, we deduce
$$\begin{cases} {}^{c}D^{\lambda}x_i(t)\in-c_i^Rx_i(t)+c_i^Iy_i(t)+\sum_{j=1}^{\infty}co[\tilde a_{ij}^R(x_i(t))]f_j^R(x_j(t),y_j(t))-\sum_{j=1}^{\infty}co[\tilde a_{ij}^I(y_i(t))]f_j^I(x_j(t),y_j(t))+\sum_{j=1}^{\infty}co[\tilde b_{ij}^R(x_i(t))]h_j^R(x_j(t-\tau_j),y_j(t-\tau_j))-\sum_{j=1}^{\infty}co[\tilde b_{ij}^I(y_i(t))]h_j^I(x_j(t-\tau_j),y_j(t-\tau_j))+\bigwedge_{j=1}^{\infty}\alpha_{ij}h_j^R(x_j(t),y_j(t))+\bigvee_{j=1}^{\infty}\beta_{ij}h_j^R(x_j(t),y_j(t))+\sum_{j=1}^{\infty}d_{ij}^R\int_{t-l(t)}^{t}h_j^R(x_j(s),y_j(s))\,ds-\sum_{j=1}^{\infty}d_{ij}^I\int_{t-l(t)}^{t}h_j^I(x_j(s),y_j(s))\,ds+I_i^R(t), & t\in(t_{k-1},t_k],\\[2pt] \Delta x_i(t_k)=x_i(t_k^+)-x_i(t_k^-)=G_{ik}^R(x_i(t_k),y_i(t_k)), \end{cases} \tag{4}$$
$$\begin{cases} {}^{c}D^{\lambda}y_i(t)\in-c_i^Ry_i(t)-c_i^Ix_i(t)+\sum_{j=1}^{\infty}co[\tilde a_{ij}^I(y_i(t))]f_j^R(x_j(t),y_j(t))+\sum_{j=1}^{\infty}co[\tilde a_{ij}^R(x_i(t))]f_j^I(x_j(t),y_j(t))+\sum_{j=1}^{\infty}co[\tilde b_{ij}^I(y_i(t))]h_j^R(x_j(t-\tau_j),y_j(t-\tau_j))+\sum_{j=1}^{\infty}co[\tilde b_{ij}^R(x_i(t))]h_j^I(x_j(t-\tau_j),y_j(t-\tau_j))+\bigwedge_{j=1}^{\infty}\alpha_{ij}h_j^I(x_j(t),y_j(t))+\bigvee_{j=1}^{\infty}\beta_{ij}h_j^I(x_j(t),y_j(t))+\sum_{j=1}^{\infty}d_{ij}^R\int_{t-l(t)}^{t}h_j^I(x_j(s),y_j(s))\,ds+\sum_{j=1}^{\infty}d_{ij}^I\int_{t-l(t)}^{t}h_j^R(x_j(s),y_j(s))\,ds+I_i^I(t), & t\in(t_{k-1},t_k],\\[2pt] \Delta y_i(t_k)=y_i(t_k^+)-y_i(t_k^-)=G_{ik}^I(x_i(t_k),y_i(t_k)), \end{cases} \tag{5}$$
or, equivalently, there exist $a_{ij}^R\in co[\tilde a_{ij}^R(x_i(t))]$, $a_{ij}^I\in co[\tilde a_{ij}^I(y_i(t))]$, $b_{ij}^R\in co[\tilde b_{ij}^R(x_i(t))]$ and $b_{ij}^I\in co[\tilde b_{ij}^I(y_i(t))]$ such that
$$\begin{cases} {}^{c}D^{\lambda}x_i(t)=-c_i^Rx_i(t)+c_i^Iy_i(t)+\sum_{j=1}^{\infty}a_{ij}^Rf_j^R(x_j(t),y_j(t))-\sum_{j=1}^{\infty}a_{ij}^If_j^I(x_j(t),y_j(t))+\sum_{j=1}^{\infty}b_{ij}^Rh_j^R(x_j(t-\tau_j),y_j(t-\tau_j))-\sum_{j=1}^{\infty}b_{ij}^Ih_j^I(x_j(t-\tau_j),y_j(t-\tau_j))+\bigwedge_{j=1}^{\infty}\alpha_{ij}h_j^R(x_j(t),y_j(t))+\bigvee_{j=1}^{\infty}\beta_{ij}h_j^R(x_j(t),y_j(t))+\sum_{j=1}^{\infty}d_{ij}^R\int_{t-l(t)}^{t}h_j^R(x_j(s),y_j(s))\,ds-\sum_{j=1}^{\infty}d_{ij}^I\int_{t-l(t)}^{t}h_j^I(x_j(s),y_j(s))\,ds+I_i^R(t), & t\in(t_{k-1},t_k],\\[2pt] \Delta x_i(t_k)=x_i(t_k^+)-x_i(t_k^-)=G_{ik}^R(x_i(t_k),y_i(t_k)), \end{cases}$$
$$\begin{cases} {}^{c}D^{\lambda}y_i(t)=-c_i^Ry_i(t)-c_i^Ix_i(t)+\sum_{j=1}^{\infty}a_{ij}^If_j^R(x_j(t),y_j(t))+\sum_{j=1}^{\infty}a_{ij}^Rf_j^I(x_j(t),y_j(t))+\sum_{j=1}^{\infty}b_{ij}^Ih_j^R(x_j(t-\tau_j),y_j(t-\tau_j))+\sum_{j=1}^{\infty}b_{ij}^Rh_j^I(x_j(t-\tau_j),y_j(t-\tau_j))+\bigwedge_{j=1}^{\infty}\alpha_{ij}h_j^I(x_j(t),y_j(t))+\bigvee_{j=1}^{\infty}\beta_{ij}h_j^I(x_j(t),y_j(t))+\sum_{j=1}^{\infty}d_{ij}^R\int_{t-l(t)}^{t}h_j^I(x_j(s),y_j(s))\,ds+\sum_{j=1}^{\infty}d_{ij}^I\int_{t-l(t)}^{t}h_j^R(x_j(s),y_j(s))\,ds+I_i^I(t), & t\in(t_{k-1},t_k],\\[2pt] \Delta y_i(t_k)=y_i(t_k^+)-y_i(t_k^-)=G_{ik}^I(x_i(t_k),y_i(t_k)), \end{cases}$$
where $a_{ij}=a_{ij}^R+ia_{ij}^I$ and $b_{ij}=b_{ij}^R+ib_{ij}^I$, with $a_{ij}^R,a_{ij}^I,b_{ij}^R,b_{ij}^I\in\mathbb{R}$.
To study the uniform stability of fractional-order fuzzy impulsive complex-valued neural networks, critical definitions and lemmas are presented as follows.
Definition 1
([30]). The fractional integral (Riemann–Liouville integral) $I_{t_0,t}^{\gamma}$ of fractional order $\gamma\in\mathbb{R}^+$ of a function $\phi(t)$ is defined as
$$I_{t_0,t}^{\gamma}\phi(t)=\frac{1}{\Gamma(\gamma)}\int_{t_0}^{t}(t-\vartheta)^{\gamma-1}\phi(\vartheta)\,d\vartheta,$$
where Γ ( · ) is the gamma function.
Definition 2
([30]). The Caputo derivative of fractional order $\gamma$ of a function $\phi(t)$ is defined as
$${}^{c}D_{t_0,t}^{\gamma}\phi(t)=\frac{1}{\Gamma(m-\gamma)}\int_{t_0}^{t}\frac{\phi^{(m)}(\vartheta)}{(t-\vartheta)^{\gamma-m+1}}\,d\vartheta,$$
where $m-1<\gamma<m\in\mathbb{Z}^+$.
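For $0<\gamma<1$ the definition above reduces to a single weighted integral of $\phi'$. The following sketch (an illustration with hypothetical names, not from the paper) approximates it by a midpoint rule and checks it against the known closed form for $\phi(t)=t$, whose Caputo derivative is $t^{1-\gamma}/\Gamma(2-\gamma)$.

```python
import math

def caputo_derivative(phi, t, lam, n=4000):
    """Left Caputo derivative of order 0 < lam < 1 at time t (with t0 = 0),
    approximated by the midpoint rule applied to the defining integral
    (1/Gamma(1-lam)) * int_0^t phi'(s) * (t-s)**(-lam) ds."""
    h = t / n
    total = 0.0
    for k in range(n):
        s = (k + 0.5) * h                               # midpoint of the sub-interval
        dphi = (phi(s + 1e-6) - phi(s - 1e-6)) / 2e-6   # numerical phi'(s)
        total += dphi * (t - s) ** (-lam) * h
    return total / math.gamma(1.0 - lam)

# For phi(t) = t the exact Caputo derivative is t**(1-lam) / Gamma(2-lam).
lam, t = 0.5, 1.0
exact = t ** (1 - lam) / math.gamma(2 - lam)
approx = caputo_derivative(lambda s: s, t, lam)
```

The weak singularity of $(t-s)^{-\lambda}$ at $s=t$ keeps the midpoint rule only moderately accurate; dedicated schemes (e.g., the L1 scheme) converge faster.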
Definition 3.
The solution of system (1) is said to be uniformly stable if for any $\epsilon>0$ there exists $\delta=\delta(\epsilon)>0$, independent of $t_0$, such that for any two solutions $z(t)$, $z^*(t)$ of system (1) with different initial conditions $\varphi$, $\varphi^*$, the inequality $\|\varphi-\varphi^*\|\le\delta$ implies $\|z-z^*\|\le\epsilon$ for all $t\ge t_0\ge 0$.
Lemma 1
([31]). Suppose that $\mu,\nu,\tilde\mu,\tilde\nu\in\mathbb{R}$ and that $h_j$ is a continuous function, $j=1,2,\ldots$ Then,
$$\Big|\bigwedge_{j=1}^{\infty}\alpha_{ij}h_j(\mu,\nu)-\bigwedge_{j=1}^{\infty}\alpha_{ij}h_j(\tilde\mu,\tilde\nu)\Big|\le\sum_{j=1}^{\infty}|\alpha_{ij}|\,|h_j(\mu,\nu)-h_j(\tilde\mu,\tilde\nu)|,$$
$$\Big|\bigvee_{j=1}^{\infty}\beta_{ij}h_j(\mu,\nu)-\bigvee_{j=1}^{\infty}\beta_{ij}h_j(\tilde\mu,\tilde\nu)\Big|\le\sum_{j=1}^{\infty}|\beta_{ij}|\,|h_j(\mu,\nu)-h_j(\tilde\mu,\tilde\nu)|.$$
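Interpreting the fuzzy AND and OR as the infimum and supremum of the weighted terms, as is standard for fuzzy cellular neural networks (a modeling assumption for this sketch; the function names are hypothetical), the inequality of Lemma 1 can be checked numerically on truncated sums:

```python
def fuzzy_and(alphas, values):
    """Truncated fuzzy AND: min over j of alpha_ij * h_j."""
    return min(a * v for a, v in zip(alphas, values))

def fuzzy_or(betas, values):
    """Truncated fuzzy OR: max over j of beta_ij * h_j."""
    return max(b * v for b, v in zip(betas, values))

# Lemma 1 bound: |AND(h(u)) - AND(h(v))| <= sum_j |alpha_j| * |h_j(u) - h_j(v)|
alphas = [1.0, 2.0]
h_u, h_v = [0.3, 0.1], [0.2, 0.4]
lhs_and = abs(fuzzy_and(alphas, h_u) - fuzzy_and(alphas, h_v))
lhs_or = abs(fuzzy_or(alphas, h_u) - fuzzy_or(alphas, h_v))
rhs = sum(abs(a) * abs(u - v) for a, u, v in zip(alphas, h_u, h_v))
```

Both `lhs_and` and `lhs_or` stay below `rhs`, in line with the lemma.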
Lemma 2
([32] Contraction Mapping Principle). Let $\Lambda$ be a contraction operator on a complete metric space $X$; then there is a unique point $\chi\in X$ that satisfies $\Lambda(\chi)=\chi$.
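The principle is constructive: Picard iteration $\chi_{n+1}=\Lambda(\chi_n)$ converges to the unique fixed point at a geometric rate governed by the contraction constant. A minimal sketch on the real line (hypothetical names, not the operator used later in the paper):

```python
def banach_fixed_point(Lam, x0, tol=1e-12, max_iter=1000):
    """Iterate x_{n+1} = Lam(x_n); for a contraction on a complete metric
    space the iterates converge to the unique fixed point chi."""
    x = x0
    for _ in range(max_iter):
        x_next = Lam(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x

# Lam(x) = 0.5*x + 1 is a contraction on R (Lipschitz constant 0.5);
# its unique fixed point is chi = 2.
chi = banach_fixed_point(lambda x: 0.5 * x + 1.0, x0=0.0)
```

The same mechanism underlies Theorem 2 below, where $\Lambda$ acts on the Banach space $(\mathrm{PC},\|\cdot\|)$.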
Lemma 3.
If $x_i(t)$ and $y_i(t)$ are the solutions of system (4) and system (5), respectively, then the functions $x_i(t)$, $y_i(t)$ are the solutions of the following fractional-order integral equations:
$$x_i(t)=\begin{cases}\zeta_i(0)+\dfrac{1}{\Gamma(\lambda)}\displaystyle\int_{t_0}^{t}(t-s)^{\lambda-1}g^R(s)\,ds,&t\in[t_0,t_1],\\[6pt] \zeta_i(0)+\dfrac{1}{\Gamma(\lambda)}\displaystyle\sum_{m=1}^{k}\int_{t_{m-1}}^{t_m}(t_m-s)^{\lambda-1}g^R(s)\,ds+\dfrac{1}{\Gamma(\lambda)}\displaystyle\int_{t_k}^{t}(t-s)^{\lambda-1}g^R(s)\,ds+\displaystyle\sum_{j=1}^{k}G_{ij}^R(x_i(t_j),y_i(t_j)),&t\in(t_k,t_{k+1}],\ k=1,2,\ldots,\end{cases}$$
and
$$y_i(t)=\begin{cases}\eta_i(0)+\dfrac{1}{\Gamma(\lambda)}\displaystyle\int_{t_0}^{t}(t-s)^{\lambda-1}g^I(s)\,ds,&t\in[t_0,t_1],\\[6pt] \eta_i(0)+\dfrac{1}{\Gamma(\lambda)}\displaystyle\sum_{m=1}^{k}\int_{t_{m-1}}^{t_m}(t_m-s)^{\lambda-1}g^I(s)\,ds+\dfrac{1}{\Gamma(\lambda)}\displaystyle\int_{t_k}^{t}(t-s)^{\lambda-1}g^I(s)\,ds+\displaystyle\sum_{j=1}^{k}G_{ij}^I(x_i(t_j),y_i(t_j)),&t\in(t_k,t_{k+1}],\ k=1,2,\ldots,\end{cases}$$
where
$$\begin{aligned} g^R(s)={}&-c_i^Rx_i(s)+c_i^Iy_i(s)+\sum_{j=1}^{\infty}a_{ij}^Rf_j^R(x_j(s),y_j(s))+\sum_{j=1}^{\infty}b_{ij}^Rh_j^R(x_j(s-\tau_j),y_j(s-\tau_j))\\ &-\sum_{j=1}^{\infty}a_{ij}^If_j^I(x_j(s),y_j(s))-\sum_{j=1}^{\infty}b_{ij}^Ih_j^I(x_j(s-\tau_j),y_j(s-\tau_j))+\bigwedge_{j=1}^{\infty}\alpha_{ij}h_j^R(x_j(s),y_j(s))\\ &+\bigvee_{j=1}^{\infty}\beta_{ij}h_j^R(x_j(s),y_j(s))+\sum_{j=1}^{\infty}d_{ij}^R\int_{s-l(s)}^{s}h_j^R(x_j(\theta),y_j(\theta))\,d\theta-\sum_{j=1}^{\infty}d_{ij}^I\int_{s-l(s)}^{s}h_j^I(x_j(\theta),y_j(\theta))\,d\theta+I_i^R(s),\\ g^I(s)={}&-c_i^Ry_i(s)-c_i^Ix_i(s)+\sum_{j=1}^{\infty}a_{ij}^Rf_j^I(x_j(s),y_j(s))+\sum_{j=1}^{\infty}b_{ij}^Rh_j^I(x_j(s-\tau_j),y_j(s-\tau_j))\\ &+\sum_{j=1}^{\infty}a_{ij}^If_j^R(x_j(s),y_j(s))+\sum_{j=1}^{\infty}b_{ij}^Ih_j^R(x_j(s-\tau_j),y_j(s-\tau_j))+\bigwedge_{j=1}^{\infty}\alpha_{ij}h_j^I(x_j(s),y_j(s))\\ &+\bigvee_{j=1}^{\infty}\beta_{ij}h_j^I(x_j(s),y_j(s))+\sum_{j=1}^{\infty}d_{ij}^R\int_{s-l(s)}^{s}h_j^I(x_j(\theta),y_j(\theta))\,d\theta+\sum_{j=1}^{\infty}d_{ij}^I\int_{s-l(s)}^{s}h_j^R(x_j(\theta),y_j(\theta))\,d\theta+I_i^I(s). \end{aligned}$$
Remark 1.
Since the proof of Lemma 3 is similar to the proof of Lemma 2 in [28], we omit it here.

3. Main Results

In this section, first, we introduce some notations:
$$\begin{aligned} &\xi_{1i}=\sup_{j\in J}\{|a_{ij}^R|F_j^{RR}\},\quad \xi_{2i}=\sup_{j\in J}\{|a_{ij}^R|F_j^{RI}\},\quad \xi_{3i}=\sup_{j\in J}\{|a_{ij}^I|F_j^{IR}\},\quad \xi_{4i}=\sup_{j\in J}\{|a_{ij}^I|F_j^{II}\},\\ &\gamma_{1i}=\sup_{j\in J}\{|b_{ij}^R|H_j^{RR}\},\quad \gamma_{2i}=\sup_{j\in J}\{|b_{ij}^R|H_j^{RI}\},\quad \gamma_{3i}=\sup_{j\in J}\{|b_{ij}^I|H_j^{IR}\},\quad \gamma_{4i}=\sup_{j\in J}\{|b_{ij}^I|H_j^{II}\},\\ &\rho_{1i}=\sup_{j\in J}\{|\alpha_{ij}|H_j^{RR}\},\quad \rho_{2i}=\sup_{j\in J}\{|\alpha_{ij}|H_j^{RI}\},\quad \kappa_{1i}=\sup_{j\in J}\{|\beta_{ij}|H_j^{RR}\},\quad \kappa_{2i}=\sup_{j\in J}\{|\beta_{ij}|H_j^{RI}\},\\ &\upsilon_{1i}=\sup_{j\in J}\{|d_{ij}^R|H_j^{RR}\},\quad \upsilon_{2i}=\sup_{j\in J}\{|d_{ij}^R|H_j^{RI}\},\quad \upsilon_{3i}=\sup_{j\in J}\{|d_{ij}^I|H_j^{IR}\},\quad \upsilon_{4i}=\sup_{j\in J}\{|d_{ij}^I|H_j^{II}\},\\ &\xi'_{1i}=\sup_{j\in J}\{|a_{ij}^R|F_j^{IR}\},\quad \xi'_{2i}=\sup_{j\in J}\{|a_{ij}^R|F_j^{II}\},\quad \xi'_{3i}=\sup_{j\in J}\{|a_{ij}^I|F_j^{RR}\},\quad \xi'_{4i}=\sup_{j\in J}\{|a_{ij}^I|F_j^{RI}\},\\ &\gamma'_{1i}=\sup_{j\in J}\{|b_{ij}^R|H_j^{IR}\},\quad \gamma'_{2i}=\sup_{j\in J}\{|b_{ij}^R|H_j^{II}\},\quad \gamma'_{3i}=\sup_{j\in J}\{|b_{ij}^I|H_j^{RR}\},\quad \gamma'_{4i}=\sup_{j\in J}\{|b_{ij}^I|H_j^{RI}\},\\ &\rho'_{1i}=\sup_{j\in J}\{|\alpha_{ij}|H_j^{IR}\},\quad \rho'_{2i}=\sup_{j\in J}\{|\alpha_{ij}|H_j^{II}\},\quad \kappa'_{1i}=\sup_{j\in J}\{|\beta_{ij}|H_j^{IR}\},\quad \kappa'_{2i}=\sup_{j\in J}\{|\beta_{ij}|H_j^{II}\},\\ &\upsilon'_{1i}=\sup_{j\in J}\{|d_{ij}^R|H_j^{IR}\},\quad \upsilon'_{2i}=\sup_{j\in J}\{|d_{ij}^R|H_j^{II}\},\quad \upsilon'_{3i}=\sup_{j\in J}\{|d_{ij}^I|H_j^{RR}\},\quad \upsilon'_{4i}=\sup_{j\in J}\{|d_{ij}^I|H_j^{RI}\}. \end{aligned}$$
Assumption 3.
For i = 1,2,…, the following conditions hold,
$$\sum_{i=1}^{\infty}\big(G_{ik}^{RI}+G_{ik}^{RR}\big)<\infty,\qquad \sum_{j=1}^{4}\sum_{i=1}^{\infty}\big(\xi_{ji}+\gamma_{ji}+\rho_{ji}+\kappa_{ji}+\upsilon_{ji}\big)<\infty,\qquad \sum_{j=1}^{4}\sum_{i=1}^{\infty}\big(\xi'_{ji}+\gamma'_{ji}+\rho'_{ji}+\kappa'_{ji}+\upsilon'_{ji}\big)<\infty,$$
with the convention $\rho_{3i}=\rho_{4i}=\kappa_{3i}=\kappa_{4i}=\rho'_{3i}=\rho'_{4i}=\kappa'_{3i}=\kappa'_{4i}=0$.

3.1. Uniform Stability

Theorem 1.
Under Assumptions 1–3, system (1) is uniformly stable if $q_2q_4>q_1q_3$ holds, where $\mu=\sum_{m=1}^{\infty}\frac{(t_m-t_{m-1})^{\lambda}}{\Gamma(\lambda+1)}+\frac{(t-t_k)^{\lambda}}{\Gamma(\lambda+1)}<\infty$ and
$$\begin{aligned} q_1&=\sum_{i=1}^{\infty}G_{ik}^{RI}+\Big(c_i^I+\sum_{i=1}^{\infty}(\xi_{2i}+\xi_{4i}+\gamma_{2i}+\gamma_{4i}+\rho_{2i}+\kappa_{2i}+\upsilon_{2i}+\upsilon_{4i})\Big)\mu,\\ q_2&=1-\sum_{i=1}^{\infty}G_{ik}^{RR}+\Big(c_i^R-\sum_{i=1}^{\infty}(\xi_{1i}+\xi_{3i}+\gamma_{1i}+\gamma_{3i}+\rho_{1i}+\kappa_{1i}+\upsilon_{1i}+\upsilon_{3i})\Big)\mu,\\ q_3&=\sum_{i=1}^{\infty}G_{ik}^{II}+\Big(c_i^I+\sum_{i=1}^{\infty}(\xi'_{2i}+\xi'_{4i}+\gamma'_{2i}+\gamma'_{4i}+\rho'_{2i}+\kappa'_{2i}+\upsilon'_{2i}+\upsilon'_{4i})\Big)\mu,\\ q_4&=1-\sum_{i=1}^{\infty}G_{ik}^{IR}+\Big(c_i^R-\sum_{i=1}^{\infty}(\xi'_{1i}+\xi'_{3i}+\gamma'_{1i}+\gamma'_{3i}+\rho'_{1i}+\kappa'_{1i}+\upsilon'_{1i}+\upsilon'_{3i})\Big)\mu. \end{aligned}$$
Proof. 
Let $z(t)=x(t)+iy(t)$ and $z^*(t)=x^*(t)+iy^*(t)$. Suppose that $z(t)=[z_1(t),z_2(t),\ldots]^T$ and $z^*(t)=[z_1^*(t),z_2^*(t),\ldots]^T$ are two solutions of system (1), equipped with different initial conditions $\varphi(t)=\zeta(t)+i\eta(t)$ and $\varphi^*(t)=\zeta^*(t)+i\eta^*(t)$, where $\zeta(t),\eta(t),\zeta^*(t),\eta^*(t)\in C([-\tau,0],l^1)$, $z(0^+)=z(0)$ and $z^*(0^+)=z^*(0)$. For $t\in[t_0,t_1]$, we have
$$\begin{aligned} |x_i(t)-x_i^*(t)|\le{}&|\zeta_i(0)-\zeta_i^*(0)|+\frac{1}{\Gamma(\lambda)}\int_{t_0}^{t}(t-s)^{\lambda-1}g_1(s)\,ds\\ \le{}&|\zeta_i(0)-\zeta_i^*(0)|+\frac{1}{\Gamma(\lambda)}\int_{t_0}^{t}(t-s)^{\lambda-1}g_2(s)\,ds\\ \le{}&|\zeta_i(0)-\zeta_i^*(0)|+\frac{1}{\Gamma(\lambda)}\int_{t_0}^{t}(t-s)^{\lambda-1}\big(c_i^R|x_i(s)-x_i^*(s)|+c_i^I|y_i(s)-y_i^*(s)|\big)\,ds\\ &+\frac{\xi_{1i}+\xi_{3i}+\rho_{1i}+\kappa_{1i}}{\Gamma(\lambda)}\int_{t_0}^{t}(t-s)^{\lambda-1}\sum_{j=1}^{\infty}|x_j(s)-x_j^*(s)|\,ds\\ &+\frac{\xi_{2i}+\xi_{4i}+\rho_{2i}+\kappa_{2i}}{\Gamma(\lambda)}\int_{t_0}^{t}(t-s)^{\lambda-1}\sum_{j=1}^{\infty}|y_j(s)-y_j^*(s)|\,ds\\ &+\frac{\gamma_{1i}+\gamma_{3i}}{\Gamma(\lambda)}\int_{t_0}^{t}(t-s)^{\lambda-1}\sum_{j=1}^{\infty}|x_j(s-\tau_j)-x_j^*(s-\tau_j)|\,ds\\ &+\frac{\gamma_{2i}+\gamma_{4i}}{\Gamma(\lambda)}\int_{t_0}^{t}(t-s)^{\lambda-1}\sum_{j=1}^{\infty}|y_j(s-\tau_j)-y_j^*(s-\tau_j)|\,ds\\ &+\frac{\upsilon_{1i}+\upsilon_{3i}}{\Gamma(\lambda)}\int_{t_0}^{t}(t-s)^{\lambda-1}\sum_{j=1}^{\infty}\int_{s-l(s)}^{s}|x_j(\theta)-x_j^*(\theta)|\,d\theta\,ds\\ &+\frac{\upsilon_{2i}+\upsilon_{4i}}{\Gamma(\lambda)}\int_{t_0}^{t}(t-s)^{\lambda-1}\sum_{j=1}^{\infty}\int_{s-l(s)}^{s}|y_j(\theta)-y_j^*(\theta)|\,d\theta\,ds\\ \le{}&|\zeta_i(0)-\zeta_i^*(0)|+\frac{(t_1-t_0)^{\lambda}}{\Gamma(\lambda+1)}\big(c_i^R|x_i(t)-x_i^*(t)|+c_i^I|y_i(t)-y_i^*(t)|\big)\\ &+\frac{(t_1-t_0)^{\lambda}}{\Gamma(\lambda+1)}\big[(\xi_{1i}+\xi_{3i}+\gamma_{1i}+\gamma_{3i}+\rho_{1i}+\kappa_{1i}+\upsilon_{1i}l+\upsilon_{3i}l)\|x-x^*\|\\ &+(\xi_{2i}+\xi_{4i}+\gamma_{2i}+\gamma_{4i}+\rho_{2i}+\kappa_{2i}+\upsilon_{2i}l+\upsilon_{4i}l)\|y-y^*\|\big], \end{aligned} \tag{6}$$
where
$$\begin{aligned} g_1(s)={}&c_i^R|x_i(s)-x_i^*(s)|+c_i^I|y_i(s)-y_i^*(s)|+\sum_{j=1}^{\infty}|a_{ij}^R|\,|f_j^R(x_j(s),y_j(s))-f_j^R(x_j^*(s),y_j^*(s))|\\ &+\sum_{j=1}^{\infty}|a_{ij}^I|\,|f_j^I(x_j(s),y_j(s))-f_j^I(x_j^*(s),y_j^*(s))|\\ &+\sum_{j=1}^{\infty}|b_{ij}^R|\,|h_j^R(x_j(s-\tau_j),y_j(s-\tau_j))-h_j^R(x_j^*(s-\tau_j),y_j^*(s-\tau_j))|\\ &+\sum_{j=1}^{\infty}|b_{ij}^I|\,|h_j^I(x_j(s-\tau_j),y_j(s-\tau_j))-h_j^I(x_j^*(s-\tau_j),y_j^*(s-\tau_j))|\\ &+\Big|\bigwedge_{j=1}^{\infty}\alpha_{ij}h_j^R(x_j(s),y_j(s))-\bigwedge_{j=1}^{\infty}\alpha_{ij}h_j^R(x_j^*(s),y_j^*(s))\Big|\\ &+\Big|\bigvee_{j=1}^{\infty}\beta_{ij}h_j^R(x_j(s),y_j(s))-\bigvee_{j=1}^{\infty}\beta_{ij}h_j^R(x_j^*(s),y_j^*(s))\Big|\\ &+\sum_{j=1}^{\infty}|d_{ij}^R|\int_{s-l(s)}^{s}|h_j^R(x_j(\theta),y_j(\theta))-h_j^R(x_j^*(\theta),y_j^*(\theta))|\,d\theta\\ &+\sum_{j=1}^{\infty}|d_{ij}^I|\int_{s-l(s)}^{s}|h_j^I(x_j(\theta),y_j(\theta))-h_j^I(x_j^*(\theta),y_j^*(\theta))|\,d\theta, \end{aligned} \tag{7}$$
and
$$\begin{aligned} g_2(s)={}&c_i^R|x_i(s)-x_i^*(s)|+c_i^I|y_i(s)-y_i^*(s)|+\sum_{j=1}^{\infty}|a_{ij}^R|\big(F_j^{RR}|x_j(s)-x_j^*(s)|+F_j^{RI}|y_j(s)-y_j^*(s)|\big)\\ &+\sum_{j=1}^{\infty}|a_{ij}^I|\big(F_j^{IR}|x_j(s)-x_j^*(s)|+F_j^{II}|y_j(s)-y_j^*(s)|\big)\\ &+\sum_{j=1}^{\infty}|b_{ij}^R|\big(H_j^{RR}|x_j(s-\tau_j)-x_j^*(s-\tau_j)|+H_j^{RI}|y_j(s-\tau_j)-y_j^*(s-\tau_j)|\big)\\ &+\sum_{j=1}^{\infty}|b_{ij}^I|\big(H_j^{IR}|x_j(s-\tau_j)-x_j^*(s-\tau_j)|+H_j^{II}|y_j(s-\tau_j)-y_j^*(s-\tau_j)|\big)\\ &+\sum_{j=1}^{\infty}|\alpha_{ij}|\big(H_j^{RR}|x_j(s)-x_j^*(s)|+H_j^{RI}|y_j(s)-y_j^*(s)|\big)+\sum_{j=1}^{\infty}|\beta_{ij}|\big(H_j^{RR}|x_j(s)-x_j^*(s)|+H_j^{RI}|y_j(s)-y_j^*(s)|\big)\\ &+\sum_{j=1}^{\infty}|d_{ij}^R|\int_{s-l(s)}^{s}\big(H_j^{RR}|x_j(\theta)-x_j^*(\theta)|+H_j^{RI}|y_j(\theta)-y_j^*(\theta)|\big)\,d\theta\\ &+\sum_{j=1}^{\infty}|d_{ij}^I|\int_{s-l(s)}^{s}\big(H_j^{IR}|x_j(\theta)-x_j^*(\theta)|+H_j^{II}|y_j(\theta)-y_j^*(\theta)|\big)\,d\theta. \end{aligned} \tag{8}$$
For t ( t k , t k + 1 ] , k = 1 , 2 , , we deduce
$$\begin{aligned} |x_i(t)-x_i^*(t)|\le{}&|\zeta_i(0)-\zeta_i^*(0)|+\frac{1}{\Gamma(\lambda)}\sum_{m=1}^{k}\int_{t_{m-1}}^{t_m}(t_m-s)^{\lambda-1}g_1(s)\,ds+\frac{1}{\Gamma(\lambda)}\int_{t_k}^{t}(t-s)^{\lambda-1}g_1(s)\,ds\\ &+\sum_{j=1}^{k}\big|G_{ij}^R(x_i(t_j),y_i(t_j))-G_{ij}^R(x_i^*(t_j),y_i^*(t_j))\big|\\ \le{}&|\zeta_i(0)-\zeta_i^*(0)|+\frac{1}{\Gamma(\lambda)}\sum_{m=1}^{k}\int_{t_{m-1}}^{t_m}(t_m-s)^{\lambda-1}g_2(s)\,ds+\frac{1}{\Gamma(\lambda)}\int_{t_k}^{t}(t-s)^{\lambda-1}g_2(s)\,ds\\ &+\sum_{j=1}^{k}\big[G_{ij}^{RR}|x_i(t_j)-x_i^*(t_j)|+G_{ij}^{RI}|y_i(t_j)-y_i^*(t_j)|\big]\\ \le{}&|\zeta_i(0)-\zeta_i^*(0)|+c_i^R\Big(\sum_{m=1}^{k}\frac{(t_m-t_{m-1})^{\lambda}}{\Gamma(\lambda+1)}+\frac{(t-t_k)^{\lambda}}{\Gamma(\lambda+1)}\Big)|x_i(t)-x_i^*(t)|\\ &+c_i^I\Big(\sum_{m=1}^{k}\frac{(t_m-t_{m-1})^{\lambda}}{\Gamma(\lambda+1)}+\frac{(t-t_k)^{\lambda}}{\Gamma(\lambda+1)}\Big)|y_i(t)-y_i^*(t)|\\ &+\Big[G_{ik}^{RR}+(\xi_{1i}+\xi_{3i}+\gamma_{1i}+\gamma_{3i}+\rho_{1i}+\kappa_{1i}+\upsilon_{1i}l+\upsilon_{3i}l)\Big(\sum_{m=1}^{k}\frac{(t_m-t_{m-1})^{\lambda}}{\Gamma(\lambda+1)}+\frac{(t-t_k)^{\lambda}}{\Gamma(\lambda+1)}\Big)\Big]\|x-x^*\|\\ &+\Big[G_{ik}^{RI}+(\xi_{2i}+\xi_{4i}+\gamma_{2i}+\gamma_{4i}+\rho_{2i}+\kappa_{2i}+\upsilon_{2i}l+\upsilon_{4i}l)\Big(\sum_{m=1}^{k}\frac{(t_m-t_{m-1})^{\lambda}}{\Gamma(\lambda+1)}+\frac{(t-t_k)^{\lambda}}{\Gamma(\lambda+1)}\Big)\Big]\|y-y^*\|, \end{aligned} \tag{9}$$
where g 1 ( s ) and g 2 ( s ) are the same as previously defined.
According to (6) and (9), we obtain
$$\begin{aligned} \|x-x^*\|&=\sup_{t\in[-\tau,\infty)}\sum_{i=1}^{\infty}|x_i(t)-x_i^*(t)|\\ &\le\|\zeta-\zeta^*\|-c_i^R\mu\|x-x^*\|+c_i^I\mu\|y-y^*\|\\ &\quad+\sum_{i=1}^{\infty}\big[G_{ik}^{RR}+(\xi_{1i}+\xi_{3i}+\gamma_{1i}+\gamma_{3i}+\rho_{1i}+\kappa_{1i}+\upsilon_{1i}+\upsilon_{3i})\mu\big]\|x-x^*\|\\ &\quad+\sum_{i=1}^{\infty}\big[G_{ik}^{RI}+(\xi_{2i}+\xi_{4i}+\gamma_{2i}+\gamma_{4i}+\rho_{2i}+\kappa_{2i}+\upsilon_{2i}+\upsilon_{4i})\mu\big]\|y-y^*\|. \end{aligned} \tag{10}$$
Similarly, for the imaginary parts $y$ and $y^*$, we deduce that
$$\begin{aligned} \|y-y^*\|&=\sup_{t\in[-\tau,\infty)}\sum_{i=1}^{\infty}|y_i(t)-y_i^*(t)|\\ &\le\|\eta-\eta^*\|-c_i^R\mu\|y-y^*\|+c_i^I\mu\|x-x^*\|\\ &\quad+\sum_{i=1}^{\infty}\big[G_{ik}^{IR}+(\xi'_{1i}+\xi'_{3i}+\gamma'_{1i}+\gamma'_{3i}+\rho'_{1i}+\kappa'_{1i}+\upsilon'_{1i}+\upsilon'_{3i})\mu\big]\|y-y^*\|\\ &\quad+\sum_{i=1}^{\infty}\big[G_{ik}^{II}+(\xi'_{2i}+\xi'_{4i}+\gamma'_{2i}+\gamma'_{4i}+\rho'_{2i}+\kappa'_{2i}+\upsilon'_{2i}+\upsilon'_{4i})\mu\big]\|x-x^*\|. \end{aligned} \tag{11}$$
From (10) and (11), we can derive
$$\|x-x^*\|\le\frac{\|\zeta-\zeta^*\|+\Big[c_i^I\mu+\sum_{i=1}^{\infty}\big(G_{ik}^{RI}+(\xi_{2i}+\xi_{4i}+\gamma_{2i}+\gamma_{4i}+\rho_{2i}+\kappa_{2i}+\upsilon_{2i}+\upsilon_{4i})\mu\big)\Big]\|y-y^*\|}{1+c_i^R\mu-\sum_{i=1}^{\infty}\big[G_{ik}^{RR}+(\xi_{1i}+\xi_{3i}+\gamma_{1i}+\gamma_{3i}+\rho_{1i}+\kappa_{1i}+\upsilon_{1i}+\upsilon_{3i})\mu\big]}, \tag{12}$$
and
$$\|y-y^*\|\le\frac{\|\eta-\eta^*\|+\Big[c_i^I\mu+\sum_{i=1}^{\infty}\big(G_{ik}^{II}+(\xi'_{2i}+\xi'_{4i}+\gamma'_{2i}+\gamma'_{4i}+\rho'_{2i}+\kappa'_{2i}+\upsilon'_{2i}+\upsilon'_{4i})\mu\big)\Big]\|x-x^*\|}{1+c_i^R\mu-\sum_{i=1}^{\infty}\big[G_{ik}^{IR}+(\xi'_{1i}+\xi'_{3i}+\gamma'_{1i}+\gamma'_{3i}+\rho'_{1i}+\kappa'_{1i}+\upsilon'_{1i}+\upsilon'_{3i})\mu\big]}. \tag{13}$$
Combining the above two inequalities, we conclude
$$\|x-x^*\|\le\frac{\|\zeta-\zeta^*\|+q_1\|y-y^*\|}{q_2},$$
and
$$\|y-y^*\|\le\frac{\|\eta-\eta^*\|+q_3\|x-x^*\|}{q_4},$$
where q 1 , q 2 , q 3 and q 4 are defined as in Theorem 1.
Substituting (13) into (12), we obtain
$$\|x-x^*\|\le\frac{1/q_2}{1-q_1q_3/(q_2q_4)}\|\zeta-\zeta^*\|+\frac{q_1/(q_2q_4)}{1-q_1q_3/(q_2q_4)}\|\eta-\eta^*\|.$$
For any $\epsilon>0$, if we take $\|\zeta-\zeta^*\|\le\frac{\epsilon}{4\delta_1}$ and $\|\eta-\eta^*\|\le\frac{\epsilon}{4\delta_2}$, then we can deduce that
$$\|x-x^*\|\le\frac{\epsilon}{2}, \tag{14}$$
where $\delta_1=\frac{1/q_2}{1-q_1q_3/(q_2q_4)}$ and $\delta_2=\frac{q_1/(q_2q_4)}{1-q_1q_3/(q_2q_4)}$.
Similarly, by substituting (12) into (13), we derive
$$\|y-y^*\|\le\frac{\epsilon}{2}, \tag{15}$$
whenever $\|\zeta-\zeta^*\|\le\frac{\epsilon}{4\delta_3}$ and $\|\eta-\eta^*\|\le\frac{\epsilon}{4\delta_4}$, where $\delta_3=\frac{q_3/(q_2q_4)}{1-q_1q_3/(q_2q_4)}$ and $\delta_4=\frac{1/q_4}{1-q_1q_3/(q_2q_4)}$.
Considering (14) and (15), we conclude that for any $\epsilon>0$ there exists $\delta=\frac{\epsilon}{4\max\{\delta_1,\delta_2,\delta_3,\delta_4\}}$ such that
$$\|z-z^*\|<\epsilon \quad\text{whenever}\quad \|\varphi-\varphi^*\|\le\delta.$$
Therefore, system (1) is uniformly stable. The proof is completed. □
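Once truncated estimates of the network constants are available, the criterion of Theorem 1 is straightforward to check numerically. The following sketch (illustrative only; all names and values are hypothetical) computes the quantity $\mu$ for a finite list of impulse instants and tests the inequality $q_2q_4>q_1q_3$:

```python
import math

def mu_value(impulse_times, t, lam):
    """mu = sum_m (t_m - t_{m-1})^lam / Gamma(lam+1) + (t - t_k)^lam / Gamma(lam+1),
    truncated to the impulse instants t_1 < ... < t_k <= t (with t_0 = 0)."""
    g = math.gamma(lam + 1.0)
    ts = [0.0] + [s for s in impulse_times if s <= t]
    total = sum((b - a) ** lam for a, b in zip(ts, ts[1:])) / g
    return total + (t - ts[-1]) ** lam / g

def uniformly_stable(q1, q2, q3, q4):
    """Check the Theorem 1 criterion q2 * q4 > q1 * q3 (with q2, q4 > 0)."""
    return q2 > 0 and q4 > 0 and q2 * q4 > q1 * q3
```

For instance, `uniformly_stable(0.1, 0.9, 0.2, 0.8)` holds, while `uniformly_stable(1.0, 0.5, 1.0, 0.5)` fails: large coupling gains relative to the denominators $q_2,q_4$ destroy the criterion.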
Remark 2.
In [28], the authors considered the uniform stability of fractional-order complex-valued neural networks with linear impulses and fixed time delays in $\mathbb{R}^n$; by utilizing fixed-point theory, they obtained existence and uniqueness results together with sufficient conditions for the uniform stability of the solutions of those networks. However, owing to the combination of fractional calculus and fuzzy logic, fuzzy complex-valued neural networks exhibit more complex dynamic behavior, and the analysis of infinite-dimensional complex-valued neural networks with mixed delays is considerably more involved.

3.2. Existence and Uniqueness

Theorem 2.
Let $0 < \lambda < 1$ and let Assumptions 1–3 hold. Then system (1) has a unique solution satisfying the initial condition, provided the following condition holds:
$$0 < k = \max\{k_1, k_2\} < 1,$$
where
$$k_1 = \frac{\sum_{i=1}^{\infty} G_{ik}^{RI} + \Big(c_i^{I} + \sum_{i=1}^{\infty}(\xi_{2i}+\xi_{4i}+\gamma_{2i}+\gamma_{4i}+\rho_{2i}+\kappa_{2i}+\upsilon_{2i}+\upsilon_{4i})\Big)\mu}{1 - \sum_{i=1}^{\infty} G_{ik}^{RR} - c_i^{R}\mu - \sum_{i=1}^{\infty}(\xi_{1i}+\xi_{3i}+\gamma_{1i}+\gamma_{3i}+\rho_{1i}+\kappa_{1i}+\upsilon_{1i}+\upsilon_{3i})\mu},$$
$$k_2 = \frac{\sum_{i=1}^{\infty} G_{ik}^{II} + \Big(c_i^{I} + \sum_{i=1}^{\infty}(\xi_{2i}+\xi_{4i}+\gamma_{2i}+\gamma_{4i}+\rho_{2i}+\kappa_{2i}+\upsilon_{2i}+\upsilon_{4i})\Big)\mu}{1 - \sum_{i=1}^{\infty} G_{ik}^{IR} - c_i^{R}\mu - \sum_{i=1}^{\infty}(\xi_{1i}+\xi_{3i}+\gamma_{1i}+\gamma_{3i}+\rho_{1i}+\kappa_{1i}+\upsilon_{1i}+\upsilon_{3i})\mu}.$$
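To make the contraction condition concrete, the fractions $k_1, k_2$ can be evaluated once the series are truncated. The sketch below uses entirely illustrative (made-up) values for the truncated sums and constants — none of them come from the paper — simply to show how the condition $0 < \max\{k_1, k_2\} < 1$ is checked:

```python
# Hedged sketch: evaluating the contraction constants k1, k2 of Theorem 2
# for a finite truncation of the series, with illustrative (made-up) data.
G_RI, G_RR, G_IR, G_II = 0.05, 0.04, 0.03, 0.06  # truncated sums over i
cR, cI, mu = 0.2, 0.1, 0.3                        # c_i^R, c_i^I, mu
S1 = 0.15  # truncated sum of the (xi_1 + xi_3 + gamma_1 + ... + nu_3) terms
S2 = 0.12  # truncated sum of the (xi_2 + xi_4 + gamma_2 + ... + nu_4) terms

k1 = (G_RI + (cI + S2) * mu) / (1 - G_RR - cR * mu - S1 * mu)
k2 = (G_II + (cI + S2) * mu) / (1 - G_IR - cR * mu - S1 * mu)
k = max(k1, k2)
assert 0 < k < 1  # Theorem 2's condition: Lambda is then a contraction
```

With these sample values $k \approx 0.15$, so the hypothetical network would satisfy the uniqueness criterion; if the impulse gains or Lipschitz sums grow, $k$ exceeds 1 and the theorem no longer applies.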
Proof. 
Let $z(t) = x(t) + \mathrm{i}y(t)$ and $z^*(t) = x^*(t) + \mathrm{i}y^*(t)$, with $x \ne x^*$ and $y \ne y^*$. Suppose that $z(t) = [z_1(t), z_2(t), \ldots]^T$ and $z^*(t) = [z_1^*(t), z_2^*(t), \ldots]^T$ are two solutions of system (1). First of all, we construct an operator $\Lambda: \mathcal{PC} \to \mathcal{PC}$ as follows,
$$\Lambda_i(x_i(t)) = \zeta_i(0) + \frac{1}{\Gamma(\lambda)}\int_{t_0}^{t}(t-s)^{\lambda-1}g_R(s)\,ds, \quad t\in[t_0,t_1],$$
$$\Lambda_i(x_i(t)) = \zeta_i(0) + \frac{1}{\Gamma(\lambda)}\sum_{m=1}^{k}\int_{t_{m-1}}^{t_m}(t_m-s)^{\lambda-1}g_R(s)\,ds + \frac{1}{\Gamma(\lambda)}\int_{t_k}^{t}(t-s)^{\lambda-1}g_R(s)\,ds + \sum_{j=1}^{k} G_{ij}^{R}(x_i(t_j), y_i(t_j)), \quad t\in(t_k,t_{k+1}],\; k=1,2,\ldots,$$
and
$$\Lambda_i(y_i(t)) = \eta_i(0) + \frac{1}{\Gamma(\lambda)}\int_{t_0}^{t}(t-s)^{\lambda-1}g_I(s)\,ds, \quad t\in[t_0,t_1],$$
$$\Lambda_i(y_i(t)) = \eta_i(0) + \frac{1}{\Gamma(\lambda)}\sum_{m=1}^{k}\int_{t_{m-1}}^{t_m}(t_m-s)^{\lambda-1}g_I(s)\,ds + \frac{1}{\Gamma(\lambda)}\int_{t_k}^{t}(t-s)^{\lambda-1}g_I(s)\,ds + \sum_{j=1}^{k} G_{ij}^{I}(x_i(t_j), y_i(t_j)), \quad t\in(t_k,t_{k+1}],\; k=1,2,\ldots.$$
Next, we will show that Λ is a contraction mapping. For t [ t 0 , t 1 ] , in light of (6), we can deduce
$$|\Lambda(x_i(t)) - \Lambda(x_i^*(t))| \le \frac{1}{\Gamma(\lambda)}\int_{t_0}^{t}(t-s)^{\lambda-1}g_1(s)\,ds \le \frac{(t_1-t_0)^{\lambda}}{\Gamma(\lambda+1)}\Big[c_i^{R}|x_i(t)-x_i^*(t)| + c_i^{I}|y_i(t)-y_i^*(t)| + (\xi_{1i}+\xi_{3i}+\gamma_{1i}+\gamma_{3i}+\rho_{1i}+\kappa_{1i}+\upsilon_{1i}+\upsilon_{3i})\|x-x^*\| + (\xi_{2i}+\xi_{4i}+\gamma_{2i}+\gamma_{4i}+\rho_{2i}+\kappa_{2i}+\upsilon_{2i}+\upsilon_{4i})\|y-y^*\|\Big].$$
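The constant $\frac{(t_1-t_0)^{\lambda}}{\Gamma(\lambda+1)}$ comes from evaluating the kernel integral $\frac{1}{\Gamma(\lambda)}\int_{t_0}^{t}(t-s)^{\lambda-1}\,ds = \frac{(t-t_0)^{\lambda}}{\Gamma(\lambda+1)}$, which is maximized on $[t_0,t_1]$ at $t = t_1$. A quick numerical check (with $\lambda = 0.8$ as in Example 1 and a hypothetical interval):

```python
import math

# Numerical check of the kernel estimate used in the bound above:
# (1/Gamma(lam)) * int_{t0}^{t} (t-s)^(lam-1) ds = (t-t0)^lam / Gamma(lam+1).
lam, t0, t1 = 0.8, 0.0, 1.5  # lam from Example 1; the interval is illustrative

def kernel_integral(t, n=200000):
    # midpoint rule; the integrand has an integrable singularity at s = t,
    # which the midpoints avoid
    h = (t - t0) / n
    s_vals = (t0 + (i + 0.5) * h for i in range(n))
    return sum((t - s) ** (lam - 1) for s in s_vals) * h / math.gamma(lam)

closed_form = (t1 - t0) ** lam / math.gamma(lam + 1)
assert abs(kernel_integral(t1) - closed_form) < 1e-3
# any earlier t gives a smaller value, confirming the uniform bound on [t0, t1]
assert kernel_integral(0.7) <= closed_form
```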
For t ( t k , t k + 1 ] , k = 1 , 2 , , due to (9), we deduce
$$|\Lambda(x_i(t)) - \Lambda(x_i^*(t))| \le \frac{1}{\Gamma(\lambda)}\sum_{m=1}^{k}\int_{t_{m-1}}^{t_m}(t_m-s)^{\lambda-1}g_1(s)\,ds + \frac{1}{\Gamma(\lambda)}\int_{t_k}^{t}(t-s)^{\lambda-1}g_1(s)\,ds + \sum_{j=1}^{k}\big|G_{ij}^{R}(x_i(t_j), y_i(t_j)) - G_{ij}^{R}(x_i^*(t_j), y_i^*(t_j))\big| \le \Big(\sum_{i=1}^{\infty} G_{ik}^{RR} + c_i^{R}\sum_{m=1}^{k}\frac{(t_m-t_{m-1})^{\lambda}}{\Gamma(\lambda+1)}\Big)|x_i(t)-x_i^*(t)| + \Big(\sum_{i=1}^{\infty} G_{ik}^{RI} + c_i^{I}\sum_{m=1}^{k}\frac{(t_m-t_{m-1})^{\lambda}}{\Gamma(\lambda+1)}\Big)|y_i(t)-y_i^*(t)| + (\xi_{1i}+\xi_{3i}+\gamma_{1i}+\gamma_{3i}+\rho_{1i}+\kappa_{1i}+\upsilon_{1i}+\upsilon_{3i})\|x-x^*\|\sum_{m=1}^{k}\frac{(t_m-t_{m-1})^{\lambda}}{\Gamma(\lambda+1)} + (\xi_{2i}+\xi_{4i}+\gamma_{2i}+\gamma_{4i}+\rho_{2i}+\kappa_{2i}+\upsilon_{2i}+\upsilon_{4i})\|y-y^*\|\sum_{m=1}^{k}\frac{(t_m-t_{m-1})^{\lambda}}{\Gamma(\lambda+1)},$$
where g 1 ( s ) is defined by (7).
Next, from (17) and (19), we can derive
$$\|\Lambda(x) - \Lambda(x^*)\| = \sup_{t\in[-\tau,\infty)}\sum_{i=1}^{\infty}\big|\Lambda(x_i(t)) - \Lambda(x_i^*(t))\big| \le \Big(\sum_{i=1}^{\infty} G_{ik}^{RR} + c_i^{R}\mu\Big)\|x-x^*\| + \Big(\sum_{i=1}^{\infty} G_{ik}^{RI} + c_i^{I}\mu\Big)\|y-y^*\| + \sum_{i=1}^{\infty}(\xi_{1i}+\xi_{3i}+\gamma_{1i}+\gamma_{3i}+\rho_{1i}+\kappa_{1i}+\upsilon_{1i}+\upsilon_{3i})\mu\|x-x^*\| + \sum_{i=1}^{\infty}(\xi_{2i}+\xi_{4i}+\gamma_{2i}+\gamma_{4i}+\rho_{2i}+\kappa_{2i}+\upsilon_{2i}+\upsilon_{4i})\mu\|y-y^*\|.$$
Similarly, for the imaginary parts y and y, we conclude that
$$\|\Lambda(y) - \Lambda(y^*)\| = \sup_{t\in[-\tau,\infty)}\sum_{i=1}^{\infty}\big|\Lambda(y_i(t)) - \Lambda(y_i^*(t))\big| \le \Big(\sum_{i=1}^{\infty} G_{ik}^{IR} + c_i^{R}\mu\Big)\|y-y^*\| + \Big(\sum_{i=1}^{\infty} G_{ik}^{II} + c_i^{I}\mu\Big)\|x-x^*\| + \sum_{i=1}^{\infty}(\xi_{1i}+\xi_{3i}+\gamma_{1i}+\gamma_{3i}+\rho_{1i}+\kappa_{1i}+\upsilon_{1i}+\upsilon_{3i})\mu\|y-y^*\| + \sum_{i=1}^{\infty}(\xi_{2i}+\xi_{4i}+\gamma_{2i}+\gamma_{4i}+\rho_{2i}+\kappa_{2i}+\upsilon_{2i}+\upsilon_{4i})\mu\|x-x^*\|.$$
From (20) and (22), we obtain
$$\|\Lambda(x) - \Lambda(x^*)\| \le \frac{\Big[\sum_{i=1}^{\infty} G_{ik}^{RI} + \Big(c_i^{I} + \sum_{i=1}^{\infty}(\xi_{2i}+\xi_{4i}+\gamma_{2i}+\gamma_{4i}+\rho_{2i}+\kappa_{2i}+\upsilon_{2i}+\upsilon_{4i})\Big)\mu\Big]\|y-y^*\|}{1 - \sum_{i=1}^{\infty} G_{ik}^{RR} - c_i^{R}\mu - \sum_{i=1}^{\infty}(\xi_{1i}+\xi_{3i}+\gamma_{1i}+\gamma_{3i}+\rho_{1i}+\kappa_{1i}+\upsilon_{1i}+\upsilon_{3i})\mu},$$
and
$$\|\Lambda(y) - \Lambda(y^*)\| \le \frac{\Big[\sum_{i=1}^{\infty} G_{ik}^{II} + \Big(c_i^{I} + \sum_{i=1}^{\infty}(\xi_{2i}+\xi_{4i}+\gamma_{2i}+\gamma_{4i}+\rho_{2i}+\kappa_{2i}+\upsilon_{2i}+\upsilon_{4i})\Big)\mu\Big]\|x-x^*\|}{1 - \sum_{i=1}^{\infty} G_{ik}^{IR} - c_i^{R}\mu - \sum_{i=1}^{\infty}(\xi_{1i}+\xi_{3i}+\gamma_{1i}+\gamma_{3i}+\rho_{1i}+\kappa_{1i}+\upsilon_{1i}+\upsilon_{3i})\mu}.$$
Then, we can get
$$\|\Lambda(z) - \Lambda(z^*)\| = \|\Lambda(x) - \Lambda(x^*)\| + \|\Lambda(y) - \Lambda(y^*)\| \le k\big(\|x-x^*\| + \|y-y^*\|\big) = k\|z-z^*\|,$$
where 0 < k < 1 is defined by (16).
Consequently, the mapping $\Lambda$ is a contraction. By the Contraction Mapping Principle, $\Lambda$ has a unique fixed point; that is, system (1) has a unique solution satisfying the initial condition. The proof is complete. □
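The mechanism of the contraction argument can be illustrated numerically with a toy problem. The sketch below runs Picard iterates of the scalar Caputo-type problem $D^{\lambda}x = -cx$, $x(0)=1$, on a short interval where the integral operator is a contraction; it is a hedged illustration of the fixed-point principle, not the operator $\Lambda$ of the paper, and $\lambda$, $c$, and the interval are illustrative choices:

```python
import math

# Toy Picard iteration for D^lam x = -c x, x(0) = 1, on [0, T].
# Successive iterates of the integral operator collapse toward the unique
# fixed point, as the Contraction Mapping Principle predicts.
lam, c, T, n = 0.8, 1.0, 0.5, 400
h = T / n
grid = [k * h for k in range(n + 1)]

def apply_operator(x):
    # (Lambda x)(t) = 1 + (1/Gamma(lam)) * int_0^t (t-s)^(lam-1) * (-c x(s)) ds;
    # kernel sampled at subinterval midpoints (avoids the s = t singularity),
    # x sampled at the left grid point of each subinterval.
    out = []
    for k, t in enumerate(grid):
        acc = 0.0
        for j in range(k):
            s = grid[j] + 0.5 * h
            acc += (t - s) ** (lam - 1) * (-c * x[j])
        out.append(1.0 + acc * h / math.gamma(lam))
    return out

x = [1.0] * (n + 1)
gaps = []
for _ in range(25):
    x_new = apply_operator(x)
    gaps.append(max(abs(a - b) for a, b in zip(x, x_new)))
    x = x_new

# the gap between successive iterates shrinks rapidly, as a contraction predicts
assert gaps[-1] < 1e-9 and gaps[-1] < gaps[0]
```

On this interval the Lipschitz constant of the operator is roughly $cT^{\lambda}/\Gamma(\lambda+1) \approx 0.62 < 1$, so the iteration is guaranteed to converge; the Volterra structure in fact makes it converge faster than geometrically.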

4. Numerical Example

Example 1.
Consider the following two-dimensional fuzzy impulsive fractional-order complex-valued neural network with random discrete delays and distributed delays:
$$\begin{cases} {}^{C}_{t_{k-1}}D^{\lambda}_{t} z_i(t) = -c_i z_i(t) + \sum_{j=1}^{2}\tilde{a}_{ij}(z_i(t))f_j(z_j(t)) + \sum_{j=1}^{2}\tilde{b}_{ij}(z_i(t))h_j(z_j(t-\tau_j)) + \bigwedge_{j=1}^{2}\alpha_{ij}h_j(z_j(t)) + \bigvee_{j=1}^{2}\beta_{ij}h_j(z_j(t)) + \sum_{j=1}^{2}d_{ij}\int_{t-l(t)}^{t}h_j(z_j(s))\,ds + I_i(t), & t\in(t_{k-1},t_k],\\ \Delta z_i(t_k) = z_i(t_k^{+}) - z_i(t_k) = G_{ik}(z_i(t_k)), \end{cases} \tag{24}$$
where we suppose $\lambda = 0.8$, $\tau = l = 0.5$, $I_1 = 1 - \mathrm{i}$, $I_2 = 3 - 2\mathrm{i}$,
$$f_j(z_j) = \frac{2\exp(x_j)}{2+\exp(x_j)} + \mathrm{i}\,\frac{1}{2+\exp(y_j)}, \qquad h_j(z_j) = \frac{2\exp(x_j)}{2+\exp(x_j)} + \mathrm{i}\,\frac{1}{2+\exp(x_j+3y_j)},$$
$$G_j(z_j) = \frac{1}{2+\exp(x_j+3y_j)} + \mathrm{i}\,\frac{1}{3+\exp(y_j)}, \qquad j=1,2.$$
The memristive connective weights are given as follows:
$$\tilde{a}_{11}^{R}(x_1(t)) = \begin{cases} 0.1, & |x_1(t)| \le 1,\\ 0.2, & |x_1(t)| > 1, \end{cases} \qquad \tilde{a}_{12}^{R}(x_2(t)) = \begin{cases} 0.2, & |x_2(t)| \le 1,\\ 0.3, & |x_2(t)| > 1, \end{cases}$$
$$\tilde{a}_{21}^{R}(x_1(t)) = \begin{cases} 0.2, & |x_1(t)| \le 1,\\ 0.3, & |x_1(t)| > 1, \end{cases} \qquad \tilde{a}_{22}^{R}(x_2(t)) = \begin{cases} 0.1, & |x_2(t)| \le 1,\\ 0.2, & |x_2(t)| > 1, \end{cases}$$
$$\tilde{a}_{11}^{I}(y_1(t)) = \begin{cases} 0.1, & |y_1(t)| \le 1,\\ 0.2, & |y_1(t)| > 1, \end{cases} \qquad \tilde{a}_{12}^{I}(y_2(t)) = \begin{cases} 0.1, & |y_2(t)| \le 1,\\ 0.3, & |y_2(t)| > 1, \end{cases}$$
$$\tilde{a}_{21}^{I}(y_1(t)) = \begin{cases} 0.1, & |y_1(t)| \le 1,\\ 0.3, & |y_1(t)| > 1, \end{cases} \qquad \tilde{a}_{22}^{I}(y_2(t)) = \begin{cases} 0.25, & |y_2(t)| \le 1,\\ 0.2, & |y_2(t)| > 1, \end{cases}$$
$$\tilde{b}_{11}^{R}(x_1(t)) = \begin{cases} 0.15, & |x_1(t)| \le 1,\\ 0.2, & |x_1(t)| > 1, \end{cases} \qquad \tilde{b}_{12}^{R}(x_2(t)) = \begin{cases} 0.02, & |x_2(t)| \le 1,\\ 0.03, & |x_2(t)| > 1, \end{cases}$$
$$\tilde{b}_{21}^{R}(x_1(t)) = \begin{cases} 0.02, & |x_1(t)| \le 1,\\ 0.03, & |x_1(t)| > 1, \end{cases} \qquad \tilde{b}_{22}^{R}(x_2(t)) = \begin{cases} 0.15, & |x_2(t)| \le 1,\\ 0.2, & |x_2(t)| > 1, \end{cases}$$
$$\tilde{b}_{11}^{I}(y_1(t)) = \begin{cases} 0.1, & |y_1(t)| \le 1,\\ 0.2, & |y_1(t)| > 1, \end{cases} \qquad \tilde{b}_{12}^{I}(y_2(t)) = \begin{cases} 0.05, & |y_2(t)| \le 1,\\ 0.03, & |y_2(t)| > 1, \end{cases}$$
$$\tilde{b}_{21}^{I}(y_1(t)) = \begin{cases} 0.05, & |y_1(t)| \le 1,\\ 0.03, & |y_1(t)| > 1, \end{cases} \qquad \tilde{b}_{22}^{I}(y_2(t)) = \begin{cases} 0.1, & |y_2(t)| \le 1,\\ 0.2, & |y_2(t)| > 1. \end{cases}$$
The network parameters are chosen as $c_1 = c_2 = 1 + 3\mathrm{i}$, $d_{11} = d_{12} = d_{21} = d_{22} = 1 + \mathrm{i}$, $\alpha_{11} = \alpha_{12} = \alpha_{21} = \alpha_{22} = 0.01$, $\beta_{11} = \beta_{12} = \beta_{21} = \beta_{22} = 0.02$, $F^{RR} = 0.5$, $F^{RI} = F^{IR} = 0$, $F^{II} = 0.125$, $H^{RR} = 0.5$, $H^{RI} = 0$, $H^{IR} = 0.125$, $H^{II} = 0.1$, $G^{RR} = 0.125$, $G^{RI} = 0.1$, $G^{IR} = 0$, $G^{II} = 0.125$.
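Two of the quoted constants can be checked directly against the stated activations, assuming (as is standard in such examples) that $F^{RR}$ and $F^{II}$ denote the Lipschitz constants of the real and imaginary parts of $f_j$. The sketch estimates the supremum of the slopes of $f_j^{R}(x) = 2e^{x}/(2+e^{x})$ and $f_j^{I}(y) = 1/(2+e^{y})$ on a grid:

```python
import math

# Hedged check: F^RR = 0.5 and F^II = 0.125 agree with the slopes of the
# stated activation f_j(z) = 2e^x/(2+e^x) + i/(2+e^y); both derivatives
# peak at x = ln 2 (values 0.5 and 0.125 respectively).
def fR(x): return 2 * math.exp(x) / (2 + math.exp(x))
def fI(y): return 1 / (2 + math.exp(y))

def sup_slope(g, lo=-10.0, hi=10.0, n=20000):
    # largest secant slope over a fine grid approximates the Lipschitz constant
    pts = [lo + (hi - lo) * k / n for k in range(n + 1)]
    return max(abs(g(b) - g(a)) / (b - a) for a, b in zip(pts, pts[1:]))

assert abs(sup_slope(fR) - 0.5) < 1e-3    # matches F^RR
assert abs(sup_slope(fI) - 0.125) < 1e-3  # matches F^II
```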
Finally, we obtain $q_1 \approx 0.56$, $q_2 \approx 0.73$, $q_3 \approx 0.76$, $q_4 \approx 0.85$, which satisfy $q_2 q_4 > q_1 q_3$. Thus, by Theorem 1, system (24) is uniformly stable.

5. Conclusions

In this work, we have considered a class of fractional-order fuzzy infinite-dimensional neural networks with time delays and derived sufficient conditions for their uniform stability, a significant property of differential systems. The Lyapunov function method is the most common approach to the stability of neural networks; however, reducing its possible conservatism requires constructing a more intricate Lyapunov functional and testing a high-dimensional linear matrix inequality, which increases the computational complexity. Our analysis is instead based on fixed-point theory, and an improved criterion for the uniform stability of the network system has been established. Example 1 shows that the derived criteria are applicable to the uniform stability of fractional-order fuzzy neural networks and generalize results in the existing literature. It should be mentioned that, in real life, many systems and natural processes are subject to stochastic disturbances; the dynamical behavior of stochastic neural networks has therefore attracted much attention in view of its wide range of applications. Finding more practical, stochastic models of complex fractional-order complex-valued neural networks (FOCVNNs) and applying FOCVNNs to secure communication and image encryption will be part of our future work. To sum up, fractional-order complex-valued neural networks still present many problems worthy of further study.

Author Contributions

X.L.: investigation, writing—review and editing; L.C.: conceptualization and supervision; Y.Z.: methodology and writing—original draft. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Shandong Provincial Natural Science Foundation under grant ZR2020MA006 and the Introduction and Cultivation Project of Young and Innovative Talents in Universities of Shandong Province.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

All data included in this study are available upon request via contact with the corresponding author.

Acknowledgments

We would like to express our thanks to the anonymous referees and the editor for their constructive comments and suggestions, which greatly improved this article.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Seow, M.J.; Asari, V.K.; Livingston, A. Learning as a nonlinear line of attraction in a recurrent neural network. Neural Comput. Appl. 2010, 19, 337–342. [Google Scholar] [CrossRef]
  2. Ding, Z.; Zeng, Z.; Wang, L. Robust finite-time stabilization of fractional-order neural networks with discontinuous and continuous activation functions under uncertainty. IEEE Trans. Neural Netw. Learn. Syst. 2018, 29, 1477–1490. [