Global Asymptotic Stability of Competitive Neural Networks with Reaction-Diffusion Terms and Mixed Delays

1 School of Mathematical Sciences, Suqian College, Suqian 223800, China
2 School of Mathematics and Statistics, Huaiyin Normal University, Huaian 223300, China
* Author to whom correspondence should be addressed.
Symmetry 2022, 14(11), 2224; https://doi.org/10.3390/sym14112224
Submission received: 24 September 2022 / Revised: 13 October 2022 / Accepted: 20 October 2022 / Published: 22 October 2022

Abstract
In this article, a new competitive neural network (CNN) with reaction-diffusion terms and mixed delays is proposed. Because this network system contains reaction-diffusion terms, it is a partial differential system, which differs from the existing classic CNNs. First, taking into account the spatial diffusion effect, we introduce spatial diffusion into CNNs. Furthermore, since time delays have an essential influence on the properties of the system, we introduce mixed delays, including time-varying discrete delays and distributed delays. By constructing suitable Lyapunov–Krasovskii functionals and by virtue of the theories of delayed partial differential equations, we study the global asymptotic stability of the considered system. The effectiveness and correctness of the proposed CNN model with reaction-diffusion terms and mixed delays are verified by an example. Finally, some discussion and conclusions on recent developments of CNNs are given.

1. Introduction

In 1996, Meyer-Bäse et al. [1] first introduced competitive neural networks (CNNs) with different time scales. Using a quadratic-type Lyapunov function for the flow of a CNN with different time scales as a global stability method, the authors studied the local stability behavior around individual equilibrium points. Earlier networks considered pools of mutually inhibitory neurons with fixed synaptic connections. A CNN has two types of state variables: the short-term memory (STM) variables, which describe the fast dynamics of the system, and the long-term memory (LTM) variables, which describe the slow dynamics. Because CNNs can accurately reflect the state transformation of neurons, many results on different types of CNNs have been obtained in recent decades. Nie et al. [2] studied the exact existence and dynamical behaviors of multiple equilibrium points for delayed competitive neural networks with a class of nondecreasing piecewise linear activation functions. Lu et al. [3,4] considered the global exponential stability of delayed competitive neural networks with different time scales. Competitive neural networks with time-varying and distributed delays were studied in [5]. For more results on competitive neural networks, see, e.g., [6,7,8,9] and the references therein.
On the other hand, diffusion phenomena are widespread in neural network systems, especially when neurons shift in asymmetric neural networks or when metabolites and proteins move from one level to another; see [10,11,12]. Hence, the study of a neural network needs to consider the changes of neurons in both time and space. It is therefore of great theoretical and practical value to study neural network systems with diffusion terms. Cao et al. [13] studied the global exponential synchronization of delayed memristive neural networks with reaction-diffusion terms. In [14], the authors studied inverse optimal synchronization control of competitive neural networks with constant time delays by means of the drive–response idea and inverse optimality techniques. Xu et al. [15] investigated the global asymptotic stability of fractional-order competitive neural networks with multiple time-varying-delay links. Zheng et al. [16] considered the fixed-time synchronization of discontinuous competitive neural networks. The dynamical behavior of reaction-diffusion neural networks and their synchronization were considered in [17]. For state estimation for delayed genetic regulatory networks with reaction-diffusion terms, see [18]; for stability and asymptotic stability problems for neural networks with reaction-diffusion terms, see [19,20]; and for oscillatory behaviors of neural networks with reaction-diffusion terms, see [21,22].
In this paper, we mainly deal with the global asymptotic stability of CNNs that contain reaction-diffusion terms and mixed delays. For classic CNNs (without reaction-diffusion terms), many results exist; see, e.g., [3,4,5] and the references therein. The most important foundational results on CNNs can be found in [1]. The main approaches for studying CNNs are the Lyapunov function method [1,4] and fixed point theorems combined with matrix theory [2]. The main limitation of these methods is that they are not suitable for stability problems of CNNs with reaction-diffusion terms and mixed delays. To overcome this difficulty, we construct a new Lyapunov functional for the considered model on the basis of the theories of delayed partial differential equations and Lyapunov stability. To the best of our knowledge, no previous article has studied the stability of CNNs with reaction-diffusion terms and mixed delays. Motivated by the above observations, we study the stability of the equilibrium point for a class of reaction-diffusion CNNs with time-varying delays and distributed delays. The main innovations are summarized in the following three aspects:
(1)
A new CNN model is introduced in this paper which extends previous models such as those in [3,4,5,6]; to the best of our knowledge, few papers have studied this new CNN model.
(2)
The model in the present paper contains various types of time delays: time-varying delays, distributed delays, bounded delays, and unbounded delays. Time delay is an inherent parameter of the system and has an important impact on the properties of the control system, so its study has significant application value.
(3)
A simple method for studying CNNs with reaction-diffusion terms and various types of delays is given. We believe that the method used in this paper can easily be applied to other types of dynamic systems.
The rest of the article is organized as follows: In Section 2, a system description and some preliminaries are given. Section 3 gives main results for global asymptotic stability of CNNs. In Section 4, a numerical example is given to show the feasibility of the obtained results. Finally, some conclusions and discussions are drawn in Section 5.

2. Model Description and Preliminaries

Consider the following CNNs with mixed delays:
\[
\begin{cases}
\text{STM:}\ \dot{x}_k(t) = -a_k x_k(t) + \displaystyle\sum_{d=1}^{m} b_{kd} f_d(x_d(t)) + \sum_{d=1}^{m} c_{kd} f_d\big(x_d(t-\tau_d(t))\big) + \sum_{d=1}^{m} \tilde{c}_{kd} \int_{t-\gamma(t)}^{t} f_d(x_d(s))\,ds + B_k \sum_{d=1}^{i} y_{kd}(t)\,\omega_d + I_k(t),\\[2mm]
\text{LTM:}\ \dot{y}_{kd}(t) = -\alpha_k y_{kd}(t) + \omega_d \beta_k f_k(x_k(t)),
\end{cases}\tag{1}
\]
where $k = 1, 2, \ldots, m$; $x_k(t)$ denotes the state of the neuron current; $y_{kd}(t)$ denotes the synaptic transfer efficiency; $\omega_d$ is the external stimulus; $b_{kd}$, $c_{kd}$, $\tilde{c}_{kd}$, and $B_k$ are connection weights; $I_k(t)$ denotes the external input; $f_d(\cdot)$ is the activation function; $\tau_d(t) \ge 0$ and $\gamma(t) \ge 0$ are time-varying delays; and $a_k$, $\alpha_k$, and $\beta_k$ are nonnegative constants. Let $s_k(t) = \sum_{d=1}^{i} y_{kd}(t)\,\omega_d = \omega^T y_k(t)$, where $\omega = (\omega_1, \ldots, \omega_i)^T$ and $y_k(t) = (y_{k1}(t), \ldots, y_{ki}(t))^T$. Assume that $\|\omega\|^2 = \omega_1^2 + \cdots + \omega_i^2 = 1$. Then, system (1) can be rewritten as
\[
\begin{cases}
\text{STM:}\ \dot{x}_k(t) = -a_k x_k(t) + \displaystyle\sum_{d=1}^{m} b_{kd} f_d(x_d(t)) + \sum_{d=1}^{m} c_{kd} f_d\big(x_d(t-\tau_d(t))\big) + \sum_{d=1}^{m} \tilde{c}_{kd} \int_{t-\gamma(t)}^{t} f_d(x_d(s))\,ds + B_k s_k(t) + I_k(t),\\[2mm]
\text{LTM:}\ \dot{s}_k(t) = -\alpha_k s_k(t) + \beta_k f_k(x_k(t)).
\end{cases}\tag{2}
\]
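Before diffusion is added, the delayed ODE system (2) can be integrated numerically with a simple forward-Euler scheme and a history buffer. The sketch below is purely illustrative: it reuses the coefficient values of Example 1 (Section 4), sets $I_k \equiv 0$ as in that example, and uses a bounded distributed-delay window $\gamma \equiv 1$ matching the form of (2); none of this reproduces the paper's own computations.

```python
import math

# Forward-Euler sketch of the delayed CNN (2) (no diffusion), m = 2 neurons.
# Coefficients follow Example 1 of Section 4; the window gamma = 1 is an
# illustrative choice for the bounded distributed delay.
m = 2
a = [10.5, 12.0]; alpha = [3.1, 3.1]; beta = [1.0, 1.0]; B = [0.6, 0.6]
b  = [[5/4, 2/3], [2/3, 7/10]]
c  = [[3/4, 1/2], [5/6, 2/5]]
ct = [[3/4, 1/5], [1/3, 4/3]]             # the tilde-c_{kd} weights

f = lambda u: abs(u + 1) - abs(u - 1)     # bounded Lipschitz activation, F_d = 2
tau = lambda t: abs(math.sin(t))          # time-varying discrete delay
gamma = 1.0                               # length of the distributed-delay window

h, T = 0.005, 5.0
hist = int(1.0 / h)                       # history buffer covers tau(t), gamma <= 1
x = [[0.5, -0.3]] * hist                  # constant pre-history on [-1, 0]
s = [0.2, -0.1]

for i in range(int(T / h)):
    t = i * h
    cur = x[-1]
    new = []
    for k in range(m):
        rhs = -a[k] * cur[k] + B[k] * s[k]
        for d in range(m):
            rhs += b[k][d] * f(cur[d])
            lag = min(len(x) - 1, int(tau(t) / h))      # index of t - tau_d(t)
            rhs += c[k][d] * f(x[-1 - lag][d])
            win = min(len(x) - 1, int(gamma / h))       # Riemann sum over [t - gamma, t]
            rhs += ct[k][d] * h * sum(f(x[-1 - j][d]) for j in range(win))
        new.append(cur[k] + h * rhs)
    s = [s[k] + h * (-alpha[k] * s[k] + beta[k] * f(cur[k])) for k in range(m)]
    x.append(new)

print([round(v, 4) for v in x[-1]], [round(v, 4) for v in s])
```

Since the chosen coefficients satisfy the sign conditions derived later, the simulated state decays toward the zero equilibrium.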
To account for the spatial diffusion effect, we incorporate diffusion terms into system (2) and obtain
\[
\begin{cases}
\text{STM:}\ \dfrac{\partial x_k(\delta,t)}{\partial t} = \displaystyle\sum_{p=1}^{P}\frac{\partial}{\partial \delta_p}\Big(d_{kp}\frac{\partial x_k(\delta,t)}{\partial \delta_p}\Big) - a_k x_k(\delta,t) + \sum_{d=1}^{m} b_{kd} f_d(x_d(\delta,t)) + \sum_{d=1}^{m} c_{kd} f_d\big(x_d(\delta,t-\tau_d(t))\big) + \sum_{d=1}^{m} \tilde{c}_{kd} \int_{t-\gamma(t)}^{t} f_d(x_d(\delta,s))\,ds + B_k s_k(\delta,t) + I_k(t),\\[2mm]
\text{LTM:}\ \dfrac{\partial s_k(\delta,t)}{\partial t} = \displaystyle\sum_{p=1}^{P}\frac{\partial}{\partial \delta_p}\Big(d_{kp}^{*}\frac{\partial s_k(\delta,t)}{\partial \delta_p}\Big) - \alpha_k s_k(\delta,t) + \beta_k f_k(x_k(\delta,t)),
\end{cases}\tag{3}
\]
where $d_{kp}, d_{kp}^{*} \ge 0$ are diffusion coefficients; $\delta = (\delta_1, \delta_2, \ldots, \delta_P)^T \in \Omega \subset \mathbb{R}^P$, where $\Omega$ is a bounded compact set with smooth boundary $\partial\Omega$ and $\operatorname{mes}\Omega > 0$. The meanings of the other parameters in system (3) are the same as those of the corresponding ones in system (1). System (3) is supplemented with the boundary conditions
\[
\frac{\partial x_k}{\partial n} = \Big(\frac{\partial x_k}{\partial \delta_1}, \frac{\partial x_k}{\partial \delta_2}, \ldots, \frac{\partial x_k}{\partial \delta_P}\Big)^T = 0,\qquad \frac{\partial s_k}{\partial n} = \Big(\frac{\partial s_k}{\partial \delta_1}, \frac{\partial s_k}{\partial \delta_2}, \ldots, \frac{\partial s_k}{\partial \delta_P}\Big)^T = 0,\qquad \delta \in \partial\Omega,\ k = 1, 2, \ldots, m,\tag{4}
\]
and
\[
x_k(\delta,s) = \phi_{x_k}(\delta,s),\qquad s_k(\delta,s) = \phi_{s_k}(\delta,s),\qquad s \in [-\tau, 0],\ k = 1, 2, \ldots, m,\tag{5}
\]
where $\tau = \sup_{t \in \mathbb{R}}\{\tau_d(t), \gamma(t)\}$ for $d = 1, 2, \ldots, m$, and $\phi_{x_k}(\delta, s)$ and $\phi_{s_k}(\delta, s)$ are bounded and continuous on $\Omega \times [-\tau, 0]$.
Throughout this paper, we need the following assumptions:
(H$_1$) For each $i = 1, 2, \ldots, m$, $f_i\colon \mathbb{R} \to \mathbb{R}$ is bounded and satisfies the Lipschitz condition; i.e., there exists a constant $F_i > 0$ such that
\[
|f_i(x) - f_i(y)| \le F_i |x - y| \quad \text{for all } x, y \in \mathbb{R}.
\]
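Assumption (H$_1$) can be verified numerically for a concrete activation. As a sketch, the snippet below samples difference quotients of $f(u) = |u+1| - |u-1|$, the activation used in Example 1 of Section 4, for which the Lipschitz constant is $F = 2$:

```python
import random

# Empirically bound the Lipschitz constant of f(u) = |u + 1| - |u - 1|,
# the activation used in Example 1; assumption (H_1) holds with F = 2.
f = lambda u: abs(u + 1) - abs(u - 1)

random.seed(0)
worst = 0.0
for _ in range(100_000):
    u, v = random.uniform(-5, 5), random.uniform(-5, 5)
    if u != v:
        worst = max(worst, abs(f(u) - f(v)) / abs(u - v))

print(round(worst, 6))  # never exceeds F = 2
```

The function is piecewise linear with slope 2 on $[-1, 1]$ and slope 0 outside, so the sampled ratio approaches but never exceeds 2; it is also bounded, $|f(u)| \le 2$.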
For convenience, some notation is introduced. For each $u = (u_1, u_2, \ldots, u_m)^T \in \mathbb{R}^m$, define the 1-norm of $u$ by $\|u\|_1 = \sum_{i=1}^{m} |u_i|$; for each $x = (x_1(\delta,t), x_2(\delta,t), \ldots, x_m(\delta,t))^T \in \mathbb{R}^m$, denote
\[
\|x_i(\delta,t)\|_2 = \Big(\int_\Omega |x_i(\delta,t)|^2\,d\delta\Big)^{1/2},\qquad i = 1, 2, \ldots, m,\ t \in \mathbb{R}.
\]
Definition 1.
Assume that $Y^* = (x^*, s^*)^T$ is an equilibrium point of system (3), where $x^* = (x_1^*, x_2^*, \ldots, x_m^*)^T$ and $s^* = (s_1^*, s_2^*, \ldots, s_m^*)^T$. The equilibrium point $Y^*$ is said to be globally asymptotically stable if there exists a constant $M \ge 1$ such that
\[
\sum_{i=1}^{m}\|x_i - x_i^*\|_2 + \sum_{i=1}^{m}\|s_i - s_i^*\|_2 \le M\big(\|\phi_x - x^*\|_2 + \|\phi_s - s^*\|_2\big)\quad \text{for all } t \ge 0,
\]
where
\[
\|\phi_x - x^*\|_2 = \sup_{s\in[-\tau,0]}\sum_{i=1}^{m}\|\phi_{x_i}(\delta,s) - x_i^*\|_2,\qquad \|\phi_s - s^*\|_2 = \sup_{s\in[-\tau,0]}\sum_{i=1}^{m}\|\phi_{s_i}(\delta,s) - s_i^*\|_2.
\]

3. Main Results

Theorem 1.
Suppose that assumption (H$_1$) holds. Then, the equilibrium point $Y^* = (x^*, s^*)^T$ of system (3) is globally asymptotically stable under the initial conditions (4) and (5), provided that
\[
-2a_k + |B_k| + |\beta_k|F_k + 2\sum_{d=1}^{m}\big(|b_{kd}|F_d + |c_{kd}|F_d + |\tilde{c}_{kd}|F_d\tau\big) < 0\tag{6}
\]
and
\[
-2\alpha_k + |\beta_k|F_k < 0,\tag{7}
\]
where k = 1 , 2 , , m .
Proof. 
It is easy to see that the boundedness of the activation functions guarantees the existence of an equilibrium point of system (3); the uniqueness of the equilibrium point then follows from its global asymptotic stability.
Assume that ( x 1 ( δ , t ) , x 2 ( δ , t ) , , x m ( δ , t ) , s 1 ( δ , t ) , s 2 ( δ , t ) , , s m ( δ , t ) ) T is any solution of the system (3). We rewrite system (3) as follows:
\[
\frac{\partial (x_k - x_k^*)}{\partial t} = \sum_{p=1}^{P}\frac{\partial}{\partial \delta_p}\Big(d_{kp}\frac{\partial (x_k - x_k^*)}{\partial \delta_p}\Big) - a_k(x_k - x_k^*) + \sum_{d=1}^{m} b_{kd}\big[f_d(x_d) - f_d(x_d^*)\big] + \sum_{d=1}^{m} c_{kd}\big[f_d(x_d(\delta,t-\tau_d(t))) - f_d(x_d^*)\big] + \sum_{d=1}^{m}\tilde{c}_{kd}\int_{t-\gamma(t)}^{t}\big[f_d(x_d(\delta,s)) - f_d(x_d^*)\big]\,ds + B_k(s_k - s_k^*)\tag{8}
\]
and
\[
\frac{\partial (s_k - s_k^*)}{\partial t} = \sum_{p=1}^{P}\frac{\partial}{\partial \delta_p}\Big(d_{kp}^{*}\frac{\partial (s_k - s_k^*)}{\partial \delta_p}\Big) - \alpha_k(s_k - s_k^*) + \beta_k\big[f_k(x_k(\delta,t)) - f_k(x_k^*)\big].\tag{9}
\]
Multiplying both sides of (8) by $x_k - x_k^*$ and integrating over $\Omega$, we have
\[
\begin{aligned}
\frac{1}{2}\frac{d}{dt}\int_\Omega (x_k - x_k^*)^2\,d\delta ={}& \sum_{p=1}^{P}\int_\Omega (x_k - x_k^*)\frac{\partial}{\partial \delta_p}\Big(d_{kp}\frac{\partial (x_k - x_k^*)}{\partial \delta_p}\Big)d\delta - \int_\Omega a_k (x_k - x_k^*)^2\,d\delta + \sum_{d=1}^{m}\int_\Omega b_{kd}\big[f_d(x_d) - f_d(x_d^*)\big](x_k - x_k^*)\,d\delta\\
&+ \sum_{d=1}^{m}\int_\Omega c_{kd}\big[f_d(x_d(\delta,t-\tau_d(t))) - f_d(x_d^*)\big](x_k - x_k^*)\,d\delta + \sum_{d=1}^{m}\int_\Omega (x_k - x_k^*)\,\tilde{c}_{kd}\int_{t-\gamma(t)}^{t}\big[f_d(x_d(\delta,s)) - f_d(x_d^*)\big]\,ds\,d\delta + \int_\Omega B_k(s_k - s_k^*)(x_k - x_k^*)\,d\delta.
\end{aligned}\tag{10}
\]
From the boundary conditions (4) and (5), we have
\[
\sum_{p=1}^{P}\int_\Omega (x_k - x_k^*)\frac{\partial}{\partial \delta_p}\Big(d_{kp}\frac{\partial (x_k - x_k^*)}{\partial \delta_p}\Big)d\delta = -\sum_{p=1}^{P}\int_\Omega d_{kp}\Big(\frac{\partial (x_k - x_k^*)}{\partial \delta_p}\Big)^2 d\delta\tag{11}
\]
and
\[
\sum_{p=1}^{P}\int_\Omega (s_k - s_k^*)\frac{\partial}{\partial \delta_p}\Big(d_{kp}^{*}\frac{\partial (s_k - s_k^*)}{\partial \delta_p}\Big)d\delta = -\sum_{p=1}^{P}\int_\Omega d_{kp}^{*}\Big(\frac{\partial (s_k - s_k^*)}{\partial \delta_p}\Big)^2 d\delta.\tag{12}
\]
From (10) and (11), assumption (H$_1$), and the Hölder integral inequality, we have
\[
\begin{aligned}
\frac{d\|x_k - x_k^*\|_2^2}{dt} \le{}& -2a_k\|x_k - x_k^*\|_2^2 + \sum_{d=1}^{m}|b_{kd}|F_d\|x_d - x_d^*\|_2^2 + \sum_{d=1}^{m}|b_{kd}|F_d\|x_k - x_k^*\|_2^2 + \sum_{d=1}^{m}|c_{kd}|F_d\|x_d - x_d^*\|_2^2 + \sum_{d=1}^{m}|c_{kd}|F_d\|x_k - x_k^*\|_2^2\\
&+ \sum_{d=1}^{m}|\tilde{c}_{kd}|F_d\tau\|x_d - x_d^*\|_2^2 + \sum_{d=1}^{m}|\tilde{c}_{kd}|F_d\tau\|x_k - x_k^*\|_2^2 + |B_k|\|x_k - x_k^*\|_2^2 + |B_k|\|s_k - s_k^*\|_2^2\\
={}& \Big(-2a_k + |B_k| + \sum_{d=1}^{m}\big(|b_{kd}|F_d + |c_{kd}|F_d + |\tilde{c}_{kd}|F_d\tau\big)\Big)\|x_k - x_k^*\|_2^2 + \sum_{d=1}^{m}\big(|b_{kd}|F_d + |c_{kd}|F_d + |\tilde{c}_{kd}|F_d\tau\big)\|x_d - x_d^*\|_2^2 + |B_k|\|s_k - s_k^*\|_2^2.
\end{aligned}\tag{13}
\]
Multiplying both sides of (9) by $s_k - s_k^*$ and integrating over $\Omega$, in view of (12) and assumption (H$_1$), we have
\[
\frac{d\|s_k - s_k^*\|_2^2}{dt} \le -2\alpha_k\|s_k - s_k^*\|_2^2 + |\beta_k|F_k\|s_k - s_k^*\|_2^2 + |\beta_k|F_k\|x_k - x_k^*\|_2^2 = \big(-2\alpha_k + |\beta_k|F_k\big)\|s_k - s_k^*\|_2^2 + |\beta_k|F_k\|x_k - x_k^*\|_2^2.\tag{14}
\]
Construct the following Lyapunov functional:
\[
V(t) = \sum_{k=1}^{m}\big(\|x_k - x_k^*\|_2^2 + \|s_k - s_k^*\|_2^2\big).
\]
Calculating the upper right Dini derivative D + V ( t ) of V ( t ) along the solutions of system (3), it follows from (6), (7), (13), and (14) that
\[
D^+V(t) \le \sum_{k=1}^{m}\Big\{-2a_k + |B_k| + \sum_{d=1}^{m}\big(|b_{kd}|F_d + |c_{kd}|F_d + |\tilde{c}_{kd}|F_d\tau\big) + \sum_{d=1}^{m}\big(|b_{kd}|F_d + |c_{kd}|F_d + |\tilde{c}_{kd}|F_d\tau\big) + |\beta_k|F_k\Big\}\|x_k - x_k^*\|_2^2 + \sum_{k=1}^{m}\big(-2\alpha_k + |\beta_k|F_k + |B_k|\big)\|s_k - s_k^*\|_2^2 \le 0.\tag{15}
\]
Hence, $V(t) \le V(0)$ for $t \ge 0$. Furthermore, we have
\[
V(0) = \sum_{k=1}^{m}\big(\|x_k(\delta,0) - x_k^*\|_2^2 + \|s_k(\delta,0) - s_k^*\|_2^2\big) \le \sup_{s\in[-\tau,0]}\sum_{k=1}^{m}\big(\|x_k(\delta,s) - x_k^*\|_2^2 + \|s_k(\delta,s) - s_k^*\|_2^2\big).
\]
Thus,
\[
\sum_{k=1}^{m}\big(\|x_k - x_k^*\|_2 + \|s_k - s_k^*\|_2\big) \le \sup_{s\in[-\tau,0]}\sum_{k=1}^{m}\big(\|x_k(\delta,s) - x_k^*\|_2 + \|s_k(\delta,s) - s_k^*\|_2\big) = \|\phi_x - x^*\|_2 + \|\phi_s - s^*\|_2.
\]
This implies that the equilibrium point of system (3) is globally asymptotically stable. The proof is completed. □
In system (3), the distributed delay is bounded. If the distributed delay is unbounded, we consider the following system:
\[
\begin{cases}
\text{STM:}\ \dfrac{\partial x_k(\delta,t)}{\partial t} = \displaystyle\sum_{p=1}^{P}\frac{\partial}{\partial \delta_p}\Big(d_{kp}\frac{\partial x_k(\delta,t)}{\partial \delta_p}\Big) - a_k x_k(\delta,t) + \sum_{d=1}^{m} b_{kd} f_d(x_d(\delta,t)) + \sum_{d=1}^{m} c_{kd} f_d\big(x_d(\delta,t-\tau_d(t))\big) + \sum_{d=1}^{m}\tilde{c}_{kd}\int_{-\infty}^{t} K_d(t-s) f_d(x_d(\delta,s))\,ds + B_k s_k(\delta,t) + I_k(t),\\[2mm]
\text{LTM:}\ \dfrac{\partial s_k(\delta,t)}{\partial t} = \displaystyle\sum_{p=1}^{P}\frac{\partial}{\partial \delta_p}\Big(d_{kp}^{*}\frac{\partial s_k(\delta,t)}{\partial \delta_p}\Big) - \alpha_k s_k(\delta,t) + \beta_k f_k(x_k(\delta,t)),
\end{cases}\tag{16}
\]
where K d ( · ) is the delay kernel function which satisfies the following assumption:
(H$_2$) (i) $K_d(\cdot)\colon [0, \infty) \to [0, \infty)$ $(d = 1, 2, \ldots, m)$ is continuous;
(ii) $\int_0^\infty K_d(s)\,ds = 1$ and $\int_0^\infty s K_d(s)\,ds < \infty$;
(iii) there exists a positive number $\mu$ such that $\int_0^\infty s e^{\mu s} K_d(s)\,ds < \infty$.
Let system (16) have the following boundary and initial conditions:
\[
\frac{\partial x_k}{\partial n} = \Big(\frac{\partial x_k}{\partial \delta_1}, \frac{\partial x_k}{\partial \delta_2}, \ldots, \frac{\partial x_k}{\partial \delta_P}\Big)^T = 0,\qquad \frac{\partial s_k}{\partial n} = \Big(\frac{\partial s_k}{\partial \delta_1}, \frac{\partial s_k}{\partial \delta_2}, \ldots, \frac{\partial s_k}{\partial \delta_P}\Big)^T = 0,\qquad \delta \in \partial\Omega,\ k = 1, 2, \ldots, m,\tag{17}
\]
and
\[
x_k(\delta,s) = \phi_{x_k}(\delta,s),\qquad s_k(\delta,s) = \phi_{s_k}(\delta,s),\qquad s \in (-\infty, 0],\ k = 1, 2, \ldots, m.\tag{18}
\]
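As a quick sanity check of assumption (H$_2$), the kernel $K(s) = s e^{-s}$ used later in Example 1 can be tested numerically. The sketch below uses plain Riemann sums on a truncated domain; $\mu = 0.5$ is one admissible choice for (iii), since the integrand then still decays exponentially.

```python
import math

# Numerical check of (H2) for the kernel K(s) = s * exp(-s) of Example 1.
# Analytically: (ii) gives 1 and 2; (iii) with mu = 0.5 gives 2 / 0.5^3 = 16.
K = lambda s: s * math.exp(-s)

h, S = 0.001, 50.0                        # step size and truncation of [0, oo)
grid = [i * h for i in range(1, int(S / h))]

total    = h * sum(K(s) for s in grid)                          # (ii): equals 1
first    = h * sum(s * K(s) for s in grid)                      # (ii): finite, equals 2
weighted = h * sum(s * math.exp(0.5 * s) * K(s) for s in grid)  # (iii): finite

print(round(total, 4), round(first, 4), round(weighted, 2))  # → 1.0 2.0 16.0
```

The truncation at $S = 50$ is harmless because the integrands decay like $e^{-(1-\mu)s}$.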
Theorem 2.
Suppose that assumptions (H$_1$) and (H$_2$) hold. Then, the equilibrium point $Y^* = (x^*, s^*)^T$ of system (16) is globally asymptotically stable under the initial conditions (17) and (18), provided that
\[
-2a_k + |B_k| + 2\sum_{d=1}^{m}\big(|b_{kd}|F_d + |c_{kd}|F_d\big) + 2\sum_{d=1}^{m}|\tilde{c}_{kd}|^{2\xi_{kd}}F_d^{2\eta_d} + |\beta_k|F_k < 0\tag{19}
\]
and
\[
-2\alpha_k + |\beta_k|F_k < 0,\tag{20}
\]
where $k = 1, 2, \ldots, m$, and $\xi_{kd} + \eta_d = 1$ with $\xi_{kd}, \eta_d \ge 0$.
Proof. 
Assume that ( x 1 ( δ , t ) , x 2 ( δ , t ) , , x m ( δ , t ) , s 1 ( δ , t ) , s 2 ( δ , t ) , , s m ( δ , t ) ) T is any solution of the system (16). We rewrite system (16) as follows:
\[
\frac{\partial (x_k - x_k^*)}{\partial t} = \sum_{p=1}^{P}\frac{\partial}{\partial \delta_p}\Big(d_{kp}\frac{\partial (x_k - x_k^*)}{\partial \delta_p}\Big) - a_k(x_k - x_k^*) + \sum_{d=1}^{m} b_{kd}\big[f_d(x_d) - f_d(x_d^*)\big] + \sum_{d=1}^{m} c_{kd}\big[f_d(x_d(\delta,t-\tau_d(t))) - f_d(x_d^*)\big] + \sum_{d=1}^{m}\tilde{c}_{kd}\int_{-\infty}^{t} K_d(t-s)\big[f_d(x_d(\delta,s)) - f_d(x_d^*)\big]\,ds + B_k(s_k - s_k^*)\tag{21}
\]
and
\[
\frac{\partial (s_k - s_k^*)}{\partial t} = \sum_{p=1}^{P}\frac{\partial}{\partial \delta_p}\Big(d_{kp}^{*}\frac{\partial (s_k - s_k^*)}{\partial \delta_p}\Big) - \alpha_k(s_k - s_k^*) + \beta_k\big[f_k(x_k(\delta,t)) - f_k(x_k^*)\big].\tag{22}
\]
Similar to the proof of Theorem 1, by (21) and (22), we have
\[
\begin{aligned}
\frac{d\|x_k - x_k^*\|_2^2}{dt} \le{}& -2a_k\|x_k - x_k^*\|_2^2 + \sum_{d=1}^{m}|b_{kd}|F_d\|x_d - x_d^*\|_2^2 + \sum_{d=1}^{m}|b_{kd}|F_d\|x_k - x_k^*\|_2^2 + \sum_{d=1}^{m}|c_{kd}|F_d\|x_d - x_d^*\|_2^2 + \sum_{d=1}^{m}|c_{kd}|F_d\|x_k - x_k^*\|_2^2\\
&+ 2\sum_{d=1}^{m}|\tilde{c}_{kd}|\int_{-\infty}^{t} K_d(t-s)F_d\|x_d(\delta,s) - x_d^*\|_2\,\|x_k - x_k^*\|_2\,ds + |B_k|\|x_k - x_k^*\|_2^2 + |B_k|\|s_k - s_k^*\|_2^2\\
={}& \Big(-2a_k + |B_k| + \sum_{d=1}^{m}\big(|b_{kd}|F_d + |c_{kd}|F_d\big)\Big)\|x_k - x_k^*\|_2^2 + \sum_{d=1}^{m}\big(|b_{kd}|F_d + |c_{kd}|F_d\big)\|x_d - x_d^*\|_2^2 + |B_k|\|s_k - s_k^*\|_2^2\\
&+ 2\sum_{d=1}^{m}|\tilde{c}_{kd}|\int_{-\infty}^{t} K_d(t-s)F_d\|x_d(\delta,s) - x_d^*\|_2\,\|x_k - x_k^*\|_2\,ds
\end{aligned}\tag{23}
\]
and
\[
\frac{d\|s_k - s_k^*\|_2^2}{dt} \le -2\alpha_k\|s_k - s_k^*\|_2^2 + |\beta_k|F_k\|s_k - s_k^*\|_2^2 + |\beta_k|F_k\|x_k - x_k^*\|_2^2 = \big(-2\alpha_k + |\beta_k|F_k\big)\|s_k - s_k^*\|_2^2 + |\beta_k|F_k\|x_k - x_k^*\|_2^2.\tag{24}
\]
Construct the following Lyapunov functional:
\[
V(t) = \sum_{k=1}^{m}\Big(\|x_k - x_k^*\|_2^2 + \|s_k - s_k^*\|_2^2 + \sum_{d=1}^{m}|\tilde{c}_{kd}|^{2\xi_{kd}}F_d^{2\eta_d}\int_0^\infty K_d(s)\int_{t-s}^{t}\|x_d(\delta,\tau) - x_d^*\|_2^2\,d\tau\,ds\Big).\tag{25}
\]
Calculating the upper right Dini derivative D + V ( t ) of V ( t ) along the solutions of system (16), it follows from (23), (24), (19), (20), and assumption (H 2 ) that
\[
\begin{aligned}
D^+V(t) \le{}& \sum_{k=1}^{m}\Big\{-2a_k + |B_k| + \sum_{d=1}^{m}|b_{kd}|F_d + \sum_{d=1}^{m}|c_{kd}|F_d + \sum_{d=1}^{m}\big(|b_{kd}|F_d + |c_{kd}|F_d\big) + |\beta_k|F_k\Big\}\|x_k - x_k^*\|_2^2 + \sum_{k=1}^{m}\big(-2\alpha_k + |\beta_k|F_k + |B_k|\big)\|s_k - s_k^*\|_2^2\\
&+ \sum_{k=1}^{m}2\sum_{d=1}^{m}|\tilde{c}_{kd}|\int_0^\infty K_d(s)F_d\|x_d(\delta,t-s) - x_d^*\|_2\,\|x_k - x_k^*\|_2\,ds + \sum_{k=1}^{m}\sum_{d=1}^{m}|\tilde{c}_{kd}|^{2\xi_{kd}}F_d^{2\eta_d}\int_0^\infty K_d(s)\big(\|x_d(\delta,t) - x_d^*\|_2^2 - \|x_d(\delta,t-s) - x_d^*\|_2^2\big)\,ds\\
\le{}& \sum_{k=1}^{m}\Big\{-2a_k + |B_k| + \sum_{d=1}^{m}|b_{kd}|F_d + \sum_{d=1}^{m}|c_{kd}|F_d + \sum_{d=1}^{m}\big(|b_{kd}|F_d + |c_{kd}|F_d\big) + |\beta_k|F_k + 2\sum_{d=1}^{m}|\tilde{c}_{kd}|^{2\xi_{kd}}F_d^{2\eta_d}\Big\}\|x_k - x_k^*\|_2^2 + \sum_{k=1}^{m}\big(-2\alpha_k + |\beta_k|F_k + |B_k|\big)\|s_k - s_k^*\|_2^2 \le 0.
\end{aligned}\tag{26}
\]
Hence, V ( t ) V ( 0 ) for t 0 . From (25), we have
\[
V(t) \ge \sum_{k=1}^{m}\big(\|x_k - x_k^*\|_2^2 + \|s_k - s_k^*\|_2^2\big)
\]
and
\[
\begin{aligned}
V(0) ={}& \sum_{k=1}^{m}\Big(\|x_k(\delta,0) - x_k^*\|_2^2 + \|s_k(\delta,0) - s_k^*\|_2^2 + \sum_{d=1}^{m}|\tilde{c}_{kd}|^{2\xi_{kd}}F_d^{2\eta_d}\int_0^\infty K_d(s)\int_{-s}^{0}\|x_d(\delta,\tau) - x_d^*\|_2^2\,d\tau\,ds\Big)\\
\le{}& \Big(1 + \sum_{d=1}^{m}|\tilde{c}_{kd}|^{2\xi_{kd}}F_d^{2\eta_d}\int_0^\infty s K_d(s)\,ds\Big)\Big(\sup_{s\in(-\infty,0]}\sum_{k=1}^{m}\|x_k(\delta,s) - x_k^*\|_2^2 + \sup_{s\in(-\infty,0]}\sum_{k=1}^{m}\|s_k(\delta,s) - s_k^*\|_2^2\Big).\tag{27}
\end{aligned}
\]
Let
\[
M = \max_{k=1,2,\ldots,m}\Big(1 + \sum_{d=1}^{m}|\tilde{c}_{kd}|^{2\xi_{kd}}F_d^{2\eta_d}\int_0^\infty s K_d(s)\,ds\Big).
\]
Then, M 1 and
\[
\sum_{k=1}^{m}\big(\|x_k - x_k^*\|_2 + \|s_k - s_k^*\|_2\big) \le M\sup_{s\in(-\infty,0]}\sum_{k=1}^{m}\big(\|x_k(\delta,s) - x_k^*\|_2 + \|s_k(\delta,s) - s_k^*\|_2\big) = M\big(\|\phi_x - x^*\|_2 + \|\phi_s - s^*\|_2\big).
\]
This implies that the equilibrium point of system (16) is globally asymptotically stable. The proof is completed. □
Corollary 1.
Suppose that assumptions (H$_1$) and (H$_2$) hold. Then, the equilibrium point $Y^* = (x^*, s^*)^T$ of system (16) is globally asymptotically stable under the initial conditions (17) and (18), provided that
\[
-2a_k + |B_k| + 2\sum_{d=1}^{m}\big(|b_{kd}|F_d + |c_{kd}|F_d\big) + 2\sum_{d=1}^{m}|\tilde{c}_{kd}|^2 F_d^2 + |\beta_k|F_k < 0
\]
and
\[
-2\alpha_k + |\beta_k|F_k < 0,
\]
where k = 1 , 2 , , m .
Remark 1.
In general, constructing a Lyapunov functional is the main research method for studying stability problems of neural networks; see [23,24,25,26,27]. However, constructing a proper Lyapunov functional for a complicated system is very difficult. In this paper, a simple Lyapunov functional is constructed, with which the dynamic behavior of a competitive network system can be studied easily.
Remark 2.
Since system (3) contains reaction-diffusion terms, we develop new ways (see Equations (11) and (12)) to deal with these terms so that the stability conclusions for the solution can be obtained smoothly.
Remark 3.
In this paper, we only obtain the global asymptotic stability of competitive neural networks with reaction-diffusion terms and mixed delays. We cannot yet obtain global exponential stability: because system (3) contains reaction-diffusion terms and mixed delays, it is difficult to construct a suitable Lyapunov functional for that purpose. The global exponential stability of system (3) is a problem we intend to solve in the future.

4. An Example

Example 1.
Consider the following system:
\[
\begin{cases}
\text{STM:}\ \dfrac{\partial x_k(\delta,t)}{\partial t} = \dfrac{\partial}{\partial \delta_p}\Big(d_{kp}\dfrac{\partial x_k(\delta,t)}{\partial \delta_p}\Big) - a_k x_k(\delta,t) + \displaystyle\sum_{d=1}^{m} b_{kd} f_d(x_d(\delta,t)) + \sum_{d=1}^{m} c_{kd} f_d\big(x_d(\delta,t-\tau_d(t))\big) + \sum_{d=1}^{m}\tilde{c}_{kd}\int_{-\infty}^{t} K_d(t-s) f_d(x_d(\delta,s))\,ds + B_k s_k(\delta,t),\\[2mm]
\text{LTM:}\ \dfrac{\partial s_k(\delta,t)}{\partial t} = \dfrac{\partial}{\partial \delta_p}\Big(d_{kp}^{*}\dfrac{\partial s_k(\delta,t)}{\partial \delta_p}\Big) - \alpha_k s_k(\delta,t) + \beta_k f_k(x_k(\delta,t)),
\end{cases}\tag{28}
\]
where $p = 1$, $k, d = 1, 2$, $K_d(t) = t e^{-t}$, $d_{kp} = d_{kp}^{*} = 1$, $\tau_d(t) = |\sin t|$, and $f_d(\xi) = |\xi + 1| - |\xi - 1|$. Obviously, $|f_d(\xi_1) - f_d(\xi_2)| \le 2|\xi_1 - \xi_2|$, so $F_d = 2$. Let
\[
a_1 = 10.5,\quad a_2 = 12,\quad b_{11} = \tfrac{5}{4},\quad b_{12} = \tfrac{2}{3},\quad b_{21} = \tfrac{2}{3},\quad b_{22} = \tfrac{7}{10},
\]
\[
c_{11} = \tfrac{3}{4},\quad c_{12} = \tfrac{1}{2},\quad c_{21} = \tfrac{5}{6},\quad c_{22} = \tfrac{2}{5},\quad B_1 = B_2 = \tfrac{3}{5},
\]
\[
\tilde{c}_{11} = \tfrac{3}{4},\quad \tilde{c}_{12} = \tfrac{1}{5},\quad \tilde{c}_{21} = \tfrac{1}{3},\quad \tilde{c}_{22} = \tfrac{4}{3},\quad \alpha_1 = \alpha_2 = 3.1,\quad \beta_1 = \beta_2 = 1.
\]
It is easy to check that
\[
-2a_1 + |B_1| + 2\sum_{d=1}^{2}\big(|b_{1d}|F_d + |c_{1d}|F_d + |\tilde{c}_{1d}|F_d\tau\big) + |\beta_1|F_1 \le -0.81 < 0,
\]
\[
-2a_2 + |B_2| + 2\sum_{d=1}^{2}\big(|b_{2d}|F_d + |c_{2d}|F_d + |\tilde{c}_{2d}|F_d\tau\big) + |\beta_2|F_2 \le -3.63 < 0,
\]
\[
-2\alpha_1 + |\beta_1|F_1 = -4.2 < 0,\qquad -2\alpha_2 + |\beta_2|F_2 = -4.2 < 0.
\]
All the hypotheses of Theorem 1 are satisfied. Since $f_1(0) = f_2(0) = 0$, $(x^*, s^*)^T = (0, 0, 0, 0)^T$ is a constant solution of system (28), and it is globally asymptotically stable.
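The sign conditions above can be re-derived mechanically. A small sketch follows, taking $\tau = \sup_t |\sin t| = 1$; since the rounded constants quoted in the text may reflect slightly different bookkeeping, only the signs are tested here.

```python
# Sign check of the Theorem 1 conditions for the data of Example 1,
# with tau = sup_t |sin t| = 1.
F = [2.0, 2.0]
a = [10.5, 12.0]; alpha = [3.1, 3.1]; beta = [1.0, 1.0]; B = [0.6, 0.6]
b  = [[5/4, 2/3], [2/3, 7/10]]
c  = [[3/4, 1/2], [5/6, 2/5]]
ct = [[3/4, 1/5], [1/3, 4/3]]
tau = 1.0

# STM condition (6) and LTM condition (7) for k = 1, 2
stm = [(-2 * a[k] + abs(B[k]) + abs(beta[k]) * F[k]
        + 2 * sum((abs(b[k][d]) + abs(c[k][d]) + abs(ct[k][d]) * tau) * F[d]
                  for d in range(2)))
       for k in range(2)]
ltm = [-2 * alpha[k] + abs(beta[k]) * F[k] for k in range(2)]

for k in range(2):
    print(f"k={k+1}: STM condition {stm[k]:.4f} < 0, LTM condition {ltm[k]:.4f} < 0")
```

Both STM values and both LTM values come out negative, confirming that the hypotheses of Theorem 1 hold for this parameter set.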

5. Conclusions and Discussion

This paper is devoted to the global asymptotic stability of competitive neural networks with reaction-diffusion terms and mixed delays, studied via mathematical analysis techniques and the Lyapunov functional method. To achieve global asymptotic stability, we use several inequality-based analysis techniques, construct a suitable Lyapunov functional for the considered system, and obtain new criteria guaranteeing the global asymptotic stability of competitive neural networks with reaction-diffusion terms and mixed delays. It should be pointed out that this is the first study of the global asymptotic stability of competitive neural networks with reaction-diffusion terms and mixed delays. Finally, a numerical example has been given to verify the correctness of our theoretical results. However, we only obtain global asymptotic stability in this paper; global exponential stability remains open and will be our research focus in the future.
Since the CNNs in the present paper contain reaction-diffusion terms, they constitute a partial differential system, and the traditional methods for ordinary differential systems are no longer applicable. By using the theories of delayed partial differential equations and Lyapunov stability, we construct a suitable Lyapunov functional and obtain global asymptotic stability. We believe that the method in the present paper can be applied to other types of systems, such as impulsive partial differential equations, stochastic partial differential equations, and so on.

Author Contributions

Writing—original draft, B.D.; writing—review and editing, S.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Meyer-Bäse, A.; Oh, F.; Scheich, H. Singular perturbation analysis of competitive neural networks with different time scales. Neural Comput. 1996, 8, 1731–1742. [Google Scholar] [CrossRef] [PubMed]
  2. Nie, X.; Cao, J.; Fei, S. Multistability and instability of delayed competitive neural networks with nondecreasing piecewise linear activation functions. Neurocomputing 2013, 119, 281–291. [Google Scholar] [CrossRef]
  3. Lu, H.; He, Z. Global exponential stability of delayed competitive neural networks with different time scales. Neural Netw. 2005, 18, 243–250. [Google Scholar] [CrossRef] [PubMed]
  4. Lu, H.; Amari, S. Global exponential stability of multitime scale competitive neural networks with nonsmooth functions. IEEE Trans. Neural Netw. 2006, 17, 1152–1164. [Google Scholar] [CrossRef]
  5. Nie, X.; Cao, J. Multistability of competitive neural networks with time-varying and distributed delays. Nonlinear Anal. Real World Appl. 2009, 10, 928–942. [Google Scholar] [CrossRef]
  6. Gu, H.; Jiang, H.; Teng, Z. Existence and global exponential stability of equilibrium of competitive neural networks with different time scales and multiple delays. J. Frankl. Inst. 2010, 347, 719–731. [Google Scholar] [CrossRef]
  7. Meyer-Bäse, A.; Roberts, R.; Thümmler, V. Local uniform stability of competitive neural networks with different time-scales under vanishing perturbations. Neurocomputing 2010, 73, 770–775. [Google Scholar] [CrossRef]
  8. Meyer-Bäse, A.; Botella, G.; Rybarska-Rusinek, L. Stochastic stability analysis of competitive neural networks with different time-scales. Neurocomputing 2013, 118, 115–118. [Google Scholar] [CrossRef]
  9. Meyer-Bäse, A.; Roberts, R.; Yu, H. Robust stability analysis of competitive neural networks with different time-scales under perturbations. Neurocomputing 2007, 71, 417–420. [Google Scholar] [CrossRef]
  10. Hu, C.; Jiang, H.; Teng, Z. Impulsive control and synchronization for delayed neural networks with reaction-diffusion terms. IEEE Trans. Neural. Netw. 2010, 21, 67–81. [Google Scholar]
  11. He, L.W.H.; Zeng, Z.; Hu, C. Global stabilization of fuzzy memristor-based reaction-diffusion neural networks. IEEE Trans. Cybern. 2020, 50, 4658–4669. [Google Scholar]
  12. Vidhya, C.; Dharani, S.; Balasubramaniam, P. Stability of impulsive stochastic reaction diffusion recurrent neural network. Neural Process Lett. 2020, 51, 1049–1060. [Google Scholar] [CrossRef]
  13. Cao, Y.; Cao, Y.; Guo, Z.; Huang, T.; Wen, S. Global exponential synchronization of delayed memristive neural networks with reaction-diffusion terms. Neural Netw. 2020, 123, 70–81. [Google Scholar] [CrossRef] [PubMed]
  14. Liu, X.; Yang, C.; Zhu, S. Inverse optimal synchronization control of competitive neural networks with constant time delays. Neural Comput. Appl. 2022, 34, 241–251. [Google Scholar] [CrossRef]
  15. Xu, Y.; Yu, J.; Li, W.; Feng, J. Global asymptotic stability of fractional-order competitive neural networks with multiple time-varying-delay links. Appl. Math. Comput. 2021, 389, 12548. [Google Scholar] [CrossRef]
  16. Zheng, C.; Hu, C.; Yu, J.; Jiang, H. Fixed-time synchronization of discontinuous competitive neural networks with time-varying delays. Neural Netw. 2022, 153, 192–203. [Google Scholar] [CrossRef] [PubMed]
  17. Moayeri, M.; Rad, J.; Parand, K. Dynamical behavior of reaction-diffusion neural networks and their synchronization arising in modeling epileptic seizure: A numerical simulation study. Comput. Math. Appl. 2020, 80, 1887–1927. [Google Scholar] [CrossRef]
  18. Zhang, X.; Han, Y.; Wu, L.; Wang, Y. State estimation for delayed genetic regulatory networks with reaction-diffusion terms. IEEE Trans. Neural Netw. Learn Syst. 2018, 29, 299–309. [Google Scholar] [CrossRef]
  19. Han, Y.; Zhang, X.; Wang, Y. Asymptotic stability criteria for genetic regulatory networks with time-varying delays and reaction-diffusion terms. Circuits Syst. Signal Process 2015, 34, 3161–3190. [Google Scholar] [CrossRef]
  20. Zou, C.; Wei, X.; Zhang, Q.; Zhou, C. Passivity of reaction-diffusion genetic regulatory networks with time-varying delays. Neural Process Lett. 2018, 47, 1115–1132. [Google Scholar] [CrossRef]
  21. Zhang, Y.; Liu, H.; Yan, F.; Zhou, J. Oscillatory behaviors in genetic regulatory networks mediated by microRNA with time delays and reaction-diffusion terms. IEEE Trans. Nanobiosci. 2017, 16, 166–176. [Google Scholar] [CrossRef] [PubMed]
  22. Dong, T.; Zhang, Q. Stability and oscillation analysis of a gene regulatory network with multiple time delays and diffusion rate. IEEE Trans. Nanobiosci. 2020, 19, 285–298. [Google Scholar] [CrossRef]
  23. Ali, M.; Palanisamy, L.; Alsaedi, A.; Gunasekaran, N.; Ahmad, B. Finite-time exponential synchronization of reaction-diffusion delayed complex-dynamical networks. Discret. Contin. Dyn. Syst. 2021, 14, 1465–1477. [Google Scholar]
  24. Wang, L.; Shen, Y. Design of controller on synchronization of memristor-based neural networks with time-varying delays. Neurocomputing 2015, 147, 372–379. [Google Scholar] [CrossRef]
  25. Zhang, G.; Shen, Y.; Yin, Q.; Sun, J. Global exponential periodicity and stability of a class of memristor-based recurrent neural networks with multiple delays. Inf. Sci. 2013, 232, 386–396. [Google Scholar] [CrossRef]
  26. Salau, A.O.; Jain, S. A Survey of the Types, Techniques, Applications. In Proceedings of the 5th IEEE International Conference on Signal Processing and Communication (ICSC), Noida, India, 7–9 March 2019; pp. 158–164. [Google Scholar]
  27. Zhou, J.; Bao, H. Fixed-time synchronization for competitive neural networks with Gaussian-wavelet-type activation functions and discrete delays. J. Appl. Math. Comput. 2020, 64, 103–118. [Google Scholar] [CrossRef]

Shao, S.; Du, B. Global Asymptotic Stability of Competitive Neural Networks with Reaction-Diffusion Terms and Mixed Delays. Symmetry 2022, 14, 2224. https://doi.org/10.3390/sym14112224