
Design and Practical Stability of a New Class of Impulsive Fractional-Like Neural Networks

1 Department of Mathematics, Technical University of Sofia, 8800 Sliven, Bulgaria
2 Department of Mathematics, University of Texas at San Antonio, San Antonio, TX 78249, USA
3 S.P. Timoshenko Institute of Mechanics, NAS of Ukraine, 03057 Kiev-57, Ukraine
4 Department of Machine Elements and Non-metallic Constructions, Technical University of Sofia, Sofia 1000, Bulgaria
* Author to whom correspondence should be addressed.
Current Address: Department of Mathematics, University of Texas at San Antonio, San Antonio, TX 78249, USA.
Entropy 2020, 22(3), 337; https://doi.org/10.3390/e22030337
Submission received: 24 February 2020 / Revised: 11 March 2020 / Accepted: 13 March 2020 / Published: 15 March 2020
(This article belongs to the Special Issue Dynamics in Complex Neural Networks)

Abstract

In this paper, a new class of impulsive neural networks with fractional-like derivatives is defined, and the practical stability properties of the solutions are investigated. The stability analysis exploits a new type of Lyapunov-like functions and their derivatives. Furthermore, the obtained results are applied to a bidirectional associative memory (BAM) neural network model with fractional-like derivatives. Some new results for the introduced neural network models with uncertain values of the parameters are also obtained.

1. Introduction

Cellular neural network systems [1,2] and their various generalizations have attracted the attention of researchers due to their wide range of applications in areas such as pattern recognition, associative memory, classification and parallel computation, as well as their ability to solve complex optimization problems. As such, neural network models have promising potential for numerous engineering tasks [3,4], including engineering design tasks [5]. It has also been observed that efficient neural network training is related to entropy phenomena [6]. In addition, in many cases, entropy is used to measure the complexity of a neural network architecture [6,7].
On the other hand, since impulsive phenomena may affect the neural network behavior, some important and interesting results about different classes of impulsive neural networks have been obtained. See, for example [8,9,10,11,12,13,14,15], and the references therein.
Recently, fractional calculus has become an emerging tool in numerous fields of science and technology. The concept of fractional derivatives generalizes the classical definitions of integer-order derivatives and integrals [16,17]. Due to their hereditary and memory characteristics, fractional derivatives allow many real-world processes and phenomena to be described better by fractional-order models, for example in the system identification of the thermal dynamics of buildings and in studies of entropy and information [18,19,20,21]. In addition, the dynamics, chaotic behavior, stability and synchronization of numerous fractional-order neural network models have been investigated in the recent literature [22,23,24,25], including the behavior of fractional impulsive neural networks [26,27,28,29,30]. Besides the widely used Riemann–Liouville and Caputo fractional derivatives, many new types of fractional derivatives have been introduced by researchers. See, for example, [31,32,33,34,35] and the references therein.
Despite the great opportunities for applications in the modeling of real-world processes, the use of all these derivatives leads to computational complexities. For example, the Riemann–Liouville and Caputo derivatives do not obey the Leibniz rule and the chain rule. To overcome these difficulties, new concepts have been proposed in [36,37,38,39,40]. Furthermore, important notes about some of the new concepts are given, for example, in [41,42,43,44,45] and the references therein.
In our earlier paper [46], we introduced the notion of a "fractional-like derivative" (FLD), which offers some computational simplifications related to FLDs of compositions of functions. Since then, interest in the theory of equations with FLDs has grown. Some basic results on the fundamental and qualitative theory of such equations have been established very recently. See, for example, [47,48,49,50] and some of the references therein.
However, the theory of impulsive equations with FLDs is still at a very early stage. The first results on impulsive equations with conformable derivatives and FLDs were derived in [51,52,53], where some generalizations of FLDs and integrals were introduced. Due to the computational convenience offered by generalized FLDs, the theory of such equations deserves further development. Furthermore, the theory of impulsive systems with generalized FLDs has not yet been applied to real-world models of diverse interest.
The first aim of the present research is to introduce a design of impulsive fractional-like neural network models. The second contribution of our paper is to present efficient stability conditions for the model under consideration. To this end, we investigate its practical stability behavior with respect to manifolds.
It is well known that the stability properties of a neural network are essential for its performance. Furthermore, in numerous cases a model can be unstable in the classical Lyapunov sense, yet its performance may be sufficient from a practical point of view. For such situations, in which the system dynamics remain within particular bounds during a fixed time interval, the notion of practical stability was introduced [54,55,56,57]. Due to the great opportunities for applications, the notion has also been considered for fractional-order systems [58,59]. For impulsive systems with FLDs, the concept has been investigated only in the paper [52]. However, to the best of the authors' knowledge, practical stability results have not been derived for impulsive fractional-like neural network systems.
In addition, we will consider the practical stability properties of the designed neural network model with FLDs with respect to manifolds [60,61,62]. Thus, our results are more general than stability (practical stability) results for single solutions: zero solutions, equilibria, periodic solutions, etc. The case when the behavior of the neural network is affected by some uncertain parameters will also be discussed. Indeed, accounting for parameters with uncertain values is very important for the qualitative properties of the model [63,64,65].
The rest of the paper is organized as follows. In Section 2, some main definitions and lemmas on generalized FLDs and integrals are presented. We propose a design of an impulsive neural network model with generalized FLDs in Section 3, where some preliminaries are also given. In Section 4, we apply the definition of FLDs of piecewise continuous Lyapunov-type functions elaborated in [52] to derive sufficient conditions for practical stability with respect to manifolds defined by functions. The obtained results are also applied to an impulsive Hopfield fractional-like BAM neural network, and two examples are presented. Section 5 is devoted to practical stability results for impulsive neural networks with FLDs and uncertain parameters. Finally, the paper concludes in Section 6.

2. Generalized FLDs and Integrals

In this Section, we will state some main definitions and lemmas following [51,52,53].
Let $\mathbb{R}_+ = [0, \infty)$, let $\mathbb{R}^n$ be the $n$-dimensional Euclidean space, and let $\Omega \subset \mathbb{R}^n$ be a bounded domain that contains the origin.
For given $\tilde t \in \mathbb{R}_+$ and $0 < q \le 1$, we will consider a generalized $q$th-order fractional-like derivative $D_{\tilde t}^q x(t)$ for a function $x : [\tilde t, \infty) \to \mathbb{R}^n$ defined as [52]
$$D_{\tilde t}^q x(t) = \lim_{\theta \to 0} \frac{x\big(t + \theta (t - \tilde t)^{1-q}\big) - x(t)}{\theta}, \quad t > \tilde t.$$
Let $t_0 \in \mathbb{R}_+$, $t_0 < t_1 < t_2 < \dots < t_k < t_{k+1} < \dots$ and $\lim_{k \to \infty} t_k = \infty$. According to [51,52,53]:
  • if $\tilde t = t_0$, then $D_{\tilde t}^q(x(t))$ has the form
    $$D_{t_0}^q x(t) = \lim_{\theta \to 0} \frac{x\big(t + \theta (t - t_0)^{1-q}\big) - x(t)}{\theta},$$
    which has been applied in systems without impulsive perturbations [36,37,38,39,46,47,48,49,50];
  • if $\tilde t = 0$, then $D_{\tilde t}^q(x(t))$ has the form
    $$D_0^q x(t) = \lim_{\theta \to 0} \frac{x\big(t + \theta t^{1-q}\big) - x(t)}{\theta};$$
  • if $\tilde t = t_k$ for some $k = 1, 2, \dots$, then $D_{\tilde t}^q(x(\tilde t))$ has the form
    $$D_{t_k}^q x(t_k) = \lim_{t \to t_k^+} D_{t_k}^q x(t).$$
If the generalized fractional-like derivative $D_{\tilde t}^q x(t)$ of order $q$ of a continuous function $x(t)$ exists at any point of an open interval of the type $(\tilde t, b)$ for some $b > \tilde t$, $\tilde t \in \mathbb{R}_+$, then we will say that the function $x(t)$ is $q$-differentiable on $(\tilde t, b)$. The class of all functions that are $q$-differentiable on $(\tilde t, b)$ will be denoted by $C^q((\tilde t, b), \mathbb{R}^n)$.
Analogously to the above, the generalized fractional-like integral of order $0 < q \le 1$ with a lower limit $\tilde t$, $\tilde t \ge 0$, of a function $x : [\tilde t, \infty) \to \mathbb{R}^n$ is defined by (see [52])
$$I_{\tilde t}^q x(t) = \int_{\tilde t}^{t} (s - \tilde t)^{q-1}\, x(s)\, ds.$$
Throughout this paper, we will use the following properties of the generalized FLDs $D_{\tilde t}^q x(t)$, $t > \tilde t$, for some $\tilde t \in \mathbb{R}_+$ [52].
Lemma 1.
Let $l(y(t)) : (\tilde t, \infty) \to \mathbb{R}$. If $l(\cdot)$ is differentiable with respect to $y(t)$ and $y(t)$ is $q$-differentiable on $(\tilde t, \infty)$, where $0 < q \le 1$, then for any $t \in \mathbb{R}_+$, $t \ge \tilde t$ and $y(t) \ne 0$,
$$D_{\tilde t}^q l(y(t)) = l'(y(t))\, D_{\tilde t}^q y(t),$$
where $l'$ denotes the derivative of $l(\cdot)$ with respect to $y$.
Lemma 2.
Let the function $x(t) : (\tilde t, \infty) \to \mathbb{R}$ be $q$-differentiable for $0 < q \le 1$. Then for all $t > \tilde t$,
$$I_{\tilde t}^q\big(D_{\tilde t}^q x(t)\big) = x(t) - x(\tilde t).$$
Remark 1.
For more results on FLDs and integrals we refer the reader to [36,37,38,39,46,47,48,49,50], and for results on the generalized FLDs and integrals see [51,52,53].
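The definitions and lemmas above are easy to check numerically. The following short sketch is our illustration only (the test functions, the evaluation point and the finite-difference step are arbitrary choices); it approximates $D_{\tilde t}^q$ directly from the limit definition and verifies Lemma 1 (the chain rule) and Lemma 2 (the inversion property) for a smooth test function.

```python
# A minimal numerical sketch (not code from the paper). It uses the fact that, for a
# differentiable x, the limit definition gives D_{t~}^q x(t) = (t - t~)^{1-q} x'(t).
import numpy as np
from scipy.integrate import quad

q, t_tilde = 0.5, 1.0

def fld(x, t, h=1e-6):
    """Finite-difference approximation of D_{t~}^q x(t) from the limit definition."""
    return (x(t + h * (t - t_tilde) ** (1 - q)) - x(t)) / h

def fli(f, t):
    """Generalized fractional-like integral I_{t~}^q f(t) = int_{t~}^t (s - t~)^{q-1} f(s) ds."""
    return quad(lambda s: (s - t_tilde) ** (q - 1) * f(s), t_tilde, t)[0]

x = np.sin                      # smooth test function
l = lambda y: y ** 2            # outer function for the chain rule (Lemma 1), l'(y) = 2y
t = 2.0

# Lemma 1: D^q l(x(t)) = l'(x(t)) * D^q x(t)
print(fld(lambda s: l(x(s)), t), 2 * x(t) * fld(x, t))   # approximately equal

# Lemma 2: I^q (D^q x)(t) = x(t) - x(t~)
print(fli(lambda s: fld(x, s), t), x(t) - x(t_tilde))    # approximately equal
```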

3. Impulsive Fractional-Like Neural Networks: Main Notions and Definitions

In this paper, we consider the following system of impulsive Hopfield fractional-like neural networks:
$$D_{t_k}^q x_i(t) = -\frac{1}{C_i(t) R_i(t)}\, x_i(t) + \sum_{j=1}^{n} \alpha_{ij}(t)\, f_j(x_j(t)) + \gamma_i(t), \quad t \ne t_k,\ k = 0, 1, \dots,$$
$$\Delta x_i(t_k) = x_i(t_k^+) - x_i(t_k^-) = I_{ik}(x_i(t_k)), \quad k = 1, 2, \dots, \qquad (1)$$
where $x(t) = \mathrm{col}(x_1(t), x_2(t), \dots, x_n(t))$, $x \in \Omega$, $C_i, R_i \in C^q(\mathbb{R}_+, (0, \infty))$, $\alpha_{ij}, \gamma_i \in C^q(\mathbb{R}_+, \mathbb{R})$, $f_j \in C^q(\mathbb{R}, \mathbb{R})$, $x_i(t_k^+) = \lim_{h \to 0^+} x_i(t_k + h)$, $x_i(t_k^-) = x_i(t_k)$, $I_{ik} \in C(\mathbb{R}, \mathbb{R})$, $i, j = 1, 2, \dots, n$, $n \ge 2$, $k = 1, 2, \dots$.
In the above impulsive fractional-like neural network model, $x_i(t)$ represents the state of the $i$th node at time $t$, $n$ corresponds to the number of neurons in the neural network, the positive functions $C_i$, $R_i$ are, respectively, the capacitance and the resistance of the $i$th node at time $t$, $\alpha_{ij}$ are the connection weights, $f_j$ denotes the activation function that determines the output $f_j(x_j(t))$ of the $j$th unit at time $t$, $\gamma_i$ denotes the external bias of the $i$th node at time $t$, and $t_k$, $k = 1, 2, \dots$, are the moments of impulsive perturbation, which satisfy $t_0 < t_1 < t_2 < \dots < t_k < t_{k+1} < \dots$ and $\lim_{k \to \infty} t_k = \infty$. The numbers $x_i(t_k)$ and $x_i(t_k^+)$ are, respectively, the states of the $i$th node before and after an impulsive perturbation at the moment $t_k$, and the functions $I_{ik}$ represent the magnitudes of the impulsive changes of the states $x_i(t)$ at the impulsive moments $t_k$.
Remark 2.
The designed impulsive fractional-like neural network model generalizes many existing integer-order neural networks [1,2,3,4,8,9,10,11,12,13,14,15]. The main advantages of the proposed model are in (i) incorporating of the hereditary and memory characteristics of fractional derivatives [26,27,28,29,30]; (ii) using the computational simplicity of the generalized FLDs and integrals; (iii) taking into account the effects of some impulsive perturbations that can be used as controls of the neural network’s performance.
Let $x_0 = \mathrm{col}(x_{01}, x_{02}, \dots, x_{0n}) \in \Omega$. We will denote by $x(t) = x(t; t_0, x_0)$ the solution of the fractional-like impulsive neural network system (1) that satisfies the initial condition
$$x_i(t_0^+) = x_{0i}, \quad i = 1, 2, \dots, n. \qquad (2)$$
Following the theory of impulsive fractional-order neural network systems [27,30] and the new theory of impulsive fractional-like systems [51,52,53], the solutions $x(t)$ of the neural network model (1) are piecewise continuous functions that have points of discontinuity of the first kind at $t_k$ and are left continuous at these moments. For such functions, the following identities are satisfied:
$$x_i(t_k^-) = x_i(t_k), \quad x_i(t_k^+) = x_i(t_k) + I_{ik}(x_i(t_k)), \quad i = 1, 2, \dots, n,\ k = 1, 2, \dots.$$
All such piecewise continuous functions form the space $PC^q(\mathbb{R}_+, \mathbb{R}^n)$.
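Before proceeding, a short numerical sketch may help the reader see how solutions of (1) evolve between impulses and jump at the moments $t_k$. The sketch below is our illustration only: it relies on the substitution $\tau = (t - t_k)^q/q$, under which $D_{t_k}^q$ acts as the ordinary derivative $d/d\tau$ for differentiable solutions, and all numerical values (weights, impulse gains, step size) are hypothetical placeholders rather than parameters from the paper.

```python
# Euler-type simulation sketch for the impulsive fractional-like network (1).
# Assumption: between impulses, with tau = (t - t_k)^q / q, we have dx/dtau = F(t, x),
# since D_{t_k}^q x(t) = (t - t_k)^{1-q} x'(t) for differentiable x.
import numpy as np

q = 0.5
C = np.array([1.0, 1.0]); R = np.array([1.0, 1.0])       # hypothetical capacitances/resistances
alpha = np.array([[0.2, -0.1], [0.1, 0.3]])               # hypothetical connection weights
gamma = np.array([0.0, 0.0])                               # external biases
gamma_imp = np.array([0.5, 0.5])                           # impulse gains: Delta x_i = -gamma_ik x_i
f = np.tanh                                                # Lipschitz activation with f(0) = 0

def F(t, x):
    """Right-hand side of (1) between impulses."""
    return -x / (C * R) + alpha @ f(x) + gamma

def simulate(x0, t0=0.0, T=4.0, t_imp=(1.0, 2.0, 3.0), h_tau=1e-3):
    x = np.asarray(x0, float)
    trajectory = [(t0, x.copy())]
    knots = [t0, *[tk for tk in t_imp if t0 < tk < T], T]
    for t_k, t_next in zip(knots[:-1], knots[1:]):
        if t_k != t0:                                      # impulsive jump at the moment t_k
            x = x - gamma_imp * x
        tau, tau_end = 0.0, (t_next - t_k) ** q / q        # length of the reparameterized interval
        while tau < tau_end:
            t = t_k + (q * tau) ** (1.0 / q)
            x = x + h_tau * F(t, x)                        # Euler step: dx/dtau = F(t, x)
            tau += h_tau
            trajectory.append((t_k + (q * tau) ** (1.0 / q), x.copy()))
    return trajectory

print(simulate([0.8, -0.5])[-1])                           # state near the final time T
```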
Let $h : [t_0, \infty) \times \Omega \to \mathbb{R}$ be a continuous function. The following sets will be called $h$-manifolds defined by the function $h$:
$$M_t = \{ x \in \Omega : h(t, x) = 0,\ t \in [t_0, \infty) \},$$
$$M_t(\varepsilon) = \{ x \in \Omega : |h(t, x)| < \varepsilon,\ t \in [t_0, \infty) \}, \quad \varepsilon > 0.$$
To guarantee that the solution $x(t; t_0, x_0)$ of the initial value problem (IVP) (1)–(2) exists on $[t_0, \infty)$, and for the subsequent investigations, we will need the following assumptions.
A1. The function $h$ is continuous on $[t_0, \infty) \times \Omega$ and the sets $M_t$, $M_t(\varepsilon)$ are $(n-1)$-dimensional manifolds in $\mathbb{R}^n$.
A2. Each solution $x(t; t_0, x_0)$ of the IVP (1)–(2) satisfying
$$|h(t, x(t; t_0, x_0))| \le H < \infty$$
is defined on the interval $[t_0, \infty)$, $H = \mathrm{const} > 0$.
A3. There exist constants $L_j > 0$ such that
$$|f_j(u) - f_j(v)| \le L_j |u - v|, \quad f_j(0) = 0$$
for all $u, v \in \mathbb{R}$, $j = 1, 2, \dots, n$.
In this paper we will use the following definition for practical exponential stability of the neural network system (1) with respect to manifolds defined by the function h given in [51].
Definition 1.
The fractional-like impulsive system (1) is:
(a) $(\lambda, A)$-practically exponentially stable with respect to the function $h$, if given $(\lambda, A)$ with $0 < \lambda < A$, for any $x_0 \in M_{t_0^+}(\lambda)$ we have
$$x(t; t_0, x_0) \in M_t\big(A + \mu\, |h(t_0^+, x_0)|\, E_q(-\kappa, t - t_0)\big), \quad t \ge t_0 \ \text{for some } t_0 \in \mathbb{R}_+,$$
where $0 < q < 1$, $\mu, \kappa > 0$;
(b) $(\lambda, A)$-globally practically exponentially stable with respect to the function $h$, if (a) holds for $\Omega = \mathbb{R}^n$.
Remark 3.
The problems of exponential stability of integer-order neural networks have been investigated by numerous authors [3,4,8,11,12,13,14]. Indeed, the concept of exponential stability is one of the most important qualitative concepts for such models because it guarantees a fast convergence rate [13]. The notion of exponential stability has been generalized in [66] to that of Mittag–Leffler stability for fractional-order systems. For Mittag–Leffler stability results for fractional neural networks see, for example, [27,28,30] and the bibliography therein. With the present research, we complement the existing results and present results on $(\lambda, A)$-practical exponential stability for impulsive fractional-like neural network systems.

4. Practical Stability of Impulsive Fractional-Like Neural Networks

4.1. Main Practical Stability Results

In this Section, we will state our main practical exponential stability results. Since we consider impulsive effects in the designed neural network model, we will use the following sets
$$G_k = (t_{k-1}, t_k) \times \Omega, \quad k = 1, 2, \dots, \qquad G = \bigcup_{k=1}^{\infty} G_k,$$
and piecewise continuous auxiliary functions [8,9,10,11,12,13,14,15,26,27,28,29,30,52].
What follows is the definition of the class $V_{t_k}^q$ of Lyapunov-like functions defined in [52] for any $t_k \in \mathbb{R}_+$, $k = 0, 1, 2, \dots$.
Definition 2.
The function $V \in V_{t_k}^q$ if:
  • $V$ is defined on $G$, $V$ has non-negative values and $V(t, 0) = 0$ for $t \ge t_k$;
  • $V$ is continuous in $G$, $q$-differentiable in $t$, and locally Lipschitz continuous with respect to its second argument on each of the sets $G_k$;
  • for each $k = 0, 1, 2, \dots$ and $x \in \Omega$, there exist the finite limits
    $$V(t_k^-, x) = \lim_{t \to t_k,\ t < t_k} V(t, x), \qquad V(t_k^+, x) = \lim_{t \to t_k,\ t > t_k} V(t, x),$$
    and $V(t_k^-, x) = V(t_k, x)$.
For a function $V \in V_{t_k}^q$ and $t > t_k$, we define its upper right fractional-like derivative as [52]:
$$^+D_{t_k}^q V(t, x) = \limsup_{\theta \to 0^+} \frac{V\big(t + \theta (t - t_k)^{1-q},\ x(t + \theta (t - t_k)^{1-q}; t, x)\big) - V(t, x)}{\theta}.$$
For simplicity, denote $F(t, x) = (F_1(t, x), F_2(t, x), \dots, F_n(t, x))$, where
$$F_i(t, x) = -\frac{1}{C_i(t) R_i(t)}\, x_i(t) + \sum_{j=1}^{n} \alpha_{ij}(t)\, f_j(x_j(t)) + \gamma_i(t), \quad i = 1, 2, \dots, n.$$
Then [46,52] the fractional-like derivative of the function $V(t, x)$ with respect to the solution $x(t)$ of the IVP (1)–(2) is defined by
$$^+D_{t_k}^q V(t, x) = \limsup_{\theta \to 0^+} \frac{V\big(t + \theta (t - t_k)^{1-q},\ x + \theta (t - t_k)^{1-q} F(t, x)\big) - V(t, x)}{\theta}.$$
If $V(t, x(t)) = V(x(t))$, $0 < q \le 1$, $V$ is differentiable in $x$, and $x(t)$ is $q$-differentiable in $t$ for $t > t_k$, then
$$^+D_{t_k}^q V(t, x) = V'(x(t))\, D_{t_k}^q x(t),$$
where $V'$ denotes the derivative of the function $V$ with respect to $x$.
From the last two relations it follows that
$$^+D_{t_k}^q V(t, x(t; t_0, x_0)) = {}^+D_{t_k}^q V(t, x)\big|_{(1)}, \quad t > t_k,\ k = 0, 1, 2, \dots.$$
We will also need the following result from [52].
Lemma 3.
Assume that the function $V \in V_{t_k}^q$ is such that for $t \in [t_0, \infty)$, $x \in \Omega$,
$$V(t_k^+, x) \le V(t_k, x), \quad k = 1, 2, \dots,$$
$$^+D_{t_k}^q V(t, x) \le -\kappa V(t, x) + g(t), \quad t \ne t_k,\ k = 0, 1, 2, \dots,$$
where $\kappa = \mathrm{const} > 0$ and $g \in C^q(\mathbb{R}_+, \mathbb{R}_+)$.
Then
$$V(t, x(t)) \le V(t_0^+, x_0)\, E_q(-\kappa, t - t_0) + \int_{t_k}^{t} W_q(t - t_k, s - t_k)\, \frac{g(s)}{(s - t_k)^{1-q}}\, ds$$
$$+ \sum_{j=1}^{k} \prod_{l=k-j+1}^{k} E_q(-\kappa, t_l - t_{l-1}) \int_{t_{k-j}}^{t_{k-j+1}} W_q(t - t_k, s - t_{k-j})\, \frac{g(s)}{(s - t_{k-j})^{1-q}}\, ds, \quad t \ge t_0,$$
where $W_q(t - t_k, s - t_k) = \dfrac{E_q(-\kappa, t - t_k)}{E_q(-\kappa, s - t_k)}$ and $E_q(\nu, s)$ is the fractional-like exponential function defined as [37,39]
$$E_q(\nu, s) = \exp\!\left(\nu\, \frac{s^q}{q}\right), \quad \nu \in \mathbb{R},\ s \in \mathbb{R}_+.$$
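For later computations, the functions $E_q$ and $W_q$ can be evaluated directly; the following helper functions are a small sketch of ours (names and signatures are not from the paper):

```python
# Helpers for the fractional-like exponential E_q and the kernel W_q used in Lemma 3.
import numpy as np

def E_q(nu, s, q):
    """Fractional-like exponential E_q(nu, s) = exp(nu * s**q / q)."""
    return np.exp(nu * s ** q / q)

def W_q(kappa, t_minus_tk, s_minus_tk, q):
    """Kernel W_q(t - t_k, s - t_k) = E_q(-kappa, t - t_k) / E_q(-kappa, s - t_k)."""
    return E_q(-kappa, t_minus_tk, q) / E_q(-kappa, s_minus_tk, q)

# With kappa > 0, E_q(-kappa, t - t0) = exp(-kappa (t - t0)^q / q) decays to zero,
# which is the rate appearing in Definition 1 and in the estimate of Lemma 3.
print(E_q(-0.6, 5.0, 0.5), W_q(0.6, 5.0, 2.0, 0.5))
```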
In what follows, for a bounded continuous function $f$ defined on $\mathbb{R}_+$, we set
$$\bar f = \sup_{t \in \mathbb{R}_+} f(t), \qquad \underline{f} = \inf_{t \in \mathbb{R}_+} f(t).$$
Theorem 1.
Assume that 0 < λ < A are given, and:
1. Assumptions A1–A3 hold.
2. The model's parameters $C_i$, $R_i$ and $\alpha_{ij}$, $i, j = 1, 2, \dots, n$, satisfy
$$\min_{1 \le i \le n} \frac{1}{\bar C_i \bar R_i} > \max_{1 \le i \le n}\left( L_i \sum_{j=1}^{n} |\bar\alpha_{ji}| \right)$$
and $\kappa^* > 0$ is such that
$$\min_{1 \le i \le n} \frac{1}{\bar C_i \bar R_i} - \max_{1 \le i \le n}\left( L_i \sum_{j=1}^{n} |\bar\alpha_{ji}| \right) \ge \kappa^* > 0.$$
3. For $t \ge t_0$ the system's parameters $\gamma_i$, $i = 1, 2, \dots, n$, satisfy
$$g(t) = \int_{t_0}^{t} W_q(t - t_k, s - t_k)\, \frac{\sum_{i=1}^{n} |\gamma_i(s)|}{(s - t_0)^{1-q}}\, ds + \sum_{j=1}^{k} \prod_{l=k-j+1}^{k} E_q(-\kappa^*, t_l - t_{l-1}) \int_{t_{k-j}}^{t_{k-j+1}} W_q(t - t_k, s - t_{k-j})\, \frac{\sum_{i=1}^{n} |\gamma_i(s)|}{(s - t_{k-j})^{1-q}}\, ds < \infty.$$
4. The functions $I_k = \mathrm{diag}(I_{1k}, I_{2k}, \dots, I_{nk})$ are such that
$$I_{ik}(x_i(t_k)) = -\gamma_{ik}\, x_i(t_k), \quad 0 < \gamma_{ik} < 2, \quad i = 1, 2, \dots, n,\ k = 1, 2, \dots,$$
and $x \in \Omega$ implies $x + I_k(x) \in \Omega$ for $k = 1, 2, \dots$.
5. The function $h(t, x)$ satisfies
$$|h(t, x)| < \sum_{i=1}^{n} |x_i(t)| \le \Lambda(H)\, |h(t, x)|, \quad t \in [t_0, \infty),$$
where $\Lambda(H) \ge 1$ exists for any $0 < H < \infty$.
Then the neural network system (1) is $(\lambda, A)$-practically exponentially stable with respect to the function $h$.
Proof. 
Let
$$x(t) = (x_1(t), x_2(t), \dots, x_n(t))^T$$
be a solution of (1) for $x_0 \in \Omega$.
Consider the Lyapunov-like function
$$V(x(t)) = \sum_{i=1}^{n} |x_i(t)|.$$
We can easily check that $V \in V_{t_k}^q$. For $t_k > t_0 \ge 0$, $k = 1, 2, \dots$, from condition 4 of Theorem 1 we have that $x(t_k) \in \Omega$ implies $x(t_k^+) \in \Omega$ for $k = 1, 2, \dots$, and
$$V(x(t_k^+)) = \sum_{i=1}^{n} |x_i(t_k^+)| = \sum_{i=1}^{n} |(1 - \gamma_{ik})\, x_i(t_k)| \le V(x(t_k)).$$
From A3, for $t \in (t_k, t_{k+1}]$, $k = 0, 1, 2, \dots$, we get
$$^+D_{t_k}^q V(x(t)) \le -\sum_{i=1}^{n} \frac{1}{C_i(t) R_i(t)}\, |x_i(t)| + \sum_{i=1}^{n} \sum_{j=1}^{n} |\alpha_{ij}(t)|\, |f_j(x_j(t))| + \sum_{i=1}^{n} |\gamma_i(t)|$$
$$\le -\sum_{i=1}^{n} \frac{1}{\bar C_i \bar R_i}\, |x_i(t)| + \sum_{i=1}^{n} \sum_{j=1}^{n} |\bar\alpha_{ij}|\, L_j\, |x_j(t)| + \sum_{i=1}^{n} |\gamma_i(t)|$$
$$\le -\min_{1 \le i \le n} \frac{1}{\bar C_i \bar R_i} \sum_{i=1}^{n} |x_i(t)| + \max_{1 \le i \le n}\left( L_i \sum_{j=1}^{n} |\bar\alpha_{ji}| \right) \sum_{i=1}^{n} |x_i(t)| + \sum_{i=1}^{n} |\gamma_i(t)|$$
$$= -(\kappa_1 - \kappa_2)\, V(x(t)) + \sum_{i=1}^{n} |\gamma_i(t)|,$$
where
$$\kappa_1 = \min_{1 \le i \le n} \frac{1}{\bar C_i \bar R_i}, \qquad \kappa_2 = \max_{1 \le i \le n}\left( L_i \sum_{j=1}^{n} |\bar\alpha_{ji}| \right).$$
From condition 2 of Theorem 1, it follows that there exists a real number $\kappa^* > 0$ such that
$$\kappa_1 - \kappa_2 \ge \kappa^*,$$
and for $t \in (t_k, t_{k+1}]$, $k = 0, 1, 2, \dots$, along (1) we obtain
$$^+D_{t_k}^q V(x(t)) \le -\kappa^* V(x(t)) + \sum_{i=1}^{n} |\gamma_i(t)|.$$
From the last inequality, the jump inequality above and Lemma 3, we get
$$V(x(t)) \le V(x(t_0^+))\, E_q(-\kappa^*, t - t_0) + \int_{t_0}^{t} W_q(t - t_k, s - t_k)\, \frac{\sum_{i=1}^{n} |\gamma_i(s)|}{(s - t_0)^{1-q}}\, ds + \sum_{j=1}^{k} \prod_{l=k-j+1}^{k} E_q(-\kappa^*, t_l - t_{l-1}) \int_{t_{k-j}}^{t_{k-j+1}} W_q(t - t_k, s - t_{k-j})\, \frac{\sum_{i=1}^{n} |\gamma_i(s)|}{(s - t_{k-j})^{1-q}}\, ds.$$
Let $x_0 \in M_{t_0^+}(\lambda)$, i.e., $|h(t_0^+, x_0)| < \lambda$. Then from condition 3 of Theorem 1 it follows that we can choose $A$ so that $g(t) < A$.
From the last estimate and condition 5 of Theorem 1 we obtain
$$|h(t, x(t; t_0, x_0))| < V(x(t; t_0, x_0)) \le A + \Lambda(H)\, |h(t_0^+, x_0)|\, E_q(-\kappa^*, t - t_0), \quad t \ge t_0.$$
Therefore,
$$x(t; t_0, x_0) \in M_t\big(A + \Lambda(H)\, |h(t_0^+, x_0)|\, E_q(-\kappa^*, t - t_0)\big)$$
for $t \ge t_0$, i.e., the system (1) is $(\lambda, A)$-practically exponentially stable with respect to the function $h$. □
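Condition 2 of Theorem 1 can be checked numerically once the suprema $\bar C_i$, $\bar R_i$, $|\bar\alpha_{ji}|$ and the Lipschitz constants $L_i$ are available. The following helper is a hypothetical sketch (the function name and interface are ours); it returns $\kappa_1$, $\kappa_2$ and the margin $\kappa_1 - \kappa_2$, and the condition holds whenever the margin is positive.

```python
# Numerical check of condition 2 of Theorem 1 (a sketch; not code from the paper).
import numpy as np

def theorem1_condition2(C_bar, R_bar, alpha_bar, L):
    """Return (kappa1, kappa2, margin); condition 2 holds iff margin > 0,
    and any 0 < kappa* <= margin is then admissible."""
    C_bar, R_bar, L = (np.asarray(v, float) for v in (C_bar, R_bar, L))
    A = np.abs(np.asarray(alpha_bar, float))
    kappa1 = np.min(1.0 / (C_bar * R_bar))   # min_i 1 / (C_bar_i * R_bar_i)
    kappa2 = np.max(L * A.sum(axis=0))       # max_i L_i * sum_j |alpha_bar_ji|
    return kappa1, kappa2, kappa1 - kappa2

# Hypothetical usage with made-up bounds:
print(theorem1_condition2([0.5, 0.5], [1.0, 1.0], [[0.4, 0.3], [0.2, 0.5]], [1.0, 1.0]))
```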
Remark 4.
If the assumptions of Theorem 1 hold globally on $\mathbb{R}^n$, i.e., if $\Omega = \mathbb{R}^n$, then the system (1) is $(\lambda, A)$-globally practically exponentially stable with respect to the function $h$. Note that in this case the condition "$x \in \Omega$ implies $x + I_k(x) \in \Omega$ for $k = 1, 2, \dots$" holds trivially.
Remark 5.
Theorem 1 offers sufficient conditions for practical exponential stability (global practical exponential stability) with respect to a function $h$ for the designed fractional-like impulsive neural network model. Exponential stability results for single solutions of the model (1) (equilibrium, zero solution, periodic solution) can be obtained as corollaries for particular choices of the function $h$. For example, in the case when $h(t, x) = \|x - x^*\|$, where $x^*$ is a single solution of (1) and $\|\cdot\|$ is the norm in $\mathbb{R}^n$, our results extend and improve the existing exponential stability results for integer-order neural networks [3,4,8,11,12,13,14].
Remark 6.
Our results also complement the existing Mittag–Leffler stability results for fractional neural networks [27,28,30]. The key features of FLDs yield criteria that are less complicated from a computational point of view. Thus, the new results are more appropriate for the numerous applications of neural network models with derivatives of non-integer order.
The new exponential stability results proved in Theorem 1 can be useful for various classes of fractional-like neural network models. Next, we will apply the obtained criteria to study the practical stability properties of the following system of impulsive Hopfield fractional-like bidirectional associative memory (BAM) neural networks:
$$D_{t_k}^q y_i(t) = -\frac{1}{C_i^y R_i^y}\, y_i(t) + \sum_{j=1}^{n_1} w_{ji}\, f_j^z(z_j(t)) + \gamma_i^y(t),$$
$$D_{t_k}^q z_j(t) = -\frac{1}{C_j^z R_j^z}\, z_j(t) + \sum_{i=1}^{n_2} h_{ij}\, g_i^y(y_i(t)) + \gamma_j^z(t), \quad t \ne t_k,\ k = 0, 1, 2, \dots,$$
$$\Delta y_i(t_k) = Q_{ik}\, y_i(t_k), \quad \Delta z_j(t_k) = T_{jk}\, z_j(t_k), \quad k = 1, 2, \dots, \qquad (8)$$
where $t_0 \in \mathbb{R}_+$, $t_0 < t_1 < t_2 < \dots$, $j = 1, 2, \dots, n_1$, $i = 1, 2, \dots, n_2$, $n = n_1 + n_2$, $y_i(t)$ and $z_j(t)$ correspond to the states of the $i$th unit and the $j$th unit, respectively, at time $t$, $C_i^y$, $R_i^y$, $C_j^z$, $R_j^z$ are positive constants, the real constants $w_{ji}$, $h_{ij}$ are the connection weights, $f_j^z, g_i^y \in C^q[\mathbb{R}, \mathbb{R}]$ are the activation functions, $\gamma_i^y, \gamma_j^z \in C^q[\mathbb{R}_+, \mathbb{R}]$ denote the external inputs at time $t$, and the constants $Q_{ik}$, $T_{jk}$ determine the abrupt changes of the states at the impulsive moments $t_k$.
Note that different types of BAM neural networks of integer order have been intensively investigated due to the great opportunities for their application in many fields such as pattern recognition and automatic control [11,12]. Results on fractional BAM neural network models with Caputo fractional derivatives have been also published in the recent literature. See, for example [27] and the references therein. In this Section, we will extend the existing results to the fractional-like case.
Let $t_0 \in \mathbb{R}_+$ and $y_0 \in \mathbb{R}^{n_2}$, $z_0 \in \mathbb{R}^{n_1}$. Denote by
$$(y(t), z(t))^T = (y_1(t), \dots, y_{n_2}(t), z_1(t), \dots, z_{n_1}(t))^T \in \mathbb{R}^n$$
the solution of system (8) satisfying the initial conditions
$$y(t_0^+; t_0, y_0) = y_0, \qquad z(t_0^+; t_0, z_0) = z_0.$$
We introduce the following conditions:
A4. There exist constants $L_j^z > 0$ and $M_i^y > 0$ such that
$$|f_j^z(u) - f_j^z(v)| \le L_j^z |u - v|, \quad f_j^z(0) = 0, \qquad |g_i^y(u) - g_i^y(v)| \le M_i^y |u - v|, \quad g_i^y(0) = 0$$
for all $u, v \in \mathbb{R}$, $j = 1, 2, \dots, n_1$, $i = 1, 2, \dots, n_2$.
A5. The constants $Q_{ik}$ and $T_{jk}$ are such that
$$-2 < Q_{ik} < 0, \qquad -2 < T_{jk} < 0$$
for $j = 1, 2, \dots, n_1$, $i = 1, 2, \dots, n_2$, $k = 1, 2, \dots$.
The next result follows directly from Theorem 1.
Theorem 2.
Assume that 0 < λ < A are given, and:
1. Assumptions A1, A2, A4, A5 hold.
2. For $j = 1, 2, \dots, n_1$, $i = 1, 2, \dots, n_2$, the following inequalities hold:
$$\max_{1 \le i \le n_2}\left( M_i^y \sum_{j=1}^{n_1} |h_{ij}| \right) < \min_{1 \le i \le n_2} \frac{1}{C_i^y R_i^y}, \qquad \max_{1 \le j \le n_1}\left( L_j^z \sum_{i=1}^{n_2} |w_{ji}| \right) < \min_{1 \le j \le n_1} \frac{1}{C_j^z R_j^z},$$
and $\kappa^*$ is such that
$$0 < \kappa^* \le \min\left\{ \min_{1 \le i \le n_2} \frac{1}{C_i^y R_i^y} - \max_{1 \le i \le n_2}\left( M_i^y \sum_{j=1}^{n_1} |h_{ij}| \right),\ \min_{1 \le j \le n_1} \frac{1}{C_j^z R_j^z} - \max_{1 \le j \le n_1}\left( L_j^z \sum_{i=1}^{n_2} |w_{ji}| \right) \right\}.$$
3. For $t \in [t_0, \infty)$ we have
$$\bar G(t) = \int_{t_0}^{t} W_q(t - t_k, s - t_k)\, \frac{\sum_{i=1}^{n_2} |\gamma_i^y(s)| + \sum_{j=1}^{n_1} |\gamma_j^z(s)|}{(s - t_0)^{1-q}}\, ds + \sum_{j=1}^{k} \prod_{l=k-j+1}^{k} E_q(-\kappa^*, t_l - t_{l-1}) \int_{t_{k-j}}^{t_{k-j+1}} W_q(t - t_k, s - t_{k-j})\, \frac{\sum_{i=1}^{n_2} |\gamma_i^y(s)| + \sum_{j=1}^{n_1} |\gamma_j^z(s)|}{(s - t_{k-j})^{1-q}}\, ds < \infty;$$
4. For the function $h(t, y, z)$ we have
$$|h(t, y, z)| < \sum_{i=1}^{n_1} |z_i(t)| + \sum_{j=1}^{n_2} |y_j(t)| \le \Lambda(H)\, |h(t, y, z)|, \quad t \in [t_0, \infty),$$
where $\Lambda(H) \ge 1$ exists for any $0 < H < \infty$.
Then (8) is $(\lambda, A)$-globally practically exponentially stable with respect to the function $h$.
Proof. 
The proof of Theorem 2 follows the steps of the proof of Theorem 1. In this case we can use the Lyapunov function
$$V(y(t), z(t)) = \sum_{i=1}^{n_1} |z_i(t)| + \sum_{j=1}^{n_2} |y_j(t)|.$$
Then, jump inequalities analogous to those in the proof of Theorem 1 follow from condition A5, and from condition 1 of Theorem 2 the derivative estimate takes the form
$$^+D_{t_k}^q V(y(t), z(t)) \le \sum_{j=1}^{n_1}\left( \sum_{i=1}^{n_2} |w_{ji}|\, L_j^z - \frac{1}{C_j^z R_j^z} \right) |z_j(t)| + \sum_{i=1}^{n_2}\left( \sum_{j=1}^{n_1} |h_{ij}|\, M_i^y - \frac{1}{C_i^y R_i^y} \right) |y_i(t)| + \sum_{j=1}^{n_1} |\gamma_j^z(t)| + \sum_{i=1}^{n_2} |\gamma_i^y(t)|.$$
Condition 2 of Theorem 2 implies the existence of a positive number $\kappa^*$ such that
$$\kappa^* \le \min\left\{ \min_{1 \le i \le n_2} \frac{1}{C_i^y R_i^y} - \max_{1 \le i \le n_2}\left( M_i^y \sum_{j=1}^{n_1} |h_{ij}| \right),\ \min_{1 \le j \le n_1} \frac{1}{C_j^z R_j^z} - \max_{1 \le j \le n_1}\left( L_j^z \sum_{i=1}^{n_2} |w_{ji}| \right) \right\},$$
and, hence,
$$^+D_{t_k}^q V(y(t), z(t)) \le -\kappa^* V(y(t), z(t)) + \bar G(t).$$
The proof is completed by applying conditions 3 and 4 of Theorem 2. □

4.2. Examples

Example 1.
Consider the following 2-D impulsive fractional-like Hopfield neural network model
$$D_{t_k}^q x_i(t) = -\frac{1}{C_i(t) R_i(t)}\, x_i(t) + \sum_{j=1}^{2} \alpha_{ij}(t)\, f_j(x_j(t)) + \gamma_i(t), \quad t \ne t_k,\ k = 0, 1, \dots,$$
$$\Delta x(t_k) = \begin{pmatrix} -\dfrac{3}{4} & 0 \\ 0 & -\dfrac{2}{3} \end{pmatrix} x(t_k), \quad k = 1, 2, \dots, \qquad (9)$$
where $i = 1, 2$, $t_0 = 0$,
$$x(t) = \begin{pmatrix} x_1(t) \\ x_2(t) \end{pmatrix}, \quad \gamma_1(t) = \gamma_2(t) = 0, \quad C_1(t) = \frac{e^{-t}}{4}, \quad C_2(t) = \frac{e^{-t}}{3}, \quad R_1 = R_2 = 1,$$
$$f_j(x_j) = \frac{|x_j + 1| - |x_j - 1|}{2}, \quad j = 1, 2,$$
$$\alpha_{11}(t) = 0.3 + \sin(t), \quad \alpha_{12}(t) = 0.1 - 0.6\cos(t) - 0.4\sin t,$$
$$\alpha_{21}(t) = 0.3\cos(t) + 0.7, \quad \alpha_{22}(t) = 0.8 - 0.3\cos(t) + 0.2\sin(t),$$
$0 < t_1 < t_2 < \dots$ and $t_k \to \infty$ as $k \to \infty$.
Since
$$\bar\alpha_{11} = 1.3, \quad \bar\alpha_{12} = 1.1, \quad \bar\alpha_{21} = 1, \quad \bar\alpha_{22} = 1.3, \quad \bar C_1 = \frac{1}{4}, \quad \bar C_2 = \frac{1}{3}, \quad \bar R_1 = \bar R_2 = 1,$$
condition 2 of Theorem 1 is satisfied with $0 < \kappa^* \le 0.6$.
Also, since $\sum_{i=1}^{n} |\gamma_i(t)| = 0$, we can choose $0 < \lambda < A$ so that $g(t) < A$.
In addition, condition 4 of Theorem 1 is satisfied, since
$$0 < \gamma_{1k} = \frac{3}{4} < 2, \quad 0 < \gamma_{2k} = \frac{2}{3} < 2, \quad k = 1, 2, \dots.$$
Therefore, according to Theorem 1, the impulsive fractional-like neural network system (9) is $(\lambda, A)$-globally practically exponentially stable with respect to the function $h(x_1, x_2) = |x_1| + |x_2|$. The globally exponentially stable behavior is shown in Figure 1 for $\lambda = 5$, $A = 9$.
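The bound on $\kappa^*$ used in this example can be reproduced with a few lines of arithmetic; the sketch below is our illustration, using the example's own constants and the fact that the activation $f_j$ is 1-Lipschitz, so $L_1 = L_2 = 1$.

```python
# Reproducing the bound 0 < kappa* <= 0.6 and checking condition 4 for Example 1.
import numpy as np

C_bar = np.array([1/4, 1/3]); R_bar = np.array([1.0, 1.0])
alpha_bar = np.array([[1.3, 1.1],
                      [1.0, 1.3]])                       # suprema |alpha_bar_ij| from the example
L = np.array([1.0, 1.0])                                 # Lipschitz constants of f_j

kappa1 = np.min(1.0 / (C_bar * R_bar))                   # = min(4, 3) = 3
kappa2 = np.max(L * np.abs(alpha_bar).sum(axis=0))       # = max(2.3, 2.4) = 2.4
print(kappa1 - kappa2)                                   # ~0.6, so any 0 < kappa* <= 0.6 works

gamma_imp = np.array([3/4, 2/3])                         # impulse gains gamma_1k, gamma_2k
print(np.all((0 < gamma_imp) & (gamma_imp < 2)))         # condition 4: True
```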
Example 2.
Consider the following impulsive BAM fractional-like Hopfield neural network model
$$D_{t_k}^q y_i(t) = -\frac{1}{C_i^y R_i^y}\, y_i(t) + \sum_{j=1}^{2} w_{ji}\, f_j^z(z_j(t)) + \gamma_i^y(t),$$
$$D_{t_k}^q z_j(t) = -\frac{1}{C_j^z R_j^z}\, z_j(t) + \sum_{i=1}^{2} h_{ij}\, g_i^y(y_i(t)) + \gamma_j^z(t), \quad t \ne t_k,\ k = 0, 1, 2, \dots,$$
$$\Delta y_1(t_k) = Q_{1k}\big(y_1(t_k) - 2\big), \quad \Delta y_2(t_k) = Q_{2k}\big(y_2(t_k) - 1\big),$$
$$\Delta z_1(t_k) = T_{1k}\big(z_1(t_k) - 2\big), \quad \Delta z_2(t_k) = T_{2k}\big(z_2(t_k) - 3\big), \quad k = 1, 2, \dots, \qquad (10)$$
where $i, j = 1, 2$, $t_0 = 0$, $\gamma_1^y(t) = 2.2$, $\gamma_2^y(t) = 3.6$, $\gamma_1^z(t) = \gamma_2^z(t) = 2.8$,
$$y(t) = \begin{pmatrix} y_1(t) \\ y_2(t) \end{pmatrix}, \quad z(t) = \begin{pmatrix} z_1(t) \\ z_2(t) \end{pmatrix}, \quad C_1^y = \frac{1}{2}, \quad C_2^y = \frac{1}{3}, \quad C_1^z = \frac{1}{5}, \quad C_2^z = \frac{1}{4},$$
$$R_1^y = \frac{4}{3}, \quad R_2^y = \frac{3}{4}, \quad R_1^z = \frac{5}{2}, \quad R_2^z = 4,$$
$$f_j^z(z_j) = \frac{|z_j + 1| - |z_j - 1|}{2}, \quad j = 1, 2, \qquad g_i^y(y_i) = \frac{|y_i + 1| - |y_i - 1|}{2}, \quad i = 1, 2,$$
$$w_{11} = 0.3, \quad w_{12} = 0.6, \quad w_{21} = 0.5, \quad w_{22} = 0.2, \quad h_{11} = 0.7, \quad h_{12} = 0.5, \quad h_{21} = 0.3, \quad h_{22} = 0.1,$$
$$Q_{ik} = -1 + \frac{1}{2i}\cos(2k - 3), \quad T_{jk} = -1 + \frac{2}{5j}\sin(1 + k), \quad i, j = 1, 2,\ k = 1, 2, \dots,$$
$0 < t_1 < t_2 < \dots$ and $t_k \to \infty$ as $k \to \infty$.
We can easily find that the neural network system (10) has an equilibrium
$$(y^*, z^*)^T = (y_1^*, y_2^*, z_1^*, z_2^*)^T = (2, 1, 2, 3)^T. \qquad (11)$$
Set $\bar y_i = y_i - y_i^*$, $\bar z_j = z_j - z_j^*$, $i, j = 1, 2$. Then
$$D_{t_k}^q \bar y_i(t) = -\frac{1}{C_i^y R_i^y}\, \bar y_i(t) + \sum_{j=1}^{2} w_{ji}\big( f_j^z(z_j(t)) - f_j^z(z_j^*) \big),$$
$$D_{t_k}^q \bar z_j(t) = -\frac{1}{C_j^z R_j^z}\, \bar z_j(t) + \sum_{i=1}^{2} h_{ij}\big( g_i^y(y_i(t)) - g_i^y(y_i^*) \big), \quad t \ne t_k,\ k = 0, 1, 2, \dots,$$
$$\Delta \bar y_1(t_k) = Q_{1k}\, \bar y_1(t_k), \quad \Delta \bar y_2(t_k) = Q_{2k}\, \bar y_2(t_k), \quad \Delta \bar z_1(t_k) = T_{1k}\, \bar z_1(t_k), \quad \Delta \bar z_2(t_k) = T_{2k}\, \bar z_2(t_k), \quad k = 1, 2, \dots. \qquad (12)$$
For the system (12) all conditions of Theorem 2 are satisfied. Indeed, we have that $L_j^z = M_i^y = 1$, $i, j = 1, 2$,
$$1.2 = \max_{1 \le i \le 2}\left( M_i^y \sum_{j=1}^{2} |h_{ij}| \right) < \min_{1 \le i \le 2} \frac{1}{C_i^y R_i^y} = 1.5,$$
$$0.8 = \max_{1 \le j \le 2}\left( L_j^z \sum_{i=1}^{2} |w_{ji}| \right) < \min_{1 \le j \le 2} \frac{1}{C_j^z R_j^z} = 1,$$
$$-2 < Q_{ik} < 0, \qquad -2 < T_{jk} < 0$$
for $i, j = 1, 2$, $k = 1, 2, \dots$, and $0 < \kappa^* \le 0.2$.
Hence, the fractional-like impulsive BAM neural network system (10) is $(\lambda, A)$-globally practically exponentially stable with respect to the function $h(y_1, y_2, z_1, z_2) = (y_1 - y_1^*)^2 + (y_2 - y_2^*)^2 + (z_1 - z_1^*)^2 + (z_2 - z_2^*)^2$. The globally exponentially stable behavior is shown in Figure 2 for $\lambda = 8$, $A = 11$.
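The two inequalities of condition 2 of Theorem 2 can be recomputed from the circuit constants of the example; the interconnection bounds 1.2 and 0.8 are taken directly from the text above. The following arithmetic sketch is our illustration only.

```python
# Recomputing the bounds of condition 2 of Theorem 2 for Example 2.
import numpy as np

C_y = np.array([1/2, 1/3]); R_y = np.array([4/3, 3/4])
C_z = np.array([1/5, 1/4]); R_z = np.array([5/2, 4.0])

min_y = np.min(1.0 / (C_y * R_y))              # = min(1.5, 4) = 1.5
min_z = np.min(1.0 / (C_z * R_z))              # = min(2, 1) = 1.0

bound_h, bound_w = 1.2, 0.8                    # interconnection bounds stated in the example
print(bound_h < min_y, bound_w < min_z)        # True, True
print(min(min_y - bound_h, min_z - bound_w))   # ~0.2, so any 0 < kappa* <= 0.2 works
```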

5. Impulsive Fractional-Like Neural Networks with Uncertain Parameters

In this Section, we will consider an impulsive neural network system with FLDs and uncertain parameters given by
$$D_{t_k}^q x_i(t) = -\left( \frac{1}{C_i(t) R_i(t)} + \tilde a_i(t) \right) x_i(t) + \sum_{j=1}^{n} \big( \alpha_{ij}(t) + \tilde\alpha_{ij}(t) \big) f_j(x_j(t)) + \gamma_i(t) + \tilde\gamma_i(t), \quad t \ne t_k,\ k = 0, 1, \dots,$$
$$\Delta x_i(t_k) = -\big( \gamma_{ik} + \tilde P_{ik} \big)\, x_i(t_k), \quad k = 1, 2, \dots, \qquad (13)$$
where the functions $\tilde a_i \in C^q[\mathbb{R}_+, (0, \infty)]$, $\tilde\alpha_{ij}, \tilde\gamma_i \in C^q[\mathbb{R}_+, \mathbb{R}]$, $i, j = 1, 2, \dots, n$, and the constants $\tilde P_{ik}$, $i = 1, 2, \dots, n$, $k = 1, 2, \dots$, represent the uncertainties of the system [63]. In the case when all of these functions and constants are zero, the system (13) reduces to the "nominal system" (1) [63,64,65].
Definition 3.
The system (1) is called $(\lambda, A)$-practically robustly exponentially stable with respect to the function $h$ if, for given $(\lambda, A)$ with $0 < \lambda < A$, $t_0 \in \mathbb{R}_+$, $x_0 \in M_{t_0^+}(\lambda)$ and for any $\tilde a_i$, $\tilde\alpha_{ij}$, $\tilde\gamma_i$, $\tilde P_{ik}$, $i, j = 1, 2, \dots, n$, $k = 1, 2, \dots$, the system (13) is $(\lambda, A)$-practically exponentially stable with respect to the function $h$.
Using Theorem 1, we can prove the next result.
Theorem 3.
Assume that:
1. Conditions of Theorem 1 hold.
2. For $i, j = 1, 2, \dots, n$ the functions $\tilde\gamma_i(t)$, $\tilde a_i(t)$ and $\tilde\alpha_{ij}(t)$ are bounded for $t \in [t_0, \infty)$,
$$\min_{1 \le i \le n}\left( \frac{1}{\bar C_i \bar R_i} + \bar{\tilde a}_i \right) > \max_{1 \le i \le n}\left( L_i \sum_{j=1}^{n} \big( |\bar\alpha_{ji}| + |\bar{\tilde\alpha}_{ji}| \big) \right),$$
$\kappa^* > 0$ is such that
$$\min_{1 \le i \le n}\left( \frac{1}{\bar C_i \bar R_i} + \bar{\tilde a}_i \right) - \max_{1 \le i \le n}\left( L_i \sum_{j=1}^{n} \big( |\bar\alpha_{ji}| + |\bar{\tilde\alpha}_{ji}| \big) \right) \ge \kappa^* > 0,$$
and
$$\int_{t_0}^{t} W_q(t - t_k, s - t_k)\, \frac{\sum_{i=1}^{n} \big( |\gamma_i(s)| + |\tilde\gamma_i(s)| \big)}{(s - t_0)^{1-q}}\, ds + \sum_{j=1}^{k} \prod_{l=k-j+1}^{k} E_q(-\kappa^*, t_l - t_{l-1}) \int_{t_{k-j}}^{t_{k-j+1}} W_q(t - t_k, s - t_{k-j})\, \frac{\sum_{i=1}^{n} \big( |\gamma_i(s)| + |\tilde\gamma_i(s)| \big)}{(s - t_{k-j})^{1-q}}\, ds < \infty.$$
3. The unknown constants $\tilde P_{ik}$ are bounded and such that $0 < \tilde P_{ik} < 1 - \gamma_{ik}$, $i = 1, 2, \dots, n$, $k = 1, 2, \dots$.
Then the system (1) is $(\lambda, A)$-practically robustly exponentially stable with respect to the function $h$.
Example 3.
Consider the following 2-D uncertain impulsive fractional-like Hopfield neural network model
$$D_{t_k}^q x_i(t) = -\left( \frac{1}{C_i(t) R_i(t)} + \tilde a_i(t) \right) x_i(t) + \sum_{j=1}^{2} \big( \alpha_{ij}(t) + \tilde\alpha_{ij}(t) \big) f_j(x_j(t)) + \gamma_i(t) + \tilde\gamma_i(t), \quad t \ne t_k,\ k = 0, 1, \dots,$$
$$\Delta x(t_k) = -\begin{pmatrix} \dfrac{3}{4} + \tilde P_{1k} & 0 \\ 0 & \dfrac{2}{3} + \tilde P_{2k} \end{pmatrix} x(t_k), \quad k = 1, 2, \dots, \qquad (14)$$
where $i = 1, 2$, $t_0 = 0$, for which system (9) is the nominal system, and $\tilde a_i \in C^q[\mathbb{R}_+, (0, \infty)]$, $\tilde\alpha_{ij}, \tilde\gamma_i \in C^q[\mathbb{R}_+, \mathbb{R}]$, $i, j = 1, 2$, $k = 1, 2, \dots$, and the constants $\tilde P_{ik}$, $i = 1, 2$, $k = 1, 2, \dots$, are the uncertain parameters.
Then, if all uncertain terms are bounded and all conditions of Theorem 3 are satisfied, the system (9) is $(\lambda, A)$-globally practically robustly exponentially stable with respect to the function $h(x_1, x_2) = |x_1| + |x_2|$.
Note that if some of the uncertain terms are unbounded, or if the conditions of Theorem 3 are otherwise violated, Theorem 3 cannot guarantee the robust practical stability of the fractional-like model (9). For example, for $\tilde P_{2k} = 2$, $k = 1, 2, \dots$, condition 3 of Theorem 3 fails, and the resulting unstable behavior of the model (14) is shown in Figure 3 for $\lambda = 5$, $A = 9$.
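A quick way to see why $\tilde P_{2k} = 2$ destroys stability is to look at the multiplier applied to $x_2$ at each impulsive moment; the sketch below (our illustration, using the sign convention of the reconstruction of (13) and (14) above) shows that its magnitude exceeds one, so every impulse amplifies the state.

```python
# Post-impulse multiplier for x2 in Example 3: x2(t_k^+) = (1 - gamma_2k - P_tilde) x2(t_k).
# Values of P_tilde below are chosen for illustration only.
gamma_2k = 2.0 / 3.0

for P_tilde in (0.0, 0.2, 2.0):              # nominal, admissible uncertainty, Example 3 value
    factor = 1.0 - gamma_2k - P_tilde         # multiplier applied to x2 at each impulse
    print(P_tilde, factor, "contracting" if abs(factor) < 1 else "amplifying")
# P_tilde = 0.0 -> factor  1/3   (contracting, nominal system (9))
# P_tilde = 0.2 -> factor  0.13  (contracting, satisfies 0 < P_tilde < 1 - gamma_2k)
# P_tilde = 2.0 -> factor -5/3   (|factor| > 1: repeated impulses blow the state up)
```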

6. Conclusions

In this paper, a new class of impulsive neural network systems with FLDs has been proposed. Practical stability analysis has been performed and efficient sufficient conditions have been established. With this research, we extend the results on impulsive Hopfield-type neural network models to the fractional-like case. In addition, the obtained results are applied to neural networks with uncertain values of the parameters. Since the use of FLDs overcomes some of the difficulties in evaluating fractional derivatives, the obtained results are more appropriate for applications.

Author Contributions

Conceptualization, G.S. and I.S.; methodology, G.S., I.S. and A.M.; formal analysis, G.S., I.S. and A.M.; investigation, G.S., I.S. and A.M.; visualization, T.S.; writing—original draft preparation, T.S. These authors contributed equally to this work. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Chua, L.O.; Yang, L. Cellular neural networks: Theory. IEEE Trans. Circuits Syst. 1988, 35, 1257–1272. [Google Scholar] [CrossRef]
  2. Chua, L.O.; Yang, L. Cellular neural networks: Applications. IEEE Trans. Circuits Syst. 1988, 35, 1273–1290. [Google Scholar] [CrossRef]
  3. Arbib, M. Brains, Machines, and Mathematics, 2nd ed.; Springer: New York, NY, USA, 1987. [Google Scholar]
  4. Haykin, S. Neural Networks: A Comprehensive Foundation, 2nd ed.; Prentice-Hall: Englewood Cliffs, NJ, USA, 1999. [Google Scholar]
  5. Hsu, Y.; Wang, S.; Yu, C. A sequential approximation method using neural networks for engineering design optimization problems. Eng. Optim. 2003, 35, 489–511. [Google Scholar] [CrossRef]
  6. Wiedemann, S.; Marban, A.; Müller, K.-R.; Samek, W. Entropy-constrained training of deep neural networks. arXiv 2018, arXiv:1812.07520. [Google Scholar]
  7. Ban, J.-C.; Chang, C.-H.; Huang, N.-Z. Entropy bifurcation of neural networks on Cayley trees. arXiv 2018, arXiv:1706.09283. [Google Scholar] [CrossRef]
  8. Chen, J.; Li, X.; Wang, D. Asymptotic stability and exponential stability of impulsive delayed Hopfield neural networks. Abstr. Appl. Anal. 2013, 2013, 1–10. [Google Scholar] [CrossRef]
  9. He, W.; Qian, F.; Cao, J. Pinning-controlled synchronization of delayed neural networks with distributed-delay coupling via impulsive control. Neural Netw. 2017, 85, 1–9. [Google Scholar] [CrossRef]
  10. Hu, B.; Guan, Z.-H.; Chen, G.; Lewis, F.L. Multistability of delayed hybrid impulsive neural networks with application to associative memories. IEEE Trans. Neural Netw. Learn. Syst. 2019, 30, 1537–1551. [Google Scholar] [CrossRef]
  11. Li, X. Existence and global exponential stability of periodic solution for impulsive Cohen–Grossberg-type BAM neural networks with continuously distributed delays. Appl. Math. Comput. 2009, 215, 292–307. [Google Scholar] [CrossRef]
  12. Maharajan, C.; Raja, R.; Cao, J.; Rajchakit, G.; Alsaedi, A. Impulsive Cohen–Grossberg BAM neural networks with mixed time-delays: An exponential stability analysis issue. Neurocomputing 2018, 275, 2588–2602. [Google Scholar] [CrossRef]
  13. Stamova, I.M.; Stamov, G.T. Applied Impulsive Mathematical Models, 1st ed.; Springer: Cham, Switzerland, 2016. [Google Scholar]
  14. Stamova, I.M.; Stamov, T.; Simeonova, N. Impulsive effects on the global exponential stability of neural network models with supremums. Eur. J. Control 2014, 20, 199–206. [Google Scholar] [CrossRef]
  15. Zhang, X.; Lv, X.; Li, X. Sampled-data based lag synchronization of chaotic delayed neural networks with impulsive control. Nonlinear Dyn. 2017, 90, 2199–2207. [Google Scholar] [CrossRef]
  16. Kilbas, A.A.; Srivastava, H.M.; Trujillo, J.J. Theory and Applications of Fractional Differential Equations, 1st ed.; Elsevier Science Limited: Amsterdam, The Netherlands, 2006. [Google Scholar]
  17. Podlubny, I. Fractional Differential Equations, 1st ed.; Academic Press: San Diego, CA, USA, 1999. [Google Scholar]
  18. Chen, L.; Basu, B.; McCabe, D. Fractional order models for system identification of thermal dynamics of buildings. Energ. Build. 2016, 133, 381–388. [Google Scholar] [CrossRef] [Green Version]
  19. Magin, R.L.; Ingo, C. Entropy and information in a fractional order model of anomalous diffusion. IFAC Proc. 2012, 45, 428–433. [Google Scholar] [CrossRef]
  20. Sierociuk, D.; Skovranek, T.; Macias, M.; Podlubny, I.; Petras, I.; Dzielinski, A.; Ziubinski, P. Diffusion process modeling by using fractional-order models. Appl. Math. Comput. 2015, 257, 2–11. [Google Scholar] [CrossRef] [Green Version]
  21. Xi, H.L.; Li, Y.X.; Huang, X. Generation and nonlinear dynamical analyses of fractional-order memristor–based Lorenz systems. Entropy 2014, 16, 6240–6253. [Google Scholar] [CrossRef] [Green Version]
  22. Chen, L.; Qu, J.; Chai, Y.; Wu, R.; Qi, G. Synchronization of a class of fractional-order chaotic neural networks. Entropy 2013, 15, 3265–3276. [Google Scholar] [CrossRef]
  23. Hu, H.-P.; Wang, J.-K.; Xie, F.-L. Dynamics analysis of a new fractional-order Hopfield neural network with delay and its generalized projective synchronization. Entropy 2019, 21, 1. [Google Scholar] [CrossRef] [Green Version]
  24. Li, L.; Wang, Z.; Lu, J.; Li, Y. Adaptive synchronization of fractional-order complex-valued neural networks with discrete and distributed delays. Entropy 2018, 20, 124. [Google Scholar] [CrossRef] [Green Version]
  25. Zhang, S.; Yu, Y.G.; Yu, J.Z. LMI Conditions for global stability of fractional-order neural networks. IEEE Trans. Neural Netw. Learn. Syst. 2016, 28, 2423–2433. [Google Scholar] [CrossRef]
  26. Stamov, G.; Stamova, I. Impulsive fractional-order neural networks with time-varying delays: Almost periodic solutions. Neural Comput. Appl. 2017, 28, 3307–3316. [Google Scholar] [CrossRef]
  27. Stamova, I.M.; Stamov, G.T. Functional and Impulsive Differential Equations of Fractional Order: Qualitative Analysis and Applications, 1st ed.; CRC Press: Boca Raton, FL, USA, 2017. [Google Scholar]
  28. Stamova, I.; Stamov, G. Mittag–Leffler synchronization of fractional neural networks with time-varying delays and reaction-diffusion terms using impulsive and linear controllers. Neural Netw. 2017, 96, 22–32. [Google Scholar] [CrossRef] [PubMed]
  29. Wan, P.; Jian, J. Impulsive stabilization and synchronization of fractional-order complex-valued neural networks. Neural Process. Lett. 2019, 50, 2201–2218. [Google Scholar] [CrossRef]
  30. Zhang, X.; Niu, P.; Ma, Y.; Wei, Y.; Li, G. Global Mittag-Leffler stability analysis of fractional-order impulsive neural networks with one-side Lipschitz condition. Neural Netw. 2017, 94, 67–75. [Google Scholar] [CrossRef] [PubMed]
  31. Ahmad, B.; Alsaedi, A.; Ntouyas, S.K.; Tariboon, J. Hadamard-Type Fractional Differential Equations, Inclusions and Inequalities, 1st ed.; Springer: Cham, Switzerland, 2017. [Google Scholar]
  32. Atangana, A.; Gómez–Aguilar, J.F. Numerical approximation of Riemann-Liouville definition of fractional derivative: From Riemann–Liouville to Atangana–Baleanu. Numer. Methods Part. Differ. Equ. 2018, 34, 1502–1523. [Google Scholar] [CrossRef]
  33. Bayın, S.S. Definition of the Riesz derivative and its application to space fractional quantum mechanics. J. Math. Phys. 2016, 57, 123501. [Google Scholar] [CrossRef] [Green Version]
  34. Gao, W.; Ghanbari, B.; Baskonus, H.M. New numerical simulations for some real world problems with Atangana–Baleanu fractional derivative. Chaos Solitons Fractals 2019, 128, 34–43. [Google Scholar] [CrossRef]
  35. Taneco–Heránndez, M.A.; Morales–Delgado, V.F.; Gómez-Aguilar, J.F. Fractional Kuramoto–Sivashinsky equation with power law and stretched Mittag-Leffler kernel. Phys. A 2019, 527, 121085. [Google Scholar] [CrossRef]
  36. Khalil, R.; Al Horani, M.; Yousef, A.; Sababheh, M. A new definition of fractional derivative. J. Comput. Appl. Math. 2014, 264, 65–70. [Google Scholar] [CrossRef]
  37. Abdeljawad, T. On conformable fractional calculus. J. Comput. Appl. Math. 2015, 279, 57–66. [Google Scholar] [CrossRef]
  38. Pospíšil, M.; Pospíšilova Škripkova, L. Sturm’s theorems for conformable fractional differential equation. Math. Commun. 2016, 21, 273–281. [Google Scholar]
  39. Souahi, A.; Ben Makhlouf, A.; Hammami, M.A. Stability analysis of conformable fractional-order nonlinear systems. Indag. Math. 2017, 28, 1265–1274. [Google Scholar] [CrossRef]
  40. Caputo, M.; Fabrizio, M. A new definition of fractional derivative without singular kernel. Progr. Fract. Differ. Appl. 2015, 1, 1–13. [Google Scholar]
  41. Abdelhakim, A.; Tenreiro Machado, J.A. A critical analysis of the conformable derivative. Nonlinear Dynam. 2019, 95, 3063–3073. [Google Scholar] [CrossRef]
  42. Ortigueira, M.; Machado, J. Which Derivative? Fractal Fract. 2017, 1, 3. [Google Scholar] [CrossRef]
  43. Ortigueira, M.; Machado, J. A critical analysis of the Caputo-Fabrizio operator. Commun. Nonlinear Sci. Numer. Simul. 2018, 59, 608–611. [Google Scholar] [CrossRef]
  44. Sales Teodoro, G.; Tenreiro Machado, J.A.; de Oliveira, E.C. A review of definitions of fractional derivatives and other operators. J. Comput. Phys. 2019, 388, 195–208. [Google Scholar] [CrossRef]
  45. Tarasov, V. Caputo–Fabrizio operator in terms of integer derivatives: Memory or distributed lag? Comp. Appl. Math. 2019, 38, 113. [Google Scholar] [CrossRef]
  46. Martynyuk, A.A.; Stamova, I.M. Fractional-like derivative of Lyapunov-type functions and applications to the stability analysis of motion. Electron. J. Differ. Equ. 2018, 2018, 1–12. [Google Scholar]
  47. Kiskinov, H.; Petkova, M.; Zahariev, A. Remarks about the existence of conformable derivatives and some consequences. arXiv 2019, arXiv:1907.03486. [Google Scholar]
  48. Martynyuk, A.A. On the stability of the solutions of fractional-like equations of perturbed motion. Dopov. Nats. Akad. Nauk Ukr. Mat. Prirodozn. Tekh. Nauk. 2018, 6, 9–16. (In Russian) [Google Scholar] [CrossRef]
  49. Martynyuk, A.A.; Stamov, G.; Stamova, I. Integral estimates of the solutions of fractional-like equations of perturbed motion. Nonlinear Anal. Model. Control 2019, 24, 138–149. [Google Scholar] [CrossRef]
  50. Martynyuk, A.A.; Stamov, G.; Stamova, I. Practical stability analysis with respect to manifolds and boundedness of differential equations with fractional-like derivatives. Rocky Mt. J. Math. 2019, 49, 211–233. [Google Scholar] [CrossRef]
  51. Sitho, S.; Ntouyas, S.K.; Agarwal, P.; Tariboon, J. Noninstantaneous impulsive inequalities via conformable fractional calculus. J. Inequal. Appl. 2018, 2018, 261. [Google Scholar] [CrossRef]
  52. Stamov, G.; Martynyuk, A.; Stamova, I. Impulsive fractional-like differential equations: Practical stability and boundedness with respect to h−manifolds. Fractal Fract. 2019, 3, 50. [Google Scholar] [CrossRef] [Green Version]
  53. Tariboon, J.; Ntouyas, S.K. Oscillation of impulsive conformable fractional differential equations. Open Math. 2016, 14, 497–508. [Google Scholar] [CrossRef] [Green Version]
  54. Ballinger, G.; Liu, X. Practical stability of impulsive delay differential equations and applications to control problems. In Optimization Methods and Applications. Applied Optimization; Yang, X., Teo, K.L., Caccetta, L., Eds.; Kluwer: Dordrecht, The Netherlands, 2001; Volume 52, pp. 3–21. [Google Scholar]
  55. Lakshmikantham, V.; Leela, S.; Martynyuk, A.A. Practical Stability of Nonlinear Systems; World Scientific: Teaneck, NJ, USA, 1990. [Google Scholar]
  56. Martynyuk, A.A. Advances in Stability Theory at the End of the 20th Century. Stability and Control: Theory, Methods and Applications, 1st ed.; Taylor and Francis: New York, NY, USA, 2002. [Google Scholar]
  57. Stamov, G.; Stamova, I.M.; Li, X.; Gospodinova, E. Practical stability with respect to h-manifolds for impulsive control functional differential equations with variable impulsive perturbations. Mathematics 2019, 7, 656. [Google Scholar] [CrossRef] [Green Version]
  58. Cicek, M.; Yaker, C.; Gücen, M.B. Practical stability in terms of two measures for fractional order systems in Caputo’s sense with initial time difference. J. Frankl. Inst. 2014, 351, 732–742. [Google Scholar] [CrossRef]
  59. Stamova, I.M.; Henderson, J. Practical stability analysis of fractional-order impulsive control systems. ISA Trans. 2016, 64, 77–85. [Google Scholar] [CrossRef]
  60. Bohner, M.; Stamova, I.; Stamov, G. Impulsive control functional differential systems of fractional order: Stability with respect to manifolds. Eur. Phys. J. Spec. Top. 2017, 226, 3591–3607. [Google Scholar] [CrossRef]
  61. Smale, S. Stable manifolds for differential equations and diffeomorphisms. Ann. Sc. Norm. Sup. Pisa 1963, 3, 97–116. [Google Scholar]
  62. Stamov, G. Lyapunov’s functions and existence of integral manifolds for impulsive differential systems with time-varying delay. Methods Appl. Anal. 2009, 16, 291–298. [Google Scholar]
  63. Liu, B.; Liu, X.; Liao, X. Robust stability of uncertain impulsive dynamical systems. J. Math. Anal. Appl. 2004, 290, 519–533. [Google Scholar] [CrossRef] [Green Version]
  64. Stamov, G.T.; Alzabut, J.O. Almost periodic solutions in the PC-space for uncertain impulsive dynamical systems. Nonlinear Anal. 2011, 74, 4653–4659. [Google Scholar] [CrossRef]
  65. Stamov, G.T.; Simeonov, S.; Stamova, I.M. Uncertain impulsive Lotka–Volterra competitive systems: Robust stability of almost periodic solutions. Chaos Solitons Fractals 2018, 110, 178–184. [Google Scholar] [CrossRef]
  66. Li, Y.; Chen, Y.; Podlubny, I. Mittag–Leffler stability of fractional order nonlinear dynamic systems. Automatica 2009, 45, 1965–1969. [Google Scholar] [CrossRef]
Figure 1. The $(\lambda, A)$-globally exponentially stable behavior of the fractional-like neural network model (9) with respect to the function $h = |x_1| + |x_2|$ for $\lambda = 5$, $A = 9$. (a) Behavior of the state variable $x_1(t)$; (b) behavior of the state variable $x_2(t)$.
Figure 2. The $(\lambda, A)$-globally exponentially stable behavior of model (10) with respect to the function $h(y_1, y_2, z_1, z_2) = (y_1 - y_1^*)^2 + (y_2 - y_2^*)^2 + (z_1 - z_1^*)^2 + (z_2 - z_2^*)^2$ for $\lambda = 8$, $A = 11$. (a) Behavior of the state variable $y_1(t)$; (b) behavior of the state variable $y_2(t)$; (c) behavior of the state variable $z_1(t)$; (d) behavior of the state variable $z_2(t)$.
Figure 3. The unstable behavior of the state variable $x_2(t)$ of (14) for $\tilde P_{2k} = 2$, $k = 1, 2, \dots$, and $\lambda = 5$, $A = 9$.
