Article

New Approach to Quasi-Synchronization of Fractional-Order Delayed Neural Networks

Shilong Zhang, Feifei Du and Diyi Chen
1 College of Science, Northwest A&F University, Yangling 712100, China
2 Institute of Water Resources and Hydropower Research, Northwest A&F University, Yangling 712100, China
* Author to whom correspondence should be addressed.
Fractal Fract. 2023, 7(11), 825; https://doi.org/10.3390/fractalfract7110825
Submission received: 9 October 2023 / Revised: 8 November 2023 / Accepted: 14 November 2023 / Published: 16 November 2023
(This article belongs to the Special Issue Recent Advances in Fractional-Order Time Delay Systems)

Abstract

This article investigates quasi-synchronization for a class of fractional-order delayed neural networks. By utilizing the properties of the Laplace transform, the Caputo derivative, and the Mittag–Leffler function, a new fractional-order differential inequality is introduced. Furthermore, an adaptive controller is designed, resulting in the derivation of an effective criterion to ensure the aforementioned synchronization. Finally, a numerical illustration is provided to demonstrate the validity of the presented theoretical findings.

1. Introduction

With the rapid advancement of numerical algorithms, there has been significant interest in fractional calculus, which is an extension of classical integration and differentiation to arbitrary orders. Fractional-order systems offer distinct advantages over integer-order systems in capturing the memory and hereditary characteristics of numerous materials. For instance, they provide a more accurate description of the relationship between voltage and current in capacitors by utilizing the fractional properties of capacitor dielectrics [1,2]. Indeed, various real-world processes can be effectively described as fractional-order systems, including diffusion theory [3,4], electromagnetic theory [5], colored noise [6], happiness model [7], and dielectric relaxation [8]. The study of fractional-order systems is of paramount importance, offering valuable insights for both theoretical understanding and practical applications.
In recent decades, scholars have extensively studied neural networks due to their broad utility across various domains, including combinatorial optimization, automatic control, and signal processing [9,10,11,12,13,14]. The limited switching speed of amplifiers gives rise to time delays, which lead to oscillation, instability, and bifurcation [15,16]. Consequently, extensive research has been carried out to investigate the dynamic properties of neural networks incorporating time delays, focusing on aspects such as bifurcation [17], stability [18], and dissipativity [19]. The continuous-time integer-order Hopfield neural network [20] was introduced by Hopfield in 1984 and has garnered significant attention from scientists. In recent years, researchers have recognized the potential of fractional calculus in neural network studies and have extended the order of neural networks from integer-order to fractional-order. In fact, fractional calculus equips neurons with a fundamental and versatile computational capability, which plays a crucial role in enabling efficient information processing and anticipation of stimuli [21]. The integration of fractional calculus with neural networks reveals their true behavior, providing valuable insights into qualitative analysis and synchronization control. This integration has led numerous researchers to construct fractional models within neural networks, resulting in a wealth of findings in the field of fractional-order neural networks. These findings encompass the existence and uniqueness of nontrivial solutions, Mittag–Leffler stability, and synchronization control [22,23,24,25,26]. Anastassiou [27] highlighted the crucial role played by fractional-order recurrent neural networks in parameter estimation, noting their superior accuracy rates in approximations. Consequently, integrating fractional calculus into delayed neural networks and establishing fractional-order delayed neural networks represent a more precise and substantial approach. This approach enhances performance for optimization and complex computations, surpassing the capabilities of conventional integer-order neural networks.
The synchronization analysis of neural networks has been widely explored in the domains of neural networks and complex networks, leading to interesting findings in information science, signal processing, and secure communication [24,28,29,30]. Various forms of synchronization, including complete synchronization, exponential synchronization, and Mittag–Leffler synchronization, have been widely studied by researchers [25,26,31,32,33,34]. In the synchronization schemes mentioned above, it is typically assumed that the system error eventually converges to zero. However, in practical applications, the electronic components used in the construction of artificial neural networks often feature threshold voltages. When these threshold voltages are triggered, they can cause changes in the connection weights, resulting in unforeseen errors in the dynamical system. This, in turn, makes it challenging for the error system to reach a state of complete equilibrium at zero [35]. To address this issue, the concept of quasi-synchronization is employed, signifying that synchronization errors may not necessarily approach zero. Instead, they gradually converge within a confined range around zero as time progresses [36]. It is noteworthy that significant research on the quasi-synchronization of delayed neural networks has emerged in recent years [36,37,38,39]. Adaptive control [24,30], which automatically adjusts its control parameters according to an adaptive law, offers the distinct advantages of cost-effectiveness and ease of operation. Consequently, it is imperative and of considerable significance to study quasi-synchronization in fractional-order delayed neural networks utilizing adaptive control.
In [40], a finite-time synchronization criterion was established for fractional-order delayed fuzzy cellular neural networks by the utilization of a fractional-order Gronwall inequality. It was observed that the estimated value function within this inequality exhibited non-decreasing behavior over a finite time interval, potentially amplifying the disparity between the estimated synchronization error and the actual error. To minimize this difference, another study [41] focused on adaptive finite-time synchronization of the same neural networks. By designing an adaptive controller and proposing a new fractional-order differential inequality, the estimated error bound was shown to exhibit a declining trend over a finite time interval. These studies raise an intriguing question: can we determine a decreasing estimated value function that accurately captures the synchronization error in an infinite-time scenario? If such a function exists, how can we derive the corresponding synchronization criterion? It is worth noting that this problem has not been extensively investigated in the existing literature, indicating a need for further study and exploration in this area. Motivated by the problem, this article presents a criterion for achieving infinite-time synchronization of the considered fractional-order systems. The main results of this article are as follows:
(i) A new fractional-order differential inequality has been developed on an unbounded time interval. This inequality can be utilized to investigate the quasi-synchronization of fractional-order complex networks or neural networks.
(ii) Utilizing the proposed inequality in combination with an adaptive controller, a novel criterion for the quasi-synchronization of fractional-order delayed neural networks has been derived.
(iii) The validity of the developed results is substantiated through a numerical analysis, offering sufficient evidence in support of the obtained synchronization criterion.
Here is an outline of this article. In Section 2, some preliminaries and a model description are provided to lay the foundation for the subsequent analysis. In Section 3, the key fractional-order differential inequality and a new approach to quasi-synchronization are established. In Section 4, connections between mathematical treatment and numerical simulation are outlined, establishing a theoretical foundation for subsequent numerical analysis. Section 5 presents a numerical result, highlighting the significance of the obtained findings in a practical context.
Notations: Suppose that r is a real number and m is a natural number. Then, $\mathbb{N}_r = \{r, r+1, r+2, \ldots\}$ and $\mathbb{N}_r^{r+m} = \{r, r+1, \ldots, r+m\}$. Let $L^1([t_0, t], \mathbb{R})$ denote the set of measurable functions from $[t_0, t]$ to $\mathbb{R}$, where $\mathbb{R}$ is the set of real numbers. For each n-dimensional real vector $\xi = (\xi_1, \xi_2, \ldots, \xi_n)^T \in \mathbb{R}^n$, let $\|\xi\| = \sum_{i=1}^{n} |\xi_i|$.

2. Preliminaries and Model Formulation

2.1. Preliminaries

In this subsection, we review some fundamental knowledge that will be used later.
Definition 1 ([42]). Let $\alpha, \beta \in (0, +\infty)$. The Mittag–Leffler function with two parameters, denoted by $E_{\alpha,\beta}(t)$, is defined as
$$E_{\alpha,\beta}(t) = \sum_{\mu=0}^{\infty} \frac{t^{\mu}}{\Gamma(\mu\alpha + \beta)}, \qquad t \in \mathbb{R}.$$
In particular, let $\beta = 1$. Then, its one-parameter form is
$$E_{\alpha}(t) = E_{\alpha,1}(t) = \sum_{\mu=0}^{\infty} \frac{t^{\mu}}{\Gamma(\mu\alpha + 1)}.$$
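For readers who want to evaluate these functions numerically (for example, to plot the error bounds derived later), the following is a minimal Python sketch based on the truncated defining series; the function name, the term cap, and the guard against gamma-function overflow are our own choices and are not part of the paper. The partial sums are reliable only for small to moderate arguments; dedicated algorithms should be used for large ones.

```python
from math import gamma

def mittag_leffler(t, alpha, beta=1.0, terms=200):
    """Truncated series for the two-parameter Mittag-Leffler function
    E_{alpha,beta}(t) = sum_{mu>=0} t**mu / Gamma(mu*alpha + beta).
    Suitable only for small to moderate |t|."""
    total, power = 0.0, 1.0            # power holds t**mu
    for mu in range(terms):
        arg = mu * alpha + beta
        if arg > 170.0:                # math.gamma overflows beyond ~171
            break
        total += power / gamma(arg)
        power *= t
    return total

# Sanity checks: E_alpha(0) = 1 for every alpha, and E_1(t) = exp(t).
print(mittag_leffler(0.0, 0.8))        # -> 1.0
print(mittag_leffler(1.0, 1.0))        # -> 2.718281828...
```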
Lemma 1 ([42,43,44,45]). Let $\alpha \in (0, 1)$ and $\omega \in (0, +\infty)$. Then:
(i) $E_{\alpha}(-\omega(t - t_0)^{\alpha})$ is monotonically non-increasing for $t \in [t_0, +\infty)$.
(ii) $0 < E_{\alpha}(-\omega(t - t_0)^{\alpha}) \le 1$.
(iii) $E_{\alpha,\alpha}(-\omega(t - t_0)^{\alpha}) > 0$.
Lemma 2 ([46]). The following equality holds for the Mittag–Leffler function $E_{\alpha,\alpha}(\cdot)$:
$$\int_{t_0}^{t} (t - \zeta)^{\alpha-1} E_{\alpha,\alpha}(-\omega(t - \zeta)^{\alpha})\, d\zeta = \frac{1}{\omega}\left[1 - E_{\alpha}(-\omega(t - t_0)^{\alpha})\right]. \qquad (1)$$
We next recall from [47] (§ 3) the definition and some basic properties of the one-parameter Laplace transform.
Definition 2 ([47]). Given a function $f : [t_0, +\infty) \to \mathbb{R}$, its Laplace transform with parameter $t_0$ is defined by
$$\mathcal{L}_{t_0}\{f(t)\}(s) = \int_{t_0}^{+\infty} f(t)\, e^{-s(t - t_0)}\, dt.$$
Lemma 3 ([47]). Given two piecewise continuous functions f and g on $[t_0, +\infty)$ of exponential order $\breve{\varepsilon}$, the following equation holds:
$$\mathcal{L}_{t_0}\{f(t) * g(t)\}(s) = \mathcal{L}_{t_0}\{f(t)\}(s) \cdot \mathcal{L}_{t_0}\{g(t)\}(s),$$
where $\mathrm{Re}(s) > \breve{\varepsilon}$ and the convolution is given by
$$f(t) * g(t) = \int_{t_0}^{t} f(\zeta)\, g(t + t_0 - \zeta)\, d\zeta. \qquad (2)$$
Lemma 4 ([41,43]). Let $E_{\alpha,\beta}(\kappa(t - t_0)^{\alpha})$ be the Mittag–Leffler function. Then,
$$\mathcal{L}_{t_0}\{(t - t_0)^{\beta-1} E_{\alpha,\beta}(\kappa(t - t_0)^{\alpha})\}(s) = \frac{s^{\alpha-\beta}}{s^{\alpha} - \kappa},$$
where $\mathrm{Re}(s) > |\kappa|^{\frac{1}{\alpha}}$.
Definition 3 ([42]). Let $\alpha \in (0, +\infty)$ and $u \in L^1([t_0, t], \mathbb{R})$. The α-order integral of u is given by
$${}_{t_0}D_t^{-\alpha} u(t) = \int_{t_0}^{t} \frac{(t - \zeta)^{\alpha-1}}{\Gamma(\alpha)}\, u(\zeta)\, d\zeta.$$
Definition 4 ([42]). Let $\alpha \in (0, 1)$ and $v \in C^1([t_0, t], \mathbb{R})$. The α-order Caputo derivative of v is given by
$${}_{t_0}^{c}D_t^{\alpha} v(t) = \int_{t_0}^{t} \frac{(t - \zeta)^{-\alpha}}{\Gamma(1 - \alpha)}\, v'(\zeta)\, d\zeta.$$
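As a quick illustration of Definition 4 (a worked example of ours, not taken from the paper), the Caputo derivative of the identity function with $t_0 = 0$ can be evaluated in closed form, and the Caputo derivative of any constant vanishes:

```latex
% Assumption: t_0 = 0, v(t) = t, 0 < \alpha < 1.
{}_{0}^{c}D_t^{\alpha}\, t
  = \frac{1}{\Gamma(1-\alpha)} \int_{0}^{t} (t-\zeta)^{-\alpha}\, d\zeta
  = \frac{t^{1-\alpha}}{(1-\alpha)\,\Gamma(1-\alpha)}
  = \frac{t^{1-\alpha}}{\Gamma(2-\alpha)},
\qquad
{}_{0}^{c}D_t^{\alpha}\, c = 0 \ \text{for any constant } c.
```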
Lemma 5 ([48]). Let $\alpha \in (0, 1)$ and $w \in C^1([t_0, t], \mathbb{R})$. Then,
$${}_{t_0}D_t^{-\alpha}\, {}_{t_0}^{c}D_t^{\alpha} w(t) = w(t) - w(t_0). \qquad (3)$$
Lemma 6 ([49]). Let $\alpha \in (0, 1)$ and $v \in C^1([t_0, t], \mathbb{R})$. Then,
$${}_{t_0}^{c}D_t^{\alpha} |v(t)| \le \operatorname{sign}(v(t))\, {}_{t_0}^{c}D_t^{\alpha} v(t)$$
holds almost everywhere.
Lemma 7 ([41]). Let $w \in C^1([t_0, t], \mathbb{R})$ be of exponential order $\breve{\varepsilon}$. Then,
$$\mathcal{L}_{t_0}\{{}_{t_0}^{c}D_t^{\alpha} w(t)\}(s) = s^{\alpha} W(s) - s^{\alpha-1} w(t_0),$$
where $\alpha \in (0, 1)$, $\mathrm{Re}(s) > \breve{\varepsilon}$, and $W(s) = \mathcal{L}_{t_0}\{w(t)\}(s)$.
Lemma 8 ([50]). Let $\alpha \in (0, 1)$ and $\omega \in (0, +\infty)$. Assume that $h_1$ and $h_2$ are two non-negative differentiable functions that satisfy
$${}_{t_0}^{c}D_t^{\alpha}\big(h_1(t) + h_2(t)\big) \le -\omega h_1(t), \qquad t \ge t_0. \qquad (4)$$
For an arbitrary positive constant λ, there exists a non-negative constant T satisfying $T \ge \left[\frac{\lambda\, \Gamma(\alpha+1)}{\omega\,(h_1(t_0) + h_2(t_0) + \lambda)}\right]^{\frac{1}{\alpha}}$ such that
$$h_1(t) \le \big(h_1(t_0) + h_2(t_0) + \lambda\big)\, E_{\alpha}(-\omega(t - t_0)^{\alpha}), \qquad t \in [t_0, t_0 + T].$$
Lemma 9 ([41]). Let $\alpha \in (0, 1)$, $\omega \in (0, +\infty)$, and $\kappa \in (-\infty, 0]$. Suppose that two non-negative differentiable functions $h_1$ and $h_2$ satisfy
$${}_{t_0}^{c}D_t^{\alpha}\big(h_1(t) + h_2(t)\big) \le -\omega h_1(t) + \kappa, \qquad t \ge t_0.$$
Then, for an arbitrary positive constant λ, there exists a non-negative constant $\bar{T}$ such that
$$h_1(t) \le \big(h_1(t_0) + h_2(t_0) + \lambda\big)\, E_{\alpha}(-\omega(t - t_0)^{\alpha}) + \frac{\kappa}{\omega}\left[1 - E_{\alpha}(-\omega(t - t_0)^{\alpha})\right]$$
for $t \in [t_0, \bar{T}]$, where $\bar{T}$ is the solution of the equation
$$E_{\alpha}(-\omega(t - t_0)^{\alpha}) - \frac{h_1(t_0) + h_2(t_0)}{h_1(t_0) + h_2(t_0) + \lambda} = 0.$$
Lemma 10 ([13]). Let $\alpha \in (0, 1)$, $\omega \in (0, +\infty)$, and $\kappa \in (0, +\infty)$. Suppose that $h_1$ and $h_2$ are two non-negative differentiable functions satisfying
$${}_{t_0}^{c}D_t^{\alpha}\big(h_1(t) + h_2(t)\big) \le -\omega h_1(t) + \kappa, \qquad t \ge t_0.$$
Then,
$$h_1(t) \le \left[h_1(t_0) + h_2(t_0) - \frac{\kappa}{\omega}\right] E_{\alpha}(-\omega(t - t_0)^{\alpha}) + \frac{\kappa}{\omega}, \qquad t \ge t_0 + \left[\frac{\Gamma(\alpha)}{\omega}\right]^{\frac{1}{1-\alpha}}.$$
Lemma 11 ([51]). Let $\alpha \in (0, 1]$, $h \in C^1([t_0, t], \mathbb{R})$, $g \in C([t_0, t], \mathbb{R})$, and let γ be a fixed constant. If
$${}_{t_0}^{c}D_t^{\alpha} h(t) \le \gamma h(t) + g(t), \qquad t \ge t_0, \qquad (8)$$
then for any $t \ge t_0$,
$$h(t) \le h(t_0)\, E_{\alpha}(\gamma(t - t_0)^{\alpha}) + \int_{t_0}^{t} (t - s)^{\alpha-1} E_{\alpha,\alpha}(\gamma(t - s)^{\alpha})\, g(s)\, ds.$$
If we let $g(t) \equiv c \in \mathbb{R}$, then
$$h(t) \le h(t_0)\, E_{\alpha}(\gamma(t - t_0)^{\alpha}) - \frac{c}{\gamma}\left[1 - E_{\alpha}(\gamma(t - t_0)^{\alpha})\right], \qquad t \ge t_0.$$

2.2. Model Description

We revisit the fractional-order delayed neural network presented in [13], which is commonly referred to as the driving system:
$$\begin{cases} {}_{t_0}^{c}D_t^{\alpha} q_i(t) = -c_i q_i(t) + \sum_{j=1}^{m} w_{ij} f_j(q_j(t)) + \sum_{j=1}^{m} w_{ij}^{\iota} f_j(q_j(t - \iota)) + I_i(t), & t \ge t_0,\\ q_i(s) = \varphi_i(s), \quad i \in \mathbb{N}_1^m, & s \in [t_0 - \iota, t_0], \end{cases} \qquad (10)$$
where $0 < \alpha < 1$, $c_i \in \mathbb{R}$, $\iota > 0$ is a constant delay, $q_i(t) \in \mathbb{R}$ is the state variable of the $i$th neuron, $I_i(t)$ is the external input, and $w_{ij}, w_{ij}^{\iota} \in \mathbb{R}$ represent the connection weight and the delayed connection weight, respectively; $f_j(q_j(t))$ and $f_j(q_j(t - \iota))$ are the activation functions without delay and with delay, respectively.
The corresponding response system is defined as
$$\begin{cases} {}_{t_0}^{c}D_t^{\alpha} p_i(t) = -c_i p_i(t) + \sum_{j=1}^{m} w_{ij} f_j(p_j(t)) + \sum_{j=1}^{m} w_{ij}^{\iota} f_j(p_j(t - \iota)) + I_i(t) + u_i(t), & t \ge t_0,\\ p_i(s) = \phi_i(s), \quad i \in \mathbb{N}_1^m, & s \in [t_0 - \iota, t_0], \end{cases} \qquad (11)$$
where $p_i(t) \in \mathbb{R}$ is the state variable of the $i$th neuron of (11), and $u_i(t)$ is the control input.
Let $r_i(t) = p_i(t) - q_i(t)$ and $r(t) = (r_1(t), r_2(t), \ldots, r_m(t))^T$. Then, we obtain the following error system:
$${}_{t_0}^{c}D_t^{\alpha} r_i(t) = -c_i r_i(t) + \sum_{j=1}^{m} w_{ij}\big[f_j(p_j(t)) - f_j(q_j(t))\big] + \sum_{j=1}^{m} w_{ij}^{\iota}\big[f_j(p_j(t - \iota)) - f_j(q_j(t - \iota))\big] + u_i(t), \quad t \ge t_0, \qquad (12)$$
where the adaptive controller is given by
$$u_i(t) = \begin{cases} -\sigma_i(t) r_i(t) - \xi\, \dfrac{r_i(t)}{|r_i(t)|}\, |r_i(t - \iota)| - \eta\, \dfrac{r_i(t)}{|r_i(t)|}, & |r_i(t)| \ne 0,\\[4pt] 0, & |r_i(t)| = 0, \end{cases} \qquad {}_{t_0}^{c}D_t^{\alpha} \sigma_i(t) = \rho_i\, |r_i(t)|, \quad i \in \mathbb{N}_1^m, \qquad (13)$$
where ξ, η, and $\rho_i$ are tunable positive constants, and $\sigma_i(t)$ is the time-varying feedback strength.
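For readers who prefer code to piecewise formulas, here is a minimal Python sketch of the control law (13) and of the right-hand side of the fractional adaptive law, applied componentwise to the error vector. The function names and the NumPy vectorization are our own choices, not part of the paper, and the fractional dynamics of $\sigma_i(t)$ still have to be integrated by a fractional solver such as the one described in Section 4.

```python
import numpy as np

def adaptive_control(r, r_delay, sigma, xi, eta):
    """Control input u(t) of (13): u_i = -sigma_i*r_i - xi*(r_i/|r_i|)*|r_i(t-iota)|
    - eta*(r_i/|r_i|) when r_i != 0, and u_i = 0 when r_i = 0."""
    u = np.zeros_like(r, dtype=float)
    nz = np.abs(r) > 0                 # components with r_i(t) != 0
    sgn = np.sign(r[nz])               # equals r_i/|r_i| on those components
    u[nz] = -sigma[nz] * r[nz] - xi * sgn * np.abs(r_delay[nz]) - eta * sgn
    return u

def sigma_rhs(r, rho):
    """Right-hand side of the fractional adaptive law D^alpha sigma_i = rho_i*|r_i|."""
    return rho * np.abs(r)
```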
Definition 5 ([36]). System (10) is quasi-synchronized with system (11) if there exist a small error bound $\epsilon > 0$ and a compact set $M = \{r(t) \in \mathbb{R}^m \mid \|r(t)\| \le \epsilon\}$ such that the error signal $r(t)$ converges into M as $t \to \infty$.
Assumption 1 ([41]). For any $r_1, r_2 \in \mathbb{R}$ and $i \in \mathbb{N}_1^m$, there exists a positive real number $l_i$ satisfying
$$|f_i(r_1) - f_i(r_2)| \le l_i\, |r_1 - r_2|.$$

3. Main Results

In [41], for the inequality ${}_{t_0}^{c}D_t^{\alpha}(h_1(t) + h_2(t)) \le -\omega h_1(t) + \kappa$, where $\alpha \in (0, 1)$, $\omega \in (0, +\infty)$, and $\kappa \in (-\infty, 0]$, the finite-time dynamic behaviors of $h_1(t)$ were established to study the finite-time synchronization of fractional-order delayed systems. Naturally, one may consider whether this inequality can also be used to investigate the case of infinite time. The purpose of the following Theorem 1 is to address this problem.
Theorem 1. Let $\alpha \in (0, 1)$, $\omega \in (0, +\infty)$, and $\kappa \in (-\infty, 0]$. Suppose that $h_1$ and $h_2$ are two non-negative differentiable functions satisfying $h_1(t_0) + h_2(t_0) + \frac{\kappa}{\omega} > 0$ and
$${}_{t_0}^{c}D_t^{\alpha}\big(h_1(t) + h_2(t)\big) \le -\omega h_1(t) + \kappa, \qquad t \ge t_0. \qquad (15)$$
Then, we obtain
$$h_1(t) \le h_1(t_0) + h_2(t_0) + \frac{\kappa}{\omega}\left[1 - E_{\alpha}(-\omega(t - t_0)^{\alpha})\right], \qquad t \ge t_0. \qquad (16)$$
Proof. By (15), there exists a non-negative function $h(t)$ satisfying
$${}_{t_0}^{c}D_t^{\alpha}\big(h_1(t) + h_2(t)\big) + h(t) = -\omega h_1(t) + \kappa.$$
Then, the following equation holds:
$$\mathcal{L}_{t_0}\{{}_{t_0}^{c}D_t^{\alpha}(h_1(t) + h_2(t))\}(s) + \mathcal{L}_{t_0}\{h(t)\}(s) = -\omega\, \mathcal{L}_{t_0}\{h_1(t)\}(s) + \frac{\kappa}{s}.$$
Using Lemma 7, we obtain
$$s^{\alpha}\big[H_1(s) + H_2(s)\big] - s^{\alpha-1}\big[h_1(t_0) + h_2(t_0)\big] + H(s) = -\omega H_1(s) + \frac{\kappa}{s},$$
where $H_1(s) = \mathcal{L}_{t_0}\{h_1(t)\}(s)$, $H_2(s) = \mathcal{L}_{t_0}\{h_2(t)\}(s)$, and $H(s) = \mathcal{L}_{t_0}\{h(t)\}(s)$. Thus,
$$H_1(s) = \frac{s^{\alpha-1}}{s^{\alpha} + \omega}\big[h_1(t_0) + h_2(t_0)\big] - \left[1 - \frac{\omega}{s^{\alpha} + \omega}\right] H_2(s) - \frac{H(s)}{s^{\alpha} + \omega} + \frac{s^{-1}}{s^{\alpha} + \omega}\, \kappa. \qquad (19)$$
By Equation (19) and Lemma 3, we have
$$\begin{aligned}
h_1(t) = \mathcal{L}_{t_0}^{-1}\{H_1(s)\}
&= \big[h_1(t_0) + h_2(t_0)\big]\, \mathcal{L}_{t_0}^{-1}\left\{\frac{s^{\alpha-1}}{s^{\alpha} + \omega}\right\} - \mathcal{L}_{t_0}^{-1}\{H_2(s)\} + \mathcal{L}_{t_0}^{-1}\left\{\frac{\omega}{s^{\alpha} + \omega}\right\} * \mathcal{L}_{t_0}^{-1}\{H_2(s)\} \\
&\quad - \mathcal{L}_{t_0}^{-1}\{H(s)\} * \mathcal{L}_{t_0}^{-1}\left\{\frac{1}{s^{\alpha} + \omega}\right\} + \kappa\, \mathcal{L}_{t_0}^{-1}\left\{\frac{s^{-1}}{s^{\alpha} + \omega}\right\} \\
&\overset{\text{Lem. 4}}{=} \big[h_1(t_0) + h_2(t_0)\big] E_{\alpha}(-\omega(t - t_0)^{\alpha}) - h_2(t) + \omega(t - t_0)^{\alpha-1} E_{\alpha,\alpha}(-\omega(t - t_0)^{\alpha}) * h_2(t) \\
&\quad - h(t) * (t - t_0)^{\alpha-1} E_{\alpha,\alpha}(-\omega(t - t_0)^{\alpha}) + \kappa\, (t - t_0)^{\alpha} E_{\alpha,\alpha+1}(-\omega(t - t_0)^{\alpha}) \\
&\overset{\text{Lem. 2}}{=} \big[h_1(t_0) + h_2(t_0)\big] E_{\alpha}(-\omega(t - t_0)^{\alpha}) - h_2(t) + v_1(t) - v_2(t) + \frac{\kappa}{\omega}\left[1 - E_{\alpha}(-\omega(t - t_0)^{\alpha})\right],
\end{aligned} \qquad (20)$$
where $v_1(t)$ and $v_2(t)$ denote the two convolution terms and are estimated below.
Applying (3) to (15), and noting that the right-hand side of (15) is non-positive because $\kappa \le 0$ and $h_1(t) \ge 0$, we obtain
$${}_{t_0}D_t^{-\alpha}\, {}_{t_0}^{c}D_t^{\alpha}\big(h_1(t) + h_2(t)\big) = \big(h_1(t) + h_2(t)\big) - \big(h_1(t_0) + h_2(t_0)\big) \le 0.$$
Also, by the non-negativity of the functions $h_1$ and $h_2$, the following inequalities hold:
$$0 \le h_2(t) \le h_1(t) + h_2(t) \le h_1(t_0) + h_2(t_0). \qquad (21)$$
Based on the convolution given in (2), we obtain
$$\begin{aligned}
v_1(t) &:= \omega(t - t_0)^{\alpha-1} E_{\alpha,\alpha}(-\omega(t - t_0)^{\alpha}) * h_2(t) = \int_{t_0}^{t} \omega(t - \zeta)^{\alpha-1} E_{\alpha,\alpha}(-\omega(t - \zeta)^{\alpha})\, h_2(\zeta)\, d\zeta \\
&\overset{(21)}{\le} \big[h_1(t_0) + h_2(t_0)\big] \int_{t_0}^{t} \omega(t - \zeta)^{\alpha-1} E_{\alpha,\alpha}(-\omega(t - \zeta)^{\alpha})\, d\zeta \overset{(1)}{=} \big[h_1(t_0) + h_2(t_0)\big]\left[1 - E_{\alpha}(-\omega(t - t_0)^{\alpha})\right].
\end{aligned} \qquad (22)$$
By $h(t) \ge 0$ and $E_{\alpha,\alpha}(-\omega(t - \zeta)^{\alpha}) > 0$ given in Lemma 1(iii),
$$v_2(t) := h(t) * (t - t_0)^{\alpha-1} E_{\alpha,\alpha}(-\omega(t - t_0)^{\alpha}) = \int_{t_0}^{t} h(\zeta)\, (t - \zeta)^{\alpha-1} E_{\alpha,\alpha}(-\omega(t - \zeta)^{\alpha})\, d\zeta \ge 0. \qquad (23)$$
Furthermore, by (20), (22), (23), and the non-negativity of $h_2(t)$, we have
$$\begin{aligned}
h_1(t) &\le \big[h_1(t_0) + h_2(t_0)\big] E_{\alpha}(-\omega(t - t_0)^{\alpha}) + \big[h_1(t_0) + h_2(t_0)\big]\left[1 - E_{\alpha}(-\omega(t - t_0)^{\alpha})\right] + \frac{\kappa}{\omega}\left[1 - E_{\alpha}(-\omega(t - t_0)^{\alpha})\right] \\
&= h_1(t_0) + h_2(t_0) + \frac{\kappa}{\omega}\left[1 - E_{\alpha}(-\omega(t - t_0)^{\alpha})\right], \qquad t \ge t_0. \qquad \square
\end{aligned}$$
Remark 1. Theorem 1 will play a pivotal role in deriving quasi-synchronization criteria, specifically in the context of various types of fractional-order complex networks or neural networks. It serves as a fundamental tool that enables the analysis of quasi-synchronization phenomena within these interconnected systems.
Remark 2. The inequality ${}_{t_0}^{c}D_t^{\alpha}(h_1(t) + h_2(t)) \le -\omega h_1(t) + \kappa$ is employed in both Lemma 10 and Theorem 1 to explore the asymptotic synchronization of fractional-order neural networks. In Lemma 10, the sign of ${}_{t_0}^{c}D_t^{\alpha}(h_1(t) + h_2(t))$ is indefinite, whereas in Theorem 1, the sign is non-positive.
Remark 3. Let $\hat{h}_1(t) := h_1(t_0) + h_2(t_0) + \frac{\kappa}{\omega}\left[1 - E_{\alpha}(-\omega(t - t_0)^{\alpha})\right]$ for $t \ge t_0$. Then, the inequality (16) in Theorem 1 becomes $h_1(t) \le \hat{h}_1(t)$. By Lemma 1, $\hat{h}_1(t)$ is monotonically non-increasing for $t \ge t_0$, $\hat{h}_1(t_0) = h_1(t_0) + h_2(t_0) > 0$, and $\lim_{t \to \infty} \hat{h}_1(t) = h_1(t_0) + h_2(t_0) + \frac{\kappa}{\omega} > 0$. Thus, we obtain $\hat{h}_1(t) > 0$ for $t \ge t_0$, which guarantees that the obtained inequality (16) makes sense.
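As a concrete instance of this bound (using, purely for illustration, the numbers that appear later in Example 1: $h_1(t_0) + h_2(t_0) = 1.565$, $\kappa = -0.15$, $\omega = 0.1$), $\hat{h}_1(t)$ starts at the initial value and decreases monotonically to the quasi-synchronization level:

```latex
\hat{h}_1(t_0) = 1.565 + \frac{-0.15}{0.1}\bigl[1 - E_{\alpha}(0)\bigr] = 1.565,
\qquad
\lim_{t \to \infty} \hat{h}_1(t) = 1.565 + \frac{-0.15}{0.1} = 0.065 .
```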
Remark 4. Even though Theorem 1 and Lemma 9 utilize the same Caputo-derivative inequality, they provide the function estimates on an infinite interval and a finite interval, respectively. In addition, the inverse Laplace techniques employed for the term $\left[1 - \frac{\omega}{s^{\alpha} + \omega}\right] H_2(s)$ in Theorem 1 and in Lemma 9 are different. Notably, the Dirac delta function $\delta(\cdot)$ is not used in Theorem 1, resulting in a simplification of the proof process. Finally, the estimate inequality (16) in Theorem 1 can be used to study the quasi-synchronization of fractional-order systems in Theorem 2.
Remark 5. When $\kappa = 0$ in Theorem 1, the obtained inequality (16) reduces to
$$h_1(t) \le h_1(t_0) + h_2(t_0), \qquad t \ge t_0,$$
which can also be obtained from (21). In this case, the inequality (15) reduces to the inequality (4) in Lemma 8.
In addition, as another special case of Theorem 1, let $h_2(t) \equiv 0$; then we have the following corollary.
Corollary 1. Let $\alpha \in (0, 1)$, $\omega \in (0, +\infty)$, and $\kappa \in (-\infty, 0]$. Assume that $h_1$ is a non-negative differentiable function satisfying $h_1(t_0) + \frac{\kappa}{\omega} > 0$ and
$${}_{t_0}^{c}D_t^{\alpha} h_1(t) \le -\omega h_1(t) + \kappa, \qquad t \ge t_0. \qquad (25)$$
Then,
$$h_1(t) \le h_1(t_0) + \frac{\kappa}{\omega}\left[1 - E_{\alpha}(-\omega(t - t_0)^{\alpha})\right], \qquad t \ge t_0. \qquad (26)$$
Remark 6. The inequality (25) in Corollary 1 can be obtained from the differential inequality (8) in Lemma 11 by setting $h(t) := h_1(t)$, $\gamma := -\omega$, and $g(t) := \kappa$. Then,
$$h_1(t) \overset{(9)}{\le} h_1(t_0)\, E_{\alpha}(-\omega(t - t_0)^{\alpha}) + \frac{\kappa}{\omega}\left[1 - E_{\alpha}(-\omega(t - t_0)^{\alpha})\right] \overset{\text{Lem. 1(i)}}{\le} h_1(t_0) + \frac{\kappa}{\omega}\left[1 - E_{\alpha}(-\omega(t - t_0)^{\alpha})\right], \qquad t \ge t_0. \qquad (27)$$
Consequently, the inequality (27) is consistent with (26).
Theorem 2. Suppose that Assumption 1 holds and the adaptive controller (13) is applied. Then, system (10) is quasi-synchronized with system (11) if
$$W_1(t_0) + W_2(t_0) + \frac{\kappa}{\omega} > 0, \qquad (28)$$
where $W_1(t) = \sum_{i=1}^{m} |r_i(t)|$, $W_2(t) = \sum_{i=1}^{m} \frac{(\sigma_i(t) - \sigma)^2}{2\rho_i}$, $\kappa = -m\eta$,
$$\omega = \min_{1 \le i \le m}\left\{c_i - \sum_{j=1}^{m} |w_{ji}|\, l_i + \sigma\right\}, \qquad (29)$$
and σ, ξ are two real numbers satisfying
$$\sigma > \max_{1 \le i \le m}\left\{-c_i + \sum_{j=1}^{m} |w_{ji}|\, l_i\right\}, \qquad (30)$$
$$\xi > \max_{1 \le i \le m}\left\{\sum_{j=1}^{m} |w_{ji}^{\iota}|\, l_i\right\}. \qquad (31)$$
Proof. We construct a Lyapunov function $W(t) := W_1(t) + W_2(t)$. Then,
$$\begin{aligned}
{}_{t_0}^{c}D_t^{\alpha} W(t) &= {}_{t_0}^{c}D_t^{\alpha}\left[\sum_{i=1}^{m} |r_i(t)| + \sum_{i=1}^{m} \frac{(\sigma_i(t) - \sigma)^2}{2\rho_i}\right] \\
&\overset{\text{Lem. 6}}{\le} \sum_{i=1}^{m} \operatorname{sign}(r_i(t))\, {}_{t_0}^{c}D_t^{\alpha} r_i(t) + \sum_{i=1}^{m} \frac{\sigma_i(t) - \sigma}{\rho_i}\, {}_{t_0}^{c}D_t^{\alpha} \sigma_i(t) \\
&\overset{(12),(13)}{=} \sum_{i=1}^{m} \operatorname{sign}(r_i(t)) \Big\{-c_i r_i(t) + \sum_{j=1}^{m} w_{ij}\big[f_j(p_j(t)) - f_j(q_j(t))\big] + \sum_{j=1}^{m} w_{ij}^{\iota}\big[f_j(p_j(t - \iota)) - f_j(q_j(t - \iota))\big] \\
&\qquad\quad - \sigma_i(t) r_i(t) - \xi\, \frac{r_i(t)}{|r_i(t)|}\, |r_i(t - \iota)| - \eta\, \frac{r_i(t)}{|r_i(t)|}\Big\} + \sum_{i=1}^{m} (\sigma_i(t) - \sigma)\, |r_i(t)| \\
&\le \sum_{i=1}^{m} \Big\{-c_i |r_i(t)| + \sum_{j=1}^{m} |w_{ij}|\, l_j\, |r_j(t)| + \sum_{j=1}^{m} |w_{ij}^{\iota}|\, l_j\, |r_j(t - \iota)| - \sigma_i(t) |r_i(t)| - \xi\, |r_i(t - \iota)| - \eta\Big\} + \sum_{i=1}^{m} (\sigma_i(t) - \sigma)\, |r_i(t)| \\
&= \sum_{i=1}^{m} \Big(-c_i + \sum_{j=1}^{m} |w_{ji}|\, l_i - \sigma\Big) |r_i(t)| - m\eta + \sum_{i=1}^{m} \Big(-\xi + \sum_{j=1}^{m} |w_{ji}^{\iota}|\, l_i\Big) |r_i(t - \iota)|.
\end{aligned}$$
Also, by (29)–(31) and $\kappa = -m\eta$, we have
$${}_{t_0}^{c}D_t^{\alpha} W(t) = {}_{t_0}^{c}D_t^{\alpha}\big(W_1(t) + W_2(t)\big) \le -\omega W_1(t) + \kappa.$$
Furthermore, applying Theorem 1,
$$\|r(t)\| = W_1(t) \overset{(16)}{\le} W_1(t_0) + W_2(t_0) + \frac{\kappa}{\omega}\left[1 - E_{\alpha}(-\omega(t - t_0)^{\alpha})\right].$$
Then, by (28) and $\lim_{t \to \infty} E_{\alpha}(-\omega(t - t_0)^{\alpha}) = 0$, there exists a small error bound
$$\epsilon := W_1(t_0) + W_2(t_0) + \frac{\kappa}{\omega} > 0$$
satisfying
$$\|r(t)\| \le \epsilon \quad \text{as } t \to \infty,$$
which shows that system (10) is quasi-synchronized with system (11) with error bound $W_1(t_0) + W_2(t_0) + \frac{\kappa}{\omega}$. □
Remark 7. Quasi-synchronization techniques, as utilized in Theorem 2, find applications in various domains. For instance, they can be employed in traffic networks to optimize traffic flow and alleviate congestion. By achieving partial synchronization among traffic signals or regulating the behavior of individual vehicles, overall traffic efficiency can be enhanced. Furthermore, these techniques can also be utilized in financial networks to analyze and forecast market behaviors. By examining synchronized patterns or deviations in the interconnections between financial entities, it becomes feasible to identify systemic risks and make well-informed investment choices.

4. Connections between the Mathematical Treatment and the Numerical Simulation

In this section, we present a modified version of the Adams–Bashforth–Moulton algorithm [52] specifically designed to solve fractional-order differential equations with time delay. This modification serves as a theoretical basis for the subsequent numerical simulation in Section 5.
Consider the following system:
$$\begin{cases} {}_{t_0}^{c}D_t^{\alpha} h(t) = \Psi(t, h(t), h(t - \iota)), & t \in [t_0, t_0 + S],\ \alpha \in (0, 1),\\ h(t) = z(t), & t \in [t_0 - \iota, t_0]. \end{cases} \qquad (32)$$
Fix a uniform grid as follows:
$$t_0 - \tilde{N} d,\ t_0 - (\tilde{N} - 1) d,\ \ldots,\ t_0 - d,\ t_0,\ t_0 + d,\ \ldots,\ t_0 + \hat{N} d,$$
where $\tilde{N}$ is a fixed integer, $d = \frac{\iota}{\tilde{N}}$, and $\hat{N} = \left[\frac{S}{d}\right]$ is also an integer. For each integer k satisfying $-\tilde{N} \le k \le \hat{N}$, let $t_k = t_0 + k d$. Then, for $-\tilde{N} \le k \le 0$, $h_d(t_k)$ is the approximation to $z(t_k)$. In addition, $h_d(t_k - \iota) = h_d(t_0 + k d - \tilde{N} d) = h_d(t_{k - \tilde{N}})$ holds for $0 \le k \le \hat{N}$.
Suppose that the approximations $h_d(t_j) \approx h(t_j)$ have already been obtained for $-\tilde{N} \le j \le k$. Then, by Definition 3, (3), and (32), we have
$$h(t_{k+1}) = z(t_0) + \frac{1}{\Gamma(\alpha)} \int_{t_0}^{t_{k+1}} (t_{k+1} - \zeta)^{\alpha-1}\, \Psi(\zeta, h(\zeta), h(\zeta - \iota))\, d\zeta.$$
Furthermore, we utilize the product trapezoidal quadrature method and obtain the corrector formula as follows:
$$\begin{aligned}
h_d(t_{k+1}) &= z(t_0) + \frac{d^{\alpha}}{\Gamma(\alpha + 2)}\, \Psi(t_{k+1}, h_d(t_{k+1}), h_d(t_{k+1} - \iota)) + \frac{d^{\alpha}}{\Gamma(\alpha + 2)} \sum_{s=0}^{k} e_{s,k+1}\, \Psi(t_s, h_d(t_s), h_d(t_s - \iota)) \\
&= z(t_0) + \frac{d^{\alpha}}{\Gamma(\alpha + 2)}\, \Psi(t_{k+1}, h_d(t_{k+1}), h_d(t_{k+1-\tilde{N}})) + \frac{d^{\alpha}}{\Gamma(\alpha + 2)} \sum_{s=0}^{k} e_{s,k+1}\, \Psi(t_s, h_d(t_s), h_d(t_{s-\tilde{N}})),
\end{aligned} \qquad (33)$$
where
$$e_{s,k+1} = \begin{cases} k^{\alpha+1} - (k - \alpha)(k + 1)^{\alpha}, & \text{if } s = 0,\\ (k - s + 2)^{\alpha+1} + (k - s)^{\alpha+1} - 2(k - s + 1)^{\alpha+1}, & \text{if } 1 \le s \le k,\\ 1, & \text{if } s = k + 1. \end{cases} \qquad (34)$$
Because Equation (33) contains the unknown term $h_d(t_{k+1})$ on both sides and involves the nonlinear function Ψ, it is not possible to find an explicit solution for $h_d(t_{k+1})$. To address this issue, we introduce a preliminary approximation called a predictor, denoted by $h_d^{p}(t_{k+1})$. We then modify (33) by substituting $h_d^{p}(t_{k+1})$ for $h_d(t_{k+1})$ on the right-hand side, resulting in the following redefined equation:
$$h_d(t_{k+1}) = z(t_0) + \frac{d^{\alpha}}{\Gamma(\alpha + 2)}\, \Psi(t_{k+1}, h_d^{p}(t_{k+1}), h_d(t_{k+1-\tilde{N}})) + \frac{d^{\alpha}}{\Gamma(\alpha + 2)} \sum_{s=0}^{k} e_{s,k+1}\, \Psi(t_s, h_d(t_s), h_d(t_{s-\tilde{N}})).$$
In addition, in order to compute the predictor term, we apply the product rectangle rule instead of the trapezoidal weights in (34). Then, we have
$$h_d^{p}(t_{k+1}) = z(t_0) + \frac{1}{\Gamma(\alpha)} \sum_{s=0}^{k} f_{s,k+1}\, \Psi(t_s, h_d(t_s), h_d(t_s - \iota)) = z(t_0) + \frac{1}{\Gamma(\alpha)} \sum_{s=0}^{k} f_{s,k+1}\, \Psi(t_s, h_d(t_s), h_d(t_{s-\tilde{N}})),$$
where $f_{s,k+1} = \frac{d^{\alpha}}{\alpha}\left[(k + 1 - s)^{\alpha} - (k - s)^{\alpha}\right]$.
In this method, the error can be expressed as
$$\max_{-\tilde{N} \le k \le \hat{N}} |h(t_k) - h_d(t_k)| = O(d^{p}),$$
where $p = \min\{2, 1 + \alpha\}$.
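To connect these formulas with the simulation in Section 5, the following is a self-contained Python sketch of the predictor–corrector scheme for a general delayed system of the form (32). The function signature, the array layout, and the caching of Ψ-evaluations are our own implementation choices, not taken from the paper, so this should be read as an illustrative sketch rather than the authors' code.

```python
import numpy as np
from math import gamma

def fde_delay_abm(psi, z, alpha, iota, S, t0=0.0, n_tilde=100):
    """Predictor-corrector (Adams-Bashforth-Moulton) scheme for the delayed
    Caputo system  D^alpha h(t) = Psi(t, h(t), h(t - iota))  with history
    h(t) = z(t) on [t0 - iota, t0], following the formulas of Section 4.

    psi     : callable psi(t, h, h_delay) -> ndarray of shape (dim,)
    z       : callable z(t) -> ndarray of shape (dim,), the initial history
    alpha   : fractional order in (0, 1)
    iota    : constant delay > 0
    S       : length of the integration interval [t0, t0 + S]
    n_tilde : grid points per delay interval, so the step is d = iota/n_tilde
    """
    d = iota / n_tilde
    n_hat = int(S / d)
    t = t0 + d * np.arange(-n_tilde, n_hat + 1)      # grid t_{-n_tilde}, ..., t_{n_hat}
    dim = np.atleast_1d(z(t0)).size
    h = np.zeros((n_tilde + n_hat + 1, dim))         # h[k + n_tilde] stores h_d(t_k)
    for k in range(-n_tilde, 1):                     # fill the history on [t0 - iota, t0]
        h[k + n_tilde] = z(t0 + k * d)

    h0 = h[n_tilde].copy()                           # z(t0)
    c1 = d**alpha / gamma(alpha + 2)                 # corrector prefactor
    psi_hist = np.zeros_like(h)                      # cache of Psi(t_s, h_d(t_s), h_d(t_{s-n_tilde}))

    for k in range(0, n_hat):                        # advance from t_k to t_{k+1}
        psi_hist[k + n_tilde] = psi(t[k + n_tilde], h[k + n_tilde], h[k])
        s = np.arange(0, k + 1)

        # predictor weights f_{s,k+1} = d^alpha/alpha * [(k+1-s)^alpha - (k-s)^alpha]
        f_w = d**alpha / alpha * ((k + 1 - s)**alpha - (k - s)**alpha)
        h_pred = h0 + (f_w[:, None] * psi_hist[n_tilde:n_tilde + k + 1]).sum(axis=0) / gamma(alpha)

        # corrector weights e_{s,k+1}; the s = k+1 weight (equal to 1) is the separate Psi term below
        e_w = (k - s + 2.0)**(alpha + 1) + (k - s)**(alpha + 1) - 2.0 * (k - s + 1.0)**(alpha + 1)
        e_w[0] = k**(alpha + 1) - (k - alpha) * (k + 1)**alpha
        h[k + 1 + n_tilde] = (h0
                              + c1 * psi(t[k + 1 + n_tilde], h_pred, h[k + 1])
                              + c1 * (e_w[:, None] * psi_hist[n_tilde:n_tilde + k + 1]).sum(axis=0))
    return t, h
```

For instance, passing the right-hand side of the drive system (10) as psi and the history φ as z would reproduce, up to these implementation choices, the kind of state trajectories shown in Section 5.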

5. Numerical Simulation

To demonstrate the practical applicability of the key results, we provide a numerical illustration as follows.
Example 1. Suppose that the parameters in the systems (10) and (11) are given as follows: $\alpha = 0.8$, $f_j(x) = \tanh(x)$, $c_1 = 0.3$, $c_2 = 0.2$, $\iota = 0.3$,
$$W = (w_{ij})_{2 \times 2} = \begin{pmatrix} 1.4 & 0.3 \\ 0.9 & 1.5 \end{pmatrix}, \quad W^{\iota} = (w_{ij}^{\iota})_{2 \times 2} = \begin{pmatrix} 2.6 & 0.2 \\ 0.4 & 1.6 \end{pmatrix}, \quad I = \begin{pmatrix} 0 \\ 0 \end{pmatrix}, \quad \varphi(t) = \begin{pmatrix} 0.4 \\ 0.3 \end{pmatrix}, \quad \phi(t) = \begin{pmatrix} 0.2 \\ 0.4 \end{pmatrix}, \quad \sigma(t_0) = \begin{pmatrix} 1.9 \\ 2.8 \end{pmatrix}.$$
It is evident that Assumption 1 holds with $l_1 = l_2 = 1$. Then, we let $t_0 = 0$, $\sigma = 2.1$, $\xi = 3.1$, $\eta = 0.075$, and $\rho_i = 1$, $i = 1, 2$, to demonstrate the correctness of Theorem 2. By performing straightforward calculations, we obtain $\omega = 0.1$, $\kappa = -0.15$, $W_1(t_0) = 1.3$, $W_2(t_0) = 0.2650$, and $\epsilon = W_1(t_0) + W_2(t_0) + \frac{\kappa}{\omega} = 0.065$.
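As a cross-check of these values (a short script of ours, not part of the paper), the quantities in Theorem 2 can be recomputed directly from the listed parameters; $W_1(t_0) = 1.3$ is taken from the example text itself, since recomputing it would require the signs of the initial values, which the criterion does not otherwise use.

```python
import numpy as np

# Parameters of Example 1; only the magnitudes of the weights enter Theorem 2.
c     = np.array([0.3, 0.2])
absW  = np.abs(np.array([[1.4, 0.3], [0.9, 1.5]]))      # |w_ij|
absWi = np.abs(np.array([[2.6, 0.2], [0.4, 1.6]]))      # |w_ij^iota|
l     = np.array([1.0, 1.0])
rho   = np.array([1.0, 1.0])
sigma, xi, eta, m = 2.1, 3.1, 0.075, 2

col_w  = l * absW.sum(axis=0)      # l_i * sum_j |w_ji|       -> [2.3, 1.8]
col_wi = l * absWi.sum(axis=0)     # l_i * sum_j |w_ji^iota|  -> [3.0, 1.8]

print(sigma > np.max(-c + col_w))  # condition on sigma in Theorem 2: 2.1 > 2.0 -> True
print(xi > np.max(col_wi))         # condition on xi in Theorem 2:    3.1 > 3.0 -> True

omega = np.min(c - col_w + sigma)                    # = 0.1
kappa = -m * eta                                     # = -0.15
W1_0  = 1.3                                          # taken from the example text
W2_0  = np.sum((np.array([1.9, 2.8]) - sigma) ** 2 / (2 * rho))   # = 0.265
eps   = W1_0 + W2_0 + kappa / omega                  # = 0.065
print(omega, kappa, W2_0, eps)
```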
The time-varying feedback strengths $\sigma_i(t)$ are depicted in Figure 1. The evolution of the synchronization error is presented in Figure 2. Furthermore, Figure 3 displays the magnitude of the synchronization error, which serves as further validation of the effectiveness of Theorem 2.

6. Conclusions

The quasi-synchronization for a class of fractional-order delayed neural networks has been studied in this paper. To obtain the quasi-synchronization criterion, the properties of the Laplace transform, the Caputo derivative, and the Mittag–Leffler function have been employed. In addition, a new fractional-order differential inequality has been constructed. Finally, a numerical example has been provided to demonstrate the validity of the proposed results. It is worth noting that when simulating continuous-time neural networks on a computer, it is necessary to discretize them to generate corresponding discrete-time networks. However, this discretization process may not fully preserve the dynamics exhibited by the original continuous networks. Therefore, future research will focus on discrete-time fractional order neural networks based on q-exponential and q-calculus.

Author Contributions

Conceptualization, S.Z., F.D. and D.C.; methodology, S.Z. and F.D.; project administration, S.Z., F.D. and D.C.; validation, S.Z. and F.D.; formal analysis, S.Z. and F.D.; writing—original draft, S.Z. and F.D.; writing—review and editing, S.Z., F.D. and D.C.; funding acquisition, S.Z. and F.D. All authors have read and agreed to the published version of the manuscript.

Funding

This study is supported by the Natural Science Basic Research Program in Shaanxi Province of China (2022JQ-022), Shaanxi Fundamental Science Research Project for Mathematics and Physics (22JSQ007), and Chinese Universities Scientific Fund (2452022176).

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Acknowledgments

The authors are grateful to the anonymous referees for their useful suggestions and comments.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Westerlund, S. Dead matter has memory! Phys. Scr. 1991, 43, 174–179. [Google Scholar] [CrossRef]
  2. Westerlund, S.; Ekstam, L. Capacitor theory. IEEE Trans. Dielectr. 1994, 1, 826–839. [Google Scholar] [CrossRef]
  3. Fan, Y.; Gao, J.H. Fractional motion model for characterization of anomalous diffusion from NMR signals. Phys. Rev. E 2015, 92, 012707. [Google Scholar] [CrossRef]
  4. Metzler, R.; Klafter, J. The random walks guide to anomalous diffusion: A fractional dynamics approach. Phys. Rep. 2000, 339, 1–77. [Google Scholar] [CrossRef]
  5. Engheia, N. On the role of fractional calculus in electromagnetic theory. IEEE Antenna Propag. Mag. 1997, 39, 35–46. [Google Scholar] [CrossRef]
  6. Cottone, G.; Paola, M.D.; Santoro, R. A novel exact representation of stationary colored Gaussian processes (fractional differential approach). J. Phys. A Math. Theor. 2010, 43, 085002. [Google Scholar] [CrossRef]
  7. Song, L.; Xu, S.; Yang, J. Dynamical models of happiness with fractional order. Commun. Nonlinear Sci. Numer. Simul. 2010, 15, 616–628. [Google Scholar] [CrossRef]
  8. Reyes-Melo, E.; Martinez-Vega, J.; Guerrero-Salazar, C.; Ortiz-Mendez, U. Application of fractional calculus to the modeling of dielectric relaxation phenomena in polymeric materials. J. Appl. Polym. Sci. 2005, 98, 923–935. [Google Scholar] [CrossRef]
  9. Cao, J.; Chen, G.; Li, P. Global synchronization in an array of delayed neural networks with hybrid coupling. IEEE Trans. SMC 2008, 38, 488–498. [Google Scholar]
  10. Huang, H.; Feng, G. Synchronization of nonidentical chaotic neural networks with time delays. Neural Netw. 2009, 22, 869–874. [Google Scholar] [CrossRef]
  11. Liang, J.; Wang, Z.; Liu, Y.; Liu, X. Robust synchronization of an array of coupled stochastic discrete-time delayed neural networks. IEEE Trans. Neural Netw. 2008, 19, 1910–1921. [Google Scholar] [CrossRef]
  12. Yang, X.; Zhu, Q.; Huang, C. Lag stochastic synchronization of chaotic mixed time-delayed neural networks with uncertain parameters or perturbations. Neurocomputing 2011, 74, 1617–1625. [Google Scholar] [CrossRef]
  13. Li, H.L.; Hu, C.; Cao, J.; Jiang, H.; Alsaedi, A. Quasi-projective and complete synchronization of fractional-order complex-valued neural networks with time delays. Neural Netw. 2019, 118, 102–109. [Google Scholar] [CrossRef] [PubMed]
  14. Li, H.L.; Jiang, H.; Cao, J. Global synchronization of fractional order quaternion-valued neural networks with leakage and discrete delays. Neurocomputing 2020, 385, 211–219. [Google Scholar] [CrossRef]
  15. Baldi, P.; Atiya, A. How delays affect neural dynamics and learning. IEEE Trans. Neural Netw. 1994, 5, 612–621. [Google Scholar] [CrossRef]
  16. Liao, X.; Wong, K.; Leung, C.; Wu, Z. Hopf bifurcation and chaos in a single delayed neuron equation with non-monotonic activation function. Chaos Solitons Fract. 2001, 12, 1535–1547. [Google Scholar] [CrossRef]
  17. Xu, W.; Cao, J.; Xiao, M.; Ho, D.W.C.; Wen, G. A new framework for analysis on stability and bifurcation in a class of neural networks with discrete and distributed delays. IEEE Trans. Cybern. 2015, 45, 2224–2236. [Google Scholar] [CrossRef] [PubMed]
  18. Wu, X.; Tang, Y.; Zhang, W. Stability analysis of switched stochastic neural networks with time-varying delays. Neural Netw. 2014, 51, 39–49. [Google Scholar] [CrossRef]
  19. Tu, Z.; Cao, J.; Alsaedi, A.; Hayat, T. Global dissipativity analysis for delayed quaternion-valued neural networks. Neural Netw. 2017, 89, 97–104. [Google Scholar] [CrossRef]
  20. Hopfield, J. Neurons with graded response have collective computational properties like those of two-state neurons. Proc. Natl. Acad. Sci. USA 1984, 81, 3088–3092. [Google Scholar] [CrossRef]
  21. Lundstrom, B.; Higgs, M.; Spain, W.; Fairhall, A. Fractional differentiation by neocortical pyramidal neurons. Nat. Neurosci. 2008, 11, 1335–1342. [Google Scholar] [CrossRef] [PubMed]
  22. Wu, A.; Zeng, Z. Boundedness, Mittag-Leffler stability and asymptotical α-periodicity of fractional-order fuzzy neural networks. Neural Netw. 2016, 74, 73–84. [Google Scholar] [CrossRef] [PubMed]
  23. Huang, X.; Zhao, Z.; Wang, Z.; Li, Y. Chaos and hyperchaos in fractional-order cellular neural networks. Neurocomputing 2012, 94, 13–21. [Google Scholar] [CrossRef]
  24. Roohi, M.; Zhang, C.; Chen, Y. Adaptive model-free synchronization of different fractional-order neural networks with an application in cryptography. Nonlinear Dyn. 2020, 100, 3979–4001. [Google Scholar] [CrossRef]
  25. Chen, J.; Zeng, Z.; Jiang, P. Global Mittag-Leffler stability and synchronization of memristor-based fractional-order neural networks. Neural Netw. 2014, 51, 1–8. [Google Scholar] [CrossRef] [PubMed]
  26. Li, K.; Peng, J.; Gao, J. A comment on α-stability and α-synchronization for fractional-order neural networks. Neural Netw. 2013, 48, 207–208. [Google Scholar]
  27. Anastassiou, G.A. Fractional neural network approximation. Comput. Math. Appl. 2012, 64, 1655–1676. [Google Scholar] [CrossRef]
  28. Bondarenko, V. Information processing, memories, and synchronization in chaotic neural network with the time delay. Complexity 2005, 11, 39–52. [Google Scholar] [CrossRef]
  29. Zhou, J.; Chen, T.; Xiang, L. Chaotic lag synchronization of coupled delayed neural networks and its applications in secure communication. Circuits Syst. Signal Process 2005, 24, 599–613. [Google Scholar] [CrossRef]
  30. Shanmugam, L.; Mani, P.; Rajan, R.; Joo, Y.H. Adaptive synchronization of reaction-diffusion neural networks and its application to secure communication. IEEE Trans. Cybern. 2020, 50, 911–922. [Google Scholar] [CrossRef]
  31. Fu, Q.; Zhong, S.; Jiang, W.; Xie, W. Projective synchronization of fuzzy memristive neural networks with pinning impulsive control. J. Frankl. Inst. 2020, 357, 10387–10409. [Google Scholar] [CrossRef]
  32. Shen, Y.; Shi, J.; Cai, S. Exponential synchronization of directed bipartite networks with node delays and hybrid coupling via impulsive pinning control. Neurocomputing 2021, 453, 209–222. [Google Scholar] [CrossRef]
  33. Tang, Z.; Park, J.H.; Wang, Y.; Zheng, W. Synchronization on Lur’e cluster networks with proportional delay: Impulsive effects method. IEEE Trans. Syst. Man Cybern. Syst. 2021, 51, 4555–4565. [Google Scholar] [CrossRef]
  34. Yu, J.; Hu, C.; Jiang, H.; Fan, X. Projective synchronization for fractional neural networks. Neural Netw. 2014, 49, 87–95. [Google Scholar] [CrossRef] [PubMed]
  35. Fan, Y.; Huang, X.; Li, Y.; Xia, J.; Chen, G. Aperiodically intermittent control for quasi-synchronization of delayed memristive neural networks: An interval matrix and matrix measure combined method. IEEE Trans. Syst. Man Cybern. Syst. 2018, 49, 2254–2265. [Google Scholar] [CrossRef]
  36. He, W.; Qian, F.; Lam, J.; Chen, G.; Han, Q.L.; Kurths, J. Quasi-synchronization of heterogeneous dynamic networks via distributed impulsive control: Error estimation, optimization and design. Automatica 2015, 62, 249–262. [Google Scholar] [CrossRef]
  37. Tang, Z.; Park, J.H.; Feng, J. Impulsive effects on quasi-synchronization of neural networks with parameter mismatches and time-varying delay. IEEE Trans. Neural Netw. Learn. Syst. 2018, 29, 908–919. [Google Scholar] [CrossRef]
  38. Tang, Z.; Park, J.H.; Wang, Y.; Feng, J. Distributed impulsive quasi-synchronization of Lur’e networks with proportional delay. IEEE Trans. Cybern. 2019, 49, 3105–3115. [Google Scholar] [CrossRef]
  39. Xin, Y.; Li, Y.; Huang, X.; Cheng, Z. Quasi-synchronization of delayed chaotic memristive neural networks. IEEE Trans. Cybern. 2019, 49, 712–718. [Google Scholar] [CrossRef]
  40. Zheng, M.; Li, L.; Peng, H.; Xiao, J.; Yang, Y.; Zhang, Y.; Zhao, H. Finite-time stability and synchronization of memristor-based fractional-order fuzzy cellular neural networks. Commun. Nonlinear Sci. Numer. Simul. 2018, 59, 272–291. [Google Scholar] [CrossRef]
  41. Du, F.; Lu, J. Adaptive finite-time synchronization of fractional-order delayed fuzzy cellular neural networks. Fuzzy Set Syst. 2023, 466, 108480. [Google Scholar] [CrossRef]
  42. Podlubny, I. Fractional Differential Equations; Academic Press: New York, NY, USA, 1999. [Google Scholar]
  43. Kilbas, A.A.; Srivastava, H.M.; Trujillo, J.J. Theory and Application of Fractional Differential Equations; Elsevier: New York, NY, USA, 2006. [Google Scholar]
  44. Yang, S.; Hu, C.; Yu, J.; Jiang, H. Exponential stability of fractional-order impulsive control systems with applications in synchronization. IEEE Trans. Cybern. 2019, 50, 3157–3168. [Google Scholar] [CrossRef]
  45. Wei, Z.; Li, Q.; Che, J. Initial value problems for fractional differential equations involving Riemann-Liouville sequential fractional derivative. J. Math. Anal. Appl. 2010, 367, 260–272. [Google Scholar] [CrossRef]
  46. Liu, P.; Wang, J.; Zeng, Z. Event-triggered synchronization of multiple fractional-order recurrent neural networks with time-varying delays. IEEE Trans. Neural Netw. Learn. Syst. 2023, 34, 4620–4630. [Google Scholar] [CrossRef]
  47. Fu, Y. The Laplace Transform and Its Application; Harbin Institute of Technology Press: Harbin, China, 2015. (In Chinese) [Google Scholar]
  48. Diethelm, K. The Analysis of Fractional Differential Equations; Springer: New York, NY, USA, 2010. [Google Scholar]
  49. Chen, B.; Chen, J. Global asymptotical ω-periodicity of a fractional-order non-autonomous neural networks. Neural Netw. 2015, 68, 78–88. [Google Scholar] [CrossRef] [PubMed]
  50. Jia, J.; Zeng, Z.; Wang, F. Pinning synchronization of fractional-order memristor-based neural networks with multiple time-varying delays via static or dynamic coupling. J. Frankl. Inst. 2021, 358, 895–933. [Google Scholar] [CrossRef]
  51. Liu, P.; Zeng, Z.; Wang, J. Asymptotic and finite-time cluster synchronization of coupled fractional-order neural networks with time delay. IEEE Trans. Neural Netw. Learn. Syst. 2020, 31, 4956–4967. [Google Scholar] [CrossRef] [PubMed]
  52. Bhalekar, S.; Daftardar-Gejji, V.; Baleanu, D.; Magin, R. Fractional Bloch equation with delay. Comput. Math. Appl. 2011, 61, 1355–1365. [Google Scholar] [CrossRef]
Figure 1. The evolution of the feedback strengths $\sigma_1$ and $\sigma_2$.
Figure 2. Synchronization errors between the systems (10) and (11).
Figure 3. The norm of synchronization error between the systems (10) and (11).

