Article

Approximation of a Linear Autonomous Differential Equation with Small Delay

Áron Fehér, Lőrinc Márton and Mihály Pituk
1 Department of Electrical Engineering, Sapientia Hungarian University of Transylvania, Corunca, 547367 Mures, Romania
2 Department of Mathematics, University of Pannonia, 8200 Veszprém, Hungary
* Author to whom correspondence should be addressed.
Symmetry 2019, 11(10), 1299; https://doi.org/10.3390/sym11101299
Submission received: 30 September 2019 / Revised: 10 October 2019 / Accepted: 12 October 2019 / Published: 15 October 2019

Abstract

A linear autonomous differential equation with small delay is considered in this paper. It is shown that, under a smallness condition, the delay differential equation is asymptotically equivalent to a linear ordinary differential equation with constant coefficients. The coefficient matrix of the ordinary differential equation is a solution of an associated matrix equation, and it can be written as the limit of a sequence of matrices obtained by successive approximations. The eigenvalues of the approximating matrices converge exponentially to the dominant characteristic roots of the delay differential equation, and an explicit estimate for the approximation error is given.

1. Introduction

Let $\mathbb{C}$ and $\mathbb{C}^n$ denote the set of complex numbers and the $n$-dimensional space of complex column vectors, respectively. Given a norm $\|\cdot\|$ on $\mathbb{C}^n$, the associated induced norm on $\mathbb{C}^{n\times n}$ will be denoted by the same symbol.
We will study the linear autonomous delay differential equation
$$\dot{x}(t) = A x(t) + B x(t-\tau), \tag{1}$$
where $\tau > 0$, $A \in \mathbb{C}^{n\times n}$ and $B \in \mathbb{C}^{n\times n}$ is a nonzero matrix. It is well known that if $\phi\colon [-\tau, 0] \to \mathbb{C}^n$ is a continuous initial function, then Equation (1) has a unique solution $x\colon [-\tau, \infty) \to \mathbb{C}^n$ with initial values $x(t) = \phi(t)$ for $-\tau \le t \le 0$ (see [1]). The characteristic equation of Equation (1) has the form
$$\det \Delta(\lambda) = 0, \quad \text{where} \quad \Delta(\lambda) = \lambda I - A - B e^{-\lambda\tau}. \tag{2}$$
Throughout the paper, we will assume that
$$\|B\|\, \tau\, e^{1 + \|A\|\tau} < 1, \tag{3}$$
which may be viewed as a smallness condition on the delay $\tau$. We will show that if (3) holds, then Equation (1) is asymptotically equivalent to the ordinary differential equation
$$\dot{x} = M x, \tag{4}$$
where $M \in \mathbb{C}^{n\times n}$ is the unique solution of the matrix equation
$$M = A + B e^{-M\tau} \tag{5}$$
such that
$$\|M\| < \mu_0, \quad \text{where} \quad \mu_0 = -\tau^{-1}\ln(\|B\|\tau) > 0. \tag{6}$$
Furthermore, the coefficient matrix $M$ in Equation (4) can be written as the limit of successive approximations,
$$M = \lim_{k\to\infty} M_k, \tag{7}$$
where
$$M_0 = 0 \quad \text{and} \quad M_{k+1} = A + B e^{-M_k\tau} \quad \text{for } k = 0, 1, 2, \dots. \tag{8}$$
The convergence in (7) is exponential and we give an estimate for the approximation error $\|M - M_k\|$. It will be shown that those characteristic roots of Equation (1) which lie in the half-plane $\operatorname{Re}\lambda > -\mu_0$, with $\mu_0$ as in (6), coincide with the eigenvalues of the matrix $M$. As a consequence, these dominant characteristic roots of Equation (1) can be approximated by the eigenvalues of $M_k$. We give an explicit estimate for the approximation error which shows that the eigenvalues of $M_k$ converge to the dominant characteristic roots of Equation (1) exponentially fast.
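To make the scheme concrete, the sketch below (a Python illustration, not part of the original paper) implements the iteration (8) using SciPy's matrix exponential; the matrices A, B and the delay tau are hypothetical example data chosen so that the smallness condition (3) holds.

```python
import numpy as np
from scipy.linalg import expm, norm

# Hypothetical example data: a 2x2 system with a delay small enough that
# condition (3), ||B|| * tau * exp(1 + ||A|| * tau) < 1, is satisfied.
A = np.array([[-1.0, 0.5], [0.0, -2.0]])
B = np.array([[0.2, 0.0], [0.1, 0.3]])
tau = 0.1
assert norm(B, 2) * tau * np.exp(1.0 + norm(A, 2) * tau) < 1.0  # condition (3)

# Successive approximations (8): M_0 = 0, M_{k+1} = A + B exp(-M_k tau).
M = np.zeros_like(A)
for k in range(100):
    M_next = A + B @ expm(-M * tau)
    if norm(M_next - M, 2) < 1e-14:
        break
    M = M_next

# M now approximates the solution of the matrix equation (5); its eigenvalues
# approximate the dominant characteristic roots of the delay equation (1).
print(np.linalg.eigvals(M))
```

Under (3), the map $M \mapsto A + B e^{-M\tau}$ is a contraction (see Theorem 1 below), so the loop typically terminates after only a few iterations.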
The investigation of differential equations with small delays has received much attention. Some results which are related to our study are discussed in the last section of the paper.

2. Main Results

In this section, we formulate and prove our main results which were indicated in the Introduction.

2.1. Solution of the Matrix Equation and Its Approximation

First we prove the existence and uniqueness of the solution of the matrix Equation (5) satisfying (6).
Theorem 1.
Suppose (3) holds. Then Equation (5) has a unique solution $M \in \mathbb{C}^{n\times n}$ such that (6) holds.
Before we present the proof of Theorem 1, we establish some lemmas.
Lemma 1.
Let $P, Q \in \mathbb{C}^{n\times n}$ and $\gamma = \max\{\|P\|, \|Q\|\}$. Then
$$\|P^k - Q^k\| \le k\,\gamma^{k-1}\,\|P - Q\| \quad \text{for } k = 1, 2, \dots. \tag{9}$$
Proof. 
We will prove by induction on k that
$$P^k - Q^k = \sum_{j=0}^{k-1} P^j (P - Q)\, Q^{k-1-j} \tag{10}$$
for $k = 1, 2, \dots$. Evidently, (10) holds for $k = 1$. Suppose for induction that (10) holds for some positive integer $k$. Then
$$P^{k+1} - Q^{k+1} = P^k(P - Q) + (P^k - Q^k)Q = P^k(P - Q) + \sum_{j=0}^{k-1} P^j (P - Q)\, Q^{k-1-j} Q = \sum_{j=0}^{k} P^j (P - Q)\, Q^{k-j}.$$
Thus, (10) holds for all $k$. From (10), we find that
$$\|P^k - Q^k\| \le \sum_{j=0}^{k-1} \|P\|^j\, \|P - Q\|\, \|Q\|^{k-1-j} \le \|P - Q\| \sum_{j=0}^{k-1} \gamma^{j} \gamma^{k-1-j} = k\,\gamma^{k-1}\,\|P - Q\|$$
for $k = 1, 2, \dots$. □
Using Lemma 1, we can prove the following result about the distance between two matrix exponentials.
Lemma 2.
Let $P, Q \in \mathbb{C}^{n\times n}$ and $\gamma = \max\{\|P\|, \|Q\|\}$. Then
$$\|e^P - e^Q\| \le e^{\gamma}\,\|P - Q\|. \tag{11}$$
Proof. 
By the definition of the matrix exponential, we have
$$e^P - e^Q = \sum_{k=0}^{\infty} \frac{P^k}{k!} - \sum_{k=0}^{\infty} \frac{Q^k}{k!} = \sum_{k=1}^{\infty} \frac{P^k - Q^k}{k!}.$$
From this, by the application of Lemma 1, we find that
$$\|e^P - e^Q\| \le \sum_{k=1}^{\infty} \frac{\|P^k - Q^k\|}{k!} \le \|P - Q\| \sum_{k=1}^{\infty} \frac{k\,\gamma^{k-1}}{k!} = \|P - Q\| \sum_{k=1}^{\infty} \frac{\gamma^{k-1}}{(k-1)!} = e^{\gamma}\,\|P - Q\|,$$
which proves (11). □
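As a quick numerical sanity check (not from the paper), the bound (11) can be tested for arbitrary trial matrices:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
P = rng.standard_normal((3, 3))            # arbitrary trial matrices
Q = P + 0.01 * rng.standard_normal((3, 3))

gamma = max(np.linalg.norm(P, 2), np.linalg.norm(Q, 2))
lhs = np.linalg.norm(expm(P) - expm(Q), 2)
rhs = np.exp(gamma) * np.linalg.norm(P - Q, 2)
assert lhs <= rhs                          # Lemma 2: ||e^P - e^Q|| <= e^gamma ||P - Q||
print(lhs, rhs)
```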
We will also need some properties of the scalar equation
$$\lambda = a + b\, e^{\lambda\tau}. \tag{12}$$
Lemma 3.
Let $a \in [0, \infty)$, $b, \tau \in (0, \infty)$ and suppose that
$$b\,\tau\, e^{1 + a\tau} < 1. \tag{13}$$
If we let $\lambda_0 = -\tau^{-1}\ln(b\tau)$, then $\lambda_0 > 0$ and Equation (12) has a unique root $\lambda_1 \in (0, \lambda_0)$. Moreover,
$$a + b\, e^{\lambda\tau} < \lambda \quad \text{for } \lambda \in (\lambda_1, \lambda_0] \tag{14}$$
and
$$b\,\tau\, e^{\lambda\tau} < 1 \quad \text{for } \lambda < \lambda_0. \tag{15}$$
Proof. 
By virtue of (13), we have $b\tau < e^{-1-a\tau} < 1$, which implies that $\ln(b\tau) < 0$ and hence $\lambda_0 > 0$. Define
$$f(\lambda) = \lambda - a - b\, e^{\lambda\tau} \quad \text{for } \lambda \in \mathbb{R}.$$
We have
$$f'(\lambda) = 1 - b\,\tau\, e^{\lambda\tau} \quad \text{and} \quad f''(\lambda) = -b\,\tau^2\, e^{\lambda\tau} \quad \text{for } \lambda \in \mathbb{R}.$$
It is easily seen that $f'(\lambda) = 0$ if and only if $\lambda = -\tau^{-1}\ln(b\tau) = \lambda_0$. Furthermore, (13) is equivalent to $f(\lambda_0) = -\tau^{-1}\ln(b\tau) - a - \tau^{-1} > 0$. Since $f''(\lambda) < 0$ for $\lambda \in \mathbb{R}$, $f'$ strictly decreases on $\mathbb{R}$. In particular, $f'(\lambda) > f'(\lambda_0) = 0$ for $\lambda < \lambda_0$. Therefore, (15) holds and $f$ strictly increases on $(-\infty, \lambda_0]$. This, together with $f(0) < 0$ and $f(\lambda_0) > 0$, implies that $f$, and hence Equation (12), has a unique root $\lambda_1 \in (0, \lambda_0)$. Since $f$ strictly increases on $[\lambda_1, \lambda_0]$, we have that $f(\lambda) > f(\lambda_1) = 0$ for $\lambda \in (\lambda_1, \lambda_0]$. Thus, (14) holds. □
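Since $f$ is strictly increasing on $(-\infty, \lambda_0]$ with $f(0) < 0 < f(\lambda_0)$, the root $\lambda_1$ can be bracketed and computed by bisection. A minimal sketch, with hypothetical scalar data $a$, $b$, $\tau$ satisfying (13):

```python
import numpy as np

a, b, tau = 0.5, 0.2, 0.1                  # hypothetical data satisfying (13)
assert b * tau * np.exp(1.0 + a * tau) < 1.0

lam0 = -np.log(b * tau) / tau              # lambda_0 = -tau^{-1} ln(b*tau) > 0
f = lambda lam: lam - a - b * np.exp(lam * tau)

# f(0) < 0 < f(lambda_0), so the unique root lambda_1 of (12) in (0, lambda_0)
# can be found by bisection on [0, lambda_0].
lo, hi = 0.0, lam0
for _ in range(100):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if f(mid) < 0.0 else (lo, mid)
lam1 = 0.5 * (lo + hi)
print(lam0, lam1)
```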
Now we can give a proof of Theorem 1.
Proof of Theorem 1.
By Lemma 3, if (3) holds, then the equation
$$\mu = \|A\| + \|B\|\, e^{\mu\tau} \tag{16}$$
has a unique solution $\mu_1 \in (0, \mu_0)$, where $\mu_0$ is given by (6). Moreover,
$$\|A\| + \|B\|\, e^{\mu\tau} < \mu \quad \text{for } \mu \in (\mu_1, \mu_0] \tag{17}$$
and
$$\|B\|\,\tau\, e^{\mu\tau} < 1 \quad \text{for } \mu < \mu_0. \tag{18}$$
Let $\mu \in [\mu_1, \mu_0)$ be fixed. Define
$$F(M) = A + B\, e^{-M\tau} \quad \text{for } M \in \mathbb{C}^{n\times n} \tag{19}$$
and
$$S = \{\, M \in \mathbb{C}^{n\times n} : \|M\| \le \mu \,\}. \tag{20}$$
Clearly, $S$ is a nonempty and closed subset of $\mathbb{C}^{n\times n}$. By virtue of (17), we have, for $M \in S$,
$$\|F(M)\| \le \|A\| + \|B\|\, e^{\|M\|\tau} \le \|A\| + \|B\|\, e^{\mu\tau} \le \mu. \tag{21}$$
Thus, $F$ maps $S$ into itself. Let $M_1, M_2 \in S$. By the application of Lemma 2, we obtain
$$\|F(M_1) - F(M_2)\| = \|B(e^{-M_1\tau} - e^{-M_2\tau})\| \le \|B\|\, \|e^{-M_1\tau} - e^{-M_2\tau}\| \le \|B\|\,\tau\, e^{\mu\tau}\, \|M_1 - M_2\|.$$
In view of (18), $F\colon S \to S$ is a contraction and hence there exists a unique $M \in S$ such that $M = F(M)$. Since $\mu \in [\mu_1, \mu_0)$ was arbitrary, this completes the proof. □
In the next theorem, we show that the unique solution of Equation (5) satisfying (6) can be written as the limit of the successive approximations $M_k$ defined by (8), and we give an estimate for the approximation error.
Theorem 2.
Suppose (3) holds and let $M \in \mathbb{C}^{n\times n}$ be the solution of Equation (5) satisfying (6). If $\{M_k\}_{k=0}^{\infty}$ is the sequence of matrices defined by (8), then
$$\|M_k\| \le \mu_1 \quad \text{for } k = 0, 1, 2, \dots, \tag{22}$$
and
$$\|M - M_k\| \le \mu_1\, q^k \quad \text{for } k = 0, 1, 2, \dots, \tag{23}$$
where $\mu_1$ is the unique root of Equation (16) in the interval $(0, \mu_0)$ and $q = \|B\|\,\tau\, e^{\mu_1\tau} < 1$ (see (18)).
Proof. 
Note that $M_{k+1} = F(M_k)$ for $k = 0, 1, 2, \dots$, where $F$ is defined by Equation (19). Taking $\mu = \mu_1$ in the proof of Theorem 1, we find that $\|M\| \le \mu_1$. Moreover, from (20) and (21), we obtain that $\|M_k\| \le \mu_1$ for $k = 0, 1, 2, \dots$. From this and Equations (5) and (8), by the application of Lemma 2, we obtain, for $k \ge 0$,
$$\|M - M_{k+1}\| = \|B(e^{-M\tau} - e^{-M_k\tau})\| \le \|B\|\, \|e^{-M\tau} - e^{-M_k\tau}\| \le \|B\|\,\tau\, e^{\mu_1\tau}\, \|M - M_k\| = q\, \|M - M_k\|.$$
From the last inequality, it follows by easy induction on $k$ that
$$\|M - M_k\| \le q^k\, \|M - M_0\| = q^k\, \|M\| \le q^k\, \mu_1$$
for $k = 0, 1, 2, \dots$. □
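The estimate (23) is easy to test numerically. The sketch below (illustrative only; it reuses the hypothetical example matrices from the Introduction sketch) obtains $\mu_1$ by bisection from Equation (16), takes a long run of the iteration (8) as a reference value for $M$, and checks that $\|M - M_k\|$ stays below $\mu_1 q^k$.

```python
import numpy as np
from scipy.linalg import expm, norm

# Hypothetical example data (the same matrices as in the Introduction sketch).
A = np.array([[-1.0, 0.5], [0.0, -2.0]])
B = np.array([[0.2, 0.0], [0.1, 0.3]])
tau = 0.1
a, b = norm(A, 2), norm(B, 2)

# mu_1: the root of mu = a + b*exp(mu*tau) in (0, mu_0), found by bisection.
# Keeping the upper end of the bracket gives a value slightly above the exact
# root, which only makes the bound below looser, never invalid.
mu0 = -np.log(b * tau) / tau
g = lambda mu: mu - a - b * np.exp(mu * tau)
lo, hi = 0.0, mu0
for _ in range(100):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if g(mid) < 0.0 else (lo, mid)
mu1 = hi
q = b * tau * np.exp(mu1 * tau)            # q < 1 by (18)

F = lambda M: A + B @ expm(-M * tau)       # one step of the iteration (8)

M = np.zeros_like(A)                       # reference fixed point of (5)
for _ in range(200):
    M = F(M)

Mk = np.zeros_like(A)
for k in range(8):
    assert norm(M - Mk, 2) <= mu1 * q**k + 1e-12   # the error estimate (23)
    Mk = F(Mk)
```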

2.2. Dominant Eigenvalues and Eigensolutions

Let us summarize some facts from the theory of linear autonomous delay differential equations (see [1,2]). By an eigenvalue of Equation (1), we mean an eigenvalue of the generator of the solution semigroup (see [1,2] for details). It is known that $\lambda \in \mathbb{C}$ is an eigenvalue of Equation (1) if and only if $\lambda$ is a root of the characteristic Equation (2). Moreover, for every $\beta \in \mathbb{R}$, Equation (1) has only a finite number of eigenvalues with $\operatorname{Re}\lambda > \beta$. By an entire solution of Equation (1), we mean a differentiable function $x\colon (-\infty, \infty) \to \mathbb{C}^n$ satisfying Equation (1) for all $t \in (-\infty, \infty)$. To each eigenvalue $\lambda$ of Equation (1), there correspond nontrivial entire solutions of the form $p(t)\, e^{\lambda t}$, $t \in (-\infty, \infty)$, where $p(t)$ is a $\mathbb{C}^n$-valued polynomial in $t$. Such solutions are sometimes called eigensolutions corresponding to $\lambda$.
The following theorem shows that, under the smallness condition (3), the eigenvalues of Equation (1) with $\operatorname{Re}\lambda > -\mu_0$ coincide with the eigenvalues of the matrix $M$ from Theorem 1, and the corresponding eigensolutions satisfy the ordinary differential Equation (4).
Theorem 3.
Suppose (3) holds so that $\mu_0 = -\tau^{-1}\ln(\|B\|\tau) > 0$, and define
$$\Lambda = \{\, \lambda \in \mathbb{C} : \det\Delta(\lambda) = 0,\ \operatorname{Re}\lambda > -\mu_0 \,\}.$$
Let $M \in \mathbb{C}^{n\times n}$ be the unique solution of Equation (5) satisfying (6). Then $\Lambda = \sigma(M)$, where $\sigma(M)$ denotes the set of eigenvalues of $M$. Moreover, for every $\lambda \in \Lambda$, Equations (1) and (4) have the same eigensolutions corresponding to $\lambda$.
In the sequel, the eigenvalues of Equation (1) with $\operatorname{Re}\lambda > -\mu_0$ will be called dominant.
As a preparation for the proof of Theorem 3, we establish three lemmas. First, we show that if $M$ is a solution of the matrix Equation (5), then every solution of the ordinary differential Equation (4) is an entire solution of the delay differential Equation (1).
Lemma 4.
Let $M \in \mathbb{C}^{n\times n}$ be a solution of Equation (5). Then, for every $v \in \mathbb{C}^n$, $x(t) = e^{Mt}v$, $t \in (-\infty, \infty)$, is an entire solution of Equation (1).
Proof. 
Since $e^P e^Q = e^{P+Q}$ whenever $P, Q \in \mathbb{C}^{n\times n}$ commute, from Equation (5), we find that
$$\dot{x}(t) = M e^{Mt} v = (A + B e^{-M\tau})\, e^{Mt} v = A e^{Mt} v + B e^{-M\tau} e^{Mt} v = A x(t) + B e^{M(t-\tau)} v = A x(t) + B x(t-\tau)$$
for $t \in (-\infty, \infty)$. □
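Lemma 4 can be verified numerically by differentiating $e^{Mt}v$ with a finite difference and comparing with the right-hand side of Equation (1). A small sketch (hypothetical example data as in the earlier sketches):

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical example data (same matrices as in the earlier sketches).
A = np.array([[-1.0, 0.5], [0.0, -2.0]])
B = np.array([[0.2, 0.0], [0.1, 0.3]])
tau = 0.1

M = np.zeros_like(A)                       # solution of (5) via the iteration (8)
for _ in range(200):
    M = A + B @ expm(-M * tau)

v = np.array([1.0, -2.0])
x = lambda t: expm(M * t) @ v              # candidate entire solution of (1)

# Check x'(t) = A x(t) + B x(t - tau) at a few points (central difference).
eps = 1e-6
for t in (-3.0, 0.0, 2.5):
    lhs = (x(t + eps) - x(t - eps)) / (2.0 * eps)
    rhs = A @ x(t) + B @ x(t - tau)
    print(t, np.linalg.norm(lhs - rhs))    # small; only finite-difference error remains
```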
In the following lemma, we prove the uniqueness of entire solutions of the delay differential Equation (1) with an appropriate exponential growth as $t \to -\infty$.
Lemma 5.
Suppose (3) holds. If $x_1$ and $x_2$ are entire solutions of Equation (1) with $x_1(0) = x_2(0)$ and such that
$$\sup_{t \le 0}\, \|x_j(t)\|\, e^{\mu_0 t} < \infty, \quad j = 1, 2, \tag{24}$$
with $\mu_0$ as in (6), then $x_1 = x_2$ identically on $(-\infty, \infty)$.
Proof. 
Define
$$C = \sup_{t \le 0}\, \|x_1(t) - x_2(t)\|\, e^{\mu_0 t}.$$
By virtue of (24), we have that $0 \le C < \infty$. From Equation (1), we find, for $t \le 0$,
$$x_j(t) = x_j(0) - A \int_t^0 x_j(s)\, ds - B \int_t^0 x_j(s-\tau)\, ds, \quad j = 1, 2.$$
From this, taking into account that $x_1(0) = x_2(0)$, we obtain, for $t \le 0$,
$$\|x_1(t) - x_2(t)\| \le \|A\| \int_t^0 \|x_1(s) - x_2(s)\|\, ds + \|B\| \int_t^0 \|x_1(s-\tau) - x_2(s-\tau)\|\, ds \le \|A\|\, C \int_t^0 e^{-\mu_0 s}\, ds + \|B\|\, C \int_t^0 e^{-\mu_0(s-\tau)}\, ds = C\,(\|A\| + \|B\|\, e^{\mu_0\tau}) \int_t^0 e^{-\mu_0 s}\, ds \le C\, \frac{\|A\| + \|B\|\, e^{\mu_0\tau}}{\mu_0}\, e^{-\mu_0 t}.$$
The last inequality implies, for $t \le 0$,
$$\|x_1(t) - x_2(t)\|\, e^{\mu_0 t} \le C\, \frac{\|A\| + \|B\|\, e^{\mu_0\tau}}{\mu_0}.$$
Hence $C \le \kappa C$, where
$$\kappa = \frac{\|A\| + \|B\|\, e^{\mu_0\tau}}{\mu_0}.$$
By virtue of (17), we have that $\kappa < 1$. Hence $C = 0$ and $x_1(t) = x_2(t)$ for $t \le 0$. The uniqueness theorem ([1], Chapter 2, Theorem 2.3) implies that $x_1(t) = x_2(t)$ for all $t \in (-\infty, \infty)$. □
Now we show that those entire solutions of Equation (1) which satisfy the growth condition
$$\sup_{t \le 0}\, \|x(t)\|\, e^{\mu_0 t} < \infty, \quad \text{with } \mu_0 \text{ as in (6)}, \tag{25}$$
coincide with the solutions of the ordinary differential Equation (4).
Lemma 6.
Suppose (3) holds. Then, for every $v \in \mathbb{C}^n$, Equation (1) has exactly one entire solution $x$ with $x(0) = v$ satisfying (25), given by
$$x(t) = e^{Mt} v \quad \text{for } t \in (-\infty, \infty), \tag{26}$$
where $M \in \mathbb{C}^{n\times n}$ is the solution of Equation (5) with property (6).
Proof. 
By Lemma 4, $x$ defined by Equation (26) is an entire solution of Equation (1). Moreover, from Equations (6) and (26), we find, for $t \le 0$,
$$\|x(t)\| \le e^{\|M\|\,|t|}\, \|v\| \le e^{\mu_0 |t|}\, \|v\| = e^{-\mu_0 t}\, \|v\|.$$
Hence $\sup_{t \le 0} \|x(t)\|\, e^{\mu_0 t} \le \|v\| < \infty$. Thus, $x$ given by Equation (26) is an entire solution of Equation (1) with $x(0) = v$ and satisfying (25). The uniqueness follows from Lemma 5. □
Now we can give a proof of Theorem 3.
Proof of Theorem 3.
Suppose that $\lambda \in \Lambda$. Since $\det\Delta(\lambda) = 0$, there exists a nonzero vector $v \in \mathbb{C}^n$ such that $\Delta(\lambda)v = 0$ and hence $x(t) = e^{\lambda t}v$, $t \in (-\infty, \infty)$, is an entire solution of Equation (1). Since $\operatorname{Re}\lambda > -\mu_0$, we have, for $t \le 0$,
$$\|x(t)\| = |e^{\lambda t}|\, \|v\| = e^{t\operatorname{Re}\lambda}\, \|v\| \le e^{-\mu_0 t}\, \|v\|,$$
which implies (25). Thus, $x(t) = e^{\lambda t}v$ is an entire solution of (1) with $x(0) = v$ and satisfying (25). By Lemma 6, we have that $e^{\lambda t}v = e^{Mt}v$ for $t \in (-\infty, \infty)$. Hence
$$\frac{e^{\lambda t} - 1}{t}\, v = \frac{e^{Mt} - I}{t}\, v \quad \text{for } t \in \mathbb{R}\setminus\{0\}.$$
Letting $t \to 0$, we obtain $\lambda v = M v$. This proves that $\Lambda \subset \sigma(M)$.
Now suppose that $\lambda \in \sigma(M)$. Then there exists a nonzero vector $v \in \mathbb{C}^n$ such that $Mv = \lambda v$. According to Lemma 4, $x(t) = e^{Mt}v = e^{\lambda t}v$ is an entire solution of Equation (1). Hence $\Delta(\lambda)v = 0$, which implies that $\det\Delta(\lambda) = 0$. In order to prove that $\lambda \in \Lambda$, it remains to show that $\operatorname{Re}\lambda > -\mu_0$. It is well known that $\rho(M) \le \|M\|$, where $\rho(M) = \sup_{\lambda \in \sigma(M)} |\lambda|$ is the spectral radius of $M$. This, together with (6), yields
$$|\operatorname{Re}\lambda| \le |\lambda| \le \rho(M) \le \|M\| < \mu_0.$$
Therefore, $\operatorname{Re}\lambda > -\mu_0$, which proves that $\sigma(M) \subset \Lambda$.
Let $\lambda \in \Lambda = \sigma(M)$. By Lemma 4, every eigensolution of the ordinary differential Equation (4) corresponding to $\lambda$ is an eigensolution of the delay differential Equation (1). Now suppose that $x$ is an eigensolution of the delay differential Equation (1) corresponding to $\lambda$. Then $x(t) = p(t)\, e^{\lambda t}$, where $p(t)$ is a $\mathbb{C}^n$-valued polynomial in $t$. If $m$ is the order of the polynomial $p$, then there exists $K > 0$ such that
$$\|p(t)\| \le K\,(1 + |t|^m) \quad \text{for } t \in (-\infty, \infty).$$
Since $\operatorname{Re}\lambda > -\mu_0$, we have that $\epsilon = \operatorname{Re}\lambda + \mu_0 > 0$. From this, we find, for $t \le 0$,
$$\|x(t)\| = \|p(t)\|\, |e^{\lambda t}| = \|p(t)\|\, e^{t\operatorname{Re}\lambda} \le K\,(1 + |t|^m)\, e^{t\operatorname{Re}\lambda} = K\,(1 + |t|^m)\, e^{\epsilon t}\, e^{-\mu_0 t}.$$
Hence
$$\|x(t)\|\, e^{\mu_0 t} \le K\,(1 + |t|^m)\, e^{\epsilon t} \to 0 \quad \text{as } t \to -\infty.$$
Thus, $x$ is an entire solution of Equation (1) satisfying the growth condition (25). By Lemma 6, $x$ is a solution of the ordinary differential Equation (4). □

2.3. Asymptotic Equivalence

The following result from the monograph by Diekmann et al. [2] gives an asymptotic description of the solutions of Equation (1) in terms of the eigensolutions.
Proposition 1.
([2], Chapter I, Theorem 5.4) Let $x\colon [-\tau, \infty) \to \mathbb{C}^n$ be a solution of Equation (1) corresponding to some continuous initial function $\phi\colon [-\tau, 0] \to \mathbb{C}^n$. For any $\gamma \in \mathbb{R}$ such that $\det\Delta(\lambda) = 0$ has no roots on the vertical line $\operatorname{Re}\lambda = \gamma$, we have the asymptotic expansion
$$x(t) = \sum_{j=1}^{l} p_j(t)\, e^{\lambda_j t} + o(e^{\gamma t}) \quad \text{as } t \to \infty, \tag{27}$$
where $\lambda_1, \lambda_2, \dots, \lambda_l$ are the finitely many roots of the characteristic Equation (2) with real part greater than $\gamma$ and $p_j(t)$ are $\mathbb{C}^n$-valued polynomials in $t$ of order less than the multiplicity of $\lambda_j$ as a zero of $\det\Delta(\lambda)$.
Now we can formulate our main result about the asymptotic equivalence of Equations (1) and (4).
Theorem 4.
Suppose that (3) holds so that $\mu_0 = -\tau^{-1}\ln(\|B\|\tau) > 0$. Let $M \in \mathbb{C}^{n\times n}$ be the solution of Equation (5) satisfying (6). Then the following statements are valid.
(i) Every solution of the ordinary differential Equation (4) is an entire solution of the delay differential Equation (1).
(ii) For every solution $x\colon [-\tau, \infty) \to \mathbb{C}^n$ of the delay differential Equation (1) corresponding to some continuous initial function $\phi\colon [-\tau, 0] \to \mathbb{C}^n$, there exists a solution $\tilde{x}$ of the ordinary differential Equation (4) such that
$$x(t) = \tilde{x}(t) + o(e^{-\mu_0 t}) \quad \text{as } t \to \infty. \tag{28}$$
Proof. 
Conclusion (i) follows from Lemma 4. We shall prove conclusion (ii) by applying Proposition 1 with $\gamma = -\mu_0$. We need to verify that Equation (2) has no root on the vertical line $\operatorname{Re}\lambda = -\mu_0$. Suppose for contradiction that there exists $\lambda \in \mathbb{C}$ such that $\det\Delta(\lambda) = 0$ and $\operatorname{Re}\lambda = -\mu_0$. Then there exists a nonzero vector $v \in \mathbb{C}^n$ such that $\Delta(\lambda)v = 0$ and hence $\lambda v = A v + B e^{-\lambda\tau} v$. From this, we find that
$$|\lambda|\, \|v\| \le \|A\|\, \|v\| + \|B\|\, |e^{-\lambda\tau}|\, \|v\| = (\|A\| + \|B\|\, e^{-\tau\operatorname{Re}\lambda})\, \|v\| = (\|A\| + \|B\|\, e^{\mu_0\tau})\, \|v\|.$$
Hence $|\lambda| \le \|A\| + \|B\|\, e^{\mu_0\tau}$, which, together with (17), yields
$$\mu_0 = |\operatorname{Re}\lambda| \le |\lambda| \le \|A\| + \|B\|\, e^{\mu_0\tau} < \mu_0,$$
a contradiction. Thus, we can apply Proposition 1 with $\gamma = -\mu_0$, which implies that the asymptotic relation (28) holds with
$$\tilde{x}(t) = \sum_{j=1}^{l} p_j(t)\, e^{\lambda_j t}, \tag{29}$$
where $\lambda_1, \lambda_2, \dots, \lambda_l$ are those eigenvalues of Equation (1) which have real part greater than $-\mu_0$ and $p_j(t)$ are $\mathbb{C}^n$-valued polynomials in $t$. According to Theorem 3, the eigensolutions of Equation (1) corresponding to eigenvalues with real part greater than $-\mu_0$ are solutions of the ordinary differential Equation (4). Hence $\tilde{x}$ given by Equation (29) is a solution of Equation (4). □
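Theorem 4 can be illustrated numerically: starting from an arbitrary continuous initial function, a solution of the delay Equation (1) should eventually evolve like a solution of the ordinary differential Equation (4). The sketch below is only an illustration and is not part of the paper; it uses the hypothetical example matrices from the earlier sketches, integrates (1) by forward Euler (which introduces its own discretization error of order h), and compares $x(t+s)$ with $e^{Ms}x(t)$ at a large time $t$.

```python
import numpy as np
from scipy.linalg import expm, norm

# Hypothetical example data (same matrices as in the earlier sketches).
A = np.array([[-1.0, 0.5], [0.0, -2.0]])
B = np.array([[0.2, 0.0], [0.1, 0.3]])
tau = 0.1

M = np.zeros_like(A)                       # solution of (5) via the iteration (8)
for _ in range(200):
    M = A + B @ expm(-M * tau)

# Forward Euler for the delay equation (1) with a step h dividing tau,
# starting from an arbitrary continuous initial function phi on [-tau, 0].
h = 1e-4
m = round(tau / h)
phi = lambda t: np.array([np.cos(5.0 * t), 1.0 + t])
x = [phi(-tau + i * h) for i in range(m + 1)]        # x[i] ~ x(-tau + i*h)
steps = round(5.0 / h)                               # integrate up to t = 5
for k in range(m, m + steps):
    x.append(x[k] + h * (A @ x[k] + B @ x[k - m]))

# Asymptotic equivalence: for large t, x(t + s) should be close to exp(M s) x(t),
# i.e. the solution evolves like the ordinary differential equation (4).
t, s = 4.0, 1.0
it, its = m + round(t / h), m + round((t + s) / h)
print(norm(x[its] - expm(M * s) @ x[it]) / norm(x[it]))   # small relative discrepancy
```

The remaining discrepancy is dominated by the Euler discretization error; the term $o(e^{-\mu_0 t})$ in (28) is negligible on this time scale.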

2.4. Approximation of the Dominant Eigenvalues

We will need the following result, due to Bhatia, Elsner and Krause [3], about the distance between the eigenvalues of two matrices in terms of the norm of their difference.
Proposition 2.
([3]) Let $P, Q \in \mathbb{C}^{n\times n}$ and $\gamma = \max\{\|P\|, \|Q\|\}$. Then the eigenvalues of $P$ and $Q$ can be enumerated as $\lambda_1, \dots, \lambda_n$ and $\mu_1, \dots, \mu_n$ in such a way that
$$\max_{1 \le j \le n} |\lambda_j - \mu_j| \le 4 \cdot 2^{-1/n}\, n^{1/n}\, (2\gamma)^{1-1/n}\, \|P - Q\|^{1/n}. \tag{30}$$
Recall that the dominant eigenvalues of Equation (1) are those roots of Equation (2) which have real part greater than $-\mu_0$. According to Theorem 3, if (3) holds, then the dominant eigenvalues of Equation (1) coincide with the eigenvalues of $M$, the unique solution of Equation (5) satisfying (6). By Theorem 2, $M$ can be approximated by the sequence of matrices $\{M_k\}_{k=0}^{\infty}$ defined by (8). As a consequence, the dominant eigenvalues of the delay differential Equation (1) can be approximated by the eigenvalues of $M_k$. The explicit estimate (23) for $\|M - M_k\|$, combined with Proposition 2, yields the following result.
Theorem 5.
Suppose (3) holds so that the dominant eigenvalues of Equation (1) coincide with the eigenvalues $\lambda_1, \dots, \lambda_n$ of the matrix $M$ from Theorem 1 (see Theorem 3). If $\{M_k\}_{k=0}^{\infty}$ is the sequence of matrices defined by (8), then the eigenvalues $\lambda_1^{[k]}, \dots, \lambda_n^{[k]}$ of $M_k$ can be renumbered such that
$$\max_{1 \le j \le n} |\lambda_j - \lambda_j^{[k]}| \le 8 \cdot 4^{-1/n}\, n^{1/n}\, \mu_1\, q^{k/n}, \tag{31}$$
where μ 1 and q have the meaning from Theorem 2.
Since $q < 1$, the explicit error estimate (31) in Theorem 5 shows that, under the smallness condition (3), the eigenvalues of $M_k$ converge to the dominant eigenvalues of the delay differential Equation (1) at an exponential rate as $k \to \infty$.
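In practice, the dominant characteristic roots can therefore be obtained by iterating (8) a few times and computing the eigenvalues of $M_k$. A short sketch (hypothetical example data as in the earlier sketches) that also reports the characteristic residual $|\det\Delta(\lambda)|$ at each computed eigenvalue:

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical example data (same matrices as in the previous sketches).
A = np.array([[-1.0, 0.5], [0.0, -2.0]])
B = np.array([[0.2, 0.0], [0.1, 0.3]])
tau = 0.1
I = np.eye(2)

# Residual of the characteristic equation (2) at a trial root lambda.
char_res = lambda lam: abs(np.linalg.det(lam * I - A - B * np.exp(-lam * tau)))

Mk = np.zeros_like(A)
for k in range(1, 7):
    Mk = A + B @ expm(-Mk * tau)           # one step of the iteration (8)
    print(k, [char_res(lam) for lam in np.linalg.eigvals(Mk)])
# The residuals shrink as k grows: by Theorem 5, the eigenvalues of M_k converge
# to the dominant characteristic roots of (1) at the exponential rate in (31).
```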

3. Discussion

Let us briefly mention some results which are relevant to our study. For a class of linear differential equations with small delay, Ryabov [4] introduced a family of special solutions and showed that every solution is asymptotic to some special solution as $t \to \infty$. Ryabov's result was improved by Driver [5] and by Jarník and Kurzweil [6]. A more precise asymptotic description was given in [7]. For further related results on asymptotic integration and stability of linear differential equations with small delays, see [8,9]. Some improvements and a generalization to functional differential equations in Banach spaces were given by Faria and Huang [10]. Inertial and slow manifolds for differential equations with small delays were studied by Chicone [11]. Results on minimal sets of a skew-product semiflow generated by scalar differential equations with small delay can be found in the work of Alonso, Obaya and Sanz [12]. Smith and Thieme [13] showed that nonlinear autonomous differential equations with small delay generate a monotone semiflow with respect to the exponential ordering, and that this monotonicity has important dynamical consequences. For the effects of small delays on stability and control, see the paper by Hale and Verduyn Lunel [14].
The results in the papers listed above show that if the delay is small, then there are similarities between the delay differential equation and an associated ordinary differential equation. The description of the associated ordinary differential equation in general requires the knowledge of certain special solutions. Since in most cases the special solutions are not known, the above results are mainly of theoretical interest. In the present paper, in the simple case of linear autonomous differential equations with small delay, we have described the coefficient matrix of the associated ordinary differential equation. Moreover, we have shown that the coefficient matrix can be approximated by a recursively defined sequence of matrices, which yields an effective method for the approximation of the dominant eigenvalues.

Author Contributions

All authors contributed equally to this research and to writing the paper.

Funding

This research was funded by the Hungarian National Research, Development and Innovation Office grant no. K120186 and Széchenyi 2020 under the EFOP-3.6.1-16-2016-00015.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.

References

  1. Hale, J.K.; Verduyn Lunel, S.M. Introduction to Functional Differential Equations; Springer: New York, NY, USA, 1993. [Google Scholar]
  2. Diekmann, O.; van Gils, S.A.; Verduyn Lunel, S.M.; Walther, H.-O. Delay Equations. Functional-, Complex-, and Nonlinear Analysis; Springer: New York, NY, USA, 1995. [Google Scholar]
  3. Bhatia, R.; Elsner, L.; Krause, G. Bounds for the variation of the roots of a polynomial and the eigenvalues of a matrix. Linear Algebra Appl. 1990, 142, 195–209. [Google Scholar] [CrossRef]
  4. Ryabov, Y.A. Certain asymptotic properties of linear systems with small time lag. Soviet Math. 1965, 3, 153–164. (In Russian) [Google Scholar]
  5. Driver, R.D. Linear differential systems with small delays. J. Differ. Equ. 1976, 21, 149–167. [Google Scholar] [CrossRef]
  6. Jarník, J.; Kurzweil, J. Ryabov’s special solutions of functional differential equations. Boll. Un. Mat. Ital. 1975, 11, 198–218. [Google Scholar]
  7. Arino, O.; Pituk, M. More on linear differential systems with small delays. J. Differ. Equ. 2001, 170, 381–407. [Google Scholar] [CrossRef]
  8. Arino, O.; Győri, I.; Pituk, M. Asymptotically diagonal delay differential systems. J. Math. Anal. Appl. 1996, 204, 701–728. [Google Scholar] [CrossRef]
  9. Győri, I.; Pituk, M. Stability criteria for linear delay differential equations. Differ. Integral Equ. 1997, 10, 841–852. [Google Scholar]
  10. Faria, T.; Huang, W. Special solutions for linear functional differential equations and asymptotic behaviour. Differ. Integral Equ. 2005, 18, 337–360. [Google Scholar]
  11. Chicone, C. Inertial and slow manifolds for delay equations with small delays. J. Differ. Equ. 2003, 190, 364–406. [Google Scholar] [CrossRef] [Green Version]
  12. Alonso, A.I.; Obaya, R.; Sanz, A.M. A note on non-autonomous scalar functional differential equations with small delay. C. R. Math. 2005, 340, 155–160. [Google Scholar] [CrossRef]
  13. Smith, H.L.; Thieme, H.R. Monotone semiflows in scalar non-quasi-monotone functional differential equations. J. Math. Anal. Appl. 1990, 150, 289–306. [Google Scholar] [CrossRef] [Green Version]
  14. Hale, J.K.; Verduyn Lunel, S.M. Effects of small delays on stability and control. Oper. Theory Adv. Appl. 2001, 122, 275–301. [Google Scholar]
