
# Time-Varying Vector Norm and Lower and Upper Bounds on the Solutions of Uniformly Asymptotically Stable Linear Systems

by
Robert Vrabel
Institute of Applied Informatics, Automation and Mechatronics, Slovak University of Technology in Bratislava, Bottova 25, 917 01 Trnava, Slovakia
Mathematics 2020, 8(6), 915; https://doi.org/10.3390/math8060915
Submission received: 19 May 2020 / Revised: 1 June 2020 / Accepted: 2 June 2020 / Published: 4 June 2020
(This article belongs to the Special Issue Qualitative Theory for Ordinary Differential Equations)

## Abstract

Based on the eigenvalue idea and a time-varying weighted vector norm in the state space $\mathbb{R}^n$, we construct lower and upper bounds on the solutions of uniformly asymptotically stable linear systems. We generalize known results for linear time-invariant systems to linear time-varying ones.
MSC:
34A30; 34L15; 34D23; 37M05

## 1. Introduction

In addition to the Lyapunov stability criteria for the linear system of ordinary differential equations $\dot{x} = A(t)x$, $\dot{x} = dx/dt$, $t \ge t_0$, $x \in \mathbb{R}^n$, other types of conditions guaranteeing stability are often useful. Typically these are sufficient conditions proved by applying the Lyapunov stability theorems [1] or the Gronwall–Bellman inequality [2]; sometimes either technique can be used, and sometimes both appear in the same proof of a stability criterion. One such useful result for the stability analysis of linear systems is the following theorem ([3], p. 132, Theorem 8.2).
Theorem 1.
For the linear system $\dot{x} = A(t)x$, $t \ge t_0$, denote the largest and smallest point-wise eigenvalues of $A^T(t) + A(t)$ by $\lambda_{\max}(t)$ and $\lambda_{\min}(t)$. Then for any $t_0$ and $x(t_0)$ the solution $x(t)$ satisfies
$$\|x(t_0)\|_I \, e^{\frac{1}{2}\int_{t_0}^{t} \lambda_{\min}(\tau)\,d\tau} \le \|x(t)\|_I \le \|x(t_0)\|_I \, e^{\frac{1}{2}\int_{t_0}^{t} \lambda_{\max}(\tau)\,d\tau}, \quad t \ge t_0. \tag{1}$$
Throughout the whole paper it is assumed that the matrix function $A(\cdot): [t_0,\infty) \to \mathbb{R}^{n \times n}$ is continuous.
This theorem belongs, as a special case, to a wider family of sufficient conditions for stability of linear systems based on the "logarithmic measure" of the system matrices ([4], p. 58, Theorem 3), taking into account the fact that for the Euclidean norm $\|\cdot\|_I$ the logarithmic measure of a real matrix $A$ is just the largest eigenvalue of $\frac{1}{2}(A^T + A)$ ([4], p. 41).
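The two-sided estimate (1) is easy to check numerically. A minimal sketch in Python, using a hypothetical stable matrix $A$ chosen for illustration (not taken from the paper):

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical stable test matrix (not from the paper).
A = np.array([[-1.0, 2.0],
              [0.0, -3.0]])

# Point-wise eigenvalues of A^T + A (constant here, since A is constant).
lam = np.linalg.eigvalsh(A.T + A)
lam_min, lam_max = lam[0], lam[-1]

x0 = np.array([1.0, -2.0])
for t in [0.1, 0.5, 1.0, 2.0]:
    xt = expm(A * t) @ x0                      # x(t) = e^{At} x(0)
    lower = np.linalg.norm(x0) * np.exp(0.5 * lam_min * t)
    upper = np.linalg.norm(x0) * np.exp(0.5 * lam_max * t)
    assert lower <= np.linalg.norm(xt) <= upper
```

Note that for this matrix $\lambda_{\max}(t) < 0$, so Corollary 1 below already yields uniform asymptotic stability.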
Our aim in this paper is to prove a more useful theorem, based on the eigenvalue idea, for estimating the asymptotics of the solutions of uniformly asymptotically stable linear systems. The theory is illustrated by two examples.

#### Notations, Definitions and Preliminary Results

Let $\mathbb{R}^n$ denote the $n$-dimensional vector space over the real numbers, $x = (x_1, \dots, x_n)^T \in \mathbb{R}^n$ a column vector, and let the symbol $\|\cdot\|$ refer to any (real) vector norm on $\mathbb{R}^n$. Specifically, for a symmetric, positive definite real matrix $H$ we define the weight-$H$ vector norm $\|x\|_H \triangleq (x^T H x)^{1/2}$. Obviously, for $H = I$ ($I$ = the $n \times n$ identity matrix) we obtain the Euclidean norm $\|x\|_I$. For matrices $M \in \mathbb{R}^{n \times n}$ we will use the induced operator norm. In particular, for the weight-$H$ vector norm on $\mathbb{R}^n$, the induced norm is $\|M\|_H = \left(\lambda_{\max}\left[\hat{M}^T \hat{M}\right]\right)^{1/2}$, where $\hat{M} = H^{1/2} M H^{-1/2}$, as was proved in [5]. Further, $\lambda_i[M]$, $i = 1, \dots, n$, denotes the eigenvalues of the matrix $M$, $\lambda_{\min}[M] = \min\{\lambda_i[M]: i = 1, \dots, n\}$ and, analogously, $\lambda_{\max}[M] = \max\{\lambda_i[M]: i = 1, \dots, n\}$.
In this paper we will deal solely with the uniformly asymptotically (⇔ uniformly exponentially) stable linear systems ([1], Theorem 4.11), ([3], Theorem 6.13); for the different types of stability and their relation, see e.g., [6].
Definition 1
([1,3]). The linear system $\dot{x} = A(t)x$ is called uniformly asymptotically stable (UAS) if there exist finite positive constants $\gamma$, $\lambda$ such that for any $t_0$ and $x(t_0)$ the corresponding solution satisfies
$$\|x(t)\| \le \gamma\, \|x(t_0)\|\, e^{-\lambda(t - t_0)}, \quad t \ge t_0.$$
We recall that the transition matrix of the linear system $\dot{x} = A(t)x$ is $\Phi(t,\tau) \triangleq X(t)X^{-1}(\tau)$, where $X(t)$, $t \ge t_0$, is a fundamental matrix of the system. In particular, if $A(t) = A$ is an $n \times n$ constant matrix, then the transition matrix is $\Phi(t,\tau) = e^{A(t-\tau)}$.
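For a constant matrix this reduction of the transition matrix to the matrix exponential can be checked directly; a minimal sketch (the matrix $A$ below is an arbitrary stable example, not from the paper):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[-1.0, 2.0],
              [0.0, -3.0]])   # hypothetical constant example matrix
t, tau = 1.7, 0.4

X = lambda s: expm(A * s)                    # fundamental matrix with X(0) = I
Phi = X(t) @ np.linalg.inv(X(tau))           # Phi(t, tau) = X(t) X^{-1}(tau)

assert np.allclose(Phi, expm(A * (t - tau)))                    # e^{A(t-tau)}
assert np.allclose(X(tau) @ np.linalg.inv(X(tau)), np.eye(2))   # Phi(t, t) = I
```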
For the following theorem see, e.g., ([1], p. 156, Theorem 4.11) or ([3], p. 102, Theorem 6.7).
Theorem 2.
The linear system $\dot{x} = A(t)x$ is uniformly asymptotically stable if and only if there exist finite positive constants $\gamma$, $\lambda$ such that
$$\|\Phi(t,\tau)\| \le \gamma\, e^{-\lambda(t-\tau)}$$
for all $t$, $\tau$ such that $t \ge \tau \ge t_0$.
Theorem 1 leads to a proof of a simple criterion based on the eigenvalues of $A^T(t) + A(t)$ ([3], p. 133, Corollary 8.4); for a wider context in connection with the so-called "logarithmic measure" of matrices see also, e.g., [7,8,9].
Corollary 1.
If there exist real positive constants $\tilde{\gamma}$, $\tilde{\lambda}$ such that the largest point-wise eigenvalue of $A^T(t) + A(t)$ satisfies
$$\int_{\tau}^{t} \lambda_{\max}\left[A^T(s) + A(s)\right] ds \le \tilde{\gamma} - \tilde{\lambda}(t - \tau) \tag{2}$$
for all $t$, $\tau$ such that $t \ge \tau \ge t_0$, then the linear system $\dot{x} = A(t)x$ is UAS.
This criterion is quite conservative in the sense that many UAS linear systems do not satisfy the condition (2) as demonstrated by the following example.
Example 1.
The system $\dot{x} = Ax$, $t \ge 0$, with
$$A = \begin{pmatrix} 0 & \sqrt{10} \\ -\sqrt{10} & -2 \end{pmatrix}$$
is UAS because $\lambda_{1,2}[A] = -1 \pm 3i$. Because $\lambda_{\max}[A^T + A] = 0$, there does not exist a pair of positive constants $\tilde{\gamma}$, $\tilde{\lambda}$ such that the inequality (2) holds for all $t \ge \tau \ge 0$, and so Corollary 1 is not applicable in this particular case. A straightforward computation by Theorem 1 gives
$$\|x(0)\|_I\, e^{-2t} \le \|x(t)\|_I \le \|x(0)\|_I$$
for all $t \ge 0$.
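The claims of the example are easy to verify numerically; a sketch (the initial state is the one later used in Figure 1):

```python
import numpy as np
from scipy.linalg import expm

s10 = np.sqrt(10.0)
A = np.array([[0.0, s10],
              [-s10, -2.0]])

# Eigenvalues of A are -1 ± 3i (so the system is UAS), yet λ_max[A^T + A] = 0.
assert np.allclose(np.sort(np.linalg.eigvals(A)), [-1.0 - 3.0j, -1.0 + 3.0j])
assert np.isclose(np.linalg.eigvalsh(A.T + A).max(), 0.0)

# Two-sided estimate of Theorem 1: ||x(0)|| e^{-2t} <= ||x(t)|| <= ||x(0)||.
x0 = np.array([-4.0, 3.0])
for t in [0.3, 1.0, 2.5]:
    nt = np.linalg.norm(expm(A * t) @ x0)
    assert np.linalg.norm(x0) * np.exp(-2.0 * t) <= nt <= np.linalg.norm(x0)
```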
Despite such examples, the eigenvalue idea should not be completely rejected. In Theorem 3 below we prove a stronger result for UAS linear systems $\dot{x} = A(t)x$, generalizing Theorem 1 in such a way as to be meaningfully applicable to every UAS system.

## 2. Main Results

The main results of this paper are summarized in the following theorem, generalizing ([5], Theorem 3.1) to linear time-varying systems. Although its claims are mainly of theoretical relevance, providing necessary conditions for exponential stability, within its framework the important results regarding convergent systems were derived in [10], without details and an exact mathematical explanation; for the definitions and a comparison with the notion of incremental stability see also [11]. Moreover, this theorem also provides the lower bound on the solutions, generally classified as difficult to obtain.
Theorem 3.
If the linear system $\dot{x} = A(t)x$ is UAS, where $A(\cdot): [t_0,\infty) \to \mathbb{R}^{n \times n}$ is a continuous matrix function, then there exists a continuous, symmetric and positive definite matrix function $H(\cdot): [t_0,\infty) \to \mathbb{R}^{n \times n}$ such that every solution $x(t)$ of the system satisfies
$$\left(\frac{\lambda_{\min}[H(t)]}{\lambda_{\max}[H(t)]}\right)^{1/2} \|x(t_0)\|_I\, e^{-\frac{1}{2}\int_{t_0}^{t}\frac{d\tau}{\lambda_{\min}[H(\tau)]}} \le \|x(t)\|_I \le \left(\frac{\lambda_{\max}[H(t)]}{\lambda_{\min}[H(t)]}\right)^{1/2} \|x(t_0)\|_I\, e^{-\frac{1}{2}\int_{t_0}^{t}\frac{d\tau}{\lambda_{\max}[H(\tau)]}} \quad \text{for all } t \ge t_0, \tag{3}$$
and there exist two positive real constants $\gamma$, $\lambda$ such that
$$\lambda_{\max}[H(t)] \le \frac{\gamma^2}{2\lambda}.$$
In particular, if $A(t)$ is bounded, $\|A(t)\|_I \le L$ for all $t \ge t_0$, then
$$\frac{1}{2L} \le \lambda_{\min}[H(t)] \le \lambda_{\max}[H(t)] \le \frac{\gamma^2}{2\lambda}. \tag{4}$$
Proof.
Set
$$H(t) = \int_{t}^{\infty} \Phi^T(\tau,t)\,\Phi(\tau,t)\,d\tau, \quad t \ge t_0.$$
In particular, if $A(t) = A$ is a constant matrix, we have
$$H = \int_{0}^{\infty} e^{A^T\tau} e^{A\tau}\,d\tau.$$
We begin with an analysis of the properties of the matrix function $H(\cdot)$. Observe that $H(t)$ is symmetric and positive definite because so is the integrand $\Phi^T(\tau,t)\Phi(\tau,t)$ ([12], Corollary 14.2.10). The use of
• the Rayleigh–Ritz ratio [13],
• the fact that $\|\Phi(\tau,t)\|_I = \|\Phi^T(\tau,t)\|_I$, because every matrix and its transpose have the same characteristic polynomial ([12], Lemma 21.1.2),
• the fact that the spectral radius of the matrix $\Phi^T(\tau,t)\Phi(\tau,t)$ is less than or equal to any induced matrix norm of $\Phi^T(\tau,t)\Phi(\tau,t)$, and
• Theorem 2
yields for every fixed $t \ge t_0$ and $x \in \mathbb{R}^n$ that
$$x^T H(t)x \le \lambda_{\max}\left[\int_t^\infty \Phi^T(\tau,t)\Phi(\tau,t)\,d\tau\right]\|x\|_I^2 \le \left\|\int_t^\infty \Phi^T(\tau,t)\Phi(\tau,t)\,d\tau\right\|_I \|x\|_I^2$$
$$\le \|x\|_I^2 \int_t^\infty \|\Phi(\tau,t)\|_I^2\,d\tau \le \|x\|_I^2 \int_t^\infty \gamma^2 e^{-2\lambda(\tau-t)}\,d\tau = \frac{\gamma^2}{2\lambda}\|x\|_I^2,$$
where $\gamma$, $\lambda$ are the constants given by Theorem 2. As a consequence, $\lambda_{\max}[H(t)] \le \frac{\gamma^2}{2\lambda}$, because equality $x^T H(t)x = \lambda_{\max}[H(t)]\|x\|_I^2$ holds for $x$ equal to the eigenvector corresponding to $\lambda_{\max}[H(t)]$. To prove the left inequality in (4) we will need the following result.
Lemma 1.
If $\|A(t)\|_I \le L$ for all $t \ge t_0$, then the solution $x(t)$ of $\dot{x} = A(t)x$ satisfies
$$\|x(t_0)\|_I\, e^{-L(t-t_0)} \le \|x(t)\|_I \le \|x(t_0)\|_I\, e^{L(t-t_0)}, \quad t \ge t_0. \tag{5}$$
Observe that the right-hand inequality is uninteresting for UAS systems, since such an estimate of $\|x(t)\|_I$ grows exponentially as $t \to \infty$.
Proof.
The claim of the lemma follows immediately from the chain of inequalities
$$\lambda_{\max}\left[A^T(t) + A(t)\right] \le \left\|A^T(t) + A(t)\right\|_I \le 2\|A(t)\|_I \le 2L,$$
$$\lambda_{\min}\left[A^T(t) + A(t)\right] \ge -\left\|A^T(t) + A(t)\right\|_I \ge -2\|A(t)\|_I \ge -2L,$$
and (1). □
Now if $\phi(\tau)$ is the solution of $d\phi/d\tau = A(\tau)\phi$ starting at $(t,x)$, that is, $\phi(\tau) = \Phi(\tau,t)x$, then for all $x \in \mathbb{R}^n$ one gets
$$x^T H(t)x = x^T \left(\int_t^\infty \Phi^T(\tau,t)\Phi(\tau,t)\,d\tau\right) x = \int_t^\infty \phi^T(\tau)\phi(\tau)\,d\tau$$
and, by (5),
$$\int_t^\infty \|\phi(\tau)\|_I^2\,d\tau \ge \|x\|_I^2 \int_t^\infty e^{-2L(\tau-t)}\,d\tau = \frac{1}{2L}\|x\|_I^2.$$
Arguing analogously as above, $\lambda_{\min}[H(t)] \ge \frac{1}{2L}$, and the inequality (4) is proved.
Now we are ready to prove the remaining part of the theorem, namely the inequality (3). Suppose $x(t)$ is a solution of $\dot{x} = A(t)x$ corresponding to a given $t_0$ and nonzero $x(t_0)$. Let us formally consider the time-varying weighted vector norm of the solutions, $\|x(t)\|_{H(t)}$. Then
$$\frac{d}{dt}\|x(t)\|_{H(t)}^2 = \frac{d}{dt}\left(x^T(t)H(t)x(t)\right) = x^T(t)\left[A^T(t)H(t) + \dot{H}(t) + H(t)A(t)\right]x(t). \tag{6}$$
Now we show that the function $H(t)$ satisfies the time-varying Lyapunov equation (e.g., [1,3,14])
$$\dot{H}(t) + A^T(t)H(t) + H(t)A(t) = -I.$$
Using the equations ([15], p. 70) and ([3], p. 62)
$$\frac{\partial}{\partial t}\Phi(\tau,t) = -\Phi(\tau,t)A(t), \quad \frac{\partial}{\partial t}\Phi^T(\tau,t) = -A^T(t)\Phi^T(\tau,t)$$
and
$$\lim_{\tau \to \infty}\Phi(\tau,t) = 0 \ (\Leftarrow \text{UAS}), \quad \Phi(t,t) = I,$$
we obtain that
$$\dot{H}(t) = \int_t^\infty \Phi^T(\tau,t)\,\frac{\partial}{\partial t}\Phi(\tau,t)\,d\tau + \int_t^\infty \left(\frac{\partial}{\partial t}\Phi^T(\tau,t)\right)\Phi(\tau,t)\,d\tau - I$$
$$= -\left(\int_t^\infty \Phi^T(\tau,t)\Phi(\tau,t)\,d\tau\right) A(t) - A^T(t)\int_t^\infty \Phi^T(\tau,t)\Phi(\tau,t)\,d\tau - I$$
$$= -A^T(t)H(t) - H(t)A(t) - I.$$
Returning to (6), $\frac{d}{dt}\|x(t)\|_{H(t)}^2 = -\|x(t)\|_I^2$. Dividing through by $\|x(t)\|_{H(t)}^2$, which is positive at each $t \ge t_0$, the Rayleigh–Ritz ratio yields
$$-\frac{1}{\lambda_{\min}[H(t)]} \le \frac{\frac{d}{dt}\|x(t)\|_{H(t)}^2}{\|x(t)\|_{H(t)}^2} = -\frac{\|x\|_I^2}{x^T H(t)x} \le -\frac{1}{\lambda_{\max}[H(t)]}.$$
Integrating from $t_0$ to any $t \ge t_0$ one gets
$$-\int_{t_0}^{t}\frac{d\tau}{\lambda_{\min}[H(\tau)]} \le \ln\|x(t)\|_{H(t)}^2 - \ln\|x(t_0)\|_{H(t)}^2 \le -\int_{t_0}^{t}\frac{d\tau}{\lambda_{\max}[H(\tau)]}.$$
Exponentiation followed by taking the nonnegative square root gives for all $t \ge t_0$ the inequality
$$\|x(t_0)\|_{H(t)}\, e^{-\frac{1}{2}\int_{t_0}^{t}\frac{d\tau}{\lambda_{\min}[H(\tau)]}} \le \|x(t)\|_{H(t)} \le \|x(t_0)\|_{H(t)}\, e^{-\frac{1}{2}\int_{t_0}^{t}\frac{d\tau}{\lambda_{\max}[H(\tau)]}}. \tag{7}$$
Finally, using the "norm conversion rule" between different weights $H_1$ and $H_2$ (recall that $H_1$, $H_2$ are symmetric and positive definite matrices)
$$\frac{\lambda_{\min}[H_1]}{\lambda_{\max}[H_2]} \le \frac{\|x\|_{H_1}^2}{\|x\|_{H_2}^2} = \frac{x^T H_1 x}{x^T H_2 x} \le \frac{\lambda_{\max}[H_1]}{\lambda_{\min}[H_2]} \quad \text{for } x \ne 0,$$
we obtain the inequality (3). □
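For a constant matrix $A$, setting $\dot{H} = 0$ in the Lyapunov equation above shows that $H = \int_0^\infty e^{A^T\tau}e^{A\tau}\,d\tau$ solves $A^T H + H A = -I$, so $H$ can be obtained from a standard Lyapunov solver rather than by quadrature. A minimal sketch in Python (SciPy), using the matrix of Example 1:

```python
import numpy as np
from scipy.linalg import expm, solve_continuous_lyapunov
from scipy.integrate import trapezoid

s10 = np.sqrt(10.0)
A = np.array([[0.0, s10],
              [-s10, -2.0]])           # matrix from Example 1

# solve_continuous_lyapunov(a, q) solves a X + X a^T = q;
# with a = A^T and q = -I this is exactly A^T H + H A = -I.
H = solve_continuous_lyapunov(A.T, -np.eye(2))

# Cross-check against quadrature of the defining integral (integrand ~ e^{-2τ}).
taus = np.linspace(0.0, 20.0, 4001)
vals = np.array([expm(A.T * s) @ expm(A * s) for s in taus])
H_quad = trapezoid(vals, taus, axis=0)

assert np.allclose(H, H_quad, atol=1e-3)
assert np.allclose(H, [[3/5, s10/20], [s10/20, 1/2]])
```

The last assertion anticipates the closed form of $H$ computed in Example 2 below.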
Remark 1.
Combining ([5], Lemma 2.3, Theorem 2.1) and ([4], p. 58, Theorem 3) we obtain
$$\|x(t_0)\|_{\tilde{H}}\, e^{-\int_{t_0}^{t}\frac{d\tau}{\lambda_{\min}[\tilde{H}]}} \le \|x(t)\|_{\tilde{H}} \le \|x(t_0)\|_{\tilde{H}}\, e^{-\int_{t_0}^{t}\frac{d\tau}{\lambda_{\max}[\tilde{H}]}},$$
which is a special case of (7) if $H(t) = \tilde{H}/2$. Observe that $\tilde{H}$ in [5] satisfies the Lyapunov equation $A^T\tilde{H} + \tilde{H}A = -2I$. Thus, Theorem 3 represents a generalization to time-varying systems. Moreover, because $x(t) = \Phi(t,t_0)x(t_0)$, from the properties of the induced matrix norm we have
$$\left(\frac{\lambda_{\min}[H(t)]}{\lambda_{\max}[H(t)]}\right)^{1/2} e^{-\frac{1}{2}\int_{t_0}^{t}\frac{d\tau}{\lambda_{\min}[H(\tau)]}} \le \|\Phi(t,t_0)\|_I \le \left(\frac{\lambda_{\max}[H(t)]}{\lambda_{\min}[H(t)]}\right)^{1/2} e^{-\frac{1}{2}\int_{t_0}^{t}\frac{d\tau}{\lambda_{\max}[H(\tau)]}}$$
for $t \ge t_0$. The general idea of the proof follows, e.g., the proof of ([3], p. 100, Theorem 6.4), and so the proof is omitted here. The last inequality generalizes ([5], Theorem 3.1) to linear time-varying systems; moreover, we also get the lower bound on the solutions.

## 3. Simulation Results

Example 2 (Example 1 revisited).
Let us consider again the system from Example 1. One gets
$$e^{At} = \frac{e^{-t}}{3}\begin{pmatrix} 3\cos 3t + \sin 3t & \sqrt{10}\,\sin 3t \\ -\sqrt{10}\,\sin 3t & 3\cos 3t - \sin 3t \end{pmatrix}$$
and
$$H = \int_0^\infty e^{A^T\tau}e^{A\tau}\,d\tau = \begin{pmatrix} \frac{3}{5} & \frac{\sqrt{10}}{20} \\ \frac{\sqrt{10}}{20} & \frac{1}{2} \end{pmatrix}.$$
Since the eigenvalues of $H$ are $\lambda_{\min}[H] = \frac{11}{20} - \frac{\sqrt{11}}{20}$, $\lambda_{\max}[H] = \frac{11}{20} + \frac{\sqrt{11}}{20}$, the inequality (7) for $t_0 = 0$ becomes
$$\|x(0)\|_H\, e^{-\frac{10t}{11-\sqrt{11}}} \le \|x(t)\|_H \le \|x(0)\|_H\, e^{-\frac{10t}{11+\sqrt{11}}}, \tag{8}$$
where $\|x\|_H = \left(\frac{3}{5}x_1^2 + \frac{\sqrt{10}}{10}x_1x_2 + \frac{1}{2}x_2^2\right)^{1/2}$. The result of a simulation in the Matlab environment demonstrating the effectiveness of the developed approach is depicted in Figure 1.
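The eigenvalues of $H$ and the two-sided bound (8) can be verified numerically along the exact trajectory $x(t) = e^{At}x(0)$; a sketch using the initial state of Figure 1:

```python
import numpy as np
from scipy.linalg import expm, solve_continuous_lyapunov

s10, s11 = np.sqrt(10.0), np.sqrt(11.0)
A = np.array([[0.0, s10],
              [-s10, -2.0]])

H = solve_continuous_lyapunov(A.T, -np.eye(2))   # = [[3/5, √10/20], [√10/20, 1/2]]
assert np.allclose(H, [[3/5, s10/20], [s10/20, 1/2]])
assert np.allclose(np.linalg.eigvalsh(H), [(11 - s11) / 20, (11 + s11) / 20])

h_norm = lambda x: float(x @ H @ x) ** 0.5       # ||x||_H = (x^T H x)^{1/2}

x0 = np.array([-4.0, 3.0])                       # initial state from Figure 1
for t in [0.2, 1.0, 3.0]:
    xt = expm(A * t) @ x0
    assert (h_norm(x0) * np.exp(-10 * t / (11 - s11))
            <= h_norm(xt)
            <= h_norm(x0) * np.exp(-10 * t / (11 + s11)))
```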
Example 3.
For the linear time-varying system $\dot{x} = A(t)x$, $t \ge 0$, with
$$A(t) = \begin{pmatrix} -1 & e^{-t} \\ 0 & -3 \end{pmatrix}$$
the fundamental matrix of the system (see [6]) is
$$X(t) = \begin{pmatrix} e^{-t} & \frac{e^{-t}}{3} - \frac{e^{-4t}}{3} \\ 0 & e^{-3t} \end{pmatrix}.$$
The eigenvalues of $A^T(t)A(t)$, $t \ge 0$, satisfy
$$\lambda_1\left[A^T(t)A(t)\right] = \frac{e^{-2t}}{2} - \frac{e^{-2t}}{2}\left[(4e^{2t}+1)(16e^{2t}+1)\right]^{1/2} + 5 \to 1$$
as $t \to \infty$,
$$\lambda_2\left[A^T(t)A(t)\right] = \frac{e^{-2t}}{2} + \frac{e^{-2t}}{2}\left[(4e^{2t}+1)(16e^{2t}+1)\right]^{1/2} + 5 \to 9$$
as $t \to \infty$; $\lambda_1[A^T(t)A(t)] < \lambda_2[A^T(t)A(t)]$ for all $t \ge 0$, and $\|A(0)\|_I = 3.1796$, $\|A(t)\|_I = \left(\lambda_{\max}\left[A^T(t)A(t)\right]\right)^{1/2} \to 3$ (monotonically) as $t \to \infty$; therefore the constant $L$ in (4) is equal to $\|A(0)\|_I = 3.1796$ (Figure 2).
The transition matrix is
$$\Phi(t,\tau) = X(t)X^{-1}(\tau) = \begin{pmatrix} e^{\tau-t} & \frac{e^{-t}}{3} - \frac{e^{3\tau-4t}}{3} \\ 0 & e^{3(\tau-t)} \end{pmatrix}$$
and the matrix function $H(t)$ from Theorem 3 is
$$H(t) = \int_t^\infty \Phi^T(\tau,t)\Phi(\tau,t)\,d\tau = \begin{pmatrix} \frac{1}{2} & \frac{e^{-t}}{10} \\ \frac{e^{-t}}{10} & \frac{e^{-2t}}{40} + \frac{1}{6} \end{pmatrix}$$
with the eigenvalues
$$\lambda_{\min}[H(t)] = \frac{e^{-2t}}{80} - \frac{e^{-2t}}{240}\left(336e^{2t} + 1600e^{4t} + 9\right)^{1/2} + \frac{1}{3} \to \frac{1}{6},$$
$$\lambda_{\max}[H(t)] = \frac{e^{-2t}}{80} + \frac{e^{-2t}}{240}\left(336e^{2t} + 1600e^{4t} + 9\right)^{1/2} + \frac{1}{3} \to \frac{1}{2}$$
as $t \to \infty$ (Figure 3).
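Both the closed form of $H(t)$ and the eigenvalue limits can be checked numerically; the sketch below evaluates the defining integral by quadrature at the arbitrarily chosen time $t = 0.7$:

```python
import numpy as np
from scipy.integrate import trapezoid

def Phi(tau, t):
    # Transition matrix of Example 3, evaluated as Phi(tau, t) with tau >= t.
    return np.array([[np.exp(t - tau), (np.exp(-tau) - np.exp(3*t - 4*tau)) / 3.0],
                     [0.0, np.exp(3 * (t - tau))]])

def H(t):
    # Closed form of H(t) from the text.
    return np.array([[0.5, np.exp(-t) / 10.0],
                     [np.exp(-t) / 10.0, np.exp(-2*t) / 40.0 + 1.0/6.0]])

t = 0.7
taus = np.linspace(t, t + 25.0, 5001)
vals = np.array([Phi(s, t).T @ Phi(s, t) for s in taus])
assert np.allclose(trapezoid(vals, taus, axis=0), H(t), atol=1e-4)

# Eigenvalue limits: λ_min[H(t)] → 1/6 and λ_max[H(t)] → 1/2 as t → ∞.
assert np.allclose(np.linalg.eigvalsh(H(30.0)), [1/6, 1/2], atol=1e-6)
```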
The integrals in (3) can be calculated explicitly:
$$-\frac{1}{2}\int_0^t \frac{d\tau}{\lambda_{\min}[H(\tau)]} = \frac{3}{2}\ln(\rho - 1) - \frac{5}{2}\ln\left(\frac{2\sqrt{6}}{5} - \rho + \frac{7}{5}\right) + \frac{1}{2}\ln\left[(\rho + 1)\left(2\sqrt{6} - \rho + 5\right)\right] + 3.2375954052 \tag{9}$$
and
$$-\frac{1}{2}\int_0^t \frac{d\tau}{\lambda_{\max}[H(\tau)]} = \frac{3}{2}\ln(\rho + 1) - \frac{5}{2}\ln\left(\frac{2\sqrt{6}}{5} + \rho + \frac{7}{5}\right) + \frac{1}{2}\ln\left[(\rho - 1)\left(2\sqrt{6} + \rho + 5\right)\right] + 2.1447615497, \tag{10}$$
where
$$\rho = \left(\frac{100e^{2t} + 3\sqrt{6} + \frac{21}{2}}{100e^{2t} - 3\sqrt{6} + \frac{21}{2}}\right)^{1/2}.$$
The results of the simulation (the solution of the system together with the lower and upper bounds) are depicted in Figure 4.
Analyzing the properties of the matrix function $H(t)$, it is obvious that
$$\lambda_{\min}[H(0)] = \frac{1}{80} - \frac{\sqrt{1945}}{240} + \frac{1}{3}\ (\approx 0.1621) \le \lambda_{\min}[H(t)],$$
$$\lambda_{\max}[H(t)] \le \lambda_{\max}[H(0)] = \frac{1}{80} + \frac{\sqrt{1945}}{240} + \frac{1}{3}\ (\approx 0.5296)$$
and
$$-\frac{t}{2}\left(\lambda_{\min}[H(0)]\right)^{-1} = -3.0845t \le -\frac{1}{2}\int_0^t \frac{d\tau}{\lambda_{\min}[H(\tau)]},$$
$$-\frac{t}{2}\left(\lambda_{\max}[H(0)]\right)^{-1} = -0.9441t \ge -\frac{1}{2}\int_0^t \frac{d\tau}{\lambda_{\max}[H(\tau)]}$$
for every $t \ge 0$. Thus we obtain the more readable approximate estimate of the solutions
$$0.5531\,\|x(0)\|_I\, e^{-3.0845t} \le \|x(t)\|_I \le 1.8075\,\|x(0)\|_I\, e^{-0.9441t},$$
and Theorem 2 is satisfied for
$$\gamma = \left(\frac{\lambda_{\max}[H(0)]}{\lambda_{\min}[H(0)]}\right)^{1/2} = \left(\frac{0.5296}{0.1621}\right)^{1/2} = 1.8075$$
and
$$\lambda = \frac{1}{2}\left(\lambda_{\max}[H(0)]\right)^{-1} = 0.9441.$$
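This final two-sided estimate can be checked along the exact solution, computed from the fundamental matrix $X(t)$ with $X(0) = I$ and the initial state of Figure 4:

```python
import numpy as np

def x_sol(t, x0):
    # Exact solution x(t) = X(t) x(0) of Example 3, with X(0) = I.
    X = np.array([[np.exp(-t), (np.exp(-t) - np.exp(-4*t)) / 3.0],
                  [0.0, np.exp(-3*t)]])
    return X @ x0

x0 = np.array([2.0, -1.0])     # initial state from Figure 4
n0 = np.linalg.norm(x0)
for t in [0.5, 1.5, 4.0]:
    nt = np.linalg.norm(x_sol(t, x0))
    assert 0.5531 * n0 * np.exp(-3.0845 * t) <= nt <= 1.8075 * n0 * np.exp(-0.9441 * t)
```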

## 4. Conclusions

In this paper we established lower and upper bounds on all solutions of uniformly asymptotically stable linear time-varying systems from the knowledge of one fundamental matrix solution. Our approach is based on the eigenvalue idea and a time-varying metric on the state space $\mathbb{R}^n$. The simulation experiments demonstrate the effectiveness of the proposed method for estimating solutions, generally classified as "difficult to obtain", especially in the case of the lower bounds.

## Funding

The research was supported by the project VEGA 1/0272/18: “Holistic approach of knowledge discovery from production data in compliance with Industry 4.0 concept” and by Research and Development Operational Program (ERDF) [“University Scientific Park: Campus MTF STU—CAMBO”, grant number 26220220179].

## Conflicts of Interest

The author declares no conflict of interest.

## References

1. Khalil, H.K. Nonlinear Systems, 3rd ed.; Prentice-Hall: Englewood Cliffs, NJ, USA, 2002.
2. Chicone, C. Ordinary Differential Equations with Applications; Texts in Applied Mathematics; Springer: New York, NY, USA, 1999; Volume 34.
3. Rugh, W.J. Linear System Theory, 2nd ed.; Prentice-Hall, Inc.: Upper Saddle River, NJ, USA, 1996.
4. Coppel, W.A. Stability and Asymptotic Behavior of Differential Equations; D. C. Heath and Company: Boston, MA, USA, 1965.
5. Hu, G.-D.; Liu, M. The weighted logarithmic matrix norm and bounds of the matrix exponential. Linear Algebra Appl. 2004, 390, 145–154.
6. Zhou, B. On asymptotic stability of linear time-varying systems. Automatica 2016, 68, 266–276.
7. Afanas'ev, V.N.; Kolmanovskii, V.B.; Nosov, V.R. Mathematical Theory of Control Systems Design; Springer Science+Business Media: Dordrecht, The Netherlands, 1996.
8. Dekker, K.; Verwer, J.G. Stability of Runge-Kutta Methods for Stiff Nonlinear Differential Equations; North-Holland: Amsterdam, The Netherlands, 1984.
9. Desoer, C.A.; Haneda, H. The Measure of a Matrix as a Tool to Analyze Computer Algorithms for Circuit Analysis. IEEE Trans. Circuits Theory 1972, 19, 480–486.
10. Lohmiller, W.; Slotine, J.-J.E. On contraction analysis for non-linear systems. Automatica 1998, 34, 683–696.
11. Rüffer, B.S.; van de Wouw, N.; Mueller, M. Convergent systems vs. incremental stability. Syst. Control Lett. 2013, 62, 277–285.
12. Harville, D.A. Matrix Algebra From a Statistician's Perspective; Springer: New York, NY, USA, 2008.
13. Horn, R.A.; Johnson, C.R. Matrix Analysis; Cambridge University Press: Cambridge, UK, 1990.
14. Goh, B.S. Global stability in many-species systems. Am. Nat. 1977, 111, 135–143.
15. Coddington, E.A.; Levinson, N. Theory of Ordinary Differential Equations; McGraw-Hill: New York, NY, USA, 1955.
Figure 1. Solution of the linear time-invariant system from Examples 1 and 2 with initial state $x(0) = (x_1(0), x_2(0))^T = (-4, 3)^T$ (solid line) and the lower and upper bounds given by (8) (dashed lines).
Figure 2. Time development of $\|A(t)\|_I$, $t \ge 0$.
Figure 3. Time development of the functions $\lambda_{\min}(H(t))$ and $\lambda_{\max}(H(t))$, $t \ge 0$.
Figure 4. Solution of the linear time-varying system from Example 3 with initial state $x(0) = (2, -1)^T$ (solid line) and the lower and upper bounds given by (3), (9) and (10) (dashed lines).
