Article

A Fast Algorithm for the Eigenvalue Bounds of a Class of Symmetric Tridiagonal Interval Matrices

Department of Mathematical Sciences, Ball State University, Muncie, IN 47306, USA
* Author to whom correspondence should be addressed.
AppliedMath 2023, 3(1), 90-97; https://doi.org/10.3390/appliedmath3010007
Submission received: 15 September 2022 / Revised: 4 January 2023 / Accepted: 17 January 2023 / Published: 3 February 2023

Abstract

The eigenvalue bounds of interval matrices are often required in some mechanical and engineering fields. In this paper, we improve the theoretical results presented in a previous paper “A property of eigenvalue bounds for a class of symmetric tridiagonal interval matrices” and provide a fast algorithm to find the upper and lower bounds of the interval eigenvalues of a class of symmetric tridiagonal interval matrices.

1. Introduction

In many practical engineering problems, quantities are uncertain but bounded, owing to inaccurate measurements, manufacturing errors, changes in the natural environment, and so on, so they have to be expressed as intervals. Therefore, interval analysis has received much interest not only from engineers but also from mathematicians [1,2,3,4,5]. Rohn in [6] surveyed some of the most important results on interval matrices and other interval linear problems up to that time.
In particular, since it is often necessary to compute the eigenvalue bounds for interval eigenvalue problems in structural analysis, control fields and some related issues in engineering and mechanics, many studies have been conducted on these problems in the past few decades (e.g., [7,8,9,10,11,12,13,14,15]).
The eigenvalue problems with interval matrices dealt with in this paper can be formulated as follows:
$$K u = \lambda u \qquad (1)$$
subject to
$$\underline{K} \le K \le \overline{K} \quad \text{or} \quad \underline{k_{ij}} \le k_{ij} \le \overline{k_{ij}}, \quad \text{for } i, j = 1, 2, \dots, n \qquad (2)$$
where $\underline{K} = (\underline{k_{ij}})$, $\overline{K} = (\overline{k_{ij}})$ and $K = (k_{ij})$ are all $n \times n$ real symmetric matrices. $\underline{K}$ and $\overline{K}$ are known matrices which are composed of the lower and upper bounds of the intervals, respectively. $K$ is an uncertain-but-bounded matrix and ranges over the inequalities in Equation (2). $\lambda$ is the eigenvalue of the eigenvalue problem in Equation (1) with an unknown matrix $K$, and $u$ is the eigenvector corresponding to $\lambda$. All interval quantities are assumed to vary independently within the bounds.
In order to facilitate the expression of the interval matrices, interval matrix notations [2] are used in this paper. The inequalities in Equation (2) can be written as $K \in K^I$, in which $K^I = [\underline{K}, \overline{K}]$ is a symmetric interval matrix. Therefore, the problem can be stated as follows: for a given interval matrix $K^I$, find an eigenvalue interval $\lambda^I$, which is to say
$$\lambda^I = [\underline{\lambda}, \overline{\lambda}] = (\lambda_i^I), \qquad \lambda_i^I = [\underline{\lambda_i}, \overline{\lambda_i}] \qquad (3)$$
such that it encloses all possible eigenvalues $\lambda$ satisfying $K u = \lambda u$ when $K \in K^I = [\underline{K}, \overline{K}] = (k_{ij}^I)$, $K^T = K$, $k_{ij}^I = [\underline{k_{ij}}, \overline{k_{ij}}]$.
Furthermore, let the midpoint matrix of the interval matrix $K^I = [\underline{K}, \overline{K}]$ be defined as
$$K^c = \frac{\underline{K} + \overline{K}}{2} = (k_{ij}^c), \qquad k_{ij}^c = \frac{\overline{k_{ij}} + \underline{k_{ij}}}{2}, \quad i, j = 1, 2, \dots, n \qquad (4)$$
and the uncertainty radius of the interval matrix $K^I = [\underline{K}, \overline{K}]$ be
$$\Delta K = \frac{\overline{K} - \underline{K}}{2} = (\Delta k_{ij}), \qquad \Delta k_{ij} = \frac{\overline{k_{ij}} - \underline{k_{ij}}}{2}, \quad i, j = 1, 2, \dots, n \qquad (5)$$
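As a concrete illustration (our own minimal sketch, not part of the original paper), the following Python code evaluates Equations (4) and (5) for a hypothetical $2 \times 2$ interval matrix stored as its lower-bound and upper-bound matrices.

```python
import numpy as np

# Hypothetical 2x2 symmetric interval matrix, stored as lower- and upper-bound matrices
K_lower = np.array([[3.0, -1.0],
                    [-1.0, 3.0]])
K_upper = np.array([[3.1, 1.0],
                    [1.0, 3.1]])

K_center = (K_lower + K_upper) / 2   # midpoint matrix K^c, Equation (4)
K_radius = (K_upper - K_lower) / 2   # uncertainty radius Delta K, Equation (5)

print(K_center)   # [[3.05 0.  ] [0.   3.05]]
print(K_radius)   # [[0.05 1.  ] [1.   0.05]]
```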
For some special cases of these problems, such as when K is a real symmetric matrix in Equation (1), some methods based on perturbation theory were proposed in [7,9,16,17,18].
From many numerical experiments, we observed that the eigenvalue bounds are often attained at certain vertex matrices (in other words, at the boundary values of the interval entries). Therefore, the conditions under which the eigenvalue bounds are attained at certain vertex matrices have been raised as an interesting question. Under the condition that the signs of the components of the eigenvectors remain invariant, Deif [7] developed an effective method which can yield the exact eigenvalue bounds for the interval eigenvalue problem in Equation (1). As a corollary of Deif’s method, Dimarogonas [17] proposed that the eigenvalue bounds should be reached at some vertex matrix, as is the case under Deif’s condition. However, there exists no criterion for judging Deif’s condition in advance. Methods depending on three theorems dealing with the standard interval eigenvalue problem were introduced in [18], which claimed that, according to the vertex solution theorem and the parameter decomposition solution theorem, the eigenvalue bounds for Equation (1) should be achieved when the matrix entries take their boundary values. Unfortunately, this is not true. There exists a matrix (see the example in the Appendix of [16]) for which the upper or lower endpoint of an eigenvalue interval is achieved when some matrix entries take interior values of the element intervals rather than their endpoints. This contradicts the conclusion in [18]. For symmetric tridiagonal interval matrices, we also have counterexamples, such as
$$\begin{pmatrix} [3, 3.1] & [-1, 1] \\ [-1, 1] & [3, 3.1] \end{pmatrix}.$$
The eigenvalue intervals are $\lambda_1 = [2, 3.1]$ and $\lambda_2 = [3, 4.1]$. The maximum value of $\lambda_1$ is attained at
$$\begin{pmatrix} 3.1 & 0 \\ 0 & 3.1 \end{pmatrix}.$$
This is not a vertex matrix. Therefore, the conditions under which the end values of an eigenvalue interval are achieved when the matrix entries take their boundary values need to be established.
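This counterexample can also be checked numerically. The brute-force scan below (an illustration we add here; the grid resolution is arbitrary) samples the three independent entries over their intervals and confirms that the eigenvalue intervals are approximately $[2, 3.1]$ and $[3, 4.1]$, with the maximum of $\lambda_1$ attained near the interior point where the off-diagonal entry equals 0.

```python
import numpy as np

# Scan the 2x2 interval matrix: diagonal entries in [3, 3.1], off-diagonal entry in [-1, 1]
a_grid = np.linspace(3.0, 3.1, 11)
b_grid = np.linspace(-1.0, 1.0, 201)   # includes b = 0

lam1_min, lam1_max = np.inf, -np.inf
lam2_min, lam2_max = np.inf, -np.inf
argmax_lam1 = None
for a1 in a_grid:
    for a2 in a_grid:
        for b in b_grid:
            lam = np.linalg.eigvalsh(np.array([[a1, b], [b, a2]]))
            lam1_min = min(lam1_min, lam[0])
            lam2_min = min(lam2_min, lam[1])
            lam2_max = max(lam2_max, lam[1])
            if lam[0] > lam1_max:
                lam1_max, argmax_lam1 = lam[0], (a1, a2, b)

print([lam1_min, lam1_max])   # approximately [2.0, 3.1]
print([lam2_min, lam2_max])   # approximately [3.0, 4.1]
print(argmax_lam1)            # approximately (3.1, 3.1, 0.0): the off-diagonal entry is interior, not a vertex
```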
In [19], Yuan et al. considered a special case of the problem in Equation (1). They proved that for a symmetric tridiagonal interval matrix, under some assumptions, the upper and lower bounds of the interval eigenvalues will be achieved at a certain vertex matrix of the interval matrix. Additionally, the assumptions proposed in their paper can be checked by a sufficient condition. This result is important. However, it has a drawback: to determine the upper and lower bounds of an eigenvalue, all $2^{2n-1}$ vertex matrices have to be checked.
In this paper, we present an improved theoretical result based on the work in [19]. From this result, we derive a fast algorithm to find the upper or lower bound of an eigenvalue. Instead of checking $2^{2n-1}$ vertex matrices, we only need to check $2n-1$ matrices.
The rest of the paper is arranged as follows. Section 2 introduces the theoretical results given in [19]. Section 3 provides the improved theoretical result and the fast algorithm to find the upper or lower bound of an eigenvalue of a symmetric tridiagonal interval matrix. Section 4 illustrates the main findings of this paper by means of a simulation example. Section 5 provides further remarks.

2. A Property for Symmetric Tridiagonal Interval Matrices

For the convenience of reading and deriving the results of the next section, we present the results from [19] here.
Let an irreducible symmetric tridiagonal matrix A , which is a normal matrix, be denoted as
$$A = \begin{pmatrix}
a_1 & b_1 & & & \\
b_1 & a_2 & b_2 & & \\
& \ddots & \ddots & \ddots & \\
& & b_{n-2} & a_{n-1} & b_{n-1} \\
& & & b_{n-1} & a_n
\end{pmatrix}. \qquad (6)$$
Obviously, all eigenvalues of A are real.
Several lemmas and corollaries are given below:
Definition 1 
(leading or trailing principal submatrix). Let $D$ be a matrix of order $n$ with diagonal elements $d_{11}, \dots, d_{nn}$. The submatrices of $D$ whose diagonal entries are composed of $d_{11}, \dots, d_{kk}$ for $k = 1, \dots, n$ are called leading principal submatrices of order $k$, and those whose diagonal entries are composed of $d_{n-k+1,\,n-k+1}, \dots, d_{nn}$ for $k = 1, \dots, n$ are called trailing principal submatrices of order $k$.
Let the leading and trailing principal submatrices of order $k$ be denoted as $D_k$ and $D'_k$ for $k = 1, \dots, n$, respectively. The leading and trailing principal minors $|D_k|$ and $|D'_k|$ can be defined similarly.
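As a small illustration (our own code; the helper names are hypothetical), the leading and trailing principal submatrices and the corresponding minors can be computed with numpy as follows.

```python
import numpy as np

def leading_submatrix(D, k):
    """Leading principal submatrix of order k (diagonal entries d_11, ..., d_kk)."""
    return D[:k, :k]

def trailing_submatrix(D, k):
    """Trailing principal submatrix of order k (diagonal entries d_{n-k+1,n-k+1}, ..., d_nn)."""
    n = D.shape[0]
    return D[n - k:, n - k:]

# The corresponding principal minors are the determinants of these submatrices
D = np.diag([1.0, 2.0, 3.0]) + np.diag([0.5, 0.5], 1) + np.diag([0.5, 0.5], -1)
print(np.linalg.det(leading_submatrix(D, 2)))    # |D_2|  = 1*2 - 0.5**2 = 1.75
print(np.linalg.det(trailing_submatrix(D, 2)))   # |D'_2| = 2*3 - 0.5**2 = 5.75
```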
Next, three theorems from the literature are introduced as lemmas below:
Lemma 1 
([20], p. 36). $A$, denoted as in Equation (6), has the following properties:
  • All eigenvalues of A are distinct;
  • The $n-1$ eigenvalues of the leading (trailing) principal submatrix of order $n-1$ separate the $n$ eigenvalues of $A$ strictly.
The following corollary can be easily deduced from Lemma 1:
Corollary 1. 
If the characteristic polynomial $f$ of $A$ satisfies $f(\lambda) = 0$, then $f'(\lambda) \ne 0$. Furthermore, the characteristic polynomial can be rewritten in the form of a determinant $|A - \lambda I|$. The leading (trailing) principal minor of order $n-1$ of $|A - \lambda I|$ does not equal zero if $f(\lambda) = 0$.
Lemma 2 
([21], Th. 3.1). Let $\hat{A} = A_0 \oplus A_1$, in which
$$A_0 = \begin{pmatrix}
a_1 & b_1 & & \\
b_1 & \ddots & \ddots & \\
& \ddots & \ddots & b_{k-1} \\
& & b_{k-1} & a_k
\end{pmatrix}, \qquad
A_1 = \begin{pmatrix}
a_{k+1} & b_{k+1} & & \\
b_{k+1} & \ddots & \ddots & \\
& \ddots & \ddots & b_{n-1} \\
& & b_{n-1} & a_n
\end{pmatrix}.$$
Let the eigenvalues of $A$ be denoted by $\lambda_1 < \lambda_2 < \cdots < \lambda_n$ and those of $\hat{A}$ by $\hat{\lambda}_1 \le \hat{\lambda}_2 \le \cdots \le \hat{\lambda}_n$. Then, it holds that
$$\lambda_1 < \hat{\lambda}_1 < \lambda_2, \qquad \lambda_{n-1} < \hat{\lambda}_n < \lambda_n.$$
The following can easily be obtained from Lemma 2:
Corollary 2. 
If $\lambda$ is the minimum or maximum eigenvalue of $A$, so that $|A - \lambda I| = 0$, then the leading and trailing principal minors of $|A - \lambda I|$ of order $k$ for $k < n$ do not equal zero.
Lemma 3 
([22]). If the $n-1$ eigenvalues of a certain principal submatrix of $A$ of order $n-1$ are distinct, then they separate the $n$ eigenvalues of $A$ strictly.
The proof in [22] is written in Chinese. Its English translation can be found in the Appendix of [19].
From this lemma, it is easy to deduce the following corollary:
Corollary 3. 
If the $n-1$ eigenvalues of each principal submatrix of $A$ of order $n-1$ are distinct, and $|A - \lambda I| = 0$, then the leading and trailing principal minors of $|A - \lambda I|$ of order $k$ with $k < n$ do not equal zero.
The proof of this corollary is in [19].
Next, the main theorem from [19] can be introduced. We give its whole proof here since we need to use its results and notation in the next section:
Theorem 1. 
Let an interval matrix $A^I = [\underline{A}, \overline{A}]$ be a symmetric tridiagonal interval matrix, denoted as
$$A^I = \begin{pmatrix}
[\underline{a_1}, \overline{a_1}] & [\underline{b_1}, \overline{b_1}] & & & \\
[\underline{b_1}, \overline{b_1}] & [\underline{a_2}, \overline{a_2}] & [\underline{b_2}, \overline{b_2}] & & \\
& \ddots & \ddots & \ddots & \\
& & [\underline{b_{n-2}}, \overline{b_{n-2}}] & [\underline{a_{n-1}}, \overline{a_{n-1}}] & [\underline{b_{n-1}}, \overline{b_{n-1}}] \\
& & & [\underline{b_{n-1}}, \overline{b_{n-1}}] & [\underline{a_n}, \overline{a_n}]
\end{pmatrix}. \qquad (7)$$
Let its vertex matrices be expressed as $A^s$ for $s = 1, \dots, 2^{2n-1}$, with diagonal elements being either $\underline{a_i}$ or $\overline{a_i}$ for $i = 1, \dots, n$ and subdiagonal elements being either $\underline{b_j}$ or $\overline{b_j}$ for $j = 1, \dots, n-1$. Let the eigenvalues of $A^I$ be $\lambda_i^I = [\underline{\lambda_i}, \overline{\lambda_i}]$ and the eigenvalues of $A^s$ be $\lambda_i^s$. Here, suppose that $\underline{\lambda_i}$, $\overline{\lambda_i}$ and $\lambda_i^s$ are all arranged in increasing order for $i = 1, \dots, n$.
If $A^I$ satisfies
(a)
its sub-diagonal intervals do not include zero, then
$$\underline{\lambda_i} = \min_s \lambda_i^s \quad \text{and} \quad \overline{\lambda_i} = \max_s \lambda_i^s \qquad \text{for } i = 1, n, \quad s = 1, 2, \dots, 2^{2n-1};$$
Furthermore, if $A^I$ also satisfies
(b)
each of its principal sub-matrices of order $n-1$ possesses $n-1$ non-overlapping eigenvalue intervals, then
$$\underline{\lambda_i} = \min_s \lambda_i^s \quad \text{and} \quad \overline{\lambda_i} = \max_s \lambda_i^s \qquad \text{for } i = 1, \dots, n, \quad s = 1, 2, \dots, 2^{2n-1}.$$
Proof. 
Let the central points and the radii of the entries of A I be denoted, respectively, as
$$a_i = (\underline{a_i} + \overline{a_i})/2, \quad b_j = (\underline{b_j} + \overline{b_j})/2, \quad r_{a_i} = (\overline{a_i} - \underline{a_i})/2, \quad r_{b_j} = (\overline{b_j} - \underline{b_j})/2,$$
for $i = 1, \dots, n$; $j = 1, \dots, n-1$. Denote $2n-1$ real variables as
$$X = (x_{a_1}, \dots, x_{a_n}, x_{b_1}, \dots, x_{b_{n-1}})^T \in \mathbb{R}^{2n-1}. \qquad (9)$$
Therefore, the $i$th diagonal entry of $A^I$ can be expressed as $a_i + r_{a_i} \sin x_{a_i}$ for $i = 1, \dots, n$, and the $j$th subdiagonal entry of $A^I$ can be expressed as $b_j + r_{b_j} \sin x_{b_j}$ for $j = 1, \dots, n-1$.
Let $|A^I - \lambda^I I|$ be the characteristic determinant of $A^I$. The definition of its leading or trailing principal minors is the same as in Definition 1. Obviously, $\lambda_i$ is a function of $X$.
Let $F(X, \lambda_i(X)) = |A^I - \lambda_i(X) I|$. $F$ is differentiable due to the differentiability of all entries of $A^I - \lambda_i(X) I$. Then, $F(X, \lambda_i(X))$ can be expressed as
$$\begin{vmatrix}
a_1 + r_{a_1} \sin x_{a_1} - \lambda & b_1 + r_{b_1} \sin x_{b_1} & & \\
b_1 + r_{b_1} \sin x_{b_1} & a_2 + r_{a_2} \sin x_{a_2} - \lambda & \ddots & \\
& \ddots & \ddots & b_{n-1} + r_{b_{n-1}} \sin x_{b_{n-1}} \\
& & b_{n-1} + r_{b_{n-1}} \sin x_{b_{n-1}} & a_n + r_{a_n} \sin x_{a_n} - \lambda
\end{vmatrix}.$$
Consider the partial derivative of $\lambda$ with respect to $x_{a_1}$ when $F(X, \lambda_i(X)) = 0$. Based on the derivative rule for a determinant, we obtain
$$\left(r_{a_1} \cos x_{a_1} - \frac{\partial \lambda}{\partial x_{a_1}}\right)|D'_{n-1}| - \frac{\partial \lambda}{\partial x_{a_1}}|D_1||D'_{n-2}| - \frac{\partial \lambda}{\partial x_{a_1}}|D_2||D'_{n-3}| - \cdots - \frac{\partial \lambda}{\partial x_{a_1}}|D_{n-1}| = 0.$$
Thus, we obtain that
$$\frac{\partial \lambda}{\partial x_{a_1}} = \frac{r_{a_1} \cos x_{a_1}\, |D'_{n-1}|}{|D'_{n-1}| + |D_1||D'_{n-2}| + |D_2||D'_{n-3}| + \cdots + |D_{n-1}|}.$$
In a similar way, the partial derivative of $\lambda$ with respect to $x_{b_1}$ when $F(X, \lambda_i(X)) = 0$ is
$$\frac{\partial \lambda}{\partial x_{b_1}} = \frac{-2\, r_{b_1} \cos x_{b_1}\, (b_1 + r_{b_1} \sin x_{b_1})\, |D'_{n-2}|}{|D'_{n-1}| + |D_1||D'_{n-2}| + |D_2||D'_{n-3}| + \cdots + |D_{n-1}|}.$$
Moreover, it can be deduced that
$$\frac{\partial \lambda_i}{\partial x_{a_k}} = \frac{r_{a_k} \cos x_{a_k}\, |D_{k-1}||D'_{n-k}|}{D} \quad (k = 1, 2, \dots, n), \qquad
\frac{\partial \lambda_i}{\partial x_{b_l}} = \frac{-2\, r_{b_l} \cos x_{b_l}\, (b_l + r_{b_l} \sin x_{b_l})\, |D_{l-1}||D'_{n-l-1}|}{D} \quad (l = 1, 2, \dots, n-1), \qquad (11)$$
where $D = |D'_{n-1}| + |D_1||D'_{n-2}| + \cdots + |D_{n-2}||D'_1| + |D_{n-1}|$, $|D_0| = |D'_0| = 1$, and $|D_k|$ and $|D'_k|$ denote the leading and trailing principal minors of order $k$ of the characteristic determinant.
From Corollary 1, we know that $D \ne 0$ when $F(X, \lambda_i) = 0$, since $D$ is (up to sign) the derivative of $F$ with respect to $\lambda_i$. Additionally, from Corollary 2, when condition (a) is satisfied and $F(X, \lambda_1) = 0$ (or $F(X, \lambda_n) = 0$), all $|D_i|$ and $|D'_i|$ for $i = 1, 2, \dots, n-1$ do not equal zero. From Lemma 3, the same conclusion can be drawn when conditions (a) and (b) are both satisfied and $F(X, \lambda_i) = 0$.
Therefore, if all partial derivatives in Equation (11) vanish, we must have $\cos x_{a_k} = 0$ and $\cos x_{b_l} = 0$, so the corresponding values of the sines must be $\pm 1$. From the extremal property of a function, the eigenvalue bounds are therefore reached when the matrix entries take their boundary values. □
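As a numerical sanity check on Equation (11) (our own sketch, not part of the original derivation), the code below compares the analytic partial derivative of $\lambda_1$ with respect to one of the variables $x_{a_k}$ against a central finite difference for a hypothetical $4 \times 4$ interval matrix; the data, helper names, and the choice $k = 2$ are illustrative assumptions.

```python
import numpy as np

def tridiag(diag, off):
    """Symmetric tridiagonal matrix from its diagonal and subdiagonal entries."""
    return np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)

def minor_leading(M, k):
    return 1.0 if k == 0 else np.linalg.det(M[:k, :k])

def minor_trailing(M, k):
    n = M.shape[0]
    return 1.0 if k == 0 else np.linalg.det(M[n - k:, n - k:])

# Hypothetical centers a, b, radii ra, rb, and a point X = (xa, xb)
a = np.array([3.0, 5.0, 7.0, 9.0])
b = np.array([-2.0, -3.0, -4.0])
ra = np.array([0.1, 0.1, 0.1, 0.1])
rb = np.array([0.05, 0.05, 0.05])
xa = np.array([0.3, -0.2, 0.1, 0.4])
xb = np.array([0.2, -0.1, 0.3])
n, i = 4, 0                              # check the smallest eigenvalue, lambda_1

def eig_i(xa, xb):
    A = tridiag(a + ra * np.sin(xa), b + rb * np.sin(xb))
    return np.linalg.eigvalsh(A)[i]

lam = eig_i(xa, xb)
M = tridiag(a + ra * np.sin(xa), b + rb * np.sin(xb)) - lam * np.eye(n)
D = sum(minor_leading(M, k - 1) * minor_trailing(M, n - k) for k in range(1, n + 1))

k = 2                                    # analytic derivative from Equation (11), 1-based index
analytic = ra[k - 1] * np.cos(xa[k - 1]) * minor_leading(M, k - 1) * minor_trailing(M, n - k) / D

h = 1e-6                                 # central finite difference in x_{a_2}
xa_p, xa_m = xa.copy(), xa.copy()
xa_p[k - 1] += h
xa_m[k - 1] -= h
numeric = (eig_i(xa_p, xb) - eig_i(xa_m, xb)) / (2 * h)

print(analytic, numeric)                 # the two values should agree to several digits
```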
Remark 1. 
We would like to mention here that condition (b) in Theorem 1 is restrictive. However, the following are true:
(i)
In many engineering problems, whether a matrix satisfies condition (b) can be verified by some sufficient conditions. Yuan, He and Leng provided a theorem in [19] to verify condition (b). Hladík in [15] provided an alternative test which can also verify condition (b).
(ii)
In many engineering problems, we just need to find the interval of the largest eigenvalue, and in this scenario, we do not need to verify condition (b).
(iii)
In [15], the author presented another approach to obtaining eigenvalue intervals for symmetric tridiagonal interval matrices. Compared with the algorithm in [15], our algorithm (which will be presented in the next section) does not need to use eigenvectors. Therefore, it is still competitive.

3. An Improved Theoretical Result and a Fast Algorithm

As we mentioned before, Theorem 1 provides an important property of symmetric tridiagonal interval matrices. However, it helps us little in finding the upper and lower bounds of eigenvalues, since $2^{2n-1}$ vertex matrices need to be checked to determine the upper and lower bounds of an eigenvalue. Below, we will prove a better property:
Definition 2 
(uniformly monotone). Let $G(x_1, \dots, x_n) : E \to \mathbb{R}$ be a function. If, for all $(x_1, x_2, \dots, x_n) \in E$, $G$ is monotonically increasing or monotonically decreasing with respect to $x_i$, then $G$ is said to be uniformly monotone with respect to $x_i$.
Lemma 4 
(multivariate intermediate value theorem). Let $f(\mathbf{x})$ be a continuous function on a closed, connected region. If $f(\mathbf{x})$ has no zero in the interior, then either $f(\mathbf{x}) > 0$ for all $\mathbf{x}$ in the interior or $f(\mathbf{x}) < 0$ for all $\mathbf{x}$ in the interior.
Proof. 
Suppose, to the contrary, that there exist $\mathbf{x}$ and $\mathbf{y}$ in the interior such that $f(\mathbf{x}) < 0$ and $f(\mathbf{y}) > 0$. Take a connected open set $U$ in the interior containing both $\mathbf{x}$ and $\mathbf{y}$. As $f$ is continuous, $f(U)$ is connected and contains both a positive and a negative number, and thus it also contains zero, which is a contradiction. □
Theorem 2. 
If the conditions in Theorem 1 are satisfied and each component of $X$ in Equation (9) is restricted to $[-\pi/2, \pi/2]$, then $\lambda_i(X)$ is uniformly monotone with respect to each component.
Proof. 
In the proof of Theorem 1, we showed that all $|D_i|$ and $|D'_i|$ for $i = 1, 2, \dots, n-1$ do not equal zero. According to Lemma 4, all partial derivatives $\partial \lambda_i / \partial x_{a_k}$ and $\partial \lambda_i / \partial x_{b_l}$ in Equation (11) keep the same sign on $(-\pi/2, \pi/2)$ for all $x_{a_k}$ and $x_{b_l}$, $k = 1, 2, \dots, n$, $l = 1, 2, \dots, n-1$. This means that $\lambda_i(X)$ is uniformly monotone. □
Theorem 2 tells us the following:
  • Assume that the conditions in Theorem 1 are satisfied;
  • Assume a component (for example, $x_{a_1}$) causes $\lambda_i(X)$ to monotonically reach its extreme value when the other components are fixed (from Theorem 1, $x_{a_1}$ will reach one of its end points).
Then, for another component (for example, $x_{a_2}$), once $x_{a_2}$ reaches one of its end points, changing $x_{a_1}$ cannot improve the value of $\lambda_i(X)$. Based on the above analysis, we propose an algorithm to find the upper or lower bound of an eigenvalue. Suppose we need to find the lower bound of the smallest eigenvalue $\lambda_1$. We have Algorithm 1 as follows (for the upper bounds and for other eigenvalues, the algorithms are similar):
Algorithm 1 (Fast algorithm for the lower bound of $\lambda_1$)
1:
Set $a_k = \underline{a_k}$, $b_l = \underline{b_l}$, $k = 1, 2, \dots, n$, $l = 1, 2, \dots, n-1$. Here, $\underline{a_k}$ and $\underline{b_l}$ have the same meaning as in Equation (7). We obtain a matrix $A$ as in Equation (6) and compute its smallest eigenvalue $\lambda_1$.
2:
For $k$ from 1 to $n$:
Let $a_k = \overline{a_k}$. Here, $\overline{a_k}$ has the same meaning as in Equation (7). Therefore, we have a matrix $A'$ and its smallest eigenvalue $\lambda_1'$.
If $\lambda_1' < \lambda_1$: $\lambda_1 \leftarrow \lambda_1'$
Else: $a_k \leftarrow \underline{a_k}$
3:
For $l$ from 1 to $n-1$:
Let $b_l = \overline{b_l}$. Here, $\overline{b_l}$ has the same meaning as in Equation (7). Therefore, we have a matrix $A'$ and its smallest eigenvalue $\lambda_1'$.
If $\lambda_1' < \lambda_1$: $\lambda_1 \leftarrow \lambda_1'$
Else: $b_l \leftarrow \underline{b_l}$
To obtain the upper or lower bound of $\lambda_i$, we just need to check $2n-1$ vertex matrices instead of $2^{2n-1}$ vertex matrices.
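The following Python sketch implements Algorithm 1 as stated above (the function and variable names are our own); it uses numpy's eigvalsh to obtain the smallest eigenvalue of each trial vertex matrix.

```python
import numpy as np

def tridiag(diag, off):
    """Symmetric tridiagonal matrix from its diagonal and subdiagonal entries."""
    return np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)

def lower_bound_lambda1(a_lo, a_hi, b_lo, b_hi):
    """Algorithm 1: greedy search for the lower bound of the smallest eigenvalue.

    a_lo, a_hi: lower/upper bounds of the n diagonal intervals.
    b_lo, b_hi: lower/upper bounds of the n-1 subdiagonal intervals.
    Returns the lower bound of lambda_1 and the vertex entries at which it is attained.
    """
    a_lo, a_hi = np.asarray(a_lo, float), np.asarray(a_hi, float)
    b_lo, b_hi = np.asarray(b_lo, float), np.asarray(b_hi, float)
    a, b = a_lo.copy(), b_lo.copy()                     # Step 1: start from all lower bounds
    lam1 = np.linalg.eigvalsh(tridiag(a, b))[0]

    for k in range(len(a)):                             # Step 2: try the upper bound of each diagonal entry
        a[k] = a_hi[k]
        lam1_new = np.linalg.eigvalsh(tridiag(a, b))[0]
        if lam1_new < lam1:
            lam1 = lam1_new                             # keep the change
        else:
            a[k] = a_lo[k]                              # revert

    for l in range(len(b)):                             # Step 3: try the upper bound of each subdiagonal entry
        b[l] = b_hi[l]
        lam1_new = np.linalg.eigvalsh(tridiag(a, b))[0]
        if lam1_new < lam1:
            lam1 = lam1_new
        else:
            b[l] = b_lo[l]

    return lam1, a, b
```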

4. Numerical Experiment

A numerical example is presented here to illustrate the algorithm. The example comes from a spring-mass system; the practical background is unimportant here and is therefore omitted.
Example 1. 
Calculate the eigenvalue bounds for the symmetric tridiagonal interval matrix
$$A^I = \begin{pmatrix}
[2975, 3025] & [-2015, -1985] & & \\
[-2015, -1985] & [4965, 5035] & [-3020, -2980] & \\
& [-3020, -2980] & [6955, 7045] & [-4025, -3975] \\
& & [-4025, -3975] & [8945, 9055]
\end{pmatrix}$$
It is easy to verify that all conditions in Theorem 1 hold (see [19]). Using Algorithm 1 in Section 3, we can find the results in Table 1.
The results were the same as those in [11].
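For instance, reusing the lower_bound_lambda1 sketch given after Algorithm 1 (and recalling that the subdiagonal intervals of $A^I$ are negative), the lower bound of $\lambda_1$ in Table 1 can be reproduced as follows.

```python
# Interval data of Example 1: diagonal and subdiagonal bounds read off A^I
a_lo, a_hi = [2975, 4965, 6955, 8945], [3025, 5035, 7045, 9055]
b_lo, b_hi = [-2015, -3020, -4025], [-1985, -2980, -3975]

lam1_lower, a_vertex, b_vertex = lower_bound_lambda1(a_lo, a_hi, b_lo, b_hi)
print(lam1_lower)            # about 842.93, the first row of Table 1
print(a_vertex, b_vertex)    # [2975. 4965. 6955. 8945.] [-2015. -3020. -4025.]
```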

5. Conclusions

This paper improved the theoretical results presented in [19] and provided a fast algorithm to find the upper and lower bounds of the interval eigenvalues of a class of symmetric tridiagonal interval matrices. Since this kind of matrix is common in engineering problems, we believe that the algorithm offers strong application value. It remains an open question under what kinds of assumptions the conclusions in Theorems 1 and 2 can be generalized to symmetric but non-tridiagonal interval matrices or to even more general interval matrices.

Author Contributions

Conceptualization, Q.Y.; methodology, Q.Y. and Z.Y.; software, Q.Y.; validation, Z.Y.; formal analysis, Q.Y.; writing—original draft preparation, Q.Y.; writing—review and editing, Q.Y. and Z.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

No new data were created.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Moore, R. Interval Analysis; Prentice-Hall: Englewood Cliffs, NJ, USA, 1966.
  2. Moore, R. Methods and Applications of Interval Analysis; SIAM: Philadelphia, PA, USA, 1979.
  3. Alefeld, G.; Herzberger, J. Introduction to Interval Computations; Academic Press: New York, NY, USA, 1983.
  4. Gong, W.; Sun, X. Adaptive diagonal loading STAP arithmetic based on multistage nested Wiener filter. J. Electron. Meas. Instrum. 2010, 24, 899–904.
  5. Pan, Y.; Lu, Y.; Luo, Y.; Li, S. A research of robust GPS anti-jamming scheme. J. Chongqing Univ. Posts Telecom. 2012, 24, 330–334.
  6. Rohn, J. A Handbook of Results on Interval Linear Problems. Available online: http://uivtx.cs.cas.cz/~rohn/publist/!aahandbook.pdf (accessed on 1 January 2023).
  7. Deif, A. Advanced Matrix Theory for Scientists and Engineers, 2nd ed.; Abacus Press: Tunbridge Wells, UK, 1991; pp. 262–281.
  8. Koyluoglu, H.; Cakmak, A.; Nielsen, S. Interval algebra to deal with pattern loading and structural uncertainties. J. Eng. Mech. 1995, 121, 1149–1157.
  9. Qiu, Z.; Chen, S.; Jia, H. The Rayleigh quotient iteration method for computing eigenvalue bounds of structures with bounded uncertain parameters. Comput. Struct. 1995, 55, 221–227.
  10. Rao, S.; Berke, L. Analysis of uncertain structural systems using interval analysis. AIAA J. 1997, 35, 727–735.
  11. Yuan, Q.; He, Z.; Leng, H. An evolution strategy method for computing eigenvalue bounds of interval matrices. Appl. Math. Comput. 2008, 196, 257–265.
  12. Xia, H.; Zhou, Y. A novel evolution strategy algorithm for solving the standard eigenvalue of the symmetric interval matrix. Comp. Eng. Sci. 2011, 33, 97–101.
  13. Jian, Y. Extremal eigenvalue intervals of symmetric tridiagonal interval matrices. Numer. Linear Algebra Appl. 2017, 24, e2083.
  14. Leng, H. Real eigenvalue bounds of standard and generalized real interval eigenvalue problems. Appl. Math. Comput. 2014, 232, 164–171.
  15. Hladík, M. Eigenvalues of symmetric tridiagonal interval matrices revisited. arXiv 2018, arXiv:1704.03670v2.
  16. Leng, H.; He, Z. Computing eigenvalue bounds of structures with uncertain-but-non-random parameters by the method based on perturbation theory. Commun. Numer. Methods Eng. 2007, 23, 973–982.
  17. Dimarogonas, A. Interval analysis of vibrating systems. J. Sound Vib. 1995, 183, 739–749.
  18. Qiu, Z.; Wang, X. Solution theorems for the standard eigenvalue problem of structures with uncertain-but-bounded parameters. J. Sound Vib. 2005, 282, 381–399.
  19. Yuan, Q.; Leng, H.; He, Z. A property of eigenvalue bounds for a class of symmetric tridiagonal interval matrices. Numer. Linear Algebra Appl. 2011, 18, 707–717.
  20. Jiang, E. The Computation of Symmetric Matrices; Shanghai Press of Science & Technology: Shanghai, China, 1984. (In Chinese)
  21. Li, T.; Zeng, Z. The Laguerre iteration in solving the symmetric tridiagonal eigenproblem. SIAM J. Sci. Comput. 1994, 15, 1145–1173.
  22. Jiang, E. An extension of the interlace theorem. Numer. Math. J. Chin. Univ. 1999, 12, 305–310. (In Chinese)
Table 1. The upper and lower bounds of the eigenvalues.

Eigenvalue bound | Value | Corresponding matrix entries ($k_{11}, k_{22}, k_{33}, k_{44}, k_{12}, k_{23}, k_{34}$)
$\underline{\lambda_1}$ | 842.9251 | (2975, 4965, 6955, 8945, -2015, -3020, -4025)
$\overline{\lambda_1}$ | 967.1082 | (3025, 5035, 7045, 9055, -1985, -2980, -3975)
$\underline{\lambda_2}$ | 3337.0785 | (2975, 4965, 6955, 8945, -1985, -3020, -4025)
$\overline{\lambda_2}$ | 3443.3127 | (3025, 5035, 7045, 9055, -2015, -2980, -3975)
$\underline{\lambda_3}$ | 7002.2828 | (2975, 4965, 6955, 8945, -1985, -2980, -4025)
$\overline{\lambda_3}$ | 7126.8283 | (3025, 5035, 7045, 9055, -2015, -3020, -3975)
$\underline{\lambda_4}$ | 12,560.8377 | (2975, 4965, 6955, 8945, -1985, -2980, -3975)
$\overline{\lambda_4}$ | 12,720.2273 | (3025, 5035, 7045, 9055, -2015, -3020, -4025)
