
An Improved Diagonal Transformation Algorithm for the Maximum Eigenvalue of Zero Symmetric Nonnegative Matrices

School of Management Science, Qufu Normal University, Rizhao 276800, China
* Author to whom correspondence should be addressed.
Symmetry 2022, 14(8), 1707; https://doi.org/10.3390/sym14081707
Submission received: 19 June 2022 / Revised: 24 July 2022 / Accepted: 2 August 2022 / Published: 17 August 2022
(This article belongs to the Section Mathematics)

Abstract

The irreducibility of nonnegative matrices is an important condition for the diagonal transformation algorithm to succeed. In this paper, we introduce zero symmetry to replace the irreducibility of nonnegative matrices and propose an improved diagonal transformation algorithm for finding the maximum eigenvalue without any partitioning. The improved algorithm retains all of the benefits of the diagonal transformation algorithm while having fewer computations. Numerical examples are reported to show the efficiency of the proposed algorithm. As an application, the improved algorithm is used to check whether a zero symmetric matrix is an H-matrix.

1. Introduction

The maximum eigenvalue of a nonnegative matrix is among the most interesting and important problems in matrix analysis and engineering [1]. Some researchers directly estimated the bounds of the maximum eigenvalue using the structure of nonnegative irreducible matrices [2,3,4,5,6,7,8]. Based on a geometric symmetrization of the powers of a matrix, Szyld et al. [8] presented a sequence of lower bounds for the spectral radius. By computing the row sums, Li et al. [5] presented a new method for finding two-sided bounds for the spectral radius. Adam et al. [2] proposed some new bounds for the spectral radius of a nonnegative matrix based on averages of two row sums. However, it is difficult to pin down the maximum eigenvalue by estimation methods alone. Therefore, some efficient algorithms for computing the eigenpairs of irreducible nonnegative matrices have been proposed [9,10,11,12]. Based on the diagonal transformation, Bunse et al. [9] obtained the maximum eigenvalue by constructing a sequence of similar matrices together with the sequence of minimum row sums { r k } and maximum row sums { R k }. Duan et al. [10] showed that the diagonal transformation algorithm (Algorithm 1) is convergent for irreducible nonnegative matrices. By eliminating redundant operations, Wen et al. [12] improved the diagonal transformation algorithm so that it quickly captures the maximum eigenvalue of an irreducible nonnegative matrix. Unfortunately, a major problem is that the diagonal transformation techniques above may fail to converge for reducible nonnegative matrices. Motivated by this difficulty, we replace irreducibility with a symmetry condition on the zero pattern of the matrix, since symmetry plays an essential part in physics and mathematics [1,13]. To this end, we introduce zero symmetric reducible nonnegative matrices, which can be thought of as combinations of irreducible principal submatrices.
Based on these properties, we propose an improved diagonal transformation algorithm (Algorithm 2) for computing the largest eigenvalue without any partitioning and show that the algorithm is convergent for zero symmetric nonnegative matrices. These constitute the main motivations of this article.
The remainder of this paper is organized as follows. In Section 2, some definitions and preliminary results are recalled. In Section 3, we establish the improved diagonal transformation algorithm and prove that the algorithm is convergent. In Section 4, we give Algorithm 3 to verify whether a zero symmetric matrix is an H-matrix. Numerical examples are provided to illustrate the obtained results.

2. Preliminaries

We start this section with the preliminary results [1] and introduce zero symmetric matrices.
Definition 1. 
(1) A matrix $A \in \mathbb{R}^{n \times n}$ is said to be reducible if there exists a permutation matrix $P \in \mathbb{R}^{n \times n}$ such that
$$P A P^{\top} = \begin{pmatrix} A_{11} & 0 \\ A_{21} & A_{22} \end{pmatrix},$$
where $A_{11} \in \mathbb{R}^{r \times r}$, $A_{22} \in \mathbb{R}^{(n-r) \times (n-r)}$, $A_{21} \in \mathbb{R}^{(n-r) \times r}$, $1 \le r \le n-1$, and $0 \in \mathbb{R}^{r \times (n-r)}$ is the zero matrix. If no such permutation matrix exists, then A is called irreducible.
(2) The spectral radius of A is defined by
$$\rho(A) = \max\{ |\lambda| : \lambda \in \sigma(A) \},$$
where $\sigma(A)$ denotes the set of all eigenvalues of A.
Obviously, $\rho(A)$ is the maximum eigenvalue when A is nonnegative.
Definition 2. 
$A = (a_{ij}) \in \mathbb{R}^{n \times n}$ is called the following:
(1) symmetric if $a_{ij} = a_{ji}$ for all $i, j$;
(2) zero symmetric if
$$a_{ij} = 0 \Rightarrow a_{ji} = 0 \quad \text{and} \quad a_{ji} = 0 \Rightarrow a_{ij} = 0.$$
Clearly, if A is symmetric, then A is zero symmetric. The converse, however, may not hold.
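As an illustration, zero symmetry only compares the zero pattern of a matrix with that of its transpose, so it is easy to test numerically. The following Python/NumPy sketch (the function name is ours, not from the paper) checks Definition 2(2):

```python
import numpy as np

def is_zero_symmetric(A: np.ndarray) -> bool:
    """Definition 2(2): a_ij = 0 exactly when a_ji = 0,
    i.e., the zero pattern of A is symmetric."""
    return bool(np.array_equal(A == 0, A.T == 0))

# A zero symmetric matrix need not be symmetric:
A = np.array([[1.0, 2.0],
              [3.0, 1.0]])          # no zero entries at all, hence zero symmetric
assert is_zero_symmetric(A)
assert not np.array_equal(A, A.T)   # but A is not symmetric
```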
Definition 3. 
Let $A \in \mathbb{R}^{n \times n}$ and $I \subseteq \{1, \dots, n\}$ with $|I| = r$, where $|I|$ denotes the number of elements of I. Then $A_I$ is the r-dimensional submatrix of A consisting of $r^2$ elements, defined by
$$A_I = (a_{ij}), \quad i, j \in I.$$
For a nonnegative irreducible matrix $A = A_0 = (a_{ij}^0)$, we compute the row sums $a_i^0 = \sum_{j=1}^n a_{ij}^0$ ($i = 1, 2, \dots, n$) to obtain the minimum row sum $r_0$ and the maximum row sum $R_0$. Clearly, $r_0 \le \rho(A) \le R_0$ [1]. Following [10], we set $D_k = \mathrm{diag}(a_1^k, a_2^k, \dots, a_n^k)$ and $A_{k+1} = (a_{ij}^{k+1}) = D_k^{-1} A_k D_k$. This yields a sequence of similar matrices $\{A_k\}$, together with the sequence of minimum row sums $\{r_k\}$ and the sequence of maximum row sums $\{R_k\}$. It can be verified that
$$r_0 \le \cdots \le r_k \le r_{k+1} \le \cdots \le \rho(A) \le \cdots \le R_{k+1} \le R_k \le \cdots \le R_0.$$
Based on these properties, Duan et al. [10] established the following diagonal transformation algorithm (Algorithm 1) and convergence theorem for irreducible nonnegative matrices:
Algorithm 1
Step 0. 
Given a nonnegative irreducible matrix B = ( b i j ) R n × n and ϵ > 0 .
Step 1. 
Let A = A 0 = I + B = ( a i j 0 ) .
Step 2. 
Compute
$$a_i^k = \sum_{j=1}^n a_{ij}^k, \quad r_k = \min_{1 \le i \le n} a_i^k, \quad R_k = \max_{1 \le i \le n} a_i^k.$$
If $R_k - r_k < \epsilon$, go to Step 4.
Step 3. 
Compute $D_k = \mathrm{diag}(a_1^k, a_2^k, \dots, a_n^k)$, update $A_{k+1} = D_k^{-1} A_k D_k$, and go to Step 2.
Step 4. 
Output $\rho(B) = \frac{1}{2}(r_k + R_k) - 1$. Stop.
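For concreteness, Algorithm 1 can be sketched in Python/NumPy as follows (the paper's experiments used MATLAB; the function and variable names here are ours). The similarity transformation in Step 3 acts elementwise as a_ij -> (a_j/a_i) a_ij, where a_i are the row sums:

```python
import numpy as np

def algorithm1(B: np.ndarray, eps: float = 1e-10, max_iter: int = 10_000) -> float:
    """Diagonal transformation algorithm of [10] for a nonnegative
    irreducible matrix B."""
    A = np.eye(B.shape[0]) + B                 # Step 1: A_0 = I + B
    for _ in range(max_iter):
        s = A.sum(axis=1)                      # Step 2: row sums a_i^k
        r, R = s.min(), s.max()
        if R - r < eps:                        # the bracket r_k <= rho(A) <= R_k has collapsed
            break
        A = A * (s[np.newaxis, :] / s[:, np.newaxis])  # Step 3: D_k^{-1} A_k D_k, entrywise
    return 0.5 * (r + R) - 1.0                 # Step 4: rho(B) = (r_k + R_k)/2 - 1
```

For example, for the irreducible matrix B = [[1, 2], [3, 2]], whose eigenvalues are 4 and -1, the routine returns 4.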
Theorem 1 
(Theorem 2.5 of [10]). Let B be a nonnegative irreducible matrix. For any $\lambda > 0$, $A = \lambda I + B = (a_{ij})_{n \times n}$ is a nonnegative irreducible matrix with positive diagonal elements. Let $\rho(B)$ be the maximum eigenvalue of B, and let $\{r_k\}$, $\{R_k\}$ be the sequences generated by Algorithm 1. Then it holds that
$$\lim_{k \to \infty} r_k - \lambda = \lim_{k \to \infty} R_k - \lambda = \rho(B).$$

3. Improved Diagonal Transformation Algorithm for the Maximum Eigenvalue

In this section, we give the improved diagonal transformation algorithm for computing the maximum eigenvalue of zero symmetric reducible nonnegative matrices. First, we investigate the properties of zero symmetric reducible nonnegative matrices.
Lemma 1. 
Let B = ( b i j ) R n × n be a zero symmetric reducible nonnegative matrix. Then, the following results hold:
(1) There exists a partition { I 1 , I 2 , , I s } of { 1 , 2 , , n } such that each induced matrix B I i is either an irreducible matrix or a zero matrix for i = 1 , 2 , , s ;
(2) $\rho(B) = \max_{1 \le i \le s} \rho(B_{I_i})$.
Proof. 
(1) Since B is reducible, by Definition 1, we can find a partition $\{I_1, I_2\}$ of $\{1, \dots, n\}$ such that $b_{st} = 0$ for any $s \in I_1$ and $t \in I_2$. Since $B = (b_{ij}) \in \mathbb{R}^{n \times n}$ is zero symmetric, we obtain $b_{ts} = 0$ for any $t \in I_2$ and $s \in I_1$. If both $B_{I_1}$ and $B_{I_2}$ are irreducible, then we are done. Otherwise, we repeat the above analysis for any reducible block obtained above. Since $\{1, \dots, n\}$ is a finite set, this process terminates and we arrive at the desired result.
(2) Since B is a zero symmetric reducible nonnegative matrix, the blocks $B_{I_i}$ ($i = 1, 2, \dots, s$) are principal submatrices of B. Following Theorem 5.3 of [1], we obtain
$$\rho(B) = \max\{\rho(B_{I_1}), \rho(B_{I_2}), \dots, \rho(B_{I_s})\},$$
which implies the desired results.    □
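Lemma 1(2) can be checked numerically on a small example (the matrices below are illustrative, not from the paper): the spectral radius of a block diagonal zero symmetric matrix equals the largest spectral radius among its diagonal blocks.

```python
import numpy as np

B1 = np.array([[0.0, 2.0],
               [2.0, 0.0]])             # irreducible, rho(B1) = 2
B2 = np.array([[1.0, 3.0],
               [1.0, 1.0]])             # irreducible, rho(B2) = 1 + sqrt(3)
B = np.block([[B1, np.zeros((2, 2))],
              [np.zeros((2, 2)), B2]])  # zero symmetric and reducible

rho = max(abs(np.linalg.eigvals(B)))
assert np.isclose(rho, 1.0 + np.sqrt(3.0))   # = max(rho(B1), rho(B2))
```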
In general, partitioning a large reducible nonnegative matrix into irreducible blocks is expensive. Hence, it is important to find an algorithm that computes the maximum eigenvalue without any partitioning. Fortunately, we observe that $\{R_k\}$ in the diagonal transformation algorithm is monotonically decreasing and convergent. Following the algorithmic technique of [12], we state the improved diagonal transformation algorithm as follows (Algorithm 2):
Algorithm 2
Step 0 
Given a zero symmetric nonnegative matrix B = ( b i j ) R n × n and ϵ > 0 .
Step 1 
Let A = A 0 = I + B = ( a i j 0 ) n × n .
Step 2 
Compute $p^k = (p_1^k, \dots, p_n^k)$ and $R_k$, where $p_i^k = \sum_{j=1}^n a_{ij}^k$ and $R_k = \max_{1 \le i \le n} p_i^k$. If $R_{k-1} - R_k < \epsilon$, go to Step 4.
Step 3 
Update $a_{ij}^{k+1} = \frac{p_j^k}{p_i^k} a_{ij}^k$. Go to Step 2.
Step 4 
Output $\rho(B) = R_k - 1$. Stop.
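A direct Python/NumPy transcription of Algorithm 2 might look as follows (again a sketch with our own names; the published experiments used MATLAB). Compared with Algorithm 1, only the maximum row sums R_k are tracked, and the entries are updated in place via a_ij <- (p_j/p_i) a_ij:

```python
import numpy as np

def algorithm2(B: np.ndarray, eps: float = 1e-10, max_iter: int = 10_000) -> float:
    """Improved diagonal transformation algorithm for a zero symmetric
    nonnegative matrix B: no r_k and no explicit D_k^{-1} A_k D_k."""
    A = np.eye(B.shape[0]) + B                # Step 1: A_0 = I + B
    R_prev = np.inf
    for _ in range(max_iter):
        p = A.sum(axis=1)                     # Step 2: row sums p_i^k
        R = p.max()
        if R_prev - R < eps:                  # {R_k} decreases monotonically
            break
        R_prev = R
        A = A * (p[np.newaxis, :] / p[:, np.newaxis])  # Step 3: a_ij <- (p_j/p_i) a_ij
    return R - 1.0                            # Step 4: rho(B) = R_k - 1
```

No partitioning is needed: for the reducible zero symmetric matrix with diagonal blocks [[0, 1], [1, 0]] and [3], the routine returns rho(B) = 3 directly.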
In what follows, we show the properties of the sequence { R k } .
Lemma 2. 
Suppose that $B = (b_{ij}) \in \mathbb{R}^{n \times n}$ is a zero symmetric reducible nonnegative matrix, and let the sequence $\{R_k\}$ be generated by Algorithm 2. Then, $\{R_k\}$ is monotonically decreasing and bounded below.
Proof. 
According to Algorithm 2, it is easy to check that $R_k \ge 1$. Now, we show that $\{R_k\}$ is a decreasing sequence. From the definition of $p_i^{k+1}$, it holds that
$$p_i^{k+1} = \sum_{j=1}^n a_{ij}^{k+1} = \sum_{j=1}^n \frac{p_j^k}{p_i^k} a_{ij}^k = \frac{1}{p_i^k} \sum_{j=1}^n a_{ij}^k p_j^k, \quad 1 \le i \le n.$$
It follows from $p_j^k \le \max_{1 \le i \le n} p_i^k = R_k$ that
$$p_i^{k+1} \le \frac{R_k}{p_i^k} \sum_{j=1}^n a_{ij}^k = R_k, \quad 1 \le i \le n,$$
which implies
$$R_{k+1} = \max_{1 \le i \le n} p_i^{k+1} \le R_k.$$
Consequently, $\{R_k\}$ is monotonically decreasing and bounded below.    □
The following theorem demonstrates that the sequence $\{R_k\}$ generated by Algorithm 2 is convergent for zero symmetric reducible nonnegative matrices.
Theorem 2. 
Suppose that B = ( b i j ) R n × n is a zero symmetric reducible nonnegative matrix and the sequence { R k } is generated by Algorithm 2. Then, we have
lim k R k = ρ ( A ) = ρ ( B ) + 1 .
Proof. 
From Lemma 2, $\{R_k\}$ is monotonically decreasing and bounded below. Thus, there exists $\bar{R}$ such that $R_k \to \bar{R}$ as $k \to \infty$. Next, we show that $\bar{R} = \rho(A)$. Since B is zero symmetric, without loss of generality, we assume that A has the block diagonal form
$$A = \begin{pmatrix} A_{I_1} & 0 & \cdots & 0 \\ 0 & A_{I_2} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & A_{I_s} \end{pmatrix},$$
where each block $A_{I_i} = B_{I_i} + I_{I_i}$ ($i = 1, 2, \dots, s$) is either an irreducible matrix or an identity matrix. It follows from Lemma 1 that
$$\rho(A) = \max\{\rho(A_{I_1}), \rho(A_{I_2}), \dots, \rho(A_{I_s})\}.$$
We now break up the argument into three cases.
Case 1: A has a unique maximum eigenvalue. Without loss of generality, we can assume
$$\rho(A) = \rho(A_{I_1}) > \max_{i = 2, \dots, s} \rho(A_{I_i}).$$
Let $\{R_k\}$ be the maximum row sums of the matrix A. Let $\rho(A_{I_i})$ and $R_i^k$ be the maximum eigenvalue and the maximum row sum of $A_{I_i}$ generated by Algorithm 2, respectively. In view of Equation (2), define
$$\alpha_i = \rho(A_{I_1}) - \rho(A_{I_i}), \quad i = 2, \dots, s.$$
Obviously, $\alpha_i > 0$ by Equation (2). Taking into account that each $A_{I_i}$ ($i = 1, 2, \dots, s$) is irreducible, from Theorem 1, we have
$$\lim_{k \to \infty} R_i^k = \rho(A_{I_i}), \quad i = 1, 2, \dots, s.$$
For any $0 < \epsilon < \min_{2 \le i \le s} \alpha_i$, there exists $K$ such that for all $k > K$, one has
$$|R_i^k - \rho(A_{I_i})| < \epsilon,$$
which implies
$$\rho(A_{I_i}) + \epsilon > R_i^k, \quad i = 2, \dots, s.$$
Since $\rho(A_{I_1}) - \rho(A_{I_i}) = \alpha_i$ and $0 < \epsilon < \alpha_i$, it holds that
$$\rho(A_{I_1}) > \rho(A_{I_i}) + \epsilon, \quad i = 2, \dots, s.$$
Recalling Equations (3) and (4), for  k > K , we obtain
$$R_1^k \ge \rho(A_{I_1}) > R_i^k, \quad i = 2, \dots, s.$$
It follows from R k = max { R 1 k , R 2 k , , R s k } and Equation (5) that
R 1 k = R k , k > K .
By Theorem 1, we have
$$\rho(A_{I_1}) = \lim_{k \to \infty} R_1^k = \lim_{k \to \infty} R_k = \rho(A) = \rho(B) + 1.$$
Case 2: A has two maximum eigenvalues. Without loss of generality, we can assume
$$\rho(A) = \rho(A_{I_1}) = \rho(A_{I_2}) > \max_{i = 3, \dots, s} \rho(A_{I_i}).$$
By an argument similar to that leading to Equation (5), there exists $K$ such that for all $k > K$,
$$R_1^k > R_i^k \quad \text{or} \quad R_2^k > R_i^k, \qquad i = 3, \dots, s.$$
For k > K , we deduce R 1 k = R k or R 2 k = R k . It follows from Theorem 1 that
$$\lim_{k \to \infty} R_1^k = \rho(A_{I_1}) \quad \text{or} \quad \lim_{k \to \infty} R_2^k = \rho(A_{I_2}).$$
Hence, we have
$$\rho(A_{I_1}) = \lim_{k \to \infty} R_1^k = \lim_{k \to \infty} R_k = \rho(A) \quad \text{or} \quad \rho(A_{I_2}) = \lim_{k \to \infty} R_2^k = \lim_{k \to \infty} R_k = \rho(A).$$
Case 3: A has more than two maximum eigenvalues. We repeat the process above and can obtain the same convergent conclusions.    □
Remark 1. 
Algorithm 2 has three nice properties: (1) the convergence property is guaranteed; (2) it has fewer calculations since there is no need to calculate { r k } and A k + 1 = D k 1 A k D k ; (3) it obtains the maximum eigenvalue without any partitioning.
In the following, we report some numerical results to show that the new algorithm is efficient. For Algorithm 2, we stop the iteration as soon as $|R_k - R_{k-1}| \le \epsilon$, where $\epsilon = 10^{-10}$.
All tested zero symmetric reducible nonnegative matrices were generated as follows. Given an integer p, randomly generate three zero symmetric $p \times p$ matrices $B_{I_1}$, $B_{I_2}$, and $B_{I_3}$. Let $I_1 = \{1, 2, \dots, p\}$, $I_2 = \{p+1, \dots, 2p\}$, and $I_3 = \{2p+1, \dots, 3p\}$. Then, define $B = \mathrm{diag}(B_{I_1}, B_{I_2}, B_{I_3})$ with all other elements being zero, where $n = 3p$. Clearly, B is reducible. Algorithm 2 was implemented in MATLAB (R2015b), and all the numerical computations were conducted on an Intel 3.60-GHz computer with 8 GB of RAM. The numerical results are reported in Table 1, where ρ is the maximum eigenvalue obtained by the algorithm and cpu(s) denotes the total computing time in seconds. The cpu time is the average over 10 instances for each n. From Table 1, we see that Algorithm 2 is convergent and efficient.
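The test-matrix construction above can be sketched as follows (in Python/NumPy rather than the MATLAB used for the experiments; since the dense random blocks have no zero entries, B is automatically zero symmetric):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_test_matrix(p: int) -> np.ndarray:
    """Zero symmetric reducible test matrix of order n = 3p: three random
    nonnegative p x p diagonal blocks B_I1, B_I2, B_I3, zeros elsewhere."""
    B = np.zeros((3 * p, 3 * p))
    for i in range(3):
        B[i * p:(i + 1) * p, i * p:(i + 1) * p] = rng.random((p, p))
    return B

B = random_test_matrix(100)               # n = 300
assert np.array_equal(B == 0, B.T == 0)   # zero symmetric
```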

4. Identifying an H-Matrix

Identifying an H-matrix, especially a large-scale one, is of important theoretical and practical value in numerical algebra and matrix analysis. A number of effective algorithms for identifying an H-matrix have been presented [14,15,16,17,18]. In this section, we identify whether a zero symmetric reducible matrix is an H-matrix by Algorithm 3, without any partitioning.
Definition 4. 
A matrix A = ( a i j ) R n × n is said to be the following:
(1) 
Strictly diagonally dominant if
$$|a_{ii}| > \sum_{j=1, j \ne i}^n |a_{ij}|, \quad 1 \le i \le n.$$
(2) 
An H-matrix if there exists a diagonal matrix D with positive diagonal elements such that A D is strictly diagonally dominant;
(3) 
An M-matrix if there exists a nonnegative matrix B with the spectral radius ρ ( B ) such that A = s I B , where s > ρ ( B ) .
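Strict diagonal dominance (Definition 4(1)) is straightforward to test row by row; a small sketch (function name ours):

```python
import numpy as np

def is_strictly_diagonally_dominant(A: np.ndarray) -> bool:
    """Definition 4(1): |a_ii| > sum of |a_ij| over j != i, for every row i."""
    abs_A = np.abs(A)
    diag = np.diag(abs_A)
    off_diag_sums = abs_A.sum(axis=1) - diag
    return bool(np.all(diag > off_diag_sums))
```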
Definition 5. 
The comparison matrix of A = ( a i j ) R n × n is the matrix M ( A ) with the elements
$$m_{ij} = \begin{cases} |a_{ii}|, & i = j, \\ -|a_{ij}|, & i \ne j, \end{cases} \qquad i, j = 1, 2, \dots, n.$$
Consequently, a matrix A R n × n is an H-matrix if and only if its comparison matrix is an M-matrix [1,15].
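The comparison matrix of Definition 5 keeps |a_ii| on the diagonal and negates the absolute values off it; a short sketch:

```python
import numpy as np

def comparison_matrix(A: np.ndarray) -> np.ndarray:
    """M(A): m_ii = |a_ii|, m_ij = -|a_ij| for i != j (Definition 5)."""
    M = -np.abs(A)
    np.fill_diagonal(M, np.abs(np.diag(A)))
    return M
```

For example, for A = [[3, -2], [1, -4]] this returns [[3, -2], [-1, 4]].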
Theorem 3. 
Let $A = (a_{ij}) \in \mathbb{R}^{n \times n}$ have nonzero diagonal entries. Then, A is an H-matrix if and only if $\rho(|I - D_A^{-1} A|) < 1$, where $D_A$ denotes the diagonal matrix with the same dimensions and diagonal entries as A, and $|I - D_A^{-1} A|$ denotes the entrywise absolute value of $I - D_A^{-1} A$.
Proof. 
For necessity, we decompose the comparison matrix M ( A ) of A as follows:
$$M(A) = D_{M(A)} - E,$$
where $D_{M(A)}$ denotes the diagonal matrix with the same dimensions and diagonal entries as $M(A)$ and E is a nonnegative matrix. Taking into account that the diagonal entries of A are nonzero, we obtain
$$|I - D_A^{-1} A| = |I - D_{M(A)}^{-1} M(A)|.$$
Since A is an H-matrix, then D A 1 A is an H-matrix. Meanwhile, D M ( A ) 1 M ( A ) is the comparison matrix of D A 1 A . Thus, D M ( A ) 1 M ( A ) is an M-matrix. It follows from Equation (6) that
$$\rho(|I - D_A^{-1} A|) = \rho(|I - D_{M(A)}^{-1} M(A)|) < 1.$$
For sufficiency, since $a_{ii} \ne 0$ for all $1 \le i \le n$ and $\rho(|I - D_A^{-1} A|) < 1$, from Equation (6), one has
$$\rho(|I - D_{M(A)}^{-1} M(A)|) < 1,$$
which implies D M ( A ) 1 M ( A ) is an M-matrix. Since D M ( A ) 1 M ( A ) is the comparison matrix of D A 1 A , then D A 1 A is an H-matrix. Thus, A is an H-matrix.    □
From Theorem 3, we can identify whether A is an H-matrix by computing $\rho(|I - D_A^{-1} A|)$. Indeed, $|I - D_A^{-1} A|$ has the following form:
$$|I - D_A^{-1} A| = \begin{pmatrix} 0 & \frac{|a_{12}|}{|a_{11}|} & \cdots & \frac{|a_{1n}|}{|a_{11}|} \\ \frac{|a_{21}|}{|a_{22}|} & 0 & \cdots & \frac{|a_{2n}|}{|a_{22}|} \\ \vdots & \vdots & \ddots & \vdots \\ \frac{|a_{n1}|}{|a_{nn}|} & \frac{|a_{n2}|}{|a_{nn}|} & \cdots & 0 \end{pmatrix} = B.$$
It is clear that B is nonnegative and zero symmetric when A is zero symmetric. We define
$$C^{(k)} = I + B^{(k)}, \quad k = 0, 1, 2, \dots,$$
where $B^{(k)} = (D^{(k-1)})^{-1} B^{(k-1)} D^{(k-1)}$ and $B^{(0)} = B$. Thus, $B^{(k)}$ and $B^{(k-1)}$ are similar, and so are $B^{(k)}$ and $B^{(0)}$; that is, $B^{(k)}$ has the same eigenvalues as B. Furthermore, the maximum eigenvalue satisfies $\rho(B) = \rho(C) - 1$. Consequently, if $\rho(C) < 2$, then $\rho(|I - D_A^{-1} A|) < 1$, and hence A is an H-matrix.
In the following, we propose Algorithm 3 to judge whether A is an H-matrix:
Algorithm 3
Step 0. 
Given a zero symmetric matrix $A = (a_{ij}) \in \mathbb{R}^{n \times n}$ with $a_{ii} \ne 0$.
Step 1. 
Set $C = I + |I - D_A^{-1} A|$ and compute $\rho(C)$ by Algorithm 2.
Step 2. 
If $\rho(C) < 2$, output "A is an H-matrix". Otherwise, output "A is not an H-matrix".
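Algorithm 3 can be sketched end to end as follows (a Python/NumPy sketch with our own names). It forms B = |I - D_A^{-1} A|, runs the Algorithm 2 iteration on C = I + B, and, in the spirit of Remark 2 below, stops early as soon as the maximum row sum drops below 2:

```python
import numpy as np

def is_h_matrix(A: np.ndarray, eps: float = 1e-10, max_iter: int = 10_000) -> bool:
    """A (zero symmetric, nonzero diagonal) is an H-matrix iff
    rho(|I - D_A^{-1} A|) < 1, i.e., rho(C) < 2 for C = I + |I - D_A^{-1} A|."""
    d = np.diag(A)
    if np.any(d == 0):
        raise ValueError("Algorithm 3 requires nonzero diagonal entries")
    n = A.shape[0]
    B = np.abs(np.eye(n) - A / d[:, np.newaxis])   # |I - D_A^{-1} A|, zero diagonal
    C = np.eye(n) + B
    R_prev = np.inf
    for _ in range(max_iter):
        p = C.sum(axis=1)                          # row sums c_i^(k)
        R = p.max()                                # R^(k) >= rho(C), decreasing
        if R < 2.0 or R_prev - R < eps:            # R^(k) < 2 already certifies an H-matrix
            break
        R_prev = R
        C = C * (p[np.newaxis, :] / p[:, np.newaxis])
    return R < 2.0
```

On the matrix A of Example 1 below, this returns True, consistent with the reported value R^(3) = 1.9628 < 2.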
Remark 2. 
In Algorithm 3, set
$$c_i^{(k)} = \sum_{j=1}^n |c_{ij}^{(k)}|, \qquad R^{(k)} = \max_{1 \le i \le n} c_i^{(k)}.$$
From C ( k ) = I + B ( k ) , k = 0 , 1 , 2 , , we have
$$\rho(B) + 1 \le \cdots \le R^{(2)} \le R^{(1)}.$$
If R ( k ) < 2 , then A is an H-matrix. Therefore, it is not necessary to calculate the exact maximum eigenvalue in some cases.
The following examples illustrate the efficiency of Algorithm 3.
Example 1. 
Consider a zero symmetric reducible matrix A defined by
$$A = \begin{pmatrix} 3 & 2 & 1 & 0 \\ 1 & 3 & 1 & 0 \\ 3 & 2 & 4 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}.$$
It is easy to see that $C = I + |I - D_A^{-1} A|$ is given by
$$C = \begin{pmatrix} 1 & \frac{2}{3} & \frac{1}{3} & 0 \\ \frac{1}{3} & 1 & \frac{1}{3} & 0 \\ \frac{3}{4} & \frac{1}{2} & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}.$$
We may compute R ( 3 ) = 1.9628 by Algorithm 2. Therefore, A is an H-matrix.
Example 2. 
Consider the irreducible matrix A of the example in [15,19], defined by
$$A = \begin{pmatrix} 1 & 1.146392 & 0 & 0 & 0 \\ 0.5 & 1 & 0 & 0.6 & 0 \\ 0 & 0.1 & 1 & 0 & 0.5 \\ 0 & 0.5 & 0 & 1 & 0.5 \\ 0.2 & 0.1 & 0.3 & 0 & 1 \end{pmatrix}.$$
We compute $C = I + |I - D_A^{-1} A|$ as follows:
$$C = \begin{pmatrix} 1 & 1.146392 & 0 & 0 & 0 \\ 0.5 & 1 & 0 & 0.6 & 0 \\ 0 & 0.1 & 1 & 0 & 0.5 \\ 0 & 0.5 & 0 & 1 & 0.5 \\ 0.2 & 0.1 & 0.3 & 0 & 1 \end{pmatrix}.$$
Under Algorithm 3, A is not an H-matrix since ρ ( C ) = 2.0000 . Compared with Algorithm AH 2 in [15], which requires 37 iterations to verify that A is not an H-matrix, Algorithm 3 requires only 20 iterations and takes 0.0002 s.

5. Conclusions

In this paper, we proposed an improved diagonal transformation algorithm to compute the maximum eigenvalue of zero symmetric nonnegative matrices, which inherits all of the advantages of the diagonal transformation algorithm while requiring fewer computations. In addition, the improved algorithm can also handle irreducible matrices more efficiently than the diagonal transformation algorithms of [10,12]. Based on the improved diagonal transformation algorithm, we established Algorithm 3 to quickly judge whether a zero symmetric matrix is an H-matrix. Further studies could develop diagonal transformation algorithms for eigenvalue problems in multilinear algebra.

Author Contributions

Writing—original draft, editing, and software, G.W.; supervision, writing—review, and funding acquisition, J.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the Natural Science Foundation of Shandong Province (ZR2020MA025) and the National Natural Science Foundation of China (12071250).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Henryk, M. Nonnegative Matrices; John Wiley: Hoboken, NJ, USA, 1988.
  2. Adam, M.; Aggeli, D.; Aretaki, A. Some new bounds on the spectral radius of nonnegative matrices. AIMS Math. 2020, 5, 701–716.
  3. Dursun, T.; Kirkland, S. A sequence of upper bounds for the Perron root of a nonnegative matrix. Linear Algebra Appl. 1998, 273, 23–28.
  4. Huang, G.; Yin, F.; Guo, K. The lower and upper bounds on Perron root of nonnegative irreducible matrices. J. Comput. Appl. Math. 2008, 217, 259–267.
  5. Li, L.; Huang, T. Two sided bounds for the spectral radius of nonnegative irreducible matrices. Acta Math. Appl. Sin. Engl. Ser. B 2008, 31, 271–277.
  6. Lu, L.; Ng, M. Localization of Perron roots. Linear Algebra Appl. 2004, 392, 103–117.
  7. Rojo, O.; Soto, R. Perron root bounding for nonnegative persymmetric matrices. Comput. Math. Appl. 1996, 31, 69–76.
  8. Szyld, D. A sequence of lower bounds for the spectral radius of nonnegative matrices. Linear Algebra Appl. 1992, 174, 239–242.
  9. Bunse, W. A class of diagonal transformation methods for the computation of the spectral radius of a nonnegative matrix. SIAM J. Numer. Anal. 1981, 18, 693–704.
  10. Duan, F.; Zhang, K. An algorithm of diagonal transformation for Perron root of nonnegative irreducible matrices. Appl. Math. Comput. 2006, 175, 762–772.
  11. Lv, H.; Shang, Y.; Zhang, M. An origin displacement method for calculating maximum eigenvalue of nonnegative matrix under diagonal similarity transformation. J. Beihua Univ. 2019, 20, 588–593.
  12. Wen, C.; Huang, T. A modified algorithm for the Perron root of a nonnegative matrix. Appl. Math. Comput. 2011, 217, 4453–4458.
  13. Byron, F.; Fuller, R. Mathematics of Classical and Quantum Physics; Dover Publications: Mineola, NY, USA, 1992.
  14. Alanelli, M.; Hadjidimos, A. A new iterative criterion for H-matrices. SIAM J. Matrix Anal. Appl. 2007, 29, 160–176.
  15. Alanelli, M.; Hadjidimos, A. A new iterative criterion for H-matrices: The reducible case. Linear Algebra Appl. 2008, 428, 2761–2777.
  16. Guan, J.; Lu, L.; Li, R.; Shao, R. Self-corrective iterative algorithm for generalized diagonally dominant matrices. J. Comput. Appl. Math. 2016, 302, 285–300.
  17. Li, L. On the iterative criterion for generalized diagonally dominant matrices. SIAM J. Matrix Anal. Appl. 2002, 24, 17–24.
  18. Wang, K.; Cao, J.; Pei, H. Robust extreme learning machine in the presence of outliers by iterative reweighted algorithm. Appl. Math. Comput. 2020, 377, 125–186.
  19. Hadjidimos, A. An extended compact profile iterative method criterion for sparse H-matrices. Linear Algebra Appl. 2004, 389, 329–345.
Table 1. Numerical comparisons of Algorithm 2, Wen's algorithm, and Duan's algorithm.

        Algorithm 2           Wen's Algorithm of [12]   Duan's Algorithm of [10]
n       λ          cpu(s)     λ          cpu(s)         λ          cpu(s)
300     56.1780    0.2496     56.1780    0.4220         56.1780    0.7021
600     100.7233   0.8714     100.7233   1.2132         100.7233   2.1438
900     152.9686   1.9056     152.9686   3.4469         152.9686   7.3729
1200    201.5574   3.8674     201.5574   6.6561         201.5574   10.7564
1500    251.2686   7.9249     251.2686   12.6122        251.2686   20.5346
1800    300.3427   12.5756    300.3427   19.4923        300.3427   39.1875
2100    351.0464   26.2648    351.0464   38.9256        351.0464   51.8204
2400    401.8630   38.5693    401.8630   52.4726        401.8630   86.4629
2700    451.1935   48.2369    452.1935   68.7222        452.1935   120.3801
3000    501.3178   61.8363    501.3178   99.5643        501.3178   146.5896

Citation: Wang, G.; Liu, J. An Improved Diagonal Transformation Algorithm for the Maximum Eigenvalue of Zero Symmetric Nonnegative Matrices. Symmetry 2022, 14, 1707. https://doi.org/10.3390/sym14081707