
A New Inversion-Free Iterative Scheme to Compute Maximal and Minimal Solutions of a Nonlinear Matrix Equation

Mathematical Modeling and Applied Computation (MMAC) Research Group, Department of Mathematics, King Abdulaziz University, Jeddah 21589, Saudi Arabia
Mathematics 2021, 9(23), 2994;
Submission received: 27 October 2021 / Revised: 19 November 2021 / Accepted: 19 November 2021 / Published: 23 November 2021
(This article belongs to the Special Issue New Trends and Developments in Numerical Analysis)


The goal of this article is to investigate a new solver, in the form of an iterative method, for the important nonlinear matrix equation (NME) X + A^*X^{-1}A = I, where A is a given n × n matrix, I is the identity matrix and X is the unknown. The minimal and maximal solutions of this NME are discussed as Hermitian positive definite (HPD) matrices. The convergence of the scheme is established. Several numerical tests are also provided to support the theoretical discussion.

1. Subject, Literature, Motivation and Progress

1.1. Problem Statement

In this article, we take into account the nonlinear matrix equation (NME) [1]:

    X + A^*X^{-1}A = I,        (1)

where X ∈ C^{n×n} is its Hermitian positive definite (HPD) solution, I is the identity matrix of the appropriate size, A is an n × n invertible real or complex matrix and A^* is the conjugate transpose of A. The existence of a solution of this NME was discussed in [2], and it was found that if one HPD solution is available, then all of its Hermitian solutions are positive definite. This problem possesses a minimal solution X_S and a maximal solution X_L such that X_S ≤ X ≤ X_L for any HPD solution X; for more details see [3].
The maximal solution of (1) can be derived via

    X_L = I − Y_S,        (2)

where Y_S is the minimal solution of the dual NME:

    A Y^{-1} A^* + Y = I.        (3)
The NME in (1) is an important member of the general NME of the following form:

    X^α + A^*X^{-β}A + B^*X^{-γ}B = I,        (4)

where α, β, γ ≥ 1 and A, B are arbitrary n × n matrices [4,5].
This type of nonlinear equation appears frequently in several disciplines, such as control theory, stochastic filtering and queueing theory; see [6,7,8] for further discussion. On the other hand, it is always a daunting task to find a positive definite solution (PDS) of an NME [9,10], because most of the existing methods are computationally expensive due to the presence of a matrix inverse in each iteration loop; see the discussions given in [11,12].

1.2. Literature

One of the pioneering solvers for computing the maximal solution of (1) is the fixed-point iteration (FPI) below [13]:

    X_0 = I,   X_{k+1} = I − A^*X_k^{-1}A.        (5)
Here, one matrix inverse must be computed in every computing cycle. The authors of [14] presented another iteration scheme (SM) to compute the minimal solution, as follows:

    X_0 = AA^*,   X_{k+1} = X_k(2I − A^{-*}(I − X_k)A^{-1}X_k).        (6)

This scheme is categorized as a method that avoids computing an inverse at each step, since A^{-1} is calculated only once, noting that A^{-*} = (A^{-1})^*.
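To make the contrast between the two strategies concrete, both schemes can be sketched in a few lines. This is a hypothetical illustration in Python with NumPy (the experiments in this paper use Mathematica), and the symmetric test matrix A below is made up, chosen with spectral radius below 1/2 so that both extremal HPD solutions are known to exist.

```python
import numpy as np

# Hypothetical symmetric test matrix with spectral radius < 1/2,
# so that the extremal HPD solutions of X + A* X^{-1} A = I exist.
A = np.array([[0.20, 0.05, 0.00],
              [0.05, 0.15, 0.05],
              [0.00, 0.05, 0.25]])
n = A.shape[0]
I = np.eye(n)

def fpi_maximal(A, iters=200):
    """FPI: X_0 = I, X_{k+1} = I - A* X_k^{-1} A; one inverse per cycle."""
    X = np.eye(A.shape[0])
    for _ in range(iters):
        X = np.eye(A.shape[0]) - A.conj().T @ np.linalg.solve(X, A)
    return X

def sm_minimal(A, iters=200):
    """SM: inversion-free once the single inverse A^{-1} has been formed."""
    m = A.shape[0]
    Ainv = np.linalg.inv(A)          # the only matrix inversion
    Astarinv = Ainv.conj().T         # A^{-*} = (A^{-1})*
    X = A @ A.conj().T               # X_0 = A A*
    for _ in range(iters):
        X = X @ (2*np.eye(m) - Astarinv @ (np.eye(m) - X) @ Ainv @ X)
    return X

XL = fpi_maximal(A)                  # maximal solution
XS = sm_minimal(A)                   # minimal solution
print(np.linalg.norm(XL + A.conj().T @ np.linalg.solve(XL, A) - I))
print(np.linalg.norm(XS + A.conj().T @ np.linalg.solve(XS, A) - I))
```

Both residuals should be near machine precision, and X_S ≤ X_L in the Loewner order, consistent with the ordering of the extremal solutions.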
The main challenge in employing iterative schemes such as (6) is their slow rate of convergence, especially at the initial stage of the process. To overcome this, the author of [15] proposed applying a multiple second-order Newton scheme at the initial phase of convergence when seeking the HPD solution. This is pursued as follows:

    X_{k+1} = X_k((t + 1)I − t H_k X_k),   H_k = A^{-*}(I − X_k)A^{-1},        (7)

for any 1 ≤ t ≤ 2.
Another well-known iteration scheme is the method proposed in [16,17], here denoted by EAM, which is as follows:

    X_0 = Z_0 = I,   Z_{k+1} = I + (I − X_k)Z_k,   X_{k+1} = I − A^*Z_{k+1}A.        (8)
The authors of [18] proposed another iteration scheme, a generalization of Chebyshev's method (SOM), to solve (1), as follows:

    X_0 = AA^*,   X_{k+1} = X_k[3I − A^{-*}(I − X_k)A^{-1}X_k(3I − A^{-*}(I − X_k)A^{-1}X_k)].        (9)
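For reference, EAM and SOM can also be sketched compactly. The following is again a hypothetical Python/NumPy illustration with a made-up symmetric test matrix of spectral radius below 1/2; EAM is run for the maximal solution and SOM for the minimal one.

```python
import numpy as np

A = np.array([[0.20, 0.05, 0.00],   # hypothetical test matrix, ||A|| < 1/2
              [0.05, 0.15, 0.05],
              [0.00, 0.05, 0.25]])
n = A.shape[0]
I = np.eye(n)

# EAM: no matrix inversion at all; converges to the maximal solution.
X, Z = np.eye(n), np.eye(n)
for _ in range(200):
    Z = I + (I - X) @ Z
    X = I - A.conj().T @ Z @ A
XL = X

# SOM: Chebyshev-type scheme; A^{-1} is formed once, then
# X_{k+1} = X_k[3I - H_k X_k (3I - H_k X_k)], H_k = A^{-*}(I - X_k)A^{-1},
# converges to the minimal solution.
Ainv = np.linalg.inv(A)
X = A @ A.conj().T
for _ in range(200):
    HX = Ainv.conj().T @ (I - X) @ Ainv @ X
    X = X @ (3*I - HX @ (3*I - HX))
XS = X

print(np.linalg.norm(XL + A.conj().T @ np.linalg.solve(XL, A) - I))
print(np.linalg.norm(XS + A.conj().T @ np.linalg.solve(XS, A) - I))
```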

1.3. Our Result

Our major result is a new iteration scheme, similar in form to SM, for the extreme solutions of (1). The method needs only one matrix inversion, at the initial stage of the process, and because of this it is categorized among the inversion-free iterative methods. Under mild conditions, the convergence of the scheme is established, together with theoretical results showing how the method produces HPD solutions. In this way, we accelerate long-standing algorithms for an important problem of numerical linear algebra. The experiments considered in this work confirm the competitiveness of the proposed iteration scheme compared to known algorithms. These contributions make this work interesting both practically and theoretically.

1.4. Organization of the Paper

After this short introductory discussion of the issues surrounding iterative methods for NMEs, the remaining parts of this article are structured as follows. In Section 2, the solution of this NME is characterized as the zero of a nonlinear operator equation and the new iteration scheme is derived and analyzed. In Section 3, several numerical experiments are investigated, with implementation details, to confirm the applicability of the method and to compare it with several well-known and state-of-the-art methods from the literature. It is shown by way of illustration that the proposed solution method is useful and provides promising results. Lastly, some concluding remarks are made in Section 4.

2. Novel Iteration Method

2.1. An Equivalent NME

Another way to calculate the HPD solutions of (1) is to compute the zero of the nonlinear operator equation P(X) = 0, where [14]:

    P(X) := −I + X + A^*X^{-1}A = 0.        (10)

The relation (10) can be rewritten as follows [18]:

    P(X) = A^*X^{-1}A + X − I
         = −(I − X) + A^*X^{-1}A
         = −(A^{-*} − XA^{-*})A^* + A^*X^{-1}A
         = A^*X^{-1}A − A(A^{-1}A^{-*} − A^{-1}XA^{-*})A^*
         = (A^{-1}XA^{-*})^{-1} − A(A^{-1}A^{-*} − A^{-1}XA^{-*})A^*.        (11)

Clearly, we can now impose the transformation

    L = A^{-1}XA^{-*},        (12)

to further simplify the nonlinear operator equation:

    G(L) = L^{-1} − A(A^{-1}A^{-*} − L)A^* = 0.        (13)

To obtain an iteration scheme for the minimal HPD solution, it is now necessary to solve the following problem:

    L^{-1} − B = 0,        (14)

    B = A(A^{-1}A^{-*} − L)A^*.        (15)
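This transformation can be checked numerically. In the sketch below (Python/NumPy, with a hypothetical symmetric test matrix), the minimal solution X_S is built in closed form through the eigendecomposition of A: for symmetric A, each eigenvalue a of A contributes the smaller root of x² − x + a² = 0. The property G(L) = L^{-1} − B = 0 then holds at L = A^{-1}X_S A^{-*}.

```python
import numpy as np

A = np.array([[0.20, 0.05, 0.00],   # hypothetical symmetric test matrix
              [0.05, 0.15, 0.05],
              [0.00, 0.05, 0.25]])
I = np.eye(3)

# For symmetric A, the minimal HPD solution is a function of A:
# each eigenvalue a contributes the smaller root of x^2 - x + a^2 = 0.
lam, V = np.linalg.eigh(A)
xs = (1 - np.sqrt(1 - 4*lam**2)) / 2
XS = V @ np.diag(xs) @ V.T

Ainv = np.linalg.inv(A)
L = Ainv @ XS @ Ainv.conj().T                       # the transformation
B = A @ (Ainv @ Ainv.conj().T - L) @ A.conj().T     # note B = I - A L A*
G = np.linalg.inv(L) - B                            # G(L) should vanish
print(np.linalg.norm(G))
```

The check also confirms the compact identity B = I − ALA^*, which is why finding X_S amounts to an inverse-finding problem for B.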

2.2. Our Method

We employ (14) and (15) and view this NME as an inverse-finding problem via Schulz-type iterative schemes (see, e.g., [19]); accordingly, we propose the following method (PM):

    X_0 = AA^*,   H_k = A^{-*}(I − X_k)A^{-1},   T_k = I − H_kX_k,   X_{k+1} = X_k[I + T_k + T_k^2 + T_k^3].        (16)

The matrix iteration (16) requires computing A^{-1} only once, at the initial stage of the implementation, which makes the iteration scheme an inversion-free method for solving (1).
Moreover, it is straightforward to state that the zeros of the map G(L) = L^{-1} − B coincide with the zeros of the map P(X) = X^{-1} − H, where H = A^{-*}(I − X)A^{-1}. In what follows, we therefore seek the inverse of H, which corresponds to the minimal HPD solution of (1).
The derivation of (16) can mainly be viewed as a matrix inverse finder. In fact, the last step of (16) is a method that can be used for finding generalized inverses: it is a member of the family of hyperpower iteration schemes, employing several matrix products to compute the inverse of a given matrix. The iterative method can be rearranged in its Horner form in order to reduce rounding and cancellation errors when dealing with large matrices.
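As a minimal sketch, the proposed scheme (16) can be coded directly, with the hyperpower update written in the Horner form just mentioned. This is a hypothetical Python/NumPy illustration; the small symmetric test matrix is made up, with spectral radius below 1/2 so that the minimal HPD solution exists.

```python
import numpy as np

A = np.array([[0.20, 0.05, 0.00],   # hypothetical test matrix, ||A|| < 1/2
              [0.05, 0.15, 0.05],
              [0.00, 0.05, 0.25]])
n = A.shape[0]
I = np.eye(n)

Ainv = np.linalg.inv(A)        # the only inversion in the whole scheme
Astarinv = Ainv.conj().T       # A^{-*} = (A^{-1})*

X = A @ A.conj().T             # X_0 = A A*
for _ in range(100):
    H = Astarinv @ (I - X) @ Ainv         # H_k = A^{-*}(I - X_k)A^{-1}
    T = I - H @ X                         # T_k = I - H_k X_k
    X = X @ (I + T @ (I + T @ (I + T)))   # Horner form of X_k(I+T+T^2+T^3)

# X now approximates the minimal HPD solution of (1).
residual = np.linalg.norm(X + A.conj().T @ np.linalg.solve(X, A) - I)
print(residual)
```

The residual should drop to roughly machine precision, and (in agreement with Lemma 1 below) the iterates remain numerically Hermitian.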

2.3. Theoretical Investigations

Lemma 1.
Employing the Hermitian matrix X_0 = AA^*, the iteration scheme (16) provides a sequence of Hermitian matrices.
Proof. It is obvious that the seed AA^* is Hermitian, and H_k = A^{-*}(I − X_k)A^{-1}. Therefore, H_0 = A^{-*}A^{-1} − A^{-*}AA^*A^{-1} is also Hermitian, i.e., H_0^* = H_0. Using an inductive argument, we have:

    X_1^* = (X_0[I + T_0 + T_0^2 + T_0^3])^* = [X_0 + X_0(I − H_0X_0) + X_0(I − H_0X_0)^2 + X_0(I − H_0X_0)^3]^* = X_1.        (17)

By assuming X_l^* = X_l (l ≤ k), we then show that:

    X_{l+1}^* = (X_l[I + T_l + T_l^2 + T_l^3])^* = [X_l + X_l(I − H_lX_l) + X_l(I − H_lX_l)^2 + X_l(I − H_lX_l)^3]^* = X_{l+1}.        (18)

Note that H_l^* = H_l has been employed in (18). Hence, the conclusion is valid for l + 1. □
Theorem 1.
Assume that X_k and A are invertible matrices. Then the sequence {X_k} produced by (16) with X_0 = AA^* tends to the minimal solution of the NME (1).
Proof. We start by recalling that H_k = A^{-*}(I − X_k)A^{-1}. Thus, we can write:

    I − H_kX_{k+1} = I − [A^{-*}(I − X_k)A^{-1}](X_k[I + T_k + T_k^2 + T_k^3])
                  = I − [A^{-*}(I − X_k)A^{-1}](X_k[I + (I − H_kX_k) + (I − H_kX_k)^2 + (I − H_kX_k)^3])
                  = (I − H_kX_k)^4.        (19)

Then, by using (19) and taking a norm on both sides, one obtains:

    ‖I − H_kX_{k+1}‖ ≤ ‖I − H_kX_k‖^4.        (20)

Therefore, the inequality (20) leads to convergence as long as the initial approximation satisfies ‖I − HX_0‖ < 1. This yields the following condition:

    ‖I − [A^{-*}(I − X)A^{-1}]X_0‖ < 1.        (21)

Then, by employing the HPD initial approximation X_0 = AA^*, one obtains

    ‖A^{-*}XA^*‖ < 1,        (22)

which holds as long as X is the minimal HPD solution of (1). Furthermore, due to

    0 < X_S = I − A^*X_S^{-1}A = A^*[A^{-*}A^{-1} − X_S^{-1}]A = A^*[X_0^{-1} − X_S^{-1}]A,        (23)

we obtain that X_0^{-1} > X_S^{-1}, thus X_0 < X_S. Hence, by mathematical induction, it is seen that {X_k} converges to X_S. □
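Identity (19) is exact, not merely asymptotic, and can be verified numerically to rounding error. The sketch below (Python/NumPy, hypothetical test matrix) performs a single step of (16) while holding H_k fixed, as in the proof:

```python
import numpy as np

A = np.array([[0.20, 0.05, 0.00],   # hypothetical test matrix
              [0.05, 0.15, 0.05],
              [0.00, 0.05, 0.25]])
I = np.eye(3)
Ainv = np.linalg.inv(A)

X = A @ A.conj().T                        # X_k (here the initial iterate)
H = Ainv.conj().T @ (I - X) @ Ainv        # H_k, held fixed for this step
T = I - H @ X                             # T_k = I - H_k X_k
Xnext = X @ (I + T + T @ T + T @ T @ T)   # X_{k+1} by (16)

lhs = I - H @ Xnext                       # left side of (19)
rhs = np.linalg.matrix_power(T, 4)        # (I - H_k X_k)^4
print(np.linalg.norm(lhs - rhs))
```

The printed discrepancy is on the order of machine precision, and the norm inequality (20) then follows from submultiplicativity.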
It remains to discuss the rate of convergence. Although the proposed method (16) converges to the HPD solution under mild conditions, its q-order of convergence is only linear. As will be seen later, however, due to its structure and the higher rate at which it refines approximate inverses, it converges faster in practice because it reaches the convergence phase more quickly. This linear rate can be improved further by employing an accelerator such as (7) in the initial phase of convergence.

3. Experiments

The computational efficiency and convergence behavior of PM (16) for the NME (1) are hereby illustrated on some numerical problems. All the compared methods were implemented in the programming package Mathematica 12.0 [20].
All computations were performed in standard floating-point arithmetic without applying any compilation. In addition, computations were done on a laptop with 16.0 GB of RAM and an Intel® Core™ i7-9750H processor. Matrix inversions, whenever required, were done using Mathematica 12.0 built-in commands. We recorded the number of iterations required to observe convergence. In the code written to implement the different schemes, each scheme was stopped once the difference between two successive iterates, measured in the infinity norm, fell below a tolerance:

    ‖X_{k+1} − X_k‖_∞ < ϵ.        (24)
Here, PM and SOM were employed along with (7) for the first k = 2 iterations with t = 1.5, after which (16) was used. Without imposing (7), the iteration schemes PM and SOM were still faster than the other competitors, but the superiority was not tangible. Because of this, the multiple Newton method was imposed at the beginning of the iterations to accelerate the convergence as much as possible. The multiple Newton method is useful when there is clear smoothness around the zero of the nonlinear operator equation: it helps reach the convergence region much faster than simple root finders.
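The complete experimental solver then looks roughly as follows. This is a hypothetical Python/NumPy re-implementation (the paper's code is in Mathematica), combining two warm-up steps of the accelerator (7) with t = 1.5, the main iteration (16) in Horner form, and the successive-difference stopping rule in the infinity norm; the test matrix is made up.

```python
import numpy as np

def pm_minimal(A, t=1.5, warmup=2, eps=1e-10, kmax=200):
    """Minimal HPD solution of X + A* X^{-1} A = I: `warmup` steps of the
    multiple Newton accelerator (7), then the proposed scheme (16)."""
    n = A.shape[0]
    I = np.eye(n)
    Ainv = np.linalg.inv(A)                  # single inversion
    Astarinv = Ainv.conj().T                 # A^{-*}
    X = A @ A.conj().T                       # X_0 = A A*
    for k in range(kmax):
        H = Astarinv @ (I - X) @ Ainv
        if k < warmup:                       # accelerator (7)
            Xn = X @ ((t + 1)*I - t * (H @ X))
        else:                                # main scheme (16), Horner form
            T = I - H @ X
            Xn = X @ (I + T @ (I + T @ (I + T)))
        if np.linalg.norm(Xn - X, np.inf) < eps:   # stopping rule
            return Xn
        X = Xn
    return X

A = np.array([[0.20, 0.05, 0.00],            # hypothetical test matrix
              [0.05, 0.15, 0.05],
              [0.00, 0.05, 0.25]])
X = pm_minimal(A)
res = np.linalg.norm(X + A.conj().T @ np.linalg.solve(X, A) - np.eye(3))
print(res)
```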
Example 1.
This example tests the effectiveness of PM on a 3 × 3 real matrix A defined as follows:

    A = ( 0.1    0.13   0.32
          0.23   0.02   0.4
          0.31   0.14   0.16 ),

to find the minimal solution of (1), which is given by

    X_S = ( 0.168846    0.133619    0.0927809
            0.133619    0.244969    0.00671869
            0.0927804   0.00671813  0.216639 ).

The results are given in Figure 1, using ϵ = 10^{-6}.
Note that PM converges to X_S; therefore, in our implementations the other methods were employed to compute the maximal solution of the dual problem AX^{-1}A^* + X − I = 0, so as to have fair comparisons.
The numerical evidence of Example 1 reveals that PM converges much faster than existing solvers for the same purpose: our method requires fewer iterations to reach the stopping criterion. In fact, under mild conditions, the error behavior, convergence and stability of the scheme were observed to agree with the expected norm of the error.
Example 2.
Using ϵ = 10^{-8}, we compare the different iteration schemes for finding the minimal solution of (1) when A is the complex matrix

    A = (1/5) ( 1.2   1.1   0.5   0.3 + 0.1i
                0.1   0.6   0.5   0.7
                0.5   0.5   0.1   0.8 − 0.1i
                1.8   0.5 ),

and the solution is:

    X_S = ( 0.136245                  0.0143856 + 0.00708363i   0.00451262 + 0.00795439i   0.0249869 + 0.0539135i
            0.0143856 − 0.00708363i   0.0489688                 0.0161382 − 0.000846549i   0.0183093 − 0.0268946i
            0.00451262 − 0.00795439i  0.0161382 + 0.000846549i  0.0563786                  0.0284051 + 0.0188581i
            0.0249869 − 0.0539135i    0.0183093 + 0.0268946i    0.0284051 − 0.0188581i     0.197196 ).
The numerical evidence is given in Figure 2. It also reveals that the proposed method is efficient for solving the NME (1). Note that the CPU time of all the methods is low; still, PM converges in less than a second for the given examples and performs faster than the other solvers.

4. Concluding Remarks

We have studied the minimal HPD solution of (1) and developed a novel iterative scheme for it. The proposed solver needs one matrix inversion at the initial stage of the process and is an inversion-free scheme thereafter. Several computational tests were given and found to be in agreement with the theoretical findings. Lastly, based on the computational evidence, we can conclude that the novel scheme is useful and effective for computing the numerical solutions of a broad range of NMEs.


Funding

This research was funded by the Deanship of Scientific Research (DSR), King Abdulaziz University, Jeddah, under Grant No. FP-163-43.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

No special data were used in this study.


Acknowledgments

The Deanship of Scientific Research (DSR) at King Abdulaziz University, Jeddah, Saudi Arabia has funded this project, under Grant No. FP-163-43.

Conflicts of Interest

The author declares that he has no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.


  1. El-Sayed, S.M. An algorithm for computing positive definite solutions of the nonlinear matrix equation X + A*X^{-1}A = I. Int. J. Comput. Math. 2003, 80, 1527–1534.
  2. Engwerda, J.C.; Ran, A.C.M.; Rijkeboer, A.L. Necessary and sufficient conditions for the existence of a positive definite solution of the matrix equation X + A*X^{-1}A = Q. Linear Algebra Appl. 1993, 186, 255–275.
  3. Ivanov, I.G.; Hasanov, V.I.; Minchev, B.V. On matrix equations X ± A*X^{-2}A = I. Linear Algebra Appl. 2001, 326, 27–44.
  4. Huang, N.; Ma, C.; Tang, J. The inversion-free iterative methods for a system of nonlinear matrix equations. Int. J. Comput. Math. 2016, 93, 1470–1483.
  5. Shil, S.; Nashine, H.K. Latest inversion-free iterative scheme for solving a pair of nonlinear matrix equations. J. Math. 2021, 2021, 2667885.
  6. Bougerol, P. Kalman filtering with random coefficients and contractions. SIAM J. Control Optim. 1993, 31, 942–959.
  7. Soheili, A.R.; Toutounian, F.; Soleymani, F. A fast convergent numerical method for matrix sign function with application in SDEs. J. Comput. Appl. Math. 2015, 282, 167–178.
  8. Tian, Y.; Xia, C. On the low-degree solution of the Sylvester matrix polynomial equation. J. Math. 2021, 2021, 4612177.
  9. Guo, C.H.; Lancaster, P. Iterative solution of two matrix equations. Math. Comput. 1999, 68, 1589–1603.
  10. Liu, A.; Chen, G. On the Hermitian positive definite solutions of nonlinear matrix equation X^s + A*X^{-t_1}A + B*X^{-t_2}B = Q. Math. Prob. Eng. 2011, 2011, 163585.
  11. El-Sayed, S.M.; Ran, A.C.M. On an iteration method for solving a class of nonlinear matrix equations. SIAM J. Matrix Anal. Appl. 2002, 23, 632–645.
  12. Li, J. Solutions and improved perturbation analysis for the matrix equation X + A*X^{-p}A = Q. Abstr. Appl. Anal. 2013, 2013, 575964.
  13. Engwerda, J.C. On the existence of a positive definite solution of the matrix equation X + A*X^{-1}A = I. Linear Algebra Appl. 1993, 194, 91–108.
  14. Monsalve, M.; Raydan, M. A new inversion-free method for a rational matrix equation. Linear Algebra Appl. 2010, 433, 64–71.
  15. Zhang, L. An improved inversion-free method for solving the matrix equation X + A*X^{-α}A = Q. J. Comput. Appl. Math. 2013, 253, 200–203.
  16. El-Sayed, S.M.; Al-Dbiban, A.M. A new inversion free iteration for solving the equation X + A*X^{-1}A = Q. J. Comput. Appl. Math. 2005, 181, 148–156.
  17. Zhan, X. Computing the extremal positive definite solutions of a matrix equation. SIAM J. Sci. Comput. 1996, 17, 1167–1174.
  18. Soleymani, F.; Sharifi, M.; Vanani, S.K.; Haghani, F.K.; Kiliçman, A. An inversion-free method for finding positive definite solution of a rational matrix equation. Sci. World J. 2014, 2014, 560931.
  19. Soleymani, F.; Sharifi, M.; Shateyi, S.; Haghani, F.K. An algorithm for computing geometric mean of two Hermitian positive definite matrices via matrix sign. Abstr. Appl. Anal. 2014, 2014, 978629.
  20. Trott, M. The Mathematica GuideBook for Numerics; Springer: New York, NY, USA, 2006.
Figure 1. Convergence history for Example 1.
Figure 2. Convergence history for Example 2.
Zaka Ullah, M. A New Inversion-Free Iterative Scheme to Compute Maximal and Minimal Solutions of a Nonlinear Matrix Equation. Mathematics 2021, 9, 2994.