Article

Calculating the Weighted Moore–Penrose Inverse by a High Order Iteration Scheme

by
Haifa Bin Jebreen
Mathematics Department, College of Science, King Saud University, Riyadh 11451, Saudi Arabia
Mathematics 2019, 7(8), 731; https://doi.org/10.3390/math7080731
Submission received: 16 July 2019 / Revised: 3 August 2019 / Accepted: 7 August 2019 / Published: 10 August 2019
(This article belongs to the Special Issue Iterative Methods for Solving Nonlinear Equations and Systems)

Abstract

The goal of this research is to extend and investigate an improved approach for calculating the weighted Moore–Penrose (WMP) inverses of singular or rectangular matrices. The scheme is constructed based on a hyperpower method of order ten. It is shown that the improved scheme converges with this rate using only six matrix products per cycle. Several tests are conducted to reveal the applicability and efficiency of the discussed method, in contrast with its well-known competitors.

1. Introduction

1.1. Background

Constructing and discussing different features of iterative schemes for the calculation of outer inverses is an active topic of current research in Applied Mathematics (for more details, refer to [1,2,3]). Many papers have been published in the field of outer inverses over the past few decades, each with its own domain of validity and usefulness. In fact, in 1920, Moore was a pioneer of this field and published seminal works about the outer inverse [4,5]. Several deep works were then published during the 1950s (as reviewed in [4]). It is also noteworthy that the pseudo-inverse operator was first introduced by Fredholm in [6].
The method of partitioning (due to Greville) was a pioneering work in computing generalized inverses; it was re-introduced and re-investigated in [4,7]. This scheme requires many arithmetic operations and is subject to cancellation and rounding errors. Among the generalized inverses, the weighted Moore–Penrose (WMP) inverse is important, as it reduces to the pseudo-inverse, as well as to the regular inverse, in special cases. Several applications of computing the WMP inverse, including applications to the solution of matrix equations, are discussed in the recent literature [8,9]. See [10,11,12,13] for further discussions and applications.
Furthermore, for large matrices, or when the weight matrices in the process of computing the WMP inverse are ill-conditioned, symbolic computation with the current algorithms may not work properly for several reasons, such as excessive time consumption, high memory requirements, or instability. On the other hand, several numerical methods for the WMP inverse are not stable or possess slow convergence rates. Hence, it is necessary to investigate and extend novel and useful iterative matrix methods for this objective; see, also, the discussions in [14,15].

1.2. Definition

Let us consider that $M$ and $N$ are two square Hermitian positive definite (HPD) matrices of sizes $m$ and $n$, respectively, and that $A \in \mathbb{C}^{m \times n}$. Then, there is a unique matrix $X$ satisfying the following identities [16]:
  • $A X A = A$,
  • $X A X = X$,
  • $(M A X)^* = M A X$,
  • $(N X A)^* = N X A$.
Then, $X \in \mathbb{C}^{n \times m}$ is called the WMP inverse of $A$, and it is denoted by $A^{\dagger}_{MN}$. Note that, as long as $M = I_{m \times m}$ and $N = I_{n \times n}$, $X$ is the Moore–Penrose (MP) inverse, or simply the pseudo-inverse, of $A$, denoted by $A^{\dagger}$ [17]. Furthermore, when the matrix $A$ is non-singular, the pseudo-inverse reduces to the regular inverse.
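As a quick illustration of this reduction (our snippet, not part of the original text), one can check in Mathematica that, for a nonsingular $A$ with $M = N = I$, the ordinary inverse satisfies all four identities:
  • (* Sanity check: with identity weights, X = Inverse[A] fulfills the four defining identities. *)
  • SeedRandom[7]; A = RandomReal[{0, 1}, {4, 4}];
  • X = Inverse[A]; Id = IdentityMatrix[4];
  • Map[Max@*Abs, {A.X.A - A, X.A.X - X,
  •    ConjugateTranspose[Id.A.X] - Id.A.X,
  •    ConjugateTranspose[Id.X.A] - Id.X.A}]
  • (* all four residuals are at round-off level *)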
The weighted singular value decomposition (WSVD), first introduced in [18], is normally applied to define this generalized inverse. Suppose that the rank of $A$ is $r$. Then, there exist $U \in \mathbb{C}^{m \times m}$ and $V \in \mathbb{C}^{n \times n}$ satisfying
$$U^* M U = I_{m \times m}, \qquad (2)$$
and
$$V^* N^{-1} V = I_{n \times n}, \qquad (3)$$
such that
$$A = U \begin{pmatrix} D & 0 \\ 0 & 0 \end{pmatrix} V^*. \qquad (4)$$
Thus, $A^{\dagger}_{MN}$ is furnished as follows:
$$A^{\dagger}_{MN} = N^{-1} V \begin{pmatrix} D^{-1} & 0 \\ 0 & 0 \end{pmatrix} U^* M, \qquad (5)$$
where $D = \mathrm{diag}(\sigma_1, \sigma_2, \ldots, \sigma_r)$, with $\sigma_1 \ge \sigma_2 \ge \cdots \ge \sigma_r > 0$, and each $\sigma_i^2$ is a non-zero eigenvalue of $N^{-1} A^* M A$. In addition,
$$\|A\|_{MN} = \sigma_1, \qquad \|A^{\dagger}_{MN}\|_{NM} = \frac{1}{\sigma_r}.$$
In this work, $A^{\#} = N^{-1} A^* M$ is used as the weighted conjugate transpose of $A$. See [19] for more details.
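When the weights are HPD, a reference value of $A^{\dagger}_{MN}$ for small dense tests can be obtained from the classical identity $A^{\dagger}_{MN} = N^{-1/2} (M^{1/2} A N^{-1/2})^{\dagger} M^{1/2}$ (see, e.g., [4]). The sketch below (wmp is our helper name, not the paper's method) realizes it with Mathematica's built-in PseudoInverse and MatrixPower:
  • (* Reference WMP inverse via HPD square roots of the weights; for cross-checking only. *)
  • wmp[A_, MM_, NN_] := Module[{Mh, Nhi},
  •    Mh = MatrixPower[MM, 1/2]; Nhi = MatrixPower[NN, -1/2];
  •    Nhi.PseudoInverse[Mh.A.Nhi].Mh]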

1.3. Literature

Schulz-type methods for the calculation of the WMP inverse are sensitive to the choice of the initial value; that is, the initial matrix must be close enough to the generalized inverse to guarantee that the scheme converges [20]. More precisely, convergence can only be observed if the starting matrix is chosen carefully. Fortunately, this starting value can be chosen simply in the case of the WMP inverse. The pioneering work [21] gave several suggestions, along with deep discussions, about how to make such a choice quickly.
Let us now briefly review some of the pioneering and most important matrix iterative methods for computing the WMP inverse.
The second-order Schulz scheme for finding the WMP inverse, requiring only two matrix products per computing cycle, is given by [22]:
$$X_{k+1} = X_k (2I - A X_k), \quad k = 0, 1, 2, \ldots \qquad (6)$$
Throughout the work, I stands for the identity matrix, unless clearly stated otherwise.
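In Mathematica, one step of (6) is a one-liner (schulzStep is our illustrative name, not from the paper):
  • schulzStep[X_, A_] := X.(2 IdentityMatrix[Length[A]] - A.X)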
An improvement of (6) with third-order convergence, known as Chebyshev's method, was discussed in [23] for computing $A^{\dagger}_{MN}$ as follows:
$$X_{k+1} = X_k (3I - A X_k (3I - A X_k)), \quad k = 0, 1, 2, \ldots \qquad (7)$$
The authors in [23] proposed another third-order iterative formulation, requiring one more matrix multiplication, as follows:
$$X_{k+1} = X_k \left[ I + \frac{1}{2} (I - A X_k)\left(I + (2I - A X_k)^2\right) \right], \quad k = 0, 1, 2, \ldots \qquad (8)$$
It is necessary to recall that a general class of iteration schemes for computing the WMP inverse, and some other kinds of generalized inverses, was discussed and investigated in [24] (Chapter 5); its $p$-th order member uses a total of $p$ matrix products. An example is the following fourth-order iteration:
$$X_{k+1} = X_k (I + B_k (I + B_k (I + B_k))), \quad k = 0, 1, 2, \ldots, \qquad (9)$$
where $B_k = I - A X_k$. As another instance, a tenth-order matrix method can be furnished as follows [25]:
$$X_{k+1} = X_k (I + B_k (I + B_k (I + B_k (I + B_k (I + B_k (I + B_k (I + B_k (I + B_k (I + B_k))))))))), \quad k = 0, 1, 2, \ldots \qquad (10)$$
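Schemes (9) and (10) share the same Horner-like nesting, so a generic step of this class can be sketched in a few lines (hyperStep is our name; a sketch under the assumption, stated above, that nesting depth $p - 1$ realizes order $p$):
  • (* One hyperpower step of order p in Horner form: X(I + B(I + B(I + ...))) with B = I - AX. *)
  • hyperStep[X_, A_, p_] := Module[{Id = IdentityMatrix[Length[A]], B},
  •    B = Id - A.X;
  •    X.Nest[Id + B.# &, Id, p - 1]]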

1.4. Motivation and Organization

The main motivation behind proposing and extending new iterative methods for the WMP inverse is to apply them to practical large-scale problems [26], as well as to improve the computational efficiency. This goal is directly linked to the design of new iterative expressions that are economical, in the sense of reducing computational complexity and time requirements.
Hence, with this motivation at hand, and in order to increase the computational efficiency index and to contribute to this field, the main focus of this work is to investigate a tenth-order method requiring only six matrix multiplications per cycle. We prove that this provides an improvement of the computational efficiency index in calculating the WMP inverse.
The paper is organized as follows. Section 1 briefly discusses the preliminaries and literature of this topic, to prepare the reader for the analytical discussions of Section 2, in which we describe an effective iteration formulation for the WMP inverse. It is shown that the method needs only six matrix multiplications to reach its tenth order of convergence.
Concrete proofs of convergence are furnished in Section 3. Section 4 applies our formulation to the WMP inverses of many randomly generated matrices of various dimensions. Numerical evidence demonstrates the usefulness of this method for computing the WMP inverse, in terms of the elapsed computational time. Finally, several concluding remarks are given in Section 5.

2. A High Order Scheme for the WMP Inverse

For the use of iterative methods, such as the ones described in Section 1, a starting value is required when computing the WMP inverse. As in [27], one general procedure to find this starting matrix is of the following form:
$$X_0 = \lambda A^{\#}, \qquad (11)$$
where $A^{\#} = N^{-1} A^* M$ is the weighted conjugate transpose (WCT) of the input matrix $A$ and
$$\lambda = \frac{1}{\sigma_1^2}. \qquad (12)$$
Recall that, in (12), $\sigma_1^2$ is the largest eigenvalue of $N^{-1} A^* M A$.
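In Mathematica, (11) and (12) can be formed as follows (a sketch; here MM and NN denote the weight matrices, as in the test codes of Section 4, and for very large matrices one would bound $\sigma_1^2$ by a cheap norm estimate instead of calling Eigenvalues):
  • (* Starting matrix (11)-(12): X0 = A#/sigma1^2, with sigma1^2 the largest eigenvalue of N^-1 A* M A. *)
  • Ash = Inverse[NN].ConjugateTranspose[A].MM;
  • X0 = Ash/Max[Abs[Eigenvalues[Ash.A]]];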

2.1. Derivation

Another reason for proposing a higher-order method is that methods based on improvements of the Schulz iteration scheme are slow in the initial phase of the iteration; the convergence order cannot be observed at the beginning, but only after performing several iterates. Moreover, incorporating a stopping condition based on matrix norms increases the elapsed time of the written programs for finding the WMP inverse, which further favors methods that need fewer iterations.
Accordingly, to contribute and extend a high-order matrix iteration scheme in this context, we first take into account a tenth-order scheme having ten matrix multiplications per cycle, as follows:
$$X_{k+1} = X_k (I + B_k + B_k^2 + \cdots + B_k^9). \qquad (13)$$
Now, to improve the performance of (13), we factorize it so as to reduce the number of products. Thus, we can write
$$X_{k+1} = X_k (I + B_k)\left[(I - B_k + B_k^2 - B_k^3 + B_k^4)(I + B_k + B_k^2 + B_k^3 + B_k^4)\right]. \qquad (14)$$
This formulation of the matrix iteration requires seven matrix products. However, it is possible to reduce this number further by considering a tighter formulation of (14). Hence, we write
$$X_{k+1} = X_k (I + B_k) M_k, \qquad (15)$$
where
$$M_k = (I + \chi B_k^2 + B_k^4)(I + \kappa B_k^2 + B_k^4). \qquad (16)$$
To find the unknown weighting coefficients in (15) and, more specifically, in (16), we need to solve a symbolic problem. For this purpose, the following Mathematica code [28] was employed:
  • ClearAll["Global`*"];
  • fact1 = (1 + a B^2 + B^4);
  • fact2 = (1 + b B^2 + B^4);
  • sol = fact1*fact2 + (c B^2) // Expand
  • S = Table[
  •    s[i] = Coefficient[sol, B^i], {i, 2, 6, 2}
  •    ] // Simplify
  • Solve[
  •   s[2] == 1 && s[4] == 1 && s[6] == 1, {a, b, c}
  •   ] // Simplify
  • {a, b, c} = {a, b, c} /. %[[1]] // Simplify
  • Chop@sol // Simplify
This is given only to ease understanding of the procedure for obtaining the coefficients. We obtain:
$$\chi = \frac{1}{2}\left(1 - \sqrt{5}\right), \qquad \kappa = \frac{1}{2}\left(1 + \sqrt{5}\right). \qquad (17)$$
This means that (15) requires only six matrix products per cycle to achieve a convergence order of ten.
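To make the order transparent, note that with the values in (17), $(1 + \chi x^2 + x^4)(1 + \kappa x^2 + x^4) = 1 + x^2 + x^4 + x^6 + x^8$, so that $(1 + x)$ times this product equals $\sum_{i=0}^{9} x^i$. Hence,
$$A X_{k+1} = (I - B_k) \sum_{i=0}^{9} B_k^i = I - B_k^{10},$$
that is, the residual obeys $B_{k+1} = B_k^{10}$, which is precisely the hyperpower relation of order ten.
The following Mathematica sketch (our code; the name pm10 and the norm-based stopping rule are our choices, not taken from the paper) assembles (11), (12), and (15) into a complete routine, with one comment per matrix product:
  • (* A minimal sketch of scheme (15); six matrix products per cycle. *)
  • pm10[A_, MM_, NN_, tol_: 10^-10, maxit_: 100] := Module[
  •    {chi, kap, Ash, X, Xold, AX, B, B2, B4, Mk, Id},
  •    chi = (1 - Sqrt[5.])/2; kap = (1 + Sqrt[5.])/2;
  •    Ash = Inverse[NN].ConjugateTranspose[A].MM; (* A# = N^-1 A* M *)
  •    X = Ash/Max[Abs[Eigenvalues[Ash.A]]]; (* X0, cf. (11) and (12) *)
  •    Id = IdentityMatrix[Length[A]];
  •    Do[Xold = X;
  •     AX = A.X; B = Id - AX; (* product 1 *)
  •     B2 = B.B; B4 = B2.B2; (* products 2 and 3 *)
  •     Mk = (Id + chi B2 + B4).(Id + kap B2 + B4); (* product 4 *)
  •     X = (X.(2 Id - AX)).Mk; (* products 5 and 6 *)
  •     If[Norm[X - Xold, 2] <= tol, Break[]], {maxit}];
  •    X]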

2.2. Several Lemmas

Before providing the main results concerning the convergence analysis of the proposed scheme, we furnish the following lemmas, inspired by [29], which reveal some specific important relations satisfied by the iterates generated by (15), and then establish a relation between (4) and (15).
Lemma 1.
For the sequence $\{X_k\}_{k=0}^{\infty}$ produced by (15) from the starting matrix (11), it holds for any $k \ge 0$ that
$$(M A X_k)^* = M A X_k, \quad (N X_k A)^* = N X_k A, \quad X_k A A^{\dagger}_{MN} = X_k, \quad A^{\dagger}_{MN} A X_k = X_k. \qquad (18)$$
Proof. 
The proof proceeds by mathematical induction. When $k = 0$ and $X_0$ is the suitable initial matrix, the first two relations in (18) are straightforward. Hence, we discuss the last two relations by applying the following identities:
$$(A A^{\dagger}_{MN})^{\#} = A A^{\dagger}_{MN}, \qquad (19)$$
and
$$(A^{\dagger}_{MN} A)^{\#} = A^{\dagger}_{MN} A. \qquad (20)$$
Accordingly, we have:
$$X_0 A A^{\dagger}_{MN} = \lambda A^{\#} A A^{\dagger}_{MN} = \lambda A^{\#} (A A^{\dagger}_{MN})^{\#} = \lambda A^{\#} (A^{\dagger}_{MN})^{\#} A^{\#} = \lambda (A A^{\dagger}_{MN} A)^{\#} = \lambda A^{\#} = X_0, \qquad (21)$$
and also
$$A^{\dagger}_{MN} A X_0 = \lambda A^{\dagger}_{MN} A A^{\#} = \lambda (A^{\dagger}_{MN} A)^{\#} A^{\#} = \lambda (A A^{\dagger}_{MN} A)^{\#} = \lambda A^{\#} = X_0. \qquad (22)$$
Assuming now that the relations hold for an index $k > 0$, we show that they remain valid for $k + 1$. Taking our matrix iteration (15) into consideration, we have:
$$(M A X_{k+1})^* = \left(M A X_k (I + B_k)\left[(I + \chi B_k^2 + B_k^4)(I + \kappa B_k^2 + B_k^4)\right]\right)^* = \left[M A X_k (I + B_k + B_k^2 + \cdots + B_k^9)\right]^* = M A X_k (I + B_k + B_k^2 + \cdots + B_k^9) = M A X_{k+1}, \qquad (23)$$
using the induction hypothesis
$$(M (A X_k))^* = M A X_k,$$
the fact that $M$ is a Hermitian positive definite matrix ($M^* = M$), and similar facts, such as:
$$(M (A X_k)^2)^* = ((M A X_k)(A X_k))^* = (A X_k)^* (M A X_k)^* = (A X_k)^* M (A X_k) = (M A X_k)(A X_k) = M (A X_k)^2. \qquad (24)$$
Hence, the first relation in (18) is true for $k + 1$, and the second relation can be verified similarly. For the third relation in (18), employing the assumption
$$X_k A A^{\dagger}_{MN} = X_k, \qquad (25)$$
and (15), we have:
$$X_{k+1} A A^{\dagger}_{MN} = X_k (I + B_k + B_k^2 + \cdots + B_k^9) A A^{\dagger}_{MN} = X_k A A^{\dagger}_{MN} + X_k B_k A A^{\dagger}_{MN} + \cdots + X_k B_k^9 A A^{\dagger}_{MN} = X_k (I + B_k + B_k^2 + \cdots + B_k^9) = X_{k+1}. \qquad (26)$$
Therefore, the third relation in (18) is valid for $k + 1$. The final relation can be verified in a similar way, and the result follows. The proof is, thus, complete. □
Lemma 2.
Under the assumptions of Lemma 1 and using (3), the iterates of (15) satisfy:
$$(V^{-1} N) X_k (M^{-1} (U^*)^{-1}) = \mathrm{diag}(T_k, 0), \qquad (27)$$
where $T_k$ is a diagonal matrix, $V^* N^{-1} V = I_{n \times n}$, $U^* M U = I_{m \times m}$, $V \in \mathbb{C}^{n \times n}$, $U \in \mathbb{C}^{m \times m}$, and $A = U\,\mathrm{diag}(D, 0)\,V^*$.
Proof. 
Assume that $T_0 = \lambda D$, that the $\sigma_i^2$ are the non-zero eigenvalues of the matrix $N^{-1} A^* M A$, and that $D = \mathrm{diag}(\sigma_1, \sigma_2, \ldots, \sigma_r)$ with $\sigma_i > 0$ for every $i$. Thus, we can write:
$$T_{k+1} := \varphi(T_k) = T_k (2I - D T_k)\left[\left(I + \chi (I - D T_k)^2 + (I - D T_k)^4\right)\left(I + \kappa (I - D T_k)^2 + (I - D T_k)^4\right)\right]. \qquad (28)$$
Applying mathematical induction, for $k = 0$ one can write (using $A^* = V\,\mathrm{diag}(D, 0)\,U^*$):
$$(V^{-1} N) X_0 (M^{-1} (U^*)^{-1}) = \lambda (V^{-1} N) A^{\#} (M^{-1} (U^*)^{-1}) = \lambda (V^{-1} N) N^{-1} A^* M (M^{-1} (U^*)^{-1}) = \lambda V^{-1} V\,\mathrm{diag}(D, 0)\,U^* (U^*)^{-1} = \mathrm{diag}(\lambda D, 0). \qquad (29)$$
In addition, when (31) is satisfied for the index $k$, then using (15) and writing, for brevity, $\widetilde{B}_k := (U^* M)(I - A X_k)(M^{-1}(U^*)^{-1})$, one gets:
$$(V^{-1} N) X_{k+1} (M^{-1}(U^*)^{-1}) = \left[(V^{-1} N) X_k (M^{-1}(U^*)^{-1})\right](I + \widetilde{B}_k)\left[(I + \chi \widetilde{B}_k^2 + \widetilde{B}_k^4)(I + \kappa \widetilde{B}_k^2 + \widetilde{B}_k^4)\right]. \qquad (30)$$
Using $U^* M U = I_{m \times m}$, $V^* N^{-1} V = I_{n \times n}$, and $A = U\,\mathrm{diag}(D, 0)\,V^*$, one attains $\widetilde{B}_k = I - \mathrm{diag}(D T_k, 0)$ and, consequently,
$$(V^{-1} N) X_{k+1} (M^{-1} (U^*)^{-1}) = \mathrm{diag}(\varphi(T_k), 0), \qquad (31)$$
which shows that (27) is a diagonal matrix. This completes the proof. □

3. Error Analysis

The objective of this section is to provide a matrix analysis of the convergence of the iteration scheme (15).
Theorem 1.
Let $A$ be an $m \times n$ matrix whose WSVD is given by (4), and assume that the starting value is given by (11). Then, the matrix sequence generated by (15) tends to $A^{\dagger}_{MN}$.
Proof. 
In light of (4), to prove convergence to the WMP inverse, we just need to show that
$$\lim_{k \to \infty} (V^{-1} N) X_k (M^{-1} (U^*)^{-1}) = \mathrm{diag}(D^{-1}, 0). \qquad (32)$$
It is obtained, using Lemmas 1 and 2, that
$$T_k = \mathrm{diag}\left(\tau_1^{(k)}, \tau_2^{(k)}, \ldots, \tau_r^{(k)}\right), \qquad (33)$$
where $\tau_i^{(0)} = \lambda \sigma_i$ and
$$\tau_i^{(k+1)} = \tau_i^{(k)}\left(2 - \sigma_i \tau_i^{(k)}\right)\left[\left(1 + \chi \left(1 - \sigma_i \tau_i^{(k)}\right)^2 + \left(1 - \sigma_i \tau_i^{(k)}\right)^4\right)\left(1 + \kappa \left(1 - \sigma_i \tau_i^{(k)}\right)^2 + \left(1 - \sigma_i \tau_i^{(k)}\right)^4\right)\right]. \qquad (34)$$
The sequence produced by (34) is the result of employing (15) to calculate the zero $\sigma_i^{-1}$ of the function
$$\phi(\tau) = \sigma_i - \tau^{-1},$$
using the starting value $\tau_i^{(0)}$.
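Indeed, setting $r_i^{(k)} = 1 - \sigma_i \tau_i^{(k)}$ and expanding (34) (a short verification we add for clarity), one finds
$$\sigma_i \tau_i^{(k+1)} = \left(1 - \left(r_i^{(k)}\right)^2\right)\left(1 + \left(r_i^{(k)}\right)^2 + \left(r_i^{(k)}\right)^4 + \left(r_i^{(k)}\right)^6 + \left(r_i^{(k)}\right)^8\right) = 1 - \left(r_i^{(k)}\right)^{10},$$
so $r_i^{(k+1)} = (r_i^{(k)})^{10}$, and $\tau_i^{(k)} \to \sigma_i^{-1}$ precisely when $|r_i^{(0)}| < 1$.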
We observe that convergence to $\sigma_i^{-1}$ is achieved as long as
$$0 < \tau_i^{(0)} < \frac{2}{\sigma_i},$$
which yields a criterion on $\lambda$ and justifies the selection made in (12). Hence, $T_k \to D^{-1}$ and (31) is satisfied. It is now clear that $\{X_k\}_{k=0}^{\infty} \to A^{\dagger}_{MN}$ as $k \to \infty$. This concludes the proof. □

4. Computational Tests

In this section, our aim is to study the efficiency of the proposed approach for calculating the WMP inverse, both computationally and analytically. To do this, we considered several competitors from the literature in our comparisons, namely (6), (7), (10), and (15), denoted by "SM2", "CM3", "KMS10", and "PM10", respectively.
Note that all computations were done in Mathematica 11.0 [30] and the time is reported in seconds. The hardware used was a CPU Intel Core i5 2430-M with 16 GB of RAM.
We recall that the efficiency index is expressed by [31]:
$$EI = \rho^{1/\kappa},$$
where $\rho$ and $\kappa$ stand for the convergence order and the cost (number of matrix products) per cycle, respectively.
As such, the efficiency indices of the methods (6)–(10) and (15) are given by $2^{1/2} \approx 1.414$, $3^{1/3} \approx 1.442$, $3^{1/4} \approx 1.316$, $4^{1/4} \approx 1.414$, $10^{1/10} \approx 1.259$, and $10^{1/6} \approx 1.468$, respectively. Clearly, our iterative expression has a better index and can be more useful in finding the WMP inverse.
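These values can be reproduced in one line of Mathematica:
  • N[{2^(1/2), 3^(1/3), 3^(1/4), 4^(1/4), 10^(1/10), 10^(1/6)}, 4]
  • (* {1.414, 1.442, 1.316, 1.414, 1.259, 1.468} *)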
Example 1 ([29]).
The purpose of this experiment was to examine the calculation of the WMP inverses of ten uniformly random $m_1 \times n_1 = 200 \times 210$ matrices, generated as follows:
  • SeedRandom[12]; no = 10; m1 = 200; n1 = 210;
  • ParallelTable[A[k] = RandomReal[{1}, {m1, n1}];, {k, no}];
where the ten various HPD matrices M and N were given by:
  • ParallelTable[MM[k] = RandomReal[{2}, {m1, m1}];, {k, no}];
  • ParallelTable[MM[k] = Transpose[MM[k]].MM[k];, {k, no}];
  • ParallelTable[NN[k] = RandomReal[{3}, {n1, n1}];, {k, no}];
  • ParallelTable[NN[k] = Transpose[NN[k]].NN[k];, {k, no}];
The results of applying the stopping criterion
$$\|X_{k+1} - X_k\|_2 \le 10^{-10},$$
with $X_0 = \frac{1}{\sigma_1^2} A^{\#}$, are reported in Table 1 and Table 2, in terms of the number of iterations and the elapsed CPU time (in seconds). As can be observed from the results, the best scheme in terms of the number of iterations and time is (15).
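For reference, a single run can be timed and cross-checked with the helper routines sketched earlier (our snippet; pm10 was defined after (17) and wmp in Section 1.2):
  • (* Time one solve and compare it with the reference WMP inverse. *)
  • {t, X} = AbsoluteTiming[pm10[A[1], MM[1], NN[1]]];
  • {t, Norm[X - wmp[A[1], MM[1], NN[1]], 2]}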
Example 2.
The iterative methods were compared on five randomly generated dense $m_1 \times n_1 = 500 \times 500$ matrices, produced in the Mathematica environment by the following piece of code:
  • m1 = 500; n1 = 500; no = 5; SeedRandom[12];
  • ParallelTable[A[k] = RandomReal[{0, 1}, {m1, n1}];, {k, no}];
  • ParallelTable[MM[k] = RandomReal[{0, 1}, {m1, m1}];, {k, no}];
  • ParallelTable[MM[k] = Transpose[MM[k]].MM[k];, {k, no}];
  • ParallelTable[NN[k] = RandomReal[{0, 1}, {n1, n1}];, {k, no}];
  • ParallelTable[NN[k] = Transpose[NN[k]].NN[k];, {k, no}];
Here, we applied the stopping condition
$$\|X_{k+1} - X_k\| \le 10^{-10},$$
with a changed initial approximation, namely $X_0 = \frac{1.5}{\sigma_1^2} A^{\#}$. Note that the weight matrices $M$ and $N$ were very ill-conditioned, by construction. The results, reported in Table 3 and Table 4, reveal that the novel approach is superior to the existing solvers.
One other application of (15), aside from computing the WMP inverse, is in finding good approximate-inverse preconditioners for Krylov methods when tackling large sparse linear systems of equations (see, e.g., [29]). In fact, to apply our scheme in such settings, we can employ commands such as SparseArray[] for handling sparse matrices.
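For instance, a banded sparse test matrix can be set up as follows (a toy sketch; note that, without an additional dropping strategy such as applying Chop[] to the iterates, the approximate inverses fill in and sparsity is lost):
  • (* Toy sparse setup: a tridiagonal SparseArray via Band[]. *)
  • n = 500;
  • A = SparseArray[{Band[{1, 1}] -> 4., Band[{1, 2}] -> -1., Band[{2, 1}] -> -1.}, {n, n}];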
The main advantage of the proposed method is the gain in convergence order achieved together with an improvement of the computational efficiency index. Although this improvement of the efficiency index is not drastic, in practical problems of higher dimensions it leads to a clear reduction of the computation time.

5. Ending Notes

We have investigated a tenth-order iterative method for computing the WMP inverse that requires only six matrix products per cycle. The WMP inverse has many applications, from the numerical solution of nonlinear equations (those involving singular linear systems [32]) to direct engineering applications. The efficiency index of the scheme reaches $10^{1/6} \approx 1.468$, which is better than those of the Newton–Schulz and Chebyshev methods for calculating the WMP inverse. The convergence order of the scheme was supported and upheld analytically. The extension of this improved version of the hyperpower family to the computation of other types of generalized inverses, such as outer and inner inverses, under special criteria and initial matrices, provides a direction for future work in this active topic of research.

Funding

This research project was supported by a grant from the “Research Center of the Female Scientific and Medical Colleges”, Deanship of Scientific Research, King Saud University.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Bin Jebreen, H.; Chalco-Cano, Y. An improved computationally efficient method for finding the Drazin inverse. Disc. Dyn. Nat. Soc. 2018, 2018, 6758302. [Google Scholar] [CrossRef]
  2. Niazi Moghani, Z.; Khanehgir, M.; Mohammadzadeh Karizaki, M. Explicit solution to the operator equation AXD + FX*B = C over Hilbert C*-modules. J. Math. Anal. 2019, 10, 52–64. [Google Scholar]
  3. Stanimirović, P.S.; Katsikis, V.N.; Srivastava, S.; Pappas, D. A class of quadratically convergent iterative methods. RACSAM 2019, 1–22. [Google Scholar] [CrossRef]
  4. Ben-Israel, A.; Greville, T.N.E. Generalized Inverses: Theory and Applications, 2nd ed.; Springer: New York, NY, USA, 2003. [Google Scholar]
  5. Godunov, S.K.; Antonov, A.G.; Kiriljuk, O.P.; Kostin, V.I. Guaranteed Accuracy in Numerical Linear Algebra; Springer: Dordrecht, The Netherlands, 1993. [Google Scholar]
  6. Fredholm, I. Sur une classe d’équations fonctionnelles. Acta Math. 1903, 27, 365–390. [Google Scholar] [CrossRef]
  7. Wang, G.R. A new proof of Greville's method for computing the weighted M-P inverse. J. Shanghai Norm. Univ. 1985, 3, 32–38. [Google Scholar]
  8. Bakhtiari, Z.; Mansour Vaezpour, S. Positive solutions to the system of operator equations TiX = Ui and TiXVi = Ui. J. Math. Anal. 2016, 7, 102–117. [Google Scholar]
  9. Xia, Y.; Chen, T.; Shan, J. A novel iterative method for computing generalized inverse. Neural Comput. 2014, 26, 449–465. [Google Scholar] [CrossRef]
  10. Courrieu, P. Fast computation of Moore-Penrose inverse matrices. arXiv 2008, arXiv:0804.4809. [Google Scholar]
  11. Lu, S.; Wang, X.; Zhang, G.; Zhou, X. Effective algorithms of the Moore-Penrose inverse matrices for extreme learning machine. Intell. Data Anal. 2015, 19.4, 743–760. [Google Scholar] [CrossRef]
  12. Sheng, X.; Chen, G. The generalized weighted Moore-Penrose inverse. J. Appl. Math. Comput. 2007, 25, 407–413. [Google Scholar] [CrossRef]
  13. Soleymani, F.; Soheili, A.R. A revisit of stochastic theta method with some improvements. Filomat 2017, 31, 585–596. [Google Scholar] [CrossRef]
  14. Söderström, T.; Stewart, G.W. On the numerical properties of an iterative method for computing the Moore-Penrose generalized inverse. SIAM J. Numer. Anal. 1974, 11, 61–74. [Google Scholar] [CrossRef]
  15. Stanimirović, P.S.; Ciric, M.; Stojanović, I.; Gerontitis, D. Conditions for existence, representations, and computation of matrix generalized inverses. Complexity 2017, 2017, 6429725. [Google Scholar] [CrossRef]
  16. Gulliksson, M.E.; Wedin, P.A.; Wei, Y. Perturbation identities for regularized Tikhonov inverse and weighted pseudo inverse. BIT 2000, 40, 513–523. [Google Scholar] [CrossRef]
  17. Roy, F.; Gupta, D.K.; Stanimirović, P.S. An interval extension of SMS method for computing weighted Moore-Penrose inverse. Calcolo 2018, 55, 15. [Google Scholar] [CrossRef]
  18. Van Loan, C.F. Generalizing the singular value decomposition. SIAM J. Numer. Anal. 1976, 13, 76–83. [Google Scholar] [CrossRef]
  19. Zhang, N.; Wei, Y. A note on the perturbation of an outer inverse. Calcolo 2008, 45, 263–273. [Google Scholar] [CrossRef]
  20. Ghorbanzadeh, M.; Mahdiani, K.; Soleymani, F.; Lotfi, T. A class of Kung-Traub-type iterative algorithms for matrix inversion. Int. J. Appl. Comput. Math. 2016, 2, 641–648. [Google Scholar] [CrossRef]
  21. Pan, V.Y. Structured Matrices and Polynomials: Unified Superfast Algorithms; Birkhäuser: Boston, MA, USA; Springer: New York, NY, USA, 2001. [Google Scholar]
  22. Schulz, G. Iterative Berechnung der Reziproken matrix. Z. Angew. Math. Mech. 1933, 13, 57–59. [Google Scholar] [CrossRef]
  23. Li, H.-B.; Huang, T.-Z.; Zhang, Y.; Liu, X.-P.; Gu, T.-X. Chebyshev-type methods and preconditioning techniques. Appl. Math. Comput. 2011, 218, 260–270. [Google Scholar] [CrossRef]
  24. Krishnamurthy, E.V.; Sen, S.K. Numerical Algorithms—Computations in Science and Engineering; Affiliated East-West Press: New Delhi, India, 1986. [Google Scholar]
  25. Sen, S.K.; Prabhu, S.S. Optimal iterative schemes for computing Moore-Penrose matrix inverse. Int. J. Syst. Sci. 1976, 8, 748–753. [Google Scholar] [CrossRef]
  26. Greville, T.N.E. Some applications of the pseudo-inverse of a matrix. SIAM Rev. 1960, 3, 15–22. [Google Scholar] [CrossRef]
  27. Huang, F.; Zhang, X. An improved Newton iteration for the weighted Moore-Penrose inverse. Appl. Math. Comput. 2006, 174, 1460–1486. [Google Scholar] [CrossRef]
  28. Sánchez León, J.G. Mathematica Beyond Mathematics: The Wolfram Language in the Real World; Taylor & Francis Group: Boca Raton, FL, USA, 2017. [Google Scholar]
  29. Zaka Ullah, M.; Soleymani, F.; Al-Fhaid, A.S. An efficient matrix iteration for computing weighted Moore-Penrose inverse. Appl. Math. Comput. 2014, 226, 441–454. [Google Scholar] [CrossRef]
  30. Trott, M. The Mathematica Guide-Book for Numerics; Springer: New York, NY, USA, 2006. [Google Scholar]
  31. Ostrowski, A.M. Sur quelques transformations de la série de Liouville–Neumann. C. R. Acad. Sci. Paris 1938, 206, 1345–1347. [Google Scholar]
  32. Soheili, A.R.; Soleymani, F. Iterative methods for nonlinear systems associated with finite difference approach in stochastic differential equations. Numer. Algor. 2016, 71, 89–102. [Google Scholar] [CrossRef]
Table 1. Comparison based on the number of iterations and their mean in Experiment 1.

Methods    SM2    CM3    KMS10    PM10
A1          68     43       22      22
A2          69     44       22      22
A3          67     43       21      21
A4          71     46       23      23
A5          72     46       23      23
A6          72     46       23      23
A7          66     42       21      21
A8          78     50       25      25
A9          63     41       20      20
A10         69     44       22      22
Mean      69.5   44.5     22.2    22.2
Table 2. Comparison based on the elapsed CPU time (in seconds) and its mean in Experiment 1.

Methods      SM2        CM3       KMS10       PM10
A1        1.4954    1.04826    0.996155   0.755317
A2        1.4563    1.08006    0.984057   0.767785
A3       1.37301    1.03847    0.967427   0.720294
A4       1.53201    1.10927    1.0365     0.789994
A5       1.50908    1.10164    0.998853   0.794098
A6       1.51215    1.11421    1.03361    0.823177
A7       1.39481    1.00116    0.915244   0.743779
A8       1.62742    1.24438    1.12434    0.87007
A9       1.32916    0.999683   0.903072   0.709523
A10      1.49738    1.05736    0.985084   0.764156
Mean     1.47267    1.07945    0.994434   0.773819
Table 3. Comparison based on the number of iterations and their mean in Experiment 2.

Methods    SM2    CM3    KMS10    PM10
A1          98     61       30      30
A2          86     55       27      27
A3          83     53       26      26
A4          85     54       27      27
A5          81     52       26      26
Mean      86.6     55     27.2    27.2
Table 4. Comparison based on the elapsed time (in seconds) and its mean in Experiment 2.

Methods      SM2        CM3      KMS10       PM10
A1       7.89745    7.20885    12.1801    7.11963
A2       6.90346    6.6397     10.933     6.50042
A3       2.34013    2.23622    3.75341    2.20977
A4       2.23133    2.15679    3.78848    2.23819
A5       2.44316    2.26733    3.79153    2.2391
Mean     4.36311    4.10178    6.88929    4.06142
