Article

From Fractal Behavior of Iteration Methods to an Efficient Solver for the Sign of a Matrix

by Tao Liu 1, Malik Zaka Ullah 2, Khalid Mohammed Ali Alshahrani 2 and Stanford Shateyi 3,*

1 School of Mathematics and Statistics, Northeastern University at Qinhuangdao, Qinhuangdao 066000, China
2 Mathematical Modeling and Applied Computation (MMAC) Research Group, Department of Mathematics, King Abdulaziz University, Jeddah 21589, Saudi Arabia
3 Department of Mathematics and Applied Mathematics, School of Mathematical and Natural Sciences, University of Venda, P. Bag X5050, Thohoyandou 0950, South Africa
* Author to whom correspondence should be addressed.
Fractal Fract. 2023, 7(1), 32; https://doi.org/10.3390/fractalfract7010032
Submission received: 18 November 2022 / Revised: 17 December 2022 / Accepted: 22 December 2022 / Published: 28 December 2022

Abstract: Investigating the fractal behavior of iteration methods on special polynomials can help to find iterative methods with global convergence for computing special matrix functions. By employing such a methodology, we propose a new solver for the sign of an invertible square matrix. The presented method achieves fourth-order convergence while using as few matrix products as possible. Its attraction basin shows larger convergence radii than those of Padé-type methods of the same order. Computational tests are performed to check the efficacy of the proposed solver.

1. Introduction and the Fractal Behavior of Iteration Methods

When it comes to the design, motivation, and importance of new iterative methods, the convergence order is not the only factor in improving existing solvers. Depending on the context that researchers and practitioners deal with, the order and the method are chosen or constructed accordingly. To illustrate this further, consider the problem of finding generalized outer inverses of arbitrary matrices using iterative methods. In this context, the optimal iterative methods do not yield useful iterations in the matrix environment, and one must rely entirely on non-optimal methods that yield matrix iterations with as few matrix multiplications as possible; see [1,2] for more information.
Another context arises when the nonlinear problem at hand lies in the theory and application of matrix functions. For instance, when we need to compute the matrix sign function using iterative methods, higher-order optimal schemes beyond Newton's iteration are typically not useful. Additionally, many existing iterative methods are fruitless for such a task, since they do not yield proper counterparts in the matrix environment. In addition, some optimal iterative methods lose their global convergence when applied to the calculation of the sign of a matrix.
A third context is the solution of nonlinear algebraic systems of equations. When it comes to systems of equations, order optimality (in the sense discussed by Kung and Traub in 1974 for iteration schemes without memory [3]) can no longer be achieved. In such cases, the lower the computational cost of forming the Jacobian/Hessian matrices, the more useful the method; see [4] for more information.
Note that the structure of the iterative method, the initial guess, the cost of the floating-point arithmetic, the problem at hand, and the stopping criterion all affect the choice of an iterative method in practice.
Recall that Newton's iteration for the nonlinear equation $g(z) = 0$ is given by
$$z_{l+1} = z_l - g'(z_l)^{-1} g(z_l), \qquad l \geq 0, \qquad (1)$$
and the secant method is given by
$$z_{l+1} = z_l - g(z_l)\,\frac{z_l - z_{l-1}}{g(z_l) - g(z_{l-1})}, \qquad l \geq 1. \qquad (2)$$
Iterative methods can be compared in terms of their sensitivity to initial approximations when they possess the same rate of convergence and the same structure [5,6]. Let us now draw the attraction basins of the methods (1) and (2) in Figure 1, Figure 2, Figure 3 and Figure 4 for different polynomials in a square of the complex plane using double-precision arithmetic. Here, we follow the methodology discussed in [7,8] to find the attraction basins; the red color stands for the risky area that could cause an indeterminate state or divergence. As can be observed clearly, the secant method has the lower convergence rate and thus smaller converging areas, while this is improved for Newton's method. Note that, in this way, $10^6$ initial approximations are tested one by one when we find the attraction basins.
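For reproducibility, the sketch below is our own minimal Mathematica illustration of this basin-drawing procedure (it is not the exact code of [7,8]); the grid resolution, tolerance, and iteration cap are illustrative choices, and each starting point is shaded by the number of Newton iterations it consumes:

(* Shade each grid point of [-2,2]x[-2,2] by the number of Newton
   iterations needed to reduce |g(z)| below the tolerance. *)
newtonBasin[g_, n_, maxIt_] := Module[{dg = g'},
  ArrayPlot[
   Table[
    Module[{z = x + I y, l = 0},
     While[Abs[g[z]] > 10.^-6 && l < maxIt,
      z = z - g[z]/dg[z]; l++];
     l],
    {y, -2., 2., 4./(n - 1)}, {x, -2., 2., 4./(n - 1)}],
   ColorFunction -> "SunsetColors"]]

newtonBasin[Function[z, z^3 - 1], 200, 50]

A secant analogue only differs in carrying two iterates per grid point.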
The importance of the fractal behavior of different iterative schemes applied to different polynomials lies in the fact that it can reveal which methods converge globally when extended to solving special matrix nonlinear equations. Important matrix functions that depend directly on the attraction basins are the matrix sign and matrix sector functions.
This work focuses on finding and computing the matrix sign function, which plays a significant role in the theory and practical applications of the matrix function [9].
The remaining portions of this paper are organized as follows. In Section 2, a novel multi-step iteration method is carefully proposed so as to be employable for the calculation of the matrix sign. Section 3 extends this method to calculating the sign of an invertible matrix; it is investigated theoretically how the solver converges, and that it does so with asymptotic stability under an appropriate choice of the initial matrix. To test the efficacy and stability of the proposed solver, we examine numerical experiments on different test problems in Section 4. Based on the obtained computational observations, the suggested technique is found to be efficient. The conclusion, with some outlines for forthcoming works, is furnished in Section 5.

2. Iteration Methods on the Sign of a Matrix

Let $A \in \mathbb{C}^{n \times n}$ be a nonsingular square matrix, and let $g$ stand for a scalar function. The function $g(A)$ is defined as a matrix function of $A$ of size $n \times n$. If the function $g$ is defined on the spectrum of $A$ [10,11], then the following facts hold for the matrix function $g(A)$:
  • For a given square matrix $Z$, it commutes with $g(A)$ as long as $Z$ commutes with $A$;
  • $g(\gamma_i)$ are the eigenvalues of $g(A)$, where $\gamma_i$ are the eigenvalues of $A$;
  • $g(ZAZ^{-1}) = Z\,g(A)\,Z^{-1}$;
  • $g(A^T) = g(A)^T$;
  • $g(A)$ commutes with $A$.
The sign of a matrix could be defined using the Cauchy integral as follows:
$$\operatorname{sign}(A) = M = \frac{2}{\pi}\, A \int_0^{\infty} \left(t^2 I + A^2\right)^{-1} dt. \qquad (3)$$
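For a diagonalizable matrix with no purely imaginary eigenvalues, (3) is equivalent to mapping every eigenvalue to the sign of its real part. The following Mathematica sketch (the function name matrixSign is ours) uses this as a direct, non-iterative reference value against which the iterative methods below can be checked:

(* Reference value of sign(A) via eigendecomposition: A = K D K^-1
   implies sign(A) = K sign(Re[D]) K^-1 for diagonalizable A. *)
matrixSign[A_?MatrixQ] := Module[{vals, vecs, K},
  {vals, vecs} = Eigensystem[N[A]];
  K = Transpose[vecs];                  (* columns = eigenvectors *)
  K . DiagonalMatrix[Sign[Re[vals]]] . Inverse[K]]

(* sanity check: sign(A)^2 should equal the identity *)
A = RandomReal[{-1, 1}, {5, 5}];
Norm[matrixSign[A] . matrixSign[A] - IdentityMatrix[5]]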
Some of the functions of matrices under given assumptions can be computed numerically by employing the fixed-point type iterative methods of the general form below:
$$Z_{l+1} = g(Z_l), \qquad l = 0, 1, 2, \ldots, \qquad (4)$$
wherein Z 0 must be chosen carefully.
The second-order Newton’s scheme has the following structure for calculating the sign of a square invertible matrix:
$$Z_{l+1} = \frac{1}{2}\left(Z_l + Z_l^{-1}\right), \qquad (5)$$
where the starting matrix is
$$Z_0 = A. \qquad (6)$$
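In Mathematica, one sweep of (5) started from (6) can be written as follows (a minimal sketch; the name newtonStep and the ten-sweep example are our illustrative choices):

(* One sweep of Newton's iteration (5) for the matrix sign. *)
newtonStep[Z_] := (Z + Inverse[Z])/2

(* e.g., ten sweeps starting from Z0 = A, per (6): *)
A = RandomReal[{-2, 2}, {4, 4}];
Nest[newtonStep, N[A], 10]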
The work [12] presented an important family of iterative schemes for finding (3) by employing Padé approximants to the function
$$g(\zeta) = (1 - \zeta)^{-1/2}. \qquad (7)$$
Consider that the $(a_1, a_2)$-Padé approximant to $g(\zeta)$ is given by
$$\frac{P_{a_1}(\zeta)}{Q_{a_2}(\zeta)}, \qquad (8)$$
with $a_1 + a_2 \geq 1$. Then, the authors of [12] showed that the iterative scheme
$$z_{l+1} = z_l\, \frac{P_{a_1}(1 - z_l^2)}{Q_{a_2}(1 - z_l^2)} =: \psi_{2a_1+1,\,2a_2}, \qquad (9)$$
converges to $\pm 1$ with convergence order $a_1 + a_2 + 1$.
By considering (9), the well-known locally convergent, inversion-free Newton–Schulz method
$$Z_{l+1} = \frac{1}{2} Z_l \left(3I - Z_l^2\right), \qquad (10)$$
and Halley's iteration scheme
$$Z_{l+1} = \left[I + 3Z_l^2\right]\left[Z_l\left(3I + Z_l^2\right)\right]^{-1}, \qquad (11)$$
belong to this family for appropriate choices of $a_1$ and $a_2$. It is noted that Newton's scheme (5) is a member of the reciprocal family of (9); see [13,14].
Two fourth-order methods from (9) having global convergence behavior are given by
$$Z_{l+1} = \left[I + 6Z_l^2 + Z_l^4\right]\left[4Z_l\left(I + Z_l^2\right)\right]^{-1}, \qquad \text{(Padé [1,2])} \qquad (12)$$
$$Z_{l+1} = \left[4Z_l\left(I + Z_l^2\right)\right]\left[I + 6Z_l^2 + Z_l^4\right]^{-1}. \qquad \text{(reciprocal of Padé [1,2])} \qquad (13)$$
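For later comparison in Section 4, one sweep of (12) and of (13) may be coded as below (a sketch; the names pade12Step and pade12RecStep are ours). Note that each sweep costs four matrix products and one inversion:

(* One sweep of the Pade [1,2] method (12) and its reciprocal (13). *)
pade12Step[Z_] := Module[{Z2 = Z.Z, Id = IdentityMatrix[Length[Z]]},
  (Id + 6 Z2 + Z2.Z2) . Inverse[4 Z.(Id + Z2)]]

pade12RecStep[Z_] := Module[{Z2 = Z.Z, Id = IdentityMatrix[Length[Z]]},
  (4 Z.(Id + Z2)) . Inverse[Id + 6 Z2 + Z2.Z2]]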
After discussing the existing iteration methods for finding the matrix sign function, this paper proposes a new one. It is known that the best-established methods for such a purpose arise from (9) and are available with an arbitrary order of convergence; such methods are called optimal. However, in this paper, we present a novel root solver with global fourth-order speed for our target of calculating the sign of a matrix.

A New Solver

The following nonlinear equation plays an important role when the solvers for the sign of a matrix are constructed (see e.g., [15]):
$$g(z) := z^2 - 1 = 0. \qquad (14)$$
The main aim here is to propose a new solver that is effective when extended to finding the matrix sign function. This means that, in proposing a new root solver, the purpose is not to fulfill the optimality conjecture of Kung and Traub for producing optimal root solvers, nor to design methods with memory achieving as high an order of convergence as possible [16]. In fact, the iterative root solver should be useful and new when applied to finding the matrix sign function. Therefore, many of the recent super-high-order methods are put aside. Here, we propose a three-step method without memory for such a target, as follows:
$$\begin{aligned}
x_l &= z_l - g'(z_l)^{-1} g(z_l), \qquad l = 0, 1, \ldots,\\
v_l &= z_l - \frac{g(z_l) - \frac{9}{30}\, g(x_l)}{g(z_l) - \frac{39}{30}\, g(x_l)}\, \frac{g(z_l)}{g'(z_l)},\\
z_{l+1} &= v_l - \frac{g(v_l)}{g(v_l) - g(z_l)}\,(v_l - z_l).
\end{aligned} \qquad (15)$$
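A direct scalar transcription of one cycle of (15) reads as follows (a sketch; the name step15 is ours, and g must be a differentiable pure function):

(* One cycle of the proposed three-step scheme (15). *)
step15[g_, z_] := Module[{x, v},
  x = z - g[z]/g'[z];                                  (* Newton step *)
  v = z - (g[z] - (9/30) g[x])/(g[z] - (39/30) g[x]) g[z]/g'[z];
  v - g[v] (v - z)/(g[v] - g[z])]                      (* secant step *)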
It is necessary to establish the convergence order of (15) before any extension to the matrix environment. This is pursued in the following theorem.
Theorem 1. 
Assume that $\beta \in D$ is a simple root of the sufficiently differentiable function $g : D \subseteq \mathbb{C} \to \mathbb{C}$, and assume that the initial value $z_0$ is close enough to the solution. Then, the scheme (15) has a quartic convergence rate and satisfies the error equation
$$\mu_{l+1} = \frac{7 b_2^3}{10}\, \mu_l^4 + O\!\left(\mu_l^5\right), \qquad (16)$$
where $\mu_l = z_l - \beta$ and $b_l = \frac{g^{(l)}(\beta)}{l!\, g'(\beta)}$.
Proof. 
Considering the assumptions of the theorem, we expand $g(z_l)$ and $g'(z_l)$ around $\beta$ to obtain:
$$g(z_l) = g'(\beta)\left[\mu_l + b_2\mu_l^2 + b_3\mu_l^3 + b_4\mu_l^4 + b_5\mu_l^5 + O(\mu_l^6)\right], \qquad (17)$$
and
$$g'(z_l) = g'(\beta)\left[1 + 2b_2\mu_l + 3b_3\mu_l^2 + 4b_4\mu_l^3 + 5b_5\mu_l^4 + O(\mu_l^5)\right]. \qquad (18)$$
Now, from (17) and (18), we have
$$\frac{g(z_l)}{g'(z_l)} = \mu_l - b_2\mu_l^2 + 2\left(b_2^2 - b_3\right)\mu_l^3 + \left(-4b_2^3 + 7b_2b_3 - 3b_4\right)\mu_l^4 + O(\mu_l^5). \qquad (19)$$
By substituting (19) into the first step of (15), we obtain
$$x_l = \beta + b_2\mu_l^2 + \left(2b_3 - 2b_2^2\right)\mu_l^3 + \left(4b_2^3 - 7b_2b_3 + 3b_4\right)\mu_l^4 + O(\mu_l^5). \qquad (20)$$
From (17) and (19), and by a similar methodology, we obtain
$$v_l - \beta = \frac{7}{10}\, b_2^2\, \mu_l^3 + \left(\frac{9 b_2 b_3}{5} - \frac{159 b_2^3}{100}\right)\mu_l^4 + O(\mu_l^5). \qquad (21)$$
Now, the use of (15) and (21) implies that
$$\frac{g(v_l) - g(z_l)}{v_l - z_l} = g'(\beta)\left[1 + b_2\mu_l + b_3\mu_l^2 + \left(\frac{7b_2^3}{10} + b_4\right)\mu_l^3\right] + O(\mu_l^4). \qquad (22)$$
Finally, by employing (15) and (22), it is possible to attain (16). Therefore, the iteration (15) has fourth-order convergence.    □
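The quartic rate in (16) can also be observed numerically. For $g(z) = z^2 - 1$ (so that $b_2 = 1/2$ at $\beta = 1$), the following sketch (reusing step15 from above; the working precision and iterate count are illustrative) prints computational order estimates $\log|\mu_{l+1}|/\log|\mu_l|$ that approach 4:

(* Estimate the convergence order of (15) on g(z) = z^2 - 1. *)
g = Function[z, z^2 - 1];
errs = Abs[NestList[step15[g, #] &, N[3/2, 300], 4] - 1];
Ratios[Log[errs]]   (* successive values tend to 4 *)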
Here, it might be asked how the scheme (15) is built. Although the scheme is presented as given, a detailed argument regarding its construction makes it possible to understand whether the scheme is unique or hides a class of possible schemes behind it. The scheme (15) consists of three steps. The first two steps are designed as a Traub-like third-order method, with the coefficients set to 9/30 and 39/30 to obtain the third order of convergence together with attraction basins as large as possible. The last substep is a secant step built from the first and second steps, which includes the computation of a divided-difference operator. This structure was obtained after extensive experimentation in order to fulfill the following:
  • To keep the global convergence order at four for the quadratic equations.
  • To keep the number of matrix products at four, with one matrix inversion, just like the associated methods (12) and (13).
  • The global attraction basins (shown later in Figure 5 and Figure 6) must be larger than those of the associated methods (12) and (13).

3. A New Solver and Its Convergence

Let us now apply the iterative method (15) to Equation (14). In the matrix environment, this symbolically reduces to the following numerical method:
$$Z_{l+1} = Z_l\left(29I + 114Z_l^2 + 17Z_l^4\right)\left(3I + 86Z_l^2 + 71Z_l^4\right)^{-1}, \qquad (23)$$
using the initial value (6). Note that, similarly, one may obtain the reciprocal version of (23) as follows:
$$Z_{l+1} = \left(3I + 86Z_l^2 + 71Z_l^4\right)\left[Z_l\left(29I + 114Z_l^2 + 17Z_l^4\right)\right]^{-1}. \qquad (24)$$
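In Mathematica, one sweep of (23) and (24) can be coded as follows (a sketch; the names pm1Step and pm2Step match the labels PM1 and PM2 used in Section 4):

(* One sweep of the proposed method (23) (PM1) and of (24) (PM2);
   each costs four matrix products and one inversion. *)
pm1Step[Z_] := Module[{Z2 = Z.Z, Z4, Id = IdentityMatrix[Length[Z]]},
  Z4 = Z2.Z2;
  (Z.(29 Id + 114 Z2 + 17 Z4)) . Inverse[3 Id + 86 Z2 + 71 Z4]]

pm2Step[Z_] := Module[{Z2 = Z.Z, Z4, Id = IdentityMatrix[Length[Z]]},
  Z4 = Z2.Z2;
  (3 Id + 86 Z2 + 71 Z4) . Inverse[Z.(29 Id + 114 Z2 + 17 Z4)]]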
Theorem 2. 
For computing the sign of a matrix A having no eigenvalues on the imaginary axis, let $Z_0$ be selected via (6). Then, the iterative method (24) (or (23)) converges to M, and the convergence order is four.
Proof. 
We consider that $W$ is the Jordan block matrix and $K$ is an invertible matrix of the same size, so that $A$ is decomposed as
$$A = K W K^{-1}. \qquad (25)$$
Now, applying the solver (24) from the iterate $l$ to $l+1$, we obtain an iteration that maps the eigenvalues as follows (see [14] for more information):
$$\gamma_{l+1}^i = \left(3 + 86\,(\gamma_l^i)^2 + 71\,(\gamma_l^i)^4\right)\left[\gamma_l^i\left(29 + 114\,(\gamma_l^i)^2 + 17\,(\gamma_l^i)^4\right)\right]^{-1}, \qquad 1 \le i \le n, \qquad (26)$$
where $m_i = \operatorname{sign}(\gamma^i) = \pm 1$. In general, and after some mathematical simplifications, the expression (26) reveals that the eigenvalues converge to $m_i = \pm 1$; that is to say,
$$\lim_{l \to \infty}\left(\frac{\gamma_{l+1}^i - m_i}{\gamma_{l+1}^i + m_i}\right) = 0. \qquad (27)$$
The relation (27) establishes the convergence of the iteration (24) to $\pm 1$. Now, to investigate the rate of convergence, we first write
$$\Delta_l = Z_l\left(29I + 114Z_l^2 + 17Z_l^4\right). \qquad (28)$$
Using (28), and recalling that $M^2 = I$ and that $Z_l$ and $M$ commute, we can write:
$$\begin{aligned}
Z_{l+1} - M &= \left(3I + 86Z_l^2 + 71Z_l^4\right)\Delta_l^{-1} - M\\
&= \left[3I + 86Z_l^2 + 71Z_l^4 - M\Delta_l\right]\Delta_l^{-1}\\
&= \left[3I + 86Z_l^2 + 71Z_l^4 - 29Z_lM - 114Z_l^3M - 17Z_l^5M\right]\Delta_l^{-1}\\
&= \left[3(Z_l - M)^4 - 17Z_lM\left(Z_l^4 - 4Z_l^3M + 6Z_l^2M^2 - 4Z_lM^3 + I\right)\right]\Delta_l^{-1}\\
&= \left[3(Z_l - M)^4 - 17Z_lM(Z_l - M)^4\right]\Delta_l^{-1}\\
&= (Z_l - M)^4\left[3I - 17Z_lM\right]\Delta_l^{-1}.
\end{aligned} \qquad (29)$$
Using (29), it is possible to obtain
$$\left\|Z_{l+1} - M\right\| \le \left\|3I - 17Z_lM\right\|\,\left\|\Delta_l^{-1}\right\|\,\left\|Z_l - M\right\|^4, \qquad (30)$$
which shows the fourth order of convergence. This completes the proof. The error analysis for (23) can be deduced similarly.    □
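At the scalar level, the factorization used in the proof can be double-checked symbolically; for an eigenvalue with $m_i = 1$, the residual of (24) must contain the factor $(z - 1)^4$ (a sketch; the name f is ours):

(* Scalar check of the error factorization in Theorem 2 (case m = 1). *)
f[z_] := (3 + 86 z^2 + 71 z^4)/(z (29 + 114 z^2 + 17 z^4));
Factor[Numerator[Together[f[z] - 1]]]
(* the output equals (z - 1)^4 (3 - 17 z), up to ordering of factors *)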
For economic reasons, it is important to employ an efficient algorithm when solving practical problems. That is to say, the convergence rate is not the only factor, and a method is useful only if it can compete with the most efficient existing solvers of the same type. Comparing (24) with (12) and (13), all of them require four matrix products and only one matrix inversion per cycle. Moreover, as is now checked, the proposed methods (23) and (24) have wider convergence radii.
To check the global convergence of the presented method in contrast to the existing solvers, we draw the attraction basins of the iterative methods when solving (14) on the complex domain $[-2, 2] \times [-2, 2]$. In fact, we divide the domain into a refined mesh and test to which root each mesh point converges. The results of the comparisons are brought forward in Figure 5 and Figure 6, employing the stopping criterion
$$|g(z_l)| \le 10^{-3}. \qquad (31)$$
The points are shaded according to the number of cycles required to attain convergence. The figures also show that (23) and (24) have wider convergence radii than their competitors of the same order from (9).
Theorem 3. 
Using (24) under the same assumptions on A as in Theorem 2, the sequence $\{Z_l\}_{l=0}^{\infty}$ generated with $Z_0 = A$ is asymptotically stable.
Proof. 
Let $\chi_l$ stand for the computational perturbation produced at the $l$-th iterate, and write
$$\tilde{Z}_l = Z_l + \chi_l. \qquad (32)$$
From now on, it is also assumed that $(\chi_l)^i \approx 0$ for $i \ge 2$, in line with the first-order error analysis employed in this theorem; this is valid as long as $\chi_l$ is sufficiently small. Now, we obtain
$$\tilde{Z}_{l+1} = \left[3I + 86\tilde{Z}_l^2 + 71\tilde{Z}_l^4\right]\left[\tilde{Z}_l\left(29I + 114\tilde{Z}_l^2 + 17\tilde{Z}_l^4\right)\right]^{-1}. \qquad (33)$$
For sufficiently large $l$, that is to say, in the convergence phase, we consider that $Z_l \approx \operatorname{sign}(A) = M$ and use the first-order approximation
$$(L + U)^{-1} \approx L^{-1} - L^{-1}UL^{-1}, \qquad (34)$$
valid for any invertible matrix $L$ and any matrix $U$ of sufficiently small norm. Using also $M^2 = I$ and $M^{-1} = M$, we obtain the following relation:
$$\tilde{Z}_{l+1} \approx M + \frac{1}{2}\chi_l - \frac{1}{2}M\chi_lM. \qquad (35)$$
By further simplifications and using $\chi_{l+1} = \tilde{Z}_{l+1} - Z_{l+1}$, we find:
$$\chi_{l+1} \approx \frac{1}{2}\chi_l - \frac{1}{2}M\chi_lM. \qquad (36)$$
Since the map $\chi \mapsto \frac{1}{2}(\chi - M\chi M)$ is idempotent, the perturbation at stage $l + 1$ remains bounded; that is,
$$\left\|\chi_{l+1}\right\| \le \frac{1}{2}\left\|\chi_0 - M\chi_0M\right\|. \qquad (37)$$
The inequality (37) reveals that the sequence of matrices generated by (24) is asymptotically stable; the same holds for (23). This finishes the proof.    □

4. Numerical Treatments

In this section, the iterative solvers discussed so far are compared in Mathematica 12.0 [17], using the following termination criterion:
$$\left\|Z_{l+1}^2 - I\right\|_2 \le 10^{-5}. \qquad (38)$$
The methods compared are (5), (11)–(13), (23), and (24) (denoted by Newton, Halley, Padé 4-1, Padé 4-2, PM1, and PM2, respectively). Other matrix norms might be employed in (38); although they might be very useful in terms of reducing the total elapsed time of the compared methods, for the general case of complex dense matrices, the $\|\cdot\|_2$ norm is a reliable choice. The stopping criterion (38) comes from the fact that, at each iterate, the numerical approximation must satisfy the nonlinear quadratic matrix equation.
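A generic driver implementing the termination (38) for any of the one-sweep functions sketched earlier (newtonStep, pade12Step, pm1Step, ...) might look like this; the name signSolve and the iteration cap are our illustrative choices:

(* Iterate a one-sweep function from Z0 = A until (38) holds,
   returning the sign approximation and the number of iterates. *)
signSolve[step_, A_, tol_: 10.^-5, maxIt_: 100] :=
 Module[{Z = N[A], Id = IdentityMatrix[Length[A]], l = 0},
  While[Norm[Z.Z - Id, 2] > tol && l < maxIt,
   Z = step[Z]; l++];
  {Z, l}]

(* e.g., {S, iters} = signSolve[pm1Step,
     RandomComplex[{-1 - I, 1 + I}, {100, 100}]]; *)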
Example 1. 
We have tested all of the numerical methods of the same type on 10 randomly generated complex matrices given by the piece of Mathematica code below:
SeedRandom[456];
min = 11; number = 20;
Table[A[l] = RandomComplex[{-200 - 200 I,
      200 + 200 I}, {50 l, 50 l}], {l, min, number}];
tolerance = 10^-5;
The sizes vary from $550 \times 550$ to $1000 \times 1000$.
The results are provided in Table 1 and Table 2, showing that PM1 has the best performance among its competitors. Note that the convergence of solvers for the matrix sign function depends on a suitable selection of the initial matrix. It is seen that PM1 beats all of its competitors by converging to the sign as quickly as possible. The mean number of iterates required to achieve convergence, as well as the mean CPU elapsed time, is lowest for the proposed solvers.
The computational tests in this section have been performed to show the efficacy of the new iteration method (and its reciprocal) for a variety of complex matrices of different sizes; the mean CPU times of (23) and (24) are better than those of the other methods.
Example 2. 
The target of this problem is to compare the solvers for 10 real, randomly generated matrices of different sizes, as follows:
SeedRandom[123];
min = 11; number = 20;
Table[A[l] = RandomReal[{-1000, 1000},
     {50 l, 50 l}], {l, min, number}];
tolerance = 10^-5;
The results are given in Table 3 and Table 4, which confirm the superiority of the proposed solvers over the existing ones in terms of both the number of iterates and the elapsed CPU time.

5. Concluding Remarks

Calculating the sign of an invertible matrix is a significant problem in the theory and application of matrix functions in mathematics. Accordingly, it is important to design new methods for such a purpose. Toward this goal, after discussing the importance of studying the fractal behavior of iteration methods for solving nonlinear equations on different polynomials, we have proposed a new solver. The proposed multiplication-rich fourth-order method was developed, and its stability was proved. Computational tests were performed to check the efficacy of the proposed solver and to confirm the theoretical discussions. The presented scheme has global convergence for the matrix sign function; however, like other similar methods of the same structure, its convergence for the matrix sector function is only local. This makes it important to investigate how to choose a proper starting matrix so as to be in the convergence phase when the scheme is employed for computing the matrix sector function. Such a choice is under investigation by our team for future research works.

Author Contributions

Conceptualization, T.L. and M.Z.U.; formal analysis, T.L. and S.S.; funding acquisition, T.L., M.Z.U. and S.S.; investigation, M.Z.U. and S.S.; methodology, T.L. and K.M.A.A.; supervision, T.L.; validation, M.Z.U. and K.M.A.A.; writing—original draft, M.Z.U., K.M.A.A. and S.S.; and writing—review and editing, T.L., M.Z.U. and S.S. All authors have read and agreed to the published version of the manuscript.

Funding

The research (of Tao Liu) was funded by the Natural Science Foundation of Hebei Province of China (A2020501007), the Fundamental Research Funds for the Central Universities (N2123015), and the Technical Service Project of Eighth Geological Brigade of Hebei Bureau of Geology and Mineral Resources Exploration (KJ2022-021).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data sharing is not applicable to this article, as no new data were created in this study.

Acknowledgments

The second author (Malik Zaka Ullah) thanks The Deanship of Scientific Research (DSR) at King Abdulaziz University (KAU), Jeddah, Saudi Arabia, which has funded his project under grant number RG-11-130-43.

Conflicts of Interest

The authors declare that they have no known competing financial interest or personal relationships that could have appeared to influence the work reported in this paper.

References

  1. Soheili, A.R.; Soleymani, F.; Petković, M.D. On the computation of weighted Moore–Penrose inverse using a high-order matrix method. Comput. Math. Appl. 2013, 66, 2344–2351.
  2. Haghani, F.K.; Soleymani, F. An improved Schulz-type iterative method for matrix inversion with application. Trans. Inst. Meas. Control 2014, 36, 983–991.
  3. Soleymani, F.; Vanani, S.K.; Siyyam, H.I.; Al-Subaihi, I.A. Numerical solution of nonlinear equations by an optimal eighth-order class. Ann. Univ. Ferrara 2013, 59, 159–171.
  4. Qasim, S.; Ali, Z.; Ahmad, F.; Serra-Capizzano, S.; Ullah, M.Z.; Mahmood, A. Solving systems of nonlinear equations when the nonlinearity is expensive. Comput. Math. Appl. 2016, 71, 1464–1478.
  5. Gdawiec, K.; Kotarski, W.; Lisowska, A. Visual analysis of Newton's method with fractional order derivatives. Symmetry 2019, 11, 1143.
  6. Padilla, J.J.; Chicharro, F.I.; Cordero, A.; Torregrosa, J.R. Parametric family of root-finding iterative methods: Fractals of the basins of attraction. Fractal Fract. 2022, 6, 572.
  7. Neta, B.; Chun, C. Comparison of several families of optimal eighth order methods. Appl. Math. Comput. 2016, 274, 762–773.
  8. Getz, C.; Helmstedt, J. Graphics with Mathematica: Fractals, Julia Sets, Patterns and Natural Forms; Elsevier: Amsterdam, The Netherlands, 2004.
  9. Misrikhanov, M.S.; Ryabchenko, V.N. Matrix sign function in the problems of analysis and design of the linear systems. Autom. Remote Control 2008, 69, 198–222.
  10. Roberts, J.D. Linear model reduction and solution of the algebraic Riccati equation by use of the sign function. Int. J. Control 1980, 32, 677–687.
  11. Soleymani, F.; Haghani, F.K.; Shateyi, S. Several numerical methods for computing unitary polar factor of a matrix. Adv. Differ. Equ. 2016, 2016, 1–11.
  12. Kenney, C.S.; Laub, A.J. Rational iterative methods for the matrix sign function. SIAM J. Matrix Anal. Appl. 1991, 12, 273–291.
  13. Gomilko, O.; Greco, F.; Ziętak, K. A Padé family of iterations for the matrix sign function and related problems. Numer. Linear Algebra Appl. 2012, 19, 585–605.
  14. Soleymani, F.; Stanimirović, P.S.; Stojanović, I. A novel iterative method for polar decomposition and matrix sign function. Discrete Dyn. Nat. Soc. 2015, 2015, 649423.
  15. Zainali, N.; Lotfi, T. A globally convergent variant of mid-point method for finding the matrix sign. Comput. Appl. Math. 2018, 37, 5795–5806.
  16. Soleymani, F. Letter to the editor regarding the article by Khattri: Derivative free algorithm for solving nonlinear equations. Computing 2013, 95, 159–162.
  17. Magrab, E.B. An Engineer's Guide to Mathematica; John Wiley & Sons: West Sussex, UK, 2014.
Figure 1. Attraction basins on $f(z) = z^2 - 1$ for the secant (left) and Newton (right) solvers.
Figure 2. Attraction basins on $f(z) = z^3 - 1$ for the secant (left) and Newton (right) solvers.
Figure 3. Attraction basins on $f(z) = z^4 - 1$ for the secant (left) and Newton (right) solvers.
Figure 4. Attraction basins on $f(z) = z^6 - 1$ for the secant (left) and Newton (right) solvers.
Figure 5. Basins of attraction for (5) (left) and (12) (right).
Figure 6. Basins of attraction for (23) (left) and (24) (right).
Table 1. Results of comparisons via number of iterates for Experiment 1.

Matrix   Size          Newton   Halley   Padé 4-1   Padé 4-2   PM1    PM2
#1       550 × 550     22       14       11         11         11     9
#2       600 × 600     23       15       12         12         11     10
#3       650 × 650     28       18       14         14         13     12
#4       700 × 700     24       15       12         12         12     11
#5       750 × 750     23       15       12         12         11     10
#6       800 × 800     24       15       12         12         11     11
#7       850 × 850     26       17       13         13         12     11
#8       900 × 900     24       15       12         12         12     11
#9       950 × 950     23       15       12         12         11     10
#10      1000 × 1000   24       15       12         12         11     11
Mean                   24.1     15.4     12.2       12.2       11.5   10.6
Table 2. Results of comparisons based on CPU time (seconds) for Experiment 1.

Matrix   Size          Newton   Halley   Padé 4-1   Padé 4-2   PM1    PM2
#1       550 × 550     1.69     1.62     1.51       1.57       1.64   1.37
#2       600 × 600     2.25     2.16     2.04       2.12       2.06   1.88
#3       650 × 650     3.32     3.09     2.89       2.86       2.91   2.63
#4       700 × 700     3.54     3.16     3.01       3.07       3.16   2.98
#5       750 × 750     4.12     3.89     3.60       3.71       3.44   3.33
#6       800 × 800     5.22     4.48     4.24       4.35       3.99   4.24
#7       850 × 850     6.60     6.04     5.34       5.47       5.25   4.92
#8       900 × 900     7.32     6.26     5.90       5.93       6.16   5.92
#9       950 × 950     8.45     7.53     6.89       6.93       6.59   6.31
#10      1000 × 1000   10.64    8.81     8.23       8.70       7.72   7.88
Mean                   5.32     4.71     4.37       4.47       4.29   4.15
Table 3. Results of comparisons based on number of iterates for Experiment 2.

Matrix   Size          Newton   Halley   Padé 4-1   Padé 4-2   PM1    PM2
#1       550 × 550     28       18       14         14         13     12
#2       600 × 600     33       21       17         17         14     14
#3       650 × 650     23       15       12         12         11     10
#4       700 × 700     25       16       13         13         12     11
#5       750 × 750     23       15       12         12         11     10
#6       800 × 800     28       18       14         14         13     12
#7       850 × 850     25       16       13         13         12     11
#8       900 × 900     24       15       12         12         11     10
#9       950 × 950     27       17       14         14         13     12
#10      1000 × 1000   25       16       13         13         12     11
Mean                   26.1     16.7     13.4       13.4       12.2   11.3
Table 4. Results of comparisons based on CPU time (seconds) for Experiment 2.

Matrix   Size          Newton   Halley   Padé 4-1   Padé 4-2   PM1    PM2
#1       550 × 550     1.10     0.92     0.82       0.82       0.75   0.74
#2       600 × 600     1.53     1.31     1.18       1.22       1.03   1.06
#3       650 × 650     1.34     1.11     1.02       1.04       0.98   0.90
#4       700 × 700     1.65     1.44     1.28       1.40       1.28   1.22
#5       750 × 750     1.82     1.60     1.43       1.52       1.38   1.30
#6       800 × 800     2.67     2.22     1.91       2.04       1.90   1.80
#7       850 × 850     2.74     2.33     2.07       2.20       2.05   1.86
#8       900 × 900     3.05     2.56     2.21       2.37       2.22   1.96
#9       950 × 950     3.85     3.18     2.94       3.13       2.91   2.72
#10      1000 × 1000   4.13     3.52     3.42       3.36       3.13   2.92
Mean                   2.39     2.02     1.83       1.91       1.76   1.65