Article

Two Iterative Methods with Memory Constructed by the Method of Inverse Interpolation and Their Dynamics

School of Mathematical Sciences, Bohai University, Jinzhou 121000, China
* Author to whom correspondence should be addressed.
Mathematics 2020, 8(7), 1080; https://doi.org/10.3390/math8071080
Submission received: 11 June 2020 / Revised: 29 June 2020 / Accepted: 1 July 2020 / Published: 3 July 2020
(This article belongs to the Special Issue Iterative Methods for Solving Nonlinear Equations and Systems 2020)

Abstract

In this paper, we obtain two iterative methods with memory by using inverse interpolation. Firstly, using three function evaluations, we present a two-step iterative method with memory, which has the convergence order 4.5616. Secondly, a three-step iterative method of order 10.1311 is obtained, which requires four function evaluations per iteration. Herzberger’s matrix method is used to prove the convergence order of new methods. Finally, numerical comparisons are made with some known methods by using the basins of attraction and through numerical computations to demonstrate the efficiency and the performance of the presented methods.

1. Introduction

Solving nonlinear equations is one of the most important problems in scientific computation. Since the 1960s, many multipoint iterative methods have been proposed for solving nonlinear equations of the form f(x) = 0. The inverse interpolation method and the self-accelerating parameter method are two effective ways to construct iterative methods with memory. An iterative method that uses self-accelerating parameters is called a self-accelerating iterative method with memory. A self-accelerating parameter is a variable parameter, which can be constructed by Newton interpolation or Hermite interpolation. Many efficient self-accelerating iterative methods have been presented in recent years; see [1,2,3,4,5,6,7,8,9,10,11]. Džunić et al. [1], Soleymani et al. [2] and Sharma et al. [3] have proposed derivative-free iterative methods with one self-accelerating parameter for solving nonlinear equations. We [4,5,6] have obtained Newton-type iterative methods with memory using one simple self-accelerating parameter, which is constructed from the iterative sequences. By increasing the number of self-accelerating parameters, Cordero et al. [7], Lotfi et al. [8] and Zafar et al. [9] have obtained iterative methods with high efficiency. Chicharro et al. [10] have analyzed the stability of iterative methods with memory by dynamical theory. Narang et al. [11] have presented a class of Steffensen-type methods with memory for solving systems of nonlinear equations. However, the self-accelerating parameter becomes very complex if it is constructed by a high-order interpolation polynomial. In order to save computing time, the self-accelerating parameter should have a simple structure. If an iterative method with memory is constructed by using an inverse interpolation polynomial, we call it an inverse interpolation method with memory. Neta [12] has derived a very fast inverse interpolation iterative method with memory, which is given by
y_k = N(x_k) + [f(y_{k−1}) ϕ(z_{k−1}) − f(z_{k−1}) ϕ(y_{k−1})] f(x_k)^2 / (f(y_{k−1}) − f(z_{k−1})),
z_k = N(x_k) + [f(y_k) ϕ(z_{k−1}) − f(z_{k−1}) ϕ(y_k)] f(x_k)^2 / (f(y_k) − f(z_{k−1})),
x_{k+1} = N(x_k) + [f(y_k) ϕ(z_k) − f(z_k) ϕ(y_k)] f(x_k)^2 / (f(y_k) − f(z_k)),
where
N(x_k) = x_k − f(x_k)/f′(x_k),
ϕ(t) = (1/(f(t) − f(x_k))) [ (t − x_k)/(f(t) − f(x_k)) − 1/f′(x_k) ].
Method (1) is denoted by Neta’s method (NETM) in this paper. Petković and Neta [13] have obtained that the convergence order of NETM is at least 10.1311. Inspired by Neta’s idea, Petković et al. [14] have presented the following two-step iterative method with memory
y_k = N(x_k) + f(x_k)^2 ϕ(y_{k−1}),
x_{k+1} = N(x_k) + f(x_k)^2 ϕ(y_k),
where N ( x k ) and ϕ ( t ) are defined by (2) and (3), respectively. The convergence order of method (4) is 4.562. Method (4) is denoted by Petković’s method (PETM) in this paper.
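To make the construction above concrete, the following sketch implements PETM (4) in ordinary floating-point arithmetic. It is our illustration, not code from the paper: the test equation f(x) = x^3 − 2x − 5 and the starting point are arbitrary choices, and the guard in phi is a practical safeguard against division by zero near convergence.

```python
def petm(f, df, x0, iters=5, tol=1e-12):
    """Petkovic's two-step method with memory (PETM), method (4)."""
    xk = x0
    y_prev = xk - f(xk) / df(xk)          # y_{-1} = N(x0)
    for _ in range(iters):
        fx, dfx = f(xk), df(xk)
        if abs(fx) < tol:
            break
        def phi(t):
            # phi(t) = [ (t - x_k)/(f(t) - f(x_k)) - 1/f'(x_k) ] / (f(t) - f(x_k))
            d = f(t) - fx
            if d == 0.0:                   # degenerate: drop the memory correction
                return 0.0
            return ((t - xk) / d - 1.0 / dfx) / d
        N = xk - fx / dfx                  # Newton step N(x_k)
        yk = N + fx**2 * phi(y_prev)       # first step reuses y_{k-1}
        xk1 = N + fx**2 * phi(yk)          # second step uses the fresh y_k
        y_prev, xk = yk, xk1
    return xk

# Illustrative use: simple root of x^3 - 2x - 5 near 2.0945515
root = petm(lambda x: x**3 - 2*x - 5, lambda x: 3*x**2 - 2, x0=2.5)
```

Starting from y_{−1} = N(x_0), each iteration evaluates f(x_k), f′(x_k) and f(y_k), which matches the three function evaluations per step quoted for this class of methods.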
In this paper, two new iterative methods with memory are proposed for solving nonlinear equations, constructed by the inverse interpolation method. We construct a two-step iterative method of order 4.5616 by using three function evaluations. In order to further improve the convergence order, a three-step iterative method with convergence order 10.1311 is obtained, which requires four function evaluations. Herzberger's matrix method is used to prove the order of convergence of the new methods. Finally, numerical experiments are employed to support the theory developed in this work. The basins of attraction of existing methods and of our methods are presented and compared to illustrate their performance.

2. Two Inverse Interpolation Iterative Methods with Memory

Using an inverse interpolation rational polynomial, we construct a two-step iterative method with memory. Let
R(f(x)) = (a_0 + a_1 (f(x) − f(x_k))) / (1 + a_2 (f(x) − f(x_k))),
be a rational polynomial satisfying
x_k = R(f(x_k)),
1/f′(x_k) = R′(f(x_k)),
y_{k−1} = R(f(y_{k−1})).
From (6)–(8), we get
a 0 = x k ,
and the following system
a_1 = 1/f′(x_k) + a_2 x_k,
y_{k−1} = (x_k + a_1 (f(y_{k−1}) − f(x_k))) / (1 + a_2 (f(y_{k−1}) − f(x_k))).
Solving system (10), we obtain
a_1 = 1/f′(x_k) + φ(y_{k−1}) x_k,
a_2 = φ(y_{k−1}),
φ(t) = (1/(t − x_k)) (1/f′(x_k) − (t − x_k)/(f(t) − f(x_k))),
where φ ( t ) is a rational function.
From (5), (9), (11) and (12), we get
y_k = R(0) = (x_k − a_1 f(x_k)) / (1 − a_2 f(x_k)) = x_k − f(x_k) / (f′(x_k) (1 − φ(y_{k−1}) f(x_k))).
In the next step, x_{k+1} can be obtained by carrying out the same calculation as for y_k, but with y_{k−1} replaced by y_k. We get
x_{k+1} = x_k − f(x_k) / (f′(x_k) (1 − φ(y_k) f(x_k))).
Together with (14) and (15), we obtain a new two-step method with memory as follows:
y_k = x_k − f(x_k) / (f′(x_k) (1 − φ(y_{k−1}) f(x_k))),
x_{k+1} = x_k − f(x_k) / (f′(x_k) (1 − φ(y_k) f(x_k))),
where φ ( t ) is defined by (13).
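As a numerical sanity check on this derivation, here is a direct floating-point transcription of Method (16); the test equation is again an arbitrary choice of ours, and the degenerate-case guard (which falls back to a plain Newton step) is a practical addition.

```python
def method16(f, df, x0, iters=6, tol=1e-12):
    """Two-step method with memory (16)."""
    def phi(t, xk, fx, dfx):
        # phi(t) = (1/(t - x_k)) (1/f'(x_k) - (t - x_k)/(f(t) - f(x_k)))
        d = f(t) - fx
        if t == xk or d == 0.0:
            return 0.0                     # degenerate: step reduces to Newton's method
        return (1.0 / dfx - (t - xk) / d) / (t - xk)
    xk = x0
    y_prev = xk - f(xk) / df(xk)           # y_{-1} = N(x0)
    for _ in range(iters):
        fx, dfx = f(xk), df(xk)
        if abs(fx) < tol:
            break
        yk = xk - fx / (dfx * (1.0 - phi(y_prev, xk, fx, dfx) * fx))
        xk1 = xk - fx / (dfx * (1.0 - phi(yk, xk, fx, dfx) * fx))
        y_prev, xk = yk, xk1
    return xk

# Illustrative use on the same test equation as before
root = method16(lambda x: x**3 - 2*x - 5, lambda x: 3*x**2 - 2, x0=2.5)
```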
We will use Herzberger’s matrix method [15] to determine the convergence order of Method (16).
Theorem 1.
Let α ∈ I be a simple zero of a sufficiently differentiable function f : I ⊂ ℝ → ℝ for an open interval I. If an initial approximation x_0 is sufficiently close to the simple zero α of f, then the order of convergence of the two-step Method (16) with memory is at least 4.5616.
Proof. 
The lower bound of the order of a single-step s-point method x_k = G(x_{k−1}, x_{k−2}, …, x_{k−s}) is the spectral radius of a matrix M^(s) = (m_{ij}) associated with this method, with elements
m_{1,j} = amount of information required at point x_{k−j} (j = 1, 2, …, s),
m_{i,i−1} = 1 (i = 2, 3, …, s),
m_{i,j} = 0 otherwise.
The lower bound of the order of an s-step method G = G_1 ∘ G_2 ∘ ⋯ ∘ G_s is the spectral radius of the product of matrices M^(s) = M_1 · M_2 ⋯ M_s. According to Method (16), we get the following matrices:
x_{k+1} = G_1(y_k, x_k):  M_1 = [1 2; 1 0],
y_k = G_2(x_k, y_{k−1}):  M_2 = [2 1; 1 0].
The matrix M^(2) corresponding to Method (16) is
M^(2) = M_1 M_2 = [1 2; 1 0] [2 1; 1 0] = [4 1; 2 1].
The characteristic polynomial of matrix M ( 2 ) is
P_2(λ) = λ^2 − 5λ + 2,
and the eigenvalues of matrix M^(2) are 4.5616 and 0.4384. Since the spectral radius of matrix M^(2) is ρ(M^(2)) ≈ 4.5616, we conclude that the convergence order of Method (16) with memory is at least 4.5616. ☐
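The spectral radius used in the proof can be recomputed numerically. This NumPy snippet (our addition, not part of the paper) rebuilds M^(2) and confirms ρ(M^(2)) = (5 + √17)/2 ≈ 4.5616:

```python
import numpy as np

M1 = np.array([[1, 2], [1, 0]])   # information matrix of x_{k+1} = G1(y_k, x_k)
M2 = np.array([[2, 1], [1, 0]])   # information matrix of y_k = G2(x_k, y_{k-1})
M = M1 @ M2                       # M^(2) = [[4, 1], [2, 1]]
rho = max(abs(np.linalg.eigvals(M)))
# rho is the dominant root of lambda^2 - 5*lambda + 2, the lower bound on the order
```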
In order to improve the computational efficiency, we construct a three-step iterative method by using an inverse interpolation rational polynomial. Let
R(f(x)) = (a_0 + a_1 (f(x) − f(x_k)) + a_2 (f(x) − f(x_k))^2) / (1 + a_3 (f(x) − f(x_k))),
be a rational polynomial satisfying
x_k = R(f(x_k)),
1/f′(x_k) = R′(f(x_k)),
y_{k−1} = R(f(y_{k−1})),
z_{k−1} = R(f(z_{k−1})).
From (18)–(21), we get
a 0 = x k ,
and the following system
a_1 − a_3 x_k = 1/f′(x_k),
a_3 y_{k−1} − a_1 − a_2 (f(y_{k−1}) − f(x_k)) = (x_k − y_{k−1}) / (f(y_{k−1}) − f(x_k)),
a_3 z_{k−1} − a_1 − a_2 (f(z_{k−1}) − f(x_k)) = (x_k − z_{k−1}) / (f(z_{k−1}) − f(x_k)).
Solving system (23), we obtain
a_1 = 1/f′(x_k) + a_3 x_k,
a_2 = (φ(y_{k−1}) − φ(z_{k−1})) / (f[z_{k−1}, x_k] − f[y_{k−1}, x_k]),
a_3 = (φ(z_{k−1}) f[y_{k−1}, x_k] − φ(y_{k−1}) f[z_{k−1}, x_k]) / (f[y_{k−1}, x_k] − f[z_{k−1}, x_k]),
where φ(t) is defined by (13) and f[u, v] = (f(u) − f(v))/(u − v) denotes the first-order divided difference.
From (16), (21) and (23)–(25), we have
y_k = R(0) = (x_k − a_1 f(x_k) + a_2 f(x_k)^2) / (1 − a_3 f(x_k))
= x_k − f(x_k) [1 − M_{k,1} (φ(y_{k−1}) − φ(z_{k−1})) f(x_k) f′(x_k)] / ( f′(x_k) {1 − M_{k,1} (φ(y_{k−1}) f[z_{k−1}, x_k] − φ(z_{k−1}) f[y_{k−1}, x_k]) f(x_k)} ),
where M_{k,1} = 1/(f[z_{k−1}, x_k] − f[y_{k−1}, x_k]).
In the next step, z_k can be obtained by carrying out the same calculation as for y_k, but with y_{k−1} replaced by y_k. We get
z_k = x_k − f(x_k) [1 − M_{k,2} (φ(y_k) − φ(z_{k−1})) f(x_k) f′(x_k)] / ( f′(x_k) {1 − M_{k,2} (φ(y_k) f[z_{k−1}, x_k] − φ(z_{k−1}) f[y_k, x_k]) f(x_k)} ),
where M_{k,2} = 1/(f[z_{k−1}, x_k] − f[y_k, x_k]). Then x_{k+1} can be obtained by carrying out the same calculation as for z_k, but with z_{k−1} replaced by z_k:
x_{k+1} = x_k − f(x_k) [1 − M_{k,3} (φ(y_k) − φ(z_k)) f(x_k) f′(x_k)] / ( f′(x_k) {1 − M_{k,3} (φ(y_k) f[z_k, x_k] − φ(z_k) f[y_k, x_k]) f(x_k)} ),
where M_{k,3} = 1/(f[z_k, x_k] − f[y_k, x_k]).
Together with (27)–(29), we obtain a new three-step method with memory as follows:
y_k = x_k − f(x_k) [1 − M_{k,1} (φ(y_{k−1}) − φ(z_{k−1})) f(x_k) f′(x_k)] / ( f′(x_k) {1 − M_{k,1} (φ(y_{k−1}) f[z_{k−1}, x_k] − φ(z_{k−1}) f[y_{k−1}, x_k]) f(x_k)} ),
z_k = x_k − f(x_k) [1 − M_{k,2} (φ(y_k) − φ(z_{k−1})) f(x_k) f′(x_k)] / ( f′(x_k) {1 − M_{k,2} (φ(y_k) f[z_{k−1}, x_k] − φ(z_{k−1}) f[y_k, x_k]) f(x_k)} ),
x_{k+1} = x_k − f(x_k) [1 − M_{k,3} (φ(y_k) − φ(z_k)) f(x_k) f′(x_k)] / ( f′(x_k) {1 − M_{k,3} (φ(y_k) f[z_k, x_k] − φ(z_k) f[y_k, x_k]) f(x_k)} ),
where M_{k,1} = 1/(f[z_{k−1}, x_k] − f[y_{k−1}, x_k]), M_{k,2} = 1/(f[z_{k−1}, x_k] − f[y_k, x_k]) and M_{k,3} = 1/(f[z_k, x_k] − f[y_k, x_k]).
Now, we give the order of convergence of Method (30) by the following theorem.
Theorem 2.
Let α ∈ I be a simple zero of a sufficiently differentiable function f : I ⊂ ℝ → ℝ for an open interval I. If an initial approximation x_0 is sufficiently close to the simple zero α of f, then the order of convergence of the three-step Method (30) with memory is at least 10.1311.
Proof. 
Using Herzberger’s matrix method [15], we obtain the following matrices of Method (30)
x_{k+1} = G_1(z_k, y_k, x_k):  M_1 = [1 1 2; 1 0 0; 0 1 0],
z_k = G_2(y_k, x_k, z_{k−1}):  M_2 = [1 2 1; 1 0 0; 0 1 0],
y_k = G_3(x_k, z_{k−1}, y_{k−1}):  M_3 = [2 1 1; 1 0 0; 0 1 0].
Matrix M^(3) corresponding to the three-step Method (30) is
M^(3) = M_1 M_2 M_3 = [1 1 2; 1 0 0; 0 1 0] [1 2 1; 1 0 0; 0 1 0] [2 1 1; 1 0 0; 0 1 0] = [8 3 2; 4 2 1; 2 1 1].
The characteristic polynomial of matrix M ( 3 ) is
P_3(λ) = λ^3 − 11λ^2 + 9λ − 2,
and the eigenvalues of matrix M^(3) are 10.1311 and 0.4344 ± 0.0932i. Since the spectral radius of matrix M^(3) is ρ(M^(3)) ≈ 10.1311, we conclude that the convergence order of Method (30) with memory is at least 10.1311. ☐
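As with Theorem 1, the matrix product and its spectral radius can be checked numerically (our addition, not part of the paper):

```python
import numpy as np

M1 = np.array([[1, 1, 2], [1, 0, 0], [0, 1, 0]])  # x_{k+1} = G1(z_k, y_k, x_k)
M2 = np.array([[1, 2, 1], [1, 0, 0], [0, 1, 0]])  # z_k = G2(y_k, x_k, z_{k-1})
M3 = np.array([[2, 1, 1], [1, 0, 0], [0, 1, 0]])  # y_k = G3(x_k, z_{k-1}, y_{k-1})
M = M1 @ M2 @ M3                                  # = [[8, 3, 2], [4, 2, 1], [2, 1, 1]]
rho = max(abs(np.linalg.eigvals(M)))              # dominant root of P_3
```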

3. Numerical Results

Our Methods (16) and (30) were compared with Neta's method (NETM), Petković's method (PETM) and Wang's method (WANM) for solving some nonlinear equations. Numerical computations were performed in Matlab 7. The computer specifications are Microsoft Windows 7, Intel(R) Core(TM) i3-2350M CPU at 1.79 GHz, with 2 GB of RAM.
Wang’s method [8]
z_n = x_n − f(x_n)/f′(x_n),
y_n = z_n − λ_n (z_n − x_n)^2,
x_{n+1} = y_n − f(y_n) / (2 f[x_n, y_n] − f′(x_n)),
where λ_n = (1/(x_{n−1} − z_{n−1})) ( (x_n − z_{n−1})/(z_{n−1} − x_{n−1}) + 2 (z_n − x_n)/(x_n − y_{n−1}) ).
The following test functions were used in numerical experiments.
f_1(x) = x e^{x^2} − sin^2(x) + 3 cos(x) + 5, a ≈ −1.2076478271309189, x_0 = −1.5,
f_2(x) = x^5 + x^4 + 4x^2 − 15, a ≈ 1.3474280989683050, x_0 = 1.1,
f_3(x) = arcsin(x^2 − 1) − 0.5x + 1, a ≈ 0.59481096839836918, x_0 = 1.1,
f_4(x) = ln(x^2 − 2x + 2) + e^{x^2 − 4x + 4} sin(x − 1), a = 1, x_0 = 0.54,
f_5(x) = sin(x) − x/3, a ≈ 2.2788626600758283, x_0 = 3.27,
f_6(x) = 10x e^{−x^2} − 1, a ≈ 1.6796306104284499, x_0 = 2.1.
The absolute errors |x_k − a| in the first four iterations are given in Table 1, where a is the exact root computed with 3600 significant digits. For Methods (1), (4), (16) and (30), the initial approximation y_{−1} is calculated by y_{−1} = N(x_0), and z_{−1} is calculated by z_{−1} = y_{−1} + |f(x_0)|/10. The approximated computational order of convergence (ACOC) is defined by [16]:
ρ ≈ ln(|x_{n+1} − x_n| / |x_n − x_{n−1}|) / ln(|x_n − x_{n−1}| / |x_{n−1} − x_{n−2}|).
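To illustrate how the ACOC formula is applied to an iterate sequence, the snippet below computes it for plain Newton iterates on f(x) = x^2 − 2 (our example, not from the paper); the value should approach Newton's theoretical order 2.

```python
import math

# Generate Newton iterates for f(x) = x^2 - 2 starting from x0 = 1.5
xs = [1.5]
for _ in range(4):
    x = xs[-1]
    xs.append(x - (x * x - 2.0) / (2.0 * x))

# ACOC from the ratios of successive step sizes |x_{n+1} - x_n|
d = [abs(xs[i + 1] - xs[i]) for i in range(len(xs) - 1)]
rho = math.log(d[2] / d[1]) / math.log(d[1] / d[0])
# rho is close to 2, Newton's theoretical order for a simple root
```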
Table 1 shows that the numerical results are in concordance with the theory developed in this paper. The computing accuracy of our Method (16) is better than that of PETM for the nonlinear equations f_i (i = 1, 2, 4, 5, 6). The computing accuracy of our Method (30) is better than that of NETM for the nonlinear equations f_i (i = 1, 2, 4, 6).

4. Dynamical Analysis

The dynamical properties of the rational function give us important information about the numerical features of an iterative method, such as its stability and reliability. In this section, we compare our Methods (16) and (30) to Newton's method (2), NETM (1), PETM (4) and WANM by using the basins of attraction for the complex polynomials f(z) = z^k − 1, k = 2, 3, 4, 5, 6. To generate the basins of attraction for the zeros of a polynomial and an iterative method, we take a grid of 300 × 300 points in a rectangle D = [−3.0, 3.0] × [−3.0, 3.0] ⊂ ℂ and use these points as z_0. If the sequence generated by the iterative method reaches a zero z* of the polynomial with a tolerance |z_k − z*| < 10^−5 within a maximum of 25 iterations, we decide that z_0 is in the basin of attraction of this zero and paint the point with the color assigned to this root. Within the same basin of attraction, the number of iterations needed to reach the solution is shown in darker or brighter colors (the fewer iterations, the brighter the color). Black denotes lack of convergence to any of the roots (within the established maximum of iterations) or convergence to infinity. The parameter used in WANM is λ_0 = 0.001. All the figures were created with the Mathematica 4 computer algebra system on the same computer as above.
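The basin-generation procedure just described can be sketched as follows for Newton's method applied to f(z) = z^2 − 1; we use a coarser 60 × 60 grid than the paper's 300 × 300, and Python instead of Mathematica, purely for illustration.

```python
import numpy as np

roots = np.array([1.0 + 0j, -1.0 + 0j])           # zeros of f(z) = z^2 - 1

def basin_index(z0, max_iter=25, tol=1e-5):
    """Return (root index, iterations) for Newton's method, or (-1, max_iter) on failure."""
    z = z0
    for it in range(1, max_iter + 1):
        if abs(z) < 1e-14:                         # f'(z) = 2z vanishes: no convergence
            return -1, max_iter
        z = z - (z * z - 1.0) / (2.0 * z)          # Newton step for z^2 - 1
        dist = np.abs(roots - z)
        j = int(dist.argmin())
        if dist[j] < tol:
            return j, it                           # iteration count drives the shading
    return -1, max_iter                            # "black": no root reached

# Color a coarse grid over D = [-3, 3] x [-3, 3]
axis = np.linspace(-3.0, 3.0, 60)
image = np.array([[basin_index(x + 1j * y)[0] for x in axis] for y in axis])
```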
Figure 1 shows that our Method (16) and Newton's method are globally convergent for the complex polynomial f(z) = z^2 − 1. Compared to the other methods, our Method (16) has the fewest diverging points in Figure 2. Figure 3 shows that the convergence speed of our Method (30) is faster than that of the other methods. Figure 1, Figure 2, Figure 3, Figure 4 and Figure 5 show that the stability of Method (16) is better than that of the other methods and that WANM has the worst stability. The basins of attraction of our Method (16) with memory are larger than those of the other methods. On the whole, our Methods (16) and (30) are better than the other methods considered in this paper.

5. Conclusions

In this paper, two new iterative methods with memory are proposed for solving nonlinear equations, constructed by the inverse interpolation method. We first propose a two-step iterative method with convergence order 4.5616, which requires three function evaluations. By adding one function evaluation, a three-step method is obtained, which has convergence order 10.1311. The new methods are compared in performance with existing methods on numerical examples, which confirm the theoretical results. The basins of attraction of the existing methods and of our methods are presented and compared to illustrate their performance. The basin of attraction of our Method (16) is better than that of any of the other methods.

Author Contributions

Methodology, X.W. and M.Z.; writing—original draft preparation, X.W.; writing—review and editing, X.W. and M.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the National Natural Science Foundation of China (No. 61976027), Natural Science Foundation of Liaoning Province of China (No. 20180551262), Educational Commission Foundation of Liaoning Province of China (No. LJ2019010) and University-Industry Collaborative Education Program (Nos. 201901077017, 201902014012, 201902184038).

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.

References

  1. Džunić, J.; Petković, M.S.; Petković, L.D. Three-point methods with and without memory for solving nonlinear equations. Appl. Math. Comput. 2012, 218, 4917–4927.
  2. Soleymani, F.; Lotfi, T.; Tavakoli, E.; Haghani, F.K. Several iterative methods with memory using self-accelerators. Appl. Math. Comput. 2015, 254, 452–458.
  3. Sharma, J.R.; Guha, R.K.; Gupta, P. Some efficient derivative free methods with memory for solving nonlinear equations. Appl. Math. Comput. 2012, 219, 699–707.
  4. Wang, X.; Tao, Y. A new Newton method with memory for solving nonlinear equations. Mathematics 2020, 8, 108.
  5. Wang, X.; Fan, Q. A modified Ren's method with memory using a simple self-accelerating parameter. Mathematics 2020, 8, 540.
  6. Wang, X. A family of Newton-type iterative methods using some special self-accelerating parameters. Int. J. Comput. Math. 2018, 95, 2112–2127.
  7. Cordero, A.; Lotfi, T.; Bakhtiari, P.; Torregrosa, J.R. An efficient two-parametric family with memory for nonlinear equations. Numer. Algor. 2015, 68, 323–335.
  8. Lotfi, T.; Assari, P. New three- and four-parametric iterative with memory methods with efficiency index near 2. Appl. Math. Comput. 2015, 270, 1004–1010.
  9. Zafar, F.; Cordero, A.; Torregrosa, J.R.; Rafi, A. A class of four parametric with- and without memory root finding methods. Comp. Math. Methods 2019, 1, 1–14.
  10. Chicharro, F.I.; Cordero, A.; Torregrosa, J.R. Dynamics of iterative families with memory based on weight functions procedure. J. Comput. Appl. Math. 2019, 354, 286–298.
  11. Narang, M.; Bhatia, S.; Alshomrani, A.S. General efficient class of Steffensen type methods with memory for solving systems of nonlinear equations. J. Comput. Appl. Math. 2019, 352, 23–39.
  12. Neta, B. A new family of higher order methods for solving equations. Int. J. Comput. Math. 1983, 14, 191–195.
  13. Petković, M.S.; Neta, B.; Petković, L.D.; Džunić, J. Multipoint methods for solving nonlinear equations: A survey. Appl. Math. Comput. 2014, 226, 635–660.
  14. Petković, M.S.; Džunić, J.; Neta, B. Interpolatory multipoint methods with memory for solving nonlinear equations. Appl. Math. Comput. 2011, 218, 2533–2541.
  15. Herzberger, J. Über Matrixdarstellungen für Iterationsverfahren bei nichtlinearen Gleichungen. Computing 1974, 12, 215–222.
  16. Grau-Sánchez, M.; Noguera, M.; Grau, À.; Herrero, J.R. On new computational local orders of convergence. Appl. Math. Lett. 2012, 25, 2023–2030.
Figure 1. The results are for the polynomial z^2 − 1.
Figure 2. The results are for the polynomial z^3 − 1.
Figure 3. The results are for the polynomial z^4 − 1.
Figure 4. The results are for the polynomial z^5 − 1.
Figure 5. The results are for the polynomial z^6 − 1.
Table 1. Numerical results for f_i(x) (i = 1, …, 6).

| Methods | f_i(x) | x_1 − a | x_2 − a | x_3 − a | x_4 − a | ρ |
|---|---|---|---|---|---|---|
| PETM | f_1(x) | 0.35559 × 10^−2 | 0.19719 × 10^−10 | 0.59271 × 10^−48 | 0.47227 × 10^−219 | 4.5599492 |
| | f_2(x) | 0.15137 × 10^−2 | 0.14866 × 10^−12 | 0.51501 × 10^−58 | 0.26586 × 10^−265 | 4.5613777 |
| | f_3(x) | 0.14922 × 10^−4 | 0.65850 × 10^−24 | 0.57645 × 10^−112 | 0.15218 × 10^−513 | 4.5603961 |
| | f_4(x) | 0.47316 × 10^−1 | 0.40385 × 10^−5 | 0.11174 × 10^−23 | 0.26640 × 10^−108 | 4.5598985 |
| | f_5(x) | 0.13949 × 10^−2 | 0.61756 × 10^−14 | 0.18890 × 10^−65 | 0.25756 × 10^−300 | 4.5592146 |
| | f_6(x) | 0.11377 | 0.51251 × 10^−4 | 0.59924 × 10^−19 | 0.51705 × 10^−87 | 4.5582387 |
| WANM | f_1(x) | 0.51808 × 10^−2 | 0.86196 × 10^−8 | 0.38505 × 10^−33 | 0.408023 × 10^−142 | 4.2988150 |
| | f_2(x) | 0.25772 × 10^−2 | 0.18965 × 10^−11 | 0.23539 × 10^−49 | 0.224057 × 10^−212 | 4.3006584 |
| | f_3(x) | 0.11168 × 10^−1 | 0.13757 × 10^−8 | 0.13978 × 10^−38 | 0.108856 × 10^−167 | 4.3046156 |
| | f_4(x) | 0.55431 × 10^−1 | 0.58447 × 10^−3 | 0.89039 × 10^−13 | 0.477788 × 10^−54 | 4.2038911 |
| | f_5(x) | 0.96475 × 10^−2 | 0.22227 × 10^−10 | 0.50290 × 10^−47 | 0.226046 × 10^−204 | 4.2937795 |
| | f_6(x) | 0.57236 × 10^−1 | 0.69530 × 10^−5 | 0.34120 × 10^−22 | 0.116978 × 10^−96 | 4.3020510 |
| Method (16) | f_1(x) | 0.22190 × 10^−3 | 0.81848 × 10^−18 | 0.18945 × 10^−83 | 0.92583 × 10^−383 | 4.5601986 |
| | f_2(x) | 0.80426 × 10^−4 | 0.26205 × 10^−19 | 0.94196 × 10^−90 | 0.53248 × 10^−411 | 4.5603052 |
| | f_3(x) | 0.89331 × 10^−3 | 0.21584 × 10^−15 | 0.10233 × 10^−72 | 0.42013 × 10^−334 | 4.5598019 |
| | f_4(x) | 0.11157 × 10^−1 | 0.39468 × 10^−11 | 0.16859 × 10^−53 | 0.14820 × 10^−246 | 4.5564931 |
| | f_5(x) | 0.13173 × 10^−2 | 0.19405 × 10^−14 | 0.41159 × 10^−68 | 0.81326 × 10^−313 | 4.5591294 |
| | f_6(x) | 0.92941 × 10^−2 | 0.29837 × 10^−9 | 0.23171 × 10^−43 | 0.63284 × 10^−199 | 4.5606727 |
| NETM | f_1(x) | 0.99121 × 10^−4 | 0.28313 × 10^−38 | 0.31361 × 10^−388 | | 10.130669 |
| | f_2(x) | 0.18992 × 10^−5 | 0.51355 × 10^−59 | 0.52899 × 10^−579 | | 10.122309 |
| | f_3(x) | 0.23910 × 10^−10 | 0.60431 × 10^−113 | 0.32556 × 10^−1151 | | 10.119841 |
| | f_4(x) | 0.20425 × 10^−4 | 0.23991 × 10^−48 | 0.21325 × 10^−492 | | 10.108125 |
| | f_5(x) | 0.30800 × 10^−8 | 0.68005 × 10^−89 | 0.63841 × 10^−905 | | 10.117379 |
| | f_6(x) | 0.31973 × 10^−2 | 0.12293 × 10^−24 | 0.50418 × 10^−251 | | 10.099743 |
| Method (30) | f_1(x) | 0.35789 × 10^−10 | 0.93271 × 10^−112 | 0.61422 × 10^−1140 | | 10.121489 |
| | f_2(x) | 0.50281 × 10^−9 | 0.54597 × 10^−96 | 0.14347 × 10^−976 | | 10.125776 |
| | f_3(x) | 0.26423 × 10^−10 | 0.10036 × 10^−111 | 0.36975 × 10^−1138 | | 10.120580 |
| | f_4(x) | 0.11409 × 10^−9 | 0.10448 × 10^−99 | 0.76373 × 10^−1012 | | 10.130546 |
| | f_5(x) | 0.15455 × 10^−6 | 0.10783 × 10^−75 | 0.75250 × 10^−775 | | 10.109797 |
| | f_6(x) | 0.19512 × 10^−7 | 0.14705 × 10^−81 | 0.15699 × 10^−831 | | 10.117955 |

Wang, X.; Zhu, M. Two Iterative Methods with Memory Constructed by the Method of Inverse Interpolation and Their Dynamics. Mathematics 2020, 8, 1080. https://doi.org/10.3390/math8071080