
# A Modified Ren’s Method with Memory Using a Simple Self-Accelerating Parameter

by Xiaofeng Wang * and Qiannan Fan
School of Mathematics and Physics, Bohai University, Jinzhou 121000, China
* Author to whom correspondence should be addressed.
Mathematics 2020, 8(4), 540; https://doi.org/10.3390/math8040540
Submission received: 8 March 2020 / Revised: 23 March 2020 / Accepted: 1 April 2020 / Published: 7 April 2020

## Abstract

In this paper, a self-accelerating type method is proposed for solving nonlinear equations, which is a modified Ren’s method. A simple way is applied to construct a variable self-accelerating parameter of the new method, which does not increase any computational cost. The highest convergence order of the new method is $2+\sqrt{6}\approx 4.4495$. Numerical experiments are performed to show the performance of the new method, supporting the theoretical results.
MSC:
65H05; 65B99

## 1. Introduction

The self-accelerating type method is an efficient kind of iterative method with memory for solving nonlinear equations, in which some varying parameters are chosen as self-accelerating parameters during the iteration process. A self-accelerating parameter is calculated from information of the previous and current iterations, so it does not increase the computational cost of the iterative method. Thus, self-accelerating type methods possess a very high computational efficiency. The self-accelerating parameter is crucial to a self-accelerating type method and can make a big difference to its efficiency. There are two classical ways to construct the self-accelerating parameter: the secant method and the interpolation method. Traub’s method [1] is one of the most representative self-accelerating type methods; it can be written as
$w_n = x_n + T_n f(x_n), \quad T_n = -f[x_n, x_{n-1}]^{-1}, \quad x_{n+1} = x_n - \dfrac{f(x_n)}{f[x_n, w_n]},$
where $f[x_n, w_n] = \{f(x_n) - f(w_n)\}/(x_n - w_n)$ is a divided difference and the self-accelerating parameter $T_n$ is constructed by the secant method. The convergence order of method (1) is $1+\sqrt{2}$. Similarly, Petković et al. [2] obtained the following self-accelerating type method with order $2+\sqrt{5}$
$w_n = x_n - T_n f(x_n), \quad T_n = -f[x_n, x_{n-1}]^{-1}, \quad y_n = x_n - \dfrac{f(x_n)}{f[x_n, w_n]}, \quad x_{n+1} = y_n - \dfrac{f(y_n)}{f[x_n, w_n]}\left(1 + \dfrac{f(y_n)}{f(x_n)} + \dfrac{f(y_n)}{f(w_n)}\right).$
Zheng et al. [3] also proposed a self-accelerating type method with order $(3+\sqrt{13})/2 \approx 3.3028$, which can be given by
$w_n = x_n + T_n f(x_n), \quad T_n = f[x_n, x_{n-1}]^{-1}, \quad y_n = x_n - \dfrac{f(x_n)}{f[x_n, w_n]}, \quad x_{n+1} = x_n - \dfrac{f(x_n)^2}{f[x_n, w_n]\left(f(x_n) - f(y_n)\right)}.$
Using the interpolation method to construct the self-accelerating parameter, we [4,5,6] obtained some self-accelerating type methods; one of them is given by
$y_n = x_n - \dfrac{f(x_n)}{T_n f(x_n) + f'(x_n)}, \quad T_n = -\dfrac{H_2''(x_n)}{2f'(x_n)}, \quad x_{n+1} = y_n - \dfrac{f(y_n)}{2T_n f(x_n) + f'(x_n)}\left(1 + \dfrac{2f(y_n)}{f(x_n)} + \left(\dfrac{f(y_n)}{f(x_n)}\right)^2\right),$
where $H_2(x) = f(x_n) + f'(x_n)(x - x_n) + f[x_n, x_n, y_{n-1}](x - x_n)^2$, $H_2''(x_n) = 2f[x_n, x_n, y_{n-1}]$, and the self-accelerating parameter $T_n$ is constructed by a Hermite interpolation polynomial. Method (4) has convergence order $(5+\sqrt{17})/2$. Using Newton interpolation to construct the self-accelerating parameter, Džunić et al. [7,8] gave some efficient self-accelerating type methods; one of them can be given by
$w_n = x_n + T_n f(x_n), \quad T_n = -\dfrac{1}{N_2'(x_n)}, \quad x_{n+1} = x_n - \dfrac{f(x_n)}{f[x_n, w_n]},$
where $N_2(t) = N_2(t; x_n, x_{n-1}, w_{n-1})$ is Newton’s interpolation polynomial of second degree and $N_2'(x_n) = f[x_n, x_{n-1}] + f[x_n, x_{n-1}, w_{n-1}](x_n - x_{n-1}).$ The convergence order of method (5) is 3. By increasing the number of self-accelerating parameters or using interpolation polynomials of higher degree to construct the self-accelerating parameter, the convergence order and the computational efficiency of self-accelerating type methods can be improved greatly. Recently, more and more high-order self-accelerating type methods have been presented for solving nonlinear equations. Zaka Ullah et al. [9] gave an efficient tri-parametric iterative method by using a Newton interpolation polynomial to construct the self-accelerating parameter. Using four self-accelerating parameters, Lotfi and Assari [10] obtained a derivative-free iterative method with efficiency index near 2. We also proposed an efficient iterative method with n self-accelerating parameters; some known methods [1,2,3,7,8,9,10,11,12,13,14] can be seen as special cases of our method [15]. Cordero et al. [16] and Campos et al. [17] designed some new self-accelerating type methods and studied their stability. A more extensive list of references, as well as a survey of progress on self-accelerating type methods, may be found in the recent books by Petković et al. [18] and Iliev et al. [19]. It is obvious that the secant method and the interpolation method are very effective ways to construct the self-accelerating parameter. However, all of these works focus on designing methods that improve the convergence order and computational efficiency; only a few recent works [20,21] attempt to use new techniques to construct the self-accelerating parameter.
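To make the secant-based construction in (1) concrete, the following minimal Python sketch implements Traub’s method with memory; the initial parameter $T_0 = -0.1$, the stopping tolerance, and the iteration cap are illustrative choices, not taken from the paper.

```python
import math

def traub_with_memory(f, x0, T0=-0.1, iters=8):
    """Traub's method (1): after the first step, the self-accelerating
    parameter is the secant estimate T_n = -1/f[x_n, x_{n-1}]."""
    x, x_prev, T = x0, None, T0
    for _ in range(iters):
        fx = f(x)
        if abs(fx) < 1e-13:          # stop once the residual is negligible
            break
        if x_prev is not None:
            T = -(x - x_prev) / (fx - f(x_prev))       # T_n = -1/f[x_n, x_{n-1}]
        w = x + T * fx                                  # w_n = x_n + T_n f(x_n)
        x_prev, x = x, x - fx * (x - w) / (fx - f(w))   # x_{n+1} = x_n - f(x_n)/f[x_n, w_n]
    return x

# Simple zero of cos(x) - x near 0.739; the iteration has order 1 + sqrt(2)
root = traub_with_memory(lambda x: math.cos(x) - x, 0.5)
```

Only one new function evaluation per iteration is needed, since $f(x_{n-1})$ is already available from the previous step.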
The main purpose of this paper is to propose a new way to construct a self-accelerating parameter, different from the secant method and the interpolation method. In Section 2, based on Ren’s method [22], a modified optimal fourth-order method is first proposed for solving nonlinear equations. Then, the modified iterative method is extended to a new self-accelerating type method by introducing a self-accelerating parameter. Using the interpolation method to construct the self-accelerating parameter, the convergence order of the new method reaches $2+\sqrt{5} \approx 4.2361$. In Section 3, a new way is applied to construct the self-accelerating parameter, and the maximal convergence order of the new method becomes $2+\sqrt{6} \approx 4.4495$. Numerical examples are given in Section 4 to confirm the theoretical results. Section 5 is a short conclusion.

## 2. A New Self-Accelerating Type Method

In order to obtain a new self-accelerating type method, we first propose a modified fourth-order method without memory. Based on Ren’s method [22],
$w_n = x_n + f(x_n), \quad y_n = x_n - \dfrac{f(x_n)}{f[x_n, w_n]}, \quad x_{n+1} = y_n - \dfrac{f(y_n)}{f[x_n, y_n] + f[y_n, w_n] - f[x_n, w_n] + \alpha(y_n - x_n)(y_n - w_n)}, \quad \alpha \in \mathbb{R},$
we construct the following modified iterative method
$w_n = x_n + f(x_n), \quad z_n = x_n - \dfrac{f(x_n)}{f[x_n, w_n]}, \quad y_n = z_n - T(z_n - x_n)^2, \quad x_{n+1} = y_n - \dfrac{f(y_n)}{f[x_n, y_n] + f[y_n, w_n] - f[x_n, w_n]},$
where $T \in \mathbb{R}$ is a parameter. For method (7), we have the following convergence analysis.
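Before the convergence analysis, method (7) can be sketched directly. The sketch below uses the same $T = 0.1$ and $x_0 = 0.5$ as the experiments in Section 4; the tolerance and the iteration cap are illustrative assumptions (the paper’s tables are computed in 1200-digit arithmetic, not double precision).

```python
import math

def modified_ren(f, x0, T=0.1, iters=6):
    """Modified Ren's method (7) with a fixed real parameter T."""
    dd = lambda a, b: (f(a) - f(b)) / (a - b)    # divided difference f[a, b]
    x = x0
    for _ in range(iters):
        fx = f(x)
        if abs(fx) < 1e-14:
            break
        w = x + fx                               # w_n = x_n + f(x_n)
        z = x - fx / dd(x, w)                    # Steffensen-type step
        y = z - T * (z - x) ** 2                 # correction with parameter T
        x = y - f(y) / (dd(x, y) + dd(y, w) - dd(x, w))
    return x

root = modified_ren(lambda x: math.cos(x) - x, 0.5)
```

Three function evaluations per step ($f(x_n)$, $f(w_n)$, $f(y_n)$) give fourth-order convergence, so the method is optimal in the Kung–Traub sense.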
Theorem 1.
If the function $f: I \subset \mathbb{R} \to \mathbb{R}$ is sufficiently differentiable and has a simple zero ξ on an open interval I, then iterative method (7) is of fourth-order convergence and its error equation is as follows
$e_{n+1} = (c_2 + c_2 f'(\xi) - T)\left[c_2^2(1 + f'(\xi)) - c_3(1 + f'(\xi)) - c_2 T\right]e_n^4 + O(e_n^5),$
where $e_n = x_n - \xi$, $T \in \mathbb{R}$ and $c_n = (1/n!)\,f^{(n)}(\xi)/f'(\xi)$, $n = 2, 3, \dots$.
Proof.
Using Taylor expansion of $f ( x )$, we have
$f(x_n) = f'(\xi)\left[e_n + c_2 e_n^2 + c_3 e_n^3 + c_4 e_n^4 + c_5 e_n^5 + O(e_n^6)\right],$
$e_{n,w} = w_n - \xi = e_n + f(x_n) = (1 + f'(\xi))e_n + c_2 f'(\xi)e_n^2 + c_3 f'(\xi)e_n^3 + O(e_n^4),$
$f(w_n) = f'(\xi)\left[(1 + f'(\xi))e_n + c_2(1 + 3f'(\xi) + f'(\xi)^2)e_n^2 + \left(2c_2^2 f'(\xi)(1 + f'(\xi)) + c_3(1 + 4f'(\xi) + 3f'(\xi)^2 + f'(\xi)^3)\right)e_n^3 + O(e_n^4)\right],$
$f[x_n, w_n] = f'(\xi)\left[1 + c_2(2 + f'(\xi))e_n + \left(c_2^2 f'(\xi) + c_3(3 + 3f'(\xi) + f'(\xi)^2)\right)e_n^2 + (2 + f'(\xi))\left(2c_2 c_3 f'(\xi) + c_4(2 + 2f'(\xi) + f'(\xi)^2)\right)e_n^3 + O(e_n^4)\right].$
According to (7), (9) and (12), we get
$e_{n,z} = z_n - \xi = c_2(1 + f'(\xi))e_n^2 + \left(-c_2^2(2 + 2f'(\xi) + f'(\xi)^2) + c_3(2 + 3f'(\xi) + f'(\xi)^2)\right)e_n^3 + O(e_n^4).$
From (7) and (13), we obtain
$e_{n,y} = y_n - \xi = (c_2 + c_2 f'(\xi) - T)e_n^2 + \left(-c_2^2(2 + 2f'(\xi) + f'(\xi)^2) + c_3(2 + 3f'(\xi) + f'(\xi)^2) + 2c_2(1 + f'(\xi))T\right)e_n^3 + O(e_n^4).$
By a similar argument to that of (9), we get
$f(y_n) = f'(\xi)(c_2 + c_2 f'(\xi) - T)e_n^2 + f'(\xi)\left(-c_2^2(2 + 2f'(\xi) + f'(\xi)^2) + c_3(2 + 3f'(\xi) + f'(\xi)^2) + 2c_2(1 + f'(\xi))T\right)e_n^3 + O(e_n^4).$
Using (9), (11), (14) and (15), we have
$f[x_n, y_n] = f'(\xi) + c_2 f'(\xi)e_n + f'(\xi)\left(c_3 + c_2^2(1 + f'(\xi)) - c_2 T\right)e_n^2 + O(e_n^3),$
$f[y_n, w_n] = f'(\xi) + c_2 f'(\xi)(1 + f'(\xi))e_n + f'(\xi)\left(c_3(1 + f'(\xi))^2 + c_2^2(1 + 2f'(\xi)) - c_2 T\right)e_n^2 + O(e_n^3).$
Together with (7) and (14)–(17), we obtain the error equation
$e_{n+1} = x_{n+1} - \xi = (c_2 + c_2 f'(\xi) - T)\left[c_2^2(1 + f'(\xi)) - c_3(1 + f'(\xi)) - c_2 T\right]e_n^4 + O(e_n^5).$
The proof is completed. □
Remark 1.
From (8), we can see that the convergence order of method (7) is at least five provided that $T = c_2(1 + f'(\xi))$, since this choice annihilates the coefficient of $e_n^4$. Hence, in order to obtain a new self-accelerating type method, we will use a self-accelerating parameter $T_n$ to replace the parameter T, where $T_n$ satisfies the relation $\lim_{n \to \infty} T_n = T = c_2(1 + f'(\xi))$. Similar to methods (4) and (5), we can use the interpolation method to construct the self-accelerating parameter. For example, the self-accelerating parameter $T_n$ can be given by
$T_n = \dfrac{N_2''(x_n)}{2N_2'(x_n)}\left(1 + N_2'(x_n)\right),$
where $N_2(t) = N_2(t; x_n, x_{n-1}, w_{n-1})$ is Newton’s interpolatory polynomial of second degree, $N_2'(x_n) = f[x_n, x_{n-1}] + f[x_n, x_{n-1}, w_{n-1}](x_n - x_{n-1})$ and $N_2''(x_n) = 2f[x_n, x_{n-1}, w_{n-1}]$. Now, we obtain a new self-accelerating type method as follows:
$w_n = x_n + f(x_n), \quad z_n = x_n - \dfrac{f(x_n)}{f[x_n, w_n]}, \quad y_n = z_n - T_n(z_n - x_n)^2, \quad x_{n+1} = y_n - \dfrac{f(y_n)}{f[x_n, y_n] + f[y_n, w_n] - f[x_n, w_n]}.$
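A minimal double-precision sketch of method (20) with the interpolation-based parameter (19) follows; the initial $T_0 = 0.1$ matches the experiments in Section 4, while the stopping tolerance and iteration cap are illustrative assumptions.

```python
import math

def modified_ren_memory(f, x0, T0=0.1, iters=5):
    """Method (20) with T_n from the Newton interpolation formula (19),
    built on the stored previous nodes x_{n-1} and w_{n-1}."""
    dd = lambda a, b: (f(a) - f(b)) / (a - b)    # divided difference f[a, b]
    x, T, x_prev, w_prev = x0, T0, None, None
    for _ in range(iters):
        fx = f(x)
        if abs(fx) < 1e-14:
            break
        if x_prev is not None:
            dd2 = (dd(x, x_prev) - dd(x_prev, w_prev)) / (x - w_prev)  # f[x_n, x_{n-1}, w_{n-1}]
            n2p = dd(x, x_prev) + dd2 * (x - x_prev)                   # N_2'(x_n)
            T = (2 * dd2) / (2 * n2p) * (1 + n2p)                      # formula (19)
        w = x + fx
        z = x - fx / dd(x, w)
        y = z - T * (z - x) ** 2
        x_prev, w_prev = x, w
        x = y - f(y) / (dd(x, y) + dd(y, w) - dd(x, w))
    return x

root = modified_ren_memory(lambda x: math.cos(x) - x, 0.5)
```

The parameter update reuses only stored values of $f$, so the evaluation count per step is the same as for method (7).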
Theorem 2.
Let the varying parameter $T_n$ be calculated by (19) in method (20). If an initial value $x_0$ is sufficiently close to a simple zero ξ of the function $f(x)$, then the R-order of convergence of the self-accelerating type method (20) is at least $2+\sqrt{5} \approx 4.2361$.
Proof.
If an iterative method (IM) generates a sequence $x_n$ that converges to the zero ξ of $f(x)$ with R-order $O_R(\mathrm{IM}, \xi) \ge r$, then we can write
$e_{n+1} \sim D_{n,r}\, e_n^r,$
where $e_n = x_n - \xi$ and the limit of $D_{n,r}$, as $n \to \infty$, is the asymptotic error constant of the iterative method. So,
$e_{n+1} \sim D_{n,r}\left(D_{n-1,r}\, e_{n-1}^r\right)^r = D_{n,r} D_{n-1,r}^r\, e_{n-1}^{r^2}.$
Similarly to (22), if the R-order of the iterative sequence $y_n$ is p, then
$e_{n,y} \sim D_{n,p}\, e_n^p \sim D_{n,p}\left(D_{n-1,r}\, e_{n-1}^r\right)^p = D_{n,p} D_{n-1,r}^p\, e_{n-1}^{rp}.$
According to (14) and (18), we obtain
$e_{n,y} = y_n - \xi \sim (c_2 + c_2 f'(\xi) - T_n)e_n^2,$
$e_{n+1} = x_{n+1} - \xi \sim (c_2 + c_2 f'(\xi) - T_n)\left[c_2^2(1 + f'(\xi)) - c_3(1 + f'(\xi)) - c_2 T_n\right]e_n^4.$
Here, we omit the higher-order terms in (24) and (25). Let $N_2(t)$ be the Newton interpolating polynomial of degree two that interpolates the function f at the nodes $x_n, w_{n-1}, x_{n-1}$ contained in the interval I. Then, the error of the Newton interpolation can be expressed as follows:
$f(t) - N_2(t) = \dfrac{f^{(3)}(\zeta)}{3!}(t - x_n)(t - w_{n-1})(t - x_{n-1}), \quad \zeta \in I.$
Differentiating (26) at the point $t = x n$, we get
$N_2'(x_n) \sim f'(\xi)\left(1 - c_3(1 + f'(\xi))e_{n-1}^2 + O(e_{n-1}^3)\right),$
$N_2''(x_n) \sim 2f'(\xi)\left(c_2 + c_3(2 + f'(\xi))e_{n-1} + O(e_{n-1}^2)\right),$
$T_n = \dfrac{N_2''(x_n)}{2N_2'(x_n)}\left(1 + N_2'(x_n)\right) \sim c_2(1 + f'(\xi)) + c_3(1 + f'(\xi))(2 + f'(\xi))e_{n-1} + O(e_{n-1}^2).$
Using (24), (25) and (29), we get
$e_{n,y} \sim (c_2 + c_2 f'(\xi) - T_n)e_n^2$
$\sim -c_3(1 + f'(\xi))(2 + f'(\xi))e_{n-1}\left(D_{n-1,r}\, e_{n-1}^r\right)^2$
$\sim -c_3(1 + f'(\xi))(2 + f'(\xi))D_{n-1,r}^2\, e_{n-1}^{2r+1},$
$e_{n+1} \sim (c_2 + c_2 f'(\xi) - T_n)\left[c_2^2(1 + f'(\xi)) - c_3(1 + f'(\xi)) - c_2 T_n\right]e_n^4$
$\sim c_3^2(1 + f'(\xi))^2(2 + f'(\xi))e_{n-1}\left(D_{n-1,r}\, e_{n-1}^r\right)^4$
$\sim c_3^2(1 + f'(\xi))^2(2 + f'(\xi))D_{n-1,r}^4\, e_{n-1}^{4r+1}.$
By comparing the exponents of $e_{n-1}$ in relations ((23), (30)) and ((22), (31)), we have
$2r + 1 = rp, \quad 4r + 1 = r^2.$
Solving system (32), we obtain $r = 2+\sqrt{5} \approx 4.2361$ and $p = \sqrt{5} \approx 2.2361$. Therefore, the R-order of method (20) is at least $2+\sqrt{5} \approx 4.2361$ when $T_n$ is calculated by (19). The proof is completed. □
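The solution of system (32) can be checked numerically; the snippet below only verifies the algebra used in the proof.

```python
# r solves r^2 = 4r + 1, hence r = 2 + sqrt(5); then p = (2r + 1)/r = sqrt(5)
r = 2 + 5 ** 0.5
p = (2 * r + 1) / r
assert abs(r * r - (4 * r + 1)) < 1e-12   # second equation of (32)
assert abs(r * p - (2 * r + 1)) < 1e-12   # first equation of (32)
assert abs(p - 5 ** 0.5) < 1e-12          # p collapses to sqrt(5)
```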

## 3. A New Technique to Construct the Self-Accelerating Parameter of New Method

In this section, we give a new way to construct the self-accelerating parameter. It is known that the Steffensen method (SM) without memory [23] converges quadratically; it can be written as
$w_n = x_n + f(x_n), \quad x_{n+1} = x_n - \dfrac{f(x_n)}{f[x_n, w_n]},$
which satisfies the following expression
$\lim_{n \to \infty} \dfrac{x_{n+1} - \xi}{(x_n - \xi)^2} = \lim_{n \to \infty} \dfrac{e_{n+1}}{e_n^2} = c_2(1 + f'(\xi)).$
From (34), we know that the asymptotic error constant of SM is $c_2(1 + f'(\xi))$. Coincidentally, the first two steps of our method (20) are exactly Steffensen’s method, so the quotient in (34) can be used to define the self-accelerating parameter, which then satisfies $\lim_{n \to \infty} T_n = T = c_2(1 + f'(\xi))$. Since the root ξ of the function is unknown, we can use sequence information from the current and previous iterations to approximate ξ and construct the following formulas for $T_n$:
Formula 1:
$T_n = \dfrac{z_{n-1} - x_n}{(x_n - x_{n-1})^2}.$
Formula 2:
$T_n = \dfrac{(z_{n-1} - x_n)(y_{n-1} - x_{n-1})}{(x_n - x_{n-1})^3}.$
Formula 3:
$T_n = \dfrac{z_{n-1} - x_n}{(x_n - x_{n-1})^2}\cdot\dfrac{x_n + y_{n-1} - w_{n-1} - z_{n-1}}{z_{n-1} - w_{n-1}} - \dfrac{x_n - y_{n-1}}{z_{n-1} - x_{n-1}} - \dfrac{2z_n - x_n - w_n}{(x_n - y_{n-1})(z_{n-1} - x_{n-1})} - \dfrac{(x_n - z_{n-1})^2(w_{n-1} - x_{n-1})^2}{(z_{n-1} - w_{n-1})^2(z_{n-1} - x_{n-1})^3}.$
Formula 4:
$T_n = \dfrac{(z_{n-1} - x_n)(y_{n-1} - x_{n-1})}{(x_n - x_{n-1})^3}\cdot\dfrac{x_n + y_{n-1} - w_{n-1} - z_{n-1}}{z_{n-1} - w_{n-1}} - \dfrac{2z_n - x_n - w_n}{(x_n - y_{n-1})(z_{n-1} - x_{n-1})} - \dfrac{(x_n - z_{n-1})^2(w_{n-1} - x_{n-1})^2}{(z_{n-1} - w_{n-1})^2(z_{n-1} - x_{n-1})^3}.$
The self-accelerating parameter $T n$ is calculated by using one of the formulas (35)–(38).
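Because formulas (35)–(38) reuse only iterates that method (20) already stores, the parameter update costs no extra function evaluations. A minimal sketch of method (20) with Formula 1, eq. (35), illustrates this; the initial $T_0 = 0.1$ follows the experiments in Section 4, and the stopping tolerance and iteration cap are illustrative assumptions.

```python
import math

def modified_ren_memory_f1(f, x0, T0=0.1, iters=5):
    """Method (20) with the self-accelerating parameter of Formula 1:
    T_n = (z_{n-1} - x_n)/(x_n - x_{n-1})^2, built purely from stored iterates."""
    dd = lambda a, b: (f(a) - f(b)) / (a - b)
    x, T, x_prev, z_prev = x0, T0, None, None
    for _ in range(iters):
        fx = f(x)
        if abs(fx) < 1e-14:
            break
        if x_prev is not None:
            T = (z_prev - x) / (x - x_prev) ** 2   # formula (35): no extra f-evaluations
        w = x + fx
        z = x - fx / dd(x, w)
        y = z - T * (z - x) ** 2
        x_prev, z_prev = x, z
        x = y - f(y) / (dd(x, y) + dd(y, w) - dd(x, w))
    return x

root = modified_ren_memory_f1(lambda x: math.cos(x) - x, 0.5)
```

Here $(z_{n-1} - x_n)/(x_n - x_{n-1})^2$ mimics the Steffensen quotient in (34), with $x_n$ standing in for the unknown root ξ.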
Theorem 3.
Let the varying parameter $T_n$ be calculated by (35) or (36) in method (20), respectively. If an initial value $x_0$ is sufficiently close to a simple zero ξ of $f(x)$, then the R-order of convergence of iterative method (20) is at least $2+\sqrt{5} \approx 4.2361$.
Proof.
From (13), (14) and (18), we get
$z_{n-1} - x_n = c_2(1 + f'(\xi))e_{n-1}^2 + \left(-c_2^2(2 + 2f'(\xi) + f'(\xi)^2) + c_3(2 + 3f'(\xi) + f'(\xi)^2)\right)e_{n-1}^3 + \left(c_2^3(3 + 3f'(\xi) + 2f'(\xi)^2 + f'(\xi)^3) + 2c_2^2(1 + f'(\xi))T_{n-1} + (1 + f'(\xi))\left(c_4(3 + 3f'(\xi) + f'(\xi)^2) - c_3 T_{n-1}\right) - c_2\left(2c_3(3 + 4f'(\xi) + 3f'(\xi)^2 + f'(\xi)^3) + T_{n-1}^2\right)\right)e_{n-1}^4 + O(e_{n-1}^5),$
$x_n - x_{n-1} = -e_{n-1} + (c_2 + c_2 f'(\xi) - T_{n-1})\left(c_2^2(1 + f'(\xi)) - c_3(1 + f'(\xi)) - c_2 T_{n-1}\right)e_{n-1}^4 + O(e_{n-1}^5),$
$y_{n-1} - x_{n-1} = -e_{n-1} + (c_2 + c_2 f'(\xi) - T_{n-1})e_{n-1}^2 + \left(-c_2^2(2 + 2f'(\xi) + f'(\xi)^2) + c_3(2 + 3f'(\xi) + f'(\xi)^2) + 2c_2(1 + f'(\xi))T_{n-1}\right)e_{n-1}^3 + \left(c_2^3(4 + 5f'(\xi) + 3f'(\xi)^2 + f'(\xi)^3) - c_2 c_3(7 + 10f'(\xi) + 7f'(\xi)^2 + 2f'(\xi)^3) - c_2^2(5 + 6f'(\xi) + 3f'(\xi)^2)T_{n-1} + (1 + f'(\xi))\left(c_4(3 + 3f'(\xi) + f'(\xi)^2) + 2c_3(2 + f'(\xi))T_{n-1}\right)\right)e_{n-1}^4 + O(e_{n-1}^5).$
From (39)–(41), we get
$T_n = \dfrac{z_{n-1} - x_n}{(x_n - x_{n-1})^2}$
$= c_2(1 + f'(\xi)) + \left(-c_2^2(2 + 2f'(\xi) + f'(\xi)^2) + c_3(2 + 3f'(\xi) + f'(\xi)^2)\right)e_{n-1} + O(e_{n-1}^2),$
$c_2(1 + f'(\xi)) - T_n = \left(c_2^2(2 + 2f'(\xi) + f'(\xi)^2) - c_3(2 + 3f'(\xi) + f'(\xi)^2)\right)e_{n-1} + O(e_{n-1}^2),$
$T_n = \dfrac{(z_{n-1} - x_n)(y_{n-1} - x_{n-1})}{(x_n - x_{n-1})^3} = c_2(1 + f'(\xi)) + \left(c_3(2 + 3f'(\xi) + f'(\xi)^2) - c_2^2(3 + 4f'(\xi) + 2f'(\xi)^2) + c_2(1 + f'(\xi))T_{n-1}\right)e_{n-1} + O(e_{n-1}^2),$
$c_2(1 + f'(\xi)) - T_n = -\left(c_3(2 + 3f'(\xi) + f'(\xi)^2) - c_2^2(3 + 4f'(\xi) + 2f'(\xi)^2) + c_2(1 + f'(\xi))T_{n-1}\right)e_{n-1} + O(e_{n-1}^2).$
Using (24), (25) and (43), we get
$e_{n,y} \sim (c_2 + c_2 f'(\xi) - T_n)e_n^2$
$\sim -\left(-c_2^2(2 + 2f'(\xi) + f'(\xi)^2) + c_3(2 + 3f'(\xi) + f'(\xi)^2)\right)e_{n-1}\left(D_{n-1,r}\, e_{n-1}^r\right)^2$
$\sim -\left(-c_2^2(2 + 2f'(\xi) + f'(\xi)^2) + c_3(2 + 3f'(\xi) + f'(\xi)^2)\right)D_{n-1,r}^2\, e_{n-1}^{2r+1},$
$e_{n+1} \sim (c_2 + c_2 f'(\xi) - T_n)\left[c_2^2(1 + f'(\xi)) - c_3(1 + f'(\xi)) - c_2 T_n\right]e_n^4$
$\sim -c_3(1 + f'(\xi))\left(c_2^2(2 + 2f'(\xi) + f'(\xi)^2) - c_3(2 + 3f'(\xi) + f'(\xi)^2)\right)e_{n-1}\left(D_{n-1,r}\, e_{n-1}^r\right)^4$
$\sim -c_3(1 + f'(\xi))\left(c_2^2(2 + 2f'(\xi) + f'(\xi)^2) - c_3(2 + 3f'(\xi) + f'(\xi)^2)\right)D_{n-1,r}^4\, e_{n-1}^{4r+1}.$
Comparing exponents of $e n − 1$ in relations ((23), (46)) and ((22), (47)), we get
$2r + 1 = rp, \quad 4r + 1 = r^2.$
Solving system (48), we obtain $r = 2+\sqrt{5} \approx 4.2361$ and $p = \sqrt{5} \approx 2.2361$. Therefore, the R-order of method (20), when $T_n$ is calculated by (35), is at least $2+\sqrt{5} \approx 4.2361$. From (43) and (45), we know that the self-accelerating parameters (35) and (36) have the same error level. So, the R-order of method (20), when $T_n$ is calculated by (36), is also at least $2+\sqrt{5} \approx 4.2361$.
The proof is completed. □
Theorem 4.
Let the varying parameter $T_n$ be calculated by (37) or (38) in method (20), respectively. If an initial value $x_0$ is sufficiently close to a simple zero ξ of $f(x)$, then the R-order of convergence of iterative method (20) is at least $2+\sqrt{6} \approx 4.4495$.
Proof.
According to (10), (13), (14) and (18), we have
$x_n - y_{n-1} = \left(-c_2(1 + f'(\xi)) + T_{n-1}\right)e_{n-1}^2 + \left(c_2^2(2 + 2f'(\xi) + f'(\xi)^2) - c_3(2 + 3f'(\xi) + f'(\xi)^2) - 2c_2(1 + f'(\xi))T_{n-1}\right)e_{n-1}^3 + \left(-c_2^3(3 + 3f'(\xi) + 2f'(\xi)^2 + f'(\xi)^3) + c_2^2(3 + 4f'(\xi) + 3f'(\xi)^2)T_{n-1} - (1 + f'(\xi))\left(c_4(3 + 3f'(\xi) + f'(\xi)^2) + c_3(3 + 2f'(\xi))T_{n-1}\right) + c_2\left(2c_3(3 + 4f'(\xi) + 3f'(\xi)^2 + f'(\xi)^3) + T_{n-1}^2\right)\right)e_{n-1}^4 + O(e_{n-1}^5),$
$z_{n-1} - x_{n-1} = -e_{n-1} + c_2(1 + f'(\xi))e_{n-1}^2 + \left(-c_2^2(2 + 2f'(\xi) + f'(\xi)^2) + c_3(2 + 3f'(\xi) + f'(\xi)^2)\right)e_{n-1}^3 + \left(c_2^3(4 + 5f'(\xi) + 3f'(\xi)^2 + f'(\xi)^3) + c_4(3 + 6f'(\xi) + 4f'(\xi)^2 + f'(\xi)^3) - c_2 c_3(7 + 10f'(\xi) + 7f'(\xi)^2 + 2f'(\xi)^3)\right)e_{n-1}^4 + O(e_{n-1}^5),$
$y_{n-1} - w_{n-1} = (-1 - f'(\xi))e_{n-1} + (c_2 - T_{n-1})e_{n-1}^2 + \left(-c_2^2(2 + 2f'(\xi) + f'(\xi)^2) + c_3(2 + 2f'(\xi) + f'(\xi)^2) + 2c_2(1 + f'(\xi))T_{n-1}\right)e_{n-1}^3 + O(e_{n-1}^4),$
$z_{n-1} - w_{n-1} = (-1 - f'(\xi))e_{n-1} + c_2 e_{n-1}^2 - (c_2^2 - c_3)(2 + 2f'(\xi) + f'(\xi)^2)e_{n-1}^3 + \left(c_2^3(4 + 5f'(\xi) + 3f'(\xi)^2 + f'(\xi)^3) + c_4(3 + 5f'(\xi) + 4f'(\xi)^2 + f'(\xi)^3) - c_2 c_3(7 + 10f'(\xi) + 7f'(\xi)^2 + 2f'(\xi)^3)\right)e_{n-1}^4 + O(e_{n-1}^5),$
$w_{n-1} - x_{n-1} = f'(\xi)e_{n-1} + c_2 f'(\xi)e_{n-1}^2 + c_3 f'(\xi)e_{n-1}^3 + c_4 f'(\xi)e_{n-1}^4 + O(e_{n-1}^5),$
$z_n - x_n = -(c_2 + c_2 f'(\xi) - T_{n-1})\left(c_2^2(1 + f'(\xi)) - c_3(1 + f'(\xi)) - c_2 T_{n-1}\right)e_{n-1}^4 + O(e_{n-1}^5),$
$z_n - w_n = -(1 + f'(\xi))(c_2 + c_2 f'(\xi) - T_{n-1})\left(c_2^2(1 + f'(\xi)) - c_3(1 + f'(\xi)) - c_2 T_{n-1}\right)e_{n-1}^4 + O(e_{n-1}^5).$
According to (37) and (49)–(55), we get
$T_n = \dfrac{z_{n-1} - x_n}{(x_n - x_{n-1})^2}\cdot\dfrac{x_n + y_{n-1} - w_{n-1} - z_{n-1}}{z_{n-1} - w_{n-1}} - \dfrac{x_n - y_{n-1}}{z_{n-1} - x_{n-1}} - \dfrac{2z_n - x_n - w_n}{(x_n - y_{n-1})(z_{n-1} - x_{n-1})} - \dfrac{(x_n - z_{n-1})^2(w_{n-1} - x_{n-1})^2}{(z_{n-1} - w_{n-1})^2(z_{n-1} - x_{n-1})^3} = c_2(1 + f'(\xi)) - A_1 e_{n-1}^2 + O(e_{n-1}^3),$
where
$A_1 = -c_2^3(1 + 3f'(\xi) + 6f'(\xi)^2 + 3f'(\xi)^3) + c_2^2(2 + 2f'(\xi) + f'(\xi)^2)T_{n-1} + (1 + f'(\xi))\left(c_4 + c_4 f'(\xi) + c_3 T_{n-1}\right) + c_2\left(c_3 f'(\xi)(3 + 3f'(\xi) + f'(\xi)^2) + T_{n-1}^2\right),$
$c_2(1 + f'(\xi)) - T_n = A_1 e_{n-1}^2 + O(e_{n-1}^3).$
Using (24), (25) and (58), we obtain
$e_{n,y} \sim (c_2 + c_2 f'(\xi) - T_n)e_n^2 \sim A_1 e_{n-1}^2\left(D_{n-1,r}\, e_{n-1}^r\right)^2$
$\sim A_1 D_{n-1,r}^2\, e_{n-1}^{2r+2},$
$e_{n+1} \sim (c_2 + c_2 f'(\xi) - T_n)\left[c_2^2(1 + f'(\xi)) - c_3(1 + f'(\xi)) - c_2 T_n\right]e_n^4$
$\sim -A_1 c_3(1 + f'(\xi))e_{n-1}^2\left(D_{n-1,r}\, e_{n-1}^r\right)^4$
$\sim -A_1 c_3(1 + f'(\xi))D_{n-1,r}^4\, e_{n-1}^{4r+2}.$
By comparing exponents of $e n − 1$ in relations ((23), (59)) and ((22), (60)), we get
$2r + 2 = rp, \quad 4r + 2 = r^2.$
Solving system (61), we get $r = 2+\sqrt{6} \approx 4.4495$ and $p = \sqrt{6} \approx 2.4495$. Therefore, the R-order of method (20) is at least $2+\sqrt{6} \approx 4.4495$ when $T_n$ is calculated by (37). According to (38) and (49)–(55), we get
$T_n = \dfrac{(z_{n-1} - x_n)(y_{n-1} - x_{n-1})}{(x_n - x_{n-1})^3}\cdot\dfrac{x_n + y_{n-1} - w_{n-1} - z_{n-1}}{z_{n-1} - w_{n-1}} - \dfrac{2z_n - x_n - w_n}{(x_n - y_{n-1})(z_{n-1} - x_{n-1})} - \dfrac{(x_n - z_{n-1})^2(w_{n-1} - x_{n-1})^2}{(z_{n-1} - w_{n-1})^2(z_{n-1} - x_{n-1})^3} = c_2(1 + f'(\xi)) - A_2 e_{n-1}^2 + O(e_{n-1}^3),$
where
$A_2 = c_2 c_3 f'(\xi)(3 + 3f'(\xi) + f'(\xi)^2) - c_2^3(1 + 4f'(\xi) + 8f'(\xi)^2 + 4f'(\xi)^3) + c_2^2(3 + 4f'(\xi) + 2f'(\xi)^2)T_{n-1} + (1 + f'(\xi))\left(c_4 + c_4 f'(\xi) + c_3 T_{n-1}\right),$
$c_2(1 + f'(\xi)) - T_n = A_2 e_{n-1}^2 + O(e_{n-1}^3).$
Equations (58) and (64) have the same error level. Therefore, the R-order of method (20), when $T_n$ is calculated by (38), is also at least $2+\sqrt{6} \approx 4.4495$. The proof is completed. □
Remark 2.
Theorems 3 and 4 prove that the self-accelerating parameters (35)–(38) of method (20) are efficient. In fact, the self-accelerating parameter $T_n$ can be constructed by many schemes. Here, we give some other schemes as follows:
$T_n = \dfrac{z_{n-1} - x_n}{(z_{n-1} - x_{n-1})^2}.$
$T_n = \dfrac{z_{n-1} - x_n}{(z_{n-1} - x_{n-1})(y_{n-1} - x_{n-1})}.$
$T_n = \dfrac{z_{n-1} - x_n}{(y_{n-1} - x_{n-1})^2}.$
$T_n = \dfrac{z_{n-1} - x_n}{(z_{n-1} - x_{n-1})^2}.$
$T_n = \dfrac{z_{n-1} - x_n}{(z_{n-1} - x_{n-1})(y_{n-1} - x_{n-1})}.$
$T_n = \dfrac{z_{n-1} - z_n}{(y_{n-1} - x_{n-1})^2}.$
$T_n = \dfrac{z_{n-1} - z_n}{(x_n - x_{n-1})^2}.$
$T_n = \dfrac{z_{n-1} - z_n}{(z_{n-1} - x_{n-1})^2}.$
$T_n = \dfrac{z_{n-1} - x_n}{(z_{n-1} - x_{n-1})(x_n - x_{n-1})}.$
Using the results of Theorem 3, we can prove that the R-order of method (20) is at least $2+\sqrt{5} \approx 4.2361$ when $T_n$ is calculated by any of the schemes (65)–(73); we omit the proof here.

## 4. Numerical Results

The new methods (7) and (20) are employed to solve the nonlinear equations $f_i(x) = 0$ $(i = 1, 2, 3)$ and compared with Ren’s method with $\alpha = 0$ (RM, (6)), Petković’s method (PM, (2)), Zheng’s method (ZM, (3)) and Wang’s method (WM, (4)).
Table 1, Table 2 and Table 3 show the absolute errors $|x_k - \xi|$ in the first four iterations, where the root ξ is computed with 1200 significant digits. The first iteration uses the initial parameters $T = 0.1$ and $T_0 = 0.1$. The approximate computational order of convergence ρ is defined by [24]:
$\rho \approx \dfrac{\ln\left(|x_{n+1} - x_n| / |x_n - x_{n-1}|\right)}{\ln\left(|x_n - x_{n-1}| / |x_{n-1} - x_{n-2}|\right)}.$
The test functions are as follows:
$f_1(x) = \cos(x) - x, \quad \xi \approx 0.7390851332151606, \quad x_0 = 0.5.$
$f_2(x) = 10xe^{-x^2} - 1, \quad \xi \approx 1.6796306104284499, \quad x_0 = 1.8.$
$f_3(x) = \sin(x) - \dfrac{x}{3}, \quad \xi \approx 2.2788626600758283, \quad x_0 = 2.0.$
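The approximate order ρ can be computed from any four successive iterates. The sketch below applies it to plain Newton iterates on $f_1$ (an illustrative stand-in, not one of the compared methods), for which ρ should come out near Newton’s order 2.

```python
import math

def approx_order(xs):
    """Approximate computational order of convergence from the last
    four iterates in xs, following the definition of rho above."""
    x0, x1, x2, x3 = xs[-4:]
    return (math.log(abs(x3 - x2) / abs(x2 - x1))
            / math.log(abs(x2 - x1) / abs(x1 - x0)))

f = lambda x: math.cos(x) - x
df = lambda x: -math.sin(x) - 1
xs = [0.5]
for _ in range(3):                      # three Newton steps give four iterates
    xs.append(xs[-1] - f(xs[-1]) / df(xs[-1]))
rho = approx_order(xs)
```

In the tables below, the same formula is evaluated on the high-precision iterates of each compared method.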
Table 1, Table 2 and Table 3 show that our methods have better convergence behavior than the other methods considered in this paper. The numerical results support the theoretical analysis.
Remark 3.
If we use a higher-order Newton interpolation polynomial to calculate the self-accelerating parameter, our method (20) can attain an even higher convergence order. However, high-order interpolation polynomials are complex, which is disadvantageous for reducing computation. Therefore, we give a simple way to construct the self-accelerating parameter.

## 5. Conclusions

In this paper, a new self-accelerating type method is proposed for solving nonlinear equations, which is a modified scheme of Ren’s method. The new method reaches the highest convergence order $2+\sqrt{6} \approx 4.4495$ by using a self-accelerating parameter. More importantly, a novel technique, different from the secant method and the interpolation method, is applied to construct the self-accelerating parameter. The new self-accelerating parameter does not increase the computational cost of the new method. The new method is compared in performance with existing methods; numerical examples confirm the validity of the theoretical results and show that the new method has better convergence behavior than some existing methods.

## Author Contributions

Methodology, X.W., Q.F.; writing—original draft preparation, X.W.; writing—review and editing, X.W., Q.F. All authors have read and agreed to the published version of the manuscript.

## Funding

This research was supported by the National Natural Science Foundation of China (Nos. 61976027 and 61572082), Natural Science Foundation of Liaoning Province of China (No. 20180551262), Educational Commission Foundation of Liaoning Province of China (No. LJ2019010) and University-Industry Collaborative Education Program (No.201901077017).

## Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

## References

1. Traub, J.F. Iterative Methods for the Solution of Equations; Prentice-Hall: Englewood Cliffs, NJ, USA, 1964. [Google Scholar]
2. Petković, M.S.; Ilić, S.; Džunić, J. Derivative free two-point methods with and without memory for solving nonlinear equations. Appl. Math. Comput. 2010, 217, 1887–1895. [Google Scholar] [CrossRef]
3. Zheng, Q.; Zhao, P.; Zhang, L.; Ma, W. Variants of Steffensen-secant method and applications. Appl. Math. Comput. 2010, 216, 3486–3496. [Google Scholar] [CrossRef]
4. Wang, X.; Zhang, T. A new family of Newton-type iterative methods with and without memory for solving nonlinear equations. Calcolo 2014, 51, 1–15. [Google Scholar] [CrossRef]
5. Wang, X.; Zhang, T. Some Newton-type iterative methods with and without memory for solving nonlinear equations. Int. J. Comput. Methods 2014, 11, 1350078. [Google Scholar] [CrossRef]
6. Wang, X.; Qin, Y.; Qian, W.; Zhang, S.; Fan, X. A Family of Newton Type Iterative Methods for Solving onlinear Equations. Algorithms 2015, 8, 786–798. [Google Scholar] [CrossRef] [Green Version]
7. Džunić, J.; Petković, M.S.; Petković, L.D. Three-point methods with and without memory for solving nonlinear equations. Appl. Math. Comput. 2012, 218, 4917–4927. [Google Scholar] [CrossRef]
8. Džunić, J.; Petković, M.S. On generalized multipoint root-solvers with memory. J. Comput. Appl. Math. 2015, 236, 2909–2920. [Google Scholar] [CrossRef]
9. Zaka Ullah, M.; Kosari, S.; Soleymani, F.; Khaksar Haghani, F.; Al-Fhaid, A.S. A super-fast tri-parametric iterative method with memory. Appl. Math. Comput. 2016, 289, 486–491. [Google Scholar]
10. Lotfi, T.; Assari, P. New three- and four-parametric iterative with memory methods with efficiency index near 2. Appl. Math. Comput. 2015, 270, 1004–1010. [Google Scholar] [CrossRef]
11. Cordero, A.; Lotfi, T.; Bakhtiari, P.; Torregrosa, J.R. An efficient two-parametric family with memory for nonlinear equations. Numer. Algorithms 2015, 68, 323–335. [Google Scholar] [CrossRef] [Green Version]
12. Soleymani, F.; Lotfi, T.; Tavakoli, E.; Haghani, F.K. Several iterative methods with memory using self-accelerators. Appl. Math. Comput. 2015, 254, 452–458. [Google Scholar] [CrossRef]
13. Sharma, J.R.; Guha, R.K.; Gupta, P. Some efficient derivative free methods with memory for solving nonlinear equations. Appl. Math. Comput. 2012, 219, 699–707. [Google Scholar] [CrossRef]
14. Wang, X.; Zhang, T.; Qin, Y. Efficient two-step derivative-free iterative methods with memory and their dynamics. Int. J. Comput. Math. 2016, 93, 1423–1446. [Google Scholar] [CrossRef]
15. Wang, X.; Zhang, T. Efficient n-point iterative methods with memory for solving nonlinear equations. Numer. Algorithms 2015, 70, 357–375. [Google Scholar] [CrossRef]
16. Campos, B.; Cordero, A.; Torregrosa, J.R.; Vindel, P. A multidimensional dynamical approach to iterative methods with memory. Appl. Math. Comput. 2015, 217, 701–715. [Google Scholar] [CrossRef] [Green Version]
17. Cordero, A.; Jordán, C.; Torregrosa, J.R. A dynamical comparison between iterative methods with memory: Are the derivatives good for the memory? J. Comput. Appl. Math. 2017, 318, 335–347. [Google Scholar] [CrossRef] [Green Version]
18. Petković, M.S.; Neta, B.; Petković, L.D.; Džunić, J. Multipoint methods for solving nonlinear equations: A survey. Appl. Math. Comput. 2014, 226, 635–660. [Google Scholar] [CrossRef] [Green Version]
19. Iliev, A.; Kyurkchiev, N. Nontrivial Methods in Numerical Analysis: Selected Topics in Numerical Analysis; LAP Lambert Academic Publishing: Saarbrücken, Germany, 2010. [Google Scholar]
20. Wang, X. A new Newton method with memory for solving nonlinear equations. Mathematics 2020, 8. [Google Scholar] [CrossRef] [Green Version]
21. Wang, X. A new accelerating technique applied to a variant of Cordero-Torregrosa method. J. Comput. Appl. Math. 2018, 330, 695–709. [Google Scholar] [CrossRef]
22. Ren, H.; Wu, Q.; Bi, W. A class of two-step Steffensen type methods with fourth-order convergence. Appl. Math. Comput. 2009, 209, 206–210. [Google Scholar] [CrossRef]
23. Kung, H.T.; Traub, J.F. Optimal order of one-point and multipoint iterations. J. ACM 1974, 21, 643–651. [Google Scholar] [CrossRef]
24. Cordero, A.; Torregrosa, J.R. Variants of Newton’s Method using fifth-order quadrature formulas. Appl. Math. Comput. 2007, 190, 686–698. [Google Scholar] [CrossRef]
Table 1. Numerical results for $f_1(x)$.

| Methods | $\|x_1-\xi\|$ | $\|x_2-\xi\|$ | $\|x_3-\xi\|$ | $\|x_4-\xi\|$ | $\rho$ |
|---|---|---|---|---|---|
| RM | $0.30201\times10^{-4}$ | $0.96552\times10^{-20}$ | $0.10086\times10^{-81}$ | $0.12011\times10^{-329}$ | 4.0000000 |
| (7) | $0.67349\times10^{-4}$ | $0.51236\times10^{-18}$ | $0.17160\times10^{-74}$ | $0.21590\times10^{-300}$ | 4.0000000 |
| PM | $0.63702\times10^{-3}$ | $0.78493\times10^{-16}$ | $0.57495\times10^{-70}$ | $0.20389\times10^{-299}$ | 4.2384668 |
| ZM | $0.79077\times10^{-3}$ | $0.14418\times10^{-11}$ | $0.25525\times10^{-40}$ | $0.25811\times10^{-135}$ | 3.3039563 |
| WM | $0.75634\times10^{-3}$ | $0.25811\times10^{-15}$ | $0.15116\times10^{-67}$ | $0.60634\times10^{-289}$ | 4.2386876 |
| (20) with (19) | $0.67349\times10^{-4}$ | $0.32043\times10^{-20}$ | $0.47317\times10^{-89}$ | $0.10705\times10^{-380}$ | 4.2371414 |
| (20) with (35) | $0.67349\times10^{-4}$ | $0.13942\times10^{-19}$ | $0.64548\times10^{-86}$ | $0.61388\times10^{-367}$ | 4.2364379 |
| (20) with (36) | $0.67349\times10^{-4}$ | $0.20839\times10^{-19}$ | $0.33951\times10^{-85}$ | $0.70224\times10^{-364}$ | 4.2360962 |
| (20) with (37) | $0.67349\times10^{-4}$ | $0.21778\times10^{-20}$ | $0.56497\times10^{-94}$ | $0.27015\times10^{-421}$ | 4.4481352 |
| (20) with (38) | $0.67349\times10^{-4}$ | $0.25160\times10^{-20}$ | $0.98512\times10^{-94}$ | $0.33331\times10^{-420}$ | 4.4473908 |
Table 2. Numerical results for $f_2(x)$.

| Methods | $\|x_1-\xi\|$ | $\|x_2-\xi\|$ | $\|x_3-\xi\|$ | $\|x_4-\xi\|$ | $\rho$ |
|---|---|---|---|---|---|
| RM | $0.33251\times10^{-3}$ | $0.30709\times10^{-13}$ | $0.22312\times10^{-53}$ | $0.62179\times10^{-214}$ | 4.0000000 |
| (7) | $0.29605\times10^{-3}$ | $0.16982\times10^{-13}$ | $0.18366\times10^{-54}$ | $0.25128\times10^{-218}$ | 4.0000000 |
| PM | $0.34882\times10^{-2}$ | $0.10531\times10^{-10}$ | $0.36075\times10^{-46}$ | $0.14899\times10^{-196}$ | 4.2403195 |
| ZM | $0.15866\times10^{-2}$ | $0.46751\times10^{-9}$ | $0.14185\times10^{-30}$ | $0.11659\times10^{-101}$ | 3.3035264 |
| WM | $0.10688\times10^{-2}$ | $0.25811\times10^{-15}$ | $0.34580\times10^{-66}$ | $0.35950\times10^{-283}$ | 4.2408357 |
| (20) with (19) | $0.29605\times10^{-3}$ | $0.14719\times10^{-16}$ | $0.16544\times10^{-72}$ | $0.13025\times10^{-309}$ | 4.2378388 |
| (20) with (35) | $0.29605\times10^{-3}$ | $0.70804\times10^{-15}$ | $0.34181\times10^{-64}$ | $0.44018\times10^{-273}$ | 4.2357244 |
| (20) with (36) | $0.29605\times10^{-3}$ | $0.18175\times10^{-14}$ | $0.18755\times10^{-62}$ | $0.10235\times10^{-265}$ | 4.2358506 |
| (20) with (37) | $0.29605\times10^{-3}$ | $0.48384\times10^{-15}$ | $0.95540\times10^{-68}$ | $0.38865\times10^{-302}$ | 4.4472587 |
| (20) with (38) | $0.29605\times10^{-3}$ | $0.73524\times10^{-15}$ | $0.45757\times10^{-67}$ | $0.47216\times10^{-299}$ | 4.4436750 |
Table 3. Numerical results for $f_3(x)$.

| Methods | $\|x_1-\xi\|$ | $\|x_2-\xi\|$ | $\|x_3-\xi\|$ | $\|x_4-\xi\|$ | $\rho$ |
|---|---|---|---|---|---|
| RM | $0.14664\times10^{-4}$ | $0.12289\times10^{-23}$ | $0.60662\times10^{-100}$ | $0.36019\times10^{-405}$ | 4.0000000 |
| (7) | $0.10564\times10^{-5}$ | $0.40124\times10^{-26}$ | $0.83509\times10^{-108}$ | $0.15669\times10^{-434}$ | 4.0000000 |
| PM | $0.90035\times10^{-2}$ | $0.25682\times10^{-10}$ | $0.87635\times10^{-46}$ | $0.33663\times10^{-196}$ | 4.2410060 |
| ZM | $0.51016\times10^{-2}$ | $0.26111\times10^{-8}$ | $0.52455\times10^{-29}$ | $0.21691\times10^{-97}$ | 3.3040229 |
| WM | $0.84480\times10^{-2}$ | $0.96292\times10^{-11}$ | $0.89256\times10^{-48}$ | $0.74204\times10^{-205}$ | 4.2416331 |
| (20) with (19) | $0.10564\times10^{-5}$ | $0.21218\times10^{-30}$ | $0.70199\times10^{-134}$ | $0.16895\times10^{-572}$ | 4.2386648 |
| (20) with (35) | $0.10564\times10^{-5}$ | $0.13904\times10^{-26}$ | $0.10705\times10^{-116}$ | $0.49497\times10^{-498}$ | 4.2317152 |
| (20) with (36) | $0.10564\times10^{-5}$ | $0.13529\times10^{-26}$ | $0.95737\times10^{-117}$ | $0.30814\times10^{-498}$ | 4.2317416 |
| (20) with (37) | $0.10564\times10^{-5}$ | $0.17838\times10^{-29}$ | $0.28166\times10^{-135}$ | $0.50483\times10^{-606}$ | 4.4493324 |
| (20) with (38) | $0.10564\times10^{-5}$ | $0.20284\times10^{-29}$ | $0.47006\times10^{-135}$ | $0.50638\times10^{-605}$ | 4.4489767 |
