Article

Impact on Stability by the Use of Memory in Traub-Type Schemes

by
Francisco I. Chicharro
1,
Alicia Cordero
2,
Neus Garrido
1 and
Juan R. Torregrosa
2,*
1
Escuela Superior de Ingeniería y Tecnología, Universidad Internacional de La Rioja, 26006 Logroño, Spain
2
Institute for Multidisciplinary Mathematics, Universitat Politècnica de València, 46022 València, Spain
*
Author to whom correspondence should be addressed.
Mathematics 2020, 8(2), 274; https://doi.org/10.3390/math8020274
Submission received: 23 January 2020 / Revised: 10 February 2020 / Accepted: 12 February 2020 / Published: 18 February 2020

Abstract

In this work, two Traub-type methods with memory are introduced using accelerating parameters. To obtain schemes with memory, after the inclusion of these parameters in Traub's method, they have been designed using linear approximations or Newton's interpolation polynomials. In both cases, the parameters use information from the current and previous iterations, so they define methods with memory. Moreover, they achieve a higher order of convergence than Traub's scheme without any additional functional evaluations. The real dynamical analysis verifies that the proposed methods with memory not only converge faster, but are also more stable than the original scheme. The methods selected by means of this analysis can be applied to nonlinear problems with a wider set of initial estimations than their original partners, which also results in a lower number of iterations in the process.

1. Introduction

Finding the solution of a nonlinear equation $f(x) = 0$ has been, and still is, present in many fields of science and technology. Because many of these equations cannot be solved analytically, the solution of this kind of problem is approximated by using iterative methods. Among them, the best known is Newton's scheme, which has quadratic convergence under some conditions.
The role of iterative processes in solving nonlinear problems from many branches of science and engineering has increased enormously in recent years, due to the applicability of these algorithms to real-life problems. For instance, Shacham [1] and Balaji and Seader [2] described the fraction of the nitrogen-hydrogen feed that gets converted to ammonia (known as the fractional conversion) in the form of a nonlinear scalar equation. Shacham [3] also described the fractional conversion in a chemical reactor by using a scalar equation. Moreover, Shacham and Kehat [4] gave several examples of real-life problems modeled by means of nonlinear scalar equations, such as chemical equilibrium calculations, energy or material balances in a chemical reactor, the isothermal flash problem, azeotropic point calculation, the calculation of gas volume from the Beattie-Bridgeman equation, the adiabatic flame temperature problem, the liquid flow rate in a pipe, and the pressure drop in a converging-diverging nozzle, among others.
There is an extensive literature on iterative methods for solving nonlinear equations, and the design of new efficient methods is an ongoing issue for scientists. Recently, this design has gone hand in hand with dynamical analysis [5,6,7,8,9], which provides knowledge of the stability of the methods involved.
Kung and Traub conjectured in [10] that iterative methods without memory which use $d$ functional evaluations per iteration cannot have order of convergence higher than $2^{d-1}$. The inclusion of memory in iterative methods is a powerful technique, since it increases the order of convergence of the method without adding new functional evaluations. Traub designed the first method with memory [11] based on Steffensen's method [12], increasing the order of convergence from 2 to $1+\sqrt{2}\approx 2.41$. Several authors have modified iterative methods to include memory, as done in [13] for methods with derivatives, or in [14,15,16] for derivative-free ones.
In this work, we focus on the order of convergence of two Traub-type methods obtained by modifying the iterative expression of Traub's scheme to include memory. Moreover, the dynamical analysis is performed in a similar way as in [17,18,19,20].
To analyze the order of convergence of each method with memory, we use the following result [21]:
Theorem 1.
Let φ be an iterative method with memory that generates a convergent sequence $\{x_k\}$ of approximations of a zero α of a function f. Let us assume there exist a nonzero constant τ and positive numbers $p_i$, $i = 0, 1, \ldots, m$, such that
$$|e_{k+1}| \leq \tau \prod_{i=0}^{m} |e_{k-i}|^{p_i}$$
holds. Then, the R-order of convergence of φ satisfies
$$O_R(\varphi, \alpha) \geq p,$$
where p is the only positive root of the polynomial
$$p^{m+1} - \sum_{i=0}^{m} p_i \, p^{m-i}.$$
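The positive root in Theorem 1 can be computed numerically. The following Python sketch is our own illustration (the function name `r_order` and the bisection bracket are choices of ours, not from the paper):

```python
def r_order(p_exps, lo=1.0, hi=10.0, tol=1e-12):
    """Positive root of p^(m+1) - sum_i p_i * p^(m-i) (Theorem 1).

    p_exps = [p_0, ..., p_m] are the exponents appearing in the
    error relation |e_{k+1}| <= tau * prod |e_{k-i}|^{p_i}.
    Simple bisection: the polynomial is negative at p = 1 whenever
    sum(p_exps) > 1, and positive for large p.
    """
    m = len(p_exps) - 1

    def poly(p):
        return p ** (m + 1) - sum(pi * p ** (m - i) for i, pi in enumerate(p_exps))

    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if poly(mid) < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For the two methods designed later in this paper, `r_order([3, 1])` and `r_order([3, 2])` return the positive roots of $p^2 = 3p+1$ and $p^2 = 3p+2$, approximately 3.30 and 3.56.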
The rest of this manuscript is organized as follows. In Section 2, the basic concepts of dynamics for both the one-dimensional and multidimensional real cases are recalled, since the analysis of iterative methods with memory requires multidimensional real dynamics. Based on Traub's method, Section 3 collects the construction of two new methods with memory: the first scheme needs two previous iterations, while the second one also needs an intermediate point. The convergence analysis of the methods without and with memory is also carried out. To analyze the stability of the introduced methods, Section 4 is devoted to the dynamical study. Finally, some discussion and conclusions about the results are presented in Section 5.

2. Real Dynamics

The study of the dynamical properties of the rational operator associated with an iterative scheme on low-degree polynomials gives interesting information about the stability of the method. This study is usually carried out with tools of complex dynamics. However, as stated in [22], the conclusions of the complex dynamical study do not carry over directly to the real case, since the behavior can be different. Therefore, for an iterative method with memory, the fundamentals of complex dynamics can be applied, but they must be particularized to the real variable case.
The aim of this paper is to compare an iterative scheme without memory with two other schemes with memory that have been obtained from it. Thus, we focus on tools of real dynamics, distinguishing the one-dimensional case, used for methods without memory, from the multidimensional case, used for methods with memory.
When an iterative method is applied to a nonlinear equation $f(x) = 0$, we obtain a rational function whose dynamics are, in general, unknown. We now recall some basic concepts of real dynamics for the one-dimensional case; to expand on these contents see, for example, [23,24].
Let $R:\mathbb{R}\rightarrow\mathbb{R}$ be the rational function obtained when an iterative scheme is applied to a polynomial $f(x)$. Then, the orbit of a point $x_0\in\mathbb{R}$ is given by $\{x_0, R(x_0), R^2(x_0), \ldots\}$. A fixed point $x^F$ of R satisfies $R(x^F) = x^F$; it is called a strange fixed point when it does not coincide with a root of $f(x)$. Moreover, a point $x_0$ is a T-periodic point when $R^T(x_0) = x_0$ but $R^t(x_0) \neq x_0$ for $t < T$, where $T, t \in \mathbb{N}$. A fixed point is T-periodic with $T = 1$.
The stability of a fixed point depends on its multiplier, $|R'(x^F)|$, where $R'$ denotes the derivative of R. Therefore, a fixed point $x^F\in\mathbb{R}$ of R is called attracting when $|R'(x^F)| < 1$; repelling if $|R'(x^F)| > 1$; superattracting if $|R'(x^F)| = 0$; and neutral when $|R'(x^F)| = 1$.
A critical point $x^C$ satisfies $R'(x^C) = 0$, and it is called a free critical point when it is different from the roots of $f(x)$.
When $x^F$ is an attracting fixed point, we define its basin of attraction as the set of pre-images of any order such that
$$\mathcal{A}(x^F) = \{x_0 \in \mathbb{R} : R^n(x_0) \rightarrow x^F,\ n \rightarrow \infty\}.$$
Iterative schemes which use two previous iterations to calculate the next iterate have the general form $x_{k+1} = h(x_{k-1}, x_k)$, $k \geq 1$, where $x_0$ and $x_1$ are two initial estimations. To carry out the dynamical study of a method with memory, we need to build an associated discrete dynamical system [17]. For this purpose, we consider the auxiliary function $G:\mathbb{R}^2\rightarrow\mathbb{R}^2$ defined as
$$G(x_{k-1}, x_k) = (x_k, x_{k+1}) = (x_k, h(x_{k-1}, x_k)), \quad k \geq 1.$$
Please note that this definition can be easily adapted to schemes with memory which use more than two previous iterations in each step.
As in the one-dimensional case, we recall some dynamical concepts. A point $(z, x)\in\mathbb{R}^2$ is a fixed point of G when it satisfies $G(z, x) = (z, x)$; thus, it must verify $z = x$ and $x = h(z, x)$. When a fixed point $(z, x)$ does not coincide with a root of $f(x)$, it is called a strange fixed point.
As in complex dynamics, the orbit of a point $x_0\in\mathbb{R}^2$ is composed of its successive images under G,
$$\{x_0, G(x_0), \ldots, G^m(x_0), \ldots\},$$
and its dynamical behavior can be classified depending on its asymptotic behavior. Hence, $x_0\in\mathbb{R}^2$ is a T-periodic point when $G^T(x_0) = x_0$ but $G^t(x_0) \neq x_0$ for $t < T$.
The stability of the fixed points of G can be analyzed using the following result [25]:
Theorem 2.
Let $G:\mathbb{R}^n\rightarrow\mathbb{R}^n$ be of class $C^2$ and $x^T$ a T-periodic point. Let $\lambda_1, \lambda_2, \ldots, \lambda_n$ be the eigenvalues of $G'(x^T)$, where $G'$ is the Jacobian matrix of G.
(a) If $|\lambda_j| < 1$ for $j = 1, 2, \ldots, n$, then $x^T$ is attracting.
(b) If one eigenvalue $\lambda_{j_0}$ satisfies $|\lambda_{j_0}| > 1$, then $x^T$ is repelling or a saddle.
(c) If $|\lambda_j| > 1$ for $j = 1, 2, \ldots, n$, then $x^T$ is repelling.
Moreover, if $|\lambda_j| \neq 1$ for $j = 1, 2, \ldots, n$, then $x^T$ is hyperbolic. In particular, if there exists an eigenvalue such that $|\lambda_i| < 1$ and another such that $|\lambda_j| > 1$, the hyperbolic point is called a saddle point. Please note that for $T = 1$, the T-periodic point is simply a fixed point, denoted by $x^F$.
Finally, the basin of attraction of a T-periodic point $x^T$ is defined as:
$$\mathcal{A}(x^T) = \{x_0 \in \mathbb{R}^n : G^m(x_0) \rightarrow x^T,\ m \rightarrow \infty\}.$$

3. Traub-Type Methods with Memory

The well-known Traub's method [11] has the iterative expression
$$y_k = x_k - \frac{f(x_k)}{f'(x_k)}, \qquad x_{k+1} = y_k - \frac{f(y_k)}{f'(x_k)}. \qquad (1)$$
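Although the experiments in this paper use MATLAB, scheme (1) can be coded in a few lines. The following Python sketch is our own illustration (the function names and stopping rule are our choices, not from the paper):

```python
def traub(f, df, x0, tol=1e-12, max_iter=50):
    """Traub's method: a Newton step followed by a correction that
    reuses the derivative f'(x_k), giving order 3 with three
    functional evaluations per iteration."""
    x = x0
    for _ in range(max_iter):
        y = x - f(x) / df(x)          # Newton predictor
        x_new = y - f(y) / df(x)      # Traub corrector, same f'(x_k)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x
```

For instance, `traub(lambda x: x**2 - 2, lambda x: 2*x, 1.0)` approximates $\sqrt{2}$.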
The next result shows the order of convergence of the method from its error equation.
Theorem 3.
Let us consider a real function $f:I\subseteq\mathbb{R}\rightarrow\mathbb{R}$, sufficiently differentiable in an open interval I. If $\alpha\in I$ is a simple zero of f and $x_0$ is near enough to α, then the sequence $\{x_k\}$ generated by Traub's method (1) converges to α with order of convergence 3, its error equation being
$$e_{k+1} = 2c_2^2 e_k^3 + O(e_k^4),$$
where $e_k = x_k - \alpha$ and $c_j = \frac{f^{(j)}(\alpha)}{j!\, f'(\alpha)}$, $j \geq 2$.
Although it is not possible to increase the order of convergence of Traub's scheme itself, some changes in its iterative expression allow us to include memory, obtaining methods with higher order of convergence.
To improve the order of Traub's method, we start by adding an accelerator parameter δ in the first step of its iterative scheme, obtaining:
$$y_k = x_k - \frac{f(x_k)}{f'(x_k) + \delta f(x_k)}, \qquad x_{k+1} = y_k - \frac{f(y_k)}{f'(x_k)}. \qquad (2)$$
The order of convergence of the resulting family is set in the following result.
Theorem 4.
Let us consider a real function $f:I\subseteq\mathbb{R}\rightarrow\mathbb{R}$, sufficiently differentiable in an open interval I. If $\alpha\in I$ is a simple zero of f and $x_0$ is near enough to α, then the iterative family (2) converges to α with order of convergence 3 for any value of the parameter δ, its error equation being
$$e_{k+1} = 2c_2(c_2 + \delta)e_k^3 + O(e_k^4), \qquad (3)$$
where $e_k = x_k - \alpha$ and $c_j = \frac{f^{(j)}(\alpha)}{j!\, f'(\alpha)}$, $j \geq 2$.
Proof. 
By using Taylor series expansions, $f(x_k)$ can be expressed as
$$f(x_k) = f'(\alpha)\left[e_k + c_2 e_k^2 + c_3 e_k^3 + c_4 e_k^4 + O(e_k^5)\right],$$
and its derivative as
$$f'(x_k) = f'(\alpha)\left[1 + 2c_2 e_k + 3c_3 e_k^2 + 4c_4 e_k^3 + O(e_k^4)\right].$$
By using these expansions, we have
$$y_k - \alpha = e_k - \frac{f(x_k)}{f'(x_k) + \delta f(x_k)} = (c_2 + \delta)e_k^2 + \left(-2c_2\delta - 2c_2^2 + 2c_3 - \delta^2\right)e_k^3 + O(e_k^4),$$
and
$$f(y_k) = f'(\alpha)\left[(y_k - \alpha) + c_2(y_k - \alpha)^2 + c_3(y_k - \alpha)^3 + O((y_k - \alpha)^4)\right] = f'(\alpha)\left[(c_2 + \delta)e_k^2 + \left(-2c_2\delta - 2c_2^2 + 2c_3 - \delta^2\right)e_k^3 + O(e_k^4)\right].$$
Then, we obtain the following error equation:
$$e_{k+1} = y_k - \alpha - \frac{f(y_k)}{f'(x_k)} = 2c_2(c_2 + \delta)e_k^3 + O(e_k^4).$$
 ☐
We study how to increase the order of the class by analyzing its error equation. If $\delta = -c_2 = -\frac{f''(\alpha)}{2f'(\alpha)}$, the order increases by at least one unit. Unfortunately, α is not known. Therefore, we need to approximate $f'(\alpha)$ and $f''(\alpha)$, transforming the iterative schemes into schemes with memory.
If the following linear approximations are applied,
$$f'(\alpha) \approx f'(x_k), \qquad f''(\alpha) \approx \frac{f'(x_k) - f'(x_{k-1})}{x_k - x_{k-1}},$$
the accelerator parameter becomes
$$\delta_k = -\frac{1}{2}\,\frac{f'(x_k) - f'(x_{k-1})}{(x_k - x_{k-1})\, f'(x_k)}. \qquad (5)$$
Then, we have an iterative scheme with memory whose order of convergence is analyzed below. For simplicity, the Traub-type method with memory (2), with $\delta_k$ defined by (5), will be denoted by TM1.
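A minimal Python sketch of TM1 follows. It is our own implementation of (2) with the accelerator $\delta_k$ of (5); the function names and the stopping rule are assumptions of ours:

```python
def tm1(f, df, x0, x1, tol=1e-12, max_iter=50):
    """Traub-type method with memory TM1: the accelerating parameter
    delta_k is rebuilt at each step from derivative values at the
    current and previous iterates, adding no functional evaluations
    beyond those of the modified Traub scheme."""
    xp, x = x0, x1  # previous and current iterates
    for _ in range(max_iter):
        delta = -(df(x) - df(xp)) / (2.0 * (x - xp) * df(x))
        y = x - f(x) / (df(x) + delta * f(x))
        x_new = y - f(y) / df(x)
        if abs(x_new - x) < tol:
            return x_new
        xp, x = x, x_new
    return x
```

Two initial estimations are needed, e.g. `tm1(lambda x: x**2 - 2, lambda x: 2*x, 1.0, 1.5)`.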
Theorem 5.
Let us consider a real function $f:I\subseteq\mathbb{R}\rightarrow\mathbb{R}$, sufficiently differentiable in an open interval I. If $\alpha\in I$ is a simple zero of f and $x_0$ is near enough to α, then the iterative method TM1 converges to α with order of convergence $p \approx 3.30$, and its error equation is
$$e_{k+1} = -3c_2c_3\, e_{k-1}e_k^3 + O_4(e_k, e_{k-1}),$$
where $O_4(e_k, e_{k-1})$ indicates that the sum of the exponents of $e_k$ and $e_{k-1}$ in the rejected terms of the development is at least 4.
Proof. 
From the error Equation (3), the following relation is satisfied:
$$e_{k+1} \sim 2c_2(c_2 + \delta_k)e_k^3 + O(e_k^4).$$
By using Taylor series expansions, we have
$$f(x_k) = f'(\alpha)\left[e_k + c_2 e_k^2 + c_3 e_k^3 + c_4 e_k^4 + O(e_k^5)\right], \qquad f(x_{k-1}) = f'(\alpha)\left[e_{k-1} + c_2 e_{k-1}^2 + c_3 e_{k-1}^3 + c_4 e_{k-1}^4 + O(e_{k-1}^5)\right],$$
and
$$f'(x_k) = f'(\alpha)\left[1 + 2c_2 e_k + 3c_3 e_k^2 + 4c_4 e_k^3 + O(e_k^4)\right], \qquad f'(x_{k-1}) = f'(\alpha)\left[1 + 2c_2 e_{k-1} + 3c_3 e_{k-1}^2 + 4c_4 e_{k-1}^3 + O(e_{k-1}^4)\right].$$
Therefore, the accelerator parameter is
$$\delta_k = -\frac{1}{2}\,\frac{f'(x_k) - f'(x_{k-1})}{(e_k - e_{k-1})f'(x_k)} = -c_2 - \frac{3}{2}c_3 e_{k-1} + \left(2c_2^2 - \frac{3}{2}c_3\right)e_k + (3c_2c_3 - 2c_4)e_{k-1}e_k - 2c_4 e_{k-1}^2 + \left(-4c_2^3 + 6c_2c_3 - 2c_4\right)e_k^2 + O_3(e_k, e_{k-1}).$$
By taking the lowest-order terms,
$$\delta_k + c_2 \approx -\frac{3}{2}c_3 e_{k-1}.$$
Then, we obtain
$$e_{k+1} \sim 2c_2\left(-\frac{3}{2}c_3 e_{k-1}\right)e_k^3 \sim e_{k-1}e_k^3. \qquad (6)$$
Let the R-order of the method be at least p. Then,
$$e_{k+1} \sim D_{k,p}\, e_k^p,$$
where $D_{k,p}$ tends to the asymptotic error constant $D_p$ when $k\rightarrow\infty$. Analogously,
$$e_k \sim D_{k-1,p}\, e_{k-1}^p.$$
Then, we have
$$e_{k+1} \sim D_{k,p}\left(D_{k-1,p}e_{k-1}^p\right)^p = D_{k,p}D_{k-1,p}^p\, e_{k-1}^{p^2}. \qquad (7)$$
In the same way, relation (6) satisfies
$$e_{k+1} \sim e_{k-1}\left(D_{k-1,p}e_{k-1}^p\right)^3 = D_{k-1,p}^3\, e_{k-1}^{3p+1}. \qquad (8)$$
Finally, the exponents of $e_{k-1}$ in (7) and (8) must be the same, so the following equation is obtained:
$$p^2 = 3p + 1.$$
Its only positive solution, $p = \frac{3+\sqrt{13}}{2} \approx 3.30$, gives the order of convergence of the method. ☐
Based on Theorem 5, the inclusion of memory increases the order of convergence of the family, since the orders of convergence of (1) and (2) are lower. In Section 4.2, the dynamical study of this family is performed in order to check the stability of its members.
Proceeding analogously to the TM1 case, two accelerating parameters $\delta_1$ and $\delta_2$ are included in Traub's original method (1), obtaining the iterative scheme
$$y_k = x_k - \frac{f(x_k)}{f'(x_k) + \delta_1 f(x_k)}, \qquad x_{k+1} = y_k - \frac{f(y_k)}{f'(x_k) + \delta_2 f(x_k)}, \qquad k = 0, 1, \ldots \qquad (9)$$
whose error equation is shown in the next result.
Theorem 6.
Let us consider a real function $f:I\subseteq\mathbb{R}\rightarrow\mathbb{R}$, sufficiently differentiable in an open interval I. If $\alpha\in I$ is a simple zero of f and $x_0$ is near enough to α, then the iterative family (9) has order of convergence 3 for any values of the parameters $\delta_1$ and $\delta_2$, and its error equation is
$$e_{k+1} = (\delta_1 + c_2)(\delta_2 + 2c_2)e_k^3 + O(e_k^4), \qquad (10)$$
where $e_k = x_k - \alpha$ and $c_j = \frac{f^{(j)}(\alpha)}{j!\, f'(\alpha)}$, $j \geq 2$.
Proof. 
By using Taylor series expansions around the root α, $f(x_k)$ is expressed as
$$f(x_k) = f'(\alpha)\left[e_k + c_2 e_k^2 + c_3 e_k^3 + c_4 e_k^4 + O(e_k^5)\right],$$
and its derivative as
$$f'(x_k) = f'(\alpha)\left[1 + 2c_2 e_k + 3c_3 e_k^2 + 4c_4 e_k^3 + O(e_k^4)\right].$$
Using these expansions, we have
$$y_k - \alpha = e_k - \frac{f(x_k)}{f'(x_k) + \delta_1 f(x_k)} = (c_2 + \delta_1)e_k^2 + \left(-2c_2\delta_1 - 2c_2^2 + 2c_3 - \delta_1^2\right)e_k^3 + O(e_k^4).$$
Expanding $f(y_k)$ around α, we obtain
$$f(y_k) = f'(\alpha)\left[(c_2 + \delta_1)e_k^2 + \left(-2c_2\delta_1 - 2c_2^2 + 2c_3 - \delta_1^2\right)e_k^3 + O(e_k^4)\right].$$
Then, the error equation becomes
$$e_{k+1} = y_k - \alpha - \frac{f(y_k)}{f'(x_k) + \delta_2 f(x_k)} = (\delta_1 + c_2)(\delta_2 + 2c_2)e_k^3 + O(e_k^4).$$
 ☐
If we study how to increase the order of convergence of this class, we can easily verify that if $\delta_1 = -c_2$ and $\delta_2 = -2c_2$, the order of convergence can increase, at least, up to 5. We need to approximate $f'(\alpha)$ and $f''(\alpha)$, and this could be done again by using linear approximations, as in the family TM1. However, if we want to get an order of convergence closer to 5, it is preferable to use higher-order approximations.
For this purpose, we use Newton's interpolation polynomial of second degree, built from the three available approximations $x_k$, $x_{k-1}$ and $y_{k-1}$, to interpolate f. Denoted by $N_2(t; x_k, x_{k-1}, y_{k-1}) = N_2(t)$, it is defined as
$$N_2(t) = f(x_k) + f[x_k, x_{k-1}](t - x_k) + f[x_k, x_{k-1}, y_{k-1}](t - x_k)(t - x_{k-1}). \qquad (11)$$
Now, if we set the approximations
$$f'(\alpha) \approx N_2'(x_k), \qquad f''(\alpha) \approx N_2''(x_k),$$
we have the following accelerator parameters:
$$\delta_{1,k} = -\frac{1}{2}\,\frac{N_2''(x_k)}{N_2'(x_k)}, \qquad \delta_{2,k} = -\frac{N_2''(x_k)}{N_2'(x_k)}. \qquad (12)$$
The Traub-type family with memory (9), taking the expressions of the two accelerator parameters in (12), is called the TM2 method.
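A sketch of TM2 in Python follows. It is our own implementation of (9) with the parameters of (12); in particular, bootstrapping $y_0$ with a plain Traub step and the stopping rule are assumptions of ours, not fixed by the paper:

```python
def tm2(f, df, x0, x1, tol=1e-12, max_iter=50):
    """Traub-type method with memory TM2: delta_{1,k} and delta_{2,k}
    come from the second-degree Newton interpolation polynomial N2
    built on x_k, x_{k-1} and y_{k-1}."""
    # bootstrap: one Newton step at x0 to get an initial y_{k-1}
    xp = x0
    yp = xp - f(xp) / df(xp)
    x = x1
    for _ in range(max_iter):
        f_x_xp = (f(x) - f(xp)) / (x - xp)           # f[x_k, x_{k-1}]
        f_x_yp = (f(x) - f(yp)) / (x - yp)           # f[x_k, y_{k-1}]
        f_x_xp_yp = (f_x_xp - f_x_yp) / (xp - yp)    # f[x_k, x_{k-1}, y_{k-1}]
        n2p = f_x_xp + f_x_xp_yp * (x - xp)          # N2'(x_k)
        n2pp = 2.0 * f_x_xp_yp                       # N2''(x_k)
        d1 = -0.5 * n2pp / n2p
        d2 = -n2pp / n2p
        y = x - f(x) / (df(x) + d1 * f(x))
        x_new = y - f(y) / (df(x) + d2 * f(x))
        if abs(x_new - x) < tol:
            return x_new
        xp, yp, x = x, y, x_new
    return x
```

Note that $\delta_{2,k} = 2\delta_{1,k}$, since both quotients share $N_2''(x_k)/N_2'(x_k)$.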
In the next theorem, we prove how much the order has increased with respect to Traub’s method.
Theorem 7.
Let us consider a real function $f:I\subseteq\mathbb{R}\rightarrow\mathbb{R}$, sufficiently differentiable in an open interval I. If $\alpha\in I$ is a simple zero of f and $x_0$ is near enough to α, then the iterative method TM2 converges to α with order of convergence $p \approx 3.56$.
Proof. 
From expression (10), we have the following relation:
$$e_{k+1} \sim (\delta_{1,k} + c_2)(\delta_{2,k} + 2c_2)e_k^3. \qquad (13)$$
Let us denote $e_{k,y} = y_k - \alpha$ for all k. Let $N_2(t)$ be defined as in (11), and consider the following Taylor developments in terms of the errors at each iterative step of the method:
$$f(x_k) = f'(\alpha)\left[e_k + c_2 e_k^2 + c_3 e_k^3 + c_4 e_k^4 + O(e_k^5)\right], \qquad f(x_{k-1}) = f'(\alpha)\left[e_{k-1} + c_2 e_{k-1}^2 + c_3 e_{k-1}^3 + c_4 e_{k-1}^4 + O(e_{k-1}^5)\right],$$
$$f(y_{k-1}) = f'(\alpha)\left[e_{k-1,y} + c_2 e_{k-1,y}^2 + c_3 e_{k-1,y}^3 + c_4 e_{k-1,y}^4 + O(e_{k-1,y}^5)\right].$$
Using these developments in $N_2(t)$, evaluating the derivatives of $N_2(t)$ at the point $x_k$ and rejecting the terms in $e_k$ of order higher than 0, we obtain:
$$\delta_{1,k} = -c_2 - c_3 e_{k-1,y} - c_3 e_{k-1} - c_4 e_{k-1,y}^2 - c_4 e_{k-1}^2 + (c_2c_3 - c_4)e_{k-1,y}e_{k-1} + O_3(e_{k-1}, e_{k-1,y}).$$
From the previous calculation, the following relations are satisfied:
$$\delta_{1,k} + c_2 \sim e_{k-1}, \qquad \delta_{2,k} + 2c_2 = 2(\delta_{1,k} + c_2) \sim 2e_{k-1} \sim e_{k-1}. \qquad (14)$$
On the other hand, let the R-order of the method be at least p, so that
$$e_{k+1} \sim D_{k,p}\, e_k^p, \qquad (15)$$
where $D_{k,p}$ tends to $D_p$, the asymptotic error constant, when $k\rightarrow\infty$. In the same way,
$$e_k \sim D_{k-1,p}\, e_{k-1}^p, \qquad (16)$$
so
$$e_{k+1} \sim D_{k,p}\left(D_{k-1,p}e_{k-1}^p\right)^p = D_{k,p}D_{k-1,p}^p\, e_{k-1}^{p^2}. \qquad (17)$$
Using relations (15) and (16) in (13),
$$e_{k+1} \sim e_{k-1}^2 e_k^3 \sim e_{k-1}^2\left(D_{k-1,p}e_{k-1}^p\right)^3 = D_{k-1,p}^3\, e_{k-1}^{3p+2}. \qquad (18)$$
Finally, if we match the exponents of (17) and (18), the following equation is obtained:
$$p^2 = 3p + 2,$$
whose only positive solution is $p = \frac{3+\sqrt{17}}{2} \approx 3.56$. Then, the TM2 method has order of convergence $p \approx 3.56$. ☐

4. Stability Analysis

4.1. Real Dynamics of Traub’s Method

In [5], Amat et al. carried out a dynamical study of Traub's method applied to polynomials of second and third degree. Taking this study on quadratic polynomials as a reference, we construct and analyze the bifurcation diagrams and the dynamical lines of the method. This analysis allows us to compare the dynamical features of Traub's method with the corresponding ones of TM1 and TM2.
Let us consider the following result, which allows the dynamical study to be generalized for some iterative schemes.
Theorem 8 (Scaling theorem).
Let $f(x)$ be an analytic function, and let $A(x) = \alpha x + \beta$, with $\alpha \neq 0$, be an affine map. Let $g(x) = \lambda (f \circ A)(x)$, with $\lambda \neq 0$. Then, the fixed-point operator $M_f$ is analytically conjugated to $M_g$ by A, i.e., $(A \circ M_g \circ A^{-1})(x) = M_f(x)$.
In addition, it is possible to analyze the dynamics of a family of polynomials with just the analysis of a few cases.
Theorem 9.
Let $q(x) = a_1 x^2 + a_2 x + a_3$, $a_1 \neq 0$, be a generic quadratic polynomial with simple roots. Then $q(x)$ can be reduced to $p(x) = x^2 + c$, where $c = \frac{4a_1a_3 - a_2^2}{4}$, by means of an affine map. This affine transformation induces a conjugation between $M_q$ and $M_p$, the fixed-point operators corresponding to the polynomials $q(x)$ and $p(x)$, respectively.
Traub’s method satisfies the Scaling Theorem, as proved in [5]. Moreover, according to Theorem 9, the study of the family of polynomials p c ( x ) = x 2 + c can be generalized to any quadratic polynomial.
If we apply Traub's method to $p_c(x) = x^2 + c$, $c \in \mathbb{R}$, we obtain the fixed-point operator $M_c:\mathbb{R}\rightarrow\mathbb{R}$, which depends on c:
$$M_c(x) = \frac{3x^4 - 6cx^2 - c^2}{8x^3}.$$
Solving $M_c(x) = x$, for every $c < 0$ we get two fixed points, $x_1^F(c) = \sqrt{-c}$ and $x_2^F(c) = -\sqrt{-c}$, corresponding to the roots of $p_c(x)$, which are superattracting. Moreover, $x_3^F(c) = \sqrt{-c/5}$ and $x_4^F(c) = -\sqrt{-c/5}$ are two repelling strange fixed points of $M_c$ when $c < 0$, since $|M_c'(x_{3,4}^F(c))| = 6$.
By solving the equation $M_c'(x) = 0$, we find that $x_1^C(c) = \sqrt{-c}$ and $x_2^C(c) = -\sqrt{-c}$ are the only critical points of $M_c$, so Traub's method does not have any free critical points.
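These statements can be cross-checked numerically. The sketch below is our own code (the central-difference approximation of the multiplier is an assumption, not the paper's procedure):

```python
def M(x, c):
    """Fixed-point operator of Traub's method on p_c(x) = x^2 + c."""
    return (3 * x ** 4 - 6 * c * x ** 2 - c ** 2) / (8 * x ** 3)

def multiplier(x, c, h=1e-6):
    """Central-difference approximation of |M_c'| at x."""
    return abs((M(x + h, c) - M(x - h, c)) / (2 * h))
```

For c = −1, the roots ±1 are superattracting fixed points (multiplier 0) and ±√(1/5) are strange fixed points with multiplier 6.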
We have plotted in Figure 1 the bifurcation diagrams of the fixed points of $M_c$. Figure 1a represents the superattracting fixed points $x_1^F(c)$ and $x_2^F(c)$, while Figure 1b represents the strange fixed points $x_3^F(c)$ and $x_4^F(c)$. The plots have been generated by iterating Traub's method taking as initial estimation each fixed point with a small perturbation. For each c, the successive values of $x_k$ are plotted from iterate #500 to #700 in order to represent the advanced state of the orbit of each initial guess. The bifurcation diagrams show the values of c for which the method is more stable. In all cases, no points appear for $c > 0$ because the fixed points are complex, which verifies that c cannot take positive values for any fixed point. As $x_{1,2}^F(c)$ are superattracting, in Figure 1a all the points converge to one of the two roots. The same behavior is observed in Figure 1b because $x_{3,4}^F(c)$ are repelling, so there are no attracting strange fixed points.
As represented in [19], the dynamical lines of Traub's method applied to $p_c(x)$ are shown in Figure 2 for different values of c. Using the software MATLAB® 2017b, the interval $x_0 \in [-5, 5]$ has been divided into 500 points, which are used as initial estimations to iterate Traub's method. When a point in the interval converges to one of the roots of $p_c(x)$, it is painted in orange or blue: orange points correspond to the basin of attraction of $x_1^F(c)$, while blue ones belong to $x_2^F(c)$. Points that do not converge to any root are painted in black. The stopping criteria are a maximum of 50 iterations and a difference between two consecutive iterates below $10^{-3}$.
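The procedure just described can be sketched in a few lines of Python (our own re-implementation of the Figure 2 computation; the grid size and tolerances follow the text, while the root-matching threshold is a mild assumption):

```python
def traub_step(x, c):
    # Traub fixed-point operator on p_c(x) = x^2 + c
    return (3 * x ** 4 - 6 * c * x ** 2 - c ** 2) / (8 * x ** 3)

def dynamical_line(c, n=500, lo=-5.0, hi=5.0, max_iter=50, tol=1e-3):
    """Label each initial estimation on [lo, hi]: 0 if its orbit converges
    to the positive root of x^2 + c, 1 for the negative root, and -1
    otherwise (the black points of Figure 2). Assumes c < 0."""
    roots = [(-c) ** 0.5, -((-c) ** 0.5)]
    labels = []
    for i in range(n):
        x = lo + (hi - lo) * i / (n - 1)
        for _ in range(max_iter):
            x_new = traub_step(x, c)
            done = abs(x_new - x) < tol
            x = x_new
            if done:
                break
        label = -1
        for j, r in enumerate(roots):
            if abs(x - r) < 1e-2:
                label = j
        labels.append(label)
    return labels
```

For c = −1, the endpoints of the interval converge to the nearest root, matching the orange/blue pattern of Figure 2.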
The convergence plane [26] gathers into one graphic the complete behavior of every member of a family of one-dimensional iterative methods. Figure 3 shows the convergence plane of Traub's method. The basins of attraction of the fixed points have been colored in blue or orange, taking values of the parameter $c \in [-30, 0)$ and initial estimations for the method $x_0 \in [-30, 30]$. As an added value, the superattracting fixed points and the strange fixed points are also represented with black and white lines, respectively, since they depend on the value of c.
For every value of $c < 0$, each initial guess converges to a superattracting fixed point. Almost every initial guess $x_0 > 0$ tends to $x_1^F(c)$, and almost every initial guess $x_0 < 0$ tends to $x_2^F(c)$. There is a thin region of initial estimations that converge to the farther superattracting fixed point; the width of this region is set by the values of the strange fixed points $x_3^F(c)$ and $x_4^F(c)$.

4.2. Multidimensional Real Dynamics of TM1 Method

From now on, we apply the TM1 scheme to the family of polynomials $p_c(x) = x^2 + c$. As TM1 is a method with memory, its fixed-point function depends on the two previous iterates, $x_k$ and $x_{k-1}$, which will be denoted by x and z, respectively. Then, the fixed-point function is:
$$R_c(z, x) = (x, g(z, x)) = \left(x,\ -\frac{4x^2z^4 + 8cx^2z^2 - 9x^6 + 15cx^4 - 3c^2x^2 + c^3}{2x(c - 3x^2)^2}\right).$$
Since a fixed point must satisfy $z = x$ and $x = g(z, x)$, the fixed points are those x such that $R_c(x, x) = (x, x)$. Solving this equation, the only fixed points for $c \neq 0$ are the roots of $p_c(x)$, that is, $x_{1,2}^F(c) = (\pm\sqrt{-c}, \pm\sqrt{-c})$. When $c > 0$, all the fixed points are complex, so they are out of this study.
To analyze the behavior of the fixed points, let us consider the Jacobian matrix of $R_c(z, x)$ at the point $x_1^F(c) = (\sqrt{-c}, \sqrt{-c})$:
$$R_c'(\sqrt{-c}, \sqrt{-c}) = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}.$$
The eigenvalues associated with this fixed point are $\lambda_1 = \lambda_2 = 0$. The same result is obtained for the fixed point $x_2^F(c) = (-\sqrt{-c}, -\sqrt{-c})$. Then, by applying Theorem 2, both fixed points are attracting.
In Figure 4, some dynamical planes associated with the TM1 method for different values of c are shown. The implementation of these dynamical planes follows a structure similar to that presented in [27]. A mesh of 500 × 500 initial estimations has been used to iterate the method, and each point is painted according to the root it converges to: blue for $(\sqrt{-c}, \sqrt{-c})$, orange for $(-\sqrt{-c}, -\sqrt{-c})$, and black otherwise. The stopping criteria are a maximum of 50 iterations and a difference between two consecutive iterates below $10^{-3}$. An expansion of the basins of attraction can be seen as the value of c decreases. Let us remark that there is no periodic orbit in the black region.

4.3. Multidimensional Real Dynamics of TM2 Method

As stated in Section 2, methods with memory that use two previous iterates have the expression $x_{k+1} = g(x_{k-1}, x_k)$. However, as the TM2 method uses the interpolation polynomial $N_2(t; x_k, x_{k-1}, y_{k-1})$, the calculation of the estimation $x_{k+1}$ requires the knowledge of $x_k$, $x_{k-1}$ and $y_{k-1}$. Therefore, the general expression of a method with memory that uses these three previous points is
$$x_{k+1} = g(x_{k-1}, y_{k-1}, x_k), \quad k \geq 1,$$
where g is its fixed-point function. Moreover, its fixed-point operator $G:\mathbb{R}^3\rightarrow\mathbb{R}^3$ is defined by the expression
$$G(x_{k-1}, y_{k-1}, x_k) = (x_k, y_k, x_{k+1}) = (x_k, y_k, g(x_{k-1}, y_{k-1}, x_k)), \quad k = 1, 2, \ldots \qquad (19)$$
where x 0 , y 0 and x 1 are the initial approximations.
We denote $x = x_k$, $z = x_{k-1}$, $x_y = y_k$ and $z_y = y_{k-1}$, for all k. Then, (19) defines a discrete dynamical system whose fixed points $(z, z_y, x) \in \mathbb{R}^3$ must verify
$$z = z_y = x, \qquad x = g(z, z_y, x). \qquad (20)$$
To compare the stability of TM2 with the TM1 and Traub (1) methods, the family of polynomials $p_c(x) = x^2 + c$ is used again. The fixed-point operator of the TM2 scheme, when applied to $p_c(x)$, is
$$G_c(z, z_y, x) = \left(x,\ x_y,\ \frac{2x\left(-2c^3 + 5c^2x^2 - 8cx^4 + x^6\right)}{(c - 3x^2)^2(x^2 - c)}\right).$$
When the conditions (20) are imposed on $G_c$ in order to obtain a real-valued function, the result is a one-dimensional operator $\tilde{G}_c(x)$, whose fixed points determine the stability of the scheme. For this purpose, the one-dimensional operator takes the expression
$$\tilde{G}_c(x) = G_c(z, z_y, x)\big|_{z = z_y = x} = \frac{2x\left(-2c^3 + 5c^2x^2 - 8cx^4 + x^6\right)}{(c - 3x^2)^2(x^2 - c)}.$$
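As a numerical cross-check of the stability properties discussed next, the one-dimensional TM2 operator can be evaluated directly; this sketch is our own (the derivative is approximated by central differences):

```python
def G_tilde(x, c):
    """One-dimensional restriction of the TM2 fixed-point operator
    on p_c(x) = x^2 + c."""
    num = 2 * x * (-2 * c ** 3 + 5 * c ** 2 * x ** 2 - 8 * c * x ** 4 + x ** 6)
    den = (c - 3 * x ** 2) ** 2 * (x ** 2 - c)
    return num / den

def G_tilde_prime(x, c, h=1e-6):
    # central-difference approximation of the derivative
    return (G_tilde(x + h, c) - G_tilde(x - h, c)) / (2 * h)
```

For c = −1, the roots ±1 are superattracting fixed points, and the strange fixed point x = 0 is repelling with multiplier 4.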
The fixed points of $\tilde{G}_c(x)$ are the roots of $p_c(x)$, $x_1^F(c) = \sqrt{-c}$ and $x_2^F(c) = -\sqrt{-c}$, for $c < 0$. In addition, $x_3^F = 0$ is a strange fixed point. When $c > 0$, all the fixed points of $\tilde{G}_c(x)$ are complex, so they are out of the real dynamics analysis. As the Jacobian matrix is of the form
$$G_c'(z, z_y, x) = \begin{pmatrix} 0 & 0 & 1 \\ 0 & 0 & 0 \\ 0 & 0 & \tilde{G}_c'(x) \end{pmatrix},$$
the stability of the fixed points can be checked through $\tilde{G}_c'(x)$, which is given by
$$\tilde{G}_c'(x) = \frac{2(2c - 3x^2)(c + x^2)^4}{(c - 3x^2)^3(c - x^2)^2}.$$
Evaluating $|\tilde{G}_c'(x)|$ at the fixed points, $x_1^F(c)$ and $x_2^F(c)$ are superattracting, and $x_3^F$ is repelling, because $|\tilde{G}_c'(0)| = 4$.
Computing $\tilde{G}_c'(x) = 0$, four critical points are obtained: the roots of $p_c$, and two free critical points, $x_1^C(c) = \sqrt{2c/3}$ and $x_2^C(c) = -\sqrt{2c/3}$, which are real when $c > 0$.
Following the same procedure as for Traub's method, the bifurcation diagrams of the fixed points are plotted in Figure 5. On the one hand, Figure 5a confirms that $x_{1,2}^F(c)$ are superattracting fixed points when $c < 0$. On the other hand, Figure 5b illustrates that $x_3^F$ is a repelling point, because if it is taken, with a small perturbation, as an initial estimation of the method, the successive iterates do not converge to it but to the roots of $p_c(x)$.
The parameter lines of a method show the final orbit of each critical point depending on c. As each point of the lines represents a particular method, they allow choosing the values of the parameter that guarantee convergence to one of the roots, i.e., selecting the most stable methods of the family. The TM2 method only has free critical points when $c > 0$; however, in this case there are no real fixed points. For this reason, the parameter lines of the TM2 method do not give us information about the stability of the family.
Figure 6 shows the dynamical lines of the TM2 method applied to $p_c(x)$ for different values of c. The representation follows the same structure as the dynamical lines of Figure 2. Figure 6a–c show that for every $c < 0$, the orbit of each initial $x_0$ converges to a root of $p_c(x)$; moreover, negative initial guesses converge to $x_1^F(c)$ and positive ones converge to $x_2^F(c)$. As expected, the TM2 method does not converge when c is positive, as shown in Figure 6d.
To analyze the dynamical behavior of the complete family, Figure 7 plots the convergence plane of the TM2 method. Following the notation of Figure 3, there is now a white line representing $x_3^F$. This strange fixed point splits the plane into two half-planes, each one containing a different basin of attraction.

5. Conclusions

In this paper, two Traub-type iterative schemes are designed by introducing accelerating parameters in the iterative expression of Traub's scheme. The use of memory in these parameters originates the TM1 and TM2 methods, whose convergence and stability analyses have been carried out to compare them with Traub's scheme.
In the dynamical study, the Traub and TM2 methods have strange fixed points, but their bifurcation diagrams show that these points are repelling. In addition, the bifurcation diagrams confirm that the roots of $p_c(x)$ are superattracting points, because the iterates converge to one of them even when they start close to a strange fixed point.
Regarding the order of convergence of the Traub, TM1 and TM2 methods, the use of memory guarantees a higher order of convergence than the original method without any additional functional evaluations. Moreover, the convergence plane in Figure 7 verifies that the methods with memory have greater stability than the corresponding ones without memory.
These good properties of the proposed methods, both in order of convergence and in stability, allow them to be applied to practical problems while relaxing the usually strong conditions on the initial estimates.

Author Contributions

The individual contributions of the authors were: conceptualization, J.R.T. and N.G.; software, F.I.C.; validation, F.I.C., A.C. and J.R.T.; formal analysis, N.G.; investigation, F.I.C.; writing—original draft preparation, N.G.; writing—review and editing, J.R.T. and A.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research was partially supported by Ministerio de Ciencia, Innovación y Universidades under grant PGC2018-095896-B-C22 (MCIU/AEI/FEDER/UE).

Acknowledgments

The authors would like to thank the anonymous reviewers for their useful comments that have improved the final version of this manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

Figure 1. Bifurcation diagrams for the fixed points of Traub's method.
Figure 2. Dynamical lines of Traub's method on p c ( x ) = x 2 + c for different values of c. Color basins: x 1 F ( c ) orange, x 2 F ( c ) blue, no convergent points in black.
Figure 3. Convergence plane of Traub's method. Color basins: x 1 F ( c ) orange, x 2 F ( c ) blue, no convergent points in black.
Figure 4. Dynamical planes of TM1 method on p c ( x ) = x 2 + c for different values of c. Color basins: x 1 F ( c ) orange, x 2 F ( c ) blue, no convergent points in black.
Figure 5. Bifurcation diagrams for the fixed points of TM2 method.
Figure 6. Dynamical lines of TM2 method on p c ( x ) = x 2 + c for different values of c. Color basins: x 1 F ( c ) orange, x 2 F ( c ) blue, no convergent points in black.
Figure 7. Convergence plane of TM2 method. Color basins: x 1 F ( c ) orange, x 2 F ( c ) blue, no convergent points in black.
