Article

Generating Root-Finder Iterative Methods of Second Order: Convergence and Stability

Francisco I. Chicharro, Alicia Cordero, Neus Garrido and Juan R. Torregrosa
1 Escuela Superior de Ingeniería y Tecnología, Universidad Internacional de La Rioja, Avenida de La Paz 137, 26006 Logroño, Spain
2 Multidisciplinary Mathematics Institute, Universitat Politècnica de València, Camino de Vera s/n, 46022 València, Spain
* Author to whom correspondence should be addressed.
Axioms 2019, 8(2), 55; https://doi.org/10.3390/axioms8020055
Submission received: 1 April 2019 / Revised: 30 April 2019 / Accepted: 30 April 2019 / Published: 6 May 2019

Abstract

In this paper, a simple family of one-point iterative schemes for approximating the solutions of nonlinear equations, by using the procedure of weight functions, is derived. The convergence analysis is presented, showing the sufficient conditions for the weight function. Many known schemes are members of this family for particular choices of the weight function. The dynamical behavior of one of these choices is presented, analyzing the stability of the fixed points and the critical points of the rational function obtained when the iterative expression is applied on low degree polynomials. Several numerical tests are given to compare different elements of the proposed family on non-polynomial problems.

1. Introduction

Solving nonlinear equations f(x) = 0, where f : I ⊆ ℝ → ℝ is a real function defined on an open interval I, is a classical problem with many applications in several branches of science and engineering. In general, to calculate the roots of f(x) = 0 we must resort to iterative schemes, which can be classified as methods with or without memory. In this manuscript, we work only with iterative methods without memory, that is, algorithms that compute each new iterate by evaluating the nonlinear function (and its derivatives) only at the previous one.
There exist many iterative methods of different orders of convergence designed to estimate the roots of f(x) = 0 (see, for example, [1,2] and the references therein). The efficiency index, defined by Ostrowski in [3], allows these algorithms to be classified in terms of their order of convergence p and the number of functional evaluations d needed per iteration to reach it. It is defined as I = p^{1/d}; the higher the efficiency index of an iterative scheme, the better it is. Moreover, this concept is complemented by Kung–Traub's conjecture [4], which establishes an upper bound for the order of convergence that can be reached with a specific number of functional evaluations, p ≤ 2^{d-1}. Those methods whose order reaches this bound are called optimal methods, being the most efficient ones among those with the same number of functional evaluations per iteration. For instance, Newton's method uses d = 2 functional evaluations per iteration and has order p = 2 = 2^{d-1}, so it is an optimal method with efficiency index I = 2^{1/2} ≈ 1.414.
Newton’s method is a well-known technique for solving nonlinear equations,
x_{k+1} = x_k - \frac{f(x_k)}{f'(x_k)}, \quad k = 0, 1, \ldots,
starting with an initial guess x 0 . This method fascinates many researchers because it is applicable to several kinds of equations such as nonlinear equations, systems of nonlinear algebraic equations, differential equations, integral equations, and even to random operator equations. However, as is well known, a major difficulty in the application of Newton’s method is the selection of initial guesses, which must be chosen sufficiently close to the true solution in order to guarantee the convergence. Finding a criterion for choosing these initial guesses is quite cumbersome, and therefore, more effective globally-convergent algorithms are still needed.
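To illustrate this sensitivity to the initial guess, the following sketch (Python rather than the MATLAB environment used later in the paper; the function names, tolerance, and escape bound are illustrative choices, not taken from the article) runs Newton's method on f(x) = arctan(x): it converges from x_0 = 1 but diverges from x_0 = 2, in agreement with the behavior reported for f_1 in Section 4.

```python
import math

# A minimal sketch of Newton's method (not the authors' implementation).
def newton(f, df, x0, tol=1e-12, max_iter=200):
    """Iterate x_{k+1} = x_k - f(x_k)/f'(x_k)."""
    x = x0
    for k in range(1, max_iter + 1):
        x_new = x - f(x) / df(x)
        if abs(x_new - x) < tol:      # converged
            return x_new, k
        if abs(x_new) > 1e12:         # iterates escaping: treat as divergence
            return float('nan'), k
        x = x_new
    return x, max_iter

f, df = math.atan, lambda x: 1.0 / (1.0 + x * x)
print(newton(f, df, 1.0))   # converges to the root x* = 0
print(newton(f, df, 2.0))   # diverges: the iterates grow without bound
```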
In this paper, we use the weight function technique (see, for example [5,6,7]) to design a general class of second order iterative methods including Newton’s scheme and also other known and new schemes. These methods appear when specific weight functions (that may depend on one or more parameters) are selected. Therefore, we will choose among those methods with wider areas of converging initial estimations, able to converge when Newton’s method fails. In order to get this aim, we work with the associated complex dynamical system, finding their fixed and critical points, analyzing their asymptotic convergence, and those values that simplify the corresponding rational function. This procedure is widely used in this area of research, as can be found in [8,9,10].

2. The Proposed Family

In this section, we derive a simple family of second-order iterative methods for solving nonlinear equations f(x) = 0 by means of the weight function procedure. Its iterative expression is:
x_{k+1} = x_k - H(t_k), \quad k = 0, 1, 2, \ldots,   (1)
where t_k = f(x_k)/f'(x_k). This family contains, for particular choices of the function H, many known iterative schemes; in particular, Newton's method is obtained when H(t) = t.
The following result shows the required conditions on weight function H in order to reach, at least, order two.
Theorem 1.
Let f : D ⊆ ℝ → ℝ be a sufficiently differentiable function in an open interval D and x^* a simple solution of the nonlinear equation f(x) = 0. Let us suppose that the initial estimation x_0 ∈ D is close enough to x^* and that H(t) satisfies H(0) = 0, H'(0) = 1, and |H''(0)| < ∞. Then, the members of family (1) have order of convergence two, their error equation being:
e_{k+1} = \left( c_2 - \frac{1}{2} H''(0) \right) e_k^2 + O(e_k^3),   (2)
where e_k = x_k - x^* and c_j = f^{(j)}(x^*)/(j!\, f'(x^*)), j ≥ 2.
Proof. 
Let us denote by e_k = x_k - x^* the error at each iteration of (1), where x^* is a simple solution of the equation f(x) = 0. By using Taylor series around x^*, we have:
f(x_k) = f'(x^*) \left[ e_k + c_2 e_k^2 + c_3 e_k^3 + c_4 e_k^4 \right] + O(e_k^5), \qquad f'(x_k) = f'(x^*) \left[ 1 + 2 c_2 e_k + 3 c_3 e_k^2 + 4 c_4 e_k^3 \right] + O(e_k^4),
and therefore, the weight variable t_k can be expressed as:
t_k = \frac{f(x_k)}{f'(x_k)} = e_k - c_2 e_k^2 + (2 c_2^2 - 2 c_3) e_k^3 + O(e_k^4).   (3)
Now, by using the Taylor expansion of H and (3), we get:
H(t_k) = H(0) + H'(0) t_k + \frac{1}{2} H''(0) t_k^2 + O(t_k^3) = H(0) + H'(0) e_k + \left( \frac{1}{2} H''(0) - H'(0) c_2 \right) e_k^2 + \left( H'(0)(2 c_2^2 - 2 c_3) - H''(0) c_2 \right) e_k^3 + O(e_k^4).
Then, the error equation is:
e_{k+1} = e_k - H(t_k) = -H(0) + \left( 1 - H'(0) \right) e_k + \left( H'(0) c_2 - \frac{1}{2} H''(0) \right) e_k^2 + O(e_k^3).
By applying the conditions H(0) = 0, H'(0) = 1, and |H''(0)| < ∞, we obtain the error Equation (2). Therefore, the elements of family (1) have quadratic order of convergence. □
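The error equation of Theorem 1 can also be checked symbolically. The following sketch (assuming SymPy is available; the symbol a2 plays the role of H''(0) and is an illustrative name) reproduces the computation of the proof up to second order.

```python
import sympy as sp

e, c2, c3, a2 = sp.symbols('e c2 c3 a2')

# Taylor expansions of f(x_k) and f'(x_k) around x*, normalized by f'(x*)
f_val  = e + c2 * e**2 + c3 * e**3
df_val = 1 + 2 * c2 * e + 3 * c3 * e**2

t = sp.series(f_val / df_val, e, 0, 3).removeO()   # t_k = f(x_k)/f'(x_k)

# Weight function with H(0) = 0, H'(0) = 1 and H''(0) = a2
H = t + sp.Rational(1, 2) * a2 * t**2

e_next = sp.series(e - H, e, 0, 3).removeO()
print(sp.expand(e_next))   # -> c2*e**2 - a2*e**2/2, i.e., (c2 - H''(0)/2) e_k^2
```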
Many authors in the literature have devoted their efforts to analyzing the stability of Newton's method depending on the initial iterates. Good overviews can be found in [8,11]. The dynamics of several iterative schemes belonging to the family (1) can therefore be studied and compared with that of Newton's method. We focus on an iterative family belonging to (1), called N2_α, which has a parameter in its iterative expression. Its stability is studied in Section 3 depending on the values of the parameter.
The weight function H(t_k) = t_k/(1 + α t_k) gives rise to the proposed iterative family:
x_{k+1} = x_k - \frac{f(x_k)}{f'(x_k) + α f(x_k)}, \quad k = 0, 1, 2, \ldots,
that we have denoted by N2_α.
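A minimal sketch of this scheme is given below (Python; not the authors' code, and the stopping criteria are illustrative choices). With α = 0.3 it recovers, for instance, the root of arctan(x) from x_0 = 2, an initial guess from which Newton's method diverges.

```python
import math

# Sketch of the N2_alpha iteration x_{k+1} = x_k - f(x_k)/(f'(x_k) + alpha*f(x_k)).
def n2(f, df, x0, alpha, tol=1e-12, max_iter=200):
    x = x0
    for k in range(1, max_iter + 1):
        x_new = x - f(x) / (df(x) + alpha * f(x))
        if abs(x_new - x) < tol or abs(f(x_new)) < tol:
            return x_new, k
        x = x_new
    return x, max_iter

f, df = math.atan, lambda x: 1.0 / (1.0 + x * x)
print(n2(f, df, 2.0, alpha=0.3))   # converges to x* = 0, unlike Newton from x_0 = 2
```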
Remark 1.
The function H(t) = t/(1 + α t) satisfies the conditions described in Theorem 1. Moreover, the error equation in this case is e_{k+1} = (α + c_2) e_k^2 + O(e_k^3). Therefore, if we choose α = -c_2, the method increases its order of convergence, but c_2 = f''(x^*)/(2 f'(x^*)) and x^* is not known. This type of error equation leads us to think of methods with memory, which are beyond the scope of this work.
In this manuscript, we are interested in analyzing the values of α that provide methods with greater stability and that also have the same order as Newton’s method and those whose corresponding schemes have bad stability properties, so we will not use any approximation for the parameter.

3. Complex Dynamics of the Family N2_α

In this section, a stability analysis of the family N2_α is made in the context of complex dynamics. First, we are going to recall some concepts of this theory. Further information can be found in [12].
Let R : Ĉ → Ĉ be a rational function, where Ĉ denotes the Riemann sphere. The orbit of a point z_0 ∈ Ĉ is defined as the set of its successive images by R, i.e., {z_0, R(z_0), R^2(z_0), …, R^m(z_0), …}. When a point z_0 ∈ Ĉ satisfies R(z_0) = z_0, it is called a fixed point. In particular, a fixed point z_0 with z_0 ≠ z_r, where f(z_r) = 0, is called a strange fixed point of R. If a point z_T ∈ Ĉ satisfies R^T(z_T) = z_T, but R^t(z_T) ≠ z_T for t < T, it is a T-periodic point.
The orbits of the fixed points are classified depending on the value of the corresponding multiplier, R'(z). If z_0 is a fixed point of R, then it is attracting when |R'(z_0)| < 1. In particular, it is a superattracting point if |R'(z_0)| = 0. The point is called repelling when |R'(z_0)| > 1 and neutral in the case |R'(z_0)| = 1. When the operator has an attracting point z^*, we define its basin of attraction, A(z^*), as the set:
A(z^*) = \{ z ∈ Ĉ : R^n(z) → z^*, \; n → ∞ \}.
A point z_0 ∈ Ĉ is called a critical point when R'(z_0) = 0, and a free critical point when, in addition, z_0 ≠ z_r, where z_r satisfies f(z_r) = 0.
In this paper, we use two graphical tools to visualize the dynamical behavior of the fixed points in the complex plane: the dynamical and the parameter planes. We can find reference texts around these dynamic tools in works such as [9,13].
Dynamical planes represent the basins of attraction associated with the fixed points of the rational operator corresponding to an iterative method. For their implementation, the complex plane is divided into a mesh of values where the X-axis represents the real part of a point and the Y-axis its imaginary part. Each node of the mesh therefore corresponds to a complex number that is taken as the initial estimation with which the method is iterated. The initial guess is depicted in a color that depends on the point to which its orbit has converged. In this way, the basins of attraction of the fixed or periodic points are observed, and we can determine whether the method has wide regions of convergence.
Parameter planes are used to study the dynamics of a family of iterative methods with at least one parameter. In this case, the complex plane is divided into a mesh of complex values for the parameter, the X-axis and the Y-axis being the real and imaginary parts of the parameter, respectively. Each point of the plane corresponds to a value of the parameter and, therefore, to a method belonging to the family. For each value of the parameter, the corresponding method is applied successively starting as the initial estimation with a free critical point of the operator, so there are as many parameter planes as independent (under conjugation) free critical points of the operator. The points in the plane are plotted with a non-black color when there is convergence to any of the attracting fixed points, remaining in black in other cases. This representation allows choosing the values of the parameter that give rise to the most stable methods of the family.
Next, the iterative family N2_α is applied to quadratic polynomials in order to check its dependence on the initial estimations.
Taking into account that the method under study does not satisfy the scaling theorem, the quadratic polynomials under study will be p_-(z) = z^2 - 1, p_+(z) = z^2 + 1, and p_0(z) = z^2. When the family N2_α is applied to these polynomials, three rational functions are obtained. Then, we analyze the asymptotic behavior of their fixed and critical points, and we represent the corresponding dynamical and parameter planes. Therefore, some conclusions are reached that can be extended to any quadratic polynomial and, up to some extent, to any nonlinear function.
By applying the family N2_α to the quadratic polynomials under consideration, the resulting rational functions are:
R_-(z) = z - \frac{z^2 - 1}{2z + (z^2 - 1) α}, \qquad R_+(z) = z - \frac{z^2 + 1}{2z + (z^2 + 1) α}, \qquad R_0(z) = \frac{z(1 + α z)}{2 + α z},   (6)
for p_-(z), p_+(z), and p_0(z), respectively. The following result analyzes the number and also the asymptotic behavior of the fixed points of the operators (6).
Lemma 1.
The fixed points of the rational operators R_-(z), R_+(z), and R_0(z) in (6) agree with the roots of their associated polynomials, being superattracting for R_-(z) and R_+(z). For the operator R_0(z), the fixed point is attracting. In addition, z = ∞ is a strange fixed point of all three operators, being in all cases neutral.
Proof. 
For the rational operator R_-(z), we must solve R_-(z) = z, that is,
z - \frac{z^2 - 1}{2z + (z^2 - 1) α} = z,
or equivalently,
\frac{z^2 - 1}{2z + (z^2 - 1) α} = 0.   (7)
The solutions of (7) are obtained from the numerator by solving z^2 - 1 = 0. The only solutions are z_{1,-} = 1 and z_{2,-} = -1, the roots of p_-(z). The proof for the rational operator R_+(z) is completely analogous, so we obtain that the fixed points of R_+(z) are the roots of the polynomial p_+(z), denoted by z_{1,+} = i and z_{2,+} = -i.
To check that z = ∞ is a fixed point of any rational operator R_β, β ∈ {-, +, 0}, we define the operator I_β(z) = 1 / R_β(1/z). Then, ∞ is a fixed point of R_β when the point z_F = 0 is a fixed point of I_β(z). For the operators R_-(z) and R_+(z), we have, respectively,
I_∓(z) = \frac{z (2z + α ∓ α z^2)}{z + α ∓ α z^2 ± z^3}.
From this, it is straightforward that I_∓(0) = 0, so infinity is a strange fixed point of both of them.
In the case of the polynomial p_0(z), the fixed points are the solutions of:
\frac{z(1 + α z)}{2 + α z} = z \iff z(1 + α z) - z(2 + α z) = 0,
so the only fixed point of R_0(z) is z = 0, and it is denoted by z_0.
Regarding the operator associated with infinity, we have for R_0(z):
I_0(z) = \frac{z(2z + α)}{z + α},
and also infinity is a strange fixed point of R_0, as I_0(0) = 0.
In order to analyze the stability of the fixed points, the derivatives of (6) are required:
R_∓'(z) = \frac{(z^2 ∓ 1)\,(2 + 4 α z ∓ α^2 + α^2 z^2)}{(2z ∓ α + α z^2)^2},   (8)
R_0'(z) = \frac{2 + 4 α z + α^2 z^2}{(2 + α z)^2}.   (9)
It is easy to prove from the factor (z^2 ∓ 1) in (8) that z_{j,-} and z_{j,+}, for j = 1, 2, are superattracting points of the corresponding operators. It is also immediate from (9) that z_0 is attracting, as |R_0'(z_0)| = 1/2 < 1.
The stability of infinity is studied through the value of the corresponding derivative of I(z) for each operator,
I_∓'(z) = \frac{(z^2 ∓ 1)\left( ∓ 4 α z ∓ α^2 + z^2 (α^2 ∓ 2) \right)}{(z ± z^3 + α ∓ α z^2)^2}, \qquad I_0'(z) = \frac{2 z^2 + 4 α z + α^2}{(z + α)^2},
at z = 0. For the three considered cases, we have I_∓'(0) = 1 and I_0'(0) = 1, so infinity is a neutral point. □
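These stability values can be reproduced symbolically. The sketch below (SymPy; limited to the operator R_0, with illustrative names) computes R_0'(0) and I_0'(0).

```python
import sympy as sp

z, a = sp.symbols('z alpha')
R0 = z * (1 + a * z) / (2 + a * z)
I0 = sp.simplify(1 / R0.subs(z, 1 / z))        # conjugated operator at infinity

print(sp.diff(R0, z).subs(z, 0))               # 1/2 -> z_0 = 0 is attracting
print(sp.simplify(sp.diff(I0, z).subs(z, 0)))  # 1   -> infinity is neutral
```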
The critical points of the rational operators, which are calculated by solving R'(z) = 0, are presented in the following result.
Lemma 2.
The critical points of the operators R_-(z), R_+(z), and R_0(z) (see (6)) satisfy:
(a) For R_-(z), the fixed points z_{1,-} and z_{2,-} are critical points. In addition, if α ≠ ±i, α ≠ 0, the operator R_-(z) has two free critical points: cr_{1,-}(α) = (-2 - \sqrt{2 + α^2})/α and cr_{2,-}(α) = (-2 + \sqrt{2 + α^2})/α. When α = ±i, the operator does not have free critical points.
(b) If α ≠ ±1, α ≠ 0, the critical points of the operator R_+(z) are the roots of the polynomial, z_{1,+} and z_{2,+}, and the free critical points cr_{1,+}(α) = (-2 - \sqrt{2 - α^2})/α and cr_{2,+}(α) = (-2 + \sqrt{2 - α^2})/α. When α = ±1, the only critical points are the roots of p_+(z).
(c) If α ≠ 0, the operator R_0(z) has two free critical points: cr_{1,0}(α) = (-2 - \sqrt{2})/α and cr_{2,0}(α) = (-2 + \sqrt{2})/α. In this case, the root of p_0(z) is not a critical point of R_0.
Let us remark that the case α = 0 corresponds to Newton's method, whose associated complex dynamics are well known. The proof of Lemma 2 uses the same procedure as that of Lemma 1, so we do not expand on it.
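As an illustration, the free critical points of R_0(z) in item (c) can be recovered with a short symbolic computation (a sketch, assuming SymPy):

```python
import sympy as sp

z, a = sp.symbols('z alpha')
R0 = z * (1 + a * z) / (2 + a * z)

# Critical points solve R_0'(z) = 0; note that the root z = 0 of p_0 is not among them.
crit = sp.solve(sp.diff(R0, z), z)
print(crit)   # -> the two free critical points (-2 ± sqrt(2))/alpha
```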
As the family N2_α depends on the parameter α, it is useful to draw the parameter planes associated with each critical point given in the previous result. This tool will allow us to choose values of α and compare the resulting dynamical planes.
In this paper, all the planes are plotted using the software MATLAB R2018b. For the parameter planes, α is taken over a mesh of 500 × 500 values in the complex plane, with Re(α) × Im(α) ∈ [-3, 3] × [-3, 3]. Taking a critical point as the initial estimation, the method is iterated until it reaches the maximum of 50 iterations or until it converges to one of the fixed points. Convergence is declared when the difference between the iterate and one of the attracting fixed points is lower than 10^{-6}. When there is convergence to any of them, the initial point is represented in red in the plane; in other cases, it is represented in black. In addition, the intensity of the red color indicates the number of iterations required to reach convergence (the brighter the color, the lower the number of iterations needed).
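A sketch of this computation for the operator R_0(z) and its free critical point cr_{2,0}(α) is shown below (Python with NumPy/Matplotlib rather than MATLAB; the mesh, iteration cap, and tolerance follow the settings just described, while the remaining choices are illustrative and this is not the authors' code).

```python
import numpy as np
import matplotlib.pyplot as plt

# Parameter plane for R_0(z) = z(1 + alpha z)/(2 + alpha z), iterating the free
# critical point cr_{2,0}(alpha) = (-2 + sqrt(2))/alpha as initial estimation.
n, max_it, tol = 500, 50, 1e-6
re = np.linspace(-3, 3, n)
im = np.linspace(-3, 3, n)
shade = np.zeros((n, n))                  # 0 stays black: no convergence detected

for i, y in enumerate(im):
    for j, x in enumerate(re):
        a = complex(x, y)
        if a == 0:
            continue                      # alpha = 0 (Newton's method) is excluded
        z = (-2 + np.sqrt(2)) / a
        for k in range(max_it):
            z = z * (1 + a * z) / (2 + a * z)
            if abs(z) < tol:              # convergence to the attracting point z_0 = 0
                shade[i, j] = max_it - k  # brighter red = fewer iterations
                break

plt.imshow(shade, extent=[-3, 3, -3, 3], origin='lower', cmap='Reds')
plt.xlabel('Re(alpha)'); plt.ylabel('Im(alpha)')
plt.show()
```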
Figure 1, Figure 2 and Figure 3 show the parameter planes of the considered operators for their associated free critical points. Each parameter plane depicts the values of α in the complex plane that give rise to methods for which there is convergence to the attracting fixed points, represented as white stars.
In Figure 1a, it is observed that only values of α with a positive real part are depicted in red, and Figure 1b is symmetric with respect to the imaginary axis. The rest of the complex plane is black, so we must avoid these values of α in order to select the most stable methods of the family.
For the rational operator R_+(z), Figure 2 shows a behavior very similar to that of Figure 1, but now the values of α that provide more stable methods are located around the imaginary axis, forming a vertical band to the left of the origin in Figure 2a and another one to the right of the origin in Figure 2b.
A completely different dynamical behavior is observed in Figure 3, as the critical point cr_{1,0}(α) produces a completely black parameter plane, while for the point cr_{2,0}(α) the parameter plane is entirely depicted in red; that is, one free critical point belongs to the basin of attraction of the root z_0 for any value of α, and the other one lies in a different basin of attraction independently of α.
Below, some dynamical planes are shown that confirm the previous results for the three operators, as the different values of α used to generate them have been chosen from the results obtained in the corresponding parameter planes.
The implementation in MATLAB of the dynamical planes is similar to that of the parameter planes (see [9]). To generate them, the real and imaginary parts of the initial estimations are represented on the two axes over a mesh of 500 × 500 points in [-15, 15] × [-15, 15] in the complex plane. The convergence criteria are the same as in the parameter planes, but now different colors are used to indicate to which fixed point the method converges. Orange represents convergence to z_{1,-}, z_{1,+}, and z_0 for the associated operator, and blue represents convergence to z_{2,-} and z_{2,+}. The attracting fixed points are depicted in the dynamical planes with white stars, while the free critical points are depicted with white squares.
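A vectorized sketch of one dynamical plane, for R_-(z) at a fixed value of α, is given below (Python with NumPy/Matplotlib; the chosen α, the colors, and the absence of per-point early stopping are illustrative simplifications, not the authors' code).

```python
import numpy as np
import matplotlib.pyplot as plt

# Dynamical plane of R_-(z) = z - (z^2 - 1)/(2z + alpha (z^2 - 1)): every mesh point
# is iterated a fixed number of times and then classified by proximity to a root.
alpha = 0.3
n, max_it, tol = 500, 50, 1e-6

x = np.linspace(-15, 15, n)
y = np.linspace(-15, 15, n)
X, Y = np.meshgrid(x, y)
Z = X + 1j * Y

with np.errstate(divide='ignore', invalid='ignore', over='ignore'):
    for _ in range(max_it):
        Z = Z - (Z**2 - 1) / (2 * Z + alpha * (Z**2 - 1))
    basin = np.zeros(Z.shape)          # 0: black, no convergence (escape to infinity)
    basin[np.abs(Z - 1) < tol] = 1     # basin of the root z = 1
    basin[np.abs(Z + 1) < tol] = 2     # basin of the root z = -1

plt.imshow(basin, extent=[-15, 15, -15, 15], origin='lower')
plt.xlabel('Re(z)'); plt.ylabel('Im(z)')
plt.show()
```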
Figure 4, Figure 5 and Figure 6 show the dynamical planes corresponding to the operators R_-(z), R_+(z), and R_0(z), respectively, for several values of α. In all cases, the parameter has been chosen in order to show the different performances suggested by Figure 1, Figure 2 and Figure 3. On the one hand, in Figure 4a,b and Figure 5a,b, the parameter is taken from the red regions shown in Figure 1 and Figure 2 and quite close to the attracting fixed points. On the other hand, Figure 4d and Figure 5d correspond to values of the parameter located in the black region of the parameter planes. As expected, when α is taken from a red region of the parameter plane, the corresponding basins of attraction in the dynamical plane are bigger than in the other cases (Figure 4d and Figure 5d). However, there is a wide black region in all the dynamical planes that corresponds to the basin of attraction of infinity, which behaves as an attractor with its own basin. Furthermore, in most of the dynamical planes, one free critical point belongs to the basin of attraction of infinity, the other remaining in the basin of attraction of a root of the corresponding polynomial. Moreover, in Figure 4d and Figure 5d, the planes with the largest black regions, the two free critical points belong to the basin of attraction of infinity. From Figure 6, we can observe that the basin of attraction of z_0 is bigger when α is close to z = 0, one free critical point lying inside this basin of attraction. As for the other operators, infinity behaves as an attracting point, with a free critical point lying in its basin of attraction.
The conclusions drawn from the parameter planes can be checked in Figure 4c and Figure 6c, where the value α = 0.3, which is close to the origin of the complex plane, gives rise to the broadest basins of attraction. For this reason, to perform the numerical experiments in the next section, we selected this value of α, which shows the stability of our proposed method.

4. Numerical Results

In this section, numerical experiments are performed in order to check the efficiency of the N2_α family in calculating the solutions of several nonlinear equations. This section also allows verifying that the stability analysis of the previous section is correct and that, in fact, it is possible to obtain values of α that provide methods of the N2_α family with greater stability in the root-finding process. For this purpose, let us consider the following nonlinear test functions:
  • f_1(x) = arctan(x); x^* = 0,
  • f_2(x) = x^3 - 2x + 2; x^* ≈ -1.769292,
  • f_3(x) = x^3 - 3x + 17; x^* ≈ -2.957664,
  • f_4(x) = ln(x + 1); x^* = 0,
  • f_5(x) = e^{-x} + 2 sin(x) - x + 3.5; x^* ≈ 3.273938,
  • f_6(x) = (x + 2) e^{1-x} + 2x + 5; x^* ≈ -2.043518,
where x^* is the exact solution (a code sketch collecting these functions is given after this list).
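The sketch below collects these test functions, their derivatives, and the quoted approximate roots (Python; an illustrative transcription, not the authors' code).

```python
import math

# Test functions f_1, ..., f_6 with their derivatives and approximate roots.
tests = [
    (lambda x: math.atan(x),                             lambda x: 1 / (1 + x**2),                       0.0),
    (lambda x: x**3 - 2 * x + 2,                         lambda x: 3 * x**2 - 2,                         -1.769292),
    (lambda x: x**3 - 3 * x + 17,                        lambda x: 3 * x**2 - 3,                         -2.957664),
    (lambda x: math.log(x + 1),                          lambda x: 1 / (x + 1),                          0.0),
    (lambda x: math.exp(-x) + 2 * math.sin(x) - x + 3.5, lambda x: -math.exp(-x) + 2 * math.cos(x) - 1,  3.273938),
    (lambda x: (x + 2) * math.exp(1 - x) + 2 * x + 5,    lambda x: -(x + 1) * math.exp(1 - x) + 2,       -2.043518),
]

for f, df, root in tests:
    assert abs(f(root)) < 1e-4   # sanity check against the quoted roots
```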
The weighted iterative family (1) includes many iterative schemes, due to the generality and simplicity of its structure. In Section 2, we have seen that Newton's method and the N2_α family are obtained from family (1) with the weight functions H(t) = t and H(t) = t/(1 + α t), respectively. Moreover, some well-known methods can be obtained from different selections of the weight function. By choosing:
H(t) = \frac{1 + λ t}{1 + (β + λ) t} \cdot \frac{t}{1 + λ t},
the resulting class is the iterative family presented by Kou and Li in [14], which has order of convergence two for any value of the parameters λ and β. According to the numerical results presented by the authors in [14], the method obtained when λ = 1 and β = 0 was used, denoting the resulting scheme by KL. On the other hand, the weight function:
H(t) = \frac{2t}{1 + \sqrt{1 + 4 γ f(x) t^2}}, \quad γ ∈ ℝ,
gives rise to an iterative family presented by Noor et al. [15], which converges quadratically for all γ . Let us denote by Noor1 the associated method for γ = 1 .
In this section, a numerical comparison between the Newton, KL, and Noor1 methods and our proposed family N2_α is carried out in order to show the performance of different methods with similar structures and the same order of convergence. Taking into account the dynamical results of Section 3, the values of the parameter α that provide the most stable methods of N2_α are those close to the origin of the complex plane. For this reason, the numerical experiments were carried out with α = 0.3, denoting the resulting method by N2_{0.3}.
Table 1 and Table 2 show the results obtained for the considered nonlinear test functions f_1–f_6 and different initial estimations. The numerical tests have been performed using MATLAB R2018b with variable precision arithmetic with 50 digits of mantissa. These tables show the number of iterations that each method needs to reach convergence, which is declared when |x_{k+1} - x_k| < 10^{-12} or |f(x_{k+1})| < 10^{-12}. If a method does not satisfy these criteria within a maximum of 200 iterations, it is considered not to have reached the solution of the equation, and the corresponding entries are marked with a dash in the tables. Moreover, a computational approximation of the order of convergence of the methods is given by the ACOC [16], calculated as the following quotient:
ACOC = \frac{\ln\left( |x_{k+1} - x_k| / |x_k - x_{k-1}| \right)}{\ln\left( |x_k - x_{k-1}| / |x_{k-1} - x_{k-2}| \right)}.
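A small helper computing this quantity from the last four iterates could look as follows (Python; an illustrative sketch, not the paper's code).

```python
import math

def acoc(x):
    """ACOC from a list of iterates x_0, ..., x_k produced by a method (len >= 4)."""
    num = math.log(abs(x[-1] - x[-2]) / abs(x[-2] - x[-3]))
    den = math.log(abs(x[-2] - x[-3]) / abs(x[-3] - x[-4]))
    return num / den
```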
As we can observe in the results provided by Table 1 and Table 2, Newton's method did not always converge for the nonlinear test functions considered, especially when the initial estimation was far from the solution. A similar behavior can be observed for the methods KL and Noor1 in some of the proposed examples. In addition, all three known methods often required a higher number of iterations. This fact shows the efficiency of N2_{0.3}, as the method was always convergent to the root of the functions and it needed fewer iterations.

5. Conclusions

Starting with a simple weighted iterative structure, a new parametric family with quadratic convergence has been obtained. The asymptotic performance of the family has been analyzed by means of complex dynamics, obtaining the fixed and critical points of the rational functions associated with quadratic polynomials. The parameter planes have been useful to determine the values of α providing the most stable elements of the family. Moreover, the dynamical planes represent the basins of attraction of the attracting points, showing an increase in their width for values of the parameter near zero.
In Section 4, the numerical experiments showed that the proposed family can obtain the zeros of the test functions more efficiently than the other quadratic methods with which it has been compared, including Newton's scheme.

Author Contributions

The individual contributions of the authors are as follows: conceptualization, J.R.T.; validation, A.C. and J.R.T.; formal analysis, F.I.C.; writing, original draft preparation, N.G. and F.I.C.; numerical experiments, N.G. and A.C.

Funding

This research was partially supported by the Spanish Ministerio de Ciencia, Innovación y Universidades PGC2018-095896-B-C22 and Generalitat Valenciana PROMETEO/2016/089.

Acknowledgments

The authors would like to thank the anonymous reviewers for their comments and suggestions, which have improved the final version of this manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Petković, M.S.; Neta, B.; Petković, L.D.; Džunić, J. Multipoint Methods for Solving Nonlinear Equations; Academic Press: New York, NY, USA, 2013.
  2. Amat, S.; Busquier, S. Advances in Iterative Methods for Nonlinear Equations; Springer: Cham, Switzerland, 2016.
  3. Ostrowski, A.M. Solution of Equations and Systems of Equations; Prentice-Hall: Upper Saddle River, NJ, USA, 1964.
  4. Kung, H.T.; Traub, J.F. Optimal order of one-point and multipoint iteration. J. Assoc. Comput. Mach. 1974, 21, 643–651.
  5. Sharma, J.R.; Arora, H. On efficient weighted-Newton methods for solving systems of nonlinear equations. Appl. Math. Comput. 2013, 222, 497–506.
  6. Chun, C.; Neta, B.; Kozdon, J.; Scott, M. Choosing weight functions in iterative methods for simple roots. Appl. Math. Comput. 2014, 227, 788–800.
  7. Lee, S.D.; Kim, Y.I.; Neta, B. An optimal family of eighth-order simple-root finders with weight functions dependent on function-to-function ratios and their dynamics underlying extraneous fixed points. J. Comput. Appl. Math. 2017, 317, 31–54.
  8. Amat, S.; Busquier, S.; Plaza, S. Chaotic dynamics of a third-order Newton-type method. J. Math. Anal. Appl. 2010, 366, 24–32.
  9. Chicharro, F.I.; Cordero, A.; Torregrosa, J.R. Drawing dynamical and parameters planes of iterative families and methods. Sci. World J. 2013, 2013, 780153.
  10. Cordero, A.; Guasp, L.; Torregrosa, J.R. Choosing the most stable members of Kou's family of iterative methods. J. Comput. Appl. Math. 2018, 330, 759–769.
  11. Plaza, S.; Gutiérrez, J.M. Dinámica del Método de Newton; Servicio de Publicaciones, Universidad de La Rioja: Logroño, Spain, 2013.
  12. Devaney, R.L. An Introduction to Chaotic Dynamical Systems; Addison-Wesley: Reading, MA, USA, 1989.
  13. Varona, J.L. Graphic and numerical comparison between iterative methods. Math. Intell. 2002, 24, 37–46.
  14. Kou, J.; Li, Y. A family of new Newton-like method. Appl. Math. Comput. 2007, 192, 162–167.
  15. Noor, M.A.; Noor, K.I.; Khan, W.A.; Ahmad, F. On iterative methods for nonlinear equations. Appl. Math. Comput. 2006, 183, 128–133.
  16. Cordero, A.; Torregrosa, J.R. Variants of Newton's method using fifth order quadrature formulas. Appl. Math. Comput. 2007, 190, 686–698.
Figure 1. Parameter planes of R_-(z) associated with the free critical points.
Figure 2. Parameter planes of R_+(z) associated with the free critical points.
Figure 3. Parameter planes of R_0(z) associated with the free critical points.
Figure 4. Dynamical planes of R_-(z) for different values of α.
Figure 5. Dynamical planes of R_+(z) for different values of α.
Figure 6. Dynamical planes of R_0(z) for different values of α.
Table 1. Numerical results for the test functions f_1, f_2, and f_3. In each block the four columns correspond, in order, to Newton, KL, Noor1, and N2_{0.3}; a dash (–) marks a method that did not converge within 200 iterations.

f_1(x), x_0 = 1
  iter               6              4              5              5
  ACOC               3.0000         2.9900         3.0010         2.0000
  |x_{k+1} - x_k|    3.366×10^-28   5.22×10^-19    4.19×10^-23    1.83×10^-13
  |f(x_{k+1})|       3.366×10^-28   5.22×10^-19    4.19×10^-23    1.83×10^-13

f_1(x), x_0 = 2
  iter               –              5              6              5
  ACOC               –              3.0000         3.0010         1.9990
  |x_{k+1} - x_k|    –              1.86×10^-32    9.16×10^-20    8.27×10^-13
  |f(x_{k+1})|       –              1.86×10^-32    9.16×10^-20    8.27×10^-13

f_1(x), x_0 = 5
  iter               –              6              9              7
  ACOC               –              2.9260         2.9910         2.0000
  |x_{k+1} - x_k|    –              3.44×10^-13    4.26×10^-14    1.84×10^-24
  |f(x_{k+1})|       –              3.44×10^-13    4.26×10^-14    1.84×10^-24

f_1(x), x_0 = 10
  iter               –              9              15             8
  ACOC               –              3.0010         2.9870         2.0000
  |x_{k+1} - x_k|    –              2.44×10^-25    3.69×10^-13    2.95×10^-17
  |f(x_{k+1})|       –              2.44×10^-25    3.69×10^-13    2.95×10^-17

f_2(x), x_0 = -1.5
  iter               6              6              5              5
  ACOC               2.0000         2.0000         2.0000         2.0000
  |x_{k+1} - x_k|    2.364×10^-21   1.49×10^-23    1.22×10^-14    1.506×10^-14
  |f(x_{k+1})|       1.747×10^-20   1.11×10^-22    9.00×10^-14    1.113×10^-13

f_2(x), x_0 = 0
  iter               –              –              –              17
  ACOC               –              –              –              2.0000
  |x_{k+1} - x_k|    –              –              –              2.465×10^-16
  |f(x_{k+1})|       –              –              –              1.822×10^-15

f_2(x), x_0 = 1
  iter               –              –              –              33
  ACOC               –              –              –              2.0000
  |x_{k+1} - x_k|    –              –              –              1.247×10^-14
  |f(x_{k+1})|       –              –              –              9.215×10^-14

f_2(x), x_0 = 3
  iter               20             11             –              3
  ACOC               2.0000         2.0000         –              2.0000
  |x_{k+1} - x_k|    1.29×10^-19    4.22×10^-22    –              3.285×10^-15
  |f(x_{k+1})|       9.51×10^-19    3.13×10^-21    –              2.408×10^-14

f_3(x), x_0 = -2
  iter               7              5              8              5
  ACOC               2.0000         2.0010         2.0000         2.0000
  |x_{k+1} - x_k|    5.45×10^-21    2.456×10^-13   9.818×10^-17   4.483×10^-15
  |f(x_{k+1})|       1.267×10^-19   5.729×10^-12   2.282×10^-15   1.042×10^-13

f_3(x), x_0 = -1
  iter               –              5              –              6
  ACOC               –              2.0000         –              2.0000
  |x_{k+1} - x_k|    –              1.744×10^-14   –              8.759×10^-22
  |f(x_{k+1})|       –              4.045×10^-13   –              2.036×10^-20

f_3(x), x_0 = 1
  iter               –              6              –              6
  ACOC               –              2.0000         –              2.0000
  |x_{k+1} - x_k|    –              1.744×10^-14   –              9.729×10^-19
  |f(x_{k+1})|       –              4.045×10^-13   –              2.261×10^-17
Table 2. Numerical results for the test functions f_4, f_5, and f_6. In each block the four columns correspond, in order, to Newton, KL, Noor1, and N2_{0.3}; a dash (–) marks a method that did not converge within 200 iterations.

f_4(x), x_0 = 1
  iter               7              6              6              5
  ACOC               2.0000         2.0000         2.0000         2.0000
  |x_{k+1} - x_k|    3.937×10^-22   2.411×10^-23   1.123×10^-16   3.979×10^-19
  |f(x_{k+1})|       3.937×10^-22   2.411×10^-23   1.123×10^-16   3.979×10^-19

f_4(x), x_0 = 5
  iter               –              8              11             7
  ACOC               –              2.0000         2.0000         2.0000
  |x_{k+1} - x_k|    –              7.172×10^-16   2.003×10^-14   2.026×10^-20
  |f(x_{k+1})|       –              7.172×10^-16   2.003×10^-14   2.026×10^-20

f_4(x), x_0 = 15
  iter               –              12             27             10
  ACOC               –              1.9990         2.0000         2.0000
  |x_{k+1} - x_k|    –              2.527×10^-13   5.662×10^-21   8.711×10^-17
  |f(x_{k+1})|       –              2.527×10^-13   5.662×10^-21   8.711×10^-17

f_5(x), x_0 = 0
  iter               –              –              –              21
  ACOC               –              –              –              2.0000
  |x_{k+1} - x_k|    –              –              –              1.911×10^-16
  |f(x_{k+1})|       –              –              –              5.771×10^-16

f_5(x), x_0 = 0.4
  iter               –              –              –              24
  ACOC               –              –              –              2.0000
  |x_{k+1} - x_k|    –              –              –              4.377×10^-16
  |f(x_{k+1})|       –              –              –              1.31×10^-15

f_5(x), x_0 = 5
  iter               –              5              23             7
  ACOC               –              2.0000         2.0000         2.0000
  |x_{k+1} - x_k|    –              2.286×10^-22   1.845×10^-23   2.213×10^-22
  |f(x_{k+1})|       –              6.906×10^-22   5.572×10^-23   6.648×10^-22

f_6(x), x_0 = -1
  iter               13             8              8              8
  ACOC               2.0000         2.0000         2.0000         2.0000
  |x_{k+1} - x_k|    1.291×10^-23   5.073×10^-21   1.931×10^-17   3.866×10^-22
  |f(x_{k+1})|       3.085×10^-22   1.212×10^-19   4.614×10^-16   9.236×10^-21

f_6(x), x_0 = 1
  iter               –              9              –              6
  ACOC               –              2.0000         –              2.0000
  |x_{k+1} - x_k|    –              5.073×10^-21   –              3.581×10^-13
  |f(x_{k+1})|       –              1.212×10^-19   –              8.557×10^-12
