Article

Higher Order Methods for Nonlinear Equations and Their Basins of Attraction

by Kalyanasundaram Madhu and Jayakumar Jayaraman *
Department of Mathematics, Pondicherry Engineering College, Pondicherry 605014, India
* Author to whom correspondence should be addressed.
Mathematics 2016, 4(2), 22; https://doi.org/10.3390/math4020022
Submission received: 30 December 2015 / Revised: 18 March 2016 / Accepted: 22 March 2016 / Published: 1 April 2016

Abstract:

In this paper, we present a family of fourth order iterative methods that use weight functions. The new family requires three function evaluations per iteration to attain fourth order accuracy. By the Kung–Traub conjecture, this family of methods is optimal and has an efficiency index of 1.587. Furthermore, we have extended one of the methods to sixth and twelfth order methods whose efficiency indices are 1.565 and 1.644, respectively. Some numerical examples are tested to demonstrate the performance of the proposed methods, which verifies the theoretical results. Further, we discuss the extraneous fixed points and basins of attraction for a few existing methods, such as Newton's method, and for the proposed family of fourth order methods. An application problem arising from Planck's radiation law has been solved using our methods.

1. Introduction

One of the best root-finding methods for solving a nonlinear scalar equation f(x) = 0 is Newton's method. In recent years, numerous higher order iterative methods have been developed and analyzed for solving nonlinear equations that improve upon classical methods, such as Newton's method (NM) and Halley's iteration method, which are respectively given below:

x_{n+1} = x_n − f(x_n)/f′(x_n)   (1)

and:

x_{n+1} = x_n − 2 f(x_n) f′(x_n) / (2 f′(x_n)² − f(x_n) f″(x_n)).   (2)
The convergence order of Newton's method is two, and it is optimal with two function evaluations. Halley's iteration method has third order convergence with three function evaluations. Frequently, f″ is difficult to calculate and computationally costly; therefore, f″ in Equation (2) is approximated using a finite difference, and still the convergence order and the total number of function evaluations are maintained [1]. Such a third order method, similar to Equation (2), obtained after approximating f″ in Halley's iteration method, is given below:

y_n = x_n − β f(x_n)/f′(x_n),   x_{n+1} = x_n − 2β f(x_n) / ((2β − 1) f′(x_n) + f′(y_n)),   β ≠ 0.   (3)
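The classical iterations above are straightforward to state in code. The following Python sketch (illustrative only; the paper's own experiments use MATLAB and Mathematica, and the function names here are our own) implements Newton's method, Equation (1), and the one-parameter third order family, Equation (3):

```python
def newton(f, df, x0, tol=1e-12, max_iter=50):
    """Newton's method, Equation (1): x_{n+1} = x_n - f(x_n)/f'(x_n)."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            break
    return x

def third_order_family(f, df, x0, beta=2/3, tol=1e-12, max_iter=50):
    """One-parameter third order family, Equation (3); beta = 2/3 recovers Method (4)."""
    x = x0
    for _ in range(max_iter):
        y = x - beta * f(x) / df(x)
        x_new = x - 2 * beta * f(x) / ((2 * beta - 1) * df(x) + df(y))
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Example: the test function x^3 + 4x^2 - 10, with simple root 1.36523001...
f = lambda x: x**3 + 4 * x**2 - 10
df = lambda x: 3 * x**2 + 8 * x
```

Both iterations converge to the same simple root; the third order family merely does so in fewer iterations for the same starting point.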
In the past decade, a few authors have proposed third order methods with three function evaluations that are free from f″; see, for example, [2,3] and the references therein. The efficiency index (EI) of an iterative method is measured using the formula p^{1/d}, where p is the local order of convergence and d is the number of function evaluations per full iteration cycle. Kung and Traub [4] conjectured that the order of convergence of any multi-point method without memory using d function evaluations cannot exceed the bound 2^{d−1}, the "optimal order". Thus, the optimal order for three evaluations per iteration is four. Jarratt's method [5] is an example of an optimal fourth order method. Recently, some optimal and non-optimal multi-point methods have been developed in [6,7,8,9,10,11,12,13,14,15] and the references therein. A non-optimal method [16] based on a quadrature formula has recently been rediscovered; it can also be obtained by setting β = 2/3 in Equation (3). In fact, each iterative fixed-point method produces its own basins of attraction and fractal behavior, which can be used in the evaluation of algorithms [17]. Polynomiography is defined as the art and science of visualization in the approximation of zeros of complex polynomials, where the created polynomiography images reflect the mathematical convergence properties of the iteration functions.
This paper considers a new family of optimal fourth order methods, which is an improvement of the method given in [16]. We study the extraneous fixed points and basins of attraction for two particular members of the new family and for a few comparable existing methods. The rest of the paper is organized as follows. Section 2 presents the development of the methods, their convergence analysis and the extension of the new fourth order methods to sixth and twelfth order. Section 3 includes some numerical examples and results for the new family of methods along with some comparable methods, including Newton's method. In Section 4, we obtain all possible extraneous fixed points for these methods as a special study. In Section 5, we study the basins of attraction for the proposed fourth order methods, Newton's method and some existing methods. Section 6 discusses an application to Planck's radiation law. Finally, Section 7 gives the conclusions of our work.

2. Development of the Methods and Convergence Analysis

Noor et al. [16] consider the following third order method, obtained for the value β = 2/3 in Equation (3):

y_n = x_n − (2/3) f(x_n)/f′(x_n),   x_{n+1} = x_n − 4 f(x_n) / (f′(x_n) + 3 f′(y_n)).   (4)
Method (4) is of order three with three evaluations per full iteration, having EI = 1.442. To improve the order of the above method with the same number of function evaluations, leading to an optimal method, we propose the following method without memory, which includes weight functions:

y_n = x_n − (2/3) f(x_n)/f′(x_n),   x_{n+1} = x_n − [4 f(x_n) / (f′(x_n) + 3 f′(y_n))] × H(τ) × G(η),   (5)

where H(τ) and G(η) are two weight functions with τ = f′(y_n)/f′(x_n) and η = f′(x_n)/f′(y_n).
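The family (5) can be coded directly once a weight pair (H, G) is chosen. The Python sketch below is only illustrative (the paper's computations were done in MATLAB and Mathematica); the example weights H(t) = 1 + (5/16)(t − 1)² and G(e) = 1 + (1/4)(e − 1)² are one admissible choice satisfying the conditions derived in Theorem 1 below:

```python
def fourth_order_family(f, df, H, G, x0, tol=1e-12, max_iter=50):
    """Family (5): the third order step of Method (4) multiplied by the
    weights H(tau) and G(eta), with tau = f'(y_n)/f'(x_n), eta = f'(x_n)/f'(y_n)."""
    x = x0
    for _ in range(max_iter):
        fx, dfx = f(x), df(x)
        y = x - 2 * fx / (3 * dfx)
        dfy = df(y)
        tau, eta = dfy / dfx, dfx / dfy
        x_new = x - 4 * fx / (dfx + 3 * dfy) * H(tau) * G(eta)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# One admissible weight pair (the choice that yields the method PM1 below)
H = lambda t: 1 + 5 / 16 * (t - 1) ** 2
G = lambda e: 1 + 1 / 4 * (e - 1) ** 2
```

Each iteration uses exactly the three evaluations f(x_n), f′(x_n) and f′(y_n), which is what makes the fourth order family optimal in the Kung–Traub sense.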

2.1. Convergence Analysis

The proofs for Theorems 1 and 2 are worked out with the help of Mathematica.
Theorem 1. 
Let f : D ⊂ ℝ → ℝ be a sufficiently smooth function having continuous derivatives up to fourth order. If f(x) has a simple root x* in the open interval D and x₀ is chosen in a sufficiently small neighborhood of x*, then the family of Method (5) is of local fourth order convergence when:

H(1) = G(1) = 1,   H′(1) = G′(1) = 0,   H″(1) = 5/8,   G″(1) = 1/2,   |H‴(1)| < ∞,   |G‴(1)| < ∞,   (6)
and it satisfies the error equation:
e_{n+1} = (1/81) [−81 c₂c₃ + 9 c₄ + c₂³ (147 + 32 H‴(1) − 32 G‴(1))] e_n⁴ + O(e_n⁵),

where c_j = f^{(j)}(x*)/(j! f′(x*)), j = 2, 3, 4, ⋯ and e_n = x_n − x*.
Proof. 
Taylor expansion of f(x_n) and f′(x_n) about x* gives:

f(x_n) = f′(x*) [e_n + c₂ e_n² + c₃ e_n³ + c₄ e_n⁴ + ⋯]   (7)

and:

f′(x_n) = f′(x*) [1 + 2 c₂ e_n + 3 c₃ e_n² + 4 c₄ e_n³ + ⋯],   (8)

so that:

y_n = x_n − (2/3) f(x_n)/f′(x_n) = x* + e_n/3 + (2/3) c₂ e_n² − (4/3)(c₂² − c₃) e_n³ + (2/3)(4 c₂³ − 7 c₂c₃ + 3 c₄) e_n⁴ + ⋯.   (9)

Again, using Taylor expansion of f′(y_n) about x* gives:

f′(y_n) = f′(x*) [1 + (2/3) c₂ e_n + (1/3)(4 c₂² + c₃) e_n² + (4/27)(−18 c₂³ + 27 c₂c₃ + c₄) e_n³ + ⋯].   (10)

Using Equations (8) and (10), we have:

τ = 1 − (4/3) c₂ e_n + (4 c₂² − (8/3) c₃) e_n² − (8/27)(36 c₂³ − 45 c₂c₃ + 13 c₄) e_n³ + ⋯   (11)

and:

η = 1 + (4/3) c₂ e_n + (4/9)(−5 c₂² + 6 c₃) e_n² + (8/27)(8 c₂³ − 21 c₂c₃ + 13 c₄) e_n³ + ⋯.   (12)

Using Equations (7), (8) and (10), we then have:

4 f(x_n) / (f′(x_n) + 3 f′(y_n)) = e_n − c₂² e_n³ + (3 c₂³ − 3 c₂c₃ − (1/9) c₄) e_n⁴ + ⋯.   (13)

Expanding the weight functions H(τ) and G(η) about 1 using Taylor series, we get:

H(τ) = H(1) + (τ − 1) H′(1) + (1/2)(τ − 1)² H″(1) + (1/6)(τ − 1)³ H‴(1) + O((τ − 1)⁴),
G(η) = G(1) + (η − 1) G′(1) + (1/2)(η − 1)² G″(1) + (1/6)(η − 1)³ G‴(1) + O((η − 1)⁴).   (14)

Using Equations (13) and (14) in Equation (5), such that the conditions in Equation (6) are satisfied, we obtain:

e_{n+1} = (1/81) [−81 c₂c₃ + 9 c₄ + c₂³ (147 + 32 H‴(1) − 32 G‴(1))] e_n⁴ + O(e_n⁵).   (15)
Equation (15) shows that Method (5) has fourth order convergence. ☐
Note that each choice of |H‴(1)| < ∞ and |G‴(1)| < ∞ in Equation (15) will give rise to a new optimal fourth order method. Method (5) has efficiency index EI = 1.587, better than Method (4). Two members of the family of Method (5) satisfying Condition (6), with their corresponding weight functions, are given in the following:
By choosing H‴(1) = G‴(1) = 0, we get a new proposed method, called PM1:

y_n = x_n − (2/3) f(x_n)/f′(x_n),   x_{n+1} = x_n − [4 f(x_n) / (f′(x_n) + 3 f′(y_n))] [1 + (5/16)(τ − 1)²] [1 + (1/4)(η − 1)²],   (16)

where its error equation is:

e_{n+1} = [(49/27) c₂³ − c₂c₃ + (1/9) c₄] e_n⁴ + O(e_n⁵).
By choosing H‴(1) = 0, G‴(1) = 1, we get another new proposed method, called PM2:

y_n = x_n − (2/3) f(x_n)/f′(x_n),   x_{n+1} = x_n − [4 f(x_n) / (f′(x_n) + 3 f′(y_n))] [1 + (5/16)(τ − 1)²] [1 + (1/4)(η − 1)² + (1/6)(η − 1)³],   (17)

where its error equation is:

e_{n+1} = [(115/81) c₂³ − c₂c₃ + (1/9) c₄] e_n⁴ + O(e_n⁵).
Remark 1. 
In this way, we can propose many fourth order methods similar to PM1 and PM2. Further, the methods PM1 and PM2 are equally good, since they have the same order of convergence and efficiency. Based on the analysis of their basins of attraction, we find that PM1 is marginally better than PM2, and hence, we have considered PM1 to propose a higher order method, namely PM3.

2.2. Higher Order Methods

We extend the method PM1 to a new sixth order method, called PM3:

y_n = x_n − (2/3) f(x_n)/f′(x_n),
z_n = x_n − [4 f(x_n) / (f′(x_n) + 3 f′(y_n))] [1 + (5/16)(τ − 1)²] [1 + (1/4)(η − 1)²],
x_{n+1} = z_n − (1/2) [f(z_n)/f′(x_n)] (3η − 1).   (18)
The following theorem gives the proof of convergence for Method (18).
Theorem 2. 
Let f : D ⊂ ℝ → ℝ be a sufficiently smooth function having continuous derivatives up to fourth order. If f(x) has a simple root x* in the open interval D and x₀ is chosen in a sufficiently small neighborhood of x*, then Method (18) is of local sixth order convergence, and it satisfies the error equation:

e_{n+1} = (1/81)(10 c₂² − 3 c₃)(49 c₂³ − 27 c₂c₃ + 3 c₄) e_n⁶ + O(e_n⁷).
Proof. 
Taylor expansion of f(z_n) about x* gives:

f(z_n) = f′(x*) [ ((49/27) c₂³ − c₂c₃ + (1/9) c₄) e_n⁴ − (2/81)(403 c₂⁴ − 522 c₂²c₃ + 81 c₃² + 90 c₂c₄ − 12 c₅) e_n⁵ + (2/243)(4529 c₂⁵ − 8835 c₂³c₃ + 2343 c₂²c₄ − 891 c₃c₄ + 135 c₂ (25 c₃² − 3 c₅) + 63 c₆) e_n⁶ + ⋯ ].   (19)

By using Equations (8), (12) and (19) in Equation (18), we obtain:

e_{n+1} = (1/81)(10 c₂² − 3 c₃)(49 c₂³ − 27 c₂c₃ + 3 c₄) e_n⁶ + O(e_n⁷).   (20)
Equation (20) shows that Method (18) has sixth order convergence. ☐
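The three-step iteration (18) admits a short Python sketch (illustrative only; the paper's experiments use MATLAB). Each iteration uses four evaluations: f(x_n), f′(x_n), f′(y_n) and f(z_n):

```python
def pm3(f, df, x0, tol=1e-12, max_iter=50):
    """Sixth order method PM3, Equation (18): a PM1 step followed by a
    frozen-derivative Newton-like correction with multiplier (3*eta - 1)/2."""
    x = x0
    for _ in range(max_iter):
        fx, dfx = f(x), df(x)
        y = x - 2 * fx / (3 * dfx)
        dfy = df(y)
        tau, eta = dfy / dfx, dfx / dfy
        z = x - 4 * fx / (dfx + 3 * dfy) \
              * (1 + 5 / 16 * (tau - 1) ** 2) * (1 + 1 / 4 * (eta - 1) ** 2)
        x_new = z - 0.5 * f(z) / dfx * (3 * eta - 1)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Example: the test function x^3 + 4x^2 - 10
f = lambda x: x**3 + 4 * x**2 - 10
df = lambda x: 3 * x**2 + 8 * x
```

The multiplier (3η − 1)/2 ≈ f′(x_n)/f′(z_n) to second order, which is what lifts the composite step from fourth to sixth order without evaluating f′ at z_n.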
Babajee et al. [7] improved a sixth order Jarratt method to a twelfth order method. Using their technique, we obtain a new twelfth order method, called PM4:

y_n = x_n − (2/3) f(x_n)/f′(x_n),
z_n = x_n − [4 f(x_n) / (f′(x_n) + 3 f′(y_n))] [1 + (5/16)(τ − 1)²] [1 + (1/4)(η − 1)²],
w_n = z_n − (1/2) [f(z_n)/f′(x_n)] (3η − 1),
x_{n+1} = w_n − f(w_n)/f′(w_n),   (21)

where, in order to reduce one function evaluation, f′(w_n) is approximated as follows:

f′(w_n) ≈ (1/(z_n − w_n)) [ f′(x_n)(z_n − w_n) + 2 f[w_n, x_n, x_n](z_n − x_n)(w_n − x_n) + (f[z_n, x_n, x_n] − 3 f[w_n, x_n, x_n])(w_n − x_n)² ],

with the divided differences:

f[z_n, x_n, x_n] = (f[z_n, x_n] − f′(x_n))/(z_n − x_n),   f[z_n, x_n] = (f(z_n) − f(x_n))/(z_n − x_n),
f[w_n, x_n, x_n] = (f[w_n, x_n] − f′(x_n))/(w_n − x_n),   f[w_n, x_n] = (f(w_n) − f(x_n))/(w_n − x_n).
The following theorem is given without proof, which can be worked out with the help of Mathematica.
Theorem 3. 
Let f : D ⊂ ℝ → ℝ be a sufficiently smooth function having continuous derivatives up to fourth order. If f(x) has a simple root x* in the open interval D and x₀ is chosen in a sufficiently small neighborhood of x*, then Method (21) is of local twelfth order convergence, and it satisfies the error equation:

e_{n+1} = (1/6561)(10 c₂² − 3 c₃)(49 c₂³ − 27 c₂c₃ + 3 c₄)² (10 c₂³ − 3 c₂c₃ + 3 c₄) e_n¹² + O(e_n¹³).
Remark 2. 
The efficiency indices for the methods PM3 and PM4 are EI = 1.565 and EI = 1.644, respectively.
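A Python sketch of Method (21) is given below (illustrative only; in double precision the twelfth order is not observable, but the iteration still converges in very few steps). The guards against coincident points are our own addition: once the iterates agree to machine precision, the divided differences in the f′(w_n) approximation would otherwise divide by zero:

```python
def pm4(f, df, x0, tol=1e-12, max_iter=20):
    """Twelfth order method PM4, Equation (21), with f'(w_n) replaced by the
    divided-difference approximation; five evaluations per iteration:
    f(x), f'(x), f'(y), f(z), f(w)."""
    x = x0
    for _ in range(max_iter):
        fx, dfx = f(x), df(x)
        y = x - 2 * fx / (3 * dfx)
        dfy = df(y)
        tau, eta = dfy / dfx, dfx / dfy
        z = x - 4 * fx / (dfx + 3 * dfy) \
              * (1 + 5 / 16 * (tau - 1) ** 2) * (1 + 1 / 4 * (eta - 1) ** 2)
        fz = f(z)
        w = z - 0.5 * fz / dfx * (3 * eta - 1)
        fw = f(w)
        if w == z or w == x or z == x:   # converged to machine precision
            return w
        # divided differences approximating f'(w_n)
        fzxx = ((fz - fx) / (z - x) - dfx) / (z - x)
        fwxx = ((fw - fx) / (w - x) - dfx) / (w - x)
        dfw = (dfx * (z - w) + 2 * fwxx * (z - x) * (w - x)
               + (fzxx - 3 * fwxx) * (w - x) ** 2) / (z - w)
        x_new = w - fw / dfw
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Example: the test function x^3 + 4x^2 - 10
f = lambda x: x**3 + 4 * x**2 - 10
df = lambda x: 3 * x**2 + 8 * x
```

The derivative approximation is exact for cubic polynomials, which is why it preserves the high order of the final Newton step while saving one evaluation of f′.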

2.3. Some Existing Fourth Order Methods

Consider the following fourth order optimal methods for the purpose of comparing results:
Jarratt method (JM) [5]:
y_n = x_n − (2/3) f(x_n)/f′(x_n),   x_{n+1} = x_n − [(3 f′(y_n) + f′(x_n)) / (6 f′(y_n) − 2 f′(x_n))] f(x_n)/f′(x_n).   (22)
Method of Sharifi-Babajee-Soleymani (SBS1) [12]:
y_n = x_n − (2/3) f(x_n)/f′(x_n),   x_{n+1} = x_n − (f(x_n)/4) [1/f′(x_n) + 3/f′(y_n)] [1 + (3/8)(f′(y_n)/f′(x_n) − 1)² − (69/64)(f′(y_n)/f′(x_n) − 1)³ + (f(x_n)/f′(y_n))⁴].   (23)
Method of Sharifi-Babajee-Soleymani (SBS2) [12]:
y_n = x_n − (2/3) f(x_n)/f′(x_n),   x_{n+1} = x_n − (f(x_n)/4) [1/f′(x_n) + 3/f′(y_n)] [1 + (3/8)(f′(y_n)/f′(x_n) − 1)² + (1/81)(f(x_n)/f′(y_n))³].   (24)
Method of Soleymani-Khratti-Karimi (SKK) [15]:
y_n = x_n − (2/3) f(x_n)/f′(x_n),   x_{n+1} = x_n − [2 f(x_n) / (f′(x_n) + f′(y_n))] [1 + (f(x_n)/f′(x_n))⁴] [2 − (7/4)(f′(y_n)/f′(x_n)) + (3/4)(f′(y_n)/f′(x_n))²].   (25)
Method of Singh-Jaiswal (SJ) [14]:
y_n = x_n − (2/3) f(x_n)/f′(x_n),   x_{n+1} = x_n − [17/8 − (9/4)(f′(y_n)/f′(x_n)) + (9/8)(f′(y_n)/f′(x_n))²] [7/4 − (3/4)(f′(y_n)/f′(x_n))] f(x_n)/f′(x_n).   (26)
Method of Sharma-Kumar-Sharma (SKS) [13]:
y_n = x_n − (2/3) f(x_n)/f′(x_n),   x_{n+1} = x_n − [−1/2 + (9/8)(f′(x_n)/f′(y_n)) + (3/8)(f′(y_n)/f′(x_n))] f(x_n)/f′(x_n).   (27)
Furthermore, consider the following non-optimal method of Divya Jain (DJ) found in [10]:

y_n = x_n − f(x_n)/f′(x_n),   z_n = x_n − 2 f(x_n) / (f′(x_n) + f′(y_n)),   x_{n+1} = z_n − [(z_n − x_n) / (f(z_n) − f(x_n))] f(z_n).

3. Numerical Examples

In this section, we give numerical results on some test functions to compare the efficiency of the proposed family of methods with some known methods. Numerical computations have been carried out in MATLAB, using 500 significant digits. We use the stopping criterion error = |x_N − x_{N−1}| < ε for the iterative process, where ε = 10⁻⁵⁰ and N is the number of iterations required for convergence. d₁ represents the total number of function evaluations. The computational order of convergence (COC), denoted by ρ, is given by (see [18]):

ρ = ln |(x_N − x_{N−1}) / (x_{N−1} − x_{N−2})| / ln |(x_{N−1} − x_{N−2}) / (x_{N−2} − x_{N−3})|.   (28)
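Formula (28) can be computed directly from the stored iterates. A minimal Python sketch (illustrative; the paper's tables were produced in MATLAB) applied to Newton iterates, whose COC estimate should come out close to 2:

```python
import math

def coc(xs):
    """Computational order of convergence, Equation (28), estimated from the
    last four iterates in the list xs."""
    xN, xm1, xm2, xm3 = xs[-1], xs[-2], xs[-3], xs[-4]
    return (math.log(abs((xN - xm1) / (xm1 - xm2)))
            / math.log(abs((xm1 - xm2) / (xm2 - xm3))))

# Example: Newton iterates for x^3 + 4x^2 - 10 starting from x0 = 2
f = lambda x: x**3 + 4 * x**2 - 10
df = lambda x: 3 * x**2 + 8 * x
xs = [2.0]
for _ in range(5):
    xs.append(xs[-1] - f(xs[-1]) / df(xs[-1]))
```

In double precision the estimate is only reliable while the iterates are above roundoff level, which is why the example stops after five iterations.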
Functions taken for our study are mostly used in the literature [7,11], and their simple zeros are given below:
f₁(x) = sin(2 cos x) − 1 − x² + e^{sin(x³)},   x* = −0.7848959876612125352…
f₂(x) = x e^{x²} − sin²x + 3 cos x + 5,   x* = −1.2076478271309189270…
f₃(x) = x³ + 4x² − 10,   x* = 1.3652300134140968457…
f₄(x) = sin(x) + cos(x) + x,   x* = −0.4566247045676308244…
f₅(x) = x − 2 sin x,   x* = 1.8954942670339809471…
f₆(x) = √(x² + 2x + 5) − 2 sin x − x² + 3,   x* = 2.3319676558839640103…
f₇(x) = √x − cos x,   x* = 0.6417143708728826583…
f₈(x) = x² + sin(x/5) − 1/4,   x* = 0.4099920179891371316…
f₉(x) = e^{−x} sin x + ln(1 + x²) − 2,   x* = 2.4477482864524245021…
f₁₀(x) = x^{3/2} + sin x − 30,   x* = 9.7165019933652005655…
From Table 1 and Table 2, we observe that PM1 and PM2 converge in fewer iterations and with lower error when compared to Methods (1) and (4). Among the comparable fourth order methods, PM1 and PM2 converge in fewer iterations for certain functions; for example, PM2 performs better than Method (25) for the functions f₂, f₇, f₈, f₉ and f₁₀. In terms of the number of iterations for convergence, PM1 and PM2 are equivalent to JM. Table 3 and Table 4 display the total number of function evaluations (d₁) and the computational order of convergence (COC) ρ for the methods taken for our study.

Table 5 displays the results for the "fzero" command in MATLAB, where N₁ is the number of iterations needed to find an interval containing the root and f(x_N) is the error after N iterations. For the fzero command, zeros are considered to be points where the function actually crosses, not just touches, the x-axis. It is observed that the present methods (PM1 and PM2) converge with fewer total function evaluations than the fzero solver.
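MATLAB's fzero is not available here, but a plain bisection solver (a hypothetical stand-in, not the algorithm fzero actually uses) already illustrates why a high-order method needs far fewer function evaluations for comparable accuracy:

```python
def bisection_evals(f, a, b, tol=1e-10):
    """Bracketing bisection on [a, b]; returns (root, function_evaluations).
    Assumes f(a) and f(b) have opposite signs."""
    fa, evals = f(a), 1
    fb, evals = f(b), evals + 1
    while b - a > tol:
        m = 0.5 * (a + b)
        fm = f(m)
        evals += 1
        if fa * fm <= 0:
            b, fb = m, fm
        else:
            a, fa = m, fm
    return 0.5 * (a + b), evals

# Example: the test function f3 on the bracket [1, 2]
f = lambda x: x**3 + 4 * x**2 - 10
root, n_evals = bisection_evals(f, 1.0, 2.0)
```

Bisection needs roughly 36 evaluations to reach a 10⁻¹⁰ interval width here, whereas PM1 (three evaluations per iteration, about four iterations) needs on the order of a dozen.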

4. A Study on Extraneous Fixed Points

Definition 4. 
A point z 0 is a fixed point of R if R ( z 0 ) = z 0 .
Definition 5. 
A point z₀ is called attracting if |R′(z₀)| < 1, repelling if |R′(z₀)| > 1 and neutral if |R′(z₀)| = 1. If the derivative is also zero, then the point is super-attracting.
It is interesting to note that all of the above discussed methods can be written as:
x_{n+1} = x_n − G_f(x_n) u(x_n),   u = f/f′.   (29)

As per the definition, x* is a fixed point of this method, since u(x*) = 0. However, the points ξ ≠ x* at which G_f(ξ) = 0 are also fixed points of the method, since the second term on the right side of Equation (29) then vanishes. These points ξ are called extraneous fixed points.
Moreover, for a general iteration function given by:
R_p(z) = z − G_f(z) u(z),   z ∈ ℂ,   (30)

the nature of the extraneous fixed points can be discussed. The convergence of the iteration process is determined by the nature of these extraneous fixed points. For more details on this aspect, the paper by Vrscay et al. [19] will be useful. In fact, they showed that if the extraneous fixed points are attractive, then the method will give erroneous results; if the extraneous fixed points are repelling or neutral, then the method may not converge to a root near the initial guess.

In this section, we discuss the extraneous fixed points of each method for the polynomial z³ − 1. Since G_f does not vanish for the methods in Theorem 6, they have no extraneous fixed points.
Theorem 6. 
There are no extraneous fixed points for Newton’s Method (1) and Method (4).
Theorem 7. 
There are six extraneous fixed points for Jarratt Method (22).
Proof. 
The extraneous fixed points of the Jarratt method are the zeros of:

G_f = (3 f′(y(z)) + f′(z)) / (6 f′(y(z)) − 2 f′(z)).

Upon substituting y(z) = z − (2/3) f(z)/f′(z), we get the equation (1 + 7z³ + 19z⁶) / (2 + 14z³ + 11z⁶) = 0. The extraneous fixed points are found to be 0.411175 ± 0.453532i, −0.598358 ± 0.129321i and 0.187183 ± 0.582854i. All of these fixed points are repelling (since |R′(z₀)| > 1). ☐
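The six points above can be checked numerically: 19z⁶ + 7z³ + 1 = 0 is a quadratic in w = z³, so the fixed points are the cube roots of w = (−7 ± √(−27))/38. A small Python verification (illustrative):

```python
import cmath

# 19 z^6 + 7 z^3 + 1 = 0 is a quadratic in w = z^3
disc = cmath.sqrt(7 ** 2 - 4 * 19 * 1)          # sqrt(-27) = 3*sqrt(3)*i
fixed_points = []
for w in ((-7 + disc) / 38, (-7 - disc) / 38):
    r, phi = abs(w) ** (1 / 3), cmath.phase(w)
    for k in range(3):                           # the three cube roots of w
        fixed_points.append(r * cmath.exp(1j * (phi + 2 * cmath.pi * k) / 3))
```

All six points have modulus |w|^{1/3} = 19^{−1/6} ≈ 0.6122, so they lie well inside the unit circle, away from the three roots of z³ − 1.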
Theorem 8. 
There are fifty two extraneous fixed points for Method (23).
Proof. 
We found for Method (23),
G_f = (1/4) [1 + 3 f′(z)/f′(y(z))] [1 + (3/8)((f′(y(z)) − f′(z))/f′(z))² + (69/64)((f′(z) − f′(y(z)))/f′(z))³ + (f(z)/f′(y(z)))⁴].
The extraneous fixed points are found to be
0.385139 ± 0.301563 i , 0.453731 ± 0.182759 i , 0.0685914 ± 0.484322 i , 0.461227 , 0.690937 , 1.38146 ± 1.63298 i , 0.888193 ± 0.382434 i , 0.626419 ± 0.447214 i , 0.616918 ± 0.228042 i , 0.546519 ± 0.138633 i , 0.504031 ± 0.0757213 i , 0.483094 ± 0.0349619 i , 0.345468 ± 0.598527 i , 0.0785635 ± 0.774853 i , 0.0935008 ± 0.707441 i , 0.140045 ± 0.571188 i , 0.177037 ± 0.495057 i , 0.200558 ± 0.445297 i , 0.205838 ± 1.20668 i , 0.229093 ± 0.403647 i , 0.253758 ± 0.407757 i , 0.299419 ± 0.396038 i , 0.365488 ± 0.402274 i , 0.461153 ± 0.411868 i , 0.659879 ± 0.470766 i , 0.704721 ± 0.329989 i , 1.56532 ± 0.938337 i .
All of these fixed points are repelling (since |R′(z₀)| > 1). ☐
Theorem 9. 
There are thirty nine extraneous fixed points for Method (24).
Proof. 
For Method (24),
G_f = (1/4) [1 + 3 f′(z)/f′(y(z))] [1 + (3/8)((f′(y(z)) − f′(z))/f′(z))² + (1/81)(f(z)/f′(y(z)))³].
The extraneous fixed points are at
0.385139 ± 0.301563 i , 0.453731 ± 0.182759 i , 0.0685914 ± 0.484322 i 3.98917 ± 6.90945 i , 7.97834 , 0.41942 ± 0.726456 i , 0.838839 , 0.277253 ± 0.480215 i , 0.554505 , 0.46341 ± 0.53288 i , 0.693192 ± 0.134885 i , 0.229782 ± 0.667764 i , 0.367096 ± 0.467142 i , 0.588105 ± 0.0843435 i , 0.221009 ± 0.551486 i , 0.280074 ± 0.381388 i , 0.470329 ± 0.0518574 i , 0.190255 ± 0.433246 i , 0.615945 ± 0.214444 i , 0.493687 ± 0.426202 i , 0.122258 ± 0.640646 i .
All of these fixed points are repelling (since |R′(z₀)| > 1). ☐
Theorem 10. 
There are twenty four extraneous fixed points for Method (25).
Proof. 
We found for Method (25),
G_f = [2 / (1 + f′(y(z))/f′(z))] [1 + (f(z)/f′(z))⁴] [2 − (7/4)(f′(y(z))/f′(z)) + (3/4)(f′(y(z))/f′(z))²].
The extraneous fixed points are found to be
0.272187 ± 0.394392 i , 0.20546 ± 0.432916 i , 0.477646 ± 0.0385246 i , 0.676726 ± 0.202542 i , 0.513769 ± 0.484791 i , 0.162957 ± 0.687333 i , 2.12619 ± 2.22671 i , 0.51922 ± 0.277607 i , 0.217805 ± 0.487789 i , 0.210804 ± 0.604566 i , 0.524089 ± 0.172222 i , 2.12832 ± 2.00454 i .
All of these fixed points are repelling (since |R′(z₀)| > 1). ☐
Theorem 11. 
There are eighteen extraneous fixed points for Method (26).
Proof. 
For Method (26),
G_f = [17/8 − (9/4)(f′(y(z))/f′(z)) + (9/8)(f′(y(z))/f′(z))²] [7/4 − (3/4)(f′(y(z))/f′(z))].
The extraneous fixed points are at
0.333371 ± 0.577415 i , 0.666742 , 0.229257 ± 0.397085 i , 0.458515 , 0.710065 ± 0.231721 i , 0.555709 ± 0.499074 i , 0.154356 ± 0.730795 i , 0.275117 ± 0.402579 i , 0.486202 ± 0.0369693 i , 0.211085 ± 0.439548 i .
All of these fixed points are repelling (since |R′(z₀)| > 1). ☐
Theorem 12. 
There are twelve extraneous fixed points for Method (27).
Proof. 
For Method (27),
G_f = −1/2 + (9/8)(f′(z)/f′(y(z))) + (3/8)(f′(y(z))/f′(z)).
The extraneous fixed points are at
0.289483 ± 0.382811 i , 0.186782 ± 0.442105 i , 0.476265 ± 0.0592945 i , 0.605298 ± 0.2466 i , 0.516211 ± 0.400903 i , 0.0890867 ± 0.647503 i .
All of these fixed points are repelling (since |R′(z₀)| > 1). ☐
Theorem 13. 
There are twenty four extraneous fixed points for Method (16).
Proof. 
For Method (16),
G_f = [4 / (1 + 3 f′(y(z))/f′(z))] [1 + (5/16)((f′(y(z)) − f′(z))/f′(z))²] [1 + (1/4)((f′(z) − f′(y(z)))/f′(y(z)))²].
The extraneous fixed points are at
0.622907 ± 0.52714 i , 0.767969 ± 0.275883 i , 0.145063 ± 0.803023 i , 0.310217 ± 0.445061 i , 0.540543 ± 0.0461255 i , 0.230326 ± 0.491187 i , 0.280277 ± 0.377418 i , 0.186715 ± 0.431436 i , 0.466992 ± 0.0540183 i , 0.602147 ± 0.210285 i , 0.483186 ± 0.416332 i , 0.118961 ± 0.626617 i .
All of these fixed points are repelling (since |R′(z₀)| > 1). ☐
Theorem 14. 
There are thirty extraneous fixed points for Method (17).
Proof. 
For Method (17),
G_f = [4 / (1 + 3 f′(y(z))/f′(z))] [1 + (5/16)((f′(y(z)) − f′(z))/f′(z))²] [1 + (1/4)((f′(z) − f′(y(z)))/f′(y(z)))² + (1/6)((f′(z) − f′(y(z)))/f′(y(z)))³].
The extraneous fixed points are at
0.280277 ± 0.377418 i , 0.186715 ± 0.431436 i , 0.466992 ± 0.0540183 i , 0.602147 ± 0.210285 i , 0.483186 ± 0.416332 i , 0.118961 ± 0.626617 i , 0.701957 ± 0.574647 i , 0.848638 ± 0.320589 i , 0.146681 ± 0.895237 i , 0.296076 ± 0.447202 i , 0.535326 ± 0.0328086 i , 0.23925 ± 0.48001 i , 0.414766 ± 0.407081 i , 0.145159 ± 0.562739 i , 0.559926 ± 0.155658 i .
All of these fixed points are repelling (since |R′(z₀)| > 1). ☐

5. Basins of Attraction

Section 2 and Section 3 discussed methods whose roots lie in the real domain, that is, f : D ⊂ ℝ → ℝ. The study can be extended to functions f : D ⊂ ℂ → ℂ defined in the complex plane having complex zeros. By the fundamental theorem of algebra, a polynomial of degree n with real or complex coefficients has n roots, which may or may not be distinct. In such a case, a complex initial guess is needed for convergence to the complex zeros. We give below some definitions required for our study of functions in the complex domain with complex zeros, which can be found in [20,21,22]. Let R : Ĉ → Ĉ be a rational map on the Riemann sphere.
Definition 15. 
For z C , we define its orbit as the set o r b ( z ) = { z , R ( z ) , R 2 ( z ) , . . . , R n ( z ) , . . . } .
Definition 16. 
A periodic point z₀ of period m is a point such that Rᵐ(z₀) = z₀, where m is the smallest such integer.
Definition 17. 
The Julia set of a nonlinear map R(z), denoted by J(R), is the closure of the set of its repelling periodic points. The complement of J(R) is the Fatou set F(R).
Definition 18. 
If O is an attracting periodic orbit of period m, we define the basin of attraction to be the open set A ⊂ ℂ consisting of all points z ∈ ℂ for which the successive iterates Rᵐ(z), R²ᵐ(z), … converge towards some point of O.
Lemma 19. 
Every attracting periodic orbit is contained in the Fatou set of R. In fact, the entire basin of attraction A of an attracting periodic orbit is contained in the Fatou set. However, every repelling periodic orbit is contained in the Julia set.
In the following subsections, we produce some beautiful graphs obtained for the proposed methods and for some existing methods using MATLAB [23,24]. In fact, an iteration function is a mapping of the plane into itself. The common boundaries of these basins of attraction constitute the Julia set of the iteration function, and its complement is the Fatou set. This section is necessary in this paper to show how the proposed methods could be considered in polynomiography. In the following section, we describe the basins of attraction for Newton’s method and some higher order Newton type methods for finding complex roots of polynomials p 1 ( z ) = z 3 1 and p 2 ( z ) = z 4 1 .

5.1. Polynomiographs of p 1 ( z ) = z 3 1

We consider the square region [−2, 2] × [−2, 2], in which we take 160,000 equally-spaced grid points with mesh size h = 0.01. The grid is composed of 400 columns and 400 rows, which can be related to the pixels of a computer display representing a region of the complex plane [25]. Each grid point is used as an initial point z₀, and the number of iterations until convergence is counted for each point. Now, we draw the polynomiographs of p₁(z) = z³ − 1 with roots α₁ = 1, α₂ = −0.5000 − 0.8660i and α₃ = −0.5000 + 0.8660i. We assign "red color" if a grid point converges to the root α₁, "green color" if it converges to the root α₂ and "blue color" if it converges to the root α₃ in at most 200 iterations with |z_n − α_j| < 10⁻⁴, j = 1, 2, 3. In this way, the basin of attraction of each root is assigned a characteristic color. If the iterations do not converge as per the above condition for some specific initial point, we assign "black color".
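The counting procedure just described can be sketched as follows. The Python code below (illustrative; the paper's figures were produced in MATLAB) uses a much coarser 41 × 41 grid to keep the run short, and Newton's iteration for z³ − 1 as the example map:

```python
def basin_counts(iterate, n=41, max_iter=200, tol=1e-4):
    """Count grid points of [-2,2]x[-2,2] converging to each cube root of
    unity; points that fail the convergence test (or hit a zero derivative,
    e.g. z = 0 under Newton's map) are counted as diverging."""
    roots = (1 + 0j, complex(-0.5, 0.8660254037844386),
             complex(-0.5, -0.8660254037844386))
    counts, diverging = [0, 0, 0], 0
    step = 4.0 / (n - 1)
    for i in range(n):
        for j in range(n):
            z = complex(-2 + i * step, -2 + j * step)
            label = None
            for _ in range(max_iter):
                try:
                    z = iterate(z)
                except ZeroDivisionError:
                    break
                hits = [k for k, r in enumerate(roots) if abs(z - r) < tol]
                if hits:
                    label = hits[0]
                    break
            if label is None:
                diverging += 1
            else:
                counts[label] += 1
    return counts, diverging

# Newton's iteration for p1(z) = z^3 - 1
newton_step = lambda z: z - (z ** 3 - 1) / (3 * z ** 2)
```

Replacing `newton_step` with any of the iteration maps of Section 2 reproduces the corresponding row of Table 6; coloring each grid point by its label instead of counting gives the polynomiograph itself.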
Figure 1a–j shows the polynomiographs of the methods for the cubic polynomial p₁(z). There are diverging points for the method of Noor et al., SBS1, SBS2 and SKK. All starting points converge for the methods NM, JM, SJ, SKS, PM1 and PM2. In Table 6, we classify the number of converging and diverging grid points for each iterative method. Note that a point z₀ belongs to the Julia set if and only if the dynamics in a neighborhood of z₀ displays sensitive dependence on the initial conditions, so that nearby initial conditions lead to wildly different behavior after a number of iterations; this is why some of the methods produce many divergent points. The common boundaries of these basins of attraction constitute the Julia set of the iteration function.

5.2. Polynomiographs of p 2 ( z ) = z 4 1

Next, we draw the polynomiographs of p₂(z) = z⁴ − 1 with roots α₁ = 1, α₂ = −1, α₃ = i and α₄ = −i. We assign yellow color if a grid point converges to the root α₁, red color if it converges to the root α₂, green color if it converges to the root α₃ and blue color if it converges to the root α₄ in at most 200 iterations with |z_n − α_j| < 10⁻⁴, j = 1, 2, 3, 4. Thus, the basin of attraction of each root is assigned a corresponding color. If the iterations do not converge as per the above condition for some specific initial point, we assign black color.
Figure 2a–j shows the polynomiographs of the methods for the quartic polynomial p₂(z). There are diverging points for the method of Noor et al., SBS1, SBS2, SKK, SJ, SKS, PM1 and PM2. All starting points are convergent for NM and JM. In Table 7, we classify the number of converging and diverging grid points for each iterative method. Furthermore, we observe that the SKS, PM1 and PM2 methods diverge at fewer grid points than the method of Noor et al., SBS1, SBS2, SKK and SJ. Table 8 shows that the proposed methods are better than or equal to the other compared methods with respect to the number of iterations, computational order of convergence and error. All of the methods applied to the cubic and quartic polynomials p₁(z) and p₂(z) converge when a real root is taken as the starting point.

From this comparison based on the basins of attraction for the cubic and quartic polynomials, we can generally say that NM, JM, PM1 and PM2 are more reliable for solving nonlinear equations. Furthermore, by observing the polynomiographs of p₁(z) and p₂(z), we find certain patterns symmetric about the x-axis and y-axis, where the starting point z₀ leads to convergence to a real root or a complex pair of roots of the respective polynomial.

6. An Application Problem

To test our methods, we consider the following Planck’s radiation law problem found in [10,26]:
φ(λ) = 8πchλ⁻⁵ / (e^{ch/(λkT)} − 1),   (31)
which calculates the energy density within an isothermal blackbody. Here, λ is the wavelength of the radiation; T is the absolute temperature of the blackbody; k is Boltzmann's constant; h is Planck's constant; and c is the speed of light. Suppose we would like to determine the wavelength λ that corresponds to the maximum energy density φ(λ). From Equation (31), we get:
φ′(λ) = (8πchλ⁻⁶ / (e^{ch/(λkT)} − 1)) ((ch/(λkT)) e^{ch/(λkT)} / (e^{ch/(λkT)} − 1) − 5) = A · B.

It can be checked that a maximum of φ occurs when B = 0, that is, when:

(ch/(λkT)) e^{ch/(λkT)} / (e^{ch/(λkT)} − 1) = 5.
Here, putting x = c h / λ k T , the above equation becomes:
1 − x/5 = e⁻ˣ.   (32)
Define:
f(x) = e⁻ˣ − 1 + x/5.   (33)
The aim is to find a root of the equation f(x) = 0. Obviously, the root x = 0 is not of interest here. As argued in [26], the left-hand side of Equation (32) is zero for x = 5, while e⁻⁵ ≈ 6.74 × 10⁻³. Hence, another root of the equation f(x) = 0 is expected to occur near x = 5. The approximate root of Equation (33) is x* ≈ 4.96511423174427630369. Consequently, the wavelength of radiation λ at which the energy density is maximum is approximated as:

λ ≈ ch / (4.96511423174427630369 kT).
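The root of Equation (33) is easy to reproduce with any of the methods above; the sketch below applies the PM1 iteration (16) in Python (illustrative only; the paper's Tables 9–11 were produced in MATLAB):

```python
import math

f = lambda x: math.exp(-x) - 1 + x / 5        # Equation (33)
df = lambda x: -math.exp(-x) + 1 / 5

x = 5.0                                       # start near the expected root
for _ in range(5):
    fx, dfx = f(x), df(x)
    y = x - 2 * fx / (3 * dfx)
    dfy = df(y)
    tau, eta = dfy / dfx, dfx / dfy
    # PM1 step, Equation (16)
    x = x - 4 * fx / (dfx + 3 * dfy) \
          * (1 + 5 / 16 * (tau - 1) ** 2) * (1 + 1 / 4 * (eta - 1) ** 2)

# The maximizing wavelength then follows from lam = c*h/(k*T*x).
```

Starting from x₀ = 5, a single PM1 step already lands within about 10⁻⁷ of the root, and the iteration settles at x* ≈ 4.965114231744276 well within the five steps shown.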
We apply the methods NM, DJ, PM1, PM2, PM3 and PM4 to solve Equation (33) and compare the results in Table 9 and Table 10. From these tables, we note that the root x* is reached faster by the method PM4 than by the other methods. This is due to the fact that PM4 has the highest efficiency index, EI = 1.644.
Results for the f z e r o command in MATLAB for this application problem are given in Table 11.

7. Conclusions

In this work, we have proposed a family of fourth order methods using weight functions. The fourth order methods are optimal as per the Kung–Traub conjecture. Further, we have extended one of the methods to sixth and twelfth order methods with four and five function evaluations, respectively. The extraneous fixed points for the fourth order methods and for some existing methods are discussed in detail. By the analysis using basins of attraction, our methods PM1 and PM2 are found to be superior to the methods of Noor et al. [16], SBS1, SBS2, SKK and SJ; in particular, the methods SBS1, SBS2 and SKK behave very badly on both the cubic and quartic polynomials. Moreover, PM1 and PM2 are better than the other compared methods, except Newton's method and Jarratt's method, which perform equally well. We have also verified our methods (PM1, PM2, PM3, PM4), NM and DJ on Planck's radiation law problem, and the results show that PM4 is more efficient than the other compared methods.

Acknowledgments

The authors would like to thank the editors and referees for the valuable comments and for the suggestions to improve the readability of the paper.

Author Contributions

The contributions of both of the authors have been similar. Both of them have worked together to develop the present manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Ezquerro, J.A.; Hernandez, M.A. A uniparametric Halley-type iteration with free second derivative. Int. J. Pure Appl. Math. 2003, 6, 99–110. [Google Scholar]
  2. Babajee, D.K.R.; Dauhoo, M.Z. An Analysis of the Properties of the Variants of Newton’s Method with Third Order Convergence. Appl. Math. Comput. 2006, 183, 659–684. [Google Scholar] [CrossRef]
  3. Chun, C.; Kim, Y.-I. Several new third-order iterative methods for solving nonlinear equations. Acta Appl. Math. 2010, 109, 1053–1063. [Google Scholar] [CrossRef]
  4. Kung, H.T.; Traub, J.F. Optimal order of one-point and multipoint iteration. J. Assoc. Comput. Mach. 1974, 21, 643–651. [Google Scholar] [CrossRef]
  5. Jarratt, P. Some efficient fourth order multipoint methods for solving equations. BIT Numer. Math. 1969, 9, 119–124. [Google Scholar] [CrossRef]
  6. Ardelean, G. A new third-order Newton-type iterative method for solving nonlinear equations. Appl. Math. Comput. 2013, 219, 9856–9864. [Google Scholar] [CrossRef]
  7. Babajee, D.K.R.; Madhu, K.; Jayaraman, J. A family of higher order multi-point iterative methods based on power mean for solving nonlinear equations. Afr. Mat. 2015. [Google Scholar] [CrossRef]
  8. Cordero, A.; Hueso, J.L.; Martinez, E.; Torregrosa, J.R. Efficient three-step iterative methods with sixth order convergence for nonlinear equations. Numer. Algor. 2010, 53, 485–495. [Google Scholar] [CrossRef]
  9. Cordero, A.; Hueso, J.L.; Martinez, E.; Torregrosa, J.R. A family of iterative methods with sixth and seventh order convergence for nonlinear equations. Math. Comput. Model. 2010, 52, 1490–1496. [Google Scholar] [CrossRef]
  10. Jain, D. Families of Newton-like methods with fourth-order convergence. Int. J. Comput. Math. 2013, 90, 1072–1082. [Google Scholar] [CrossRef]
  11. Madhu, K.; Jayaraman, J. Class of modified Newton’s method for solving nonlinear equations. Tamsui Oxf. J. Inf. Math. Sci. 2014, 30, 91–100. [Google Scholar]
  12. Sharifi, M.; Babajee, D.K.R.; Soleymani, F. Finding the solution of nonlinear equations by a class of optimal methods. Comput. Math. Appl. 2012, 63, 764–774. [Google Scholar] [CrossRef]
  13. Sharma, J.R.; Kumar, G.R.; Sharma, R. An efficient fourth order weighted-newton method for systems of nonlinear equations. Numer. Algor. 2013, 62, 307–323. [Google Scholar] [CrossRef]
  14. Singh, A.; Jaiswal, J.P. Several new third-order and fourth-order iterative methods for solving nonlinear equations. Int. J. Eng. Math. 2014, 2014, 828409. [Google Scholar] [CrossRef]
  15. Soleymani, F.; Khattri, S.K.; Karimi, V.S. Two new classes of optimal Jarratt-type fourth-order methods. Appl. Math. Lett. 2011, 25, 847–853. [Google Scholar] [CrossRef]
  16. Noor, M.A.; Waseem, M. Some iterative methods for solving a system of nonlinear equations. Comput. Math. Appl. 2009, 57, 101–106. [Google Scholar] [CrossRef]
  17. Kalantari, B. Polynomial Root-Finding and Polynomiography; World Scientific Publishing Co. Pte. Ltd.: Singapore, 2009. [Google Scholar]
  18. Cordero, A.; Torregrosa, J.R. Variants of Newton’s method using fifth-order quadrature formulas. Appl. Math. Comput. 2007, 190, 686–698. [Google Scholar] [CrossRef]
  19. Vrscay, E.R.; Gilbert, W.J. Extraneous fixed points, basin boundaries and chaotic dynamics for Schröder and König rational iteration functions. Numer. Math. 1988, 52, 1–16. [Google Scholar] [CrossRef]
  20. Amat, S.; Busquier, S.; Plaza, S. Review of some iterative root-finding methods from a dynamical point of view. Sci. Ser. A Math. Sci. 2004, 10, 3–35. [Google Scholar]
  21. Blanchard, P. Complex Analytic Dynamics on the Riemann sphere. Bull. Am. Math. Soc. 1984, 11, 85–141. [Google Scholar] [CrossRef]
  22. Scott, M.; Neta, B.; Chun, C. Basin attractors for various methods. Appl. Math. Comput. 2011, 218, 2584–2599. [Google Scholar] [CrossRef]
  23. Chicharro, F.I.; Cordero, A.; Torregrosa, J.R. Drawing Dynamical and Parameters Planes of Iterative Families and Methods. Sci. World J. 2013. [Google Scholar] [CrossRef] [PubMed]
  24. Introduction to Computational Engineering. Available online: http://www.caam.rice.edu (accessed on 10 May 2015).
  25. Soleymani, F.; Babajee, D.K.R.; Sharifi, M. Modified Jarratt method without memory with twelfth-order convergence. Ann. Univ. Craiova Math. Comput. Sci. Ser. 2012, 39, 21–34. [Google Scholar]
  26. Bradie, B. A Friendly Introduction to Numerical Analysis; Pearson Education Inc.: New Delhi, India, 2006. [Google Scholar]
Figure 1. Polynomiographs of p 1 ( z ) . (a) Newton’s method (NM) (1); (b) method of Noor et al. (4); (c) Jarratt method (JM) (22); (d) method of Sharifi et al. (SBS1) (23); (e) method of Sharifi et al. (SBS2) (24); (f) method of Soleymani et al. (SKK) (25); (g) method of Singh et al. (SJ) (26); (h) method of Sharma et al. (SKS) (27); (i) proposed method (PM1) (16); (j) proposed method (PM2) (17).
Figure 2. Polynomiographs of p 2 ( z ) . (a) Newton’s method (NM) (1); (b) method of Noor et al. (4); (c) Jarratt method (JM) (22); (d) method of Sharifi et al. (SBS1) (23); (e) method of Sharifi et al. (SBS2) (24); (f) method of Soleymani et al. (SKK) (25); (g) method of Singh et al. (SJ) (26); (h) method of Sharma et al. (SKS) (27); (i) proposed method (PM1) (16); (j) proposed method (PM2) (17).
Table 1. Comparison of the results for some known methods and proposed methods (each cell shows N; Error).

| f | x0 | NM (1) | Noor et al. (4) | JM (22) | SBS1 (23) | SBS2 (24) | PM1 (16) |
|---|----|--------|-----------------|---------|-----------|-----------|----------|
| f1 | 0.9 | 7; 7.7e-074 | 5; 2.5e-093 | 4; 1.6e-067 | 4; 2.1e-074 | 4; 4.0e-063 | 4; 4.4e-065 |
| f1 | 0.7 | 7; 1.0e-074 | 5; 3.3e-094 | 4; 1.4e-070 | 4; 1.2e-083 | 4; 4.2e-063 | 4; 7.2e-066 |
| f2 | 1.7 | 9; 4.3e-054 | 6; 7.2e-051 | 5; 1.4e-085 | 5; 2.4e-072 | 6; 4.3e-179 | 5; 4.2e-058 |
| f2 | 1.0 | 8; 1.1e-064 | 6; 6.0e-123 | 5; 2.0e-199 | 5; 5.0e-081 | 5; 1.4e-097 | 5; 3.8e-116 |
| f3 | 1.6 | 7; 7.7e-063 | 5; 2.0e-079 | 4; 2.4e-065 | 4; 5.3e-059 | 4; 5.6e-057 | 4; 1.2e-059 |
| f3 | 1.0 | 8; 2.8e-088 | 5; 5.5e-056 | 5; 1.4e-187 | 5; 1.3e-161 | 5; 2.7e-135 | 5; 2.5e-149 |
| f4 | 0.2 | 7; 6.8e-096 | 5; 6.5e-121 | 4; 2.1e-077 | 4; 1.0e-058 | 4; 2.9e-070 | 4; 5.4e-076 |
| f4 | 0.6 | 6; 1.5e-061 | 4; 6.9e-052 | 4; 4.3e-100 | 4; 8.8e-079 | 4; 2.4e-090 | 4; 1.2e-099 |
| f5 | 1.6 | 8; 6.8e-087 | 5; 5.4e-055 | 5; 5.7e-169 | 5; 4.9e-148 | 5; 5.5e-124 | 5; 7.9e-137 |
| f5 | 2.0 | 7; 1.8e-080 | 5; 1.0e-101 | 4; 7.4e-079 | 4; 1.3e-096 | 4; 2.9e-072 | 4; 1.2e-074 |
| f6 | 2.1 | 6; 1.5e-055 | 5; 1.2e-142 | 4; 6.5e-096 | 4; 2.7e-062 | 4; 5.3e-080 | 4; 6.3e-097 |
| f6 | 2.5 | 6; 9.6e-055 | 5; 1.3e-138 | 4; 4.5e-094 | 4; 7.1e-073 | 4; 6.1e-087 | 4; 7.8e-096 |
| f7 | 0.2 | 7; 2.0e-074 | 5; 2.2e-090 | 4; 8.7e-063 | 5; 9.5e-142 | 4; 2.8e-054 | 4; 2.5e-060 |
| f7 | 0.9 | 7; 3.0e-094 | 5; 7.3e-121 | 4; 3.5e-079 | 4; 5.9e-055 | 4; 1.8e-073 | 4; 6.9e-081 |
| f8 | 0.2 | 8; 8.2e-076 | 6; 2.8e-143 | 5; 7.4e-151 | 5; 1.0e-118 | 5; 1.0e-100 | 5; 3.8e-114 |
| f8 | 1.5 | 9; 2.7e-074 | 6; 1.5e-070 | 5; 3.1e-074 | 5; 1.3e-104 | 5; 1.8e-061 | 5; 5.6e-065 |
| f9 | 1.9 | 7; 2.9e-088 | 5; 4.5e-110 | 4; 1.0e-084 | 5; 4.7e-119 | 4; 9.7e-057 | 4; 3.1e-108 |
| f9 | 2.7 | 6; 5.9e-058 | 5; 6.1e-149 | 4; 5.8e-102 | 4; 7.8e-066 | 4; 2.9e-078 | 4; 1.3e-100 |
| f10 | 9.9 | 6; 9.5e-059 | 5; 1.9e-149 | 4; 3.3e-100 | 4; 7.7e-072 | 4; 3.1e-086 | 4; 1.7e-101 |
| f10 | 9.2 | 6; 3.1e-052 | 5; 3.3e-131 | 4; 1.9e-078 | 5; 5.3e-128 | 4; 3.0e-058 | 4; 9.4e-079 |
Table 2. Comparison of the results for some known methods and proposed methods (each cell shows N; Error).

| f | x0 | SKK (25) | SJ (26) | SKS (27) | PM2 (17) |
|---|----|----------|---------|----------|----------|
| f1 | 0.9 | 4; 4.6e-062 | 4; 3.0e-062 | 4; 9.7e-064 | 4; 2.7e-066 |
| f1 | 0.7 | 4; 9.9e-060 | 4; 2.9e-062 | 4; 5.4e-064 | 4; 1.5e-067 |
| f2 | 1.7 | 6; 6.4e-147 | 6; 3.9e-153 | 6; 2.9e-186 | 5; 6.9e-075 |
| f2 | 1.0 | 5; 1.3e-075 | 5; 1.9e-149 | 5; 4.1e-106 | 5; 9.8e-127 |
| f3 | 1.6 | 4; 9.6e-061 | 4; 1.6e-054 | 4; 4.6e-057 | 4; 2.0e-062 |
| f3 | 1.0 | 5; 9.5e-102 | 5; 4.2e-142 | 5; 1.7e-140 | 5; 1.7e-157 |
| f4 | 0.2 | 4; 3.5e-056 | 4; 2.1e-074 | 4; 3.0e-075 | 4; 1.3e-076 |
| f4 | 0.6 | 4; 1.7e-078 | 4; 5.0e-099 | 4; 2.2e-099 | 4; 7.4e-100 |
| f5 | 1.6 | 5; 3.6e-101 | 5; 1.0e-134 | 5; 2.4e-129 | 5; 1.5e-143 |
| f5 | 2.0 | 4; 2.0e-070 | 4; 1.6e-070 | 4; 1.2e-072 | 4; 1.2e-076 |
| f6 | 2.1 | 4; 3.8e-063 | 4; 1.3e-098 | 4; 1.2e-097 | 4; 1.9e-096 |
| f6 | 2.5 | 4; 1.5e-075 | 4; 2.0e-099 | 4; 3.2e-097 | 4; 6.7e-095 |
| f7 | 0.2 | 5; 4.6e-159 | 4; 1.2e-057 | 4; 5.2e-059 | 4; 1.2e-061 |
| f7 | 0.9 | 4; 4.8e-057 | 4; 1.4e-084 | 4; 2.8e-082 | 4; 5.9e-080 |
| f8 | 0.2 | 5; 1.6e-082 | 5; 3.7e-132 | 5; 3.5e-107 | 5; 1.2e-120 |
| f8 | 1.5 | 6; 2.1e-150 | 5; 3.0e-054 | 5; 1.2e-059 | 5; 8.6e-071 |
| f9 | 1.9 | 5; 3.2e-126 | 4; 3.1e-080 | 4; 3.5e-086 | 4; 5.9e-089 |
| f9 | 2.7 | 4; 2.5e-064 | 4; 5.7e-099 | 4; 7.6e-100 | 4; 3.2e-101 |
| f10 | 9.9 | 4; 1.3e-073 | 4; 7.2e-104 | 4; 1.9e-102 | 4; 8.0e-101 |
| f10 | 9.2 | 5; 8.0e-127 | 4; 2.9e-079 | 4; 5.7e-079 | 4; 1.3e-078 |
Table 3. Total number of function evaluations (d1) and COC (ρ); each cell shows d1; ρ.

| f | x0 | NM (1) | Noor et al. (4) | JM (22) | SBS1 (23) | SBS2 (24) | PM1 (16) |
|---|----|--------|-----------------|---------|-----------|-----------|----------|
| f1 | 0.9 | 14; 1.99 | 15; 2.99 | 12; 3.99 | 12; 4.00 | 12; 3.99 | 12; 3.99 |
| f1 | 0.7 | 14; 1.99 | 15; 2.98 | 12; 3.99 | 12; 3.99 | 12; 3.99 | 12; 3.99 |
| f2 | 1.7 | 18; 2.00 | 18; 2.99 | 15; 4.00 | 15; 4.00 | 18; 3.99 | 15; 3.99 |
| f2 | 1.0 | 16; 2.00 | 18; 3.00 | 15; 3.99 | 15; 4.00 | 15; 4.00 | 15; 3.99 |
| f3 | 1.6 | 14; 2.00 | 15; 2.98 | 12; 3.99 | 12; 3.99 | 12; 3.99 | 12; 3.99 |
| f3 | 1.0 | 16; 1.99 | 15; 2.99 | 15; 4.00 | 15; 3.98 | 15; 3.99 | 15; 3.99 |
| f4 | 0.2 | 14; 2.00 | 15; 3.00 | 12; 3.99 | 12; 4.00 | 12; 3.99 | 12; 3.99 |
| f4 | 0.6 | 12; 2.01 | 12; 3.01 | 12; 3.98 | 12; 4.00 | 12; 3.99 | 12; 3.99 |
| f5 | 1.6 | 16; 1.98 | 15; 3.00 | 15; 4.00 | 15; 3.99 | 15; 3.99 | 15; 3.99 |
| f5 | 2.0 | 14; 1.99 | 15; 2.99 | 12; 3.99 | 12; 3.98 | 12; 4.00 | 12; 3.99 |
| f6 | 2.1 | 12; 1.99 | 15; 3.00 | 12; 3.99 | 12; 3.99 | 12; 3.99 | 12; 4.00 |
| f6 | 2.5 | 12; 1.98 | 15; 3.00 | 12; 4.00 | 12; 4.00 | 12; 3.99 | 12; 3.99 |
| f7 | 0.2 | 14; 2.00 | 15; 2.99 | 12; 4.00 | 15; 4.00 | 12; 3.98 | 12; 3.99 |
| f7 | 0.9 | 14; 2.00 | 15; 2.98 | 12; 3.98 | 12; 3.99 | 12; 4.00 | 12; 4.01 |
| f8 | 0.2 | 16; 2.00 | 18; 3.00 | 15; 3.99 | 15; 3.99 | 15; 4.01 | 15; 3.98 |
| f8 | 1.5 | 18; 1.99 | 18; 2.99 | 15; 3.99 | 15; 4.00 | 15; 3.98 | 15; 3.99 |
| f9 | 1.9 | 14; 1.98 | 15; 2.99 | 12; 3.98 | 15; 4.00 | 12; 3.99 | 12; 3.99 |
| f9 | 2.7 | 12; 2.00 | 15; 2.98 | 12; 4.00 | 12; 3.99 | 12; 3.99 | 12; 3.99 |
| f10 | 9.9 | 12; 1.99 | 15; 3.00 | 12; 4.00 | 12; 3.98 | 12; 3.99 | 12; 3.99 |
| f10 | 9.2 | 12; 2.00 | 15; 2.99 | 12; 3.99 | 15; 3.99 | 12; 4.00 | 12; 3.99 |
Table 4. Total number of function evaluations (d1) and COC (ρ); each cell shows d1; ρ.

| f | x0 | SKK (25) | SJ (26) | SKS (27) | PM2 (17) |
|---|----|----------|---------|----------|----------|
| f1 | 0.9 | 12; 3.99 | 12; 3.99 | 12; 3.99 | 12; 3.99 |
| f1 | 0.7 | 12; 3.99 | 12; 3.99 | 12; 3.99 | 12; 3.99 |
| f2 | 1.7 | 18; 3.99 | 18; 3.99 | 18; 3.99 | 15; 3.99 |
| f2 | 1.0 | 15; 3.99 | 15; 4.00 | 15; 3.99 | 15; 3.99 |
| f3 | 1.6 | 12; 3.99 | 12; 4.00 | 12; 3.99 | 12; 4.00 |
| f3 | 1.0 | 15; 4.00 | 15; 3.99 | 15; 4.00 | 15; 3.99 |
| f4 | 0.2 | 12; 4.00 | 12; 4.00 | 12; 3.98 | 12; 3.99 |
| f4 | 0.6 | 12; 3.98 | 12; 3.99 | 12; 3.99 | 12; 4.00 |
| f5 | 1.6 | 15; 3.99 | 15; 3.99 | 15; 3.99 | 15; 3.99 |
| f5 | 2.0 | 12; 3.99 | 12; 3.99 | 12; 3.99 | 12; 3.99 |
| f6 | 2.1 | 12; 3.98 | 12; 4.00 | 12; 3.99 | 12; 3.99 |
| f6 | 2.5 | 12; 4.00 | 12; 4.00 | 12; 3.99 | 12; 3.99 |
| f7 | 0.2 | 15; 3.99 | 12; 3.99 | 12; 3.98 | 12; 3.99 |
| f7 | 0.9 | 12; 3.98 | 12; 3.98 | 12; 3.99 | 12; 4.00 |
| f8 | 0.2 | 15; 4.01 | 15; 3.98 | 15; 4.00 | 15; 3.98 |
| f8 | 1.5 | 18; 4.00 | 15; 3.99 | 15; 3.99 | 15; 3.99 |
| f9 | 1.9 | 15; 3.99 | 12; 3.99 | 12; 3.98 | 12; 3.99 |
| f9 | 2.7 | 12; 3.99 | 12; 4.00 | 12; 3.99 | 12; 3.99 |
| f10 | 9.9 | 12; 3.99 | 12; 3.99 | 12; 3.99 | 12; 3.99 |
| f10 | 9.2 | 15; 3.99 | 12; 3.99 | 12; 3.98 | 12; 3.99 |
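The computational order of convergence (COC, the ρ column of Tables 3 and 4) can be estimated from three successive differences of the iterates. A minimal sketch (using plain Newton's method on x^3 − 1 = 0 purely as an illustration, so the estimate should come out near 2):

```python
import math

def coc(xs):
    # Estimate rho from the last three successive differences of the iterates.
    d = [abs(xs[i + 1] - xs[i]) for i in range(len(xs) - 1)]
    return math.log(d[-1] / d[-2]) / math.log(d[-2] / d[-3])

# Newton iterates for f(x) = x^3 - 1 starting from x0 = 1.5
xs = [1.5]
for _ in range(5):
    x = xs[-1]
    xs.append(x - (x**3 - 1) / (3 * x**2))

rho = coc(xs)
print(rho)  # close to 2, the order of Newton's method
```

The number of iterates is deliberately kept small: once the differences fall below machine precision, the logarithm ratios become meaningless.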
Table 5. Results for the fzero command in MATLAB.

| f | x0 | N1 | N | d1 | f(xn) | x* |
|---|----|----|---|----|-------|----|
| f1 | 0.7 | 6 | 6 | 19 | -1.1102e-016 | -0.7849 |
| f2 | 1.7 | 8 | 7 | 23 | -2.6645e-015 | -1.2076 |
| f3 | 1.0 | 9 | 6 | 25 | 0 | 1.3652 |
| f4 | 0.2 | 13 | 6 | 33 | -5.5511e-017 | -0.4566 |
| f5 | 1.6 | 7 | 6 | 21 | 0 | 1.8955 |
| f6 | 2.1 | 5 | 5 | 16 | 8.8818e-016 | 2.3320 |
| f7 | 0.9 | 8 | 4 | 20 | 0 | 0.6417 |
| f8 | 0.2 | 12 | 7 | 32 | -2.7756e-017 | 0.4100 |
| f9 | 1.9 | 8 | 4 | 21 | 0 | 2.4477 |
| f10 | 9.2 | 3 | 3 | 10 | 0 | 9.7165 |
Table 6. Comparison of convergent and divergent grids for polynomiographs of p1(z).

| Method | Convergent: Real Root (α1) | Convergent: Complex Roots (α2 and α3) | Divergent Grid Points |
|--------|----------------------------|----------------------------------------|-----------------------|
| NM (1) | 56,452 | 103,548 | 0 |
| Noor et al. (4) | 52,372 | 98,670 | 8958 |
| JM (22) | 56,474 | 103,526 | 0 |
| SBS1 (23) | 23,174 | 44,160 | 92,666 |
| SBS2 (24) | 48,018 | 91,308 | 20,674 |
| SKK (25) | 34,722 | 82,590 | 42,688 |
| SJ (26) | 52,587 | 107,143 | 0 |
| SKS (27) | 55,178 | 104,822 | 0 |
| PM1 (16) | 55,892 | 104,108 | 0 |
| PM2 (17) | 54,622 | 105,378 | 0 |
Table 7. Comparison of convergent and divergent grids for polynomiographs of p2(z).

| Method | Convergent: Real Roots (α1 and α2) | Convergent: Complex Roots (α3 and α4) | Divergent Grid Points |
|--------|-------------------------------------|----------------------------------------|-----------------------|
| NM (1) | 80,010 | 79,990 | 0 |
| Noor et al. (4) | 68,133 | 68,120 | 23,747 |
| JM (22) | 80,001 | 79,999 | 0 |
| SBS1 (23) | 53,792 | 53,792 | 52,416 |
| SBS2 (24) | 60,098 | 60,466 | 39,436 |
| SKK (25) | 54,584 | 54,584 | 50,832 |
| SJ (26) | 79,427 | 79,427 | 1146 |
| SKS (27) | 79,961 | 79,959 | 80 |
| PM1 (16) | 79,962 | 79,979 | 59 |
| PM2 (17) | 79,968 | 79,954 | 78 |
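The grid counts in Tables 6 and 7 are obtained by iterating each method from every point of a uniform grid over a region of the complex plane and recording which root, if any, the iterates approach. A minimal sketch for Newton's method on p1(z) = z^3 − 1 (the 101 × 101 grid and the region [−2, 2] × [−2, 2] here are assumptions for illustration, coarser than the grid behind the tables):

```python
import cmath

# The three roots of p1(z) = z^3 - 1; k = 0 gives the real root z = 1
roots = [cmath.exp(2j * cmath.pi * k / 3) for k in range(3)]

def newton_root_index(z, tol=1e-6, itmax=50):
    # Index of the root Newton's iteration converges to from z, or -1 (divergent).
    for _ in range(itmax):
        dp = 3 * z * z
        if dp == 0:
            return -1
        z = z - (z**3 - 1) / dp
        dists = [abs(z - r) for r in roots]
        k = dists.index(min(dists))
        if dists[k] < tol:
            return k
    return -1

n = 101  # n x n grid over [-2, 2] x [-2, 2]
counts = {-1: 0, 0: 0, 1: 0, 2: 0}
for i in range(n):
    for j in range(n):
        z = complex(-2 + 4 * i / (n - 1), -2 + 4 * j / (n - 1))
        counts[newton_root_index(z)] += 1

print(counts[0], counts[1] + counts[2], counts[-1])  # real, complex, divergent
```

Consistent with the NM row of Table 6, roughly a third of the starting points land in the basin of the real root, and almost none diverge.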
Table 8. Results for polynomials p1(z) = z^3 − 1 and p2(z) = z^4 − 1 with real roots.

| Method | z0 (p1) | M | ρ | Error | z0 (p2) | M | ρ | Error |
|--------|---------|---|---|-------|---------|---|---|-------|
| NM (1) | 1.6 | 9 | 1.99 | 3.3751e-098 | 0.7 | 9 | 1.99 | 3.1014e-066 |
| Noor et al. (4) | 1.6 | 6 | 3.00 | 3.5077e-093 | 0.7 | 6 | 3.00 | 6.2225e-061 |
| JM (22) | 1.6 | 5 | 3.99 | 2.1482e-108 | 0.7 | 6 | 4.00 | 2.0563e-089 |
| SBS1 (23) | 1.6 | 5 | 4.00 | 9.3378e-099 | 0.7 | 7 | 4.00 | 3.3321e-110 |
| SBS2 (24) | 1.6 | 5 | 3.99 | 9.4664e-083 | 0.7 | 6 | 3.99 | 6.2610e-055 |
| SKK (25) | 1.6 | 5 | 3.99 | 1.6780e-081 | 0.7 | 7 | 3.99 | 1.7649e-092 |
| SJ (26) | 1.6 | 5 | 3.99 | 8.5156e-076 | 0.7 | 8 | 3.99 | 3.9861e-076 |
| SKS (27) | 1.6 | 5 | 3.99 | 9.3481e-084 | 0.7 | 6 | 3.99 | 4.7052e-093 |
| PM1 (16) | 1.6 | 5 | 3.99 | 3.1169e-092 | 0.7 | 6 | 3.99 | 2.4833e-099 |
| PM2 (17) | 1.6 | 5 | 3.99 | 2.5579e-102 | 0.7 | 6 | 3.99 | 3.4840e-120 |
Table 9. Comparison of the results (each cell shows N; d1; ρ; Error).

| x0 | NM (1) | DJ (28) | PM1 (16) |
|----|--------|---------|----------|
| 4.0 | 7; 14; 2.00; 1.4e-101 | 4; 16; 4.00; 3.3e-086 | 4; 12; 3.99; 2.2e-069 |
| 4.5 | 6; 12; 1.99; 4.5e-063 | 4; 16; 4.00; 9.1e-110 | 4; 12; 3.99; 2.1e-093 |
| 5.0 | 6; 12; 2.00; 1.4e-101 | 4; 16; 3.99; 8.5e-185 | 4; 12; 3.99; 1.3e-168 |
| 5.5 | 6; 12; 1.99; 5.4e-066 | 4; 16; 4.00; 9.9e-112 | 4; 12; 3.99; 1.6e-095 |
Table 10. Comparison of the results (each cell shows N; d1; ρ; Error).

| x0 | PM2 (17) | PM3 (18) | PM4 (21) |
|----|----------|----------|----------|
| 4.0 | 4; 12; 3.99; 1.4e-069 | 4; 16; 5.99; 3.3e-224 | 3; 15; 12.11; 7.7e-144 |
| 4.5 | 4; 12; 3.99; 1.6e-093 | 4; 16; 5.99; 3.3e-306 | 3; 15; 12.03; 6.4e-198 |
| 5.0 | 4; 12; 3.99; 1.1e-168 | 3; 12; 5.99; 3.7e-093 | 3; 15; 12.00; 0 |
| 5.5 | 4; 12; 3.99; 1.4e-095 | 3; 12; 5.95; 3.2e-052 | 3; 15; 11.96; 3.2e-203 |
Table 11. Results for Planck's radiation law problem in fzero.

| x0 | N1 | N | d1 | f(xn) | x* |
|----|----|---|----|-------|----|
| 4.0 | 8 | 6 | 23 | 1.1102e-016 | 4.9651 |
| 4.5 | 5 | 5 | 16 | -1.1102e-016 | 4.9651 |
| 5.0 | 1 | 4 | 6 | 1.1102e-016 | 4.9651 |
| 5.5 | 5 | 5 | 15 | -1.1102e-016 | 4.9651 |
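MATLAB's fzero is a bracketing solver that combines bisection with secant and inverse quadratic interpolation steps. A stdlib-only Python sketch in the same spirit (simplified to pure bisection, which is an assumption, not fzero's actual algorithm) locates the root of Equation (33) from the bracket [4, 6]:

```python
import math

def f(x):
    # Equation (33): f(x) = e^(-x) - 1 + x/5
    return math.exp(-x) - 1.0 + x / 5.0

def bisect(g, a, b, tol=1e-12):
    # Bracketing solver in the spirit of fzero (pure bisection for simplicity).
    fa, fb = g(a), g(b)
    assert fa * fb < 0, "root must be bracketed"
    while b - a > tol:
        m = 0.5 * (a + b)
        if g(m) * fa <= 0:   # sign change in [a, m]
            b = m
        else:                # sign change in [m, b]
            a, fa = m, g(m)
    return 0.5 * (a + b)

root = bisect(f, 4.0, 6.0)
print(round(root, 4))  # 4.9651, matching Table 11
```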
