Article

On a Reduced Cost Higher Order Traub-Steffensen-Like Method for Nonlinear Systems

1 Department of Mathematics, Sant Longowal Institute of Engineering and Technology, Longowal, Sangrur 148106, India
2 Department of Physics and Chemistry, Technical University of Cluj-Napoca, Cluj-Napoca 400114, Romania
* Authors to whom correspondence should be addressed.
Symmetry 2019, 11(7), 891; https://doi.org/10.3390/sym11070891
Submission received: 30 May 2019 / Revised: 27 June 2019 / Accepted: 4 July 2019 / Published: 8 July 2019
(This article belongs to the Special Issue Symmetry in Applied Mathematics)

Abstract: We propose a derivative-free iterative method with fifth order of convergence for solving systems of nonlinear equations. The scheme is composed of three steps: the first two steps are those of a third order Traub-Steffensen-type method, and the last is a derivative-free modification of Chebyshev's method. Computational efficiency is examined and a comparison between the efficiency of the presented technique and that of existing techniques is performed. It is proved that, in general, the new method is more efficient. Numerical problems, including those resulting from practical applications such as integral equations and boundary value problems, are considered to compare the performance of the proposed method with that of existing methods. Calculation of the computational order of convergence shows that the order of convergence of the new method is preserved in all the numerical examples, which is not the case for some of the existing higher order methods. Moreover, the numerical results, including the CPU time consumed in executing the programs, confirm the accurate and efficient behavior of the new technique.

1. Introduction

We are concerned with the problem of solving a system of nonlinear equations
$$F(x) = 0. \qquad (1)$$
This problem can be stated precisely as follows: find a solution vector $\alpha = (\alpha_1, \alpha_2, \ldots, \alpha_m)^T$ such that $F(\alpha) = 0$, where $F(x): D \subseteq \mathbb{R}^m \to \mathbb{R}^m$ is a given nonlinear vector function $F(x) = (f_1(x), f_2(x), \ldots, f_m(x))^T$ and $x = (x_1, x_2, \ldots, x_m)^T$. The vector $\alpha$ can be computed as a fixed point of some function $M: D \subseteq \mathbb{R}^m \to \mathbb{R}^m$ by means of the fixed point iteration
$$x^{(0)} \in D, \quad x^{(k+1)} = M(x^{(k)}), \quad k \geq 0. \qquad (2)$$
Many applied problems in science and engineering reduce to solving the system $F(x) = 0$ of nonlinear equations numerically (see, for example, [1,2,3,4,5,6]). A plethora of iterative methods have been developed in the literature for solving such equations. A classical method is the cubically convergent Chebyshev method (see [7])
$$x^{(0)} \in D, \quad x^{(k+1)} = x^{(k)} - \left(I + \tfrac{1}{2} L_F(x^{(k)})\right) F'(x^{(k)})^{-1} F(x^{(k)}), \quad k \geq 0, \qquad (3)$$
where $L_F(x^{(k)}) = F'(x^{(k)})^{-1} F''(x^{(k)}) F'(x^{(k)})^{-1} F(x^{(k)})$. This one-point iterative scheme depends explicitly on the first two derivatives of $F$. In [7], Ezquerro and Hernández present a modification of Chebyshev's method that avoids the computation of the second derivative $F''$ while maintaining third order convergence. It has the following form:
$$x^{(0)} \in D, \quad y^{(k)} = x^{(k)} - a F'(x^{(k)})^{-1} F(x^{(k)}), \quad x^{(k+1)} = x^{(k)} - \frac{1}{a^2} F'(x^{(k)})^{-1} \left((a^2 + a - 1) F(x^{(k)}) + F(y^{(k)})\right), \quad k \geq 0. \qquad (4)$$
There is interest in constructing derivative-free iterative processes obtained by approximating the first derivative of $F$ by a divided difference of first order. One such class is the class of Secant-type methods, obtained by replacing $F'$ with the divided difference operator $[x^{(k-1)}, x^{(k)}; F]$. Using this operator, a family of derivative-free methods is given in [8]. The authors call this family the Chebyshev-Secant-type method; it is defined as
$$x^{(-1)}, x^{(0)} \in D, \quad y^{(k)} = x^{(k)} - a\, [x^{(k-1)}, x^{(k)}; F]^{-1} F(x^{(k)}), \quad x^{(k+1)} = x^{(k)} - [x^{(k-1)}, x^{(k)}; F]^{-1} \left(b\, F(x^{(k)}) + c\, F(y^{(k)})\right), \quad k \geq 0, \qquad (5)$$
where $a$, $b$ and $c$ are non-negative parameters.
Another class of derivative-free methods is the class of Steffensen-type processes, which replaces $F'$ with the operator $[w(x^{(k)}), x^{(k)}; F]$, wherein $w: \mathbb{R}^m \to \mathbb{R}^m$. The work presented in [9] analyzes the Steffensen-type iterative method
$$x^{(0)} \in D, \quad y^{(k)} = x^{(k)} - a\, [w(x^{(k)}), x^{(k)}; F]^{-1} F(x^{(k)}), \quad x^{(k+1)} = x^{(k)} - [w(x^{(k)}), x^{(k)}; F]^{-1} \left(b\, F(x^{(k)}) + c\, F(y^{(k)})\right), \quad k \geq 0. \qquad (6)$$
For $a = b = c = 1$ and $w(x^{(k)}) = x^{(k)} + \beta F(x^{(k)})$, where $\beta$ is an arbitrary non-zero constant, this method possesses third order convergence. In this case $y^{(k)}$ is the Traub-Steffensen iteration [6]. For $\beta = 1$, $y^{(k)}$ is the Steffensen iteration [10]. Both of these iterations are quadratically convergent.
The two-step third order Traub-Steffensen-type method, i.e., the case of (6) with $a = b = c = 1$, can be written as
$$x^{(0)} \in D, \quad w(x^{(k)}) = x^{(k)} + \beta F(x^{(k)}), \quad y^{(k)} = M_{2,1}(x^{(k)}), \quad x^{(k+1)} = M_{3,1}(x^{(k)}, y^{(k)}) = y^{(k)} - [w(x^{(k)}), x^{(k)}; F]^{-1} F(y^{(k)}), \quad k \geq 0, \qquad (7)$$
where $M_{2,1}(x^{(k)}) = x^{(k)} - [w(x^{(k)}), x^{(k)}; F]^{-1} F(x^{(k)})$ is the quadratically convergent Traub-Steffensen scheme. Here and in the sequel, the symbol $M_{p,i}$ denotes the $i$-th iteration function of convergence order $p$. Observe that the third order scheme (7) is computationally more efficient than the quadratically convergent Traub-Steffensen scheme: the convergence order is increased from two to three at the cost of only one additional function evaluation, without requiring an extra inverse operator. We discuss computational efficiency in later sections.
Researchers continually try to develop iterative methods of increasing efficiency, since different methods converge to the solution at different speeds. Efficiency can be raised either by increasing the convergence order, by decreasing the computational cost, or both. In [11], Ren et al. derived a fourth order derivative-free method that uses three evaluations of F, three divided differences and two matrix inversions per iteration. Zheng et al. [12] constructed two families of fourth order derivative-free methods for scalar nonlinear equations that are extendable to systems of nonlinear equations. The first family requires three evaluations of F, three divided differences and two matrix inversions, whereas the second family needs three evaluations of F, three divided differences and three matrix inversions. Grau et al. presented a fourth order derivative-free method in [13] utilizing four evaluations of F, two divided differences and two matrix inversions. Sharma and Arora [14] presented a fourth order derivative-free method that uses three evaluations of F, three divided differences and one matrix inversion per step.
In search of faster techniques, researchers have also introduced sixth and seventh order derivative-free methods in [13,15,16,17,18]. The sixth order method proposed by Grau et al. in [13] requires five evaluations of F, two divided differences and two matrix inversions. Sharma and Arora [17] also developed a method of at least sixth order, which requires five function evaluations, two divided differences and one matrix inversion per iteration. The seventh order method proposed by Sharma and Arora [15] uses four evaluations of F, five divided differences and two matrix inversions per iteration. The seventh order methods presented by Wang and Zhang [16] use four evaluations of F, five divided differences and three matrix inversions. Ahmad et al. [18] proposed an eighth order derivative-free method without memory which uses six function evaluations, three divided differences and one matrix inversion.
The main goal of this study is to develop a derivative-free method of high computational efficiency, that is, a method with high convergence speed and low computational cost. Consequently, we present a Traub-Steffensen-type method with fifth order of convergence that requires four evaluations of F, two divided differences and only one matrix inversion per step. The scheme of the present contribution is simple and consists of three steps: the first two are those of the cubically convergent Traub-Steffensen-type scheme (7), whereas the third is a derivative-free modification of Chebyshev's scheme (3). We show that the proposed method is more efficient than existing methods of a similar nature.
The content of the rest of the paper is summarized as follows. Basic definitions relevant to the present work are stated in Section 2. In Section 3, the scheme of the fifth order method is introduced and its convergence behavior is studied. In Section 4, the computational efficiency of the new method is examined and compared with that of existing derivative-free methods. In Section 5, basins of attraction are presented to check the stability and convergence of the new method. Numerical tests are performed in Section 6 to verify the theoretical results proved in Section 3 and Section 4. Section 7 contains the concluding remarks.

2. Preliminary Results

2.1. Computational Order of Convergence

Let α be a solution of the equation $F(x) = 0$ and let $x^{(k-2)}, x^{(k-1)}, x^{(k)}$ and $x^{(k+1)}$ be four consecutive iterations close to α. Then, the computational order of convergence (say, $p_c$) can be calculated by the formula (see [19])
$$p_c = \frac{\log\left(\|x^{(k+1)} - x^{(k)}\| / \|x^{(k)} - x^{(k-1)}\|\right)}{\log\left(\|x^{(k)} - x^{(k-1)}\| / \|x^{(k-1)} - x^{(k-2)}\|\right)}. \qquad (8)$$
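As an illustration, the following Python snippet (a minimal sketch; the iterates are assumed to be numpy arrays supplied by whatever solver is being monitored, and the function name is ours) evaluates formula (8):

```python
import numpy as np

def computational_order(x_km2, x_km1, x_k, x_kp1):
    """Estimate p_c from four consecutive iterates via formula (8)."""
    num = np.log(np.linalg.norm(x_kp1 - x_k) / np.linalg.norm(x_k - x_km1))
    den = np.log(np.linalg.norm(x_k - x_km1) / np.linalg.norm(x_km1 - x_km2))
    return num / den
```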

2.2. Divided Difference

The divided difference operator for a multivariable function F (see [4,5,20]) is a mapping $[\cdot, \cdot; F]: D \times D \subset \mathbb{R}^m \times \mathbb{R}^m \to L(\mathbb{R}^m)$ defined by
$$[x, y; F](x - y) = F(x) - F(y), \quad \forall (x, y) \in D \times D. \qquad (9)$$
If F is differentiable, we can also define the first order divided difference as (see [4,20])
$$[x + h, x; F] = \int_0^1 F'(x + th)\, dt, \quad \forall (x, h) \in \mathbb{R}^m \times \mathbb{R}^m. \qquad (10)$$
This also implies that
$$[x, x; F] = F'(x). \qquad (11)$$
It can be seen that the divided difference operator $[x, y; F]$ is an $m \times m$ matrix and that the definitions (9) and (10) are equivalent (for details see [20]). For computational purposes the following definition (see [5]) is used:
$$[x, y; F]_{ij} = \frac{f_i(x_1, \ldots, x_j, y_{j+1}, \ldots, y_m) - f_i(x_1, \ldots, x_{j-1}, y_j, \ldots, y_m)}{x_j - y_j}, \quad 1 \leq i, j \leq m. \qquad (12)$$
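For concreteness, a direct Python transcription of formula (12) might look as follows (a sketch; the function name and the assumption that F maps numpy arrays to numpy arrays are ours):

```python
import numpy as np

def divided_difference(F, x, y):
    """First order divided difference [x, y; F], built entrywise by (12)."""
    m = len(x)
    J = np.zeros((m, m))
    for j in range(m):
        upper = np.concatenate((x[:j + 1], y[j + 1:]))  # (x_1,...,x_j, y_{j+1},...,y_m)
        lower = np.concatenate((x[:j], y[j:]))          # (x_1,...,x_{j-1}, y_j,...,y_m)
        J[:, j] = (F(upper) - F(lower)) / (x[j] - y[j])
    return J
```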

2.3. Computational Efficiency

The computational efficiency of an iterative method for solving $F(x) = 0$ is measured by the efficiency index $E = p^{1/C}$ (for details see [21,22]), where $p$ is the order of convergence and $C$ is the total cost of computation. The cost $C$ is measured in terms of the total number of function evaluations per iteration and the number of operations (that is, products and quotients) per iteration.

3. The Method and Analysis of Convergence

Let us begin with the following three-step scheme:
$$y^{(k)} = M_{2,1}(x^{(k)}), \quad z^{(k)} = y^{(k)} - [w^{(k)}, x^{(k)}; F]^{-1} F(y^{(k)}), \quad x^{(k+1)} = z^{(k)} - \left(I + \tfrac{1}{2} L_F(y^{(k)})\right) F'(y^{(k)})^{-1} F(z^{(k)}), \qquad (13)$$
where $w^{(k)} = x^{(k)} + \beta F(x^{(k)})$, $I$ is the $m \times m$ identity matrix and $L_F(y^{(k)}) = F'(y^{(k)})^{-1} F''(y^{(k)}) F'(y^{(k)})^{-1} F(y^{(k)})$.
Note that the first two steps of this scheme are those of the third order Traub-Steffensen-type method (7), whereas the third step is based on Chebyshev's method (3). The scheme requires the first and second derivatives of F at $y^{(k)}$. To make it derivative-free, we proceed as follows.
Consider the Taylor expansion of $F(z^{(k)})$ about $y^{(k)}$,
$$F(z^{(k)}) \approx F(y^{(k)}) + F'(y^{(k)})(z^{(k)} - y^{(k)}) + \tfrac{1}{2} F''(y^{(k)})(z^{(k)} - y^{(k)})^2. \qquad (14)$$
Then, it follows that
$$\tfrac{1}{2} F''(y^{(k)})(z^{(k)} - y^{(k)})^2 \approx F(z^{(k)}) - F(y^{(k)}) - F'(y^{(k)})(z^{(k)} - y^{(k)}). \qquad (15)$$
Using the fact that
$$F(z^{(k)}) - F(y^{(k)}) = [z^{(k)}, y^{(k)}; F](z^{(k)} - y^{(k)}) \qquad (16)$$
(see, for example, [4,5]), we can write (15) as
$$F''(y^{(k)})(z^{(k)} - y^{(k)}) \approx 2\left([z^{(k)}, y^{(k)}; F] - F'(y^{(k)})\right). \qquad (17)$$
Then, using the second step of (13) in the above equation, it follows that
$$F''(y^{(k)})\, [w^{(k)}, x^{(k)}; F]^{-1} F(y^{(k)}) \approx -2\left([z^{(k)}, y^{(k)}; F] - F'(y^{(k)})\right).$$
Let us assume $F'(y^{(k)}) \approx [w^{(k)}, x^{(k)}; F]$; then (17) implies
$$F''(y^{(k)})\, [w^{(k)}, x^{(k)}; F]^{-1} F(y^{(k)}) \approx -2\left([z^{(k)}, y^{(k)}; F] - [w^{(k)}, x^{(k)}; F]\right). \qquad (18)$$
In addition, we have that
$$L_F(y^{(k)}) = F'(y^{(k)})^{-1} F''(y^{(k)}) F'(y^{(k)})^{-1} F(y^{(k)}) \approx [w^{(k)}, x^{(k)}; F]^{-1} F''(y^{(k)})\, [w^{(k)}, x^{(k)}; F]^{-1} F(y^{(k)}). \qquad (19)$$
Using (18) in (19), we obtain
$$L_F(y^{(k)}) \approx -2\left([w^{(k)}, x^{(k)}; F]^{-1}[z^{(k)}, y^{(k)}; F] - I\right). \qquad (20)$$
Now, we can write the third step of (13) in the modified form
$$x^{(k+1)} = z^{(k)} - \left(2I - [w^{(k)}, x^{(k)}; F]^{-1}[z^{(k)}, y^{(k)}; F]\right)[w^{(k)}, x^{(k)}; F]^{-1} F(z^{(k)}). \qquad (21)$$
Thus, we define the following new method:
$$y^{(k)} = M_{2,1}(x^{(k)}), \quad z^{(k)} = M_{3,1}(x^{(k)}, y^{(k)}), \quad x^{(k+1)} = z^{(k)} - H(x^{(k)})\, [w^{(k)}, x^{(k)}; F]^{-1} F(z^{(k)}), \qquad (22)$$
wherein $H(x^{(k)}) = 2I - [w^{(k)}, x^{(k)}; F]^{-1}[z^{(k)}, y^{(k)}; F]$.
Since the scheme (22) is composed of Traub-Steffensen-like steps, we call it the Traub-Steffensen-like method.
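To make the structure of (22) concrete, here is a minimal Python sketch of one iteration (the helper divided_difference is the one shown in Section 2.2; reusing a single LU factorization of [w, x; F] for all four linear solves reflects the "one matrix inversion per iteration" count, but the code itself is our illustration, not the authors' implementation):

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def traub_steffensen_like_step(F, x, beta=0.01):
    """One step of method (22); only [w, x; F] is factorized."""
    w = x + beta * F(x)
    lu = lu_factor(divided_difference(F, w, x))
    y = x - lu_solve(lu, F(x))            # first step:  M_{2,1}
    z = y - lu_solve(lu, F(y))            # second step: M_{3,1}
    t = lu_solve(lu, F(z))
    # H(x) t = 2t - [w,x;F]^{-1} [z,y;F] t, so H is never formed explicitly
    return z - (2.0 * t - lu_solve(lu, divided_difference(F, z, y) @ t))
```

Iterating this step from $x^{(0)}$ until $\|x^{(k+1)} - x^{(k)}\| + \|F(x^{(k)})\|$ falls below a tolerance mirrors the experimental setup of Section 6.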
In order to explore the convergence properties of Traub-Steffensen-like method, we recall some important results from the theory of iteration functions. First, we state the following well-known result (see [3,23]):
Lemma 1.
Assume that $M: D \subset \mathbb{R}^m \to \mathbb{R}^m$ has a fixed point $\alpha \in \operatorname{int}(D)$ and that $M(x)$ is Fréchet differentiable at α. If
$$\rho(M'(\alpha)) = \sigma < 1, \qquad (23)$$
then α is a point of attraction for the iteration $x^{(k+1)} = M(x^{(k)})$, where $\rho(M'(\alpha))$ denotes the spectral radius of $M'(\alpha)$.
Next, we state a result, proven by Madhu et al. in [24], which shows that α is a point of attraction for a general iteration function of the form $M(x) = P(x) - Q(x)R(x)$.
Lemma 2.
Let $F: D \subset \mathbb{R}^m \to \mathbb{R}^m$ be sufficiently Fréchet differentiable at each point of an open convex set D containing $\alpha \in D$, a solution of the nonlinear system $F(x) = 0$. Suppose that $P, Q, R: D \subset \mathbb{R}^m \to \mathbb{R}^m$ are sufficiently Fréchet differentiable functions (depending on F) at each point of D, with the properties $P(\alpha) = \alpha$, $Q(\alpha) \neq 0$, $R(\alpha) = 0$. Then, there exists a ball
$$S = \bar{S}(\alpha, \epsilon) = \{x : \|\alpha - x\| \leq \epsilon\} \subset D, \quad \epsilon > 0,$$
on which the mapping
$$M: S \to \mathbb{R}^m, \quad M(x) = P(x) - Q(x)R(x), \quad \forall x \in S$$
is well defined. Moreover, $M(x)$ is Fréchet differentiable at α, and
$$M'(\alpha) = P'(\alpha) - Q(\alpha)R'(\alpha).$$
Let us also recall the definition (10) of the divided difference operator. Expanding $F'(x + th)$ in (10) by Taylor series at the point x and then integrating, we have that
$$[x + h, x; F] = \int_0^1 F'(x + th)\, dt = F'(x) + \frac{1}{2} F''(x)\, h + \frac{1}{6} F'''(x)\, h^2 + \frac{1}{24} F^{(iv)}(x)\, h^3 + O(h^4), \qquad (24)$$
where $h^i = (\overbrace{h, h, \ldots, h}^{i}), h \in \mathbb{R}^m$. Let $e^{(k)} = x^{(k)} - \alpha$. Assuming that $\Gamma = F'(\alpha)^{-1}$ exists, and expanding $F(x^{(k)})$ and its first three derivatives in a neighborhood of α by Taylor series, we have that
$$F(x^{(k)}) = F'(\alpha)\left(e^{(k)} + A_2 (e^{(k)})^2 + A_3 (e^{(k)})^3 + A_4 (e^{(k)})^4 + A_5 (e^{(k)})^5\right) + O((e^{(k)})^6), \qquad (25)$$
$$F'(x^{(k)}) = F'(\alpha)\left(I + 2 A_2 e^{(k)} + 3 A_3 (e^{(k)})^2 + 4 A_4 (e^{(k)})^3 + 5 A_5 (e^{(k)})^4\right) + O((e^{(k)})^5), \qquad (26)$$
$$F''(x^{(k)}) = F'(\alpha)\left(2 A_2 + 6 A_3 e^{(k)} + 12 A_4 (e^{(k)})^2 + 20 A_5 (e^{(k)})^3\right) + O((e^{(k)})^4) \qquad (27)$$
and
$$F'''(x^{(k)}) = F'(\alpha)\left(6 A_3 + 24 A_4 e^{(k)} + 60 A_5 (e^{(k)})^2\right) + O((e^{(k)})^3), \qquad (28)$$
where $A_i = \frac{1}{i!} \Gamma F^{(i)}(\alpha) \in L_i(\mathbb{R}^m, \mathbb{R}^m)$ and $(e^{(k)})^i = (\overbrace{e^{(k)}, e^{(k)}, \ldots, e^{(k)}}^{i\ \text{times}}), e^{(k)} \in \mathbb{R}^m$.
We are now in a position to analyze the behavior of the Traub-Steffensen-like method. The following theorem is proved:
Theorem 1.
Let $F: D \subset \mathbb{R}^m \to \mathbb{R}^m$ be sufficiently Fréchet differentiable at each point of an open convex set D containing $\alpha \in \mathbb{R}^m$, a solution of $F(x) = 0$. Assume that $x \in S = \bar{S}(\alpha, \epsilon)$, that $F'(x)$ is continuous and nonsingular at α, and that $x^{(0)}$ is close to α. Then, α is a point of attraction of the sequence $\{x^{(k)}\}$ generated by the Traub-Steffensen-like method (22). Furthermore, the sequence so developed converges locally to α with order at least 5.
Proof. 
First we show that α is a point of attraction of the Traub-Steffensen-like iteration. In this case, we have
$$P(x) = z(x), \quad Q(x) = H(x)[w, x; F]^{-1}, \quad R(x) = F(z(x)).$$
Now, since $F(\alpha) = 0$ and $[\alpha, \alpha; F] = F'(\alpha) \neq O$, we have
$$y(\alpha) = \alpha - [\alpha, \alpha; F]^{-1} F(\alpha) = \alpha - F'(\alpha)^{-1} F(\alpha) = \alpha,$$
$$z(\alpha) = \alpha - [\alpha, \alpha; F]^{-1} F(\alpha) - [\alpha, \alpha; F]^{-1} F(\alpha) = \alpha - F'(\alpha)^{-1} F(\alpha) - F'(\alpha)^{-1} F(\alpha) = \alpha,$$
$$H(\alpha) = 2I - [\alpha, \alpha; F]^{-1}[\alpha, \alpha; F] = I,$$
$$P(\alpha) = z(\alpha) = \alpha, \quad P'(\alpha) = z'(\alpha),$$
$$Q(\alpha) = H(\alpha)[\alpha, \alpha; F]^{-1} = I \cdot [\alpha, \alpha; F]^{-1} = F'(\alpha)^{-1} \neq O,$$
$$R(\alpha) = F(z(\alpha)) = F(\alpha) = 0, \quad R'(\alpha) = F'(z(\alpha))\, z'(\alpha) = F'(\alpha)\, z'(\alpha),$$
$$M'(\alpha) = P'(\alpha) - Q(\alpha) R'(\alpha) = z'(\alpha) - F'(\alpha)^{-1} F'(\alpha)\, z'(\alpha) = O,$$
so that $\rho(M'(\alpha)) = 0 < 1$ and, by Lemma 1, α is a point of attraction of (22).
Let $e_w^{(k)} = w^{(k)} - \alpha = x^{(k)} + \beta F(x^{(k)}) - \alpha = e^{(k)} + \beta F(x^{(k)})$. Then, using (25), it follows that
$$e_w^{(k)} = (I + \beta F'(\alpha))\, e^{(k)} + \beta F'(\alpha)\left(A_2 (e^{(k)})^2 + A_3 (e^{(k)})^3\right) + O((e^{(k)})^4). \qquad (29)$$
Setting $x + h = w^{(k)}$, $x = x^{(k)}$, $h = e_w^{(k)} - e^{(k)}$ in Equation (24) and then using (26)-(29), we can write
$$[w^{(k)}, x^{(k)}; F] = F'(\alpha)\left(I + X_1 A_2 e^{(k)} + (\lambda A_2^2 + X_2 A_3)(e^{(k)})^2 + X_1(2\lambda A_2 A_3 + X_3 A_4)(e^{(k)})^3 + O((e^{(k)})^4)\right), \qquad (30)$$
where $\lambda = \beta F'(\alpha)$, $X_1 = \lambda + 2$, $X_2 = \lambda^2 + 3\lambda + 3$ and $X_3 = \lambda^2 + 2\lambda + 2$.
The expansion of the inverse of the preceding divided difference operator is given by
$$[w^{(k)}, x^{(k)}; F]^{-1} = \left(I - X_1 A_2 e^{(k)} + \left((1 + X_2) A_2^2 - X_2 A_3\right)(e^{(k)})^2 - X_1\left((2 + X_3) A_2^3 - 2(1 + X_3) A_2 A_3 + X_3 A_4\right)(e^{(k)})^3 + O((e^{(k)})^4)\right)\Gamma. \qquad (31)$$
By using (25) and (31) in the first step of method (22), we get
$$e_y^{(k)} = y^{(k)} - \alpha = (X_1 - 1) A_2 (e^{(k)})^2 - \left(X_3 A_2^2 + (1 - X_2) A_3\right)(e^{(k)})^3 + O((e^{(k)})^4). \qquad (32)$$
Taylor expansion of $F(y^{(k)})$ about α yields
$$F(y^{(k)}) = F'(\alpha)\left(e_y^{(k)} + A_2 (e_y^{(k)})^2\right) + O((e_y^{(k)})^3). \qquad (33)$$
From the second step of (22), on using (31) and (33), it follows that
$$e_z^{(k)} = z^{(k)} - \alpha = X_1 A_2 e^{(k)} e_y^{(k)} - A_2 (e_y^{(k)})^2 - \left((1 + X_2) A_2^2 - X_2 A_3\right)(e^{(k)})^2 e_y^{(k)} + O((e^{(k)})^5). \qquad (34)$$
By Taylor expansion of $F(z^{(k)})$ about α,
$$F(z^{(k)}) = F'(\alpha)\left(e_z^{(k)} + A_2 (e_z^{(k)})^2\right) + O((e_z^{(k)})^3). \qquad (35)$$
Equation (24), for $x + h = z^{(k)}$, $x = y^{(k)}$ and $h = e_z^{(k)} - e_y^{(k)}$, yields
$$[z^{(k)}, y^{(k)}; F] = F'(\alpha)\left(I + A_2(e_z^{(k)} + e_y^{(k)}) + O((e^{(k)})^3)\right) = F'(\alpha)\left(I + (\lambda + 1) A_2^2 (e^{(k)})^2 + O((e^{(k)})^3)\right). \qquad (36)$$
From (31) and (36), we have
$$H(x^{(k)}) = 2I - [w^{(k)}, x^{(k)}; F]^{-1}[z^{(k)}, y^{(k)}; F] = I + X_1 A_2 e^{(k)} + \left(X_2 A_3 - (X_1 + X_2) A_2^2\right)(e^{(k)})^2 + O((e^{(k)})^3). \qquad (37)$$
Equations (31) and (37) yield
$$H(x^{(k)})\, [w^{(k)}, x^{(k)}; F]^{-1} = \left(I - (\lambda^2 + 5\lambda + 5) A_2^2 (e^{(k)})^2 + O((e^{(k)})^3)\right)\Gamma. \qquad (38)$$
Applying Equations (34), (35) and (38) in the last step of method (22) and then simplifying, we get the error equation
$$e^{(k+1)} = (\lambda + 1)(\lambda + 2)(\lambda^2 + 5\lambda + 5) A_2^4 (e^{(k)})^5 + O((e^{(k)})^6). \qquad (39)$$
This completes the proof of Theorem 1.  □
Thus, the Traub-Steffensen-like method (22) defines a one-parameter (β) family of derivative-free fifth order methods. From now on we denote it by $M_{5,1}$. In terms of computational cost, $M_{5,1}$ uses four function evaluations, two divided differences and one matrix inversion per iteration. In the next section we compare the computational efficiency of the new method with that of existing derivative-free methods.

4. Computational Efficiency

In order to assess the computational efficiency we use the definition given in Section 2.3. The evaluations and arithmetic operations that contribute to the cost of computation are as follows. For the computation of F in any iterative function we evaluate m scalar functions $f_i$ $(1 \leq i \leq m)$, and when computing a divided difference $[x, y; F]$ (see Section 2.2) we evaluate $m(m-1)$ scalar functions, wherein $F(x)$ and $F(y)$ are evaluated separately. Furthermore, one has to add $m^2$ divisions for any divided difference. For the computation of an inverse linear operator, a linear system is solved, which requires $m(m-1)(2m-1)/6$ products and $m(m-1)/2$ divisions in the LU decomposition, and $m(m-1)$ products and m divisions in the solution of the two triangular linear systems. Moreover, we add m products for the multiplication of a vector by a scalar and $m^2$ products for the multiplication of a matrix by a vector or of a matrix by a scalar.
The computational efficiency of the present method $M_{5,1}$ is compared with that of the second order method $M_{2,1}$; the third order method $M_{3,1}$; the fourth order methods by Ren et al. [11], Grau et al. [13] and Sharma-Arora [14]; the fifth order method by Kumar et al. [25]; the sixth order method by Grau et al. [13]; and the seventh order methods by Sharma-Arora [15] and Wang-Zhang [16]. These methods are expressed as follows:
Fourth order method by Ren et al. ($M_{4,1}$):
$$y^{(k)} = x^{(k)} - [u^{(k)}, x^{(k)}; F]^{-1} F(x^{(k)}), \quad x^{(k+1)} = y^{(k)} - \left([y^{(k)}, x^{(k)}; F] + [y^{(k)}, u^{(k)}; F] - [u^{(k)}, x^{(k)}; F]\right)^{-1} F(y^{(k)}),$$
where $u^{(k)} = x^{(k)} + F(x^{(k)})$.
Fourth order method by Grau et al. ($M_{4,2}$):
$$y^{(k)} = x^{(k)} - [u^{(k)}, v^{(k)}; F]^{-1} F(x^{(k)}), \quad x^{(k+1)} = y^{(k)} - \left(2[y^{(k)}, x^{(k)}; F] - [u^{(k)}, v^{(k)}; F]\right)^{-1} F(y^{(k)}),$$
where $u^{(k)} = x^{(k)} + F(x^{(k)})$ and $v^{(k)} = x^{(k)} - F(x^{(k)})$.
Sharma-Arora fourth order method ($M_{4,3}$):
$$y^{(k)} = x^{(k)} - [w^{(k)}, x^{(k)}; F]^{-1} F(x^{(k)}), \quad x^{(k+1)} = y^{(k)} - \left(3I - [w^{(k)}, x^{(k)}; F]^{-1}\left([y^{(k)}, x^{(k)}; F] + [y^{(k)}, w^{(k)}; F]\right)\right)[w^{(k)}, x^{(k)}; F]^{-1} F(y^{(k)}),$$
where $w^{(k)} = x^{(k)} + \beta F(x^{(k)})$, β being a non-zero constant.
Fifth order method by Kumar et al. ($M_{5,2}$):
$$y^{(k)} = x^{(k)} - [w^{(k)}, x^{(k)}; F]^{-1} F(x^{(k)}), \quad z^{(k)} = y^{(k)} - [w^{(k)}, x^{(k)}; F]^{-1} F(y^{(k)}), \quad x^{(k+1)} = z^{(k)} - [x^{(k)}, y^{(k)}; F]^{-1}[w^{(k)}, x^{(k)}; F]\, [w^{(k)}, y^{(k)}; F]^{-1} F(z^{(k)}),$$
where $w^{(k)} = x^{(k)} + F(x^{(k)})$.
Sixth order method by Grau et al. ($M_{6,1}$):
$$y^{(k)} = x^{(k)} - [u^{(k)}, v^{(k)}; F]^{-1} F(x^{(k)}), \quad z^{(k)} = y^{(k)} - \left(2[y^{(k)}, x^{(k)}; F] - [u^{(k)}, v^{(k)}; F]\right)^{-1} F(y^{(k)}), \quad x^{(k+1)} = z^{(k)} - \left(2[y^{(k)}, x^{(k)}; F] - [u^{(k)}, v^{(k)}; F]\right)^{-1} F(z^{(k)}).$$
Wang-Zhang seventh order method ($M_{7,1}$):
$$y^{(k)} = x^{(k)} - [u^{(k)}, x^{(k)}; F]^{-1} F(x^{(k)}), \quad z^{(k)} = y^{(k)} - \left([y^{(k)}, x^{(k)}; F] + [y^{(k)}, u^{(k)}; F] - [u^{(k)}, x^{(k)}; F]\right)^{-1} F(y^{(k)}), \quad x^{(k+1)} = z^{(k)} - \left([z^{(k)}, x^{(k)}; F] + [z^{(k)}, y^{(k)}; F] - [y^{(k)}, x^{(k)}; F]\right)^{-1} F(z^{(k)}),$$
where $u^{(k)} = x^{(k)} + F(x^{(k)})$.
Sharma-Arora seventh order method ($M_{7,2}$):
$$y^{(k)} = x^{(k)} - [w^{(k)}, x^{(k)}; F]^{-1} F(x^{(k)}), \quad z^{(k)} = y^{(k)} - \left(3I - [w^{(k)}, x^{(k)}; F]^{-1}\left([y^{(k)}, x^{(k)}; F] + [y^{(k)}, w^{(k)}; F]\right)\right)[w^{(k)}, x^{(k)}; F]^{-1} F(y^{(k)}), \quad x^{(k+1)} = z^{(k)} - [z^{(k)}, y^{(k)}; F]^{-1}\left([w^{(k)}, x^{(k)}; F] + [y^{(k)}, x^{(k)}; F] - [z^{(k)}, x^{(k)}; F]\right)[w^{(k)}, x^{(k)}; F]^{-1} F(z^{(k)}).$$
Let us denote the efficiency index of method $M_{p,i}$ by $E_{p,i}$ and its computational cost by $C_{p,i}$. Then, using the definition of Section 2.3 and taking into account the above counts of evaluations and operations, we have:
$$C_{2,1} = \tfrac{1}{3}m^3 + 3m^2 + \tfrac{2}{3}m \quad \text{and} \quad E_{2,1} = 2^{1/C_{2,1}}, \qquad (40)$$
$$C_{3,1} = \tfrac{1}{3}m^3 + 4m^2 + \tfrac{5}{3}m \quad \text{and} \quad E_{3,1} = 3^{1/C_{3,1}}, \qquad (41)$$
$$C_{4,1} = \tfrac{2}{3}m^3 + 8m^2 - \tfrac{2}{3}m \quad \text{and} \quad E_{4,1} = 4^{1/C_{4,1}}, \qquad (42)$$
$$C_{4,2} = \tfrac{2}{3}m^3 + 7m^2 + \tfrac{4}{3}m \quad \text{and} \quad E_{4,2} = 4^{1/C_{4,2}}, \qquad (43)$$
$$C_{4,3} = \tfrac{1}{3}m^3 + 10m^2 + \tfrac{2}{3}m \quad \text{and} \quad E_{4,3} = 4^{1/C_{4,3}}, \qquad (44)$$
$$C_{5,1} = \tfrac{1}{3}m^3 + 9m^2 + \tfrac{8}{3}m \quad \text{and} \quad E_{5,1} = 5^{1/C_{5,1}}, \qquad (45)$$
$$C_{5,2} = m^3 + 11m^2 \quad \text{and} \quad E_{5,2} = 5^{1/C_{5,2}}, \qquad (46)$$
$$C_{6,1} = \tfrac{2}{3}m^3 + 8m^2 + \tfrac{7}{3}m \quad \text{and} \quad E_{6,1} = 6^{1/C_{6,1}}, \qquad (47)$$
$$C_{7,1} = m^3 + 13m^2 - 2m \quad \text{and} \quad E_{7,1} = 7^{1/C_{7,1}}, \qquad (48)$$
$$C_{7,2} = \tfrac{2}{3}m^3 + 17m^2 - \tfrac{2}{3}m \quad \text{and} \quad E_{7,2} = 7^{1/C_{7,2}}. \qquad (49)$$
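The thresholds established in the comparisons below can also be checked numerically; the short Python script that follows (our sketch; labels are ours) tabulates the indices (40)-(49) for a given dimension m:

```python
ORDERS = {'M21': 2, 'M31': 3, 'M41': 4, 'M42': 4, 'M43': 4,
          'M51': 5, 'M52': 5, 'M61': 6, 'M71': 7, 'M72': 7}

def costs(m):
    """Computational costs (40)-(49) as functions of the system size m."""
    return {'M21': m**3/3 + 3*m**2 + 2*m/3,
            'M31': m**3/3 + 4*m**2 + 5*m/3,
            'M41': 2*m**3/3 + 8*m**2 - 2*m/3,
            'M42': 2*m**3/3 + 7*m**2 + 4*m/3,
            'M43': m**3/3 + 10*m**2 + 2*m/3,
            'M51': m**3/3 + 9*m**2 + 8*m/3,
            'M52': m**3 + 11*m**2,
            'M61': 2*m**3/3 + 8*m**2 + 7*m/3,
            'M71': m**3 + 13*m**2 - 2*m,
            'M72': 2*m**3/3 + 17*m**2 - 2*m/3}

def efficiency(m):
    """Efficiency indices E = p^(1/C) for every method at dimension m."""
    return {k: ORDERS[k] ** (1.0 / C) for k, C in costs(m).items()}

# For example, efficiency(m)['M51'] > efficiency(m)['M31'] first holds at m = 21.
```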
To compare the efficiency of the considered iterative methods, say $M_{p,i}$ against $M_{q,j}$, we consider the ratio
$$R_{p,i;q,j} = \frac{\log E_{p,i}}{\log E_{q,j}} = \frac{C_{q,j} \log(p)}{C_{p,i} \log(q)}. \qquad (50)$$
It is clear that when $R_{p,i;q,j} > 1$, the iterative method $M_{p,i}$ is more efficient than $M_{q,j}$.
$M_{3,1}$ versus $M_{2,1}$ case:
For this case the ratio (50) is given by
$$R_{3,1;2,1} = \frac{\left(\tfrac{1}{3}m^3 + 3m^2 + \tfrac{2}{3}m\right)\log 3}{\left(\tfrac{1}{3}m^3 + 4m^2 + \tfrac{5}{3}m\right)\log 2}.$$
It can easily be shown that $R_{3,1;2,1} > 1$ for $m \geq 2$. This implies that $E_{3,1} > E_{2,1}$ for $m \geq 2$. Thus, $M_{3,1}$ is more efficient than $M_{2,1}$, as stated in the introduction.
$M_{5,1}$ versus $M_{2,1}$ case:
The ratio (50) is given by
$$R_{5,1;2,1} = \frac{\left(\tfrac{1}{3}m^3 + 3m^2 + \tfrac{2}{3}m\right)\log 5}{\left(\tfrac{1}{3}m^3 + 9m^2 + \tfrac{8}{3}m\right)\log 2}.$$
It is easy to prove that $R_{5,1;2,1} > 1$ for $m \geq 6$. Thus, we conclude that $E_{5,1} > E_{2,1}$ for $m \geq 6$.
$M_{5,1}$ versus $M_{3,1}$ case:
The ratio (50) is given by
$$R_{5,1;3,1} = \frac{\left(\tfrac{1}{3}m^3 + 4m^2 + \tfrac{5}{3}m\right)\log 5}{\left(\tfrac{1}{3}m^3 + 9m^2 + \tfrac{8}{3}m\right)\log 3}.$$
It can be checked that $R_{5,1;3,1} > 1$ for $m \geq 21$. Thus, we have that $E_{5,1} > E_{3,1}$ for $m \geq 21$.
$M_{5,1}$ versus $M_{4,1}$ case:
In this case the ratio
$$R_{5,1;4,1} = \frac{\left(\tfrac{2}{3}m^3 + 8m^2 - \tfrac{2}{3}m\right)\log 5}{\left(\tfrac{1}{3}m^3 + 9m^2 + \tfrac{8}{3}m\right)\log 4} > 1$$
for $m \geq 3$, which implies that $E_{5,1} > E_{4,1}$ for $m \geq 3$.
$M_{5,1}$ versus $M_{4,2}$ case:
Here the ratio
$$R_{5,1;4,2} = \frac{\left(\tfrac{2}{3}m^3 + 7m^2 + \tfrac{4}{3}m\right)\log 5}{\left(\tfrac{1}{3}m^3 + 9m^2 + \tfrac{8}{3}m\right)\log 4} > 1$$
for $m \geq 3$, which implies that $E_{5,1} > E_{4,2}$ for $m \geq 3$.
$M_{5,1}$ versus $M_{4,3}$ case:
Here the ratio
$$R_{5,1;4,3} = \frac{\left(\tfrac{1}{3}m^3 + 10m^2 + \tfrac{2}{3}m\right)\log 5}{\left(\tfrac{1}{3}m^3 + 9m^2 + \tfrac{8}{3}m\right)\log 4} > 1$$
for $m \geq 2$, which implies that $E_{5,1} > E_{4,3}$ for $m \geq 2$.
$M_{5,1}$ versus $M_{5,2}$ case:
In this case the ratio
$$R_{5,1;5,2} = \frac{m^3 + 11m^2}{\tfrac{1}{3}m^3 + 9m^2 + \tfrac{8}{3}m} > 1$$
for $m \geq 2$, which means $E_{5,1} > E_{5,2}$ for $m \geq 2$.
$M_{5,1}$ versus $M_{6,1}$ case:
Here the ratio
$$R_{5,1;6,1} = \frac{\left(\tfrac{2}{3}m^3 + 8m^2 + \tfrac{7}{3}m\right)\log 5}{\left(\tfrac{1}{3}m^3 + 9m^2 + \tfrac{8}{3}m\right)\log 6} > 1$$
for $m \geq 8$, which means $E_{5,1} > E_{6,1}$ for $m \geq 8$.
$M_{5,1}$ versus $M_{7,1}$ case:
Here also the ratio
$$R_{5,1;7,1} = \frac{\left(m^3 + 13m^2 - 2m\right)\log 5}{\left(\tfrac{1}{3}m^3 + 9m^2 + \tfrac{8}{3}m\right)\log 7} > 1$$
for $m \geq 2$, which means $E_{5,1} > E_{7,1}$ for $m \geq 2$.
$M_{5,1}$ versus $M_{7,2}$ case:
Here also the ratio
$$R_{5,1;7,2} = \frac{\left(\tfrac{2}{3}m^3 + 17m^2 - \tfrac{2}{3}m\right)\log 5}{\left(\tfrac{1}{3}m^3 + 9m^2 + \tfrac{8}{3}m\right)\log 7} > 1$$
for $m \geq 2$, which means $E_{5,1} > E_{7,2}$ for $m \geq 2$.
The above results are summarized in the following theorem:
Theorem 2.
We have that:
(a) $E_{5,1} > E_{2,1}$ for $m \geq 6$;
(b) $E_{5,1} > E_{3,1}$ for $m \geq 21$;
(c) $E_{5,1} > E_{4,1}$ and $E_{5,1} > E_{4,2}$ for $m \geq 3$;
(d) $E_{3,1} > E_{2,1}$, $E_{5,1} > E_{4,3}$, $E_{5,1} > E_{5,2}$, $E_{5,1} > E_{7,1}$ and $E_{5,1} > E_{7,2}$ for $m \geq 2$;
(e) $E_{5,1} > E_{6,1}$ for $m \geq 8$.

5. Complex Dynamics of Methods

Our aim is to analyze the complex dynamics of the new method using the graphical tool of 'basins of attraction' of the zeros of a polynomial P(z) in the complex plane. A visual display of the basins gives important information about the stability and convergence of iterative methods. This idea was introduced by Vrscay and Gilbert [26]; in recent times, many authors have used this concept in their work, see, for example, [27,28] and the references therein. We consider method (22) to analyze the basins of attraction.
To start, we take the initial point $z_0$ in a rectangular region $R \subset \mathbb{C}$ that contains all the zeros of the polynomial P(z). The iterative method, starting from a point $z_0$ in the rectangle, either converges to a zero of P(z) or eventually diverges. The stopping condition for convergence is a tolerance of $10^{-3}$, up to a maximum of 25 iterations. If the required tolerance is not achieved within 25 iterations, we conclude that the iteration starting at $z_0$ does not converge to any root. The strategy adopted is as follows: a color is allocated to each initial point $z_0$ in the basin of attraction of a zero; if the iteration starting at $z_0$ converges, the point is drawn in the color assigned to that basin, otherwise (failure to converge within 25 iterations) the point is painted black.
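The following Python sketch reproduces this experiment for a scalar polynomial, using the scalar specialization of method (22), in which divided differences reduce to difference quotients; the grid size, the tolerance $10^{-3}$ and the 25-iteration cap follow the text above, while the plotting details are our choice:

```python
import numpy as np
import matplotlib.pyplot as plt

def m51_step(z, P, beta):
    """Scalar form of method (22)."""
    w = z + beta * P(z)
    d1 = (P(w) - P(z)) / (w - z)      # [w, z; P]
    y = z - P(z) / d1
    s = y - P(y) / d1
    d2 = (P(s) - P(y)) / (s - y)      # [s, y; P]
    return s - (2.0 - d2 / d1) * P(s) / d1

P = lambda z: z**2 - 1                 # P_1(z); zeros +1 and -1
roots, beta, n = np.array([1.0, -1.0]), 1e-4, 400
grid = np.zeros((n, n), dtype=int)     # 0 = divergent (black)
xs = np.linspace(-2.0, 2.0, n)
for i, a in enumerate(xs):
    for j, b in enumerate(xs):
        z = complex(a, b)
        for _ in range(25):
            dist = np.abs(z - roots)
            if dist.min() < 1e-3:
                grid[j, i] = dist.argmin() + 1   # color index of the basin
                break
            try:
                z = m51_step(z, P, beta)
            except ZeroDivisionError:
                break
plt.imshow(grid, extent=[-2, 2, -2, 2], origin='lower')
plt.show()
```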
We analyze the basins of attraction of the new method (for the choices $\beta = 10^{-2}, 10^{-4}, 10^{-8}$) on the following three polynomials:
Example 1.
In the first case, consider the polynomial $P_1(z) = z^2 - 1$, which has zeros $\{\pm 1\}$. A grid of $400 \times 400$ points in a rectangle $D \subset \mathbb{C}$ of size $[-2, 2] \times [-2, 2]$ is used for drawing the graphics. We assign the color red to each initial point in the basin of attraction of the zero 1 and the color green to the points in the basin of attraction of the zero $-1$. The graphics are shown in Figure 1 corresponding to $\beta = 10^{-2}, 10^{-4}, 10^{-8}$. Observing the behavior of the basins of the new method, we conclude that the convergence domain becomes wider as the parameter β assumes smaller values, since the black zones (divergent points) shrink.
Example 2.
Let us consider next the polynomial $P_2(z) = z^3 - z$, having zeros $\{0, \pm 1\}$. To draw the dynamical view, we select a rectangle $D = [-2, 2] \times [-2, 2] \subset \mathbb{C}$ containing $400 \times 400$ grid points, and allocate the colors green, blue and red to the points in the basins of attraction of 0, 1 and $-1$, respectively. Basins for this example are exhibited in Figure 2 corresponding to the parameter choices $\beta = 10^{-2}, 10^{-4}, 10^{-8}$ in the proposed method. Again, observe that the basins become larger and larger for smaller values of β.
Example 3.
Lastly, we consider the polynomial $P_3(z) = z^5 + 2z - 1$, having zeros $\{-0.945068 \pm 0.854518i,\ 0.701874 \pm 0.879697i,\ 0.486389\}$. To draw the dynamical view, we select a rectangle $D = [-2, 2] \times [-2, 2] \subset \mathbb{C}$ containing $400 \times 400$ grid points, and allocate the colors green, blue, red, yellow and pink to the points in the basins of attraction of $0.701874 + 0.879697i$, $-0.945068 - 0.854518i$, $0.701874 - 0.879697i$, $0.486389$ and $-0.945068 + 0.854518i$, respectively. Basins for this example are exhibited in Figure 3 corresponding to the parameter choices $\beta = 10^{-2}, 10^{-4}, 10^{-8}$ in the proposed method. We observe that the basins get larger for smaller values of β.

6. Numerical Tests

In this section, some numerical tests on different problems are performed to demonstrate the convergence behavior and computational efficiency of the method $M_{5,1}$. A comparison of the performance of $M_{5,1}$ with the existing methods $M_{2,1}$, $M_{3,1}$, $M_{4,j}$ (j = 1, 2, 3), $M_{5,2}$, $M_{6,1}$, $M_{7,1}$ and $M_{7,2}$ is also drawn. The programs are executed on a computer with specifications Intel(R) Core(TM) i5-4210U CPU @ 1.70 GHz 2.40 GHz (64-bit operating system), Microsoft Windows 10 Professional, and are compiled in Mathematica 10.0 using multiple-precision arithmetic. We record the number of iterations (k) required to converge to the solution such that the stopping condition
$$\|x^{(k+1)} - x^{(k)}\| + \|F(x^{(k)})\| < 10^{-300}$$
is satisfied. In order to verify the theoretical order of convergence, the computational order of convergence ($p_c$) is obtained using formula (8). In the comparison of the performance of the considered methods, we also include the real CPU time elapsed during the execution of the program, computed by the Mathematica command "TimeUsed[ ]".
The methods $M_{2,1}$, $M_{3,1}$, $M_{4,3}$, $M_{5,1}$ and $M_{7,2}$ are tested using the value 0.01 for the parameter β. In the numerical experiments we consider the following five problems:
Example 4.
Let us consider the system of two equations (selected from [29]):
$$x^2 + \sin x - e^y = 0, \quad 3x - \cos x - y = 0.$$
The initial guess assumed is $x^{(0)} = \{-1, -2\}^T$ for obtaining the solution
$$\alpha = \{-0.90743021707369569, -3.3380632251862363\}^T.$$
Example 5.
Now consider the mixed Hammerstein integral equation (see [4]):
$$x(s) = 1 + \frac{1}{5}\int_0^1 G(s, t)\, x(t)^3\, dt,$$
wherein $x \in C[0, 1]$, $s, t \in [0, 1]$, and the kernel G is
$$G(s, t) = \begin{cases} (1 - s)\,t, & t \leq s, \\ s\,(1 - t), & s \leq t. \end{cases}$$
The above equation is transformed into a finite-dimensional problem by using the Gauss-Legendre quadrature formula
$$\int_0^1 f(t)\, dt \approx \sum_{j=1}^{m} \varpi_j f(t_j),$$
where the weights $\varpi_j$ and abscissas $t_j$ are obtained for m = 8 by the Gauss-Legendre quadrature formula. Then, setting $x(t_i) = x_i$, $i = 1, 2, \ldots, 8$, we obtain the following system of nonlinear equations:
$$5 x_i - 5 - \sum_{j=1}^{8} a_{ij} x_j^3 = 0, \quad i = 1, 2, \ldots, 8,$$
where
$$a_{ij} = \begin{cases} \varpi_j\, t_j (1 - t_i) & \text{if } j \leq i, \\ \varpi_j\, t_i (1 - t_j) & \text{if } i < j, \end{cases}$$
the abscissas $t_j$ and the weights $\varpi_j$ being given in Table 1 for m = 8. The initial approximation assumed is
$$x^{(0)} = \{1, 1, 1, 1, 1, 1, 1, 1\}^T$$
and the solution of this problem is
$$\alpha = \{1.002096245031, 1.009900316187, 1.019726960993, 1.026435743030, 1.026435743030, 1.019726960993, 1.009900316187, 1.002096245031\}^T.$$
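A compact way to assemble this system in Python (our sketch; numpy's Gauss-Legendre rule on [-1, 1] is shifted to [0, 1], which reproduces the nodes and weights of Table 1):

```python
import numpy as np

nodes, wts = np.polynomial.legendre.leggauss(8)
t = 0.5 * (nodes + 1.0)          # abscissas t_j on [0, 1]
wq = 0.5 * wts                   # weights varpi_j on [0, 1]

# a_ij = varpi_j t_j (1 - t_i) if j <= i, and varpi_j t_i (1 - t_j) if i < j
A = np.array([[wq[j] * (t[j] * (1 - t[i]) if j <= i else t[i] * (1 - t[j]))
               for j in range(8)] for i in range(8)])

def F(x):
    """Residual of the discretized Hammerstein equation: 5 x_i - 5 - sum_j a_ij x_j^3."""
    return 5.0 * x - 5.0 - A @ x**3

x0 = np.ones(8)                  # initial approximation (1, ..., 1)^T
```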
Example 6.
Consider the system of 20 equations (see [29]):
$$\tan^{-1}(x_i) + 1 - 2\left(\sum_{j=1, j \neq i}^{20} x_j^2\right) = 0, \quad 1 \leq i \leq 20.$$
This problem has the following two solutions:
$$\alpha_1 = \{0.1757683176158, 0.1757683176158, \ldots, 0.1757683176158\}^T$$
and
$$\alpha_2 = \{-0.14968543422, -0.14968543422, \ldots, -0.14968543422\}^T.$$
We intend to find the first solution and so choose the initial value $x^{(0)} = \{0.5, 0.5, 0.5, \ldots, 0.5\}^T$.
Example 7.
Consider the boundary value problem:
$$y'' + y^3 = 0, \quad y(0) = 0, \quad y(1) = 1.$$
Assume the following partitioning of the interval [0, 1]:
$$u_0 = 0 < u_1 < u_2 < \cdots < u_{n-1} < u_n = 1, \quad u_{j+1} = u_j + h, \quad h = 1/n.$$
Set $y_0 = y(u_0) = 0$, $y_1 = y(u_1), \ldots, y_{n-1} = y(u_{n-1})$, $y_n = y(u_n) = 1$. If we discretize the problem by using the finite difference approximation for the second derivative,
$$y''_m = \frac{y_{m-1} - 2y_m + y_{m+1}}{h^2}, \quad m = 1, 2, 3, \ldots, n-1,$$
we obtain a system of $n - 1$ equations in $n - 1$ variables:
$$y_{m-1} - 2y_m + y_{m+1} + h^2 y_m^3 = 0, \quad m = 1, 2, 3, \ldots, n-1.$$
In particular, let us solve this problem for n = 51, that is, for m = 50 unknowns, by choosing $y^{(0)} = \{1, 1, 1, \ldots, 1\}^T$ as the initial value. The solution vector α of this problem is
$$\{0.02071138910, 0.04142277479, 0.06213413315, 0.08284539929, 0.10355644682, 0.12426706739, 0.14497695018, 0.16568566142, 0.18639262397, 0.20709709683, 0.22779815476, 0.24849466794, 0.26918528167, 0.28986839623, 0.31054214677, 0.33120438344, 0.35185265167, 0.37248417270, 0.39309582441, 0.41368412246, 0.43424520189, 0.45477479913, 0.47526823468, 0.49572039629, 0.51612572294, 0.53647818972, 0.55677129350, 0.57699803975, 0.59715093054, 0.61722195374, 0.63720257375, 0.65708372374, 0.67685579959, 0.69650865572, 0.71603160287, 0.73541340802, 0.75464229671, 0.77370595761, 0.79259154985, 0.81128571300, 0.82977457984, 0.84804379222, 0.86607851992, 0.88386348269, 0.90138297559, 0.91862089765, 0.93556078378, 0.95218584022, 0.96847898326, 0.98442288125\}^T.$$
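The corresponding residual function is easy to code (a sketch under the stated discretization; the boundary values $y_0 = 0$ and $y_n = 1$ are appended around the n - 1 unknowns):

```python
import numpy as np

n = 51
h = 1.0 / n

def F(y):
    """Residual of the discretized BVP: y_{m-1} - 2 y_m + y_{m+1} + h^2 y_m^3."""
    full = np.concatenate(([0.0], y, [1.0]))          # add y_0 = 0, y_n = 1
    return full[:-2] - 2.0 * full[1:-1] + full[2:] + h**2 * full[1:-1]**3

y0 = np.ones(n - 1)   # initial vector y^(0) = (1, ..., 1)^T
```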
Example 8.
Consider the following Burgers' equation (see [30]):
$$\frac{\partial^2 f}{\partial u^2} + f\,\frac{\partial f}{\partial u} - \frac{\partial f}{\partial t} + g(u, t) = 0, \quad (u, t) \in [0, 1]^2,$$
where $g(u, t) = -10 e^{-2t}\left[e^t (2 - u + u^2) + 10 u (1 - 3u + 2u^2)\right]$ and the function $f = f(u, t)$ satisfies the boundary conditions
$$f(0, t) = f(1, t) = 0, \quad f(u, 0) = 10u(u - 1) \quad \text{and} \quad f(u, 1) = 10u(u - 1)/e.$$
Assume the following partitioning of the domain $[0, 1]^2$:
$$0 = u_0 < u_1 < u_2 < \cdots < u_{n-1} < u_n = 1, \quad u_{k+1} = u_k + h; \qquad 0 = t_0 < t_1 < t_2 < \cdots < t_{n-1} < t_n = 1, \quad t_{l+1} = t_l + h, \quad h = 1/n.$$
Let us define $f_{k,l} = f(u_k, t_l)$ and $g_{k,l} = g(u_k, t_l)$ for $k, l = 0, 1, 2, \ldots, n$. The boundary conditions then read $f_{0,l} = f(u_0, t_l) = 0$, $f_{n,l} = f(u_n, t_l) = 0$, $f_{k,0} = f(u_k, t_0) = 10 u_k (u_k - 1)$ and $f_{k,n} = f(u_k, t_n) = 10 u_k (u_k - 1)/e$. If we discretize Burgers' equation by using the numerical formulas for the partial derivatives
$$\left.\frac{\partial f}{\partial u}\right|_{i,j} = \frac{f_{i+1,j} - f_{i-1,j}}{2h}, \quad \left.\frac{\partial f}{\partial t}\right|_{i,j} = \frac{f_{i,j+1} - f_{i,j-1}}{2h}, \quad \left.\frac{\partial^2 f}{\partial u^2}\right|_{i,j} = \frac{f_{i+1,j} - 2 f_{i,j} + f_{i-1,j}}{h^2}, \quad i, j = 1, 2, \ldots, n-1,$$
then we obtain the following system of $(n-1)^2$ nonlinear equations in $(n-1)^2$ variables:
$$f_{i-1,j}(2 - h f_{i,j}) + h(f_{i,j-1} - f_{i,j+1}) - f_{i,j}(4 - h f_{i+1,j}) + 2 f_{i+1,j} + 2 h^2 g_{i,j} = 0, \qquad (51)$$
where $i, j = 1, 2, \ldots, n-1$. In particular, we solve this nonlinear system for n = 11, so that m = 100, selecting $f_{i,j} = -1$ for $i, j = 1, 2, \ldots, 10$ as the initial value. The solution of this system of nonlinear equations is given in Table 2.
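For completeness, a Python sketch of the residual of system (51) follows (the sign conventions match g(u, t) and the boundary conditions as given above; the row-wise flattening of the unknowns is our choice):

```python
import numpy as np

n = 11
h = 1.0 / n

def g(u, t):
    """Source term of Burgers' equation as given above."""
    return -10.0 * np.exp(-2.0 * t) * (np.exp(t) * (2.0 - u + u**2)
                                       + 10.0 * u * (1.0 - 3.0 * u + 2.0 * u**2))

def F(f_flat):
    """Residual of system (51) for the (n-1)^2 interior unknowns f_{i,j}."""
    f = np.zeros((n + 1, n + 1))
    f[1:n, 1:n] = f_flat.reshape(n - 1, n - 1)
    u = np.arange(n + 1) * h
    f[:, 0] = 10.0 * u * (u - 1.0)            # f(u, 0)
    f[:, n] = 10.0 * u * (u - 1.0) / np.e     # f(u, 1); f[0, :] and f[n, :] stay 0
    i, j = np.meshgrid(np.arange(1, n), np.arange(1, n), indexing='ij')
    res = (f[i - 1, j] * (2.0 - h * f[i, j]) + h * (f[i, j - 1] - f[i, j + 1])
           - f[i, j] * (4.0 - h * f[i + 1, j]) + 2.0 * f[i + 1, j]
           + 2.0 * h**2 * g(i * h, j * h))
    return res.ravel()

f0 = -np.ones((n - 1) ** 2)   # initial value f_{i,j} = -1
```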
In Table 3, Table 4, Table 5, Table 6 and Table 7 we present the numerical results produced by the methods $M_{2,1}$, $M_{3,1}$, $M_{4,j}$ (j = 1, 2, 3), $M_{5,1}$, $M_{5,2}$, $M_{6,1}$, $M_{7,1}$ and $M_{7,2}$. Displayed in each table are the errors $\|x^{(k+1)} - x^{(k)}\|$ of the first three consecutive approximations to the corresponding solution of Examples 4-8, the number of iterations (k) needed to converge to the required solution, the computational order of convergence $p_c$, the computational cost $C_{p,i}$, the computational efficiency $E_{p,i}$, and the elapsed CPU time (e-time) measured in seconds. In each table, A(-h) denotes $A \times 10^{-h}$. Numerical values of the computational cost and efficiency are obtained according to the corresponding expressions given by (40)-(49). The e-time is calculated by averaging 50 runs of the program, where we use $\|x^{(k+1)} - x^{(k)}\| + \|F(x^{(k)})\| < 10^{-300}$ as the stopping condition in each single run.
From the numerical results displayed in Table 3, Table 4, Table 5, Table 6 and Table 7, it can be observed that, like the existing methods, the proposed method shows consistent convergence behavior. The seventh order methods produce approximations of high accuracy due to their higher order of convergence, but they are less efficient. In Example 6, $M_{4,1}$ and $M_{7,1}$ do not converge to the required solution $\alpha_1$; instead, they converge to the solution $\alpha_2$, which is far from the chosen initial approximation. Calculation of the computational order of convergence shows that the order of convergence of the new method is preserved in all the numerical examples. However, this is not true for some existing methods, e.g., $M_{4,j}$ (j = 1, 2, 3), $M_{5,2}$, $M_{6,1}$, $M_{7,1}$ and $M_{7,2}$ in Example 8. The values of the efficiency index shown in the penultimate column of each table also verify the theoretical results stated in Theorem 2. The efficiency results are in complete agreement with the CPU time used in the execution of the program, since a method with larger efficiency uses less computing time than a method with smaller efficiency. Moreover, the proposed method uses less CPU time than the existing higher order methods, which points to its dominance. In fact, the new method is especially efficient for large systems of nonlinear equations.

7. Conclusions

In the foregoing study, we have developed a fifth order iterative method for approximating solutions of systems of nonlinear equations. The methodology is based on the third order Traub-Steffensen method and is further developed by using a derivative-free modification of the classical Chebyshev method. The iterative scheme is totally derivative-free and so is particularly suitable for problems where derivatives are lengthy to compute. To prove the local fifth order of convergence of the new method, an expansion of the first order divided difference operator and direct computation by Taylor expansion are used.
We have examined the computational efficiency of the new method, including a comparison with the most efficient existing methods. It is proved that, in general, the new algorithm is more efficient. Numerical experiments were performed and the performance was compared with that of existing derivative-free methods. From the numerical results it has been observed that the proposed method has equal or better convergence compared to existing methods. The theoretical results on convergence order and computational efficiency have also been verified in the considered numerical problems. Similar numerical tests, performed on a variety of other problems, have confirmed these conclusions to a good extent.

Author Contributions

Methodology, J.R.S.; writing, review and editing, J.R.S.; investigation, D.K.; data curation, D.K.; conceptualization, L.J.; formal analysis, L.J.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Argyros, I.K. Quadratic equations and applications to Chandrasekhar’s and related equations. Bull. Aust. Math. Soc. 1985, 32, 275–292. [Google Scholar] [CrossRef]
  2. Kelley, C.T. Iterative Methods for Linear and Nonlinear Equations; SIAM: Philadelphia, PA, USA, 1995. [Google Scholar]
  3. Ostrowski, A.M. Solution of Equations and Systems of Equations; Academic Press: New York, NY, USA, 1960. [Google Scholar]
  4. Ortega, J.M.; Rheinboldt, W.C. Iterative Solution of Nonlinear Equations in Several Variables; Academic Press: New York, NY, USA, 1970. [Google Scholar]
  5. Potra, F.-A.; Pták, V. Nondiscrete Induction and Iterative Processes; Pitman Publishing: Boston, MA, USA, 1984. [Google Scholar]
  6. Traub, J.F. Iterative Methods for the Solution of Equations; Prentice-Hall: Englewood Cliffs, NJ, USA, 1964. [Google Scholar]
  7. Ezquerro, J.A.; Hernández, M.A. An optimization of Chebyshev’s method. J. Complex. 2009, 25, 343–361. [Google Scholar] [CrossRef]
  8. Argyros, I.K.; Ezquerro, J.A.; Gutiérrez, J.M.; Hernández, M.A.; Hilout, S. On the semilocal convergence of efficient Chebyshev-Secant-type methods. J. Comput. Appl. Math. 2011, 235, 3195–3206. [Google Scholar] [CrossRef]
  9. Argyros, I.K.; Ren, H. Efficient Steffensen-type algorithms for solving nonlinear equations. Int. J. Comput. Math. 2013, 90, 691–704. [Google Scholar] [CrossRef]
  10. Steffensen, J.F. Remarks on iteration. Skand. Aktuar Tidskr. 1933, 16, 64–72. [Google Scholar] [CrossRef]
  11. Ren, H.; Wu, Q.; Bi, W. A class of two-step Steffensen type methods with fourth-order convergence. Appl. Math. Comput. 2009, 209, 206–210. [Google Scholar] [CrossRef]
  12. Zheng, Q.; Zhao, P.; Huang, F. A family of fourth-order Steffensen-type methods with the applications on solving nonlinear ODEs. Appl. Math. Comput. 2011, 217, 8196–8203. [Google Scholar] [CrossRef]
  13. Grau-Sánchez, M.; Noguera, M.; Amat, S. On the approximation of derivatives using divided difference operators preserving the local convergence order of iterative methods. J. Comput. Appl. Math. 2013, 237, 363–372. [Google Scholar] [CrossRef]
  14. Sharma, J.R.; Arora, H. An efficient derivative free iterative method for solving systems of nonlinear equations. Appl. Anal. Discrete Math. 2013, 7, 390–403. [Google Scholar] [CrossRef] [Green Version]
  15. Sharma, J.R.; Arora, H. A novel derivative free algorithm with seventh order convergence for solving systems of nonlinear equations. Numer. Algorithms 2014, 67, 917–933. [Google Scholar] [CrossRef]
  16. Wang, X.; Zhang, T. A family of Steffensen type methods with seventh-order convergence. Numer. Algorithms 2013, 62, 429–444. [Google Scholar] [CrossRef]
  17. Sharma, J.R.; Arora, H. Efficient higher order derivative-free multipoint methods with and without memory for systems of nonlinear equations. Int. J. Comput. Math. 2018, 95, 920–938. [Google Scholar] [CrossRef]
  18. Ahmad, F.; Soleymani, F.; Haghani, F.K.; Serra-Capizzano, S. Higher order derivative-free iterative methods with and without memory for systems of nonlinear equations. Appl. Math. Comput. 2017, 314, 199–211. [Google Scholar] [CrossRef]
  19. Cordero, A.; Torregrosa, J.R. Variants of Newton’s method for functions of several variables. Appl. Math. Comput. 2006, 183, 199–208. [Google Scholar] [CrossRef]
  20. Genocchi, A. Relation entre la différence et la dérivée d’un même ordre quelconque. Arch. Math. Phys. I 1869, 49, 342–345. [Google Scholar]
  21. Cordero, A.; Hueso, J.L.; Martínez, E.; Torregrosa, J.R. A modified Newton-Jarratt’s composition. Numer. Algorithms 2010, 55, 87–99. [Google Scholar] [CrossRef]
  22. Lotfi, T.; Bakhtiari, P.; Cordero, A.; Mahdiani, K.; Torregrosa, J.R. Some new efficient multipoint iterative methods for solving nonlinear systems of equations. Int. J. Comput. Math. 2015, 92, 1921–1934. [Google Scholar] [CrossRef]
  23. Krasnoselsky, M.A.; Vainikko, G.M.; Zabreiko, P.P.; Rutitskii, J.B.; Stetsenko, V.J. Approximate Solution of Operator Equations; Nauka: Moscow, Russia, 1969. (In Russian) [Google Scholar]
  24. Madhu, K.; Babajee, D.K.R.; Jayaraman, J. An improvement to double-step Newton method and its multi-step version for solving system of nonlinear equations and its applications. Numer. Algorithms 2017, 74, 593–607. [Google Scholar] [CrossRef]
  25. Kumar, M.; Singh, A.K.; Srivastava, A. A new fifth order derivative free Newton-type method for solving nonlinear equations. Appl. Math. Inf. Sci. 2015, 9, 1507–1513. [Google Scholar]
  26. Vrscay, E.R.; Gilbert, W.J. Extraneous fixed points, basin boundaries and chaotic dynamics for Schröder and König rational iteration functions. Numer. Math. 1988, 52, 1–16. [Google Scholar] [CrossRef]
  27. Varona, J.L. Graphic and numerical comparison between iterative methods. Math. Intell. 2002, 24, 37–46. [Google Scholar] [CrossRef]
  28. Scott, M.; Neta, B.; Chun, C. Basin attractors for various methods. Appl. Math. Comput. 2011, 218, 2584–2599. [Google Scholar] [CrossRef]
  29. Xiao, X.Y.; Yin, H.W. Increasing the order of convergence for iterative methods to solve nonlinear systems. Calcolo 2016, 53, 285–300. [Google Scholar] [CrossRef]
  30. Sauer, T. Numerical Analysis, 2nd ed.; Pearson Education, Inc.: Boston, MA, USA, 2012. [Google Scholar]
Figure 1. Basins of attraction for polynomial $P_1(z)$.
Figure 2. Basins of attraction for polynomial $P_2(z)$.
Figure 3. Basins of attraction for polynomial $P_3(z)$.
Table 1. Weights and abscissas of the Gauss-Legendre quadrature formula for m = 8.

j | t_j | ϖ_j
1 | 0.01985507175123188415821957 | 0.05061426814518812957626567
2 | 0.10166676129318663020422303 | 0.11119051722668723527217800
3 | 0.23723379504183550709113047 | 0.15685332293894364366898110
4 | 0.40828267875217509753026193 | 0.18134189168918099148257522
5 | 0.59171732124782490246973807 | 0.18134189168918099148257522
6 | 0.76276620495816449290886952 | 0.15685332293894364366898110
7 | 0.89833323870681336979577696 | 0.11119051722668723527217800
8 | 0.98014492824876811584178043 | 0.05061426814518812957626567
Table 2. The solution of system (51) with the unknowns $f_{i,j}$ for $i, j = 1, 2, \ldots, 10$ (rows correspond to i = 1, ..., 10).

f_{i,1} | f_{i,2} | f_{i,3} | f_{i,4} | f_{i,5} | f_{i,6} | f_{i,7} | f_{i,8} | f_{i,9} | f_{i,10}
-0.7546 | -0.6892 | -0.6290 | -0.5750 | -0.5236 | -0.4817 | -0.4306 | -0.4214 | -0.2583 | -0.3068
-1.3583 | -1.2405 | -1.1322 | -1.0351 | -0.9422 | -0.8675 | -0.7741 | -0.7598 | -0.4358 | -0.5505
-1.8111 | -1.6541 | -1.5096 | -1.3803 | -1.2559 | -1.1573 | -1.0309 | -1.0110 | -0.6951 | -0.7356
-2.1130 | -1.9298 | -1.7611 | -1.6106 | -1.4649 | -1.3511 | -1.2014 | -1.1768 | -0.8755 | -0.8606
-2.2639 | -2.0678 | -1.8869 | -1.7258 | -1.5690 | -1.4485 | -1.2860 | -1.2582 | -0.9783 | -0.9244
-2.2639 | -2.0678 | -1.8868 | -1.7261 | -1.5686 | -1.4494 | -1.2850 | -1.2558 | -1.0040 | -0.9268
-2.1129 | -1.9300 | -1.7609 | -1.6112 | -1.4637 | -1.3534 | -1.1987 | -1.1700 | -0.9534 | -0.8672
-1.8111 | -1.6544 | -1.5093 | -1.3812 | -1.2544 | -1.1604 | -1.0272 | -1.0010 | -0.8270 | -0.7454
-1.3583 | -1.2408 | -1.1320 | -1.0359 | -0.9407 | -0.8704 | -0.7706 | -0.7492 | -0.6255 | -0.5609
-0.7546 | -0.6893 | -0.6289 | -0.5755 | -0.5227 | -0.4834 | -0.4285 | -0.4152 | -0.3496 | -0.3128
Table 3. Comparison of performance of methods for Example 4.

Methods | ‖x^(2) − x^(1)‖ | ‖x^(3) − x^(2)‖ | ‖x^(4) − x^(3)‖ | k | p_c | C_{p,i} | E_{p,i} | e-time
M_{2,1} (β = 0.01) | 9.94(−2) | 4.45(−3) | 7.14(−6) | 9 | 2.000 | 16 | 1.04427 | 0.2887
M_{3,1} (β = 0.01) | 2.93(−2) | 8.14(−6) | 1.42(−16) | 6 | 3.000 | 22 | 1.05120 | 0.2630
M_{4,1} | 3.73(−4) | 1.71(−16) | 6.94(−66) | 5 | 4.000 | 36 | 1.03926 | 0.3234
M_{4,2} | 6.17(−2) | 7.75(−7) | 5.63(−27) | 5 | 4.000 | 36 | 1.03926 | 0.3165
M_{4,3} (β = 0.01) | 5.35(−3) | 2.42(−10) | 7.56(−40) | 5 | 4.000 | 44 | 1.03201 | 0.3362
M_{5,1} (β = 0.01) | 1.76(−3) | 4.72(−15) | 4.11(−73) | 4 | 5.000 | 44 | 1.03726 | 0.3297
M_{5,2} | 1.89(−3) | 2.96(−15) | 2.04(−74) | 4 | 5.000 | 52 | 1.03143 | 0.3972
M_{6,1} | 2.97(−2) | 8.66(−12) | 1.81(−69) | 4 | 6.000 | 42 | 1.04358 | 0.3120
M_{7,1} | 2.23(−7) | 4.55(−52) | 0.000 | 3 | 7.000 | 56 | 1.03536 | 0.3468
M_{7,2} (β = 0.01) | 7.93(−5) | 4.14(−31) | 3.40(−215) | 3 | 7.000 | 72 | 1.02740 | 0.4125
Table 4. Comparison of performance of methods for Example 5.

Methods | ‖x^(2) − x^(1)‖ | ‖x^(3) − x^(2)‖ | ‖x^(4) − x^(3)‖ | k | p_c | C_{p,i} | E_{p,i} | e-time
M_{2,1} (β = 0.01) | 0.202 | 1.44(−3) | 7.18(−8) | 9 | 2.000 | 368 | 1.001885 | 0.3437
M_{3,1} (β = 0.01) | 1.73(−3) | 1.24(−11) | 4.56(−36) | 5 | 3.000 | 440 | 1.002500 | 0.2252
M_{4,1} | 0.276 | 6.19(−6) | 1.36(−24) | 5 | 4.000 | 848 | 1.001636 | 0.3532
M_{4,2} | 9.94(−2) | 4.12(−8) | 1.23(−33) | 5 | 4.000 | 800 | 1.001734 | 0.3562
M_{4,3} (β = 0.01) | 1.86(−2) | 4.72(−11) | 2.06(−45) | 5 | 4.000 | 816 | 1.001700 | 0.2749
M_{5,1} (β = 0.01) | 1.20(−5) | 3.49(−30) | 7.35(−153) | 4 | 5.000 | 768 | 1.002098 | 0.2312
M_{5,2} | 4.50(−2) | 1.98(−11) | 3.74(−58) | 5 | 5.000 | 1216 | 1.001324 | 0.4234
M_{6,1} | 1.39(−2) | 3.84(−17) | 1.86(−104) | 4 | 6.000 | 872 | 1.002057 | 0.3063
M_{7,1} | 1.12(−2) | 7.70(−21) | 6.01(−148) | 4 | 7.000 | 1328 | 1.001467 | 0.3862
M_{7,2} (β = 0.01) | 8.66(−5) | 4.24(−36) | 3.01(−255) | 4 | 7.000 | 1424 | 1.001367 | 0.3625
Table 5. Comparison of performance of methods for Example 6.

Methods | ‖x^(2) − x^(1)‖ | ‖x^(3) − x^(2)‖ | ‖x^(4) − x^(3)‖ | k | p_c | C_{p,i} | E_{p,i} | e-time
M_{2,1} (β = 0.01) | 0.336 | 5.87(−2) | 2.05(−3) | 10 | 2.000 | 3880 | 1.0001787 | 4.2530
M_{3,1} (β = 0.01) | 0.209 | 4.29(−3) | 6.08(−8) | 7 | 3.000 | 4300 | 1.0002555 | 3.3061
* M_{4,1} | 0.370 | 2.50(−2) | 1.37 | 18 | 4.000 | 8520 | 1.0001627 | 15.469
M_{4,2} | 8.97(−2) | 1.02(−5) | 2.23(−21) | 5 | 4.000 | 8160 | 1.0001699 | 3.4542
M_{4,3} (β = 0.01) | 0.133 | 2.18(−4) | 2.85(−15) | 6 | 4.000 | 6680 | 1.0002076 | 3.8634
M_{5,1} (β = 0.01) | 8.15(−2) | 3.67(−6) | 1.08(−27) | 5 | 5.000 | 6320 | 1.0002547 | 3.3176
M_{5,2} | 0.434 | 7.70(−2) | 1.33(−2) | 7 | 5.000 | 12,400 | 1.0001298 | 7.2054
M_{6,1} | 3.16(−2) | 1.38(−10) | 1.12(−60) | 4 | 6.000 | 8580 | 1.0002089 | 3.3585
* M_{7,1} | 1.572 | 3.42(−4) | 7.60(−25) | 5 | 7.000 | 13,160 | 1.0001478 | 5.7346
M_{7,2} (β = 0.01) | 1.96(−2) | 6.66(−13) | 3.92(−86) | 4 | 7.000 | 12,120 | 1.0001606 | 4.3594

* The methods M_{4,1} and M_{7,1} converge to the solution α₂.
Table 6. Comparison of performance of methods for Example 7.

Methods | ‖x^(2) − x^(1)‖ | ‖x^(3) − x^(2)‖ | ‖x^(4) − x^(3)‖ | k | p_c | C_{p,i} | E_{p,i} | e-time
M_{2,1} (β = 0.01) | 3.828 | 0.681 | 1.23(−2) | 9 | 2.000 | 49,200 | 1.00001409 | 0.8928
M_{3,1} (β = 0.01) | 0.433 | 9.62(−5) | 1.74(−15) | 6 | 3.000 | 51,750 | 1.00002123 | 0.7183
M_{4,1} | 0.840 | 9.07(−5) | 9.18(−21) | 5 | 4.000 | 103,300 | 1.00001342 | 1.0475
M_{4,2} | 0.848 | 9.54(−5) | 1.13(−20) | 5 | 4.000 | 100,900 | 1.00001374 | 1.1896
M_{4,3} (β = 0.01) | 1.548 | 3.85(−3) | 5.75(−14) | 6 | 4.000 | 66,700 | 1.00002078 | 0.8284
M_{5,1} (β = 0.01) | 4.06(−2) | 1.22(−12) | 2.82(−65) | 4 | 5.000 | 64,300 | 1.00002503 | 0.6102
M_{5,2} | 8.16(−2) | 3.64(−11) | 7.86(−58) | 5 | 5.000 | 152,500 | 1.00001055 | 1.6563
M_{6,1} | 0.159 | 1.20(−11) | 2.24(−72) | 6 | 6.000 | 103,450 | 1.00001732 | 1.0313
M_{7,1} | 9.92(−2) | 1.26(−15) | 7.09(−113) | 4 | 7.000 | 157,400 | 1.00001236 | 1.3457
M_{7,2} (β = 0.01) | 0.212 | 6.80(−13) | 3.00(−93) | 4 | 7.000 | 125,800 | 1.00001547 | 1.0938
Table 7. Comparison of performance of methods for Example 8.

Methods | ‖x^(2) − x^(1)‖ | ‖x^(3) − x^(2)‖ | ‖x^(4) − x^(3)‖ | k | p_c | C_{p,i} | E_{p,i} | e-time
M_{2,1} (β = 0.01) | 1.980 | 1.78(−2) | 2.66(−6) | 9 | 2.000 | 363,400 | 1.000001907 | 7.6572
M_{3,1} (β = 0.01) | 0.951 | 1.31(−4) | 4.43(−16) | 6 | 3.000 | 373,500 | 1.000002941 | 5.9846
M_{4,1} | 0.158 | 8.95(−8) | 2.27(−25) | 6 | 2.997 | 746,600 | 1.000001471 | 10.503
M_{4,2} | 0.418 | 8.99(−6) | 2.28(−19) | 6 | 3.000 | 736,800 | 1.000001491 | 10.249
M_{4,3} (β = 0.01) | 0.453 | 5.14(−6) | 9.22(−21) | 6 | 3.001 | 433,400 | 1.000002535 | 6.5627
M_{5,1} (β = 0.01) | 0.137 | 3.08(−12) | 3.28(−65) | 4 | 5.001 | 423,600 | 1.000003815 | 4.8255
M_{5,2} | 3.12(−2) | 5.16(−13) | 3.86(−55) | 5 | 3.999 | 1,110,000 | 1.000001450 | 10.876
M_{6,1} | 8.38(−2) | 6.04(−11) | 9.18(−46) | 5 | 4.001 | 746,900 | 1.000001856 | 9.2656
M_{7,1} | 1.75(−4) | 1.74(−34) | 4.89(−213) | 4 | 5.999 | 1,129,800 | 1.000001586 | 10.235
M_{7,2} (β = 0.01) | 3.83(−3) | 5.44(−25) | 1.65(−155) | 4 | 5.996 | 836,600 | 1.000002142 | 8.4533
