Article

On Derivative Free Multiple-Root Finders with Optimal Fourth Order Convergence

1 Department of Mathematics, Sant Longowal Institute of Engineering and Technology, Longowal Sangrur 148106, India
2 Department of Physics and Chemistry, Technical University of Cluj-Napoca, 400114 Cluj-Napoca, Romania
3 Institute of Doctoral Studies, Babeş-Bolyai University, 400084 Cluj-Napoca, Romania
* Authors to whom correspondence should be addressed.
Mathematics 2020, 8(7), 1091; https://doi.org/10.3390/math8071091
Submission received: 14 June 2020 / Revised: 2 July 2020 / Accepted: 2 July 2020 / Published: 3 July 2020
(This article belongs to the Special Issue Numerical Methods)

Abstract: A number of optimal order multiple-root techniques that require derivative evaluations in their formulas have been proposed in the literature. However, derivative-free optimal techniques for multiple roots are seldom obtained. Motivated by this fact, we present a class of optimal fourth order methods for computing multiple roots without using derivatives in the iteration. The iterative formula consists of two steps, of which the first is the well-known Traub–Steffensen scheme and the second is a Traub–Steffensen-like scheme. Effectiveness is validated on different problems, which show the robust convergence behavior of the proposed methods. It is shown that the new derivative-free methods are good competitors to existing counterparts that need derivative information.

1. Introduction

Finding a root of a nonlinear equation $\psi(u) = 0$ is a very important and interesting problem in many branches of science and engineering. In this work, we examine derivative-free numerical methods to find a multiple root (say, $\alpha$) with multiplicity $\mu$ of the equation $\psi(u) = 0$, that is, $\psi^{(j)}(\alpha) = 0$ for $j = 0, 1, 2, \ldots, \mu - 1$ and $\psi^{(\mu)}(\alpha) \neq 0$. Newton's method [1] is the most widely used basic method for finding multiple roots, which is given by
$$u_{k+1} = u_k - \mu \, \frac{\psi(u_k)}{\psi'(u_k)}, \quad k = 0, 1, 2, \ldots, \quad \mu = 2, 3, 4, \ldots$$
A number of modified methods, with or without the base of Newton's method, have been elaborated and analyzed in the literature; see [2,3,4,5,6,7,8,9,10,11,12,13,14]. These methods use derivatives of either first order or both first and second order in the iterative scheme. In contrast, higher order methods without derivatives for computing multiple roots are yet to be examined. Such methods are very useful in problems where the derivative $\psi'$ is cumbersome to evaluate or costly to compute. The derivative-free counterpart of the classical Newton method (1) is the Traub–Steffensen method [15], which uses the approximation
$$\psi'(u_k) \approx \frac{\psi(u_k + \beta \psi(u_k)) - \psi(u_k)}{\beta \, \psi(u_k)}, \quad \beta \in \mathbb{R} \setminus \{0\},$$
or
$$\psi'(u_k) \approx \psi[v_k, u_k],$$
for the derivative $\psi'$ in the Newton method (1). Here, $v_k = u_k + \beta \psi(u_k)$ and $\psi[v_k, u_k] = \frac{\psi(v_k) - \psi(u_k)}{v_k - u_k}$ is a first order divided difference. Thereby, method (1) takes the form of the Traub–Steffensen scheme defined as
$$u_{k+1} = u_k - \mu \, \frac{\psi(u_k)}{\psi[v_k, u_k]}.$$
The Traub–Steffensen method (2) is a prominent improvement over the Newton method because it maintains quadratic convergence without using any derivative.
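As an illustration, the Traub–Steffensen scheme (2) can be implemented in a few lines. The following sketch (the function name `traub_steffensen` and the test function are illustrative, not from the paper) assumes the multiplicity μ is known:

```python
def traub_steffensen(psi, u0, mu, beta=0.01, tol=1e-12, max_iter=50):
    """Traub-Steffensen iteration (2) for a root of multiplicity mu.

    psi  : function whose multiple root is sought
    u0   : initial guess
    beta : nonzero parameter defining v_k = u_k + beta*psi(u_k)
    """
    u = u0
    for _ in range(max_iter):
        fu = psi(u)
        v = u + beta * fu
        if v == u:                       # beta*psi(u) below machine precision: done
            return u
        dd = (psi(v) - fu) / (v - u)     # divided difference psi[v_k, u_k]
        u_new = u - mu * fu / dd
        if abs(u_new - u) < tol:
            return u_new
        u = u_new
    return u

# double root at u = 2 of psi(u) = (u - 2)^2 (u + 1)
root = traub_steffensen(lambda u: (u - 2)**2 * (u + 1), 2.5, mu=2)
```

With the correct μ supplied, the iteration converges quadratically to the multiple root while evaluating only ψ itself.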
Unlike Newton-like methods, Traub–Steffensen-like methods are difficult to construct. Recently, a family of two-step Traub–Steffensen-like methods with fourth order convergence has been proposed in [16]. In terms of computational cost, the methods of [16] use three function evaluations per iteration and thus possess optimal fourth order convergence according to the Kung–Traub conjecture (see [17]). This hypothesis states that multi-point methods without memory requiring $m$ functional evaluations can attain at most the convergence order $2^{m-1}$, called the optimal order. Such methods are usually known as optimal methods. Our aim in this work is to develop derivative-free multiple-root methods of good computational efficiency, that is, methods of higher convergence order with as little computational work as possible. Consequently, we introduce a class of Traub–Steffensen-like derivative-free fourth order methods that require three new pieces of information of the function $\psi$ per iteration and therefore have optimal fourth order convergence according to the Kung–Traub conjecture. The iterative formula consists of two steps, with the Traub–Steffensen iteration (2) in the first step and a Traub–Steffensen-like iteration in the second step. Performance is tested numerically on many problems of different kinds. Moreover, comparison with existing modified Newton-like methods verifies the robust and efficient nature of the proposed methods.
We summarize the contents of the paper. In Section 2, the fourth order iterative scheme is formulated and its convergence order is studied separately for different cases. The main result, showing the unification of these cases, is presented in Section 3. Section 4 contains the basins of attraction, drawn to assess the convergence domains of the new methods. In Section 5, numerical experiments are performed on different problems to demonstrate the accuracy and efficiency of the methods. Concluding remarks are reported in Section 6.

2. Development of a Novel Scheme

Researchers have used different approaches to develop higher order iterative methods for solving nonlinear equations. Some of them are: the interpolation approach, sampling approach, composition approach, geometrical approach, Adomian approach, and weight-function approach. Of these, the weight-function approach has been the most popular in recent times; see, for example, Refs. [10,13,14,18,19] and references therein. Using this approach, we consider the following two-step iterative scheme for finding a multiple root with multiplicity $\mu \geq 2$:
$$z_k = u_k - \mu\,\frac{\psi(u_k)}{\psi[v_k, u_k]}, \qquad u_{k+1} = z_k - G(h)\Big(1 + \frac{1}{y_k}\Big)\frac{\psi(u_k)}{\psi[v_k, u_k]},$$
where $h = \frac{x_k}{1 + x_k}$, $x_k = \big(\frac{\psi(z_k)}{\psi(u_k)}\big)^{1/\mu}$, $y_k = \big(\frac{\psi(v_k)}{\psi(u_k)}\big)^{1/\mu}$, and $G : \mathbb{C} \to \mathbb{C}$ is analytic in a neighborhood of zero. This iterative scheme is weighted by the factors $G(h)$ and $1 + \frac{1}{y_k}$, hence the name weight-factor or weight-function technique.
Note that $x_k$ and $y_k$ are one-to-$\mu$ multi-valued functions, so we consider their principal analytic branches [18]. Hence, it is convenient to treat each as the principal root. For example, for $x_k$ the principal root is given by $x_k = \exp\big(\frac{1}{\mu}\,\mathrm{Log}\,\frac{\psi(z_k)}{\psi(u_k)}\big)$, with $\mathrm{Log}\,\frac{\psi(z_k)}{\psi(u_k)} = \ln\big|\frac{\psi(z_k)}{\psi(u_k)}\big| + i\,\mathrm{Arg}\,\frac{\psi(z_k)}{\psi(u_k)}$ for $-\pi < \mathrm{Arg}\,\frac{\psi(z_k)}{\psi(u_k)} \leq \pi$; this convention of $\mathrm{Arg}(p)$ for $p \in \mathbb{C}$ agrees with that of the Log[p] command of Mathematica [20], to be employed later in the sections on basins of attraction and numerical experiments. We treat $y_k$ similarly.
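This principal-branch convention is what standard complex libraries implement by default; for instance, Python's `cmath.log` uses exactly the branch $-\pi < \mathrm{Arg}\,z \leq \pi$. A brief sketch (the helper name is illustrative):

```python
import cmath

def principal_mu_root(ratio, mu):
    """Principal mu-th root x = exp((1/mu) Log(ratio)),
    with Log z = ln|z| + i Arg z and -pi < Arg z <= pi."""
    return cmath.exp(cmath.log(ratio) / mu)

# the principal square root of -4 is 2i (Arg = pi), not -2i
x = principal_mu_root(-4 + 0j, 2)
```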
In the sequel, we prove the fourth order convergence of the proposed iterative scheme (3). For simplicity, the results are obtained separately for cases depending upon the multiplicity μ. First, we consider the case μ = 2.
Theorem 1. 
Assume that $u = \alpha$ is a zero of multiplicity $\mu = 2$ of the function $\psi(u)$, where $\psi : \mathbb{C} \to \mathbb{C}$ is sufficiently differentiable in a domain containing α. Suppose that the initial point $u_0$ is sufficiently close to α; then, the order of convergence of scheme (3) is at least four, provided that the weight function $G(h)$ satisfies the conditions $G(0) = 0$, $G'(0) = 1$, $G''(0) = 6$ and $|G'''(0)| < \infty$.
Proof. 
Assume that $\varepsilon_k = u_k - \alpha$ is the error at the $k$-th iteration. Expanding $\psi(u_k)$ about α by the Taylor series, keeping in mind that $\psi(\alpha) = 0$, $\psi'(\alpha) = 0$ and $\psi^{(2)}(\alpha) \neq 0$, we have that
$$\psi(u_k) = \frac{\psi^{(2)}(\alpha)}{2!}\,\varepsilon_k^2\big(1 + A_1\varepsilon_k + A_2\varepsilon_k^2 + A_3\varepsilon_k^3 + A_4\varepsilon_k^4 + \cdots\big),$$
where $A_m = \frac{2!}{(2+m)!}\,\frac{\psi^{(2+m)}(\alpha)}{\psi^{(2)}(\alpha)}$ for $m \in \mathbb{N}$.
Similarly, Taylor series expansion of ψ ( v k ) is
$$\psi(v_k) = \frac{\psi^{(2)}(\alpha)}{2!}\,\varepsilon_{v_k}^2\big(1 + A_1\varepsilon_{v_k} + A_2\varepsilon_{v_k}^2 + A_3\varepsilon_{v_k}^3 + A_4\varepsilon_{v_k}^4 + \cdots\big),$$
where $\varepsilon_{v_k} = v_k - \alpha = \varepsilon_k + \beta\,\frac{\psi^{(2)}(\alpha)}{2!}\,\varepsilon_k^2\big(1 + A_1\varepsilon_k + A_2\varepsilon_k^2 + A_3\varepsilon_k^3 + A_4\varepsilon_k^4 + \cdots\big)$.
By using (4) and (5) in the first step of (3), we obtain
$$\varepsilon_{z_k} = z_k - \alpha = \frac{1}{2}\Big(\frac{\beta\psi^{(2)}(\alpha)}{2} + A_1\Big)\varepsilon_k^2 - \frac{1}{16}\Big(\big(\beta\psi^{(2)}(\alpha)\big)^2 - 8\beta\psi^{(2)}(\alpha)A_1 + 12A_1^2 - 16A_2\Big)\varepsilon_k^3 + \frac{1}{64}\Big(\big(\beta\psi^{(2)}(\alpha)\big)^3 - 20\beta\psi^{(2)}(\alpha)A_1^2 + 72A_1^3 + 64\beta\psi^{(2)}(\alpha)A_2 - 10A_1\big(\big(\beta\psi^{(2)}(\alpha)\big)^2 + 16A_2\big) + 96A_3\Big)\varepsilon_k^4 + O(\varepsilon_k^5).$$
In addition, we have that
$$\psi(z_k) = \frac{\psi^{(2)}(\alpha)}{2!}\,\varepsilon_{z_k}^2\big(1 + A_1\varepsilon_{z_k} + A_2\varepsilon_{z_k}^2 + \cdots\big).$$
Using (4), (5) and (7), we further obtain
x k = 1 2 β ψ ( 2 ) ( α ) 2 + A 1 ε k 1 16 ( β ψ ( 2 ) ( α ) ) 2 6 β ψ ( 2 ) ( α ) A 1 + 16 ( A 1 2 A 2 ) ε k 2 + 1 64 ( ( β ψ ( 2 ) ( α ) ) 3 22 β ψ ( 2 ) ( α ) A 1 2 + 4 29 A 1 3 + 14 β ψ ( 2 ) ( α ) A 2 2 A 1 3 ( β ψ ( 2 ) ( α ) ) 2 + 104 A 2 + 96 A 3 ) ε k 3 + 1 256 ( 212 β ψ ( 2 ) ( α ) A 1 3 800 A 1 4 + 2 A 1 2 ( 7 ( β ψ ( 2 ) ( α ) ) 2 + 1040 A 2 ) + 2 A 1 ( 3 ( β ψ ( 2 ) ( α ) ) 3 232 β ψ ( 2 ) ( α ) A 2 576 A 3 ) ( ( β ψ ( 2 ) ( α ) ) 4 + 8 β ψ ( 2 ) ( α ) A 2 + 640 A 2 2 416 β ψ ( 2 ) ( α ) A 3 512 A 4 ) ) ε k 4 + O ( ε k 5 )
and
$$y_k = 1 + \frac{\beta\psi^{(2)}(\alpha)}{2}\,\varepsilon_k\Big(1 + \frac{3}{2}A_1\varepsilon_k + \frac{1}{4}\big(\beta\psi^{(2)}(\alpha)A_1 + 8A_2\big)\varepsilon_k^2 + \frac{1}{16}\big(3\beta\psi^{(2)}(\alpha)A_1^2 + 12\beta\psi^{(2)}(\alpha)A_2 + 40A_3\big)\varepsilon_k^3 + O(\varepsilon_k^4)\Big).$$
Using (8), we have
h = 1 2 β ψ ( 2 ) ( α ) 2 + A 1 ε k 1 8 ( β ψ ( 2 ) ( α ) ) 2 β ψ ( 2 ) ( α ) A 1 2 ( 4 A 2 5 A 1 2 ) ε k 2 + 1 32 ( β ψ ( 2 ) ( α ) A 1 2 + 94 A 1 3   4 A 1 ( ( β ψ ( 2 ) ( α ) ) 2 + 34 A 2 ) + 2 ( ( β ψ ( 2 ) ( α ) ) 3 + 6 β ψ ( 2 ) ( α ) A 2 + 24 A 3 ) ) ε k 3 + 1 128 ( 54 β ψ ( 2 ) ( α ) A 1 3 864 A 1 4   + A 1 2 ( 1808 A 2 13 ( β ψ ( 2 ) ( α ) ) 2 ) + 2 A 1 ( 6 ( β ψ ( 2 ) ( α ) ) 3 68 β ψ ( 2 ) ( α ) A 2 384 A 3 ) 4 ( ( β ψ ( 2 ) ( α ) ) 4   + 5 ( β ψ ( 2 ) ( α ) ) 2 A 2 + 112 A 2 2 28 β ψ ( 2 ) ( α ) A 3 64 A 4 ) ) ε k 4 + O ( ε k 5 ) .
Taylor expansion of the weight function G ( h ) in the neighborhood of origin up to third-order terms is given by
$$G(h) \approx G(0) + h\,G'(0) + \frac{1}{2}h^2 G''(0) + \frac{1}{6}h^3 G'''(0).$$
Using (4)–(11) in the last step of (3), we have
$$\varepsilon_{k+1} = G(0)\,\varepsilon_k + \frac{1}{4}\Big(\beta\psi^{(2)}(\alpha)\big(1 + 2G(0) - G'(0)\big) + 2\big(1 + G(0) - G'(0)\big)A_1\Big)\varepsilon_k^2 + \sum_{n=1}^{2}\phi_n\,\varepsilon_k^{n+2} + O(\varepsilon_k^5),$$
where $\phi_n = \phi_n(\beta, A_1, A_2, A_3, G(0), G'(0), G''(0), G'''(0))$, $n = 1, 2$. The expressions of $\phi_1$ and $\phi_2$, being very lengthy, are not reproduced explicitly.
We can obtain at least fourth order convergence if we set the coefficients of $\varepsilon_k$, $\varepsilon_k^2$ and $\varepsilon_k^3$ simultaneously equal to zero. Some simple calculations then yield
$$G(0) = 0, \quad G'(0) = 1, \quad G''(0) = 6.$$
Using (13) in (12), we obtain the final error equation
$$\varepsilon_{k+1} = \frac{1}{192}\Big(\frac{\beta\psi^{(2)}(\alpha)}{2} + A_1\Big)\Big(\big(G'''(0) - 42\big)\big(\beta\psi^{(2)}(\alpha)\big)^2 + 4\big(G'''(0) - 45\big)\beta\psi^{(2)}(\alpha)A_1 + 4\big(G'''(0) - 63\big)A_1^2 + 48A_2\Big)\varepsilon_k^4 + O(\varepsilon_k^5).$$
Thus, the theorem is proved. □
Next, we prove the following theorem for the case μ = 3.
Theorem 2. 
Under the assumptions of Theorem 1, the convergence order of scheme (3) for the case μ = 3 is at least four, if $G(0) = 0$, $G'(0) = \frac{3}{2}$, $G''(0) = 9$ and $|G'''(0)| < \infty$.
Proof. 
Taking into account that $\psi(\alpha) = 0$, $\psi'(\alpha) = 0$, $\psi''(\alpha) = 0$ and $\psi^{(3)}(\alpha) \neq 0$, the Taylor series development of $\psi(u_k)$ about α gives
$$\psi(u_k) = \frac{\psi^{(3)}(\alpha)}{3!}\,\varepsilon_k^3\big(1 + B_1\varepsilon_k + B_2\varepsilon_k^2 + B_3\varepsilon_k^3 + B_4\varepsilon_k^4 + \cdots\big),$$
where $B_m = \frac{3!}{(3+m)!}\,\frac{\psi^{(3+m)}(\alpha)}{\psi^{(3)}(\alpha)}$ for $m \in \mathbb{N}$.
Expanding $\psi(v_k)$ about α,
$$\psi(v_k) = \frac{\psi^{(3)}(\alpha)}{3!}\,\varepsilon_{v_k}^3\big(1 + B_1\varepsilon_{v_k} + B_2\varepsilon_{v_k}^2 + B_3\varepsilon_{v_k}^3 + B_4\varepsilon_{v_k}^4 + \cdots\big),$$
where $\varepsilon_{v_k} = v_k - \alpha = \varepsilon_k + \beta\,\frac{\psi^{(3)}(\alpha)}{3!}\,\varepsilon_k^3\big(1 + B_1\varepsilon_k + B_2\varepsilon_k^2 + B_3\varepsilon_k^3 + B_4\varepsilon_k^4 + \cdots\big)$.
Then, using (15) and (16) in the first step of (3), we obtain
$$\varepsilon_{z_k} = z_k - \alpha = \frac{B_1}{3}\,\varepsilon_k^2 + \frac{1}{18}\big(3\beta\psi^{(3)}(\alpha) - 8B_1^2 + 12B_2\big)\varepsilon_k^3 + \frac{1}{27}\Big(16B_1^3 + 3B_1\big(2\beta\psi^{(3)}(\alpha) - 13B_2\big) + 27B_3\Big)\varepsilon_k^4 + O(\varepsilon_k^5).$$
Expansion of ψ ( z k ) about α yields
$$\psi(z_k) = \frac{\psi^{(3)}(\alpha)}{3!}\,\varepsilon_{z_k}^3\big(1 + B_1\varepsilon_{z_k} + B_2\varepsilon_{z_k}^2 + B_3\varepsilon_{z_k}^3 + B_4\varepsilon_{z_k}^4 + \cdots\big).$$
Then, from (15), (16), and (18), it follows that
x k = B 1 3 ε k + 1 18 3 β ψ ( 3 ) ( α ) 10 B 1 2 + 12 B 2 ε k 2 + 1 54 46 B 1 3 + 3 B 1 3 ψ ( 3 ) ( α ) β 32 B 2 + 54 B 3 ε k 3 1 486 ( 610 B 1 4 B 1 2 ( 1818 B 2 27 β ψ ( 3 ) ( α ) ) + 1188 B 1 B 3 + 9 ( β ψ ( 3 ) ( α ) ) 2 15 β ψ ( 3 ) ( α ) B 2 + 72 B 2 2 72 B 4 ) ε k 4 + O ( ε k 5 )
and
$$y_k = 1 + \beta\,\frac{\psi^{(3)}(\alpha)}{3!}\,\varepsilon_k^2\Big(1 + \frac{4}{3}B_1\varepsilon_k + \frac{5}{3}B_2\varepsilon_k^2 + \frac{1}{18}\big(\beta\psi^{(3)}(\alpha)B_1 + 36B_3\big)\varepsilon_k^3 + O(\varepsilon_k^4)\Big).$$
Using (19), we have
h = B 1 3 ε k + 1 6 β ψ ( 3 ) ( α ) 4 B 1 2 + 4 B 2 ε k 2 + 1 54 68 B 1 3 + 3 B 1 β ψ ( 3 ) ( α ) 40 B 2 + 54 B 3 ε k 3 1 2916 ( 6792 B 1 4 108 B 1 2 ( 159 B 2 + 2 β ψ ( 3 ) ( α ) ) + 9072 B 1 B 3 27 5 ( β ψ ( 3 ) ( α ) ) 2 + 6 β ψ ( 3 ) ( α ) B 2 192 B 2 2 + 144 B 4 ) ε k 4 + O ( ε k 5 ) .
Developing weight function G ( h ) about origin by the Taylor series expansion,
$$G(h) \approx G(0) + h\,G'(0) + \frac{1}{2}h^2 G''(0) + \frac{1}{6}h^3 G'''(0).$$
By using (15)–(22) in the last step of (3), we have
$$\varepsilon_{k+1} = \frac{2G(0)}{3}\,\varepsilon_k + \frac{1}{9}\big(3 + 2G(0) - 2G'(0)\big)B_1\,\varepsilon_k^2 + \sum_{n=1}^{2}\varphi_n\,\varepsilon_k^{n+2} + O(\varepsilon_k^5),$$
where $\varphi_n = \varphi_n(\beta, B_1, B_2, B_3, G(0), G'(0), G''(0), G'''(0))$, $n = 1, 2$.
To obtain fourth order convergence, it is sufficient to set the coefficients of $\varepsilon_k$, $\varepsilon_k^2$, and $\varepsilon_k^3$ simultaneously equal to zero. This process yields
$$G(0) = 0, \quad G'(0) = \frac{3}{2}, \quad G''(0) = 9.$$
Then, the error equation (23) is given by
$$\varepsilon_{k+1} = \frac{B_1}{972}\Big(27\beta\psi^{(3)}(\alpha) + 4\big(G'''(0) - 99\big)B_1^2 + 108B_2\Big)\varepsilon_k^4 + O(\varepsilon_k^5).$$
Hence, the result is proved. □
Remark 1. 
We can observe from the above results that three conditions are imposed on $G(h)$ in each of the cases μ = 2, 3 to attain fourth order convergence of method (3). These cases satisfy the common conditions $G(0) = 0$, $G'(0) = \frac{\mu}{2}$, $G''(0) = 3\mu$. Their error equations also contain terms involving the parameter β. However, for the cases μ ≥ 4, it has been seen that the error equation does not contain a β term. We shall prove this fact in the next section.

3. Main Result

We shall prove the convergence order of scheme (3) for multiplicity μ ≥ 4 by the following theorem:
Theorem 3. 
Under the assumptions of Theorem 1, the convergence order of scheme (3) for μ ≥ 4 is at least four, provided that $G(0) = 0$, $G'(0) = \frac{\mu}{2}$, $G''(0) = 3\mu$ and $|G'''(0)| < \infty$. Moreover, the error in the scheme is given by
$$\varepsilon_{k+1} = \frac{1}{6\mu^4}\Big(\big(3\mu(19+\mu) - 2G'''(0)\big)F_1^3 - 6\mu^2 F_1 F_2\Big)\varepsilon_k^4 + O(\varepsilon_k^5),$$
where $F_m = \frac{\mu!}{(\mu+m)!}\,\frac{\psi^{(\mu+m)}(\alpha)}{\psi^{(\mu)}(\alpha)}$ for $m \in \mathbb{N}$.
Proof. 
Taking into account that $\psi^{(i)}(\alpha) = 0$ for $i = 0, 1, 2, \ldots, \mu - 1$ and $\psi^{(\mu)}(\alpha) \neq 0$, the Taylor series expansion of $\psi(u_k)$ about α is
$$\psi(u_k) = \frac{\psi^{(\mu)}(\alpha)}{\mu!}\,\varepsilon_k^{\mu}\big(1 + F_1\varepsilon_k + F_2\varepsilon_k^2 + F_3\varepsilon_k^3 + F_4\varepsilon_k^4 + \cdots\big).$$
Taylor expansion of ψ ( v k ) about α yields
$$\psi(v_k) = \frac{\psi^{(\mu)}(\alpha)}{\mu!}\,\varepsilon_{v_k}^{\mu}\big(1 + F_1\varepsilon_{v_k} + F_2\varepsilon_{v_k}^2 + F_3\varepsilon_{v_k}^3 + F_4\varepsilon_{v_k}^4 + \cdots\big),$$
where $\varepsilon_{v_k} = v_k - \alpha = \varepsilon_k + \beta\,\frac{\psi^{(\mu)}(\alpha)}{\mu!}\,\varepsilon_k^{\mu}\big(1 + F_1\varepsilon_k + F_2\varepsilon_k^2 + F_3\varepsilon_k^3 + F_4\varepsilon_k^4 + \cdots\big)$.
Using (26) and (27) in the first step of (3), we obtain
$$\varepsilon_{z_k} = \begin{cases} \dfrac{F_1}{4}\,\varepsilon_k^2 + \dfrac{1}{16}\big(8F_2 - 5F_1^2\big)\varepsilon_k^3 + \dfrac{1}{64}\big(4\beta\psi^{(4)}(\alpha) + 25F_1^3 - 64F_1F_2 + 48F_3\big)\varepsilon_k^4 + O(\varepsilon_k^5), & \text{if } \mu = 4,\\[1ex] \dfrac{F_1}{\mu}\,\varepsilon_k^2 + \dfrac{1}{\mu^2}\big(2\mu F_2 - (1+\mu)F_1^2\big)\varepsilon_k^3 + \dfrac{1}{\mu^3}\big((1+\mu)^2F_1^3 - \mu(4+3\mu)F_1F_2 + 3\mu^2F_3\big)\varepsilon_k^4 + O(\varepsilon_k^5), & \text{if } \mu \geq 5, \end{cases}$$
where ε z k = z k α .
Expansion of ψ ( z k ) around α yields
$$\psi(z_k) = \frac{\psi^{(\mu)}(\alpha)}{\mu!}\,\varepsilon_{z_k}^{\mu}\big(1 + F_1\varepsilon_{z_k} + F_2\varepsilon_{z_k}^2 + F_3\varepsilon_{z_k}^3 + F_4\varepsilon_{z_k}^4 + \cdots\big).$$
Using (26), (27) and (29), we have that
x k = F 1 4 ε k + 1 8 4 F 2 3 F 1 2 ε k 2 + 1 128 8 β ψ ( 4 ) ( α ) + 67 F 1 3 152 F 1 F 2 + 96 F 3 ε k 3 + 1 768 ( 543 F 1 4 + 1740 F 1 2 F 2 + 4 F 1 ( 11 β ψ ( 4 ) ( α ) 312 F 3 ) + 96 ( 7 F 2 2 + 8 F 4 ) ) ε k 4 + O ( ε k 5 ) , if μ = 4 , F 1 5 ε k + 1 25 10 F 2 7 F 1 2 ε k 2 + 1 125 46 F 1 3 110 F 1 F 2 + 75 F 3 ε k 3 + ( β ψ ( 5 ) ( α ) 60 294 625 F 1 4 + 197 125 F 1 2 F 2 16 25 F 2 2 6 5 F 1 F 3 + 4 5 F 4 ) ε k 4 + O ( ε k 5 ) , if μ = 5 , F 1 μ ε k + 1 μ 2 2 μ F 2 ( 2 + μ ) F 1 2 ε k 2 + 1 2 μ 3 ( 7 + 7 μ + 2 μ 2 ) F 1 3 2 μ ( 7 + 3 μ ) F 1 F 2 + 6 μ 2 F 3 ε k 3 1 6 μ 4 ( ( 34 + 51 μ + 29 μ 2 + 6 μ 3 ) F 1 4 6 μ ( 17 + 16 μ + 4 μ 2 ) F 1 2 F 2 + 12 μ 2 ( 3 + μ ) F 2 2 + 12 μ 2 ( 5 + 2 μ ) F 1 F 3 ) ε k 4 + O ( ε k 5 ) , if μ 6
and
$$y_k = 1 + \beta\,\frac{\psi^{(\mu)}(\alpha)}{\mu!}\,\varepsilon_k^{\mu-1}\Big(1 + \frac{(\mu+1)F_1}{\mu}\,\varepsilon_k + \frac{(\mu+2)F_2}{\mu}\,\varepsilon_k^2 + \frac{(\mu+3)F_3}{\mu}\,\varepsilon_k^3 + \frac{(\mu+4)F_4}{\mu}\,\varepsilon_k^4 + O(\varepsilon_k^5)\Big).$$
Using (30), we obtain that
h = F 1 4 ε k + 1 16 8 μ F 2 7 F 1 2 ε k 2 + 1 128 8 β ψ ( 4 ) ( α ) + 93 F 1 3 184 F 1 F 2 + 96 F 3 ε k 3 + 303 256 F 1 4 + 213 64 F 1 2 F 2 9 8 F 2 2 + F 1 ( 5 192 β ψ ( 4 ) ( α ) 2 F 3 ) + F 4 ε k 4 + O ( ε k 5 ) , if μ = 4 , F 1 5 ε k + 1 25 10 F 2 8 F 1 2 ε k 2 + 1 125 61 F 1 3 130 F 1 F 2 + 75 F 3 ε k 3 + 457 625 F 1 4 + 11 5 F 1 2 F 2 36 25 F 1 F 3 + 1 60 ( β ψ ( 5 ) ( α ) 48 F 2 2 + 48 F 4 ) ε k 4 + O ( ε k 5 ) , if μ = 5 , F 1 μ ε k + 1 μ 2 2 μ F 2 ( 3 + μ ) F 1 2 ε k 2 + 1 2 μ 3 ( 17 + 11 μ + 2 μ 2 ) F 1 3 2 μ ( 11 + 3 μ ) F 1 F 2 + 6 μ 2 F 3 ε k 3 1 6 μ 4 ( ( 142 + 135 μ + 47 μ 2 + 6 μ 3 ) F 1 4 6 μ ( 45 + 26 μ + 4 μ 2 ) F 1 2 F 2 + 12 μ 2 ( 5 + μ ) F 2 2 + 24 μ 2 ( 4 + μ ) F 1 F 3 ) ε k 4 + O ( ε k 5 ) , if μ 6 .
Developing weight function G ( h ) about origin by the Taylor series expansion,
$$G(h) \approx G(0) + h\,G'(0) + \frac{1}{2}h^2 G''(0) + \frac{1}{6}h^3 G'''(0).$$
Using (26)–(33) in the last step of (3), we get
$$\varepsilon_{k+1} = \frac{2G(0)}{\mu}\,\varepsilon_k + \frac{1}{\mu^2}\big(2G(0) - 2G'(0) + \mu\big)F_1\,\varepsilon_k^2 + \sum_{n=1}^{2}\chi_n\,\varepsilon_k^{n+2} + O(\varepsilon_k^5),$$
where $\chi_n = \chi_n(\beta, F_1, F_2, F_3, G(0), G'(0), G''(0), G'''(0))$ when $\mu = 4, 5$ and $\chi_n = \chi_n(F_1, F_2, F_3, G(0), G'(0), G''(0), G'''(0))$ when $\mu \geq 6$, for $n = 1, 2$.
The fourth order convergence can be attained if we set the coefficients of $\varepsilon_k$, $\varepsilon_k^2$ and $\varepsilon_k^3$ simultaneously equal to zero. Then, the resulting equations yield
$$G(0) = 0, \quad G'(0) = \frac{\mu}{2}, \quad G''(0) = 3\mu.$$
As a result, the error equation is given by
$$\varepsilon_{k+1} = \frac{1}{6\mu^4}\Big(\big(3\mu(19+\mu) - 2G'''(0)\big)F_1^3 - 6\mu^2 F_1 F_2\Big)\varepsilon_k^4 + O(\varepsilon_k^5).$$
This proves the result. □
Remark 2. 
The proposed scheme (3) achieves fourth order convergence under the conditions on the weight function $G(h)$ shown in Theorems 1–3. This convergence rate is attained by using only three functional evaluations, viz. $\psi(u_k)$, $\psi(v_k)$ and $\psi(z_k)$, per iteration. Therefore, the iterative scheme (3) is optimal according to the Kung–Traub conjecture [17].
Remark 3. 
Note that the parameter β, which is used in $v_k$, appears only in the error equations of the cases μ = 2, 3 but not for μ ≥ 4 (see Equation (36)). However, for μ ≥ 4, we have observed that this parameter appears in the terms of $\varepsilon_k^5$ and higher order. Such terms are difficult to compute in general; however, we do not need them in order to establish the required fourth order of convergence. Note also that Theorems 1–3 are presented separately to show the difference in the error expressions. Nevertheless, the weight function $G(h)$ satisfies the common conditions $G(0) = 0$, $G'(0) = \frac{\mu}{2}$, $G''(0) = 3\mu$ for every μ ≥ 2.

Some Special Cases

Based on various forms of function G ( h ) that satisfy the conditions of Theorem 3, numerous special cases of the family (3) can be explored. The following are some simple forms:
$$(1)\;\; G(h) = \frac{\mu h(1+3h)}{2}, \qquad (2)\;\; G(h) = \frac{\mu h}{2 - 6h}, \qquad (3)\;\; G(h) = \frac{\mu h(\mu - 2h)}{2\big(\mu - (2+3\mu)h + 2\mu h^2\big)}, \qquad (4)\;\; G(h) = \frac{\mu h(3 - h)}{6 - 20h}.$$
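Each candidate form can be checked against the common conditions $G(0) = 0$, $G'(0) = \mu/2$, $G''(0) = 3\mu$ of Theorem 3. The following finite-difference sanity check is an illustrative helper (not part of the original development):

```python
def check_weight_conditions(G, mu, eps=1e-5):
    """Numerically verify G(0) = 0, G'(0) = mu/2, G''(0) = 3*mu
    using central differences at the origin."""
    g0 = G(0.0)
    g1 = (G(eps) - G(-eps)) / (2 * eps)         # approximates G'(0)
    g2 = (G(eps) - 2 * g0 + G(-eps)) / eps**2   # approximates G''(0)
    return (abs(g0) < 1e-12
            and abs(g1 - mu / 2) < 1e-4
            and abs(g2 - 3 * mu) < 1e-3)

mu = 5
ok = check_weight_conditions(lambda h: mu * h * (1 + 3 * h) / 2, mu)  # form (1)
```

The same check can be applied to any other candidate weight function before using it in the family (3).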
The corresponding method to each of the above forms can be expressed as follows:
Method 1 (M1) :
$$u_{k+1} = z_k - \frac{\mu h(1+3h)}{2}\Big(1 + \frac{1}{y_k}\Big)\frac{\psi(u_k)}{\psi[v_k, u_k]}.$$
Method 2 (M2) :
$$u_{k+1} = z_k - \frac{\mu h}{2 - 6h}\Big(1 + \frac{1}{y_k}\Big)\frac{\psi(u_k)}{\psi[v_k, u_k]}.$$
Method 3 (M3) :
$$u_{k+1} = z_k - \frac{\mu h(\mu - 2h)}{2\big(\mu - (2+3\mu)h + 2\mu h^2\big)}\Big(1 + \frac{1}{y_k}\Big)\frac{\psi(u_k)}{\psi[v_k, u_k]}.$$
Method 4 (M4) :
$$u_{k+1} = z_k - \frac{\mu h(3 - h)}{6 - 20h}\Big(1 + \frac{1}{y_k}\Big)\frac{\psi(u_k)}{\psi[v_k, u_k]}.$$
Note that, in all the above cases, z k has the following form:
$$z_k = u_k - \mu\,\frac{\psi(u_k)}{\psi[v_k, u_k]}.$$
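For concreteness, a complete iteration of Method M1 might be sketched as follows, with the principal μ-th root taken via complex logarithms; the function names and test problem are illustrative:

```python
import cmath

def m1_iterate(psi, u0, mu, beta=0.01, tol=1e-10, max_iter=50):
    """Method M1 of family (3): a Traub-Steffensen first step, then the
    weighted second step with G(h) = mu*h*(1 + 3h)/2."""
    proot = lambda r: cmath.exp(cmath.log(r) / mu)   # principal mu-th root
    u = complex(u0)
    for _ in range(max_iter):
        fu = psi(u)
        if fu == 0:
            return u
        v = u + beta * fu
        if v == u:                    # beta*psi(u) below machine precision
            return u
        dd = (psi(v) - fu) / (v - u)                 # psi[v_k, u_k]
        z = u - mu * fu / dd                         # Traub-Steffensen step
        fz = psi(z)
        if fz == 0:
            return z
        x = proot(fz / fu)
        y = proot(psi(v) / fu)
        h = x / (1 + x)
        G = mu * h * (1 + 3 * h) / 2                 # weight function of M1
        u_new = z - G * (1 + 1 / y) * fu / dd
        if abs(u_new - u) < tol:
            return u_new
        u = u_new
    return u

# triple root at u = 1 of psi(u) = (u - 1)^3 (u + 2)
root = m1_iterate(lambda u: (u - 1)**3 * (u + 2), 1.3, mu=3)
```

Methods M2–M4 differ only in the line computing `G`.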

4. Basins of Attraction

In this section, we present the complex dynamics of the proposed methods using the tool of basins of attraction, by applying the methods to some complex polynomials ψ(z). The basin of attraction of a root is an important geometrical tool for comparing the convergence regions of iterative methods [21,22,23]. To start with, let us recall some basic ideas concerning this graphical tool.
Let $R : \widehat{\mathbb{C}} \to \widehat{\mathbb{C}}$ be a rational mapping on the Riemann sphere. We define the orbit of a point $z_0 \in \mathbb{C}$ as the set $\{z_0, R(z_0), R^2(z_0), \ldots, R^n(z_0), \ldots\}$. A point $z_0 \in \mathbb{C}$ is a fixed point of the rational function $R$ if it satisfies $R(z_0) = z_0$. A point $z_0$ is said to be periodic with period $m > 1$ if $R^m(z_0) = z_0$, where $m$ is the smallest such integer. A fixed point $z_0$ is called attracting if $|R'(z_0)| < 1$, repelling if $|R'(z_0)| > 1$, neutral if $|R'(z_0)| = 1$, and superattracting if $R'(z_0) = 0$. Assume that $z_\psi^*$ is an attracting fixed point of the rational map $R$. Then, the basin of attraction of $z_\psi^*$ is defined as
$$A(z_\psi^*) = \{z_0 \in \mathbb{C} : R^n(z_0) \to z_\psi^* \text{ as } n \to \infty\}.$$
The set of points whose orbits tend to an attracting fixed point $z_\psi^*$ is called the Fatou set. The complementary set, called the Julia set, is the closure of the set of repelling fixed points and establishes the boundaries between the basins of the roots. Attraction basins allow us to identify the starting points that converge to a given root of a polynomial when an iterative method is applied, so we can visualize which points are good choices as starting points and which are not.
We select $z_0$ as an initial point in $D$, where $D$ is a rectangular region in $\mathbb{C}$ containing all the roots of the equation ψ(z) = 0. An iterative method starting from a point $z_0 \in D$ may converge to a zero of the function ψ(z) or may diverge. To assess the basins, we use an error tolerance of $10^{-3}$ with a maximum of 25 iterations. If this tolerance is not achieved within 25 iterations, the procedure is terminated, with the result indicating divergence of the iteration function started from $z_0$. While drawing the basins, the following criterion is adopted: a color is allotted to each initial guess $z_0$ in the attraction basin of a zero. If the iterative formula that begins at the point $z_0$ converges, then $z_0$ is painted with the color assigned to that basin of attraction; if the formula fails to converge in the required number of iterations, then $z_0$ is painted black.
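In outline, the basin pictures are produced by iterating each grid point of the rectangle and coloring it by the root its orbit approaches. A minimal text-mode sketch of this procedure, using only the first-step Traub–Steffensen iteration (2) for brevity (names are illustrative):

```python
import cmath

def basin_indices(psi, roots, mu, beta=0.01, n=11,
                  re=(-2.0, 2.0), im=(-2.0, 2.0), tol=1e-3, max_iter=25):
    """Classify each point of an n-by-n grid over a rectangle by the index of
    the root its orbit approaches; -1 marks non-convergence within max_iter
    (the points painted black in the figures)."""
    grid = []
    for j in range(n):
        row = []
        for i in range(n):
            z = complex(re[0] + (re[1] - re[0]) * i / (n - 1),
                        im[0] + (im[1] - im[0]) * j / (n - 1))
            label = -1
            for _ in range(max_iter):
                fz = psi(z)
                v = z + beta * fz
                if v == z:
                    break
                dd = (psi(v) - fz) / (v - z)
                if dd == 0:
                    break
                z = z - mu * fz / dd            # Traub-Steffensen step
                hits = [k for k, r in enumerate(roots) if abs(z - r) < tol]
                if hits:
                    label = hits[0]
                    break
            row.append(label)
        grid.append(row)
    return grid

def psi1(z):                 # test problem 1: (z^2 + z + 1)^2
    w = z * z + z + 1
    return w * w

roots1 = [-0.5 - 0.866025j, -0.5 + 0.866025j]
grid = basin_indices(psi1, roots1, mu=2)
```

Replacing the inner loop with the full two-step method and mapping labels to colors yields figures of the kind shown below.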
To view the complex dynamics, the proposed methods are applied on the following three problems:
Test problem 1. Consider the polynomial $\psi_1(z) = (z^2 + z + 1)^2$, which has two zeros $\{-0.5 - 0.866025i, -0.5 + 0.866025i\}$ of multiplicity μ = 2. The attraction basins for this polynomial are shown in Figure 1, Figure 2 and Figure 3, corresponding to the choices $\beta = 0.01, 10^{-4}, 10^{-6}$ of the parameter β. A color is assigned to each basin of attraction of a zero: red and green have been allocated to the basins of the zeros $-0.5 - 0.866025i$ and $-0.5 + 0.866025i$, respectively.
Test problem 2. Consider the polynomial $\psi_2(z) = \big(z^3 + \frac{1}{4}z\big)^3$, which has three zeros $\{\frac{i}{2}, -\frac{i}{2}, 0\}$ of multiplicity μ = 3. The basins of attraction assessed by the methods for this polynomial are drawn in Figure 4, Figure 5 and Figure 6, corresponding to the choices $\beta = 0.01, 10^{-4}, 10^{-6}$. The basin of each zero is identified by the color assigned to it: green, red, and blue correspond to $\frac{i}{2}$, $-\frac{i}{2}$, and 0, respectively.
Test problem 3. Next, let us consider the function $\psi_3(z) = \big(z^3 + \frac{1}{z}\big)^4$, which has four zeros $\{-0.707107 + 0.707107i, -0.707107 - 0.707107i, 0.707107 + 0.707107i, 0.707107 - 0.707107i\}$ of multiplicity μ = 4. The basins of attraction of the zeros are shown in Figure 7, Figure 8 and Figure 9, for the choices of the parameter $\beta = 0.01, 10^{-4}, 10^{-6}$. A color is assigned to each basin of attraction of a zero: we assign yellow, blue, red, and green to $-0.707107 + 0.707107i$, $-0.707107 - 0.707107i$, $0.707107 + 0.707107i$ and $0.707107 - 0.707107i$, respectively.
The choice of β plays an important role in selecting those members of the family (3) that possess good convergence behavior; this is also why different values of β have been chosen to assess the basins. The above graphics clearly indicate that the basins become wider for smaller values of the parameter β. Moreover, the black zones (used to indicate divergence) also diminish as β assumes smaller values. Thus, we conclude this section with the remark that the convergence of the proposed methods is better for smaller values of the parameter β.

5. Numerical Results

In order to validate the theoretical results shown in the previous sections, the new methods M1, M2, M3, and M4 are tested numerically by implementing them on some nonlinear equations. Moreover, they are compared with some existing optimal fourth order Newton-like methods: the methods by Li–Liao–Cheng [7], Li–Cheng–Neta [8], Sharma–Sharma [9], Zhou–Chen–Song [10], Soleymani–Babajee–Lotfi [12], and Kansal–Kanwar–Bhatia [14]. These methods are expressed as follows:
Li–Liao–Cheng method (LLCM):
z k = u k 2 μ μ + 2 ψ ( u k ) ψ ( u k ) , u k + 1 = u k μ ( μ 2 ) μ μ + 2 μ ψ ( z k ) μ 2 ψ ( u k ) ψ ( u k ) μ μ + 2 μ ψ ( z k ) ψ ( u k ) 2 ψ ( u k ) .
Li–Cheng–Neta method (LCNM):
z k = u k 2 μ μ + 2 ψ ( u k ) ψ ( u k ) , u k + 1 = u k α 1 ψ ( u k ) ψ ( z k ) ψ ( u k ) α 2 ψ ( u k ) + α 3 ψ ( z k ) ,
where
α 1 = 1 2 μ μ + 2 μ μ ( μ 4 + 4 μ 3 16 μ 16 ) μ 3 4 μ + 8 , α 2 = ( μ 3 4 μ + 8 ) 2 μ ( μ 4 + 4 μ 3 4 μ 2 16 μ + 16 ) ( μ 2 + 2 μ 4 ) , α 3 = μ 2 ( μ 3 4 μ + 8 ) μ μ + 2 μ ( μ 4 + 4 μ 3 4 μ 2 16 μ + 16 ) ( μ 2 + 2 μ 4 ) .
Sharma–Sharma method (SSM):
z k = u k 2 μ μ + 2 ψ ( u k ) ψ ( u k ) , u k + 1 = u k μ 8 [ ( μ 3 4 μ + 8 ) ( μ + 2 ) 2 μ μ + 2 μ ψ ( u k ) ψ ( z k ) × 2 ( μ 1 ) ( μ + 2 ) μ μ + 2 μ ψ ( u k ) ψ ( z k ) ] ψ ( u k ) ψ ( u k ) .
Zhou–Chen–Song method (ZCSM):
z k = u k 2 μ μ + 2 ψ ( u k ) ψ ( u k ) , u k + 1 = u k μ 8 [ μ 3 μ + 2 μ 2 μ ψ ( z k ) ψ ( u k ) 2 2 μ 2 ( μ + 3 ) μ + 2 μ μ ψ ( z k ) ψ ( u k ) + ( μ 3 + 6 μ 2 + 8 μ + 8 ) ] ψ ( u k ) ψ ( u k ) .
Soleymani–Babajee–Lotfi method (SBLM):
z k = u k 2 μ μ + 2 ψ ( u k ) ψ ( u k ) , u k + 1 = u k ψ ( z k ) ψ ( u k ) q 1 ( ψ ( z k ) ) 2 + q 2 ψ ( z k ) ψ ( u k ) + q 3 ( ψ ( u k ) ) 2 ,
where q 1 = 1 16 μ 3 μ ( μ + 2 ) μ , q 2 = 8 μ ( μ + 2 ) ( μ 2 2 ) 8 μ , q 3 = 1 16 ( μ 2 ) μ μ 1 ( μ + 2 ) 3 μ .
Kansal–Kanwar–Bhatia method (KKBM):
z k = u k 2 μ μ + 2 ψ ( u k ) ψ ( u k ) , u k + 1 = u k μ 4 ψ ( u k ) 1 + μ 4 p 2 μ p μ 1 ψ ( z k ) ψ ( u k ) 2 ( p μ 1 ) 8 ( 2 p μ + n ( p μ 1 ) ) × 4 2 μ + μ 2 ( p μ 1 ) ψ ( u k ) p μ ( 2 p μ + μ ( p μ 1 ) ) 2 ψ ( u k ) ψ ( z k ) ,
where p = μ μ + 2 .
Computations are performed with the Mathematica software package [20] on a PC with the following specifications: Intel(R) Pentium(R) CPU B960 @ 2.20 GHz (32-bit operating system), Microsoft Windows 7 Professional, and 4 GB RAM. Numerical tests are performed by choosing the value β = 0.01 in the new methods. The results displayed in Table 1 include: (i) the number of iterations ($k$) required to obtain the desired solution satisfying the condition $|u_{k+1} - u_k| + |\psi(u_k)| < 10^{-100}$, (ii) the estimated error $|u_{k+1} - u_k|$ in the first three consecutive iterations, (iii) the calculated convergence order (CCO), and (iv) the CPU time in seconds consumed in the execution of a program, measured by the command "TimeUsed[ ]". The calculated convergence order (CCO) is computed by the well-known formula (see [24])
$$\text{CCO} = \frac{\ln\big|(u_{k+2} - \alpha)/(u_{k+1} - \alpha)\big|}{\ln\big|(u_{k+1} - \alpha)/(u_k - \alpha)\big|}, \quad \text{for each } k = 1, 2, \ldots$$
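The formula is straightforward to evaluate from three consecutive iterates. A small sketch with a synthetic sequence whose errors satisfy $e_{k+1} = e_k^4$ (so the formula should recover the order 4):

```python
import math

def cco(u_k, u_k1, u_k2, alpha):
    """Calculated convergence order from three consecutive iterates
    and the known root alpha, per the formula above."""
    return (math.log(abs((u_k2 - alpha) / (u_k1 - alpha)))
            / math.log(abs((u_k1 - alpha) / (u_k - alpha))))

# errors 1e-2 -> 1e-8 -> 1e-32 follow e_{k+1} = e_k^4, so CCO is approximately 4
order = cco(1e-2, 1e-8, 1e-32, 0.0)
```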
The problems considered for numerical testing are shown in Table 2.
From the computed results in Table 1, we can observe the good convergence behavior of the proposed methods. The reason for the good convergence is the increase in accuracy of the successive approximations, as is evident from the values of the differences $|u_{k+1} - u_k|$. This also implies the stable nature of the methods. Moreover, the approximations to the solutions computed by the proposed methods have either greater or equal accuracy than those computed by the existing counterparts. The value 0 of $|u_{k+1} - u_k|$ indicates that the stopping criterion $|u_{k+1} - u_k| + |\psi(u_k)| < 10^{-100}$ has been satisfied at that stage. From the calculated convergence order, shown in the second last column of each table, we have verified the theoretical fourth order of convergence. The robustness of the new algorithms can also be judged from the fact that their CPU time is less than that of the existing techniques. This conclusion is also confirmed by similar numerical experiments on many other problems.

6. Conclusions

We have proposed a family of fourth order derivative-free numerical methods for obtaining multiple roots of nonlinear equations. The convergence analysis has been carried out under standard assumptions and proves convergence order four. The important feature of the designed scheme is its optimal order of convergence, which is rare to achieve in derivative-free methods. Some special cases of the family have been explored and employed to solve nonlinear equations, and their performance has been compared with existing techniques of a similar nature. The numerical results show the presented derivative-free methods to be good competitors to the established optimal fourth order techniques that use derivative information. We conclude with the remark that the proposed derivative-free methods can be a better alternative to existing Newton-type methods when derivatives are costly to evaluate.

Author Contributions

Methodology, J.R.S.; Writing—review & editing, J.R.S.; Investigation, S.K.; Data Curation, S.K.; Conceptualization, L.J.; Formal analysis, L.J. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Schröder, E. Über unendlich viele Algorithmen zur Auflösung der Gleichungen. Math. Ann. 1870, 2, 317–365.
2. Hansen, E.; Patrick, M. A family of root finding methods. Numer. Math. 1977, 27, 257–269.
3. Victory, H.D.; Neta, B. A higher order method for multiple zeros of nonlinear functions. Int. J. Comput. Math. 1983, 12, 329–335.
4. Dong, C. A family of multipoint iterative functions for finding multiple roots of equations. Int. J. Comput. Math. 1987, 21, 363–367.
5. Osada, N. An optimal multiple root-finding method of order three. J. Comput. Appl. Math. 1994, 51, 131–133.
6. Neta, B. New third order nonlinear solvers for multiple roots. Appl. Math. Comput. 2008, 202, 162–170.
7. Li, S.; Liao, X.; Cheng, L. A new fourth-order iterative method for finding multiple roots of nonlinear equations. Appl. Math. Comput. 2009, 215, 1288–1292.
8. Li, S.G.; Cheng, L.Z.; Neta, B. Some fourth-order nonlinear solvers with closed formulae for multiple roots. Comput. Math. Appl. 2010, 59, 126–135.
9. Sharma, J.R.; Sharma, R. Modified Jarratt method for computing multiple roots. Appl. Math. Comput. 2010, 217, 878–881.
10. Zhou, X.; Chen, X.; Song, Y. Constructing higher-order methods for obtaining the multiple roots of nonlinear equations. J. Comput. Appl. Math. 2011, 235, 4199–4206.
11. Sharifi, M.; Babajee, D.K.R.; Soleymani, F. Finding the solution of nonlinear equations by a class of optimal methods. Comput. Math. Appl. 2012, 63, 764–774.
12. Soleymani, F.; Babajee, D.K.R.; Lotfi, T. On a numerical technique for finding multiple zeros and its dynamics. J. Egypt. Math. Soc. 2013, 21, 346–353.
13. Geum, Y.H.; Kim, Y.I.; Neta, B. A class of two-point sixth-order multiple-zero finders of modified double-Newton type and their dynamics. Appl. Math. Comput. 2015, 270, 387–400.
14. Kansal, M.; Kanwar, V.; Bhatia, S. On some optimal multiple root-finding methods and their dynamics. Appl. Appl. Math. 2015, 10, 349–367.
15. Traub, J.F. Iterative Methods for the Solution of Equations; Chelsea Publishing Company: New York, NY, USA, 1982.
16. Sharma, J.R.; Kumar, S.; Jäntschi, L. On a class of optimal fourth order multiple root solvers without using derivatives. Symmetry 2019, 11, 766.
17. Kung, H.T.; Traub, J.F. Optimal order of one-point and multipoint iteration. J. Assoc. Comput. Mach. 1974, 21, 643–651.
18. Geum, Y.H.; Kim, Y.I.; Neta, B. Constructing a family of optimal eighth-order modified Newton-type multiple-zero finders along with the dynamics behind their purely imaginary extraneous fixed points. J. Comput. Appl. Math. 2018, 333, 131–156.
19. Benbernou, S.; Gala, S.; Ragusa, M.A. On the regularity criteria for the 3D magnetohydrodynamic equations via two components in terms of BMO space. Math. Meth. Appl. Sci. 2016, 37, 2320–2325.
20. Wolfram, S. The Mathematica Book, 5th ed.; Wolfram Media: Champaign, IL, USA, 2003.
21. Vrscay, E.R.; Gilbert, W.J. Extraneous fixed points, basin boundaries and chaotic dynamics for Schröder and König rational iteration functions. Numer. Math. 1988, 52, 1–16.
22. Varona, J.L. Graphic and numerical comparison between iterative methods. Math. Intell. 2002, 24, 37–46.
23. Argyros, I.K.; Magreñán, Á.A. Iterative Methods and Their Dynamics with Applications; CRC Press: New York, NY, USA, 2017.
24. Weerakoon, S.; Fernando, T.G.I. A variant of Newton’s method with accelerated third-order convergence. Appl. Math. Lett. 2000, 13, 87–93.
Figure 1. Basins of attraction by M-1–M-4 (β = 0.01) for polynomial ψ1(z).
Figure 2. Basins of attraction by M-1–M-4 (β = 10⁻⁴) for polynomial ψ1(z).
Figure 3. Basins of attraction by M-1–M-4 (β = 10⁻⁶) for polynomial ψ1(z).
Figure 4. Basins of attraction by M-1–M-4 (β = 0.01) for polynomial ψ2(z).
Figure 5. Basins of attraction by M-1–M-4 (β = 10⁻⁴) for polynomial ψ2(z).
Figure 6. Basins of attraction by methods M-1–M-4 (β = 10⁻⁶) for polynomial ψ2(z).
Figure 7. Basins of attraction by M-1–M-4 (β = 0.01) for polynomial ψ3(z).
Figure 8. Basins of attraction by M-1–M-4 (β = 10⁻⁴) for polynomial ψ3(z).
Figure 9. Basins of attraction by M-1–M-4 (β = 10⁻⁶) for polynomial ψ3(z).
Table 1. Comparison of numerical results.

Methods   k   |u2 − u1|      |u3 − u2|       |u4 − u3|       COC     CPU-Time

ψ1(u)
LLCM      6   7.84 × 10⁻²    6.31 × 10⁻³     1.06 × 10⁻⁵     4.000   0.0784
LCNM      6   7.84 × 10⁻²    6.31 × 10⁻³     1.006 × 10⁻⁵    4.000   0.0822
SSM       6   7.99 × 10⁻²    6.78 × 10⁻³     1.44 × 10⁻⁵     4.000   0.0943
ZCSM      6   8.31 × 10⁻²    7.83 × 10⁻³     2.76 × 10⁻⁵     4.000   0.0956
SBLM      6   7.84 × 10⁻²    6.31 × 10⁻³     1.06 × 10⁻⁵     4.000   0.0874
KKBM      6   7.74 × 10⁻²    5.97 × 10⁻³     7.31 × 10⁻⁶     4.000   0.0945
M-1       6   9.20 × 10⁻²    1.16 × 10⁻²     1.16 × 10⁻⁴     4.000   0.0774
M-2       6   6.90 × 10⁻²    3.84 × 10⁻³     1.03 × 10⁻⁶     4.000   0.0794
M-3       6   6.21 × 10⁻²    2.39 × 10⁻³     7.06 × 10⁻⁸     4.000   0.0626
M-4       6   6.29 × 10⁻²    2.54 × 10⁻³     9.28 × 10⁻⁸     4.000   0.0785

ψ2(u)
LLCM      4   2.02 × 10⁻⁴    2.11 × 10⁻¹⁷    2.51 × 10⁻⁶⁹    4.000   0.7334
LCNM      4   2.02 × 10⁻⁴    2.12 × 10⁻¹⁷    2.54 × 10⁻⁶⁹    4.000   1.0774
SSM       4   2.02 × 10⁻⁴    2.12 × 10⁻¹⁷    2.60 × 10⁻⁶⁹    4.000   1.0765
ZCSM      4   2.02 × 10⁻⁴    2.15 × 10⁻¹⁷    2.75 × 10⁻⁶⁹    4.000   1.1082
SBLM      4   2.02 × 10⁻⁴    2.13 × 10⁻¹⁷    2.62 × 10⁻⁶⁹    4.000   1.2950
KKBM      4   2.02 × 10⁻⁴    2.08 × 10⁻¹⁷    2.31 × 10⁻⁶⁹    4.000   1.1548
M-1       4   1.01 × 10⁻⁴    1.08 × 10⁻¹⁸    1.43 × 10⁻⁷⁴    4.000   0.5612
M-2       4   9.85 × 10⁻⁵    4.94 × 10⁻¹⁹    3.13 × 10⁻⁷⁶    4.000   0.5154
M-3       4   9.85 × 10⁻⁵    4.94 × 10⁻¹⁹    3.13 × 10⁻⁷⁶    4.000   0.5311
M-4       4   9.82 × 10⁻⁵    4.35 × 10⁻¹⁹    1.67 × 10⁻⁷⁶    4.000   0.5003

ψ3(u)
LLCM      4   4.91 × 10⁻⁵    5.70 × 10⁻²¹    1.03 × 10⁻⁸⁴    4.000   0.6704
LCNM      4   4.91 × 10⁻⁵    5.70 × 10⁻²¹    1.03 × 10⁻⁸⁴    4.000   0.9832
SSM       4   4.92 × 10⁻⁵    5.71 × 10⁻²¹    1.04 × 10⁻⁸⁴    4.000   1.0303
ZCSM      4   4.92 × 10⁻⁵    5.72 × 10⁻²¹    1.05 × 10⁻⁸⁴    4.000   1.0617
SBLM      4   4.92 × 10⁻⁵    5.73 × 10⁻²¹    1.06 × 10⁻⁸⁴    4.000   1.2644
KKBM      4   4.91 × 10⁻⁵    5.66 × 10⁻²¹    1.00 × 10⁻⁸⁴    4.000   1.0768
M-1       3   6.35 × 10⁻⁶    2.73 × 10⁻²⁵    0               4.000   0.3433
M-2       3   4.94 × 10⁻⁶    6.81 × 10⁻²⁶    0               4.000   0.2965
M-3       3   5.02 × 10⁻⁶    7.46 × 10⁻²⁶    0               4.000   0.3598
M-4       3   4.77 × 10⁻⁶    5.66 × 10⁻²⁶    0               4.000   0.3446

ψ4(u)
LLCM      4   1.15 × 10⁻⁴    5.69 × 10⁻¹⁷    3.39 × 10⁻⁶⁶    4.000   1.4824
LCNM      4   1.15 × 10⁻⁴    5.70 × 10⁻¹⁷    3.40 × 10⁻⁶⁶    4.000   2.5745
SSM       4   1.15 × 10⁻⁴    5.71 × 10⁻¹⁷    3.44 × 10⁻⁶⁶    4.000   2.5126
ZCSM      4   1.15 × 10⁻⁴    5.72 × 10⁻¹⁷    3.47 × 10⁻⁶⁶    4.000   2.5587
SBLM      4   1.15 × 10⁻⁴    5.83 × 10⁻¹⁷    3.79 × 10⁻⁶⁶    4.000   3.1824
KKBM      4   1.15 × 10⁻⁴    5.63 × 10⁻¹⁷    3.21 × 10⁻⁶⁶    4.000   2.4965
M-1       4   4.18 × 10⁻⁴    6.03 × 10⁻¹⁹    2.60 × 10⁻⁷⁴    4.000   0.4993
M-2       4   3.88 × 10⁻⁵    2.24 × 10⁻¹⁹    2.45 × 10⁻⁷⁶    4.000   0.5151
M-3       4   3.92 × 10⁻⁵    2.57 × 10⁻¹⁹    4.80 × 10⁻⁷⁶    4.000   0.4996
M-4       4   3.85 × 10⁻⁵    1.92 × 10⁻¹⁹    1.18 × 10⁻⁷⁶    4.000   0.4686

ψ5(u)
LLCM      4   2.16 × 10⁻⁴    3.17 × 10⁻¹⁷    1.48 × 10⁻⁶⁸    4.000   1.9042
LCNM      4   2.16 × 10⁻⁴    3.17 × 10⁻¹⁷    1.47 × 10⁻⁶⁸    4.000   2.0594
SSM       4   2.16 × 10⁻⁴    3.16 × 10⁻¹⁷    1.45 × 10⁻⁶⁸    4.000   2.0125
ZCSM      4   2.16 × 10⁻⁴    3.15 × 10⁻¹⁷    1.43 × 10⁻⁶⁸    4.000   2.1530
SBLM      4   2.16 × 10⁻⁴    3.01 × 10⁻¹⁷    1.15 × 10⁻⁶⁸    4.000   2.4185
KKBM      4   2.16 × 10⁻⁴    3.24 × 10⁻¹⁷    1.63 × 10⁻⁶⁸    4.000   2.2153
M-1       4   2.48 × 10⁻⁴    7.62 × 10⁻²¹    6.81 × 10⁻⁸³    4.000   1.6697
M-2       4   2.15 × 10⁻⁵    2.03 × 10⁻²¹    1.63 × 10⁻⁸⁵    4.000   1.7793
M-3       4   2.19 × 10⁻⁵    2.51 × 10⁻²¹    4.35 × 10⁻⁸⁵    4.000   1.7942
M-4       4   2.11 × 10⁻⁵    1.66 × 10⁻²¹    6.29 × 10⁻⁸⁶    4.000   1.6855
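The COC column of Table 1 can be reproduced from the three reported differences alone, using the standard difference-based approximation of the computational order of convergence. A small sketch (the function name is ours):

```python
import math

def coc_from_differences(d1, d2, d3):
    """Approximate order of convergence: ln(d3/d2) / ln(d2/d1),
    where d_k = |u_{k+1} - u_k| are consecutive iterate differences."""
    return math.log(d3 / d2) / math.log(d2 / d1)

# Differences reported for LLCM on psi_2(u):
order = coc_from_differences(2.02e-4, 2.11e-17, 2.51e-69)   # ≈ 4.000
```

The approximation is reliable only once the iterates reach the asymptotic regime, which is why rows with very small final differences (such as those for ψ2) reproduce 4.000 most cleanly.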
Table 2. Test functions.

Functions                                                   Root (α)         Multiplicity   Initial Guess
ψ1(u) = u³ − 5.22u² + 9.0825u − 5.2675                      1.75             2              2.4
ψ2(u) = u⁴/12 + u²/2 + u + eᵘ(u − 3) + sin u + 3            0                3              0.6
ψ3(u) = (e⁻ᵘ − 1 + u/5)⁴                                    4.9651142317…    4              5.5
ψ4(u) = u(u² + 1)(2e^(u²+1) + u² − 1) cosh⁴(πu/2)           i                6              1.2i
ψ5(u) = [tan⁻¹(√5/2) − tan⁻¹(√(u²−1)) + 6(tan⁻¹(√(u²−1)/6)
        − tan⁻¹((1/2)√(5/6))) − 11/63]⁷                     1.8411294068…    7              1.6
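The real-valued entries of Table 2 are easy to sanity-check numerically. The sketch below transcribes the first three test functions (the names psi1–psi3 are ours) and verifies that each vanishes at its listed root:

```python
import math

def psi1(u):
    return u**3 - 5.22*u**2 + 9.0825*u - 5.2675   # double root at u = 1.75

def psi2(u):
    # triple root at u = 0: f(0) = f'(0) = f''(0) = 0, f'''(0) != 0
    return u**4/12 + u**2/2 + u + math.exp(u)*(u - 3) + math.sin(u) + 3

def psi3(u):
    return (math.exp(-u) - 1 + u/5)**4            # quadruple root near 4.9651

# Each function should vanish (to rounding) at its tabulated root:
for f, alpha in [(psi1, 1.75), (psi2, 0.0), (psi3, 4.9651142317)]:
    assert abs(f(alpha)) < 1e-12
```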

