Article

Convergence Analysis of Weighted-Newton Methods of Optimal Eighth Order in Banach Spaces

1 Department of Mathematics, Sant Longowal Institute of Engineering and Technology, Longowal, Punjab 148106, India
2 Department of Mathematical Sciences, Cameron University, Lawton, OK 73505, USA
* Author to whom correspondence should be addressed.
Mathematics 2019, 7(2), 198; https://doi.org/10.3390/math7020198
Submission received: 12 December 2018 / Revised: 14 February 2019 / Accepted: 17 February 2019 / Published: 19 February 2019
(This article belongs to the Special Issue Computational Methods in Analysis and Applications)

Abstract

We generalize a family of optimal eighth order weighted-Newton methods to Banach spaces and study their local convergence. Previous studies employed Taylor expansions involving higher order derivatives, which may not exist or may be expensive to compute. In contrast, the hypotheses of the present study are based on the first Fréchet derivative only, thereby expanding the applicability of the methods. The new analysis also provides the radius of convergence, error bounds and estimates on the uniqueness of the solution; such estimates are not provided by approaches that use Taylor expansions of derivatives of higher order. Moreover, the order of convergence of the methods is verified by using the computational order of convergence or the approximate computational order of convergence, without resorting to higher order derivatives. Numerical examples are provided to verify the theoretical results and to show the good convergence behavior.
MSC:
49M15; 41A25; 65H10; 65J10

1. Introduction

In this work, we generate a sequence $\{ x_n \}$ for approximating a locally unique solution $\alpha$ of the nonlinear equation
$$F(x) = 0, \qquad (1)$$
where $F$ is a Fréchet-differentiable operator defined on a closed convex subset $D$ of a Banach space $B_1$ with values in a Banach space $B_2$. Many problems in computational sciences can be written in the form (1); see, for example, [1,2]. The solutions of such equations are rarely attainable in closed form, which is why most methods for solving these equations are iterative in nature. An important part of the construction of an iterative method is the study of its convergence. In general, the convergence domain is small; therefore, it is important to enlarge the convergence domain without using extra hypotheses. Knowledge of the radius of convergence is useful because it indicates the degree of difficulty of choosing initial points. Another important problem is to find more precise error estimates on $\| x_{n+1} - x_n \|$ or $\| x_n - \alpha \|$. Many authors have studied the convergence analysis of iterative methods; see, for example, [1,2,3,4,5,6,7].
The most widely used iterative method for solving (1) is the quadratically convergent Newton's method
$$x_{n+1} = x_n - F'(x_n)^{-1} F(x_n), \quad n = 0, 1, 2, \ldots, \qquad (2)$$
where $F'(x)^{-1}$ is the inverse of the first Fréchet derivative $F'(x)$ of $F$. In order to accelerate the convergence, researchers have also developed modified Newton or Newton-like methods; see [4,6,8,9,10,11,12,13,14,15,16,17] and the references therein.
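As a concrete illustration, the following minimal sketch implements iteration (2) for a system $F : \mathbb{R}^m \to \mathbb{R}^m$ with NumPy; the function names and the small test system are our own and are not part of the original presentation.

```python
import numpy as np

def newton(F, J, x0, tol=1e-12, max_iter=100):
    """Newton's method (2): x_{n+1} = x_n - F'(x_n)^{-1} F(x_n)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        # Solve the linear system F'(x) s = F(x) rather than forming the inverse.
        s = np.linalg.solve(J(x), F(x))
        x = x - s
        if np.linalg.norm(s) < tol:
            break
    return x

# Illustrative system (our own): x^2 + y^2 = 1, x = y, root (1/sqrt(2), 1/sqrt(2)).
F = lambda v: np.array([v[0]**2 + v[1]**2 - 1.0, v[0] - v[1]])
J = lambda v: np.array([[2.0 * v[0], 2.0 * v[1]], [1.0, -1.0]])
print(newton(F, J, [1.0, 0.5]))  # -> approximately [0.70710678, 0.70710678]
```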
There are numerous higher order iterative methods for solving a scalar equation $f(x) = 0$; see, for example, [2]. In contrast, higher order methods are rare for the multi-dimensional case, that is, for approximating a solution of $F(x) = 0$. One possible reason is that the construction of higher order methods for solving systems is a difficult task. Another is that not every method developed for a single equation can be generalized to solve systems of nonlinear equations. Recently, a family of optimal eighth order methods for solving a scalar equation $f(x) = 0$ was proposed in [16], which is given by
$$y_n = x_n - \frac{f(x_n)}{f'(x_n)}, \quad z_n = \phi_4(x_n, y_n), \quad x_{n+1} = z_n - \frac{f[z_n, y_n]}{2 f[z_n, y_n] - f[z_n, x_n]} \cdot \frac{f(z_n)}{f[z_n, x_n]}, \qquad (3)$$
where $\phi_4(x_n, y_n)$ is any optimal fourth order scheme with Newton's iteration $y_n$ as its base, and $f[\cdot, \cdot]$ is Newton's first order divided difference. In particular, the following optimal fourth order schemes were considered in the second step of (3):
Ostrowski method (see [12]):
$$z_n = y_n - \frac{f(y_n)}{2 f[y_n, x_n] - f'(x_n)}. \qquad (4)$$
Ostrowski-like method (see [12]):
$$z_n = y_n - \left( \frac{2}{f[y_n, x_n]} - \frac{1}{f'(x_n)} \right) f(y_n). \qquad (5)$$
Kung-Traub method (see [15]):
$$z_n = y_n - \frac{f'(x_n) \, f(y_n)}{f[y_n, x_n]^2}. \qquad (6)$$
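As a quick check (ours, not part of the original text), substituting $y_n - x_n = -f(x_n)/f'(x_n)$ into $f[y_n, x_n] = \frac{f(y_n) - f(x_n)}{y_n - x_n}$ shows that (6) is exactly the classical Kung-Traub fourth order step:
$$f[y_n, x_n] = \frac{f'(x_n) \big( f(x_n) - f(y_n) \big)}{f(x_n)}, \qquad \text{so that} \qquad z_n = y_n - \frac{f(x_n)^2 \, f(y_n)}{f'(x_n) \big( f(x_n) - f(y_n) \big)^2}.$$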
Motivated by the above methods defined on the real line, we propose the corresponding methods for Banach space valued operators. It can be observed that the above family of eighth order methods extends easily to solving (1). In view of this, we study method (3) here in the Banach space setting. The iterative methods corresponding to the fourth order schemes (4)–(6) in Banach space are given as
$$y_n = x_n - F'(x_n)^{-1} F(x_n), \quad z_n = y_n - \big( 2 F[y_n, x_n] - F'(x_n) \big)^{-1} F(y_n), \quad x_{n+1} = \Psi_8(x_n, y_n, z_n), \qquad (7)$$
$$y_n = x_n - F'(x_n)^{-1} F(x_n), \quad z_n = y_n - \big( 2 F[y_n, x_n]^{-1} - F'(x_n)^{-1} \big) F(y_n), \quad x_{n+1} = \Psi_8(x_n, y_n, z_n) \qquad (8)$$
and
$$y_n = x_n - F'(x_n)^{-1} F(x_n), \quad z_n = y_n - F[y_n, x_n]^{-1} F'(x_n) F[y_n, x_n]^{-1} F(y_n), \quad x_{n+1} = \Psi_8(x_n, y_n, z_n). \qquad (9)$$
In each of the above cases, we have that
$$\Psi_8(x_n, y_n, z_n) = z_n - \big( 2 F[z_n, y_n] - F[z_n, x_n] \big)^{-1} F[z_n, y_n] \, F[z_n, x_n]^{-1} F(z_n). \qquad (10)$$
Here $F[\cdot, \cdot] : D \times D \to L(B_1, B_2)$ is a first order divided difference on $D \times D$ satisfying $F[x, y](x - y) = F(x) - F(y)$ for $x \neq y$ and $F[x, x] = F'(x)$ if $F$ is differentiable, where $L(B_1, B_2)$ stands for the space of bounded linear operators from $B_1$ into $B_2$. Methods (7)–(9) require four inverses and four function evaluations at each step.
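To make the scheme concrete, here is a hedged sketch of method (7), including the weight step (10), for finite-dimensional systems. It uses the divided difference $F[x, y] = \frac{1}{2}(F'(x) + F'(y))$, one of the two choices employed in Section 3; the names `method7` and `dd` are ours.

```python
import numpy as np

def dd(J, x, y):
    # One admissible first order divided difference (cf. Section 3):
    # F[x, y] = (F'(x) + F'(y)) / 2.
    return 0.5 * (J(x) + J(y))

def method7(F, J, x0, tol=1e-12, max_iter=50):
    """Sketch of method (7): Newton substep, Ostrowski-type substep,
    then the eighth order weight step Psi_8 of (10)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        Jx = J(x)
        y = x - np.linalg.solve(Jx, F(x))
        z = y - np.linalg.solve(2.0 * dd(J, y, x) - Jx, F(y))
        Azy, Azx = dd(J, z, y), dd(J, z, x)
        # Psi_8: z - (2 F[z,y] - F[z,x])^{-1} F[z,y] F[z,x]^{-1} F(z)
        x_new = z - np.linalg.solve(2.0 * Azy - Azx, Azy @ np.linalg.solve(Azx, F(z)))
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x
```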
The rest of the paper is organized as follows. In Section 2, the local convergence analysis of the proposed methods, including the radius of convergence, computable error bounds and uniqueness results, is presented. In order to verify the theoretical results of the convergence analysis, some numerical examples are presented in Section 3. Finally, the methods are applied to solve systems of nonlinear equations in Section 4.

2. Local Convergence

Local convergence analysis of methods (7)–(9) is presented by using some real functions and parameters. Let $\lambda_0 : [0, +\infty) \to [0, +\infty)$ be a continuous and increasing function satisfying $\lambda_0(0) = 0$. Suppose that the equation
$$\lambda_0(t) = 1 \qquad (11)$$
has positive solutions. Denote by $\varrho$ the smallest such solution. Let $\lambda : [0, \varrho) \to [0, +\infty)$, $\mu : [0, \varrho) \to [0, +\infty)$, $\lambda_1 : [0, \varrho) \times [0, \varrho) \to [0, +\infty)$ and $\mu_1 : [0, \varrho) \times [0, \varrho) \to [0, +\infty)$ also be continuous and increasing functions satisfying $\lambda_1(0, 0) = 0$. Define functions $g_1$ and $h_1$ on the interval $[0, \varrho)$ by
$$g_1(t) = \frac{\int_0^1 \lambda((1 - \theta) t) \, d\theta}{1 - \lambda_0(t)} \quad \text{and} \quad h_1(t) = g_1(t) - 1.$$
We have that $h_1(0) = -1 < 0$ and $h_1(t) \to +\infty$ as $t \to \varrho^-$. By applying Bolzano's theorem to the function $h_1$, we deduce that the equation $h_1(t) = 0$ has solutions in the interval $(0, \varrho)$. Let $r_1$ be the smallest such solution.
Moreover, define functions $p$ and $h_p$ on the interval $[0, \varrho)$ by
$$p(t) = 2 \lambda_1(g_1(t) t, t) + \lambda_0(t)$$
and
$$h_p(t) = p(t) - 1.$$
We get $h_p(0) = -1 < 0$ and $h_p(t) \to +\infty$ as $t \to \varrho^-$. Let $r_p$ be the smallest solution of the equation $h_p(t) = 0$ in the interval $(0, \varrho)$. Furthermore, define functions $g_2$ and $h_2$ on the interval $[0, r_p)$ by
$$g_2(t) = \left( 1 + \frac{\int_0^1 \mu(\theta \, g_1(t) t) \, d\theta}{(1 - p(t))(1 - \lambda_0(t))} \right) g_1(t)$$
and
$$h_2(t) = g_2(t) - 1.$$
We obtain $h_2(0) = -1 < 0$ and $h_2(t) \to +\infty$ as $t \to r_p^-$. Let $r_2$ be the smallest solution of the equation $h_2(t) = 0$ in the interval $(0, r_p)$. Define functions $q$ and $h_q$ and functions $\varphi$ and $\psi$ on the interval $[0, r_p)$, respectively, by
$$q(t) = 2 \lambda_1(g_2(t) t, g_1(t) t) + \lambda_1(g_2(t) t, t), \quad h_q(t) = q(t) - 1, \quad \varphi(t) = \lambda_1(g_2(t) t, t), \quad \psi(t) = \varphi(t) - 1.$$
We get $h_q(0) = \psi(0) = -1 < 0$, and $h_q(t) \to +\infty$ and $\psi(t) \to +\infty$ as $t \to r_p^-$. Let $r_q$ and $r_\psi$ be the smallest solutions of the equations $h_q(t) = 0$ and $\psi(t) = 0$ in the interval $(0, r_p)$, respectively. Finally, define functions $g_3$ and $h_3$ on the interval $[0, \varrho_0)$ by
$$g_3(t) = \left( 1 + \frac{\mu_1(g_2(t) t, g_1(t) t)}{(1 - q(t))(1 - \lambda_1(g_2(t) t, t))} \right) g_2(t)$$
and
$$h_3(t) = g_3(t) - 1,$$
where $\varrho_0 = \min\{ r_q, r_\psi \}$. We have that $h_3(0) = -1 < 0$ and $h_3(t) \to +\infty$ as $t \to \varrho_0^-$. Let $r_3$ be the smallest solution of the equation $h_3(t) = 0$ in the interval $(0, \varrho_0)$. Set
$$r = \min\{ r_i \}, \quad i = 1, 2, 3, \qquad (12)$$
to be the radius of convergence for method (7). Then, for each $t \in [0, r)$, it follows that
$$0 \le g_i(t) < 1, \qquad (13)$$
$$0 \le p(t) < 1, \qquad (14)$$
$$0 \le \varphi(t) < 1 \qquad (15)$$
and
$$0 \le q(t) < 1. \qquad (16)$$
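Numerically, each radius above is just the smallest positive zero of the corresponding function $h$; since $h(0) = -1 < 0$, a sign scan followed by plain bisection suffices. A generic utility sketch (our own, written under the monotonicity construction above):

```python
import numpy as np

def smallest_root(h, t_max, samples=2000):
    """Smallest zero of h on (0, t_max): locate the first sign change
    of h (note h(0) = -1 < 0 by construction), then refine by bisection."""
    ts = np.linspace(1e-10, t_max * (1.0 - 1e-10), samples)
    for a, b in zip(ts[:-1], ts[1:]):
        if h(a) < 0.0 <= h(b):            # first bracket containing a zero
            for _ in range(100):
                c = 0.5 * (a + b)
                if h(c) < 0.0:
                    a = c
                else:
                    b = c
            return 0.5 * (a + b)
    return None                            # no zero found below t_max
```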
Let $U(a, b)$ and $\bar{U}(a, b)$ stand, respectively, for the open and closed balls in $B_1$ with center $a \in D$ and of radius $b > 0$.
The local convergence analysis of method (7), method (8) and method (9) is based on the conditions (A):
(a1)
$F : D \subseteq B_1 \to B_2$ is continuously Fréchet differentiable and $D$ is a convex set. The operator $F[\cdot, \cdot] : D \times D \to L(B_1, B_2)$ is a divided difference of order one satisfying
$$F[x, y](x - y) = F(x) - F(y) \quad \text{for } x \neq y$$
and
$$F[x, x] = F'(x).$$
(a2)
There exists $\alpha \in D$ such that $F(\alpha) = 0$ and $F'(\alpha)^{-1} \in L(B_2, B_1)$.
(a3)
There exists a continuous and increasing function $\lambda_0 : [0, +\infty) \to [0, +\infty)$ with $\lambda_0(0) = 0$ such that for each $x \in D$
$$\| F'(\alpha)^{-1} (F'(x) - F'(\alpha)) \| \le \lambda_0(\| x - \alpha \|).$$
Set $D_0 = D \cap U(\alpha, \varrho)$, where $\varrho$ is given in (11).
(a4)
There exist continuous and increasing functions $\lambda : [0, \varrho) \to [0, +\infty)$, $\lambda_1 : [0, \varrho) \times [0, \varrho) \to [0, +\infty)$, $\mu : [0, \varrho) \to [0, +\infty)$ and $\mu_1 : [0, \varrho) \times [0, \varrho) \to [0, +\infty)$ such that for each $x, y \in D_0$
$$\| F'(\alpha)^{-1} (F'(x) - F'(y)) \| \le \lambda(\| x - y \|),$$
$$\| F'(\alpha)^{-1} (F[x, y] - F'(\alpha)) \| \le \lambda_1(\| y - \alpha \|, \| x - \alpha \|),$$
$$\| F'(\alpha)^{-1} F'(x) \| \le \mu(\| x - \alpha \|)$$
and
$$\| F'(\alpha)^{-1} F[x, y] \| \le \mu_1(\| x - \alpha \|, \| y - \alpha \|).$$
(a5)
$\bar{U}(\alpha, r) \subseteq D$, where $r$ is given in (12) for method (7), by (30) for method (8) and by (31) for method (9).
(a6)
There exists $R \ge r$ such that
$$\int_0^1 \lambda_0(\theta R) \, d\theta < 1.$$
Set $D_1 = D \cap \bar{U}(\alpha, R)$.
We first present the local convergence analysis of method (7), based on the conditions (A).
Theorem 1.
Assume that the conditions (A) hold. Then, the sequence $\{ x_n \}$ generated for $x_0 \in U(\alpha, r) \setminus \{ \alpha \}$ by method (7) is well defined in $U(\alpha, r)$, remains in $U(\alpha, r)$ for each $n = 0, 1, 2, \ldots$ and converges to $\alpha$, so that
$$\| y_n - \alpha \| \le g_1(\| x_n - \alpha \|) \, \| x_n - \alpha \| \le \| x_n - \alpha \| < r, \qquad (17)$$
$$\| z_n - \alpha \| \le g_2(\| x_n - \alpha \|) \, \| x_n - \alpha \| \le \| x_n - \alpha \| \qquad (18)$$
and
$$\| x_{n+1} - \alpha \| \le g_3(\| x_n - \alpha \|) \, \| x_n - \alpha \| \le \| x_n - \alpha \|, \qquad (19)$$
where the functions $g_i$, $i = 1, 2, 3$, are defined previously. Moreover, the solution $\alpha$ of the equation $F(x) = 0$ is unique in $D_1$.
Proof. 
We shall show assertions (17)–(19) using mathematical induction. Let $x \in U(\alpha, r)$. Then, using (a3) and (12), we have that
$$\| F'(\alpha)^{-1} (F'(x) - F'(\alpha)) \| \le \lambda_0(\| x - \alpha \|) \le \lambda_0(r) < 1. \qquad (20)$$
By the Banach perturbation lemma [2] and (20), we deduce that $F'(x)^{-1} \in L(B_2, B_1)$ and
$$\| F'(x)^{-1} F'(\alpha) \| \le \frac{1}{1 - \lambda_0(\| x - \alpha \|)}. \qquad (21)$$
In particular, for $x = x_0$, $y_0$ is well defined by the first substep of method (7), and (21) holds for $x = x_0$, since $x_0 \in U(\alpha, r)$. By the first substep of method (7) for $n = 0$, (a2), (a4), (13) (for $i = 1$) and (12), we get
$$\begin{aligned} \| y_0 - \alpha \| &= \| x_0 - \alpha - F'(x_0)^{-1} F(x_0) \| \\ &= \left\| [F'(x_0)^{-1} F'(\alpha)] \left[ \int_0^1 F'(\alpha)^{-1} \big( F'(\alpha + \theta (x_0 - \alpha)) - F'(x_0) \big) (x_0 - \alpha) \, d\theta \right] \right\| \\ &\le \frac{\int_0^1 \lambda((1 - \theta) \| x_0 - \alpha \|) \, d\theta}{1 - \lambda_0(\| x_0 - \alpha \|)} \, \| x_0 - \alpha \| = g_1(\| x_0 - \alpha \|) \, \| x_0 - \alpha \| \le \| x_0 - \alpha \| < r, \end{aligned} \qquad (22)$$
so (17) holds for $n = 0$ and $y_0 \in U(\alpha, r)$. We must show the existence of $(2 F[y_0, x_0] - F'(x_0))^{-1}$, which shall imply that $z_0$ is well defined. Using (12), (14), (a3) and (a4), we get in turn that
$$\begin{aligned} \| F'(\alpha)^{-1} \big( 2 (F[y_0, x_0] - F'(\alpha)) + (F'(\alpha) - F'(x_0)) \big) \| &\le 2 \, \| F'(\alpha)^{-1} (F[y_0, x_0] - F'(\alpha)) \| + \| F'(\alpha)^{-1} (F'(\alpha) - F'(x_0)) \| \\ &\le 2 \lambda_1(\| y_0 - \alpha \|, \| x_0 - \alpha \|) + \lambda_0(\| x_0 - \alpha \|) \\ &\le 2 \lambda_1(g_1(\| x_0 - \alpha \|) \| x_0 - \alpha \|, \| x_0 - \alpha \|) + \lambda_0(\| x_0 - \alpha \|) \\ &= p(\| x_0 - \alpha \|) \le p(r) < 1, \end{aligned}$$
so $(2 F[y_0, x_0] - F'(x_0))^{-1}$ exists and
$$\| (2 F[y_0, x_0] - F'(x_0))^{-1} F'(\alpha) \| \le \frac{1}{1 - p(\| x_0 - \alpha \|)}. \qquad (23)$$
We can write
$$F(x) = F(x) - F(\alpha) = \int_0^1 F'(\alpha + \theta (x - \alpha)) (x - \alpha) \, d\theta. \qquad (24)$$
Notice that $\| \alpha + \theta (x - \alpha) - \alpha \| = \theta \| x - \alpha \| < r$ for each $\theta \in [0, 1]$. Using (a4) and (24), we get
$$\| F'(\alpha)^{-1} F(x) \| \le \int_0^1 \mu(\theta \| x - \alpha \|) \, d\theta \, \| x - \alpha \|. \qquad (25)$$
Then, by (12), (13) (for $i = 2$), (21), (22), (23), (25) and the second substep of method (7), we obtain in turn that
$$\begin{aligned} \| z_0 - \alpha \| &\le \| y_0 - \alpha \| + \| (2 F[y_0, x_0] - F'(x_0))^{-1} F'(\alpha) \| \, \| F'(\alpha)^{-1} F(y_0) \| \\ &\le \| y_0 - \alpha \| + \frac{\int_0^1 \mu(\theta \| y_0 - \alpha \|) \, d\theta \, \| y_0 - \alpha \|}{(1 - p(\| x_0 - \alpha \|))(1 - \lambda_0(\| x_0 - \alpha \|))} \\ &\le \left( 1 + \frac{\int_0^1 \mu(\theta \, g_1(\| x_0 - \alpha \|) \| x_0 - \alpha \|) \, d\theta}{(1 - p(\| x_0 - \alpha \|))(1 - \lambda_0(\| x_0 - \alpha \|))} \right) g_1(\| x_0 - \alpha \|) \, \| x_0 - \alpha \| \\ &= g_2(\| x_0 - \alpha \|) \, \| x_0 - \alpha \| \le \| x_0 - \alpha \| < r, \end{aligned} \qquad (26)$$
which shows (18) for $n = 0$ and $z_0 \in U(\alpha, r)$. We must show the existence of $F[z_0, x_0]^{-1}$, which shall also imply that $x_1$ is well defined. Using (12), (15) and (a4), we obtain in turn that
$$\| F'(\alpha)^{-1} (F[z_0, x_0] - F'(\alpha)) \| \le \lambda_1(\| z_0 - \alpha \|, \| x_0 - \alpha \|) \le \lambda_1(g_2(\| x_0 - \alpha \|) \| x_0 - \alpha \|, \| x_0 - \alpha \|) = \varphi(\| x_0 - \alpha \|) \le \varphi(r) < 1,$$
so $F[z_0, x_0]^{-1}$ exists and
$$\| F[z_0, x_0]^{-1} F'(\alpha) \| \le \frac{1}{1 - \varphi(\| x_0 - \alpha \|)}. \qquad (27)$$
Then, using the last substep of method (7), (10), (12), (13) (for $i = 3$), (18), (23), (26) and (27), we get in turn that
$$\begin{aligned} \| x_1 - \alpha \| &\le \| z_0 - \alpha \| + \frac{\mu_1(\| z_0 - \alpha \|, \| y_0 - \alpha \|) \int_0^1 \mu(\theta \| z_0 - \alpha \|) \, d\theta \, \| z_0 - \alpha \|}{(1 - q(\| x_0 - \alpha \|))(1 - \lambda_1(\| z_0 - \alpha \|, \| x_0 - \alpha \|))} \\ &\le \left( 1 + \frac{\mu_1(g_2(\| x_0 - \alpha \|) \| x_0 - \alpha \|, g_1(\| x_0 - \alpha \|) \| x_0 - \alpha \|)}{(1 - q(\| x_0 - \alpha \|))(1 - \lambda_1(g_2(\| x_0 - \alpha \|) \| x_0 - \alpha \|, \| x_0 - \alpha \|))} \right) g_2(\| x_0 - \alpha \|) \, \| x_0 - \alpha \| \\ &= g_3(\| x_0 - \alpha \|) \, \| x_0 - \alpha \|, \end{aligned}$$
which shows (19) for $n = 0$ and $x_1 \in U(\alpha, r)$. The induction is completed if $x_k, y_k, z_k, x_{k+1}$ replace $x_0, y_0, z_0, x_1$ in the preceding estimates, respectively. Then, from the estimate
$$\| x_{k+1} - \alpha \| \le c \, \| x_k - \alpha \| < r,$$
where $c = g_3(\| x_0 - \alpha \|) \in [0, 1)$, we deduce that $\lim_{k \to \infty} x_k = \alpha$ and $x_{k+1} \in U(\alpha, r)$. Let $Q = \int_0^1 F'(\alpha + \theta (y^* - \alpha)) \, d\theta$ for some $y^* \in D_1$ such that $F(y^*) = 0$. By (a3) and (a6), we have in turn that
$$\| F'(\alpha)^{-1} (Q - F'(\alpha)) \| \le \int_0^1 \lambda_0(\| \alpha + \theta (y^* - \alpha) - \alpha \|) \, d\theta \le \int_0^1 \lambda_0(\theta \| \alpha - y^* \|) \, d\theta \le \int_0^1 \lambda_0(\theta R) \, d\theta < 1,$$
which implies that $Q^{-1}$ exists. Then, from the identity $0 = F(y^*) - F(\alpha) = Q(y^* - \alpha)$, we conclude that $\alpha = y^*$.  □
Next, we shall show the local convergence of method (8) in an analogous way, but with the functions $g_2$, $\varphi$, $g_3$ replaced by $\bar{g}_2$, $\varphi_1$, $\bar{g}_3$, which are given by
$$\begin{aligned} \bar{g}_2(t) &= \left( 1 + \frac{\mu(t) \int_0^1 \mu(\theta \, g_1(t) t) \, d\theta}{(1 - \lambda_1(g_1(t) t, t))^2} \right) g_1(t), \qquad \bar{h}_2(t) = \bar{g}_2(t) - 1, \\ \varphi_1(t) &= \lambda_1(g_1(t) t, t), \qquad \psi_1(t) = \varphi_1(t) - 1, \\ \bar{g}_3(t) &= \left( 1 + \frac{\mu_1(\bar{g}_2(t) t, g_1(t) t)}{(1 - q(t))(1 - \lambda_1(\bar{g}_2(t) t, t))} \right) \bar{g}_2(t), \qquad \bar{h}_3(t) = \bar{g}_3(t) - 1. \end{aligned}$$
We shall use the same notation for $r_1$ as in (12), but notice that $\bar{r}_2$ and $\bar{r}_3$ correspond to the smallest positive solutions of the equations $\bar{h}_2(t) = 0$ and $\bar{h}_3(t) = 0$, respectively. Set
$$\bar{r} = \min\{ r_1, \bar{r}_2, \bar{r}_3 \}. \qquad (30)$$
The local convergence analysis of method (8) is given by the following theorem:
Theorem 2.
Assume that the conditions (A) hold. Then, the conclusions of Theorem 1 also hold for method (8), with the functions $\bar{g}_2$, $\bar{g}_3$ and $\bar{r}$ replacing $g_2$, $g_3$ and $r$, respectively.
Proof. 
We have that
$$\| y_n - \alpha \| \le g_1(\| x_n - \alpha \|) \, \| x_n - \alpha \| \le \| x_n - \alpha \| < \bar{r}$$
as in Theorem 1, and using the second and third substeps of method (8) we get (as in Theorem 1) that
$$\| z_n - \alpha \| \le \| y_n - \alpha \| + \frac{\mu(\| x_n - \alpha \|) \int_0^1 \mu(\theta \| y_n - \alpha \|) \, d\theta \, \| y_n - \alpha \|}{(1 - \lambda_1(\| y_n - \alpha \|, \| x_n - \alpha \|))^2} \le \bar{g}_2(\| x_n - \alpha \|) \, \| x_n - \alpha \| \le \| x_n - \alpha \|$$
and
$$\begin{aligned} \| x_{n+1} - \alpha \| &\le \left( 1 + \frac{\mu_1(\bar{g}_2(\| x_n - \alpha \|) \| x_n - \alpha \|, g_1(\| x_n - \alpha \|) \| x_n - \alpha \|)}{(1 - q(\| x_n - \alpha \|))(1 - \lambda_1(\bar{g}_2(\| x_n - \alpha \|) \| x_n - \alpha \|, \| x_n - \alpha \|))} \right) \bar{g}_2(\| x_n - \alpha \|) \, \| x_n - \alpha \| \\ &= \bar{g}_3(\| x_n - \alpha \|) \, \| x_n - \alpha \| \le \| x_n - \alpha \|. \end{aligned}$$
  □
We define
$$\begin{aligned} \bar{\bar{g}}_2(t) &= \left( 1 + \frac{2 \int_0^1 \mu(\theta \, g_1(t) t) \, d\theta}{1 - \lambda_1(g_1(t) t, t)} + \frac{\int_0^1 \mu(\theta \, g_1(t) t) \, d\theta}{1 - \lambda_0(t)} \right) g_1(t), \qquad \bar{\bar{h}}_2(t) = \bar{\bar{g}}_2(t) - 1, \\ \bar{\bar{g}}_3(t) &= \left( 1 + \frac{\mu_1(\bar{\bar{g}}_2(t) t, g_1(t) t)}{(1 - q(t))(1 - \lambda_1(\bar{\bar{g}}_2(t) t, t))} \right) \bar{\bar{g}}_2(t), \qquad \bar{\bar{h}}_3(t) = \bar{\bar{g}}_3(t) - 1. \end{aligned}$$
Denote by $\bar{\bar{r}}_2$ and $\bar{\bar{r}}_3$ the smallest positive solutions of the equations $\bar{\bar{h}}_2(t) = 0$ and $\bar{\bar{h}}_3(t) = 0$, respectively. Set
$$\bar{\bar{r}} = \min\{ r_1, \bar{\bar{r}}_2, \bar{\bar{r}}_3 \}. \qquad (31)$$
Then, we have:
Theorem 3.
Assume that the conditions (A) hold. Then, the conclusions of Theorem 1 also hold for method (9), with the functions $\bar{\bar{g}}_2$, $\bar{\bar{g}}_3$ and $\bar{\bar{r}}$ replacing $g_2$, $g_3$ and $r$, respectively.
Proof. 
Notice that from the second and third substeps of method (9) we obtain
$$\begin{aligned} \| z_n - \alpha \| &\le \| y_n - \alpha \| + 2 \, \| F[y_n, x_n]^{-1} F'(\alpha) \| \, \| F'(\alpha)^{-1} F(y_n) \| + \| F'(x_n)^{-1} F'(\alpha) \| \, \| F'(\alpha)^{-1} F(y_n) \| \\ &\le \| y_n - \alpha \| + \frac{2 \int_0^1 \mu(\theta \| y_n - \alpha \|) \, d\theta \, \| y_n - \alpha \|}{1 - \lambda_1(\| y_n - \alpha \|, \| x_n - \alpha \|)} + \frac{\int_0^1 \mu(\theta \| y_n - \alpha \|) \, d\theta \, \| y_n - \alpha \|}{1 - \lambda_0(\| x_n - \alpha \|)} \\ &\le \bar{\bar{g}}_2(\| x_n - \alpha \|) \, \| x_n - \alpha \| \le \| x_n - \alpha \| < \bar{\bar{r}} \end{aligned}$$
and
$$\| x_{n+1} - \alpha \| \le \left( 1 + \frac{\mu_1(\bar{\bar{g}}_2(\| x_n - \alpha \|) \| x_n - \alpha \|, g_1(\| x_n - \alpha \|) \| x_n - \alpha \|)}{(1 - q(\| x_n - \alpha \|))(1 - \lambda_1(\bar{\bar{g}}_2(\| x_n - \alpha \|) \| x_n - \alpha \|, \| x_n - \alpha \|))} \right) \bar{\bar{g}}_2(\| x_n - \alpha \|) \, \| x_n - \alpha \| = \bar{\bar{g}}_3(\| x_n - \alpha \|) \, \| x_n - \alpha \| \le \| x_n - \alpha \|.$$
  □
Remark 1.
Methods (7)–(9) are not affected when we use the conditions of Theorems 1–3 instead of the stronger conditions used in ([16], Theorem 1). Moreover, we can compute the computational order of convergence (COC) [18], defined by
$$\mathrm{COC} = \frac{\ln \big( \| x_{n+1} - \alpha \| / \| x_n - \alpha \| \big)}{\ln \big( \| x_n - \alpha \| / \| x_{n-1} - \alpha \| \big)}, \qquad (32)$$
or the approximate computational order of convergence (ACOC) [9], given by
$$\mathrm{ACOC} = \frac{\ln \big( \| x_{n+1} - x_n \| / \| x_n - x_{n-1} \| \big)}{\ln \big( \| x_n - x_{n-1} \| / \| x_{n-1} - x_{n-2} \| \big)}. \qquad (33)$$
In this way, we obtain the order of convergence in practice without using higher order derivatives.
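Both quantities are easy to compute from the stored iteration history. A sketch (our own helper functions; `acoc` assumes at least four iterates are available, `coc` assumes the solution $\alpha$ is known):

```python
import numpy as np

def coc(xs, alpha):
    """COC of (32) from the last three iterates and the known solution alpha."""
    e = [np.linalg.norm(np.asarray(x) - np.asarray(alpha)) for x in xs[-3:]]
    return np.log(e[2] / e[1]) / np.log(e[1] / e[0])

def acoc(xs):
    """ACOC of (33) from the last four iterates; no knowledge of alpha needed."""
    d = [np.linalg.norm(np.asarray(xs[k + 1]) - np.asarray(xs[k]))
         for k in range(len(xs) - 4, len(xs) - 1)]
    return np.log(d[2] / d[1]) / np.log(d[1] / d[0])
```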

3. Numerical Examples

Here, we demonstrate the theoretical results shown in Section 2. We use the divided difference given by $F[x, y] = \frac{1}{2} (F'(x) + F'(y))$ or $F[x, y] = \int_0^1 F'(y + \tau (x - y)) \, d\tau$.
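Both choices are straightforward to code. For the integral form we approximate the integral with Gauss-Legendre quadrature, an approximation of our own choosing since no quadrature is prescribed here:

```python
import numpy as np

def dd_mean(J, x, y):
    # F[x, y] = (F'(x) + F'(y)) / 2
    return 0.5 * (J(x) + J(y))

def dd_integral(J, x, y, nodes=5):
    # F[x, y] = int_0^1 F'(y + tau (x - y)) dtau, approximated by
    # Gauss-Legendre quadrature mapped from [-1, 1] to [0, 1].
    t, w = np.polynomial.legendre.leggauss(nodes)
    t, w = 0.5 * (t + 1.0), 0.5 * w
    return sum(wi * J(y + ti * (x - y)) for ti, wi in zip(t, w))
```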
Example 1.
Suppose that the motion of an object in three dimensions is governed by the system of differential equations
$$f_1'(x) - f_1(x) - 1 = 0, \quad f_2'(y) - (e - 1) y - 1 = 0, \quad f_3'(z) - 1 = 0,$$
with $x, y, z \in D$ and $f_1(0) = f_2(0) = f_3(0) = 0$. Then, the solution of the system is given for $v = (x, y, z)^T$ by the function $F := (f_1, f_2, f_3) : D \to \mathbb{R}^3$ defined by
$$F(v) = \left( e^x - 1, \; \frac{e - 1}{2} y^2 + y, \; z \right)^T.$$
The Fréchet derivative is given by
$$F'(v) = \begin{pmatrix} e^x & 0 & 0 \\ 0 & (e - 1) y + 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}.$$
Then, for $\alpha = (0, 0, 0)^T$, we have that $\lambda(t) = e t$, $\lambda_0(t) = (e - 1) t$, $\lambda_1(s, t) = \frac{s + t}{2}$, $\mu(t) = 2$ and $\mu_1(s, t) = \frac{s + t}{2}$. The parameters $r_1$, $r_2$, $r_3$, $\bar{r}_2$, $\bar{r}_3$, $\bar{\bar{r}}_2$ and $\bar{\bar{r}}_3$ for methods (7)–(9) are given in Table 1.
Theorems 1–3 guarantee the convergence of methods (7)–(9) to $\alpha = 0$ provided that $x_0 \in U(\alpha, r)$. Note that this condition requires a very close initial approximation.
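For this example, $r_1$ even has a closed form: $g_1(t) = 1$ reads $\frac{e t}{2} = 1 - (e - 1) t$, that is, $r_1 = \frac{2}{3e - 2} \approx 0.324947$. The sketch below reproduces $r_1$ and $r_2$ of Table 1 from the parameter functions above (the bracket endpoint $0.28$ is our own choice, safely below $r_p$):

```python
import numpy as np
from scipy.optimize import brentq

e = np.e
lam0 = lambda t: (e - 1.0) * t                    # lambda_0(t) = (e - 1) t
g1   = lambda t: (e * t / 2.0) / (1.0 - lam0(t))  # int_0^1 e (1 - theta) t dtheta = e t / 2
lam1 = lambda s, t: (s + t) / 2.0                 # lambda_1(s, t) = (s + t) / 2
mu_int = 2.0                                      # mu = 2, so int_0^1 mu(.) dtheta = 2

p  = lambda t: 2.0 * lam1(g1(t) * t, t) + lam0(t)
g2 = lambda t: (1.0 + mu_int / ((1.0 - p(t)) * (1.0 - lam0(t)))) * g1(t)

r1 = 2.0 / (3.0 * e - 2.0)                        # closed-form zero of g1(t) - 1
r2 = brentq(lambda t: g2(t) - 1.0, 1e-9, 0.28)    # 0.28 lies below r_p, the zero of p(t) = 1
print(r1, r2)                                     # approx 0.324947 and 0.119823 (cf. Table 1)
```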
Example 2.
Let $B_1 = C[0, 1]$ be the space of continuous functions defined on the interval $[0, 1]$. We shall utilize the max norm. Let $D = \bar{U}(0, 1)$. Define the function $G$ on $D$ by
$$G(\varphi)(x) = \varphi(x) - 10 \int_0^1 x \theta \, \varphi(\theta)^3 \, d\theta.$$
We get that
$$G'(\varphi)(\xi)(x) = \xi(x) - 30 \int_0^1 x \theta \, \varphi(\theta)^2 \xi(\theta) \, d\theta, \quad \text{for each } \xi \in D.$$
Then, for $\alpha = 0$, we have that $\lambda(t) = 30 t$, $\lambda_0(t) = 15 t$, $\lambda_1(s, t) = \frac{s + t}{2}$, $\mu(t) = 1.85$ and $\mu_1(s, t) = \frac{s + t}{2}$. The parameters $r_1$, $r_2$, $r_3$, $\bar{r}_2$, $\bar{r}_3$, $\bar{\bar{r}}_2$ and $\bar{\bar{r}}_3$ for methods (7)–(9) are given in Table 2.
It is clear that the convergence of methods (7)–(9) to $\alpha = 0$ is guaranteed, provided that $x_0 \in U(\alpha, r)$.
Example 3.
Let us consider the function $H := (f_1, f_2, f_3) : D \to \mathbb{R}^3$ defined by
$$H(x) = \big( 10 x_1 + \sin(x_1 + x_2) - 1, \; 8 x_2 - \cos^2(x_3 - x_2) - 1, \; 12 x_3 + \sin(x_3) - 1 \big)^T, \qquad (37)$$
where $x = (x_1, x_2, x_3)^T$.
The Fréchet derivative is given by
$$H'(x) = \begin{pmatrix} 10 + \cos(x_1 + x_2) & \cos(x_1 + x_2) & 0 \\ 0 & 8 + \sin 2(x_2 - x_3) & -\sin 2(x_2 - x_3) \\ 0 & 0 & 12 + \cos(x_3) \end{pmatrix}.$$
With the initial approximation $x_0 = (0, 0.5, 0.1)^T$, we obtain the solution $\alpha$ of the function (37):
$$\alpha = (0.06897, \; 0.24644, \; 0.07692)^T.$$
Then we get that $\lambda(t) = 0.269812 t$, $\lambda_0(t) = 0.269812 t$, $\lambda_1(s, t) = \frac{s + t}{2}$, $\mu(t) = 13.0377$ and $\mu_1(s, t) = \frac{s + t}{2}$. The parameters $r_1$, $r_2$, $r_3$, $\bar{r}_2$, $\bar{r}_3$, $\bar{\bar{r}}_2$ and $\bar{\bar{r}}_3$ for methods (7)–(9) are given in Table 3.

4. Applications

Lastly, we apply methods (7)–(9) to solve systems of nonlinear equations in $\mathbb{R}^m$. Their performance is also compared with some existing methods: Newton's method (NM), the sixth-order methods proposed by Grau et al. [12] and by Sharma and Arora [15], and the eighth-order triple-Newton method [14]. These methods are given as follows:
Grau-Grau-Noguera method:
$$y_n = x_n - F'(x_n)^{-1} F(x_n), \quad z_n = y_n - \big( 2 F[y_n, x_n] - F'(x_n) \big)^{-1} F(y_n), \quad x_{n+1} = z_n - \big( 2 F[y_n, x_n] - F'(x_n) \big)^{-1} F(z_n). \qquad (38)$$
This method requires two inverses and three function evaluations.
Grau-Grau-Noguera method:
$$y_n = x_n - F'(x_n)^{-1} F(x_n), \quad z_n = y_n - \big( 2 F[y_n, x_n]^{-1} - F'(x_n)^{-1} \big) F(y_n), \quad x_{n+1} = z_n - \big( 2 F[y_n, x_n]^{-1} - F'(x_n)^{-1} \big) F(z_n). \qquad (39)$$
It requires two inverses and three function evaluations.
Sharma-Arora Method:
$$y_n = x_n - F'(x_n)^{-1} F(x_n), \quad z_n = y_n - \big( 3 I - 2 F'(x_n)^{-1} F[y_n, x_n] \big) F'(x_n)^{-1} F(y_n), \quad x_{n+1} = z_n - \big( 3 I - 2 F'(x_n)^{-1} F[y_n, x_n] \big) F'(x_n)^{-1} F(z_n). \qquad (40)$$
The method requires one inverse and three function evaluations.
Triple-Newton Method:
$$y_n = x_n - F'(x_n)^{-1} F(x_n), \quad z_n = y_n - F'(y_n)^{-1} F(y_n), \quad x_{n+1} = z_n - F'(z_n)^{-1} F(z_n). \qquad (41)$$
This method requires three inverses and three function evaluations.
Example 4.
Let us consider the system of nonlinear equations:
$$x_i^2 \, x_{i+1} - 1 = 0 \quad (1 \le i \le m - 1), \qquad x_m^2 \, x_1 - 1 = 0,$$
with initial value $x_0 = (2, 2, \ldots, 2)^T$ ($m$ components) towards the required solution $\alpha = (1, 1, \ldots, 1)^T$ of the system for $m = 8, 25, 50, 100$.
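For experimentation, the residual and Jacobian of this cyclic system can be encoded as follows (our own transcription, with the convention $x_{m+1} = x_1$ for the last equation):

```python
import numpy as np

def F_ex4(x):
    """Residual of the Example 4 system: x_i^2 x_{i+1} - 1 = 0, with x_{m+1} = x_1."""
    return x**2 * np.roll(x, -1) - 1.0

def J_ex4(x):
    """Jacobian: dF_i/dx_i = 2 x_i x_{i+1} and dF_i/dx_{i+1} = x_i^2."""
    m = len(x)
    J = np.zeros((m, m))
    nxt = np.roll(x, -1)
    for i in range(m):
        J[i, i] = 2.0 * x[i] * nxt[i]
        J[i, (i + 1) % m] = x[i]**2
    return J

x0 = 2.0 * np.ones(8)   # the m = 8 case; the solution is the all-ones vector
# e.g., method7(F_ex4, J_ex4, x0) from the sketch above converges to all ones
```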
Example 5.
Next, consider the extended Freudenstein and Roth function [19]:
$$F(x) = (f_1(x), f_2(x), \ldots, f_m(x))^T,$$
where
$$f_{2i-1}(x) = x_{2i-1} + \big( (5 - x_{2i}) x_{2i} - 2 \big) x_{2i} - 13, \qquad f_{2i}(x) = x_{2i-1} + \big( (1 + x_{2i}) x_{2i} - 14 \big) x_{2i} - 29, \qquad i = 1, 2, \ldots, \frac{m}{2},$$
with initial value $x_0 = (3, 6, \ldots, 3, 6)^T$ towards the required solution $x^* = (5, 4, \ldots, 5, 4)^T$ of the system for $m = 20, 50, 100, 200$.
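The corresponding residual can be coded pairwise; the sketch below (our own transcription) also checks the stated solution:

```python
import numpy as np

def F_fr(x):
    """Extended Freudenstein-Roth residual; len(x) must be even."""
    f = np.empty_like(x)
    xo, xe = x[0::2], x[1::2]                  # components x_{2i-1} and x_{2i}
    f[0::2] = xo + ((5.0 - xe) * xe - 2.0) * xe - 13.0
    f[1::2] = xo + ((1.0 + xe) * xe - 14.0) * xe - 29.0
    return f

x0 = np.tile([3.0, 6.0], 10)                   # the m = 20 case
xs = np.tile([5.0, 4.0], 10)                   # stated solution x*
print(np.allclose(F_fr(xs), 0.0))              # -> True
```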
Computations are performed in the programming package Mathematica using multiple-precision arithmetic. For every method, we record the number of iterations $(n)$ needed to converge to the solution such that the stopping criterion
$$\| F(x_n) \| < 10^{-350}$$
is satisfied.
is satisfied. In order to verify the theoretical order of convergence, we calculate the approximate computational order of convergence (ACOC) using the formula (33). For the computation of divided difference we use the formula (see [12])
F [ x , y ] i j = f i ( x 1 , . . . . . , x j , y j + 1 , . . . . . , y m ) f i ( x 1 , . . . . . , x j 1 , y j , . . . . . , y m ) x j y j , 1 i , j m .
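A direct transcription of this componentwise formula (our own code; it assumes $x_j \neq y_j$ for every $j$):

```python
import numpy as np

def divided_difference(F, x, y):
    """Matrix divided difference F[x, y]: column j uses f evaluated at
    (x_1,...,x_j, y_{j+1},...,y_m) and (x_1,...,x_{j-1}, y_j,...,y_m)."""
    m = len(x)
    A = np.zeros((m, m))
    for j in range(m):
        u = np.concatenate([x[:j + 1], y[j + 1:]])
        v = np.concatenate([x[:j], y[j:]])
        A[:, j] = (F(u) - F(v)) / (x[j] - y[j])
    return A
```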
Numerical results are displayed in Table 4 and Table 5, which include:
  • The dimension $(m)$ of the system of equations.
  • The required number of iterations $(n)$.
  • The value of $\| F(x_n) \|$ of the approximation to the corresponding solution of the considered problems, wherein $N(-h)$ denotes $N \times 10^{-h}$.
  • The approximate computational order of convergence (ACOC).
From the numerical results shown in Table 4 and Table 5, it is clear that the methods possess stable convergence behavior. Moreover, the small values of $\| F(x_n) \|$, in comparison with the other methods, show the accurate behavior of the presented methods. The computational order of convergence also supports the theoretical order of convergence. Similar numerical tests, carried out for a number of other problems, confirmed the above conclusions to a large extent.

Author Contributions

Methodology, J.R.S.; Writing—review & editing, J.R.S.; Conceptualization, I.K.A.; Formal analysis, I.K.A.; Investigation, S.K.; Data Curation, S.K.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Argyros, I.K. Computational Theory of Iterative Methods, Volume 15 (Studies in Computational Mathematics); Chui, C.K., Wuytack, L., Eds.; Elsevier: New York, NY, USA, 2007. [Google Scholar]
  2. Argyros, I.K.; Magreñán, Á.A. Iterative Methods and Their Dynamics with Applications: A Contemporary Study; CRC Press: New York, NY, USA, 2017. [Google Scholar]
  3. Argyros, I.K.; Hilout, S. Weaker conditions for the convergence of Newton’s method. J. Complex. 2012, 28, 364–387. [Google Scholar] [CrossRef]
  4. Argyros, I.K.; Ren, H. Improved local analysis for certain class of iterative methods with cubic convergence. Numer. Algorithms 2012, 59, 505–521. [Google Scholar]
  5. Babajee, D.K.R.; Dauhoo, M.Z.; Darvishi, M.T.; Barati, A. A note on the local convergence of iterative methods based on Adomian decomposition method and 3-node quadrature rule. Appl. Math. Comput. 2008, 200, 452–458. [Google Scholar] [CrossRef]
  6. Argyros, I.K.; Sharma, J.R.; Kumar, D. Ball convergence of Newton-Gauss method in Banach spaces. SeMA J. 2017, 74, 429–439. [Google Scholar] [CrossRef]
  7. Ren, H.; Wu, Q. Convergence ball and error analysis of a family of iterative methods with cubic convergence. Appl. Math. Comput. 2009, 209, 369–378. [Google Scholar] [CrossRef]
  8. Babajee, D.K.R.; Dauhoo, M.Z.; Darvishi, M.T.; Karami, A.; Barati, A. On some improved harmonic mean Newton-like methods for solving systems of nonlinear equations. J. Comput. Appl. Math. 2010, 233, 2002–2012. [Google Scholar] [CrossRef]
  9. Cordero, A.; Torregrosa, J.R. Variants of Newton’s method using fifth-order quadrature formulas. Appl. Math. Comput. 2007, 190, 686–698. [Google Scholar] [CrossRef]
  10. Darvishi, M.T. Some three-step iterative methods free from second order derivative for finding solutions of systems of nonlinear equations. Int. J. Pure Appl. Math. 2009, 57, 557–573. [Google Scholar]
  11. Darvishi, M.T.; Barati, A. Super cubic iterative methods to solve systems of nonlinear equations. Appl. Math. Comput. 2007, 188, 1678–1685. [Google Scholar] [CrossRef]
  12. Grau-Sánchez, M.; Grau, Á.; Noguera, M. Ostrowski type methods for solving systems of nonlinear equations. Appl. Math. Comput. 2011, 218, 2377–2385. [Google Scholar] [CrossRef]
  13. Magreñán, Á.A. A new tool to study real dynamics: The convergence plane. Appl. Math. Comput. 2014, 248, 215–224. [Google Scholar] [CrossRef]
  14. Noor, K.I.; Noor, M.A. Iterative methods with fourth-order convergence for nonlinear equations. Appl. Math. Comput. 2007, 189, 221–227. [Google Scholar] [CrossRef]
  15. Sharma, J.R.; Arora, H. On efficient weighted-Newton methods for solving systems of nonlinear equations. Appl. Math. Comput. 2013, 222, 497–506. [Google Scholar] [CrossRef]
  16. Sharma, J.R.; Arora, H. A new family of optimal eighth order methods with dynamics for non linear equations. Appl. Math. Comput. 2016, 273, 924–933. [Google Scholar]
  17. Shin, B.C.; Darvishi, M.T.; Kim, C.H. A comparison of the Newton-Krylov method with high order Newton-like methods to solve nonlinear systems. Appl. Math. Comput. 2010, 217, 3190–3198. [Google Scholar] [CrossRef]
  18. Weerakoon, S.; Fernando, T.G.I. A variant of Newton’s method with accelerated third-order convergence. Appl. Math. Lett. 2000, 13, 87–93. [Google Scholar] [CrossRef]
  19. Fang, X.; Ni, Q.; Zeng, M. A modified quasi-Newton method for nonlinear equations. J. Comput. Appl. Math. 2018, 328, 44–58. [Google Scholar] [CrossRef]
Table 1. Numerical results for Example 1.

Method (7) | Method (8) | Method (9)
$r_1 = 0.324947$ | $r_1 = 0.324947$ | $r_1 = 0.324947$
$r_2 = 0.119823$ | $\bar{r}_2 = 0.107789$ | $\bar{\bar{r}}_2 = 0.083622$
$r_3 = 0.115973$ | $\bar{r}_3 = 0.103461$ | $\bar{\bar{r}}_3 = 0.080798$
$r = 0.115973$ | $\bar{r} = 0.103461$ | $\bar{\bar{r}} = 0.080798$
Table 2. Numerical results for Example 2.

Method (7) | Method (8) | Method (9)
$r_1 = 0.033333$ | $r_1 = 0.033333$ | $r_1 = 0.033333$
$r_2 = 0.013431$ | $\bar{r}_2 = 0.011013$ | $\bar{\bar{r}}_2 = 0.008039$
$r_3 = 0.013389$ | $\bar{r}_3 = 0.010972$ | $\bar{\bar{r}}_3 = 0.008015$
$r = 0.013389$ | $\bar{r} = 0.010972$ | $\bar{\bar{r}} = 0.008015$
Table 3. Numerical results for Example 3.

Method (7) | Method (8) | Method (9)
$r_1 = 2.470865$ | $r_1 = 2.470865$ | $r_1 = 2.470865$
$r_2 = 0.288117$ | $\bar{r}_2 = 0.612891$ | $\bar{\bar{r}}_2 = 0.639134$
$r_3 = 0.254805$ | $\bar{r}_3 = 0.473734$ | $\bar{\bar{r}}_3 = 0.461618$
$r = 0.254805$ | $\bar{r} = 0.473734$ | $\bar{\bar{r}} = 0.461618$
Table 4. Comparison of performance of methods for Example 4; ACOC denotes the approximate computational order of convergence.

Methods | (2) | (38) | (39) | (40) | (41) | (7) | (8) | (9)
m = 8
n | 10 | 4 | 4 | 4 | 3 | 3 | 3 | 3
$\|F(x_n)\|$ | 9.26(-253) | 1.30(-304) | 8.01(-206) | 1.18(-168) | 2.80(-126) | 6.07(-258) | 1.00(-185) | 4.15(-171)
ACOC | 2.000 | 6.000 | 6.000 | 6.000 | 8.000 | 8.000 | 8.000 | 8.000
m = 25
n | 10 | 4 | 4 | 4 | 3 | 3 | 3 | 3
$\|F(x_n)\|$ | 1.64(-252) | 2.29(-304) | 1.42(-205) | 2.10(-168) | 4.95(-126) | 1.07(-257) | 1.77(-185) | 7.33(-171)
ACOC | 2.000 | 6.000 | 6.000 | 6.000 | 8.000 | 8.000 | 8.000 | 8.000
m = 50
n | 10 | 4 | 4 | 4 | 3 | 3 | 3 | 3
$\|F(x_n)\|$ | 2.31(-252) | 3.24(-304) | 2.00(-205) | 2.96(-168) | 7.01(-126) | 1.52(-257) | 2.50(-185) | 1.04(-170)
ACOC | 2.000 | 6.000 | 6.000 | 6.000 | 8.000 | 8.000 | 8.000 | 8.000
m = 100
n | 10 | 4 | 4 | 4 | 3 | 3 | 3 | 3
$\|F(x_n)\|$ | 3.27(-252) | 4.58(-304) | 2.83(-205) | 4.19(-168) | 9.91(-126) | 2.15(-257) | 3.54(-185) | 1.47(-170)
ACOC | 2.000 | 6.000 | 6.000 | 6.000 | 8.000 | 8.000 | 8.000 | 8.000
Table 5. Comparison of performance of methods for Example 5.

Methods | (2) | (38) | (39) | (40) | (41) | (7) | (8) | (9)
m = 20
n | 10 | 3 | 4 | 4 | 3 | 3 | 3 | 3
$\|F(x_n)\|$ | 6.42(-327) | 1.15(-63) | 1.49(-278) | 7.64(-234) | 1.42(-162) | 3.71(-246) | 2.82(-184) | 1.41(-197)
ACOC | 2.000 | 6.000 | 6.000 | 6.000 | 8.000 | 8.000 | 8.000 | 8.000
m = 50
n | 10 | 3 | 4 | 4 | 3 | 3 | 3 | 3
$\|F(x_n)\|$ | 1.01(-326) | 1.82(-63) | 2.35(-278) | 1.21(-233) | 2.25(-162) | 5.87(-246) | 4.46(-184) | 2.24(-197)
ACOC | 2.000 | 6.000 | 6.000 | 6.000 | 8.000 | 8.000 | 8.000 | 8.000
m = 100
n | 10 | 3 | 4 | 4 | 3 | 3 | 3 | 3
$\|F(x_n)\|$ | 1.43(-326) | 2.57(-63) | 3.32(-278) | 1.71(-233) | 3.18(-162) | 8.31(-246) | 6.31(-184) | 3.16(-197)
ACOC | 2.000 | 6.000 | 6.000 | 6.000 | 8.000 | 8.000 | 8.000 | 8.000
m = 200
n | 10 | 3 | 4 | 4 | 3 | 3 | 3 | 3
$\|F(x_n)\|$ | 2.03(-326) | 3.64(-63) | 4.70(-278) | 2.41(-233) | 4.50(-162) | 1.17(-245) | 8.92(-184) | 4.47(-197)
ACOC | 2.000 | 6.000 | 6.000 | 6.000 | 8.000 | 8.000 | 8.000 | 8.000

Share and Cite

MDPI and ACS Style

Sharma, J.R.; Argyros, I.K.; Kumar, S. Convergence Analysis of Weighted-Newton Methods of Optimal Eighth Order in Banach Spaces. Mathematics 2019, 7, 198. https://doi.org/10.3390/math7020198

