Article

A New Higher-Order Iterative Scheme for the Solutions of Nonlinear Systems

by Ramandeep Behl 1,† and Ioannis K. Argyros 2,*,†
1 Department of Mathematics, King Abdulaziz University, Jeddah 21589, Saudi Arabia
2 Department of Mathematical Sciences, Cameron University, Lawton, OK 73505, USA
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Mathematics 2020, 8(2), 271; https://doi.org/10.3390/math8020271
Submission received: 23 January 2020 / Revised: 9 February 2020 / Accepted: 10 February 2020 / Published: 18 February 2020

Abstract

Many real-life problems can be reduced to scalar and vector nonlinear equations through mathematical modeling. In this paper, we introduce a new sixth-order iterative family for systems of nonlinear equations. In addition, we present an analysis of their convergence, together with computable radii that guarantee convergence for Banach space valued operators and error bounds based on Lipschitz constants. Moreover, we show their applicability to some real-life problems, such as kinematic synthesis, Bratu's, Fisher's, boundary value, and Hammerstein integral problems. We conclude on the basis of the numerical experiments, in which the new methods perform better than other competing schemes.

1. Introduction

Establishing higher-order efficient iterative schemes for finding solutions of
$$F(U) = 0,$$
where $F:D\subseteq\mathbb{R}^m\to\mathbb{R}^m$ is a differentiable mapping with open domain $D$, is one of the foremost tasks in numerical analysis and computational methods because of its wide application to real-life situations. Many real-life problems can be phrased as the nonlinear system (1) while sharing the same fundamental properties. For example, problems from transport theory, combustion, reactors, kinematic synthesis, steering, chemical equilibrium, neurophysiology, and economic modeling have been solved by formulating them as $F(U) = 0$; details can be found in the research articles [1,2,3,4,5].
Analytical solutions to such problems are rare. Therefore, many authors have developed iterative schemes based on iteration procedures. These iterative methods depend on several factors, such as the initial guess(es), the considered problem, the structure of the proposed method, and its efficiency (for more details, see [6,7,8,9,10]). Some authors [11,12,13,14,15,16] have paid special attention to the development of higher-order multi-point iterative methods. Faster convergence toward the required root, better efficiency, lower CPU time, and higher accuracy are among the main reasons behind the importance of multi-point methods.
The inspiration behind this work was to suggest a new sixth-order iterative technique based on the weight-function approach with lower computational cost for large nonlinear systems. The beauty of this approach is that it gives us the flexibility to produce new methods, as well as some special cases of earlier methods. A good variety of applied science problems is considered in order to investigate the performance of the presented methods. Finally, numerical experiments show the superiority of our schemes compared to others in regard to computational cost, residual error, and CPU time. Moreover, they also show a stable computational order of convergence and minimal asymptotic error constants in contrast with existing iterative methods.

2. Multi-Dimensional Case

Consider the following new scheme:
$$\begin{aligned}
V_\zeta &= U_\zeta - \frac{2}{3}F'(U_\zeta)^{-1}F(U_\zeta),\\
W_\zeta &= U_\zeta - P\big(T(U_\zeta)\big)\big[F'(U_\zeta) + bF'(V_\zeta)\big]^{-1}F(U_\zeta),\\
U_{\zeta+1} &= W_\zeta + 2Q(U_\zeta)^{-1}F(W_\zeta),\qquad \zeta = 0, 1, 2, \ldots,
\end{aligned}$$
where $P$ is a sufficiently differentiable weight function, $T(U_\zeta) = F'(U_\zeta)^{-1}F'(V_\zeta)$, and $Q(U_\zeta) = F'(U_\zeta) - 3F'(V_\zeta)$. We demonstrate the sixth-order convergence in Theorem 1 by adopting the same procedure suggested in [16].
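To make the structure of scheme (2) concrete, here is a minimal NumPy sketch of one iteration and a simple driver. It is our own illustration (the authors used Mathematica), and the names `step_family` and `solve`, as well as the stopping rule, are our choices; the weight function `P` and the parameter `b` must satisfy the conditions of Theorem 1 below.

```python
# Illustrative sketch (not the authors' code): one step of the family (2) in NumPy.
import numpy as np

def step_family(F, J, U, P, b):
    """One iteration U -> U_{+} of scheme (2)."""
    JU = J(U)
    FU = F(U)
    V = U - (2.0 / 3.0) * np.linalg.solve(JU, FU)      # first substep
    JV = J(V)
    T = np.linalg.solve(JU, JV)                        # T(U) = F'(U)^{-1} F'(V)
    W = U - P(T) @ np.linalg.solve(JU + b * JV, FU)    # second substep
    Q = JU - 3.0 * JV                                  # Q(U) = F'(U) - 3 F'(V)
    return W + 2.0 * np.linalg.solve(Q, F(W))          # third substep

def solve(F, J, U0, P, b=0.0, tol=1e-12, maxit=50):
    """Simple driver: iterate scheme (2) until the step is small."""
    U = np.asarray(U0, dtype=float)
    for _ in range(maxit):
        U_new = step_family(F, J, U, P, b)
        if np.linalg.norm(U_new - U) < tol:
            return U_new
        U = U_new
    return U
```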
Let $F:D\subseteq\mathbb{R}^m\to\mathbb{R}^m$ be sufficiently differentiable in $D$. The $k$th derivative of $F$ at $u\in\mathbb{R}^m$, $k\ge1$, is the $k$-linear function $F^{(k)}(u):\mathbb{R}^m\times\cdots\times\mathbb{R}^m\to\mathbb{R}^m$ with $F^{(k)}(u)(v_1,\ldots,v_k)\in\mathbb{R}^m$, and we have
  • $F^{(k)}(u)(v_1,\ldots,v_{k-1},\cdot)\in L(\mathbb{R}^m)$;
  • $F^{(k)}(u)(v_{\sigma(1)},\ldots,v_{\sigma(k)}) = F^{(k)}(u)(v_1,\ldots,v_k)$, for each permutation $\sigma$ of $\{1,2,\ldots,k\}$,
which further yields
(a) $F^{(k)}(u)(v_1,\ldots,v_k) = F^{(k)}(u)\,v_1\cdots v_k$;
(b) $F^{(k)}(u)\,v^{k-1}\,F^{(p)}(u)\,v^{p} = F^{(k)}(u)F^{(p)}(u)\,v^{k+p-1}$.
For $\xi + h\in\mathbb{R}^m$ lying in a neighborhood of the required root $\xi$ of $F(x) = 0$, Taylor's expansion gives
$$F(\xi + h) = F'(\xi)\Big[h + \sum_{k=2}^{p-1}C_kh^k\Big] + O(h^p),$$
where $C_k = \frac{1}{k!}[F'(\xi)]^{-1}F^{(k)}(\xi)$, $k\ge2$, provided $F'(\xi)$ is invertible. We recognize that $C_kh^k\in\mathbb{R}^m$, since $F^{(k)}(\xi)\in L(\mathbb{R}^m\times\cdots\times\mathbb{R}^m,\mathbb{R}^m)$ and $[F'(\xi)]^{-1}\in L(\mathbb{R}^m)$.
We can also write
$$F'(\xi + h) = F'(\xi)\Big[I + \sum_{k=2}^{p-1}kC_kh^{k-1}\Big] + O(h^{p-1}),$$
$I$ being the identity and $kC_kh^{k-1}\in L(\mathbb{R}^m)$.
By (4), we get
$$[F'(\xi + h)]^{-1} = \Big[I + U_2h + U_3h^2 + U_4h^3 + \cdots\Big][F'(\xi)]^{-1} + O(h^p),$$
with
$$U_2 = -2C_2,\qquad U_3 = 4C_2^2 - 3C_3,\qquad U_4 = -8C_2^3 + 6C_2C_3 - 4C_4 + 6C_3C_2,\ \ldots$$
Here $e_\zeta = U_\zeta - \xi$ denotes the error at the $\zeta$th step. The expansion
$$e_{\zeta+1} = Me_\zeta^{\,p} + O(e_\zeta^{\,p+1}),$$
where $M$ is a $p$-linear function, $M\in L(\mathbb{R}^m\times\cdots\times\mathbb{R}^m,\mathbb{R}^m)$, is known as the error equation, and $p$ is the order of convergence. Observe that $e_\zeta^{\,p}$ stands for $(e_\zeta, e_\zeta, \ldots, e_\zeta)$.
Theorem 1.
Suppose $F:D\subseteq\mathbb{R}^m\to\mathbb{R}^m$ is a sufficiently differentiable mapping with open domain $D$ containing the required zero $\xi$. Further, assume that $F'(x)$ is continuous and invertible in a neighborhood of $\xi$, and that the starting guess $U_0$ is close enough to $\xi$. Then, scheme (2) attains maximum sixth-order convergence, provided that
$$P(I) = (1+b)I,\qquad P'(I) = \frac{b-3}{4}I,\qquad P''(I) = \frac{3(b+3)}{4}I,$$
where $I$ is the identity matrix.
Proof. 
We can write $F(U_\zeta)$ and $F'(U_\zeta)$ as follows:
$$F(U_\zeta) = F'(\xi)\big[e_\zeta + C_2e_\zeta^2 + C_3e_\zeta^3 + C_4e_\zeta^4 + C_5e_\zeta^5 + C_6e_\zeta^6\big] + O(e_\zeta^7)$$
and
$$F'(U_\zeta) = F'(\xi)\big[I + 2C_2e_\zeta + 3C_3e_\zeta^2 + 4C_4e_\zeta^3 + 5C_5e_\zeta^4 + 6C_6e_\zeta^5\big] + O(e_\zeta^6),$$
where $I$ is the $m\times m$ identity matrix and $C_m = \frac{1}{m!}F'(\xi)^{-1}F^{(m)}(\xi)$, $m = 2, 3, 4, 5, 6$.
By expressions (7) and (8), we get
$$F'(U_\zeta)^{-1} = \Big[I - 2C_2e_\zeta + \big(4C_2^2 - 3C_3\big)e_\zeta^2 + \cdots\Big]F'(\xi)^{-1}$$
and
$$F'(U_\zeta)^{-1}F(U_\zeta) = e_\zeta - C_2e_\zeta^2 + \big(2C_2^2 - 2C_3\big)e_\zeta^3 + O(e_\zeta^4).$$
Using expression (10) in (2), we have
$$V_\zeta - \xi = \frac{1}{3}e_\zeta + \frac{2}{3}C_2e_\zeta^2 - \frac{2}{3}\big(2C_2^2 - 2C_3\big)e_\zeta^3 + O(e_\zeta^4),$$
which further produces
$$F'(V_\zeta) = F'(\xi)\Big[I + \frac{2}{3}C_2e_\zeta + \frac{1}{3}\big(4C_2^2 + C_3\big)e_\zeta^2 + O(e_\zeta^3)\Big]$$
and
$$F'(U_\zeta) + bF'(V_\zeta) = F'(\xi)\Big[(1+b)I + \frac{2(b+3)C_2}{3}e_\zeta + \frac{1}{3}\big(4bC_2^2 + (b+9)C_3\big)e_\zeta^2 + O(e_\zeta^3)\Big].$$
We can easily obtain the following from the expressions (10) and (13):
$$T(U_\zeta) = F'(U_\zeta)^{-1}F'(V_\zeta) = I - \frac{4C_2}{3}e_\zeta + \Big(4C_2^2 - \frac{8C_3}{3}\Big)e_\zeta^2 + O(e_\zeta^3).$$
We deduce from expression (14) that $T(U_\zeta) - I = O(e_\zeta)$. Moreover, we can write
$$P\big(T(U_\zeta)\big) = P(I) + P'(I)\big(T(U_\zeta) - I\big) + \frac{1}{2!}P''(I)\big(T(U_\zeta) - I\big)^2 + O\big(\big(T(U_\zeta) - I\big)^3\big),$$
so
$$P\big(T(U_\zeta)\big) = (1+b)I - \frac{(b-3)C_2}{3}e_\zeta + \frac{(5b-3)C_2^2 - 2(b-3)C_3}{3}e_\zeta^2 + O(e_\zeta^3).$$
By adopting the expressions (10) and (16), we have
$$\begin{aligned}
P\big(T(U_\zeta)\big)\big[F'(U_\zeta) + bF'(V_\zeta)\big]^{-1}F(U_\zeta) ={}&\Big[(1+b)I - \frac{(b-3)C_2}{3}e_\zeta + \frac{(5b-3)C_2^2 - 2(b-3)C_3}{3}e_\zeta^2 + O(e_\zeta^3)\Big]\\
&\times\Big[\frac{e_\zeta}{b+1} + \frac{(b-3)C_2}{3(b+1)^2}e_\zeta^2 - \frac{2(7b^2+6b-9)C_2^2 - 6(b^2-2b-3)C_3}{9(b+1)^3}e_\zeta^3 + O(e_\zeta^4)\Big]\\
={}&e_\zeta + O(e_\zeta^4).
\end{aligned}$$
Then, using expression (17) in (2), we obtain
$$W_\zeta - \xi = \alpha_1e_\zeta^4 + \alpha_2e_\zeta^5 + \alpha_3e_\zeta^6 + O(e_\zeta^7),$$
where the $\alpha_i$, $i = 1, 2, 3$, depend on $b$ and on $C_j$, $2\le j\le6$. Moreover, we have
$$F(W_\zeta) = F'(\xi)\big[\alpha_1e_\zeta^4 + \alpha_2e_\zeta^5 + \alpha_3e_\zeta^6\big] + O(e_\zeta^7).$$
After some simple algebraic calculations, we have
$$2Q(U_\zeta)^{-1}F(W_\zeta) = 2\big[F'(U_\zeta) - 3F'(V_\zeta)\big]^{-1}F(W_\zeta) = -\alpha_1e_\zeta^4 - \alpha_2e_\zeta^5 + \big(-\alpha_3 + 2\alpha_1C_2^2 - \alpha_1C_3\big)e_\zeta^6 + O(e_\zeta^7).$$
Finally, we have
$$U_{\zeta+1} - \xi = W_\zeta - \xi + 2Q(U_\zeta)^{-1}F(W_\zeta) = \alpha_1\big(2C_2^2 - C_3\big)e_\zeta^6 + O(e_\zeta^7),$$
where $\alpha_1$ is a function of $b$, $C_2$, $C_3$, and $C_4$ only. □

Specializations

Some of the fruitful cases are mentioned below:
(1) We assume
$$P(U) = \frac{3}{2}U^2 - \frac{1}{2}U + 4I$$
for $b = 1$, which generates the following new sixth-order, Jarratt-type scheme:
$$\begin{aligned}
V_\zeta &= U_\zeta - \frac{2}{3}F'(U_\zeta)^{-1}F(U_\zeta),\\
W_\zeta &= U_\zeta - \Big[\frac{3}{2}\big(F'(U_\zeta)^{-1}F'(V_\zeta)\big)^2 - \frac{1}{2}F'(U_\zeta)^{-1}F'(V_\zeta) + 4I\Big]\big[F'(U_\zeta) + F'(V_\zeta)\big]^{-1}F(U_\zeta),\\
U_{\zeta+1} &= W_\zeta + 2Q(U_\zeta)^{-1}F(W_\zeta).
\end{aligned}$$
(2) Consider the following weight function for $b = 0$:
$$P(U) = \frac{3}{8}U + \frac{9}{8}U^{-1} - \frac{1}{2}I,$$
leading to
$$\begin{aligned}
V_\zeta &= U_\zeta - \frac{2}{3}F'(U_\zeta)^{-1}F(U_\zeta),\\
W_\zeta &= U_\zeta - \Big[\frac{3}{8}F'(U_\zeta)^{-1}F'(V_\zeta) + \frac{9}{8}\big(F'(U_\zeta)^{-1}F'(V_\zeta)\big)^{-1} - \frac{1}{2}I\Big]F'(U_\zeta)^{-1}F(U_\zeta),\\
U_{\zeta+1} &= W_\zeta + 2Q(U_\zeta)^{-1}F(W_\zeta).
\end{aligned}$$
(3) Now, we assume another weight function (for $b = 0.5$),
$$P(U) = \big(44I - 84U\big)^{-1}\big(41I - 101U\big),$$
which yields
$$\begin{aligned}
V_\zeta &= U_\zeta - \frac{2}{3}F'(U_\zeta)^{-1}F(U_\zeta),\\
W_\zeta &= U_\zeta - \big[44I - 84F'(U_\zeta)^{-1}F'(V_\zeta)\big]^{-1}\big[41I - 101F'(U_\zeta)^{-1}F'(V_\zeta)\big]\big[F'(U_\zeta) + 0.5F'(V_\zeta)\big]^{-1}F(U_\zeta),\\
U_{\zeta+1} &= W_\zeta + 2Q(U_\zeta)^{-1}F(W_\zeta),
\end{aligned}$$
which is another new sixth-order scheme.
In like manner, we can obtain many familiar and advanced sixth-order, Jarratt-type schemes by adopting different weight functions.
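For instance, under our reading of the $b = 0$ member (23), the corresponding weight function can be supplied to the sketch given after scheme (2) as follows (again an illustration, not the authors' code):

```python
# A possible instantiation of the weight function for the b = 0 member (23),
# written for the step_family/solve sketch above; P_nm2 is our own name.
import numpy as np

def P_nm2(T):
    I = np.eye(T.shape[0])
    return (3.0 / 8.0) * T + (9.0 / 8.0) * np.linalg.inv(T) - 0.5 * I

# Usage with the earlier sketch:  U = solve(F, J, U0, P=P_nm2, b=0.0)
```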

3. Local Convergence Analysis

It is well known that iterative methods defined on the real line or on the $m$-dimensional Euclidean space motivate extensions to more abstract spaces, such as Hilbert, Banach, or other spaces. We therefore carry out the local convergence analysis of method (2), defining it for Banach space valued operators, for all $\zeta = 0, 1, 2, 3, \ldots$, as
$$\begin{aligned}
V_\zeta &= U_\zeta - \frac{2}{3}F'(U_\zeta)^{-1}F(U_\zeta),\\
W_\zeta &= U_\zeta - P\big(T(U_\zeta)\big)\big[F'(U_\zeta) + bF'(V_\zeta)\big]^{-1}F(U_\zeta),\\
U_{\zeta+1} &= W_\zeta + 2Q(U_\zeta)^{-1}F(W_\zeta),
\end{aligned}$$
where $E_1$, $E_2$ are Banach spaces, $\Omega\subseteq E_1$ is a nonempty, convex, and open subset of $E_1$, $T(U) = F'(U)^{-1}F'(V)$, $Q(U) = F'(U) - 3F'(V)$, $b\in\mathbb{R}\setminus\{-1\}$, and the weight function $P$ is such that $P(I) = (1+b)I$. Then, under certain hypotheses given later, method (25) converges to a solution $U^*$ of the equation
F ( U ) = 0 ,
where $F:\Omega\to E_2$ is a continuously Fréchet-differentiable operator. For the convergence analysis, we first need to define some parameters and scalar functions. Let $\psi_0: T\to T$ be a continuous and increasing function with $\psi_0(0) = 0$, where $T = [0, +\infty)$, and suppose that the equation
$$\psi_0(t) = 1$$
has at least one positive solution. Denote by $\rho_0$ the smallest such solution, and set $T_0 = [0, \rho_0)$. Let $\psi: T_0\to T$ and $\psi_1: T_0\to T$ be continuous and increasing maps with $\psi(0) = 0$. Consider the maps $\varphi_1$ and $\bar\varphi_1$ on $T_0$ defined by
$$\varphi_1(t) = \frac{\displaystyle\int_0^1\psi\big((1-\tau)t\big)\,d\tau + \frac{1}{3}\int_0^1\psi_1(\tau t)\,d\tau}{1 - \psi_0(t)}$$
and
$$\bar\varphi_1(t) = \varphi_1(t) - 1.$$
Suppose that
$$\frac{\psi_1(0)}{3} < 1.$$
Then, by the definition of the function $\bar\varphi_1$ and (26), we have $\bar\varphi_1(0) = \frac{\psi_1(0)}{3} - 1 < 0$ and $\bar\varphi_1(t)\to+\infty$ as $t\to\rho_0^-$. Hence, by the intermediate value theorem, there exists at least one solution of $\bar\varphi_1(t) = 0$ in the interval $(0, \rho_0)$. Denote by $R_1$ the smallest such solution.
Suppose that the equation
$$\lambda(t) = 1,\qquad\text{where}\qquad \lambda(t) = \frac{1}{|1+b|}\Big[\psi_0(t) + |b|\,\psi_0\big(\varphi_1(t)t\big)\Big],$$
has at least one positive solution. Denote by $\rho_\lambda$ the smallest such solution, and set $T_1 = [0, \rho_1)$, $\rho_1 = \min\{\rho_0, \rho_\lambda\}$.
Further, we consider the functions $\varphi_2$ and $\bar\varphi_2$ on $T_1$ defined by
$$\varphi_2(t) = \frac{\displaystyle\int_0^1\psi\big((1-\tau)t\big)\,d\tau}{1 - \psi_0(t)} + \frac{\Big[\big(\psi_0(t) + \psi_0(\varphi_1(t)t)\big)A(t) + \psi_1\big(\varphi_1(t)t\big)B(t)\Big]\displaystyle\int_0^1\psi_1(\tau t)\,d\tau}{\big(1 - \lambda(t)\big)\big(1 - \psi_0(t)\big)}$$
and
$$\bar\varphi_2(t) = \varphi_2(t) - 1,$$
where $A: T_1\to T$ and $B: T_1\to T$ are continuous and increasing functions. We also get $\bar\varphi_2(0) = -1$ and $\bar\varphi_2(t)\to+\infty$ as $t\to\rho_1^-$. Denote by $R_2$ the smallest solution of the equation $\bar\varphi_2(t) = 0$ in $(0, \rho_1)$.
Suppose that the equations
$$g(t) = 1,\qquad \psi_0\big(\varphi_2(t)t\big) = 1$$
have at least one positive solution each, where $g(t) = \frac{1}{2}\big[\psi_0(t) + 3\psi_0\big(\varphi_1(t)t\big)\big]$. Denote by $\rho_g$, $\rho_h$ the smallest such solutions, and set $T_2 = [0, \rho)$, $\rho = \min\{\rho_0, \rho_\lambda, \rho_g, \rho_h\}$. Define the functions $\varphi_3$ and $\bar\varphi_3$ on the interval $T_2 = [0, \rho)$ by
$$\varphi_3(t) = \Bigg[\frac{\displaystyle\int_0^1\psi\big((1-\tau)\varphi_2(t)t\big)\,d\tau}{1 - \psi_0\big(\varphi_2(t)t\big)} + \frac{\Big(\psi_0(t) + 3\psi_0\big(\varphi_1(t)t\big) + 2\psi_0\big(\varphi_2(t)t\big)\Big)\displaystyle\int_0^1\psi_1\big(\tau\varphi_2(t)t\big)\,d\tau}{\big(1 - \psi_0(\varphi_2(t)t)\big)\big(1 - g(t)\big)}\Bigg]\varphi_2(t)$$
and
$$\bar\varphi_3(t) = \varphi_3(t) - 1.$$
We also get $\bar\varphi_3(0) = -1$ and $\bar\varphi_3(t)\to+\infty$ as $t\to\rho^-$. Denote by $R_3$ the smallest solution of the equation $\bar\varphi_3(t) = 0$ in $(0, \rho)$. Define the radius of convergence $R$ by
$$R = \min\{R_i\},\qquad i = 1, 2, 3.$$
It follows that, for each $t\in[0, R)$,
$$0\le\psi_0(t) < 1,$$
$$0\le\psi_0\big(\varphi_1(t)t\big) < 1,$$
$$0\le\psi_0\big(\varphi_2(t)t\big) < 1,$$
$$0\le\varphi_1(t) < 1,$$
$$0\le\lambda(t) < 1,$$
$$0\le\varphi_2(t) < 1,$$
$$0\le g(t) < 1,$$
and
$$0\le\varphi_3(t) < 1.$$
Define $K(U, a) = \{V\in E_1 : \|U - V\| < a\}$ and let $\bar K(U, a)$ stand for the closure of $K(U, a)$. By $L_B(E_1, E_2)$ we denote the space of bounded linear operators from $E_1$ into $E_2$.
The following conditions $(H)$ form the basis of the local convergence analysis:
$(h_1)$ $F:\Omega\to E_2$ and there exists $U^*\in\Omega$ such that $F(U^*) = 0$ and $F'(U^*)^{-1}$ exists.
$(h_2)$ There exists a continuous and increasing function $\psi_0: T\to T$ with $\psi_0(0) = 0$ such that, for each $U\in\Omega$,
$$\|F'(U^*)^{-1}\big(F'(U) - F'(U^*)\big)\|\le\psi_0(\|U - U^*\|).$$
Set $\Omega_0 = \Omega\cap S(U^*, \rho_0)$, where $\rho_0$ is given in (25).
$(h_3)$ There exist continuous and increasing functions $\psi: T_0\to T$, $\psi_1: T_0\to T$, $A: T_0\to T$, and $B: T_0\to T$ with $\psi(0) = 0$ such that, for each $U, V\in\Omega_0$,
$$\|F'(U^*)^{-1}\big(F'(V) - F'(U)\big)\|\le\psi(\|V - U\|),$$
$$\|F'(U^*)^{-1}F'(U)\|\le\psi_1(\|U - U^*\|),$$
$$\|I - P(T(U))\|\le A(\|U - U^*\|),$$
$$\|P(I) - P(T(U))\|\le B(\|U - U^*\|),$$
and
$$P(I) = (1+b)I.$$
$(h_4)$ $\bar K(U^*, R)\subseteq\Omega$, and $\rho_0$, $\rho_\lambda$, $\rho_g$ exist and are given by (25), (29), and (30), respectively, where $R$ is defined in (31).
$(h_5)$ There exists $R^*\ge R$ such that
$$\int_0^1\psi_0(\tau R^*)\,d\tau < 1.$$
Set $\Omega_1 = \Omega\cap\bar K(U^*, R^*)$.
Next, we provide the local convergence analysis of method (25) using the hypotheses ( H ) and the previously developed notations.
Theorem 2.
Assume that the hypotheses $(H)$ hold and that $U_0\in K(U^*, R)\setminus\{U^*\}$. Then, the sequence $\{U_\zeta\}\subseteq K(U^*, R)$, $\lim_{\zeta\to\infty}U_\zeta = U^*$, and
$$\|V_\zeta - U^*\|\le\varphi_1(\|U_\zeta - U^*\|)\,\|U_\zeta - U^*\|\le\|U_\zeta - U^*\| < R,$$
$$\|W_\zeta - U^*\|\le\varphi_2(\|U_\zeta - U^*\|)\,\|U_\zeta - U^*\|\le\|U_\zeta - U^*\|,$$
and
$$\|U_{\zeta+1} - U^*\|\le\varphi_3(\|U_\zeta - U^*\|)\,\|U_\zeta - U^*\|\le\|U_\zeta - U^*\|.$$
Moreover, U * is the only solution of F ( x ) = 0 in the set Ω 1 given in ( h 5 ) .
Proof. 
Estimates (40)–(42) are shown using the hypotheses $(H)$ and mathematical induction. By $(h_1)$, $(h_2)$, (31), and (32), we have
$$\|F'(U^*)^{-1}\big(F'(U) - F'(U^*)\big)\|\le\psi_0(\|U - U^*\|) < \psi_0(R) < 1,$$
for each $U\in K(U^*, R)\setminus\{U^*\}$, so $F'(U)^{-1}\in L_B(E_2, E_1)$ and
$$\|F'(U)^{-1}F'(U^*)\|\le\frac{1}{1 - \psi_0(\|U - U^*\|)},$$
by the Banach perturbation lemma on invertible operators [6,7]. Then, $V_0$ is well defined by method (25) for $\zeta = 0$. By convexity, $U^* + \tau(U - U^*)\in K(U^*, R)$ for each $U\in K(U^*, R)$ and $\tau\in[0, 1]$, so by adopting $(h_1)$ we get
$$F(U) = F(U) - F(U^*) = \int_0^1F'\big(U^* + \tau(U - U^*)\big)\,d\tau\,(U - U^*).$$
From the hypotheses $(h_1)$ and $(h_3)$, we get
$$\|F'(U^*)^{-1}F(U)\|\le\int_0^1\psi_1(\tau\|U - U^*\|)\,d\tau\,\|U - U^*\|.$$
Using the first substep of method (25) for $\zeta = 0$, (31), (35), $(h_3)$, (44) (for $U = U_0$), and (46) (for $U = U_0$), we have in turn that
$$\begin{aligned}
\|V_0 - U^*\| &= \Big\|U_0 - U^* - F'(U_0)^{-1}F(U_0) + \frac{1}{3}F'(U_0)^{-1}F(U_0)\Big\|\\
&\le\|F'(U_0)^{-1}F'(U^*)\|\,\Big\|\int_0^1F'(U^*)^{-1}\big[F'(U^* + \tau(U_0 - U^*)) - F'(U_0)\big]\,d\tau\,(U_0 - U^*)\Big\| + \frac{1}{3}\|F'(U_0)^{-1}F'(U^*)\|\,\|F'(U^*)^{-1}F(U_0)\|\\
&\le\frac{\displaystyle\int_0^1\psi\big((1-\tau)\|U_0 - U^*\|\big)\,d\tau + \frac{1}{3}\int_0^1\psi_1(\tau\|U_0 - U^*\|)\,d\tau}{1 - \psi_0(\|U_0 - U^*\|)}\,\|U_0 - U^*\|\\
&= \varphi_1(\|U_0 - U^*\|)\,\|U_0 - U^*\|\le\|U_0 - U^*\| < R,
\end{aligned}$$
so (40) holds for $\zeta = 0$ and $V_0\in K(U^*, R)$. Next, we must show that $\big[F'(U_0) + bF'(V_0)\big]^{-1}\in L_B(E_2, E_1)$.
By (31), (36), $(h_2)$, and (47), we have
$$\begin{aligned}
\Big\|\big[(1+b)F'(U^*)\big]^{-1}\big[\big(F'(U_0) - F'(U^*)\big) + b\big(F'(V_0) - F'(U^*)\big)\big]\Big\| &\le\frac{1}{|1+b|}\Big[\|F'(U^*)^{-1}\big(F'(U_0) - F'(U^*)\big)\| + |b|\,\|F'(U^*)^{-1}\big(F'(V_0) - F'(U^*)\big)\|\Big]\\
&\le\frac{1}{|1+b|}\Big[\psi_0(\|U_0 - U^*\|) + |b|\,\psi_0(\|V_0 - U^*\|)\Big]\le\lambda(\|U_0 - U^*\|) < \lambda(R) < 1,
\end{aligned}$$
so
$$\Big\|\big[F'(U_0) + bF'(V_0)\big]^{-1}F'(U^*)\Big\|\le\frac{1}{|1+b|\big(1 - \lambda(\|U_0 - U^*\|)\big)}.$$
Then, since $W_0$ is well defined by (24) and the second substep of method (25), we can write
$$W_0 - U^* = U_0 - U^* - F'(U_0)^{-1}F(U_0) + \Big[F'(U_0)^{-1} - P(T(U_0))\big(F'(U_0) + bF'(V_0)\big)^{-1}\Big]F(U_0).$$
We need an estimate on the expression inside the bracket in (50):
$$\begin{aligned}
F'(U_0)^{-1} - P(T(U_0))\big(F'(U_0) + bF'(V_0)\big)^{-1} &= F'(U_0)^{-1}\Big[I - F'(U_0)P(T(U_0))\big(F'(U_0) + bF'(V_0)\big)^{-1}\Big]\\
&= F'(U_0)^{-1}\Big[F'(U_0) + bF'(V_0) - F'(U_0)P(T(U_0))\Big]\big(F'(U_0) + bF'(V_0)\big)^{-1}\\
&= F'(U_0)^{-1}\Big[\big(F'(U_0) - F'(U^*) + F'(U^*) - F'(V_0)\big)\big(I - P(T(U_0))\big) + F'(V_0)\big(P(I) - P(T(U_0))\big)\Big]\big(F'(U_0) + bF'(V_0)\big)^{-1},
\end{aligned}$$
so by $(h_3)$, (44), and (49), the norm of (51) is bounded above by
$$\frac{\|F'(U_0)^{-1}F'(U^*)\|\Big[\big(\|F'(U^*)^{-1}(F'(U_0) - F'(U^*))\| + \|F'(U^*)^{-1}(F'(V_0) - F'(U^*))\|\big)\|I - P(T(U_0))\| + \|F'(U^*)^{-1}F'(V_0)\|\,\|P(I) - P(T(U_0))\|\Big]}{1 - \lambda(\|U_0 - U^*\|)},$$
and, by (31), (37), (44) (for $U = U_0$), (46) (for $U = U_0$), (47), (49), and (52), we obtain
$$\begin{aligned}
\|W_0 - U^*\| &\le\|U_0 - U^* - F'(U_0)^{-1}F(U_0)\| + \Big\|\Big[F'(U_0)^{-1} - P(T(U_0))\big(F'(U_0) + bF'(V_0)\big)^{-1}\Big]F(U_0)\Big\|\\
&\le\Bigg[\frac{\displaystyle\int_0^1\psi\big((1-\tau)\|U_0 - U^*\|\big)\,d\tau}{1 - \psi_0(\|U_0 - U^*\|)} + \frac{\big(\psi_0(\|U_0 - U^*\|) + \psi_0(\varphi_1(\|U_0 - U^*\|)\|U_0 - U^*\|)\big)A(\|U_0 - U^*\|) + \psi_1\big(\varphi_1(\|U_0 - U^*\|)\|U_0 - U^*\|\big)B(\|U_0 - U^*\|)}{\big(1 - \psi_0(\|U_0 - U^*\|)\big)\big(1 - \lambda(\|U_0 - U^*\|)\big)}\int_0^1\psi_1(\tau\|U_0 - U^*\|)\,d\tau\Bigg]\|U_0 - U^*\|\\
&= \varphi_2(\|U_0 - U^*\|)\,\|U_0 - U^*\|\le\|U_0 - U^*\| < R,
\end{aligned}$$
so (41) holds for $\zeta = 0$ and $W_0\in K(U^*, R)$. Next, we must show that $Q(U_0)^{-1}\in L_B(E_2, E_1)$. Using (31), (38), $(h_2)$, and (47), we get
$$\Big\|\big(2F'(U^*)\big)^{-1}\big[\big(F'(U_0) - F'(U^*)\big) - 3\big(F'(V_0) - F'(U^*)\big)\big]\Big\|\le\frac{1}{2}\Big[\|F'(U^*)^{-1}\big(F'(U_0) - F'(U^*)\big)\| + 3\|F'(U^*)^{-1}\big(F'(V_0) - F'(U^*)\big)\|\Big]\le\frac{1}{2}\Big[\psi_0(\|U_0 - U^*\|) + 3\psi_0(\|V_0 - U^*\|)\Big]\le g(\|U_0 - U^*\|) < 1.$$
Thus, $Q(U_0)^{-1}\in L_B(E_2, E_1)$ and
$$\|Q(U_0)^{-1}F'(U^*)\|\le\frac{1}{2\big(1 - g(\|U_0 - U^*\|)\big)}.$$
Hence, $U_1$ is well defined by the last substep of method (25). By adopting (31), (39), (53), and (55), we have
$$\begin{aligned}
U_1 - U^* &= W_0 - U^* - F'(W_0)^{-1}F(W_0) + \big(F'(W_0)^{-1} + 2Q(U_0)^{-1}\big)F(W_0)\\
&= W_0 - U^* - F'(W_0)^{-1}F(W_0) + F'(W_0)^{-1}\Big[\big(F'(U_0) - F'(U^*)\big) - 3\big(F'(V_0) - F'(U^*)\big) + 2\big(F'(W_0) - F'(U^*)\big)\Big]Q(U_0)^{-1}F(W_0),
\end{aligned}$$
from which we get that
$$\begin{aligned}
\|U_1 - U^*\| &\le\|W_0 - U^* - F'(W_0)^{-1}F(W_0)\| + \|F'(W_0)^{-1}F'(U^*)\|\Big[\|F'(U^*)^{-1}\big(F'(U_0) - F'(U^*)\big)\| + 3\|F'(U^*)^{-1}\big(F'(V_0) - F'(U^*)\big)\| + 2\|F'(U^*)^{-1}\big(F'(W_0) - F'(U^*)\big)\|\Big]\,\|Q(U_0)^{-1}F'(U^*)\|\,\|F'(U^*)^{-1}F(W_0)\|\\
&\le\Bigg[\frac{\displaystyle\int_0^1\psi\big((1-\tau)\|W_0 - U^*\|\big)\,d\tau}{1 - \psi_0(\|W_0 - U^*\|)} + \frac{\big(\psi_0(\|U_0 - U^*\|) + 3\psi_0(\|V_0 - U^*\|) + 2\psi_0(\|W_0 - U^*\|)\big)\displaystyle\int_0^1\psi_1(\tau\|W_0 - U^*\|)\,d\tau}{\big(1 - \psi_0(\|W_0 - U^*\|)\big)\big(1 - g(\|U_0 - U^*\|)\big)}\Bigg]\|W_0 - U^*\|\\
&\le\varphi_3(\|U_0 - U^*\|)\,\|U_0 - U^*\|\le\|U_0 - U^*\| < R,
\end{aligned}$$
so (42) holds for $\zeta = 0$ and $U_1\in K(U^*, R)$.
Then, replacing $U_0$, $V_0$, $W_0$, $U_1$ by $U_j$, $V_j$, $W_j$, $U_{j+1}$ in the preceding estimates completes the induction for (40)–(42). In view of the estimate
$$\|U_{j+1} - U^*\|\le r\,\|U_j - U^*\| < R,\qquad r = \varphi_3(\|U_0 - U^*\|)\in[0, 1),$$
we conclude that $\lim_{j\to\infty}U_j = U^*$ and $U_{j+1}\in K(U^*, R)$. Finally, let $V^*\in\Omega_1$ be such that $F(V^*) = 0$. Using $K = \int_0^1F'\big(V^* + \tau(U^* - V^*)\big)\,d\tau$, $(h_2)$, and $(h_5)$, we have
$$\|F'(U^*)^{-1}\big(K - F'(U^*)\big)\|\le\int_0^1\psi_0(\tau\|U^* - V^*\|)\,d\tau\le\int_0^1\psi_0(\tau R^*)\,d\tau < 1,$$
so $K^{-1}\in L_B(E_2, E_1)$. Therefore, by the identity
$$0 = F(U^*) - F(V^*) = K(U^* - V^*),$$
we deduce that $U^* = V^*$. □
Application 1: Let us see how the functions $A$ and $B$ can be chosen when $P$ is given by (22). We get
$$\begin{aligned}
I - P\big(T(U_\zeta)\big) &= I - \tfrac{3}{2}T(U_\zeta)^2 + \tfrac{1}{2}T(U_\zeta) - 4I = \tfrac{1}{2}\big(T(U_\zeta) - I\big) - \tfrac{3}{2}\big(T(U_\zeta)^2 - I\big) - 4I\\
&= \tfrac{1}{2}\big(F'(U_\zeta)^{-1}F'(V_\zeta) - I\big) - \tfrac{3}{2}\Big(\big(F'(U_\zeta)^{-1}F'(V_\zeta)\big)^2 - I\Big) - 4I\\
&= \tfrac{1}{2}F'(U_\zeta)^{-1}\big(F'(V_\zeta) - F'(U_\zeta)\big) - \tfrac{3}{2}\big(F'(U_\zeta)^{-1}F'(V_\zeta) - I\big)\big(F'(U_\zeta)^{-1}F'(V_\zeta) + I\big) - 4I,
\end{aligned}$$
so
$$\big\|I - P\big(T(U_\zeta)\big)\big\|\le\frac{1}{2}\,\frac{\psi_0(\|V_\zeta - U^*\|) + \psi_0(\|U_\zeta - U^*\|)}{1 - \psi_0(\|U_\zeta - U^*\|)} + \frac{3}{2}\Bigg[\frac{\psi_0(\|V_\zeta - U^*\|) + \psi_0(\|U_\zeta - U^*\|)}{1 - \psi_0(\|U_\zeta - U^*\|)}\Bigg]^2 + \frac{2\,\psi_1(\|V_\zeta - U^*\|)}{1 - \psi_0(\|U_\zeta - U^*\|)} + 1.$$
Hence, the function $A$ can be defined by
$$A(t) = \frac{1}{2}\,\frac{\psi_0(\varphi_1(t)t) + \psi_0(t)}{1 - \psi_0(t)} + \frac{3}{2}\Bigg[\frac{\psi_0(\varphi_1(t)t) + \psi_0(t)}{1 - \psi_0(t)}\Bigg]^2 + \frac{2\,\psi_1(\varphi_1(t)t)}{1 - \psi_0(t)} + 1.$$
Similarly,
$$P(I) - P\big(T(U_\zeta)\big) = \Big[\tfrac{3}{2}I^2 - \tfrac{1}{2}I + 4I\Big] - \Big[\tfrac{3}{2}T(U_\zeta)^2 - \tfrac{1}{2}T(U_\zeta) + 4I\Big] = \tfrac{3}{2}\Big(I^2 - \big(F'(U_\zeta)^{-1}F'(V_\zeta)\big)^2\Big) - \tfrac{1}{2}\Big(I - F'(U_\zeta)^{-1}F'(V_\zeta)\Big),$$
so we can define $B$ by
$$B(t) = \frac{3}{2}\Bigg[\frac{\psi_0(\varphi_1(t)t) + \psi_0(t)}{1 - \psi_0(t)}\Bigg]^2 + \frac{2\,\psi_1(\varphi_1(t)t)}{1 - \psi_0(t)} + \frac{1}{2}\,\frac{\psi_0(\varphi_1(t)t) + \psi_0(t)}{1 - \psi_0(t)} = A(t) - 1.$$
Remark 1.
The results in this section were obtained using hypotheses only on the first derivative, in contrast to the results in Theorem 1, where hypotheses on derivatives of $F$ up to order seven were used to show convergence order six. Hence, we have extended the usage of method (25) to Banach space valued operators. Notice also that there are even simple functions defined on the real line for which the hypotheses of Theorem 1 do not hold, so method (2) may or may not converge. As a motivational and academic example, see Example 6 in the next section, where the third derivative of $F$ does not exist at $x = 0$. Using the approach of Theorem 2, we bypass the computation of derivatives of order higher than one and assume hypotheses only on the first-order derivative of the operator $F$. For estimating the order of convergence, we adopt
$$\rho = \frac{\ln\big(\|u_{\sigma+2} - \xi\|/\|u_{\sigma+1} - \xi\|\big)}{\ln\big(\|u_{\sigma+1} - \xi\|/\|u_\sigma - \xi\|\big)},\qquad\text{for each }\sigma = 0, 1, 2, 3, 4, \ldots,$$
or
$$\rho^* = \frac{\ln\big(\|u_{\sigma+2} - u_{\sigma+1}\|/\|u_{\sigma+1} - u_\sigma\|\big)}{\ln\big(\|u_{\sigma+1} - u_\sigma\|/\|u_\sigma - u_{\sigma-1}\|\big)},\qquad\text{for each }\sigma = 1, 2, 3, 4, \ldots,$$
the computational order of convergence (COC) and the approximate computational order of convergence (ACOC) [17,18], respectively. These definitions can also be found in [19]. They do not require derivatives of order higher than one; indeed, to generate the iterates $u_\sigma$, and therefore to compute $\rho$ and $\rho^*$, we only need formula (2), which uses first derivatives. It is vital to note that ACOC does not require prior knowledge of the exact root $\xi$.
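As an illustration of these definitions, the following NumPy helpers (our own naming, a sketch rather than the authors' implementation) compute COC and ACOC from a stored list of iterates:

```python
# Hedged helpers: COC (needs the exact root xi) and ACOC from stored iterates.
import numpy as np

def coc(iterates, xi):
    """rho_sigma for sigma = 0, 1, ..., len(iterates) - 3."""
    e = [np.linalg.norm(np.asarray(u) - np.asarray(xi)) for u in iterates]
    return [np.log(e[s + 2] / e[s + 1]) / np.log(e[s + 1] / e[s])
            for s in range(len(e) - 2)]

def acoc(iterates):
    """rho*_sigma for sigma = 1, 2, ..., based only on successive differences."""
    d = [np.linalg.norm(np.asarray(iterates[s + 1]) - np.asarray(iterates[s]))
         for s in range(len(iterates) - 1)]
    return [np.log(d[s + 1] / d[s]) / np.log(d[s] / d[s - 1])
            for s in range(1, len(d) - 1)]
```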

4. Numerical Experimentation

Here, we demonstrate the suitability of our iterative methods for real-life problems and validate the theoretical results presented in the earlier sections. We consider four real-life problems (namely, the one-dimensional Bratu, Fisher's, kinematic synthesis, and Hammerstein integral problems), a fifth standard academic problem, and a sixth motivational problem. The corresponding starting approximations and zeros are given in Examples 1–6.
Next, we consider our schemes (22), (23), and (24), denoted by (NM1), (NM2), and (NM3), respectively, and investigate their computational behavior against existing techniques. We compare them with the sixth-order schemes given by Hueso et al. [20] and Lotfi et al. [21]; from these works we consider expressions (14)–(15) for $t_1 = \frac{9}{4}$ and $s_2 = \frac{9}{8}$ and expression (5), denoted by (HU) and (LO), respectively. In addition, we compare them with an Ostrowski-type method proposed by Grau-Sánchez et al. [22], choosing iterative scheme (7), denoted by (GR). Further, we compare them with the sixth-order iterative schemes presented by Sharma and Arora [23] (expression (18)) and Abbasbandy et al. [24] (expression (8)), denoted by (SA) and (AB), respectively. Finally, we compare them with the sixth-order solution techniques designed by Soleymani et al. [25] (method 5) and Wang and Li [26] (method 6), denoted by (SO) and (WL), respectively.
In Tables 1, 2, 4, 6, and 7, we report the iteration index $\zeta$, $\|F(U_\zeta)\|$, $\|U_{\zeta+1} - U_\zeta\|$,
$$\rho^* = \frac{\log\big(\|U_{\zeta+1} - U_\zeta\|/\|U_\zeta - U_{\zeta-1}\|\big)}{\log\big(\|U_\zeta - U_{\zeta-1}\|/\|U_{\zeta-1} - U_{\zeta-2}\|\big)},$$
$\dfrac{\|U_{\zeta+1} - U_\zeta\|}{\|U_\zeta - U_{\zeta-1}\|^6}$, and $\eta$, computed in Mathematica (Version 9) with multiple-precision arithmetic and at least 300 digits of mantissa in order to minimize rounding errors. The variable $\eta$ is the last obtained value of $\dfrac{\|U_{\zeta+1} - U_\zeta\|}{\|U_\zeta - U_{\zeta-1}\|^6}$. Furthermore, the radii of convergence and the CPU time consumed by the distinct schemes are reported in Tables 8 and 9, respectively. The notation $\alpha\,(\pm\beta)$ stands for $\alpha\times10^{\pm\beta}$.
Example 1. Bratu Problem:
The Bratu problem [27] finds wide applicability in the areas of thermal reaction, the Chandrasekhar model of the expansion of the universe, chemical reactor theory, radiative heat transfer, and the fuel ignition model of thermal combustion and nanotechnology [28,29,30,31]. The mathematical formulation of this problem is
$$y'' + Ce^{y} = 0,\qquad y(0) = y(1) = 0.$$
Adopting the central difference
$$y''_\sigma \approx \frac{y_{\sigma-1} - 2y_\sigma + y_{\sigma+1}}{h^2}$$
yields the following nonlinear system of size $50\times50$ from the BVP with step size $h = 1/50$:
$$h^2C\exp(y_\sigma) + y_{\sigma-1} + y_{\sigma+1} - 2y_\sigma = 0,\qquad \sigma = 1, 2, \ldots, 50.$$
We consider $C = 3$ and the initial value $\big(\sin(\pi h), \sin(2\pi h), \ldots, \sin(50\pi h)\big)^T$ for this problem; the computational outcomes are reported in Table 1.
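For illustration, a minimal NumPy sketch of the resulting residual and Jacobian is given below (our own indexing choices, taking $y_0 = y_{51} = 0$ for the boundary values); it can be handed to the iteration sketched after scheme (2):

```python
# Sketch (ours) of the discretized Bratu system with C = 3, h = 1/50, 50 unknowns.
import numpy as np

C, m = 3.0, 50
h = 1.0 / 50.0

def F_bratu(y):
    ypad = np.concatenate(([0.0], y, [0.0]))          # boundary values y_0 = y_51 = 0
    return (h * h * C * np.exp(ypad[1:-1])
            + ypad[:-2] + ypad[2:] - 2.0 * ypad[1:-1])

def J_bratu(y):
    return (np.diag(h * h * C * np.exp(y) - 2.0)      # dF_i/dy_i
            + np.diag(np.ones(m - 1), 1)              # dF_i/dy_{i+1}
            + np.diag(np.ones(m - 1), -1))            # dF_i/dy_{i-1}

y0 = np.sin(np.pi * h * np.arange(1, m + 1))          # starting guess from the text
# y_sol = solve(F_bratu, J_bratu, y0, P=P_nm2, b=0.0)  # using the earlier sketch
```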
Example 2.
Here, we choose the well-known Fisher's equation [32]
$$\theta_t = D\theta_{xx} + \theta(1 - \theta),$$
subject to the initial condition $\theta(x, 0) = 1.5 + 0.5\cos(\pi x)$, $0\le x\le1$, and homogeneous Neumann boundary conditions $\theta_x(0, t) = 0$, $\theta_x(1, t) = 0$, $t\ge0$, where $D$ is the diffusion parameter. We adopt a finite-difference discretization in order to convert the differential Equation (69) into a system of nonlinear equations. We write $w_{i,j} = \theta(x_i, t_j)$ for the required solution at the grid points of the mesh, where $M$ and $N$ are the numbers of steps in the $x$ and $t$ directions, respectively, and $h$ and $k$ are the corresponding step sizes. Adopting central, backward, and forward differences gives
$$\theta_{xx}(x_i, t_j) = \frac{w_{i+1,j} - 2w_{i,j} + w_{i-1,j}}{h^2},\qquad \theta_t(x_i, t_j) = \frac{w_{i,j} - w_{i,j-1}}{k},\qquad \theta_x(x_i, t_j) = \frac{w_{i+1,j} - w_{i,j}}{h},\qquad t\in[0, 1],$$
leading to
$$\frac{w_{i,j} - w_{i,j-1}}{k} - w_{i,j}\big(1 - w_{i,j}\big) - D\,\frac{w_{i+1,j} - 2w_{i,j} + w_{i-1,j}}{h^2} = 0,\qquad i = 1, 2, \ldots, M,\quad j = 1, 2, \ldots, N,$$
where $h = \frac{1}{M}$ and $k = \frac{1}{N}$. For the particular values $M = 9$, $N = 9$, $h = \frac{1}{9}$, $k = \frac{1}{9}$, and $D = 1$, this leads to a nonlinear system of size $81\times81$; with the starting point $x_0 = (i/81)^T$, $i = 1, 2, \ldots, 81$, it converges towards the following solution:
u ( x i , t j ) = 1.6017 , 1.4277 , 1.3328 , 1.2740 , 1.2331 , 1.2022 , 1.1772 , 1.1563 , 1.1385 1.5726 , 1.4159 , 1.3277 , 1.2717 , 1.2322 , 1.2017 , 1.1770 , 1.1563 , 1.1384 1.5203 , 1.3940 , 1.3182 , 1.2676 , 1.2303 , 1.2009 , 1.1767 , 1.1561 , 1.1384 1.4521 , 1.3648 , 1.3055 , 1.2619 , 1.2278 , 1.1998 , 1.1762 , 1.1559 , 1.1383 1.3771 , 1.3321 , 1.2911 , 1.2556 , 1.2250 , 1.1985 , 1.1756 , 1.1556 , 1.1381 1.3045 , 1.2998 , 1.2768 , 1.2492 , 1.2221 , 1.1973 , 1.1750 , 1.1554 , 1.1380 1.2429 , 1.2719 , 1.2642 , 1.2436 , 1.2196 , 1.1961 , 1.1745 , 1.1551 , 1.1379 1.1990 , 1.2514 , 1.2550 , 1.2395 , 1.2178 , 1.1953 , 1.1742 , 1.1550 , 1.1379 1.1768 , 1.2406 , 1.2501 , 1.2373 , 1.2168 , 1.1949 , 1.1740 , 1.1549 , 1.1378 .
The numerical outcomes are reported in Table 2.
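A rough sketch of assembling this system is given below. It is assumption-laden: in particular, the Neumann ends are treated here by simple reflection ($w_{0,j} = w_{1,j}$ and $w_{M+1,j} = w_{M,j}$), which may differ from the authors' bookkeeping.

```python
# Assumption-laden sketch (ours) of the 81x81 Fisher system (70).
import numpy as np

M = N = 9
h = k = 1.0 / 9.0
D = 1.0
x = np.arange(1, M + 1) * h
w_init = 1.5 + 0.5 * np.cos(np.pi * x)                 # theta(x, 0)

def F_fisher(w_flat):
    w = w_flat.reshape(M, N)                            # w[i-1, j-1] ~ w_{i,j}
    R = np.zeros_like(w)
    for j in range(N):
        prev = w_init if j == 0 else w[:, j - 1]        # backward difference in time
        col = w[:, j]
        left = np.concatenate(([col[0]], col[:-1]))     # reflection: w_{0,j} := w_{1,j}
        right = np.concatenate((col[1:], [col[-1]]))    # reflection: w_{M+1,j} := w_{M,j}
        R[:, j] = ((col - prev) / k - col * (1.0 - col)
                   - D * (right - 2.0 * col + left) / h**2)
    return R.ravel()

w0 = np.arange(1, M * N + 1) / 81.0                     # starting point from the text
# A Jacobian can be formed analytically or by finite differences and the
# system handed to the iteration sketched after scheme (2).
```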
Example 3.
Here, we choose a remarkable kinematic synthesis problem that is related to steering, as mentioned in [4,5], which is defined as follows:
E i u 2 sin ψ i u 3 F i u 2 sin ϕ i u 3 2 + F i u 2 cos ϕ i + 1 F i u 2 cos ψ i 1 2 u 1 u 2 sin ψ i u 3 u 2 cos ϕ i + 1 u 1 u 2 cos ψ i u 3 u 2 sin ϕ i u 3 2 = 0 ,   for   i = 1 , 2 , 3 ,
where
E i = u 3 u 2 sin ϕ i sin ϕ 0 u 1 u 2 sin ϕ i u 3 + u 2 cos ϕ i cos ϕ 0 ,   i = 1 , 2 , 3
and
F i = u 3 u 2 sin ψ i + u 2 cos ψ i + u 3 u 1 u 2 sin ψ 0 + u 2 cos ψ 0 + u 1 u 3 ,   i = 1 , 2 , 3 .
The values of ψ i and ϕ i (in radians) are depicted in Table 3 and the behavior of methods in Table 4. We chose the starting approximation u 0 = ( 0.7 ,   0.7 ,   0.7 ) that converges to
ξ = ( 0.9051567 ,   0.6977417 ,   0.6508335 ) T .
Example 4.
We consider here a well-known applied science problem, the Hammerstein integral equation (see [10], pp. 19–20), given as follows:
$$x(s) = 1 + \frac{1}{5}\int_0^1F(s, t)\,x(t)^3\,dt,$$
where $x\in C[0, 1]$, $s, t\in[0, 1]$, and the kernel $F$ is
$$F(s, t) = \begin{cases}(1 - s)t, & t\le s,\\ s(1 - t), & s\le t.\end{cases}$$
To convert expression (72) into a finite-dimensional problem, we adopt the following Gauss–Legendre quadrature formula:
$$\int_0^1f(t)\,dt\approx\sum_{j=1}^{8}w_jf(t_j),$$
where $t_j$ and $w_j$ are the abscissas and the weights, respectively. Denoting $x(t_i)$ by $x_i$ ($i = 1, 2, \ldots, 8$), we obtain
$$5x_i - 5 - \sum_{j=1}^{8}a_{ij}x_j^3 = 0,\qquad i = 1, 2, \ldots, 8,$$
where
$$a_{ij} = \begin{cases}w_jt_j(1 - t_i), & j\le i,\\ w_jt_i(1 - t_j), & i < j.\end{cases}$$
The parameters $t_j$ and $w_j$ are given in Table 5.
The desired root is $\xi^* = (1.002096, 1.009900, 1.019727, 1.026436, 1.026436, 1.019727, 1.009900, 1.002096)^T$. The numerical outcomes for the starting approximation $U_0 = \big(\tfrac{1}{2}, \tfrac{1}{2}, \tfrac{1}{2}, \tfrac{1}{2}, \tfrac{1}{2}, \tfrac{1}{2}, \tfrac{1}{2}, \tfrac{1}{2}\big)^T$ are reported in Table 6.
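A minimal NumPy sketch of the discretized system (our own construction, using `numpy.polynomial.legendre.leggauss` mapped from $[-1, 1]$ to $[0, 1]$, which reproduces the nodes and weights of Table 5 up to rounding) is:

```python
# Sketch (ours) of the Hammerstein system (75)-(76) with an 8-point rule.
import numpy as np

nodes, weights = np.polynomial.legendre.leggauss(8)
t = 0.5 * (nodes + 1.0)          # abscissas t_j on [0, 1]
w = 0.5 * weights                # weights w_j on [0, 1]

# a_{ij} = w_j t_j (1 - t_i) if j <= i, else w_j t_i (1 - t_j)
A = np.where(np.arange(8)[None, :] <= np.arange(8)[:, None],
             w[None, :] * t[None, :] * (1.0 - t[:, None]),
             w[None, :] * t[:, None] * (1.0 - t[None, :]))

def F_hammerstein(x):
    return 5.0 * x - 5.0 - A @ x**3

def J_hammerstein(x):
    return 5.0 * np.eye(8) - 3.0 * A * (x**2)[None, :]

x0 = np.full(8, 0.5)             # starting approximation from the text
# x_sol = solve(F_hammerstein, J_hammerstein, x0, P=P_nm2, b=0.0)
```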
Example 5.
Finally, we choose
$$F(U) = \begin{cases}u_j^2u_{j+1} - 1 = 0, & 1\le j\le\zeta - 1,\\ u_\zeta^2u_1 - 1 = 0.\end{cases}$$
Here, we picked $\zeta = 110$ in order to obtain a large system of size $110\times110$. In addition, we selected the starting guess $U_0 = (1.25, 1.25, \ldots, 1.25)^T$ ($110$ times), which converges to $\xi = (1, 1, \ldots, 1)^T$; the results are reported in Table 7.
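A small sketch of this cyclic system and its Jacobian (our own, under the reading $u_{\zeta+1} := u_1$) is:

```python
# Sketch (ours) of the cyclic system of Example 5 with zeta = 110.
import numpy as np

n = 110

def F_ex5(u):
    return u**2 * np.roll(u, -1) - 1.0                # u_j^2 u_{j+1} - 1, wrap-around

def J_ex5(u):
    J = np.diag(2.0 * u * np.roll(u, -1))             # dF_j/du_j
    J[np.arange(n), (np.arange(n) + 1) % n] = u**2    # dF_j/du_{j+1}
    return J

u0 = np.full(n, 1.25)
# u_sol = solve(F_ex5, J_ex5, u0, P=P_nm2, b=0.0)     # converges to (1, ..., 1)
```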
Example 6.
As a counter-example, we consider the function $F$ on $E_1 = E_2 = \mathbb{R}$, $\Omega = \big[-\frac{1}{\pi}, \frac{2}{\pi}\big]$, defined by
$$F(x) = \begin{cases}x^3\log(\pi^2x^2) + x^5\sin\dfrac{1}{x}, & x\ne0,\\ 0, & x = 0,\end{cases}$$
which leads to
$$F'(x) = 2x^2 - x^3\cos\frac{1}{x} + 3x^2\log(\pi^2x^2) + 5x^4\sin\frac{1}{x},$$
$$F''(x) = -8x^2\cos\frac{1}{x} + 2x\big(5 + 3\log(\pi^2x^2)\big) + x\big(20x^2 - 1\big)\sin\frac{1}{x},$$
and
$$F'''(x) = \frac{1}{x}\big(1 - 36x^2\big)\cos\frac{1}{x} + 22 + 6\log(\pi^2x^2) + \big(60x^2 - 9\big)\sin\frac{1}{x}.$$
Clearly, $F'''(x)$ is not bounded on $\Omega$ in the neighborhood of the point $x = 0$. This means that the results of Section 2 are not applicable to this example, since hypotheses on the seventh-order (or even higher) derivatives of $F$ are needed there to demonstrate the convergence of the proposed scheme. By contrast, the analysis of Section 3 requires hypotheses only on the first-order derivative.
Further, we have $b = 1$,
$$H = \frac{80 + 16\pi + (\pi + 12\log2)\pi^2}{2\pi + 1},\qquad \psi_0(t) = \psi(t) = Ht,\qquad \psi_1(t) = 1 + Ht,$$
and the functions $A$ and $B$ as given in Application 1. The desired solution of $F(x) = 0$ is $x^* = \frac{1}{\pi}$. The distinct radii, $U_0$, the COC ($\rho$), and the CPU times are stated in Tables 8 and 9.
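Under the reconstructions above (so the constant $H$ and the closed-form integrals inside $\varphi_1$ are assumptions), the radius $R_1$ of Table 8 can be located numerically as the smallest positive root of $\bar\varphi_1(t) = 0$; a minimal sketch:

```python
# Minimal sketch (ours): smallest positive root of phi_1(t) - 1 = 0 for Example 6.
import numpy as np
from scipy.optimize import brentq

# H follows our reconstruction of the constant in Example 6 and is an assumption.
H = (80.0 + 16.0 * np.pi + (np.pi + 12.0 * np.log(2.0)) * np.pi**2) / (2.0 * np.pi + 1.0)

def phi1(t):
    # With psi0(t) = psi(t) = H t and psi1(t) = 1 + H t:
    # int_0^1 psi((1-tau) t) dtau = H t / 2,  int_0^1 psi1(tau t) dtau = 1 + H t / 2.
    return (H * t / 2.0 + (1.0 + H * t / 2.0) / 3.0) / (1.0 - H * t)

rho0 = 1.0 / H                                        # smallest solution of psi0(t) = 1
R1 = brentq(lambda t: phi1(t) - 1.0, 1e-12, rho0 * (1.0 - 1e-9))
print(R1)                                             # R_2, R_3 follow from phi_2, phi_3
```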

5. Concluding Remarks

In this paper, a new family of sixth-order schemes was introduced to produce sequences converging to a solution of an equation. In addition, we presented their convergence analyses, together with computable radii that guarantee convergence for Banach space valued operators and error bounds based on Lipschitz constants. It turns out that these schemes are superior to existing ones utilizing similar information. The numerical experiments test the convergence criteria and also show numerically the superiority of the new schemes.

Author Contributions

R.B. and I.K.A.: Conceptualization; Methodology; Validation; Writing–Original Draft Preparation; Writing–Review & Editing. All authors have read and agreed to the published version of the manuscript.

Funding

Deanship of Scientific Research (DSR) at King Abdulaziz University, Jeddah, Saudi Arabia, under Grant No. D-534-130-1441.

Acknowledgments

This project was funded by the Deanship of Scientific Research (DSR), King Abdulaziz University, Jeddah, under grant No. (D-534-130-1441). The authors, therefore, gratefully acknowledge the DSR technical and financial support.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Grosan, C.; Abraham, A. A new approach for solving nonlinear equations systems. IEEE Trans. Syst. Man Cybernet Part A Syst. Hum. 2008, 38, 698–714. [Google Scholar] [CrossRef]
  2. Lin, Y.; Bao, L.; Jia, X. Convergence analysis of a variant of the Newton method for solving nonlinear equations. Comput. Math. Appl. 2010, 59, 2121–2127. [Google Scholar] [CrossRef] [Green Version]
  3. Moré, J.J. A Collection of Nonlinear Model Problems; Allgower, E.L., Georg, K., Eds.; Computational Solution of Nonlinear Systems of Equations Lectures in Applied Mathematics; American Mathematical Society: Providence, RI, USA, 1990; Volume 26, pp. 723–762. [Google Scholar]
  4. Awawdeh, F. On new iterative method for solving systems of nonlinear equations. Numer. Algor. 2010, 54, 395–409. [Google Scholar] [CrossRef]
  5. Tsoulos, I.G.; Stavrakoudis, A. On locating all roots of systems of nonlinear equations inside bounded domain using global optimization methods. Nonlinear Anal. Real World Appl. 2010, 11, 2465–2471. [Google Scholar] [CrossRef]
  6. Argyros, I.K. Convergence and Application of Newton-Type Iterations; Springer: Berlin/Heidelberg, Germany, 2008. [Google Scholar]
  7. Argyros, I.K.; Hilout, S. Numerical Methods in Nonlinear Analysis; World Scientific Publ. Comp.: Hackensack, NJ, USA, 2013. [Google Scholar]
  8. Petković, M.S.; Neta, B.; Petković, L.D.; Džunić, J. Multi-Point Methods for Solving Nonlinear Equations; Academic Press: Cambridge, MA, USA, 2012. [Google Scholar]
  9. Traub, J.F. Iterative Methods for the Solution of Equations; Prentice- Hall Series in Automatic Computation: Englewood Cliffs, NJ, USA, 1964. [Google Scholar]
  10. Ortega, J.M.; Rheinboldt, W.C. Iterative Solution of Nonlinear Equations in Several Variables; Academic Press: New York, NY, USA, 1970. [Google Scholar]
  11. Abad, M.F.; Cordero, A.; Torregrosa, J.R. A family of seventh-order schemes for solving nonlinear systems. Bull. Math. Soc. Sci. Math. Roum. 2014, 57, 133–145. [Google Scholar]
  12. Artidiello, S.; Cordero, A.; Torregrosa, J.R.; Vassileva, M.P. Multidimensional generalization of iterative methods for solving nonlinear problems by means of weight-function procedure. Appl. Math. Comput. 2015, 268, 1064–1071. [Google Scholar] [CrossRef] [Green Version]
  13. Cordero, A.; Maimó, J.G.; Torregrosa, J.R.; Vassileva, M.P. Solving nonlinear problems by Ostrowski-Chun type parametric families. J. Math. Chem. 2014, 52, 430–449. [Google Scholar]
  14. Sharma, J.R.; Gua, R.K.; Sharma, R. An efficient fourth order weighted-Newton method for systems of nonlinear equations. Numer. Algor. 2013, 2, 307–323. [Google Scholar] [CrossRef]
  15. Wang, X.; Zhang, T. A family of Steffensen type methods with seventh-order convergence. Numer. Algor. 2013, 62, 429–444. [Google Scholar] [CrossRef]
  16. Cordero, A.; Hueso, J.L.; Martínez, E.; Torregrosa, J.R. A modified Newton-Jarratt’s composition. Numer. Algor. 2010, 55, 87–99. [Google Scholar] [CrossRef]
  17. Beyer, W.A.; Ebanks, B.R.; Qualls, C.R. Convergence rates and convergence-order profiles for sequences. Acta Appl. Math. 1990, 20, 267–284. [Google Scholar] [CrossRef]
  18. Potra, F.A. On Q-order and R-order of convergence. J. Optim. Theory Appl. 1989, 63, 415–431. [Google Scholar] [CrossRef]
  19. Weerakoon, S.; Fernando, T.G.I. A variant of Newton’s method with accelerated third order convergence. Appl. Math. Lett. 2000, 13, 87–93. [Google Scholar] [CrossRef]
  20. Hueso, J.L.; Martínez, E.; Teruel, C. Convergence, efficiency and dynamics of new fourth and sixth-order families of iterative methods for nonlinear system. J. Comput. Appl. Math. 2015, 275, 412–420. [Google Scholar] [CrossRef]
  21. Lotfi, T.; Bakhtiari, P.; Cordero, A.; Mahdiani, K.; Torregrosa, J.R. Some new efficient multipoint iterative methods for solving nonlinear systems of equations. Int. J. Comput. Math. 2015, 92, 1921–1934. [Google Scholar] [CrossRef] [Green Version]
  22. Grau-Sánchez, M.; Grau, Á.; Noguera, M. Ostrowski type methods for solving systems of nonlinear equations. Appl. Math. Comput. 2011, 218, 2377–2385. [Google Scholar] [CrossRef]
  23. Sharma, J.R.; Arora, H. Efficient Jarratt-like methods for solving systems of nonlinear equations. Calcolo 2014, 51, 193–210. [Google Scholar] [CrossRef]
  24. Abbasbandy, S.; Bakhtiari, P.; Cordero, A.; Torregrosa, J.R.; Lotfi, T. New efficient methods for solving nonlinear systems of equations with arbitrary even order. Appl. Math. Comput. 2016, 287–288, 94–103. [Google Scholar] [CrossRef]
  25. Soleymani, F.; Lotfi, T.; Bakhtiari, P. A multi-step class of iterative methods for nonlinear systems. Optim. Lett. 2014, 8, 1001–1015. [Google Scholar] [CrossRef]
  26. Wang, X.; Li, Y. An efficient sixth-order Newton-type method for solving nonlinear systems. Algorithms 2017, 10, 45. [Google Scholar] [CrossRef] [Green Version]
  27. Alaidarous, E.S.; Ullah, M.Z.; Ahmad, F.; Al-Fhaid, A.S. An Efficient Higher-Order Quasilinearization Method for Solving Nonlinear BVPs. J. Appl. Math. 2013, 2013, 1–11. [Google Scholar] [CrossRef] [Green Version]
  28. Gelfand, I.M. Some problems in the theory of quasi-linear equations. Trans. Am. Math. Soc. Ser. 1963, 2, 295–381. [Google Scholar]
  29. Wan, Y.Q.; Guo, Q.; Pan, N. Thermo-electro-hydrodynamic model for electrospinning process. Int. J. Nonlinear Sci. Numer. Simul. 2004, 5, 5–8. [Google Scholar] [CrossRef]
  30. Jacobsen, J.; Schmitt, K. The Liouville Bratu Gelfand problem for radial operators. J. Diff. Equat. 2002, 184, 283–298. [Google Scholar] [CrossRef] [Green Version]
  31. Jalilian, R. Non-polynomial spline method for solving Bratu’s problem. Comput. Phys. Commun. 2010, 181, 1868–1872. [Google Scholar] [CrossRef]
  32. Sauer, T. Numerical Analysis, 2nd ed.; Pearson: Harlow, UK, 2012. [Google Scholar]
Table 1. Conduct of different techniques in Bratu's problem Example 1.
Methods | ζ | ||F(U_ζ)|| | ||U_{ζ+1} − U_ζ|| | ρ* | ||U_{ζ+1} − U_ζ|| / ||U_ζ − U_{ζ−1}||^6 | η
H U 1 2.2   ( 4 ) 1.2   ( 2 )
2 2.1   ( 11 ) 1.1   ( 8 ) 1.083162484   ( 4 ) 3.174505914   ( + 1 )
3 1.3   ( 46 ) 7.0   ( 44 ) 5.3632 3.174505914   ( + 1 )
L O 1 2.2   ( 4 ) 1.2   ( 1 )
2 5.7   ( 13 ) 3.1   ( 10 ) 1.87422921   ( 4 ) 5.428184301   ( + 1
3 9.7   ( 58 ) 2.8   ( 56 ) 5.3634 5.428184301   ( + 1 )
G R 1 1.3   ( 5 ) 7.1   ( 3 )
2 9.0   ( 21 ) 4.8   ( 18 ) 3.793760743   ( 5 ) 3.818203150   ( 5 )
3 8.9   ( 112 ) 4.8   ( 109 ) 5.9998 3.818203150   ( 5 )
S A 1 1.1   ( 3 ) 5.5   ( 1 )
2 7.4   ( 9 ) 4.0   ( 6 ) 1.486642520   ( 4 ) 3.377015100   ( 4 )
3 2.6   ( 39 ) 1.4   ( 36 ) 5.9306 3.377015100   ( 4 )
A B 1 1.9   ( 3 ) 9.1   ( 1 )
2 1.4   ( 7 ) 7.6   ( 5 ) 1.306885506   ( 4 ) 5.224036587   ( 4 )
3 1.8   ( 31 ) 9.9   ( 29 ) 5.8525 5.224036587   ( 4 )
S O 1 3.0   ( 6 ) 1.6   ( 3 )
2 7.3   ( 25 ) 3.9   ( 22 ) 2.402458013   ( 5 ) 2.407289926   ( 5 )
3 1.5   ( 136 ) 8.2   ( 134 ) 6.0000 2.407289926   ( 5 )
W L 1 8.2   ( 4 ) 4.2   ( 1 )
2 1.5   ( 9 ) 8.1   ( 7 ) 1.427111276   ( 4 ) 2.712173991   ( 4 )
3 1.4   ( 43 ) 7.4   ( 41 ) 5.9512 2.712173991   ( 4 )
N M 1 1 6.7   ( 5 ) 3.5   ( 2 )
2 3.6   ( 16 ) 1.9   ( 13 ) 9.787819663   ( 5 ) 1.020710352   ( 4 )
3 9.7   ( 84 ) 5.2   ( 81 ) 5.9984 1.020710352   ( 4 )
N M 2 1 3.7   ( 5 ) 2.0   ( 2 )
2 8.2   ( 18 ) 4.4   ( 15 ) 6.899443783   ( 5 ) 7.050512396   ( 5 )
3 9.3   ( 94 ) 5.0   ( 91 ) 5.9993 7.050512396   ( 5 )
N M 3 1 1.2   ( 6 ) 6.5   ( 4 )
2 1.3   ( 27 ) 7.5   ( 25 ) 9.459851406   ( 6 ) 9.479844812   ( 6 )
3 2.5   ( 153 ) 1.3   ( 150 ) 6.0000 9.479844812   ( 6 )
Table 2. Conduct of different techniques in Fisher's Equation (2).
Methods | ζ | ||F(U_ζ)|| | ||U_{ζ+1} − U_ζ|| | ρ* | ||U_{ζ+1} − U_ζ|| / ||U_ζ − U_{ζ−1}||^6 | η
H U 1 4.0 8.9   ( 1 )
2 1.7   ( 5 ) 1.8   ( 61 ) 3.767384807   ( 6 ) 1.887278364
3 8.4   ( 34 ) 7.6   ( 35 ) 4.9968 1.887278364
L O 1 6.8 1.3
2 2.6   ( 5 ) 2.7   ( 6 ) 6.210769679   ( 7 ) 6.028784826   ( 3 )
3 2.7   ( 35 ) 2.6   ( 36 ) 5.2967 6.028784826   ( 3 )
G R 1 4.8   ( 1 ) 1.1   ( 1 )
2 1.5   ( 12 ) 1.6   ( 13 ) 1.337154575   ( 13 ) 1.444113141   ( 13 )
3 1.5   ( 83 ) 1.4   ( 84 ) 5.9996 1.444113141   ( 13 )
S A 1 2.0   ( + 1 ) 3.5
2 4.7   ( 3 ) 4.9   ( 4 ) 2.612933396   ( 7 ) 1.099716041   ( 4 )
3 1.5   ( 25 ) 1.5   ( 26 ) 5.8382 1.099716041   ( 4 )
A B 1 1.4   ( + 1 ) 2.9
2 2.4   ( 3 ) 3.1   ( 4 ) 5.378390973   ( 7 ) 1.597885339   ( 6 )
3 1.5   ( 26 ) 1.4   ( 27 ) 5.8809 1.597885339   ( 6 )
S O 1 2.7   ( 1 ) 6.8   ( 2 )
2 4.5   ( 14 ) 4.8   ( 15 ) 1.923091087   ( 12 ) 2.166790648   ( 12 )
3 4.8   ( 93 ) 4.7   ( 94 ) 5.9994 2.166790648   ( 12 )
W L 1 2.0   ( + 1 ) 3.5
2 4.7   ( 3 ) 4.9   ( 4 ) 1.666475363   ( 16 ) 1.692929278   ( 16 )
3 1.5   ( 25 ) 1.5   ( 26 ) 5.9999 1.692929278   ( 16 )
N M 1 1 1.4 4.1   ( 1 )
2 1.3   ( 8 ) 1.8   ( 9 ) 2.468042987   ( 7 ) 2.892790896   ( 7 )
3 1.1   ( 58 ) 1.1   ( 59 ) 5.9917 2.892790896   ( 7 )
N M 2 1 1.0 2.4   ( 1 )
2 2.1   ( 10 ) 2.1   ( 11 ) 6.595078065   ( + 5 ) 3.781460299   ( + 4 )
3 1.9   ( 70 ) 1.8   ( 71 ) 6.2266 3.781460299   ( + 4 )
N M 3 1 3.6   ( 1 ) 8.7   ( 2 )
2 3.6   ( 13 ) 3.9   ( 14 ) 2.391680319   ( + 5 ) 7.549898932   ( + 3 )
3 4.1   ( 87 ) 4.0   ( 88 ) 6.1921 7.549898932   ( + 3 )
Table 3. The parameters ψ_i and ϕ_i (in radians) used in Example 3.
i | ψ_i | ϕ_i
0 1.3954170041747090114 1.7461756494150842271
1 1.7444828545735749268 2.0364691127919609051
2 2.0656234369405315689 2.2390977868265978920
3 2.4600678478912500533 2.4600678409809344550
Table 4. Conduct of different techniques in kinematic synthesis problem 3.
Methods | ζ | ||F(U_ζ)|| | ||U_{ζ+1} − U_ζ|| | ρ* | ||U_{ζ+1} − U_ζ|| / ||U_ζ − U_{ζ−1}||^6 | η
H U 1 3.0   ( 4 ) 7.0   ( 3 )
2 8.7   ( 9 ) 9.3   ( 7 ) 7.812154542   ( + 6 ) 8.147793628   ( + 9 )
3 1.4   ( 28 ) 5.2   ( 27 ) 5.2217 8.147793628   ( + 9 )
L O 1 1.2   ( 4 ) 2.7   ( 3 )
2 4.8   ( 13 ) 3.0   ( 11 ) 7.926557348   ( + 4 ) 2.247398482   ( + 11 )
3 6.9   ( 53 ) 1.6   ( 52 ) 5.1887 2.247398482   ( + 11 )
G R 1 1.2   ( 3 ) 3.7   ( 2 )
2 6.3   ( 6 ) 6.1   ( 4 ) 2.198663751   ( + 5 ) 2.496661657   ( + 8 )
3 2.1   ( 13 ) 1.3   ( 11 ) 4.2910 2.496661657   ( + 8 )
S A 1 7.5   ( 4 ) 4.9   ( 3 )
2 3.3   ( 9 ) 6.5   ( 7 ) 4.841793597   ( + 7 ) 1.033201098   ( + 6 )
3 1.7   ( 33 ) 7.7   ( 32 ) 6.4311 1.033201098   ( + 6 )
A B 1 1.1   ( 3 ) 5.3   ( 3 )
2 9.1   ( 9 ) 1.3   ( 6 ) 5.600775513   ( + 7 ) 2.359206614   ( + 6 )
3 2.0   ( 31 ) 1.0   ( 29 ) 6.3799 2.359206614   ( + 6 )
S O 1 4.3   ( 5 ) 5.1   ( 3 )
2 3.8   ( 11 ) 2.1   ( 9 ) 1.188878144   ( + 5 ) 1.015723644   ( + 4 )
3 2.0   ( 50 ) 1.0   ( 48 ) 6.1675 1.015723644   ( + 4 )
W L 1 5.8   ( 4 ) 3.9   ( 3 )
2 1.0   ( 9 ) 2.0   ( 7 ) 5.475067615   ( + 7 ) 8.521077783   ( + 5 )
3 1.5   ( 36 ) 5.8   ( 35 ) 6.4215 8.521077783   ( + 5 )
N M 1 1 1.8   ( 4 ) 5.6   ( 3 )
2 8.7   ( 10 ) 6.0   ( 8 ) 1.569574095   ( + 6 ) 7.322404948   ( + 4 )
3 1.2   ( 40 ) 4.1   ( 39 ) 6.2643 7.322404948   ( + 4 )
N M 2 1 1.3   ( 4 ) 5.6   ( 3 )
2 3.8   ( 10 ) 2.5   ( 8 ) 7.167106213   ( + 5 ) 4.034357383   ( + 4 )
3 2.9   ( 43 ) 1.0   ( 41 ) 6.2301 4.034357383   ( + 4 )
N M 3 1 9.9   ( 5 ) 4.9   ( 3 )
2 8.2   ( 11 ) 5.2   ( 9 ) 2.391680319   ( + 5 ) 7.549898932   ( + 3 )
3 1.2   ( 47 ) 4.9   ( 46 ) 6.1987 7.549898932   ( + 3 )
Table 5. Abscissas and weights for t = 8.
j | t_j | w_j
1 0.01985507175123188415821957 0.05061426814518812957626567
2 0.10166676129318663020422303 0.11119051722668723527217800
3 0.23723379504183550709113047 0.15685332293894364366898110
4 0.40828267875217509753026193 0.18134189168918099148257522
5 0.59171732124782490246973807 0.18134189168918099148257522
6 0.76276620495816449290886952 0.15685332293894364366898110
7 0.89833323870681336979577696 0.11119051722668723527217800
8 0.98014492824876811584178043 0.05061426814518812957626567
Table 6. Conduct of different techniques in Hammerstein integral problem 4.
Methods | ζ | ||F(U_ζ)|| | ||U_{ζ+1} − U_ζ|| | ρ* | ||U_{ζ+1} − U_ζ|| / ||U_ζ − U_{ζ−1}||^6 | η
H U 1 3.0   ( 5 ) 6.5   ( 6 )
2 6.9   ( 31 ) 1.5   ( 31 ) 1.994799598 9.220736175   ( + 25 )
3 4.4   ( 159 ) 9.4   ( 160 ) 4.9991 9.220736175   ( + 25 )
L O 1 8.6   ( 6 ) 1.8   ( 6 )
2 8.4   ( 37 ) 1.7   ( 37 ) 4.445532921   ( 3 ) 1.232404905   ( + 31 )
3 1.4   ( 189 ) 3.0   ( 190 ) 4.9924 1.232404905   ( + 31 )
G R 1 6.7   ( 6 ) 1.4   ( 6 )
2 1.0   ( 40 ) 2.1   ( 41 ) 2.526393028   ( 6 ) 2.741190361   ( 6 )
3 1.2   ( 249 ) 2.6   ( 250 ) 5.9990 2.741190361   ( 6 )
S A 1 9.4   ( 6 ) 2.0   ( 6 )
2 1.5   ( 39 ) 3.2   ( 40 ) 4.964844066   ( 6 ) 5.324312398   ( 6 )
3 2.7   ( 242 ) 5.7   ( 243 ) 5.9991 5.324312398   ( 6 )
A B 1 5.5   ( 7 ) 1.2   ( 7 )
2 8.3   ( 47 ) 1.8   ( 47 ) 6.557370741   ( 6 ) 7.0685748   ( 6 )
3 1.1   ( 285 ) 2.3   ( 286 ) 5.9992 7.0685748   ( 6 )
S O 1 5.4   ( 6 ) 1.2   ( 6 )
2 1.9   ( 41 ) 4.1   ( 42 ) 1.622600124   ( 6 ) 1.775041548   ( 6 )
3 3.7   ( 254 ) 8.0   ( 255 ) 5.9999 1.775041548   ( 6 )
W L 1 1.6   ( 6 ) 3.3   ( 7 )
2 9.0   ( 45 ) 1.9   ( 45 ) 1.377597868   ( 6 ) 1.431074607   ( 6 )
3 3.3   ( 274 ) 7.1   ( 275 ) 5.9996 1.431074607   ( 6 )
N M 1 1 6.9   ( 6 ) 1.5   ( 6 )
2 1.4   ( 40 ) 2.9   ( 41 ) 2.737455233   ( 6 ) 2.968285627   ( 6 )
3 9.1   ( 249 ) 2.0   ( 249 ) 5.9990 2.968285627   ( 6 )
N M 2 1 6.4   ( 6 ) 1.4   ( 6 )
2 7.1   ( 41 ) 1.5   ( 41 ) 2.315331324   ( 6 ) 2.514101750   ( 6 )
3 1.5   ( 250 ) 3.1   ( 251 ) 5.9990 2.514101750   ( 6 )
N M 3 1 5.8   ( 6 ) 1.2   ( 6 )
2 3.3   ( 41 ) 7.1   ( 42 ) 2.391680319   ( + 5 ) 7.549898932   ( + 3 )
3 1.3   ( 252 ) 2.7   ( 253 ) 6.1921 7.549898932   ( + 3 )
Table 7. Conduct of different techniques in Example 5.
Methods | ζ | ||F(U_ζ)|| | ||U_{ζ+1} − U_ζ|| | ρ* | ||U_{ζ+1} − U_ζ|| / ||U_ζ − U_{ζ−1}||^6 | η
H U 1 2.2   ( 1 ) 7.4   ( 3 )
2 1.5   ( 14 ) 5.0   ( 15 ) 2.953163177   ( 2 ) 4.440762811   ( + 6 )
3 2.0   ( 75 ) 6.6   ( 76 ) 4.9998 4.440762811   ( + 6 )
L O 1 2.5   ( 1 ) 8.4   ( 3 )
2 1.7   ( 16 ) 5.7   ( 17 ) 1.658701079   ( 4 ) 1.670091725   ( 4 )
3 1.7   ( 101 ) 5.7   ( 102 ) 5.9998 1.670091725   ( 4 )
G R 1 9.1   ( 3 ) 3.0   ( 3 )
2 8.0   ( 20 ) 2.7   ( 20 ) 3.496581094   ( 5 ) 3.502158271   ( 5 )
3 3.7   ( 122 ) 1.2   ( 122 ) 6.0000 3.502158271   ( 5 )
S A 1 2.9   ( 2 ) 9.6   ( 3 )
2 4.7   ( 16 ) 1.6   ( 16 ) 2.066540292   ( 4 ) 2.083784171   ( 4 )
3 9.6   ( 99 ) 3.2   ( 99 ) 5.9997 2.083784171   ( 4 )
A B 1 4.0   ( 2 ) 1.3   ( 2 )
2 6.1   ( 15 ) 2.0   ( 15 ) 3.387336833   ( 4 ) 3.430169462   ( 4 )
3 7.0   ( 92 ) 2.3   ( 92 ) 5.9996 3.430169462   ( 4 )
S O 1 2.0   ( 3 ) 6.6   ( 4 )
2 8.7   ( 25 ) 2.9   ( 25 ) 3.501864006   ( 6 ) 3.502158271   ( 6 )
3 6.5   ( 153 ) 2.2   ( 153 ) 6.0000 3.502158271   ( 6 )
W L 1 3.1   ( 2 ) 1.0   ( 2 )
2 8.7   ( 16 ) 2.9   ( 16 ) 2.342171924   ( 4 ) 2.363956833   ( 4 )
3 4.2   ( 97 ) 1.4   ( 97 ) 5.9937 2.363956833   ( 4 )
N M 1 1 1.1   ( 2 ) 3.8   ( 3 )
2 4.2   ( 19 ) 1.4   ( 19 ) 4.950410601   ( 5 ) 4.961390884   ( 5 )
3 1.1   ( 117 ) 3.6   ( 118 ) 5.9999 4.961390884   ( 5 )
N M 2 1 6.8   ( 3 ) 2.3   ( 3 )
2 9.3   ( 21 ) 3.1   ( 21 ) 2.235218683   ( 5 ) 2.237490006   ( 5 )
3 5.9   ( 128 ) 2.0   ( 128 ) 6.0000 2.237490006   ( 5 )
N M 3 1 2.5   ( 3 ) 8.3   ( 4 )
2 7.2   ( 24 ) 2.4   ( 24 ) 7.590662313   ( 6 ) 7.588009587   ( 6 )
3 4.3   ( 147 ) 1.4   ( 147 ) 6.0000 7.588009587   ( 6 )
Table 8. Different radii of convergence.
Schemes | R_1 | R_2 | R_3 | R | U_0 | ρ
N M 1 0.011971 0.0016362 0.0032737 0.0016362 0.3198 6.000
N M 2 0.011971 0.00096269 0.00064786 0.00064785 0.3177 6.000
N M 3 0.011971 0.00041256 0.0011889 0.00041256 0.3179 6.000
Table 9. Consumption of CPU time by distinct schemes.
IM/Ex | HU | LO | GR | SA | AB | SO | WL | NM1 | NM2 | NM3
Example 1 | 39.4198 | 21.3660 | 28.0788 | 27.8556 | 42.7671 | 27.7756 | 22.3137 | 18.5871 | 13.2373 | 13.0102
Example 2 | 17.40530 | 8.03168 | 5.59796 | 11.14688 | 14.15601 | 10.18220 | 7.79853 | 7.95962 | 5.76506 | 5.75208
Example 3 | 1.29392 | 0.54339 | 0.57942 | 2.48477 | 0.90264 | 0.94667 | 0.53439 | 0.20717 | 0.19013 | 0.18615
Example 4 | 28.05384 | 14.07596 | 14.08696 | 82.84556 | 27.91375 | 27.78366 | 13.98790 | 0.10908 | 0.10708 | 0.10507
Example 5 | 51.16305 | 26.15242 | 22.25168 | 53.57176 | 53.92799 | 33.67275 | 27.28121 | 28.15583 | 14.22802 | 8.03466
TT | 109.28202 | 70.16949 | 70.59480 | 177.90459 | 139.66752 | 100.41370 | 72.91573 | 55.01879 | 33.52763 | 27.08812
AT | 21.856405 | 14.033898 | 11.765800 | 35.580918 | 27.933505 | 20.082739 | 14.583146 | 11.003757 | 6.705525 | 5.417624
TT: stands for total time for all examples to the corresponding iterative method. AT: means average time taken by corresponding iterative method. CPU: stands for central processing unit.
