Article

Local Convergence for Multi-Step High Order Solvers under Weak Conditions

by Ramandeep Behl 1 and Ioannis K. Argyros 2,*
1 Department of Mathematics, King Abdulaziz University, Jeddah 21589, Saudi Arabia
2 Department of Mathematical Sciences, Cameron University, Lawton, OK 73505, USA
* Author to whom correspondence should be addressed.
Mathematics 2020, 8(2), 179; https://doi.org/10.3390/math8020179
Submission received: 19 November 2019 / Revised: 21 January 2020 / Accepted: 24 January 2020 / Published: 2 February 2020
(This article belongs to the Special Issue Computational Methods in Analysis and Applications 2020)

Abstract

Our aim in this article is to suggest an extended local convergence study for a class of multi-step solvers for nonlinear equations valued in a Banach space. In contrast to previous studies, which adopt hypotheses on up to the seventh Fréchet derivative, we restrict the hypotheses to the first-order derivative of the operators involved and to Lipschitz constants. Hence, we enlarge the suitability region of these solvers and provide computable radii of convergence. At the end of this study, we choose a variety of numerical problems which illustrate that our results apply to nonlinear problems where earlier ones do not.
MSC:
65G99; 65H10; 47J25; 47J05; 65D10; 65D99

1. Introduction

Finding an approximate solution μ of
F(x) = 0,
is one of the top priorities in numerical analysis. We assume that F : A ⊆ E1 → E2 is a Fréchet-differentiable operator, where E1, E2 are Banach spaces and A is a convex subset of E1. By B(E1, E2) we denote the set of bounded linear operators from E1 into E2.
The problem of finding an approximate solution μ is very important, since many problems can be written in the form of Equation (1) [1,2,3,4,5,6,7,8]. However, it is not always possible to obtain the solution μ in explicit form, so most solvers are iterative in nature. The analysis of a solver involves local convergence, which relies on information in a neighborhood of μ and ensures the convergence of the iteration procedure. One of the most significant tasks in the analysis of iterative procedures is to determine the convergence region; hence, it is essential to estimate the radius of convergence.
We recall the iterative solver suggested in Reference [7], for all σ = 0, 1, 2, …, as
yσ = xσ − F′(xσ)^{−1} F(xσ),
zσ = ϕ1(xσ, yσ),
zσ^(1) = zσ − ϕ(xσ, yσ) F(zσ),
⋮
zσ^(m−1) = zσ^(m−2) − ϕ(xσ, yσ) F(zσ^(m−2)),
xσ+1 = zσ^(m−1) − ϕ(xσ, yσ) F(zσ^(m−1)),
where x0 ∈ A is a starting guess, zσ = ϕ1(xσ, yσ) is an iteration function of convergence order λ (λ ≥ 1) and
ϕ(s, ζ) = (1/4) ( 3 [3F′(ζ) − F′(s)]^{−1} − F′(s)^{−1} ).
Here F′ stands for the first-order Fréchet derivative of F. The study of these methods is important for various reasons already stated in Reference [7]; for brevity, we refer the reader to Reference [7] and the references therein. On top of those reasons, we also mention that method (2) generalizes existing widely used Newton-type methods, such as Newton's and Traub's, so it is important to study these methods under the same set of convergence criteria. Keeping the linear operator frozen is also a very cheap and efficient way of increasing the order of convergence. The convergence order of (2) was given in Reference [7], but using hypotheses on up to the 7th-order derivative of F, although only the 1st-order derivative appears in scheme (2). Such conditions hamper the suitability of solver (2). Consider the function F = Θ with E1 = E2 = R, defined on A = [−1/2, 3/2] by
Θ(κ) = κ³ ln κ² + κ⁵ − κ⁴ for κ ≠ 0, and Θ(0) = 0.
Using this definition, we get
Θ′(κ) = 3κ² ln κ² + 5κ⁴ − 4κ³ + 2κ²,
Θ″(κ) = 6κ ln κ² + 20κ³ − 12κ² + 10κ
and
Θ‴(κ) = 6 ln κ² + 60κ² − 24κ + 22.
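The blow-up of the third derivative near the origin is easy to check numerically; the sketch below simply evaluates the formula for Θ‴ at points approaching 0:

```python
import math

def theta_ppp(k):
    # third derivative of the motivational function (valid for k != 0)
    return 6.0 * math.log(k * k) + 60.0 * k * k - 24.0 * k + 22.0

# 6 ln(k^2) -> -infinity as k -> 0, so the third derivative is unbounded on A
vals = [theta_ppp(10.0 ** (-p)) for p in (1, 3, 6)]
```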
It is clear from the above that the third-order derivative of Θ is unbounded on A. We have plenty of research articles on iterative solvers [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26]. The local convergence analysis of these solvers traditionally requires Taylor expansions, and the operator involved must be sufficiently many times differentiable in a neighborhood of the solution μ. In this way the convergence order is established, but derivatives of order higher than one do not appear in the solvers themselves; as we just saw with the motivational example, such hypotheses restrict the applicability of the solvers. Another problem is that this approach does not provide error estimates on ‖xn − μ‖ that could be used to predetermine the number of steps required to attain a prescribed error tolerance. The uniqueness of the solution μ also cannot be established in any set containing it. Moreover, the starting guess is a shot in the dark. Therefore, it is important to find a technique other than the preceding one. This is what we offer in this article. Furthermore, the computational order of convergence (COC) and the approximate computational order of convergence (ACOC) [27] are used to compute the convergence order (to be explained in Remark 1 (d)).
These formulas do not require derivatives of order higher than one and, in the case of the ACOC, knowledge of μ is not needed. It is worth noting that the iterates are obtained by using (2), which involves the first derivative; hence, these iterates also depend on the first derivative (see Remark 1 (d)). Our techniques can be used on other solvers to extend their applicability in a similar fashion.
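To fix ideas, the structure of scheme (2) can be sketched for a scalar equation. The substep ϕ1 and the frozen multiplier ϕ are passed in as callables; the concrete choices in the demo below (a second Newton step for ϕ1 and the frozen Newton multiplier 1/F′(xσ) for ϕ) are illustrative assumptions, not the exact operator of Reference [7]:

```python
import math

def multistep_solve(F, dF, phi1, phi, x0, m=3, tol=1e-12, itmax=50):
    """Sketch of scheme (2): Newton predictor, a substep phi1, then
    m - 1 corrections that reuse the frozen multiplier phi(x, y)."""
    x = x0
    for _ in range(itmax):
        y = x - F(x) / dF(x)      # Newton predictor
        z = phi1(x, y)            # substep of order lambda
        w = phi(x, y)             # frozen multiplier, reused below
        for _ in range(m - 1):
            z = z - w * F(z)      # frozen multi-step corrections
        x = z
        if abs(F(x)) < tol:
            break
    return x

# Illustrative demo on F(t) = e^t - 1 with root 0.
F = lambda t: math.exp(t) - 1.0
dF = lambda t: math.exp(t)
root = multistep_solve(F, dF,
                       phi1=lambda x, y: y - F(y) / dF(y),  # extra Newton step
                       phi=lambda x, y: 1.0 / dF(x),        # frozen 1/F'(x)
                       x0=0.5)
```

The point of the frozen multiplier is that each extra inner step reuses an already-computed linear piece, so the per-step cost stays low.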

2. Local Convergence

Here, we present a study of local convergence for solver (2). For this, we consider a nondecreasing continuous function φ0 : [0, ∞) → [0, ∞) such that φ0(0) = 0. We assume that the equation
φ0(ζ) = 1
has a minimal positive solution r0.
Define functions g1, g2, h1 and h2 on the interval [0, r0) by
g1(ζ) = [∫0^1 φ((1 − θ)ζ) dθ] / (1 − φ0(ζ)), g2(ζ) = ψ(ζ, g1(ζ)ζ) ζ^{λ−1}, h1(ζ) = g1(ζ) − 1 and h2(ζ) = g2(ζ) − 1,
where v, φ : [0, r0) → [0, ∞) and ψ : [0, r0) × [0, r0) → [0, ∞) are also nondecreasing and continuous, satisfying φ(0) = 0. We have h1(0) = h2(0) = −1 and h1(ζ), h2(ζ) → ∞ as ζ → r0⁻. Then, by the intermediate value theorem, the functions h1 and h2 have solutions in the interval (0, r0). Denote by r1 and r2 the smallest such solutions in (0, r0) of h1 and h2, respectively. Assume that the equation p(ζ) = 1 has a minimal positive solution r_p, and consider the functions
p(ζ) = (1/2) [3φ0(g1(ζ)ζ) + φ0(ζ)], h_p(ζ) = p(ζ) − 1.
These functions are defined on the interval [0, r̄), where r̄ = min{r0, r_p}. Consider functions g^(i), h^(i), i = 1, 2, …, m on [0, r̄) given by
g^(i)(ζ) = [1 + q(ζ) ∫0^1 v(θ g^(i−1)(ζ)ζ) dθ]^{i−1} g2(ζ), g^(1)(ζ) = g2(ζ), and h^(i)(ζ) = g^(i)(ζ) − 1,
where
q(ζ) = (1/2) [φ0(ζ) + φ0(g1(ζ)ζ)] / (1 − p(ζ)).
Then, h^(i)(0) = −1 and h^(i)(ζ) → ∞ as ζ → r̄⁻. Denote by r^(i) the minimal solutions of the corresponding functions h^(i) in (0, r̄).
Set r as
r = min { r 1 , r 2 , r ( i ) } .
Then, it follows
0 < r < r 0
and, for all ζ ∈ [0, r),
0 ≤ g1(ζ) < 1,
0 ≤ g2(ζ) < 1,
0 ≤ p(ζ) < 1,
0 ≤ q(ζ)
and
0 ≤ g^(i)(ζ) < 1.
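Once φ0 and φ are fixed, r1 is the smallest positive root of h1 and can be computed by bisection; r2 and r^(i) are found the same way. A minimal sketch (the quadrature rule and bracketing step are our choices; for φ0(t) = L0·t and φ(t) = L·t it recovers the classical Newton radius 2/(2L0 + L)):

```python
def radius_r1(phi0, phi, t_max=10.0, n_steps=200, quad_pts=200):
    # g1(t) = (integral_0^1 phi((1 - theta) t) dtheta) / (1 - phi0(t))
    def g1(t):
        integral = sum(phi((1.0 - (j + 0.5) / quad_pts) * t)
                       for j in range(quad_pts)) / quad_pts  # midpoint rule
        return integral / (1.0 - phi0(t))

    def ok(t):  # True while we are below the singularity of g1 and g1 < 1
        return phi0(t) < 1.0 and g1(t) < 1.0

    step = t_max / n_steps
    t = 0.0
    while ok(t + step):           # march to bracket the first crossing
        t += step
    lo, hi = t, t + step
    for _ in range(80):           # bisection on the bracket
        mid = 0.5 * (lo + hi)
        if ok(mid):
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

L0, L = 1.0, 1.0                  # illustrative Lipschitz constants
r1 = radius_r1(lambda t: L0 * t, lambda t: L * t)   # expect 2/(2*L0 + L)
```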
Let U ( ξ , ρ ) , U ¯ ( ξ , ρ ) be, respectively, open and closed balls in E 1 centered at ξ E 1 and of radius ρ > 0 . Next, the local convergence analysis of solver (2) follows.
Theorem 1.
Let F : A ⊆ E1 → E2 be a differentiable operator. Let v, φ0, φ : [0, ∞) → [0, ∞) and ψ : [0, ∞) × [0, ∞) → [0, ∞) be nondecreasing continuous functions such that φ0(0) = φ(0) = 0. Let the parameter r0 be defined by (4). Suppose that there exists μ ∈ A such that
F(μ) = 0, F′(μ)^{−1} ∈ B(E2, E1)
and
‖F′(μ)^{−1}(F′(x) − F′(μ))‖ ≤ φ0(‖x − μ‖), for all x ∈ A.
Moreover, suppose that for all x, y ∈ A0 = A ∩ U(μ, r0)
‖F′(μ)^{−1}(F′(x) − F′(y))‖ ≤ φ(‖x − y‖),
‖F′(μ)^{−1} F′(x)‖ ≤ v(‖x − μ‖),
‖ϕ1(x, y) − μ‖ ≤ ψ(‖x − μ‖, ‖y − μ‖) ‖x − μ‖^λ
and
U̅(μ, r) ⊆ A.
Then, the sequence {xσ} generated for x0 ∈ U(μ, r) ∖ {μ} by solver (2) is well defined, remains in U(μ, r) for all σ = 0, 1, 2, … and converges to μ, so that
‖yσ − μ‖ ≤ g1(‖xσ − μ‖) ‖xσ − μ‖ ≤ ‖xσ − μ‖ < r,
‖zσ − μ‖ ≤ g2(‖xσ − μ‖) ‖xσ − μ‖ ≤ ‖xσ − μ‖,
‖zσ^(i) − μ‖ ≤ g^(i)(‖xσ − μ‖) ‖xσ − μ‖ ≤ ‖xσ − μ‖, i = 1, 2, …, m − 1
and
‖xσ+1 − μ‖ ≤ g^(m)(‖xσ − μ‖) ‖xσ − μ‖ ≤ ‖xσ − μ‖.
Further, if
∫0^1 φ0(θR) dθ < 1 for some R ≥ r,
then μ is the only solution of the equation F(x) = 0 in A1 := A ∩ U̅(μ, R).
Proof. 
We use mathematical induction to show that assertions (18)–(21) are satisfied. Using the hypotheses x0 ∈ U(μ, r) ∖ {μ}, (4), (5) and (13), we obtain
‖F′(μ)^{−1}(F′(x0) − F′(μ))‖ ≤ φ0(‖x0 − μ‖) ≤ φ0(r) < 1.
Therefore, by the Banach lemma on invertible operators, F′(x0)^{−1} ∈ B(E2, E1); y0 and z0 are well defined, and
‖F′(x0)^{−1} F′(μ)‖ ≤ 1 / (1 − φ0(‖x0 − μ‖)).
By adopting (2), (5), (7), (14) and (24), we have
‖y0 − μ‖ = ‖x0 − μ − F′(x0)^{−1} F(x0)‖ ≤ ‖F′(x0)^{−1} F′(μ)‖ ‖∫0^1 F′(μ)^{−1}[F′(μ + θ(x0 − μ)) − F′(x0)](x0 − μ) dθ‖ ≤ [∫0^1 φ((1 − θ)‖x0 − μ‖) dθ / (1 − φ0(‖x0 − μ‖))] ‖x0 − μ‖ = g1(‖x0 − μ‖) ‖x0 − μ‖ ≤ ‖x0 − μ‖ < r,
showing (18) for σ = 0 and y 0 U ( μ , r ) .
By (2), (5), (8), (16) and (25), we obtain
‖z0 − μ‖ = ‖ϕ1(x0, y0) − μ‖ ≤ ψ(‖x0 − μ‖, ‖y0 − μ‖) ‖x0 − μ‖^λ ≤ ψ(‖x0 − μ‖, g1(‖x0 − μ‖)‖x0 − μ‖) ‖x0 − μ‖^λ = g2(‖x0 − μ‖) ‖x0 − μ‖ ≤ ‖x0 − μ‖ < r,
showing (19) (for σ = 0 ) and z 0 U ( μ , r ) . We can write by (12)
F(z0) = F(z0) − F(μ) = ∫0^1 F′(μ + θ(z0 − μ)) dθ (z0 − μ).
Then, from (15), (26) and (27), we obtain
‖F′(μ)^{−1} F(z0)‖ ≤ ∫0^1 v(θ‖z0 − μ‖) dθ ‖z0 − μ‖ ≤ ∫0^1 v(θ g2(‖x0 − μ‖)‖x0 − μ‖) dθ g2(‖x0 − μ‖) ‖x0 − μ‖.
We must show that ϕ(x0, y0) is well defined, that is, that the operator 3F′(y0) − F′(x0) is invertible. In view of (5), (9), (13) and (25), we get
‖(2F′(μ))^{−1}(3F′(y0) − F′(x0) − 2F′(μ))‖ ≤ (1/2) [3φ0(‖y0 − μ‖) + φ0(‖x0 − μ‖)] ≤ (1/2) [3φ0(g1(‖x0 − μ‖)‖x0 − μ‖) + φ0(‖x0 − μ‖)] = p(‖x0 − μ‖) ≤ p(r) < 1,
so 3F′(y0) − F′(x0) is invertible, z0^(1), …, z0^(m−1), x1 exist,
‖[3F′(y0) − F′(x0)]^{−1} F′(μ)‖ ≤ 1 / (2(1 − p(‖x0 − μ‖)))
and
‖ϕ(x0, y0) F′(μ)‖ ≤ (1/4) ( 3 ‖[3F′(y0) − F′(x0)]^{−1} F′(μ)‖ + ‖F′(x0)^{−1} F′(μ)‖ ).
Using (2), (5), (8), (9), (11) (for i = 2 ), (28) and (31), we obtain
‖z0^(1) − μ‖ = ‖z0 − μ − ϕ(x0, y0) F′(μ) F′(μ)^{−1} F(z0)‖ ≤ [1 + ‖ϕ(x0, y0) F′(μ)‖ ∫0^1 v(θ‖z0 − μ‖) dθ] ‖z0 − μ‖ ≤ g^(1)(‖x0 − μ‖) ‖x0 − μ‖ ≤ ‖x0 − μ‖ < r,
showing that (20) holds for σ = 0, i = 1 and that z0^(1) ∈ U(μ, r). In an analogous way, we obtain for i = 2, 3, …, m − 1 that
‖z0^(i−1) − μ‖ = ‖z0^(i−2) − μ − ϕ(x0, y0) F(z0^(i−2))‖ ≤ [1 + q(‖x0 − μ‖) ∫0^1 v(θ‖z0^(i−2) − μ‖) dθ] ‖z0^(i−2) − μ‖ ≤ g^(i−1)(‖x0 − μ‖) ‖x0 − μ‖ ≤ ‖x0 − μ‖ < r,
which implies that (20) holds for σ = 0, i = 1, 2, …, m − 1, and that z0^(m−1) ∈ U(μ, r).
In view of solver (2), (5), (11) (for i = m) and the preceding estimates,
‖x1 − μ‖ ≤ ‖z0^(m−1) − μ‖ + ‖ϕ(x0, y0) F′(μ)‖ ‖F′(μ)^{−1} F(z0^(m−1))‖ ≤ [1 + q(‖x0 − μ‖) ∫0^1 v(θ‖z0^(m−1) − μ‖) dθ] ‖z0^(m−1) − μ‖ ≤ g^(m)(‖x0 − μ‖) ‖x0 − μ‖ ≤ ‖x0 − μ‖ < r,
showing (21) (for σ = 0) with x1 ∈ U(μ, r). Now, replace x0, y0, z0, z0^(i) (i = 1, 2, …, m − 1) and x1 by xσ, yσ, zσ, zσ^(i) and xσ+1 in the preceding estimates. Hence, we attain (18)–(21). By adopting
‖xσ+1 − μ‖ ≤ c ‖xσ − μ‖ < r, where c = g^(m)(‖x0 − μ‖) ∈ [0, 1),
we conclude that lim_{σ→∞} xσ = μ with xσ+1 ∈ U(μ, r). Finally, for the uniqueness of the solution, assume that y* ∈ A1 satisfies F(y*) = 0. Set Q = ∫0^1 F′(μ + θ(y* − μ)) dθ, so
‖F′(μ)^{−1}(Q − F′(μ))‖ ≤ ∫0^1 φ0(θ‖y* − μ‖) dθ ≤ ∫0^1 φ0(θR) dθ < 1.
Hence, Q is invertible. Then,
0 = F(y*) − F(μ) = Q(y* − μ),
yields y * = μ . ☐
Remark 1. 
(a) 
It is clear from (13) that we can drop the hypothesis (15) and choose
v(ζ) = 1 + φ0(ζ) or v(ζ) = 1 + φ0(r0).
Indeed, we have
‖F′(μ)^{−1} F′(x)‖ = ‖F′(μ)^{−1}(F′(x) − F′(μ)) + I‖ ≤ 1 + ‖F′(μ)^{−1}(F′(x) − F′(μ))‖ ≤ 1 + φ0(‖x − μ‖) ≤ 1 + φ0(ζ) for ‖x − μ‖ ≤ ζ ≤ r0.
(b) 
We can set
r0 = φ0^{−1}(1)
instead of (4), provided that the function φ0 is strictly increasing.
(c) 
If φ0, w and v are constant functions, then
r1 = 2 / (2φ0 + w)
and
r ≤ r1,
where r1 is the convergence radius for Newton's solver [14]:
xσ+1 = xσ − F′(xσ)^{−1} F(xσ).
Rheinboldt [26] and Traub [6] also provided, instead of r1, the radius of convergence
r_TR = 2 / (3φ1)
and by Argyros [1,2]
r_A = 2 / (2φ0 + φ1),
where φ1 is the Lipschitz constant corresponding to (9), so
w ≤ φ1, φ0 ≤ φ1,
so
r_TR ≤ r_A ≤ r1
and
r_TR / r_A → 1/3 as φ0 / w → 0.
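A quick numeric illustration of these comparisons, with the constants chosen arbitrarily by us subject to w ≤ φ1 and φ0 ≤ φ1:

```python
phi0, w, phi1 = 0.5, 0.8, 1.0      # illustrative constants only
r_TR = 2.0 / (3.0 * phi1)          # Rheinboldt/Traub radius
r_A = 2.0 / (2.0 * phi0 + phi1)    # Argyros radius
r_1 = 2.0 / (2.0 * phi0 + w)       # radius for Newton's solver above
# the ordering r_TR <= r_A <= r_1 holds, and the gaps widen as phi0, w shrink
```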
(d) 
By adopting conditions on up to the 7th-order derivative of the operator F, the order of convergence of solver (2) was given in Reference [7]. We assume hypotheses only on the 1st-order derivative of F. For obtaining the order of convergence, we adopted
ξ = ln( ‖xσ+2 − μ‖ / ‖xσ+1 − μ‖ ) / ln( ‖xσ+1 − μ‖ / ‖xσ − μ‖ ), for each σ = 0, 1, 2, …
or
ξ* = ln( ‖xσ+2 − xσ+1‖ / ‖xσ+1 − xσ‖ ) / ln( ‖xσ+1 − xσ‖ / ‖xσ − xσ−1‖ ), for each σ = 1, 2, 3, …,
the computational order of convergence (COC) and the approximate computational order of convergence (ACOC) [28,29], respectively. These definitions can also be found in Reference [27]. They do not require derivatives of order higher than one. Indeed, notice that to generate the iterates xn, and therefore to compute ξ and ξ*, we only need formula (2), which uses first derivatives. It is vital to note that the ACOC does not need prior knowledge of the exact root μ.
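The ACOC ξ* is computed directly from the iterates; a small sketch, using Newton's method on x² − 2 = 0 purely as an illustration (for which the value should be close to 2):

```python
import math

def acoc(xs):
    """Approximate computational order of convergence from the last four iterates."""
    d = [abs(xs[i + 1] - xs[i]) for i in range(len(xs) - 1)]
    return math.log(d[-1] / d[-2]) / math.log(d[-2] / d[-3])

xs = [1.0]                          # Newton iterates for x^2 - 2 = 0
for _ in range(4):
    x = xs[-1]
    xs.append(x - (x * x - 2.0) / (2.0 * x))
order = acoc(xs)                    # close to 2 for Newton's method
```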
(e) 
Consider F satisfying the autonomous differential equation [1,2] of
F′(x) = P(F(x)),
where P is a given continuous operator. Since F′(x*) = P(F(x*)) = P(0), our results apply without knowledge of x*. As an example, choose F(x) = e^x − 1; then we can select P(x) = x + 1.

3. Concrete Applications

Here, we illustrate the theoretical consequences suggested in Section 2. We choose λ = 1 and ϕ1(xσ, yσ) = yσ − F′(yσ)^{−1} F(yσ) in all examples. Next, we provide the following numerical examples:
Example 1.
Choose E1 = E2 = C[0, 1]. We study the mixed Hammerstein-like equation [4,18], defined as follows:
x(s) = 1 + ∫0^1 H(s, ζ) [ x(ζ)^{3/2} + x(ζ)²/2 ] dζ,
where
H(s, ζ) = (1 − s)ζ, ζ ≤ s; s(1 − ζ), s ≤ ζ,
defined on [0, 1] × [0, 1]. The solution μ(s) = 0 is the same as a zero of (1), where F : A → A is given as:
F(x)(s) = x(s) − ∫0^1 H(s, ζ) [ x(ζ)^{3/2} + x(ζ)²/2 ] dζ.
But
‖∫0^1 H(s, ζ) dζ‖ ≤ 1/8,
and
[F′(x)y](s) = y(s) − ∫0^1 H(s, ζ) [ (3/2) x(ζ)^{1/2} + x(ζ) ] y(ζ) dζ,
so, since F′(μ(s)) = I,
‖F′(μ)^{−1}(F′(x) − F′(y))‖ ≤ (1/8) [ (3/2) ‖x − y‖^{1/2} + ‖x − y‖ ].
Then, we consider
φ0(ζ) = φ(ζ) = (1/8) [ (3/2) ζ^{1/2} + ζ ]
and
v(ζ) = 1 + φ0(ζ),
by Remark 1. But F′ is not Lipschitz, so the earlier studies [4,7] are not applicable to this problem; our technique, on the other hand, does not exhibit this restriction. The different radii of convergence are given in Table 1.
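The value of r1 in Table 1 can be verified by hand: with these choices, ∫0^1 φ((1 − θ)t) dθ = (1/8)(t^{1/2} + t/2), so the defining equation g1(t) = 1 reduces to (1/8)((5/2)t^{1/2} + (3/2)t) = 1, a quadratic in u = t^{1/2}:

```python
import math

# 1.5 u^2 + 2.5 u - 8 = 0, with u = sqrt(t)
u = (-2.5 + math.sqrt(2.5 ** 2 + 4.0 * 1.5 * 8.0)) / (2.0 * 1.5)
r1 = u * u   # smallest positive root of h1; matches r1 = 2.6303 in Table 1
```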
Example 2.
The movement of a particle in three dimensions is described by the following system of differential equations
f1′(x) − f1(x) − 1 = 0, f2′(y) − (e − 1)y − 1 = 0, f3′(z) − 1 = 0,
with x, y, z ∈ A and f1(0) = f2(0) = f3(0) = 0. Define v = (x, y, z)^T and the function F := (f1, f2, f3) : A → R³ given as follows:
F(v) = ( e^x − 1, ((e − 1)/2) y² + y, z )^T.
So, we obtain
F′(v) = diag( e^x, (e − 1)y + 1, 1 ).
Then, we have for μ = (0, 0, 0)^T that φ0(ζ) = (e − 1)ζ, φ(ζ) = e^{1/(e−1)} ζ and v(ζ) = e^{1/(e−1)}. The different radii of convergence are given in Table 2.
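Since F′(v) is diagonal, Newton's method applied to this system decouples componentwise. A short sketch (the starting point is our choice, inside the Newton radius r1 reported in Table 2):

```python
import math

def newton_step(v):
    x, y, z = v
    Fv = (math.exp(x) - 1.0, 0.5 * (math.e - 1.0) * y * y + y, z)
    dFv = (math.exp(x), (math.e - 1.0) * y + 1.0, 1.0)   # diagonal Jacobian
    return tuple(vi - fi / di for vi, fi, di in zip(v, Fv, dFv))

v = (0.3, 0.3, 0.3)        # ||v - mu|| < r1 = 0.377542
for _ in range(10):
    v = newton_step(v)     # converges to mu = (0, 0, 0)^T
```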
Example 3.
Let us choose E1 = E2 = C[0, 1], furnished with the max norm. Set A = U̅(0, 1) and choose the function F on A given by
F(Γ)(x) = Γ(x) − 5 ∫0^1 xθ Γ(θ)³ dθ.
We have that
[F′(Γ)ξ](x) = ξ(x) − 15 ∫0^1 xθ Γ(θ)² ξ(θ) dθ, for each ξ ∈ A.
Then, we have that φ0(ζ) = 15ζ, φ(ζ) = 30ζ and v(ζ) = 2. The distinct radii of convergence we calculated are given in Table 3.
Example 4.
For the academic problem considered in the introduction, we obtain φ0(ζ) = φ(ζ) = 96.662907ζ and v(ζ) = 2. The different radii of convergence are depicted in Table 4.

4. Application of Our Scheme on Large System of Nonlinear Equations

In Table 5, Table 6 and Table 7, we report (j), ‖F(x_j)‖, ‖x_{j+1} − x_j‖ and ξ* ≈ log( ‖x_{j+1} − x_j‖ / ‖x_j − x_{j−1}‖ ) / log( ‖x_j − x_{j−1}‖ / ‖x_{j−1} − x_{j−2}‖ ) as the iteration index, the absolute residual error, the error between two consecutive iterations and the computational order of convergence, respectively.
The whole calculation was performed in the Mathematica software (Version 9, Wolfram Research, Champaign, IL, USA). We use at least 1000 digits of mantissa in order to minimize round-off errors. The notation a1(±a2) stands for a1 × 10^{±a2}.
Example 5.
We assume here a boundary value problem [30], which is given by
v″ = (1/2) v³ + 3v′ − (3/2) x + 1/2, v(0) = 0, v(1) = 1.
Further, we chose a σ-point partition of [0, 1] in the following way:
x0 = 0 < x1 < x2 < x3 < ⋯ < xσ = 1, where x_{i+1} = x_i + k, k = 1/σ.
Furthermore, we set v0 = v(x0) = 0, v1 = v(x1), …, v_{σ−1} = v(x_{σ−1}), vσ = v(xσ) = 1. Adopting the following finite-difference approximations of the derivatives for problem (61),
v′_j = (v_{j+1} − v_{j−1}) / (2k), v″_j = (v_{j−1} − 2v_j + v_{j+1}) / k², j = 1, 2, …, σ − 1,
we obtain
v_{j+1} − 2v_j + v_{j−1} − (3k/2)(v_{j+1} − v_{j−1}) − (k²/2) v_j³ + (3/2) x_j k² − k²/2 = 0, j = 1, 2, …, σ − 1,
a system of nonlinear equations (SNE) in σ − 1 unknowns. We choose the starting approximation v^(0) = (1.5, 1.5, 1.5, 1.5, 1.5, 1.5)^T and solve the problem as a 6 × 6 SNE by choosing σ = 7. We obtained the following solution:
μ = 0.0765439 , 0.165874 , 0.271521 , 0.398454 , 0.553886 , 0.748688 T .
The numerical outcomes are depicted in Table 5.
Example 6.
We choose the prominent 2D Bratu problem [31,32], given by
u_xx + u_tt + C e^u = 0 on A : {(x, t) : 0 ≤ x ≤ 1, 0 ≤ t ≤ 1}, with the boundary condition u = 0 on ∂A.
Let us assume that Θ_{i,j} = u(x_i, t_j) is the numerical solution at the grid points of the mesh. In addition, let τ1 and τ2 be the numbers of steps in the x and t directions, respectively, and let h and k be the corresponding step sizes. In order to find the solution of PDE (62), we adopt the following approach
u_xx(x_i, t_j) = (Θ_{i+1,j} − 2Θ_{i,j} + Θ_{i−1,j}) / h², C = 0.1, t ∈ [0, 1],
which further yields the succeeding SNE
Θ_{i,j+1} + Θ_{i,j−1} + Θ_{i+1,j} + Θ_{i−1,j} − 4Θ_{i,j} + h²C exp(Θ_{i,j}) = 0, i = 1, 2, …, τ1, j = 1, 2, …, τ2.
By choosing τ1 = τ2 = 11, h = 1/11 and C = 0.1, we get a large SNE of order 100 × 100. The starting point is
x0 = 0.1 ( sin(πh) sin(πk), sin(2πh) sin(2πk), …, sin(10πh) sin(10πk) )^T
and results are depicted in Table 6.
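The residual of the discretized Bratu system can be assembled in a few lines. This is a sketch under our reading of the scheme: a standard five-point stencil with h = k, zero Dirichlet boundary values, and row-major ordering of the 10 × 10 interior unknowns:

```python
import math

def bratu_residual(theta, n=10, C=0.1):
    """theta holds the n*n interior values; boundary values are zero."""
    h = 1.0 / (n + 1)
    def T(i, j):
        return theta[i * n + j] if 0 <= i < n and 0 <= j < n else 0.0
    res = []
    for i in range(n):
        for j in range(n):
            res.append(T(i + 1, j) + T(i - 1, j) + T(i, j + 1) + T(i, j - 1)
                       - 4.0 * T(i, j) + h * h * C * math.exp(T(i, j)))
    return res

r0 = bratu_residual([0.0] * 100)   # at theta = 0 every entry equals h^2 * C
```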
Example 7.
Finally, we deal with the succeeding SNE
F(X) : 1 − x_j² x_{j+1} = 0, 1 ≤ j ≤ σ − 1, 1 − x_σ² x_1 = 0.
To obtain a large system of nonlinear equations of order 200 × 200, we pick σ = 200. In addition, we consider the following starting approximation for this problem:
x^(0) = ( 5/4, 5/4, 5/4, …, 5/4 (200 times) )^T,
which converges to μ = ( 1, 1, 1, …, 1 (200 times) )^T. The attained computational outcomes are illustrated in Table 7.

5. Concluding Remarks

Recently, there has been a surge in the development of multi-step solvers for nonlinear equations. In this article, we presented a unifying local convergence analysis of solver (2), relying only on the first derivative; in this way, we expand the applicability of these solvers. Notice that earlier studies of special cases of (2) use derivatives of order higher than one, which do not appear in the solver itself; moreover, they provide no bounds on the distances ‖xσ − μ‖ and no uniqueness theorems. We provide computable error bounds and uniqueness results, and this is where the novelty of our article lies. Numerical examples and applications are also given to test the convergence conditions. In our applications, we solve the 2D Bratu problem, a boundary value problem, and a 200 × 200 system of nonlinear equations.

Author Contributions

R.B. and I.K.A.: Conceptualization; Methodology; Validation; Writing—Original Draft Preparation; Writing—Review & Editing. All authors have read and agreed to the published version of the manuscript.

Funding

Deanship of Scientific Research (DSR) at King Abdulaziz University, Jeddah, Saudi Arabia, under Grant No. D-237-130-1440.

Acknowledgments

This project was funded by the Deanship of Scientific Research (DSR), King Abdulaziz University, Jeddah, under grant No. (D-237-130-1440). The authors, therefore, gratefully acknowledge the DSR technical and financial support.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Argyros, I.K. Convergence and Application of Newton-type Iterations; Springer: New York, NY, USA, 2008. [Google Scholar]
  2. Argyros, I.K.; Hilout, S. Computational Methods in Nonlinear Analysis; World Scientific Publishing Company: Hackensack, NJ, USA, 2013. [Google Scholar]
  3. Cordero, A.; Torregrosa, J.R.; Vassileva, M.P. Increasing the order of convergence of iterative schemes for solving nonlinear systems. J. Comput. Appl. Math. 2012, 252, 86–94. [Google Scholar] [CrossRef]
  4. Hernández, M.A.; Martinez, E. On the semilocal convergence of a three steps Newton-type process under mild convergence conditions. Numer. Algor. 2015, 70, 377–392. [Google Scholar] [CrossRef] [Green Version]
  5. Sharma, J.R.; Ghua, R.K.; Sharma, R. An efficient fourth-order weighted-Newton method for system of nonlinear equations. Numer. Algor. 2013, 62, 307–323. [Google Scholar] [CrossRef]
  6. Traub, J.F. Iterative Methods for the Solution of Equations; Prentice-Hall: Englewood Cliffs, NJ, USA, 1964. [Google Scholar]
  7. Xiao, X.; Yin, H. Achieving higher order of convergence for solving systems of nonlinear equations. Appl. Math. Comput. 2017, 311, 251–261. [Google Scholar] [CrossRef]
  8. Potra, F.A.; Pták, V. Nondiscrete Induction and Iterative Processes; Research Notes in Mathematics Volume 103; Pitman Advanced Publishing Program: Boston, MA, USA, 1984. [Google Scholar]
  9. Montazeri, H.; Soleymani, F.; Shateyi, S.; Motsa, S.S. On a new method for computing the numerical solution of systems of nonlinear equations. J. Appl. Math. 2012, 2012, 751975. [Google Scholar] [CrossRef] [Green Version]
  10. Amat, S.; Busquier, S.; Plaza, S.; Gutiérrez, J.M. Geometric constructions of iterative functions to solve nonlinear equations. J. Comput. Appl. Math. 2003, 157, 197–205. [Google Scholar] [CrossRef] [Green Version]
  11. Amat, S.; Busquier, S.; Plaza, S. Dynamics of the King and Jarratt iterations. Aequ. Math. 2005, 69, 212–223. [Google Scholar] [CrossRef]
  12. Amat, S.; Hernández, M.A.; Romero, N. A modified Chebyshev’s iterative method with at least sixth order of convergence. Appl. Math. Comput. 2008, 206, 164–174. [Google Scholar] [CrossRef]
  13. Argyros, I.K.; George, S. Local convergence of some higher-order Newton-like method with frozen derivative. SeMA J. 2015, 70, 47–59. [Google Scholar] [CrossRef]
  14. Argyros, I.K.; Magreñán, Á.A. Ball convergence theorems and the convergence planes of an iterative methods for nonlinear equations. SeMA J. 2015, 71, 39–55. [Google Scholar]
  15. Argyros, I.K.; Magreñán, Á.A. On the convergence of an optimal fourth-order family of methods and its dynamics. Appl. Math. Comput. 2015, 252, 336–346. [Google Scholar] [CrossRef]
  16. Cordero, A.; Torregrosa, J.R. Variants of Newton’s method using fifth-order quadrature formulas. Appl. Math. Comput. 2007, 190, 686–698. [Google Scholar] [CrossRef]
  17. Cordero, A.; Torregrosa, J.R. Variants of Newton’s method for functions of several variables. Appl. Math. Comput. 2006, 183, 199–208. [Google Scholar] [CrossRef]
  18. Ezquerro, J.A.; Hernández, M.A. New iterations of R-order four with reduced computational cost. BIT Numer. Math. 2009, 49, 325–342. [Google Scholar] [CrossRef]
  19. Ezquerro, J.A.; Hernández, M.A. A uniparametric Halley type iteration with free second derivative. Int. J. Pure and Appl. Math. 2003, 6, 99–110. [Google Scholar]
  20. Gutiérrez, J.M.; Hernández, M.A. Recurrence relations for the super-Halley method. Comput. Math. Appl. 1998, 36, 1–8. [Google Scholar] [CrossRef] [Green Version]
  21. Gutiérrez, J.M.; Magreñán, Á.A.; Romero, N. On the semilocal convergence of Newton–Kantorovich method under center-Lipschitz conditions. Appl. Math. Comput. 2013, 221, 79–88. [Google Scholar] [CrossRef]
  22. Kou, J. A third-order modification of Newton method for systems of nonlinear equations. Appl. Math. Comput. 2007, 191, 117–121. [Google Scholar]
  23. Magreñán, Á.A. Different anomalies in a Jarratt family of iterative root-finding methods. Appl. Math. Comput. 2014, 233, 29–38. [Google Scholar]
  24. Magreñán, Á.A. A new tool to study real dynamics: The convergence plane. Appl. Math. Comput. 2014, 248, 215–224. [Google Scholar] [CrossRef] [Green Version]
  25. Petkovic, M.S.; Neta, B.; Petkovic, L.; Džunič, J. Multipoint Methods for Solving Nonlinear Equations; Academic Press: New York, NY, USA, 2013. [Google Scholar]
  26. Rheinboldt, W.C. An adaptive continuation process for solving systems of nonlinear equations. Banach Ctr. Publ. 1978, 3, 129–142. [Google Scholar] [CrossRef]
  27. Weerakoon, S.; Fernando, T.G.I. A variant of Newton’s method with accelerated third order convergence. Appl. Math. Lett. 2000, 13, 87–93. [Google Scholar] [CrossRef]
  28. Beyer, W.A.; Ebanks, B.R.; Qualls, C.R. Convergence rates and convergence-order profiles for sequences. Acta Appl. Math. 1990, 20, 267–284. [Google Scholar] [CrossRef]
  29. Potra, F.A. On Q-order and R-order of convergence. J. Optim. Theory Appl. 1989, 63, 415–431. [Google Scholar] [CrossRef]
  30. Ortega, J.M.; Rheinboldt, W.C. Iterative Solution of Nonlinear Equations in Several Variables; Academic Press: New York, NY, USA, 1970. [Google Scholar]
  31. Kapania, R.K. A pseudo-spectral solution of 2-parameter Bratu’s equation. Comput. Mech. 1990, 6, 55–63. [Google Scholar] [CrossRef]
  32. Simpson, R.B. A method for the numerical determination of bifurcation states of nonlinear systems of equations. SIAM J. Numer. Anal. 1975, 12, 439–451. [Google Scholar] [CrossRef]
Table 1. Distinct radii of convergence.
i    r1        r2        r^(i)     r
1    2.6303    0.816299  0.816299  0.816299
2    2.6303    0.816299  0.677029  0.677029
We notice that the radius of convergence decreases as "i" increases, as expected, since we trade higher order of convergence for a smaller domain of convergence of initial points.
Table 2. Distinct radii of convergence.
i    r1        r2        r^(i)     r
1    0.377542  0.416275  0.416275  0.416275
2    0.377542  0.416275  0.272799  0.272799
3    0.377542  0.416275  0.227777  0.227777
4    0.377542  0.416275  0.198038  0.198038
We notice that the radius of convergence decreases as "i" increases, as expected, since we trade higher order of convergence for a smaller domain of convergence of initial points.
Table 3. Distinct radii of convergence.
i    r1         r2      r^(i)      r
1    0.0333333  0.0625  0.0625     0.0625
2    0.0333333  0.0625  0.0324524  0.0324524
3    0.0333333  0.0625  0.0296809  0.0296809
4    0.0333333  0.0625  0.0270781  0.0270781
We notice that the radius of convergence decreases as "i" increases, as expected, since we trade higher order of convergence for a smaller domain of convergence of initial points.
Table 4. Distinct radii of convergence.
i    r1          r2         r^(i)       r
1    0.00689682  0.0102917  0.0102917   0.0102917
2    0.00689682  0.0102917  0.00623774  0.00623774
3    0.00689682  0.0102917  0.00599906  0.00599906
4    0.00689682  0.0102917  0.00565863  0.00565863
We notice that the radius of convergence decreases as "i" increases, as expected, since we trade higher order of convergence for a smaller domain of convergence of initial points.
Table 5. Computational results on a boundary value problem 5.
Cases of (2)    j    ‖F(x_j)‖     ‖x_{j+1} − x_j‖    ξ*
i = 1           0    1.9          2.8
                1    8.5(−6)      2.2(−5)
                2    9.0(−38)     1.7(−37)
                3    9.6(−231)    1.5(−230)          6.0097
i = 2           0    1.9          2.8
                1    6.8(−8)      2.9(−7)
                2    6.0(−66)     6.5(−66)
                3    2.0(−534)    5.0(−534)          7.9819
We have computed the ACOC and observed that, as "i" increases, so does the ACOC.
Table 6. Computational results of 2D Bratu problem in Example 6.
Cases of (2)    j    ‖F(x_j)‖     ‖x_{j+1} − x_j‖    ξ*
i = 1           0    8.1(−2)      5.0(−1)
                1    1.2(−21)     4.9(−21)
                2    3.3(−141)    5.7(−141)
                3    4.3(−860)    1.6(−569)          5.9911
i = 2           0    8.1(−2)      5.0(−1)
                1    9.6(−30)     1.2(−29)
                2    1.3(−256)    6.4(−256)
                3    2.0(−2068)   2.4(−2068)         8.0096
We have computed ACOC and observed that, as “i” increases, so does the ACOC.
Table 7. Computational results on Example 7.
Cases of (2)    j    ‖F(x_j)‖     ‖x_{j+1} − x_j‖    ξ*
i = 1           0    1.3(+1)      3.5
                1    9.8(−4)      3.3(−4)
                2    2.0(−31)     6.6(−32)
                3    2.7(−225)    9.0(−226)          7.0000
i = 2           0    1.3(+1)      3.5
                1    1.3(−5)      4.3(−6)
                2    5.5(−64)     1.8(−64)
                3    9.5(−648)    3.2(−648)          10.000
We have computed ACOC and observed that, as “i” increases, so does the ACOC.
