Article

Unified Convergence Criteria for Iterative Banach Space Valued Methods with Applications

by
Ioannis K. Argyros
Department of Mathematical Sciences, Cameron University, Lawton, OK 73505, USA
Mathematics 2021, 9(16), 1942; https://doi.org/10.3390/math9161942
Submission received: 24 July 2021 / Revised: 9 August 2021 / Accepted: 10 August 2021 / Published: 14 August 2021
(This article belongs to the Special Issue Application of Iterative Methods for Solving Nonlinear Equations)

Abstract:
A plethora of sufficient convergence criteria have been provided for single-step iterative methods to solve Banach space valued operator equations. However, an interesting question remains unanswered: is it possible to provide unified convergence criteria for single-step iterative methods that are weaker than earlier ones, without additional hypotheses? The answer is yes. In particular, we provide a single sufficient convergence criterion suitable for single-step methods. Moreover, we also give a finer convergence analysis. Numerical experiments involving boundary value problems and Hammerstein-like integral equations complete this paper.

1. Introduction

Numerous applications from mathematics, economics, engineering, physics, chemistry, biology, and medicine, to mention a few, can be modeled as follows:
F(x) = 0,
with an operator F : Ω ⊆ T₁ → T₂ acting between the Banach spaces T₁ and T₂, where Ω is nonempty. That is why determining a solution, denoted by x*, of Equation (1) is of extreme importance. However, this task is difficult in general. Ideally, one desires x* in closed form, but this is accomplished only in some instances. Practitioners and researchers therefore mostly resort to iterative methods, generating a sequence approximating x* under certain conditions on the initial data. The most popular single-step methods are as follows:
Newton’s [1,2]
x_{m+1} = x_m − F′(x_m)^{−1} F(x_m).
Secant [3]
x_{m+1} = x_m − [x_m, x_{m−1}; F]^{−1} F(x_m),
where [·, ·; F] : Ω × Ω → L(T₁, T₂).
Steffensen’s-like [4]
x_{m+1} = x_m − [x_m + λ₁F(x_m), x_m + λ₂F(x_m); F]^{−1} F(x_m),
for T₁ = T₂ and λ₁, λ₂ being parameters.
Newton’s-type [5,6,7,8]
x_{m+1} = x_m − A_m^{−1} F(x_m),
where A_m = A(x_m), A : Ω → L(T₁, T₂).
Stirling’s [9]
x_{m+1} = x_m − G′(x_m)^{−1} G(x_m),
where T₁ = T₂ and G(y) = y − F(y) are used to find fixed points of the equation x = G(x).
Picard’s [10,11]
x_{m+1} = G(x_m).
Numerous other single step methods can be found in [12,13,14] and the references therein.
Clearly, all the preceding methods can be written in a unified way as follows:
x_{m+2} = φ(x_{m+1}, x_m, x_{m−1}), for each m = 0, 1, 2, …,
where φ : Ω × Ω × Ω → T₁ and x_{−1}, x₀, x₁ ∈ Ω.
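To make the unified form concrete, here is a minimal, hedged sketch (not taken from the paper; all names are illustrative) showing that Newton's and the secant method are instances of x_{m+2} = φ(x_{m+1}, x_m, x_{m−1}) for a scalar equation:

```python
# Illustrative sketch: the unified single-step template and two instances of phi.

def iterate(phi, x_minus1, x0, x1, steps):
    """Run x_{m+2} = phi(x_{m+1}, x_m, x_{m-1}) for a scalar equation."""
    older, old, cur = x_minus1, x0, x1
    for _ in range(steps):
        older, old, cur = old, cur, phi(cur, old, older)
    return cur

def newton_phi(f, df):
    # Newton's method ignores the two older iterates.
    return lambda x, _x1, _x2: x - f(x) / df(x)

def secant_phi(f):
    # The secant method uses the divided difference [x_m, x_{m-1}; f].
    def phi(x, xp, _x2):
        d = f(x) - f(xp)
        return x if d == 0.0 else x - f(x) * (x - xp) / d  # guard once converged
    return phi

f = lambda x: x**3 - 2.0
df = lambda x: 3.0 * x**2
root_newton = iterate(newton_phi(f, df), 1.0, 1.2, 1.5, 20)
root_secant = iterate(secant_phi(f), 1.0, 1.2, 1.5, 30)
```

Newton's φ uses only the newest iterate, while the secant φ also uses its predecessor; both fit the same three-point template.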
We usually study two types of convergence for iterative methods. The local convergence uses information about x* to find the radii of convergence balls. The semilocal convergence uses information about x₀ that guarantees convergence to x*. Sufficient convergence criteria for these methods have been provided by many authors [2,12,13].
The following common questions (Q) arise in the semilocal study of these methods:
Q1
Can the convergence region be extended since it is small in general?
Q2
Can the estimates on ‖x_m − x*‖ and ‖x_{m+1} − x_m‖ become tighter? Otherwise, we compute more iterates than necessary to reach a predecided error tolerance.
Q3
Can the convergence criteria be weakened?
Q4
Can the location of solution be more precise?
Q5
Is there a uniform way of studying single-step methods?
Q6
Are there uniform convergence criteria for single-step methods?
The novelty of our paper is that we answer all of these questions (Q) in the affirmative, without additional conditions.
In order to deal with single-step methods, we first consider the following iteration:
t_{m+2} = ψ(t_{m+1}, t_m, t_{m−1}) for each m = 0, 1, 2, …,
where ψ : [0, ∞) × [0, ∞) × [0, ∞) → [0, ∞) is a function related to the initial data. The task of choosing ψ so that the sequence {t_n} is majorizing for all methods listed previously is very difficult in general.
We define a special case of sequences given by (9) as follows:
t_{−1} = α, t₀ = β, t₁ = γ = β + η,
t₂ = t₁ + [a̅₁(t₁ − t₀) + a̅₂(t₀ − t_{−1}) + a̅₃t₁ + a̅₄t₀ + a̅₅t_{−1} + a̅₆] / [1 − (b̅₁(t₁ − t₀) + b̅₂(t₀ − t_{−1}) + b̅₃t₁ + b̅₄t₀ + b̅₅t_{−1} + b̅₆)] · (t₁ − t₀),
t_{m+2} = t_{m+1} + [a₁(t_{m+1} − t_m) + a₂(t_m − t_{m−1}) + a₃t_{m+1} + a₄t_m + a₅t_{m−1} + a₆] / [1 − (b₁(t_{m+1} − t_m) + b₂(t_m − t_{m−1}) + b₃t_{m+1} + b₄t_m + b₅t_{m−1} + b₆)] · (t_{m+1} − t_m),
for each m = 1, 2, …, where α, β, γ, a̅ᵢ, b̅ᵢ, aᵢ, bᵢ, i = 1, 2, …, 6 are nonnegative parameters. We shall show that all majorizing sequences used to study the preceding methods are specializations of {t_m} given by (10).
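A hedged numeric sketch of sequence (10) follows (function and parameter names are ours, not the paper's; the 6-tuples mirror a̅ᵢ, b̅ᵢ, aᵢ, bᵢ). The Newton specialization used below (a̅₁ = ℓ₀/2, b̅₃ = ℓ₀, a₁ = ℓ/2, b₃ = ℓ₀) reproduces the classical Kantorovich limit when ℓ₀ = ℓ:

```python
# Illustrative sketch of the general majorizing sequence (10).

def sequence_10(alpha, beta, eta, abar, bbar, a, b, n):
    """Return [t_{-1}, t_0, t_1, ..., t_{n+1}] following (10).
    abar, bbar, a, b are 6-tuples of nonnegative parameters."""
    t = [alpha, beta, beta + eta]                    # t_{-1}, t_0, t_1
    for step in range(n):
        p, q, r = t[-3], t[-2], t[-1]                # t_{m-1}, t_m, t_{m+1}
        A, B = (abar, bbar) if step == 0 else (a, b) # barred set only for t_2
        num = A[0]*(r-q) + A[1]*(q-p) + A[2]*r + A[3]*q + A[4]*p + A[5]
        den = 1.0 - (B[0]*(r-q) + B[1]*(q-p) + B[2]*r + B[3]*q + B[4]*p + B[5])
        t.append(r + (num / den) * (r - q))
    return t

# Newton specialization with l0 = l = 1, eta = 0.4 (so the Kantorovich
# criterion l*eta <= 1/2 holds); the limit should be t* = 1 - sqrt(0.2).
l0 = l = 1.0
eta = 0.4
abar = (l0/2.0, 0, 0, 0, 0, 0)
bbar = (0, 0, l0, 0, 0, 0)
a = (l/2.0, 0, 0, 0, 0, 0)
b = (0, 0, l0, 0, 0, 0)
t = sequence_10(0.0, 0.0, eta, abar, bbar, a, b, 25)
```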
Similarly, in the case of local convergence we show all preceding methods can be studied using the estimate as follows:
e_{m+1} ≤ λ_m e_m,
where c₁, c₂, c₃, d₁, d₂, d₃ are nonnegative parameters, e_m = ‖x_m − x*‖ and λ_m = (c₁e_m + c₂e_{m−1} + c₃)/(1 − (d₁e_m + d₂e_{m−1} + d₃)).
We suppose from now on that {t_m} is a majorizing sequence for {x_n}. Recall that an increasing real sequence {t_m} is majorizing for a sequence {x_m} in a Banach space T₁ if ‖x_{m+1} − x_m‖ ≤ t_{m+1} − t_m for each m = 0, 1, 2, … [11]. Additional conditions are needed to show that F(ρ) = 0, where ρ := lim_{m→∞} x_m.
The paper also contains the semi-local as well as the local convergence of method (10) in Section 2. The numerical experiments can be found in Section 3. Conclusions appear in Section 4.

2. Majorizing Sequences and Convergence Analysis

In this section, we use majorizing sequence (10) to deal first with the semi-local convergence analysis for sequence { x n } .
We provide very general sufficient criteria for the convergence of sequence (10).
Theorem 1.
Suppose that for each m = 0, 1, 2, … and b̅₆, b₆ ∈ [0, 1),
b̅₁(t₁ − t₀) + b̅₂(t₀ − t_{−1}) + b̅₃t₁ + b̅₄t₀ + b̅₅t_{−1} + b̅₆ < 1,
and
b₁(t_{m+1} − t_m) + b₂(t_m − t_{m−1}) + b₃t_{m+1} + b₄t_m + b₅t_{m−1} + b₆ < 1.
Then, the sequence {t_m} developed by (10) exists, is nondecreasing, bounded from above by t** = (1 − b₆)/(b₁ + b₂ + b₃ + b₄ + b₅), and converges to its unique least upper bound, denoted by t*, which satisfies t₁ ≤ t* ≤ t**.
Proof. 
Using the definition of the sequence {t_k}, we see that 0 ≤ t_k ≤ t_{k+1} holds for each k = 0, 1, 2, …. Moreover, by condition (12), t_{k+1} < t**. So, the sequence {t_k} converges to t*. □
Remark 1.
Condition (12) can be satisfied only in some special cases. Next, we provide stronger conditions, which can easily be verified.
It is convenient for the following convergence analysis to develop real functions, parameters and sequences. Define functions on the interval [0, 1), where μ = t₂ − t₁, by the following:
f_k(t) = a₁μt^{k−1} + a₂μt^{k−2} + a₃t₁ + a₃μ(1 + t + ⋯ + t^{k−1}) + a₄t₁ + a₄μ(1 + t + ⋯ + t^{k−2}) + a₅t₁ + a₅μ(1 + t + ⋯ + t^{k−3}) + a₆ + b₁μt^k + b₂μt^{k−1} + b₃t₁t + b₃μ(1 + t + ⋯ + t^k) + b₄t₁t + b₄μ(t + t² + ⋯ + t^{k−1}) + b₅t₁t + b₅μ(t + t² + ⋯ + t^{k−2}) + b₆t − t,
h(t) = (b₁ + b₃)t³ + (a₁ + a₃ − b₁ + b₂ + b₄)t² + (a₂ − a₁ + a₄ − b₂ + b₅)t + a₅ − a₂,
g(t) = a₃t₁ + a₃μ/(1 − t) + a₄t₁ + a₄μ/(1 − t) + a₅t₁ + a₅μ/(1 − t) + a₆ + b₃t₁t + b₃μ/(1 − t) + b₄t₁t + b₄μt/(1 − t) + b₅t₁t + b₅μt/(1 − t) + b₆t − t
and sequence
δ_k = [a₁(t_{k+1} − t_k) + a₂(t_k − t_{k−1}) + a₃t_{k+1} + a₄t_k + a₅t_{k−1} + a₆] / [1 − (b₁(t_{k+1} − t_k) + b₂(t_k − t_{k−1}) + b₃t_{k+1} + b₄t_k + b₅t_{k−1} + b₆)].
Suppose that equations
h ( t ) = 0
and
g ( t ) = 0
have minimal solutions δ and λ, respectively, in the interval (0, 1), satisfying the following:
0 ≤ δ₁ ≤ δ < λ.
Notice that
f_{i+1}(t) = f_i(t) + μ h(t) t^{i−2}
and
f_{i+1}(δ) = f_i(δ).
Indeed, by the definition of sequence { f i } and function h, we obtain in turn by adding and subtracting f i ( t ) (in the definition of f i + 1 ( t ) ) the following:
f_{i+1}(t) = f_i(t) + a₁μtⁱ − a₁μt^{i−1} + a₂μt^{i−1} − a₂μt^{i−2} + a₃μtⁱ + a₄μt^{i−1} + a₅μt^{i−2} + b₁μt^{i+1} − b₁μtⁱ + b₂μtⁱ − b₂μt^{i−1} + b₃μt^{i+1} + b₄μtⁱ + b₅μt^{i−1} = f_i(t) + h(t)μt^{i−2}.
In particular, by the definition of δ and (20) we obtain (21) since h ( δ ) = 0 .
Remark 2.
Functions h and g appear in the proof of Theorem 1. The former is related to two consecutive functions f_i and f_{i+1} (see (20)). Then, (21) is true if (17) holds for t = δ. The latter relates to the limit of these recurrent functions f_i and is independent of i. This function g then should satisfy (27), and that happens if (18) holds. The condition 0 ≤ δ₁ ≤ δ (see (19)) is needed to show that (22) holds for i = 1, which will imply the following:
t₃ − t₂ ≤ δ(t₂ − t₁)
and the induction for 0 ≤ δ_i ≤ δ can begin. The condition δ < λ is needed to show (29).
Next, we show the convergence of sequence { t n } under conditions (17)–(19).
Theorem 2.
Under conditions (17)–(19), the conclusions of Theorem 1 hold for the sequence {t_k}, but t** is replaced by s = t₁ + μ/(1 − δ).
Proof. 
We shall show by induction that
0 ≤ δ_i ≤ δ.
Item (22) holds for i = 1 by (19). Then, the definition of sequence { t i } and (22) give
0 < t₃ − t₂ ≤ δ(t₂ − t₁) ⟹ t₃ ≤ t₂ + δ(t₂ − t₁) = t₁ + (1 + δ)(t₂ − t₁) = t₁ + ((1 − δ²)/(1 − δ))(t₂ − t₁) < s.
Suppose (22) holds. Then, we have the following:
0 < t_{i+2} − t_{i+1} ≤ δⁱ(t₂ − t₁)
and
t_{i+2} ≤ t₁ + ((1 − δ^{i+1})/(1 − δ))(t₂ − t₁) < s.
Item (22) holds, if
δ_{i+1} ≤ δ
(since δ_{i+1} ≥ 0). Evidently, item (25) holds, by (23) and (24), if we have the following:
a₁μδ^{i−1} + a₂μδ^{i−2} + a₃(t₁ + (1 − δⁱ)μ/(1 − δ)) + a₄(t₁ + (1 − δ^{i−1})μ/(1 − δ)) + a₅(t₁ + (1 − δ^{i−2})μ/(1 − δ)) + a₆ + b₁μδⁱ + b₂μδ^{i−1} + b₃δ(t₁ + (1 − δⁱ)μ/(1 − δ)) + b₄δ(t₁ + (1 − δ^{i−1})μ/(1 − δ)) + b₅δ(t₁ + (1 − δ^{i−2})μ/(1 − δ)) + b₆δ − δ ≤ 0
or
f_{i+1}(δ) ≤ 0
or
f_i(δ) ≤ 0
by the definition of f i , (20) and (21). In view of (21), one obtains the following:
g(t) = f(t) := lim_{i→∞} f_i(t).
So, we can show instead of (27) that the following holds:
g(δ) ≤ 0,
which is true by the definition of λ and (19). Hence, the induction for (22) is completed. Then, items (23) and (24) hold. Consequently, the sequence {t_k} converges to t*. □
Remark 3.
The conditions of Theorem 2 imply condition (12) of Theorem 1 but not necessarily vice versa.
Next, we specialize a ¯ i , b ¯ i , a i , b i in some interesting cases, justifying the already stated advantages.
Case 1: Newton’s method. 
Let us abbreviate what is known. Suppose the following conditions (C) hold:
‖F′(x₀)^{−1}F(x₀)‖ ≤ η,
‖F′(x₀)^{−1}(F′(v₂) − F′(v₁))‖ ≤ ℓ₁‖v₂ − v₁‖ for each v₁, v₂ ∈ Ω,
H₁ = ℓ₁η ≤ 1/2
and
U[x₀, u*] ⊆ Ω,
where u* = (1 − √(1 − 2ℓ₁η))/ℓ₁.
Next, we present the celebrated Newton–Kantorovich theorem (NKT) [10].
Theorem 3.
Suppose the conditions (C) hold. Then, Newton's method converges to a unique solution x* of equation F(x) = 0 in U(x₀, u*) ∩ Ω, and
‖x_{k+1} − x_k‖ ≤ ℓ₁‖x_k − x_{k−1}‖²/(2(1 − ℓ₁‖x_k − x₀‖)) ≤ ℓ₁(u_k − u_{k−1})²/(2(1 − ℓ₁u_k)) = u_{k+1} − u_k
and
‖x_k − x*‖ ≤ u* − u_k,
where u₀ = 0, u₁ = η and
u_{k+1} = u_k + ℓ₁(u_k − u_{k−1})²/(2(1 − ℓ₁u_k)) for each k = 1, 2, ….
Let us see what we obtain under our conditions. Suppose the following conditions (A) hold:
‖F′(x₀)^{−1}F(x₀)‖ ≤ η,
‖F′(x₀)^{−1}(F′(v) − F′(x₀))‖ ≤ ℓ₀‖v − x₀‖ for each v ∈ Ω.
Set U = Ω ∩ U(x₀, 1/ℓ₀).
‖F′(x₀)^{−1}(F′(v₂) − F′(v₁))‖ ≤ ℓ‖v₂ − v₁‖ for each v₁, v₂ ∈ U,
H = ℓ̄η ≤ 1/2
and
U[x₀, t*] ⊆ Ω,
where ℓ̄ = (1/8)(4ℓ₀ + √(ℓ₀ℓ + 8ℓ₀²) + √(ℓ₀ℓ)).
Remark 4.
Notice that ℓ = ℓ(Ω, ℓ₀) but ℓ₁ = ℓ₁(Ω). Hence, U is used to define ℓ. It is important to see that in practice the computation of the Lipschitz constant ℓ₁ requires that of the center Lipschitz constant ℓ₀ and that of the restricted Lipschitz constant ℓ as special cases. Hence, the conditions involving ℓ₀ and ℓ are not additional to the one involving ℓ₁. Moreover, they are also weaker. This is also verified in the numerical section. In other words, the condition involving ℓ₁ implies the other two, but not necessarily vice versa.
Next, we present our extended version of the Newton–Kantorovich Theorem 3.
Theorem 4.
Suppose the conditions (A) hold. Then, Newton's method converges to a unique solution x* of equation F(x) = 0 in U(x₀, 1/ℓ₀) ∩ Ω, and the following:
‖x₁ − x₀‖ ≤ t₁ − t₀,
‖x₂ − x₁‖ ≤ ℓ₀‖x₁ − x₀‖²/(2(1 − ℓ₀‖x₁ − x₀‖)) ≤ ℓ₀(t₁ − t₀)²/(2(1 − ℓ₀t₁)) = t₂ − t₁,
‖x_{k+2} − x_{k+1}‖ ≤ ℓ‖x_{k+1} − x_k‖²/(2(1 − ℓ₀‖x_{k+1} − x₀‖)) ≤ ℓ(t_{k+1} − t_k)²/(2(1 − ℓ₀t_{k+1}))
for each k = 1 , 2 , .
Proof. 
Simply choose t_{−1} = 0, t₀ = 0, a̅₁ = ℓ₀/2, a̅₂ = ⋯ = a̅₆ = 0, b̅₃ = ℓ₀, b̅₁ = b̅₂ = b̅₄ = b̅₅ = b̅₆ = 0, a₁ = ℓ/2, a₂ = ⋯ = a₆ = 0, b₃ = ℓ₀ and b₁ = b₂ = b₄ = b₅ = b₆ = 0. Then, (19) reduces to (31). In particular, we use the following estimates:
‖F′(x₀)^{−1}(F′(x_{i+1}) − F′(x₀))‖ ≤ ℓ₀‖x_{i+1} − x₀‖ ≤ ℓ₀(t_{i+1} − t₀) = ℓ₀t_{i+1} < 1,
so F′(x_{i+1})^{−1} ∈ L(T₂, T₁) by the Banach perturbation lemma on invertible linear operators [10], and the following:
‖F′(x_{i+1})^{−1}F′(x₀)‖ ≤ 1/(1 − ℓ₀‖x_{i+1} − x₀‖).
Then, since
F(x_{i+1}) = F(x_{i+1}) − F(x_i) − F′(x_i)(x_{i+1} − x_i) = ∫₀¹ (F′(x_i + τ(x_{i+1} − x_i)) − F′(x_i)) dτ (x_{i+1} − x_i),
we obtain the following:
‖x_{i+2} − x_{i+1}‖ ≤ ℓ̃‖x_{i+1} − x_i‖²/(2(1 − ℓ₀‖x_{i+1} − x₀‖)) ≤ t_{i+2} − t_{i+1},
where we also used the following:
‖x_{i+1} − x₀‖ ≤ Σ_{m=1}^{i+1} ‖x_m − x_{m−1}‖ ≤ Σ_{m=1}^{i+1} (t_m − t_{m−1}) = t_{i+1} − t₀ = t_{i+1} ≤ t*,
and
‖x_i + τ(x_{i+1} − x_i) − x₀‖ ≤ t_i + τ(t_{i+1} − t_i) ≤ t*
for each τ [ 0 , 1 ] . So, the sequence { t k } is majorizing for { x k } . Then, sequence { x k } is fundamental in T 1 , which is a Banach space, so lim k x k = x * U [ x 0 , t * ] , which solves Equation (1), since
‖F′(x₀)^{−1}F(x_{i+1})‖ ≤ (ℓ̃/2)‖x_{i+1} − x_i‖² ≤ (ℓ̃/2)(t_{i+1} − t_i)² → 0
as i → ∞. Then, we conclude that F(x*) = 0, since F is a continuous operator, where ℓ̃ = ℓ₀ if i = 0 and ℓ̃ = ℓ if i = 1, 2, …. Let x** ∈ U(x₀, 1/ℓ₀) ∩ Ω with F(x**) = 0. Set M = ∫₀¹ F′(x* + τ(x** − x*)) dτ. Using the center Lipschitz condition, we have the following:
‖F′(x₀)^{−1}(M − F′(x₀))‖ ≤ ℓ₀ ∫₀¹ [(1 − τ)‖x* − x₀‖ + τ‖x** − x₀‖] dτ < ℓ₀ · (1/ℓ₀) = 1,
so x** = x* follows, since M^{−1} exists and M(x** − x*) = F(x**) − F(x*) = 0. □
Remark 5.
(a) 
We have, by the definition of U,
U ⊆ Ω,
so
ℓ ≤ ℓ₁, ℓ₀ ≤ ℓ₁
and
ℓ̄ ≤ ℓ₁.
Hence, we have
H₁ ≤ 1/2 ⟹ H ≤ 1/2,
t_{k+1} − t_k ≤ u_{k+1} − u_k
and
t* ≤ u*.
Estimates (35)–(37) justify the benefits as stated previously. In the numerical section, we provide examples where (32)–(34) are strict, and (31) holds but not (30).
(b) 
The proof of Theorem 3 used the less precise estimate as follows:
‖F′(x_{i+1})^{−1}F′(x₀)‖ ≤ 1/(1 − ℓ₁‖x_{i+1} − x₀‖).
Our modification leads to (31) instead of (30). Moreover, in [15] we showed Theorem 4 but using the following:
H₂ = ℓ₂η ≤ 1/2,
where
ℓ₂ = (1/8)(4ℓ₀ + √(ℓ₀ℓ₁ + 8ℓ₀²) + √(ℓ₀ℓ₁)) ≥ ℓ̄,
so
H₂ ≤ 1/2 ⟹ H ≤ 1/2.
Hence, our results extend the ones in [15] too.
(c) 
Let us see how parameters δ 1 , δ , λ and functions h , g look like in the case of Newton’s method. We obtain by (17)–(19) the following:
δ₁ = ℓ(t₂ − t₁)/(2(1 − ℓ₀t₂)), δ = 2ℓ/(ℓ + √(ℓ² + 8ℓ₀ℓ)), and λ is the minimal solution of g(t) = 0 in (0, 1),
h(t) = 2ℓ₀t² + ℓt − ℓ,
and
g(t) = ℓ₀μ/(1 − t) + (ℓ₀η − 1)t.
Notice that δ , λ solve Equations (17) and (18), respectively. Then, if we solve inequality (19), we obtain (31).
Comments similar to the ones given in the previous five remarks can be made for the methods that follow in this Section.
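Under the stated assumptions (ℓ₀ ≤ ℓ ≤ ℓ₁; the values below are illustrative, and the variable names l0, l, l1 stand for ℓ₀, ℓ, ℓ₁), a short hedged sketch comparing the old and new Newton majorizing sequences illustrates (36) and (37):

```python
# Old Kantorovich sequence u_k (built from l1 alone) versus the new
# sequence t_k (built from l0 and l), for Newton's method.

def kantorovich_u(l1, eta, n):
    u = [0.0, eta]
    for _ in range(n):
        u.append(u[-1] + l1 * (u[-1] - u[-2])**2 / (2.0 * (1.0 - l1 * u[-1])))
    return u

def new_t(l0, l, eta, n):
    t = [0.0, eta]
    for k in range(n):
        lk = l0 if k == 0 else l   # the constant l_tilde from the text
        t.append(t[-1] + lk * (t[-1] - t[-2])**2 / (2.0 * (1.0 - l0 * t[-1])))
    return t

l0, l, l1, eta = 1.0, 1.5, 2.0, 0.2   # illustrative, with l0 <= l <= l1
u = kantorovich_u(l1, eta, 15)
t = new_t(l0, l, eta, 15)
```

The new increments are never larger than the old ones, so the new sequence (and its limit t*) stays below the old one (and u*).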
Case 2: Secant method [14] 
Choose t_{−1} = 0, t₀ = β, t₁ = β + η, a̅₁ = a̅₂, b̅₁ = b̅₂, a̅₃ = a̅₄ = a̅₅ = a̅₆ = b₃ = b₄ = b₅ = b₆ = 0, a₁ = a₂ and b₁ = b₂.
The nonzero parameters are again connected to the following:
‖x₀ − x_{−1}‖ ≤ β, ‖[x₀, x_{−1}; F]^{−1}F(x₀)‖ ≤ η,
‖[x₀, x_{−1}; F]^{−1}([v₁, v₂; F] − [x₀, x_{−1}; F])‖ ≤ (ℓ₀/2)(‖v₁ − x₀‖ + ‖v₂ − x_{−1}‖)
for each v₁, v₂ ∈ Ω,
‖[x₀, x_{−1}; F]^{−1}([v₁, v₂; F] − [z, w; F])‖ ≤ (ℓ/2)(‖v₁ − z‖ + ‖v₂ − w‖)
for each v₁, v₂, z, w ∈ V, provided that
[v₁, v₂; F] = ∫₀¹ F′(v₂ + τ(v₁ − v₂)) dτ.
The standard condition used in connection to the secant method [14] is the following:
‖[x₀, x_{−1}; F]^{−1}([v₁, v₂; F] − [z, w; F])‖ ≤ (ℓ₁/2)(‖v₁ − z‖ + ‖v₂ − w‖)
for each v 1 , v 2 , z , w Ω . Then, we have again the following:
ℓ ≤ ℓ₁
and
ℓ₀ ≤ ℓ₁.
The old majorizing sequence { u n } [14] is defined by the following:
u_{−1} = 0, u₀ = β, u₁ = β + η,
u_{k+2} = u_{k+1} + ℓ₁(u_{k+1} − u_{k−1})(u_{k+1} − u_k) / (2(1 − (ℓ₁/2)(u_{k+1} + u_k + β)))
with the following estimates:
‖x_{k+2} − x_{k+1}‖ ≤ ℓ₁‖x_{k+1} − x_{k−1}‖ ‖x_{k+1} − x_k‖ / (2(1 − (ℓ₁/2)(‖x_{k+1} − x₀‖ + ‖x_k − x₀‖ + β))) ≤ u_{k+2} − u_{k+1}.
However, ours is as follows:
t_{−1} = 0, t₀ = β, t₁ = β + η,
t_{k+2} = t_{k+1} + ℓ̃(t_{k+1} − t_{k−1})(t_{k+1} − t_k) / (2(1 − (ℓ₀/2)(t_{k+1} + t_k + β)))
with corresponding estimates
‖x_{k+2} − x_{k+1}‖ ≤ ℓ̃‖x_{k+1} − x_{k−1}‖ ‖x_{k+1} − x_k‖ / (2(1 − (ℓ₀/2)(‖x_{k+1} − x₀‖ + ‖x_k − x₀‖ + β))) ≤ t_{k+2} − t_{k+1},
which are tighter, where
ℓ̃ = ℓ₀ if k = 0 and ℓ̃ = ℓ for k = 1, 2, ….
The old sufficient convergence criterion [14] is ℓ₁β + 2√(ℓ₁η) ≤ 1, but the new one is (for ℓ₀ = ℓ) ℓβ + 2√(ℓη) ≤ 1, which is weaker. Hence, we obtain the semi-local convergence of the secant method.
Theorem 5.
Under the preceding conditions, the secant method generates a sequence {x_n} ⊂ U[x₀, t*] and lim_{k→∞} x_k = x* ∈ U[x₀, t*] with F(x*) = 0.
Proof. 
As in Theorem 4, we obtain the following:
‖[x_{i+1}, x_i; F]^{−1}[x₀, x_{−1}; F]‖ ≤ 1/(1 − (ℓ₀/2)(‖x_{i+1} − x₀‖ + ‖x_i − x_{−1}‖))
and
‖x_{i+2} − x_{i+1}‖ ≤ ‖[x_{i+1}, x_i; F]^{−1}[x₀, x_{−1}; F]‖ · ‖[x₀, x_{−1}; F]^{−1}([x_{i+1}, x_i; F] − [x_i, x_{i−1}; F])‖ ‖x_{i+1} − x_i‖ ≤ ℓ̃‖x_{i+1} − x_{i−1}‖ ‖x_{i+1} − x_i‖ / (2(1 − (ℓ₀/2)(‖x_{i+1} − x₀‖ + ‖x_i − x₀‖ + β))) ≤ t_{i+2} − t_{i+1}
(see also [14]). □
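The two secant majorizing sequences can also be compared numerically; a hedged sketch follows (parameter values are illustrative, and l0, l, l1 stand for ℓ₀, ℓ, ℓ₁ with ℓ₀ ≤ ℓ ≤ ℓ₁):

```python
# Old secant majorizing sequence (l1 everywhere) versus the new one
# (l0 in the denominator, l_tilde in the numerator).

def old_secant_u(l1, beta, eta, n):
    u = [0.0, beta, beta + eta]           # u_{-1}, u_0, u_1
    for _ in range(n):
        num = l1 * (u[-1] - u[-3]) * (u[-1] - u[-2])
        den = 2.0 * (1.0 - (l1/2.0) * (u[-1] + u[-2] + beta))
        u.append(u[-1] + num/den)
    return u

def new_secant_t(l0, l, beta, eta, n):
    t = [0.0, beta, beta + eta]           # t_{-1}, t_0, t_1
    for k in range(n):
        lk = l0 if k == 0 else l          # l_tilde from the text
        num = lk * (t[-1] - t[-3]) * (t[-1] - t[-2])
        den = 2.0 * (1.0 - (l0/2.0) * (t[-1] + t[-2] + beta))
        t.append(t[-1] + num/den)
    return t

l0, l, l1, beta, eta = 1.0, 1.2, 1.5, 0.1, 0.1   # illustrative values
u = old_secant_u(l1, beta, eta, 25)
t = new_secant_t(l0, l, beta, eta, 25)
```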
Case 3: Newton-type method [8,16]
Choose: t_{−1} = 0, t₀ = 0, t₁ = η, a̅₁ = ℓ₀/2, a̅₂ = 0, a̅₃ = ℓ₅, a̅₄ = a̅₅ = 0, a̅₆ = ℓ₆, a₁ = ℓ₄/2, a₂ = 0, a₃ = ℓ₅, a₄ = a₅ = 0, a₆ = ℓ₆, b̅₁ = b̅₂ = 0, b̅₃ = ℓ₂, b̅₄ = b̅₅ = 0, b̅₆ = ℓ₃, b₁ = b₂ = 0, b₃ = ℓ₂, b₄ = b₅ = 0 and b₆ = ℓ₃.
The parameters are connected to the following:
‖A(x₀)^{−1}F(x₀)‖ ≤ η,
‖A(x₀)^{−1}(F′(v) − F′(x₀))‖ ≤ ℓ₀‖v − x₀‖ for each v ∈ Ω
and
‖A(x₀)^{−1}(A(v) − A(x₀))‖ ≤ ℓ₂‖v − x₀‖ + ℓ₃.
Set V₁ = Ω ∩ U[x₀, (1 − ℓ₃)/ℓ₂], ℓ₂ ≠ 0, ℓ₃ ∈ [0, 1).
‖A(x₀)^{−1}(F′(v₂) − F′(v₁))‖ ≤ ℓ₄‖v₂ − v₁‖ for each v₁, v₂ ∈ V₁
and
‖A(x₀)^{−1}(F′(v) − A(v))‖ ≤ ℓ₅‖v − x₀‖ + ℓ₆ for each v ∈ V₁.
The conditions in [8,16] use the following:
‖A(x₀)^{−1}(F′(v₂) − F′(v₁))‖ ≤ ℓ₇‖v₂ − v₁‖ for each v₁, v₂ ∈ Ω
and
‖A(x₀)^{−1}(F′(v) − A(v))‖ ≤ ℓ₈‖v − x₀‖ + ℓ₉ for each v ∈ Ω.
We have the following:
V₁ ⊆ Ω,
so
ℓ₄ ≤ ℓ₇,
ℓ₅ ≤ ℓ₈
and
ℓ₆ ≤ ℓ₉.
The old majorizing sequence {u_n} [8,16] is defined, for u_{−1} = 0, u₀ = 0, u₁ = η and σ₁ = max{ℓ₇, ℓ₈ + ℓ₂}, by
u_{i+1} = u_i + [(σ₁/2)(u_i − u_{i−1}) + ℓ₈u_{i−1} + ℓ₉](u_i − u_{i−1}) / (1 − (ℓ₂u_i + ℓ₃))
with the following estimates:
‖x_{i+1} − x_i‖ ≤ ‖A(x_i)^{−1}A(x₀)‖ (∫₀¹ ‖A(x₀)^{−1}(F′(x_{i−1} + τ(x_i − x_{i−1})) − F′(x_{i−1}))‖ dτ + ‖A(x₀)^{−1}(F′(x_{i−1}) − A(x_{i−1}))‖) ‖x_i − x_{i−1}‖ ≤ [(ℓ₇/2)‖x_i − x_{i−1}‖ + ℓ₈‖x_{i−1} − x₀‖ + ℓ₉]‖x_i − x_{i−1}‖ / (1 − (ℓ₂‖x_i − x₀‖ + ℓ₃)) ≤ [(σ₁/2)(u_i − u_{i−1}) + ℓ₈u_{i−1} + ℓ₉](u_i − u_{i−1}) / (1 − (ℓ₂u_i + ℓ₃)) = u_{i+1} − u_i.
However, ours is, for t_{−1} = 0, t₀ = 0, t₁ = η and σ = max{ℓ₄, ℓ₅ + ℓ₂},
t_{i+1} = t_i + [(σ/2)(t_i − t_{i−1}) + ℓ₅t_{i−1} + ℓ₆](t_i − t_{i−1}) / (1 − (ℓ₂t_i + ℓ₃))
with the following estimates:
‖x_{i+1} − x_i‖ ≤ [(ℓ₄/2)‖x_i − x_{i−1}‖ + ℓ₅‖x_{i−1} − x₀‖ + ℓ₆]‖x_i − x_{i−1}‖ / (1 − (ℓ₂‖x_i − x₀‖ + ℓ₃)) ≤ [(σ/2)(t_i − t_{i−1}) + ℓ₅t_{i−1} + ℓ₆](t_i − t_{i−1}) / (1 − (ℓ₂t_i + ℓ₃)) = t_{i+1} − t_i.
The old sufficient convergence criterion [8,16] is the following:
C₁: σ₁η ≤ (1/2)(1 − (ℓ₃ + ℓ₉))², ℓ₃ + ℓ₉ < 1.
The new one is the following:
C: ση ≤ (1/2)(1 − (ℓ₃ + ℓ₆))², ℓ₃ + ℓ₆ < 1.
However, σ ≤ σ₁, so again condition C is weaker than C₁.
Hence, we obtain the semilocal convergence of the Newton-type method.
Theorem 6.
Under the preceding conditions, the Newton-type method generates a sequence {x_n} ⊂ U[x₀, t*] and lim_{k→∞} x_k = x* ∈ U[x₀, t*] with F(x*) = 0.
Proof. 
It follows from the aforementioned estimates (see also [8,16]). Hence, again the results are extended.
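A hedged check, with purely illustrative constants (l2 … l9 stand for ℓ₂ … ℓ₉), that the new criterion C can hold while the old criterion C₁ fails, since σ ≤ σ₁:

```python
# Old criterion C1 uses sigma1 = max{l7, l8 + l2}; the new criterion C uses
# sigma = max{l4, l5 + l2} with the smaller restricted constants l4, l5, l6.

def holds_C1(l7, l8, l9, l2, l3, eta):
    sigma1 = max(l7, l8 + l2)
    return l3 + l9 < 1.0 and sigma1 * eta <= (1.0 - (l3 + l9))**2 / 2.0

def holds_C(l4, l5, l6, l2, l3, eta):
    sigma = max(l4, l5 + l2)
    return l3 + l6 < 1.0 and sigma * eta <= (1.0 - (l3 + l6))**2 / 2.0

# Sample constants (l4 <= l7, l5 <= l8, l6 <= l9, as implied by V1 within Omega)
# for which the old criterion fails but the new one holds.
l2, l3, eta = 0.2, 0.1, 0.15
old_ok = holds_C1(l7=2.0, l8=0.3, l9=0.3, l2=l2, l3=l3, eta=eta)
new_ok = holds_C(l4=1.0, l5=0.1, l6=0.1, l2=l2, l3=l3, eta=eta)
```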
Similar benefits are derived in the local convergence case.
Suppose the conditions (B) hold:
x* ∈ Ω is a simple solution of the equation F(x) = 0,
‖F′(x*)^{−1}(F′(v₂) − F′(v₁))‖ ≤ L₁‖v₂ − v₁‖ for each v₁, v₂ ∈ Ω,
and
U[x*, r] ⊆ Ω,
where r = 2/(3L₁). Then, we have the following local convergence result, arrived at independently by Rheinboldt [17] and Traub [18].
Theorem 7.
Suppose that the conditions (B) hold. Then, Newton’s method converges to x * so that the following holds:
‖x_{k+1} − x*‖ ≤ L₁‖x_k − x*‖²/(2(1 − L₁‖x_k − x*‖))
for each k = 0, 1, 2, …, provided that x₀ ∈ U(x*, r).
In our case, we consider the conditions (D):
x* ∈ Ω is a simple solution of the equation F(x) = 0.
‖F′(x*)^{−1}(F′(v) − F′(x*))‖ ≤ L₀‖v − x*‖ for each v ∈ Ω.
Set V = Ω ∩ U(x*, 1/L₀).
‖F′(x*)^{−1}(F′(v₂) − F′(v₁))‖ ≤ L‖v₂ − v₁‖ for each v₁, v₂ ∈ V.
U(x*, R) ⊆ Ω, where R = 2/(2L₀ + L).
Theorem 8.
Suppose that the conditions (D) hold. Then, Newton’s method converges to x * so the following holds:
‖x_{k+1} − x*‖ ≤ L̃‖x_k − x*‖²/(2(1 − L₀‖x_k − x*‖))
for each k = 0, 1, 2, …, provided that x₀ ∈ U(x*, R), where L̃ = L₀ if k = 0 and L̃ = L for k = 1, 2, ….
Proof. 
Choose c₁ = L/2, d₁ = L₀ and c₂ = c₃ = d₂ = d₃ = 0 in (11). Then, we obtain the following:
‖x_{i+1} − x*‖ = ‖x_i − x* − F′(x_i)^{−1}F(x_i)‖ ≤ ‖F′(x_i)^{−1}F′(x*)‖ ‖∫₀¹ F′(x*)^{−1}(F′(x* + τ(x_i − x*)) − F′(x_i)) dτ (x_i − x*)‖ ≤ L̃‖x_i − x*‖²/(2(1 − L₀‖x_i − x*‖)). □
Remark 6.
We have again the following:
V ⊆ Ω,
so
L₀ ≤ L₁,
L ≤ L₁,
L̃ ≤ L₁,
r ≤ R,
λ_k ≤ λ_k¹,
where λ_k = L‖x_k − x*‖/(2(1 − L₀‖x_k − x*‖)) and λ_k¹ = L₁‖x_k − x*‖/(2(1 − L₁‖x_k − x*‖)) (see also the numerical section).
The same benefits can be obtained for the other single-step methods. Moreover, our idea can similarly be extended to multi-step and multi-point methods [4,5,13,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37].

3. Numerical Experiments

We conduct some experiments showing that the old convergence criteria are not verified, but ours are. Hence, there is no assurance that the methods converge under the old conditions. However, under our approach, convergence can be established.
Example 1.
Define the function f as follows:
f(x) = θ₀x + θ₁ + θ₂ sin(θ₃x), x₀ = 0,
where θⱼ, j = 0, 1, 2, 3 are parameters. Then, clearly, for θ₃ large and θ₂ small, ℓ₀/ℓ₁ can be made arbitrarily small. Notice that as ℓ₀/ℓ₁ → 0, H/H₁ → 0 too. So, the utilization of Newton's method is extended numerous (infinitely many) times under the data (Ω, F, x₀, ℓ₀, ℓ, η).
Example 2.
Let T₁ = T₂ = ℝ, x₀ = 1 and Ω = U[1, 1 − q] for q ∈ (0, 1/2). Define the function f on Ω as follows:
f(s) = s³ − q.
We consider Case 1 of Newton's method. Then, we obtain ℓ₀ = 3 − q, ℓ = ℓ₁ = 2(2 − q) and η = (1 − q)/3. However, then, H₁ > 1/2 for all q ∈ (0, 1/2). So, the Newton–Kantorovich theorem cannot assure convergence. However, we have H ≤ 1/2 for all q ∈ I = [0.4271907643, 1/2). Hence, our result guarantees convergence to x* = ∛q as long as q ∈ I.
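Assuming the reconstructed constant ℓ̄ = (1/8)(4ℓ₀ + √(ℓ₀ℓ + 8ℓ₀²) + √(ℓ₀ℓ)), Example 2 can be checked numerically (q = 0.45 lies in I, while q = 0.42 does not):

```python
from math import sqrt

# Criteria H1 (Kantorovich) and H (new) for Example 2: l0 = 3-q, l = l1 = 2(2-q),
# eta = (1-q)/3; variable names l0, l, lbar stand for the constants in the text.

def H1(q):
    l1 = 2.0 * (2.0 - q)
    eta = (1.0 - q) / 3.0
    return l1 * eta

def H(q):
    l0 = 3.0 - q
    l = 2.0 * (2.0 - q)
    eta = (1.0 - q) / 3.0
    lbar = (4.0*l0 + sqrt(l0*l + 8.0*l0*l0) + sqrt(l0*l)) / 8.0
    return lbar * eta
```

At the endpoint q = 0.4271907643 the new criterion holds with near equality, which is exactly how the interval I is determined.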
Example 3.
Let T₁ = T₂ = S([0, 1]), the space of functions defined on [0, 1] which are continuous, equipped with the max-norm. Choose Ω = U(0, d), d > 1. Define F on Ω by the following:
F(x)(s) = x(s) − w(s) − ξ ∫₀¹ K(s, t) x³(t) dt,
x ∈ T₁, s ∈ [0, 1], where w ∈ T₁, ξ is a real number and K is the Green's kernel given by the following:
K(s₂, s₁) = (1 − s₂)s₁ if s₁ ≤ s₂, and K(s₂, s₁) = s₂(1 − s₁) if s₂ < s₁.
By (38), we have the following:
(F′(x)(z))(s) = z(s) − 3ξ ∫₀¹ K(s, t) x²(t) z(t) dt,
z ∈ T₁, s ∈ [0, 1]. Consider x₀(s) = w(s) = 1 and |ξ| < 8/3. We obtain the following:
‖I − F′(x₀)‖ < (3/8)|ξ|, F′(x₀)^{−1} ∈ L(T₂, T₁),
‖F′(x₀)^{−1}‖ ≤ 8/(8 − 3|ξ|), η = |ξ|/(8 − 3|ξ|), ℓ₀ = 12|ξ|/(8 − 3|ξ|),
ℓ₁ = ℓ = 6d|ξ|/(8 − 3|ξ|) and H₁ = 6d|ξ|²/(8 − 3|ξ|)². Let ξ* stand for the positive solution of the equation 3(4d − 3)t² + 48t − 64 = 0. Then, if ξ > ξ*, we have H₁ > 1/2. Hence, the Newton–Kantorovich criterion (30) is not satisfied. In particular, Table 1 shows that our criterion (31) is satisfied, but not (30).
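Using the constants above (with ξ = 1 and ℓ̄ as reconstructed), the first row of Table 1, d = 2.09899, can be reproduced:

```python
from math import sqrt

# 2*H1 and 2*H for Example 3 with xi = 1; criterion (30) asks 2*H1 <= 1 and
# criterion (31) asks 2*H <= 1.
def criteria(d, xi):
    eta = xi / (8.0 - 3.0*xi)
    l0 = 12.0*xi / (8.0 - 3.0*xi)
    l = 6.0*d*xi / (8.0 - 3.0*xi)       # here l1 = l
    two_H1 = 2.0 * 6.0*d*xi*xi / (8.0 - 3.0*xi)**2
    lbar = (4.0*l0 + sqrt(l0*l + 8.0*l0*l0) + sqrt(l0*l)) / 8.0
    return two_H1, 2.0 * lbar * eta

two_H1, two_H = criteria(2.09899, 1.0)
```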
Example 4.
Let T 1 , T 2 and Ω be as in the Example 3. It is well known that the boundary value problem [2]
φ ( 0 ) = 0 , φ ( 1 ) = 1 ,
φ″ = −φ³ − λφ²
can be given as a Hammerstein-like nonlinear integral equation as follows:
φ(s) = s + ∫₀¹ K(s, t)(φ³(t) + λφ²(t)) dt,
where λ is a parameter. Then, define F : Ω T 2 by the following:
[F(x)](s) = x(s) − s − ∫₀¹ K(s, t)(x³(t) + λx²(t)) dt.
Choose φ₀(s) = s and Ω = U(φ₀, r₀). Then, clearly, U(φ₀, r₀) ⊂ U(0, r₀ + 1), since ‖φ₀‖ = 1. Suppose 2λ < 5. Then, the conditions (C) are satisfied for the following:
ℓ₀ = (2λ + 3r₀ + 6)/8, ℓ₁ = ℓ = (λ + 6r₀ + 3)/4,
and η = (1 + λ)/(5 − 2λ). Notice that ℓ₀ < ℓ₁.
The rest of the examples are given for the local convergence study of Newton’s method.
Example 5.
Let T₁ = T₂ = ℝ³, Ω = U[0, 1] and x* = (0, 0, 0)ᵀ. Define the mapping E on Ω, for λ = (λ₁, λ₂, λ₃)ᵀ, as
E(λ) = (e^{λ₁} − 1, ((e − 1)/2)λ₂² + λ₂, λ₃)ᵀ.
Then, conditions (B) and (D) hold, provided that L₀ = e − 1, L = e^{1/L₀} and L₁ = e, since F′(x*)^{−1} = F′(x*) = diag{1, 1, 1}. Notice that
L₀ < L < L₁
and
r ≈ 0.24 < R ≈ 0.38.
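The two radii of Example 5 can be checked directly:

```python
from math import e, exp

# Old radius r = 2/(3 L1) versus the new radius R = 2/(2 L0 + L)
# with the constants of Example 5.
L0 = e - 1.0
L = exp(1.0 / L0)
L1 = e
r = 2.0 / (3.0 * L1)
R = 2.0 / (2.0 * L0 + L)
```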
Hence, our radius of convergence is larger.
Example 6.
Let T 1 , T 2 and Ω be as in Example 3. Define F on Ω as
F(φ₁)(x) = φ₁(x) − ∫₀¹ x j φ₁(j)³ dj.
By this definition, we obtain the following:
F′(φ₁)(ψ₁)(x) = ψ₁(x) − 3 ∫₀¹ x j φ₁(j)² ψ₁(j) dj
for all ψ₁ ∈ Ω. So, we can choose L₀ = 1.5 and L = L₁ = 3. However, then, we again obtain the following:
r = 2/9 < R = 1/3.

4. Conclusions

We have provided a single sufficient criterion for the semi-local convergence of single-step methods. Upon specializing the parameters involved, we showed that, although our majorizing sequence is more general than earlier ones, the convergence criteria are weaker (i.e., the utility of the methods is extended), the upper error estimates are tighter (i.e., at least as few iterates are required to achieve a predecided error tolerance), and the ball containing the solution is at least as precise. These benefits are obtained without additional hypotheses. According to our new technique, we locate a more accurate domain than the earlier ones containing the iterates, leading to an at least as small Lipschitz constant.
Our theoretical results are further justified using numerical experiments. In the future, we plan to extend these results by replacing the Lipschitz constants by generalized functions along the same lines [2,12,13].

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Argyros, I.K. Convergence and Applications of Newton-Type Iterations; Springer: Berlin, Germany, 2008.
  2. Ezquerro, J.A.; Hernandez, M.A. Newton's Method: An Updated Approach of Kantorovich's Theory; Springer: Cham, Switzerland, 2018.
  3. Shakhno, S.M.; Gnatyshyn, O.P. On an iterative algorithm of order 1.839... for solving nonlinear least squares problems. Appl. Math. Comput. 2005, 161, 253–264.
  4. Steffensen, J.F. Remarks on iteration. Skand. Aktuarietidskr. 1933, 16, 64–72.
  5. Cătinaş, E. The inexact, inexact perturbed, and quasi-Newton methods are equivalent models. Math. Comp. 2005, 74, 291–301.
  6. Dennis, J.E., Jr. On Newton-like methods. Numer. Math. 1968, 11, 324–330.
  7. Nashed, M.Z.; Chen, X. Convergence of Newton-like methods for singular operator equations using outer inverses. Numer. Math. 1993, 66, 235–257.
  8. Yamamoto, T. A convergence theorem for Newton-like methods in Banach spaces. Numer. Math. 1987, 51, 545–557.
  9. Argyros, I.K. Computational Theory of Iterative Methods; Studies in Computational Mathematics 15; Chui, C.K., Wuytack, L., Eds.; Elsevier: New York, NY, USA, 2007.
  10. Kantorovich, L.V.; Akilov, G.P. Functional Analysis; Pergamon Press: Oxford, UK, 1982.
  11. Ortega, J.M.; Rheinboldt, W.C. Iterative Solution of Nonlinear Equations in Several Variables; SIAM: Philadelphia, PA, USA, 2000.
  12. Argyros, I.K.; Magréñan, A.A. Iterative Methods and Their Dynamics with Applications; CRC Press: New York, NY, USA, 2017.
  13. Argyros, I.K.; Magréñan, A.A. A Contemporary Study of Iterative Methods; Elsevier Academic Press: New York, NY, USA, 2018.
  14. Potra, F.A.; Pták, V. Nondiscrete Induction and Iterative Processes; Research Notes in Mathematics 103; Pitman: Boston, MA, USA, 1984.
  15. Argyros, I.K.; Hilout, S. Weaker conditions for the convergence of Newton's method. J. Complex. 2012, 28, 364–387.
  16. Chen, X.; Yamamoto, T. Convergence domains of certain iterative methods for solving nonlinear equations. Numer. Funct. Anal. Optim. 1989, 10, 37–48.
  17. Rheinboldt, W.C. An Adaptive Continuation Process for Solving Systems of Nonlinear Equations; Banach Center Publ. 3; Polish Academy of Sciences: Warsaw, Poland, 1978; pp. 129–142.
  18. Traub, J.F. Iterative Methods for the Solution of Equations; Prentice-Hall: Englewood Cliffs, NJ, USA, 1964.
  19. Behl, R.; Maroju, P.; Martinez, E.; Singh, S. A study of the local convergence of a fifth order iterative method. Indian J. Pure Appl. Math. 2020, 51, 439–455.
  20. Deuflhard, P. Newton Methods for Nonlinear Problems: Affine Invariance and Adaptive Algorithms; Springer Series in Computational Mathematics 35; Springer: Berlin, Germany, 2004.
  21. Grau-Sánchez, M.; Grau, À.; Noguera, M. Ostrowski type methods for solving systems of nonlinear equations. Appl. Math. Comput. 2011, 218, 2377–2385.
  22. Gutiérrez, J.M.; Magreñán, Á.A.; Romero, N. On the semilocal convergence of Newton–Kantorovich method under center-Lipschitz conditions. Appl. Math. Comput. 2013, 221, 79–88.
  23. Magréñan, A.A.; Argyros, I.K.; Rainer, J.J.; Sicilia, J.A. Ball convergence of a sixth-order Newton-like method based on means under weak conditions. J. Math. Chem. 2018, 56, 2117–2131.
  24. Magréñan, A.A.; Gutiérrez, J.M. Real dynamics for damped Newton's method applied to cubic polynomials. J. Comput. Appl. Math. 2015, 275, 527–538.
  25. Soleymani, F.; Lotfi, T.; Bakhtiari, P. A multi-step class of iterative methods for nonlinear systems. Optim. Lett. 2014, 8, 1001–1015.
  26. Argyros, I.K. On the Newton–Kantorovich hypothesis for solving equations. J. Comput. Appl. Math. 2004, 169, 315–332.
  27. Argyros, I.K.; Hilout, S. On an improved convergence analysis of Newton's method. Appl. Math. Comput. 2013, 225, 372–386.
  28. Dennis, J.E., Jr.; Schnabel, R.B. Numerical Methods for Unconstrained Optimization and Nonlinear Equations; SIAM: Philadelphia, PA, USA, 1996.
  29. Deuflhard, P.; Heindl, G. Affine invariant convergence theorems for Newton's method and extensions to related methods. SIAM J. Numer. Anal. 1979, 16, 1–10.
  30. Ezquerro, J.A.; Gutiérrez, J.M.; Hernández, M.A.; Romero, N.; Rubio, M.J. The Newton method: From Newton to Kantorovich (Spanish). Gac. R. Soc. Mat. Esp. 2010, 13, 53–76.
  31. Ortega, J.M.; Rheinboldt, W.C. Iterative Solution of Nonlinear Equations in Several Variables; Academic Press: New York, NY, USA, 1970.
  32. Proinov, P.D. General local convergence theory for a class of iterative processes and its applications to Newton's method. J. Complex. 2009, 25, 38–62.
  33. Proinov, P.D. New general convergence theory for iterative processes and its applications to Newton–Kantorovich type theorems. J. Complex. 2010, 26, 3–42.
  34. Shakhno, S.M.; Iakymchuk, R.P.; Yarmola, H.P. Convergence analysis of a two step method for the nonlinear squares problem with decomposition of operator. J. Numer. Appl. Math. 2018, 128, 82–95.
  35. Sharma, J.R.; Guha, R.K.; Sharma, R. An efficient fourth order weighted-Newton method for systems of nonlinear equations. Numer. Algorithms 2013, 62, 307–323.
  36. Verma, R. New Trends in Fractional Programming; Nova Science Publishers: New York, NY, USA, 2019.
  37. Zabrejko, P.P.; Nguen, D.F. The majorant method in the theory of Newton–Kantorovich approximations and the Pták error estimates. Numer. Funct. Anal. Optim. 1987, 9, 671–684.
Table 1. Comparison table of criteria (30) and (31).

| d | ξ* | 2H₁ | 2H |
|---|----|-----|----|
| 2.09899 | 0.9976613778 | 1.007515200 | 0.9639223786 |
| 2.19897 | 0.9831766058 | 1.055505600 | 0.9678118280 |
| 2.29597 | 0.9698185659 | 1.102065600 | 0.9715205068 |
| 3.095467 | 0.8796311321 | 1.485824160 | 1.000082409 |