
Unified Convergence Criteria of Derivative-Free Iterative Methods for Solving Nonlinear Equations

1 Department of Mathematics, University of Houston, Houston, TX 77204, USA
2 Department of Computing and Mathematical Sciences, Cameron University, Lawton, OK 73505, USA
3 Department of Theory of Optimal Processes, Ivan Franko National University of Lviv, Universytetska Str. 1, 79000 Lviv, Ukraine
4 Department of Computational Mathematics, Ivan Franko National University of Lviv, Universytetska Str. 1, 79000 Lviv, Ukraine
* Authors to whom correspondence should be addressed.
Computation 2023, 11(3), 49; https://doi.org/10.3390/computation11030049
Submission received: 11 February 2023 / Revised: 23 February 2023 / Accepted: 28 February 2023 / Published: 1 March 2023

Abstract

The local and semi-local convergence of a class of derivative-free iterative methods for solving nonlinear Banach space valued operator equations is developed under classical Lipschitz conditions for first-order divided differences. Special cases of this class are well-known iterative algorithms, in particular the Secant, Kurchatov, and Steffensen methods, as well as Newton’s method. For the semi-local convergence analysis, we use a technique of recurrent functions and majorizing scalar sequences. First, the convergence of the scalar sequence is proved and its limit is determined. It is then shown that the sequence generated by the proposed method is bounded above by this scalar sequence. In the local convergence analysis, a computable radius of convergence is determined. Finally, results of numerical experiments are given that confirm the obtained theoretical estimates.

1. Introduction

One of the greatest challenges in numerical functional analysis and other computational disciplines is the task of approximating a locally unique solution x* of the nonlinear equation
F(x) = 0,
where F : Ω ⊂ X → X is a continuous operator acting on a Banach space X. The solution x* is sought in closed or analytical form, but this is possible only in special cases. That is why iterative methods are used to generate a sequence approximating x*, provided that certain conditions are imposed on the initial information.
Newton’s method (NM), defined for each n = 0, 1, 2, … by
x_{n+1} = x_n - F′(x_n)^{-1} F(x_n),
has been used extensively to generate such a sequence converging quadratically to x* [1,2].
However, there are difficulties with its implementation when the inverse of the linear operator F′(x_n) is very expensive to calculate or does not even exist.
This difficulty is handled by considering iterative methods of the form
x_{n+1} = x_n - T_n^{-1} F(x_n) for each n = 0, 1, 2, …, (3)
where T_n = [G_n, H_n; F], [·, ·; F] : Ω × Ω → L(X, X), G_n = G(x_n, x_{n-1}) = a x_n + b x_{n-1} + c F(x_n), H_n = H(x_n, x_{n-1}) = d x_n + p x_{n-1} + q F(x_n), and a, b, c, d, p, and q are real numbers.
Motivation for writing this article. Some popular methods are special cases of (3):
Newton: set a = d = 1 , b = c = p = q = 0 provided F is Fréchet-differentiable;
Secant [1,3,4]: set a = p = 1 , b = c = d = q = 0 ;
Kurchatov [5,6,7,8]: pick a = 2, b = -1, p = 1, c = d = q = 0;
Steffensen [1,9]: pick a = d = 1 , c = 1 and b = p = q = 0 .
The convergence order of these iterative methods is 2, 1.618…, 2, and 2, respectively [1,2,7,9]. However, the convergence criteria differ, rendering a comparison between them difficult [10,11,12].
Other choices of the parameters lead to less well-known or new methods [1,7,8,13]. Iterative methods are usually constructed based on geometrical or algebraic considerations. Ours is the latter. The introduction of these parameters and function evaluations allows for greater flexibility, tighter error bounds, and the handling of equations that could not be handled before (see also the numerical section). Note that choices involving the evaluation F(x_n) (through c or q) are not necessarily more appropriate.
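For intuition, the special cases above can be run on a scalar equation, where the divided difference reduces to [u, v; F] = (F(u) - F(v))/(u - v). The following is an illustrative Python sketch (not code from the paper); the Kurchatov choice is written with b = -1, so that G_n = 2x_n - x_{n-1}:

```python
def dd(F, u, v):
    """First-order divided difference [u, v; F] for scalar F."""
    return (F(u) - F(v)) / (u - v)

def method3(F, x0, x_prev, a, b, c, d, p, q, tol=1e-12, itmax=50):
    """x_{n+1} = x_n - [G_n, H_n; F]^{-1} F(x_n) for a scalar equation."""
    xn, xm = x0, x_prev
    for _ in range(itmax):
        Fx = F(xn)
        if abs(Fx) < tol:
            break
        G = a * xn + b * xm + c * Fx     # G_n = a x_n + b x_{n-1} + c F(x_n)
        H = d * xn + p * xm + q * Fx     # H_n = d x_n + p x_{n-1} + q F(x_n)
        xn, xm = xn - Fx / dd(F, G, H), xn
    return xn

F = lambda x: x**3 - 1.0
secant     = method3(F, 1.1, 1.11, a=1, b=0,  c=0, d=0, p=1, q=0)
kurchatov  = method3(F, 1.1, 1.11, a=2, b=-1, c=0, d=0, p=1, q=0)
steffensen = method3(F, 1.1, 1.11, a=1, b=0,  c=1, d=1, p=0, q=0)
```

All three runs converge to the root x* = 1 of x^3 - 1 = 0, the test equation used in Section 5.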
Semi-local and local constitute the two types of convergence for iterative methods.
In the semi-local convergence analysis, information from the initial point x_0 is used to find (usually sufficient) convergence criteria for method (3). A priori estimates on the norms ‖x_n - x*‖ are also obtained. In the local convergence analysis, data about the solution x* are taken into account to determine the radius of convergence for method (3). Moreover, upper error bounds are usually calculated for the norms ‖x_n - x*‖. Generalized Lipschitz-type conditions are used for both types of convergence.
The novelty of the article. It is therefore important to study the convergence of method (3) in both the semi-local (Section 2 and Section 3) and the local (Section 4) case. Our technique allows for a comparison between the convergence criteria of these methods. The new convergence criteria can be weaker than those obtained when the methods are studied separately. Section 5 contains the numerical examples, and Section 6 contains the conclusions.

2. Majorizing Sequence

It is convenient for the semi-local convergence analysis of method (3) to introduce some parameters, sequences, and functions. Let L_0, L, λ, h, η_0, η, and η̄ be given parameters. Define the parameters
A = |a| + |d| + λ(|c| + |q|), B = |b| + |p|,
C = (|b| + |p|) h, α = |1 - a| + |1 - d|,
β = λ(|c| + |q|), γ = |1 - a - b| + |1 - d - p|,
δ = |1 - a - b| η̄ + |c| η_0 + |1 - d - p| η̄ + |q| η_0,
t_{-1} = 0, t_0 = h, t_1 = η + h,
and the sequences
μ_{n+1} = L_0 (A (t_{n+1} - t_0) + B (t_n - t_0) + C),
λ_{n+1} = L (t_{n+1} - t_n + α (t_n - t_{n-1}) + β (t_n - t_0) + γ (t_{n+1} - t_0) + δ),
t_{n+2} = t_{n+1} + (λ_{n+1} / (1 - μ_{n+1})) (t_{n+1} - t_n). (4)
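The recursion (4) can be evaluated directly. The sketch below is illustrative Python (not from the paper); the constants passed in the example call are placeholders in the spirit of the first numerical example (Secant case), and the iteration stops early if the invertibility condition μ_{n+1} < 1 fails:

```python
def majorizing_t(L0, L, lam, h, eta, eta0, eta_bar, a, b, c, d, p, q, N=25):
    """Majorizing sequence (4); returns [t_{-1}, t_0, t_1, t_2, ...]."""
    A = abs(a) + abs(d) + lam * (abs(c) + abs(q))
    B = abs(b) + abs(p)
    C = B * h                                  # C = (|b| + |p|) h
    alpha = abs(1 - a) + abs(1 - d)
    beta = lam * (abs(c) + abs(q))
    gamma = abs(1 - a - b) + abs(1 - d - p)
    delta = (abs(1 - a - b) + abs(1 - d - p)) * eta_bar + (abs(c) + abs(q)) * eta0
    t = [0.0, h, eta + h]                      # t_{-1}, t_0, t_1
    t0 = t[1]
    for _ in range(N):
        tm1, tn, tn1 = t[-3], t[-2], t[-1]
        mu = L0 * (A * (tn1 - t0) + B * (tn - t0) + C)
        if mu >= 1.0:
            break                              # mu_{n+1} < 1 fails
        lmb = L * (tn1 - tn + alpha * (tn - tm1) + beta * (tn - t0)
                   + gamma * (tn1 - t0) + delta)
        t.append(tn1 + lmb / (1.0 - mu) * (tn1 - tn))
    return t

# Illustrative call with Secant-type parameters (a = p = 1, others 0);
# the numeric constants are placeholders, not values claimed by the paper.
t = majorizing_t(L0=1.0101, L=1.0647, lam=4.33, h=0.01, eta=0.0904,
                 eta0=0.0, eta_bar=0.0, a=1, b=0, c=0, d=0, p=1, q=0)
```

With admissible constants the computed sequence is nondecreasing and bounded above, as Lemma 1 establishes for the simplified version.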
We shall show that {t_n} is a majorizing sequence for {x_n} under certain conditions. Moreover, define the parameters θ_i, i = 1, 2, …, 8, by
θ_1 = L_0 A / (1 - L_0 C), θ_2 = L_0 B / (1 - L_0 C), θ_3 = L / (1 - L_0 C), θ_4 = L α / (1 - L_0 C),
θ_5 = L β / (1 - L_0 C), θ_6 = L γ / (1 - L_0 C), θ_7 = L δ / (1 - L_0 C), θ_8 = (θ_5 + θ_6) h + θ_7,
s_{-1} = 0, s_0 = h, s_1 = η + h,
and the sequences
m_{n+1} = θ_1 (t_{n+1} - t_0) + θ_2 (t_n - t_0),
l_{n+1} = θ_3 (t_{n+1} - t_n) + θ_4 (t_n - t_{n-1}) + θ_5 (t_n - t_0) + θ_6 (t_{n-1} - t_0) + θ_7,
s_{n+2} = s_{n+1} + l_{n+1} (s_{n+1} - s_n) / (1 - m_{n+1}). (5)
We shall study the simplified version {s_n} of the sequence {t_n}.
Furthermore, define on the interval [0, 1) the quadratic polynomial
g(t) = (θ_1 + θ_3) t^2 + (θ_2 + θ_4 - θ_3 + θ_5) t + θ_6 - θ_4,
the function
Q(t) = θ_5 η / (1 - t) + θ_6 η / (1 - t) + θ_1 η t / (1 - t) + θ_2 η t / (1 - t) + θ_1 h t + θ_2 h t - t + θ_8,
and the sequence of functions
Q_n(t) = θ_3 η t^n + θ_4 η t^{n-1} + θ_5 (h + ((1 - t^n)/(1 - t)) η) + θ_6 (h + ((1 - t^{n-1})/(1 - t)) η) + θ_7 + t θ_1 (h + ((1 - t^n)/(1 - t)) η) + t θ_2 (h + ((1 - t^{n-1})/(1 - t)) η) - t
= θ_3 η t^n + θ_4 η t^{n-1} + θ_5 (1 + t + ⋯ + t^{n-1}) η + θ_6 (1 + t + ⋯ + t^{n-2}) η + t θ_1 (1 + t + ⋯ + t^{n-1}) η + t θ_2 (1 + t + ⋯ + t^{n-2}) η + t θ_1 h + t θ_2 h - t + θ_8.
Suppose that either of the following sets of conditions holds:
(I)
L_0 C < 1,
the equation Q(t) = 0 has a minimal solution w ∈ (0, 1) satisfying
0 ≤ l_1 / (1 - m_1) ≤ w
and
g(w) ≥ 0.
(II)
L_0 C < 1,
and w ∈ (0, 1) exists satisfying
0 ≤ l_1 / (1 - m_1) ≤ w,
Q_1(w) ≤ 0
and
g(w) ≤ 0.
Then, we can show the following result on majorizing sequences for method (3).
Lemma 1.
Under conditions (I) or (II), the sequence {s_n} generated by (5) is nondecreasing, bounded from above by s** = h + η / (1 - w), and converges to its unique least upper bound s* ∈ [h + η, s**].
Proof. 
Induction is used to show
0 ≤ l_{k+1} / (1 - m_{k+1}) ≤ w (6)
and
m_{k+1} < 1. (7)
These estimates are true for k = 0 by (I) (or (II)). It then follows from (5) that
0 ≤ s_2 - s_1 ≤ w (s_1 - s_0) = w η (8)
and
s_2 ≤ h + (1 + w) η = h + ((1 - w^2)/(1 - w)) η ≤ s**.
Assume
0 ≤ s_{k+1} - s_k ≤ w^k η. (9)
Then, we also have
s_{k+1} ≤ s_k + w^k η ≤ s_{k-1} + w^{k-1} η + w^k η ≤ ⋯ ≤ s_1 + w η + ⋯ + w^k η = h + ((1 - w^{k+1})/(1 - w)) η ≤ s**.
Evidently, if we use (5), (8), and (9), estimates (6) and (7) are true provided that
θ_3 η w^k + θ_4 η w^{k-1} + θ_5 (h + ((1 - w^k)/(1 - w)) η) + θ_6 (h + ((1 - w^{k-1})/(1 - w)) η) + θ_7
+ w θ_1 (h + ((1 - w^k)/(1 - w)) η) + w θ_2 (h + ((1 - w^{k-1})/(1 - w)) η) - w ≤ 0. (10)
Define the recurrent functions Q_k on the interval [0, 1) by
Q_k(t) = θ_3 η t^k + θ_4 η t^{k-1} + θ_5 η (1 + t + ⋯ + t^{k-1}) + θ_6 η (1 + t + ⋯ + t^{k-2}) + t η θ_1 (1 + t + ⋯ + t^{k-1}) + t η θ_2 (1 + t + ⋯ + t^{k-2}) + t θ_1 h + t θ_2 h - t + θ_8. (11)
Then, we can show instead of (10) that
Q_k(w) ≤ 0. (12)
Next, we relate two consecutive functions Q_k. By the definition of these functions, we have
Q_{k+1}(t) = Q_{k+1}(t) - Q_k(t) + Q_k(t) = Q_k(t) + g(t) t^{k-1} η. (13)
Case I. We have by (13) and g(w) ≥ 0 that Q_{k+1}(w) ≥ Q_k(w). Define the function Q by
Q(t) = lim_{k→∞} Q_k(t). (14)
Then, we have by (11) and (14) that
Q(t) = θ_5 η / (1 - t) + θ_6 η / (1 - t) + θ_1 η t / (1 - t) + θ_2 η t / (1 - t) + θ_1 h t + θ_2 h t - t + θ_8. (15)
It follows by Q_k(w) ≤ Q(w), (12), and (15) that we can show instead that
Q(w) ≤ 0,
which is true by the choice of w, since Q(w) = 0.
Case II. By g(w) ≤ 0 and (13), we have
Q_{k+1}(w) ≤ Q_k(w).
Thus, we can show instead of (12) that
Q_1(w) ≤ 0,
which is true by the definition of w. The induction for (6) and (7) is completed. Hence, in either case (I) or (II), the sequence {s_n} is nondecreasing and bounded from above by s**, and as such it converges to its unique least upper bound s*. □
Remark 1.
(a) Clearly, the sequence {t_n} can replace {s_n} in Lemma 1 (since they are equivalent).
(b) It follows from the proof of Theorem 1 that the convergence of method (3) depends on the majorizing sequence (4). Sufficient convergence criteria for the majorizing sequence are given in Lemma 1.
Next, more general sufficient convergence criteria are developed, so that the conditions of Lemma 1 imply those of Lemma 2 but not necessarily vice versa.
Lemma 2.
Suppose that there exists ρ > 0 such that for each n = 0, 1, 2, …
μ_{n+1} < 1 and t_n < ρ. (18)
Then, the following assertion holds:
0 ≤ t_n ≤ t_{n+1} < ρ, (19)
and ρ* ∈ [0, ρ] exists such that
lim_{n→∞} t_n = ρ*. (20)
Proof. 
The definition of the sequence {t_n} given by formula (4) and the condition (18) imply the assertion (19), from which item (20) follows. □
Remark 2.
A possible choice for ρ under the conditions of Lemma 1 is s*.

3. Semi-Local Convergence

The following conditions (R) shall be used in the semi-local convergence analysis.
(R1)
x_{-1}, x_0 ∈ Ω and h ≥ 0, η ≥ 0, η_0 ≥ 0, η̄ ≥ 0 exist such that
‖x_0 - x_{-1}‖ ≤ h, ‖T_0^{-1} F(x_0)‖ ≤ η, ‖F(x_0)‖ ≤ η_0, and ‖x_0‖ ≤ η̄.
(R2)
L_0 ≥ 0, L ≥ 0, and λ ≥ 0 exist such that for all x, y, z ∈ Ω
‖T_0^{-1} ([G(y, x), H(y, x); F] - T_0)‖ ≤ L_0 (‖G(y, x) - G_0‖ + ‖H(y, x) - H_0‖),
‖T_0^{-1} ([z, y; F] - [G(y, x), H(y, x); F])‖ ≤ L (‖z - G(y, x)‖ + ‖y - H(y, x)‖)
and
‖F(y) - F(x_0)‖ ≤ λ ‖y - x_0‖.
(R3)
The conditions of Lemma 1 hold, with s* also satisfying
‖(a + b - 1) x_0 + c F(x_0)‖ ≤ s* (1 - |a| - |b| - λ |c|)
and
‖(d + p - 1) x_0 + q F(x_0)‖ ≤ s* (1 - |d| - |p| - λ |q|).
(R4)
U[x_0, s*] ⊂ Ω.
Next, we present the semi-local convergence analysis of method (3) using the conditions (R) and the preceding notation.
Theorem 1.
Suppose that the conditions (R) hold. Then, the sequence {x_n} starting with x_{-1}, x_0 ∈ U[x_0, s*] and generated by method (3) is well-defined in U[x_0, s*], remains in U[x_0, s*], and converges to a solution x* ∈ U[x_0, s*] of the equation F(x) = 0.
Proof. 
We shall show by induction that {t_k} is a majorizing sequence for {x_k}. Notice that ‖x_0 - x_{-1}‖ ≤ t_0 - t_{-1} and ‖x_1 - x_0‖ ≤ t_1 - t_0. Suppose that ‖x_{k+1} - x_k‖ ≤ t_{k+1} - t_k.
First, we show that the linear operator T_{k+1} is invertible, with T_{k+1}^{-1} ∈ L(X, X). We have by the first condition in (R2) that
‖T_0^{-1} (T_{k+1} - T_0)‖ = ‖T_0^{-1} ([G_{k+1}, H_{k+1}; F] - T_0)‖ ≤ L_0 (‖G_{k+1} - G_0‖ + ‖H_{k+1} - H_0‖). (21)
Moreover, we have by (R2) and (R3)
‖a x_{k+1} + b x_k + c F(x_{k+1}) - x_0‖ ≤ ‖a (x_{k+1} - x_0) + b (x_k - x_0) + c (F(x_{k+1}) - F(x_0))‖ + ‖a x_0 + b x_0 + c F(x_0) - x_0‖ ≤ ‖(a + b - 1) x_0 + c F(x_0)‖ + |a| s* + |b| s* + |c| λ s* ≤ s*,
and similarly
‖d x_{k+1} + p x_k + q F(x_{k+1}) - x_0‖ ≤ ‖(d + p - 1) x_0 + q F(x_0)‖ + |d| s* + |p| s* + |q| λ s* ≤ s*;
thus, the points a x_{k+1} + b x_k + c F(x_{k+1}) and d x_{k+1} + p x_k + q F(x_{k+1}) belong to U[x_0, s*].
Moreover, we have
‖G_{k+1} - G_0‖ = ‖a x_{k+1} + b x_k + c F(x_{k+1}) - (a x_0 + b x_{-1} + c F(x_0))‖ ≤ |a| ‖x_{k+1} - x_0‖ + |b| (‖x_k - x_0‖ + ‖x_0 - x_{-1}‖) + |c| ‖F(x_{k+1}) - F(x_0)‖ ≤ |a| (t_{k+1} - t_0) + |b| (t_k - t_0) + |b| h + |c| λ (t_{k+1} - t_0).
Similarly, it follows that
‖H_{k+1} - H_0‖ ≤ |d| (t_{k+1} - t_0) + |p| (t_k - t_0) + |p| h + |q| λ (t_{k+1} - t_0);
hence, (21) gives by summing up
‖T_0^{-1} (T_{k+1} - T_0)‖ ≤ μ_{k+1} < 1
by Lemma 1, so T_{k+1}^{-1} exists by the Banach lemma on invertible operators, and
‖T_{k+1}^{-1} T_0‖ ≤ 1 / (1 - μ_{k+1}). (24)
Furthermore, we can write
F(x_{k+1}) = F(x_{k+1}) - F(x_k) - T_k (x_{k+1} - x_k) = ([x_{k+1}, x_k; F] - T_k)(x_{k+1} - x_k) = ([x_{k+1}, x_k; F] - [G_k, H_k; F])(x_{k+1} - x_k). (25)
Using (R2) and (25), we obtain
‖T_0^{-1} F(x_{k+1})‖ ≤ L (‖x_{k+1} - G_k‖ + ‖x_k - H_k‖) ‖x_{k+1} - x_k‖. (26)
We also have
x_{k+1} - G_k = x_{k+1} - a x_k - b x_{k-1} - c F(x_k) = x_{k+1} - x_k + (1 - a)(x_k - x_{k-1}) + (1 - a) x_{k-1} - b x_{k-1} - c F(x_k) = x_{k+1} - x_k + (1 - a)(x_k - x_{k-1}) + (1 - a - b)(x_{k-1} - x_0) + (1 - a - b) x_0 - c (F(x_k) - F(x_0)) - c F(x_0);
thus,
‖x_{k+1} - G_k‖ ≤ ‖x_{k+1} - x_k‖ + |1 - a| ‖x_k - x_{k-1}‖ + |1 - a - b| ‖x_{k-1} - x_0‖ + |1 - a - b| ‖x_0‖ + λ |c| ‖x_k - x_0‖ + |c| ‖F(x_0)‖ ≤ t_{k+1} - t_k + |1 - a| (t_k - t_{k-1}) + |1 - a - b| (t_{k-1} - t_0) + |1 - a - b| η̄ + λ |c| (t_k - t_0) + |c| η_0.
Similarly,
x_k - H_k = x_k - d x_k - p x_{k-1} - q F(x_k) = (1 - d)(x_k - x_{k-1}) + (1 - d - p)(x_{k-1} - x_0) + (1 - d - p) x_0 - q (F(x_k) - F(x_0)) - q F(x_0),
so
‖x_k - H_k‖ ≤ |1 - d| (t_k - t_{k-1}) + |1 - d - p| (t_{k-1} - t_0) + |1 - d - p| η̄ + |q| λ (t_k - t_0) + |q| η_0;
hence,
‖x_{k+1} - G_k‖ + ‖x_k - H_k‖ ≤ t_{k+1} - t_k + |1 - a| (t_k - t_{k-1}) + |1 - a - b| (t_{k-1} - t_0) + |1 - a - b| η̄ + λ |c| (t_k - t_0) + |c| η_0 + |1 - d| (t_k - t_{k-1}) + |1 - d - p| (t_{k-1} - t_0) + |1 - d - p| η̄ + |q| λ (t_k - t_0) + |q| η_0 ≤ t_{k+1} - t_k + α (t_k - t_{k-1}) + β (t_k - t_0) + γ (t_{k+1} - t_0) + δ. (27)
Therefore, by (26), (27), and the definition of the sequence λ_{k+1}, we obtain
‖T_0^{-1} F(x_{k+1})‖ ≤ λ_{k+1} (t_{k+1} - t_k). (28)
It then follows from (3), (24), and (28) that
‖x_{k+2} - x_{k+1}‖ ≤ ‖T_{k+1}^{-1} T_0‖ ‖T_0^{-1} F(x_{k+1})‖ ≤ λ_{k+1} (t_{k+1} - t_k) / (1 - μ_{k+1}) = t_{k+2} - t_{k+1}
and
‖x_{k+2} - x_0‖ ≤ ‖x_{k+2} - x_{k+1}‖ + ‖x_{k+1} - x_k‖ + ⋯ + ‖x_1 - x_0‖ ≤ t_{k+2} - t_0 ≤ s**.
It follows that the sequence {x_k} is Cauchy (since {t_k} is Cauchy, being convergent by Lemma 1) and as such it converges to some x* ∈ U[x_0, s*]. By letting k → ∞ in (28), we conclude that F(x*) = 0. □
Remark 3.
Clearly, the conditions of Lemma 2 and ρ can replace the conditions of Lemma 1 and s* in Theorem 1.

4. Local Convergence

Suppose:
(C1)
There exists a simple solution x* ∈ Ω of the equation F(x) = 0.
(C2)
For each x, y ∈ Ω
‖F′(x*)^{-1} ([G(y, x), H(y, x); F] - F′(x*))‖ ≤ l_0 (‖G(y, x) - x*‖ + ‖H(y, x) - x*‖),
‖F′(x*)^{-1} ([G(y, x), H(y, x); F] - [y, x*; F])‖ ≤ l (‖G(y, x) - y‖ + ‖H(y, x) - x*‖),
‖F(y)‖ ≤ λ ‖y - x*‖.
(C3)
The parameter r* satisfies the conditions
‖(a + b - 1) x*‖ ≤ r* (1 - |a| - |b| - λ |c|)
and
‖(d + p - 1) x*‖ ≤ r* (1 - |d| - |p| - λ |q|).
(C4)
U(x*, r*) ⊂ Ω, where r* = 1 / (2 l_0 + 3 l).
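As a quick numerical sanity check of the radius r* = 1/(2 l_0 + 3 l) in (C4): with assumed illustrative constants l_0 and l (not derived from any concrete operator), the contraction factor that appears in the proof of Theorem 2 below stays below 1 strictly inside the ball:

```python
# r* = 1 / (2*l0 + 3*l) from condition (C4).  The constants l0 and l are
# assumed illustrative values, not taken from the paper.
l0, l = 1.0, 1.2
r_star = 1.0 / (2 * l0 + 3 * l)

# For r < r*, the factor 3*l*r / (1 - 2*l0*r) bounding the error ratio
# ||x_{k+1} - x*|| / ||x_k - x*|| is strictly less than 1.
r = 0.9 * r_star
factor = 3 * l * r / (1 - 2 * l0 * r)
```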
Theorem 2.
Suppose that the conditions (C) hold. Then, the sequence {x_n} starting with x_{-1}, x_0 ∈ U(x*, r*) and generated by method (3) is well-defined in U(x*, r*), remains in U(x*, r*), and converges to x*.
Proof. 
We have by (C2) and (C3) that
‖a x_k + b x_{k-1} + c F(x_k) - x*‖ ≤ ‖a (x_k - x*) + b (x_{k-1} - x*) + c F(x_k)‖ + ‖a x* + b x* - x*‖ ≤ ‖(a + b - 1) x*‖ + |a| r* + |b| r* + |c| λ r* ≤ r*,
‖d x_k + p x_{k-1} + q F(x_k) - x*‖ ≤ ‖(d + p - 1) x*‖ + |d| r* + |p| r* + |q| λ r* ≤ r*,
‖a x_k + b x_{k-1} + c F(x_k) - x_k‖ ≤ ‖a x_k + b x_{k-1} + c F(x_k) - x*‖ + ‖x* - x_k‖ ≤ 2 r*,
and
‖F′(x*)^{-1} (T_k - F′(x*))‖ ≤ l_0 (‖G_k - x*‖ + ‖H_k - x*‖) ≤ 2 l_0 r* < 1,
so T_k^{-1} exists and
‖T_k^{-1} F′(x*)‖ ≤ 1 / (1 - l_0 (‖G_k - x*‖ + ‖H_k - x*‖)).
We also get by (C2)
‖F′(x*)^{-1} (T_k - [x_k, x*; F])‖ ≤ l (‖G_k - x_k‖ + ‖H_k - x*‖);
thus,
‖x_{k+1} - x*‖ = ‖x_k - x* - T_k^{-1} F(x_k)‖ ≤ ‖T_k^{-1} F′(x*)‖ ‖F′(x*)^{-1} (T_k - [x_k, x*; F])‖ ‖x_k - x*‖ ≤ (l (‖G_k - x_k‖ + ‖H_k - x*‖) / (1 - l_0 (‖G_k - x*‖ + ‖H_k - x*‖))) ‖x_k - x*‖ < ‖x_k - x*‖ < r*;
hence, the iterate x_{k+1} ∈ U(x*, r*) and lim_{k→∞} x_k = x*. □
A domain in which the solution is unique can be specified.
Proposition 1.
Suppose that there exists a solution x* ∈ Ω of the equation F(x) = 0 such that for each x ∈ U(x*, ρ_1)
‖F′(x*)^{-1} ([x*, x; F] - F′(x*))‖ ≤ l_1 ‖x - x*‖ for some ρ_1 > 0, l_1 > 0, (29)
with
l_1 ρ_1 < 1. (30)
Then, the point x* is the only solution of the equation F(x) = 0 in the domain U_0 = U(x*, ρ_1) ∩ U[x*, 1/l_1].
Proof. 
Let y* ∈ U_0 with F(y*) = 0. Define the linear operator S = [x*, y*; F]. By applying conditions (29) and (30), it follows that
‖F′(x*)^{-1} (S - F′(x*))‖ ≤ l_1 ‖x* - y*‖ ≤ l_1 ρ_1 < 1;
thus, S^{-1} exists. Then, from the identity x* - y* = S^{-1} (F(x*) - F(y*)) = S^{-1}(0), we conclude that y* = x*. □

5. Numerical Examples

In this section, we present numerical examples that confirm the obtained semi-local theoretical results.
First, we consider a nonlinear equation. Let X = ℝ, Ω = (0.8, 1.3), and
F(x) = x^3 - 1 = 0.
Let us determine the Lipschitz constants from the conditions (R2). We can write
|F(y) - F(x_0)| = |y^3 - x_0^3| = |y^2 + y x_0 + x_0^2| |y - x_0|.
It follows that λ = max_{y∈Ω} |y^2 + y x_0 + x_0^2|. For the divided difference [x, y; F], we have
[x, y; F] = x^2 + x y + y^2
and
[x, y; F] - [u, v; F] = (x - u)(x + y + u) + (y - v)(y + u + v).
We obtain from the last equality that
|T_0^{-1} ([x, y; F] - [u, v; F])| ≤ (1 / |T_0|) max_{x,y,u,v∈Ω} max{|x + y + u|, |y + u + v|} (|x - u| + |y - v|).
If a = d, b = p, c = q, and F is Fréchet-differentiable, then we obtain methods with derivatives. In this case, [x, x; F] = F′(x) and
F′(u) - F′(u_0) = 3 (u + u_0)(u - u_0), so that L_0 = (1.5 / |T_0|) max_{u∈Ω} |u + u_0|.
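These constants can also be evaluated numerically. The following illustrative Python sketch (Newton case, with T_0 = F′(x_0) = 3 x_0^2) reproduces the Newton row of Table 1, namely L_0 ≈ 0.9917, L ≈ 1.0744, and λ = 4.3300:

```python
import numpy as np

# Constants of (R2) for F(x) = x^3 - 1 on Omega = (0.8, 1.3) with x0 = 1.1,
# Newton case: T0 = F'(x0) = 3*x0^2.
x0 = 1.1
grid = np.linspace(0.8, 1.3, 100001)   # dense grid over Omega
T0 = 3 * x0**2

lam = np.max(np.abs(grid**2 + grid * x0 + x0**2))  # attained at y = 1.3: 4.33
L0 = 1.5 * np.max(np.abs(grid + x0)) / T0          # 1.5 * max|u + u0| / |T0|
L = np.max(3 * grid) / T0                          # max|x + y + u| / |T0|
```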
Table 1 reports the Lipschitz constants from the conditions (R2) and the value s* to which the sequence {t_n} converges. We see that in both cases the sequence {x_n} is contained in U(x_0, s*) ⊂ Ω.
Table 2 shows the values of the error at each step. The calculations were performed for the initial approximation x_0 = 1.1 and the accuracy ε = 10^{-10}. For the Secant method, x_{-1} = 1.11. We see from the obtained results that
|x_n - x_{n-1}| ≤ t_n - t_{n-1}
holds for each n ≥ 1.
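The first steps of Table 2 can be reproduced with a short script (an illustrative sketch, not the authors' code):

```python
# Newton and Secant iterations for F(x) = x^3 - 1 with x0 = 1.1
# (and x_{-1} = 1.11 for Secant).
F = lambda x: x**3 - 1.0

def newton(x, tol=1e-10, itmax=50):
    errs = []
    for _ in range(itmax):
        if abs(F(x)) <= tol:
            break
        x_new = x - F(x) / (3 * x**2)            # F'(x) = 3x^2
        errs.append(abs(x_new - x))
        x = x_new
    return x, errs

def secant(x, x_prev, tol=1e-10, itmax=50):
    errs = []
    for _ in range(itmax):
        if abs(F(x)) <= tol:
            break
        dd = (F(x) - F(x_prev)) / (x - x_prev)   # [x_n, x_{n-1}; F]
        x, x_prev = x - F(x) / dd, x
        errs.append(abs(x - x_prev))
    return x, errs

xn, en = newton(1.1)
xs, es = secant(1.1, 1.11)
```

The first computed errors, 9.1185 × 10^{-2} for Newton and 9.0361 × 10^{-2} for Secant, match the n = 1 row of Table 2.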
Next, we consider a system of nonlinear equations. Let X = ℝ^3, Ω = U(0, 1), and
F(x) = (e^{x_1} - 1, ((e - 1)/2) x_2^3 + x_2, x_3)^T = 0.
Since |e^{t_1} - e^{t_2}| ≤ e |t_1 - t_2| on Ω, we obtain
λ = max{e, ((e - 1)/2) max_{y∈Ω} |y_2^2 + y_2 τ_0 + τ_0^2| + 1, 1}, x_0 = (ξ_0, τ_0, ρ_0)^T,
and
L_0 = ‖T_0^{-1}‖ max{e/2, ((e - 1)/2) M_0}, L = ‖T_0^{-1}‖ max{e/2, ((e - 1)/2) M}.
The constants M_0 and M are calculated similarly to the previous example.
Table 3 and Table 4 show the results for the system of nonlinear equations. The calculations were performed for the initial approximations x_0 = (0.07, 0.07, 0.07)^T, x_{-1} = (0.08, 0.08, 0.08)^T and the accuracy ε = 10^{-10}. From the obtained results, we see that
‖x_n - x_{n-1}‖ ≤ t_n - t_{n-1}
is satisfied for each n ≥ 1.
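The system experiment can be reproduced with the standard componentwise first-order divided difference. The sketch below is illustrative Python (not the authors' code); the tiny forward-step guard for components that have already converged is our own implementation choice, not taken from the paper:

```python
import numpy as np

def F(x):
    # F(x) = (e^{x1} - 1, ((e - 1)/2) x2^3 + x2, x3)^T, with solution x* = 0.
    return np.array([np.exp(x[0]) - 1.0,
                     (np.e - 1.0) / 2.0 * x[1]**3 + x[1],
                     x[2]])

def divided_difference(F, x, y):
    """Componentwise first-order divided difference matrix [y, x; F]."""
    n = len(x)
    J = np.zeros((n, n))
    for j in range(n):
        u = np.concatenate([y[:j + 1], x[j + 1:]])
        v = np.concatenate([y[:j], x[j:]])
        h = y[j] - x[j]
        if h == 0.0:          # converged component: guard with a tiny
            h = 1e-12         # forward step to avoid 0/0
            u = v.copy()
            u[j] += h
        J[:, j] = (F(u) - F(v)) / h
    return J

# Secant-type iteration (3) with a = 1, p = 1, b = c = d = q = 0.
x_prev = np.full(3, 0.08)     # x_{-1}
x = np.full(3, 0.07)          # x_0
for _ in range(20):
    if np.linalg.norm(F(x)) < 1e-12:
        break
    T = divided_difference(F, x, x_prev)
    x, x_prev = x - np.linalg.solve(T, F(x)), x
```

Starting from x_0 = (0.07, 0.07, 0.07)^T and x_{-1} = (0.08, 0.08, 0.08)^T, the iteration converges to x* = (0, 0, 0)^T.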

6. Conclusions

A unified convergence analysis of a class of methods without derivatives is provided under classical Lipschitz conditions for first-order divided differences. The current convergence analysis allows for a comparison between specialized methods that was not possible before under the same set of conditions. The results of numerical experiments that confirm the theoretical ones are given. The developed technique can also be employed on multipoint as well as multi-step iterative methods [13,14]. This is a possible direction for future research.

Author Contributions

Conceptualization, S.R., I.K.A., S.S. and H.Y.; methodology, S.R., I.K.A., S.S. and H.Y.; software, S.R., I.K.A., S.S. and H.Y.; validation, S.R., I.K.A., S.S. and H.Y.; formal analysis, S.R., I.K.A., S.S. and H.Y.; investigation, S.R., I.K.A., S.S. and H.Y.; resources, S.R., I.K.A., S.S. and H.Y.; data curation, S.R., I.K.A., S.S. and H.Y.; writing—original draft preparation, S.R., I.K.A., S.S. and H.Y.; writing—review and editing, S.R., I.K.A., S.S. and H.Y.; visualization, S.R., I.K.A., S.S. and H.Y.; supervision, S.R., I.K.A., S.S. and H.Y.; project administration, S.R., I.K.A., S.S. and H.Y.; and funding acquisition, S.R., I.K.A., S.S. and H.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Argyros, I.K.; Magreñán, Á.A. Iterative Methods and Their Dynamics with Applications: A Contemporary Study; CRC Press: Boca Raton, FL, USA, 2017.
  2. Dennis, J.E., Jr.; Schnabel, R.B. Numerical Methods for Unconstrained Optimization and Nonlinear Equations; Prentice-Hall: Englewood Cliffs, NJ, USA, 1983.
  3. Amat, S. On the local convergence of secant-type methods. Intern. J. Comput. Math. 2004, 81, 1153–1161.
  4. Hernandez, M.A.; Rubio, M.J. The Secant method for nondifferentiable operators. Appl. Math. Lett. 2002, 15, 395–399.
  5. Argyros, I.K. A Kantorovich-type analysis for a fast iterative method for solving nonlinear equations. J. Math. Anal. Appl. 2007, 332, 97–108.
  6. Kurchatov, V.A. On a method of linear interpolation for the solution of functional equations. Dokl. Akad. Nauk SSSR 1971, 198, 524–526 (In Russian); translation in Soviet Math. Dokl. 1971, 12, 835–838.
  7. Shakhno, S.M. On a Kurchatov’s method of linear interpolation for solving nonlinear equations. PAMM Proc. Appl. Math. Mech. 2004, 4, 650–651.
  8. Shakhno, S.M. Nonlinear majorants for investigation of methods of linear interpolation for the solution of nonlinear equations. In Proceedings of the ECCOMAS 2004—European Congress on Computational Methods in Applied Sciences and Engineering, Jyväskylä, Finland, 24–28 July 2004.
  9. Amat, S.; Ezquerro, J.A.; Hernández-Verón, M.A. On a Steffensen-like method for solving nonlinear equations. Calcolo 2016, 53, 171–188.
  10. Argyros, I.K.; George, S. Convergence of derivative free iterative methods. Creat. Math. Inform. 2019, 28, 19–26.
  11. Argyros, G.; Argyros, M.; Argyros, I.K.; George, S. Semi-local convergence of a derivative-free method for solving equations. Probl. Anal. Issues Anal. 2021, 10, 18–26.
  12. Sharma, R.; Gagandeep. A study of the local convergence of a derivative free method in Banach spaces. J. Anal. 2022, 10, 18–26.
  13. Traub, J.F. Iterative Methods for the Solution of Equations; Prentice Hall: Hoboken, NJ, USA, 1964.
  14. Behl, R.; Sarría, Í.; González, R.; Magreñán, Á.A. Highly efficient family of iterative methods for solving nonlinear models. J. Comput. Appl. Math. 2019, 346, 110–132.
Table 1. Lipschitz constants and radii.

Method   L_0      L        λ        s*
Newton   0.9917   1.0744   4.3300   0.1023
Secant   1.0101   1.0647   4.3300   0.1129
Table 2. Results for the Newton and Secant methods.

     Newton method                                         Secant method
n    x_n      |x_n - x_{n-1}|    t_n - t_{n-1}             x_n      |x_n - x_{n-1}|    t_n - t_{n-1}
1    1.0088   9.1185 × 10^{-2}   9.1185 × 10^{-2}          1.0096   9.0361 × 10^{-2}   9.0361 × 10^{-2}
2    1.0001   8.7386 × 10^{-3}   1.0905 × 10^{-2}          1.0009   8.7419 × 10^{-3}   1.0991 × 10^{-2}
3    1.0000   7.6802 × 10^{-5}   1.6022 × 10^{-4}          1.0000   8.8887 × 10^{-4}   1.5283 × 10^{-3}
4    1.0000   5.8989 × 10^{-9}   3.4595 × 10^{-8}          1.0000   8.5828 × 10^{-6}   2.6685 × 10^{-5}
5    1.0000   0                  1.6098 × 10^{-15}         1.0000   7.7050 × 10^{-9}   5.7989 × 10^{-8}
6                                                          1.0000   6.6169 × 10^{-14}  2.1673 × 10^{-12}
Table 3. Lipschitz constants and radii.

Method   L_0      L        λ        s*
Newton   1.3789   2.5774   2.7183   0.0864
Secant   1.7784   2.5774   2.7183   0.1041
Table 4. Results for the Newton and Secant methods.

     Newton method                              Secant method
n    ‖x_n - x_{n-1}‖     t_n - t_{n-1}          ‖x_n - x_{n-1}‖     t_n - t_{n-1}
1    7.0000 × 10^{-2}    7.0000 × 10^{-2}       7.0000 × 10^{-2}    7.0000 × 10^{-2}
2    2.3910 × 10^{-3}    1.5651 × 10^{-2}       2.6368 × 10^{-3}    1.7556 × 10^{-2}
3    2.8629 × 10^{-6}    8.2657 × 10^{-4}       9.4309 × 10^{-5}    5.9446 × 10^{-3}
4    4.0981 × 10^{-12}   2.3125 × 10^{-6}       1.2890 × 10^{-7}    5.7643 × 10^{-4}
5                                               6.0867 × 10^{-12}   1.5803 × 10^{-5}
