Article

Extended Convergence for Two Sixth Order Methods under the Same Weak Conditions

Ioannis K. Argyros, Samundra Regmi, Jinny Ann John and Jayakumar Jayaraman
1 Department of Computing and Mathematical Sciences, Cameron University, Lawton, OK 73505, USA
2 Department of Mathematics, University of Houston, Houston, TX 77004, USA
3 Department of Mathematics, Puducherry Technological University, Pondicherry 605014, India
* Author to whom correspondence should be addressed.
Foundations 2023, 3(1), 127-139; https://doi.org/10.3390/foundations3010012
Submission received: 25 November 2022 / Revised: 25 February 2023 / Accepted: 8 March 2023 / Published: 10 March 2023
(This article belongs to the Section Mathematical Sciences)

Abstract: Iterative methods of high convergence order play a major role in scientific, computational, and engineering mathematics, as they generate sequences that converge to solutions of nonlinear equations. The convergence order is usually established using Taylor series expansions, which require the existence (and computation) of derivatives of high order that do not appear in the method itself. Consequently, such results cannot guarantee convergence in cases where these high-order derivatives do not exist, although the method may still converge. In this paper, we develop a process by which both the local and the semi-local convergence analyses of two related sixth-order methods are obtained exclusively from information provided by the operators appearing in the methods. Numerical applications supplement the theory.
MSC:
37N30; 47J25; 49M15; 65H10; 65J15

1. Introduction

The most common problem in applied and computational mathematics, and in the fields of science and engineering generally, is that of finding a solution x* of a nonlinear equation
F(x) = 0,   (1)
where F : Ω ⊂ X → Y is Fréchet differentiable, X and Y are complete normed linear spaces, and Ω is a nonempty, open and convex set.
Researchers have battled with this nonlinearity for a long time. In most cases, a closed-form solution is very hard to obtain, so iterative algorithms are widely used by researchers and scientists. Newton's method is the best-known iterative method for handling nonlinear equations, and many new higher-order iterative schemes for solving nonlinear equations have been developed and applied in the last few years [1,2,3,4,5,6,7,8,9,10,11]. In the majority of these papers, however, the convergence theorems are deduced by the application of high-order derivatives. In addition, no results are given on error bounds, convergence radii, or the region in which the solution is unique.
Examining the local convergence analysis (LCA) and the semi-local convergence analysis (SLA) of an iterative algorithm makes it possible to estimate convergence domains, error estimates, and the region in which a solution is unique. Local and semi-local convergence results for efficient iterative methods were derived in [9,10,11,12,13]. These works present important results, including convergence radii, error estimates, and extended benefits of the iterative approach, and they illustrate the subtleties of choosing the starting point. Additionally, the applicability of our analysis extends to engineering problems, such as the shrinking projection methods used for solving variational inclusion problems in [14,15,16].
In this article, convergence theorems are developed for two competing sixth-order methods found in [17], stated below:
yₙ = xₙ − F′(xₙ)⁻¹F(xₙ),
zₙ = xₙ − 2(F′(xₙ) + F′(yₙ))⁻¹F(xₙ),
xₙ₊₁ = zₙ − ((7/2)I − 4F′(xₙ)⁻¹F′(yₙ) + (3/2)(F′(xₙ)⁻¹F′(yₙ))²)F′(xₙ)⁻¹F(zₙ)   (2)
and
yₙ = xₙ − F′(xₙ)⁻¹F(xₙ),
Aₙ = F′(xₙ) + 2F′((xₙ + yₙ)/2) + F′(yₙ),
zₙ = yₙ − 4Aₙ⁻¹F(yₙ),
Bₙ = F′(xₙ) + 2F′((xₙ + zₙ)/2) + F′(zₙ),
xₙ₊₁ = zₙ − 4Bₙ⁻¹F(zₙ).   (3)
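Before turning to the analysis, it may help to see the two schemes in executable form. The following Python sketch is our own minimal reading of (2) and (3) in the finite-dimensional case F : ℝⁿ → ℝⁿ (the names method_2 and method_3, and the use of a user-supplied Jacobian dF, are ours, not the paper's); linear solves replace explicit inverses.

```python
import numpy as np

def method_2(F, dF, x0, tol=1e-12, max_iter=50):
    """A sketch of method (2); F returns a vector, dF its Jacobian matrix."""
    x = np.asarray(x0, dtype=float)
    I = np.eye(x.size)
    for _ in range(max_iter):
        Fx, Jx = F(x), dF(x)
        y = x - np.linalg.solve(Jx, Fx)
        z = x - 2.0 * np.linalg.solve(Jx + dF(y), Fx)
        T = np.linalg.solve(Jx, dF(y))           # T = F'(x)^{-1} F'(y)
        M = 3.5 * I - 4.0 * T + 1.5 * (T @ T)    # (7/2)I - 4T + (3/2)T^2
        x_new = z - M @ np.linalg.solve(Jx, F(z))
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

def method_3(F, dF, x0, tol=1e-12, max_iter=50):
    """A sketch of method (3) with the averaged Jacobians A_n and B_n."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        Jx = dF(x)
        y = x - np.linalg.solve(Jx, F(x))
        A = Jx + 2.0 * dF((x + y) / 2.0) + dF(y)
        z = y - 4.0 * np.linalg.solve(A, F(y))
        B = Jx + 2.0 * dF((x + z) / 2.0) + dF(z)
        x_new = z - 4.0 * np.linalg.solve(B, F(z))
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x
```

These sketches are reused in the numerical examples of Section 4.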
The local convergence of the methods (2) and (3) is given in [17]. The order was established by assuming that the seventh derivative (at least) of the operator F exists. As a result, the applicability of these schemes is limited. To see this, define F on Ω = [−0.5, 1.5] by
F(t) = t³ ln t + 3t⁵ − 3t⁴ if t ≠ 0, and F(0) = 0.   (4)
The third derivative is given by
F‴(t) = 11 − 72t + 180t² + 6 ln t.
Hence, due to the unboundedness of F‴ on Ω, the conclusions on the convergence of (2) and (3) drawn in [17] do not apply to this example, even though the methods may converge. Moreover, [17] provides no formula for approximating the error, no region of convergence, and no statement on the uniqueness and exact location of the root x*. This strengthens our motivation to develop a ball convergence theory, and thus to compare the convergence regions of (2) and (3), using hypotheses on F′ only. This research provides important formulas for the assessment of errors and convergence radii. The study also discusses the exact location and uniqueness of x*.
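The blow-up of F‴ near the origin is easy to observe numerically; a quick check of our own:

```python
import numpy as np

# F'''(t) = 11 - 72 t + 180 t^2 + 6 ln(t) decreases without bound as t -> 0+,
# so no Lipschitz-type constant for F''' exists on any neighborhood of 0.
for t in (1e-1, 1e-3, 1e-6, 1e-9):
    print(t, 11 - 72 * t + 180 * t ** 2 + 6 * np.log(t))
```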
The rest of the paper is organized as follows: Section 2 deals with the LCA of the methods (2) and (3); Section 3 discusses the SLA of the methods under consideration; numerical examples are given in Section 4; concluding comments are included in Section 5.

2. LCA

Set M = [0, +∞). Certain functions defined on the interval M play a role in the LCA of these methods. Assume:
(i)
∃ a function ω₀ : M → ℝ, which is non-decreasing and continuous, such that the function
ω₀(t) − 1
admits a smallest positive root ρ₀. Set M₀ = [0, ρ₀).
(ii)
∃ a function ω : M₀ → ℝ, which is non-decreasing and continuous, such that the function
g₁(t) − 1
admits a smallest positive root r₁ ∈ M₀, where g₁ : M₀ → ℝ is given by
g₁(t) = (∫₀¹ ω((1 − θ)t) dθ) / (1 − ω₀(t)).
(iii)
The function p(t) − 1 has a smallest positive root ρ_p ∈ M₀, where the function p : M₀ → ℝ is given by
p(t) = ½ (ω₀(t) + ω₀(g₁(t)t)).
Set M₁ = [0, ρ), where ρ = min{ρ₀, ρ_p}.
(iv)
The functions g₂(t) − 1 and g₃(t) − 1 have smallest positive roots r₂, r₃ ∈ M₁, where g₂ : M₁ → ℝ and g₃ : M₁ → ℝ are given by
g₂(t) = g₁(t) + (ω̄(t)(1 + ∫₀¹ ω₀(θt) dθ)) / (2(1 − ω₀(t))(1 − p(t))),
g₃(t) = [1 + ½ (3(ω̄(t)/(1 − ω₀(t)))² + 2(ω̄(t)/(1 − ω₀(t))) + 2)(1 + ∫₀¹ ω₀(θg₂(t)t) dθ) / (1 − ω₀(t))] g₂(t),
ω̄(t) = ω₀(t) + ω₀(g₁(t)t) or ω((1 + g₁(t))t).
Note that, in practice, we choose the smaller of the two expressions in the formula for the function ω̄.
Define the parameter r by
r = min{rₘ}, m = 1, 2, 3.   (5)
The parameter r is shown to be a radius of convergence (RC) for the method (2) (see Theorem 1).
Let M₂ = [0, r). Then, for each t ∈ M₂, the following items hold:
0 ≤ ω₀(t) < 1,   (6)
0 ≤ p(t) < 1,   (7)
and
0 ≤ gₘ(t) < 1.   (8)
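Since each rₘ is the smallest positive root of gₘ(t) = 1, the radius r can be computed by a grid scan plus bisection once ω₀ and ω are fixed. Below is a minimal Python sketch of our own for g₁ and g₂, demonstrated with the choices ω₀(t) = (e − 1)t and ω(t) = et from Example 1 in Section 4; ω̄ is taken as the smaller of its two admissible expressions, as noted above, and g₃ can be encoded the same way from its definition.

```python
import numpy as np

L0, L = np.e - 1.0, np.e          # Example 1 (Section 4): w0(t) = L0*t, w(t) = L*t
w0 = lambda t: L0 * t
w = lambda t: L * t

def integral(f, n=401):
    # composite trapezoidal rule on [0, 1]
    h = 1.0 / (n - 1)
    v = [f(i * h) for i in range(n)]
    return h * (sum(v) - 0.5 * (v[0] + v[-1]))

def g1(t):
    return integral(lambda th: w((1 - th) * t)) / (1 - w0(t))

def p(t):
    return 0.5 * (w0(t) + w0(g1(t) * t))

def wbar(t):
    # the smaller of the two admissible expressions (see the note above)
    return min(w0(t) + w0(g1(t) * t), w((1 + g1(t)) * t))

def g2(t):
    return g1(t) + wbar(t) * (1 + integral(lambda th: w0(th * t))) / (
        2.0 * (1 - w0(t)) * (1 - p(t)))

def smallest_root(g, hi, grid=500):
    # first crossing of g(t) = 1 on (0, hi), refined by bisection
    ts = [hi * (k + 1) / grid for k in range(grid)]
    for a, b in zip(ts, ts[1:]):
        if g(a) < 1 <= g(b):
            for _ in range(60):
                m = 0.5 * (a + b)
                a, b = (m, b) if g(m) < 1 else (a, m)
            return 0.5 * (a + b)
    return None

rho0 = 1.0 / L0                            # smallest positive root of w0(t) = 1
print(smallest_root(g1, 0.999 * rho0))     # approx. 0.324947 = r1 (cf. Table 1)
print(smallest_root(g2, 0.999 * rho0))     # approx. 0.201201 = r2 (cf. Table 1)
```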
The notation U(x*, α) stands for the open ball with center x* and radius α > 0, whereas U[x*, α] stands for the closure of the ball U(x*, α).
The scalar functions ω₀ and ω relate the solution x* to the operators appearing in the method (2) or the method (3) as follows.
Suppose:
(H1)
∃ a solution x* ∈ Ω of the equation F(x) = 0 such that F′(x*)⁻¹ ∈ L(Y, X).
(H2)
‖F′(x*)⁻¹(F′(u) − F′(x*))‖ ≤ ω₀(‖u − x*‖) for each u ∈ Ω.
Set Ω₀ = U(x*, ρ₀) ∩ Ω.
(H3)
‖F′(x*)⁻¹(F′(u₂) − F′(u₁))‖ ≤ ω(‖u₂ − u₁‖) for each u₁, u₂ ∈ Ω₀.
(H4)
U[x*, d] ⊂ Ω, where the parameter d is specified later.
The conditions (H1)–(H4) are utilized first to prove the convergence of the method (2). Let lₙ = ‖xₙ − x*‖.
Theorem 1.
Assume that the conditions (H1)–(H4) hold and that the initial guess x₀ ∈ U(x*, d) for d = r. Then, the following assertion holds:
limₙ→∞ xₙ = x*.
Proof. 
The iterates {xₖ}, {yₖ}, {zₖ} shall be shown to remain in the ball U(x*, r) by mathematical induction. Let u ∈ U(x*, r) be arbitrary. By utilizing the item (6) and the hypotheses (H1), (H2), we get
‖F′(x*)⁻¹(F′(u) − F′(x*))‖ ≤ ω₀(‖u − x*‖) ≤ ω₀(r) < 1.
Then, it follows by the standard lemma due to Banach on invertible linear operators [12,18] that F′(u)⁻¹ ∈ L(Y, X) exists with
‖F′(u)⁻¹F′(x*)‖ ≤ 1 / (1 − ω₀(‖u − x*‖)).   (9)
If we choose u = x₀, then the iterate y₀ exists by the first sub-step of the method (2) for k = 0, since x₀ ∈ U(x*, r) by hypothesis. Moreover, we have
y₀ − x* = x₀ − x* − F′(x₀)⁻¹F(x₀) = −F′(x₀)⁻¹ [∫₀¹ (F′(x* + ϑ(x₀ − x*)) − F′(x₀)) dϑ] (x₀ − x*),
which gives by ( H 3 ), (8) (for m = 1 ), (9) (for u = x 0 ) and (5) that
‖y₀ − x*‖ ≤ (∫₀¹ ω((1 − ϑ)l₀) dϑ / (1 − ω₀(l₀))) l₀ ≤ g₁(l₀)l₀ ≤ l₀ < r.   (10)
Thus, the iterate y 0 U ( x * , r ) . Then, by (5), (7), ( H 2 ) and (10), we obtain
‖(2F′(x*))⁻¹(F′(x₀) + F′(y₀) − 2F′(x*))‖ ≤ ½ (‖F′(x*)⁻¹(F′(x₀) − F′(x*))‖ + ‖F′(x*)⁻¹(F′(y₀) − F′(x*))‖) ≤ ½ (ω₀(l₀) + ω₀(‖y₀ − x*‖)) ≤ p(l₀) ≤ p(r) < 1,
so
‖(F′(x₀) + F′(y₀))⁻¹F′(x*)‖ ≤ 1 / (2(1 − p(l₀))).   (11)
Hence, the iterate z 0 exists by the second sub-step of the method (2) and
z₀ − x* = y₀ − x* + (F′(x₀)⁻¹ − 2(F′(x₀) + F′(y₀))⁻¹)F(x₀) = y₀ − x* + F′(x₀)⁻¹(F′(x₀) + F′(y₀) − 2F′(x₀))(F′(x₀) + F′(y₀))⁻¹F(x₀),
thus
‖z₀ − x*‖ ≤ ‖y₀ − x*‖ + (ω̄₀ (1 + ∫₀¹ ω₀(ϑl₀) dϑ) ‖x₀ − x*‖) / (2(1 − ω₀(l₀))(1 − p(l₀))) ≤ g₂(l₀)l₀ ≤ l₀,   (12)
since
ω̄₀ = ω₀(l₀) + ω₀(‖y₀ − x*‖) or ω(l₀ + ‖y₀ − x*‖),
F(x₀) = F(x₀) − F(x*) = ∫₀¹ [F′(x* + ϑ(x₀ − x*)) − F′(x*) + F′(x*)] dϑ (x₀ − x*),
hence
‖F′(x*)⁻¹F(x₀)‖ ≤ (1 + ∫₀¹ ω₀(ϑ‖x₀ − x*‖) dϑ) l₀.
It also follows by (12) that the iterate z₀ ∈ U(x*, r). Furthermore, the iterate x₁ exists by the third sub-step of the method (2) for k = 0. By the third sub-step, it follows in turn that
x₁ − x* = z₀ − x* − ½ [3(I − F′(x₀)⁻¹F′(y₀))² + 2(I − F′(x₀)⁻¹F′(y₀)) + 2I] F′(x₀)⁻¹F(z₀),
leading to
l₁ ≤ [1 + ½ (3(ω̄₀/(1 − ω₀(l₀)))² + 2(ω̄₀/(1 − ω₀(l₀))) + 2)(1 + ∫₀¹ ω₀(ϑ‖z₀ − x*‖) dϑ) / (1 − ω₀(l₀))] ‖z₀ − x*‖ ≤ g₃(l₀)l₀ ≤ l₀.   (13)
Thus, the iterate x₁ ∈ U(x*, r). Exchange x₀, y₀, z₀, x₁ with xₖ, yₖ, zₖ, xₖ₊₁ in the preceding calculations to see that the following estimates hold:
‖yₖ − x*‖ ≤ g₁(lₖ)lₖ ≤ lₖ < r,  ‖zₖ − x*‖ ≤ g₂(lₖ)lₖ ≤ lₖ
and
lₖ₊₁ ≤ g₃(lₖ)lₖ ≤ lₖ.
Therefore, the iterates {xₖ}, {yₖ}, {zₖ} remain in U(x*, r). Finally, from
lₖ₊₁ ≤ ξlₖ < r, where ξ = g₃(l₀) ∈ [0, 1),
it follows that limₖ→∞ xₖ = x* and xₖ₊₁ ∈ U(x*, r). □
The following proposition determines the uniqueness region of the solution x*.
Proposition 1.
Assume:
(i)
∃ a solution x̄ ∈ U(x*, ρ₂) of the equation (1) for some ρ₂ > 0.
(ii)
The hypothesis ( H 2 ) holds on U ( x * , ρ 2 ) .
(iii)
There exists ρ₃ > ρ₂ such that
∫₀¹ ω₀(ϑρ₃) dϑ < 1.
Set Ω₂ = U(x*, ρ₃) ∩ Ω. Then, the only solution of the equation (1) in the region Ω₂ is x*.
Proof. 
Assume ∃ x̄ ∈ Ω₂ with F(x̄) = 0. Define the linear operator
Q = ∫₀¹ F′(x* + ϑ(x̄ − x*)) dϑ.
It follows that
‖F′(x*)⁻¹(Q − F′(x*))‖ ≤ ∫₀¹ ω₀(ϑ‖x̄ − x*‖) dϑ ≤ ∫₀¹ ω₀(ϑρ₃) dϑ < 1,
thus, x̄ = x* by the invertibility of the operator Q and the identity x̄ − x* = Q⁻¹(F(x̄) − F(x*)) = Q⁻¹(0) = 0. □
The LCA of the method (3) is obtained analogously, but with the functions g₂ and g₃ given instead by
g₂(t) = [∫₀¹ ω((1 − ϑ)g₁(t)t) dϑ / (1 − ω₀(g₁(t)t)) + ((1 + ∫₀¹ ω₀(ϑg₁(t)t) dϑ)(ω̄(t) + 2ω((1 + g₁(t))t/2))) / ((1 − ω₀(g₁(t)t))(1 − q(t)))] g₁(t),
q(t) = ¼ (ω₀(t) + 2ω₀((1 + g₁(t))t/2) + ω₀(g₁(t)t)),
q₁(t) = ¼ (ω₀(t) + 2ω₀((1 + g₂(t))t/2) + ω₀(g₂(t)t))
and
g₃(t) = [∫₀¹ ω((1 − ϑ)g₂(t)t) dϑ / (1 − ω₀(g₂(t)t)) + ((1 + ∫₀¹ ω₀(ϑg₂(t)t) dϑ)(ω₁(t) + 2ω((1 + g₂(t))t/2))) / ((1 − ω₀(g₂(t)t))(1 − q₁(t)))] g₂(t),
where
ω₁(t) = ω₀(t) + ω₀(g₂(t)t) or ω((1 + g₂(t))t).
This time, the RC r̄ is again provided by the Formula (5), but with the new functions g₂ and g₃. Then, similarly, under the conditions (H1)–(H4) with d = r̄, it follows in turn that
‖(4F′(x*))⁻¹(Aₙ − 4F′(x*))‖ ≤ ¼ [‖F′(x*)⁻¹(F′(xₙ) − F′(x*))‖ + 2‖F′(x*)⁻¹(F′((xₙ + yₙ)/2) − F′(x*))‖ + ‖F′(x*)⁻¹(F′(yₙ) − F′(x*))‖] ≤ q(lₙ) = qₙ < 1,
‖Aₙ⁻¹F′(x*)‖ ≤ 1 / (1 − qₙ),
‖F′(x*)⁻¹(Aₙ − 4F′(yₙ))‖ ≤ ‖F′(x*)⁻¹(F′(xₙ) − F′(yₙ))‖ + 2‖F′(x*)⁻¹(F′((xₙ + yₙ)/2) − F′(yₙ))‖ ≤ ω̄ₙ + 2ω(‖yₙ − xₙ‖/2),
‖F′(x*)⁻¹F(yₙ)‖ = ‖∫₀¹ F′(x*)⁻¹ [F′(x* + ϑ(yₙ − x*)) − F′(x*) + F′(x*)] dϑ (yₙ − x*)‖ ≤ (1 + ∫₀¹ ω₀(ϑ‖yₙ − x*‖) dϑ) ‖yₙ − x*‖,
zₙ − x* = yₙ − x* − F′(yₙ)⁻¹F(yₙ) + F′(yₙ)⁻¹(Aₙ − 4F′(yₙ))Aₙ⁻¹F(yₙ),
‖zₙ − x*‖ ≤ [∫₀¹ ω((1 − ϑ)‖yₙ − x*‖) dϑ / (1 − ω₀(‖yₙ − x*‖)) + ((1 + ∫₀¹ ω₀(ϑ‖yₙ − x*‖) dϑ)(ω̄ₙ + 2ω(‖yₙ − xₙ‖/2))) / ((1 − ω₀(‖yₙ − x*‖))(1 − qₙ))] ‖yₙ − x*‖ ≤ g₂(lₙ)lₙ ≤ lₙ,
xₙ₊₁ − x* = zₙ − x* − F′(zₙ)⁻¹F(zₙ) + F′(zₙ)⁻¹(Bₙ − 4F′(zₙ))Bₙ⁻¹F(zₙ),
lₙ₊₁ ≤ [∫₀¹ ω((1 − ϑ)‖zₙ − x*‖) dϑ / (1 − ω₀(‖zₙ − x*‖)) + ((1 + ∫₀¹ ω₀(ϑ‖zₙ − x*‖) dϑ)(ω̄ₙ¹ + 2ω(‖zₙ − xₙ‖/2))) / ((1 − ω₀(‖zₙ − x*‖))(1 − qₙ¹))] ‖zₙ − x*‖ ≤ g₃(lₙ)lₙ ≤ lₙ,
where qₙ¹ = q₁(lₙ) and
ω̄ₙ¹ = ω₀(lₙ) + ω₀(‖zₙ − x*‖) or ω(lₙ + ‖zₙ − x*‖).
Therefore, under the above-mentioned changes, the conclusions of Theorem 1 hold for the method (3). The results of Proposition 1 obviously also apply to the method (3). Therefore, we can provide the corresponding result for the method (3).
Theorem 2.
Assume that the conditions (H1)–(H4) hold for d = r̄ and that the initial guess x₀ ∈ U(x*, r̄). Then, the following assertion holds:
limₙ→∞ xₙ = x*.
Proof. 
It follows from Theorem 1 under the preceding changes. □
Remark 1.
Under the conditions ( H 1 )–( H 4 ), we can set ρ 2 = r or ρ 2 = r ¯ in Proposition 1 depending on which method is used.

3. SLA

If the role of x* is replaced by x₀ in the calculations of the previous section, one can develop the SLA utilizing majorizing sequences. These sequences are defined, for some λ ≥ 0, respectively, by t₀ = 0, s₀ = λ,
pₙ = p(tₙ) = ½ (ω₀(tₙ) + ω₀(sₙ)),
uₙ = sₙ + (ω(sₙ − tₙ)(sₙ − tₙ)) / ((1 − ω₀(tₙ))(1 − pₙ)),
aₙ = (1 + ∫₀¹ ω₀(tₙ + ϑ(uₙ − tₙ)) dϑ)(uₙ − tₙ) + (1 + ω₀(tₙ))(sₙ − tₙ),
tₙ₊₁ = uₙ + ½ (3(ω(sₙ − tₙ)/(1 − ω₀(tₙ)))² + 2(ω(sₙ − tₙ)/(1 − ω₀(tₙ))) + 2) aₙ / (1 − ω₀(tₙ)),
μₙ₊₁ = ∫₀¹ ω((1 − ϑ)(tₙ₊₁ − tₙ)) dϑ (tₙ₊₁ − tₙ) + (1 + ω₀(tₙ))(tₙ₊₁ − sₙ),
sₙ₊₁ = tₙ₊₁ + μₙ₊₁ / (1 − ω₀(tₙ₊₁))   (14)
and
ψ(tₙ) = ¼ (ω₀(tₙ) + 2ω₀((sₙ − tₙ)/2) + ω₀(sₙ)),
uₙ = sₙ + (4 ∫₀¹ ω((1 − ϑ)(sₙ − tₙ)) dϑ (sₙ − tₙ)) / (1 − ψ(tₙ)),
bₙ = (1 + ∫₀¹ ω₀(sₙ + ϑ(uₙ − sₙ)) dϑ)(uₙ − sₙ) + ∫₀¹ ω((1 − ϑ)(sₙ − tₙ)) dϑ (sₙ − tₙ),
ψ₁(tₙ) = ¼ (ω₀(tₙ) + 2ω₀((tₙ + uₙ)/2) + ω₀(uₙ)),
tₙ₊₁ = uₙ + 4bₙ / (1 − ψ₁(tₙ)),
sₙ₊₁ = tₙ₊₁ + μₙ₊₁ / (1 − ω₀(tₙ₊₁)).   (15)
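Once ω₀, ω and λ are fixed, these scalar recursions are straightforward to iterate, which is how the conditions (16) and (17) below can be checked in practice. The following is a minimal sketch of our own for (15); the analogous iterator for (14) differs only in the middle lines.

```python
def integral(f, n=401):
    # composite trapezoidal rule on [0, 1]
    h = 1.0 / (n - 1)
    v = [f(i * h) for i in range(n)]
    return h * (sum(v) - 0.5 * (v[0] + v[-1]))

def majorizing_15(w0, w, lam, N=10):
    """Iterate t_n, s_n, u_n of (15); stop as soon as condition (17) fails."""
    t, s, out = 0.0, lam, []
    for _ in range(N):
        psi = 0.25 * (w0(t) + 2 * w0((s - t) / 2) + w0(s))
        if w0(t) >= 1 or psi >= 1:
            break
        u = s + 4 * integral(lambda th: w((1 - th) * (s - t))) * (s - t) / (1 - psi)
        psi1 = 0.25 * (w0(t) + 2 * w0((t + u) / 2) + w0(u))
        if psi1 >= 1:
            break
        b = (1 + integral(lambda th: w0(s + th * (u - s)))) * (u - s) \
            + integral(lambda th: w((1 - th) * (s - t))) * (s - t)
        t_next = u + 4 * b / (1 - psi1)
        if w0(t_next) >= 1:
            break
        mu = integral(lambda th: w((1 - th) * (t_next - t))) * (t_next - t) \
            + (1 + w0(t)) * (t_next - s)
        out.append((t, s, u))
        t, s = t_next, t_next + mu / (1 - w0(t_next))
    return out
```

Example 4 in Section 4 uses exactly this iterator (see the sketch there).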
These sequences majorize { x n } (see Theorem 3). However, first, we develop some convergence conditions for them.
Lemma 1.
Assume that for each n = 0, 1, 2, …,
ω₀(tₙ) < 1, p(tₙ) < 1 and tₙ ≤ ξ for some ξ ≥ 0.   (16)
Then, the sequence {tₙ} given by the formula (14) is non-decreasing, bounded from above by ξ, and convergent to some ξ* ∈ [0, ξ].
Proof. 
The result is implied immediately from the formula (14) and the condition (16). □
Lemma 2.
Suppose that for each n = 0, 1, 2, …,
ω₀(tₙ) < 1, ψ(tₙ) < 1, ψ₁(tₙ) < 1 and tₙ ≤ ξ₁ for some ξ₁ ≥ 0.   (17)
Then, the sequence {tₙ} given by the formula (15) is bounded from above by ξ₁ and convergent to some ξ₁* ∈ [0, ξ₁].
Proof. 
The result is implied immediately by the formula (15) and the condition (17). □
Remark 2.
A possible choice for the upper bounds ξ or ξ 1 is ρ 0 given in (i) of Section 2.
The following conditions are used for both methods. Suppose:
( C 1 )
There exist an element x₀ ∈ Ω and a parameter λ ≥ 0 with F′(x₀)⁻¹ ∈ L(Y, X) and ‖F′(x₀)⁻¹F(x₀)‖ ≤ λ.
( C 2 )
‖F′(x₀)⁻¹(F′(u) − F′(x₀))‖ ≤ ω₀(‖u − x₀‖) for each u ∈ Ω.
Set Ω₁ = U(x₀, ρ₀) ∩ Ω.
( C 3 )
‖F′(x₀)⁻¹(F′(u₂) − F′(u₁))‖ ≤ ω(‖u₂ − u₁‖) for each u₁, u₂ ∈ Ω₁.
( C 4 )
Conditions (16) and (17) hold for the methods (2) and (3), respectively.
( C 5 )
U[x₀, ξ̄] ⊂ Ω, where ξ̄ = ξ or ξ̄ = ξ₁, depending on which method is used.
Next, we develop the semi-local convergence theorem for the method (2).
Theorem 3.
Under the conditions (C1)–(C5), the sequence {xₙ} generated by the method (2) converges to a solution x* ∈ U[x₀, ξ*] of the equation F(x) = 0.
Proof. 
As in Theorem 1, mathematical induction and the following calculations lead in turn to
zₙ − yₙ = (F′(xₙ) + F′(yₙ))⁻¹(2F′(xₙ) − (F′(xₙ) + F′(yₙ)))F′(xₙ)⁻¹F(xₙ),
‖zₙ − yₙ‖ ≤ (ω(‖yₙ − xₙ‖)‖yₙ − xₙ‖) / ((1 − p(‖xₙ − x₀‖))(1 − ω₀(‖xₙ − x₀‖))) ≤ (ω(sₙ − tₙ)(sₙ − tₙ)) / ((1 − p(tₙ))(1 − ω₀(tₙ))) = uₙ − sₙ,
‖xₙ₊₁ − zₙ‖ ≤ ½ (3(ω(‖yₙ − xₙ‖)/(1 − ω₀(‖xₙ − x₀‖)))² + 2(ω(‖yₙ − xₙ‖)/(1 − ω₀(‖xₙ − x₀‖))) + 2) āₙ / (1 − ω₀(‖xₙ − x₀‖)) ≤ ½ (3(ω(sₙ − tₙ)/(1 − ω₀(tₙ)))² + 2(ω(sₙ − tₙ)/(1 − ω₀(tₙ))) + 2) aₙ / (1 − ω₀(tₙ)) = tₙ₊₁ − uₙ,
F(xₙ₊₁) = F(xₙ₊₁) − F(xₙ) − F′(xₙ)(xₙ₊₁ − xₙ) + F′(xₙ)(xₙ₊₁ − yₙ),
‖F′(x₀)⁻¹F(xₙ₊₁)‖ ≤ ∫₀¹ ‖F′(x₀)⁻¹(F′(xₙ + ϑ(xₙ₊₁ − xₙ)) − F′(xₙ))‖ dϑ ‖xₙ₊₁ − xₙ‖ + ‖F′(x₀)⁻¹(F′(xₙ) − F′(x₀) + F′(x₀))‖ ‖xₙ₊₁ − yₙ‖ ≤ ∫₀¹ ω((1 − ϑ)‖xₙ₊₁ − xₙ‖) dϑ ‖xₙ₊₁ − xₙ‖ + (1 + ω₀(‖xₙ − x₀‖))‖xₙ₊₁ − yₙ‖ ≤ ∫₀¹ ω((1 − ϑ)(tₙ₊₁ − tₙ)) dϑ (tₙ₊₁ − tₙ) + (1 + ω₀(tₙ))(tₙ₊₁ − sₙ) = μₙ₊₁,   (18)
‖yₙ₊₁ − xₙ₊₁‖ ≤ ‖F′(xₙ₊₁)⁻¹F′(x₀)‖ ‖F′(x₀)⁻¹F(xₙ₊₁)‖ ≤ μₙ₊₁ / (1 − ω₀(‖xₙ₊₁ − x₀‖)) ≤ μₙ₊₁ / (1 − ω₀(tₙ₊₁)) = sₙ₊₁ − tₙ₊₁,
‖zₙ − x₀‖ ≤ ‖zₙ − yₙ‖ + ‖yₙ − x₀‖ ≤ uₙ − sₙ + sₙ − t₀ = uₙ ≤ ξ*,
‖xₙ₊₁ − x₀‖ ≤ ‖xₙ₊₁ − zₙ‖ + ‖zₙ − x₀‖ ≤ tₙ₊₁ − uₙ + uₙ − t₀ = tₙ₊₁ ≤ ξ*,
‖yₙ₊₁ − x₀‖ ≤ ‖yₙ₊₁ − xₙ₊₁‖ + ‖xₙ₊₁ − x₀‖ ≤ sₙ₊₁ − tₙ₊₁ + tₙ₊₁ − t₀ = sₙ₊₁ ≤ ξ*,
since
F(zₙ) = F(zₙ) − F(xₙ) + F(xₙ) = ∫₀¹ (F′(xₙ + ϑ(zₙ − xₙ)) − F′(x₀) + F′(x₀)) dϑ (zₙ − xₙ) − (F′(xₙ) − F′(x₀) + F′(x₀))(yₙ − xₙ),
so
‖F′(x₀)⁻¹F(zₙ)‖ ≤ (1 + ∫₀¹ ω₀(‖xₙ − x₀‖ + ϑ‖zₙ − xₙ‖) dϑ)‖zₙ − xₙ‖ + (1 + ω₀(‖xₙ − x₀‖))‖yₙ − xₙ‖ = āₙ ≤ (1 + ∫₀¹ ω₀(tₙ + ϑ(uₙ − tₙ)) dϑ)(uₙ − tₙ) + (1 + ω₀(tₙ))(sₙ − tₙ) = aₙ.
Notice also that ‖y₀ − x₀‖ ≤ λ = s₀ − t₀ ≤ ξ*, so y₀ ∈ U[x₀, ξ*], initiating the induction. Thus, the sequence {xₙ} is fundamental in the Banach space X (since {tₙ} is fundamental, being convergent by the condition (C4)). By letting n → ∞ in (18) and using the continuity of the operator F, we conclude that F(x*) = 0. □
Proposition 2.
Assume:
(i)
∃ a solution y* ∈ U(x₀, ρ₃) of the equation (1) for some ρ₃ > 0.
(ii)
The condition (C2) holds on the ball U(x₀, ρ₃).
(iii)
There exists ρ₄ ≥ ρ₃ such that
∫₀¹ ω₀((1 − θ)ρ₃ + θρ₄) dθ < 1.
Set Ω₂ = U[x₀, ρ₄] ∩ Ω.
Then, the only solution of the equation F ( x ) = 0 in the region Ω 2 is y * .
Proof. 
Define the linear operator G = ∫₀¹ F′(y* + θ(ȳ* − y*)) dθ, where ȳ* ∈ Ω₂ is such that F(ȳ*) = 0. It then follows that
‖F′(x₀)⁻¹(G − F′(x₀))‖ ≤ ∫₀¹ ω₀((1 − θ)‖y* − x₀‖ + θ‖ȳ* − x₀‖) dθ ≤ ∫₀¹ ω₀((1 − θ)ρ₃ + θρ₄) dθ < 1.
Hence, G is invertible, and we deduce that ȳ* = y*. □
Remark 3.
(1)
The parameter ρ₀ can replace ξ* or ξ₁* in Theorem 3.
(2)
Under the conditions of Theorem 3, set ρ₃ = ξ* or ρ₃ = ξ₁* in Proposition 2.
Similarly, for the method (3), we have in turn the estimates
‖zₙ − yₙ‖ ≤ 4‖Aₙ⁻¹F′(x₀)‖ ‖F′(x₀)⁻¹F(yₙ)‖ ≤ (4 ∫₀¹ ω((1 − ϑ)‖yₙ − xₙ‖) dϑ ‖yₙ − xₙ‖) / (1 − ψ(‖xₙ − x₀‖)) ≤ uₙ − sₙ,
‖xₙ₊₁ − zₙ‖ ≤ 4‖Bₙ⁻¹F′(x₀)‖ ‖F′(x₀)⁻¹F(zₙ)‖ ≤ 4bₙ / (1 − ψ₁(‖xₙ − x₀‖)) ≤ tₙ₊₁ − uₙ,
and
‖yₙ₊₁ − xₙ₊₁‖ ≤ μₙ₊₁ / (1 − ω₀(‖xₙ₊₁ − x₀‖)) ≤ sₙ₊₁ − tₙ₊₁.
Thus, the conclusions of Theorem 3 and Proposition 2 hold for the method (3), with (15) and (17) replacing (14) and (16), respectively.
Theorem 4.
Under the conditions (C1)–(C5), the sequence {xₙ} provided by the method (3) converges to a solution x* ∈ U[x₀, ξ₁*] of the equation (1).
Proof. 
See Theorem 3 under the preceding changes. □

4. Numerical Examples

Example 1.
Let X = Y = ℝ. Define the function F on Ω = [−1, 1] by
F(x) = eˣ − 1.
Then, x* = 0 is a root of F. The conditions (H1)–(H4) are satisfied for ω₀(t) = (e − 1)t, ρ₀ = 1/(e − 1) ≈ 0.581977, Ω₀ = U(x*, ρ₀) ∩ Ω and ω(t) = et. The radii obtained are given in Table 1.
Example 2.
We define the function F(x) = sin x on Ω, where X = Y = Ω = ℝ. We have F′(x) = cos x, and x* = 0 is a solution of F(x) = 0. The conditions (H1)–(H4) are validated for ω₀(t) = ω(t) = t. The RC are given in Table 1.
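This example can be checked with the radius sketch from Section 2 simply by setting unit slopes:

```python
L0, L = 1.0, 1.0   # Example 2: w0(t) = w(t) = t
# re-running the Section 2 sketch then gives r1 ≈ 0.666667 and r2 ≈ 0.390253 (Table 1)
```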
Example 3.
Consider the system of differential equations
F₁′(v₁) = e^{v₁}, F₂′(v₂) = (e − 1)v₂ + 1, F₃′(v₃) = 1
subject to the initial conditions F₁(0) = F₂(0) = F₃(0) = 0. Let F = (F₁, F₂, F₃), X = Y = ℝ³ and Ω = U[0, 1]. Then, x* = (0, 0, 0)ᵀ solves (1). The function F on Ω for v = (v₁, v₂, v₃)ᵀ is given by
F(v) = (e^{v₁} − 1, ((e − 1)/2)v₂² + v₂, v₃)ᵀ.
Then, the Fréchet derivative is given by the diagonal matrix
F′(v) = diag(e^{v₁}, (e − 1)v₂ + 1, 1).
Therefore, by the definition of F, we have F′(x*) = I. Then, the conditions (H1)–(H4) are satisfied for ω₀(t) = (e − 1)t, ρ₀ = 1/(e − 1) ≈ 0.581977, Ω₀ = U(x*, ρ₀) ∩ Ω and ω(t) = e^{1/(e−1)}t. The radii are listed in Table 2.
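As a quick sanity check, this system can be run through the method_2 sketch from Section 1 (the starting point x₀ below is our own choice, taken inside the ball of radius r = 0.140272):

```python
import numpy as np

F = lambda v: np.array([np.exp(v[0]) - 1.0,
                        0.5 * (np.e - 1.0) * v[1] ** 2 + v[1],
                        v[2]])
dF = lambda v: np.diag([np.exp(v[0]), (np.e - 1.0) * v[1] + 1.0, 1.0])

x0 = np.array([0.05, 0.05, 0.05])   # ||x0 - x*|| ~ 0.0866 < 0.140272
print(method_2(F, dF, x0))          # should converge to (0, 0, 0)
```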
Example 4.
Let X = Y = ℝ. Define the function F on Ω = U(x₀, 1 − α), for some α ∈ [0, 1), by
F(x) = x³ − α.
Fix x₀ = 1. Then, the conditions (C1)–(C3) are satisfied for λ = (1 − α)/3, ω₀(t) = (3 − α)t, Ω₁ = U(x₀, 1/(3 − α)) ∩ Ω and ω(t) = 2(1 + 1/(3 − α))t. Choose ξ̄ = 1/(3 − α). The conditions of Lemmas 1 and 2 are verified in Table 3 and Table 4, respectively; a numerical sketch follows.
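The value of α behind Tables 3 and 4 is not stated explicitly in the text; the choice α = 0.95 appears to reproduce Table 4 closely, and we use it below for illustration, reusing majorizing_15 from the sketch in Section 3:

```python
alpha = 0.95                            # our inference; reproduces Table 4 closely
lam = (1 - alpha) / 3
w0 = lambda t: (3 - alpha) * t
w = lambda t: 2 * (1 + 1 / (3 - alpha)) * t

for t, s, u in majorizing_15(w0, w, lam, N=9):
    print(t)      # 0, 0.0271357, 0.0455929, ... as in the t_n row of Table 4
xi_bar = 1 / (3 - alpha)
# every t_n stays below xi_bar = 0.4878..., so condition (17) holds with xi_1 = xi_bar
```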
Hence, we can observe that the conditions (C4) and (C5) hold for both methods (2) and (3). Thus, the conclusions of Theorem 3 and Theorem 4 hold for (2) and (3), respectively; i.e., the sequence {xₙ} produced by the method (2) (or (3)) converges to x* ∈ U[x₀, ξ*] (or U[x₀, ξ₁*]).
Example 5.
Let X = Y = ℝ⁵, Ω = U[0, 1] and consider the system of five equations defined by
∑_{j=1, j≠i}^{5} xⱼ − e^{−xᵢ} = 0, 1 ≤ i ≤ 5,
where x* = (0.20388835470224016, …, 0.20388835470224016)ᵀ (all five components equal) is a root.
Choose x₀ = (0.3, 0.3, 0.3, 0.3, 0.3)ᵀ. Then, the error estimates are given in Table 5.
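The first columns of Table 5 can be roughly reproduced with the sketches from Section 1; the encoding below (and the use of the max-norm) is our reading of the example. Note that double precision bottoms out near 10⁻¹⁶, so the smaller entries of Table 5 were evidently computed in higher-precision arithmetic.

```python
import numpy as np

def F(x):
    # F_i(x) = sum_{j != i} x_j - exp(-x_i), i = 1, ..., 5
    return (np.sum(x) - x) - np.exp(-x)

def dF(x):
    # Jacobian: 1 off the diagonal, exp(-x_i) on the diagonal
    return np.ones((5, 5)) - np.eye(5) + np.diag(np.exp(-x))

xstar = np.full(5, 0.20388835470224016)
x0 = np.full(5, 0.3)
for name, method in (("(2)", method_2), ("(3)", method_3)):
    errs = [np.max(np.abs(method(F, dF, x0, max_iter=k) - xstar)) for k in (1, 2)]
    print(name, errs)   # should roughly match the first error columns of Table 5
```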
Therefore, we can say that methods (2) and (3) converge to x * .

5. Conclusions

The LCA and SLA for the methods (2) and (3) are validated by applying a generalized Lipschitz-type condition to the first derivative only. A comparison is made between the two convergence balls, which turn out to be very similar in terms of efficiency. This study derives estimates of the convergence balls, measurements of error distances, and existence-uniqueness regions for the solution. Finally, the proposed theoretical results are checked on application problems. In our future research, the process of this article shall be applied to other methods of high convergence order that use inverses of linear operators [1,2,3,4,5,6,7,8].

Author Contributions

Conceptualization, I.K.A., S.R., J.A.J. and J.J.; methodology, I.K.A., S.R., J.A.J. and J.J.; software, I.K.A., S.R., J.A.J. and J.J.; validation, I.K.A., S.R., J.A.J. and J.J.; formal analysis, I.K.A., S.R., J.A.J. and J.J.; investigation, I.K.A., S.R., J.A.J. and J.J.; resources, I.K.A., S.R., J.A.J. and J.J.; data curation, I.K.A., S.R., J.A.J. and J.J.; writing—original draft preparation, I.K.A., S.R., J.A.J. and J.J.; writing—review and editing, I.K.A., S.R., J.A.J. and J.J.; visualization, I.K.A., S.R., J.A.J. and J.J.; supervision, I.K.A., S.R., J.A.J. and J.J.; project administration, I.K.A., S.R., J.A.J. and J.J.; funding acquisition, I.K.A., S.R., J.A.J. and J.J. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
L(X, Y)  the set of linear operators from X to Y
{tₙ}  scalar sequence

References

  1. Cordero, A.; Gómez, E.; Torregrosa, J.R. Efficient high-order iterative methods for solving nonlinear systems and their application on heat conduction problems. Complexity 2017, 2017, 6457532. [Google Scholar] [CrossRef] [Green Version]
  2. Babajee, D.K.R. On the Kung-Traub conjecture for iterative methods for solving quadratic equations. Algorithms 2016, 9, 1. [Google Scholar] [CrossRef] [Green Version]
  3. Sharma, J.R.; Sharma, R.; Kalra, N. A novel family of composite Newton–Traub methods for solving systems of nonlinear equations. Appl. Math. Comput. 2015, 269, 520–535. [Google Scholar] [CrossRef]
  4. Kung, H.; Traub, J.F. Optimal order of one-point and multipoint iteration. J. ACM 1974, 21, 643–651. [Google Scholar] [CrossRef]
  5. Noor, M.A.; Noor, K.I.; Al-Said, E.; Waseem, M. Some new iterative methods for nonlinear equations. Math. Probl. Eng. 2010, 2010, 198943. [Google Scholar] [CrossRef] [Green Version]
  6. Herceg, D.; Herceg, D. A family of methods for solving nonlinear equations. Appl. Math. Comput. 2015, 259, 882–895. [Google Scholar] [CrossRef]
  7. Lotfi, T.; Bakhtiari, P.; Cordero, A.; Mahdiani, K.; Torregrosa, J.R. Some new efficient multipoint iterative methods for solving nonlinear systems of equations. Int. J. Comput. Math. 2015, 92, 1921–1934. [Google Scholar] [CrossRef] [Green Version]
  8. Waseem, M.; Noor, M.A.; Noor, K.I. Efficient method for solving a system of nonlinear equations. Appl. Math. Comput. 2016, 275, 134–146. [Google Scholar] [CrossRef]
  9. Argyros, I.K.; Sharma, D.; Argyros, C.I.; Parhi, S.K.; Sunanda, S.K. Extended iterative schemes based on decomposition for nonlinear models. J. Appl. Math. Comput. 2022, 68, 1485–1504. [Google Scholar] [CrossRef]
  10. Argyros, C.I.; Argyros, I.K.; Regmi, S.; John, J.A.; Jayaraman, J. Semi-Local Convergence of a Seventh Order Method with One Parameter for Solving Non-Linear Equations. Foundations 2022, 2, 827–838. [Google Scholar] [CrossRef]
  11. Argyros, I.K.; Regmi, S.; Shakhno, S.; Yarmola, H. Perturbed Newton Methods for Solving Nonlinear Equations with Applications. Symmetry 2022, 14, 2206. [Google Scholar] [CrossRef]
  12. Argyros, I.K. The Theory and Applications of Iteration Methods, 2nd ed.; CRC Press/Taylor and Francis Publishing Group Inc.: Boca Raton, FL, USA, 2022. [Google Scholar]
  13. John, J.A.; Jayaraman, J.; Argyros, I.K. Local Convergence of an Optimal Method of Order Four for Solving Non-Linear System. Int. J. Appl. Comput. Math. 2022, 8, 194. [Google Scholar] [CrossRef]
  14. Hammad, H.; Cholamjiak, W.; Yambangwai, D.; Dutta, H. A modified shrinking projection methods for numerical reckoning fixed points of G-nonexpansive mappings in Hilbert spaces with graphs. Miskolc Math. Notes 2019, 20, 941–956. [Google Scholar] [CrossRef]
  15. Hammad, H.A.; Rehman, H.U.; De la Sen, M. Shrinking projection methods for accelerating relaxed inertial Tseng-type algorithm with applications. Math. Probl. Eng. 2020, 2020, 7487383. [Google Scholar] [CrossRef]
  16. Tuyen, T.M.; Hammad, H.A. Effect of shrinking projection and CQ-methods on two inertial forward–backward algorithms for solving variational inclusion problems. In Rendiconti del Circolo Matematico di Palermo Series 2; Springer: Cham, Switzerland, 2021; pp. 1–15. [Google Scholar]
  17. Abro, H.A.; Shaikh, M.M. A new time-efficient and convergent nonlinear solver. Appl. Math. Comput. 2019, 355, 516–536. [Google Scholar] [CrossRef]
  18. Kantorovich, L.V.; Akilov, G.P. Functional Analysis in Normed Spaces; Pergamon Press: Oxford, UK, 1964. [Google Scholar]
Table 1. Radii for Examples 1 and 2.

         Example 1                      Example 2
         Method (2)     Method (3)     Method (2)     Method (3)
r₁       0.324947       0.324947       0.666667       0.666667
r₂       0.201201       0.196671       0.390253       0.444428
r₃       0.123369       0.16627        0.240567       0.373689
r, r̄     r = 0.123369   r̄ = 0.16627    r = 0.240567   r̄ = 0.373689
Table 2. Radii for Example 3.

         Method (2)     Method (3)
r₁       0.382692       0.382692
r₂       0.224974       0.242274
r₃       0.140272       0.205931
r, r̄     r = 0.140272   r̄ = 0.205931
Table 3. Estimates for method (2).

n         0          1          2          3          4          5           6          7
ω₀(tₙ)    0          0.0274273  0.060139   0.147707   0.1489201  0.1498202   0.1498211  0.1498211
p(tₙ)     0.0153889  0.0325636  0.0728308  0.182755   0.1847021  0.1848412   0.1848423  0.1848423
tₙ        0          0.013713   0.030068   0.0738499  0.0740089  0.07412632  0.0742212  0.0742212

ξ* = 0.0742212.
Table 4. Estimates for method (3).

n         0          1          2          3          4          5          6          7          8
ω₀(tₙ)    0          0.0556283  0.0934655  0.114921   0.123201   0.124746   0.124812   0.124813   0.124813
ψ(tₙ)     0.0170833  0.0403661  0.0547196  0.0610351  0.0623482  0.0624062  0.0624063  0.0624063  0.0624063
ψ₁(tₙ)    0.0188072  0.0691335  0.101844   0.118574   0.123952   0.124779   0.124813   0.124813   0.124813
tₙ        0          0.0271357  0.0455929  0.0560588  0.0600982  0.0608519  0.0608841  0.0608842  0.0608842

ξ₁* = 0.0608842.
Table 5. Error estimates for Example 5.

Methods       ‖x₀ − x*‖        ‖x₁ − x*‖        ‖x₂ − x*‖        ‖x₃ − x*‖
Method (2)    9.61116 × 10⁻²   5.38484 × 10⁻⁴   1.7643 × 10⁻¹⁷   2.18261 × 10⁻⁹⁸
Method (3)    9.61116 × 10⁻²   4.29875 × 10⁻⁵   4.45051 × 10⁻²⁵  5.4804 × 10⁻¹⁴⁵