Article

Generalized Iterative Method of Order Four with Divided Differences

1 Department of Mathematics, University of Houston, Houston, TX 77204, USA
2 Department of Computing and Mathematical Sciences, Cameron University, Lawton, OK 73505, USA
3 Department of Mathematics, Hans Raj Mahila Mahavidyalaya, Jalandhar 144008, Punjab, India
* Author to whom correspondence should be addressed.
Foundations 2023, 3(3), 561-572; https://doi.org/10.3390/foundations3030033
Submission received: 5 August 2023 / Revised: 3 September 2023 / Accepted: 5 September 2023 / Published: 7 September 2023
(This article belongs to the Section Mathematical Sciences)

Abstract

Numerous applications from diverse disciplines are formulated as an equation or a system of equations in abstract spaces such as Euclidean multidimensional, Hilbert, or Banach spaces, to mention a few. Researchers worldwide are developing methodologies to handle the solutions of such equations. A plethora of these equations are not differentiable, although the methodologies can also be applied to solve differentiable equations. A particular method is utilized as a sample via which the methodology is described; the same methodology can be used on other methods utilizing inverses of linear operators. The problem with the existing approaches to the local convergence of iterative methods is the usage of Taylor series expansions. This way, convergence is shown, but only by assuming the existence of high-order derivatives that do not appear in the iterative methods themselves. Moreover, computable bounds on the error distances are not available in advance. Furthermore, the isolation of a solution of the equation is not discussed either. These concerns reduce the applicability of iterative methods and constitute the motivation for developing this article. The novelty of this article is that it positively addresses all these concerns under weaker convergence conditions. Finally, the more important and harder to study semi-local analysis of convergence is presented using majorizing scalar sequences. Experiments are further performed to demonstrate the theory.

1. Introduction

Suppose H is a Fréchet differentiable operator mapping a Banach space S into itself, and D is an open convex subset of S. Deriving a solution λ ∈ D of a nonlinear equation of the form

H(x) = 0      (1)

has innumerable applications in multiple disciplines of science and engineering. These kinds of problems are formulated as an equation such as (1) using mathematical modeling [1,2,3,4]. These equations may be defined on the real line, on a Euclidean space of finite dimension resulting from the discretization of a boundary value problem, or on a Hilbert or Banach space [2,3,4,5]. Such equations can be found in Section 4 on numerical examples. The solution of (1) is rarely attainable in analytic form. Iterative methods provide a tool by which to handle non-analytic and complex functions, thereby approximating the solution λ of (1). To contend with issues such as slow or no convergence, divergence, and inefficiency, an extensive body of literature can be found on the convergence of iterative methods motivated by algebraic or geometrical considerations [3,4]. As a result, researchers all around the world are persistently endeavoring to create higher-order iterative methods [5,6,7,8,9,10,11,12,13,14,15,16,17,18].
In particular, we examine the convergence of the fourth-order, derivative-free method developed by Sharma et al. [19], defined for all m = 0, 1, 2, … by

u_m = x_m + H(x_m),  G_m = [x_m, u_m; H],
y_m = x_m − G_m⁻¹ H(x_m),
v_m = y_m + H(y_m),  A_m = G_m⁻¹ [y_m, v_m; H],  and
x_{m+1} = y_m − A_m (3I − 2A_m) G_m⁻¹ H(y_m),      (2)

where G_m⁻¹ is the inverse of the first-order divided difference G_m = [x_m, u_m; H] of H, and I is the identity operator. The fourth order of the method (2) is shown provided that S = R^k, by assuming the existence of at least the fifth derivative and by utilizing Taylor series expansions. Hence, the applicability is limited to solving nonlinear Equation (1) with an operator that is that many times differentiable. However, the method may converge even if H⁽⁵⁾ does not exist.
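For a scalar equation, method (2) can be transcribed directly, since the divided difference [a, b; H] reduces to (H(a) − H(b))/(a − b). The sketch below is ours (the test equation t³ − 8 = 0, with root t = 2, is illustrative and not from the paper):

```python
# A minimal scalar sketch of method (2). For H: R -> R the divided
# difference [a, b; H] reduces to (H(a) - H(b)) / (a - b).
def method2_scalar(H, x, tol=1e-12, max_iter=100):
    for _ in range(max_iter):
        Hx = H(x)
        if abs(Hx) < tol:
            break
        u = x + Hx                          # u_m = x_m + H(x_m)
        G = (Hx - H(u)) / (x - u)           # G_m = [x_m, u_m; H]
        y = x - Hx / G                      # y_m = x_m - G_m^{-1} H(x_m)
        Hy = H(y)
        if Hy == 0.0:                       # guard: divided difference undefined
            return y
        v = y + Hy                          # v_m = y_m + H(y_m)
        A = ((Hy - H(v)) / (y - v)) / G     # A_m = G_m^{-1} [y_m, v_m; H]
        x = y - A * (3 - 2 * A) * Hy / G    # x_{m+1}
    return x

root = method2_scalar(lambda t: t**3 - 8, 2.5)
print(root)  # ≈ 2.0
```

Note that the iteration uses only values of H, never a derivative, matching the derivative-free character of (2).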
For instance, consider D to be the interval [−5/2, 2] and the function H defined on D as

H(t) = t³ log(π² t²) + t⁵ sin(1/t) for t ≠ 0, and H(0) = 0.
Clearly, H‴(t) is not continuous at t = 0. As a result, although the method converges, the convergence of method (2) to the solution t* = 1/π cannot be assured using the results in [19].
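Method (2) can still be run on this example; the point above is only that the Taylor-based analysis of [19] does not apply. A scalar transcription of (2), with [a, b; H] = (H(a) − H(b))/(a − b) and with a starting point of our choosing near 1/π, converges to t* = 1/π:

```python
import math

# The non-differentiable-at-zero example, with solution t* = 1/pi.
def H(t):
    if t == 0:
        return 0.0
    return t**3 * math.log(math.pi**2 * t**2) + t**5 * math.sin(1 / t)

def method2_scalar(H, x, tol=1e-12, max_iter=100):
    # Scalar transcription of method (2): [a, b; H] = (H(a) - H(b)) / (a - b).
    for _ in range(max_iter):
        Hx = H(x)
        if abs(Hx) < tol:
            break
        u = x + Hx
        G = (Hx - H(u)) / (x - u)
        y = x - Hx / G
        Hy = H(y)
        if Hy == 0.0:
            return y
        v = y + Hy
        A = ((Hy - H(v)) / (y - v)) / G
        x = y - A * (3 - 2 * A) * Hy / G
    return x

root = method2_scalar(H, 0.33)  # x_0 = 0.33 is our illustrative choice
print(abs(root - 1 / math.pi))  # the iterates approach t* = 1/pi
```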
Moreover, notice that the method (2) contains no derivatives. The aforementioned limitations and the ones listed below constitute the motivation for developing this article.
Motivation
( C 1 )
A priori upper bounds on ‖x_m − λ‖ are not given, where λ ∈ D is a solution of Equation (1). Hence, the number of iterations to be performed to reach a predecided error tolerance is not known in advance.
( C 2 )
The initial guess x_0 is a “shot in the dark”, and no information is available on the uniqueness of the solution.
( C 3 )
The convergence of the method is not assured (although it may converge to λ) if at least H⁽⁵⁾ does not exist.
( C 4 )
The results are limited to the case S = R^k only.
( C 5 )
The semi-local convergence, which is more interesting than the local convergence, is not given in [19].
( C 6 )
The same concerns exist for numerous other methods with no derivatives [17,19].
Novelty
All these limitations are positively addressed in the present article. In particular, the local convergence relies on the general concept of ω-continuity [2,5,9] and uses only information from the operators appearing in the method. Moreover, the semi-local convergence, not provided in the aforementioned studies, is developed using majorizing sequences [2,5].
The novelty of the article lies in the fact that the process leading to the aforementioned benefits does not rely on the particular method (2); it can be utilized on other methods involving inverses of linear operators in an analogous manner. Notice that the efficiency and computational benefits of the method have been discussed in [19], so these aspects are not repeated in the present article.
The article is structured as follows: the local convergence in Section 2 is followed by the semi-local convergence in Section 3. The numerical applications and concluding remarks, which appear in Section 4 and Section 5, respectively, complete the article.

2. Local Analysis

The assumptions required are listed below, where Q = [0, +∞).
( E 1 )
There exist functions g_1: Q → Q and φ_0: Q × Q → Q which are continuous as well as non-decreasing (FCN) such that the equation φ_0(t, g_1(t)) − 1 = 0 has a minimal positive solution, called ρ. Define the set Q_0 = [0, ρ).
( E 2 )
There exist FCN g_2: Q_0 → Q, φ_3: Q_0 → Q, φ: Q_0 × Q_0 → Q, φ_1: Q_0 × Q_0 × Q_0 → Q, and φ_2: Q_0 × Q_0 × Q_0 × Q_0 → Q such that the equations h_i(t) − 1 = 0, i = 1, 2, have minimal positive solutions in the interval (0, ρ), denoted by P_i, respectively, where the functions h_i: Q_0 → Q are given by

h_1(t) = φ(t, g_1(t)) / (1 − φ_0(t, g_1(t))),  a(t) = φ_2(t, h_1(t)t, g_1(t), g_2(t)) / (1 − φ_0(t, g_1(t)))

and

h_2(t) = [ φ_1(t, h_1(t)t, g_1(t)) + a(t)(1 + 2a(t))(1 + φ_3(h_1(t)t)) ] h_1(t) / (1 − φ_0(t, g_1(t))).

Define P = min{P_1, P_2}.
( E 3 )
There exist an invertible operator M and a solution λ ∈ D of Equation (1) such that, for all x ∈ D and u = x + H(x),

‖u − λ‖ ≤ g_1(‖x − λ‖)

and

‖M⁻¹([x, u; H] − M)‖ ≤ φ_0(‖x − λ‖, ‖u − λ‖).

Define the set D_0 = D ∩ B(λ, P).
( E 4 )
‖M⁻¹([x, u; H] − [x, λ; H])‖ ≤ φ(‖x − λ‖, ‖u − λ‖),
‖M⁻¹([x, u; H] − [y, λ; H])‖ ≤ φ_1(‖x − λ‖, ‖y − λ‖, ‖u − λ‖),
‖M⁻¹([x, u; H] − [y, v; H])‖ ≤ φ_2(‖x − λ‖, ‖y − λ‖, ‖u − λ‖, ‖v − λ‖),
‖v − λ‖ ≤ g_2(‖y − λ‖), where v = y + H(y) and y = x − [x, u; H]⁻¹ H(x),

and

‖M⁻¹([y, λ; H] − M)‖ ≤ φ_3(‖y − λ‖).
Notice that, by the definition of P, (E1), and (E3),

‖M⁻¹([x, u; H] − M)‖ ≤ φ_0(‖x − λ‖, g_1(‖x − λ‖)) < 1.

Thus, y is well defined, since [x, u; H]⁻¹ exists by a lemma due to Banach for inverses of linear operators [2,4,5].
( E 5 )
B[λ, P̄] ⊆ D, where P̄ = max{P, g_1(P), g_2(P)}.
The developed notation and the assumptions ( E 1 ) ( E 5 ) are required in the main local result of this section for method (2).
Theorem 1.
Under the assumptions (E1)–(E5) and provided that x_0 ∈ B(λ, P) \ {λ}, the sequence {x_m} generated by method (2) is convergent to the solution λ of Equation (1).
Proof. 
By applying the assumptions (E1)–(E3),

‖M⁻¹(G_0 − M)‖ ≤ φ_0(‖x_0 − λ‖, ‖u_0 − λ‖) ≤ φ_0(‖x_0 − λ‖, g_1(‖x_0 − λ‖)) ≤ φ_0(P, g_1(P)) < 1;

thus,

‖G_0⁻¹ M‖ ≤ 1 / (1 − φ_0(‖x_0 − λ‖, ‖u_0 − λ‖)).
The iterate y_0 exists by the first substep of method (2), from which

y_0 − λ = x_0 − λ − G_0⁻¹ H(x_0) = G_0⁻¹(G_0 − [x_0, λ; H])(x_0 − λ).
The assumptions (E2) and (E4), together with the preceding identity and estimates, lead to

‖y_0 − λ‖ ≤ φ(‖x_0 − λ‖, ‖u_0 − λ‖) ‖x_0 − λ‖ / (1 − φ_0(‖x_0 − λ‖, ‖u_0 − λ‖)) ≤ h_1(‖x_0 − λ‖) ‖x_0 − λ‖ ≤ ‖x_0 − λ‖ < P.
Hence, the iterate y_0 ∈ B(λ, P). Notice also that, by the invertibility of G_0, the iterate x_1 exists by the last substep of method (2), and

x_1 − λ = y_0 − λ − A_0 G_0⁻¹ H(y_0) − 2A_0(I − A_0) G_0⁻¹ H(y_0)
= y_0 − λ − (A_0 − I + I) G_0⁻¹ H(y_0) − 2(A_0 − I + I)(I − A_0) G_0⁻¹ H(y_0)
= y_0 − λ − G_0⁻¹ H(y_0) + (I − A_0) G_0⁻¹ H(y_0) − 2(I − A_0) G_0⁻¹ H(y_0) + 2(I − A_0)² G_0⁻¹ H(y_0)
= G_0⁻¹(G_0 − [y_0, λ; H])(y_0 − λ) − (I − A_0) G_0⁻¹ H(y_0) + 2(I − A_0)² G_0⁻¹ H(y_0).
Therefore, by the preceding identity and estimates, it follows that

‖x_1 − λ‖ ≤ [ φ_1(‖x_0 − λ‖, ‖y_0 − λ‖, ‖u_0 − λ‖) + a_0(1 + 2a_0)(1 + φ_3(‖y_0 − λ‖)) ] ‖y_0 − λ‖ / (1 − φ_0(‖x_0 − λ‖, ‖u_0 − λ‖)) ≤ h_2(‖x_0 − λ‖) ‖x_0 − λ‖ ≤ ‖x_0 − λ‖,
where the following calculations are also employed:
H(y_0) = H(y_0) − H(λ) = [y_0, λ; H](y_0 − λ),
‖M⁻¹ H(y_0)‖ ≤ (1 + ‖M⁻¹([y_0, λ; H] − M)‖) ‖y_0 − λ‖ ≤ (1 + φ_3(‖y_0 − λ‖)) ‖y_0 − λ‖ ≤ (1 + φ_3(h_1(‖x_0 − λ‖) ‖x_0 − λ‖)) ‖y_0 − λ‖
and
‖I − A_0‖ ≤ ‖G_0⁻¹ M‖ ‖M⁻¹(G_0 − [y_0, v_0; H])‖ ≤ φ_2(‖x_0 − λ‖, ‖y_0 − λ‖, ‖u_0 − λ‖, ‖v_0 − λ‖) / (1 − φ_0(‖x_0 − λ‖, ‖u_0 − λ‖)) = ā_0 ≤ a_0,

where

a_0 = φ_2(‖x_0 − λ‖, h_1(‖x_0 − λ‖)‖x_0 − λ‖, g_1(‖x_0 − λ‖), g_2(‖x_0 − λ‖)) / (1 − φ_0(‖x_0 − λ‖, g_1(‖x_0 − λ‖))).
From the preceding calculations, repeated for x_m, y_m, and x_{m+1} in place of x_0, y_0, and x_1, respectively, the induction for the estimates

‖y_m − λ‖ ≤ h_1(‖x_m − λ‖) ‖x_m − λ‖ ≤ ‖x_m − λ‖

and

‖x_{m+1} − λ‖ ≤ h_2(‖x_m − λ‖) ‖x_m − λ‖ ≤ ‖x_m − λ‖

is completed. Consequently, there exists

Λ = h_2(‖x_0 − λ‖) ∈ [0, 1) such that

‖x_{m+1} − λ‖ ≤ Λ ‖x_m − λ‖ < P,

resulting in x_{m+1} ∈ B(λ, P) as well as lim_{m→+∞} x_m = λ. □
Remark 1.
The selection of the real functions g_1 and g_2 can be specialized further due to the calculations

u − λ = x − λ + H(x) = x − λ + [x, λ; H](x − λ) = (I + M M⁻¹([x, λ; H] − M + M))(x − λ) = ((I + M) + M M⁻¹([x, λ; H] − M))(x − λ).

Thus, a possible choice for the function g_1 is

g_1(t) = (‖I + M‖ + ‖M‖ φ_3(t)) t.

Similarly, we obtain

v − λ = y − λ + H(y) = ((I + M) + M M⁻¹([y, λ; H] − M))(y − λ),

so

‖v − λ‖ ≤ (‖I + M‖ + ‖M‖ φ_3(‖y − λ‖)) ‖y − λ‖,

and consequently a choice for the function g_2 can be

g_2(t) = (‖I + M‖ + ‖M‖ φ_3(h_1(t) t)) h_1(t) t,

where h_1 is as previously defined in (E2).
The functions can be further specified if the linear operator M is prescribed. A popular choice is M = H′(λ). However, in this case, although method (2) contains no derivatives, it cannot be used to solve non-differentiable equations under the previous assumptions, since λ is assumed to be a simple solution (i.e., H′(λ) is invertible). Thus, M should be chosen so that the functions “φ” are as tight as possible, but not M = H′(λ) in the case of non-differentiable equations.
A domain in which the solution is isolated is specified in the next result.
Proposition 1.
Assume the following: there exists a solution ζ ∈ B(λ, ϱ) of the equation H(x) = 0 for some ϱ > 0; the conditions (E2) and (E3) are validated on the ball B(λ, ϱ); and there exists ϱ_1 ≥ ϱ such that

φ_3(ϱ_1) < 1.

Then, the equation H(x) = 0 is uniquely solvable by λ in the domain D_3 = D ∩ B[λ, ϱ_1].
Proof. 
Define the divided difference [ζ, λ; H]. Then, we obtain

‖M⁻¹([ζ, λ; H] − M)‖ ≤ φ_3(‖ζ − λ‖) ≤ φ_3(ϱ_1) < 1;

thus, [ζ, λ; H] is invertible by the Banach lemma, and

ζ − λ = [ζ, λ; H]⁻¹(H(ζ) − H(λ)) = [ζ, λ; H]⁻¹(0) = 0.

Hence, we conclude ζ = λ. □

A possible choice is ϱ = P.

3. Semi-Local Analysis

The roles of λ and of the “φ”, g_1, and g_2 functions are exchanged with those of the initial point x_0 and the “ψ”, g_3, and g_4 functions defined below.
Assume the following:
( T 1 )
There exist FCN g_3: Q → Q and ψ_0: Q × Q → Q such that the equation ψ_0(t, g_3(t)) − 1 = 0 has a minimal positive solution, denoted by q. Let Q_1 = [0, q). Consider FCN g_4: Q_1 → Q, ψ: Q_1 × Q_1 → Q, ψ_1: Q_1 × Q_1 × Q_1 → Q, and ψ_2: Q_1 × Q_1 × Q_1 × Q_1 → Q. Define, for α_0 = 0 and β_0 ≥ 0, the sequences {α_m} and {β_m} by

b_m = ψ_2(α_m, β_m, g_3(α_m), g_4(β_m)) / (1 − ψ_0(α_m, g_3(α_m))),
c_m = ψ_1(α_m, β_m, g_3(α_m)) (β_m − α_m),
α_{m+1} = β_m + (1 + b_m + 2b_m²) c_m / (1 − ψ_0(α_m, g_3(α_m))),
d_{m+1} = (1 + ψ(α_m, α_{m+1}))(α_{m+1} − α_m) + (1 + ψ_0(α_m, g_3(α_m)))(β_m − α_m)

and

β_{m+1} = α_{m+1} + d_{m+1} / (1 − ψ_0(α_{m+1}, g_3(α_{m+1}))).
( T 2 )
There exists q_0 ∈ [0, q) such that, for all m = 0, 1, 2, …,

ψ_0(α_m, g_3(α_m)) < 1 and α_m ≤ q_0.

It follows from the definition of the sequences and (T2) that

0 ≤ α_m ≤ β_m ≤ α_{m+1} < q_0,

and that there exists q* ∈ [0, q_0] such that lim_{m→+∞} α_m = q*.
( T 3 )
There exist an invertible linear operator M and a point x_0 ∈ D such that, for all x ∈ D_1 = D ∩ B(x_0, q), with y, u, and v as given before,

‖M⁻¹([x, y; H] − M)‖ ≤ ψ(‖x − x_0‖, ‖y − x_0‖),
‖M⁻¹([x, u; H] − M)‖ ≤ ψ_0(‖x − x_0‖, ‖u − x_0‖),
‖M⁻¹([y, x; H] − [x, u; H])‖ ≤ ψ_1(‖x − x_0‖, ‖y − x_0‖, ‖u − x_0‖),
‖M⁻¹([x, u; H] − [y, v; H])‖ ≤ ψ_2(‖x − x_0‖, ‖y − x_0‖, ‖u − x_0‖, ‖v − x_0‖)

and

‖u − x_0‖ ≤ g_3(‖x − x_0‖), ‖v − x_0‖ ≤ g_4(‖y − x_0‖).
It follows via conditions (T1) and (T3) that

‖M⁻¹([x_0, u_0; H] − M)‖ ≤ ψ_0(‖x_0 − x_0‖, ‖u_0 − x_0‖) ≤ ψ_0(0, g_3(‖x_0 − x_0‖)) = ψ_0(0, g_3(0)) < 1.

Thus, G_0⁻¹ exists. Set ‖G_0⁻¹ H(x_0)‖ ≤ β_0.
( T 4 )
B[x_0, q̄] ⊆ D, where q̄ = max{q, g_3(q), g_4(q)}.
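The recurrence in (T1) is easy to tabulate numerically. The sketch below uses illustrative linear choices for the “ψ” and g_3, g_4 functions (our own toy choices, not from the paper, and not guaranteed to satisfy (T2)); it only verifies the ordering 0 ≤ α_m ≤ β_m ≤ α_{m+1}, which holds by construction as long as the denominators stay positive:

```python
# Tabulation of the majorizing recurrence in (T1) with illustrative
# (hypothetical) choices of the psi functions and g3, g4.
L0 = 0.001  # a small illustrative Lipschitz-type constant

psi0 = lambda s, t: L0 * (s + t)
psi = lambda s, t: L0 * (s + t)
psi1 = lambda s, t, r: L0 * (s + t + r)
psi2 = lambda s, t, r, w: L0 * (s + t + r + w)
g3 = lambda t: 2 * t + 0.1
g4 = lambda t: 2 * t + 0.1

alpha, beta = [0.0], [0.1]  # alpha_0 = 0, beta_0 >= 0
for m in range(6):
    a, b = alpha[m], beta[m]
    den = 1 - psi0(a, g3(a))
    bm = psi2(a, b, g3(a), g4(b)) / den
    cm = psi1(a, b, g3(a)) * (b - a)
    a_next = b + (1 + bm + 2 * bm**2) * cm / den
    d_next = (1 + psi(a, a_next)) * (a_next - a) + (1 + psi0(a, g3(a))) * (b - a)
    b_next = a_next + d_next / (1 - psi0(a_next, g3(a_next)))
    alpha.append(a_next)
    beta.append(b_next)

print(all(alpha[m] <= beta[m] <= alpha[m + 1] for m in range(6)))  # True
```

In an actual application, the ψ functions come from the majorant conditions (T3), and the boundedness required by (T2) must be checked for the problem at hand.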
The semi-local analysis of method (2) follows in the next result.
Theorem 2.
Under the assumptions (T1)–(T4), the sequence {x_m} generated by method (2) is convergent to some solution λ ∈ B[x_0, q*] of Equation (1), so that

‖λ − x_m‖ ≤ q* − α_m.
Proof. 
As in the local analysis, we obtain, in turn, the estimates

‖y_0 − x_0‖ = ‖G_0⁻¹ H(x_0)‖ ≤ β_0 = β_0 − α_0 < q,
and, by induction,
x_{m+1} − y_m = −A_m G_m⁻¹ H(y_m) − 2A_m(I − A_m) G_m⁻¹ H(y_m)
= −(A_m − I + I) G_m⁻¹ H(y_m) − 2(A_m − I + I)(I − A_m) G_m⁻¹ H(y_m)
= −G_m⁻¹ H(y_m) + (I − A_m) G_m⁻¹ H(y_m) − 2(I − A_m) G_m⁻¹ H(y_m) + 2(I − A_m)² G_m⁻¹ H(y_m)
= −G_m⁻¹ H(y_m) − (I − A_m) G_m⁻¹ H(y_m) + 2(I − A_m)² G_m⁻¹ H(y_m),
so
‖x_{m+1} − y_m‖ ≤ (1 + b_m + 2b_m²) c_m / (1 − ψ_0(α_m, g_3(α_m))) = α_{m+1} − β_m

and

‖x_{m+1} − x_0‖ ≤ ‖x_{m+1} − y_m‖ + ‖y_m − x_0‖ ≤ α_{m+1} − β_m + β_m − α_0 = α_{m+1} < q,
where we also used
‖I − A_m‖ ≤ ‖G_m⁻¹ M‖ ‖M⁻¹(G_m − [y_m, v_m; H])‖ ≤ ψ_2(‖x_m − x_0‖, ‖y_m − x_0‖, ‖u_m − x_0‖, ‖v_m − x_0‖) / (1 − ψ_0(‖x_m − x_0‖, ‖u_m − x_0‖)) = b̄_m ≤ ψ_2(α_m, β_m, g_3(α_m), g_4(β_m)) / (1 − ψ_0(α_m, g_3(α_m))) = b_m,

H(y_m) = H(y_m) − H(x_m) − [x_m, u_m; H](y_m − x_m) = ([y_m, x_m; H] − [x_m, u_m; H])(y_m − x_m),

and

‖M⁻¹ H(y_m)‖ ≤ ψ_1(‖x_m − x_0‖, ‖y_m − x_0‖, ‖u_m − x_0‖) ‖y_m − x_m‖ = c̄_m ≤ ψ_1(α_m, β_m, g_3(α_m))(β_m − α_m) = c_m.
Moreover, by the first substep of method (2),

H(x_{m+1}) = H(x_{m+1}) − H(x_m) − G_m(y_m − x_m),
so
‖M⁻¹ H(x_{m+1})‖ ≤ (1 + ψ(‖x_m − x_0‖, ‖x_{m+1} − x_0‖)) ‖x_{m+1} − x_m‖ + (1 + ψ_0(‖x_m − x_0‖, ‖u_m − x_0‖)) ‖y_m − x_m‖ = d̄_{m+1} ≤ (1 + ψ(α_m, α_{m+1}))(α_{m+1} − α_m) + (1 + ψ_0(α_m, g_3(α_m)))(β_m − α_m) = d_{m+1}.
Consequently, we obtain
‖y_{m+1} − x_{m+1}‖ ≤ ‖G_{m+1}⁻¹ M‖ ‖M⁻¹ H(x_{m+1})‖ ≤ d̄_{m+1} / (1 − ψ_0(‖x_{m+1} − x_0‖, ‖u_{m+1} − x_0‖)) ≤ d_{m+1} / (1 − ψ_0(α_{m+1}, g_3(α_{m+1}))) = β_{m+1} − α_{m+1}
and
‖y_{m+1} − x_0‖ ≤ ‖y_{m+1} − x_{m+1}‖ + ‖x_{m+1} − x_0‖ ≤ β_{m+1} − α_{m+1} + α_{m+1} − α_0 = β_{m+1} < q.
Thus, the sequence {x_m} is fundamental (Cauchy) in the Banach space S. So, there exists λ = lim_{m→+∞} x_m ∈ B[x_0, q*]. By letting m → +∞ in the estimate for ‖M⁻¹ H(x_{m+1})‖ above, it follows that H(λ) = 0. Then, from the estimate

‖x_{m+k} − x_m‖ ≤ α_{m+k} − α_m,

the error bound of the theorem is obtained, provided that k → +∞. □
Remark 2.
The functions g_3 and g_4 can be determined analogously to the functions g_1 and g_2 as follows:

u − x_0 = x − x_0 + H(x) = (I + [x, x_0; H])(x − x_0) + H(x_0) = [(I + M) + M M⁻¹([x, x_0; H] − M)](x − x_0) + H(x_0).

Assume that there exists an FCN ψ_3: Q_1 → Q such that

‖M⁻¹([x, x_0; H] − M)‖ ≤ ψ_3(‖x − x_0‖)

for all x ∈ D ∩ B(x_0, q). Then, we can choose

g_3(t) = (‖I + M‖ + ‖M‖ ψ_3(t)) t + ‖H(x_0)‖.

Similarly,

g_4(β_m) = (‖I + M‖ + ‖M‖ ψ_3(β_m)) β_m + ‖H(x_0)‖.
The next result determines a region in which the solution is isolated.
Proposition 2. 
Assume the following: there exists a solution ξ ∈ B(x_0, υ) of the equation H(x) = 0 for some υ > 0; the first condition in (T3) is validated on the ball B(x_0, υ); and there exist ϖ ≥ υ and an FCN ψ_4: Q × Q → Q such that

‖M⁻¹([ξ, x; H] − M)‖ ≤ ψ_4(‖ξ − x_0‖, ‖x − x_0‖) and ψ_4(υ, ϖ) < 1.

Let D_4 = D ∩ B[x_0, ϖ]. Then, the only solution of the equation H(x) = 0 in the region D_4 is ξ.
Proof. 
Let Y ∈ D_4 satisfy H(Y) = 0, and suppose Y ≠ ξ, so that the divided difference [ξ, Y; H] can be defined. Then, we obtain, in turn, by the conditions above,

‖M⁻¹([ξ, Y; H] − M)‖ ≤ ψ_4(‖ξ − x_0‖, ‖Y − x_0‖) ≤ ψ_4(υ, ϖ) < 1;

thus, [ξ, Y; H]⁻¹ exists. However, the identity

0 = H(ξ) − H(Y) = [ξ, Y; H](ξ − Y)

then gives ξ − Y = 0, contradicting Y ≠ ξ. Therefore, we conclude that Y = ξ. □
Remark 3. 
(1) The point q can also be replaced by q* in condition (T3).
(2) If the conditions (T1)–(T4) are all validated, we can set ξ = λ and υ = q*.

4. Numerical Examples

A local example is given first. In the second example, method (2) is used to solve a non-differentiable equation.
Example 1. 
Let S = R × R × R and D = B[λ, 1]. Consider the mapping on the ball D given for ρ = (ρ_1, ρ_2, ρ_3)^tr by

H(ρ) = ( ((e − 1)/2) ρ_1² + ρ_1, ρ_2, e^{ρ_3} − 1 )^tr.

The Jacobian is given by

H′(ρ) = diag( (e − 1)ρ_1 + 1, 1, e^{ρ_3} ).

It follows that the solution is λ = (0, 0, 0)^tr and H′(λ) = I, the identity mapping, with the choice M = H′(λ). The divided difference is defined by [x, y; H] = ∫₀¹ H′(x + θ(y − x)) dθ. Then, the conditions (E3) and (E4) are validated provided that
φ_0(u_1, u_2) = ((e − 1)/2)(u_1 + u_2),
φ(u_1, u_2) = ((e − 1)/2) u_1,
φ_1(v_1, v_2, v_3) = (1/2)( e^{1/(e−1)} v_1 + (e − 1) v_2 ),
φ_2(v_1, v_2, v_3, v_4) = (1/2)(v_1 + v_2 + v_3 + v_4)

and

φ_3(u_1) = ((e − 1)/2) u_1.
Then, from the definition of P in (E2), the radius of convergence P is given as

P = min{0.2747721282852604, 0.15403846587561557} = 0.15403846587561557.
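The first of these two radii can be reproduced numerically. Assuming M = I (so that ‖I + M‖ = 2 and ‖M‖ = 1) and the choice g_1(t) = (2 + φ_3(t))t from Remark 1, the equation h_1(t) = 1, i.e., φ(t, g_1(t)) = 1 − φ_0(t, g_1(t)), can be solved by bisection:

```python
import math

# Bisection sketch for the first radius of Example 1, under the
# assumption M = I and the g1 of Remark 1.
a = (math.e - 1) / 2  # the constant (e - 1)/2

phi0 = lambda s, t: a * (s + t)
phi = lambda s, t: a * s
phi3 = lambda t: a * t
g1 = lambda t: (2 + phi3(t)) * t

# f(t) = phi(t, g1(t)) - (1 - phi0(t, g1(t))); f is increasing and
# negative below the radius, positive above it.
f = lambda t: phi(t, g1(t)) - (1 - phi0(t, g1(t)))
lo, hi = 0.0, 0.3
for _ in range(200):
    mid = (lo + hi) / 2
    if f(mid) < 0:
        lo = mid
    else:
        hi = mid

print(lo)  # ≈ 0.2747721282852604
```

In this special case the root is even available in closed form, since f(t) = 4at + a²t² − 1, giving t = (√5 − 2)/a.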
Example 2.
Let Q: S → S be a mapping. Recall that the standard divided difference of order one, when S = R^k, is defined for x̄ = (x_1, x_2, …, x_k), ȳ = (y_1, y_2, …, y_k), and i, j = 1, 2, …, k by

[ȳ, x̄; Q]_{j,i} = ( Q_j(y_1, …, y_{i−1}, y_i, x_{i+1}, …, x_k) − Q_j(y_1, …, y_{i−1}, x_i, x_{i+1}, …, x_k) ) / (y_i − x_i),

provided that y_i ≠ x_i.
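This componentwise definition can be implemented directly. A quick check, with sample points of our own choosing, is the secant property [ȳ, x̄; Q](ȳ − x̄) = Q(ȳ) − Q(x̄), which holds by telescoping:

```python
import numpy as np

# Standard first-order divided difference [y, x; Q] in R^k:
# entry (j, i) = (Q_j(y_1..y_i, x_{i+1}..x_k) - Q_j(y_1..y_{i-1}, x_i..x_k)) / (y_i - x_i)
def divided_difference(Q, y, x):
    k = len(x)
    L = np.empty((k, k))
    for i in range(k):
        hi = np.concatenate([y[:i + 1], x[i + 1:]])
        lo = np.concatenate([y[:i], x[i:]])
        L[:, i] = (Q(hi) - Q(lo)) / (y[i] - x[i])
    return L

# The system of Example 2 below, including its non-differentiable terms.
def Q(t):
    return np.array([t[0]**2 - t[1] + 1 + abs(t[0] - 1) / 9,
                     t[0] + t[1]**2 - 7 + abs(t[1]) / 9])

y = np.array([1.3, 2.1])
x = np.array([0.7, 1.8])
L = divided_difference(Q, y, x)
print(np.allclose(L @ (y - x), Q(y) - Q(x)))  # True: the secant property holds
```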
The solution is sought for the nonlinear system

t_1² − t_2 + 1 + (1/9)|t_1 − 1| = 0,
t_1 + t_2² − 7 + (1/9)|t_2| = 0.
Let Q = (Q_1, Q_2) for (t_1, t_2) ∈ R × R, where

Q_1 = t_1² − t_2 + 1 + (1/9)|t_1 − 1| and Q_2 = t_1 + t_2² − 7 + (1/9)|t_2|.
Then, the system becomes

Q(t) = 0 for t = (t_1, t_2)^tr.

The divided difference L = [·, ·; Q] belongs to the space M_{2×2}(R) of real 2 × 2 matrices and is the standard divided difference in R² [14]. Let us choose x_0 = (4.2, 6.1)^tr. Then, the application of method (2) gives the solution λ = (λ_1, λ_2)^tr after three iterations, where

λ_1 = 1.1593608501934547 and λ_2 = 2.361824342093888.
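A sketch of method (2), built on the standard componentwise divided difference, reproduces the reported solution. The transcription below is ours; it assumes the components of H(x_m) are nonzero before convergence so that the divided differences are defined:

```python
import numpy as np

# Standard first-order divided difference [y, x; H] in R^k.
def divided_difference(H, y, x):
    k = len(x)
    L = np.empty((k, k))
    for i in range(k):
        hi = np.concatenate([y[:i + 1], x[i + 1:]])
        lo = np.concatenate([y[:i], x[i:]])
        L[:, i] = (H(hi) - H(lo)) / (y[i] - x[i])
    return L

# The non-differentiable system of Example 2.
def H(t):
    return np.array([t[0]**2 - t[1] + 1 + abs(t[0] - 1) / 9,
                     t[0] + t[1]**2 - 7 + abs(t[1]) / 9])

def method2(H, x, tol=1e-12, max_iter=100):
    I = np.eye(len(x))
    for _ in range(max_iter):
        Hx = H(x)
        if np.linalg.norm(Hx) < tol:
            break
        u = x + Hx                                            # u_m = x_m + H(x_m)
        G = divided_difference(H, x, u)                       # G_m = [x_m, u_m; H]
        y = x - np.linalg.solve(G, Hx)                        # first substep
        v = y + H(y)                                          # v_m = y_m + H(y_m)
        A = np.linalg.solve(G, divided_difference(H, y, v))   # A_m = G_m^{-1}[y_m, v_m; H]
        x = y - A @ (3 * I - 2 * A) @ np.linalg.solve(G, H(y))
    return x

root = method2(H, np.array([4.2, 6.1]))
print(root)  # ≈ (1.1593608501934547, 2.361824342093888)
```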

5. Concluding Remarks

A methodology is provided that proves the convergence of the iterative method (2) under weaker conditions than before [19]. In particular, this methodology uses only conditions involving the operators appearing in the method, in contrast to earlier approaches that use a fifth derivative not appearing in the method. Moreover, computable upper bounds on the error distances become available; i.e., we can tell in advance how many iterations must be performed in order to obtain a predecided error tolerance. Such information is not available in [19] or in related studies on other methods [6,7,8,9,10,11,12,13,14,15,16]. Furthermore, a computable ball, also not given before, is defined, inside which there is only one solution of the equation. Finally, the more difficult and important semi-local analysis, not dealt with in [19] or in similar studies on other iterative methods [5,17,18], is presented, where the convergence is shown using real majorizing sequences. The same methodology can be applied to other methods utilizing the inverses of linear operators. Furthermore, numerical experiments are performed that demonstrate the theoretical part. In our future research, the methodology shall be used to extend the applicability of multipoint and multi-step methods [2,3,4,17].

Author Contributions

Conceptualization, S.R., I.K.A., and G.D.; methodology, S.R., I.K.A., and G.D.; software, S.R., I.K.A., and G.D.; validation, S.R., I.K.A., and G.D.; formal analysis, S.R., I.K.A., and G.D.; investigation, S.R., I.K.A., and G.D.; resources, S.R., I.K.A., and G.D.; data curation, S.R., I.K.A., and G.D.; writing—original draft preparation, S.R., I.K.A., and G.D.; writing—review and editing, S.R., I.K.A., and G.D.; visualization, S.R., I.K.A., and G.D.; supervision, S.R., I.K.A., and G.D.; project administration, S.R., I.K.A., and G.D.; funding acquisition, S.R., I.K.A., and G.D. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data is contained within the article.

Conflicts of Interest

The authors declare no conflict of interest.

Correction Statement

This article has been republished with a minor correction to Data Availability Statement. This change does not affect the scientific content of the article.

References

  1. Kung, H.T.; Traub, J.F. Optimal order of one-point and multipoint iteration. J. Assoc. Comput. Mach. 1974, 21, 643–651. [Google Scholar] [CrossRef]
  2. Argyros, I.K. The Theory and Applications of Iterative Methods; CRC Press: Boca Raton, FL, USA, 2022. [Google Scholar]
  3. Traub, J.F. Iterative Methods for the Solution of Equations; Prentice-Hall: Englewood Cliffs, NJ, USA, 1964. [Google Scholar]
  4. Ortega, J.M.; Rheinboldt, W.C. Iterative Solution of Nonlinear Equations in Several Variables; Academic Press: New York, NY, USA, 1970. [Google Scholar]
  5. Regmi, S.; Argyros, I.K.; Deep, G.; Rathour, L. A Newton-like Midpoint Method for Solving Equations in Banach Space. Foundations 2023, 3, 154–166. [Google Scholar] [CrossRef]
  6. Cordero, A.; Maimó, J.G.; Torregrosa, J.R.; Vassileva, M.P. Solving nonlinear problems by Ostrowski-Chun type parametric families. J. Math. Chem. 2014, 52, 430–449. [Google Scholar]
  7. Cordero, A.; Torregrosa, J.R. Variants of Newton’s method for using fifth-order quadrature formulas. Appl. Math. Comput. 2007, 190, 686–698. [Google Scholar] [CrossRef]
  8. Deep, G.; Argyros, I.K. Improved Higher Order Compositions for Nonlinear Equations. Foundations 2023, 3, 25–36. [Google Scholar] [CrossRef]
  9. Argyros, I.K.; Deep, G.; Regmi, S. Extended Newton-like Midpoint Method for Solving Equations in Banach Space. Foundations 2023, 3, 82–98. [Google Scholar] [CrossRef]
  10. Hueso, J.L.; Martínez, E.; Teruel, C. Convergence, efficiency and dynamics of new fourth and sixth order families of iterative methods for nonlinear systems. J. Comput. Appl. Math. 2015, 275, 412–420. [Google Scholar] [CrossRef]
  11. Sharma, J.R.; Guna, R.K.; Sharma, R. An efficient fourth order weighted-Newton method for systems of nonlinear equations. Numer. Algor. 2013, 62, 307–323. [Google Scholar] [CrossRef]
  12. Sharma, J.R.; Arora, H. On efficient weighted-Newton methods for solving systems of nonlinear equations. Appl. Math. Comput. 2013, 222, 497–506. [Google Scholar] [CrossRef]
  13. Abad, M.F.; Cordero, A.; Torregrosa, J.R. Fourth and Fifth-order methods for solving nonlinear systems of equations: An application to the global positioning system. Abstr. Appl. Anal. 2013, 2013, 586708. [Google Scholar] [CrossRef]
  14. Grau-Sánchez, M.; Grau, A.; Noguera, M. Frozen divided difference scheme for solving systems of nonlinear equations. J. Comput. Appl. Math. 2011, 235, 1739–1743. [Google Scholar] [CrossRef]
  15. King, R.F. A family of fourth order methods for nonlinear equations. SIAM J. Numer. Anal. 1973, 10, 876–879. [Google Scholar] [CrossRef]
  16. Sharma, R.; Deep, G.; Bahl, A. Design and Analysis of an Efficient Multi step Iterative Scheme for systems of Nonlinear Equations. J. Math. Anal. 2021, 12, 53–71. [Google Scholar]
  17. Sharma, R.; Deep, G. A study of the local convergence of a derivative free method in Banach spaces. J. Anal. 2022, 31, 1257–1269. [Google Scholar] [CrossRef]
  18. Deep, G.; Sharma, R.; Argyros, I.K. On convergence of a fifth-order iterative method in Banach spaces. Bull. Math. Anal. Appl. 2021, 13, 16–40. [Google Scholar]
  19. Sharma, J.R.; Arora, H.; Petković, M.S. An efficient derivative free family of fourth order methods for solving systems of nonlinear equations. Appl. Math. Comput. 2014, 235, 383–393. [Google Scholar] [CrossRef]