
On the Order of Convergence of the Noor–Waseem Method

1 Department of Mathematical & Computational Science, National Institute of Technology Karnataka, Surathkal 575 025, India
2 Department of Computing and Mathematical Sciences, Cameron University, Lawton, OK 73505, USA
* Author to whom correspondence should be addressed.
Mathematics 2022, 10(23), 4544; https://doi.org/10.3390/math10234544
Submission received: 21 October 2022 / Revised: 18 November 2022 / Accepted: 26 November 2022 / Published: 1 December 2022
(This article belongs to the Special Issue Computational Methods in Analysis and Applications 2023)

Abstract

In 2009, Noor and Waseem studied an important third-order iterative method. The convergence order was obtained using Taylor expansion and assumptions on the derivatives of order up to four. In this paper, we obtain convergence order three for this method using assumptions only on the first and second derivatives of the involved operator. Further, we extend the method to obtain fifth- and sixth-order methods. The dynamics of the methods are also provided in this study. Numerical examples are included. The same technique can be used to extend the utilization of other single-step or multistep methods.
MSC:
47H99; 49M15; 65J15; 65D99; 65G99

1. Introduction

Due to its wide applications in various fields, such as engineering [1], the applied sciences [2,3], mathematics [4], medicine and scientific computing [5,6], finding a solution of the nonlinear equation
$$F(x) = 0, \qquad (1)$$
is an important problem in computational mathematics. Here, $F : \Omega \subseteq T \to T_1$ is a Fréchet differentiable operator between Banach spaces $T$ and $T_1$, and $\Omega$ is an open convex set. Since a closed-form solution of (1) is difficult to obtain in general, iterative methods are usually employed to approximate the solution $x^*$ of (1). While studying iterative methods, the order of convergence is an important concern. In this paper, we consider the iterative method studied in [7] by Noor and Waseem. The method in [7] is defined for $n = 0, 1, 2, \ldots$ by
$$\begin{aligned} y_n &= x_n - F'(x_n)^{-1} F(x_n), \\ x_{n+1} &= x_n - 4 A_n^{-1} F(x_n), \end{aligned} \qquad (2)$$
where $A_n = 3F'\left(\frac{2x_n + y_n}{3}\right) + F'(y_n)$.
Noor and Waseem in [7] obtained convergence order three for (2) using Taylor expansion. The analysis in [7] uses assumptions on the derivatives of $F$ of order up to four.
Recall [4,8] that an iterative method is of order $p > 0$ if
$$\|x_{n+1} - x^*\| \le c\,\|x_n - x^*\|^p,$$
where $c$ is called the asymptotic error constant or rate of convergence.
Observe that assumptions on the derivatives of $F$ up to order four reduce the applicability of the method to problems involving operators whose higher-order derivatives are unbounded. For example, let $f : [-\tfrac{1}{2}, \tfrac{3}{2}] \to \mathbb{R}$ be defined by
$$f(x) = \begin{cases} \frac{1}{20}\left(x^4 \log x^2 + x^6 - x^5\right) & \text{if } x \neq 0, \\ 0 & \text{if } x = 0. \end{cases}$$
Then, we obtain by this definition
$$\begin{aligned} f'(x) &= \tfrac{1}{20}\left(2x^3 + 4x^3 \log x^2 + 6x^5 - 5x^4\right), \\ f''(x) &= \tfrac{1}{20}\left(14x^2 + 12x^2 \log x^2 + 30x^4 - 20x^3\right), \\ f'''(x) &= \tfrac{1}{20}\left(52x + 24x \log x^2 + 120x^3 - 60x^2\right), \\ f^{(iv)}(x) &= \tfrac{1}{20}\left(24 \log x^2 + 360x^2 - 120x + 100\right). \end{aligned}$$
Note that the fourth derivative of the function $f$ is unbounded, since $\log x^2 \to -\infty$ as $x \to 0$.
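This blow-up is easy to check numerically. The following small script is our own sketch (not part of the paper); it evaluates the fourth derivative, as reconstructed above, at points approaching $x = 0$:

```python
# A quick numerical check (our sketch, not from the paper) that f''''(x)
# diverges as x -> 0: the term 24*log(x^2) dominates near the origin.
import math

def f4(x: float) -> float:
    # f''''(x) = (1/20) * (24*log(x^2) + 360*x^2 - 120*x + 100)
    return (24.0 * math.log(x ** 2) + 360.0 * x ** 2 - 120.0 * x + 100.0) / 20.0

for x in (1e-1, 1e-3, 1e-6):
    print(f"x = {x:.0e}, f''''(x) = {f4(x):.3f}")
# output: -0.946, -11.585, -28.157 -- decreasing without bound
```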
Later, in [9], the convergence of method (2) was proved using assumptions only on the first derivative of $F$. However, the order of convergence was not obtained in [9].
Since the order of convergence is an important matter, our goal in this paper is to obtain the convergence order of (2) without using higher-order derivatives. In this direction, we obtain convergence order three for (2) using assumptions on the derivatives of $F$ of order up to two; this is a considerable improvement. Note that we do not use Taylor series expansions in our study. Our new idea can be used to study and obtain the convergence order of other similar methods as well [1,2,6,10,11,12,13,14]. We plan to study such methods in the future, since our technique does not depend on the particular method but only on the inverses of the linear operators involved.
Further, we extend the order of method (2) to five and six using the technique of Cordero et al. [1,3]. The new methods are defined for $n = 0, 1, 2, \ldots$ as follows:
$$\begin{aligned} y_n &= x_n - F'(x_n)^{-1} F(x_n), \\ z_n &= x_n - 4 A_n^{-1} F(x_n), \\ x_{n+1} &= z_n - F'(y_n)^{-1} F(z_n) \end{aligned} \qquad (3)$$
and
$$\begin{aligned} y_n &= x_n - F'(x_n)^{-1} F(x_n), \\ z_n &= x_n - 4 A_n^{-1} F(x_n), \\ x_{n+1} &= z_n - F'(z_n)^{-1} F(z_n), \end{aligned} \qquad (4)$$
where, as before, $A_n = 3F'\left(\frac{2x_n + y_n}{3}\right) + F'(y_n)$.
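To make the schemes concrete, here is a minimal NumPy sketch of methods (2), (3) and (4) for systems $F(x) = 0$. It is our own illustration rather than the authors' code; the function names, the `order` switch and the stopping rule are our choices, and the caller supplies the residual `F` and the Jacobian `dF`.

```python
# Minimal sketch of methods (2), (3) and (4); all names are ours, not the paper's.
import numpy as np

def step2(F, dF, x):
    """One step of method (2); also returns the Newton predictor y_n."""
    y = x - np.linalg.solve(dF(x), F(x))            # y_n = x_n - F'(x_n)^{-1} F(x_n)
    A = 3.0 * dF((2.0 * x + y) / 3.0) + dF(y)       # A_n = 3 F'((2x_n + y_n)/3) + F'(y_n)
    return x - 4.0 * np.linalg.solve(A, F(x)), y    # x_{n+1} = x_n - 4 A_n^{-1} F(x_n)

def solve(F, dF, x0, order=3, tol=1e-8, itmax=50):
    """order=3: method (2); order=5: method (3); order=6: method (4)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(itmax):
        z, y = step2(F, dF, x)                      # z_n is the method-(2) iterate
        if order == 5:
            z = z - np.linalg.solve(dF(y), F(z))    # x_{n+1} = z_n - F'(y_n)^{-1} F(z_n)
        elif order == 6:
            z = z - np.linalg.solve(dF(z), F(z))    # x_{n+1} = z_n - F'(z_n)^{-1} F(z_n)
        if np.linalg.norm(z - x) < tol:
            return z
        x = z
    return x
```

Example 4 below uses exactly this pattern for a two-dimensional system.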
The rest of the paper is organized as follows. In Section 2, Section 3 and Section 4, we provide the convergence analysis of the methods in (2), (3) and (4), respectively. Numerical examples are provided in Section 5. The dynamics of the methods (2), (3) and (4) are given in Section 6. Finally, the paper ends with the conclusions in Section 7.

2. Convergence Analysis of (2)

For our convergence analysis, we introduce some functions and scalars. Let $L > 0$, $L_1 > 0$ and $L_2 > 0$ be the given parameters. Let the functions $\varphi, \varphi_1, h_i\ (i = 1, 2) : [0, \tfrac{1}{L}) \to \mathbb{R}$ be defined by
$$\varphi(t) = \frac{L}{2(1 - Lt)},$$
$$\varphi_1(t) = \frac{L}{2}\left(1 + \frac{Lt}{2(1 - Lt)}\right),$$
and let $h_1(t) = \varphi(t)\,t - 1$ and $h_2(t) = \varphi_1(t)\,t - 1$. Then $h_1$ and $h_2$ are nondecreasing and continuous functions. Further, $h_i(0) = -1 < 0$ and $\lim_{t \to (1/L)^-} h_i(t) = +\infty$. Therefore, the equations $h_i(t) = 0$ have smallest zeros $r_1, r_2 \in (0, \tfrac{1}{L})$.
Let the functions $\varphi_2, h_3 : [0, r_2) \to \mathbb{R}$ be defined by
$$\varphi_2(t) = \frac{(6L_1 + L_2\,t)\,\varphi(t) + L_2}{12\,(1 - \varphi_1(t)\,t)}$$
and $h_3(t) = \varphi_2(t)\,t^2 - 1$. Then, $h_3$ is a nondecreasing and continuous function, $h_3(0) = -1 < 0$ and $\lim_{t \to r_2^-} h_3(t) = +\infty$. Therefore, $h_3$ has a smallest zero $r_3 \in (0, r_2)$.
Let
$$r = \min\{r_1, r_3\}.$$
Then, for all $t \in [0, r)$, we have
$$0 \le \varphi(t)\,t < 1,$$
$$0 \le \varphi_1(t)\,t < 1$$
and
$$0 \le \varphi_2(t)\,t^2 < 1.$$
Throughout the paper, $B(x^*, \rho) = \{x \in T : \|x - x^*\| < \rho\}$ and $\bar{B}(x^*, \rho) = \{x \in T : \|x - x^*\| \le \rho\}$ for some $\rho > 0$.
Our analysis is based on the following assumptions:
(a1) $x^*$ is a simple solution of (1) and $F'(x^*)^{-1} \in \mathcal{L}(T_1, T)$;
(a2) $\|F'(x^*)^{-1}(F'(x) - F'(y))\| \le L\,\|x - y\|$ for all $x, y \in B(x^*, r)$;
(a3) $\|F'(x^*)^{-1}(F''(u) - F''(v))\| \le L_2\,\|u - v\|$ for all $u, v \in B(x^*, r)$;
(a4) $\|F'(x^*)^{-1} F''(y)\| \le L_1$ for all $y \in B(x^*, r)$;
(a5) $\|F'(u)^{-1}(F'(u) - F'(v))\| \le L_3\,\|u - v\|$ for all $u, v \in B(x^*, r)$.
 Theorem 1. 
Suppose the conditions (a1)–(a4) hold. Then, the sequence $\{x_n\}$ defined by (2) and starting from $x_0 \in B(x^*, r) \setminus \{x^*\}$ is well defined, remains in $\bar{B}(x^*, r)$ for $n = 0, 1, 2, \ldots$ and converges to the solution $x^*$ of (1). Moreover, the following estimates hold:
$$\|y_n - x^*\| \le \varphi(r)\,\|x_n - x^*\|^2 \qquad (9)$$
and
$$\|x_{n+1} - x^*\| \le \varphi_2(r)\,\|x_n - x^*\|^3. \qquad (10)$$
Proof. 
The proof is by induction. Suppose $x \in B(x^*, r)$. Then, by (a2), we have
$$\|F'(x^*)^{-1}(F'(x) - F'(x^*))\| \le L\,\|x - x^*\| < Lr < 1.$$
By the Banach Lemma on invertible operators [4], we have
$$\|F'(x)^{-1} F'(x^*)\| \le \frac{1}{1 - L\,\|x - x^*\|}. \qquad (11)$$
Using the Mean Value Theorem, we have
$$F(x_0) = F(x_0) - F(x^*) = \int_0^1 F'(x^* + t(x_0 - x^*))\,dt\,(x_0 - x^*). \qquad (12)$$
Next, since $y_0 = x_0 - F'(x_0)^{-1} F(x_0)$, we have
$$\begin{aligned} y_0 - x^* &= x_0 - x^* - F'(x_0)^{-1} F(x_0) \\ &= (x_0 - x^*) - F'(x_0)^{-1} \int_0^1 F'(x^* + t(x_0 - x^*))\,dt\,(x_0 - x^*) \\ &= \int_0^1 F'(x_0)^{-1}\big(F'(x_0) - F'(x^* + t(x_0 - x^*))\big)\,dt\,(x_0 - x^*), \end{aligned}$$
so that
$$\|y_0 - x^*\| \le \int_0^1 \big\|F'(x_0)^{-1} F'(x^*)\,F'(x^*)^{-1}\big(F'(x_0) - F'(x^* + t(x_0 - x^*))\big)\big\|\,dt\,\|x_0 - x^*\|.$$
Thus, by (11) and (a2), we obtain
$$\|y_0 - x^*\| \le \frac{L}{2(1 - L\,\|x_0 - x^*\|)}\,\|x_0 - x^*\|^2 = \varphi(\|x_0 - x^*\|)\,\|x_0 - x^*\|^2 < \|x_0 - x^*\| < r.$$
Thus, the iterate $y_0 \in B(x^*, r)$ and (9) holds for $n = 0$.
Next, we show that $A_0^{-1}$ is well defined. Note that
$$4F'(x^*) - A_0 = 3\left(F'(x^*) - F'\left(\tfrac{2x_0 + y_0}{3}\right)\right) - \left(F'(y_0) - F'(x^*)\right).$$
Hence, by (a2) we get
$$\begin{aligned} \big\|(4F'(x^*))^{-1}(A_0 - 4F'(x^*))\big\| &\le \frac{1}{4}\left[3\left\|F'(x^*)^{-1}\left(F'\left(\tfrac{2x_0 + y_0}{3}\right) - F'(x^*)\right)\right\| + \left\|F'(x^*)^{-1}(F'(y_0) - F'(x^*))\right\|\right] \\ &\le \frac{L}{4}\left[3\left\|\tfrac{2x_0 + y_0}{3} - x^*\right\| + \|y_0 - x^*\|\right] \\ &\le \frac{L}{4}\left[\big(2\,\|x_0 - x^*\| + \|y_0 - x^*\|\big) + \|y_0 - x^*\|\right] \\ &\le \frac{L}{4}\left[2 + \frac{2L}{2(1 - L\,\|x_0 - x^*\|)}\,\|x_0 - x^*\|\right]\|x_0 - x^*\| \\ &= \frac{L}{2}\left[1 + \frac{L}{2(1 - L\,\|x_0 - x^*\|)}\,\|x_0 - x^*\|\right]\|x_0 - x^*\| \\ &= \varphi_1(\|x_0 - x^*\|)\,\|x_0 - x^*\| < 1. \end{aligned}$$
Therefore,
$$\|A_0^{-1} F'(x^*)\| \le \frac{1}{4\,(1 - \varphi_1(\|x_0 - x^*\|)\,\|x_0 - x^*\|)}. \qquad (14)$$
Let
$$G_0 = F''\left(x^* + t(x_0 - x^*) + \theta\left(\tfrac{2x_0 + y_0}{3} - x^* - t(x_0 - x^*)\right)\right)$$
and
$$H_0 = F''\left(x^* + t(x_0 - x^*) + \theta\left(y_0 - x^* - t(x_0 - x^*)\right)\right).$$
Then, by (2) and (12), we have
$$\begin{aligned} x_1 - x^* &= x_0 - x^* - 4A_0^{-1}\int_0^1 F'(x^* + t(x_0 - x^*))\,dt\,(x_0 - x^*) \\ &= A_0^{-1}\left(A_0 - 4\int_0^1 F'(x^* + t(x_0 - x^*))\,dt\right)(x_0 - x^*) \\ &= A_0^{-1}\left[\,3\int_0^1\left(F'\left(\tfrac{2x_0 + y_0}{3}\right) - F'(x^* + t(x_0 - x^*))\right)dt + \int_0^1\left(F'(y_0) - F'(x^* + t(x_0 - x^*))\right)dt\,\right](x_0 - x^*). \end{aligned}$$
Then, by the Mean Value Theorem for second derivatives, we can write in turn that
$$\begin{aligned} x_1 - x^* &= A_0^{-1}\bigg[\,3\int_0^1\!\!\int_0^1 G_0\,d\theta\left(\tfrac{2x_0 + y_0}{3} - x^* - t(x_0 - x^*)\right)dt \\ &\qquad\quad + \int_0^1\!\!\int_0^1 H_0\,d\theta\,\big(y_0 - x^* - t(x_0 - x^*)\big)\,dt\,\bigg](x_0 - x^*) \\ &= A_0^{-1}\bigg[\int_0^1\!\!\int_0^1 G_0\,d\theta\,(y_0 - x^*)\,dt + \int_0^1\!\!\int_0^1 G_0\,d\theta\,(2 - 3t)(x_0 - x^*)\,dt \\ &\qquad\quad + \int_0^1\!\!\int_0^1 H_0\,d\theta\,(y_0 - x^*)\,dt - \int_0^1\!\!\int_0^1 H_0\,d\theta\,t\,(x_0 - x^*)\,dt\,\bigg](x_0 - x^*) \\ &= \Gamma_1 + A_0^{-1}\int_0^1\!\!\int_0^1 G_0\,d\theta\,(2 - 4t)\,dt\,(x_0 - x^*)^2 + \Gamma_2 + \Gamma_3, \qquad (15) \end{aligned}$$
where
$$\Gamma_1 := A_0^{-1}\int_0^1\!\!\int_0^1 G_0\,d\theta\,(y_0 - x^*)\,dt\,(x_0 - x^*), \qquad \Gamma_2 := A_0^{-1}\int_0^1\!\!\int_0^1 H_0\,d\theta\,(y_0 - x^*)\,dt\,(x_0 - x^*) \qquad (16)$$
and
$$\Gamma_3 := A_0^{-1}\int_0^1\!\!\int_0^1 (G_0 - H_0)\,d\theta\,t\,dt\,(x_0 - x^*)^2. \qquad (17)$$
Observe that by (14) and (a4), we have
$$\|\Gamma_1\| = \left\|A_0^{-1} F'(x^*)\int_0^1\!\!\int_0^1 F'(x^*)^{-1} G_0\,d\theta\,(y_0 - x^*)\,dt\,(x_0 - x^*)\right\| \le \frac{L_1\,\|y_0 - x^*\|\,\|x_0 - x^*\|}{4\,(1 - \varphi_1(\|x_0 - x^*\|)\,\|x_0 - x^*\|)}. \qquad (18)$$
Similarly, we obtain
$$\|\Gamma_2\| \le \frac{L_1\,\|y_0 - x^*\|\,\|x_0 - x^*\|}{4\,(1 - \varphi_1(\|x_0 - x^*\|)\,\|x_0 - x^*\|)}. \qquad (19)$$
To compute the second term in (15), we observe that
$$\left\|\int_0^1\!\!\int_0^1 A_0^{-1} G_0\,d\theta\,(2 - 4t)\,dt\,(x_0 - x^*)^2\right\| \le \max_{t \in [0, 1]}\left\|\int_0^1 A_0^{-1} G_0\,d\theta\right\|\,\int_0^1 (2 - 4t)\,dt\,\|x_0 - x^*\|^2 = 0, \qquad (20)$$
since $\int_0^1 (2 - 4t)\,dt = 0$.
Notice that
$$G_0 - H_0 = F''(X) - F''(Y),$$
where
$$X = x^* + t(x_0 - x^*) + \theta\left(\tfrac{2x_0 + y_0}{3} - x^* - t(x_0 - x^*)\right), \qquad Y = x^* + t(x_0 - x^*) + \theta\left(y_0 - x^* - t(x_0 - x^*)\right).$$
Note again that
$$X - Y = \frac{2\theta}{3}\left[(x_0 - x^*) - (y_0 - x^*)\right]$$
and hence by (14) and (a3), we have
$$\begin{aligned} \|\Gamma_3\| &= \left\|A_0^{-1} F'(x^*)\int_0^1\!\!\int_0^1 F'(x^*)^{-1}(G_0 - H_0)\,d\theta\,t\,dt\,(x_0 - x^*)^2\right\| \\ &\le \frac{L_2}{4\,(1 - \varphi_1(\|x_0 - x^*\|)\,\|x_0 - x^*\|)}\int_0^1\!\!\int_0^1 \|X - Y\|\,d\theta\,dt\,\|x_0 - x^*\|^2 \\ &\le \frac{L_2\left[1 + \varphi(\|x_0 - x^*\|)\,\|x_0 - x^*\|\right]}{12\,(1 - \varphi_1(\|x_0 - x^*\|)\,\|x_0 - x^*\|)}\,\|x_0 - x^*\|^3. \qquad (21) \end{aligned}$$
Combining (15)–(21), we get
$$\|x_1 - x^*\| \le \varphi_2(r)\,\|x_0 - x^*\|^3.$$
Then, since $\varphi_2(r)\,r^2 < 1$, we have $\|x_1 - x^*\| < \|x_0 - x^*\| < r$, so the iterate $x_1 \in B(x^*, r)$.
Replacing $x_0$, $y_0$ and $x_1$ in the preceding estimates by $x_n$, $y_n$ and $x_{n+1}$, respectively, completes the induction for (9) and (10). □
 Remark 1. 
We obtained convergence order three for method (2) without using Taylor expansion and with assumptions only on the derivatives of $F$ up to order two. Thus, our analysis extends the applicability of method (2).

3. Convergence Analysis of (3)

Let $\varphi_3, h_4 : [0, r_2) \to \mathbb{R}$ be defined by
$$\varphi_3(t) = L_3\left(\varphi(t) + \tfrac{1}{2}\,\varphi_2(t)\,t\right)\varphi_2(t)$$
and $h_4(t) = \varphi_3(t)\,t^4 - 1$. Then, $h_4(0) = -1$ and $h_4(t) \to +\infty$ as $t \to r_2^-$. Therefore, $h_4$ has a smallest zero $r_4 \in (0, r_2)$.
Let
$$R = \min\{r, r_4\}.$$
Then, for $t \in [0, R)$, we have
$$0 \le \varphi_3(t)\,t^4 < 1.$$
We have the following theorem for method (3).
 Theorem 2. 
Suppose the conditions (a1)–(a5) hold. Then, the sequence $\{x_n\}$ defined by (3) and starting from $x_0 \in B(x^*, R) \setminus \{x^*\}$ is well defined, remains in $\bar{B}(x^*, R)$ for $n = 0, 1, 2, \ldots$ and converges to the solution $x^*$ of (1). Moreover, the following estimates hold:
$$\|y_n - x^*\| \le \varphi(R)\,\|x_n - x^*\|^2,$$
$$\|z_n - x^*\| \le \varphi_2(R)\,\|x_n - x^*\|^3$$
and
$$\|x_{n+1} - x^*\| \le \varphi_3(R)\,\|x_n - x^*\|^5.$$
Proof. 
Observe that, by taking $r = R$ and $x_{n+1} = z_n$ in Theorem 1, we have
$$\|z_n - x^*\| \le \varphi_2(\|x_n - x^*\|)\,\|x_n - x^*\|^3$$
and the iterate $z_n \in B(x^*, R)$. Note that
$$\begin{aligned} x_{n+1} - x^* &= z_n - x^* - F'(y_n)^{-1} F(z_n) \\ &= F'(y_n)^{-1}\int_0^1\left[F'(y_n) - F'(x^* + t(z_n - x^*))\right]dt\,(z_n - x^*). \end{aligned}$$
Thus, by (a5), we have
$$\begin{aligned} \|x_{n+1} - x^*\| &\le L_3\left(\|y_n - x^*\| + \tfrac{1}{2}\,\|z_n - x^*\|\right)\|z_n - x^*\| \\ &\le L_3\left(\varphi(\|x_n - x^*\|)\,\|x_n - x^*\|^2 + \tfrac{1}{2}\,\varphi_2(\|x_n - x^*\|)\,\|x_n - x^*\|^3\right)\varphi_2(\|x_n - x^*\|)\,\|x_n - x^*\|^3 \\ &= L_3\left(\varphi(\|x_n - x^*\|) + \tfrac{1}{2}\,\varphi_2(\|x_n - x^*\|)\,\|x_n - x^*\|\right)\varphi_2(\|x_n - x^*\|)\,\|x_n - x^*\|^5 \\ &\le \varphi_3(R)\,\|x_n - x^*\|^5. \quad \Box \end{aligned}$$

4. Convergence Analysis of (4)

Let $\varphi_4, h_5 : [0, r_2) \to \mathbb{R}$ be defined by
$$\varphi_4(t) = \frac{L_3}{2}\left(\varphi_2(t)\right)^2$$
and $h_5(t) = \varphi_4(t)\,t^5 - 1$. Then, $h_5(0) = -1$ and $h_5(t) \to +\infty$ as $t \to r_2^-$. Therefore, $h_5$ has a smallest zero $r_5 \in (0, r_2)$.
Let
$$R_1 = \min\{r, r_5\}.$$
Then, for $t \in [0, R_1)$, we have
$$0 \le \varphi_4(t)\,t^5 < 1.$$
We have the following theorem for method (4).
 Theorem 3. 
Suppose conditions (a1)–(a5) hold. Then, the sequence $\{x_n\}$ defined by (4) and starting from $x_0 \in B(x^*, R_1) \setminus \{x^*\}$ is well defined, remains in $\bar{B}(x^*, R_1)$ for $n = 0, 1, 2, \ldots$ and converges to the solution $x^*$ of (1). Moreover, the following estimates hold:
$$\|y_n - x^*\| \le \varphi(R_1)\,\|x_n - x^*\|^2, \qquad (28)$$
$$\|z_n - x^*\| \le \varphi_2(R_1)\,\|x_n - x^*\|^3 \qquad (29)$$
and
$$\|x_{n+1} - x^*\| \le \varphi_4(R_1)\,\|x_n - x^*\|^6.$$
 Proof. 
Notice that (28) and (29) follow as in Theorem 2. Further,
$$\begin{aligned} x_{n+1} - x^* &= z_n - x^* - F'(z_n)^{-1} F(z_n) \\ &= F'(z_n)^{-1}\int_0^1\left[F'(z_n) - F'(x^* + t(z_n - x^*))\right]dt\,(z_n - x^*). \end{aligned}$$
Hence, by (a5), we have
$$\|x_{n+1} - x^*\| \le \frac{L_3}{2}\,\|z_n - x^*\|^2 \le \frac{L_3}{2}\left(\varphi_2(\|x_n - x^*\|)\right)^2\|x_n - x^*\|^6 \le \varphi_4(R_1)\,\|x_n - x^*\|^6. \quad \Box$$
We complete this section with a result on the uniqueness of the solution $x^*$ that applies to all the methods given in this paper.
 Proposition 1. 
Suppose:
(1) There exist a simple solution $x^*$ of Equation (1), a radius $\rho > 0$ and a parameter $K > 0$ such that
$$\|F'(x^*)^{-1}(F'(x^*) - F'(x))\| \le K\,\|x^* - x\| \qquad (31)$$
for each $x \in B(x^*, \rho)$.
(2) There exists $\rho_1 \ge \rho$ such that
$$\rho_1 < \frac{2}{K}. \qquad (32)$$
Set $S = \bar{B}(x^*, \rho_1) \cap \Omega$. Then, $x^*$ is the only solution of Equation (1) in the region $S$.
 Proof. 
Let $\gamma \in S$ be a solution of Equation (1). Define the linear operator $M = \int_0^1 F'(x^* + \tau(\gamma - x^*))\,d\tau$. By applying conditions (31) and (32), we obtain in turn that
$$\|F'(x^*)^{-1}(M - F'(x^*))\| \le K\int_0^1 \tau\,\|x^* - \gamma\|\,d\tau \le \frac{K}{2}\,\rho_1 < 1.$$
That is, the linear operator M is invertible. Then, the identity
$$\gamma - x^* = M^{-1}\big(F(\gamma) - F(x^*)\big) = M^{-1}(0) = 0$$
leads to the conclusion that $\gamma = x^*$. □
 Remark 2. 
The efficiency index $EI$ and the informational efficiency $IE$ are defined as $EI = o^{1/m}$ [15] and $IE = o/m$ [14], respectively, where $o$ is the order of convergence and $m$ is the number of function and derivative evaluations per step. The $EI$ and $IE$ of the methods (2), (3) and (4) are $3^{1/4} = 1.316$ and $3/4 = 0.75$; $5^{1/5} = 1.3797$ and $5/5 = 1$; and $6^{1/6} = 1.348$ and $6/6 = 1$, respectively.
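These values can be checked in one line; the following snippet is our addition, with the evaluation counts $m = 4, 5, 6$ read off from the definitions of methods (2), (3) and (4):

```python
# Efficiency indices of Remark 2: o = order, m = evaluations of F and F' per step.
for name, o, m in (("(2)", 3, 4), ("(3)", 5, 5), ("(4)", 6, 6)):
    print(f"method {name}: EI = {o ** (1.0 / m):.4f}, IE = {o / m:.2f}")
# method (2): EI = 1.3161, IE = 0.75
# method (3): EI = 1.3797, IE = 1.00
# method (4): EI = 1.3480, IE = 1.00
```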

5. Examples

Four examples are presented in this section.
 Example 1. 
Let $T = T_1 = \mathbb{R}^3$, $D = \bar{B}(0, 1)$ and $x^* = (0, 0, 1)^T$. Define the function $F$ on $D$, for $w = (x, y, z)^T$, by
$$F(w) = \left(\sin x,\ \frac{y^2}{5} + y,\ z\right)^T.$$
Then, the Fréchet derivatives are given by
$$F'(w) = \begin{pmatrix} \cos x & 0 & 0 \\ 0 & \frac{2y}{5} + 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}$$
and
$$F''(w) = \left(\begin{pmatrix} -\sin x & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix},\ \begin{pmatrix} 0 & 0 & 0 \\ 0 & \frac{2}{5} & 0 \\ 0 & 0 & 0 \end{pmatrix},\ \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}\right).$$
The conditions (a1)–(a5) are validated for $L = L_2 = L_3 = 1$ and $L_1 = \frac{2}{5}$. Then, the parameters are
$$r_1 = 0.6667,\quad r_2 = 0.7639,\quad r = R_1 = r_3 = 0.6588,\quad R = r_4 = 0.6260,\quad r_5 = 0.7469.$$
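These radii can be reproduced by root-finding on the functions $h_1, \ldots, h_5$. The following sketch is our addition, using SciPy's `brentq` with the parameters of Example 1; swapping in the constants of Examples 2 and 3 reproduces the radii reported there as well.

```python
# Reproducing the radii of Example 1 (L = L2 = L3 = 1, L1 = 2/5) by solving
# h_i(t) = 0 on the brackets from Sections 2-4.
from scipy.optimize import brentq

L, L1, L2, L3 = 1.0, 0.4, 1.0, 1.0

phi  = lambda t: L / (2.0 * (1.0 - L * t))
phi1 = lambda t: (L / 2.0) * (1.0 + L * t / (2.0 * (1.0 - L * t)))
phi2 = lambda t: ((6.0 * L1 + L2 * t) * phi(t) + L2) / (12.0 * (1.0 - phi1(t) * t))
phi3 = lambda t: L3 * (phi(t) + 0.5 * phi2(t) * t) * phi2(t)
phi4 = lambda t: 0.5 * L3 * phi2(t) ** 2

eps = 1e-9
r1 = brentq(lambda t: phi(t) * t - 1.0, eps, 1.0 / L - eps)    # smallest zero of h1
r2 = brentq(lambda t: phi1(t) * t - 1.0, eps, 1.0 / L - eps)   # smallest zero of h2
r3 = brentq(lambda t: phi2(t) * t ** 2 - 1.0, eps, r2 - eps)   # smallest zero of h3
r4 = brentq(lambda t: phi3(t) * t ** 4 - 1.0, eps, r2 - eps)   # smallest zero of h4
r5 = brentq(lambda t: phi4(t) * t ** 5 - 1.0, eps, r2 - eps)   # smallest zero of h5

r = min(r1, r3)
print(f"r1={r1:.4f} r2={r2:.4f} r3={r3:.4f} r4={r4:.4f} r5={r5:.4f}")
print(f"r={r:.4f} R={min(r, r4):.4f} R1={min(r, r5):.4f}")
# expected, from Example 1: r1=0.6667, r2=0.7639, r=R1=r3=0.6588, R=r4=0.6260
```

Since each $h_i$ is nondecreasing and crosses zero exactly once on its bracket, the zero found by `brentq` is the smallest one.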
 Example 2. 
Let $T = T_1 = C[0, 1]$, the space of continuous functions defined on $[0, 1]$ equipped with the max norm, and let $D = \bar{B}(0, 1)$. Define the function $F$ on $D$ by
$$F(\varphi)(x) = \varphi(x) - 5\int_0^1 x\,\theta\,\varphi(\theta)^3\,d\theta.$$
We have that
$$F'(\varphi(\xi))(x) = \xi(x) - 15\int_0^1 x\,\theta\,\varphi(\theta)^2\,\xi(\theta)\,d\theta, \quad \text{for each } \xi \in D.$$
Then, for $x^* = 0$, the conditions (a1)–(a5) hold, provided that $L = L_3 = 15$, $L_2 = 8.5$ and $L_1 = 31$. Then, the parameters are
$$r_1 = 0.0444,\quad r_2 = 0.0509,\quad r = R_1 = r_3 = 0.0380,\quad R = r_4 = 0.0378,\quad r_5 = 0.0457.$$
 Example 3. 
Returning to the motivational example in the introduction of this paper, we have $L = L_1 = 45.9234$, $L_2 = 125.7312$ and $L_3 = 100.6338$. Then, the parameters are
$$r = R_1 = r_1 = r_3 = 0.0145,\quad r_2 = 0.0166,\quad R = r_4 = 0.0125,\quad r_5 = 0.0156.$$
In the next example, we compare method (3) with the fifth-order iterative method studied in [16]. Furthermore, we provide the iterates for methods (2) and (4).
 Example 4. 
Consider the system of equations
$$\begin{aligned} 3x_1^2\,x_2 + x_2^2 &= 1, \\ x_1^4 + x_1 x_2^3 &= 1. \end{aligned}$$
Note that the solutions are approximately $(-1, 0.2)$, $(-0.4, -1.3)$ and $(0.9, 0.3)$. We approximate the solution near $(0.9, 0.3)$ using the methods (2), (3), (4) and the fifth-order method considered in [16], with the initial point $(2, 1)$. The obtained results are provided in Table 1 and Table 2.
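As an illustration, the following standalone sketch (our own, based on the system as reconstructed above) runs method (3) from $x_0 = (2, 1)$; it should reproduce the Method (3) column of Table 1 up to rounding.

```python
# Method (3) on the system of Example 4, starting from x0 = (2, 1).
import numpy as np

F  = lambda v: np.array([3.0 * v[0]**2 * v[1] + v[1]**2 - 1.0,
                         v[0]**4 + v[0] * v[1]**3 - 1.0])
dF = lambda v: np.array([[6.0 * v[0] * v[1],        3.0 * v[0]**2 + 2.0 * v[1]],
                         [4.0 * v[0]**3 + v[1]**3,  3.0 * v[0] * v[1]**2]])

x = np.array([2.0, 1.0])
for k in range(1, 5):
    y = x - np.linalg.solve(dF(x), F(x))           # Newton predictor y_n
    A = 3.0 * dF((2.0 * x + y) / 3.0) + dF(y)
    z = x - 4.0 * np.linalg.solve(A, F(x))         # method-(2) substep z_n
    x = z - np.linalg.solve(dF(y), F(z))           # third substep of method (3)
    print(k, x)                                    # -> (0.99278, 0.30644)
```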

6. Basins of Attractions

In this section, we study the basins of attraction and Julia sets corresponding to methods (2), (3) and (4). Recall that the collection of all initial points from which an iterative method converges to a solution of a given equation is called the basin of attraction, or Fatou set, of the iterative method [11]. The complement of the Fatou set is known as the Julia set. We provide the basins of attraction associated with the roots of the following three systems of equations.
 Example 5. 
$$x^3 - y = 0, \qquad y^3 - x = 0,$$
with solutions $\{(-1, -1),\ (0, 0),\ (1, 1)\}$.
 Example 6. 
$$3x^2 y - y^3 = 0, \qquad x^3 - 3xy^2 - 1 = 0,$$
with solutions $\left\{\left(-\tfrac{1}{2}, \tfrac{\sqrt{3}}{2}\right),\ \left(-\tfrac{1}{2}, -\tfrac{\sqrt{3}}{2}\right),\ (1, 0)\right\}$.
 Example 7. 
$$x^2 + y^2 - 4 = 0, \qquad 3x^2 + 7y^2 - 16 = 0,$$
with solutions $\{(\sqrt{3}, 1),\ (\sqrt{3}, -1),\ (-\sqrt{3}, 1),\ (-\sqrt{3}, -1)\}$.
To generate the basin of attraction associated with each root of a given system of nonlinear equations, we consider the rectangular region $R = \{(x, y) \in \mathbb{R}^2 : -2 \le x \le 2,\ -2 \le y \le 2\}$, which contains all the roots of the test problems. We consider an equidistant grid of $401 \times 401$ points in $R$ and choose these points as the initial guesses $x_0$ for the methods (2), (3) and (4). A fixed tolerance of $10^{-8}$ and a maximum of 50 iterations are used in all cases. A color is assigned to the attracting basin of each root. If the desired tolerance is not achieved within the fixed number of iterations, we decide that the iterative method starting at $x_0$ does not converge to any of the roots, and we assign the color black to those points. In this way, we distinguish the basins of attraction of the distinct roots by their respective colors for each method; a sketch of this procedure is given below.
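The figures themselves were produced in MATLAB (see below). As an illustration of the procedure just described, the following Python sketch (our own) colors the grid for method (2) on Example 5; the convergence test, the distance of the iterate to a known root, is one reasonable reading of the description above, not necessarily the authors' exact implementation.

```python
# Python re-sketch of the basin plots (the paper used MATLAB): color each point
# of a 401 x 401 grid on [-2, 2]^2 by the root that method (2) reaches within
# 50 iterations and tolerance 1e-8; index 0 (black) marks non-convergence.
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap

F  = lambda v: np.array([v[0]**3 - v[1], v[1]**3 - v[0]])        # Example 5
dF = lambda v: np.array([[3.0 * v[0]**2, -1.0], [-1.0, 3.0 * v[1]**2]])
roots = [np.array(p) for p in ((-1.0, -1.0), (0.0, 0.0), (1.0, 1.0))]

def basin_index(x, tol=1e-8, itmax=50):
    for _ in range(itmax):
        try:
            y = x - np.linalg.solve(dF(x), F(x))
            A = 3.0 * dF((2.0 * x + y) / 3.0) + dF(y)
            x = x - 4.0 * np.linalg.solve(A, F(x))               # method (2)
        except np.linalg.LinAlgError:
            return 0                                             # singular step
        for i, root in enumerate(roots):
            if np.linalg.norm(x - root) < tol:
                return i + 1
    return 0                                                     # Julia set

ticks = np.linspace(-2.0, 2.0, 401)
img = [[basin_index(np.array([a, b])) for a in ticks] for b in ticks]
cmap = ListedColormap(["black", "tab:red", "tab:green", "tab:blue"])
plt.imshow(img, extent=(-2, 2, -2, 2), origin="lower", cmap=cmap, vmin=0, vmax=3)
plt.title("Basins of attraction of method (2) for Example 5")
plt.show()
```

This plain-Python loop is slow (up to about eight million linear solves) but keeps the procedure transparent; swapping `F`, `dF` and `roots` covers Examples 6 and 7, and the extra substeps of methods (3) and (4) slot into the loop as in the sketch of Section 1.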
Figure 1, Figure 2, Figure 3, Figure 4, Figure 5 and Figure 6 demonstrate the basins of attraction corresponding to each root of the above Examples 5–7 for the methods (2), (3) and (4). The Julia set (black region), which contains all the initial points from which the iterative method does not converge to any of the roots, can easily be observed in the figures.
The figures presented in this work were created on a 16-core, 64-bit Windows machine with an Intel Core i7-10700 CPU @ 2.90 GHz using the MATLAB programming language.
 Remark 3. 
Figure 1, Figure 2, Figure 3, Figure 4, Figure 5 and Figure 6 clearly show that method (4) has a larger basin of attraction compared with methods (2) and (3).

7. Conclusions

A process was developed to determine the convergence order of methods (2), (3) and (4). The analysis involves only the first and second derivatives, in contrast to earlier works using the fourth derivative [7]. Moreover, computable error distances are provided, which were not given before [7]. Hence, the applicability of these methods is extended. The new process does not depend on these particular methods. Therefore, it can also be used to extend the usage of other methods of higher order involving inverses of linear operators. This is a topic for future research.

Author Contributions

Conceptualization and validation by S.G., J.P., R.S. and I.K.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding for APC.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The authors Santhosh George and Jidesh P. wish to thank the SERB, Govt. of India, for Project No. CRG/2021/004776. Ramya S. thanks the Govt. of India for an INSPIRE fellowship.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Cordero, A.; Hueso, J.L.; Martínez, E.; Torregrosa, J.R. Increasing the convergence order of an iterative method for nonlinear systems. Appl. Math. Lett. 2012, 25, 2369–2374.
2. Behl, R.; Maroju, P.; Martínez, E.; Singh, S. A study of the local convergence of a fifth order iterative scheme. Indian J. Pure Appl. Math. 2020, 51, 439–455.
3. Cordero, A.; Martínez, E.; Torregrosa, J.R. Iterative methods of order four and five for systems of nonlinear equations. J. Comput. Appl. Math. 2012, 231, 541–551.
4. Argyros, I.K. The Theory and Applications of Iteration Methods, 2nd ed.; Engineering Series; CRC Press, Taylor and Francis Group: Boca Raton, FL, USA, 2022.
5. Magréñan, A.A.; Argyros, I.K.; Rainer, J.J.; Sicilia, J.A. Ball convergence of a sixth-order Newton-like method based on means under weak conditions. J. Math. Chem. 2018, 56, 2117–2131.
6. Shakhno, S.M.; Gnatyshyn, O.P. On an iterative method of order 1.839… for solving nonlinear least squares problems. Appl. Math. Comput. 2005, 161, 253–264.
7. Noor, M.A.; Waseem, M. Some iterative methods for solving a system of nonlinear equations. Comput. Math. Appl. 2009, 57, 101–106.
8. Kelley, C.T. Iterative Methods for Linear and Nonlinear Equations; SIAM: Philadelphia, PA, USA, 1995.
9. Argyros, I.K.; George, S.; Argyros, C. On the semi-local convergence of a Noor–Waseem third order method to solve equations. Adv. Appl. Math. Sci. 2022, 21, 6529–6543.
10. Mannel, F. On the order of convergence of Broyden's method: Faster convergence on mixed linear-nonlinear systems of equations and a conjecture on the q-order. Calcolo 2021, 58, 47.
11. Magréñan, A.A.; Gutiérrez, J.M. Real dynamics for damped Newton's method applied to cubic polynomials. J. Comput. Appl. Math. 2015, 275, 527–538.
12. Schuller, G. On the order of convergence of certain quasi-Newton methods. Numer. Math. 1974, 23, 181–192.
13. Sharma, J.R.; Guha, R.K.; Sharma, R. An efficient fourth order weighted-Newton scheme for systems of nonlinear equations. Numer. Algorithms 2013, 62, 307–323.
14. Traub, J.F. Iterative Methods for the Solution of Equations; Prentice-Hall: Hoboken, NJ, USA, 1964.
15. Ostrowski, A.M. Solution of Equations in Euclidean and Banach Spaces; Academic Press: Amsterdam, The Netherlands, 1973.
16. Iliev, A.; Iliev, I. Numerical method with order t for solving system nonlinear equations. In Collection of Works from the Scientific Conference Dedicated to 30 Years of FMI, 2000; pp. 105–112. Available online: https://www.researchgate.net/publication/308470689_Numerical_Method_with_Order_t_for_Solving_System_Nonlinear_Equations (accessed on 20 October 2022).
Figure 1. Dynamical plane of method (2) with basins of attraction for Example 5.
Figure 2. Dynamical planes of methods (3) (left) and (4) (right) with basins of attraction for Example 5.
Figure 3. Dynamical plane of method (2) with basins of attraction for Example 6.
Figure 4. Dynamical planes of methods (3) (left) and (4) (right) with basins of attraction for Example 6.
Figure 5. Dynamical plane of method (2) with basins of attraction for Example 7.
Figure 6. Dynamical planes of methods (3) (left) and (4) (right) with basins of attraction for Example 7.
Table 1. Method (3) and the fifth-order method in [16].

k | Fifth-Order Method in [16]: x_k = (x_1^k, x_2^k) | Method (3): x_k = (x_1^k, x_2^k)
--- | --- | ---
0 | (2.000000000000000000, 1.000000000000000000) | (2.000000000000000000, 1.000000000000000000)
1 | (1.082281042482679530, 0.123366196386319406) | (0.97999747117802393781, 0.31079296183420979104)
2 | (0.992837748938471569, 0.306361894605406281) | (0.99252009675815366929, 0.30661919359513767346)
3 | (0.992779994851123249, 0.306440446511020432) | (0.99277988170910103082, 0.30644055554978738564)
4 | (0.992779994851123249, 0.306440446511020432) | (0.99277999485110035582, 0.30644044651104612730)

Table 2. Method (2) and Method (4).

k | Method (2): x_k = (x_1^k, x_2^k) | Method (4): x_k = (x_1^k, x_2^k)
--- | --- | ---
0 | (2.000000000000000000, 1.000000000000000000) | (2.000000000000000000, 1.000000000000000000)
1 | (1.019623593558109941, 0.265386054724064790) | (1.03759994297628344028, 0.26149549469920185806)
2 | (0.992853658605661104, 0.306346433846240717) | (0.99619799193796287894, 0.30257508692302936825)
3 | (0.992779994852644009, 0.306440446509150976) | (0.99277999575683006927, 0.30644044541552573068)
4 | (0.992853658605661104, 0.306346433846240717) | (0.99277999485112322641, 0.3064404465110204256)
5 | (0.992779994852644009, 0.306440446509150976) | (0.99277999485112322641, 0.3064404465110204256)
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
