Article

A Derivative Free Fourth-Order Optimal Scheme for Applied Science Problems

Department of Mathematics, King Abdulaziz University, Jeddah 21589, Saudi Arabia
Mathematics 2022, 10(9), 1372; https://doi.org/10.3390/math10091372
Submission received: 23 March 2022 / Revised: 12 April 2022 / Accepted: 14 April 2022 / Published: 20 April 2022
(This article belongs to the Special Issue New Trends and Developments in Numerical Analysis)

Abstract

We suggest a new and cost-effective iterative scheme for nonlinear equations. The main features of the presented scheme are that it does not involve any derivative in its structure, it achieves optimal fourth-order convergence, it offers flexibility for obtaining new members, and it is a two-point, cost-effective and more stable scheme that yields better numerical results. The derivation of our scheme is based on the weight-function technique. The convergence order is studied in three main theorems. We have demonstrated the applicability of our methods on four numerical problems. Two of them are real-life cases, the third one is a root clustering problem and the fourth one is an academic problem. The obtained numerical results illustrate preferable outcomes as compared to the existing ones in terms of absolute residual errors, CPU timing, approximated zeros and the absolute error difference between two consecutive iterations.

1. Introduction

Most applied science problems are nonlinear in nature, because nature itself is nonlinear rather than simple or linear. The solutions of nonlinear problems are more complicated than those of linear and simple problems. Therefore, we consider a nonlinear problem of the following form:
$$f(x) = 0,$$
where $f : D \subseteq \mathbb{C} \to \mathbb{C}$ is an analytic function. Such equations originate in applied science, computer science, engineering, statistics, economics, chemistry, biology, physics, etc. (see details in [1,2,3]). Iterative methods are also applied to compute approximate solutions of stationary and evolutionary problems associated with differential and partial differential equations (more details in [4,5]). Exact solutions of such problems are almost non-existent. Thus, we have to rely on approximate solutions that can be obtained with the help of iterative methods. One of the most famous schemes is Newton's method, which is given by
$$x_{\sigma+1} = x_\sigma - \frac{f(x_\sigma)}{f'(x_\sigma)}.$$
Undoubtedly, this scheme has second-order convergence and is a widely used method for nonlinear equations. However, it has several drawbacks. Some of the major ones are: it is a one-point method (with the convergence and efficiency limitations detailed in [1,2,3]), it converges only linearly to multiple zeros, and it requires the calculation of the first-order derivative at each substep. Computing the derivative can be quite demanding, because it sometimes consumes a large amount of time in reaching the final result, or the derivative may not exist at all.
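For reference, here is a minimal sketch of this classical iteration in Python (illustrative only; the function names are ours, not part of any library):

    def newton(f, df, x0, tol=1e-12, max_iter=100):
        # Classical Newton iteration: quadratic for simple zeros, but only
        # linear for multiple zeros, and it needs the derivative df.
        x = x0
        for _ in range(max_iter):
            step = f(x) / df(x)
            x -= step
            if abs(step) < tol:
                break
        return x

    # Example: the double zero x = 3 of f(x) = (x - 3)**2 is approached
    # only linearly (the error roughly halves at every step).
    print(newton(lambda x: (x - 3)**2, lambda x: 2 * (x - 3), 4.0))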
Therefore, higher-order optimal derivative-free methods came into demand, and several scholars have suggested such methods with fourth-order optimal convergence. Some of the most important members are given below.
In 2015, Hueso et al. [6] suggested
$$y_\sigma = x_\sigma - \frac{2m}{m+2}\,\frac{f(x_\sigma)}{f[x_\sigma, \mu_\sigma]}, \qquad
x_{\sigma+1} = x_\sigma - \Bigl(s_1 + s_2 H(x_\sigma, y_\sigma) + s_3 H(y_\sigma, x_\sigma) + s_4 H(x_\sigma, y_\sigma)^2\Bigr)\frac{f(x_\sigma)}{f[x_\sigma, \mu_\sigma]},$$
where $\mu_\sigma = x_\sigma + f(x_\sigma)^q$, $q \in \mathbb{R}$, $H(x_\sigma, y_\sigma) = \dfrac{f[x_\sigma, y_\sigma]}{f[x_\sigma, \mu_\sigma]}$, $f[x_\sigma, y_\sigma] = \dfrac{f(x_\sigma) - f(y_\sigma)}{x_\sigma - y_\sigma}$ is the first-order finite difference, and
$$s_1 = \frac{1}{4}\,m\bigl(m^3 + 3m^2 + 2m - 4\bigr), \quad
s_2 = \frac{1}{8}\,m\left(\frac{m}{m+2}\right)^{m}(m+2)^3, \quad
s_3 = \frac{1}{8}\,m^4\left(\frac{m}{m+2}\right)^{m}, \quad s_4 = 0.$$
This method is denoted by (HM) (with $q = 1$).
In 2019, Sharma et al. [7] proposed
$$s_\sigma = x_\sigma + \beta f(x_\sigma), \qquad
z_\sigma = x_\sigma - m\,\frac{f(x_\sigma)}{f[s_\sigma, x_\sigma]}, \qquad
x_{\sigma+1} = z_\sigma - H\,\frac{f(x_\sigma)}{f[s_\sigma, x_\sigma]},$$
where
$$t_\sigma = \left(\frac{f(z_\sigma)}{f(x_\sigma)}\right)^{1/m}, \qquad
y_\sigma = \left(\frac{f(z_\sigma)}{f(s_\sigma)}\right)^{1/m}, \qquad
H = \frac{m^2 t_\sigma y_\sigma + 2m\,t_\sigma y_\sigma + m\,y_\sigma + t_\sigma y_\sigma}{1 - m\,t_\sigma + t_\sigma^2}.$$
This method is denoted by (SM1). The suggested scheme (4) is one of the best methods among those proposed by Sharma et al. [7].
In 2020, Sharma et al. [8] gave
$$z_\sigma = x_\sigma - m\,\frac{f(x_\sigma)}{f[v_\sigma, x_\sigma]}, \qquad
x_{\sigma+1} = z_\sigma - \frac{m\,h_\sigma(3 - h_\sigma)}{6 - 20h_\sigma}\left(\frac{1}{y_\sigma} + 1\right)\frac{f(x_\sigma)}{f[v_\sigma, x_\sigma]},$$
where $v_\sigma = x_\sigma + \beta f(x_\sigma)$, $u_\sigma = \left(\frac{f(z_\sigma)}{f(x_\sigma)}\right)^{1/m}$ and $y_\sigma = \left(\frac{f(v_\sigma)}{f(x_\sigma)}\right)^{1/m}$, with $h_\sigma = \frac{u_\sigma}{1 + u_\sigma}$. The expression (5) is one of the best schemes among the methods presented by Sharma et al. [8]. We call it (SM2).
In 2020, Kumar et al. [9] presented
$$w_\sigma = x_\sigma - m\,\frac{f(x_\sigma)}{f[v_\sigma, x_\sigma]}, \qquad
x_{\sigma+1} = w_\sigma - \frac{(m+2)\,s_\sigma}{1 - 2s_\sigma}\cdot\frac{f(x_\sigma)}{f[v_\sigma, x_\sigma] + 2f[w_\sigma, v_\sigma]},$$
where $v_\sigma = x_\sigma + \beta f(x_\sigma)$ and $s_\sigma = \left(\frac{f(w_\sigma)}{f(x_\sigma)}\right)^{1/m}$. This method is called (KM). The expression (6) is one of the best schemes among those given by Kumar et al. [9].
In 2020, Behl et al. [10] suggested:
$$y_\sigma = x_\sigma - m\,\frac{f(x_\sigma)}{f[u_\sigma, x_\sigma]}, \qquad
x_{\sigma+1} = y_\sigma + (s_\sigma + z_\sigma)\,\frac{y_\sigma - x_\sigma}{2\,(1 - 2s_\sigma)},$$
where $u_\sigma = x_\sigma + \beta f(x_\sigma)$, $s_\sigma = \left(\frac{f(y_\sigma)}{f(x_\sigma)}\right)^{1/m}$ and $z_\sigma = \left(\frac{f(y_\sigma)}{f(u_\sigma)}\right)^{1/m}$. This method is called (BM). Some other higher-order derivative-free techniques can be found in [11,12,13,14,15].
In this work, we propose a new two-step, more general and cost-effective family of iterative methods. The new scheme is derivative-free and has optimal fourth-order convergence. The derivation of this two-step scheme is based on the weight-function technique. Further, we present three main theorems (Theorems 1-3), which demonstrate the fourth-order convergence for $m \geq 2$ when the multiplicity $m$ is known in advance. The applicability of our methods is illustrated on four numerical problems: two of them are real-life cases, the third is a root clustering problem (which originates from applied mathematics) and the last is an academic problem. The numerical outcomes demonstrate preferable results in terms of absolute residual errors, CPU timing, approximated zeros and the absolute error difference between two consecutive iterations, in contrast to previous studies.
The rest of the paper is summarized as follows. Section 2 includes the construction as well as the convergence analysis of our scheme. The convergence analysis is studied thoroughly in three Theorems 1–3. Section 3 is devoted to the numerical experiments, where we illustrate the efficiency and convergence of our scheme. In addition, we also propose three weight functions that satisfy the hypotheses of Theorems 1–3. Further, four numerical problems are chosen to confirm the theoretical results. Finally, the concluding remarks are presented in Section 4.

2. Construction of Higher-Order Scheme

We suggest a new form of iterative scheme that has fourth-order optimal convergence for multiple zeros, which is given by
$$t_\sigma = x_\sigma - m\,H(\zeta), \qquad
x_{\sigma+1} = t_\sigma - m\,\zeta\left(\frac{1}{2}\,\eta + b\,\eta\,\theta + M(\theta)\right),$$
where $\mu_\sigma = x_\sigma + \alpha f(x_\sigma)$, $\alpha, b \in \mathbb{R}$, and $m \geq 2$ is the known multiplicity of the required zero. Further, the maps $H : \mathbb{C} \to \mathbb{C}$ and $M : \mathbb{C} \to \mathbb{C}$ are weight functions, analytic in a neighborhood of the origin. Moreover, we consider
$$\zeta = \frac{f(x_\sigma)}{f[\mu_\sigma, x_\sigma]}, \qquad
\theta = \left(\frac{f(t_\sigma)}{f(x_\sigma)}\right)^{1/m}, \qquad
\eta = \left(\frac{f(t_\sigma)}{f(\mu_\sigma)}\right)^{1/m},$$
the latter two being multi-valued maps. We assume the principal root (see [16]), given by $\theta = \exp\left(\frac{1}{m}\log\frac{f(t_\sigma)}{f(x_\sigma)}\right)$, with $\log\frac{f(t_\sigma)}{f(x_\sigma)} = \log\left|\frac{f(t_\sigma)}{f(x_\sigma)}\right| + i\arg\frac{f(t_\sigma)}{f(x_\sigma)}$ for $-\pi < \arg\frac{f(t_\sigma)}{f(x_\sigma)} \leq \pi$. The choice of $\arg(z)$ for $z \in \mathbb{C}$ agrees with that of $\log(z)$, as adopted in the numerical section. In an analogous way, we obtain $\theta = \left|\frac{f(t_\sigma)}{f(x_\sigma)}\right|^{1/m}\exp\left(\frac{i}{m}\arg\frac{f(t_\sigma)}{f(x_\sigma)}\right) = O(e_\sigma)$, where $e_\sigma = x_\sigma - \xi$.
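For concreteness, the following Python sketch implements one step of scheme (8) as reconstructed above. It is a tentative transcription rather than the author's own code; the function name pm_step and its signature are ours. With H(ζ) = ζ, M(θ) = θ/2 and b = 2, it reproduces the member PM1 introduced in Section 3.

    def pm_step(f, x, m, alpha, b, H, M):
        # One step of scheme (8): mu, zeta, theta, eta as defined above.
        fx = f(x)
        mu = x + alpha * fx                   # mu_sigma = x_sigma + alpha f(x_sigma)
        zeta = fx * (mu - x) / (f(mu) - fx)   # zeta = f(x)/f[mu, x]
        t = x - m * H(zeta)                   # first substep
        theta = (f(t) / fx) ** (1.0 / m)      # principal m-th roots: Python's **
        eta = (f(t) / f(mu)) ** (1.0 / m)     # uses the principal branch on complex
        return t - m * zeta * (eta / 2 + b * eta * theta + M(theta))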
In Theorems 1-3, we demonstrate the convergence analysis of our scheme (8), which does not require any extra evaluation of f at other points.
Theorem 1.
We assume that $x = \xi$ is a multiple zero of order two ($m = 2$) of the function f. Consider the map $f : D \subseteq \mathbb{C} \to \mathbb{C}$, which is analytic in a region D enclosing the required zero ξ. Then, our scheme (8) attains fourth-order convergence if
$$H(0) = 0, \quad H'(0) = 1, \quad H''(0) = 0, \quad M(0) = 0, \quad M'(0) = \frac{1}{2}, \quad M''(0) = 4 - 2b,$$
where $|H'''(0)| < \infty$ and $|M'''(0)| < \infty$. The scheme (8) satisfies the following error equation:
$$e_{\sigma+1} = \frac{\alpha\lambda_2 + 2a_1}{384}\Bigl[4\alpha a_1\lambda_2(6b - \Gamma + 9) - 4a_1^2(\Gamma - 33) - \alpha^2\Gamma\lambda_2^2 - 48a_2 + 12\alpha^2 b\lambda_2^2 + 8\Delta\Bigr]e_\sigma^4 + O(e_\sigma^5),$$
where $\lambda_2 = f''(\xi)$, $\Delta = H'''(0)$ and $\Gamma = M'''(0)$.
Proof. 
We assume that $e_\sigma = x_\sigma - \xi$ and $a_l = \frac{2!}{(2+l)!}\frac{f^{(2+l)}(\xi)}{f^{(2)}(\xi)}$, $1 \leq l \leq 4$ ($l \in \mathbb{N}$), are the error in the σth iteration and the asymptotic constants, respectively. We adopt Taylor series expansions of f at the two points $x = x_\sigma$ and $x = \mu_\sigma$ in a neighborhood of ξ, with the hypotheses $f(\xi) = f'(\xi) = 0$ and $f''(\xi) \neq 0$. Then, we obtain
$$f(x_\sigma) = \frac{\lambda_2}{2!}\,e_\sigma^2\left(1 + a_1 e_\sigma + a_2 e_\sigma^2 + a_3 e_\sigma^3 + a_4 e_\sigma^4 + O(e_\sigma^5)\right)$$
and
$$f(\mu_\sigma) = \frac{\lambda_2}{2!}\,e_\sigma^2\Bigl[1 + (\alpha\lambda_2 + a_1)e_\sigma + \frac{1}{4}\bigl(\alpha^2\lambda_2^2 + 10\alpha\lambda_2 a_1 + 4a_2\bigr)e_\sigma^2 + \frac{1}{4}\bigl(5\alpha^2\lambda_2^2 a_1 + 6\alpha\lambda_2 a_1^2 + 12\alpha\lambda_2 a_2 + 4a_3\bigr)e_\sigma^3 + \frac{1}{8}\bigl(\alpha^3\lambda_2^3 a_1 + 14\alpha^2\lambda_2^2 a_1^2 + 16\alpha^2\lambda_2^2 a_2 + 28\alpha\lambda_2 a_1 a_2 + 28\alpha\lambda_2 a_3 + 8a_4\bigr)e_\sigma^4 + O(e_\sigma^5)\Bigr],$$
respectively, with $\lambda_2 = f''(\xi)$.
By inserting expressions (10) and (11) into scheme (8), we have
$$\zeta = \frac{f(x_\sigma)}{f[\mu_\sigma, x_\sigma]} = \frac{1}{2}\,e_\sigma - \frac{1}{8}\bigl(\alpha\lambda_2 + 2a_1\bigr)e_\sigma^2 + \frac{1}{32}\bigl(\alpha^2\lambda_2^2 - 8\alpha a_1\lambda_2 + 12a_1^2 - 16a_2\bigr)e_\sigma^3 + O(e_\sigma^4).$$
It is clear from expression (12) that we have ζ = O ( e σ ) . Thus, we can easily expand H ( ζ ) in the neighborhood of origin ( 0 ) in the following way:
$$H(\zeta) = H(0) + H'(0)\,\zeta + \frac{1}{2!}H''(0)\,\zeta^2 + \frac{1}{3!}H'''(0)\,\zeta^3.$$
By adopting expressions (12) and (13) in (8), we obtain
$$e_{t_\sigma} = t_\sigma - \xi = -2H(0) + \bigl(1 - H'(0)\bigr)e_\sigma + \frac{1}{4}\bigl(2a_1 H'(0) + \alpha H'(0)\lambda_2 - H''(0)\bigr)e_\sigma^2 + O(e_\sigma^3).$$
From (14), we observe that the scheme will attain at least second-order of convergence, when
$$H(0) = 0, \qquad H'(0) = 1.$$
By adopting expression (15) in (14), we obtain
$$e_{t_\sigma} = \frac{1}{4}\bigl(\alpha\lambda_2 + 2a_1 - H''(0)\bigr)e_\sigma^2 + O(e_\sigma^3).$$
With the help of Taylor’s series expansions, we obtain
$$f(t_\sigma) = \frac{\lambda_2}{2!}\,e_{t_\sigma}^2\left(1 + a_1 e_{t_\sigma} + a_2 e_{t_\sigma}^2 + a_3 e_{t_\sigma}^3 + a_4 e_{t_\sigma}^4 + O(e_\sigma^5)\right).$$
By adopting (12), (13) and (17), we further yield
$$\theta = \left(\frac{f(t_\sigma)}{f(x_\sigma)}\right)^{1/2} = \frac{1}{4}\bigl(\alpha\lambda_2 + 2a_1 - H''(0)\bigr)e_\sigma + \frac{1}{48}\Bigl[3\alpha^2\lambda_2^2 - 18a_1\bigl(\alpha\lambda_2 + H''(0)\bigr) + 48a_1^2 - 48a_2 + 2\Delta - 6\alpha H''(0)\lambda_2\Bigr]e_\sigma^2 + O(e_\sigma^3),$$
and
$$\eta = \left(\frac{f(t_\sigma)}{f(\mu_\sigma)}\right)^{1/2} = \frac{1}{4}\bigl(\alpha\lambda_2 + 2a_1 - H''(0)\bigr)e_\sigma + \frac{1}{48}\Bigl[9\alpha^2\lambda_2^2 - 6a_1\bigl(\alpha\lambda_2 + 3H''(0)\bigr) + 48a_1^2 - 48a_2 + 2\Delta - 12\alpha H''(0)\lambda_2\Bigr]e_\sigma^2 + O(e_\sigma^3),$$
where $\Delta = H'''(0)$.
From expression (18), we have θ = O ( e σ ) . Thus, we expand M ( θ ) in the neighborhood of origin ( 0 ) , which is defined as
$$M(\theta) = M(0) + M'(0)\,\theta + \frac{1}{2!}M''(0)\,\theta^2 + \frac{1}{3!}M'''(0)\,\theta^3.$$
By inserting expressions (10)–(20) into expression (8), we obtain
$$e_{\sigma+1} = -M(0)\,e_\sigma + \sum_{i=0}^{2} B_i\,e_\sigma^{i+2} + O(e_\sigma^5),$$
where $B_i = B_i\bigl(\lambda_2, \alpha, a_1, a_2, a_3, a_4, H''(0), H'''(0), M(0), M'(0), M''(0), M'''(0)\bigr)$; for example,
$$B_0 = \frac{1}{8}\Bigl[a_1\bigl(4M(0) - 4M'(0) + 2\bigr) + \alpha\lambda_2\bigl(2M(0) - 2M'(0) + 1\bigr) + \bigl(2M'(0) - 1\bigr)H''(0)\Bigr], \ \text{etc.}$$
The coefficients of e σ , e σ 2 and e σ 3 should be simultaneously zero, in order to deduce the fourth-order convergence. This can be easily obtained by the following values:
$$M(0) = 0, \qquad H''(0) = 0, \qquad M'(0) = \frac{1}{2}, \qquad M''(0) = 4 - 2b, \qquad b \in \mathbb{R}.$$
We have the following error equation by adopting (22) in (21):
$$e_{\sigma+1} = \frac{\alpha\lambda_2 + 2a_1}{384}\Bigl[4\alpha a_1\lambda_2(6b - \Gamma + 9) - 4a_1^2(\Gamma - 33) - \alpha^2\Gamma\lambda_2^2 - 48a_2 + 12\alpha^2 b\lambda_2^2 + 8\Delta\Bigr]e_\sigma^4 + O(e_\sigma^5),$$
where $\Delta = H'''(0)$ with $|H'''(0)| < \infty$ and $\Gamma = M'''(0)$ with $|M'''(0)| < \infty$. We deduce from expression (23) that our scheme (8) attains fourth-order convergence for $\alpha \in \mathbb{R}$ and $m = 2$. In addition, we have attained this convergence order without using any extra evaluation of f at other points. Hence, (8) is an optimal scheme. □
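As an informal numerical check of Theorem 1, the following self-contained mpmath sketch applies the scheme (with the PM1 weights of Section 3) to a double zero and prints the approximate computational order of convergence. The test function, starting point and all names are illustrative assumptions; the printed value should be close to 4 if the reconstruction above is faithful.

    from mpmath import mp, mpf, fabs, log

    mp.dps = 1000

    def f(x):
        return (x - 3)**2 * (x + 1)    # double zero at xi = 3 (m = 2)

    m, alpha, b = 2, mpf('0.5'), mpf('2')
    H = lambda z: z                    # PM1 weights: H(zeta) = zeta,
    M = lambda s: s / 2                # M(theta) = theta/2

    xs = [mpf('3.5')]
    for _ in range(5):
        x = xs[-1]
        mu = x + alpha * f(x)
        zeta = f(x) * (mu - x) / (f(mu) - f(x))
        t = x - m * H(zeta)
        theta = (f(t) / f(x)) ** (mpf(1) / m)
        eta = (f(t) / f(mu)) ** (mpf(1) / m)
        xs.append(t - m * zeta * (eta / 2 + b * eta * theta + M(theta)))

    d = [fabs(xs[i + 1] - xs[i]) for i in range(len(xs) - 1)]
    print(log(d[-1] / d[-2]) / log(d[-2] / d[-3]))   # ACOC, expected near 4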
Theorem 2.
Suppose that $x = \xi$ is a multiple zero of order three ($m = 3$) of the function f. Consider the map $f : D \subseteq \mathbb{C} \to \mathbb{C}$, which is analytic in a region D surrounding the required zero ξ. Then, our scheme (8) attains fourth-order convergence if
$$H(0) = 0, \quad H'(0) = 1, \quad H''(0) = 0, \quad M(0) = 0, \quad M'(0) = \frac{1}{2}, \quad M''(0) = 4 - 2b,$$
where $|H'''(0)| < \infty$ and $|M'''(0)| < \infty$. Scheme (8) satisfies the following error equation:
$$e_{\sigma+1} = \frac{A_1}{324}\Bigl[9\alpha\lambda_3 + 2A_1^2(\Gamma - 36) + 36A_2 - 4\Delta\Bigr]e_\sigma^4 + O(e_\sigma^5),$$
where $\lambda_3 = f'''(\xi)$.
Proof. 
We assume that $e_\sigma = x_\sigma - \xi$ and $A_k = \frac{3!}{(3+k)!}\frac{f^{(3+k)}(\xi)}{f^{(3)}(\xi)}$, $1 \leq k \leq 4$ ($k \in \mathbb{N}$), are the error in the σth iteration and the asymptotic constants, respectively. We adopt Taylor series expansions of f at the two points $x = x_\sigma$ and $x = \mu_\sigma$ in a neighborhood of ξ, with the hypotheses $f(\xi) = f'(\xi) = f''(\xi) = 0$ and $f'''(\xi) \neq 0$. Then, we have
$$f(x_\sigma) = \frac{\lambda_3}{3!}\,e_\sigma^3\left(1 + A_1 e_\sigma + A_2 e_\sigma^2 + A_3 e_\sigma^3 + A_4 e_\sigma^4 + O(e_\sigma^5)\right),$$
and
$$f(\mu_\sigma) = \frac{\lambda_3}{3!}\,e_\sigma^3\Bigl[1 + A_1 e_\sigma + \frac{1}{2}\bigl(\alpha\lambda_3 + 2A_2\bigr)e_\sigma^2 + \frac{1}{6}\bigl(7\alpha A_1\lambda_3 + 6A_3\bigr)e_\sigma^3 + \frac{1}{12}\bigl(\alpha^2\lambda_3^2 + 8\alpha A_1^2\lambda_3 + 16\alpha A_2\lambda_3 + 12A_4\bigr)e_\sigma^4 + O(e_\sigma^5)\Bigr],$$
respectively, with $\lambda_3 = f'''(\xi)$.
By using expressions (25) and (26) in scheme (8), we obtain
$$\zeta = \frac{f(x_\sigma)}{f[\mu_\sigma, x_\sigma]} = \frac{1}{3}\,e_\sigma - \frac{A_1}{9}\,e_\sigma^2 + \frac{1}{54}\Bigl[8A_1^2 - 3\bigl(\alpha\lambda_3 + 4A_2\bigr)\Bigr]e_\sigma^3 + O(e_\sigma^4).$$
Next, from expression (27), we have ζ = O ( e σ ) . Thus, we expand the weight function H ( ζ ) in the neighborhood of origin ( 0 ) in the following way:
$$H(\zeta) = H(0) + H'(0)\,\zeta + \frac{1}{2!}H''(0)\,\zeta^2 + \frac{1}{3!}H'''(0)\,\zeta^3.$$
By using expressions (27) and (28) in scheme (8), we have
$$e_{t_\sigma} = t_\sigma - \xi = -3H(0) + \bigl(1 - H'(0)\bigr)e_\sigma + \frac{1}{6}\bigl(2A_1 H'(0) - H''(0)\bigr)e_\sigma^2 + O(e_\sigma^3).$$
From (29), we observe that the scheme will attain at least second-order convergence, when
$$H(0) = 0, \qquad H'(0) = 1.$$
Substituting expression (30) in (29), we have
$$e_{t_\sigma} = \frac{1}{6}\bigl(2A_1 - H''(0)\bigr)e_\sigma^2 + O(e_\sigma^3).$$
Again with the help of Taylor’s series expansions, we obtain
$$f(t_\sigma) = \frac{\lambda_3}{3!}\,e_{t_\sigma}^3\left(1 + A_1 e_{t_\sigma} + A_2 e_{t_\sigma}^2 + A_3 e_{t_\sigma}^3 + A_4 e_{t_\sigma}^4 + O(e_\sigma^5)\right).$$
From expressions (25), (26) and (32), we further yield
$$\theta = \left(\frac{f(t_\sigma)}{f(x_\sigma)}\right)^{1/3} = \frac{1}{6}\bigl(2A_1 - H''(0)\bigr)e_\sigma - \frac{1}{54}\Bigl[30A_1^2 - 9\alpha\lambda_3 - 9A_1 H''(0) - 36A_2 + \Delta\Bigr]e_\sigma^2 + \frac{1}{324}\Bigl[A_1\bigl(54\alpha\lambda_3 - 576A_2 + 8\Delta + 3H''(0)^2\bigr) + 18\bigl(\alpha\lambda_3 H''(0) + 5A_2 H''(0) + 18A_3\bigr) - 90A_1^2 H''(0) + 276A_1^3\Bigr]e_\sigma^3 + O(e_\sigma^4),$$
$$\eta = \left(\frac{f(t_\sigma)}{f(\mu_\sigma)}\right)^{1/3} = \frac{1}{6}\bigl(2A_1 - H''(0)\bigr)e_\sigma - \frac{1}{54}\Bigl[30A_1^2 - 9\alpha\lambda_3 - 9A_1 H''(0) - 36A_2 + \Delta\Bigr]e_\sigma^2 + \frac{1}{324}\Bigl[A_1\bigl(36\alpha\lambda_3 - 576A_2 + 8\Delta + 3H''(0)^2\bigr) + 9\bigl(3\alpha\lambda_3 H''(0) + 10A_2 H''(0) + 36A_3\bigr) - 90A_1^2 H''(0) + 276A_1^3\Bigr]e_\sigma^3 + O(e_\sigma^4).$$
From expression (33), we have θ = O ( e σ ) . Thus, we expand M ( θ ) in the neighborhood of origin ( 0 ) as:
$$M(\theta) = M(0) + M'(0)\,\theta + \frac{1}{2!}M''(0)\,\theta^2 + \frac{1}{3!}M'''(0)\,\theta^3.$$
By using expressions (25)–(35) in scheme (8), we have
$$e_{\sigma+1} = -M(0)\,e_\sigma + \sum_{i=0}^{2} D_i\,e_\sigma^{i+2} + O(e_\sigma^5),$$
where $D_i = D_i\bigl(\lambda_3, \alpha, A_1, A_2, A_3, A_4, H''(0), H'''(0), M(0), M'(0), M''(0), M'''(0)\bigr)$. For example, the first coefficient is explicitly written as
$$D_0 = \frac{1}{12}\Bigl[A_1\bigl(4M(0) - 4M'(0) + 2\bigr) + \bigl(2M'(0) - 1\bigr)H''(0)\Bigr], \ \text{etc.}$$
The coefficients of e σ , e σ 2 and e σ 3 should be simultaneously zero, in order to deduce the fourth-order convergence. This can be easily attained by the following values:
$$M(0) = 0, \qquad H''(0) = 0, \qquad M'(0) = \frac{1}{2}, \qquad M''(0) = 4 - 2b, \qquad b \in \mathbb{R}.$$
We have the following error equation by adopting (37) in (36):
$$e_{\sigma+1} = \frac{A_1}{324}\Bigl[9\alpha\lambda_3 + 2A_1^2(\Gamma - 36) + 36A_2 - 4\Delta\Bigr]e_\sigma^4 + O(e_\sigma^5).$$
We deduce from expression (38) that our scheme (8) attains fourth-order convergence for $\alpha \in \mathbb{R}$ and $m = 3$. In addition, we have attained this convergence order without using any extra evaluation of f at other points. Hence, (8) is an optimal scheme. □

General Error Form of the Proposed Scheme

Theorem 3.
Following the same suppositions as in Theorem 1, our scheme (8) attains fourth-order convergence for $m \geq 4$. The scheme (8) satisfies the following error equation:
$$e_{\sigma+1} = \frac{C_1}{6m^3}\Bigl[C_1^2\bigl(\Gamma - 3(m+9)\bigr) + 6mC_2 - 2\Delta\Bigr]e_\sigma^4 + O(e_\sigma^5).$$
Proof. 
We consider that $e_\sigma = x_\sigma - \xi$ and $C_k = \frac{m!}{(m+k)!}\frac{f^{(m+k)}(\xi)}{f^{(m)}(\xi)}$, $1 \leq k \leq 4$ ($k \in \mathbb{N}$), are the error in the σth iteration and the asymptotic constants, respectively. We adopt Taylor series expansions of f at the two points $x = x_\sigma$ and $x = \mu_\sigma$ in a neighborhood of $x = \xi$, with the hypotheses $f(\xi) = f'(\xi) = \cdots = f^{(m-1)}(\xi) = 0$, $f^{(m)}(\xi) \neq 0$ and $m \geq 4$. Then, we obtain
$$f(x_\sigma) = \frac{\lambda_m}{m!}\,e_\sigma^m\left(1 + C_1 e_\sigma + C_2 e_\sigma^2 + C_3 e_\sigma^3 + C_4 e_\sigma^4 + O(e_\sigma^5)\right)$$
and
$$f(\mu_\sigma) = \frac{\lambda_m}{m!}\,e_\sigma^m\left(1 + \sum_{i=0}^{2}\bar{\Delta}_i\,e_\sigma^{i+1} + O(e_\sigma^4)\right),$$
where $\lambda_m = f^{(m)}(\xi)$ and $\bar{\Delta}_i = \bar{\Delta}_i(m, \lambda_m, \alpha, C_1, C_2, C_3, C_4)$. For example, $\bar{\Delta}_0 = C_1$, $\bar{\Delta}_1 = C_2$ and
$$\bar{\Delta}_2 = \begin{cases} \dfrac{1}{6}\bigl(\alpha f^{(4)}(\xi) + 6C_3\bigr), & m = 4,\\[1ex] C_3, & m \geq 5, \end{cases} \quad \text{etc.}$$
Inserting expressions (39) and (40) into expression (8), we obtain
$$\zeta = \frac{f(x_\sigma)}{f[\mu_\sigma, x_\sigma]} = \frac{1}{m}\,e_\sigma - \frac{C_1}{m^2}\,e_\sigma^2 + \frac{(m+1)C_1^2 - 2mC_2}{m^3}\,e_\sigma^3 + \bar{\Delta}\,e_\sigma^4 + O(e_\sigma^5),$$
where
$$\bar{\Delta} = \begin{cases} \dfrac{1}{256}\bigl(4\alpha\lambda_4 - 25C_1^3 + 64C_1C_2 - 48C_3\bigr), & m = 4,\\[1ex] \dfrac{m(3m+4)C_1C_2 - 3m^2C_3 - (m+1)^2C_1^3}{m^4}, & m \geq 5. \end{cases}$$
Next, from expression (41), we have ζ = O ( e σ ) . Thus, we expand the weight function H ( ζ ) in the neighborhood of origin ( 0 ) , which is defined as
$$H(\zeta) = H(0) + H'(0)\,\zeta + \frac{1}{2!}H''(0)\,\zeta^2 + \frac{1}{3!}H'''(0)\,\zeta^3.$$
By using expressions (41) and (42) in scheme (8), we have
$$e_{t_\sigma} = t_\sigma - \xi = -mH(0) + \bigl(1 - H'(0)\bigr)e_\sigma + \frac{1}{2m}\bigl(2C_1 H'(0) - H''(0)\bigr)e_\sigma^2 + O(e_\sigma^3).$$
From (43), we observe that scheme (8) will attain at least second-order convergence, if
$$H(0) = 0, \qquad H'(0) = 1.$$
Substituting expression (44) in (43), we have
$$e_{t_\sigma} = \frac{1}{2m}\bigl(2C_1 - H''(0)\bigr)e_\sigma^2 + O(e_\sigma^3).$$
Again, with the help of Taylor’s series expansions, we have
$$f(t_\sigma) = \frac{\lambda_m}{m!}\,e_{t_\sigma}^m\left(1 + C_1 e_{t_\sigma} + C_2 e_{t_\sigma}^2 + C_3 e_{t_\sigma}^3 + C_4 e_{t_\sigma}^4 + O(e_\sigma^5)\right).$$
By adopting (41), (42) and (46), we further yield
$$\theta = \left(\frac{f(t_\sigma)}{f(x_\sigma)}\right)^{1/m} = \frac{1}{2m}\bigl(2C_1 - H''(0)\bigr)e_\sigma + O(e_\sigma^2),$$
$$\eta = \left(\frac{f(t_\sigma)}{f(\mu_\sigma)}\right)^{1/m} = \frac{1}{2m}\bigl(2C_1 - H''(0)\bigr)e_\sigma + O(e_\sigma^2).$$
From expression (47), we have θ = O ( e σ ) . Thus, we expand M ( θ ) in the neighborhood of origin ( 0 ) as
$$M(\theta) = M(0) + M'(0)\,\theta + \frac{1}{2!}M''(0)\,\theta^2 + \frac{1}{3!}M'''(0)\,\theta^3.$$
By using expressions (39)–(49) in scheme (8), we have
$$e_{\sigma+1} = -M(0)\,e_\sigma + \sum_{i=0}^{2}\bar{B}_i\,e_\sigma^{i+2} + O(e_\sigma^5),$$
where $\bar{B}_i = \bar{B}_i\bigl(\lambda_m, \alpha, C_1, C_2, C_3, C_4, H''(0), H'''(0), M(0), M'(0), M''(0), M'''(0)\bigr)$.
The coefficients of e σ , e σ 2 and e σ 3 should be simultaneously zero, in order to deduce the fourth-order convergence. This can be easily attained by the following values:
$$M(0) = 0, \qquad H''(0) = 0, \qquad M'(0) = \frac{1}{2}, \qquad M''(0) = 4 - 2b, \qquad b \in \mathbb{R}.$$
By adopting (51) in (50), we obtain the following error equation:
$$e_{\sigma+1} = \frac{C_1}{6m^3}\Bigl[C_1^2\bigl(\Gamma - 3(m+9)\bigr) + 6mC_2 - 2\Delta\Bigr]e_\sigma^4 + O(e_\sigma^5).$$
We deduce from expression (52) that our scheme (8) attains fourth-order convergence for $\alpha \in \mathbb{R}$ and $m \geq 4$. In addition, we have attained this convergence order without using any extra evaluation of f at other points. Hence, (8) is an optimal scheme. □
Remark 1.
It would seem from (52) (for $m \geq 4$) that α and b are not involved in this expression. However, they do appear in the coefficient of $e_\sigma^5$. We do not need to calculate this coefficient, because the optimal fourth order of convergence has already been obtained; moreover, its computation is quite laborious and time-consuming. Nonetheless, the role of α and b can be observed in (23) and (38).

3. Numerical Experimentation

We demonstrate the efficiency and convergence of some members from our scheme (8). Therefore, we choose the following three weight functions:
  • First weight function:
    $$H(\zeta) = \zeta, \qquad M(\theta) = \frac{\theta}{2}.$$
  • Second weight function:
    $$H(\zeta) = \zeta^3 + \zeta, \qquad M(\theta) = \frac{\theta\bigl(4(2-b)\theta + 1\bigr)}{4(2-b)\theta + 2}, \quad b \in \mathbb{R}.$$
  • Third weight function:
    $$H(\zeta) = \zeta, \qquad M(\theta) = \frac{\theta\bigl(4(2-b)\theta + 1\bigr)}{4(2-b)\theta + 2}, \quad b \in \mathbb{R}.$$
Clearly, all three weight functions above satisfy the conditions provided in Theorems 1-3. Now, we use these weight functions in our scheme (8) and call the resulting methods (PM1) (with $b = 2$), (PM2) (with $b = \frac{1}{10}$) and (PM3) (with $b = \frac{1}{10}$), respectively. We consider two applied science problems, one root clustering problem and an academic problem for the numerical tests.
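The stated conditions can be checked symbolically. The sketch below (assuming sympy; the weight functions are the reconstructed forms above, not the paper's verbatim formulas) verifies the hypotheses of Theorems 1-3 for the three pairs:

    import sympy as sp

    z, t, b = sp.symbols('zeta theta b')

    # Rational weight function shared by PM2 and PM3 (reconstructed form).
    M_rat = t * (4 * (2 - b) * t + 1) / (4 * (2 - b) * t + 2)

    cases = [('PM1', z,          t / 2, sp.Integer(2)),  # PM1 fixes b = 2
             ('PM2', z**3 + z,   M_rat, b),
             ('PM3', z,          M_rat, b)]

    for name, H, M, b_val in cases:
        Mb = M.subs(b, b_val)
        ok = (
            H.subs(z, 0) == 0                              # H(0)   = 0
            and sp.diff(H, z).subs(z, 0) == 1              # H'(0)  = 1
            and sp.diff(H, z, 2).subs(z, 0) == 0           # H''(0) = 0
            and Mb.subs(t, 0) == 0                         # M(0)   = 0
            and sp.simplify(sp.diff(Mb, t).subs(t, 0) - sp.Rational(1, 2)) == 0
            and sp.simplify(sp.diff(Mb, t, 2).subs(t, 0) - (4 - 2 * b_val)) == 0
        )
        print(name, ok)   # expected: True for all three pairs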
There are no fixed criteria for the comparison of two different iterative methods. However, we consider the following six different aspects for the comparison:
  • The values of the iterates $x_\sigma$ at $\sigma = 1, 2, 3$;
  • The absolute residual error $|f(x_\sigma)|$;
  • The absolute differences between two consecutive iterations $|x_{\sigma+1} - x_\sigma|$;
  • CPU timing;
  • The number of iterations required for attaining accuracy up to $\epsilon = 10^{-100}$;
  • The computational order of convergence (COC) based on that accuracy.
The values of the above-mentioned parameters are depicted in Tables 1-8, along with the initial guesses. The values of $x_\sigma$, $|f(x_\sigma)|$, COC and $|x_{\sigma+1} - x_\sigma|$ were calculated in Mathematica 9 with a minimum of 3000 significant digits, which minimizes round-off error. However, we depict these values up to 15 (with exponent), 2 (with exponent), 6 and 2 (with exponent) significant digits, respectively.
We adopted the following rules
$$\rho = \frac{\ln\left|\dfrac{x_{\sigma+1} - \xi}{x_\sigma - \xi}\right|}{\ln\left|\dfrac{x_\sigma - \xi}{x_{\sigma-1} - \xi}\right|}, \qquad \text{for each } \sigma = 1, 2, \ldots,$$
and
$$\rho^* = \frac{\ln\left|\dfrac{x_{\sigma+1} - x_\sigma}{x_\sigma - x_{\sigma-1}}\right|}{\ln\left|\dfrac{x_\sigma - x_{\sigma-1}}{x_{\sigma-1} - x_{\sigma-2}}\right|}, \qquad \text{for each } \sigma = 2, 3, \ldots,$$
in order to calculate the computational order of convergence (COC) and the approximate computational order of convergence (ACOC) [17], respectively. Further, the CPU timing was obtained with the command "AbsoluteTiming[]" in Mathematica 9. We executed the same program five times, and the average time is depicted in Table 7. The notation $b_1(\pm b_2)$ stands for $b_1 \times 10^{\pm b_2}$ in Tables 1-6.
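For readers who wish to reproduce these quantities outside Mathematica, a small sketch of ρ and ρ* in Python's mpmath (our assumption; the paper itself used Mathematica 9) is:

    from mpmath import mp, fabs, log

    mp.dps = 3000   # comparable to the 3000 significant digits used in the paper

    def coc(xs, xi):
        # rho: uses the known zero xi and the last three iterates
        return log(fabs((xs[-1] - xi) / (xs[-2] - xi))) / \
               log(fabs((xs[-2] - xi) / (xs[-3] - xi)))

    def acoc(xs):
        # rho*: root-free variant, needs the last four iterates
        d1 = fabs(xs[-1] - xs[-2])
        d2 = fabs(xs[-2] - xs[-3])
        d3 = fabs(xs[-3] - xs[-4])
        return log(d1 / d2) / log(d2 / d3)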
The configurations and outline of the adopted laptop are defined as follows:
  • Processor: Intel(R) Core(TM)2 Duo CPU T6400 @ 2.00 GHz;
  • Manufacturer: HP;
  • Installed memory (RAM): 4.00 GB;
  • Windows edition: Windows 7 Professional;
  • System type: 64-bit-Operating System.
In order to maintain uniformity in the comparison of the iterative methods, we choose the free parameter $\beta = \frac{1}{2}$ (correspondingly, $\alpha = \frac{1}{2}$ in our scheme) in the existing as well as our methods. We consider five existing methods for comparison, namely (3)-(7). The details of these methods are given in the Introduction.
Remark 2.
For the following specific choice of the weight functions,
$$H(\zeta) = \zeta, \qquad M(\theta) = \frac{\theta}{2} + 2\theta^2,$$
we obtain Behl's scheme [18] as a special case of our method.
Example 1
(Eigenvalue problem). The computation of eigenvalues and eigenvectors is one of the most basic and challenging problems of linear algebra, with applications throughout science and engineering. Closed-form linear algebra techniques are not always practical, so we rely on numerical techniques that provide approximate zeros of the characteristic polynomial. Therefore, we choose the following $9 \times 9$ square matrix, which has a multiple eigenvalue:
$$A = \frac{1}{8}\begin{bmatrix}
12 & 0 & 0 & 19 & 19 & 76 & 19 & 18 & 437\\
64 & 24 & 0 & 24 & 24 & 64 & 8 & 32 & 376\\
16 & 0 & 24 & 4 & 4 & 16 & 4 & 8 & 92\\
40 & 0 & 0 & 10 & 50 & 40 & 2 & 20 & 242\\
4 & 0 & 0 & 1 & 41 & 4 & 1 & 2 & 25\\
40 & 0 & 0 & 18 & 18 & 104 & 18 & 20 & 462\\
84 & 0 & 0 & 29 & 29 & 84 & 21 & 42 & 501\\
16 & 0 & 0 & 4 & 4 & 16 & 4 & 16 & 92\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 24
\end{bmatrix},$$
whose characteristic equation is given below:
$$f_1(x) = x^9 - 29x^8 + 349x^7 - 2261x^6 + 8455x^5 - 17663x^4 + 15927x^3 + 6993x^2 - 24732x + 12960.$$
The function $f_1(x)$ has a multiple zero at $x = 3$ with $m = 4$. The computational results, along with the starting guesses, are depicted in Table 1 and Table 2.
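The multiplicity claim is easy to verify from the reconstructed coefficients; a small numpy sketch (illustrative) evaluates $f_1$ and its first derivatives at $x = 3$:

    import numpy as np

    # Coefficients of f1, highest degree first, signs as reconstructed above.
    p = np.poly1d([1, -29, 349, -2261, 8455, -17663,
                   15927, 6993, -24732, 12960])

    for k in range(5):
        print(k, p(3))   # value of the k-th derivative of f1 at x = 3
        p = p.deriv()
    # The values for k = 0, 1, 2, 3 vanish, while the fourth does not,
    # confirming the multiplicity m = 4.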
From Table 1, we can conclude that methods PM2 and PM3 display the best behavior among the mentioned methods in terms of the accuracy of the iterates $x_\sigma$, the differences between two consecutive iterations and the absolute residual errors. Further, the residual-error exponents of the other methods are only about half those of PM2 and PM3 (e.g., 7.7(-61) for SM1 versus 6.8(-136) for PM2 at the third iteration).
We can observe from Table 2 that, from the second iteration onward, our suggested methods PM2 and PM3 are closer to the desired root than the mentioned ones. In addition, the residual-error exponents of the existing methods are less than half of ours (e.g., 1.8(-125) for KM versus 2.3(-283) for PM2), which demonstrates the better performance of PM2 and PM3.
Table 1. Behavior of iterative methods on eigenvalue problem f_1 with x_0 = 3.1.

Methods   σ   x_σ                 |x_{σ+1} - x_σ|   |f(x_σ)|
HM        1   2.91137492398779    5.4(-2)           5.3(-3)
          2   2.96490346352068    3.4(-2)           1.3(-4)
          3   2.99892553417637    1.1(-3)           1.1(-10)
SM1       1   2.97977318565461    2.0(-2)           1.4(-5)
          2   3.00000002569087    2.6(-8)           3.5(-29)
          3   3.00000000000000    3.1(-16)          7.7(-61)
SM2       1   2.98013951815471    2.0(-2)           1.3(-5)
          2   3.00000002608467    2.6(-8)           3.7(-29)
          3   3.00000000000000    3.2(-16)          8.7(-61)
KM        1   2.98003995753693    2.0(-2)           1.3(-5)
          2   3.00000000783350    7.8(-9)           3.0(-31)
          3   3.00000000000000    2.9(-17)          5.8(-65)
BM        1   2.98023627745323    2.0(-2)           1.2(-5)
          2   3.00000002294267    2.3(-8)           2.2(-29)
          3   3.00000000000000    2.5(-16)          3.1(-61)
PM1       1   2.98054341015763    1.9(-2)           1.2(-5)
          2   3.00000001179089    1.2(-8)           1.5(-30)
          3   3.00000000000000    6.6(-17)          1.5(-63)
PM2       1   2.98097080391158    1.9(-2)           1.1(-5)
          2   2.99999999596992    4.0(-9)           2.1(-32)
          3   3.00000000000000    5.4(-35)          6.8(-136)
PM3       1   2.98078021888572    1.9(-2)           1.1(-5)
          2   2.99999999202006    8.0(-9)           3.2(-31)
          3   3.00000000000000    9.5(-34)          6.5(-131)
Table 2. Behavior of iterative methods on eigenvalue problem f_1 with x_0 = 2.9.

Methods   σ   x_σ                 |x_{σ+1} - x_σ|   |f(x_σ)|
HM        1   2.94925327756793    4.6(-2)           5.6(-4)
          2   2.99539687796758    4.6(-3)           3.6(-8)
          3   2.99999968702615    3.1(-7)           7.7(-25)
SM1       1   3.00060869418782    6.1(-4)           1.1(-11)
          2   2.99999982373464    1.8(-7)           7.7(-26)
          3   3.00000000000000    1.4(-29)          2.7(-114)
SM2       1   3.00049576757952    5.0(-4)           4.8(-12)
          2   2.99999988310565    1.2(-7)           1.5(-26)
          3   3.00000000000000    7.6(-30)          2.7(-115)
KM        1   3.00023025005565    2.3(-4)           2.2(-13)
          2   2.99999997480358    2.5(-8)           3.2(-29)
          3   3.00000000000000    2.2(-32)          1.8(-125)
BM        1   3.00042871122695    4.3(-4)           2.7(-12)
          2   2.99999991260426    8.7(-8)           4.7(-27)
          3   3.00000000000000    3.2(-30)          8.0(-117)
PM1       1   3.00016776870627    1.7(-4)           6.3(-14)
          2   2.99999998662501    1.3(-8)           2.6(-30)
          3   3.00000000000000    3.5(-33)          1.1(-128)
PM2       1   2.99994117155367    5.9(-5)           9.6(-16)
          2   3.00000000000000    2.4(-18)          2.8(-69)
          3   3.00000000000000    7.3(-72)          2.3(-283)
PM3       1   2.99993717924703    6.3(-5)           1.2(-15)
          2   3.00000000000000    3.6(-18)          1.4(-68)
          3   3.00000000000000    4.1(-71)          2.3(-280)
Example 2
(Continuous stirred tank reactor (CSTR)). Here, we consider another applied science problem, namely an isothermal continuous stirred tank reactor (CSTR). Components $M_1$ and $M_2$ are fed to the reactor at rates $B_1$ and $B_2 - B_1$, respectively, which leads to the following reaction scheme (for details, see [19]):
$$M_1 + M_2 \to B_1, \qquad B_1 + M_2 \to C_1, \qquad C_1 + M_2 \to D_1, \qquad C_1 + M_2 \to E_1.$$
Douglas [20] developed this model (55) while designing simple models that can control feedback systems. He transformed expression (55) into the following mathematical form:
$$R_{C_1}\,\frac{2.98\,(x + 2.25)}{(x + 1.45)(x + 2.85)^2(x + 4.35)} = -1,$$
where $R_{C_1}$ is the gain of the proportional controller. For the particular value $R_{C_1} = 0$, we obtain
$$f_2(x) = x^4 + 11.50x^3 + 47.49x^2 + 83.06325x + 51.23266875.$$
The solutions of $f_2$ are the poles of the open-loop transfer function. The zeros of $f_2$ are $x = -1.45, -2.85, -2.85, -4.35$. Among them, $x = -2.85$ is a multiple zero with $m = 2$. The starting points and numerical results for $f_2$ are illustrated in Table 3 and Table 4.
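As a quick cross-check of these coefficients (a sketch assuming numpy):

    import numpy as np

    # Poles of the open-loop transfer function.
    poles = [-1.45, -2.85, -2.85, -4.35]
    print(np.poly(poles))
    # -> [ 1.  11.5  47.49  83.06325  51.23266875 ], i.e., the coefficients
    #    of f2 above (up to floating-point rounding).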
From Table 3, we find that the lowest residual error among the existing methods (HM, SM1, SM2, KM, BM) is 7.3(-47), whereas our methods PM1, PM2 and PM3 reach 8.0(-82), 1.8(-79) and 2.4(-78), respectively. Thus, the residual-error exponents of the existing methods are barely more than half of ours, which indicates the faster convergence of PM1, PM2 and PM3. Our techniques also perform much better in terms of $x_\sigma$ and $|x_{\sigma+1} - x_\sigma|$ than the existing ones.
We can observe from Table 4 that our method PM1 attains the lowest residual error, 4.0(-174), compared with 3.2(-122) for SM1 (which is the lowest among the other existing methods HM, SM2, KM and BM). This clearly indicates that PM1 has the fastest convergence and the smallest residual error. Our methods PM1, PM2 and PM3 also show error differences $|x_{\sigma+1} - x_\sigma|$ with almost twice the exponent, and better iterates $x_\sigma$, compared with the existing ones.
Table 3. Behavior of iterative methods on CSTR problem f_2 with x_0 = -2.8.

Methods   σ   x_σ                  |x_{σ+1} - x_σ|   |f(x_σ)|
HM        1   -2.85582089357922    5.8(-3)           7.1(-5)
          2   -2.85007033072822    7.0(-5)           1.0(-8)
          3   -2.85000001038600    1.0(-8)           2.3(-16)
SM1       1   -2.85308999319490    3.1(-3)           2.0(-5)
          2   -2.84999999999768    2.3(-12)          1.1(-23)
          3   -2.85000000000000    5.9(-24)          7.3(-47)
SM2       1   -2.85309291853761    3.1(-3)           2.0(-5)
          2   -2.84999999996770    3.2(-11)          2.2(-21)
          3   -2.85000000000000    1.1(-21)          2.8(-42)
KM        1   -2.85309218936353    3.1(-3)           2.0(-5)
          2   -2.84999999997525    2.5(-11)          1.3(-21)
          3   -2.85000000000000    6.7(-22)          9.5(-43)
BM        1   -2.85309146863467    3.1(-3)           2.0(-5)
          2   -2.84999999998271    1.7(-11)          6.3(-22)
          3   -2.85000000000000    3.3(-22)          2.3(-43)
PM1       1   -2.85308831372191    3.1(-3)           2.0(-5)
          2   -2.85000000007061    7.1(-11)          1.0(-20)
          3   -2.85000000000000    2.0(-41)          8.0(-82)
PM2       1   -2.85307545464340    3.1(-3)           2.0(-5)
          2   -2.85000000012101    1.2(-10)          3.1(-20)
          3   -2.85000000000000    3.0(-40)          1.8(-79)
PM3       1   -2.85314917237240    3.1(-3)           2.0(-5)
          2   -2.85000000015910    1.6(-10)          5.3(-20)
          3   -2.85000000000000    1.1(-39)          2.4(-78)
Table 4. Behavior of iterative methods on CSTR problem f_2 with x_0 = -2.9.

Methods   σ   x_σ                  |x_{σ+1} - x_σ|   |f(x_σ)|
HM        1   -2.85475917811483    4.7(-3)           4.8(-5)
          2   -2.85004711393000    4.7(-5)           4.7(-9)
          3   -2.85000000466098    4.7(-9)           4.6(-17)
SM1       1   -2.84999996343561    3.7(-8)           2.8(-15)
          2   -2.85000000000000    1.5(-15)          4.5(-30)
          3   -2.85000000000000    1.2(-61)          3.2(-122)
SM2       1   -2.84999821593561    1.8(-6)           6.7(-12)
          2   -2.85000000000349    3.5(-12)          2.6(-23)
          3   -2.85000000000000    5.3(-47)          5.9(-93)
KM        1   -2.84999867508443    1.3(-6)           3.7(-12)
          2   -2.85000000000193    1.9(-12)          7.8(-24)
          3   -2.85000000000000    3.8(-48)          3.0(-95)
BM        1   -2.84999908290043    9.2(-7)           1.8(-12)
          2   -2.85000000000092    9.2(-13)          1.8(-24)
          3   -2.85000000000000    1.4(-49)          4.1(-98)
PM1       1   -2.85000401687642    4.0(-6)           3.4(-11)
          2   -2.85000000000000    2.0(-22)          8.8(-44)
          3   -2.85000000000000    1.4(-87)          4.0(-174)
PM2       1   -2.85000635124083    6.4(-6)           8.5(-11)
          2   -2.85000000000000    2.2(-21)          1.1(-41)
          3   -2.85000000000000    3.5(-83)          2.5(-165)
PM3       1   -2.85000738796420    7.4(-6)           1.1(-10)
          2   -2.85000000000000    4.9(-21)          5.1(-41)
          3   -2.85000000000000    9.7(-82)          2.0(-162)
Example 3
(Root clustering problem). We chose a root clustering problem similar to Zeng [21]:
$$f_3(x) = (x-1)^{30}(x-2)^{150}(x-3)^{191}(x-4)^{95}.$$
The zeros of $f_3$ are $x = 1, 2, 3$ and $x = 4$, of multiplicity $m = 30, 150, 191$ and $m = 95$, respectively. All of the zeros are quite close to each other; therefore, this is known as a root clustering problem. We chose $x = 3$, the multiple zero of multiplicity 191, for the numerical experiment. The computational results are depicted in Table 5, along with the initial approximation.
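The clustering also explains the need for the 3000-digit arithmetic mentioned above: in IEEE double precision the residual of $f_3$ underflows to zero long before the stopping criterion $10^{-100}$ is reached. A short illustration (assuming mpmath):

    from mpmath import mp, mpf

    def f3(x):
        return (x - 1)**30 * (x - 2)**150 * (x - 3)**191 * (x - 4)**95

    print(f3(2.999))         # 0.0 in IEEE doubles: the factor (x-3)**191
                             # is about 1e-573 and underflows

    mp.dps = 3000            # multiple-precision arithmetic resolves it
    print(f3(mpf('2.999')))  # approx 1.0e-564: tiny but nonzero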
Undoubtedly, SM2 demonstrates slightly better behavior than our methods and the other existing methods in Table 5. However, the gap is not as large as the advantage our methods show in the previous Tables 1-4. Our results are also very close to those of SM2 in terms of $|x_{\sigma+1} - x_\sigma|$; the difference is only about four orders of magnitude in the case of PM1.
Table 5. Behavior of iterative methods on root clustering problem f_3 with x_0 = 2.8.

Methods   σ   x_σ                 |x_{σ+1} - x_σ|   |f(x_σ)|
HM        1   3.00004810962185    4.8(-5)           2.2(-816)
          2   3.00000000000000    6.3(-18)          1.8(-3276)
          3   3.00000000000000    1.9(-69)          8.1(-13117)
SM1       1   3.00014776492729    1.5(-4)           2.7(-723)
          2   2.99999999999999    8.8(-15)          2.5(-2676)
          3   3.00000000000000    5.7(-29)          9.5(-5387)
SM2       1   3.00001437474853    1.4(-5)           1.4(-916)
          2   3.00000000000000    9.3(-21)          8.7(-3818)
          3   3.00000000000000    1.6(-81)          1.5(-15422)
KM        1   3.00001556100437    1.6(-5)           5.1(-910)
          2   3.00000000000000    1.6(-20)          1.5(-3774)
          3   3.00000000000000    1.6(-80)          9.5(-15233)
BM        1   3.00001556100437    1.6(-5)           5.1(-910)
          2   3.00000000000000    1.6(-20)          1.5(-3774)
          3   3.00000000000000    1.6(-80)          9.5(-15233)
PM1       1   3.00002015875780    2.0(-5)           1.5(-888)
          2   3.00000000000000    7.7(-20)          7.8(-3643)
          3   3.00000000000000    1.6(-77)          5.4(-14660)
PM2       1   3.00002746410154    2.7(-5)           6.9(-863)
          2   3.00000000000000    4.7(-19)          4.8(-3493)
          3   3.00000000000000    3.9(-74)          1.2(-14013)
PM3       1   3.00002746474990    2.7(-5)           6.9(-863)
          2   3.00000000000000    4.7(-19)          4.9(-3493)
          3   3.00000000000000    3.9(-74)          1.3(-14013)
Example 4
(Academic problem). We chose another academic problem, which is given by
$$f_4(x) = x^2\sin(4x).$$
The zero of $f_4$ is $x = 0$, of multiplicity $m = 3$. The computational results are depicted in Table 6, along with the initial approximation.
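The multiplicity can be read directly from the Taylor expansion of $f_4$ at the origin; a one-line sympy check (illustrative):

    import sympy as sp

    x = sp.symbols('x')
    print(sp.series(x**2 * sp.sin(4 * x), x, 0, 6))
    # -> 4*x**3 - 32*x**5/3 + O(x**6): the leading term 4*x**3 gives m = 3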
Undoubtedly, SM2 again demonstrates slightly better behavior than our methods and the other existing methods in Table 6. However, as before, there is no huge difference: our results are very close to those of SM2 in terms of $|x_{\sigma+1} - x_\sigma|$, with PM2 essentially matching SM2's exponent and PM1 trailing by only one order of magnitude.
Table 6. Behavior of iterative methods on f_4 with x_0 = 0.1.

Methods   σ   x_σ                        |x_{σ+1} - x_σ|   |f(x_σ)|
HM        1   1.24117076065003(-2)       1.2(-2)           7.6(-6)
          2   2.74682517035481(-5)       2.7(-5)           8.3(-14)
          3   2.98438976688573(-13)      3.0(-13)          1.1(-37)
SM1       1   1.22659666187711(-7)       1.2(-7)           7.4(-21)
          2   1.37114651655205(-36)      1.4(-36)          1.0(-107)
          3   2.39328432150731(-181)     2.4(-181)         5.5(-542)
SM2       1   3.74384290175570(-9)       3.7(-9)           2.1(-25)
          2   1.81607199017953(-44)      1.8(-44)          2.4(-131)
          3   4.87764600127062(-221)     4.9(-221)         4.6(-661)
KM        1   2.19975180887394(-7)       2.2(-7)           4.3(-20)
          2   5.59585008105915(-35)      5.6(-35)          7.0(-103)
          3   5.96112195693271(-173)     6.0(-173)         8.5(-517)
BM        1   3.74931046465951(-9)       3.7(-9)           2.1(-25)
          2   1.82937187058353(-44)      1.8(-44)          2.4(-131)
          3   5.05888679422820(-221)     5.1(-221)         5.2(-661)
PM1       1   4.04274483802274(-9)       4.0(-9)           2.6(-25)
          2   2.66640818057457(-44)      2.7(-44)          7.6(-131)
          3   3.32796034070430(-220)     3.3(-220)         1.5(-658)
PM2       1   3.56649330514408(-9)       3.6(-9)           1.8(-25)
          2   1.42479388451577(-44)      1.4(-44)          1.2(-131)
          3   1.44979029661674(-221)     1.4(-221)         1.2(-662)
PM3       1   1.098062847026967(-4)      1.3(-4)           8.2(-12)
          2   1.96974784664775(-13)      3.9(-13)          2.4(-37)
          3   1.012552510346339(-38)     1.2(-38)          6.3(-114)
Remark 3.
From Table 7, we find that PM1 has the lowest average execution time for attaining the desired accuracy. The average execution times of HM and SM1 are, respectively, more than two and three times those of PM1, PM2 and PM3. Further, PM1, PM2 and PM3 also consume less CPU time (on average) than SM2, KM and BM.
Table 7. CPU timing on the basis of the number of iterations.

I.M.   Ex.(1), x_0=3.1   Ex.(1), x_0=2.9   Ex.(2), x_0=-2.8   Ex.(2), x_0=-2.9   Ex.(3), x_0=3.1   Ex.(4), x_0=0.1   T.T.         A.T.
HM     0.060000          0.350000          0.015001           0.060000           17.610078         0.0023191         18.0973921   3.01623202
SM1    0.062001          0.340000          0.010000           0.045003           25.445229         0.0015465         25.9037795   4.31729658
SM2    0.050000          0.340000          0.010000           0.046001           10.761069         0.0015410         11.2086110   1.86810183
KM     0.060000          0.332004          0.010000           0.048003           10.118055         0.0077654         10.5758274   1.76263790
BM     0.050000          0.331000          0.010000           0.040000           10.063077         0.0014246         10.4955016   1.74925027
PM1    0.050000          0.320000          0.002000           0.031000           7.608033          0.0014013         8.0124343    1.33540572
PM2    0.050000          0.316002          0.003000           0.040000           7.762041          0.0014151         8.1724581    1.36207635
PM3    0.240000          0.320000          0.004000           0.040000           7.605034          0.0018232         8.2108572    1.36847620

The abbreviations T.T. and A.T. stand for total timing and average timing, respectively.
Remark 4.
On the basis of the number of iterations reported in Table 8, we conclude that PM2 requires the fewest iterations on average (3.83) to attain the desired accuracy. In addition, the average number of iterations of our methods PM1 and PM3 (4) is also lower than 4.3, which is the lowest among the existing methods. Thus, we deduce that our method PM2 is the fastest among the mentioned methods.
Table 8. Number of iterations required in order to attain the desired accuracy.

I.M.   Ex.(1), x_0=3.1   Ex.(1), x_0=2.9   Ex.(2), x_0=-2.8   Ex.(2), x_0=-2.9   Ex.(3), x_0=3.1   Ex.(4), x_0=0.1   T.Iter.   A.Iter.
HM     6                 6                 7                  7                  4                 5                 35        5.83
SM1    5                 4                 5                  4                  4                 3                 26        4.3
SM2    5                 4                 5                  5                  4                 3                 26        4.3
KM     5                 4                 5                  5                  4                 3                 26        4.3
BM     5                 4                 5                  5                  4                 3                 26        4.3
PM1    5                 4                 4                  4                  4                 3                 24        4
PM2    4                 4                 4                  4                  4                 3                 23        3.83
PM3    4                 4                 4                  4                  4                 4                 24        4

The abbreviations T.Iter. and A.Iter. stand for total iterations and average iterations, respectively.
Remark 5.
From Table 9, it is straightforward to see that the methods PM1, PM2 and PM3 exhibit a consistent COC (except in Example 4), in contrast to the other existing methods. The calculation of the COC is based on the number of iterations reported in Table 8 for the corresponding methods and examples.
Table 9. COC based on the number of iterations required in order to attain the desired accuracy.

I.M.   Ex.(1), x_0=3.1   Ex.(1), x_0=2.9   Ex.(2), x_0=-2.8   Ex.(2), x_0=-2.9   Ex.(3), x_0=3.1   Ex.(4), x_0=0.1
HM     4.000             4.000             2.000              2.000              4.000             3.000
SM1    4.000             4.000             1.325              1.321              5.883             5.000
SM2    4.000             4.000             1.330              6.012              4.000             5.000
KM     4.000             4.000             1.330              6.014              4.000             5.000
BM     4.000             4.000             1.329              6.017              4.000             5.000
PM1    4.000             4.000             4.000              4.000              4.000             5.000
PM2    4.000             4.000             4.000              4.000              4.000             5.000
PM3    4.000             4.000             4.000              4.000              4.000             5.000

4. Concluding Remarks

  • We have suggested a new two-step, derivative-free and cost-effective iterative scheme for multiple zeros ($m \geq 2$).
  • Our scheme is based on the weight-function technique. By using weight functions at both substeps, we provide more flexibility for generating new, more general schemes. Several new and existing special cases are depicted in the numerical section (Section 3) and Remark 2, respectively.
  • Our scheme (8) uses only three evaluations of f at different points. Thus, the optimality of our scheme is confirmed by the Kung-Traub conjecture.
  • Our methods attain the lowest residual errors, a more stable COC, the smallest differences between two consecutive iterations and better approximate zeros compared with the existing ones (see Tables 1-4 and Table 9).
  • Undoubtedly, SM2 demonstrates slightly better behavior than our methods and the other existing methods in Example 3. Even there, our results are very close to those of SM2 in terms of $|x_{\sigma+1} - x_\sigma|$, with a difference of only about four orders of magnitude in the case of PM1.
  • PM1 requires the lowest execution time to obtain the numerical results. The execution times of HM and SM1 are, respectively, more than two and three times those of our methods PM1, PM2 and PM3. Thus, we deduce that our schemes are cost-effective.
  • The average number of iterations of our methods (3.83 for PM2, and 4 for PM1 and PM3) is the lowest, compared with 4.3, the lowest average among the existing methods.
  • Finally, we conclude from Tables 1-8 that scheme (8) is more stable and cost-effective and could be a better substitute for the existing methods.
  • Our scheme cannot be used for solving systems of nonlinear equations. In the future, we plan to work in two directions: the extension to eighth-order convergence for multiple roots, and the extension to nonlinear systems.

Funding

This research was funded by the Deanship of Scientific Research (DSR) at King Abdulaziz University, Jeddah, Saudi Arabia, under Grant No. D-013-130-1441-1442.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

This project was funded by the Deanship of Scientific Research (DSR) at King Abdulaziz University, Jeddah, Saudi Arabia, under Grant No. D-013-130-1441-1442. The author, therefore, acknowledges with thanks DSR for the technical and financial support.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Ostrowski, A.M. Solution of Equations and Systems of Equations; Academic Press: New York, NY, USA, 1964.
  2. Petković, M.; Neta, B.; Petković, L.; Džunić, J. Multipoint Methods for Solving Nonlinear Equations; Academic Press: Cambridge, MA, USA, 2012.
  3. Traub, J.F. Iterative Methods for the Solution of Equations; Prentice-Hall Series in Automatic Computation; Prentice-Hall: Englewood Cliffs, NJ, USA, 1964.
  4. Argyros, I.K.; Magreñán, Á.A. Iterative Methods and Their Dynamics with Applications: A Contemporary Study; CRC Press: Boca Raton, FL, USA; Taylor & Francis Group: Hoboken, NJ, USA, 2017.
  5. Bradie, B. A Friendly Introduction to Numerical Analysis; Pearson Education Inc.: New Delhi, India, 2006.
  6. Hueso, J.L.; Martínez, E.; Teruel, C. Determination of multiple roots of nonlinear equations and applications. J. Math. Chem. 2015, 53, 880-892.
  7. Sharma, J.R.; Kumar, S.; Jäntschi, L. On a class of optimal fourth order multiple root solvers without using derivatives. Symmetry 2019, 11, 1452.
  8. Sharma, J.R.; Kumar, S.; Jäntschi, L. On derivative free multiple-root finders with optimal fourth order convergence. Mathematics 2020, 8, 1091.
  9. Kumar, S.; Kumar, D.; Sharma, J.R.; Cesarano, C.; Aggarwal, P.; Chu, Y.M. An optimal fourth order derivative-free numerical algorithm for multiple roots. Symmetry 2020, 12, 1038.
  10. Behl, R.; Alharbi, S.K.; Mallawi, F.O.; Salimi, M. An optimal derivative-free Ostrowski's scheme for multiple roots of nonlinear equations. Mathematics 2020, 8, 1809.
  11. Le, D. An efficient derivative-free method for solving nonlinear equations. ACM Trans. Math. Softw. 1985, 11, 250-262.
  12. Cordero, A.; Hueso, J.L.; Martínez, E.; Torregrosa, J.R. A new technique to obtain derivative-free optimal iterative methods for solving nonlinear equations. J. Comput. Appl. Math. 2013, 252, 95-102.
  13. Zhanlav, T.; Otgondorj, K. Comparison of some optimal derivative-free three-point iterations. J. Numer. Anal. Approx. Theory 2020, 49, 76-90.
  14. Cordero, A.; Torregrosa, J.R. Low-complexity root-finding iteration functions with no derivatives of any order of convergence. J. Comput. Appl. Math. 2015, 275, 502-515.
  15. Kung, H.T.; Traub, J.F. Optimal order of one-point and multipoint iteration. J. Assoc. Comput. Mach. 1974, 21, 643-651.
  16. Ahlfors, L.V. Complex Analysis; McGraw-Hill: New York, NY, USA, 1979.
  17. Cordero, A.; Torregrosa, J.R. Variants of Newton's method using fifth-order quadrature formulas. Appl. Math. Comput. 2007, 190, 686-698.
  18. Behl, R.; Cordero, A.; Torregrosa, J.R. A new higher-order optimal derivative free scheme for multiple roots. J. Comput. Appl. Math. 2022, 404, 113773.
  19. Constantinides, A.; Mostoufi, N. Numerical Methods for Chemical Engineers with MATLAB Applications; Prentice Hall PTR: Hoboken, NJ, USA, 1999.
  20. Douglas, J.M. Process Dynamics and Control; Prentice Hall: Englewood Cliffs, NJ, USA, 1972; Volume 2.
  21. Zeng, Z. Computing multiple roots of inexact polynomials. Math. Comput. 2004, 74, 869-903.