Article

An Efficient Optimal Derivative-Free Fourth-Order Method and Its Memory Variant for Non-Linear Models and Their Dynamics

Himani Sharma 1, Munish Kansal 1 and Ramandeep Behl 2

1 School of Mathematics, Thapar Institute of Engineering and Technology, Patiala 147004, India
2 Mathematical Modelling and Applied Computation Research Group (MMAC), Department of Mathematics, King Abdulaziz University, P.O. Box 80203, Jeddah 21589, Saudi Arabia
* Author to whom correspondence should be addressed.
Math. Comput. Appl. 2023, 28(2), 48; https://doi.org/10.3390/mca28020048
Submission received: 1 February 2023 / Revised: 9 March 2023 / Accepted: 20 March 2023 / Published: 22 March 2023

Abstract:
We propose a new optimal derivative-free iterative scheme without memory for solving non-linear equations. Many iterative schemes in the literature either diverge or fail to work when $f'(x) = 0$; our proposed scheme works even in these cases. In addition, we extend the same idea to iterative methods with memory with the help of self-accelerating parameters estimated from the current and previous approximations. As a result, the order of convergence increases from four to seven without any additional functional evaluation. To confirm the theoretical results, numerical examples and comparisons with some existing methods are included, which reveal that our scheme is more efficient than the existing ones. Furthermore, basins of attraction are included to give a clear picture of the convergence behaviour of the proposed method as well as some existing methods.

1. Introduction

Many problems in computational sciences and other disciplines can be modelled as a non-linear equation or a system of such equations. In particular, a large number of problems in applied mathematics and engineering are solved by finding the solutions of these equations. In the literature, several iterative methods have been designed by using different procedures to approximate the simple roots of a non-linear equation,
$$f(x) = 0, \qquad (1)$$
where $f : I \subseteq \mathbb{R} \to \mathbb{R}$ is a real function defined in an open interval I. To find the roots of Equation (1), we look towards iterative schemes. A lot of iterative methods of different convergence orders already exist in the literature (see [1,2] and the references therein) to approximate the roots of Equation (1). Among them, the most eminent one-point iterative method without memory is the quadratically convergent Newton–Raphson scheme [3], given by
$$y_n = x_n - \frac{f(x_n)}{f'(x_n)}, \quad n = 0, 1, \ldots$$
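For concreteness, here is a minimal sketch of this classical iteration in Python (an illustrative transcription of ours; the test function, tolerance, and iteration cap are arbitrary choices, not taken from the paper), including a guard for the failure case discussed next:

    def newton(f, fprime, x0, tol=1e-12, max_iter=50):
        """Newton-Raphson iteration: x_{n+1} = x_n - f(x_n)/f'(x_n)."""
        x = x0
        for _ in range(max_iter):
            d = fprime(x)
            if d == 0.0:
                # the failure case discussed below: the derivative vanishes
                raise ZeroDivisionError("f'(x_n) = 0: the iteration breaks down")
            x_next = x - f(x) / d
            if abs(x_next - x) < tol:
                return x_next
            x = x_next
        return x

    # Example: the real root of x**3 - 2; starting from x0 = 0 would raise, since f'(0) = 0.
    root = newton(lambda x: x**3 - 2, lambda x: 3 * x**2, x0=1.0)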
One drawback of this method is that when $f'(x_n) = 0$, the method fails, which confines its applications. The main objective and inspiration in designing iterative methods for solving this kind of problem is to obtain the highest order of convergence with the least computational cost. Therefore, many researchers are interested in constructing optimal multipoint methods [4] without memory, in the sense of the Kung–Traub conjecture [5], which states that multipoint iterative methods without memory requiring $n + 1$ functional evaluations per iteration have a convergence order of at most $2^n$. Among them, an optimal fourth-order iterative method was developed by Kou et al. [6], defined by
$$y_n = x_n - \frac{f(x_n)}{f'(x_n)}, \qquad x_{n+1} = x_n - \frac{f(x_n)^2 + f(y_n)^2}{f'(x_n)\left(f(x_n) - f(y_n)\right)}, \quad n = 0, 1, \ldots$$
Further, Kansal et al. proposed an optimal fourth-order iterative method [7] with parameters $\alpha$ ($\neq -1$) and $\beta$, defined by
$$y_n = x_n - \frac{f(x_n)}{f'(x_n)}, \qquad x_{n+1} = x_n - \frac{\alpha + 1}{\alpha \pm \left[\dfrac{f(x_n)^2 + (\beta - 2\alpha - 2) f(x_n) f(y_n) - \beta(\alpha + 1) f(y_n)^2}{f(x_n)^2 + \beta f(x_n) f(y_n)}\right]^{1/2}}\, \frac{f(x_n)}{f'(x_n)}, \quad n = 0, 1, \ldots$$
Soleymani developed an optimal fourth-order method [8] given by
$$y_n = x_n - \frac{f(x_n)}{f'(x_n)}, \qquad x_{n+1} = y_n - \frac{f(x_n)^2}{f(x_n)^2 - 2 f(x_n) f(y_n)} \cdot \frac{f(y_n)}{f'(x_n)} \left(1 + \frac{f(y_n)^2}{f(x_n)^2}\right)\left(1 + \frac{f(y_n)^2}{f'(x_n)^2}\right)\left(1 + \frac{f(x_n)^2}{f'(x_n)^2}\right), \quad n = 0, 1, 2, \ldots$$
Furthermore, another optimal fourth-order method was proposed by Chun et al. [9], given by
$$y_n = x_n - \frac{2}{3} \frac{f(x_n)}{f'(x_n)}, \qquad x_{n+1} = x_n + \frac{f'(x_n) + 3 f'(y_n)}{2 f'(x_n) - 6 f'(y_n)} \cdot \frac{f(x_n)}{f'(x_n)}, \quad n = 0, 1, 2, \ldots$$
On the other hand, it is sometimes possible to increase the order of convergence without any new function evaluation by using acceleration parameter(s) that appear in the error equation of multipoint methods without memory. It was Traub [3] who slightly altered Steffensen's method [10] and presented the first method with memory as follows:
$$\begin{cases} \gamma_0, x_0 \ \text{suitably given}, \\ w_n = x_n + \gamma_n f(x_n), \quad 0 \neq \gamma_n \in \mathbb{R}, \\ x_{n+1} = x_n - \dfrac{f(x_n)}{f[x_n, w_n]}, \quad n = 0, 1, 2, \ldots \end{cases}$$
This method has an order of convergence of $1 + \sqrt{2} \approx 2.414$. If a better self-accelerating parameter is used, the order of convergence can be increased further.
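A minimal Python sketch of this with-memory idea follows (our own illustration; the secant-type update $\gamma_{n+1} = -(x_{n+1} - x_n)/(f(x_{n+1}) - f(x_n))$, which estimates $-1/f'(\xi)$ from already available data, is one simple self-accelerating choice consistent with the description above, not necessarily the exact update used in [3]):

    def traub_steffensen(f, x0, gamma0=0.01, tol=1e-12, max_iter=50):
        """Steffensen-type method with a self-accelerating parameter gamma_n."""
        x, gamma = x0, gamma0
        for _ in range(max_iter):
            fx = f(x)
            w = x + gamma * fx                    # w_n = x_n + gamma_n f(x_n)
            dd = (f(w) - fx) / (w - x)            # divided difference f[x_n, w_n]
            x_next = x - fx / dd
            if abs(x_next - x) < tol:
                return x_next
            # self-accelerating update: secant estimate of -1/f'(xi)
            gamma = -(x_next - x) / (f(x_next) - fx)
            x = x_next
        return x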
Following in Traub's footsteps, many authors have constructed higher-order methods with and without memory. Among many others, Chicharro et al. [11] presented a bi-parametric family of order four and then developed a family of methods with memory having a higher order of convergence without further increasing the number of functional evaluations per iteration. In [12], the authors presented a derivative-free form of King's family with memory. The authors in [13] developed a tri-parametric derivative-free family of Hansen–Patrick-type methods which requires only three functional evaluations to achieve optimal fourth-order convergence; they then extended the idea with memory, as a result of which the R-order of convergence increased from four to seven without any additional functional evaluation.
The development of such methods has increased over the years. Some applications of these iterative methods can be seen in [14,15,16,17]. Thus, by taking into consideration these developments, we further attempt to propose an iterative method without memory and then convert it into a more efficient method with memory such that the order of convergence is increased without any further functional evaluation.
However, another important aspect of an iterative scheme is its stability, that is, how sensitive the scheme is to the choice of initial guess. In this regard, a comparison between iterative methods using basins of attraction was developed by Ardelean [18]. This motivates us to work on optimal-order methods and their with-memory variants along with their basins of attraction.
The rest of the paper is organized as follows. Section 2 contains the development of a new iterative method without memory and the proof of its order of convergence. Section 3 covers the inclusion of memory to develop a new iterative method with memory and its error analysis. Numerical results for the proposed methods and comparisons with some of the existing methods to illustrate our theoretical results are given in Section 4. Section 5 depicts the convergence of the methods using basins of attraction. Lastly, Section 6 collates the conclusions.

R-Order of Convergence

To find the R-order of convergence [19] of our proposed method with memory, we make use of the following theorem given by Traub.
Theorem 1.
Suppose that (IM) is an iterative method with memory that generates a sequence $\{x_m\}$ of approximations converging to the root ξ. If there exist a non-zero constant ζ and non-negative numbers $s_j$, $0 \leq j \leq k$, such that the inequality
$$\epsilon_{m+1} \leq \zeta \prod_{j=0}^{k} \epsilon_{m-j}^{s_j}$$
holds, then the R-order of convergence of the iterative method ( I M ) satisfies the inequality,
$$O_R((IM), \xi) \geq t^*,$$
where t * is the unique positive root of the equation,
$$t^{k+1} - \sum_{j=0}^{k} s_j\, t^{k-j} = 0.$$
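As a worked illustration of how the theorem is applied (included by us for exposition): for Traub's accelerated Steffensen scheme of Section 1, the standard error inequality has the form
$$\epsilon_{m+1} \leq \zeta\, \epsilon_m^2\, \epsilon_{m-1} \qquad (k = 1,\ s_0 = 2,\ s_1 = 1),$$
so the defining equation is $t^2 - 2t - 1 = 0$, whose unique positive root is $t^* = 1 + \sqrt{2} \approx 2.414$ — exactly the order quoted for that method above.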

2. Iterative Method without Memory and Its Convergence Analysis

We aim to construct a new two-point derivative-free optimal scheme without memory in this section and extend it to a memory scheme.
If the well-known Steffensen’s method is combined with Newton’s method, we obtain the following fourth-order scheme:
$$y_n = x_n - \frac{f(x_n)^2}{f(w_n) - f(x_n)}, \qquad x_{n+1} = y_n - \frac{f(y_n)}{f'(y_n)}, \qquad (9)$$
where $w_n = x_n + f(x_n)$. To avoid the computation of $f'(y_n)$, the authors in [20] approximated it by the derivative $m'(y_n)$ of the following first-degree Padé approximant:
$$m(t) = \frac{a_1 + a_2 (t - y_n)}{1 + a_3 (t - y_n)}, \qquad (10)$$
where a 1 , a 2 and a 3 are real parameters to be determined satisfying the following conditions:
$$m(x_n) = f(x_n), \qquad (11)$$
$$m(y_n) = f(y_n), \qquad (12)$$
$$m(w_n) = f(w_n). \qquad (13)$$
Using these conditions, the derivative of the Padé approximant evaluated at $y_n$ is given as
$$m'(y_n) = \frac{f[x_n, y_n]\, f[y_n, w_n]}{f[x_n, w_n]}. \qquad (14)$$
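This identity can be checked symbolically; the following short SymPy snippet (our own verification, with generic symbols standing for the nodes and function values) confirms Equation (14):

    import sympy as sp

    x, y, w, t = sp.symbols("x y w t")
    fx, fy, fw = sp.symbols("f_x f_y f_w")
    a1, a2, a3 = sp.symbols("a1 a2 a3")

    # first-degree Pade approximant m(t) and the three interpolation conditions
    m = (a1 + a2 * (t - y)) / (1 + a3 * (t - y))
    sol = sp.solve([sp.Eq(m.subs(t, x), fx),
                    sp.Eq(m.subs(t, y), fy),
                    sp.Eq(m.subs(t, w), fw)], [a1, a2, a3], dict=True)[0]

    m_prime_at_y = sp.diff(m, t).subs(sol).subs(t, y)
    dd = lambda p, fp, q, fq: (fq - fp) / (q - p)      # divided difference f[p, q]
    target = dd(x, fx, y, fy) * dd(y, fy, w, fw) / dd(x, fx, w, fw)
    assert sp.simplify(m_prime_at_y - target) == 0     # Equation (14) holds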
Using (14) in the second step of (9), they presented the following scheme:
$$y_n = x_n - \frac{f(x_n)^2}{f(w_n) - f(x_n)}, \qquad x_{n+1} = y_n - \frac{f(y_n)\, f[x_n, w_n]}{f[x_n, y_n]\, f[y_n, w_n]}, \qquad (15)$$
where $w_n = x_n + f(x_n)$. This scheme is optimal in the sense of the Kung–Traub conjecture, having an order of convergence of four with three functional evaluations per iteration: $f(x_n)$, $f(y_n)$ and $f(w_n)$.
Now, in order to extend the method with memory, we introduce two parameters γ and λ into (15) and present the following modification of this method:
$$y_n = x_n - \frac{f(x_n)}{f[x_n, w_n] + \lambda f(w_n)}, \qquad x_{n+1} = y_n - \frac{f(y_n)\left(f[x_n, w_n] + \lambda f(w_n)\right)}{\left(f[x_n, y_n] + \lambda f(w_n)\right) f[y_n, w_n]}, \qquad (16)$$
where $w_n = x_n + \gamma f(x_n)$.
This modified scheme retains the optimal order of convergence four with three functional evaluations per iteration: $f(x_n)$, $f(y_n)$ and $f(w_n)$.
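A direct Python transcription of the scheme (16) reads as follows (our own sketch; the stopping rule and default values are illustrative choices):

    def pm(f, x0, gamma=0.1, lam=0.1, tol=1e-12, max_iter=50):
        """Proposed derivative-free optimal fourth-order method (16)."""
        dd = lambda a, fa, b, fb: (fb - fa) / (b - a)   # divided difference f[a, b]
        x = x0
        for _ in range(max_iter):
            fx = f(x)
            if fx == 0.0:
                return x                                 # exact root found
            w = x + gamma * fx                           # w_n = x_n + gamma f(x_n)
            fw = f(w)
            fxw = dd(x, fx, w, fw)                       # f[x_n, w_n]
            y = x - fx / (fxw + lam * fw)                # first step of (16)
            fy = f(y)
            fxy = dd(x, fx, y, fy)                       # f[x_n, y_n]
            fyw = dd(y, fy, w, fw)                       # f[y_n, w_n]
            x_next = y - fy * (fxw + lam * fw) / ((fxy + lam * fw) * fyw)
            if abs(x_next - x) < tol:
                return x_next
            x = x_next
        return x

    # Example: pm(lambda x: x**3 - 2, 1.0) approximates 2**(1/3) in a few iterations.

Note that no derivative of f appears anywhere, so the scheme remains applicable when $f'$ vanishes near the root.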
Next, we establish the convergence results for our proposed family without memory given by Equation (16).
Theorem 2.
Suppose that $f : D \subseteq \mathbb{R} \to \mathbb{R}$ is a real function, suitably differentiable in a domain D. If $\xi \in D$ is a simple root of $f(x) = 0$ and the initial guess $x_0$ is sufficiently close to ξ, then the iterative method given by Equation (16) converges to ξ with convergence order $p = 4$ and the following error relation:
$$e_{n+1} = (1 + f'(\xi)\gamma)^2 (\lambda + c_2)\left[(2 + f'(\xi)\gamma)\lambda c_2 + 2 c_2^2 - c_3\right] e_n^4 + O(e_n^5),$$
where $e_n = x_n - \xi$ and $c_n = \frac{f^{(n)}(\xi)}{n!\, f'(\xi)}$, $n = 2, 3, \ldots$
Proof. 
Expanding $f(x_n)$ about $x_n = \xi$ by the Taylor series, we have
$$f(x_n) = f'(\xi)\left(e_n + c_2 e_n^2 + c_3 e_n^3 + c_4 e_n^4\right) + O(e_n^5). \qquad (17)$$
Using Equation (17) in the first step of Equation (16), we have
$$\begin{aligned} e_{n,y} = y_n - \xi = {} & (1 + f'(\xi)\gamma)(\lambda + c_2)\, e_n^2 \\ & - \Big[\big(2 + 2 f'(\xi)\gamma + f'(\xi)^2\gamma^2\big)\lambda c_2 + \big(2 + 2 f'(\xi)\gamma + f'(\xi)^2\gamma^2\big) c_2^2 + (1 + f'(\xi)\gamma)\big((1 + f'(\xi)\gamma)\lambda^2 - (2 + f'(\xi)\gamma) c_3\big)\Big] e_n^3 \\ & + \Big[\big(5 + 7 f'(\xi)\gamma + 4 f'(\xi)^2\gamma^2 + f'(\xi)^3\gamma^3\big)\lambda c_2^2 + \big(4 + 5 f'(\xi)\gamma + 3 f'(\xi)^2\gamma^2 + f'(\xi)^3\gamma^3\big) c_2^3 - \big(4 + 7 f'(\xi)\gamma + 5 f'(\xi)^2\gamma^2 + f'(\xi)^3\gamma^3\big)\lambda c_3 \\ & \qquad - c_2\Big(\big(7 + 10 f'(\xi)\gamma + 7 f'(\xi)^2\gamma^2 + 2 f'(\xi)^3\gamma^3\big) c_3 - \big(3 + 5 f'(\xi)\gamma + 3 f'(\xi)^2\gamma^2 + f'(\xi)^3\gamma^3\big)\lambda^2\Big) \\ & \qquad + (1 + f'(\xi)\gamma)\Big((1 + f'(\xi)\gamma)^2\lambda^3 + \big(3 + 3 f'(\xi)\gamma + f'(\xi)^2\gamma^2\big) c_4\Big)\Big] e_n^4 + O(e_n^5). \end{aligned} \qquad (18)$$
In addition, the Taylor expansion of $f(y_n)$ is
$$f(y_n) = f'(\xi)\left(e_{n,y} + c_2 e_{n,y}^2 + c_3 e_{n,y}^3 + c_4 e_{n,y}^4\right) + O(e_{n,y}^5). \qquad (19)$$
Using Equations (17)–(19), we have
$$\begin{aligned} \frac{f(y_n)\big(f[x_n, w_n] + \lambda f(w_n)\big)}{\big(f[x_n, y_n] + \lambda f(w_n)\big) f[y_n, w_n]} = {} & (1 + f'(\xi)\gamma)(\lambda + c_2)\, e_n^2 \\ & - \Big[\big(2 + 2 f'(\xi)\gamma + f'(\xi)^2\gamma^2\big)\lambda c_2 + \big(2 + 2 f'(\xi)\gamma + f'(\xi)^2\gamma^2\big) c_2^2 + (1 + f'(\xi)\gamma)\big((1 + f'(\xi)\gamma)\lambda^2 - (2 + f'(\xi)\gamma) c_3\big)\Big] e_n^3 \\ & + \Big[\big(1 - 2 f'(\xi)\gamma - 2 f'(\xi)^2\gamma^2\big)\lambda c_2^2 + \big(2 + f'(\xi)\gamma + f'(\xi)^2\gamma^2 + f'(\xi)^3\gamma^3\big) c_2^3 - \big(3 + 5 f'(\xi)\gamma + 4 f'(\xi)^2\gamma^2 + f'(\xi)^3\gamma^3\big)\lambda c_3 \\ & \qquad - c_2\Big(2\big(3 + 4 f'(\xi)\gamma + 3 f'(\xi)^2\gamma^2 + f'(\xi)^3\gamma^3\big) c_3 - \big(1 - f'(\xi)^2\gamma^2\big)\lambda^2\Big) \\ & \qquad + (1 + f'(\xi)\gamma)\Big((1 + f'(\xi)\gamma)^2\lambda^3 + \big(3 + 3 f'(\xi)\gamma + f'(\xi)^2\gamma^2\big) c_4\Big)\Big] e_n^4 + O(e_n^5). \end{aligned} \qquad (20)$$
Finally, putting Equation (20) into the second step of Equation (16), we obtain
$$e_{n+1} = (1 + f'(\xi)\gamma)^2 (\lambda + c_2)\left[(2 + f'(\xi)\gamma)\lambda c_2 + 2 c_2^2 - c_3\right] e_n^4 + O(e_n^5), \qquad (21)$$
which is the error equation for the proposed optimal scheme given by Equation (16) with a convergence order of four. This completes the proof. □

3. Iterative Method with Memory and Its Convergence Analysis

Now, we present an extension to the method given by Equation (16) by the inclusion of memory to improve the convergence order without the addition of any new functional evaluations.
It can be seen from the error relation given in Equation (21) that the order of convergence of the proposed family (16) is four when $\gamma \neq -\frac{1}{f'(\xi)}$ and $\lambda \neq -c_2$. Therefore, if $\gamma = -\frac{1}{f'(\xi)}$ and $\lambda = -c_2 = -\frac{f''(\xi)}{2 f'(\xi)}$, the order of convergence of our proposed family can be improved; however, these exact values cannot be used because $f'(\xi)$ and $f''(\xi)$ are not available in practice. Instead, we can use approximations calculated from already available information [21]. Hence, the main idea in constructing the methods with memory consists of calculating the parameters $\gamma = \gamma_n$ and $\lambda = \lambda_n$ as the iteration proceeds by the formulae
$$\gamma_n = -\frac{1}{f'(\xi)} \quad \text{and} \quad \lambda_n = -c_2 = -\frac{f''(\xi)}{2 f'(\xi)}$$
for $n = 1, 2, \ldots$ Further, it is assumed that the initial estimates $\gamma_0$ and $\lambda_0$ are chosen before starting the iterations. Thus, we estimate $\gamma_n$ and $\lambda_n$ by
$$\gamma_n = -\frac{1}{N_3'(x_n)} \quad \text{and} \quad \lambda_n = -\frac{N_4''(w_n)}{2 N_4'(w_n)}, \qquad (22)$$
where $N_3(k) = N_3(k; x_n, x_{n-1}, y_{n-1}, w_{n-1})$ and $N_4(k) = N_4(k; w_n, x_n, w_{n-1}, y_{n-1}, x_{n-1})$ are Newton's interpolating polynomials of the third and fourth degree, respectively, set through the best available nodal points, $(x_n, x_{n-1}, y_{n-1}, w_{n-1})$ for $N_3$ and $(w_n, x_n, w_{n-1}, y_{n-1}, x_{n-1})$ for $N_4$.
Thus, by replacing γ by γ n and λ by λ n in the method given by Equation (16), we obtain a new family with memory as follows:
$$\begin{cases} \gamma_0, \lambda_0, x_0 \ \text{given}, \quad w_0 = x_0 + \gamma_0 f(x_0), \\ \gamma_n = -\dfrac{1}{N_3'(x_n)}, \quad w_n = x_n + \gamma_n f(x_n), \quad \lambda_n = -\dfrac{N_4''(w_n)}{2 N_4'(w_n)}, \quad n = 1, 2, \ldots, \\ y_n = x_n - \dfrac{f(x_n)}{f[x_n, w_n] + \lambda_n f(w_n)}, \\ x_{n+1} = y_n - \dfrac{f(y_n)\left(f[x_n, w_n] + \lambda_n f(w_n)\right)}{\left(f[x_n, y_n] + \lambda_n f(w_n)\right) f[y_n, w_n]}. \end{cases} \qquad (23)$$
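A sketch of the with-memory variant (23) in Python follows; NumPy's polyfit/polyder are used here as a convenient stand-in for the Newton interpolating polynomials $N_3$ and $N_4$ (the interpolating polynomial is the same whichever basis is used to compute it). In IEEE double precision the seventh order is visible only for the first couple of iterations; the experiments in Section 4 use multiple-precision arithmetic instead:

    import numpy as np

    def pmm(f, x0, gamma0=0.1, lam0=0.1, tol=1e-13, max_iter=20):
        """With-memory variant (23): gamma_n, lambda_n re-estimated each step."""
        gamma, lam = gamma0, lam0
        x = x0
        x_prev = y_prev = w_prev = None
        for _ in range(max_iter):
            if x_prev is not None:
                # gamma_n = -1 / N3'(x_n), N3 through x_n, x_{n-1}, y_{n-1}, w_{n-1}
                n3 = [x, x_prev, y_prev, w_prev]
                p3 = np.polyfit(n3, [f(t) for t in n3], 3)
                gamma = -1.0 / np.polyval(np.polyder(p3), x)
                # lambda_n = -N4''(w_n) / (2 N4'(w_n)), N4 through w_n, x_n, w_{n-1}, y_{n-1}, x_{n-1}
                w_node = x + gamma * f(x)
                n4 = [w_node, x, w_prev, y_prev, x_prev]
                p4 = np.polyfit(n4, [f(t) for t in n4], 4)
                lam = -np.polyval(np.polyder(p4, 2), w_node) / (2.0 * np.polyval(np.polyder(p4), w_node))
            fx = f(x)
            w = x + gamma * fx
            fw = f(w)
            fxw = (fw - fx) / (w - x)
            y = x - fx / (fxw + lam * fw)
            fy = f(y)
            fxy = (fy - fx) / (y - x)
            fyw = (fw - fy) / (w - y)
            x_next = y - fy * (fxw + lam * fw) / ((fxy + lam * fw) * fyw)
            if abs(x_next - x) < tol:
                return x_next
            x_prev, y_prev, w_prev, x = x, y, w, x_next
        return x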
Next, we establish the convergence results for our proposed family with memory given by Equation (23).
Theorem 3.
Suppose that $f : D \subseteq \mathbb{R} \to \mathbb{R}$ is a real function, suitably differentiable in a domain D. If $\xi \in D$ is a simple root of $f(x) = 0$ and the initial guess $x_0$ is sufficiently close to ξ, then the iterative method given by Equation (23) converges to ξ with a convergence order of at least seven.
Proof. 
Let $\{x_n\}$ be a sequence of approximations generated by an iterative method (IM). If this sequence converges to the zero ξ of f with R-order r of (IM), then we can write
$$e_{n+1} \sim D_{n,r}\, e_n^r, \qquad e_n = x_n - \xi, \qquad (24)$$
where $D_{n,r}$ tends to the asymptotic error constant $D_r$ of (IM) as $n \to \infty$. Thus,
$$e_{n+1} \sim D_{n,r}\left(D_{n-1,r}\, e_{n-1}^r\right)^r = D_{n,r} D_{n-1,r}^r\, e_{n-1}^{r^2}. \qquad (25)$$
Let the iterative sequences $\{w_n\}$ and $\{y_n\}$ have R-orders $r_1$ and $r_2$, respectively. Therefore, we obtain
$$e_{n,w} = w_n - \xi \sim D_{n,r_1}\, e_n^{r_1} \sim D_{n,r_1}\left(D_{n-1,r}\, e_{n-1}^r\right)^{r_1} = D_{n,r_1} D_{n-1,r}^{r_1}\, e_{n-1}^{r r_1} \qquad (26)$$
and
$$e_{n,y} = y_n - \xi \sim D_{n,r_2}\, e_n^{r_2} \sim D_{n,r_2}\left(D_{n-1,r}\, e_{n-1}^r\right)^{r_2} = D_{n,r_2} D_{n-1,r}^{r_2}\, e_{n-1}^{r r_2}. \qquad (27)$$
Using (26), (27) and a lemma stated in [13], we obtain
$$1 + \gamma_n f'(\xi) \sim \psi_1\, e_{n-1,w}\, e_{n-1,y}\, e_{n-1} = \psi_1 D_{n-1,r_1} D_{n-1,r_2}\, e_{n-1}^{r_1 + r_2 + 1}, \qquad \lambda_n + c_2 \sim \psi_2\, e_{n-1,w}\, e_{n-1,y}\, e_{n-1} = \psi_2 D_{n-1,r_1} D_{n-1,r_2}\, e_{n-1}^{r_1 + r_2 + 1}. \qquad (28)$$
In view of our proposed family of methods without memory given by Equation (16), we have the following error relations,
$$e_{n,w} = (1 + \gamma f'(\xi))\, e_n + O(e_n^2), \qquad (29)$$
$$e_{n,y} = (1 + \gamma f'(\xi))(\lambda + c_2)\, e_n^2 + O(e_n^3), \qquad (30)$$
$$e_{n+1} = \phi_1 (1 + \gamma f'(\xi))^2 (\lambda + c_2)\, e_n^4 + O(e_n^5), \qquad (31)$$
where $\phi_1 = (2 + f'(\xi)\gamma)\lambda c_2 + 2 c_2^2 - c_3$.
According to the error relations given by Equations (29)–(31) with self-accelerating parameters, γ = γ n and λ = λ n , we can write the corresponding error relations for the methods given by Equation (23) with memory as follows:
$$e_{n,w} \sim (1 + \gamma_n f'(\xi))\, e_n, \qquad (32)$$
$$e_{n,y} \sim (1 + \gamma_n f'(\xi))(\lambda_n + c_2)\, e_n^2, \qquad (33)$$
$$e_{n+1} \sim \phi_2 (1 + \gamma_n f'(\xi))^2 (\lambda_n + c_2)\, e_n^4, \qquad (34)$$
where $\phi_2 = (2 + f'(\xi)\gamma_n)\lambda_n c_2 + 2 c_2^2 - c_3$ depends on the iteration index, since $\gamma_n$ and $\lambda_n$ are re-calculated in each step. Now, using Equations (28) and (32)–(34), we obtain the following relations:
$$e_{n,w} \sim (1 + \gamma_n f'(\xi))\, e_n \sim \psi_1 D_{n-1,r_1} D_{n-1,r_2} D_{n-1,r}\, e_{n-1}^{r + r_1 + r_2 + 1}, \qquad (35)$$
$$e_{n,y} \sim (1 + \gamma_n f'(\xi))(\lambda_n + c_2)\, e_n^2 \sim \psi_1 \psi_2 D_{n-1,r_1}^2 D_{n-1,r_2}^2 D_{n-1,r}^2\, e_{n-1}^{2r + 2r_1 + 2r_2 + 2}, \qquad (36)$$
$$e_{n+1} \sim \phi_2 (1 + \gamma_n f'(\xi))^2 (\lambda_n + c_2)\, e_n^4 \sim \phi_2 \psi_1^2 \psi_2 D_{n-1,r_1}^3 D_{n-1,r_2}^3 D_{n-1,r}^4\, e_{n-1}^{4r + 3r_1 + 3r_2 + 3}. \qquad (37)$$
Now, comparing the exponents of $e_{n-1}$ on the right-hand sides of the pairs given by Equations (26) with (35), (27) with (36) and (25) with (37), respectively, we obtain the following system of equations:
$$r r_1 - r - r_1 - r_2 = 1, \qquad r r_2 - 2r - 2r_1 - 2r_2 = 2, \qquad r^2 - 4r - 3r_1 - 3r_2 = 3. \qquad (38)$$
Solving this system of equations, we obtain the non-trivial solution $r_1 = 2$, $r_2 = 4$ and $r = 7$. Hence, we can conclude that the lower bound of the R-order of our proposed family with memory given by Equation (23) is seven. This completes our proof. □
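The elimination above can be double-checked symbolically; a short SymPy snippet (our own verification of the solution of the system (38), not part of the original derivation):

    import sympy as sp

    r, r1, r2 = sp.symbols("r r1 r2", positive=True)
    system = [r * r1 - r - r1 - r2 - 1,
              r * r2 - 2 * r - 2 * r1 - 2 * r2 - 2,
              r**2 - 4 * r - 3 * r1 - 3 * r2 - 3]
    print(sp.solve(system, [r, r1, r2], dict=True))   # [{r: 7, r1: 2, r2: 4}]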

4. Numerical Results

In this section, the numerical results of our proposed scheme are examined. Furthermore, we demonstrate the corresponding results after comparison with some existing schemes, both with and without memory. All calculations have been carried out using Mathematica 11.1 in a multiple-precision arithmetic environment on a machine with an Intel(R) Core(TM) i5-1035G1 CPU @ 1.00 GHz (64-bit operating system, Windows 11). The initial values of γ (or $\gamma_0$) and λ (or $\lambda_0$) must be selected prior to performing the iterations, and a suitable $x_0$ must be given.
The functions used for our computations are given in Table 1.
To check the theoretical order of convergence, the computational order of convergence (COC) [22], $\rho_c$, is calculated using the following formula:
$$\rho_c = \frac{\log\left(\left|f(x_k)/f(x_{k-1})\right|\right)}{\log\left(\left|f(x_{k-1})/f(x_{k-2})\right|\right)}, \quad k = 2, 3, \ldots,$$
considering the last three approximations in the iterative procedure. The errors of the approximations to the respective zeros of the test functions, $|x_n - \xi|$, and the COC are displayed in Table 2 and Table 3.
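In code, the COC is a two-line computation; a minimal sketch (the list of iterates is a placeholder for whatever one of the methods above produces):

    import math

    def coc(f, xs):
        """Computational order of convergence from the last three iterates xs[-3:]."""
        r0, r1, r2 = (abs(f(t)) for t in xs[-3:])
        return math.log(r2 / r1) / math.log(r1 / r0)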
We consider the following existing methods for the comparisons:
Soleymani et al. method ( S M ) without memory [23]:
$$\begin{cases} y_n = x_n - \dfrac{f(x_n)}{f[x_n, w_n]}, \quad w_n = x_n + \gamma f(x_n), \quad \gamma \in \mathbb{R} \setminus \{0\}, \\[2pt] x_{n+1} = x_n - \dfrac{f(x_n) + f(y_n)}{f[x_n, w_n]} - \dfrac{2 f(x_n) + \alpha f(y_n)}{f[x_n, w_n]} \left(\dfrac{f(y_n)}{f(x_n)}\right)^2 \left(1 - \dfrac{\gamma f[x_n, w_n]}{2 + 2\gamma f[x_n, w_n]}\right), \quad \alpha \in \mathbb{R}, \quad n = 0, 1, 2, \ldots \end{cases} \qquad (39)$$
Cordero et al. method ( A M 1 ) without memory [20]:
$$y_n = x_n - \frac{f(x_n)}{f[x_n, w_n]}, \quad w_n = x_n + f(x_n), \qquad x_{n+1} = y_n - \frac{f(y_n)\, f[x_n, w_n]}{f[x_n, y_n]\, f[y_n, w_n]}, \quad n = 0, 1, 2, \ldots \qquad (40)$$
Chun method ( C M ) without memory [24]:
$$y_n = x_n - \frac{f(x_n)}{f'(x_n)}, \qquad x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}\left(1 + u + 2u^2\right), \quad u = \frac{f(y_n)}{f(x_n)}, \quad n = 0, 1, 2, \ldots \qquad (41)$$
Cordero et al. method ( A M 2 ) with memory [25]:
$$\begin{cases} \gamma_0, \lambda_0, x_0 \ \text{given}, \quad w_0 = x_0 + \gamma_0 f(x_0), \\ \gamma_n = -\dfrac{1}{N_3'(x_n)}, \quad w_n = x_n + \gamma_n f(x_n), \quad \lambda_n = -\dfrac{N_4''(w_n)}{2 N_4'(w_n)}, \quad n = 1, 2, \ldots, \\ y_n = x_n - \dfrac{f(x_n)}{f[x_n, w_n] + \lambda_n f(w_n)}, \\ x_{n+1} = y_n - \dfrac{f(y_n)}{f[x_n, y_n] + (y_n - x_n)\, f[x_n, w_n, y_n]}, \end{cases} \qquad (42)$$
where N 3 and N 4 are as defined in Section 3.
Džunić method (DM1 and DM2) with memory [26]:
$$\begin{cases} \gamma_0, \lambda_0, x_0 \ \text{given}, \quad w_0 = x_0 + \gamma_0 f(x_0), \\ \gamma_n = -\dfrac{1}{N_3'(x_n)}, \quad w_n = x_n + \gamma_n f(x_n), \quad \lambda_n = -\dfrac{N_4''(w_n)}{2 N_4'(w_n)}, \quad n = 1, 2, \ldots, \\ y_n = x_n - \dfrac{f(x_n)}{f[x_n, w_n] + \lambda_n f(w_n)}, \\ x_{n+1} = y_n - \dfrac{f(y_n)\, g(t_n)}{f[y_n, w_n] + \lambda_n f(w_n)}, \quad t_n = \dfrac{f(y_n)}{f(x_n)}, \end{cases} \qquad (43)$$
where N 3 and N 4 are as defined in Section 3.
Furthermore, we consider some real-life problems, which are as follows:
Example 1.
Fractional conversion in a chemical reactor [27],
$$f_6(x) = \frac{x}{1 - x} - 5 \log\left[\frac{0.4 (1 - x)}{0.4 - 0.5 x}\right] + 4.45977 = 0. \qquad (44)$$
Here, x denotes the fractional conversion of quantities in a chemical reactor. If x is less than zero or greater than one, the fractional conversion has no physical meaning; hence, x is taken to be bounded in the region $0 \leq x \leq 1$. Moreover, the desired root is $\xi \approx 0.7573962462537538$.
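The root quoted above can be cross-checked independently; a quick SciPy verification (our own check, not the paper's method — the bracket [0.7, 0.79] is an arbitrary choice inside the physical region):

    from math import log
    from scipy.optimize import brentq

    def f6(x):
        # fractional conversion in a chemical reactor, Shacham [27]
        return x / (1 - x) - 5 * log(0.4 * (1 - x) / (0.4 - 0.5 * x)) + 4.45977

    print(brentq(f6, 0.7, 0.79))   # approximately 0.7573962462537538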
Example 2.
The path traversed by an electron in the air gap between two parallel plates considering the multi-factor effect is given by
$$u(t) = u_0 + \left(\nu_0 + \frac{c_0 E_0}{m \omega} \sin(\omega t_0 + \beta)\right)(t - t_0) + \frac{c_0 E_0}{m \omega^2}\left(\cos(\omega t + \beta) + \sin(\omega t + \beta)\right), \qquad (45)$$
where $u_0$ and $\nu_0$ are the position and velocity of the electron at time $t_0$, m and $c_0$ are the mass and charge of the electron at rest, and $E_0 \sin(\omega t + \beta)$ is the RF electric field between the plates. For particular choices of the parameters, Equation (45) can be simplified to
$$f_7(x) = x - \frac{1}{2} \cos x + \frac{\pi}{4} = 0. \qquad (46)$$
The desired root of Equation (46) is $\xi \approx -0.3090932715417949$.
We also implemented our proposed schemes given by Equations (16) and (23) on the above-mentioned problems; Table 4 and Table 5 demonstrate the corresponding results. Further, Table 2 reports the COC for our proposed method without memory (PM) given by Equation (16), the method given by Equation (39) denoted by SM, the method given by Equation (40) denoted by AM1, and the method given by Equation (41) denoted by CM. Table 3 reports the COC for our proposed method with memory (PMM) given by Equation (23), the method given by Equation (42) denoted by AM2, and the method given by Equation (43) with $g(t) = 1 + t$ denoted by DM1 and with $g(t) = 1/(1 - t)$ denoted by DM2.
It can be seen from Table 2 and Table 3 that, for the function $f_1$, AM1 fails to provide a solution and DM1 requires more than three iterations to converge to the root. Furthermore, PMM converges to the desired root with errors of approximation much lower than those of AM2 and DM2. For the function $f_2$, SM, AM1 and DM1 fail to provide a solution, and CM and DM2 do not converge to the desired solution within three iterations. SM has a somewhat complex structure and, as a consequence, takes more time than our method PM in most cases to converge to the root. Furthermore, AM1 and DM2 converge to the root taking more time than PM and PMM, respectively. CM has the drawback of requiring the derivative, so it will not work at points where the derivative is zero or close to zero.
Furthermore, for the functions $f_3$, $f_4$ and $f_5$, the proposed methods PM and PMM converge to the required root with the smallest errors among the compared methods.
Hence, we can conclude that our methods work on several functions to obtain roots, whereas the existing methods have some limitations.
Remark 1.
The proposed schemes given by Equations (16) and (23) have been compared to some already existing methods, and it can be seen from the computational results that our proposed schemes give results, in terms of COC and errors, in many cases where the existing methods fail, as depicted in Table 2, Table 3, Table 4 and Table 5. Our methods display a noticeable decrease in approximation errors, as shown in the above-mentioned tables.
Remark 2.
From Table 4 and Table 5, one can observe that for the function $f_6$, the existing method AM1 fails to converge. In addition, for the function $f_7$, an obvious decrease in the order of convergence of the existing methods is noticeable.

5. Basins of Attraction

The basin of attraction of a root $t^*$ of $u(t) = 0$ is the set of all initial points $t_0$ in the complex plane that converge to $t^*$ under the given iterative scheme. Our objective is to use basins of attraction to compare several root-finding iterative methods in the complex plane in terms of convergence and stability.
On this front, we take a 512 × 512 grid of the rectangle S = [ 2 , 2 ] × [ 2 , 2 ] C . A colour is assigned to each point t 0 S on the basis of the convergence of the corresponding method starting from t 0 to the simple root and if the method diverges, a black colour is assigned to that point. Thus, distinct colours are assigned to the distinct roots of the corresponding problem. It was decided that an initial point t 0 converges to a root t * when t * t 0 < 10 4 . Then, point t 0 is said to belong to the basins of attraction of t * . Likewise, the method beginning from the initial point t 0 is said to diverge if no root is located in a maximum of 25 iterations. We have used MATLAB R2022a software [28] to draw the presented basins of attraction.
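The figures below were produced with MATLAB; an equivalent minimal sketch in Python (our own tooling choice, shown for Newton's method on $p_1(z) = z^2 - 1$ for brevity — substituting any scheme compared in this section for the update line yields its basins; for simplicity the convergence test here is applied once after the iteration cap rather than per iteration):

    import numpy as np
    import matplotlib.pyplot as plt

    roots = np.array([1.0, -1.0])                      # roots of p1(z) = z^2 - 1
    re, im = np.meshgrid(np.linspace(-2, 2, 512), np.linspace(-2, 2, 512))
    z = re + 1j * im                                   # 512 x 512 grid on S = [-2,2] x [-2,2]
    label = np.full(z.shape, -1)                       # -1 = diverging, painted black

    with np.errstate(divide="ignore", invalid="ignore"):
        for _ in range(25):                            # a maximum of 25 iterations
            z = z - (z**2 - 1) / (2 * z)               # Newton step for p1
    for k, r in enumerate(roots):
        label[np.abs(z - r) < 1e-4] = k                # converged: |z - root| < 10^-4

    plt.imshow(label, extent=[-2, 2, -2, 2], origin="lower")
    plt.show()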
Furthermore, Table 6 lists the average number of iterations (denoted by Avg_Iter) and the percentage of non-converging points (denoted by $P_{NC}$) for the methods used to generate the basins of attraction.
To carry out the desired comparisons, we considered the test problems given below:
Problem 1.
The first function considered is $p_1(z) = z^2 - 1$. The roots of this function are 1 and −1. The basins corresponding to our proposed method and the existing methods are shown in Figure 1 and Figure 2. From Table 6, it can be seen that the proposed methods PM and PMM converge to the root in fewer iterations. Furthermore, from the figures, it is observed that PMM converges to the root with no diverging points, whereas the existing methods have some points painted black. SM, in particular, has very small basins.
Problem 2.
The second function taken is $p_2(z) = z^3 - 1$, with roots 1, $-0.5 + 0.866i$ and $-0.5 - 0.866i$. Figure 3 and Figure 4 show the basins for $p_2(z)$, in which it can be seen that SM, AM1 and DM1 have wider regions of divergence. Moreover, the average number of iterations taken by the proposed methods is lower in each case compared to the existing methods.
Problem 3.
The third function considered is $p_3(z) = z^4 - 1$, with roots ±1 and ±i. Figure 5 and Figure 6 show that SM, CM and DM1 have smaller basins. Although PM and PMM have some diverging points, they converge in fewer iterations than the existing methods.
Therefore, from Figure 1, Figure 2, Figure 3, Figure 4, Figure 5 and Figure 6, it can be observed that PM has larger basins in comparison to SM and AM1 in all cases, and the basins for DM1 are very small in comparison to PMM in all cases. In addition, from Table 6, we observe that the average number of iterations taken by the methods SM, AM1 and CM is higher than that of PM, and the iterations required by DM1 and DM2 are more than those required by PMM.
Remark 3.
One can see from Figure 1, Figure 2, Figure 3, Figure 4, Figure 5 and Figure 6 and Table 6 that our proposed methods have larger basins of attraction in comparison to the existing ones. In addition, the existing methods show a marginal increase in the average number of iterations per point. Consequently, with our proposed methods, the chances of non-convergence to the root are lower than with the existing methods.

6. Conclusions

We have proposed a new fourth-order optimal method without memory. In order to increase the order of convergence, we have extended the proposed method without memory to a method with memory, without the addition of any new functional evaluations, by taking into consideration two self-accelerating parameters. Consequently, the order of convergence increased from four to seven. Computational results demonstrate that the proposed methods converge to the root at a higher rate than other methods of the same order at the considered points. In addition, our proposed schemes give results, in terms of COC and errors, in many of the cases where the existing methods fail. Moreover, we have also presented the basins of attraction for the proposed methods as well as some existing methods, which assert that the chances of non-convergence to the root are much lower for our proposed methods when compared to the existing ones.

Author Contributions

M.K.: Conceptualization; methodology; validation; H.S.: writing—original draft preparation; M.K. and R.B.: writing—review and editing, supervision. All authors have read and agreed to the published version of the manuscript.

Funding

This project was funded by the Deanship of Scientific Research (DSR) at King Abdulaziz University, Jeddah, Saudi Arabia, under grant no. (KEP-MSc-58-130-43). The authors, therefore, acknowledge with thanks DSR for technical and financial support.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The authors would like to sincerely thank the reviewers for their valuable suggestions, which significantly improved the readability of the paper. The second author gratefully acknowledges technical support from the Seed Money Project (TU/DORSP/57/7290) to support this research work of TIET, Punjab.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Argyros, I.K.; Kansal, M.; Kanwar, V.; Bajaj, S. Higher-order derivative-free families of Chebyshev–Halley type methods with or without memory for solving nonlinear equations. Appl. Math. Comput. 2017, 315, 224–245.
  2. Jain, P.; Chand, P.B. Derivative free iterative methods with memory having R-order of convergence. Int. J. Nonlinear Sci. Numer. Simul. 2020, 21, 641–648.
  3. Traub, J.F. Iterative Methods for the Solution of Equations; Prentice-Hall: Englewood Cliffs, NJ, USA, 1964.
  4. Petković, M.S.; Neta, B.; Petković, L.D.; Džunić, J. Multipoint methods for solving nonlinear equations: A survey. Appl. Math. Comput. 2014, 226, 635–660.
  5. Kung, H.T.; Traub, J.F. Optimal order of one-point and multi-point iteration. J. Assoc. Comput. Mach. 1974, 21, 643–651.
  6. Kou, J.; Li, Y.; Wang, X. A composite fourth-order iterative method for solving non-linear equations. Appl. Math. Comput. 2007, 184, 471–475.
  7. Kansal, M.; Kanwar, V.; Bhatia, S. New modifications of Hansen–Patrick's family with optimal fourth and eighth orders of convergence. Appl. Math. Comput. 2015, 269, 507–519.
  8. Soleymani, F. Novel computational iterative methods with optimal order for nonlinear equations. Adv. Numer. Anal. 2011, 2011, 270903.
  9. Chun, C.; Lee, M.Y.; Neta, B.; Džunić, J. On optimal fourth-order iterative methods free from second derivative and their dynamics. Appl. Math. Comput. 2012, 218, 6427–6438.
  10. Zheng, Q.; Li, J.; Huang, F. An optimal Steffensen-type family for solving nonlinear equations. Appl. Math. Comput. 2011, 217, 9592–9597.
  11. Chicharro, F.I.; Cordero, A.; Torregrosa, J.R.; Vassileva, M.P. King-type derivative-free iterative families: Real and memory dynamics. Complexity 2017, 2017, 2713145.
  12. Sharifi, S.; Siegmund, S.; Salimi, M. Solving nonlinear equations by a derivative-free form of the King's family with memory. Calcolo 2016, 53, 201–215.
  13. Kansal, M.; Kanwar, V.; Bhatia, S. Efficient derivative-free variants of Hansen–Patrick's family with memory for solving nonlinear equations. Numer. Algorithms 2016, 73, 1017–1036.
  14. Jagan, K.; Sivasankaran, S. Soret & Dufour and triple stratification effect on MHD flow with velocity slip towards a stretching cylinder. Math. Comput. Appl. 2022, 27, 25.
  15. Qasem, S.A.; Sivasankaran, S.; Siri, Z.; Othman, W.A. Effect of thermal radiation on natural convection of a nanofluid in a square cavity with a solid body. Therm. Sci. 2021, 25, 1949–1961.
  16. Asmadi, M.S.; Kasmani, R.; Siri, Z.; Sivasankaran, S. Upper-convected Maxwell fluid analysis over a horizontal wedge using Cattaneo–Christov heat flux model. Therm. Sci. 2021, 25, 1013–1021.
  17. Kasmani, R.M.; Sivasankaran, S.; Bhuvaneswari, M.; Siri, Z. Effect of chemical reaction on convective heat transfer of boundary layer flow in nanofluid over a wedge with heat generation/absorption and suction. J. Appl. Fluid Mech. 2015, 9, 379–388.
  18. Ardelean, G. A comparison between iterative methods by using the basins of attraction. Appl. Math. Comput. 2011, 218, 88–95.
  19. Ortega, J.M.; Rheinboldt, W.C. Iterative Solution of Nonlinear Equations in Several Variables; Academic Press: New York, NY, USA, 1970.
  20. Cordero, A.; Hueso, J.L.; Martínez, E.; Torregrosa, J.R. A new technique to obtain derivative-free optimal iterative methods for solving nonlinear equations. J. Comput. Appl. Math. 2013, 252, 95–102.
  21. Soleymani, F.; Lotfi, T.; Tavakoli, E.; Haghani, F.K. Several iterative methods with memory using self-accelerators. Appl. Math. Comput. 2015, 254, 452–458.
  22. Jay, L.O. A note on Q-order of convergence. BIT Numer. Math. 2001, 41, 422–429.
  23. Soleymani, F.; Sharma, R.; Li, X.; Tohidi, E. An optimized derivative-free form of the Potra–Pták method. Math. Comput. Modell. 2012, 56, 97–104.
  24. Chun, C. Some variants of King's fourth-order family of methods for nonlinear equations. Appl. Math. Comput. 2007, 190, 57–62.
  25. Cordero, A.; Lotfi, T.; Bakhtiari, P.; Torregrosa, J.R. An efficient two-parametric family with memory for nonlinear equations. Numer. Algorithms 2015, 68, 323–335.
  26. Džunić, J. On efficient two-parameter methods for solving nonlinear equations. Numer. Algorithms 2013, 63, 549–569.
  27. Shacham, M. Numerical solution of constrained nonlinear algebraic equations. Int. J. Numer. Methods Eng. 1986, 23, 1455–1481.
  28. Zachary, J.L. Introduction to Scientific Programming: Computational Problem Solving Using Maple and C; Springer: New York, NY, USA, 2012.
Figure 1. Basins of attraction for PM, SM, AM1, and CM, respectively, for $p_1(z)$.
Figure 2. Basins of attraction for PMM, AM2, DM1, and DM2, respectively, for $p_1(z)$.
Figure 3. Basins of attraction for PM, SM, AM1, and CM, respectively, for $p_2(z)$.
Figure 4. Basins of attraction for PMM, AM2, DM1, and DM2, respectively, for $p_2(z)$.
Figure 5. Basins of attraction for PM, SM, AM1, and CM, respectively, for $p_3(z)$.
Figure 6. Basins of attraction for PMM, AM2, DM1, and DM2, respectively, for $p_3(z)$.
Table 1. Test functions along with their roots and initial guesses taken.

Test function | Real root | Initial guess taken
$f_1(x) = (x - 2)(x^{10} + x + 1) e^{-x-1} = 0$ | 2 | 1.925
$f_2(x) = e^{x^2 + 7x - 30} - 1 = 0$ | 3 | 2.90
$f_3(x) = \sin(\pi x) e^{x^2 + x \cos x - 1} + x \log(x \sin x + 1) = 0$ | 0 | 0.05
$f_4(x) = e^{x^3 - x} - \cos(x^2 - 1) + x^3 + 1 = 0$ | −1 | −1.10
$f_5(x) = e^{x^2 - 3x} \sin x + \log(x^2 + 1) = 0$ | 0 | 0.05
Table 2. Comparison of the different methods without memory.

Without memory methods | |x_1 − ξ| | |x_2 − ξ| | |x_3 − ξ| | ρ_c | CPU time

f_1(x):
PM (γ = 0.1, λ = 0.1) | 1.1026 × 10^-2 | 3.4683 × 10^-5 | 2.3844 × 10^-15 | 4.0308 | 0.390
SM (α = 10, γ = 0.01) | 4.5722 × 10^-2 | 1.4814 × 10^-3 | 1.8466 × 10^-10 | 4.8888 | 0.343
AM1 | F | F | F | ##
CM | 6.6406 × 10^-2 | 1.8454 × 10^-3 | 3.2406 × 10^-9 | 3.4574 | 0.329

f_2(x):
PM (γ = 0.1, λ = 0.1) | 5.3295 × 10^-3 | 3.6701 × 10^-8 | 6.3025 × 10^-29 | 4.0108 | 0.265
SM (α = 10, γ = 0.01) | F | F | F | ##
AM1 | F | F | F | ##
CM | NC | NC | NC | #

f_3(x):
PM (γ = 0.1, λ = 0.1) | 7.6728 × 10^-6 | 4.3783 × 10^-21 | 4.6420 × 10^-82 | 4.0000 | 0.671
SM (α = 10, γ = 0.01) | 2.2439 × 10^-5 | 1.4028 × 10^-18 | 2.1427 × 10^-71 | 4.0000 | 0.875
AM1 | 3.8672 × 10^-5 | 1.3302 × 10^-17 | 1.8622 × 10^-67 | 4.0000 | 0.812
CM | 2.2767 × 10^-5 | 1.1497 × 10^-18 | 7.4781 × 10^-72 | 4.0000 | 0.624

f_4(x):
PM (γ = 0.1, λ = 0.1) | 3.6861 × 10^-6 | 1.6522 × 10^-23 | 6.6701 × 10^-93 | 4.0000 | 0.312
SM (α = 10, γ = 0.01) | 1.4106 × 10^-5 | 2.4856 × 10^-21 | 2.3942 × 10^-84 | 4.0000 | 0.453
AM1 | 9.0450 × 10^-5 | 1.2109 × 10^-15 | 3.8809 × 10^-59 | 4.0001 | 0.422
CM | 2.2615 × 10^-5 | 1.8131 × 10^-19 | 7.4932 × 10^-76 | 4.0000 | 0.281

f_5(x):
PM (γ = 0.1, λ = 0.1) | 1.0074 × 10^-5 | 3.6243 × 10^-20 | 6.0724 × 10^-78 | 4.0000 | 0.390
SM (α = 10, γ = 0.01) | 3.8032 × 10^-4 | 1.0334 × 10^-12 | 5.6176 × 10^-47 | 4.0003 | 0.594
CM | 1.6301 × 10^-4 | 2.0715 × 10^-14 | 5.4018 × 10^-54 | 3.9999 | 0.359

F—method fails; ##—COC not required in case of failure; NC—not converging to root after three iterations; #—COC not given in case of non-convergence after three iterations.
Table 3. Comparison of the different methods with memory.

With memory methods | |x_1 − ξ| | |x_2 − ξ| | |x_3 − ξ| | ρ_c | CPU time

f_1(x):
PMM (γ_0 = 0.1, λ_0 = 0.1) | 1.1025 × 10^-2 | 2.0090 × 10^-11 | 1.7059 × 10^-72 | 6.9728 | 0.984
AM2 (γ_0 = λ_0 = 0.1) | 3.7765 × 10^-1 | 1.8449 × 10^-2 | 6.9515 × 10^-12 | 4.8678 | 1.031
DM1 (γ_0 = λ_0 = 0.1) | NC | NC | NC | #
DM2 (γ_0 = λ_0 = 0.1) | 9.4868 × 10^-1 | 7.6918 × 10^-2 | 3.7808 × 10^-6 | 1.9871 | 0.969

f_2(x):
PMM (γ_0 = 0.1, λ_0 = 0.1) | 5.3295 × 10^-3 | 2.2157 × 10^-12 | 6.5108 × 10^-78 | 6.9741 | 0.844
AM2 (γ_0 = λ_0 = 0.1) | 5.1899 × 10^-2 | 3.2288 × 10^-6 | 4.5631 × 10^-35 | 6.6121 | 0.812
DM1 (γ_0 = λ_0 = 0.1) | F | F | F | ##
DM2 (γ_0 = λ_0 = 0.1) | NC | NC | NC | #

f_3(x):
PMM (γ_0 = 0.1, λ_0 = 0.1) | 7.6728 × 10^-6 | 4.8557 × 10^-38 | 7.5120 × 10^-261 | 6.9199 | 3.047
AM2 (γ_0 = λ_0 = 0.1) | 4.2993 × 10^-6 | 1.1962 × 10^-37 | 2.2842 × 10^-258 | 6.9946 | 3.047
DM1 (γ_0 = λ_0 = 0.1) | 2.1772 × 10^-5 | 7.2683 × 10^-34 | 6.9858 × 10^-232 | 6.9537 | 3.141
DM2 (γ_0 = λ_0 = 0.1) | 1.2537 × 10^-5 | 3.6538 × 10^-36 | 5.6673 × 10^-248 | 6.9365 | 3.266

f_4(x):
PMM (γ_0 = 0.1, λ_0 = 0.1) | 3.6861 × 10^-6 | 2.9711 × 10^-39 | 2.0613 × 10^-271 | 7.0152 | 1.360
AM2 (γ_0 = λ_0 = 0.1) | 1.2532 × 10^-5 | 2.6367 × 10^-35 | 6.0850 × 10^-244 | 7.0303 | 1.328
DM1 (γ_0 = λ_0 = 0.1) | 1.2723 × 10^-5 | 2.8862 × 10^-35 | 1.1458 × 10^-243 | 7.0301 | 1.359
DM2 (γ_0 = λ_0 = 0.1) | 1.2656 × 10^-5 | 2.7964 × 10^-35 | 9.1836 × 10^-244 | 7.0301 | 1.358

f_5(x):
PMM (γ_0 = 0.1, λ_0 = 0.1) | 1.0074 × 10^-5 | 6.5505 × 10^-34 | 9.9064 × 10^-231 | 6.9827 | 1.625
AM2 (γ_0 = λ_0 = 0.1) | 2.5921 × 10^-5 | 6.2077 × 10^-31 | 4.4988 × 10^-211 | 7.0310 | 1.672
DM1 (γ_0 = λ_0 = 0.1) | 6.0285 × 10^-5 | 6.9376 × 10^-28 | 9.7865 × 10^-190 | 7.0557 | 1.891
DM2 (γ_0 = λ_0 = 0.1) | 1.2734 × 10^-5 | 3.1600 × 10^-32 | 3.9836 × 10^-220 | 7.0625 | 1.812

F—method fails; ##—COC not required in case of failure; NC—not converging to root after three iterations; #—COC not given in case of non-convergence after three iterations.
Table 4. Comparison of the different methods without memory for real-life problems.

Without memory methods | |x_1 − ξ| | |x_2 − ξ| | |x_3 − ξ| | ρ_c | CPU time

f_6(x):
PM (γ = 0.1, λ = 0.1) | 7.5452 × 10^-3 | 1.0390 × 10^-3 | 3.8220 × 10^-7 | 3.7581 | 0.454
SM (α = 10, γ = 0.01) | 1.4049 × 10^-3 | 5.3743 × 10^-7 | 8.8482 × 10^-17 | 4.0239 | 0.390
AM1 | F | F | F | ##
CM | 1.0275 × 10^-3 | 1.7055 × 10^-8 | 8.8493 × 10^-17 | 3.9915 | 0.265

f_7(x):
PM (γ = 0.1, λ = 0.1) | 1.0994 × 10^-3 | 8.4592 × 10^-14 | 3.0463 × 10^-31 | 3.9999 | 0.281
SM (α = 10, γ = 0.01) | 8.6465 × 10^-4 | 6.5137 × 10^-14 | 3.0463 × 10^-31 | 4.0001 | 0.374
AM1 | 2.3818 × 10^-3 | 3.9429 × 10^-12 | 3.0463 × 10^-31 | 3.9998 | 0.405
CM | 1.5968 × 10^-3 | 6.6431 × 10^-13 | 3.0463 × 10^-31 | 3.9998 | 0.219

F—method fails; ##—COC not required in case of failure.
Table 5. Comparison of the different methods with memory for real-life problems.

With memory methods | |x_1 − ξ| | |x_2 − ξ| | |x_3 − ξ| | ρ_c | CPU time

f_6(x):
PMM (γ_0 = 0.1, λ_0 = 0.1) | 7.4286 × 10^-3 | 9.0440 × 10^-8 | 8.8493 × 10^-17 | 7.1919 | 1.641
AM2 (γ_0 = λ_0 = 0.1) | 3.4817 × 10^-4 | 1.7393 × 10^-13 | 8.8493 × 10^-17 | 7.7953 | 1.171
DM1 (γ_0 = λ_0 = 0.1) | 8.2698 × 10^-2 | 3.2902 × 10^-2 | 1.0096 × 10^-2 | 1.8843 | 1.468
DM2 (γ_0 = λ_0 = 0.1) | 4.4070 × 10^-2 | 2.4810 × 10^-2 | 4.9141 × 10^-3 | 1.0704 | 1.938

f_7(x):
PMM (γ_0 = 0.1, λ_0 = 0.1) | 1.0994 × 10^-3 | 5.2189 × 10^-26 | 3.0463 × 10^-31 | 6.9718 | 1.109
AM2 (γ_0 = λ_0 = 0.1) | 8.5295 × 10^-4 | 5.7122 × 10^-29 | 3.0463 × 10^-31 | 6.8573 | 1.219
DM1 (γ_0 = λ_0 = 0.1) | 2.4626 × 10^-3 | 1.8209 × 10^-23 | 3.0463 × 10^-31 | 6.9345 | 0.984
DM2 (γ_0 = λ_0 = 0.1) | 1.5623 × 10^-3 | 3.6273 × 10^-25 | 3.0463 × 10^-31 | 6.9245 | 1.078
Table 6. Comparison of the different methods without and with memory in terms of Avg_Iter and P_NC.

Without memory methods | Avg_Iter | P_NC | With memory methods | Avg_Iter | P_NC

p_1(z):
PM (γ = 0.1, λ = 0.1) | 3.0552 | 0.6718 | PMM (γ_0 = 0.1, λ_0 = 0.1) | 2.6643 | 0
SM (α = 10, γ = 0.01) | 4.1128 | 3.0064 | AM2 (γ_0 = λ_0 = 0.1) | 2.5278 | 0.0160
AM1 | 3.3635 | 0.0072 | DM1 (γ_0 = λ_0 = 0.1) | 4.3746 | 7.5332
CM | 3.8199 | 0.2117 | DM2 (γ_0 = λ_0 = 0.1) | 2.8281 | 0.0084

p_2(z):
PM (γ = 0.1, λ = 0.1) | 5.8428 | 10.6179 | PMM (γ_0 = 0.1, λ_0 = 0.1) | 4.8963 | 4.5556
SM (α = 10, γ = 0.01) | 9.4207 | 26.5533 | AM2 (γ_0 = λ_0 = 0.1) | 4.2219 | 1.9265
AM1 | 9.8161 | 11.2513 | DM1 (γ_0 = λ_0 = 0.1) | 10.5985 | 33.8319
CM | 6.3409 | 4.5195 | DM2 (γ_0 = λ_0 = 0.1) | 5.1956 | 0.7041

p_3(z):
PM (γ = 0.1, λ = 0.1) | 8.4306 | 21.6465 | PMM (γ_0 = 0.1, λ_0 = 0.1) | 5.9777 | 3.6148
SM (α = 10, γ = 0.01) | 12.8203 | 42.1045 | AM2 (γ_0 = λ_0 = 0.1) | 6.3765 | 2.2373
AM1 | 10.2165 | 6.6311 | DM1 (γ_0 = λ_0 = 0.1) | 17.1381 | 63.0899
CM | 9.5562 | 16.9537 | DM2 (γ_0 = λ_0 = 0.1) | 7.8478 | 3.5973