Article

Efficient Families of Multi-Point Iterative Methods and Their Self-Acceleration with Memory for Solving Nonlinear Equations

1 Department of Mathematics, National Institute of Technology Manipur, Imphal 795004, Manipur, India
2 Department of Physics and Chemistry, Technical University of Cluj-Napoca, 103-105 Muncii Blvd., 400641 Cluj-Napoca, Romania
* Author to whom correspondence should be addressed.
Symmetry 2023, 15(8), 1546; https://doi.org/10.3390/sym15081546
Submission received: 11 July 2023 / Revised: 26 July 2023 / Accepted: 4 August 2023 / Published: 6 August 2023

Abstract:
In this paper, we have constructed new families of derivative-free three- and four-parametric methods with and without memory for finding the roots of nonlinear equations. Error analysis verifies that the without-memory methods are optimal as per Kung–Traub’s conjecture, with orders of convergence of 4 and 8, respectively. To further enhance their convergence capabilities, the with-memory methods incorporate accelerating parameters, elevating their convergence orders to 7.5311 and 15.5156, respectively, without introducing extra function evaluations. As such, they exhibit exceptional efficiency indices of 1.9601 and 1.9847, respectively, nearing the maximum efficiency index of 2. The convergence domains are also analysed using the basins of attraction, which exhibit symmetrical patterns and shed light on the fascinating interplay between symmetry, dynamic behaviour, the number of diverging points, and efficient root-finding methods for nonlinear equations. Numerical experiments and comparison with existing methods are carried out on some nonlinear functions, including real-world chemical engineering problems, to demonstrate the effectiveness of the new proposed methods and confirm the theoretical results. Notably, our numerical experiments reveal that the proposed methods outperform their existing counterparts, offering superior precision in computation.

1. Introduction

Iterative methods play a crucial role in solving complex nonlinear equations of the form
Ω(s) = 0,  (1)

where Ω : D ⊆ ℝ → ℝ represents a real function defined on an open interval D. With diverse applications spanning scientific and engineering domains, the computation of nonlinear Equation (1) remains a common yet formidable challenge due to the lack of analytical methods. However, iterative methods provide approximate solutions to nonlinear Equation (1) with high accuracy.
Let α ∈ D ⊆ ℝ be a simple root of (1) and s_0 be an initial approximation to α. Then, Ω(α) = 0 and Ω′(α) ≠ 0. The most widely used iterative method for finding the simple root of (1) is given below:

s_{n+1} = s_n − Ω(s_n)/Ω′(s_n),  n = 0, 1, 2, …,  (2)
which is the well-known Newton method [1]. It is a one-point without-memory method with a quadratic order of convergence.
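For illustration, the Newton iteration (2) can be sketched in a few lines; the test function Ω(s) = s² − 2 and all identifiers below are ours, not from the paper.

```python
# Minimal sketch of the Newton iteration (2).
# Omega(s) = s**2 - 2 is an illustrative stand-in, not from the paper.
def newton(omega, omega_prime, s0, tol=1e-12, max_iter=50):
    s = s0
    for _ in range(max_iter):
        step = omega(s) / omega_prime(s)
        s -= step
        if abs(step) < tol:       # stop once the update is negligible
            break
    return s

root = newton(lambda s: s * s - 2.0, lambda s: 2.0 * s, 1.0)  # converges to sqrt(2)
```

Starting from s_0 = 1, the iterates 1.5, 1.41667, 1.41422, … approach √2 quadratically.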
In the past few decades, various optimal multi-point without-memory methods have been developed for the computation of approximate simple roots [2,3,4,5,6]. The concept of an optimal without-memory method is rooted in Kung–Traub’s conjecture [7]. This conjecture proposes that a multi-point without-memory iterative method requiring q function evaluations per iteration achieves optimality when its convergence order is precisely 2^{q−1}. The Newton method (2) is optimal for q = 2. However, the evaluation of the derivative in the Newton method is a setback for many problems where the derivative is expensive to compute or does not exist.
To obtain a derivative-free variant of the Newton method (2), the first derivative Ω′(s_n) in (2) is approximated using the first-order Newton divided difference

Ω′(s_n) ≈ Ω[s_n, w_n] = (Ω(s_n) − Ω(w_n))/(s_n − w_n),  (3)

where w_n = s_n + γΩ(s_n) and γ ≠ 0 is any real parameter. Equation (2) can then be expressed as

s_{n+1} = s_n − Ω(s_n)/Ω[s_n, w_n],  (4)

which corresponds to the Traub–Steffensen method [1]. By setting γ = 1, we obtain the well-known Steffensen method [8]. To assess the efficiency of an iterative method, Ostrowski [9] introduced the efficiency index EI = p^{1/q}, where q represents the number of function evaluations per iteration and p denotes the order of convergence.
The extension of without-memory methods into with-memory methods using accelerating parameters has gained much attention in recent years [10,11,12,13]. In multi-point with-memory iterative methods, the order of convergence is significantly increased without any additional function evaluations by using information from the current as well as the previous iterations. In this paper, we introduce new derivative-free families of three-parametric three-point and four-parametric four-point methods, with and without memory, for finding simple roots of nonlinear equations. The formulation of the methods is based on the derivative-free biparametric families of without-memory methods developed in [14], and the with-memory methods use accelerating parameters without any additional function evaluations. As a result, the orders of convergence of the with-memory methods increase from 4 to 7.5311 and from 8 to 15.5156. The accelerating parameters are approximated using Newton’s interpolating polynomials so as to obtain highly efficient with-memory methods.
The subsequent sections of this paper are organised as follows. Section 2 provides the development of modified derivative-free families of without-memory methods, including an in-depth analysis of their theoretical convergence properties. In Section 3, we delve into the derivation and convergence analysis of the derivative-free families of with-memory methods. The numerical experiments and comparative study of the proposed with and without-memory methods against existing approaches on various test functions, including real-world problems, are presented in Section 4 to assess the effectiveness and applicability of our proposed methods. In this section, we also explore the dynamical properties of the methods through the study of basins of attraction, revealing the presence of reflection symmetry in all provided basins of attraction. Finally, Section 5 concludes this paper with key remarks and observations.

2. Modified Families of Three- and Four-Parametric Without-Memory Methods

In this section, we present the new modified derivative-free families of three- and four-parametric multi-point without-memory methods of optimal order in two separate subsections.

2.1. Modified Families of Three-Parametric Three-Point Without-Memory Methods

First, let us consider the following two derivative-free families of biparametric three-point without-memory methods [14].
w_n = s_n + γΩ(s_n),
y_n = s_n − Ω(s_n)/(Ω[s_n, w_n] + βΩ(w_n)),
s_{n+1} = y_n − (Ω(y_n)/ξ(y_n)) [1 + (1/2) (Ω(y_n)/(Ω[y_n, w_n] + βΩ(w_n)))^2 (ρ(y_n)/Ω(y_n))].  (5)

w_n = s_n + γΩ(s_n),
y_n = s_n − Ω(s_n)/(Ω[s_n, w_n] + βΩ(w_n)),
s_{n+1} = y_n − (Ω(y_n)/ξ(y_n)) [1 + (1/2) (2Ω(y_n)/(Ω[y_n, w_n] + ξ(y_n)))^2 (ρ(y_n)/Ω(y_n))],  (6)

where ξ(y_n) = Ω[s_n, y_n] Ω[y_n, w_n]/Ω[s_n, w_n] and ρ(y_n) = 2 Ω[s_n, y_n] Ω[y_n, w_n] Ω[s_n, y_n, w_n]/(Ω[s_n, w_n])^2.
From here, we introduce a new parameter λ ∈ ℝ∖{0} through the modification of ξ(y_n) as follows:

M(y_n) = Ω[s_n, y_n] Ω[y_n, w_n]/Ω[s_n, w_n] + λ(y_n − s_n)(y_n − w_n) = ξ(y_n) + λ(y_n − s_n)(y_n − w_n).  (7)
Now, substituting the above Equation (7) in Equations (5) and (6), we get two new three-parametric families of without memory iterative methods. These modified methods, denoted as Modified Methods (MM), are defined as follows:
Modified Method 4a (MM4a):

w_n = s_n + γΩ(s_n),
y_n = s_n − Ω(s_n)/(Ω[s_n, w_n] + βΩ(w_n)),
s_{n+1} = y_n − (Ω(y_n)/M(y_n)) [1 + (1/2) (Ω(y_n)/(Ω[y_n, w_n] + βΩ(w_n)))^2 (ρ(y_n)/Ω(y_n))].  (8)
Modified Method 4b (MM4b):

w_n = s_n + γΩ(s_n),
y_n = s_n − Ω(s_n)/(Ω[s_n, w_n] + βΩ(w_n)),
s_{n+1} = y_n − (Ω(y_n)/M(y_n)) [1 + (1/2) (2Ω(y_n)/(Ω[y_n, w_n] + ξ(y_n)))^2 (ρ(y_n)/Ω(y_n))].  (9)
These modified methods adhere to Kung–Traub’s conjecture, requiring three function evaluations per iteration, and exhibit an efficiency index of 4^{1/3} ≈ 1.587.
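As an illustration, one possible implementation of a full MM4a (8) sweep is sketched below. The bracketed correction factor is written as we read it from (8) and should be checked against the original source; the parameter values and the quadratic test function are arbitrary illustrative choices, so the sketch only verifies convergence to the root.

```python
# Sketch of MM4a (8) with fixed parameters; the correction factor follows
# our reading of the formula. Omega(s) = s**2 - 2 is an illustrative
# stand-in, and gamma/beta/lam are arbitrary demo values.
def mm4a(omega, s0, gamma=-0.01, beta=0.1, lam=0.1, tol=1e-12, max_iter=20):
    s = s0
    for _ in range(max_iter):
        fs = omega(s)
        w = s + gamma * fs
        if w == s:                                   # Omega(s) ~ 0: converged
            return s
        fw = omega(w)
        dd_sw = (fs - fw) / (s - w)                  # Omega[s_n, w_n]
        y = s - fs / (dd_sw + beta * fw)
        fy = omega(y)
        dd_sy = (fs - fy) / (s - y)                  # Omega[s_n, y_n]
        dd_yw = (fy - fw) / (y - w)                  # Omega[y_n, w_n]
        dd_syw = (dd_sy - dd_yw) / (s - w)           # Omega[s_n, y_n, w_n]
        xi = dd_sy * dd_yw / dd_sw                   # xi(y_n)
        rho = 2.0 * dd_sy * dd_yw * dd_syw / dd_sw**2
        M = xi + lam * (y - s) * (y - w)             # M(y_n), Equation (7)
        corr = 1.0 + 0.5 * fy * rho / (dd_yw + beta * fw) ** 2
        s_new = y - fy / M * corr
        if abs(s_new - s) < tol:
            return s_new
        s = s_new
    return s

root = mm4a(lambda s: s * s - 2.0, 1.5)
```

Note that one sweep uses exactly the three evaluations Ω(s_n), Ω(w_n), Ω(y_n), as the efficiency-index count above requires.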
Next, we explore the theoretical convergence analysis of the newly introduced modified methods, specifically MM 4 a  and MM 4 b , as outlined in the following theorem.
Theorem 1.
Let an initial approximation s_0 be close enough to the root α of a sufficiently differentiable real function Ω : D ⊆ ℝ → ℝ, where D is an open interval. Then, the modified methods MM4a (8) and MM4b (9) exhibit a fourth order of convergence for any β, γ, λ ∈ ℝ∖{0}. Additionally, both methods have the same error equation, given by

ε_{n+1} = ((1 + Ω′(α)γ)^2 (β + d_2)(λ + Ω′(α)d_2^2 − Ω′(α)d_3)/Ω′(α)) ε_n^4 + O(ε_n^5),  (10)

where d_j = Ω^{(j)}(α)/(j! Ω′(α)), j = 2, 3, …, and ε_n = s_n − α is the error at the nth iteration.
Proof. 
Proof for Modified Method 4a (MM4a):
Let the error at the nth iteration be ε_n = s_n − α. Then, employing the Taylor series expansion in the vicinity of s = α, we obtain

Ω(s_n) = Ω′(α)(ε_n + d_2 ε_n^2 + d_3 ε_n^3 + d_4 ε_n^4) + O(ε_n^5),  (11)
where d_j = Ω^{(j)}(α)/(j! Ω′(α)), j = 2, 3, …. By using w_n = s_n + γΩ(s_n), γ ∈ ℝ∖{0}, expanding Ω(w_n) using the Taylor series yields

Ω(w_n) = Ω′(α)((1 + Ω′(α)γ) ε_n + (1 + Ω′(α)γ(3 + Ω′(α)γ)) d_2 ε_n^2 + Σ_{i=1}^{2} A_i ε_n^{i+2}) + O(ε_n^5),  (12)

where A_i, i = 1, 2 are functions of γ, Ω′(α), d_2, d_3, d_4, i.e., A_1 = 2Ω′(α)γ(1 + Ω′(α)γ) d_2^2 + Ω′(α)γ d_3 + (1 + Ω′(α)γ)^3 d_3, etc.
Then, using (11) and (12), we have
Ω[s_n, w_n] = Ω′(α)(1 + (2 + Ω′(α)γ) d_2 ε_n + Σ_{i=1}^{3} B_i ε_n^{i+1}) + O(ε_n^5),  (13)

where B_i, i = 1, 2, 3 are functions of γ, Ω′(α), d_2, d_3, d_4, i.e., B_1 = Ω′(α)γ d_2^2 + (3 + Ω′(α)γ(3 + Ω′(α)γ)) d_3, B_2 = 2Ω′(α)γ(2 + Ω′(α)γ) d_2 d_3 + (2 + Ω′(α)γ)(2 + Ω′(α)γ(2 + Ω′(α)γ)) d_4, etc.
Using (11), (12) and (13), we can write
y_n − α = (1 + Ω′(α)γ)(β + d_2) ε_n^2 + Σ_{i=1}^{2} C_i ε_n^{i+2} + O(ε_n^5),  (14)

where C_i, i = 1, 2 are functions of β, γ, Ω′(α), d_2, d_3, d_4.
Then, by employing (14),  Ω ( y n )  is obtained as follows:
Ω(y_n) = Ω′(α)((1 + Ω′(α)γ)(β + d_2) ε_n^2 + Σ_{i=1}^{2} C_i ε_n^{i+2}) + O(ε_n^5).  (15)
With the help of (11)–(15), we obtain
M(y_n) = Ω′(α) + (1 + Ω′(α)γ)(λ + 2Ω′(α)β d_2 + 3Ω′(α)d_2^2 − Ω′(α)d_3) ε_n^2 + Σ_{i=1}^{2} D_i ε_n^{i+2} + O(ε_n^5),  (16)

where D_i, i = 1, 2 are functions of β, λ, γ, Ω′(α), d_2, d_3, d_4.
Now, putting the values of Equations (11)–(16) into the final step of Modified Method 4a (MM4a) (8), we get the following error equation:

ε_{n+1} = ((1 + Ω′(α)γ)^2 (β + d_2)(λ + Ω′(α)d_2^2 − Ω′(α)d_3)/Ω′(α)) ε_n^4 + O(ε_n^5),  (17)

which confirms the optimal fourth-order convergence of Modified Method 4a (MM4a) (8). Similarly, we can prove the optimal fourth-order convergence of Modified Method 4b (MM4b) (9). The proof of the theorem is complete. □

2.2. Modified Families of Four-Parametric Four-Point Without-Memory Methods

Here, we examine the derivative-free families of biparametric four-point without-memory methods proposed in [14].
w_n = s_n + γΩ(s_n),
y_n = s_n − Ω(s_n)/(Ω[s_n, w_n] + βΩ(w_n)),
z_n = y_n − (Ω(y_n)/ξ(y_n)) [1 + (1/2) (Ω(y_n)/(Ω[y_n, w_n] + βΩ(w_n)))^2 (ρ(y_n)/Ω(y_n))],
s_{n+1} = z_n − (Ω(z_n)/η(z_n)) [1 + (1/2) (Ω(z_n)/(Ω[z_n, w_n] + βΩ(w_n)))^2 (ψ(z_n)/Ω(z_n))].  (18)
w_n = s_n + γΩ(s_n),
y_n = s_n − Ω(s_n)/(Ω[s_n, w_n] + βΩ(w_n)),
z_n = y_n − (Ω(y_n)/ξ(y_n)) [1 + (1/2) (2Ω(y_n)/(Ω[y_n, w_n] + ξ(y_n)))^2 (ρ(y_n)/Ω(y_n))],
s_{n+1} = z_n − (Ω(z_n)/η(z_n)) [1 + (1/2) (2Ω(z_n)/(Ω[z_n, w_n] + ξ(y_n)))^2 (ψ(z_n)/Ω(z_n))],  (19)
where η(z_n) = Ω[s_n, z_n] + (Ω[s_n, y_n, w_n] − Ω[s_n, z_n, w_n] − Ω[s_n, y_n, z_n])(s_n − z_n) and ψ(z_n) = 2 Ω[s_n, y_n, w_n].
Now, we introduce a new parameter θ ∈ ℝ∖{0} through the modification of η(z_n) as follows:

N(z_n) = η(z_n) + θ(z_n − s_n)(z_n − y_n)(z_n − w_n).  (20)
Then, substituting the above Equation (20) as well as Equation (7) into Equations (18) and (19) yields two new four-parametric families of without-memory iterative methods. These modified methods, denoted as Modified Methods (MM), are defined as follows:
Modified Method 8a (MM8a):

w_n = s_n + γΩ(s_n),
y_n = s_n − Ω(s_n)/(Ω[s_n, w_n] + βΩ(w_n)),
z_n = y_n − (Ω(y_n)/M(y_n)) [1 + (1/2) (Ω(y_n)/(Ω[y_n, w_n] + βΩ(w_n)))^2 (ρ(y_n)/Ω(y_n))],
s_{n+1} = z_n − (Ω(z_n)/N(z_n)) [1 + (1/2) (Ω(z_n)/(Ω[z_n, w_n] + βΩ(w_n)))^2 (ψ(z_n)/Ω(z_n))].  (21)
Modified Method 8b (MM8b):

w_n = s_n + γΩ(s_n),
y_n = s_n − Ω(s_n)/(Ω[s_n, w_n] + βΩ(w_n)),
z_n = y_n − (Ω(y_n)/M(y_n)) [1 + (1/2) (2Ω(y_n)/(Ω[y_n, w_n] + ξ(y_n)))^2 (ρ(y_n)/Ω(y_n))],
s_{n+1} = z_n − (Ω(z_n)/N(z_n)) [1 + (1/2) (2Ω(z_n)/(Ω[z_n, w_n] + ξ(y_n)))^2 (ψ(z_n)/Ω(z_n))].  (22)
The modified methods MM8a and MM8b are optimal as per Kung–Traub’s conjecture: they require four function evaluations per iteration and exhibit an efficiency index of 8^{1/4} ≈ 1.682.
Next, we delve into the theoretical convergence analysis of the newly introduced modified methods, namely MM 8 a  and MM 8 b , as outlined in the following theorem.
Theorem 2.
Let an initial approximation s_0 be close enough to the root α of a sufficiently differentiable real function Ω : D ⊆ ℝ → ℝ, where D is an open interval. Then, the modified methods MM8a (21) and MM8b (22) exhibit the eighth order of convergence for any β, γ, λ, θ ∈ ℝ∖{0}. In addition, the methods MM8a and MM8b have the same error equation, given by

ε_{n+1} = ((1 + Ω′(α)γ)^4 (β + d_2)^2 (λ + Ω′(α)d_2^2 − Ω′(α)d_3)(θ + Ω′(α)d_4)/Ω′(α)^2) ε_n^8 + O(ε_n^9),  (23)

where d_j = Ω^{(j)}(α)/(j! Ω′(α)), j = 2, 3, …, and ε_n = s_n − α is the error at the nth iteration.
Proof. 
Proof for Modified Method 8a (MM8a):
Considering all the assumptions made in Theorem 1, from Equation (17), we have

z_n − α = ((1 + Ω′(α)γ)^2 (β + d_2)(λ + Ω′(α)d_2^2 − Ω′(α)d_3)/Ω′(α)) ε_n^4 + Σ_{i=5}^{8} F_i ε_n^i + O(ε_n^9),  (24)

where F_i, i = 5, 6, …, 8 are functions of β, γ, Ω′(α), d_2, d_3, …, d_8.
Using the above Equation (24), we have

Ω(z_n) = (1 + Ω′(α)γ)^2 (β + d_2)(λ + Ω′(α)d_2^2 − Ω′(α)d_3) ε_n^4 + Σ_{i=5}^{8} F_i ε_n^i + O(ε_n^9).  (25)
Applying Equations (11), (12), (15), (24) and (25), the expansion of N(z_n) is obtained as follows:

N(z_n) = Ω′(α) + (1 + Ω′(α)γ)^2 (β + d_2)(θ + 2Ω′(α)d_2^3 + 2d_2(λ − Ω′(α)d_3) + Ω′(α)d_4) ε_n^4 + Σ_{i=5}^{8} G_i ε_n^i + O(ε_n^9),  (26)

where G_i, i = 5, 6, …, 8 are functions of β, γ, λ, θ, Ω′(α), d_2, d_3, …, d_8.
Now, substituting the values of Equations (12), (24)–(26) into the last step of Equation (21), we obtain the error equation as follows:

ε_{n+1} = (1/Ω′(α)^2)(1 + Ω′(α)γ)^4 (β + d_2)^2 (λ + Ω′(α)d_2^2 − Ω′(α)d_3)(θ + Ω′(α)d_4) ε_n^8 + O(ε_n^9),  (27)

which confirms the optimal eighth-order convergence of Modified Method 8a (MM8a) (21). Similarly, we can prove the optimal eighth-order convergence of the modified method MM8b (22). This completes the proof of the theorem. □
Remark 1.
From Theorems 1 and 2, the analysis of the error Equations (10) and (23) shows that the convergence order of the new modified derivative-free families of without-memory methods (MM4a and MM4b, MM8a and MM8b) can be increased significantly without any additional function evaluations using the free parameters γ, β, λ and θ, i.e., by putting γ = −1/Ω′(α), β = −d_2, λ = Ω′(α)d_3 − Ω′(α)d_2^2 and θ = −Ω′(α)d_4.
However, the exact values of Ω′(α), d_2, d_3, and d_4 are not known to us. So, the parameters γ, β, λ, and θ have to be approximated using known information available from the current as well as the previous iterations. This will be the basis for extending the modified derivative-free families of without-memory methods into derivative-free with-memory methods.

3. New Families of Three- and Four-Parametric With-Memory Methods

In this section, we shall discuss the extension of the new modified derivative-free families of without-memory methods presented in Section 2 into their respective with-memory versions under two separate subsections. Using the available free parameters as accelerating parameters, we aim to increase the convergence order without any additional function evaluations per iteration, thereby obtaining highly efficient multi-point with-memory methods.
Let us now discuss in detail the formulation of the methods, the approximations of the accelerating parameters, and the convergence analysis of the with-memory methods in the following subsections.

3.1. Three-Parametric Three-Point With-Memory Methods

Here, we introduce new derivative-free with-memory methods based on the newly suggested modified fourth order derivative-free families of without-memory methods MM 4 a  (8) and MM 4 b  (9).
From error Equation (10), the convergence order of the methods MM4a (8) and MM4b (9) can be increased from 4 to 8 without any additional function evaluation if we take γ = −1/Ω′(α), β = −d_2 and λ = Ω′(α)d_3 − Ω′(α)d_2^2, where d_2 = Ω″(α)/(2Ω′(α)) and d_3 = Ω‴(α)/(6Ω′(α)). However, the problem is that the exact values of Ω′(α), Ω″(α) and Ω‴(α) are not available to us. So, we use the approximations γ = γ_n, β = β_n and λ = λ_n, where γ_n, β_n and λ_n are the accelerating parameters computed using the available information from the current as well as the previous iterations such that the following conditions are satisfied:

lim_{n→∞} γ_n = −1/Ω′(α),  lim_{n→∞} β_n = −Ω″(α)/(2Ω′(α))  and  lim_{n→∞} λ_n = Ω‴(α)/6 − Ω″(α)^2/(4Ω′(α)).
Now, we consider the following approximations for the accelerating parameters  γ n β n  and  λ n .
γ_n = −1/N_3′(s_n),  β_n = −N_4″(w_n)/(2N_4′(w_n)),  λ_n = N_5‴(y_n)/6 − N_5″(y_n)^2/(4N_5′(y_n)),  n = 0, 1, 2, …,  (28)
where  N 3 ( t ) N 4 ( t )  and  N 5 ( t )  are the respective Newton’s interpolating polynomials of third, fourth, and fifth degrees passing through the best saved points, i.e.,
N_3(t) = N_3(t; s_n, y_{n−1}, w_{n−1}, s_{n−1});  N_4(t) = N_4(t; w_n, s_n, y_{n−1}, w_{n−1}, s_{n−1});  N_5(t) = N_5(t; y_n, w_n, s_n, y_{n−1}, w_{n−1}, s_{n−1}).
Now, applying the approximations of the three accelerating parameters  β n γ n  and  λ n  from (28) in the methods MM 4 a  (8) and MM 4 b  (9), we obtain the following new derivative-free with-memory methods.
New With-Memory Method 4a (NWMM4a): For given s_0, γ_0, β_0 and λ_0, we have w_0 = s_0 + γ_0 Ω(s_0). Then,

γ_n = −1/N_3′(s_n),  β_n = −N_4″(w_n)/(2N_4′(w_n)),  λ_n = N_5‴(y_n)/6 − N_5″(y_n)^2/(4N_5′(y_n)),
w_n = s_n + γ_n Ω(s_n),
y_n = s_n − Ω(s_n)/(Ω[s_n, w_n] + β_n Ω(w_n)),
s_{n+1} = y_n − (Ω(y_n)/M(y_n)) [1 + (1/2) (Ω(y_n)/(Ω[y_n, w_n] + β_n Ω(w_n)))^2 (ρ(y_n)/Ω(y_n))].  (29)
New With-Memory Method 4b (NWMM4b): For given s_0, γ_0, β_0 and λ_0, we have w_0 = s_0 + γ_0 Ω(s_0). Then,

γ_n = −1/N_3′(s_n),  β_n = −N_4″(w_n)/(2N_4′(w_n)),  λ_n = N_5‴(y_n)/6 − N_5″(y_n)^2/(4N_5′(y_n)),
w_n = s_n + γ_n Ω(s_n),
y_n = s_n − Ω(s_n)/(Ω[s_n, w_n] + β_n Ω(w_n)),
s_{n+1} = y_n − (Ω(y_n)/M(y_n)) [1 + (1/2) (2Ω(y_n)/(Ω[y_n, w_n] + ξ(y_n)))^2 (ρ(y_n)/Ω(y_n))].  (30)
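The parameter updates (28) can be mimicked in code: build the interpolant through the saved points in Newton's divided-difference form, expand it to monomial coefficients, and differentiate. All helper names and the quadratic test function below are ours; in the actual methods the interpolation nodes would be the saved iterates s_n, w_n, y_n, s_{n−1}, w_{n−1}, y_{n−1}.

```python
# Sketch of the accelerating-parameter updates (28); helper names and the
# test function are illustrative, not from the paper.
def divided_differences(xs, fs):
    """Newton coefficients f[x0], f[x0,x1], ... for distinct nodes xs."""
    n, dd = len(xs), list(fs)
    coef = [dd[0]]
    for k in range(1, n):
        for i in range(n - 1, k - 1, -1):
            dd[i] = (dd[i] - dd[i - 1]) / (xs[i] - xs[i - k])
        coef.append(dd[k])
    return coef

def to_monomial(xs, coef):
    """Ascending monomial coefficients of the Newton-form interpolant."""
    poly = [0.0] * len(coef)
    basis = [1.0]                      # coefficients of prod_{j<k}(t - xs[j])
    for k, c in enumerate(coef):
        for i, b in enumerate(basis):
            poly[i] += c * b
        new = [0.0] * (len(basis) + 1)  # multiply basis by (t - xs[k])
        for i, b in enumerate(basis):
            new[i] -= xs[k] * b
            new[i + 1] += b
        basis = new
    return poly

def poly_deriv_at(poly, t, order):
    """Differentiate `order` times, then evaluate at t by Horner's rule."""
    p = list(poly)
    for _ in range(order):
        p = [i * p[i] for i in range(1, len(p))]
    val = 0.0
    for c in reversed(p):
        val = val * t + c
    return val

def accelerators(f, pts3, pts4, pts5):
    """gamma_n, beta_n, lambda_n per (28); pts_k[0] is the evaluation point."""
    def N(nodes):
        return to_monomial(nodes, divided_differences(nodes, [f(x) for x in nodes]))
    p3, p4, p5 = N(pts3), N(pts4), N(pts5)
    gamma = -1.0 / poly_deriv_at(p3, pts3[0], 1)
    beta = -poly_deriv_at(p4, pts4[0], 2) / (2.0 * poly_deriv_at(p4, pts4[0], 1))
    lam = (poly_deriv_at(p5, pts5[0], 3) / 6.0
           - poly_deriv_at(p5, pts5[0], 2) ** 2 / (4.0 * poly_deriv_at(p5, pts5[0], 1)))
    return gamma, beta, lam

f = lambda s: s * s - 2.0
gamma, beta, lam = accelerators(
    f,
    [1.5, 1.4, 1.45, 1.55],
    [1.4, 1.5, 1.45, 1.55, 1.35],
    [1.45, 1.5, 1.4, 1.55, 1.35, 1.6],
)
```

Because the interpolants reproduce the quadratic f exactly, the updates return gamma = −1/f′(1.5) = −1/3, beta = −f″/(2f′(1.4)) = −1/2.8 and lam = 0 − f″² /(4f′(1.45)) = −1/2.9, i.e., exactly the derivative combinations that the limits after (28) prescribe.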
In order to prove the convergence order of methods NWMM 4 a  (29) and NWMM 4 b  (30), we first present the following lemma.
Lemma 1.
If γ_n = −1/N_3′(s_n), β_n = −N_4″(w_n)/(2N_4′(w_n)) and λ_n = N_5‴(y_n)/6 − N_5″(y_n)^2/(4N_5′(y_n)), n = 0, 1, 2, …, then the following estimates

1 + γ_n Ω′(α) ∼ P_1 ε_{n−1,y} ε_{n−1,w} ε_{n−1},  (31)

β_n + d_2 ∼ P_2 ε_{n−1,y} ε_{n−1,w} ε_{n−1},  (32)

λ_n + Ω′(α)d_2^2 − Ω′(α)d_3 ∼ P_3 ε_{n−1,y} ε_{n−1,w} ε_{n−1}  (33)

hold, where ε_n = s_n − α, ε_{n,y} = y_n − α, ε_{n,w} = w_n − α, and P_1, P_2, P_3 are some asymptotic constants.
Proof. 
The proof is similar to Lemma 1 of [12]. □
Now, we state and prove the following theorem for obtaining the R-order of convergence [8] of the new three-point with-memory methods NWMM 4 a  (29) and NWMM 4 b  (30).
Theorem 3.
If an initial approximation s_0 is sufficiently close to the root α of Ω(s) = 0 and the parameters γ_n, β_n and λ_n are calculated by the expressions (28), then the R-order of convergence of the methods NWMM4a (29) and NWMM4b (30) is at least 7.5311.
Proof. 
Let the sequence of approximations {s_n} produced by the method NWMM4a (29) converge to the root α with order r. Then, we can write

ε_{n+1} ∼ ε_n^r,  (34)

where ε_n = s_n − α.
Then,

ε_n ∼ ε_{n−1}^r.  (35)

Thus,

ε_{n+1} ∼ ε_n^r = (ε_{n−1}^r)^r = ε_{n−1}^{r^2}.  (36)
Assuming the iterative sequences {w_n} and {y_n} have orders r_1 and r_2, respectively, then using (34) and (35) gives

ε_{n,w} ∼ ε_n^{r_1} = (ε_{n−1}^r)^{r_1} = ε_{n−1}^{r r_1},  (37)

ε_{n,y} ∼ ε_n^{r_2} = (ε_{n−1}^r)^{r_2} = ε_{n−1}^{r r_2}.  (38)
Using Theorem 1 and Lemma 1, we get
ε_{n,w} ∼ (1 + γ_n Ω′(α)) ε_n ∼ ε_{n−1}^{r + r_1 + r_2 + 1},  (39)

ε_{n,y} ∼ (1 + γ_n Ω′(α))(β_n + d_2) ε_n^2 ∼ ε_{n−1}^{2r + 2r_1 + 2r_2 + 2},  (40)

ε_{n+1} ∼ (1 + γ_n Ω′(α))^2 (β_n + d_2)(λ_n + Ω′(α)d_2^2 − Ω′(α)d_3) ε_n^4 ∼ ε_{n−1}^{4r + 4r_1 + 4r_2 + 4}.  (41)
Now, comparing the corresponding powers of ε_{n−1} on the right-hand sides of (37) and (39), (38) and (40), and (36) and (41), we get

r r_1 − r − r_1 − r_2 − 1 = 0,
r r_2 − 2r − 2r_1 − 2r_2 − 2 = 0,
r^2 − 4r − 4r_1 − 4r_2 − 4 = 0.  (42)

This system of equations has the non-trivial solution r_1 = 1.8828, r_2 = 3.7656 and r = 7.5311. Hence, the R-order of convergence of the method NWMM4a (29) is at least 7.5311. The R-order of convergence of the method NWMM4b (30) can be proved in a similar manner. The proof is complete. □
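The system above can in fact be reduced by hand: dividing the second and third equations by the first gives r_2 = 2r_1 and r = 4r_1, so the first equation collapses to 4r_1^2 − 7r_1 − 1 = 0, whose positive root reproduces the stated orders. The same reduction applied to the four-point system of Section 3.2 gives 8r_1^2 − 15r_1 − 1 = 0. A quick check:

```python
import math

# Three-point system: r2 = 2*r1, r = 4*r1, and 4*r1**2 - 7*r1 - 1 = 0.
r1 = (7 + math.sqrt(65)) / 8          # 1.8828...
r2, r = 2 * r1, 4 * r1                # 3.7656..., 7.5311...

# Four-point analogue (Section 3.2): q = 8*q1 with 8*q1**2 - 15*q1 - 1 = 0.
q1 = (15 + math.sqrt(257)) / 16       # 1.9394...
q2, q3, q = 2 * q1, 4 * q1, 8 * q1    # 3.8789..., 7.7578..., 15.5156...
```

Substituting these values back into the original equations leaves zero residuals, confirming that r = (7 + √65)/2 ≈ 7.5311 is exact.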

3.2. Four-Parametric Four-Point With-Memory Methods

Here, we introduce new derivative-free with-memory methods which are extensions of the newly suggested modified eighth order derivative-free families of without-memory methods MM 8 a  (21) and MM 8 b  (22).
It is evident from error Equation (23) that the convergence order of the methods MM8a (21) and MM8b (22) can be increased from 8 to 16 if we take γ = −1/Ω′(α), β = −d_2, λ = Ω′(α)d_3 − Ω′(α)d_2^2 and θ = −Ω′(α)d_4, where d_2 = Ω″(α)/(2Ω′(α)), d_3 = Ω‴(α)/(6Ω′(α)) and d_4 = Ω^{(iv)}(α)/(24Ω′(α)). In a similar manner to the previous subsection, we use the approximations γ = γ_n, β = β_n, λ = λ_n, and θ = θ_n, where γ_n, β_n, λ_n, and θ_n are the accelerating parameters computed using the available information from the current as well as the previous iterations such that the following conditions are satisfied:

lim_{n→∞} γ_n = −1/Ω′(α),  lim_{n→∞} β_n = −Ω″(α)/(2Ω′(α)),  lim_{n→∞} λ_n = Ω‴(α)/6 − Ω″(α)^2/(4Ω′(α))  and  lim_{n→∞} θ_n = −Ω^{(iv)}(α)/24.
Now, we consider the following approximations for the accelerating parameters  γ n β n λ n  and  θ n .
γ_n = −1/N_4′(s_n),  β_n = −N_5″(w_n)/(2N_5′(w_n)),  λ_n = N_6‴(y_n)/6 − N_6″(y_n)^2/(4N_6′(y_n))  and  θ_n = −N_7^{(iv)}(z_n)/24,  n = 0, 1, 2, …,  (43)
where  N 4 ( t ) N 5 ( t ) N 6 ( t ) , and  N 7 ( t )  are the respective Newton’s interpolating polynomials of fourth, fifth, sixth, and seventh degrees passing through the best saved points, i.e.,
N_4(t) = N_4(t; s_n, z_{n−1}, y_{n−1}, w_{n−1}, s_{n−1});  N_5(t) = N_5(t; w_n, s_n, z_{n−1}, y_{n−1}, w_{n−1}, s_{n−1});  N_6(t) = N_6(t; y_n, w_n, s_n, z_{n−1}, y_{n−1}, w_{n−1}, s_{n−1});  N_7(t) = N_7(t; z_n, y_n, w_n, s_n, z_{n−1}, y_{n−1}, w_{n−1}, s_{n−1}).
Now, applying the approximations of the four accelerating parameters  γ n β n λ n , and  θ n  from (43) in the modified methods MM 8 a  (21) and MM 8 b  (22), we obtain the following new derivative-free with-memory methods.
New With-Memory Method 8a (NWMM8a): For given s_0, γ_0, β_0, λ_0 and θ_0, we have w_0 = s_0 + γ_0 Ω(s_0). Then,

γ_n = −1/N_4′(s_n),  β_n = −N_5″(w_n)/(2N_5′(w_n)),  λ_n = N_6‴(y_n)/6 − N_6″(y_n)^2/(4N_6′(y_n)),  θ_n = −N_7^{(iv)}(z_n)/24,
w_n = s_n + γ_n Ω(s_n),
y_n = s_n − Ω(s_n)/(Ω[s_n, w_n] + β_n Ω(w_n)),
z_n = y_n − (Ω(y_n)/M(y_n)) [1 + (1/2) (Ω(y_n)/(Ω[y_n, w_n] + β_n Ω(w_n)))^2 (ρ(y_n)/Ω(y_n))],
s_{n+1} = z_n − (Ω(z_n)/N(z_n)) [1 + (1/2) (Ω(z_n)/(Ω[z_n, w_n] + β_n Ω(w_n)))^2 (ψ(z_n)/Ω(z_n))].  (44)
New With-Memory Method 8b (NWMM8b): For given s_0, γ_0, β_0, λ_0 and θ_0, we have w_0 = s_0 + γ_0 Ω(s_0). Then,

γ_n = −1/N_4′(s_n),  β_n = −N_5″(w_n)/(2N_5′(w_n)),  λ_n = N_6‴(y_n)/6 − N_6″(y_n)^2/(4N_6′(y_n)),  θ_n = −N_7^{(iv)}(z_n)/24,
w_n = s_n + γ_n Ω(s_n),
y_n = s_n − Ω(s_n)/(Ω[s_n, w_n] + β_n Ω(w_n)),
z_n = y_n − (Ω(y_n)/M(y_n)) [1 + (1/2) (2Ω(y_n)/(Ω[y_n, w_n] + ξ(y_n)))^2 (ρ(y_n)/Ω(y_n))],
s_{n+1} = z_n − (Ω(z_n)/N(z_n)) [1 + (1/2) (2Ω(z_n)/(Ω[z_n, w_n] + ξ(y_n)))^2 (ψ(z_n)/Ω(z_n))].  (45)
In order to prove the convergence order of methods NWMM 8 a  (44) and NWMM 8 b  (45), we first present the following lemma.
Lemma 2.
If γ_n = −1/N_4′(s_n), β_n = −N_5″(w_n)/(2N_5′(w_n)), λ_n = N_6‴(y_n)/6 − N_6″(y_n)^2/(4N_6′(y_n)), and θ_n = −N_7^{(iv)}(z_n)/24, n = 0, 1, 2, …, then the following estimates

1 + γ_n Ω′(α) ∼ Q_1 ε_{n−1,z} ε_{n−1,y} ε_{n−1,w} ε_{n−1},  (46)

β_n + d_2 ∼ Q_2 ε_{n−1,z} ε_{n−1,y} ε_{n−1,w} ε_{n−1},  (47)

λ_n + Ω′(α)d_2^2 − Ω′(α)d_3 ∼ Q_3 ε_{n−1,z} ε_{n−1,y} ε_{n−1,w} ε_{n−1},  (48)

θ_n + Ω′(α)d_4 ∼ Q_4 ε_{n−1,z} ε_{n−1,y} ε_{n−1,w} ε_{n−1}  (49)

hold, where ε_n = s_n − α, ε_{n,y} = y_n − α, ε_{n,w} = w_n − α, ε_{n,z} = z_n − α, and Q_1, Q_2, Q_3, Q_4 are some asymptotic constants.
Proof. 
The proof is similar to Lemma 1 of [12]. □
Now, we state and prove the following theorem for obtaining the R-order of convergence [8] of the new four-point with-memory methods NWMM 8 a  (44) and NWMM 8 b  (45).
Theorem 4.
If an initial approximation s_0 is sufficiently close to the root α of Ω(s) = 0 and the parameters γ_n, β_n, λ_n, and θ_n are calculated by the expressions (43), then the R-order of convergence of the methods NWMM8a (44) and NWMM8b (45) is at least 15.5156.
Proof. 
Let the sequence of approximations {s_n} produced by the method NWMM8a (44) converge to the root α with order r. Then, we can write

ε_{n+1} ∼ ε_n^r,  (50)

where ε_n = s_n − α.
Then,

ε_n ∼ ε_{n−1}^r.  (51)

Thus,

ε_{n+1} ∼ ε_n^r = (ε_{n−1}^r)^r = ε_{n−1}^{r^2}.  (52)
Assuming the iterative sequences {w_n}, {y_n} and {z_n} have orders r_1, r_2, and r_3, respectively, then using (50) and (51) gives

ε_{n,w} ∼ ε_n^{r_1} = (ε_{n−1}^r)^{r_1} = ε_{n−1}^{r r_1},  (53)

ε_{n,y} ∼ ε_n^{r_2} = (ε_{n−1}^r)^{r_2} = ε_{n−1}^{r r_2},  (54)

ε_{n,z} ∼ ε_n^{r_3} = (ε_{n−1}^r)^{r_3} = ε_{n−1}^{r r_3}.  (55)
Using Theorem 2 and Lemma 2, we get
ε_{n,w} ∼ (1 + γ_n Ω′(α)) ε_n ∼ ε_{n−1}^{r + r_1 + r_2 + r_3 + 1},  (56)

ε_{n,y} ∼ (1 + γ_n Ω′(α))(β_n + d_2) ε_n^2 ∼ ε_{n−1}^{2r + 2r_1 + 2r_2 + 2r_3 + 2},  (57)

ε_{n,z} ∼ (1 + γ_n Ω′(α))^2 (β_n + d_2)(λ_n + Ω′(α)d_2^2 − Ω′(α)d_3) ε_n^4 ∼ ε_{n−1}^{4r + 4r_1 + 4r_2 + 4r_3 + 4},  (58)

ε_{n+1} ∼ (1 + γ_n Ω′(α))^4 (β_n + d_2)^2 (λ_n + Ω′(α)d_2^2 − Ω′(α)d_3)(θ_n + Ω′(α)d_4) ε_n^8 ∼ ε_{n−1}^{8r + 8r_1 + 8r_2 + 8r_3 + 8}.  (59)
Now, comparing the corresponding powers of ε_{n−1} on the right-hand sides of (53) and (56), (54) and (57), (55) and (58), and (52) and (59), we get

r r_1 − r − r_1 − r_2 − r_3 − 1 = 0,
r r_2 − 2r − 2r_1 − 2r_2 − 2r_3 − 2 = 0,
r r_3 − 4r − 4r_1 − 4r_2 − 4r_3 − 4 = 0,
r^2 − 8r − 8r_1 − 8r_2 − 8r_3 − 8 = 0.  (60)

This system of equations has the non-trivial solution r_1 = 1.9394, r_2 = 3.8789, r_3 = 7.7578 and r = 15.5156. Hence, the R-order of convergence of the method NWMM8a (44) is at least 15.5156. The R-order of convergence of the method NWMM8b (45) can be proved in a similar manner. The proof is complete. □

4. Numerical Experiments

In this section, we examine the performance and the computational efficiency of the newly developed with- and without-memory methods discussed in Section 2 and Section 3 and compare them with some methods of a similar nature available in the literature. In particular, we have considered for the comparison the following derivative-free three-parametric methods: FZM4 (4.1) [15], VTM4 (28) [16], and SM4 (4.1) [17], and the following four-parametric methods: AJM8 [13], ZM8 (ZR1 from [18]), and ACM8 (M1 from [19]).
All numerical tests have been executed using the multi-precision arithmetic programming software Mathematica 12.2. For all methods, we have chosen the same parameter values γ_0 = β_0 = λ_0 = θ_0 = 1 for every test function in order to start the iterations; the same values are used for the corresponding parameters of all the compared methods so as to ensure a uniform and fair comparison.
Numerical test functions which comprise a standard academic example and some real-life chemical engineering problems along with their simple roots  ( α )  and initial guesses  ( s 0 )  are presented below.
Example 1.
A standard academic test function given by
Ω_1(s) = e^{s^2}(1 + s^3 + s^6)(s − 2).  (61)
It has a simple root  α = 2 . We use  s 0 = 2.3  as the initial guess and the results are displayed in Table 1.
Example 2.
The Michaelis–Menten model [20] describes the kinetics of enzyme-mediated reactions and has the following expression:
dS/dt = −ν_m S/(K_s + S),  (62)
where S is the substrate concentration (moles/L),  ν m  is the maximum uptake rate (moles/L/d), and  K s  is the half-saturation constant, which is the substrate level at which uptake is half of the maximum (moles/L).
If  S 0  is the initial substrate level at  t = 0 , then the above equation can be solved for S as follows:
S = S_0 − ν_m t + K_s log(S_0/S).  (63)
For a particular case where  t = 10 S 0 = 8  moles/L,  ν m = 0.7  moles/L/d, and  K s = 2.5  moles/L, the above equation reduces to the following nonlinear function.
Ω_2(s) = s − 2.5 log(8/s) − 1,  (64)
where s denotes the substrate concentration S to be determined. The nonlinear equation  Ω 2 ( s ) = 0  has a simple root  α 3.2511115053800575 . We use  s 0 = 3.8  as the initial guess and the results are displayed in Table 2.
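The stated root is easy to verify numerically. Below, a plain Newton iteration (our choice of solver, with Ω_2′(s) = 1 + 2.5/s) is applied from the same initial guess.

```python
import math

# Cross-checking the root of the Michaelis-Menten equation with a plain
# Newton iteration (our choice of solver; Omega_2'(s) = 1 + 2.5/s).
def omega2(s):
    return s - 2.5 * math.log(8.0 / s) - 1.0

s = 3.8                      # same initial guess as in the example
for _ in range(30):
    s -= omega2(s) / (1.0 + 2.5 / s)
```

After a handful of iterations, s settles at the substrate concentration reported in the example, with a residual at machine-precision level.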
Example 3.
Let us consider the conversion of the fraction of Nitrogen–Hydrogen feed into Ammonia, called fractional conversion, at a pressure of 250 atm and temperature of  500 C  (see [21] for details). When reduced to the polynomial form, the problem has the following expression:
Ω_3(s) = s^4 − 7.79075 s^3 + 14.7445 s^2 + 2.511 s − 1.674.  (65)
The nonlinear equation  Ω 3 ( s ) = 0  has a simple root  α 0.27775954284172066 . We take  s 0 = 0.6  as the initial guess and the results are displayed in Table 3.
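This root, too, can be cross-checked with a plain Newton iteration (again our choice of solver) from the same initial guess.

```python
# Cross-checking the fractional-conversion root of Omega_3 with a plain
# Newton iteration (our choice of solver), starting from s0 = 0.6.
def omega3(s):
    return s**4 - 7.79075 * s**3 + 14.7445 * s**2 + 2.511 * s - 1.674

def omega3_prime(s):
    return 4.0 * s**3 - 3.0 * 7.79075 * s**2 + 2.0 * 14.7445 * s + 2.511

s = 0.6
for _ in range(50):
    fp = omega3_prime(s)
    if fp == 0.0:            # guard against a flat spot (does not occur here)
        break
    s -= omega3(s) / fp
```

The iterates 0.6, 0.3165, 0.2790, … descend quickly to the simple root near 0.2778.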
Example 4.
The equation of state for a van der Waals fluid takes the following form [22]:
(P + a/V^2)(V − b) = RT,  (66)
where  a , b , and R are positive constants, P is the pressure, T is the absolute temperature, and V is the molar volume.
Now, let us substitute p = P/P_c = 27b^2 P/a, t = T/T_c = 27RbT/(8a) and v = V/V_c = V/(3b), where P_c = a/(27b^2) is the critical pressure, T_c = 8a/(27Rb) is the critical temperature, and V_c = 3b is the critical molar volume.
Then, the above Equation (66), in which the pressure, temperature, and volume are expressed in terms of their critical values, becomes

(p + 3/v^2)(3v − 1) = 8t,  (67)
where $p$, $t$, and $v$ are called the reduced pressure, temperature, and volume, respectively. For the particular values $p = 6$ and $t = 2$, Equation (67) reduces to the following nonlinear equation:
$$ \Omega_4(s) = 18 s^3 - 22 s^2 + 9 s - 3 = 0, $$
where $s$ represents the reduced volume $v$ to be determined. Equation (68) has a simple root $\alpha \approx 0.86728815393727851$. We use $s_0 = 1.2$ as the initial guess, and the results are displayed in Table 4.
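Since $\Omega_4$ is a cubic, its single physical (real) root can be cross-checked with a companion-matrix polynomial solver; the short sketch below uses NumPy for verification only (NumPy is an assumption of this example, not a tool used in the paper):

```python
import numpy as np

# Ω4(s) = 18 s^3 - 22 s^2 + 9 s - 3: a cubic with one real root and one
# complex-conjugate pair, so only one physically meaningful reduced volume.
coeffs = [18.0, -22.0, 9.0, -3.0]
all_roots = np.roots(coeffs)
real = [r.real for r in all_roots if abs(r.imag) < 1e-10]
print(real)  # exactly one real root, near 0.8673
```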
All the results and analysis of the numerical computations are displayed in Table 1, Table 2, Table 3 and Table 4. The aim of these tables is to showcase the performance of the iterative methods in terms of their convergence speed and accuracy. We measure the convergence by tracking the number of iterations (n) required to satisfy the stopping criterion:
$$ |s_n - s_{n-1}| + |\Omega(s_n)| < 10^{-60}, $$
where $s_n$ represents the current iterate and $|\Omega(s_n)|$ denotes the absolute residual error of the function. To provide further insight, the tables also include the estimated errors between consecutive iterates, $|s_n - s_{n-1}|$, for the first three iterations. Moreover, we calculate the computational order of convergence (COC) using the formula [23]:
$$ \mathrm{COC} = \frac{\log \left| \Omega(s_n) / \Omega(s_{n-1}) \right|}{\log \left| \Omega(s_{n-1}) / \Omega(s_{n-2}) \right|}. $$
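To illustrate how the stopping rule and the COC formula are applied in practice, the sketch below runs plain Newton iteration on $\Omega_2$ as a stand-in for the paper's derivative-free methods. Double precision cannot reach the $10^{-60}$ threshold used in the experiments (that requires multiprecision arithmetic with a few hundred digits), so the tolerance is loosened here:

```python
import math

def omega2(s):
    return s - 2.5 * math.log(8.0 / s) - 1.0

def d_omega2(s):
    return 1.0 + 2.5 / s

def run(f, df, s0, tol, max_iter=60):
    """Iterate until |s_n - s_(n-1)| + |f(s_n)| < tol (the paper's criterion)."""
    s = [float(s0)]
    for _ in range(max_iter):
        s.append(s[-1] - f(s[-1]) / df(s[-1]))
        if abs(s[-1] - s[-2]) + abs(f(s[-1])) < tol:
            break
    return s

def coc(f, s):
    """COC estimated from the residuals of the last three iterates."""
    return math.log(abs(f(s[-1]) / f(s[-2]))) / math.log(abs(f(s[-2]) / f(s[-3])))

s = run(omega2, d_omega2, 3.8, tol=1e-4)
c = coc(omega2, s)
print(len(s) - 1, s[-1])  # a handful of iterations, root near 3.25111150538
print(c)                  # ≈ 2, the theoretical order of Newton's method
```

The same driver works for any one-point scheme: only the update line inside `run` changes.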
The numerical results in Table 1, Table 2, Table 3 and Table 4 reveal the good performance and improved efficiency of the proposed with- and without-memory methods, confirming the theoretical results. The proposed methods achieve better accuracy, with smaller errors after three iterations, than the existing methods used for comparison. Table 2, Table 3 and Table 4 confirm the applicability of the proposed families of methods to real-world chemical problems. In addition, some of the compared methods fail to converge to the required roots and diverge, which is not the case for the proposed families, as can be observed from Figure 1, Figure 2, Figure 3 and Figure 4. Further, the numerical results show that the COC supports the theoretical convergence order of the new with- and without-memory methods on the test functions.

Comparison by Basins of Attraction

In this section, we explore the dynamical properties of the proposed methods discussed in Section 2. To analyse their behaviour in the complex plane, we examine the basins of attraction associated with each method. Specifically, we compare MM4a with Method (5), MM4b with Method (6), MM8a with Method (18), and MM8b with Method (19).
We used a $401 \times 401$ grid to represent the complex plane region $R = [-2, 2] \times [-2, 2]$. Each point $z_0$ in $R$ was assigned a colour according to the root to which the iterative method converged from that starting point. Points were marked black (divergent) if the method failed to converge to any root, to a tolerance of $10^{-4}$, within 100 iterations. Simple roots are represented by white circles; brighter colours indicate faster convergence, and darker colours slower convergence. In Figure 5, we illustrate the basins of attraction obtained by applying the fourth- and eighth-order methods to the function $p(z) = z^3 + z$. For a fair comparison, we take the same parameter values $\gamma = \beta = 0.001$ for all compared methods.
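The grid procedure just described can be sketched as follows. Newton's method on $p(z) = z^3 + z$ stands in for the compared methods (which are not reproduced here), and the $401 \times 401$ grid is reduced to $201 \times 201$ to keep the example quick:

```python
import numpy as np

def newton_basins(n=201, max_iter=100, tol=1e-4):
    """Basin-of-attraction grid over R = [-2, 2] x [-2, 2] for p(z) = z^3 + z.
    Newton's update is the only line that would change for another method."""
    roots = np.array([0.0 + 0.0j, 1j, -1j])   # the three roots of z^3 + z
    x = np.linspace(-2.0, 2.0, n)
    z = x[None, :] + 1j * x[:, None]          # grid of starting points z0
    with np.errstate(all="ignore"):           # far-flung iterates may overflow
        for _ in range(max_iter):
            dp = 3 * z**2 + 1
            # one Newton step, skipped where the derivative (nearly) vanishes
            z = np.where(np.abs(dp) > 1e-14, z - (z**3 + z) / dp, z)
        dist = np.abs(z[..., None] - roots[None, None, :])
    basin = np.argmin(dist, axis=-1)          # colour index = nearest root
    converged = dist.min(axis=-1) <= tol      # NaN/inf compare False -> divergent
    return basin, ~converged

basin, diverged = newton_basins()
print(diverged.mean())  # fraction of black (divergent) points; small for Newton
```

Colouring `basin` by root index and shading by the iteration count at which each point first met the tolerance reproduces plots in the style of Figure 5.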
Figure 5 shows that all the compared methods exhibit large basins of attraction with only a few divergent points. However, the proposed modified methods outperform the biparametric methods owing to the inclusion of additional parameters; in fact, the proposed methods MM4b and MM8b perform best, with no divergent points at all.
Moreover, Table 5 and Table 6 show that each of the proposed methods yields significantly fewer divergent points. In particular, MM4a shows an improvement of 71.4% over Method (5), and MM4b a 100% improvement over Method (6). Similarly, MM8a and MM8b show 48.1% and 100% improvements over Methods (18) and (19), respectively. Notably, both MM4b and MM8b have no divergent points, as observed from Table 5 and Table 6, respectively. This underscores the crucial role of the extra parameters in enhancing stability and reducing the number of divergent points in the proposed methods.
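The improvement percentages quoted above follow directly from the divergent-point counts; for instance, taking the Table 5 counts for Method (5) and MM4a:

```python
# Improvement figures recomputed from the raw divergent-point counts
# (105 for Method (5), 30 for MM4a) on the 401 x 401 grid of starting points.
grid_points = 401 * 401                          # 160801 points in R
before, after = 105, 30
improvement = 100.0 * (before - after) / before  # reduction in divergent points
pct_before = 100.0 * before / grid_points        # share of the whole grid
print(f"{improvement:.1f}%")  # 71.4%
print(f"{pct_before:.3f}%")   # 0.065%
```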

5. Concluding Remarks

In this paper, we have presented new derivative-free three- and four-parametric with- and without-memory methods for finding simple roots of nonlinear equations. The methods are based on modifications of the derivative-free without-memory methods developed in [14]. The use of accelerating parameters in the with-memory methods has enabled us to increase the convergence order of the without-memory methods and to obtain very high computational efficiency indices of $7.5311^{1/3} \approx 1.9601$ for the three-point and $15.5156^{1/4} \approx 1.9847$ for the four-point with-memory methods. The numerical test results have demonstrated the good performance and applicability of the proposed with- and without-memory methods: compared with the existing methods, they attain smaller residual errors and smaller errors between consecutive iterates, converging to the required simple roots in fewer iterations. Moreover, the study of the dynamical behaviour through the basins of attraction further confirms the crucial role of the extra parameters in enhancing stability and reducing the number of divergent points in the proposed methods.

Author Contributions

Conceptualisation, G.T. and S.P.; methodology, G.T. and S.P.; software, G.T.; validation, G.T. and S.P.; formal analysis, L.C.B. and S.P.; resources, L.C.B.; writing—original draft preparation, G.T. and S.P.; writing—review and editing, G.T., S.P. and L.J.; visualisation, G.T.; data curation, L.J.; supervision, S.P. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Technical University of Cluj-Napoca open access publication grant.

Data Availability Statement

Not applicable.

Acknowledgments

The authors gratefully acknowledge the University Grants Commission (UGC), New Delhi, India, for providing financial assistance to carry out this work.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Traub, J.F. Iterative Methods for the Solution of Equations; American Mathematical Society: Providence, RI, USA, 1982; Volume 312.
2. Sivakumar, P.; Madhu, K.; Jayaraman, J. Optimal fourth order methods with its multi-step version for nonlinear equation and their basins of attraction. SeMA 2019, 76, 559–579.
3. Solaiman, O.S.; Abdul Karim, S.A.; Hashim, I. Optimal fourth- and eighth-order of convergence derivative-free modifications of King's method. J. King Saud Univ. Sci. 2019, 31, 1499–1504.
4. Thangkhenpau, G.; Panday, S. Optimal Eight Order Derivative-Free Family of Iterative Methods for Solving Nonlinear Equations. IAENG Int. J. Comput. Sci. 2023, 50, 335–341.
5. Singh, A.; Jaiswal, J.P. A class of optimal eighth-order Steffensen-type iterative methods for solving nonlinear equations and their basins of attraction. Appl. Math. Inf. Sci. 2016, 10, 251–257.
6. Thangkhenpau, G.; Panday, S. Efficient Families of Multipoint Iterative Methods for Solving Nonlinear Equations. Eng. Lett. 2023, 31, 574–583.
7. Kung, H.T.; Traub, J.F. Optimal order of one-point and multipoint iteration. J. Assoc. Comput. Mach. 1974, 21, 643–651.
8. Ortega, J.M.; Rheinboldt, W.G. Iterative Solution of Nonlinear Equations in Several Variables; Academic Press: New York, NY, USA, 1970.
9. Ostrowski, A.M. Solution of Equations and Systems of Equations; Academic Press: New York, NY, USA; London, UK, 1966.
10. Awadalla, M.; Qureshi, S.; Soomro, A.; Abuasbeh, K. A Novel Three-Step Numerical Solver for Physical Models under Fractal Behavior. Symmetry 2023, 15, 330.
11. Akram, S.; Khalid, M.; Junjua, M.-U.-D.; Altaf, S.; Kumar, S. Extension of King's Iterative Scheme by Means of Memory for Nonlinear Equations. Symmetry 2023, 15, 1116.
12. Džunić, J. On efficient two-parameter methods for solving nonlinear equations. Numer. Algorithms 2013, 63, 549–569.
13. Singh, A.; Jaiswal, J.P. Improving R-Order Convergence of Derivative Free with Memory Method by Two Self-accelerator Parameters. In Mathematical Analysis and Its Applications; Springer Proceedings in Mathematics & Statistics; Agrawal, P., Mohapatra, R., Singh, U., Srivastava, H., Eds.; Springer: New Delhi, India, 2015; Volume 143.
14. Thangkhenpau, G.; Panday, S.; Chanu, W.H. New efficient bi-parametric families of iterative methods with engineering applications and their basins of attraction. Results Control Optim. 2023, 12, 100243.
15. Zafar, F.; Yasmin, N.; Kutbi, M.A.; Zeshan, M. Construction of tri-parametric derivative free fourth order with and without memory iterative method. J. Nonlinear Sci. Appl. 2016, 9, 1410–1423.
16. Torkashvand, V. A two-point eighth-order method based on the weight function for solving nonlinear equations. J. Numer. Anal. Approx. Theory 2021, 50, 73–93.
17. Soleymani, F.; Lotfi, T.; Tanakli, E.; Haghani, F.K. Several iterative methods with memory using self-accelerators. Appl. Math. Comput. 2015, 254, 452–458.
18. Zafar, F.; Cordero, A.; Torregrosa, J.R.; Rafi, A. A class of four parametric with- and without-memory root finding methods. Comput. Math. Methods 2019, 1, e1024.
19. Cordero, A.; Junjua, M.-U.-D.; Torregrosa, J.R.; Yasmin, N.; Zafar, F. Efficient four parametric with and without-memory iterative methods possessing high efficiency indices. Math. Probl. Eng. 2018, 2018, 1–12.
20. Chapra, S.C. Applied Numerical Methods with MATLAB® for Engineers and Scientists, 3rd ed.; McGraw-Hill: New York, NY, USA, 2012.
21. Naseem, A.; Rehman, M.A.; Abdeljawad, T. Numerical methods with engineering applications and their visual analysis via polynomiography. IEEE Access 2021, 9, 99287–99298.
22. Shams, M.; Rafiq, N.; Ahmad, B.; Mir, N.A. Inverse numerical iterative technique for finding all roots of non-linear equations with engineering applications. J. Math. 2021, 2021, 6643514.
23. Petković, M.S. Remarks on "On a general class of multipoint root-finding methods of high computational efficiency". SIAM J. Numer. Anal. 2011, 49, 1317–1319.
Figure 1. Graphical comparison based on the log of $|s_n - s_{n-1}|$ for $\Omega_1(s)$.
Figure 2. Graphical comparison based on the log of $|s_n - s_{n-1}|$ for $\Omega_2(s)$.
Figure 3. Graphical comparison based on the log of $|s_n - s_{n-1}|$ for $\Omega_3(s)$.
Figure 4. Graphical comparison based on the log of $|s_n - s_{n-1}|$ for $\Omega_4(s)$.
Figure 5. Basins of attraction for the proposed methods and the biparametric methods applied to the function $p(z) = z^3 + z$.
Table 1. Comparison of test function $\Omega_1(s)$.

| Methods | n | $|s_1 - s_0|$ | $|s_2 - s_1|$ | $|s_3 - s_2|$ | $|\Omega_1(s_3)|$ | COC |
|---|---|---|---|---|---|---|
| Without memory | | | | | | |
| FZM4 | 5 | 0.33009 | 0.030086 | 9.2612 × 10^-7 | 9.2120 × 10^-25 | 4.0000 |
| VTM4 | 5 | 0.31989 | 0.019888 | 2.0505 × 10^-7 | 2.6989 × 10^-27 | 4.0000 |
| SM4 | 5 | 0.33009 | 0.030086 | 9.2612 × 10^-7 | 9.2120 × 10^-25 | 4.0000 |
| MM4a | 5 | 0.29425 | 0.0057493 | 4.0008 × 10^-10 | 1.2989 × 10^-38 | 4.0000 |
| MM4b | 5 | 0.30077 | 0.00076940 | 1.3367 × 10^-13 | 1.6187 × 10^-52 | 4.0000 |
| ZM8 | 4 | 0.29937 | 0.00062516 | 6.3288 × 10^-26 | 9.4210 × 10^-202 | 8.0000 |
| ACM8 | 4 | 0.30459 | 0.0045854 | 1.0110 × 10^-20 | 7.6376 × 10^-162 | 8.0000 |
| AJM8 | 4 | 0.28708 | 0.012920 | 7.4714 × 10^-15 | 1.4745 × 10^-112 | 8.0000 |
| MM8a | 4 | 0.30000 | 1.2807 × 10^-6 | 1.6291 × 10^-48 | 1.4926 × 10^-383 | 8.0000 |
| MM8b | 4 | 0.30000 | 2.2530 × 10^-6 | 1.4939 × 10^-46 | 7.4639 × 10^-368 | 8.0000 |
| With memory | | | | | | |
| FZM4 | 4 | 0.33009 | 0.030087 | 9.0157 × 10^-15 | 2.2995 × 10^-100 | 7.6456 |
| VTM4 | 4 | 0.31989 | 0.019888 | 1.3343 × 10^-15 | 1.0917 × 10^-108 | 7.6031 |
| SM4 | 4 | 0.33009 | 0.030087 | 1.4778 × 10^-13 | 5.4594 × 10^-90 | 7.2871 |
| NWMM4a | 4 | 0.29425 | 0.0057493 | 6.0134 × 10^-18 | 2.7847 × 10^-131 | 7.5271 |
| NWMM4b | 4 | 0.30077 | 0.00076940 | 4.1128 × 10^-26 | 2.1806 × 10^-189 | 7.5603 |
| ZM8 | 4 | 0.29937 | 0.00062516 | 3.6733 × 10^-50 | 4.7977 × 10^-749 | 15.145 |
| ACM8 | 4 | 0.30459 | 0.0045854 | 1.6369 × 10^-39 | 1.3451 × 10^-598 | 15.541 |
| AJM8 | 4 | 0.28708 | 0.012920 | 8.6225 × 10^-26 | 1.4868 × 10^-350 | 14.000 |
| NWMM8a | 3 | 0.30000 | 1.2807 × 10^-6 | 4.4603 × 10^-96 | 1.7660 × 10^-1486 | 15.544 |
| NWMM8b | 3 | 0.30000 | 2.2530 × 10^-6 | 6.8171 × 10^-91 | 3.0895 × 10^-1398 | 15.470 |
Table 2. Comparison of test function $\Omega_2(s)$.

| Methods | n | $|s_1 - s_0|$ | $|s_2 - s_1|$ | $|s_3 - s_2|$ | $|\Omega_2(s_3)|$ | COC |
|---|---|---|---|---|---|---|
| Without memory | | | | | | |
| FZM4 | 5 | 0.52278 | 0.026103 | 3.6376 × 10^-7 | 2.5627 × 10^-26 | 4.0000 |
| VTM4 | 5 | 0.52808 | 0.020806 | 1.0009 × 10^-7 | 9.7869 × 10^-29 | 4.0000 |
| SM4 | 5 | 0.52278 | 0.026103 | 3.6376 × 10^-7 | 2.5627 × 10^-26 | 4.0000 |
| MM4a | 5 | 0.53110 | 0.017791 | 3.5680 × 10^-8 | 1.0391 × 10^-30 | 4.0000 |
| MM4b | 5 | 0.53031 | 0.018583 | 4.2589 × 10^-8 | 2.1093 × 10^-30 | 4.0000 |
| ZM8 | 4 | 0.54884 | 0.000043546 | 2.1311 × 10^-36 | 1.2407 × 10^-286 | 8.0000 |
| ACM8 | 4 | 0.54976 | 0.00086944 | 3.0625 × 10^-25 | 1.2791 × 10^-196 | 8.0000 |
| AJM8 | 4 | 0.54818 | 0.00070892 | 1.1467 × 10^-26 | 9.5186 × 10^-209 | 8.0000 |
| MM8a | 4 | 0.54923 | 0.00034385 | 2.5135 × 10^-29 | 3.6221 × 10^-230 | 8.0000 |
| MM8b | 4 | 0.54924 | 0.00034685 | 2.6942 × 10^-29 | 6.3116 × 10^-230 | 8.0000 |
| With memory | | | | | | |
| FZM4 | 4 | 0.52278 | 0.026104 | 1.8327 × 10^-20 | 7.9524 × 10^-153 | 7.5500 |
| VTM4 | 4 | 0.52808 | 0.020806 | 3.0179 × 10^-21 | 9.7288 × 10^-159 | 7.5491 |
| SM4 | 4 | 0.52278 | 0.026104 | 8.1149 × 10^-20 | 5.7816 × 10^-140 | 7.2839 |
| NWMM4a | 4 | 0.53110 | 0.017791 | 1.1641 × 10^-21 | 5.3718 × 10^-162 | 7.5518 |
| NWMM4b | 4 | 0.53031 | 0.018583 | 1.6423 × 10^-21 | 7.3086 × 10^-161 | 7.5520 |
| ZM8 | 3 | 0.54884 | 0.000043546 | 1.4128 × 10^-80 | 1.4045 × 10^-1223 | 15.145 |
| ACM8 | 3 | 0.54976 | 0.00086944 | 3.5527 × 10^-64 | 4.1590 × 10^-995 | 15.420 |
| AJM8 | 4 | 0.54818 | 0.00070892 | 2.1494 × 10^-58 | 4.0336 × 10^-814 | 14.000 |
| NWMM8a | 3 | 0.54923 | 0.00034385 | 1.7197 × 10^-70 | 5.4863 × 10^-1093 | 15.426 |
| NWMM8b | 3 | 0.54924 | 0.00034685 | 1.9715 × 10^-70 | 4.5987 × 10^-1092 | 15.426 |
Table 3. Comparison of test function $\Omega_3(s)$.

| Methods | n | $|s_1 - s_0|$ | $|s_2 - s_1|$ | $|s_3 - s_2|$ | $|\Omega_3(s_3)|$ | COC |
|---|---|---|---|---|---|---|
| Without memory | | | | | | |
| FZM4 | – | Divergent | Divergent | Divergent | Divergent | – |
| VTM4 | – | Divergent | Divergent | Divergent | Divergent | – |
| SM4 | – | Divergent | Divergent | Divergent | Divergent | – |
| MM4a | 6 | 0.28422 | 0.037789 | 2.2754 × 10^-4 | 6.5335 × 10^-14 | 4.0000 |
| MM4b | 6 | 0.28258 | 0.039391 | 2.7277 × 10^-4 | 1.3208 × 10^-13 | 4.0000 |
| ZM8 | – | Divergent | Divergent | Divergent | Divergent | – |
| ACM8 | 5 | 0.28268 | 0.039558 | 1.7645 × 10^-6 | 1.2870 × 10^-45 | 8.0000 |
| AJM8 | 4 | 0.32808 | 0.0058362 | 4.7720 × 10^-16 | 8.2224 × 10^-121 | 8.0000 |
| MM8a | 4 | 0.32322 | 0.00098388 | 2.3084 × 10^-24 | 9.2316 × 10^-189 | 8.0000 |
| MM8b | 4 | 0.32335 | 0.0011099 | 6.9239 × 10^-24 | 6.0483 × 10^-185 | 8.0000 |
| With memory | | | | | | |
| FZM4 | 5 | 0.021086 | 0.34332 | 5.6284 × 10^-6 | 2.5093 × 10^-42 | 7.7424 |
| VTM4 | 5 | 0.024773 | 0.34701 | 5.9869 × 10^-6 | 4.1983 × 10^-42 | 7.7424 |
| SM4 | 5 | 0.021086 | 0.34332 | 5.1371 × 10^-6 | 9.9324 × 10^-43 | 7.7423 |
| NWMM4a | 4 | 0.28422 | 0.038016 | 8.7889 × 10^-14 | 3.8698 × 10^-105 | 7.7335 |
| NWMM4b | 4 | 0.28258 | 0.039664 | 2.1998 × 10^-13 | 3.8297 × 10^-102 | 7.7325 |
| ZM8 | 5 | 0.59976 | 0.28631 | 8.7926 × 10^-3 | 9.9619 × 10^-33 | 16.000 |
| ACM8 | 4 | 0.28268 | 0.039560 | 1.2422 × 10^-23 | 3.3101 × 10^-366 | 16.000 |
| AJM8 | 4 | 0.32808 | 0.0058362 | 2.8373 × 10^-31 | 9.0156 × 10^-427 | 14.000 |
| NWMM8a | 4 | 0.32322 | 0.00098388 | 1.7162 × 10^-51 | 1.9333 × 10^-862 | 17.000 |
| NWMM8b | 4 | 0.32335 | 0.0011099 | 1.3326 × 10^-50 | 2.6215 × 10^-847 | 17.000 |
Table 4. Comparison of test function $\Omega_4(s)$.

| Methods | n | $|s_1 - s_0|$ | $|s_2 - s_1|$ | $|s_3 - s_2|$ | $|\Omega_4(s_3)|$ | COC |
|---|---|---|---|---|---|---|
| Without memory | | | | | | |
| FZM4 | – | Divergent | Divergent | Divergent | Divergent | – |
| VTM4 | – | Divergent | Divergent | Divergent | Divergent | – |
| SM4 | – | Divergent | Divergent | Divergent | Divergent | – |
| MM4a | 6 | 0.21298 | 0.10244 | 0.017273 | 1.6018 × 10^-4 | 4.0000 |
| MM4b | 6 | 0.21292 | 0.10228 | 0.017481 | 3.0615 × 10^-4 | 4.0000 |
| ZM8 | 5 | 0.42172 | 0.088533 | 4.7739 × 10^-4 | 8.4089 × 10^-20 | 8.0000 |
| ACM8 | 5 | 0.21344 | 0.11153 | 7.7426 × 10^-3 | 2.1579 × 10^-8 | 8.0000 |
| AJM8 | 4 | 0.31961 | 0.013099 | 3.3791 × 10^-9 | 2.5912 × 10^-61 | 8.0000 |
| MM8a | 4 | 0.31144 | 0.021272 | 8.9028 × 10^-10 | 1.9552 × 10^-68 | 8.0000 |
| MM8b | 4 | 0.31130 | 0.021408 | 7.1073 × 10^-10 | 3.2256 × 10^-69 | 8.0000 |
| With memory | | | | | | |
| FZM4 | 5 | 0.0039485 | 0.32818 | 5.8627 × 10^-4 | 8.0061 × 10^-24 | 8.0000 |
| VTM4 | 5 | 0.0045984 | 0.32754 | 5.7705 × 10^-4 | 7.0536 × 10^-24 | 8.0000 |
| SM4 | 5 | 0.0039485 | 0.32818 | 5.8627 × 10^-4 | 8.0061 × 10^-24 | 8.0000 |
| NWMM4a | 5 | 0.21298 | 0.11973 | 4.4869 × 10^-7 | 4.6843 × 10^-49 | 8.0000 |
| NWMM4b | 5 | 0.21292 | 0.11979 | 4.2775 × 10^-7 | 3.1960 × 10^-49 | 8.0000 |
| ZM8 | 4 | 0.42172 | 0.089010 | 2.2383 × 10^-13 | 2.4905 × 10^-198 | 16.000 |
| ACM8 | 4 | 0.21344 | 0.11927 | 1.7133 × 10^-12 | 3.4570 × 10^-184 | 16.000 |
| AJM8 | 4 | 0.31961 | 0.013099 | 1.0088 × 10^-23 | 3.6015 × 10^-318 | 14.000 |
| NWMM8a | 4 | 0.31144 | 0.021272 | 2.9242 × 10^-26 | 9.3185 × 10^-431 | 17.000 |
| NWMM8b | 4 | 0.31130 | 0.021408 | 2.8163 × 10^-26 | 4.9186 × 10^-431 | 17.000 |
Table 5. Number of divergent points on the function $p(z)$ for the fourth-order methods.

| | Method (5) | MM4a | Improv. (%) | Method (6) | MM4b | Improv. (%) |
|---|---|---|---|---|---|---|
| Parameters | γ, β | γ, β, λ = 0.3 | | γ, β | γ, β, λ = 1 | |
| No. of div. pts. | 105 | 30 | 71.4% | 14 | 0 | 100% |
| % of div. pts. | 0.065% | 0.019% | | 0.0087% | 0% | |

Abbreviations used: div. = divergent, pts. = points, Improv. = Improvement.
Table 6. Number of divergent points on the function $p(z)$ for the eighth-order methods.

| | Method (18) | MM8a | Improv. (%) | Method (19) | MM8b | Improv. (%) |
|---|---|---|---|---|---|---|
| Parameters | γ, β | γ, β, λ = 4, θ = 0.4 | | γ, β | γ, β, λ = 4, θ = 0.4 | |
| No. of div. pts. | 79 | 41 | 48.1% | 14 | 0 | 100% |
| % of div. pts. | 0.049% | 0.025% | | 0.0087% | 0% | |
Share and Cite

Thangkhenpau, G.; Panday, S.; Bolunduţ, L.C.; Jäntschi, L. Efficient Families of Multi-Point Iterative Methods and Their Self-Acceleration with Memory for Solving Nonlinear Equations. Symmetry 2023, 15, 1546. https://doi.org/10.3390/sym15081546