Article

Extension of King’s Iterative Scheme by Means of Memory for Nonlinear Equations

1 Department of Mathematics, Government College Women University Faisalabad, Faisalabad 38000, Pakistan
2 Centre for Advanced Studies in Pure and Applied Mathematics, Bahauddin Zakariya University, Multan 60000, Pakistan
3 Department of Mathematics, Ghazi University, Dera Ghazi Khan 32200, Pakistan
4 Department of Mathematics and Statistics, Institute of Southern Punjab, Multan 60800, Pakistan
5 Department of Mathematics, University Centre for Research and Development, Chandigarh University, Mohali 140413, India
* Author to whom correspondence should be addressed.
Symmetry 2023, 15(5), 1116; https://doi.org/10.3390/sym15051116
Submission received: 27 April 2023 / Revised: 13 May 2023 / Accepted: 16 May 2023 / Published: 19 May 2023

Abstract: We developed a new family of optimal eighth-order derivative-free iterative methods for finding simple roots of nonlinear equations based on King's scheme and Lagrange interpolation. By incorporating four self-accelerating parameters and a weight function in a single variable, we extend the proposed family to an efficient iterative scheme with memory. Without performing additional functional evaluations, the order of convergence is boosted from 8 to 15.5156, and the efficiency index is raised from 1.6817 to 1.9847. To compare the performance of the proposed and existing schemes, some real-world problems are selected, such as the eigenvalue problem, the continuous stirred-tank reactor problem, and the energy distribution for Planck's radiation. The stability and regions of convergence of the proposed iterative schemes are investigated through graphical tools: 2D symmetric basins of attraction for the schemes with memory and 3D stereographic projections for the schemes without memory. The stability analysis demonstrates that our newly developed schemes have wider symmetric regions of convergence than the existing schemes in their respective domains.

1. Introduction

Solving nonlinear equations, or root finding, is an essential task in numerical analysis with a wide range of applications in physics, chemistry, mathematical biology, medicine, economics, and engineering. Numerical or iterative methods for approximating the solutions of nonlinear equations are the most frequently used techniques. One can distinguish two common approaches: one-point and multipoint iterative schemes. Newton's method is one of the most famous root-finding methods for solving a single nonlinear equation f(x) = 0, given as follows [1]:

x_{m+1} = x_m - \frac{f(x_m)}{f'(x_m)}, \quad m \ge 0,
where f is a real-valued function. Newton's scheme is a one-point method and is quadratically convergent in some neighborhood of the root η of f for an initial guess x_0 close enough to η. There are several two-point methods for finding simple roots of f(x) = 0 in the literature. Among them, King's method is the most popular two-point fourth-order method [2], which is given as follows:
y_m = x_m - \frac{f(x_m)}{f'(x_m)}, \quad m \ge 0,
z_m = y_m - \frac{f(x_m) + \gamma f(y_m)}{f(x_m) + (\gamma - 2) f(y_m)} \cdot \frac{f(y_m)}{f'(x_m)}, \quad \gamma \in \mathbb{R}.
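A minimal numerical sketch of King's scheme (2); the test function, derivative, and starting point below are illustrative choices, not taken from the paper:

```python
def king(f, df, x0, gamma=2.0, tol=1e-12, max_iter=50):
    """King's two-point fourth-order family: a Newton half-step followed by
    the weighted correction of scheme (2)."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        y = x - fx / df(x)                                   # Newton half-step
        fy = f(y)
        # z = y - [(f(x) + g f(y)) / (f(x) + (g - 2) f(y))] * f(y) / f'(x)
        x = y - (fx + gamma * fy) / (fx + (gamma - 2.0) * fy) * fy / df(x)
    return x

root = king(lambda t: t**3 - 2, lambda t: 3 * t**2, 1.5)     # cube root of 2
```

With only three function evaluations per cycle, the scheme attains fourth order for any real γ, which is why it serves as the seed of the higher-order constructions below.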
Multipoint iterative schemes gained much interest at the beginning of the twenty-first century, since they overcome the theoretical limits of one-point methods regarding convergence speed and computational efficiency. Some root-finding schemes based on multipoint iterations were first studied in the books by Traub [1] and Ostrowski [3] and in the papers [2,4], published in the twentieth century. Multipoint schemes can be further classified into two categories: "methods with memory" and "methods without memory". A multipoint method using current information only is called a method without memory, while a method employing the current as well as previous information is known as a method with memory. An important aspect of methods with memory is that they improve the convergence order and efficiency index of iterative schemes without memory without using additional function evaluations, which makes them an impactful class of multipoint iterative methods. Kung and Traub conjectured that an iterative scheme without memory based on r function evaluations can attain an order of convergence of at most 2^{r-1} (known as the optimal order). The computational efficiency index of such an iterative method is expressed as 2^{(r-1)/r} [3].
Traub [1] proposed a uniparametric derivative-free scheme with memory by a slight modification of Steffensen's iterative method [5]: for suitably chosen x_0 and φ_0,

\nu_m = x_m + \varphi_m f(x_m), \quad m \ge 0,
x_{m+1} = x_m - \frac{f(x_m)}{f[\nu_m, x_m]},
\varphi_{m+1} = -\frac{1}{N_1'(x_{m+1})},
where f[\nu, x] = \frac{f(\nu) - f(x)}{\nu - x} denotes the divided difference of first order, N_1(x) = f(x_m) + (x - x_m) f[\nu_m, x_m], and φ_m is the free accelerating parameter. The iterative scheme with memory (3) has a convergence order of 2.414. Several researchers have developed root-finding methods with memory based on existing optimal methods without memory; see, e.g., [2,6,7,8,9,10,11,12,13,14,15,16].
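Traub's scheme (3) can be sketched in a few lines. Since N_1 is linear, the update φ_{m+1} = -1/N_1'(x_{m+1}) reduces to φ_{m+1} = -1/f[\nu_m, x_m]; the test function and initial values below are illustrative:

```python
def traub_with_memory(f, x0, phi0=0.01, tol=1e-12, max_iter=60):
    """Steffensen-type iteration with a self-accelerating parameter phi (scheme (3))."""
    x, phi = x0, phi0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        v = x + phi * fx
        dd = (f(v) - fx) / (v - x)        # first-order divided difference f[v, x]
        x = x - fx / dd
        phi = -1.0 / dd                   # memory: phi_{m+1} = -1 / N1'(x_{m+1})
    return x

root = traub_with_memory(lambda t: t**3 - 2, 1.5)
```

The reuse of the previous divided difference is what lifts the order from 2 to about 2.414 at no extra function-evaluation cost.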
It is well known that iterative schemes are very sensitive to the initial guess. To resolve this difficulty, we identify the region in which a root lies, so that a variety of safe initial guesses is available to choose from. The convergence regions and stability of an iterative scheme can be visualized in 2D dynamical planes with the help of symmetric basins of attraction and by drawing 3D stereographic projections on the sphere.
One of the biggest challenges of iterative processes is to verify that they converge to the exact solution rather than getting stuck in a local minimum or diverging. An effective way of examining the behavior of iterative methods, particularly those based on fixed-point iterations, is the analysis of basins of attraction. The sets of starting points that lead to the same solution are referred to as basins of attraction. The conditions under which an iterative approach converges, as well as its rate of convergence and the stability of the solution, can be found by examining the basins of attraction. This information is essential for choosing the best iterative approach for a particular problem, as well as for adjusting a method's parameters to improve its performance.
Motivated and inspired by the research being conducted in this direction, we developed a new modified King-type scheme with memory, having a convergence order of at least 15.5156 based on Lagrange interpolation. In Section 2, we present a new optimal eighth-order derivative-free iterative family without memory. In Section 3, we extend the proposed optimal scheme without memory to an iterative scheme with memory and provide its convergence analysis. In Section 4, we present some particular cases of the weight functions and the corresponding iterative methods. In Section 5, we give applications of the presented iterative methods to engineering problems, such as the energy distribution for Planck's radiation and the continuous stirred-tank reactor, and compare the newly developed iterative schemes with existing similar schemes using different test functions. In Section 6, we present an extensive stability analysis of the proposed scheme without memory by drawing its 3D stereographic projections. In Section 7, we present a dynamical analysis of the proposed iterative scheme with memory with the help of its 2D symmetric basins of attraction. Finally, conclusions are discussed in Section 8.

2. Extension of King’s Method to an Optimal Eighth-Order Derivative-Free Scheme

Our primary aim is to develop an optimal eighth-order method without memory, based on Lagrange interpolation, which can be extended to an iterative scheme with memory. By appending a Newton step as the third step of King's method (2), we obtain

q_m = x_m - \frac{f(x_m)}{f'(x_m)},
h_m = q_m - \frac{f(x_m) + \omega f(q_m)}{f(x_m) + (\omega - 2) f(q_m)} \cdot \frac{f(q_m)}{f'(x_m)},
x_{m+1} = h_m - \frac{f(h_m)}{f'(h_m)}.
The three-step iterative scheme (4) is not optimal because it needs five evaluations of functions to provide eighth-order convergence. To make it optimal, derivative-free, and extendable to a with-memory scheme, we adopt the following procedure.
We approximate the derivative f'(x_m) in the first step of the iterative method (4) as follows:

f'(x_m) \approx f[\nu_m, x_m] + \beta_2 f(\nu_m),

where \nu_m = x_m + \beta_1 f(x_m).
The derivative f'(x_m) in the second step is replaced by the following approximation, involving a weight function U that depends on the variable s_m = f(q_m)/f(x_m):

f'(x_m) \approx \frac{f[q_m, \nu_m] + \beta_2 f(\nu_m) + \beta_3 (q_m - \nu_m)(q_m - x_m)}{U(s_m)}.
We replace the derivative f'(h_m) in the third step of the iterative scheme (4) by the following approximation:

f'(h_m) \approx L_3'(h_m) + \beta_4 (h_m - \nu_m)(h_m - q_m)(h_m - x_m),
where \beta_1, \beta_2, \beta_3, and \beta_4 are free parameters and

L_3'(h_m) = f[h_m, \nu_m] + f[h_m, x_m] + f[h_m, q_m] - f[x_m, q_m] - f[\nu_m, q_m] - f[x_m, \nu_m] + \frac{(x_m - h_m)\, f(x_m)}{(x_m - \nu_m)(x_m - q_m)} + \frac{(\nu_m - h_m)\, f(\nu_m)}{(\nu_m - x_m)(\nu_m - q_m)} + \frac{(q_m - h_m)\, f(q_m)}{(q_m - \nu_m)(q_m - x_m)}

is the derivative of the Lagrange interpolating polynomial of degree three with nodes x_m, \nu_m, q_m, and h_m.
By using the above approximations, we obtain the following eighth-order iterative method without memory, using four function evaluations per cycle:

\nu_m = x_m + \beta_1 f(x_m),
q_m = x_m - \frac{f(x_m)}{f[\nu_m, x_m] + \beta_2 f(\nu_m)},
h_m = q_m - U(s_m)\, \frac{f(q_m)}{f[q_m, \nu_m] + \beta_2 f(\nu_m) + \beta_3 (q_m - \nu_m)(q_m - x_m)} \cdot \frac{f(x_m) + \omega f(q_m)}{f(x_m) + (\omega - 2) f(q_m)},
x_{m+1} = h_m - \frac{f(h_m)}{R_m},
where
R_m = f[h_m, \nu_m] + f[h_m, x_m] + f[h_m, q_m] - f[x_m, q_m] - f[\nu_m, q_m] - f[x_m, \nu_m] + \frac{(x_m - h_m)\, f(x_m)}{(x_m - \nu_m)(x_m - q_m)} + \frac{(\nu_m - h_m)\, f(\nu_m)}{(\nu_m - x_m)(\nu_m - q_m)} + \frac{(q_m - h_m)\, f(q_m)}{(q_m - \nu_m)(q_m - x_m)} + \beta_4 (h_m - \nu_m)(h_m - q_m)(h_m - x_m).
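The closed form used for L_3'(h_m) (and hence for R_m with β_4 = 0) can be checked numerically: for a cubic f it must reproduce f'(h_m) exactly, since the degree-three interpolant is then f itself. The nodes below are arbitrary illustrative values:

```python
import numpy as np

def L3_prime(f, x, v, q, h):
    """Derivative at h of the cubic interpolating f at the nodes x, v, q, h."""
    dd = lambda a, b: (f(a) - f(b)) / (a - b)        # divided difference f[a, b]
    return (dd(h, v) + dd(h, x) + dd(h, q) - dd(x, q) - dd(v, q) - dd(x, v)
            + (x - h) * f(x) / ((x - v) * (x - q))
            + (v - h) * f(v) / ((v - x) * (v - q))
            + (q - h) * f(q) / ((q - v) * (q - x)))

f = lambda t: t**3 - 2 * t + 1
x, v, q, h = 0.9, 1.1, 1.3, 1.05
p = np.polyfit([x, v, q, h], [f(x), f(v), f(q), f(h)], 3)    # the cubic itself
check = np.polyval(np.polyder(p), h)
value = L3_prime(f, x, v, q, h)
```

Agreement of `value` with both `check` and the analytic derivative 3h² - 2 confirms the sign pattern of the divided-difference terms.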
The next theorem indicates that the proposed method without memory (8) has a convergence order of eight with an efficiency index of 8^{1/4} \approx 1.68179.
Theorem 1.
Suppose that f : I_f \subset \mathbb{R} \to \mathbb{R} is sufficiently differentiable, \eta \in I_f is a simple real root of the equation f(x) = 0, where I_f \subset \mathbb{R} is an open interval, and x_0 is a good initial guess close to the root η. Then, the convergence order of the four-parameter three-step scheme (8) is at least eight if the following conditions hold for the weight function U(s_m):

U(0) = 1, \quad U'(0) = -1, \quad |U''(0)| < \infty,
and the error equation of the iterative scheme (8), for all values of \beta_1, \beta_2, \beta_3, and \beta_4, is given as

e_{m+1} = \frac{1}{f'(\eta)^2}\, (C_2 + \beta_2)^2 (1 + \beta_1 f'(\eta))^4\, P\, Q\, e_m^8 + O(e_m^9),

where

P = 2 f'(\eta)^2 \omega \beta_1 (C_2 + \beta_2)^2 - 2 f'(\eta)^2 \beta_1 (C_2 + \beta_2)^2 + 2 f'(\eta)\, \omega\, (C_2 + \beta_2)^2 - 2 f'(\eta)\, \beta_2 (C_2 + \beta_2) - f'(\eta)\, C_3 + \beta_3,
Q = 2 f'(\eta)^2 \omega \beta_1 C_2 (C_2 + \beta_2)^2 - 2 f'(\eta)^2 \beta_1 C_2 (C_2 + \beta_2)^2 + 2 f'(\eta)\, \omega\, C_2 (C_2 + \beta_2)^2 - 2 f'(\eta)\, C_2 \beta_2 (C_2 + \beta_2) - 2 f'(\eta)\, C_2 C_3 + f'(\eta)\, C_4 + C_2 \beta_3 - \beta_4.
Proof. 
Let e_m be the error in the root η at the m-th step, specified as

e_m = x_m - \eta.
Expanding f ( x m ) using Taylor series, we obtain
f(x_m) = f'(\eta)\,(e_m + C_2 e_m^2 + C_3 e_m^3 + C_4 e_m^4 + C_5 e_m^5 + C_6 e_m^6 + C_7 e_m^7 + C_8 e_m^8) + O(e_m^9),
where
C_n = \frac{f^{(n)}(\eta)}{n!\, f'(\eta)}, \quad n \ge 2.
We obtain the error term e_{m,\xi} = \nu_m - \eta by using Taylor's expansion:
e_{m,\xi} = (1 + \beta_1 f'(\eta))\, e_m + \beta_1 f'(\eta)\,(C_2 e_m^2 + C_3 e_m^3 + C_4 e_m^4 + C_5 e_m^5 + C_6 e_m^6 + C_7 e_m^7 + C_8 e_m^8) + O(e_m^9).
Similarly, by applying Taylor's series, we obtain f(\nu_m), which is given as follows:

f(\nu_m) = f'(\eta)\,\big((1 + \beta_1 f'(\eta))\, e_m + \beta_1 f'(\eta)\, C_2 e_m^2 + \cdots + O(e_m^9)\big).
Thus, the error term of q m is
e_{m,w} = (C_2 + \beta_2)(1 + \beta_1 f'(\eta))\, e_m^2 + \cdots + O(e_m^9),
where
e_{m,w} = q_m - \eta.
By applying Taylor’s expansion, we obtain f ( q m ) , which is given as
f(q_m) = f'(\eta)\,\big[(C_2 + \beta_2)(1 + \beta_1 f'(\eta))\, e_m^2 + \cdots + O(e_m^9)\big].
Again, by Taylor's series, we obtain the error expression of h_m as follows:

e_{m,z} = (C_2 + \beta_2)\,(U(0) - 1)\,(1 + \beta_1 f'(\eta))\, e_m^2 + \psi\, e_m^3 + \cdots + O(e_m^9),

where ψ is a lengthy expression in U(0), U'(0), C_2, C_3, \beta_1, and \beta_2.
In order to achieve fourth-order convergence, we impose U(0) = 1 and U'(0) = -1; the error term of h_m then becomes

e_{m,z} = \frac{1}{f'(\eta)}\,(1 + \beta_1 f'(\eta))^2 (C_2 + \beta_2)\, P\, e_m^4 + \cdots + O(e_m^9),

where P is the first bracketed factor of the error equation stated in Theorem 1.
By using Taylor's series, we can likewise find the expressions of f(h_m) and R_m that appear in the proposed method (8). Thus, we obtain a convergence order of eight for the proposed iterative scheme (8):

e_{m+1} = \frac{1}{f'(\eta)^2}\,(C_2 + \beta_2)^2 (1 + \beta_1 f'(\eta))^4\, P\, Q\, e_m^8 + O(e_m^9),

where P and Q are the two bracketed factors of the error equation stated in Theorem 1. □
Remark 1.
The above theorem verifies that the proposed method (8) has eighth-order convergence with an efficiency index of 8^{1/4} \approx 1.68179. Furthermore, note that if we choose \beta_1 = -\frac{1}{f'(\eta)} and \beta_2 = -C_2, then the coefficient of e_m^8 vanishes and we obtain the following error equation:

e_{m+1} = \frac{C_2^4\, C_3^3\, \big(\beta_3 - f'(\eta) C_3\big)\, \big(C_2 \beta_3 - C_2 f'(\eta) C_3 + f'(\eta) C_4 - \beta_4\big)}{f'(\eta)}\, e_m^{14} + O(e_m^{15}).
By further choosing \beta_3 = f'(\eta) C_3 and \beta_4 = f'(\eta) C_4, the resulting scheme attains an order of convergence of at least sixteen. The parameters thus play an essential role in accomplishing the with-memory iterative scheme. From the above error analysis, we observe that the multipoint scheme (8) is extendable to an iterative scheme with memory.

3. Efficient King’s Type Scheme with Memory

This section constitutes the central part of our work. Here, we extend the proposed optimal scheme (8) so that it achieves the highest possible order and efficiency by using values from previous iterations. We use four self-accelerating parameters, \beta_1, \beta_2, \beta_3, and \beta_4, in the scheme (8) and approximate these parameters by Newton interpolating polynomials of appropriate degree; in this way, we can increase the order of convergence. If we choose \beta_1 = -\frac{1}{f'(\eta)}, \beta_2 = -C_2, \beta_3 = f'(\eta) C_3, and \beta_4 = f'(\eta) C_4, where C_j = \frac{f^{(j)}(\eta)}{j!\, f'(\eta)} and j \ge 2, then the order of convergence reaches up to sixteen. Since the root η is unknown, the parameters \beta_1, \beta_2, \beta_3, and \beta_4 are instead determined by using the following formulas:
\beta_1 = \beta_{1,m} = -\frac{1}{N_4'(x_m)} \approx -\frac{1}{f'(\eta)},
\beta_2 = \beta_{2,m} = -\frac{N_5''(\nu_m)}{2 N_5'(\nu_m)} \approx -C_2,
\beta_3 = \beta_{3,m} = \frac{N_6'''(q_m)}{6} \approx f'(\eta)\, C_3,
\beta_4 = \beta_{4,m} = \frac{N_7^{(iv)}(h_m)}{24} \approx f'(\eta)\, C_4.
The above approximations are made by using Newton’s fourth-, fifth-, sixth-, and seventh-degree interpolatory polynomials, passing through the best available approximations, and they are given as follows:
N_4(\varphi) = N_4(\varphi; x_m, h_{m-1}, q_{m-1}, \nu_{m-1}, x_{m-1}),
N_5(\varphi) = N_5(\varphi; \nu_m, x_m, h_{m-1}, q_{m-1}, \nu_{m-1}, x_{m-1}),
N_6(\varphi) = N_6(\varphi; q_m, \nu_m, x_m, h_{m-1}, q_{m-1}, \nu_{m-1}, x_{m-1}),
N_7(\varphi) = N_7(\varphi; h_m, q_m, \nu_m, x_m, h_{m-1}, q_{m-1}, \nu_{m-1}, x_{m-1}),
for any m \ge 1. The explicit representations of N_4(\varphi), N_5(\varphi), N_6(\varphi), and N_7(\varphi) are given as follows:
N_4(\varphi; x_m, h_{m-1}, q_{m-1}, \nu_{m-1}, x_{m-1}) = f(x_m) + f[x_m, h_{m-1}](\varphi - x_m) + f[x_m, h_{m-1}, q_{m-1}](\varphi - x_m)(\varphi - h_{m-1}) + f[x_m, h_{m-1}, q_{m-1}, \nu_{m-1}](\varphi - x_m)(\varphi - h_{m-1})(\varphi - q_{m-1}) + f[x_m, h_{m-1}, q_{m-1}, \nu_{m-1}, x_{m-1}](\varphi - x_m)(\varphi - h_{m-1})(\varphi - q_{m-1})(\varphi - \nu_{m-1}),

N_5(\varphi; \nu_m, x_m, h_{m-1}, q_{m-1}, \nu_{m-1}, x_{m-1}) = f(\nu_m) + f[\nu_m, x_m](\varphi - \nu_m) + f[\nu_m, x_m, h_{m-1}](\varphi - \nu_m)(\varphi - x_m) + f[\nu_m, x_m, h_{m-1}, q_{m-1}](\varphi - \nu_m)(\varphi - x_m)(\varphi - h_{m-1}) + f[\nu_m, x_m, h_{m-1}, q_{m-1}, \nu_{m-1}](\varphi - \nu_m)(\varphi - x_m)(\varphi - h_{m-1})(\varphi - q_{m-1}) + f[\nu_m, x_m, h_{m-1}, q_{m-1}, \nu_{m-1}, x_{m-1}](\varphi - \nu_m)(\varphi - x_m)(\varphi - h_{m-1})(\varphi - q_{m-1})(\varphi - \nu_{m-1}),

N_6(\varphi; q_m, \nu_m, x_m, h_{m-1}, q_{m-1}, \nu_{m-1}, x_{m-1}) = f(q_m) + f[q_m, \nu_m](\varphi - q_m) + f[q_m, \nu_m, x_m](\varphi - q_m)(\varphi - \nu_m) + f[q_m, \nu_m, x_m, h_{m-1}](\varphi - q_m)(\varphi - \nu_m)(\varphi - x_m) + f[q_m, \nu_m, x_m, h_{m-1}, q_{m-1}](\varphi - q_m)(\varphi - \nu_m)(\varphi - x_m)(\varphi - h_{m-1}) + f[q_m, \nu_m, x_m, h_{m-1}, q_{m-1}, \nu_{m-1}](\varphi - q_m)(\varphi - \nu_m)(\varphi - x_m)(\varphi - h_{m-1})(\varphi - q_{m-1}) + f[q_m, \nu_m, x_m, h_{m-1}, q_{m-1}, \nu_{m-1}, x_{m-1}](\varphi - q_m)(\varphi - \nu_m)(\varphi - x_m)(\varphi - h_{m-1})(\varphi - q_{m-1})(\varphi - \nu_{m-1}),

N_7(\varphi; h_m, q_m, \nu_m, x_m, h_{m-1}, q_{m-1}, \nu_{m-1}, x_{m-1}) = f(h_m) + f[h_m, q_m](\varphi - h_m) + f[h_m, q_m, \nu_m](\varphi - h_m)(\varphi - q_m) + f[h_m, q_m, \nu_m, x_m](\varphi - h_m)(\varphi - q_m)(\varphi - \nu_m) + f[h_m, q_m, \nu_m, x_m, h_{m-1}](\varphi - h_m)(\varphi - q_m)(\varphi - \nu_m)(\varphi - x_m) + f[h_m, q_m, \nu_m, x_m, h_{m-1}, q_{m-1}](\varphi - h_m)(\varphi - q_m)(\varphi - \nu_m)(\varphi - x_m)(\varphi - h_{m-1}) + f[h_m, q_m, \nu_m, x_m, h_{m-1}, q_{m-1}, \nu_{m-1}](\varphi - h_m)(\varphi - q_m)(\varphi - \nu_m)(\varphi - x_m)(\varphi - h_{m-1})(\varphi - q_{m-1}) + f[h_m, q_m, \nu_m, x_m, h_{m-1}, q_{m-1}, \nu_{m-1}, x_{m-1}](\varphi - h_m)(\varphi - q_m)(\varphi - \nu_m)(\varphi - x_m)(\varphi - h_{m-1})(\varphi - q_{m-1})(\varphi - \nu_{m-1}).
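The N_k', N_k'', … values needed for the self-accelerating parameters come from standard divided-difference machinery. A small illustrative sketch follows; the node values are hypothetical, and the derivative is taken here by a central difference rather than analytically:

```python
def divided_differences(xs, ys):
    """Top-row coefficients of the Newton form of the interpolating polynomial."""
    coef = list(ys)
    n = len(xs)
    for j in range(1, n):
        for i in range(n - 1, j - 1, -1):
            coef[i] = (coef[i] - coef[i - 1]) / (xs[i] - xs[i - j])
    return coef

def newton_poly_derivative(xs, ys, t, eps=1e-6):
    """N'(t), approximated by a central difference on the Newton-form polynomial;
    a simple stand-in for the analytic derivative used in beta_{1,m} = -1/N4'(x_m)."""
    c = divided_differences(xs, ys)

    def N(u):
        p, w = c[0], 1.0
        for k in range(1, len(xs)):
            w *= u - xs[k - 1]
            p += c[k] * w
        return p

    return (N(t + eps) - N(t - eps)) / (2 * eps)

nodes = [0.0, 0.5, 1.0, 1.5, 2.0]          # stand-ins for x_m, h_{m-1}, ...
values = [u * u for u in nodes]            # samples of f(u) = u^2 at the nodes
slope = newton_poly_derivative(nodes, values, 1.0)
```

For the quadratic data above the interpolant is u² itself, so the computed slope at 1.0 must be 2.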
Now, we substitute the free parameters \beta_1, \beta_2, \beta_3, and \beta_4 in (8) with \beta_{1,m}, \beta_{2,m}, \beta_{3,m}, and \beta_{4,m}. Consequently, we obtain a three-step iterative scheme with memory, as given below:

\nu_m = x_m + \beta_{1,m} f(x_m), \quad \beta_{1,m} = -\frac{1}{N_4'(x_m)}, \quad m \ge 0,
q_m = x_m - \frac{f(x_m)}{f[\nu_m, x_m] + \beta_{2,m} f(\nu_m)}, \quad \beta_{2,m} = -\frac{N_5''(\nu_m)}{2 N_5'(\nu_m)},
\beta_{3,m} = \frac{N_6'''(q_m)}{6}, \quad \beta_{4,m} = \frac{N_7^{(iv)}(h_m)}{24},
h_m = q_m - U(s_m)\, \frac{f(q_m)}{f[q_m, \nu_m] + \beta_{2,m} f(\nu_m) + \beta_{3,m}(q_m - \nu_m)(q_m - x_m)} \cdot \frac{f(x_m) + \omega f(q_m)}{f(x_m) + (\omega - 2) f(q_m)},
x_{m+1} = h_m - \frac{f(h_m)}{R_m},
where \beta_{1,0}, \beta_{2,0}, \beta_{3,0}, and \beta_{4,0} should be chosen suitably, s_m = f(q_m)/f(x_m), and

R_m = f[h_m, \nu_m] + f[h_m, x_m] + f[h_m, q_m] - f[x_m, q_m] - f[\nu_m, q_m] - f[x_m, \nu_m] + \frac{(x_m - h_m)\, f(x_m)}{(x_m - \nu_m)(x_m - q_m)} + \frac{(\nu_m - h_m)\, f(\nu_m)}{(\nu_m - x_m)(\nu_m - q_m)} + \frac{(q_m - h_m)\, f(q_m)}{(q_m - \nu_m)(q_m - x_m)} + \beta_{4,m}(h_m - \nu_m)(h_m - q_m)(h_m - x_m).
Lemma 1.
If \beta_{1,m} = -\frac{1}{N_4'(x_m)}, \beta_{2,m} = -\frac{N_5''(\nu_m)}{2 N_5'(\nu_m)}, \beta_{3,m} = \frac{N_6'''(q_m)}{6}, and \beta_{4,m} = \frac{N_7^{(iv)}(h_m)}{24} for m = 1, 2, \ldots, then the following estimates hold:

1 + \beta_{1,m} f'(\eta) \sim e_{m-1,z}\, e_{m-1,w}\, e_{m-1,\xi}\, e_{m-1},
C_2 + \beta_{2,m} \sim e_{m-1,z}\, e_{m-1,w}\, e_{m-1,\xi}\, e_{m-1},
P_m \sim e_{m-1,z}\, e_{m-1,w}\, e_{m-1,\xi}\, e_{m-1},
Q_m \sim e_{m-1,z}\, e_{m-1,w}\, e_{m-1,\xi}\, e_{m-1},
where
P_m = 2 f'(\eta)^2 \omega \beta_{1,m} (C_2 + \beta_{2,m})^2 - 2 f'(\eta)^2 \beta_{1,m} (C_2 + \beta_{2,m})^2 + 2 f'(\eta)\, \omega\, (C_2 + \beta_{2,m})^2 - 2 f'(\eta)\, \beta_{2,m} (C_2 + \beta_{2,m}) - f'(\eta)\, C_3 + \beta_{3,m},

Q_m = 2 f'(\eta)^2 \omega \beta_{1,m} C_2 (C_2 + \beta_{2,m})^2 - 2 f'(\eta)^2 \beta_{1,m} C_2 (C_2 + \beta_{2,m})^2 + 2 f'(\eta)\, \omega\, C_2 (C_2 + \beta_{2,m})^2 - 2 f'(\eta)\, C_2 \beta_{2,m} (C_2 + \beta_{2,m}) - 2 f'(\eta)\, C_2 C_3 + f'(\eta)\, C_4 + C_2 \beta_{3,m} - \beta_{4,m}.
Proof. 
The proof follows from Lemma 1 in [15]. □
Theorem 2.
Suppose that x_0 is an initial approximation which is close to a root η of a nonlinear function f(x), with f being sufficiently differentiable. If the self-accelerating parameters \beta_{1,m}, \beta_{2,m}, \beta_{3,m}, and \beta_{4,m} are evaluated by the formulae given in (13), then the three-step with-memory scheme (18) has an R-order of convergence of at least 15.5156, with an efficiency index of 15.5156^{1/4} \approx 1.9847.
Proof. 
Assume that the sequence {x_m} generated by the iterative scheme (18) converges to a real zero η with order at least r; then we may write

e_{m+1} \sim e_m^r,

where e_m = x_m - \eta. From (24), we obtain

e_{m+1} \sim \left(e_{m-1}^r\right)^r = e_{m-1}^{r^2}.
Let \nu_m, q_m, and h_m be the iterative sequences with convergence orders \sigma_1, \sigma_2, and \sigma_3, respectively; then the error relations are of the form

e_{m,\xi} \sim e_m^{\sigma_1} = e_{m-1}^{r \sigma_1},
e_{m,w} \sim e_m^{\sigma_2} = e_{m-1}^{r \sigma_2},
e_{m,z} \sim e_m^{\sigma_3} = e_{m-1}^{r \sigma_3}.
Now, using (26)–(28) and Lemma 1, we obtain
1 + \beta_{1,m} f'(\eta) \sim e_{m-1}^{\sigma_1 + \sigma_2 + \sigma_3 + 1},
C_2 + \beta_{2,m} \sim e_{m-1}^{\sigma_1 + \sigma_2 + \sigma_3 + 1},
P_m \sim e_{m-1}^{\sigma_1 + \sigma_2 + \sigma_3 + 1},
Q_m \sim e_{m-1}^{\sigma_1 + \sigma_2 + \sigma_3 + 1},
where P m and Q m are given in (22) and (23). So, by Theorem 1, we obtain the following terms:
e_{m,\xi} \sim \left(1 + \beta_{1,m} f'(\eta)\right) e_m,
e_{m,w} \sim \left(1 + \beta_{1,m} f'(\eta)\right)(C_2 + \beta_{2,m})\, e_m^2,
e_{m,z} \sim \left(1 + \beta_{1,m} f'(\eta)\right)^2 (C_2 + \beta_{2,m})\, P_m\, e_m^4,
e_{m+1} \sim \left(1 + \beta_{1,m} f'(\eta)\right)^4 (C_2 + \beta_{2,m})^2 P_m Q_m\, e_m^8,
where P m and Q m are given in (22) and (23). Now, substitute (24) and (29)–(32) in (33)–(36); we obtain
e_{m,\xi} \sim e_{m-1}^{(\sigma_1 + \sigma_2 + \sigma_3 + 1) + r},
e_{m,w} \sim e_{m-1}^{2(\sigma_1 + \sigma_2 + \sigma_3 + 1) + 2r},
e_{m,z} \sim e_{m-1}^{4(\sigma_1 + \sigma_2 + \sigma_3 + 1) + 4r},
e_{m+1} \sim e_{m-1}^{8(\sigma_1 + \sigma_2 + \sigma_3 + 1) + 8r}.
By equating the exponents of e_{m-1} in the pairs of relations (26)∧(37), (27)∧(38), (28)∧(39), and (25)∧(40), respectively, we obtain the following system of equations in \sigma_1, \sigma_2, \sigma_3, and r:

r\sigma_1 - r - (\sigma_1 + \sigma_2 + \sigma_3 + 1) = 0,
r\sigma_2 - 2r - 2(\sigma_1 + \sigma_2 + \sigma_3 + 1) = 0,
r\sigma_3 - 4r - 4(\sigma_1 + \sigma_2 + \sigma_3 + 1) = 0,
r^2 - 8r - 8(\sigma_1 + \sigma_2 + \sigma_3 + 1) = 0.
By solving these equations, we obtain the nontrivial solution \sigma_1 = 1.939451, \sigma_2 = 3.878902, \sigma_3 = 7.757804, and r = 15.515609. Hence, the R-order of convergence of the proposed iterative scheme with memory is at least 15.5156. □
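The system above actually reduces to a quadratic: the first three equations give σ₂ = 2σ₁ and σ₃ = 4σ₁, hence σ₁ + σ₂ + σ₃ + 1 = 7σ₁ + 1 and r = 8σ₁, and the last equation collapses to r² - 15r - 8 = 0. A quick numerical confirmation (this reduction is ours, not spelled out in the paper):

```python
import math

r = (15 + math.sqrt(257)) / 2        # positive root of r^2 - 15 r - 8 = 0
sigma1 = r / 8
sigma2, sigma3 = 2 * sigma1, 4 * sigma1
S = sigma1 + sigma2 + sigma3 + 1
residual = r**2 - 8 * r - 8 * S      # last equation of the system, should vanish
```

Evaluating gives r ≈ 15.515609 and σ₁ ≈ 1.939451, matching the values quoted in the proof.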

4. Some Special Cases of Weight Functions

We consider the following choices of weight functions satisfying the conditions U(0) = 1, U'(0) = -1, and |U''(0)| < \infty:

U(s_m) = 1 - s_m + 2\sin(s_m^2),

U(s_m) = \frac{1}{1 + s_m - 2 s_m^2}.
By using the above weight functions, we obtain the following special cases of the proposed without-memory iterative scheme (8):
Method AK1:
\nu_m = x_m + \beta_1 f(x_m),
q_m = x_m - \frac{f(x_m)}{f[\nu_m, x_m] + \beta_2 f(\nu_m)},
h_m = q_m - \left(1 - s_m + 2\sin(s_m^2)\right) \frac{f(q_m)}{f[q_m, \nu_m] + \beta_2 f(\nu_m) + \beta_3 (q_m - \nu_m)(q_m - x_m)} \cdot \frac{f(x_m) + \omega f(q_m)}{f(x_m) + (\omega - 2) f(q_m)},
x_{m+1} = h_m - \frac{f(h_m)}{R_m},
Method AK2:
\nu_m = x_m + \beta_1 f(x_m),
q_m = x_m - \frac{f(x_m)}{f[\nu_m, x_m] + \beta_2 f(\nu_m)},
h_m = q_m - \frac{1}{1 + s_m - 2 s_m^2} \cdot \frac{f(q_m)}{f[q_m, \nu_m] + \beta_2 f(\nu_m) + \beta_3 (q_m - \nu_m)(q_m - x_m)} \cdot \frac{f(x_m) + \omega f(q_m)}{f(x_m) + (\omega - 2) f(q_m)},
x_{m+1} = h_m - \frac{f(h_m)}{R_m}.
The special cases of the proposed with-memory iterative scheme (18) are as follows:
Method AKM1:
\nu_m = x_m + \beta_{1,m} f(x_m), \quad \beta_{1,m} = -\frac{1}{N_4'(x_m)},
q_m = x_m - \frac{f(x_m)}{f[\nu_m, x_m] + \beta_{2,m} f(\nu_m)}, \quad \beta_{2,m} = -\frac{N_5''(\nu_m)}{2 N_5'(\nu_m)},
\beta_{3,m} = \frac{N_6'''(q_m)}{6}, \quad \beta_{4,m} = \frac{N_7^{(iv)}(h_m)}{24},
h_m = q_m - \left(1 - s_m + 2\sin(s_m^2)\right) \frac{f(q_m)}{f[q_m, \nu_m] + \beta_{2,m} f(\nu_m) + \beta_{3,m}(q_m - \nu_m)(q_m - x_m)} \cdot \frac{f(x_m) + \omega f(q_m)}{f(x_m) + (\omega - 2) f(q_m)},
x_{m+1} = h_m - \frac{f(h_m)}{R_m}.
Method AKM2:
\nu_m = x_m + \beta_{1,m} f(x_m), \quad \beta_{1,m} = -\frac{1}{N_4'(x_m)},
q_m = x_m - \frac{f(x_m)}{f[\nu_m, x_m] + \beta_{2,m} f(\nu_m)}, \quad \beta_{2,m} = -\frac{N_5''(\nu_m)}{2 N_5'(\nu_m)},
\beta_{3,m} = \frac{N_6'''(q_m)}{6}, \quad \beta_{4,m} = \frac{N_7^{(iv)}(h_m)}{24},
h_m = q_m - \frac{1}{1 + s_m - 2 s_m^2} \cdot \frac{f(q_m)}{f[q_m, \nu_m] + \beta_{2,m} f(\nu_m) + \beta_{3,m}(q_m - \nu_m)(q_m - x_m)} \cdot \frac{f(x_m) + \omega f(q_m)}{f(x_m) + (\omega - 2) f(q_m)},
x_{m+1} = h_m - \frac{f(h_m)}{R_m},
where s_m = f(q_m)/f(x_m) and

R_m = f[h_m, \nu_m] + f[h_m, x_m] + f[h_m, q_m] - f[x_m, q_m] - f[\nu_m, q_m] - f[x_m, \nu_m] + \frac{(x_m - h_m)\, f(x_m)}{(x_m - \nu_m)(x_m - q_m)} + \frac{(\nu_m - h_m)\, f(\nu_m)}{(\nu_m - x_m)(\nu_m - q_m)} + \frac{(q_m - h_m)\, f(q_m)}{(q_m - \nu_m)(q_m - x_m)} + \beta_{4,m}(h_m - \nu_m)(h_m - q_m)(h_m - x_m).

5. Numerical Experiments

The primary purpose of developing an iterative scheme for the solution of nonlinear equations is to achieve the fastest possible convergence order at minimal computational cost. In this section, we compare the newly constructed schemes with existing methods. The computational order of convergence (COC) is approximated by

COC \approx \frac{\log\left(\,|f(x_{m+1})| \,/\, |f(x_m)|\,\right)}{\log\left(\,|f(x_m)| \,/\, |f(x_{m-1})|\,\right)}.
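The COC formula is easy to evaluate from three consecutive iterates; for example, Newton's method on f(x) = x² - 2 should report a value close to 2 (this check is ours, not one of the paper's test problems):

```python
import math

def coc(f, x_prev, x_curr, x_next):
    """Computational order of convergence from three consecutive iterates."""
    return (math.log(abs(f(x_next)) / abs(f(x_curr)))
            / math.log(abs(f(x_curr)) / abs(f(x_prev))))

f = lambda x: x * x - 2
xs = [1.5]
for _ in range(3):                       # three Newton steps
    xs.append(xs[-1] - f(xs[-1]) / (2 * xs[-1]))
order = coc(f, xs[1], xs[2], xs[3])      # near 2 for a quadratically convergent method
```

In finite precision the iterates must not yet have hit roundoff level, otherwise the logarithms degenerate; the tables in this section therefore use very high working precision.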
The computer algebra package Maple 18 is used with 2000-digit floating-point arithmetic in the numerical tests. The error approximation |x_m - \eta| for the first three iterations is given in Table 1, Table 2, Table 3, Table 4, Table 5, Table 6, Table 7, Table 8, Table 9 and Table 10. We compare the family of without-memory methods AK1 and AK2 and with-memory methods AKM1 and AKM2 with the three-step optimal methods proposed by Lotfi et al. [17], denoted (LM); Cordero et al. [18], denoted (M1); and Zafar et al. [13], denoted (FWM), which are given below:
Method LM:
\nu_m = x_m + \alpha_{1,m} f(x_m), \quad \alpha_{1,m} = -\frac{1}{N_4'(x_m)}, \quad m \ge 2,
q_m = x_m - \frac{f(x_m)}{f[x_m, \nu_m] + \alpha_{2,m} f(\nu_m)}, \quad \alpha_{2,m} = -\frac{N_5''(\nu_m)}{2 N_5'(\nu_m)},
\alpha_{3,m} = \frac{N_6'''(q_m)}{6}, \quad \alpha_{4,m} = \frac{N_7^{(iv)}(h_m)}{24},
h_m = q_m - \frac{f(q_m)}{f[q_m, x_m] + f[\nu_m, x_m, q_m](q_m - x_m) + \alpha_{3,m}(q_m - \nu_m)(q_m - x_m)},
x_{m+1} = h_m - \frac{f(h_m)}{B_m},
where x 0 , α 1 , 0 , α 2 , 0 , α 3 , 0 , and α 4 , 0 are given and
B_m = f[x_m, h_m] + \big(f[\nu_m, x_m, q_m] - f[\nu_m, x_m, h_m] - f[q_m, x_m, h_m]\big)(x_m - h_m) + \alpha_{4,m}(h_m - \nu_m)(h_m - q_m)(h_m - x_m).
Method M1:
\nu_m = x_m + p_{1,m} f(x_m), \quad p_{1,m} = -\frac{1}{N_4'(x_m)}, \quad m \ge 0,
q_m = x_m - \frac{f(x_m)}{f[x_m, \nu_m] + p_{2,m} f(\nu_m)}, \quad p_{2,m} = -\frac{N_5''(\nu_m)}{2 N_5'(\nu_m)},
p_{3,m} = \frac{N_6'''(q_m)}{6}, \quad p_{4,m} = \frac{N_7^{(iv)}(h_m)}{24}, \quad d_m = \frac{f(q_m)}{f(x_m)},
h_m = q_m - \left(1 + \frac{2 d_m}{1 - d_m}\right) \frac{f(q_m)}{f[q_m, \nu_m] + p_{2,m} f(\nu_m) + p_{3,m}(q_m - \nu_m)(q_m - x_m)},
x_{m+1} = h_m - \frac{f(h_m)}{G_m},
where p 1 , 0 , p 2 , 0 , p 3 , 0 , and p 4 , 0 are given and
G_m = f[q_m, h_m] + f[h_m, q_m, x_m](h_m - q_m) + f[h_m, q_m, x_m, \nu_m](h_m - q_m)(h_m - x_m) + p_{4,m}(h_m - \nu_m)(h_m - q_m)(h_m - x_m).
Method FWM:
\nu_m = x_m + b_{1,m} f(x_m), \quad b_{1,m} = -\frac{1}{N_4'(x_m)}, \quad m \ge 0,
q_m = x_m - \frac{f(x_m)}{f[x_m, \nu_m] + b_{2,m} f(\nu_m)}, \quad b_{2,m} = -\frac{N_5''(\nu_m)}{2 N_5'(\nu_m)},
b_{3,m} = \frac{N_6'''(q_m)}{6}, \quad b_{4,m} = \frac{N_7^{(iv)}(h_m)}{24}, \quad d_m = \frac{f(q_m)}{f(x_m)},
h_m = q_m - (1 - d_m)\, \frac{f(q_m)}{f[q_m, \nu_m] + b_{2,m} f(\nu_m) + b_{3,m}(q_m - \nu_m)(q_m - x_m)} \cdot \frac{f(x_m) + \omega f(q_m)}{f(x_m) + (\omega - 2) f(q_m)},
x_{m+1} = h_m - \frac{f(h_m)}{E_m},
where b 1 , 0 , b 2 , 0 , b 3 , 0 , and b 4 , 0 are given and
E_m = f[q_m, h_m] + f[h_m, q_m, x_m](h_m - q_m) + f[h_m, q_m, x_m, \nu_m](h_m - q_m)(h_m - x_m) + b_{4,m}(h_m - \nu_m)(h_m - q_m)(h_m - x_m).
For the comparison of the with-memory schemes, we use \beta_{1,0} = \beta_{3,0} = \beta_{4,0} = b_{1,0} = b_{3,0} = b_{4,0} = \alpha_{1,0} = \alpha_{4,0} = p_{1,0} = p_{3,0} = p_{4,0} = 0.01, \alpha_{2,0} = 0.1, and \beta_{2,0} = b_{2,0} = p_{2,0} = \alpha_{3,0} = -0.1 to start the iterative process.

5.1. Location of Maximum Energy Distribution

Planck's radiation law is given by

\theta(q) = \frac{8\pi a b\, q^{-5}}{e^{ab/(quv)} - 1},
where θ is the energy density within an isothermal black body, q is the wavelength of radiation, v is the absolute temperature of the black body, b is Planck’s constant, u is the Boltzmann’s constant, and a is the speed of light. To determine the wavelength which maximizes the energy density, we calculate
\frac{d\theta}{dq} = \frac{8\pi a b\, q^{-6}}{e^{ab/(quv)} - 1} \left( \frac{\big(ab/(quv)\big)\, e^{ab/(quv)}}{e^{ab/(quv)} - 1} - 5 \right).
The factor in front of the parentheses tends to zero in the limits q → 0 and q → ∞, although both of these situations give rise to minima of the energy density. The maximum we are seeking arises when the term inside the parentheses is zero. This happens when
1 - \frac{ab}{5\, q_{\max} u v} = e^{-ab/(q_{\max} u v)},
where q max is the wavelength that maximizes the energy density. If we let x = a b / q max u v , then the equation for the maximum becomes
1 - \frac{x}{5} = e^{-x}.
Let us define the function
F(x) = e^{-x} - 1 + \frac{x}{5} = 0.
The problem is now converted into a root-finding problem, as shown in (50). The solutions of this equation are x \approx 4.965114232 and the trivial root x = 0.
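Equation (50) is straightforward to solve with any of the schemes above; a plain Newton iteration already reproduces the quoted root (the starting guess 5.0 is an illustrative choice):

```python
import math

def newton(F, dF, x0, tol=1e-12, max_iter=100):
    x = x0
    for _ in range(max_iter):
        step = F(x) / dF(x)
        x -= step
        if abs(step) < tol:
            break
    return x

F = lambda x: math.exp(-x) - 1 + x / 5       # Planck maximum-energy equation
dF = lambda x: -math.exp(-x) + 1 / 5
x_star = newton(F, dF, 5.0)                  # the parameter ab/(q_max u v)
```

The computed x_star agrees with the value 4.965114232 quoted above; q_max then follows from q_max = ab/(x_star · uv).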

5.2. Eigenvalue Problem

One of the most challenging tasks in linear algebra is to compute the eigenvalues of a large square matrix, since this requires finding the roots of the characteristic equation of a matrix of order greater than 4. We therefore consider the following 9 × 9 matrix [19]:
S = \frac{1}{8} \begin{pmatrix}
12 & 0 & 0 & 19 & 19 & 76 & 19 & 18 & 437 \\
64 & 24 & 0 & 24 & 24 & 64 & 8 & 32 & 376 \\
16 & 0 & 24 & 4 & 4 & 16 & 4 & 8 & 92 \\
40 & 0 & 0 & 10 & 50 & 40 & 2 & 20 & 242 \\
4 & 0 & 0 & 1 & 41 & 4 & 1 & 0 & 25 \\
40 & 0 & 0 & 18 & 18 & 104 & 18 & 20 & 462 \\
84 & 0 & 0 & 29 & 29 & 84 & 21 & 42 & 501 \\
16 & 0 & 0 & 4 & 4 & 16 & 4 & 16 & 92 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 24
\end{pmatrix}
The characteristic polynomial of the matrix S is given as follows:

F(x) = x^9 - 29x^8 + 349x^7 - 2261x^6 + 8455x^5 - 17663x^4 + 15927x^3 + 6993x^2 - 24732x + 12960.
The problem is now converted into a root-finding problem, as shown in (51). The solutions of this equation are 1, 4, 5, 8, -1, 3, 3, 3, 3.
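The quoted spectrum can be cross-checked by computing the roots of (51) numerically; note that x = 3 is a fourfold root, which limits the accuracy attainable by general-purpose polynomial root finders (hence the loose tolerance in the comparison):

```python
import numpy as np

# coefficients of the characteristic polynomial, highest degree first
coeffs = [1, -29, 349, -2261, 8455, -17663, 15927, 6993, -24732, 12960]
roots = np.sort(np.roots(coeffs).real)
expected = np.array([-1.0, 1.0, 3.0, 3.0, 3.0, 3.0, 4.0, 5.0, 8.0])
```

The multiple root at 3 is exactly the kind of case where the high-order schemes of this paper, designed for simple roots, must be applied with care.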

5.3. Continuous Stirred-Tank Reactor (CSTR)

We consider an isothermal continuous stirred-tank reactor (CSTR). Components B and G are fed to the reactor at rates Q and q - Q, respectively, and the following reaction scheme develops in the reactor:

B + G \to Q,
Q + G \to C_1,
C_1 + G \to D_1,
D_1 + G \to E_1.
To develop simple feedback control systems, Douglas analyzed this problem in [20]. In the investigation of this system, he presented the following equation for the transfer function of the reactor with a proportional control system:
G_c\, \frac{2.98\,(x + 2.25)}{(x + 1.45)(x + 2.85)^2 (x + 4.35)} = -1,
where G c is the gain of the proportional controller. If we take G c = 0 , we obtain the nonlinear equation:
F ( x ) = x 4 + 11.50 x 3 + 47.49 x 2 + 83.06325 x + 51.23266875 .
The transfer equation has four negative real roots, given as

x_1 = -1.45, \quad x_2 = -2.85, \quad x_3 = -2.85, \quad x_4 = -4.35.
These roots are called the poles of the open-loop transfer function. Table 11 shows the test functions, exact roots, and initial approximations.
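The open-loop poles can be verified directly, since F(x) factors as (x + 1.45)(x + 2.85)²(x + 4.35):

```python
import numpy as np

# coefficients of F(x) = x^4 + 11.50 x^3 + 47.49 x^2 + 83.06325 x + 51.23266875
poles = np.sort(np.roots([1, 11.50, 47.49, 83.06325, 51.23266875]).real)
```

The double pole at -2.85 again makes this a mildly delicate test problem for root finders built around simple roots.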
For the sake of comparison, a variety of test functions are considered. Our without-memory methods A K 1 and A K 2 performed better for test functions F 2 x , F 4 x , F 5 x , F 6 x , and F 7 x . In the case of the with-memory methods, A K M 1 and A K M 2 performed exceptionally well for all the test functions F 1 x , F 2 x , F 3 x , F 4 x , F 5 x , F 6 x , and F 7 x . All these results can be verified from Table 1, Table 3, Table 5, Table 7, Table 9, Table 12, and Table 13 in the case of without-memory methods and from Table 2, Table 4, Table 6, Table 8, Table 10, Table 14, and Table 15 in the case of with-memory methods. Considering the above-mentioned facts, it is concluded that our newly constructed with- and without-memory methods are robust, proficient, and considerably better than the already existing schemes of similar types.

6. Stereographic Projection of Iterative Method without Memory

In this section, we analyze the reliability of the iterative schemes through stereographic projection, a useful tool that provides significant information about the convergence and stability of without-memory multipoint iterative schemes. This type of analysis was performed by Andrew Nicklawsky [21].
Stereographic projection is a mapping between a sphere and a plane: every point on the sphere, other than the projection pole, corresponds to exactly one point of the plane along the line joining it to the pole. We work with the unit sphere U² in 3D space, the subset of R³ consisting of the points (x, y, z) such that
U² = {(x, y, z) ∈ R³ | x² + y² + z² = 1} ⊂ R³.
For the stereographic projection, a point (X, Y) of the plane is mapped to Cartesian coordinates on the sphere as follows:
(x, y, z) = (2X/(1 + X² + Y²), 2Y/(1 + X² + Y²), (X² + Y² − 1)/(1 + X² + Y²)).
In polar coordinates, a plane point (R, Θ) is mapped as follows:
(r, θ, z) = (2R/(1 + R²), Θ, (R² − 1)/(R² + 1)).
Under both projections, the origin (0, 0) of the plane maps to the south pole (0, 0, −1); the interior of the unit circle maps to the southern hemisphere, and the unit circle itself maps to the equator. The north pole (0, 0, 1) has no preimage and can be regarded as the manifestation of infinity, since points approaching it on the sphere come from points of the plane increasingly distant from the origin. We take three complex polynomials to obtain the stereographic projections of the different methods, which are
r₁(z) = z³ − 1, r₂(z) = z⁵ − 1, r₃(z) = z⁶ − 1.
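The projection formulas above are easy to express in code; a minimal sketch (the function name is illustrative):

```python
def to_sphere(X, Y):
    """Inverse stereographic projection of a plane point (X, Y) onto the
    unit sphere, using the Cartesian formula given above (the origin of
    the plane maps to the south pole)."""
    d = 1 + X**2 + Y**2
    return (2*X/d, 2*Y/d, (X**2 + Y**2 - 1)/d)

print(to_sphere(0.0, 0.0))  # (0.0, 0.0, -1.0): the south pole
print(to_sphere(1.0, 0.0))  # (1.0, 0.0, 0.0): unit circle -> equator
```

Every image point lies on U², since the three components satisfy x² + y² + z² = 1 identically.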
To generate the stereographic projections, we used a maximum of 60 iterations on a grid with a resolution of 200 × 200 points, implemented in MATLAB R2014a. The roots of each polynomial are marked with different colors. Since r₁(z) has three roots, three colors appear in its stereographic images, one per root, with black reserved for divergence. Likewise, five colors of roots can be seen for r₂(z) and six for r₃(z). If the sequence generated by a root-finding scheme converges in a small number of iterations, the color is bright; if it does not converge to any of the roots, the initial point is colored black. In the following figures, the real and imaginary parts range from −2 to 2 in increments of 0.1 for the polynomials of degree 3, 5, and 6. The dynamical behavior of the newly proposed without-memory iterative schemes AK1 and AK2 and of the known without-memory schemes of Zafar et al. [13] (FWM), Cordero et al. [18] (M1), and Lotfi et al. [17] (LM) is shown in Figure 1, Figure 2, Figure 3, Figure 4, Figure 5 and Figure 6.
Figure 1, Figure 2, Figure 3, Figure 4, Figure 5 and Figure 6 show the regions of convergence and divergence, through stereographic projection, of the developed families AK1 and AK2 compared with the existing families of the same domain. It can be seen from Figure 1 and Figure 2 that the newly developed methods AK1 and AK2 have wider and brighter regions of convergence and smaller regions of divergence than FWM and M1, and their stability is comparable with that of LM in the case of r₁(z). The same behavior can be observed in Figure 3 and Figure 4 for the polynomial r₂(z) and in Figure 5 and Figure 6 for r₃(z). Since the dynamical planes of the presented families AK1 and AK2 show less black than those of FWM and M1, the stability and consistency of the newly proposed family is evident from the stereographic projections.
Without-memory iterative methods are algorithms that determine the next iterate exclusively from the current one. The current iterate, viewed as a point on the sphere, can be converted to a point in the plane via stereographic projection; the next iterate is then obtained by applying the iteration in the plane and projecting the resulting point back onto the sphere. This can simplify the computation of iterates, since working with points in a plane is often easier than working on a sphere. Additionally, stereographic projection shows how iterates move across the sphere as the iteration progresses, which gives geometric insight into the behavior of iterative techniques.
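This plane-iterate-and-project loop can be sketched briefly. The sketch below uses classical Newton iteration on r₁(z) = z³ − 1 as a stand-in for the schemes compared above (their update formulas appear earlier in the paper), tracking the image of each iterate on the sphere:

```python
def to_sphere(z):
    """Project a complex plane point onto the unit sphere."""
    d = 1 + abs(z)**2
    return (2*z.real/d, 2*z.imag/d, (abs(z)**2 - 1)/d)

def newton_step(z):
    # Newton's method applied to r1(z) = z^3 - 1 (z must be nonzero).
    return z - (z**3 - 1) / (3*z**2)

z = 0.6 + 0.6j            # starting point in the plane
path = [to_sphere(z)]     # its trajectory on the sphere
for _ in range(20):
    z = newton_step(z)
    path.append(to_sphere(z))

print(abs(z - 1))  # error after 20 steps: converged to the root z = 1
print(path[-1])    # its spherical image, (1, 0, 0) on the equator
```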

7. Dynamical Analysis of Iterative Methods with Memory

It is interesting to visualize the convergence region and stability of an iterative scheme through complex tools, such as symmetric basins of attraction and stereographic projections. The basic definitions and the complex behavior of rational functions can be found in [22,23,24,25]. In the previous section, we investigated stereographic projections; here, we analyze the symmetric basins of attraction of various iterative methods.
From a numerical viewpoint, the complex properties of the rational functions associated with iterative schemes give significant information about their stability. To generate the symmetric basins of attraction, we used two different coloring schemes in MATLAB R2014a. We consider a square box B = [−2, 2] × [−2, 2] ⊂ C with a 200 × 200 grid. An initial approximation lies in the basin of attraction of a zero ξ of a polynomial if the iterative method converges to it under the stopping criterion |f(ξₘ)| < 10⁻⁵.
The first scheme assigns a specific color to each initial point according to the root it converges to, in order to visualize the convergence speed as a function of the number of iterations: if the iterative scheme's sequence converges in fewer iterations, the color is brighter, and if it does not converge to any of the roots after 30 iterations, the initial point is assigned a dark blue color. The second scheme uses the same scale but shades each starting point by the number of iterations required for the root-finding scheme to converge to any root of the given function, with a maximum of 25 iterations and the same stopping criterion as previously stated. If an iterative scheme does not converge within the specified number of iterations, it is said to be divergent for that particular starting point, which is marked in black. To generate the symmetric basins of attraction (polynomiographs), we employ three complex functions, which are
y₃(ξ) = ξ³ − 1, ξ = 1.0, −0.5000 + 0.86605i, −0.5000 − 0.86605i;
y₅(ξ) = ξ⁵ − 1, ξ = 1.0, 0.3090 + 0.95105i, −0.8090 + 0.58778i, −0.8090 − 0.58778i, 0.30902 − 0.95105i;
y₆(ξ) = ξ⁶ − (1/2)ξ⁵ + (11(i + 1)/4)ξ⁴ − ((3i + 19)/4)ξ³ + ((5i + 11)/4)ξ² − ((i + 11)/4)ξ + 3/2 − 3i, ξ = 1.0068 + 2.0047i, 0.0281 + 0.9963i, 0.0279 − 1.5225i, −1.0235 − 0.9556i, 0.9557 − 0.0105i, −0.5284 − 0.5125i.
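The coloring procedure described above can be sketched compactly. The sketch uses classical Newton iteration on y₃(ξ) = ξ³ − 1 as a stand-in for the compared schemes (whose update formulas are given earlier in the paper), with the same box, stopping criterion, and iteration cap:

```python
# Minimal basin-of-attraction sketch over the box B = [-2, 2] x [-2, 2].
roots = [1.0, -0.5 + 0.86605j, -0.5 - 0.86605j]  # zeros of y3

def classify(z0, max_iter=25, tol=1e-5):
    """Return (index of the root reached, iterations used),
    or (None, max_iter) if the starting point is divergent."""
    z = z0
    for k in range(max_iter):
        if abs(z**3 - 1) < tol:                  # stopping criterion |f| < 1e-5
            return min(range(3), key=lambda i: abs(z - roots[i])), k
        if z == 0:                               # derivative vanishes: divergent
            return None, max_iter
        z = z - (z**3 - 1) / (3*z**2)            # Newton step on y3
    return None, max_iter

n = 9  # use n = 200 to reproduce the resolution of the figures
grid = [complex(-2 + 4*i/(n - 1), -2 + 4*j/(n - 1))
        for i in range(n) for j in range(n)]
labels = [classify(z)[0] for z in grid]
print(labels.count(0), labels.count(1), labels.count(2))  # basin sizes
```

In a full implementation, each (root index, iteration count) pair is mapped to a color and shade, producing the polynomiographs shown in the figures.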
The symmetric basins of attraction of the proposed with-memory schemes (AKM1) and (AKM2) and of the existing methods with memory by Zafar et al. [13] (FWM), Lotfi et al. [17] (LM), and Cordero et al. [18] (M1) are shown in Figure 7, Figure 8 and Figure 9.
According to Figure 7, Figure 8 and Figure 9, we conclude that the with-memory methods (45) and (46), with self-accelerating parameters calculated by (12), have the fastest convergence, as their convergence regions are wider and brighter than those of the other known schemes. Our methods AKM1 and AKM2 have wider convergence regions than FWM, LM, and M1; thus, the proposed schemes are more reliable than these existing methods.
Additionally, the investigation of basins of attraction can reveal the existence of multiple solutions, which may have significant implications for the modeled system. In physics, for instance, the existence of several stable solutions can signal phase transitions, while in optimization problems, the existence of several solutions can indicate the presence of local minima or maxima. In conclusion, the analysis of basins of attraction offers a powerful tool for understanding the behavior of iterative methods and enhancing their effectiveness, and it can help researchers and practitioners across a range of disciplines resolve challenging problems more effectively and precisely.

8. Conclusions

In this paper, we developed a new family of optimal eighth-order derivative-free iterative methods without memory for finding simple roots of nonlinear equations, based on King's scheme [2] and Lagrange interpolation. A convergence analysis is presented to show that the proposed methods without memory have an optimal order of convergence of eight. Furthermore, we used four self-accelerating parameters and a weight function in a single variable to extend the special cases AK1 and AK2 of the proposed family without memory to iterative schemes with memory, AKM1 and AKM2, without using additional function evaluations. We thereby increased the order of convergence from 8 to 15.51560 and the efficiency index from 1.6817 to 1.9847. For the comparison and implementation of the proposed and existing iterative schemes with and without memory, numerical results are presented for various test problems, including the eigenvalue problem, the continuous stirred-tank reactor, and the energy distribution of Planck's radiation. Two dynamical approaches are presented to assess the effectiveness of the competing iterative schemes: 3D stereographic projections for the schemes without memory and 2D symmetric basins of attraction for the schemes with memory. Such a rich analysis is rarely found in the literature. Both approaches show that our newly developed schemes are more stable, robust, and competent in their respective domains. Future research on the proposed schemes may address their stability and convergence zones in other domains, and graphical tools beyond 2D symmetric basins of attraction and 3D stereographic projections may be explored to visualize the behavior of the iterative methods.

Author Contributions

Methodology, S.A. (Saima Akram); Software, S.A. (Saima Akram); Formal analysis, S.A. (Saima Akram) and M.K.; Investigation, M.K.; Resources, S.K.; Data curation, S.K.; Writing—original draft, M.-u.-D.J.; Writing—review & editing, M.-u.-D.J. and S.A. (Shazia Altaf); Visualization, S.A. (Shazia Altaf); Supervision, S.A. (Saima Akram). All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Not applicable.

Acknowledgments

We would like to express our gratitude to the anonymous reviewers for their comments and suggestions to improve the manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

Sample Availability

Not applicable.

References

  1. Traub, J.F. Iterative Methods for the Solution of Equations; Prentice-Hall, Inc.: Englewood Cliffs, NJ, USA, 1964. [Google Scholar]
  2. King, R.F. A family of fourth order methods for non-linear equations. SIAM J. Numer. Anal. 1973, 10, 876–879. [Google Scholar] [CrossRef]
  3. Ostrowski, A.M. Solution of Equations and System of Equations; Academic Press: New York, NY, USA, 1960. [Google Scholar]
  4. Kung, H.T.; Traub, J.F. Optimal order of one-point and multipoint iteration. J. Assoc. Comput. Mach. 1974, 21, 643–651. [Google Scholar] [CrossRef]
  5. Steffensen, J.F. Remarks on iteration. Skand. Aktuarietidskr. 1933, 16, 64–72. [Google Scholar] [CrossRef]
  6. Akram, S.; Zafar, F.; Junjua, M.D.; Yasmin, N. A general family of derivative free with and without memory root finding methods. JPRM 2020, 16, 64–83. [Google Scholar]
  7. Džunić, J.; Petković, M.S. On generalized multipoint root-solvers with memory. Comput. Appl. Math. 2012, 236, 2909–2920. [Google Scholar] [CrossRef]
  8. Junjua, M.D.; Zafar, F.; Yasmin, N.; Akram, S. A general class of derivative free root solvers with-memory. U.P.B. Sci. Bull. Ser. A 2017, 79, 19–28. [Google Scholar]
  9. Lotfi, T.; Soleymani, F.; Ghorbanzadeh, M.; Assari, P. On the construction of some tri-parametric iterative methods with memory. Numer. Algorithms 2015, 70, 835–845. [Google Scholar] [CrossRef]
  10. Petković, M.S.; Neta, B.; Petković, L.D.; Džunić, J. Multipoint Methods for Solving Nonlinear Equations; Elsevier: Amsterdam, The Netherlands, 2013. [Google Scholar]
  11. Petković, M.S.; Neta, B.; Petković, L.D.; Džunić, J. Multipoint methods for solving nonlinear equations: A survey. Appl. Math. Comput. 2014, 226, 635–660. [Google Scholar] [CrossRef]
  12. Solaiman, O.S.; Ariffin, S.; Karim, A.; Hashimc, I. Optimal fourth- and eighth-order of convergence derivative-free modifications of King’s method. J. King Saud Univ.-Sci. 2019, 31, 1499–1504. [Google Scholar] [CrossRef]
  13. Zafar, F.; Akram, S.; Yasmin, N.; Junjua, M. On the construction of three step four parametric methods with accelerated order of convergence. J. Nonlinear Sci. Appl. 2016, 9, 4542–4553. [Google Scholar] [CrossRef]
  14. Zafar, F.; Cordero, A.; Torregrosa, J.R.; Ra, A. A class of four parametric with- and without-memory root finding methods. Comput. Math. Methods 2019, 1, e1024. [Google Scholar] [CrossRef]
  15. Zafar, F.; Yasmin, N.; Kutbi, M.A.; Zeshan, M. Construction of tri-parametric derivative free fourth order with and without memory iterative method. J. Nonlinear Sci. Appl. 2016, 9, 1410–1423. [Google Scholar] [CrossRef]
  16. Petkovic, M.S.; Zunic, J.; Petkovic, L.D. A family of two point methods with-memory for solving nonlinear equations. Appl. Anal. Discret. Math. 2011, 5, 298–317. [Google Scholar] [CrossRef]
  17. Lotfi, T.; Assari, P. New three- and four-parametric iterative with-memory methods with efficiency index near 2. Appl. Math. Comput. 2015, 270, 1004–1010. [Google Scholar] [CrossRef]
  18. Cordero, A.; Janjua, M.; Torregrosa, J.R.; Yasmin, N.; Zafar, F. Efficient four parametric with and without-memory iterative methods possessing high efficiency indices. Math. Prob. Eng. 2018, 2018, 8093673. [Google Scholar] [CrossRef]
  19. Behl, R.; Zafar, F.; Alshormani, A.S.; Junjua, M.; Yasmin, N. An Optimal Eighth-Order Scheme for Multiple Zeros of Univariate Functions. Int. J. Comput. Methods 2019, 16, 1843002. [Google Scholar] [CrossRef]
  20. Douglas, J.M. Process Dynamics and Control; Prentice Hall: Englewood Cliffs, NJ, USA, 1972. [Google Scholar]
  21. Nicklawsky, A. Visualizing Chaos. Mathematics Student Work. 1. 2012. Available online: https://digitalcommons.csbsju.edu/math_students/1 (accessed on 13 May 2023).
  22. Amat, S.; Busquier, S.; Bermúdez, C.; Magrenan, A.A. On the election of the damped parameter of a two-step relaxed Newton-type method. Nonlinear Dyn. 2016, 84, 9–18. [Google Scholar] [CrossRef]
  23. Chun, C.; Neta, B. Comparing the basins of attraction for Kanwar-Bhatia-Kansal family to the best fourth order method. Appl. Math. Comput. 2015, 266, 277–292. [Google Scholar] [CrossRef]
  24. Sharma, J.R.; Kumar, S. Efficient methods of optimal eighth and sixteenth order convergence for solving nonlinear equations. SeMA 2018, 75, 229–253. [Google Scholar] [CrossRef]
  25. Neta, B.; Chun, C.; Scott, M. Basins of attraction for optimal eighth order methods to find simple roots of nonlinear equations. Appl. Math. Comput. 2014, 227, 567–592. [Google Scholar] [CrossRef]
Figure 1. Stereographic projection of AK1 (left), AK2 (middle), and FWM (right) on r₁(z).
Figure 2. Stereographic projection of M1 (left) and LM (right) on r₁(z).
Figure 3. Stereographic projection of AK1 (left), AK2 (middle), and FWM (right) on r₂(z).
Figure 4. Stereographic projection of M1 (left) and LM (right) on r₂(z).
Figure 5. Stereographic projection of AK1 (left), AK2 (middle), and FWM (right) on r₃(z).
Figure 6. Stereographic projection of M1 (left) and LM (right) on r₃(z).
Figure 7. The convergence regions of different methods with memory for y₃(ξ).
Figure 8. The convergence regions of different methods with memory for y₅(ξ).
Figure 9. The convergence regions of different methods with memory for y₆(ξ).
Table 1. Numerical results of methods without memory for F₁(x).
F₁(x) = exp(x) − 1 + (1/5)x, x₀ = 0.16, η = 0

Method   |x₁ − η|        |x₂ − η|         |x₃ − η|          COC
LM       2.79 × 10^−9    2.60 × 10^−71    1.44 × 10^−567    7.99
M1       5.84 × 10^−8    3.78 × 10^−59    1.16 × 10^−468    7.99
FWM      5.84 × 10^−8    3.78 × 10^−59    1.16 × 10^−468    7.99
AK1      1.95 × 10^−8    1.89 × 10^−63    1.47 × 10^−503    7.99
AK2      1.08 × 10^−8    6.47 × 10^−66    1.11 × 10^−523    7.99
Table 2. Numerical results of methods with memory for F₁(x).
F₁(x) = exp(x) − 1 + (1/5)x, x₀ = 0.16, η = 0

Method   |x₁ − η|        |x₂ − η|          |x₃ − η|           COC
LM       2.79 × 10^−9    3.09 × 10^−142    3.45 × 10^−2000    13.97
M1       5.84 × 10^−8    3.44 × 10^−120    1.35 × 10^−1866    15.56
FWM      5.84 × 10^−8    3.44 × 10^−120    1.35 × 10^−1866    15.56
AKM1     1.95 × 10^−8    7.52 × 10^−128    2.29 × 10^−1985    15.56
AKM2     1.08 × 10^−8    5.43 × 10^−132    1.80 × 10^−2000    15.15
Table 3. Numerical results of methods without memory for F₂(x).
F₂(x) = x⁹ − 29x⁸ + 349x⁷ − 2261x⁶ + 8455x⁵ − 17663x⁴ + 15927x³ + 6993x² − 24732x + 12960, x₀ = 1.09, η = 1

Method   |x₁ − η|        |x₂ − η|        |x₃ − η|          COC
LM       1.85 × 10^−4    9.40 × 10^−23   4.05 × 10^−169    8.00
M1       3.24 × 10^−3    1.72 × 10^−8    4.04 × 10^−51     8.08
FWM      3.24 × 10^−3    1.72 × 10^−8    4.04 × 10^−51     8.08
AK1      1.43 × 10^−5    9.45 × 10^−29   3.43 × 10^−214    8.00
AK2      1.55 × 10^−4    9.50 × 10^−23   1.82 × 10^−168    8.00
Table 4. Numerical results of methods with memory for F₂(x).
F₂(x) = x⁹ − 29x⁸ + 349x⁷ − 2261x⁶ + 8455x⁵ − 17663x⁴ + 15927x³ + 6993x² − 24732x + 12960, x₀ = 1.09, η = 1

Method   |x₁ − η|        |x₂ − η|         |x₃ − η|           COC
LM       1.85 × 10^−4    5.87 × 10^−57    4.16 × 10^−881     15.69
M1       3.24 × 10^−3    1.14 × 10^−36    1.59 × 10^−569     15.92
FWM      3.24 × 10^−3    1.14 × 10^−36    1.59 × 10^−569     15.92
AKM1     1.43 × 10^−5    3.99 × 10^−78    7.98 × 10^−1215    15.67
AKM2     1.55 × 10^−4    1.54 × 10^−57    6.74 × 10^−892     15.74
Table 5. Numerical results of methods without memory for F₃(x).
F₃(x) = x⁴ + 11.50x³ + 47.49x² + 83.06325x + 51.23266875, x₀ = −4.3, η = −4.35

Method   |x₁ − η|        |x₂ − η|         |x₃ − η|          COC
LM       1.10 × 10^−9    3.80 × 10^−71    7.82 × 10^−563    7.99
M1       4.27 × 10^−8    4.58 × 10^−57    8.12 × 10^−449    7.99
FWM      4.27 × 10^−8    4.58 × 10^−57    8.12 × 10^−449    7.99
AK1      1.64 × 10^−8    7.60 × 10^−61    1.63 × 10^−479    7.99
AK2      4.37 × 10^−9    8.300 × 10^−66   1.41 × 10^−519    7.99
Table 6. Numerical results of methods with memory for F₃(x).
F₃(x) = x⁴ + 11.50x³ + 47.49x² + 83.06325x + 51.23266875, x₀ = −4.3, η = −4.35

Method   |x₁ − η|        |x₂ − η|          |x₃ − η|           COC
LM       1.10 × 10^−9    8.99 × 10^−143    3.61 × 10^−1998    13.94
M1       4.27 × 10^−8    1.10 × 10^−116    3.95 × 10^−1854    15.99
FWM      4.27 × 10^−8    1.10 × 10^−116    3.95 × 10^−1854    15.99
AKM1     1.64 × 10^−8    2.45 × 10^−123    1.52 × 10^−1960    15.99
AKM2     4.37 × 10^−9    1.57 × 10^−132    3.71 × 10^−1998    15.11
Table 7. Numerical results of methods without memory for F₄(x).
F₄(x) = (1/3)x⁴ − x² − (1/3)x + 1, x₀ = 1.25, η = 1

Method   |x₁ − η|        |x₂ − η|         |x₃ − η|          COC
LM       6.33 × 10^−7    5.89 × 10^−50    3.29 × 10^−394    7.99
M1       1.38 × 10^−8    5.58 × 10^−64    3.94 × 10^−507    8.00
FWM      1.38 × 10^−8    5.58 × 10^−64    3.94 × 10^−507    8.00
AK1      1.38 × 10^−8    1.22 × 10^−63    4.54 × 10^−504    7.99
AK2      1.38 × 10^−8    1.48 × 10^−63    2.60 × 10^−503    7.99
Table 8. Numerical results of methods with memory for F₄(x).
F₄(x) = (1/3)x⁴ − x² − (1/3)x + 1, x₀ = 1.25, η = 1

Method   |x₁ − η|        |x₂ − η|          |x₃ − η|           COC
LM       6.33 × 10^−7    1.43 × 10^−95     1.01 × 10^−1469    15.50
M1       1.38 × 10^−8    1.16 × 10^−122    2.13 × 10^−1891    15.50
FWM      1.38 × 10^−8    1.16 × 10^−122    2.13 × 10^−1891    15.50
AKM1     1.38 × 10^−8    1.19 × 10^−122    2.93 × 10^−1891    15.50
AKM2     1.38 × 10^−8    1.20 × 10^−122    3.44 × 10^−1891    15.50
Table 9. Numerical results of methods without memory for F₅(x).
F₅(x) = sin(x) − (1/100)x, x₀ = 0.7, η = 0

Method   |x₁ − η|        |x₂ − η|         |x₃ − η|          COC
LM       1.60 × 10^−5    1.24 × 10^−43    1.56 × 10^−348    8.00
M1       8.83 × 10^−7    8.08 × 10^−54    4.01 × 10^−430    7.99
FWM      8.83 × 10^−7    8.08 × 10^−54    4.01 × 10^−430    7.99
AK1      6.19 × 10^−7    4.26 × 10^−55    2.15 × 10^−440    7.99
AK2      4.33 × 10^−7    2.32 × 10^−56    1.56 × 10^−450    7.99
Table 10. Numerical results of methods with memory for F₅(x).
F₅(x) = sin(x) − (1/100)x, x₀ = 0.7, η = 0

Method   |x₁ − η|        |x₂ − η|          |x₃ − η|           COC
LM       1.60 × 10^−5    1.23 × 10^−80     1.20 × 10^−1260    15.71
M1       8.83 × 10^−7    5.98 × 10^−99     1.26 × 10^−1547    15.72
FWM      8.83 × 10^−7    5.98 × 10^−99     1.26 × 10^−1547    15.72
AKM1     6.19 × 10^−7    6.68 × 10^−101    3.56 × 10^−1578    15.72
AKM2     4.33 × 10^−7    6.74 × 10^−103    1.95 × 10^−1609    15.72
Table 11. Test functions for comparison of different methods.

Ex.  Test Function                                                     Exact Root   Initial Point
1    F₁(x) = exp(x) − 1 + (1/5)x                                       η = 0        x₀ = 0.16
2    F₂(x) = x⁹ − 29x⁸ + 349x⁷ − 2261x⁶ + 8455x⁵ − 17663x⁴             η = 1        x₀ = 1.09
     + 15927x³ + 6993x² − 24732x + 12960
3    F₃(x) = x⁴ + 11.50x³ + 47.49x² + 83.06325x + 51.23266875          η = −4.35    x₀ = −4.3
4    F₄(x) = (1/3)x⁴ − x² − (1/3)x + 1                                 η = 1        x₀ = 1.25
5    F₅(x) = sin(x) − (1/100)x                                         η = 0        x₀ = 0.7
6    F₆(x) = arctan(x)                                                 η = 0        x₀ = 1
7    F₇(x) = x³ + x² − 3x − 3                                          η = −1       x₀ = 0.12
Table 12. Numerical results of methods without memory for F₆(x).
F₆(x) = arctan(x), x₀ = 1, η = 0

Method   |x₁ − η|        |x₂ − η|         |x₃ − η|          COC
LM       3.90 × 10^−2    2.16 × 10^−16    2.14 × 10^−130    7.99
M1       1.98 × 10^−2    1.36 × 10^−19    4.56 × 10^−156    7.95
FWM      1.98 × 10^−2    1.36 × 10^−19    4.56 × 10^−156    7.95
AK1      1.76 × 10^−2    7.67 × 10^−20    4.41 × 10^−158    7.96
AK2      4.12 × 10^−3    2.35 × 10^−24    3.31 × 10^−194    7.96
Table 13. Numerical results of methods without memory for F₇(x).
F₇(x) = x³ + x² − 3x − 3, x₀ = 0.12, η = −1

Method   |x₁ − η|        |x₂ − η|         |x₃ − η|          COC
LM       1.60 × 10^−2    4.92 × 10^−15    4.66 × 10^−115    7.99
M1       1.74 × 10^−4    3.61 × 10^−29    1.27 × 10^−226    7.99
FWM      1.74 × 10^−4    3.61 × 10^−29    1.27 × 10^−226    7.99
AK1      1.73 × 10^−4    1.35 × 10^−29    1.90 × 10^−230    7.99
AK2      1.73 × 10^−4    6.46 × 10^−30    2.45 × 10^−233    7.99
Table 14. Numerical results of methods with memory for F₆(x).
F₆(x) = arctan(x), x₀ = 1, η = 0

Method   |x₁ − η|        |x₂ − η|         |x₃ − η|          COC
LM       3.90 × 10^−2    6.65 × 10^−25    1.07 × 10^−382    15.71
M1       1.98 × 10^−2    1.02 × 10^−30    1.02 × 10^−474    15.69
FWM      1.98 × 10^−2    1.02 × 10^−30    1.02 × 10^−474    15.69
AKM1     1.76 × 10^−2    2.31 × 10^−31    6.29 × 10^−485    15.70
AKM2     4.12 × 10^−3    1.83 × 10^−40    2.08 × 10^−628    15.74
Table 15. Numerical results of methods with memory for F₇(x).
F₇(x) = x³ + x² − 3x − 3, x₀ = 0.12, η = −1

Method   |x₁ − η|        |x₂ − η|         |x₃ − η|          COC
LM       1.60 × 10^−2    7.05 × 10^−31    2.31 × 10^−484    15.99
M1       1.74 × 10^−4    1.71 × 10^−61    1.38 × 10^−973    15.99
FWM      1.74 × 10^−4    1.71 × 10^−61    1.38 × 10^−973    15.99
AKM1     1.73 × 10^−4    1.62 × 10^−61    5.57 × 10^−974    15.99
AKM2     1.73 × 10^−4    1.58 × 10^−61    3.72 × 10^−974    15.99
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
