# Optimal One-Point Iterative Function Free from Derivatives for Multiple Roots

by Deepak Kumar 1, Janak Raj Sharma 2 and Ioannis K. Argyros 3,*

1 Department of Mathematics, Chandigarh University, Gharuan, Mohali, Punjab 140413, India
2 Department of Mathematics, Sant Longowal Institute of Engineering and Technology, Longowal, Punjab 148106, India
3 Department of Mathematical Sciences, Cameron University, Lawton, OK 73505, USA
* Author to whom correspondence should be addressed.
Mathematics 2020, 8(5), 709; https://doi.org/10.3390/math8050709
Submission received: 31 March 2020 / Revised: 24 April 2020 / Accepted: 26 April 2020 / Published: 3 May 2020

## Abstract

We suggest a derivative-free optimal method of second order, which is a new version of a modification of Newton's method for computing the multiple zeros of nonlinear single-variable functions. Iterative methods without derivatives for multiple zeros are not easy to obtain, and hence such methods are rare in the literature. Inspired by this fact, we worked on a family of optimal second-order derivative-free methods for multiple zeros that require only two function evaluations per iteration. The stability of the methods was validated through complex geometry by drawing basins of attraction. Moreover, the applicability of the methods is demonstrated herein on different functions. The study of numerical results shows that the new derivative-free methods are good alternatives to the existing optimal second-order techniques that require derivative calculations.
MSC:
65H10; 65J10; 65Y20; 41A25

## 1. Introduction

Finding the roots of nonlinear functions is a significant problem in numerical mathematics and has numerous applications in different parts of the applied sciences [1,2]. In this paper, we consider iterative techniques to locate a multiple root $\alpha$ of multiplicity m, that is, $g^{(j)}(\alpha) = 0$ for $j = 0, 1, 2, \ldots, m-1$ and $g^{(m)}(\alpha) \neq 0$, of a nonlinear equation $g(x) = 0$.
The most widely used method for finding a multiple root of $g(x) = 0$ is the quadratically convergent modified Newton method (MNM) (see [3]):

$$x_{n+1} = x_n - m\,\frac{g(x_n)}{g'(x_n)}. \qquad (1)$$
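As a concrete illustration, the modified Newton step (1) can be sketched in a few lines; the test function $g(x) = (x-2)^2(x+1)$, with a double root at $x = 2$, is an illustrative choice and is not taken from the paper.

```python
# Minimal sketch of the modified Newton method (MNM) for a root of known
# multiplicity m; the test function g and its derivative dg are illustrative.
def modified_newton(g, dg, x0, m, tol=1e-12, max_iter=50):
    x = x0
    for _ in range(max_iter):
        gx, dgx = g(x), dg(x)
        if gx == 0 or dgx == 0:          # landed exactly on the root
            break
        step = m * gx / dgx
        x -= step
        if abs(step) < tol:
            break
    return x

g  = lambda x: (x - 2.0) ** 2 * (x + 1.0)            # double root at x = 2
dg = lambda x: 2.0 * (x - 2.0) * (x + 1.0) + (x - 2.0) ** 2
root = modified_newton(g, dg, x0=3.0, m=2)
print(root)
```

With $m = 2$ the iteration converges quadratically to the double root, which ordinary Newton would approach only linearly.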
In the literature, there are many iterative methods of different orders of convergence for approximating the multiple roots of $g(x) = 0$ (see, for example, [4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19]). Such methods require the evaluation of derivatives of either first order or first and second order. The motivation for developing higher-order methods is closely related to the Kung–Traub conjecture [20], which establishes an upper bound on the order of convergence attainable with a specific number of function evaluations: $\rho_c \leq 2^{\mu - 1}$, where $\rho_c$ is the order of convergence and $\mu$ is the number of function evaluations. Methods that attain this bound are called optimal methods.
In contrast to methods that require derivative evaluations, derivative-free techniques for the case of multiple roots are exceptionally uncommon. The principal issue in constructing such techniques is the difficulty of establishing their convergence order. Derivative-free techniques are valuable in circumstances where the derivative of the function g is hard to evaluate or costly to compute. One such derivative-free scheme is the classical Traub–Steffensen iteration [21], which replaces $g'$ in the traditional Newton iteration with a suitable difference-quotient approximation:
$$g'(x_n) \simeq \frac{g(x_n + \beta g(x_n)) - g(x_n)}{\beta g(x_n)} = g[w_n, x_n],$$

where $g[w_n, x_n] = \frac{g(w_n) - g(x_n)}{w_n - x_n}$ is the first-order divided difference, with $w_n = x_n + \beta g(x_n)$ and $\beta \in \mathbb{R} \setminus \{0\}$.
Then the modified Newton method (1) becomes the modified Traub–Steffensen method

$$x_{n+1} = x_n - m\,\frac{g(x_n)}{g[w_n, x_n]}. \qquad (2)$$

The modified Traub–Steffensen method (2) is a noticeable improvement over Newton's method, because it maintains quadratic convergence without using any derivative.
The principal objective of this work is to design a general class of derivative-free multiple-root methods, including (2), by utilising the weight-function approach. Particular methods are obtained when specific weight functions (possibly depending on one or more parameters) are chosen. The rest of the paper is organised as follows. In Section 2, the second-order scheme is developed and its convergence is analysed. In Section 3, basins of attraction are displayed to check the stability of the methods. Numerical experiments on various equations are performed in Section 4 to exhibit the applicability and efficiency of the schemes introduced here. Concluding remarks are given in Section 5.

## 2. Formulation of Method

Given a known multiplicity $m > 1$, we consider the following one-point scheme for multiple roots, built on the Traub–Steffensen iteration (2):

$$x_{n+1} = x_n - G(\Theta_n), \qquad (3)$$

where the weight function $G: \mathbb{C} \to \mathbb{C}$ is analytic in a neighbourhood of zero and $\Theta_n = \frac{g(x_n)}{g[w_n, x_n]}$.
In the sequel, we shall study the convergence results of the proposed iterative scheme (3). For clarity, the results are obtained separately for different cases depending upon the multiplicity m.
Theorem 1.
Assume that $g: \mathbb{C} \to \mathbb{C}$ is an analytic function in a domain containing a multiple zero (say, α) of multiplicity $m = 2$. Suppose that the initial point $x_0$ is close enough to α; then the convergence order of the formula (3) is at least 2, provided that $G(0) = 0$, $G'(0) = 2$ and $|G''(0)| < \infty$.
Proof.
Assume that the error at the n-th stage is $e_n = x_n - \alpha$. Using the Taylor expansion of $g(x_n)$ about $\alpha$ and keeping in mind that $g(\alpha) = 0$, $g'(\alpha) = 0$ and $g''(\alpha) \neq 0$, we have

$$g(x_n) = \frac{g''(\alpha)}{2!}\, e_n^2 \left(1 + C_1 e_n + C_2 e_n^2 + C_3 e_n^3 + \cdots \right), \qquad (4)$$

where $C_k = \frac{2!}{(2+k)!} \frac{g^{(2+k)}(\alpha)}{g''(\alpha)}$ for $k \in \mathbb{N}$.

Similarly, the Taylor expansion of $g(w_n)$ about $\alpha$ is

$$g(w_n) = \frac{g''(\alpha)}{2!}\, e_{w_n}^2 \left(1 + C_1 e_{w_n} + C_2 e_{w_n}^2 + C_3 e_{w_n}^3 + \cdots \right), \qquad (5)$$

where $e_{w_n} = w_n - \alpha = e_n + \beta \frac{g''(\alpha)}{2!} e_n^2 \left(1 + C_1 e_n + C_2 e_n^2 + C_3 e_n^3 + \cdots \right)$.

By using (4) and (5), we obtain

$$\Theta_n = \frac{e_n}{2} - \frac{1}{8}\left(2 C_1 + \beta g''(\alpha)\right) e_n^2 + O(e_n^3). \qquad (6)$$

Expanding the weight function $G(\Theta_n)$ in a neighbourhood of the origin by Taylor series, we have

$$G(\Theta_n) = G(0) + \Theta_n G'(0) + \frac{1}{2}\Theta_n^2 G''(0) + O(\Theta_n^3).$$

Using the expression (6) in (3), it follows that

$$e_{n+1} = -G(0) + \left(1 - \frac{G'(0)}{2}\right) e_n + \frac{1}{8}\left(2 C_1 G'(0) + \beta G'(0)\, g''(\alpha) - G''(0)\right) e_n^2 + O(e_n^3). \qquad (7)$$

In order to obtain second-order convergence, the constant term and the coefficient of $e_n$ in (7) must simultaneously vanish. This is possible only for the following values of $G(0)$ and $G'(0)$:

$$G(0) = 0, \quad G'(0) = 2. \qquad (8)$$

By using these values in (7), the error relation becomes

$$e_{n+1} = \frac{1}{8}\left(4 C_1 + 2\beta g''(\alpha) - G''(0)\right) e_n^2 + O(e_n^3). \qquad (9)$$
Thus, the theorem is proven. ☐
Theorem 2.
Assume that $g: \mathbb{C} \to \mathbb{C}$ is an analytic function in a domain containing a multiple zero (say, α) of multiplicity $m = 3$. Suppose that the initial point $x_0$ is close enough to α; then the convergence order of the formula (3) is at least 2, provided that $G(0) = 0$, $G'(0) = 3$ and $|G''(0)| < \infty$.
Proof.
Suppose that the error at the n-th iteration is $e_n = x_n - \alpha$. Using the Taylor expansion of $g(x_n)$ about $\alpha$ and keeping in mind that $g(\alpha) = 0$, $g'(\alpha) = 0$, $g''(\alpha) = 0$ and $g'''(\alpha) \neq 0$, we have

$$g(x_n) = \frac{g'''(\alpha)}{3!}\, e_n^3 \left(1 + C_1 e_n + C_2 e_n^2 + C_3 e_n^3 + \cdots \right), \qquad (10)$$

where $C_k = \frac{3!}{(3+k)!} \frac{g^{(3+k)}(\alpha)}{g'''(\alpha)}$ for $k \in \mathbb{N}$.

Additionally, the Taylor expansion of $g(w_n)$ about $\alpha$ is

$$g(w_n) = \frac{g'''(\alpha)}{3!}\, e_{w_n}^3 \left(1 + C_1 e_{w_n} + C_2 e_{w_n}^2 + C_3 e_{w_n}^3 + \cdots \right), \qquad (11)$$

where $e_{w_n} = w_n - \alpha = e_n + \beta \frac{g'''(\alpha)}{3!} e_n^3 \left(1 + C_1 e_n + C_2 e_n^2 + C_3 e_n^3 + \cdots \right)$.

By using (10) and (11), we obtain

$$\Theta_n = \frac{e_n}{3} - \frac{C_1}{9}\, e_n^2 + O(e_n^3). \qquad (12)$$

Expanding the weight function $G(\Theta_n)$ in a neighbourhood of the origin by Taylor series, we have

$$G(\Theta_n) = G(0) + \Theta_n G'(0) + \frac{1}{2}\Theta_n^2 G''(0) + O(\Theta_n^3).$$

Inserting the expression (12) into (3), we obtain

$$e_{n+1} = -G(0) + \left(1 - \frac{G'(0)}{3}\right) e_n + \frac{1}{18}\left(2 C_1 G'(0) - G''(0)\right) e_n^2 + O(e_n^3). \qquad (13)$$

In order to obtain second-order convergence, the constant term and the coefficient of $e_n$ in (13) must simultaneously vanish. This is possible only for the following values of $G(0)$ and $G'(0)$:

$$G(0) = 0, \quad G'(0) = 3. \qquad (14)$$

By using these values in (13), the error relation becomes

$$e_{n+1} = \frac{1}{18}\left(6 C_1 - G''(0)\right) e_n^2 + O(e_n^3). \qquad (15)$$
Thus, the theorem is proven. ☐
Remark 1.
We observe from the above results that the number of conditions on $G(\Theta_n)$ is two for each of the cases $m = 2, 3$ to attain quadratic convergence of the method (3). Both cases satisfy the common conditions $G(0) = 0$, $G'(0) = m$. Nevertheless, their error equations differ from each other, as the parameter β does not appear in the equation for $m = 3$. In fact, for every $m \geq 3$ the error equation does not contain a β term. We prove this fact in the next theorem.
For the multiplicity $m ≥ 3$, we prove the order of convergence of scheme (3) by the following theorem:
Theorem 3.
Let $g: \mathbb{C} \to \mathbb{C}$ be an analytic function in a region enclosing a multiple zero (say, α) of multiplicity m. Assume that the initial guess $x_0$ is sufficiently close to α; then the iteration scheme defined by (3) has second order of convergence, provided that $G(0) = 0$, $G'(0) = m$ and $|G''(0)| < \infty$.
Proof.
Let the error at the n-th iteration be $e_n = x_n - \alpha$. Using the Taylor expansion of $g(x_n)$ about $\alpha$, we have

$$g(x_n) = \frac{g^{(m)}(\alpha)}{m!}\, e_n^m \left(1 + C_1 e_n + C_2 e_n^2 + C_3 e_n^3 + O(e_n^4)\right), \qquad (16)$$

where $C_k = \frac{m!}{(m+k)!} \frac{g^{(m+k)}(\alpha)}{g^{(m)}(\alpha)}$ for $k \in \mathbb{N}$.

Using (16) in $w_n = x_n + \beta g(x_n)$, we obtain

$$e_{w_n} = w_n - \alpha = e_n + \beta\, \frac{g^{(m)}(\alpha)}{m!}\, e_n^m \left(1 + C_1 e_n + C_2 e_n^2 + C_3 e_n^3 + O(e_n^4)\right). \qquad (17)$$

The Taylor expansion of $g(w_n)$ about $\alpha$ is given as

$$g(w_n) = \frac{g^{(m)}(\alpha)}{m!}\, e_{w_n}^m \left(1 + C_1 e_{w_n} + C_2 e_{w_n}^2 + C_3 e_{w_n}^3 + O(e_{w_n}^4)\right). \qquad (18)$$

From (16)–(18), it follows that

$$g[w_n, x_n] = \frac{g^{(m)}(\alpha)}{m!}\, e_n^{m-1} \left(m + (m+1) C_1 e_n + (m+2) C_2 e_n^2 + (m+3) C_3 e_n^3 + O(e_n^4)\right). \qquad (19)$$

By using (16) and (19), we obtain

$$\Theta_n = \frac{e_n}{m} - \frac{C_1}{m^2}\, e_n^2 + \frac{(1+m) C_1^2 - 2 m C_2}{m^3}\, e_n^3 + O(e_n^4). \qquad (20)$$

Expanding the weight function $G(\Theta_n)$ in a neighbourhood of the origin by Taylor series, we have

$$G(\Theta_n) = G(0) + \Theta_n G'(0) + \frac{1}{2}\Theta_n^2 G''(0) + O(\Theta_n^3).$$

Inserting the expression (20) in the scheme (3), we obtain the error equation

$$e_{n+1} = -G(0) + \left(1 - \frac{G'(0)}{m}\right) e_n + \frac{2 C_1 G'(0) - G''(0)}{2 m^2}\, e_n^2 + O(e_n^3). \qquad (21)$$

In order to obtain second-order convergence, the constant term and the coefficient of $e_n$ in (21) must simultaneously vanish. This is possible only for the following values of $G(0)$ and $G'(0)$:

$$G(0) = 0, \quad G'(0) = m. \qquad (22)$$

By using these values in (21), the error relation becomes

$$e_{n+1} = \frac{2 m C_1 - G''(0)}{2 m^2}\, e_n^2 + O(e_n^3). \qquad (23)$$
Hence, second-order convergence is established. This completes the proof of the theorem. ☐
Remark 2.
It is important to note that the parameter β, which is used in $w_n$, appears only in the error equation of the case $m = 2$ and not for $m \geq 3$. For $m \geq 3$, this parameter appears in the coefficients of $e_n^3$ and higher-order terms. However, such terms are not needed to establish the required quadratic convergence.
We can generate numerous methods of the family (3) based on the form of the function $G(\Theta_n)$ satisfying the conditions (22). However, we restrict the choices to low-order polynomials and simple rational functions. Accordingly, the following simple forms, satisfying the conditions (22), are chosen:
(1) $G(\Theta_n) = m\,\Theta_n (1 + a_1 \Theta_n)$;
(2) $G(\Theta_n) = \dfrac{m\,\Theta_n}{1 + a_2 \Theta_n}$;
(3) $G(\Theta_n) = \dfrac{m\,\Theta_n}{1 + a_3 m\,\Theta_n}$;
(4) $G(\Theta_n) = m\,(e^{\Theta_n} - 1)$;
(5) $G(\Theta_n) = m \log(\Theta_n + 1)$;
(6) $G(\Theta_n) = m \sin \Theta_n$;
(7) $G(\Theta_n) = \dfrac{\Theta_n}{\left(\frac{1}{\sqrt{m}} + a_4 \Theta_n\right)^2}$;
(8) $G(\Theta_n) = \dfrac{\Theta_n^2 + \Theta_n}{\frac{1}{m} + a_5 \Theta_n}$,

where $a_1$, $a_2$, $a_3$, $a_4$ and $a_5$ are free parameters.
The method corresponding to each of the above forms is defined as follows:
Method 1 (M1):
$$x_{n+1} = x_n - m\,\Theta_n (1 + a_1 \Theta_n).$$
Method 2 (M2):
$$x_{n+1} = x_n - \frac{m\,\Theta_n}{1 + a_2 \Theta_n}.$$
Method 3 (M3):
$$x_{n+1} = x_n - \frac{m\,\Theta_n}{1 + a_3 m\,\Theta_n}.$$
Method 4 (M4):
$$x_{n+1} = x_n - m\,(e^{\Theta_n} - 1).$$
Method 5 (M5):
$$x_{n+1} = x_n - m \log(\Theta_n + 1).$$
Method 6 (M6):
$$x_{n+1} = x_n - m \sin \Theta_n.$$
Method 7 (M7):
$$x_{n+1} = x_n - \frac{\Theta_n}{\left(\frac{1}{\sqrt{m}} + a_4 \Theta_n\right)^2}.$$
Method 8 (M8):
$$x_{n+1} = x_n - \frac{\Theta_n^2 + \Theta_n}{\frac{1}{m} + a_5 \Theta_n}.$$
Remark 3.
The scheme (3) defines a family of second-order methods which uses two function evaluations per iteration, namely $g(x_n)$ and $g(w_n)$. This family is, therefore, optimal in the sense of the Kung–Traub conjecture [20].
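The whole family (3) can be sketched generically: the iteration is fixed and only the weight function G changes. Below, `G_M2` and `G_M5` correspond to the methods M2 (with $a_2 = 1/4$) and M5; the test function, the value of $\beta$ and the parameter choices are illustrative, not taken from the paper.

```python
import math

# Generic driver for the family (3): x_{n+1} = x_n - G(Theta_n), where
# Theta_n = g(x_n)/g[w_n, x_n] and w_n = x_n + beta*g(x_n).
def family3(g, G, x0, beta=1e-4, tol=1e-10, max_iter=100):
    x = x0
    for _ in range(max_iter):
        gx = g(x)
        w = x + beta * gx
        if w == x or g(w) == gx:       # update below floating-point resolution
            break
        theta = gx * (w - x) / (g(w) - gx)   # g(x_n)/g[w_n, x_n]
        step = G(theta)
        x -= step
        if abs(step) < tol:
            break
    return x

m = 2
G_M2 = lambda t: m * t / (1 + 0.25 * t)     # weight of M2 with a2 = 1/4
G_M5 = lambda t: m * math.log(t + 1)        # weight of M5
g = lambda x: (x - 2.0) ** 2 * (x + 1.0)    # illustrative: double root at x = 2
r2 = family3(g, G_M2, 3.0)
r5 = family3(g, G_M5, 3.0)
print(r2, r5)
```

Any of the eight weight functions listed above can be passed as `G`; each satisfies $G(0) = 0$ and $G'(0) = m$, which is what condition (22) requires for quadratic convergence.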

## 3. Basins of Attraction

Our aim here is to analyse the new methods with a graphical tool, namely the basins of attraction of the zeros of a polynomial $p(z)$ in the complex plane. Examination of the basins of attraction produced by an iterative scheme gives a significant idea about its convergence. This idea was introduced initially by Vrscay and Gilbert [22]. More recently, many researchers have used it in their work; see, for instance, [23,24] and the references therein. We consider different choices of the function $G(\Theta_n)$ of the family (3) to study the basins of attraction.
We select $z_0$ as the initial point belonging to $D$, where $D$ is a rectangular region in $\mathbb{C}$ containing all the roots of the equation $g(z) = 0$. An iterative method beginning at a point $z_0 \in D$ can converge to a zero of the function $g(z)$ or diverge. In order to assess the basins, we use the tolerance $10^{-3}$ as the stopping criterion for convergence, allowing a maximum of 25 iterations. If this tolerance is not achieved within that limit, the iteration started from $z_0$ is declared divergent. While drawing the basins, the following convention is adopted: a distinct colour is assigned to each zero, and every initial guess $z_0$ from which the iterative formula converges to a zero is painted in the colour of that zero; if the formula fails to converge in the allowed number of iterations, the point is drawn in black.
We draw the basins of attraction by applying the methods M1–M8 (choosing $\beta = 10^{-2}, 10^{-4}, 10^{-8}$) to the following two polynomials:
Problem 1.
In the first example, we consider the polynomial $P_1(z) = (z^5 - 2z^4 - 2z^3 + 8z^2 - 8z)^2$, which has zeros $\{\pm 2, 0, 1 \pm i\}$ with multiplicity 2. In this case, we use a grid of $500 \times 500$ points in a rectangle $D \subset \mathbb{C}$ of size $[-3, 3] \times [-3, 3]$ and assign the colours cyan, green, yellow, red and blue to each initial point in the basin of attraction of the zeros $-2$, $1 + i$, $0$, $1 - i$ and $2$, respectively. Basins obtained for the methods M1–M8 are shown in Figure 1, Figure 2 and Figure 3 corresponding to $\beta = 10^{-2}, 10^{-4}, 10^{-8}$. When observing the behaviour of the methods, we see that the method M5 possesses the fewest divergent points, followed by M4 and M6. On the contrary, the methods M3 and M7 have the highest numbers of divergent points, followed by M2 and M8. Notice that the basins become wider as the parameter $\beta$ assumes smaller values.
Problem 2.
Let us take the polynomial $P_2(z) = (z^6 - 1)^2$, whose zeros $\{\pm 1, \pm\frac{1}{2} \pm \frac{\sqrt{3}}{2} i\}$ (the sixth roots of unity) have multiplicity 2. To see the dynamical view, we consider a rectangle $D = [-4, 4] \times [-4, 4] \subset \mathbb{C}$ with $500 \times 500$ grid points and allocate the colours cyan, green, yellow, red, purple and blue to each initial point in the basin of attraction of the zeros $0.5 + 0.8660i$, $-0.5 - 0.8660i$, $1$, $0.5 - 0.8660i$, $-0.5 + 0.8660i$ and $-1$, respectively. Basins obtained for the methods M1–M8 are shown in Figure 4, Figure 5 and Figure 6 corresponding to $\beta = 10^{-2}, 10^{-4}, 10^{-8}$. When observing the behaviour of the methods, we see that the methods M3 and M6 possess the fewest divergent points, followed by M1, M4, M8, M2 and M7.
From the graphics, we can judge the behaviour and suitability of any method depending on the circumstances. If we pick an initial point $z_0$ in a zone where several basins of attraction meet one another, it is difficult to predict which root will be reached by the iterative method starting at $z_0$; thus, the selection of $z_0$ in such a zone is not a good one. Both the black zones and the zones with many different colours are unsuitable choices for $z_0$ when we wish to obtain a specific root. The most attractive pictures appear when the boundaries between basins of attraction are very intricate; these correspond to the cases in which the method is more demanding with respect to the initial point and its dynamical behaviour is less predictable. We close this section with the remark that the convergence behaviour of the proposed methods depends on the value of the parameter $\beta$: the smaller the value of $\beta$, the better the convergence of the method.

## 4. Numerical Results

This section is devoted to demonstrating the convergence behaviour of the presented family. In this regard, we consider the special cases M1–M8 of the proposed class, choosing $a_1 = 1/10$, $a_2 = 1/4$, $a_3 = 1/10$, $a_4 = 1/10$ and $a_5 = 1/5$. Performance is compared with the classical modified Newton method (1). We select four test problems for comparison.
To check the theoretical order of convergence, we compute the computational order of convergence (COC) using the formula (see [25]):

$$\mathrm{COC} = \frac{\ln\left|\left(x_{n+1} - \alpha\right)/\left(x_n - \alpha\right)\right|}{\ln\left|\left(x_n - \alpha\right)/\left(x_{n-1} - \alpha\right)\right|}.$$
All computations were performed in the programming package Mathematica using multiple-precision arithmetic with 4096 significant digits. The numerical results displayed in Table 1, Table 2, Table 3 and Table 4 include: (i) the number of iterations $n$ required to converge to the solution such that $|x_{n+1} - x_n| + |f(x_n)| < 10^{-100}$, (ii) the values of the last three consecutive errors $|x_{n+1} - x_n|$, (iii) the residual error $|f(x_n)|$ and (iv) the computational order of convergence (COC).
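The COC formula can be evaluated directly from three consecutive iterates; the error values below are synthetic, chosen to mimic exact quadratic convergence, and are not taken from the tables.

```python
import math

# Computational order of convergence (COC) from three consecutive iterates
# x_{n-1}, x_n, x_{n+1} and the known root alpha, per the formula above.
def coc(x_prev, x_curr, x_next, alpha):
    num = math.log(abs((x_next - alpha) / (x_curr - alpha)))
    den = math.log(abs((x_curr - alpha) / (x_prev - alpha)))
    return num / den

# Errors 1e-2 -> 1e-4 -> 1e-8 about alpha = 0: each error is the square of
# the previous one, so the computed order should come out as 2.
order = coc(1e-2, 1e-4, 1e-8, 0.0)
print(order)
```

In the tables below, COC values of 2.000 computed this way confirm the theoretical second order of all the methods.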
Example 1
(Eigenvalue problem). One of the challenging tasks in linear algebra is the computation of the eigenvalues of a large square matrix. Finding the zeros of the characteristic polynomial of a square matrix of order greater than 4 is itself a difficult job. So, we consider the following 9 × 9 matrix:
$$M = \frac{1}{8}\begin{pmatrix} -12 & 0 & 0 & 19 & -19 & 76 & -19 & 18 & 437 \\ -64 & 24 & 0 & -24 & 24 & 64 & -8 & 32 & 376 \\ -16 & 0 & 24 & 4 & -4 & 16 & -4 & 8 & 92 \\ -40 & 0 & 0 & -10 & 50 & 40 & 2 & 20 & 242 \\ -4 & 0 & 0 & -1 & 41 & 4 & 1 & 0 & 25 \\ -40 & 0 & 0 & 18 & -18 & 104 & -18 & 20 & 462 \\ -84 & 0 & 0 & -29 & 29 & 84 & 21 & 42 & 501 \\ 16 & 0 & 0 & -4 & 4 & -16 & 4 & 16 & -92 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 24 \end{pmatrix}.$$
The characteristic polynomial of the matrix M is given as

$$f_1(x) = x^9 - 29x^8 + 349x^7 - 2261x^6 + 8455x^5 - 17663x^4 + 15927x^3 + 6993x^2 - 24732x + 12960.$$

This function has a multiple zero $\alpha = 3$ of multiplicity 4. We choose the initial approximation $x_0 = 2.75$ and obtain the numerical results shown in Table 1.
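The stated multiplicity can be checked independently by repeated synthetic division: deflating $f_1$ by $(x - 3)$ four times leaves a zero remainder each time. This is a quick consistency check in exact integer arithmetic, not part of the paper's procedure.

```python
# Coefficients of f1 in decreasing powers of x.
coeffs = [1, -29, 349, -2261, 8455, -17663, 15927, 6993, -24732, 12960]

def deflate(c, r):
    # Synthetic division of the polynomial with coefficients c by (x - r):
    # returns (quotient coefficients, remainder).
    out = [c[0]]
    for a in c[1:]:
        out.append(a + r * out[-1])
    return out[:-1], out[-1]

rems = []
c = coeffs
for _ in range(4):
    c, rem = deflate(c, 3)
    rems.append(rem)
print(rems)   # [0, 0, 0, 0]: (x - 3)^4 divides f1
```

Since the arithmetic is exact, the four zero remainders confirm that $(x-3)^4$ divides the characteristic polynomial.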
Example 2
(Beam Designing Model). We consider a beam positioning problem (see [26]) where a beam of length r units is leaning against the edge of a cubical box with sides of length 1 unit each, such that one end of the beam touches the wall and the other end touches the floor, as shown in Figure 7.
The problem is: what should be the distance along the floor from the base of the wall to the bottom of the beam? Suppose y is the distance along the beam from the floor to the edge of the box, and let x be the distance from the base of the box to the bottom of the beam. Then, for a particular value of r, x satisfies the equation

$$x^4 + 4x^3 - 24x^2 + 16x + 16 = 0.$$
The positive solution of the equation is a double root $x = 2$. We consider the initial guess $x 0 = 1 . 8$ to find the root. Numerical results of various methods are shown in Table 2.
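As a sketch, the beam equation can also be solved with the derivative-free iteration (2) in ordinary double precision; $\beta = 10^{-4}$ is an illustrative choice. Note that double precision resolves a double root only to roughly the square root of machine epsilon, which is one reason the paper uses 4096-digit arithmetic; the guards below stop the iteration once the update falls below floating-point resolution.

```python
def f(x):
    return x**4 + 4*x**3 - 24*x**2 + 16*x + 16

x, m, beta = 1.8, 2, 1e-4       # double root at x = 2, x0 = 1.8 as in the text
for _ in range(25):
    fx = f(x)
    w = x + beta * fx
    if w == x or f(w) == fx:    # update below floating-point resolution
        break
    step = m * fx * (w - x) / (f(w) - fx)
    x -= step
    if abs(step) < 1e-8:
        break
print(x)   # close to 2, limited by double-precision roundoff
```

The attainable accuracy here is limited by roundoff in evaluating the expanded polynomial near its double root, not by the method itself.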
Example 3.
The van der Waals equation of state,

$$\left(P + \frac{a_1 n^2}{V^2}\right)\left(V - n a_2\right) = n R T,$$

explains the behaviour of a real gas by introducing into the ideal gas equation two parameters, $a_1$ and $a_2$, specific to each gas. In order to determine the volume V of the gas in terms of the remaining parameters, we are required to solve the nonlinear equation in V:

$$P V^3 - \left(n a_2 P + n R T\right) V^2 + a_1 n^2 V - a_1 a_2 n^3 = 0.$$
Given the constants $a_1$ and $a_2$ of a particular gas, one can find values of n, P and T such that this equation has three real roots. Using particular values, we obtain the nonlinear function

$$f_3(x) = x^3 - 5.22 x^2 + 9.0825 x - 5.2675,$$

which has three roots, one being a multiple zero $\alpha = 1.75$ of multiplicity 2 and the other a simple zero $\xi = 1.72$. However, we seek the multiple zero $\alpha = 1.75$, and consider the initial guess $x_0 = 1.8$ to obtain it. The numerical results so obtained are shown in Table 3.
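The stated root structure can be verified exactly: in rational arithmetic, $(x - 7/4)^2\,(x - 43/25)$ expands to precisely the coefficients of the cubic above. This is an illustrative consistency check, not part of the paper's computation.

```python
from fractions import Fraction as F

# Exact check that the cubic factors as (x - 7/4)^2 (x - 43/25), i.e. a
# double root at 1.75 and a simple root at 1.72.
a, b = F(7, 4), F(43, 25)
# (x - a)^2 (x - b) = x^3 - (2a + b) x^2 + (a^2 + 2ab) x - a^2 b
coeffs = [F(1), -(2*a + b), a*a + 2*a*b, -(a*a*b)]
floats = [float(c) for c in coeffs]
print(floats)   # [1.0, -5.22, 9.0825, -5.2675]
```

Because `Fraction` arithmetic is exact, the match with the printed coefficients is not subject to rounding error.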
Example 4.
Lastly, consider the test function

$$f_4(x) = \left(x^2 + 1\right)\left(2x\, e^{x^2+1} + x^3 - x\right)\cosh^2\!\left(\frac{\pi x}{2}\right).$$

The function has a multiple zero $\alpha = i$ of multiplicity 4. We choose the initial approximation $x_0 = 1.25\,i$ to obtain this zero. The numerical results of the various methods are shown in Table 4.

## 5. Conclusions

In the foregoing work, we have developed a class of optimal second-order methods free from derivatives for solving nonlinear univariate equations. The convergence analysis has shown second-order convergence under standard assumptions on the nonlinear functions whose zeros we seek. Some special cases of the class were discussed. Their stability was tested by analysing the complex dynamics shown by their basins of attraction. We have noticed from the graphics that the basins become wider and wider as the parameter β assumes smaller values. The methods were employed to solve some nonlinear univariate equations and were also compared with existing techniques. We conclude the paper with the remark that derivative-free techniques may prove to be good alternatives to Newton-like methods in cases where derivatives are expensive to evaluate.

## Author Contributions

The contributions of all the authors have been similar. All of them worked together to develop the present manuscript. All authors have read and agreed to the published version of the manuscript.

## Funding

This research received no external funding.

## Conflicts of Interest

The authors declare no conflict of interest.

## References

1. Argyros, I.K. Convergence and Applications of Newton-Type Iterations; Springer: New York, NY, USA, 2008.
2. Hoffman, J.D. Numerical Methods for Engineers and Scientists; McGraw-Hill Book Company: New York, NY, USA, 1992.
3. Schröder, E. Über unendlich viele Algorithmen zur Auflösung der Gleichungen. Math. Ann. 1870, 2, 317–365.
4. Kumar, D.; Sharma, J.R.; Cesarano, C. One-point optimal family of multiple root solvers of second-order. Mathematics 2019, 7, 655.
5. Dong, C. A family of multipoint iterative functions for finding multiple roots of equations. Int. J. Comput. Math. 1987, 21, 363–367.
6. Geum, Y.H.; Kim, Y.I.; Neta, B. A class of two-point sixth-order multiple-zero finders of modified double-Newton type and their dynamics. Appl. Math. Comput. 2015, 70, 387–400.
7. Hansen, E.; Patrick, M. A family of root finding methods. Numer. Math. 1977, 27, 257–269.
8. Kansal, M.; Kanwar, V.; Bhatia, S. On some optimal multiple root-finding methods and their dynamics. Appl. Appl. Math. 2015, 10, 349–367.
9. Li, S.G.; Cheng, L.Z.; Neta, B. Some fourth-order nonlinear solvers with closed formulae for multiple roots. Comput. Math. Appl. 2010, 59, 126–135.
10. Li, S.; Liao, X.; Cheng, L. A new fourth-order iterative method for finding multiple roots of nonlinear equations. Appl. Math. Comput. 2009, 215, 1288–1292.
11. Lotfi, T.; Sharifi, S.; Salimi, M.; Siegmund, S. A new class of three-point methods with optimal convergence order eight and its dynamics. Numer. Algor. 2015, 68, 261–288.
12. Neta, B. New third order nonlinear solvers for multiple roots. Appl. Math. Comput. 2008, 202, 162–170.
13. Osada, N. An optimal multiple root-finding method of order three. J. Comput. Appl. Math. 1994, 51, 131–133.
14. Sharifi, M.; Babajee, D.K.R.; Soleymani, F. Finding the solution of nonlinear equations by a class of optimal methods. Comput. Math. Appl. 2012, 63, 764–774.
15. Sharma, J.R.; Sharma, R. Modified Jarratt method for computing multiple roots. Appl. Math. Comput. 2010, 217, 878–881.
16. Soleymani, F.; Babajee, D.K.R.; Lotfi, T. On a numerical technique for finding multiple zeros and its dynamics. J. Egypt. Math. Soc. 2013, 21, 346–353.
17. Victory, H.D.; Neta, B. A higher order method for multiple zeros of nonlinear functions. Int. J. Comput. Math. 1983, 12, 329–335.
18. Yun, B.I. A non-iterative method for solving non-linear equations. Appl. Math. Comput. 2008, 198, 691–699.
19. Zhou, X.; Chen, X.; Song, Y. Constructing higher-order methods for obtaining the multiple roots of nonlinear equations. J. Comput. Appl. Math. 2011, 235, 4199–4206.
20. Kung, H.T.; Traub, J.F. Optimal order of one-point and multipoint iteration. J. Assoc. Comput. Mach. 1974, 21, 643–651.
21. Traub, J.F. Iterative Methods for the Solution of Equations; Chelsea Publishing Company: New York, NY, USA, 1982.
22. Vrscay, E.R.; Gilbert, W.J. Extraneous fixed points, basin boundaries and chaotic dynamics for Schröder and König rational iteration functions. Numer. Math. 1988, 52, 1–16.
23. Scott, M.; Neta, B.; Chun, C. Basin attractors for various methods. Appl. Math. Comput. 2011, 218, 2584–2599.
24. Varona, J.L. Graphic and numerical comparison between iterative methods. Math. Intell. 2002, 24, 37–46.
25. Weerakoon, S.; Fernando, T.G.I. A variant of Newton's method with accelerated third-order convergence. Appl. Math. Lett. 2000, 13, 87–93.
26. Zachary, J.L. Introduction to Scientific Programming: Computational Problem Solving Using Maple and C; Springer: New York, NY, USA, 2012.
Figure 1. Basins of attraction of M1–M8 for polynomial $P_1(z)$ $(\beta = 10^{-2})$.
Figure 2. Basins of attraction of M1–M8 for polynomial $P_1(z)$ $(\beta = 10^{-4})$.
Figure 3. Basins of attraction of M1–M8 for polynomial $P_1(z)$ $(\beta = 10^{-8})$.
Figure 4. Basins of attraction of M1–M8 for polynomial $P_2(z)$ $(\beta = 10^{-2})$.
Figure 5. Basins of attraction of M1–M8 for polynomial $P_2(z)$ $(\beta = 10^{-4})$.
Figure 6. Basins of attraction of M1–M8 for polynomial $P_2(z)$ $(\beta = 10^{-8})$.
Figure 7. Beam positioning problem.
Table 1. Comparison of the performances of methods for Example 1.

| Methods | n | $\|e_{n-2}\|$ | $\|e_{n-1}\|$ | $\|e_n\|$ | $f(x_{n+1})$ | COC |
|---|---|---|---|---|---|---|
| MNM | 7 | $1.70 \times 10^{-21}$ | $6.84 \times 10^{-43}$ | $1.11 \times 10^{-85}$ | $5.90 \times 10^{-681}$ | 2.000 |
| M1 | 7 | $2.93 \times 10^{-21}$ | $2.09 \times 10^{-42}$ | $1.07 \times 10^{-84}$ | $4.70 \times 10^{-673}$ | 2.000 |
| M2 | 7 | $3.27 \times 10^{-24}$ | $1.87 \times 10^{-48}$ | $6.10 \times 10^{-97}$ | $1.43 \times 10^{-771}$ | 2.000 |
| M3 | 7 | $3.66 \times 10^{-23}$ | $1.84 \times 10^{-46}$ | $4.67 \times 10^{-93}$ | $6.47 \times 10^{-741}$ | 2.000 |
| M4 | 7 | $3.49 \times 10^{-18}$ | $4.41 \times 10^{-36}$ | $7.04 \times 10^{-72}$ | $8.30 \times 10^{-570}$ | 2.000 |
| M5 | 6 | $6.85 \times 10^{-15}$ | $5.28 \times 10^{-30}$ | $3.14 \times 10^{-60}$ | $1.21 \times 10^{-478}$ | 2.000 |
| M6 | 7 | $2.03 \times 10^{-21}$ | $9.79 \times 10^{-43}$ | $2.27 \times 10^{-85}$ | $1.82 \times 10^{-678}$ | 2.000 |
| M7 | 6 | $2.46 \times 10^{-13}$ | $8.35 \times 10^{-27}$ | $9.60 \times 10^{-54}$ | $2.06 \times 10^{-426}$ | 2.000 |
| M8 | 7 | $6.88 \times 10^{-20}$ | $1.36 \times 10^{-39}$ | $5.33 \times 10^{-79}$ | $3.56 \times 10^{-627}$ | 2.000 |
Table 2. Comparison of the performances of methods for Example 2.

| Methods | n | $\|e_{n-2}\|$ | $\|e_{n-1}\|$ | $\|e_n\|$ | $f(x_{n+1})$ | COC |
|---|---|---|---|---|---|---|
| MNM | 7 | $1.61 \times 10^{-20}$ | $6.51 \times 10^{-41}$ | $1.06 \times 10^{-81}$ | $1.86 \times 10^{-324}$ | 2.000 |
| M1 | 7 | $3.60 \times 10^{-21}$ | $2.94 \times 10^{-41}$ | $1.95 \times 10^{-84}$ | $1.77 \times 10^{-335}$ | 2.000 |
| M2 | 7 | $7.50 \times 10^{-18}$ | $2.11 \times 10^{-35}$ | $1.68 \times 10^{-70}$ | $2.71 \times 10^{-279}$ | 2.000 |
| M3 | 7 | $2.63 \times 10^{-18}$ | $2.44 \times 10^{-36}$ | $2.08 \times 10^{-72}$ | $5.58 \times 10^{-287}$ | 2.000 |
| M4 | 6 | $1.85 \times 10^{-22}$ | $4.10 \times 10^{-47}$ | $2.02 \times 10^{-96}$ | $5.76 \times 10^{-388}$ | 2.000 |
| M5 | 7 | $6.66 \times 10^{-16}$ | $2.23 \times 10^{-31}$ | $2.48 \times 10^{-62}$ | $2.30 \times 10^{-246}$ | 2.000 |
| M6 | 7 | $1.30 \times 10^{-20}$ | $4.26 \times 10^{-41}$ | $4.57 \times 10^{-82}$ | $6.60 \times 10^{-326}$ | 2.000 |
| M7 | 7 | $1.41 \times 10^{-17}$ | $7.79 \times 10^{-35}$ | $2.38 \times 10^{-69}$ | $1.19 \times 10^{-274}$ | 2.000 |
| M8 | 6 | $1.16 \times 10^{-17}$ | $6.61 \times 10^{-36}$ | $2.13 \times 10^{-72}$ | $1.18 \times 10^{-288}$ | 2.000 |
Table 3. Comparison of the performances of methods for Example 3.

| Methods | n | $\|e_{n-2}\|$ | $\|e_{n-1}\|$ | $\|e_n\|$ | $f(x_{n+1})$ | COC |
|---|---|---|---|---|---|---|
| MNM | 9 | $3.65 \times 10^{-12}$ | $2.22 \times 10^{-22}$ | $8.24 \times 10^{-43}$ | $3.84 \times 10^{-168}$ | 2.000 |
| M1 | 9 | $2.99 \times 10^{-12}$ | $1.49 \times 10^{-22}$ | $3.70 \times 10^{-43}$ | $1.55 \times 10^{-169}$ | 2.000 |
| M2 | 9 | $9.51 \times 10^{-12}$ | $1.52 \times 10^{-21}$ | $3.88 \times 10^{-41}$ | $1.92 \times 10^{-161}$ | 2.000 |
| M3 | 9 | $7.90 \times 10^{-12}$ | $1.05 \times 10^{-21}$ | $1.84 \times 10^{-41}$ | $9.58 \times 10^{-163}$ | 2.000 |
| M4 | 9 | $4.35 \times 10^{-13}$ | $3.11 \times 10^{-24}$ | $1.59 \times 10^{-46}$ | $5.18 \times 10^{-183}$ | 2.000 |
| M5 | 9 | $2.28 \times 10^{-11}$ | $8.82 \times 10^{-21}$ | $1.32 \times 10^{-39}$ | $2.58 \times 10^{-155}$ | 2.000 |
| M6 | 9 | $3.81 \times 10^{-12}$ | $2.42 \times 10^{-22}$ | $9.74 \times 10^{-43}$ | $7.49 \times 10^{-168}$ | 2.000 |
| M7 | 9 | $1.08 \times 10^{-11}$ | $1.95 \times 10^{-21}$ | $6.42 \times 10^{-41}$ | $1.44 \times 10^{-160}$ | 2.000 |
| M8 | 9 | $7.78 \times 10^{-16}$ | $1.01 \times 10^{-29}$ | $1.69 \times 10^{-57}$ | $6.83 \times 10^{-227}$ | 2.000 |
Table 4. Comparison of the performances of methods for Example 4.

| Methods | n | $\|e_{n-2}\|$ | $\|e_{n-1}\|$ | $\|e_n\|$ | $f(x_{n+1})$ | COC |
|---|---|---|---|---|---|---|
| MNM | 7 | $2.87 \times 10^{-15}$ | $2.75 \times 10^{-30}$ | $2.51 \times 10^{-60}$ | $5.82 \times 10^{-478}$ | 2.000 |
| M1 | 7 | $2.88 \times 10^{-15}$ | $2.77 \times 10^{-30}$ | $2.55 \times 10^{-60}$ | $6.58 \times 10^{-478}$ | 2.000 |
| M2 | 7 | $3.42 \times 10^{-15}$ | $3.97 \times 10^{-30}$ | $5.34 \times 10^{-60}$ | $2.59 \times 10^{-475}$ | 2.000 |
| M3 | 7 | $4.44 \times 10^{-15}$ | $6.86 \times 10^{-30}$ | $1.64 \times 10^{-59}$ | $2.27 \times 10^{-471}$ | 2.000 |
| M4 | 7 | $5.71 \times 10^{-15}$ | $1.16 \times 10^{-29}$ | $4.78 \times 10^{-59}$ | $1.30 \times 10^{-467}$ | 2.000 |
| M5 | 7 | $5.49 \times 10^{-15}$ | $1.07 \times 10^{-29}$ | $4.09 \times 10^{-59}$ | $3.73 \times 10^{-468}$ | 2.000 |
| M6 | 7 | $2.99 \times 10^{-15}$ | $2.99 \times 10^{-30}$ | $2.97 \times 10^{-60}$ | $2.22 \times 10^{-477}$ | 2.000 |
| M7 | 7 | $4.48 \times 10^{-15}$ | $6.99 \times 10^{-30}$ | $1.70 \times 10^{-59}$ | $3.07 \times 10^{-471}$ | 2.000 |
| M8 | 7 | $3.37 \times 10^{-15}$ | $3.83 \times 10^{-30}$ | $4.94 \times 10^{-60}$ | $1.37 \times 10^{-475}$ | 2.000 |
