# Improved Higher Order Compositions for Nonlinear Equations

1 Department of Mathematics, Hans Raj Mahila Mahavidyalaya, Jalandhar 144008, India
2 Department of Computing and Mathematical Sciences, Cameron University, Lawton, OK 73505, USA
* Author to whom correspondence should be addressed.
Foundations 2023, 3(1), 25-36; https://doi.org/10.3390/foundations3010003
Received: 14 November 2022 / Revised: 29 December 2022 / Accepted: 3 January 2023 / Published: 6 January 2023
(This article belongs to the Special Issue Iterative Methods with Applications in Mathematical Sciences II)

## Abstract

In the present study, two new compositions of convergence order six are presented for solving nonlinear equations. The first method is obtained from the third-order one given by Homeier using linear interpolation, and the second one is obtained from the third-order method given by Traub using divided differences. The first method requires three evaluations of the function and one evaluation of the first derivative, thereby enhancing the efficiency index. In the second method, the computation of a derivative is reduced by approximating it using divided differences. Various numerical experiments are performed which demonstrate the accuracy and efficacy of the proposed methods.

## 1. Introduction

The design and conceptualization of higher order iterative methods for solving nonlinear equations is of great importance in numerical analysis and many scientific branches [1,2,3,4,5,6]. A plethora of iterative methods [7,8,9,10,11,12,13,14] have been developed by various researchers to solve nonlinear equations of the form
$f(x) = 0, \qquad (1)$
where $f : D ⊂ R → R$ is a continuously differentiable nonlinear function defined on an open interval $D$. One of the widely used iterative methods is Newton’s method with quadratic convergence, which is given as
$x_{k+1} = x_k - \frac{f(x_k)}{f'(x_k)}, \quad k = 0, 1, 2, \ldots \qquad (2)$
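As a concrete illustration (not part of the original paper), Newton's method can be sketched in a few lines; the test equation $x^3 + 4x^2 - 10 = 0$ is the function $f_6$ used in the experiments of Section 5:

```python
def newton(f, df, x0, tol=1e-12, max_iter=100):
    """Newton's method: x_{k+1} = x_k - f(x_k) / f'(x_k)."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        x -= fx / df(x)  # one f and one f' evaluation per iteration
    return x

# Solve x^3 + 4x^2 - 10 = 0 starting from x0 = 1
root = newton(lambda x: x**3 + 4 * x**2 - 10, lambda x: 3 * x**2 + 8 * x, 1.0)
```

Near a simple root, each iteration roughly doubles the number of correct digits, which is the quadratic convergence referred to above.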
Many other applications such as transportation, electron theory, the geometric theory of relativistic string, chemical speciation, chemical engineering, and queuing models also generate numerous such equations [15,16,17]. In most cases, the problems transformed into nonlinear equations can not be solved analytically. In order to approximate them numerically, adequate iterative methods are taken into consideration. The recent trend is to develop higher order iterative methods to solve nonlinear equations of the form (1) as they provide an efficient approximation and more accuracy in finding the solution. Higher-order iterative methods are important because many applications require faster convergence. But at the same time, it is very important to maintain an equilibrium between the convergence rate and the operational cost. Newton’s method has been modified in a number of ways at the additional cost of evaluation of a function, derivative and changes in the points of iteration in order to increase its efficiency index and order of convergence. Many researchers have proposed numerous higher order methods in order to improve the convergence of Newton’s method.
Neta [18] developed a sixth-order iterative method (NEM), given for $k = 0, 1, 2, \ldots$ as:
$w_k = x_k - \frac{f(x_k)}{f'(x_k)}, \quad z_k = w_k - \frac{f(x_k) + 2 f(w_k)}{f(x_k)} \, \frac{f(w_k)}{f'(x_k)}, \quad x_{k+1} = z_k - \frac{f(x_k) - f(w_k) + f(z_k)}{f(x_k) - 3 f(w_k) + f(z_k)} \, \frac{f(z_k)}{f'(x_k)}. \qquad (3)$
This method requires three evaluations of f and one evaluation of its first derivative $f ′$ per iteration.
A sixth-order variant of the Jarratt method (KLM) has been developed by Kou and Li [19]. It is given for $k = 0, 1, 2, \ldots$ as:
$y_k = x_k - \frac{2}{3} \, \frac{f(x_k)}{f'(x_k)}, \quad z_k = x_k - J_f(x_k) \, \frac{f(x_k)}{f'(x_k)}, \quad x_{k+1} = z_k - \frac{f(z_k)}{\frac{3}{2} J_f(x_k) f'(y_k) + \left( 1 - \frac{3}{2} J_f(x_k) \right) f'(x_k)}, \qquad (4)$
where $J_f(x_k) = \frac{3 f'(y_k) + f'(x_k)}{6 f'(y_k) - 2 f'(x_k)}$. This method requires two evaluations of f and two evaluations of $f'$ per iteration.
Singh [20] developed two sixth-order iterative methods for $k = 0, 1, 2, \ldots$. They are given as follows.
The first sixth-order Singh Method (SM1) is:
$y_k = x_k - \frac{f(x_k)}{f'(x_k)}, \quad z_k = y_k - \frac{1}{2} \left( \frac{f'(x_k) - f'(y_k)}{f(x_k) + f'(x_k)} \right) \frac{f(x_k)}{f'(x_k)}, \quad x_{k+1} = z_k - \frac{2 f(z_k) \left( f(x_k) + f'(x_k) \right)}{2 f(x_k) f'(y_k) + 4 f'(x_k) f'(y_k) - (f'(x_k))^2 - (f'(y_k))^2}. \qquad (5)$
This method also requires two evaluations each of f and $f'$ per iteration.
The second sixth-order Singh Method (SM2) is:
$y_k = x_k + \frac{f(x_k)}{f'(x_k)}, \quad z_k = x_k - \frac{f(y_k) - f(x_k)}{f'(x_k)}, \quad x_{k+1} = z_k - \frac{f(z_k)}{f[z_k, y_k] + f[z_k, x_k, x_k](z_k - y_k)}. \qquad (6)$
This method utilizes three evaluations of f and one evaluation of $f'$ per iteration.
Sharma et al. [21] proposed a sixth-order iterative method (SSM). It is given for $k = 0, 1, 2, \ldots$ as:
$y_k = x_k - \frac{f(x_k)}{f'(x_k)}, \quad z_k = x_k - \left( \frac{3}{2} - \frac{1}{2} \, \frac{f'(y_k)}{f'(x_k)} \right) \frac{f(x_k)}{f'(x_k)}, \quad x_{k+1} = z_k - \left( \frac{7}{2} - \left( 4 - \frac{3}{2} \, \frac{f'(y_k)}{f'(x_k)} \right) \frac{f'(y_k)}{f'(x_k)} \right) \frac{f(z_k)}{f'(x_k)}. \qquad (7)$
This method utilizes two evaluations of the function f and two evaluations of the first order derivative $f ′$ per step.
Motivated by the ongoing research in this direction, we develop and analyze two higher order iterative methods for solving nonlinear equations using the techniques of linear interpolation and divided differences [5,22,23,24,25,26,27]. The first sixth-order method is obtained by introducing a third step and approximating its derivative by linear interpolation. In a similar manner, a third step is added to a second third-order method, but its derivative is approximated by divided differences up to second order, which also leads to a sixth-order method. Convergence analysis of both methods is established. The efficiency index of the first proposed method is enhanced from $1.43097$ to $1.56508$, and the second method involves one less computation of the derivative. This is the novelty behind the present work. Various nonlinear equations are solved, and the comparison results indicate better performance of the first presented scheme over the existing ones [18,19,20,21].
The contents of the paper are summarized as follows. Section 2 contains preliminaries, definitions and auxiliary results. Section 3 establishes the first sixth-order method, along with its convergence analysis, using linear interpolation. The development and analysis of the second sixth-order method, using divided differences, are presented in Section 4. In Section 5, numerical examples are worked out to verify the theoretical results and to compare the proposed methods with existing ones. Section 6 contains the concluding remarks.

## 2. Preliminaries

In order to make the study as self-contained as possible, we include some standard definitions and results.
Definition 1.
Let ${ v k }$ be a sequence convergent to some parameter ψ. Then, the convergence is called:
(i)
Linear, if there exists a parameter $l \in (0, 1)$ and a natural number $k_0$ such that
$| v_{k+1} - \psi | \le l \, | v_k - \psi | \quad \text{for each } k \ge k_0.$
(ii)
Of convergence order $q$, $q \ge 2$, if there exist a parameter $L > 0$ and a natural number $k_0$ such that
$| v_{k+1} - \psi | \le L \, | v_k - \psi |^q \quad \text{for each } k \ge k_0.$
Definition 2.
Let ψ be a root of the function f. Suppose that $v_{k-1}$, $v_k$, $v_{k+1}$, $v_{k+2}$ are consecutive iterates close to ψ. Then, the computational convergence order ρ is defined by the formula
$\rho \approx \frac{\ln \left( | v_{k+1} - \psi | / | v_k - \psi | \right)}{\ln \left( | v_k - \psi | / | v_{k-1} - \psi | \right)} \quad \text{if } \psi \text{ is known}.$
A second type of convergence order (Approximate Computational) α is defined by the formula
$\alpha \approx \frac{\ln \left( | v_{k+2} - v_{k+1} | / | v_{k+1} - v_k | \right)}{\ln \left( | v_{k+1} - v_k | / | v_k - v_{k-1} | \right)} \quad \text{if } \psi \text{ is unknown}.$
The efficiency index $q^{1/\delta}$, where q is the convergence order and δ is the number of new function evaluations per iteration, is often utilized to compare different methods.
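Both quantities are easy to evaluate in practice. The sketch below (an illustration, not part of the paper; the helper names are our own) computes efficiency indices such as $6^{1/5}$ and $6^{1/4}$, which reappear in Sections 3 and 4, and estimates α from four consecutive iterates:

```python
import math

def efficiency_index(q, delta):
    """Efficiency index q**(1/delta): order q with delta function evaluations per iteration."""
    return q ** (1.0 / delta)

def acoc(v):
    """Approximate computational order of convergence (ACOC) from the
    last four consecutive iterates v[-4], v[-3], v[-2], v[-1]."""
    d0 = abs(v[-3] - v[-4])
    d1 = abs(v[-2] - v[-3])
    d2 = abs(v[-1] - v[-2])
    return math.log(d2 / d1) / math.log(d1 / d0)

# Sixth-order methods: five evaluations give 6**(1/5), four give 6**(1/4)
ei5, ei4 = efficiency_index(6, 5), efficiency_index(6, 4)
```

Applied to Newton iterates for $x^2 - 2 = 0$, `acoc` returns a value close to 2, in line with Definition 1.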
Next, we restate Taylor’s expansion formula for real functions.
Lemma 1.
Let $f : \mathbb{R} \to \mathbb{R}$ be $q$-times differentiable in an interval $S$. Then, the following expression holds for each $x, d \in S$:
$f(x + d) = f(x) + f'(x) \, d + \frac{1}{2!} f''(x) \, d^2 + \frac{1}{3!} f'''(x) \, d^3 + \ldots + \frac{1}{(q-1)!} f^{(q-1)}(x) \, d^{q-1} + r_q,$
where
$| r_q | \le \frac{1}{q!} \sup_{\theta \in [0, 1]} \left| f^{(q)}(x + \theta d) \right| \, | d |^q.$

## 3. Development of First Sixth Order Iterative Method

In this section, we propose a three-step iterative method for solving the nonlinear Equation (1), starting from the third-order Newton-type composition given by Homeier [28]. This method is given as follows:
$y_k = x_k - \frac{f(x_k)}{f'(x_k)}, \quad x_{k+1} = x_k - \frac{1}{2} \left( \frac{1}{f'(x_k)} + \frac{1}{f'(y_k)} \right) f(x_k). \qquad (8)$
The third-order method (8) is extended to a sixth-order iterative method by adding a Newton-like step in the following manner:
$y_k = x_k - \frac{f(x_k)}{f'(x_k)}, \quad z_k = x_k - \frac{1}{2} \left( \frac{1}{f'(x_k)} + \frac{1}{f'(y_k)} \right) f(x_k), \quad x_{k+1} = z_k - \frac{f(z_k)}{f'(z_k)}, \qquad (9)$
where $k = 0, 1, 2, \ldots$ and the initial approximation $x_0$ is chosen suitably. The efficiency index of this method is $6^{1/5} = 1.43097$. The foremost aim of our study is to develop a novel sixth-order iterative method with a higher efficiency index. To this end, we reduce the number of evaluations by applying the following linear interpolation formula on the points $(x_k, f'(x_k))$ and $(y_k, f'(y_k))$ to approximate $f'(z_k)$:
$f'(z_k) \simeq \frac{z_k - x_k}{y_k - x_k} \, f'(y_k) + \frac{z_k - y_k}{x_k - y_k} \, f'(x_k). \qquad (10)$
This simplification gives
$f'(z_k) \simeq \frac{2 f'(x_k) f'(y_k) + (f'(y_k))^2 - (f'(x_k))^2}{2 f'(y_k)}. \qquad (11)$
Substituting (11) in (9), the new three-step sixth-order method is given as:
$y_k = x_k - \frac{f(x_k)}{f'(x_k)}, \quad z_k = x_k - \frac{1}{2} \left( \frac{1}{f'(x_k)} + \frac{1}{f'(y_k)} \right) f(x_k), \quad x_{k+1} = z_k - \frac{2 f(z_k) f'(y_k)}{2 f'(x_k) f'(y_k) + (f'(y_k))^2 - (f'(x_k))^2}. \qquad (12)$
This method utilizes two evaluations of the function f and two evaluations of the first order derivative $f ′$ at each step. The convergence analysis of the sixth-order method (12) is established in the next theorem.
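A transcription of (12) into Python might look as follows; it is a sketch with illustrative tolerances, not the authors' implementation. The test problem $f_2(x) = x^3 - x^2 - 1$ with $x_0 = 2$ is taken from Section 5:

```python
def gm1(f, df, x0, tol=1e-13, max_iter=50):
    """Sixth-order method (12): Homeier's two substeps followed by a Newton-like
    substep in which f'(z_k) is replaced by the linear-interpolation
    approximation (11).  Costs two evaluations of f and two of f' per iteration."""
    x = x0
    for _ in range(max_iter):
        fx, dfx = f(x), df(x)
        if fx == 0.0:
            return x
        y = x - fx / dfx                             # Newton substep
        dfy = df(y)
        z = x - 0.5 * (1.0 / dfx + 1.0 / dfy) * fx   # Homeier substep
        # f'(z) ~ (2 f'(x) f'(y) + f'(y)^2 - f'(x)^2) / (2 f'(y)), Equation (11)
        x_new = z - 2.0 * f(z) * dfy / (2.0 * dfx * dfy + dfy**2 - dfx**2)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# f_2(x) = x^3 - x^2 - 1 with x0 = 2 (a test case from Section 5)
root = gm1(lambda x: x**3 - x**2 - 1, lambda x: 3 * x**2 - 2 * x, 2.0)
```

In this sketch, the iterates reach the root $1.465571231876768\ldots$ to machine precision in a few iterations.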
Theorem 1.
Let $f : D ⊂ R → R$ be a sufficiently differentiable function in an open interval $D$ and $x 0$ be a close approximation to its simple root $ψ ∈ D$. Then, the iterative method (12) satisfies the following error equation:
$e_{k+1} = -\frac{5}{4} a_2 a_3^2 \, e_k^6 + O(e_k^7), \qquad (13)$
where $a_k = \frac{f^{(k)}(\psi)}{k! \, f'(\psi)}$, for $k = 2, 3, \ldots$.
Proof.
Let $e_k = x_k - \psi$ be the error in the $k$-th iterate. Applying Taylor expansion of $f(x_k)$ and $f'(x_k)$ about $\psi$, we obtain
$f(x_k) = f'(\psi) \left( e_k + a_2 e_k^2 + a_3 e_k^3 + a_4 e_k^4 + a_5 e_k^5 + a_6 e_k^6 + O(e_k^7) \right), \qquad (14)$
$f'(x_k) = f'(\psi) \left( 1 + 2 a_2 e_k + 3 a_3 e_k^2 + 4 a_4 e_k^3 + 5 a_5 e_k^4 + 6 a_6 e_k^5 + O(e_k^6) \right). \qquad (15)$
Substituting (14) and (15) in the first substep of (12), we obtain
$y_k = \psi + a_2 e_k^2 + 2 (a_3 - a_2^2) e_k^3 + (3 a_4 - 7 a_2 a_3 + 4 a_2^3) e_k^4 - 2 (4 a_2^4 + 3 a_3^2 - 10 a_2^2 a_3 + 5 a_2 a_4 - 2 a_5) e_k^5 + (16 a_2^5 + 33 a_2 a_3^2 - 52 a_2^3 a_3 + 28 a_2^2 a_4 - 13 a_2 a_5 - 17 a_3 a_4 + 5 a_6) e_k^6 + O(e_k^7).$
Then, the Taylor expansion about $ψ$ gives,
$f(y_k) = f'(\psi) \big[ a_2 e_k^2 + 2 (a_3 - a_2^2) e_k^3 + (5 a_2^3 - 7 a_2 a_3 + 3 a_4) e_k^4 - (12 a_2^4 - 24 a_2^2 a_3 + 6 a_3^2 + 10 a_2 a_4 - 4 a_5) e_k^5 + (28 a_2^5 + 37 a_2 a_3^2 - 73 a_2^3 a_3 + 34 a_2^2 a_4 - 13 a_2 a_5 - 17 a_3 a_4 + 5 a_6) e_k^6 + O(e_k^7) \big],$
and
$f'(y_k) = f'(\psi) \big[ 1 + 2 a_2^2 e_k^2 - 4 a_2 (a_2^2 - a_3) e_k^3 + a_2 (8 a_2^3 - 11 a_2 a_3 + 6 a_4) e_k^4 - 4 a_2 (4 a_2^4 - 7 a_2^2 a_3 + 5 a_2 a_4 - 2 a_5) e_k^5 + 2 (16 a_2^6 - 34 a_2^4 a_3 + 30 a_2^3 a_4 + 6 a_3^3 - 8 a_2 a_3 a_4 - 13 a_2^2 a_5 + 5 a_2 a_6) e_k^6 + O(e_k^7) \big]. \qquad (17)$
Substituting (14), (15) and (17) in the second substep of (12) renders
$z_k = \psi + \frac{1}{2} \big[ a_3 e_k^3 + (2 a_2^3 - 3 a_2 a_3 + 2 a_4) e_k^4 + (-8 a_2^4 + 15 a_2^2 a_3 - 6 a_3^2 - 4 a_2 a_4 + 3 a_5) e_k^5 + (20 a_2^5 - 55 a_2^3 a_3 + 37 a_2 a_3^2 + 16 a_2^2 a_4 - 17 a_3 a_4 - 5 a_2 a_5 + 4 a_6) e_k^6 + O(e_k^7) \big]. \qquad (18)$
Expanding $f ( z k )$ about $ψ$ and using Taylor expansion, we obtain
$f(z_k) = f'(\psi) \big[ \frac{1}{2} a_3 e_k^3 + (a_2^3 - \frac{3}{2} a_2 a_3 + a_4) e_k^4 + (-4 a_2^4 + \frac{15}{2} a_2^2 a_3 - 3 a_3^2 - 2 a_2 a_4 + \frac{3}{2} a_5) e_k^5 + (10 a_2^5 - \frac{55}{2} a_2^3 a_3 + \frac{75}{4} a_2 a_3^2 + 8 a_2^2 a_4 - \frac{17}{2} a_3 a_4 - \frac{5}{2} a_2 a_5 + 2 a_6) e_k^6 + O(e_k^7) \big], \qquad (19)$
In view of (11), we obtain
$f'(z_k) \simeq f'(\psi) \big[ 1 - 2 a_2 a_3 e_k^3 + (2 a_2^4 + 3 a_2^2 a_3 - \frac{9}{2} a_3^2 - 2 a_2 a_4) e_k^4 + (-8 a_2^5 + 6 a_2^3 a_3 + 12 a_2 a_3^2 - 12 a_3 a_4 - 2 a_2 a_5) e_k^5 + (20 a_2^6 - 40 a_2^4 a_3 - 8 a_2^2 a_3^2 + 12 a_3^3 + 20 a_2^3 a_4 + 18 a_2 a_3 a_4 - 8 a_4^2 - 15 a_3 a_5 - 2 a_2 a_6) e_k^6 + O(e_k^7) \big]. \qquad (20)$
By substituting (19) and (20) in the last substep of (12), we obtain
$e_{k+1} = x_{k+1} - \psi = -\frac{5}{4} a_2 a_3^2 \, e_k^6 + O(e_k^7).$
The efficiency index of the method (12) is enhanced to $6^{1/4} = 1.56508$, which is better than that of method (9).

## 4. Development of Second Sixth Order Iterative Method

This section describes another sixth-order iterative method for solving nonlinear equations and its convergence analysis. Traub [29] proposed a third-order iterative method for $k = 0, 1, 2, \ldots$, given as:
$y_k = x_k - \frac{f(x_k)}{f'(x_k)}, \quad x_{k+1} = x_k - \left( \frac{3}{2} - \frac{1}{2} \, \frac{f'(y_k)}{f'(x_k)} \right) \frac{f(x_k)}{f'(x_k)}. \qquad (21)$
The new sixth-order iterative method obtained by extending (21) in a similar manner as done in the previous section is as follows:
$y_k = x_k - \frac{f(x_k)}{f'(x_k)}, \quad z_k = x_k - \left( \frac{3}{2} - \frac{1}{2} \, \frac{f'(y_k)}{f'(x_k)} \right) \frac{f(x_k)}{f'(x_k)}, \quad x_{k+1} = z_k - \frac{f(z_k)}{f'(z_k)}, \qquad (22)$
where $k = 0, 1, 2, \ldots$ and $x_0$ is a suitably chosen initial approximation close to the root. This technique requires two evaluations of the function and three evaluations of the derivative per iteration. Here, the number of derivative evaluations is reduced by approximating $f'(z_k)$ using the technique of divided differences up to the second order. Expanding $f(z_k)$ by Taylor expansion about $y_k$ up to second order, we obtain
$f(z_k) \simeq f(y_k) + f'(y_k)(z_k - y_k) + \frac{1}{2} f''(y_k)(z_k - y_k)^2. \qquad (23)$
Thus,
$f'(y_k) \simeq f[z_k, y_k] - \frac{1}{2} f''(y_k)(z_k - y_k),$
where $f[z_k, y_k] = \frac{f(z_k) - f(y_k)}{z_k - y_k}$ denotes the divided difference of first order. Similarly, the approximation of $f''(y_k)$ is given as:
$f''(y_k) \simeq \frac{2 \left( f[z_k, x_k] - f[x_k, x_k] \right)}{z_k - x_k} = 2 f[z_k, x_k, x_k]. $
To obtain $f'(z_k)$, we differentiate (23):
$f'(z_k) \simeq f'(y_k) + f''(y_k)(z_k - y_k). \qquad (24)$
Upon substitution of $f ′ ( y k )$ and $f ″ ( y k )$ in (24), we obtain
$f'(z_k) \simeq f[z_k, y_k] + f[z_k, x_k, x_k](z_k - y_k). \qquad (25)$
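As a quick numerical sanity check (our illustration; $f = \sin$ and the sample points are arbitrary choices standing in for $x_k$, $y_k$, $z_k$), the right-hand side of (25) reproduces the true derivative up to a small interpolation error:

```python
import math

f, df = math.sin, math.cos   # sample smooth function (illustrative choice)
x, y, z = 1.0, 1.1, 1.15     # arbitrary nearby points standing in for x_k, y_k, z_k

fzy = (f(z) - f(y)) / (z - y)                        # f[z, y]
fzxx = ((f(z) - f(x)) / (z - x) - df(x)) / (z - x)   # f[z, x, x] = (f[z, x] - f[x, x]) / (z - x)
approx = fzy + fzxx * (z - y)                        # right-hand side of (25)

# approx agrees with cos(1.15) to roughly O(h^2) for spacing h ~ 0.1
```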
Then, substituting (25) in the last substep of (22), the new three-step sixth-order method is given as follows:
$y_k = x_k - \frac{f(x_k)}{f'(x_k)}, \quad z_k = x_k - \left( \frac{3}{2} - \frac{1}{2} \, \frac{f'(y_k)}{f'(x_k)} \right) \frac{f(x_k)}{f'(x_k)}, \quad x_{k+1} = z_k - \frac{f(z_k)}{f[z_k, y_k] + f[z_k, x_k, x_k](z_k - y_k)}. \qquad (26)$
This method utilizes three evaluations of the function f and two evaluations of the first order derivative $f ′$ at each step. The next theorem establishes the convergence of the iterative method (26).
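A transcription of (26) into Python might look as follows; this is a sketch with illustrative tolerances and guards, not the authors' implementation. The test problem $f_2(x) = x^3 - x^2 - 1$ with $x_0 = 2$ is again taken from Section 5:

```python
def gm2(f, df, x0, tol=1e-13, max_iter=50):
    """Sixth-order method (26): Traub's two substeps followed by a Newton-like
    substep in which f'(z_k) is approximated by divided differences as in (25).
    Costs three evaluations of f and two of f' per iteration."""
    x = x0
    for _ in range(max_iter):
        fx, dfx = f(x), df(x)
        if fx == 0.0:
            return x
        y = x - fx / dfx                               # Newton substep
        dfy = df(y)
        z = x - (1.5 - 0.5 * dfy / dfx) * fx / dfx     # Traub substep
        if abs(z - x) < tol or z == y:                 # already converged; avoid a
            return z                                   # degenerate divided difference
        fy, fz = f(y), f(z)
        fzy = (fz - fy) / (z - y)                      # f[z, y]
        fzxx = ((fz - fx) / (z - x) - dfx) / (z - x)   # f[z, x, x]
        x_new = z - fz / (fzy + fzxx * (z - y))
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Same test problem as for the first method: f_2(x) = x^3 - x^2 - 1, x0 = 2
root = gm2(lambda x: x**3 - x**2 - 1, lambda x: 3 * x**2 - 2 * x, 2.0)
```

Note the design choice of guarding the divided differences: once $z_k$ and $y_k$ agree to machine precision, the quotient in the last substep is no longer meaningful and the iteration can simply stop.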
Theorem 2.
Let $f : D ⊂ R → R$, $D$ being an open interval, be a sufficiently differentiable function. Let $x 0$ be a close approximation to its simple root $ψ ∈ D$. Then, for iterative method (26), the following error equation is satisfied:
$e_{k+1} = \frac{1}{4} \left( 16 a_2^5 - 8 a_2^3 a_3 - 3 a_2 a_3^2 \right) e_k^6 + O(e_k^7), \qquad (27)$
where $a_k = \frac{f^{(k)}(\psi)}{k! \, f'(\psi)}$, for $k = 2, 3, \ldots$.
Proof.
Let $e_k = x_k - \psi$ be the error in the $k$-th iterate. Applying Taylor expansion of $f(x_k)$ and $f'(x_k)$ about $\psi$, we obtain
$f(x_k) = f'(\psi) \left( e_k + a_2 e_k^2 + a_3 e_k^3 + a_4 e_k^4 + a_5 e_k^5 + a_6 e_k^6 + O(e_k^7) \right), \qquad (28)$
$f'(x_k) = f'(\psi) \left( 1 + 2 a_2 e_k + 3 a_3 e_k^2 + 4 a_4 e_k^3 + 5 a_5 e_k^4 + 6 a_6 e_k^5 + O(e_k^6) \right). \qquad (29)$
Substituting (28) and (29) in the first substep of (26), we obtain
$y_k = \psi + a_2 e_k^2 + 2 (a_3 - a_2^2) e_k^3 + (3 a_4 - 7 a_2 a_3 + 4 a_2^3) e_k^4 - 2 (4 a_2^4 + 3 a_3^2 - 10 a_2^2 a_3 + 5 a_2 a_4 - 2 a_5) e_k^5 + (16 a_2^5 + 33 a_2 a_3^2 - 52 a_2^3 a_3 + 28 a_2^2 a_4 - 13 a_2 a_5 - 17 a_3 a_4 + 5 a_6) e_k^6 + O(e_k^7).$
Taylor expansion about $ψ$ gives,
$f(y_k) = f'(\psi) \big[ a_2 e_k^2 + 2 (a_3 - a_2^2) e_k^3 + (5 a_2^3 - 7 a_2 a_3 + 3 a_4) e_k^4 - (12 a_2^4 - 24 a_2^2 a_3 + 6 a_3^2 + 10 a_2 a_4 - 4 a_5) e_k^5 + (28 a_2^5 + 37 a_2 a_3^2 - 73 a_2^3 a_3 + 34 a_2^2 a_4 - 13 a_2 a_5 - 17 a_3 a_4 + 5 a_6) e_k^6 + O(e_k^7) \big],$
and
$f'(y_k) = f'(\psi) \big[ 1 + 2 a_2^2 e_k^2 - 4 a_2 (a_2^2 - a_3) e_k^3 + a_2 (8 a_2^3 - 11 a_2 a_3 + 6 a_4) e_k^4 - 4 a_2 (4 a_2^4 - 7 a_2^2 a_3 + 5 a_2 a_4 - 2 a_5) e_k^5 + 2 (16 a_2^6 - 34 a_2^4 a_3 + 30 a_2^3 a_4 + 6 a_3^3 - 8 a_2 a_3 a_4 - 13 a_2^2 a_5 + 5 a_2 a_6) e_k^6 + O(e_k^7) \big]. \qquad (31)$
Substituting (28), (29) and (31) in the second substep of (26) renders
$z_k = \psi + \frac{1}{2} \big[ (4 a_2^2 + a_3) e_k^3 + (-18 a_2^3 + 9 a_2 a_3 + 2 a_4) e_k^4 + 3 (20 a_2^4 - 23 a_2^2 a_3 + a_3^2 + 4 a_2 a_4 + a_5) e_k^5 + (-176 a_2^5 + 313 a_2^3 a_3 - 74 a_2 a_3^2 - 100 a_2^2 a_4 + 7 a_3 a_4 + 15 a_2 a_5 + 4 a_6) e_k^6 + O(e_k^7) \big]. \qquad (32)$
Expanding $f ( z k )$ about $ψ$ using Taylor expansion, we obtain
$f(z_k) = f'(\psi) \big[ \frac{1}{2} (4 a_2^2 + a_3) e_k^3 + (-9 a_2^3 + \frac{9}{2} a_2 a_3 + a_4) e_k^4 + \frac{3}{2} (20 a_2^4 - 23 a_2^2 a_3 + a_3^2 + 4 a_2 a_4 + a_5) e_k^5 + \frac{1}{4} (-336 a_2^5 + 634 a_2^3 a_3 - 147 a_2 a_3^2 - 200 a_2^2 a_4 + 14 a_3 a_4 + 30 a_2 a_5 + 8 a_6) e_k^6 + O(e_k^7) \big]. \qquad (33)$
We obtain from (25),
$f'(z_k) \simeq f'(\psi) \big[ 1 + 4 (a_2^3 - a_2 a_3) e_k^3 + (-18 a_2^4 + 18 a_2^2 a_3 - 3 a_3^2 - a_2 a_4) e_k^4 + \frac{1}{2} (120 a_2^5 - 198 a_2^3 a_3 + 60 a_2 a_3^2 + 48 a_2^2 a_4 - 17 a_3 a_4 - 2 a_2 a_5) e_k^5 + (-176 a_2^6 + 409 a_2^4 a_3 - 201 a_2^2 a_3^2 + \frac{39}{2} a_3^3 - 142 a_2^3 a_4 + \frac{157}{2} a_2 a_3 a_4 - 6 a_4^2 + 31 a_2^2 a_5 - 11 a_3 a_5 - a_2 a_6) e_k^6 + O(e_k^7) \big]. \qquad (34)$
Substituting (33) and (34) in the last substep of (26), we obtain
$e_{k+1} = \frac{1}{4} \left( 16 a_2^5 - 8 a_2^3 a_3 - 3 a_2 a_3^2 \right) e_k^6 + O(e_k^7).$
The method (26) is preferable to (22), since it requires one less derivative evaluation per iteration.

## 5. Numerical Testing

In this section, the applicability of the proposed methods (12) and (26), henceforth denoted GM1 and GM2, respectively, is demonstrated on various nonlinear equations, thus validating the theoretical results obtained so far. Such nonlinear equations arise in diverse areas of science and engineering [5,6]. The results are compared with the methods SM1, SM2, NEM, KLM and SSM given by (5), (6), (3), (4) and (7), respectively. The test functions, together with their roots correct to 15 decimal places, are displayed in Table 1. Comparisons of the number of iterations and of the total number of function evaluations are displayed in Table 2 and Table 3, respectively.
The comparison results for $|x_{k+1} - x_k|$ and $|f(x_k)|$ for all considered examples are displayed in Table 4 and Table 5, respectively, up to the third iteration. All computations are performed in the programming package Mathematica [31] using 600 significant digits on an Intel(R) Core(TM) i5-8250U CPU @ 1.60 GHz with 8 GB of RAM running Windows 10 Pro. It can be observed that the accuracy of the approximations to the root produced by the proposed method GM1 is higher than that of the existing methods in most of the examples, while GM2 is competitive with the other methods. Thus, the numerical experiments demonstrate the applicability of the present study.

## 6. Conclusions

The current study includes the development of two sixth-order compositions for solving nonlinear equations. This has been done by adding a Newton-like step and approximating the derivative by linear interpolation and divided differences. The enhancement of the efficiency index of the first iterative method from $1.43097$ to $1.56508$ establishes the motivation behind the presented work. The second method involves one less evaluation of the derivative of the function thereby increasing its applicability. Numerical results corroborate the advantage of the proposed methods over the existing ones of the same order. In the future, we will extend these methods to Banach space-valued operators and equations.

## Author Contributions

Conceptualization, G.D. and I.K.A.; methodology, G.D. and I.K.A.; software, G.D. and I.K.A.; validation, G.D. and I.K.A.; formal analysis, G.D. and I.K.A.; investigation, G.D. and I.K.A.; resources, G.D. and I.K.A.; data curation, G.D. and I.K.A.; writing—original draft preparation, G.D. and I.K.A.; writing—review and editing, G.D. and I.K.A.; visualization, G.D. and I.K.A.; supervision, G.D. and I.K.A.; project administration, G.D. and I.K.A.; funding acquisition, G.D. and I.K.A. All authors have read and agreed to the published version of the manuscript.

## Funding

This research received no external funding.


## Conflicts of Interest

The authors declare no conflict of interest.

## References

1. Argyros, I.K.; Cho, Y.J.; Hilout, S. Numerical Methods for Equations and Its Applications; Taylor and Francis, CRC Press: New York, NY, USA, 2012. [Google Scholar]
2. Argyros, I.K.; Magreñán, Á.A. Iterative Methods and Their Dynamics with Applications; CRC Press: New York, NY, USA, 2017. [Google Scholar]
3. Argyros, I.K. Unified Convergence Criterion for Banach space valued methods with applications. Mathematics 2021, 9, 1942. [Google Scholar] [CrossRef]
4. Argyros, I.K. The Theory and Applications of Iteration Methods, 2nd ed.; Engineering Series; CRC Press, Taylor and Francis Publishing Group: Boca Raton, FL, USA, 2022. [Google Scholar]
5. Chapra, S.C.; Canale, R.P. Numerical Methods for Engineers; McGraw-Hill Book Company: New York, NY, USA, 1988. [Google Scholar]
6. Ortega, J.M.; Rheinholdt, W.C. Iterative Solution of Nonlinear Equations in Several Variables; Academic Press: New York, NY, USA, 1970. [Google Scholar]
7. Abdul-Hassan, N.Y.; Ali, A.H.; Park, C.A. A new fifth-order iterative method free from second derivative for solving nonlinear equations. J. Appl. Math. Comput. 2022, 68, 2877–2886. [Google Scholar] [CrossRef]
8. Chun, C. A simply constructed third-order modifications of Newton’s method. J. Comput. Appl. Math. 2008, 219, 81–89. [Google Scholar] [CrossRef][Green Version]
9. Grau-Sanchez, M.; Grau, A.; Noguera, M. On the computational efficiency index and some iterative methods for solving system of nonlinear equations. J. Comput. Appl. Math. 2011, 236, 1259–1266. [Google Scholar] [CrossRef][Green Version]
10. Petković, M.S.; Petković, L.D. Families of optimal multipoint methods for solving nonlinear equations: A survey. Appl. Anal. Discret. Math. 2010, 4, 1–22. [Google Scholar] [CrossRef]
11. Petković, L.D.; Petković, M.S. A note on some recent methods for solving nonlinear equations. Appl. Math. Comput. 2007, 185, 368–374. [Google Scholar] [CrossRef]
12. Sharma, R.; Deep, G. A study of the local convergence of a derivative free method in Banach spaces. J. Anal. 2022. [Google Scholar] [CrossRef]
13. Soleymani, F.; Khdhr, F.W.; Saeed, R.K.; Golzarpoor, J. A family of high order iterations for calculating the sign of a matrix. Math. Methods Appl. Sci. 2020, 43, 8192–8203. [Google Scholar] [CrossRef]
14. Zhanlav, T.; Otgondorj, K. Higher order Jarratt-like iterations for solving systems of nonlinear equations. Appl. Math. Comput. 2021, 395, 125849. [Google Scholar] [CrossRef]
15. Liu, T. A multigrid-homotopy method for nonlinear inverse problems. Comput. Math. Appl. 2020, 79, 1706–1717. [Google Scholar] [CrossRef]
16. Liu, T. Porosity reconstruction based on Biot elastic model of porous media by homotopy perturbation method. Chaos Solitons Fractals 2022, 158, 112007. [Google Scholar] [CrossRef]
17. Soleymani, F.; Zhu, S. RBF-FD solution for a financial partial-integro differential equation utilizing the generalized multiquadric function. Comput. Math. Appl. 2021, 82, 161–178. [Google Scholar] [CrossRef]
18. Neta, B. A sixth order family of methods for nonlinear equations. Int. J. Comp. Math. 1979, 7, 157–161. [Google Scholar] [CrossRef]
19. Kou, J.; Li, Y. An improvement of Jarratt method. Appl. Math. Comput. 2007, 189, 1816–1821. [Google Scholar] [CrossRef]
20. Singh, S. Convergence of Higher Order Iterative Methods in Banach Spaces. Ph.D. Thesis, Indian Institute of Technology, Kharagpur, India, 2016. [Google Scholar]
21. Sharma, J.R.; Sharma, R.; Bahl, A. An improved Newton-Traub composition for solving systems of nonlinear equations. Appl. Math. Comput. 2016, 290, 98–110. [Google Scholar] [CrossRef]
22. Kou, J.; Li, Y.; Wang, X. Some variants of Ostrowski’s method with seventh-order convergence. J. Comput. Appl. Math. 2007, 209, 153–159. [Google Scholar] [CrossRef][Green Version]
23. Maheshwari, A.K. A fourth-order iterative method for solving nonlinear equations. Appl. Math. Comput. 2007, 188, 339–344. [Google Scholar] [CrossRef]
24. Parhi, S.K.; Gupta, D.K. A sixth order method for nonlinear equations. Appl. Math. Comput. 2008, 203, 50–55. [Google Scholar] [CrossRef]
25. Petković, M.S. On a general class of multipoint root-finding methods of high computational efficiency. SIAM J. Numer. Anal. 2010, 47, 4402–4414. [Google Scholar] [CrossRef]
26. Potra, F.A.; Pták, V. Nondiscrete Induction and Iterative Processes, Research Notes in Mathematics; Pitman: Boston, MA, USA, 1984. [Google Scholar]
27. Sharma, R.; Deep, G.; Bahl, A. Design and Analysis of an Efficient Multi step Iterative Scheme for systems of Nonlinear Equations. J. Math. Anal. 2021, 12, 53–71. [Google Scholar]
28. Homeier, H.H.H. A modified Newton method with cubic convergence: The multivariable case. J. Comput. Appl. Math. 2004, 169, 161–169. [Google Scholar] [CrossRef]
29. Traub, J.F. Iterative Methods for the Solution of Equations; Chelsea Publishing Company: New York, NY, USA, 1977. [Google Scholar]
30. Quarteroni, A.; Sacco, R.; Saleri, F. Numerical Mathematics; Springer: New York, NY, USA, 2000. [Google Scholar]
31. Wolfram, S. The Mathematica Book, 5th ed.; Wolfram Media: Champaign, IL, USA, 2003. [Google Scholar]
Table 1. Test Functions.
| $f(x)$ | Root $(\alpha)$ |
|---|---|
| $f_1(x) = x - 0.9995 \sin(x) - 0.01$ | $0.389977774946362$ |
| $f_2(x) = x^3 - x^2 - 1$ | $1.465571231876768$ |
| $f_3(x) = \exp(-x^2 + x + 2) - \cos(x + 1) + x^3 + 1$ | $-1.000000000000000$ |
| $f_4(x) = \sin^2 x - x^2 + 1$ | $1.404491648215341$ |
| $f_5(x) = x \exp(x^2) - \sin^2(x) + 3 \cos(x) + 5$ | $-1.207647827130919$ |
| $f_6(x) = x^3 + 4 x^2 - 10$ | $1.365230013414097$ |
| $f_7(x) = x^2 \exp(x) - \sin(x) + x$ | $-1.499393096901409$ |
| $f_8(x) = \log(x^2 + x + 2) - x + 1$ | $4.152590736757158$ |
| $f_9(x) = \exp(-x) + \cos(x)$ | $1.365230013414097$ |
| $f_{10}(x) = \arcsin(x^2 - 1) - x/2 + 1$ | $0.5948109683983692$ |
Table 2. Comparison of the number of iterations.
| Function | $x_0$ | SM1 | SM2 | NEM | KLM | SSM | GM1 | GM2 |
|---|---|---|---|---|---|---|---|---|
| $f_1(x)$ | $2.99$ | 4 | 3 | 3 | 3 | 4 | 3 | 3 |
| $f_2(x)$ | $2$ | 3 | 3 | 3 | 2 | 3 | 2 | 3 |
| $f_3(x)$ | $-2$ | 2 | 3 | 3 | 3 | 3 | 2 | 3 |
| $f_4(x)$ | $3$ | 3 | 3 | 3 | 3 | 3 | 3 | 3 |
| $f_5(x)$ | $-1$ | 2 | 2 | 2 | 2 | 4 | 2 | 3 |
| $f_6(x)$ | $4$ | 3 | 3 | 3 | 3 | 3 | 3 | 3 |
| $f_7(x)$ | $-2$ | 2 | 3 | 2 | 2 | 3 | 2 | 2 |
| $f_8(x)$ | $3$ | 3 | 2 | 2 | 2 | 2 | 2 | 2 |
| $f_9(x)$ | $-0.5$ | 2 | 4 | 2 | 2 | 2 | 2 | 2 |
| $f_{10}(x)$ | $1$ | 2 | 3 | 2 | 2 | 2 | 2 | 2 |
Table 3. Comparison of the number of function evaluations.
| Function | $x_0$ | SM1 | SM2 | NEM | KLM | SSM | GM1 | GM2 |
|---|---|---|---|---|---|---|---|---|
| $f_1(x)$ | $2.99$ | 16 | 12 | 12 | 12 | 16 | 12 | 15 |
| $f_2(x)$ | $2$ | 12 | 12 | 12 | 8 | 12 | 8 | 15 |
| $f_3(x)$ | $-2$ | 8 | 12 | 12 | 12 | 12 | 8 | 15 |
| $f_4(x)$ | $3$ | 12 | 12 | 12 | 12 | 12 | 12 | 15 |
| $f_5(x)$ | $-1$ | 8 | 8 | 8 | 8 | 16 | 8 | 15 |
| $f_6(x)$ | $4$ | 12 | 12 | 12 | 12 | 12 | 12 | 15 |
| $f_7(x)$ | $-2$ | 8 | 12 | 8 | 8 | 12 | 8 | 10 |
| $f_8(x)$ | $3$ | 12 | 8 | 8 | 8 | 8 | 8 | 10 |
| $f_9(x)$ | $-0.5$ | 8 | 16 | 8 | 8 | 8 | 8 | 10 |
| $f_{10}(x)$ | $1$ | 8 | 12 | 8 | 8 | 8 | 8 | 10 |
Table 4. Comparison of $| x k + 1 − x k |$ for all methods.
| $f(x)$ | $k$ | SM1 | SM2 | NEM | KLM | SSM | GM1 | GM2 |
|---|---|---|---|---|---|---|---|---|
| $f_1$ | 1 | 2.04e−000 | 1.97e−000 | 2.03e−000 | 2.11e−000 | 2.03e−000 | 2.32e−000 | 2.15e−000 |
| | 2 | 5.08e−001 | 5.81e−001 | 5.26e−001 | 5.15e−001 | 4.83e−001 | 2.86e−001 | 4.24e−001 |
| | 3 | 4.98e−002 | 4.44e−003 | 3.98e−003 | 2.94e−002 | 8.82e−002 | 1.33e−003 | 2.41e−002 |
| $f_2$ | 1 | 5.27e−001 | 5.32e−001 | 5.32e−001 | 5.36e−001 | 5.18e−001 | 5.35e−001 | 5.30e−001 |
| | 2 | 7.30e−003 | 2.49e−003 | 2.10e−003 | 1.80e−003 | 1.67e−002 | 8.50e−004 | 4.56e−003 |
| | 3 | 8.78e−013 | 2.19e−016 | 1.01e−016 | 6.14e−018 | 5.76e−010 | 3.69e−020 | 2.45e−014 |
| $f_3$ | 1 | 1.00e−000 | 1.02e−000 | 1.04e−000 | 9.16e−001 | 9.85e−001 | 1.03e−000 | 1.02e−000 |
| | 2 | 4.20e−004 | 1.90e−002 | 3.72e−002 | 8.45e−002 | 1.48e−002 | 2.57e−002 | 1.68e−002 |
| | 3 | 3.96e−023 | 1.83e−011 | 4.69e−011 | 7.62e−009 | 2.58e−013 | 1.12e−013 | 5.96e−013 |
| $f_4$ | 1 | 1.56e−000 | 1.52e−000 | 2.49e−000 | 1.55e−000 | 1.57e−000 | 1.62e−000 | 1.59e−000 |
| | 2 | 3.87e−002 | 7.47e−002 | 1.40e−001 | 4.31e−002 | 2.23e−002 | 2.15e−002 | 1.04e−002 |
| | 3 | 7.39e−009 | 6.24e−008 | 9.97e−008 | 1.92e−010 | 1.10e−009 | 7.13e−013 | 1.29e−012 |
| $f_5$ | 1 | 2.07e−001 | 2.08e−001 | 2.07e−001 | 2.08e−001 | 3.81e−001 | 2.07e−001 | 2.11e−001 |
| | 2 | 7.71e−004 | 3.90e−006 | 2.54e−004 | 1.97e−005 | 1.71e−001 | 3.81e−005 | 3.00e−003 |
| | 3 | 2.54e−018 | 1.32e−033 | 7.56e−021 | 7.35e−029 | 1.83e−003 | 2.30e−025 | 8.91e−015 |
| $f_6$ | 1 | 2.36e−000 | 2.51e−000 | 2.49e−000 | 2.80e−000 | 2.29e−000 | 2.67e−000 | 2.47e−000 |
| | 2 | 2.71e−001 | 1.21e−001 | 1.40e−001 | 1.66e−001 | 3.48e−001 | 3.45e−001 | 1.67e−001 |
| | 3 | 5.04e−005 | 9.44e−008 | 9.97e−008 | 1.11e−007 | 3.84e−004 | 3.65e−012 | 1.21e−006 |
| $f_7$ | 1 | 5.01e−001 | 4.98e−001 | 4.99e−001 | 5.00e−001 | 4.94e−001 | 5.00e−001 | 4.99e−001 |
| | 2 | 1.23e−005 | 2.46e−003 | 1.96e−003 | 4.20e−004 | 6.16e−003 | 1.29e−005 | 1.84e−003 |
| | 3 | 1.89e−030 | 4.76e−016 | 5.23e−018 | 8.84e−023 | 1.16e−012 | 2.25e−030 | 9.89e−017 |
| $f_8$ | 1 | 1.18e−000 | 1.15e−000 | 1.15e−000 | 1.15e−000 | 1.15e−000 | 1.15e−000 | 1.15e−000 |
| | 2 | 3.09e−002 | 7.81e−004 | 7.39e−005 | 2.28e−005 | 4.16e−004 | 2.33e−005 | 6.16e−005 |
| | 3 | 2.59e−013 | 1.56e−023 | 1.61e−030 | 6.46e−034 | 1.02e−025 | 5.21e−034 | 2.25e−031 |
| $f_9$ | 1 | 2.25e−000 | 5.93e−000 | 2.25e−000 | 2.24e−000 | 2.25e−000 | 2.25e−000 | 2.25e−000 |
| | 2 | 9.20e−004 | 7.62e−001 | 5.68e−004 | 3.41e−003 | 8.97e−004 | 8.98e−004 | 1.12e−003 |
| | 3 | 2.78e−021 | 2.71e−002 | 1.35e−022 | 3.81e−018 | 7.12e−022 | 1.34e−021 | 8.31e−022 |
| $f_{10}$ | 1 | 4.05e−001 | 4.09e−001 | 4.06e−001 | 4.05e−001 | 4.04e−001 | 4.06e−001 | 4.05e−001 |
| | 2 | 2.13e−004 | 4.10e−003 | 4.47e−004 | 2.40e−004 | 7.31e−004 | 3.46e−004 | 1.94e−004 |
| | 3 | 2.73e−024 | 1.71e−016 | 5.60e−023 | 1.32e−024 | 7.11e−021 | 3.45e−025 | 8.60e−025 |
Table 5. Comparison of $| f ( x k ) |$ for all methods.
| $f(x)$ | $k$ | SM1 | SM2 | NEM | KLM | SSM | GM1 | GM2 |
|---|---|---|---|---|---|---|---|---|
| $f_1$ | 1 | 1.26e−001 | 1.55e−001 | 1.29e−001 | 9.82e−002 | 1.32e−001 | 4.05e−002 | 8.51e−002 |
| | 2 | 4.24e−003 | 3.74e−003 | 3.32e−003 | 2.06e−003 | 8.27e−003 | 1.00e−004 | 1.93e−003 |
| | 3 | 2.08e−007 | 3.37e−008 | 1.45e−008 | 1.04e−009 | 1.98e−005 | 8.67e−018 | 3.31e−009 |
| $f_2$ | 1 | 2.58e−002 | 8.78e−003 | 7.39e−003 | 6.32e−003 | 5.96e−002 | 2.98e−003 | 1.61e−002 |
| | 2 | 3.08e−012 | 7.68e−016 | 3.55e−016 | 2.16e−017 | 2.02e−009 | 1.30e−019 | 8.62e−014 |
| | 3 | 9.83e−072 | 3.54e−094 | 4.53e−096 | 3.37e−104 | 4.04e−054 | 8.75e−118 | 2.16e−081 |
| $f_3$ | 1 | 2.52e−003 | 1.14e−001 | 2.25e−001 | 5.01e−001 | 8.88e−002 | 1.55e−001 | 1.01e−001 |
| | 2 | 2.38e−022 | 1.10e−010 | 2.81e−010 | 4.57e−008 | 1.55e−012 | 6.72e−011 | 3.57e−012 |
| | 3 | 1.68e−136 | 8.72e−065 | 1.23e−063 | 3.77e−050 | 4.25e−077 | 4.30e−077 | 6.71e−075 |
| $f_4$ | 1 | 9.91e−002 | 1.96e−000 | 2.47e−000 | 1.11e−001 | 5.64e−002 | 5.24e−002 | 2.60e−002 |
| | 2 | 1.83e−008 | 1.55e−007 | 1.65e−006 | 4.76e−010 | 2.73e−009 | 1.77e−012 | 3.19e−012 |
| | 3 | 1.17e−048 | 6.71e−044 | 5.48e−043 | 4.31e−060 | 4.76e−053 | 2.45e−075 | 1.22e−071 |
| $f_5$ | 1 | 1.56e−002 | 7.91e−005 | 5.16e−003 | 4.00e−004 | 4.69e−000 | 7.73e−005 | 6.11e−002 |
| | 2 | 5.16e−017 | 2.68e−032 | 1.53e−019 | 1.49e−027 | 3.73e−002 | 4.68e−029 | 1.81e−013 |
| | 3 | 6.54e−104 | 4.05e−197 | 1.06e−118 | 4.05e−168 | 2.21e−013 | 2.29e−197 | 1.27e−082 |
| $f_6$ | 1 | 5.10e−000 | 2.12e−000 | 2.47e−000 | 2.52e−000 | 6.77e−000 | 5.61e−001 | 2.98e−000 |
| | 2 | 8.32e−004 | 1.56e−006 | 1.65e−006 | 1.83e−006 | 6.35e−003 | 6.02e−011 | 1.99e−005 |
| | 3 | 1.09e−025 | 4.91e−043 | 5.48e−043 | 1.66e−043 | 5.56e−020 | 8.74e−071 | 4.98e−036 |
| $f_7$ | 1 | 9.37e−006 | 1.84e−003 | 1.50e−003 | 3.20e−004 | 4.72e−003 | 9.79e−005 | 1.41e−003 |
| | 2 | 1.44e−030 | 3.02e−016 | 3.98e−018 | 6.73e−023 | 8.80e−013 | 1.71e−029 | 7.53e−017 |
| | 3 | 1.89e−179 | 2.15e−092 | 1.24e−105 | 5.86e−135 | 4.12e−071 | 4.91e−180 | 1.82e−096 |
| $f_8$ | 1 | 1.86e−001 | 4.70e−004 | 4.45e−005 | 1.37e−005 | 2.50e−004 | 1.40e−005 | 3.71e−005 |
| | 2 | 1.56e−013 | 9.43e−024 | 9.63e−031 | 3.89e−034 | 6.13e−026 | 3.14e−034 | 1.36e−031 |
| | 3 | 5.91e−080 | 6.13e−142 | 1.02e−184 | 1.99e−205 | 1.33e−155 | 3.93e−206 | 3.25e−190 |
| $f_9$ | 1 | 1.07e−003 | 6.68e−001 | 6.58e−004 | 3.95e−003 | 1.04e−003 | 1.04e−003 | 1.29e−003 |
| | 2 | 3.22e−021 | 2.69e−002 | 1.56e−022 | 4.42e−018 | 8.25e−022 | 1.55e−021 | 9.63e−022 |
| | 3 | 2.42e−126 | 1.45e−012 | 2.79e−134 | 8.64e−108 | 2.05e−130 | 1.71e−130 | 1.63e−130 |
| $f_{10}$ | 1 | 2.26e−004 | 4.33e−003 | 4.73e−004 | 2.54e−004 | 7.74e−004 | 3.67e−004 | 2.06e−004 |
| | 2 | 2.89e−024 | 1.81e−016 | 5.93e−023 | 1.39e−024 | 7.53e−021 | 3.65e−025 | 9.10e−025 |
| | 3 | 1.28e−143 | 9.84e−097 | 2.32e−136 | 3.79e−146 | 6.35e−123 | 3.58e−147 | 6.79e−147 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
