
An Efficient Derivative Free One-Point Method with Memory for Solving Nonlinear Equations

Janak Raj Sharma, Sunil Kumar and Clemente Cesarano

1 Department of Mathematics, Sant Longowal Institute of Engineering and Technology, Longowal, Sangrur 148106, India
2 Section of Mathematics, International Telematic University UNINETTUNO, Corso Vittorio Emanuele II, 39, 00186 Roma, Italy
* Author to whom correspondence should be addressed.
Mathematics 2019, 7(7), 604; https://doi.org/10.3390/math7070604
Submission received: 5 June 2019 / Revised: 1 July 2019 / Accepted: 4 July 2019 / Published: 6 July 2019
(This article belongs to the Special Issue Multivariate Approximation for solving ODE and PDE)

Abstract

We propose a derivative free one-point method with memory of order 1.84 for solving nonlinear equations. The formula requires only one function evaluation and, therefore, the efficiency index is also 1.84. The methodology is carried out by approximating the derivative in Newton’s iteration using a rational linear function. Unlike the existing methods of a similar nature, the scheme of the new method is easy to remember and can also be implemented for systems of nonlinear equations. The applicability of the method is demonstrated on some practical as well as academic problems of a scalar and multi-dimensional nature. In addition, to check the efficacy of the new technique, a comparison of its performance with the existing techniques of the same order is also provided.

1. Introduction

In this study, we consider the problem of solving the nonlinear equation $F(x) = 0$ by iterative methods, where $F : D \subseteq \mathbb{R}^m \to \mathbb{R}^m$ is a univariate function when $m = 1$ and a multivariate function when $m > 1$, defined on an open domain $D$. A univariate function is usually denoted by $f(x)$.
Newton’s method [1,2,3] is one of the basic one-point methods; it has quadratic convergence and requires one function and one derivative evaluation per iteration, but it may diverge if the derivative is very small or zero. To overcome this problem, researchers have also proposed derivative-free one-point methods, for example, the Secant method [2], the Traub method [2], the Muller method [4,5], the Jarratt–Nudds method [6] and the Sharma method [7]. These methods are classified as one-point methods with memory, whereas Newton’s method is a one-point method without memory (see Reference [2]). All the above-mentioned one-point methods with memory require one function evaluation per iteration and possess order of convergence 1.84, except the Secant method, which has order 1.62.
In this paper, we develop a new efficient one-point method with memory of convergence order 1.84 by using rational linear interpolation. The method consists of deriving the coefficients of a rational function that passes through three points. The derived coefficients are then substituted into the derivative of the considered rational function which, when used in Newton’s scheme, gives the new scheme. The formula uses one function evaluation per step and has an efficiency index equal to that of the aforementioned methods of the same order. However, the main advantages of the new method over the existing ones are its simplicity and its suitability for solving systems of nonlinear equations.
The contents of the paper are organized as follows: in Section 2, the new method is developed and its convergence is discussed; in Section 3, some numerical examples are considered to verify the theoretical results and to compare the performance of the proposed technique with existing techniques; in Section 4, the proposed method is generalized for solving systems of nonlinear equations.

2. The Method and Its Convergence

Our aim is to develop a derivative-free iterative method based on Newton’s scheme
$$x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}, \qquad n = 0, 1, 2, \ldots, \qquad (1)$$
wherein $f'(x_n)$ is approximated by using the rational linear function
$$R(t) = \frac{t - x_n + a}{b\,(t - x_n) + c}, \qquad (2)$$
such that
$$R(x_{n-2}) = f(x_{n-2}), \quad R(x_{n-1}) = f(x_{n-1}), \quad R(x_n) = f(x_n), \qquad n = 2, 3, 4, \ldots \qquad (3)$$
Imposing the conditions (3) on (2), we have
$$\big(b(x_{n-2} - x_n) + c\big) f(x_{n-2}) = x_{n-2} - x_n + a, \quad \big(b(x_{n-1} - x_n) + c\big) f(x_{n-1}) = x_{n-1} - x_n + a, \quad c\, f(x_n) = a. \qquad (4)$$
Then
$$b\,(x_{n-2} - x_n) f(x_{n-2}) + c\,\big(f(x_{n-2}) - f(x_n)\big) = x_{n-2} - x_n, \quad b\,(x_{n-1} - x_n) f(x_{n-1}) + c\,\big(f(x_{n-1}) - f(x_n)\big) = x_{n-1} - x_n, \qquad (5)$$
or, equivalently,
$$b\, f(x_{n-2}) + c\, f[x_{n-2}, x_n] = 1, \qquad b\, f(x_{n-1}) + c\, f[x_{n-1}, x_n] = 1, \qquad (6)$$
where $f[s, t] = \dfrac{f(s) - f(t)}{s - t}$ is the first-order Newton divided difference.
Solving for $b$ and $c$, we obtain
$$b = \frac{f[x_{n-2}, x_n] - f[x_{n-1}, x_n]}{f[x_{n-2}, x_n]\, f(x_{n-1}) - f[x_{n-1}, x_n]\, f(x_{n-2})} \qquad (7)$$
and
$$c = \frac{f(x_{n-2}) - f(x_{n-1})}{f[x_{n-1}, x_n]\, f(x_{n-2}) - f[x_{n-2}, x_n]\, f(x_{n-1})}. \qquad (8)$$
Some simple calculations yield
$$R'(x_n) = \frac{1 - b\, f(x_n)}{c} = \frac{f[x_{n-1}, x_n]\, f[x_{n-2}, x_n]}{f[x_{n-2}, x_{n-1}]}. \qquad (9)$$
Assuming that $f'(x_n)$ is approximately equal to $R'(x_n)$, then, in view of (9), the method (1) can be written as
$$x_{n+1} = x_n - \frac{f[x_{n-2}, x_{n-1}]}{f[x_{n-1}, x_n]\, f[x_{n-2}, x_n]}\, f(x_n). \qquad (10)$$
The scheme (10) defines a one-point method with memory and requires one function evaluation per iteration.
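To make the scheme concrete, method (10) can be sketched in a few lines of Python. This is a minimal illustration, not the authors’ implementation; the function name `one_point_memory`, the tolerance and the iteration cap are our own choices. The last three iterates and their function values are cached so that each step costs a single new evaluation of f, as the scheme requires.

```python
def one_point_memory(f, x0, x1, x2, tol=1e-12, itmax=100):
    """One-point method with memory (10): the next iterate is computed from
    the stored points x_{n-2}, x_{n-1}, x_n and their cached f-values,
    so each step needs only one new function evaluation."""
    xs = [x0, x1, x2]            # x_{n-2}, x_{n-1}, x_n
    fs = [f(x0), f(x1), f(x2)]   # cached function values
    for _ in range(itmax):
        dd01 = (fs[0] - fs[1]) / (xs[0] - xs[1])  # f[x_{n-2}, x_{n-1}]
        dd12 = (fs[1] - fs[2]) / (xs[1] - xs[2])  # f[x_{n-1}, x_n]
        dd02 = (fs[0] - fs[2]) / (xs[0] - xs[2])  # f[x_{n-2}, x_n]
        step = dd01 / (dd12 * dd02) * fs[2]
        x_new = xs[2] - step
        xs, fs = xs[1:] + [x_new], fs[1:] + [f(x_new)]
        if abs(step) < tol:
            break
    return xs[-1]
```

For instance, applied to Kepler’s equation of Example 1 below with the starting points of Table 1, the iteration converges to α ≈ 0.1332021508.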
In the following theorem we find the order of convergence of (10), using the concept of R-order of convergence given by Ortega and Rheinboldt [8]. Suppose $\{x_n\}$ is a sequence of approximations generated by an iterative method. If the sequence converges to a zero $\alpha$ of $f$ with R-order $r$, then we write
$$e_{n+1} \sim e_n^{\,r}, \qquad (11)$$
where $e_n = x_n - \alpha$.
Theorem 1.
Suppose that $f(x)$, $f'(x)$, $f''(x)$ and $f'''(x)$ are continuous in the neighborhood $D$ of a zero (say, $\alpha$) of $f$. If the initial approximations $x_0$, $x_1$ and $x_2$ are sufficiently close to $\alpha$, then the R-order of convergence of the method (10) is 1.84.
Proof. 
Let $e_n = x_n - \alpha$, $e_{n-1} = x_{n-1} - \alpha$ and $e_{n-2} = x_{n-2} - \alpha$ be the errors in the $n$-th, $(n-1)$-th and $(n-2)$-th iterations, respectively. Using Taylor expansions of $f(x_n)$, $f(x_{n-1})$ and $f(x_{n-2})$ about $\alpha$, and taking into account that $f(\alpha) = 0$ and $f'(\alpha) \neq 0$, we have
$$f(x_n) = f'(\alpha)\big[e_n + A_2 e_n^2 + A_3 e_n^3 + \cdots\big], \qquad (12)$$
$$f(x_{n-1}) = f'(\alpha)\big[e_{n-1} + A_2 e_{n-1}^2 + A_3 e_{n-1}^3 + \cdots\big], \qquad (13)$$
$$f(x_{n-2}) = f'(\alpha)\big[e_{n-2} + A_2 e_{n-2}^2 + A_3 e_{n-2}^3 + \cdots\big], \qquad (14)$$
where $A_1 = 1$ and $A_i = (1/i!)\, f^{(i)}(\alpha)/f'(\alpha)$, $i = 2, 3, 4, \ldots$
Using Equations (12) and (13), we have
$$f[x_{n-1}, x_n] = f'(\alpha)\big(1 + A_2 (e_n + e_{n-1}) + A_3 (e_n^2 + e_{n-1}^2 + e_n e_{n-1}) + \cdots\big). \qquad (15)$$
Similarly, we can obtain
$$f[x_{n-2}, x_n] = f'(\alpha)\big(1 + A_2 (e_n + e_{n-2}) + A_3 (e_n^2 + e_{n-2}^2 + e_n e_{n-2}) + \cdots\big), \qquad (16)$$
$$f[x_{n-2}, x_{n-1}] = f'(\alpha)\big(1 + A_2 (e_{n-2} + e_{n-1}) + A_3 (e_{n-2}^2 + e_{n-1}^2 + e_{n-2} e_{n-1}) + \cdots\big). \qquad (17)$$
Using Equations (12) and (15)–(17) in (10), we obtain
$$e_{n+1} = e_n - \frac{f[x_{n-2}, x_{n-1}]}{f[x_{n-1}, x_n]\, f[x_{n-2}, x_n]}\, f(x_n) = (A_2^2 - A_3)\, e_n e_{n-1} e_{n-2} + \cdots, \qquad (18)$$
that is,
$$e_{n+1} \sim e_n\, e_{n-1}\, e_{n-2}. \qquad (19)$$
From (11), we have $e_n \sim e_{n-1}^{\,r}$, so that
$$e_{n-1} \sim e_n^{1/r} \qquad (20)$$
and
$$e_{n-2} \sim e_{n-1}^{1/r} \sim e_n^{1/r^2}. \qquad (21)$$
Combining (19), (20) and (21), it follows that
$$e_{n+1} \sim e_n\, e_n^{1/r}\, e_n^{1/r^2} = e_n^{\,1 + 1/r + 1/r^2}. \qquad (22)$$
Comparing the exponents of $e_n$ on the right-hand sides of (11) and (22) leads to
$$r^3 - r^2 - r - 1 = 0,$$
which has a unique positive real root $r \approx 1.84$ (more precisely, 1.8393). That means the R-order of the method (10) is 1.84. □
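As a quick numerical sanity check (illustrative; the helper `bisect` is not part of the paper), the positive root of this cubic can be bracketed on [1, 2] and refined by bisection:

```python
def bisect(g, lo, hi, iters=200):
    # simple bisection; assumes g(lo) < 0 < g(hi)
    for _ in range(iters):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if g(mid) < 0 else (lo, mid)
    return (lo + hi) / 2

r = bisect(lambda r: r**3 - r**2 - r - 1, 1.0, 2.0)
print(round(r, 4))  # 1.8393
```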
Remark 1.
According to Ostrowski’s formula [9] for the efficiency measure of an iterative method of order $r$, if $c$ is the computational cost measured in terms of the number of evaluations of the function $f$ and its derivatives required per iteration, then the efficiency index of the method is given by $r^{1/c}$. Thus the efficiency index of Newton’s method is $2^{1/2} \approx 1.414$ and that of the Secant method is 1.62, whereas for the Muller, Jarratt–Nudds, Traub, Sharma and new method (10) this index is 1.84.

3. Numerical Results

We check the performance of the new method (10), now denoted by NM, using the computational software package Mathematica [10] with multiple-precision arithmetic. For comparison purposes, we consider the Muller method (MM), the Traub method (TM), the Jarratt-Nudds method (JNM) and the Sharma method (SM). These methods are given as follows:
Muller method (MM):
$$x_{n+1} = x_n - \frac{2 a_2}{a_1 \pm \sqrt{a_1^2 - 4 a_0 a_2}},$$
where
$$a_0 = \frac{1}{D}\Big[(x_n - x_{n-2})\big(f(x_n) - f(x_{n-1})\big) - (x_n - x_{n-1})\big(f(x_n) - f(x_{n-2})\big)\Big],$$
$$a_1 = \frac{1}{D}\Big[(x_n - x_{n-2})^2 \big(f(x_n) - f(x_{n-1})\big) - (x_n - x_{n-1})^2 \big(f(x_n) - f(x_{n-2})\big)\Big],$$
$$a_2 = f(x_n), \qquad D = (x_n - x_{n-1})(x_n - x_{n-2})(x_{n-1} - x_{n-2}).$$
Traub method (TM):
$$x_{n+1} = x_n - \frac{f(x_n)}{f[x_n, x_{n-1}]} + \frac{f(x_n)\, f(x_{n-1})}{f(x_n) - f(x_{n-2})} \left(\frac{1}{f[x_n, x_{n-1}]} - \frac{1}{f[x_{n-1}, x_{n-2}]}\right).$$
Jarratt–Nudds method (JNM):
$$x_{n+1} = x_n + \frac{(x_n - x_{n-1})(x_n - x_{n-2})\, f(x_n)\, \big(f(x_{n-1}) - f(x_{n-2})\big)}{(x_n - x_{n-1})\big(f(x_{n-2}) - f(x_n)\big) f(x_{n-1}) + (x_n - x_{n-2})\big(f(x_n) - f(x_{n-1})\big) f(x_{n-2})}.$$
Sharma method (SM):
$$x_{n+1} = x_n - \frac{2 f(x_n)\big(b\, f(x_n) - d\big)}{c \pm \sqrt{c^2 - 4 a\, f(x_n)\big(b\, f(x_n) - d\big)}},$$
where
$$c = \frac{a (h_1 \delta_2 - h_2 \delta_1) + b\, \delta_1 \delta_2 (h_1 \delta_1 - h_2 \delta_2)}{\delta_2 - \delta_1}, \qquad d = \frac{a (h_2 - h_1) + b (h_2 \delta_2^2 - h_1 \delta_1^2)}{\delta_2 - \delta_1},$$
$$h_k = x_n - x_{n-k}, \qquad \delta_k = \frac{f(x_n) - f(x_{n-k})}{x_n - x_{n-k}}, \qquad k = 1, 2.$$
We consider five examples for numerical tests as follows:
Example 1.
Let us consider Kepler’s equation $f_1(x) = x - \alpha_1 \sin(x) - K = 0$, where $0 \le \alpha_1 < 1$ and $0 \le K \le \pi$. A numerical study, based on different values of the parameters $K$ and $\alpha_1$, has been performed in [11]. We solve the equation taking $K = 0.1$ and $\alpha_1 = 0.25$. For this set of values the solution is $\alpha = 0.13320215082857313$. The numerical results are shown in Table 1.
Example 2.
Next, we consider isentropic supersonic flow across a sharp expansion corner (see Reference [12]). The relationship between the Mach number before the corner (i.e., $M_1$) and after the corner (i.e., $M_2$) is expressed by
$$f_2(x) = b^{1/2}\left[\tan^{-1}\left(\frac{M_2^2 - 1}{b}\right)^{1/2} - \tan^{-1}\left(\frac{M_1^2 - 1}{b}\right)^{1/2}\right] - \left[\tan^{-1}\big(M_2^2 - 1\big)^{1/2} - \tan^{-1}\big(M_1^2 - 1\big)^{1/2}\right] - \delta,$$
where $x = M_2$, $b = \frac{\gamma + 1}{\gamma - 1}$ and $\gamma$ is the specific heat ratio of the gas. We take the values $M_1 = 1.5$, $\gamma = 1.4$ and $\delta = 10^{\circ}$. The solution of this problem is $\alpha = 1.8411294068501996$. The numerical results are shown in Table 2.
Example 3.
Consider the equation governing the L-C-R circuit in electrical engineering [13]:
$$L \frac{d^2 q}{dt^2} + R \frac{dq}{dt} + \frac{q}{C} = 0,$$
whose solution $q(t)$ is
$$q(t) = q_0\, e^{-Rt/(2L)} \cos\left(\sqrt{\frac{1}{LC} - \left(\frac{R}{2L}\right)^2}\; t\right),$$
where $q = q_0$ at $t = 0$.
A particular problem as a case study is given as follows: assuming that the charge dissipates to 1 percent of its original value ($q/q_0 = 0.01$) in $t = 0.05$ s, with $L = 5$ H and $C = 10^{-4}$ F, determine the proper value of $R$.
Using the given numerical data, the problem reduces to
$$f_3(x) = e^{-0.005 x} \cos\left(0.05 \sqrt{2000 - 0.01 x^2}\right) - 0.01 = 0,$$
where $x = R$. The solution of this problem is $\alpha = 328.15142908514817$. Numerical results are displayed in Table 3.
Example 4.
The law of population growth is given as (see References [14,15])
$$\frac{dN(t)}{dt} = \lambda N(t) + \nu,$$
where $N(t)$ is the population at time $t$, $\lambda$ is the constant birth rate of the population and $\nu$ is the constant immigration rate. The solution of this differential equation is
$$N(t) = N_0\, e^{\lambda t} + \frac{\nu}{\lambda}\left(e^{\lambda t} - 1\right),$$
where $N_0$ is the initial population.
A particular problem for the above model can be formulated as follows: suppose a certain population initially consists of 1,000,000 people, that 435,000 people immigrate into the community in the first year, and that 1,564,000 people are present at the end of one year. Determine the birth rate $\lambda$ of this population.
To find the birth rate, we solve the equation
$$f_4(x) = 1564 - 1000\, e^{x} - \frac{435}{x}\left(e^{x} - 1\right) = 0,$$
wherein $x = \lambda$ and the population is measured in thousands. The solution of this problem is $\alpha = 0.10099792968574979$. The numerical results are given in Table 4.
Example 5.
Next, we consider an example of academic interest, defined by
$$f_5(x) = \begin{cases} x^3 \ln x^2 + x^5 - x^4, & x \neq 0, \\ 0, & x = 0. \end{cases}$$
It has three zeros. Note that $\alpha = 0$ is a multiple zero of multiplicity 3. We consider the zero $\alpha = 1$ in our work. Numerical results are displayed in Table 5.
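For reference, Example 5’s function is easy to code directly; the separate x = 0 branch removes the singularity of the logarithm (the name `f5` is just illustrative):

```python
import math

def f5(x):
    # x^3 * ln(x^2) -> 0 as x -> 0, so the x = 0 branch keeps f5 continuous
    return 0.0 if x == 0 else x**3 * math.log(x**2) + x**5 - x**4

print(f5(1.0))  # 0.0, the zero alpha = 1 considered in Table 5
```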
The numerical results shown in Table 1, Table 2, Table 3, Table 4 and Table 5 contain the required number of iterations $n$, the estimated errors $|x_{n+1} - x_n|$ in the first three iterations (wherein $A(-h)$ denotes $A \times 10^{-h}$), the computational order of convergence (COC) and the CPU time measured during execution of the program. The COC is computed using the formula [16]
$$\mathrm{COC} = \frac{\ln\big(|x_{n+1} - x_n| / |x_n - x_{n-1}|\big)}{\ln\big(|x_n - x_{n-1}| / |x_{n-1} - x_{n-2}|\big)}.$$
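The formula translates directly into code; a small sketch (the function name is illustrative), assuming the four most recent iterates are available:

```python
import math

def coc(x_np1, x_n, x_nm1, x_nm2):
    """Computational order of convergence from four consecutive iterates,
    ordered newest first: x_{n+1}, x_n, x_{n-1}, x_{n-2}."""
    return (math.log(abs(x_np1 - x_n) / abs(x_n - x_nm1))
            / math.log(abs(x_n - x_nm1) / abs(x_nm1 - x_nm2)))
```

For a quadratically convergent error sequence such as 1e-1, 1e-2, 1e-4, 1e-8 the returned value is close to 2, as expected.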
The necessary iterations $n$ are obtained so as to satisfy the criterion $|x_{n+1} - x_n| + |f(x_n)| < 10^{-100}$. The first two initial approximations $x_0$ and $x_1$ are chosen arbitrarily, whereas the third, $x_2$, is taken as their average. From the numerical results displayed in Table 1, Table 2, Table 3, Table 4 and Table 5, we conclude that the accuracy of the new method (NM) is equal to or better than that of the existing methods. Moreover, it requires less CPU time than the existing methods, which makes it more efficient.

4. Generalized Method

We end this work with a method for solving a system of nonlinear equations $F(x) = 0$, where $F : D \subseteq \mathbb{R}^m \to \mathbb{R}^m$ is the given nonlinear function, $F = (f_1, f_2, \ldots, f_m)^T$ and $x = (x_1, \ldots, x_m)^T$. The divided difference $F[x, y]$ of $F$ is an $m \times m$ matrix (see Reference [2], p. 229) with elements
$$F[x, y]_{ij} = \frac{f_i(x_1, \ldots, x_j, y_{j+1}, \ldots, y_m) - f_i(x_1, \ldots, x_{j-1}, y_j, \ldots, y_m)}{x_j - y_j}, \qquad 1 \le i, j \le m. \qquad (23)$$
Keeping in mind (10), we can write the corresponding method for the system of nonlinear equations as
$$x^{(n+1)} = x^{(n)} - F[x^{(n-1)}, x^{(n)}]^{-1}\, F[x^{(n-2)}, x^{(n-1)}]\, F[x^{(n-2)}, x^{(n)}]^{-1}\, F(x^{(n)}), \qquad (24)$$
where $F[\cdot, \cdot]^{-1}$ denotes the inverse of the divided difference operator $F[\cdot, \cdot]$.
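A compact NumPy sketch of the scheme (24) might look as follows; the names `divided_difference` and `system_method` are our own, and each inverse in (24) is applied by solving a linear system with `np.linalg.solve` rather than forming the inverse explicitly:

```python
import numpy as np

def divided_difference(F, x, y):
    """m x m divided difference matrix F[x, y] with entries given by (23)."""
    m = len(x)
    M = np.empty((m, m))
    for j in range(m):
        u = np.concatenate([x[:j + 1], y[j + 1:]])  # (x_1,...,x_j, y_{j+1},...,y_m)
        v = np.concatenate([x[:j], y[j:]])          # (x_1,...,x_{j-1}, y_j,...,y_m)
        M[:, j] = (F(u) - F(v)) / (x[j] - y[j])
    return M

def system_method(F, x0, x1, x2, tol=1e-10, itmax=100):
    """Method (24); the three stored vectors play the roles of
    x^(n-2), x^(n-1) and x^(n)."""
    a, b, c = (np.asarray(v, dtype=float) for v in (x0, x1, x2))
    for _ in range(itmax):
        step = np.linalg.solve(
            divided_difference(F, b, c),
            divided_difference(F, a, b)
            @ np.linalg.solve(divided_difference(F, a, c), F(c)),
        )
        a, b, c = b, c, c - step
        if np.linalg.norm(step) < tol:
            break
    return c
```

Applied, for example, to the system of Example 6 below with m = 10 and the starting vectors listed there, the iteration converges to the solution whose components are all approximately 0.1005.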
Remark 2.
The computational efficiency of an iterative method for solving the system $F(x) = 0$ is calculated by the efficiency index $E = r^{1/C}$ (see Reference [17]), where $r$ is the order of convergence and $C$ is the total cost of computation, measured in terms of the total number of function evaluations per iteration and the number of operations (that means products and divisions) per iteration. The various evaluations and operations that contribute to this cost are described as follows. For the computation of $F$ in any iterative function we evaluate $m$ scalar functions $f_i$ $(1 \le i \le m)$, and when computing a divided difference $F[x, y]$ we evaluate $m(m-1)$ scalar functions, wherein $F(x)$ and $F(y)$ are evaluated separately; furthermore, one has to add $m^2$ divisions from any divided difference. For the computation of an inverse linear operator, a linear system is solved, which requires $m(m-1)(2m-1)/6$ products and $m(m-1)/2$ divisions in the LU decomposition process, and $m(m-1)$ products and $m$ divisions in the resolution of the two triangular linear systems. Thus, taking into account the above evaluations and operations for the method (24), we have
$$C = \frac{2}{3} m^3 + 8 m^2 - \frac{8}{3} m \qquad \text{and} \qquad E = 1.84^{\,3/(2 m^3 + 24 m^2 - 8 m)}.$$
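These expressions are straightforward to tabulate; a small illustrative script:

```python
def cost(m):
    # C = (2/3) m^3 + 8 m^2 - (8/3) m
    return (2 * m**3 + 24 * m**2 - 8 * m) / 3

def efficiency(m, r=1.84):
    # E = r ** (1 / C); note that E tends to 1 as the dimension m grows
    return r ** (1 / cost(m))

for m in (10, 30, 50, 100):
    print(m, cost(m), efficiency(m))
```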
Next, we apply the generalized method on the following problems:
Example 6.
The following system of $m$ equations (selected from Reference [18]) is considered:
$$\sum_{j=1,\, j \neq i}^{m} x_j - e^{-x_i} = 0, \qquad 1 \le i \le m.$$
In particular, we solve this problem for $m = 10, 30, 50, 100$, selecting the initial guesses $x^{(0)} = (2, \ldots, 2)^T$, $x^{(1)} = (1, \ldots, 1)^T$ and $x^{(2)} = (\tfrac{1}{2}, \ldots, \tfrac{1}{2})^T$ (each with $m$ components) toward the corresponding solution:
$$\alpha = (0.100488400337, \ldots, 0.100488400337)^T \quad (m = 10),$$
$$\alpha = (0.033351667835, \ldots, 0.033351667835)^T \quad (m = 30),$$
$$\alpha = (0.020003975040, \ldots, 0.020003975040)^T \quad (m = 50),$$
$$\alpha = (0.010000498387, \ldots, 0.010000498387)^T \quad (m = 100).$$
Numerical results are displayed in Table 6.
Example 7.
Consider the system of $m$ equations (selected from Reference [19]):
$$\tan^{-1}(x_i) + 1 - 2 \sum_{j=1,\, j \neq i}^{m} x_j^2 = 0, \qquad 1 \le i \le m.$$
Let us solve this problem for $m = 10, 30, 50, 100$ with initial values $x^{(0)} = (-1, \ldots, -1)^T$, $x^{(1)} = (0, \ldots, 0)^T$ and $x^{(2)} = (-0.5, \ldots, -0.5)^T$ (each with $m$ components) toward the corresponding solutions:
$$\alpha = (-0.209906976944, \ldots, -0.209906976944)^T \quad (m = 10),$$
$$\alpha = (-0.123008700800, \ldots, -0.123008700800)^T \quad (m = 30),$$
$$\alpha = (-0.096056797272, \ldots, -0.096056797272)^T \quad (m = 50),$$
$$\alpha = (-0.068590313107, \ldots, -0.068590313107)^T \quad (m = 100).$$
Numerical results are displayed in Table 7.
In Table 6 and Table 7 we have shown the results of the new method only, because the other methods are not applicable to nonlinear systems. We conclude that there are numerous one-point iterative methods for solving a scalar equation $f(x) = 0$; in contrast, such methods are rare for the multi-dimensional case, that is, for approximating a solution of $F(x) = 0$. Since the method uses first-order divided differences, a drawback is that if at some stage $j$ the denominator $x_j - y_j$ in Formula (23) vanishes, then the method may fail to converge. However, this situation is rare, and we have applied the method successfully to many other problems. In the present work, an attempt has been made to develop an iterative scheme that is equally suitable for both categories, viz. univariate and multivariate functions.

Author Contributions

The contribution of all the authors has been equal. All of them have worked together to prepare the manuscript.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Argyros, I.K. Convergence and Applications of Newton-Type Iterations; Springer: New York, NY, USA, 2008. [Google Scholar]
  2. Traub, J.F. Iterative Methods for the Solution of Equations; Prentice-Hall: Englewood Cliffs, NJ, USA, 1964. [Google Scholar]
  3. Kumar, D.; Sharma, J.R.; Cesarano, C. An efficient class of Traub-Steffensen-type methods for computing multiple zeros. Axioms 2019, 8, 65. [Google Scholar] [CrossRef]
  4. Gerald, C.F.; Wheatley, P.O. Applied Numerical Analysis; Addison-Wesley: Reading, MA, USA, 1994. [Google Scholar]
  5. Muller, D.E. A method of solving algebraic equations using an automatic computer. Math. Comp. 1956, 10, 208–215. [Google Scholar] [CrossRef]
  6. Jarratt, P.; Nudds, D. The use of rational functions in the iterative solution of equations on a digital computer. Comput. J. 1965, 8, 62–65. [Google Scholar] [CrossRef]
  7. Sharma, J.R. A family of methods for solving nonlinear equations using quadratic interpolation. Comput. Math. Appl. 2004, 48, 709–714. [Google Scholar] [CrossRef]
  8. Ortega, J.M.; Rheinboldt, W.C. Iterative Solution of Nonlinear Equations in Several Variables; Academic Press: New York, NY, USA, 1970. [Google Scholar]
  9. Ostrowski, A.M. Solution of Equations and Systems of Equations; Academic Press: New York, NY, USA; London, UK, 1960; p. 20. [Google Scholar]
  10. Wolfram, S. The Mathematica Book, 5th ed.; Wolfram Media: Champaign, IL, USA, 2003. [Google Scholar]
  11. Danby, J.M.A.; Burkardt, T.M. The solution of Kepler’s equation. Celest. Mech. 1983, 40, 95–107. [Google Scholar] [CrossRef]
  12. Hoffman, J.D. Numerical Methods for Engineers and Scientists; McGraw-Hill Book Company: New York, NY, USA, 1992. [Google Scholar]
  13. Chapra, S.C.; Canale, R.P. Numerical Methods for Engineers; McGraw-Hill Book Company: New York, NY, USA, 1988. [Google Scholar]
  14. Burden, R.L.; Faires, J.D. Numerical Analysis; Brooks/Cole: Boston, MA, USA, 2005. [Google Scholar]
  15. Assante, D.; Cesarano, C.; Fornaro, C.; Vazquez, L. Higher order and fractional diffusive equations. J. Eng. Sci. Technol. Rev. 2015, 8, 202–204. [Google Scholar] [CrossRef]
  16. Weerakoon, S.; Fernando, T.G.I. A variant of Newton’s method with accelerated third-order convergence. Appl. Math. Lett. 2000, 13, 87–93. [Google Scholar] [CrossRef]
  17. Cordero, A.; Hueso, J.L.; Martínez, E.; Torregrosa, J.R. A modified Newton-Jarratt’s composition. Numer. Algorithms 2010, 55, 87–99. [Google Scholar] [CrossRef]
  18. Grau-Sànchez, M.; Noguera, M.; Amat, S. On the approximation of derivatives using divided difference operators preserving the local convergence order of iterative methods. J. Comput. Appl. Math. 2013, 237, 363–372. [Google Scholar] [CrossRef]
  19. Xiao, X.Y.; Yin, H.W. Increasing the order of convergence for iterative methods to solve nonlinear systems. Calcolo 2016, 53, 285–300. [Google Scholar] [CrossRef]
Table 1. Comparison of performance of methods for function $f_1(x)$, taking $x_0 = 0.5$, $x_1 = -0.3$, $x_2 = 0.1$.

Method               n   |x_4 - x_3|   |x_5 - x_4|   |x_6 - x_5|   COC    CPU-time
MM                   7   2.860(-4)     2.271(-7)     1.184(-13)    1.82   0.0952
TM                   7   2.832(-4)     2.247(-7)     1.143(-13)    1.82   0.0944
JNM                  7   2.846(-4)     2.259(-7)     1.163(-13)    1.82   0.1246
SM (a = 1, b = 1)    7   2.850(-4)     2.262(-7)     1.169(-13)    1.82   0.1107
SM (a = 1, b = 2)    7   2.845(-4)     2.258(-7)     1.162(-13)    1.82   0.0973
SM (a = 1, b = -1)   7   2.897(-4)     2.302(-7)     1.239(-13)    1.82   0.0984
NM                   7   2.670(-4)     2.116(-7)     1.013(-13)    1.82   0.0921
Table 2. Comparison of performance of methods for function $f_2(x)$, taking $x_0 = 1.1$, $x_1 = 2.3$, $x_2 = 1.7$.

Method               n   |x_4 - x_3|   |x_5 - x_4|   |x_6 - x_5|   COC    CPU-time
MM                   7   8.212(-3)     3.223(-5)     3.369(-9)     1.83   0.3312
TM                   7   8.228(-3)     4.906(-5)     6.104(-9)     1.83   0.3434
JNM                  7   8.220(-3)     4.048(-5)     4.636(-9)     1.83   0.3163
SM (a = 1, b = 1)    7   8.215(-3)     3.537(-5)     3.841(-9)     1.83   0.3754
SM (a = 1, b = 2)    7   8.217(-3)     3.752(-5)     4.175(-9)     1.83   0.3666
SM (a = 1, b = -1)   7   8.207(-3)     2.724(-5)     2.660(-9)     1.83   0.3627
NM                   7   8.228(-3)     4.905(-5)     5.395(-9)     1.83   0.3024
Table 3. Comparison of performance of methods for function $f_3(x)$, taking $x_0 = 430$, $x_1 = 200$, $x_2 = 315$.

Method               n   |x_4 - x_3|   |x_5 - x_4|   |x_6 - x_5|   COC    CPU-time
MM                   7   4.446(-1)     2.182(-3)     3.303(-8)     1.83   0.2075
TM                   7   6.515(-1)     4.078(-3)     1.381(-7)     1.84   0.2185
JNM                  7   1.259(-1)     1.767(-4)     2.058(-10)    1.83   0.1721
SM (a = 1, b = 1)    7   4.446(-1)     2.182(-3)     3.303(-8)     1.83   0.2126
SM (a = 1, b = 2)    7   4.446(-1)     2.182(-3)     3.303(-8)     1.83   0.1979
SM (a = 1, b = -1)   7   4.446(-1)     2.182(-3)     3.303(-8)     1.83   0.2034
NM                   7   1.818(-1)     3.112(-4)     6.976(-10)    1.83   0.1568
Table 4. Comparison of performance of methods for function $f_4(x)$, taking $x_0 = -0.4$, $x_1 = 0.5$, $x_2 = 0.05$.

Method               n   |x_4 - x_3|   |x_5 - x_4|   |x_6 - x_5|   COC    CPU-time
MM                   7   1.464(-3)     4.958(-6)     5.588(-11)    1.83   0.1924
TM                   7   3.061(-3)     1.670(-5)     7.659(-10)    1.84   0.1884
JNM                  7   7.093(-4)     1.013(-6)     2.558(-12)    1.82   0.1811
SM (a = 1, b = 1)    7   3.061(-3)     1.670(-5)     7.659(-10)    1.84   0.2194
SM (a = 1, b = 2)    7   3.061(-3)     1.670(-5)     7.659(-10)    1.84   0.2033
SM (a = 1, b = -1)   7   3.061(-3)     1.670(-5)     7.659(-10)    1.84   0.1875
NM                   7   1.980(-3)     1.078(-6)     8.160(-12)    1.85   0.1727
Table 5. Comparison of performance of methods for function $f_5(x)$, taking $x_0 = 1.2$, $x_1 = 0.9$, $x_2 = 1.05$.

Method               n   |x_4 - x_3|   |x_5 - x_4|   |x_6 - x_5|   COC    CPU-time
MM                   8   3.206(-3)     5.057(-5)     2.720(-8)     1.84   0.0943
TM                   9   1.090(-2)     9.025(-4)     5.898(-6)     1.83   0.0821
JNM                  8   4.915(-3)     1.525(-4)     1.955(-7)     1.85   0.0798
SM (a = 1, b = 1)    8   1.010(-2)     7.066(-4)     3.901(-6)     1.85   0.0942
SM (a = 1, b = 2)    8   1.048(-2)     7.960(-4)     4.777(-6)     1.83   0.0933
SM (a = 1, b = -1)   9   1.185(-2)     1.188(-3)     9.274(-6)     1.83   0.0931
NM                   8   1.930(-3)     4.728(-5)     1.766(-8)     1.85   0.0775
Table 6. Performance of the new method (NM) for Example 6.

m     n   ||x_4 - x_3||   ||x_5 - x_4||   ||x_6 - x_5||   COC    CPU-time
10    7   1.139(-2)       7.395(-5)       1.310(-9)       1.84   1.935
30    7   6.776(-3)       1.937(-5)       5.181(-11)      1.84   16.832
50    7   5.251(-3)       9.485(-6)       9.630(-12)      1.84   57.704
100   7   3.691(-3)       3.463(-6)       9.131(-13)      1.84   407.912
Table 7. Performance of the new method (NM) for Example 7.

m     n    ||x_4 - x_3||   ||x_5 - x_4||   ||x_6 - x_5||   COC    CPU-time
10    9    7.661(-2)       1.423(-2)       1.304(-4)       1.84   3.386
30    10   4.195(-1)       5.623(-2)       6.824(-3)       1.84   25.600
50    10   6.603(-1)       5.572(-2)       1.354(-2)       1.84   87.531
100   10   1.076           2.307(-2)       1.106(-2)       1.84   593.691

Share and Cite

MDPI and ACS Style

Sharma, J.R.; Kumar, S.; Cesarano, C. An Efficient Derivative Free One-Point Method with Memory for Solving Nonlinear Equations. Mathematics 2019, 7, 604. https://doi.org/10.3390/math7070604
