Article

A Fast Derivative-Free Iteration Scheme for Nonlinear Systems and Integral Equations

Department of Mathematics, Hamedan Branch, Islamic Azad University, Hamedan 15743-65181, Iran
*
Author to whom correspondence should be addressed.
Mathematics 2019, 7(7), 637; https://doi.org/10.3390/math7070637
Submission received: 23 June 2019 / Revised: 11 July 2019 / Accepted: 11 July 2019 / Published: 18 July 2019
(This article belongs to the Special Issue Iterative Methods for Solving Nonlinear Equations and Systems)

Abstract

Derivative-free schemes are a competitive class of methods, since they offer a remedy in cases where the computation of the Jacobian or higher-order derivatives of multi-dimensional functions is difficult or expensive. This article studies a variant of Steffensen's method with memory for tackling a nonlinear system of equations, which is not only independent of Jacobian calculations but also improves the computational efficiency. The analytical parts of the work are supported by several tests, including an application to mixed integral equations.

1. Introductory Notes

1.1. Background

There exist many works handling the approximate solution of linear and nonlinear integral equations. However, tackling nonlinear integral equations is more challenging due to the presence of nonlinearity, which can be expensive for different solvers [1,2].
Some authors have discussed the asymptotic error expansion of collocation-type and Nystrom-type methods for nonlinear Volterra–Fredholm integral equations; see [3] for a complete discussion of this issue. One class of nonlinear integral equations is the mixed Hammerstein integral equations, which have several applications in engineering problems [2].
Since the process of finding the solution of such integral equations usually leads to a system of algebraic equations that must be solved quickly and accurately, we turn our attention here to developing and studying a useful numerical scheme for solving nonlinear systems, with an application to nonlinear integral equations.
Clearly, there are other nonlinear problems in the literature which lead to nonlinear systems of equations; see, e.g., [4,5].

1.2. Definition

Consider a nonlinear system of equations of algebraic type as follows [6]:
$$\begin{cases} a_1(x_1, x_2, \ldots, x_m) = 0, \\ a_2(x_1, x_2, \ldots, x_m) = 0, \\ \qquad \vdots \\ a_m(x_1, x_2, \ldots, x_m) = 0, \end{cases}$$
which contains m equations in m unknowns, where $A(x) = (a_1(x), a_2(x), \ldots, a_m(x))^T$ and $a_1(x), a_2(x), \ldots, a_m(x)$ are the coordinate functions. We can also write (1) using $x = (x_1, x_2, \ldots, x_m)$ in the more compact form
$$A(x) = 0.$$
The purpose of this work is to find the solution of system (1) via an iterative process and to discuss its application in solving nonlinear integral equations. Let us therefore briefly review some of the existing methods for finding its simple roots in the next subsection.

1.3. Existing Solvers

The Steffensen’s scheme for solving nonlinear systems is written as follows [7]:
$$w^{(n)} = x^{(n)} + A(x^{(n)}), \quad x^{(0)} \in \mathbb{R}^m,$$
$$x^{(n+1)} = x^{(n)} - [x^{(n)}, w^{(n)}; A]^{-1} A(x^{(n)}), \quad n = 0, 1, 2, \ldots,$$
which is based upon the divided difference operator (DDO). The 1st order DDO of A for the multidimensional nodes x and y is expressed by a component-to-component procedure as follows [8]:
$$[x, y; A]_{i,j} = \frac{A_i(x_1, \ldots, x_j, y_{j+1}, \ldots, y_m) - A_i(x_1, \ldots, x_{j-1}, y_j, \ldots, y_m)}{x_j - y_j}, \quad 1 \le i, j \le m.$$
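As an illustration, the component-wise definition above can be sketched in Python (the function name and the toy map are ours, not from the paper):

```python
import numpy as np

def divided_difference(A, x, y):
    """First-order divided difference operator [x, y; A] of Equation (4),
    built component by component; A maps R^m -> R^m."""
    m = len(x)
    M = np.zeros((m, m))
    for i in range(m):
        for j in range(m):
            u = np.concatenate((x[:j + 1], y[j + 1:]))  # (x_1..x_j, y_{j+1}..y_m)
            v = np.concatenate((x[:j], y[j:]))          # (x_1..x_{j-1}, y_j..y_m)
            M[i, j] = (A(u)[i] - A(v)[i]) / (x[j] - y[j])
    return M
```

For a quadratic map, this operator satisfies the secant relation (7), $[y, x; A](y - x) = A(y) - A(x)$, exactly.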
Recall that first-order divided difference of A on R m is a mapping as follows:
$$[\cdot\,, \cdot\,; A] : D \subset \mathbb{R}^m \times \mathbb{R}^m \longrightarrow L(\mathbb{R}^m),$$
that reads
$$[y, x; A](y - x) = A(y) - A(x), \quad \forall x, y \in D.$$
Here $L(\cdot)$ denotes the set of bounded linear operators. By setting $h = y - x$, one can also express the first-order DDO as follows [8]:
$$[x + h, x; A] = \int_0^1 A'(x + t h)\, dt, \quad \forall (x, h) \in \mathbb{R}^m \times \mathbb{R}^m.$$
Traub [9] investigated another approach, based on the function J(x, H), for approximating the Jacobian matrix of Newton's method and obtaining Steffensen's scheme from a point-wise definition.
An improvement of (3) was given in [10,11] as follows:
$$z^{(n)} = x^{(n)} - [x^{(n)}, w^{(n)}; A]^{-1} A(x^{(n)}),$$
$$x^{(n+1)} = z^{(n)} - [x^{(n)}, w^{(n)}; A]^{-1} A(z^{(n)}),$$
wherein
$$w^{(n)} = x^{(n)} + \beta A(x^{(n)}), \quad \beta \in \mathbb{R}.$$
The point of (8), in contrast to (3), is that it applies two steps, and hence two m-dimensional functional evaluations, to reach a rate higher than quadratic. Here, the idea is to freeze the DDO per cycle and then increase the sub-steps so as to gain as much order improvement as possible, as well as some improvement in the computational efficiency index of the scheme.
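A minimal Python sketch of the two-step scheme (8)-(9) might look as follows (the test problem, names, and the tolerance are ours; β is the free parameter of (9)):

```python
import numpy as np

def dd(A, x, y):
    """First-order divided difference operator [x, y; A] of Equation (4)."""
    m = len(x)
    M = np.zeros((m, m))
    for i in range(m):
        for j in range(m):
            u = np.concatenate((x[:j + 1], y[j + 1:]))
            v = np.concatenate((x[:j], y[j:]))
            M[i, j] = (A(u)[i] - A(v)[i]) / (x[j] - y[j])
    return M

def two_step_steffensen(A, x0, beta=-0.01, tol=1e-9, maxit=50):
    """Scheme (8)-(9): the DDO is frozen per cycle and reused in both sub-steps."""
    x = np.asarray(x0, dtype=float)
    for _ in range(maxit):
        Ax = A(x)
        if np.linalg.norm(Ax) < tol:
            break
        w = x + beta * Ax                  # Equation (9)
        M = dd(A, x, w)                    # one DDO per cycle
        z = x - np.linalg.solve(M, Ax)     # first sub-step
        x = z - np.linalg.solve(M, A(z))   # second sub-step, same matrix
    return x
```

For instance, for $A(x) = (x_1^2 + x_2 - 5,\ x_1 + x_2^2 - 3)$ with root (2, 1), a starting point such as (1.8, 0.9) converges in a few cycles.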
Let us also recall some iteration schemes that require Jacobian computation. Jarratt's iteration, having fourth-order convergence for solving (1), is given by [12]:
$$z^{(n)} = x^{(n)} - \frac{2}{3} A'(x^{(n)})^{-1} A(x^{(n)}),$$
$$x^{(n+1)} = x^{(n)} - \frac{1}{2} \left(3 A'(z^{(n)}) - A'(x^{(n)})\right)^{-1} \left(3 A'(z^{(n)}) + A'(x^{(n)})\right) A'(x^{(n)})^{-1} A(x^{(n)}).$$
This fourth-order iteration requires the computation of two matrix inverses (via the resolution of linear systems) to achieve its rate, which shows that obtaining a higher rate of convergence in a two-step method is costly.

1.4. Motivation

All methods discussed so far are without memory; such schemes can be improved by considering additional memory terms.
Our motivation in pursuing this aim is not limited to tackling nonlinear systems; a further motivation is to apply such schemes to practical engineering problems, such as nonlinear mixed integral equations, see e.g., [13,14,15,16].
The goal of our development is to reach a higher computational efficiency using as few linear systems of equations and functional evaluations as possible. This is directly linked with the core concerns of scientific computing and numerical analysis, which give meaning to the investigation and proposal of novel numerical procedures.

1.5. Achievement and Contribution

The objective of this work is to present a two-step higher-order scheme to solve systems of nonlinear equations. To this end, we present an iteration method with memory for finding both real and complex zeros. Our scheme does not require computing Fréchet derivatives of the function.

1.6. Organization

We unfold this article as follows. In Section 2, the derivation and contribution of an iteration expression is furnished. Section 3 provides an error analysis for its convergence rate. The computational efficiency of different solvers, accounting not only for the number of functional evaluations but also for the number of linear systems involved, the number of LU (lower-upper) factorizations, and other similar operations, is discussed in detail in Section 4. Section 5 discusses the application of the proposed scheme. Concluding remarks are given in Section 6.

2. A Derivative-Free Scheme

Here our aim is to increase the computational efficiency index of (8) without imposing several more steps or further DDOs in each cycle. To complete this task, we rely on the concept of methods with memory, which states that the convergence speed and efficiency of iterative methods can be improved by saving and reusing already computed values of the functions and nodes.
In fact, the error equation of the uni-parametric family of methods (8) includes a term of the form below:
$$I + \beta A'(\alpha) = 0.$$
The free nonzero parameter β in (11) clearly affects not only the domain of convergence (the attraction basins of the iterative method) but also the improvement of the convergence order. When tackling a nonlinear system of equations, since α is not known, we can use an approximation of $A'(\alpha)$ to make the whole relation (11) approximately zero. Therefore, we may write
$$\beta \approx -A'(\bar{\alpha})^{-1},$$
wherein α ¯ is an approximation of the solution (per cycle).
It is important to discuss how we approximate the matrix $\beta := B^{(n)}$ $(n \ge 1)$ by employing estimates of $A'(\alpha)$ computed from the existing data.
To improve the performance of (8) using the notion of methods with memory, we consider the following iteration expression:
$$w^{(n)} = x^{(n)} + \beta A(x^{(n)}),$$
$$z^{(n)} = x^{(n)} - [x^{(n)}, w^{(n)}; A]^{-1} A(x^{(n)}),$$
$$x^{(n+1)} = z^{(n)} - [x^{(n)}, w^{(n)}; A]^{-1} A(z^{(n)}).$$
To ease up the implementation of the scheme with memory, let us first consider
$$\beta := B^{(n)} = -[w^{(n-1)}, x^{(n-1)}; A]^{-1} = -\left(M^{(n-1)}\right)^{-1} \approx -A'(\alpha)^{-1},$$
and
$$M^{(n-1)} \delta^{(n)} = -A(x^{(n)}), \qquad M^{(n-1)} \gamma^{(n)} = -A(y^{(n)}).$$
Thus, now we contribute the following scheme:
$$B^{(n)} = -[w^{(n-1)}, x^{(n-1)}; A]^{-1}, \quad n \ge 1,$$
$$w^{(n)} = x^{(n)} + B^{(n)} A(x^{(n)}), \quad n \ge 1,$$
$$y^{(n)} = x^{(n)} + \delta^{(n)}, \quad n \ge 0,$$
$$x^{(n+1)} = y^{(n)} + \gamma^{(n)}.$$
Lemma 1.
Let $D \subseteq \mathbb{R}^m$ be a nonempty convex domain. Suppose that A is thrice Fréchet differentiable on D, that $[u, v; A] \in L(D, D)$ for any $u, v \in D$ $(u \ne v)$, and that the initial value $x^{(0)}$ and the solution α are close to each other. By considering $B^{(n)} = -[w^{(n-1)}, x^{(n-1)}; A]^{-1}$ and $d^{(n)} := I + B^{(n)} A'(\alpha)$, one obtains the error relation below:
$$\left\| d^{(n)} \right\| \sim \left\| e^{(n-1)} \right\|.$$
Proof. 
See [17] for more details.  ☐
To implement (16), one needs to solve some systems of linear algebraic equations. This means that at each new step a new LU factorization is needed, and no information is exploited from previous steps. However, there is a body of literature on recycling this kind of information to obtain updated preconditioners for iterative solvers [18,19,20]. We leave the construction and application of such preconditioners to future work in this field.
When the coefficient matrices are large and sparse, a Krylov subspace method can be employed to speed up the process. However, the merit of (16) is that the two linear systems share the same coefficient matrix. Hence, only one LU factorization is enough: by saving the decomposition, one can apply it to two different right-hand-side vectors to get the solution vectors in each sub-step of (16).
A challenging part of implementing (16) is the incorporation of $B^{(n)}$. This is no longer a scalar constant; it must be defined as a matrix. In this paper, whenever required, the initial matrix $B^{(0)}$ is specified by:
$$B^{(0)} = \mathrm{diag}\left(-\frac{1}{1000}\right).$$
The choice of the initial matrix $B^{(0)}$ directly affects how quickly the whole process arrives in the convergence phase. Here, (18) is in agreement with the dynamical studies of Steffensen-type methods with memory, in which the attraction basins are larger when the free parameter is close to zero.
Note also that updating $B^{(n)}$ per cycle is again based on the already computed LU factorization, which only has to act on the identity matrix to proceed.
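A dependency-free Python sketch of scheme (16) is given below (our code; it takes $B^{(0)} = \mathrm{diag}(-1/1000)$ as in (18), and `np.linalg.solve` stands in for the reused LU factorization, which in production would be stored once per cycle, e.g., via `scipy.linalg.lu_factor`):

```python
import numpy as np

def dd(A, x, y):
    """First-order divided difference operator [x, y; A] of Equation (4)."""
    m = len(x)
    M = np.zeros((m, m))
    for i in range(m):
        for j in range(m):
            u = np.concatenate((x[:j + 1], y[j + 1:]))
            v = np.concatenate((x[:j], y[j:]))
            M[i, j] = (A(u)[i] - A(v)[i]) / (x[j] - y[j])
    return M

def steffensen_with_memory(A, x0, tol=1e-9, maxit=50):
    """Scheme (16): one DDO per cycle; its factorization serves the two
    sub-steps and the update of the accelerator matrix B."""
    x = np.asarray(x0, dtype=float)
    m = len(x)
    B = -np.eye(m) / 1000.0                 # B^(0), Equation (18)
    for _ in range(maxit):
        Ax = A(x)
        if np.linalg.norm(Ax) < tol:
            break
        w = x + B @ Ax                      # accelerated Steffensen node
        M = dd(A, x, w)
        y = x + np.linalg.solve(M, -Ax)     # delta of (15)
        x = y + np.linalg.solve(M, -A(y))   # gamma of (15)
        B = np.linalg.solve(M, -np.eye(m))  # B^(n+1) = -M^{-1}, kept in memory
    return x
```

On the illustrative system $A(x) = (x_1^2 + x_2 - 5,\ x_1 + x_2^2 - 3)$ with root (2, 1), the accelerator B quickly approaches $-A'(\alpha)^{-1}$, which is exactly what makes the error factor $d^{(n)}$ of Lemma 1 shrink.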

3. Rate of Convergence

It is known that, via the Taylor expansion of $A'(x + t h)$ about the node x and integrating, one obtains:
$$\int_0^1 A'(x + t h)\, dt = A'(x) + \frac{1}{2} A''(x) h + \frac{1}{6} A'''(x) h^2 + \frac{1}{24} A^{(iv)}(x) h^3 + O(h^4).$$
It is assumed here that $A'(\alpha)$ is nonsingular; $e^{(n)} = x^{(n)} - \alpha$ is called the error at the n-th iterate, and [6,21]:
$$e^{(n+1)} = H\, {e^{(n)}}^{p} + O\left({e^{(n)}}^{p+1}\right),$$
(20) is the error equation, where H is a p-linear function. This means that $H \in L(\mathbb{R}^m, \mathbb{R}^m, \ldots, \mathbb{R}^m)$. Moreover, we consider:
$${e^{(n)}}^{p} = (\underbrace{e^{(n)}, e^{(n)}, \ldots, e^{(n)}}_{p \text{ times}}),$$
which would be a matrix.
Before stating the main theorem, assume that A is sufficiently Fréchet differentiable in D. As in [22], the l-th derivative of A at $u \in \mathbb{R}^m$, $l \ge 1$, is the following l-linear function
$$A^{(l)}(u) : \mathbb{R}^m \times \cdots \times \mathbb{R}^m \longrightarrow \mathbb{R}^m,$$
such that $A^{(l)}(u)(v_1, v_2, \ldots, v_l) \in \mathbb{R}^m$. It is also well known that, for $\alpha + h \in \mathbb{R}^m$ lying in a neighborhood of a solution α of (1), the Taylor expansion can be written as [22]:
$$A(\alpha + h) = A'(\alpha) \left[ h + \sum_{l=2}^{p-1} C_l h^l \right] + O(h^p),$$
wherein
$$C_l = (1/l!) \left[ A'(\alpha) \right]^{-1} A^{(l)}(\alpha), \quad l \ge 2.$$
One finds $C_l h^l \in \mathbb{R}^m$, since $A^{(l)}(\alpha) \in L(\mathbb{R}^m \times \cdots \times \mathbb{R}^m, \mathbb{R}^m)$ and $[A'(\alpha)]^{-1} \in L(\mathbb{R}^m)$. Moreover, for $A'$ we have:
$$A'(\alpha + h) = A'(\alpha) \left[ I + \sum_{l=2}^{p-1} l\, C_l h^{l-1} \right] + O(h^{p-1}),$$
where I is the identity matrix of appropriate size. Here, $l\, C_l h^{l-1} \in L(\mathbb{R}^m)$.
Theorem 1.
Assume that in (1), $A : D \subseteq \mathbb{R}^m \to \mathbb{R}^m$ is sufficiently Fréchet differentiable at every point of D around $\alpha \in \mathbb{R}^m$. Assume further that $A(\alpha) = 0$ and $\det(A'(x)) \ne 0$. Then, (16) with a suitable choice of initial vector has 3.30 R-order of convergence.
Proof. 
For the iteration scheme (16) in the case without memory, and using (23)–(25), we can obtain:
$$e^{(n+1)} = \left(\beta A'(\alpha) + I\right) \left(\beta A'(\alpha) + 2I\right) C_2^2\, {e^{(n)}}^{3} + O\left({e^{(n)}}^{4}\right).$$
Let us now rewrite (26) in asymptotic form as follows:
$$e^{(n+1)} \sim d_1^{(n)}\, {e^{(n)}}^{3}.$$
By several symbolic calculations, taking into account that the coefficients of the error terms in our m-dimensional case are all matrices whose multiplication is not commutative, together with Lemma 1, one obtains:
$$d_1^{(n)} \sim e^{(n-1)}, \quad n \ge 1.$$
Therefore, one attains:
$$\left(d_1^{(n)}\right)^{2} \sim \left(e^{(n-1)}\right)^{2}, \quad n \ge 0.$$
Combining (28) and (29) into (27), we attain:
$$e^{(n+1)} \sim \left(e^{(n-1)}\right)^{1} {e^{(n)}}^{3}.$$
It shows that
$$\frac{1}{p} + 3 = p,$$
whose positive solution gives the R-order of convergence:
$$p = \frac{1}{2} \left( \sqrt{13} + 3 \right) \approx 3.30278.$$
The proof is ended. ☐
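As a quick numerical sanity check (ours, not from the paper), the positive root of (31) indeed equals the stated value:

```python
import math

# positive root of 1/p + 3 = p, equivalently p**2 - 3*p - 1 = 0
p = (math.sqrt(13) + 3) / 2
print(round(p, 5))  # 3.30278
```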

4. Efficiency

Here we only need to compute one LU factorization per cycle and apply it twice, to two linear systems with different right-hand sides, plus once to an identity matrix for the acceleration matrix $B^{(n)}$, in order to achieve a higher speed rate.
It is recalled that the classical index of efficiency is defined by [8]:
$$E = p^{\frac{1}{C}},$$
wherein p is the convergence rate and C is the whole burden per cycle considering the number of functional evaluations.
When dealing with nonlinear system of equations, the cost of functional evaluations per cycle can be expressed as follows:
  • To evaluate A, m evaluations of functions are required.
  • To evaluate the associated Jacobian matrix $A'$, $m^2$ evaluations of functions are needed.
  • To evaluate the first-order DDO, we need $m^2 - m$ evaluations of functions.
  • In addition, the LU factorization costs $\theta\left(\frac{2m^3}{3}\right)$, plus $\theta(2m^2)$ for tackling the two involved triangular systems.
wherein θ is a weight that connects the cost of one function evaluation with one flop. Here it is assumed that θ = 1. No preconditioning is imposed in any cycle of these methods for solving the linear systems; this holds for all the compared methods.
To be more precise, we consider that the cost for computing each of the scalar functions is unit. The cost for computing other involved calculations are all also a factor of this unity cost. This is the way to give a flops-like efficiency index [23].
Considering only the consumed functional evaluations per cycle might not be a key element for reporting the indices of efficiency when solving nonlinear systems of equations. The number of matrix products, scalar products, decomposition of LU and the solution of the triangular systems of algebraic linear equations are significant in estimating the real cost and superiority of a scheme in comparison to the existing solvers in literature [23].
Hence, the results can be summarized as follows for large m:
$$2^{\frac{1}{\frac{2m^3}{3} + 4m^2}} < 2^{\frac{1}{\frac{2m^3}{3} + 3m^2 + m}} < 3^{\frac{1}{\frac{2m^3}{3} + 6m^2 + m}} < 3.30^{\frac{1}{\frac{2m^3}{3} + 6m^2 + m}}.$$
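The four indices of (34) are easy to tabulate; the short sketch below (our notation) confirms the stated ordering for large m:

```python
def flops_like_index(rate, cost):
    """Efficiency index E = rate**(1/cost), Equation (33)."""
    return rate ** (1.0 / cost)

def indices(m):
    """The four terms of the chain (34), left to right."""
    lu = 2 * m**3 / 3                        # LU factorization cost
    return [
        flops_like_index(2.0,  lu + 4 * m**2),
        flops_like_index(2.0,  lu + 3 * m**2 + m),
        flops_like_index(3.0,  lu + 6 * m**2 + m),
        flops_like_index(3.30, lu + 6 * m**2 + m),
    ]

for m in (50, 100, 200):
    e = indices(m)
    assert e[0] < e[1] < e[2] < e[3]         # ordering of (34) holds for large m
```

Note that the chain is an asymptotic statement: for very small m (say m = 2) the middle inequality can flip, which is why the comparison is stated "for large m".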
In our comparisons, we applied Newton's quadratically convergent iteration (NM), Steffensen's method (SM), the third-order expression of Amat et al. (8), denoted by AM, and the presented approach (16), denoted by PM, to our nonlinear systems of algebraic equations. The indices are also plotted in Figure 1, showing the competitiveness of the scheme with memory (16).

5. Computational Tests

The aim of this section is to illustrate the application of the proposed nonlinear solver to some practical problems. The software Mathematica 11.0 [24,25] was used for all calculations regarding the compared methods. We avoided computing any matrix inverse, and the linear systems were solved using the command LinearSolve[]. For the implementation of such schemes, a possible stopping criterion can be defined based on the residual norm and imposed as follows:
$$\left\| A(x^{(k)}) \right\|_2 \le \varepsilon,$$
wherein ε is the required accuracy and $\|\cdot\|_2$ is the $l_2$ norm.
To confirm the theoretical convergence speed in our numerical tests, we compute the numerical rate of convergence by employing the following definition:
$$\rho \approx \frac{\ln\left( \|A(x^{(k+1)})\|_2 / \|A(x^{(k)})\|_2 \right)}{\ln\left( \|A(x^{(k)})\|_2 / \|A(x^{(k-1)})\|_2 \right)}.$$
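The definition above translates directly into code; a small helper (ours) takes three successive residual norms:

```python
import math

def numerical_order(r_prev, r_curr, r_next):
    """Computational order of convergence rho per (36), from the residual
    norms ||A(x^(k-1))||, ||A(x^(k))||, ||A(x^(k+1))||."""
    return math.log(r_next / r_curr) / math.log(r_curr / r_prev)
```

For example, residuals decaying exactly quadratically, such as 1e-2, 1e-4, 1e-8, give ρ = 2.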

5.1. An Academical Test

Example 1.
Here a nonlinear system of equations $A(x) = 0$, having a complex root, is examined as follows:
A ( x ) = 5 exp ( x 1 2 ) x 2 + 2 x 7 x 10 + 8 x 3 x 4 5 x 6 3 x 9 , 5 tan ( x 1 + 2 ) + cos ( x 9 x 10 ) + x 2 3 + 7 x 3 4 2 sin 3 ( x 6 ) , x 1 2 x 10 x 5 x 6 x 7 x 8 x 9 + tan ( x 2 ) + 2 x 3 x 4 5 x 6 3 , 2 tan ( x 1 2 ) + 2 x 2 + x 3 2 5 x 5 3 x 6 + x 8 cos ( x 9 ) , 10 x 1 2 x 10 + cos ( x 2 ) + x 3 2 5 x 6 3 2 x 8 4 x 9 , cos 1 ( x 1 2 ) sin ( x 2 ) 2 x 10 x 5 4 x 6 x 9 + x 3 2 , x 1 x 2 x 7 x 8 x 10 + x 3 5 5 x 5 3 + x 7 , cos 1 ( 10 x 10 + x 8 + x 9 ) + x 4 sin ( x 2 ) + x 3 15 x 5 2 + x 7 , 10 x 1 + x 3 2 5 x 5 2 + 10 x 6 x 8 sin ( x 7 ) + 2 x 9 , x 1 sin ( x 2 ) 2 x 10 x 8 + x 10 5 x 6 10 x 9 ,
wherein the exact solution, shown up to 10 decimal places, is as follows:
α ( 1.3273490437 + 0.3502924960 i , 1.058599346 1.748724664 i , 1.0276186794 0.0141308051 i , 3.273950008 + 0.127828308 i , 0.8318243937 + 0.0017551949 i , 0.4853245912 + 0.6848776400 i , 0.1693667630 + 0.1840917580 i , 1.534419958 0.321214766 i , 2.086379651 + 0.426342755 i , 1.989592331 + 1.478395393 i ) * .
The numerical evidence and the computational order of convergence ρ for this experiment are reported in Table 1, using 1000-digit fixed floating-point arithmetic and the starting value x ( 0 ) = ( 1.2 + 0.3 I , 1.1 1.9 I , 1.0 0.1 I , 2.5 + 0.5 I , 0.8 0.1 I , 0.4 + 1 . I , 0.1 + 0.1 I , 1.3 0.7 I , 2.0 + 0.5 I , 1.9 + 1.4 I ) * . Here, the residual norm $\|\cdot\|_2$ is reported.

5.2. An Integral Equation Using a Collocation Approach

Example 2.
The purpose of this test is to examine the performance of the new derivative-free scheme with memory on the following mixed Hammerstein integral equation [6]:
$$y(s) = 1 + \frac{1}{5} \int_0^1 G(s, t)\, y(t)^3\, dt,$$
wherein $y \in C[0,1]$, $s, t \in [0,1]$, and the kernel G is defined as follows:
$$G(s, t) = \begin{cases} (1 - s)\, t, & t \le s, \\ s\, (1 - t), & t > s. \end{cases}$$
By employing the well-known Gauss–Legendre quadrature formula for the discretization of integral equations, given in the following form, we are able to tackle (39):
$$\int_0^1 y(t)\, dt \approx \sum_{j=1}^{\chi} w_j\, y(t_j),$$
where the abscissas t j and the weights w j were determined via the formula of Gauss–Legendre quadrature.
The lower limit of integration in the standard Gauss–Legendre quadrature formula is −1. In order to approximate the integral (41) over [0, 1], we must map the roots $t_j$ of the Legendre polynomials onto this segment and scale the weights $w_j$ accordingly.
Denoting the approximation of $y(t_i)$ by $x_i$ $(i = 1, 2, \ldots, \chi)$, one can transform the process of solving the nonlinear mixed integral equation into a set of nonlinear algebraic equations as follows:
$$5 x_i - 5 - \sum_{j=1}^{\chi} c_{ij}\, x_j^3 = 0, \quad i = 1, 2, \ldots, \chi,$$
where
$$c_{ij} = \begin{cases} w_j\, t_j\, (1 - t_i), & \text{if } j \le i, \\ w_j\, t_i\, (1 - t_j), & \text{if } i < j. \end{cases}$$
For this example, we employed 200-digit floating-point computations, with the stopping criterion given by the residual norm (35) with $\varepsilon = 10^{-100}$. The initial vector was selected as $x^{(0)} = (3, 3, \ldots, 3)^*$, and the results are shown in Figure 2 using χ = 40, as a list-log plot of the function values over the iteration cycles. It reveals a stable and fast performance of the new scheme with memory in solving integral equations. Note that Figure 2 can be interpreted only as the error of the numerical solution of the system (42), not as the error of the solution of the source integral Equation (39).
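The discretization (41)-(43) can be reproduced with any Gauss-Legendre routine; a Python sketch (our code, using NumPy's `leggauss` in place of Mathematica) is:

```python
import numpy as np

def hammerstein_system(chi):
    """Build A(x) of Equation (42) for the Hammerstein equation (39):
    Gauss-Legendre nodes mapped from [-1, 1] to [0, 1], weights scaled."""
    t, w = np.polynomial.legendre.leggauss(chi)
    t = (t + 1.0) / 2.0
    w = w / 2.0
    lower = np.tril(np.ones((chi, chi), dtype=bool))  # positions with j <= i
    C = np.where(lower,
                 w * t * (1.0 - t[:, None]),          # c_ij = w_j t_j (1 - t_i)
                 w * t[:, None] * (1.0 - t))          # c_ij = w_j t_i (1 - t_j)
    return lambda x: 5.0 * x - 5.0 - C @ x**3, t
```

Since (42) is a small perturbation of x = 1, even the naive fixed-point iteration x ← x − A(x)/5 contracts here; the scheme (16) of course converges far faster.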

6. Summary

For derivative-involved iteration schemes for solving nonlinear systems, one uses the $m \times m$ Jacobian matrix $F'(x)$, with entries $F'(x)_{jk} = \partial f_j(x) / \partial x_k$. Higher-order schemes, such as Chebyshev-type methods, need higher multi-dimensional derivatives, which makes them less practical. To be more precise, the first Fréchet derivative is a matrix with $m^2$ entries, while the second Fréchet derivative has $m^3$ entries (ignoring symmetry).
In this work, we have developed and introduced a variant of Steffensen's method with memory for tackling nonlinear problems. The scheme consists of two steps and requires the computation of only one LU factorization per cycle, which makes its computational efficiency index higher than that of some existing solvers in the literature.
The application of the iteration scheme to nonlinear integral equations via the collocation approach was discussed; its application to other types of nonlinear discretized equations arising from practical problems, such as those in [26,27], can be investigated similarly.

Author Contributions

All authors contributed equally to preparing and writing this work.

Funding

This work was supported by the Hamedan Branch of Islamic Azad University.

Acknowledgments

We are grateful to three anonymous referees for several comments which improved the readability of this work.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Qasim, S.; Ali, Z.; Ahmad, F.; Serra-Capizzano, S.; Ullah, M.Z.; Mahmood, A. Solving systems of nonlinear equations when the nonlinearity is expensive. Comput. Math. Appl. 2016, 71, 1464–1478.
  2. Wazwaz, A.-M. Linear and Nonlinear Integral Equations; Higher Education Press: Beijing, China; Springer: Berlin/Heidelberg, Germany, 2011.
  3. Mashayekhi, S.; Razzaghi, M.; Tripak, O. Solution of the nonlinear mixed Volterra-Fredholm integral equations by hybrid of block-pulse functions and Bernoulli polynomials. Sci. World J. 2014, 2014.
  4. Alzahrani, E.O.; Al-Aidarous, E.S.; Younas, A.M.M.; Ahmad, F.; Ahmad, S.; Ahmad, S. A higher order frozen Jacobian iterative method for solving Hamilton-Jacobi equations. J. Nonlinear Sci. Appl. 2016, 9, 6210–6227.
  5. Soleymani, F. Pricing multi-asset option problems: A Chebyshev pseudo-spectral method. BIT Numer. Math. 2019, 59, 243–270.
  6. Ortega, J.M.; Rheinboldt, W.C. Iterative Solution of Nonlinear Equations in Several Variables; Academic Press: New York, NY, USA, 1970.
  7. Noda, T. The Steffensen iteration method for systems of nonlinear equations. Proc. Jpn. Acad. 1987, 63, 186–189.
  8. Grau-Sánchez, M.; Grau, À.; Noguera, M. On the computational efficiency index and some iterative methods for solving systems of nonlinear equations. J. Comput. Appl. Math. 2011, 236, 1259–1266.
  9. Traub, J.F. Iterative Methods for the Solution of Equations; Prentice Hall: New York, NY, USA, 1964.
  10. Amat, S.; Busquier, S. Convergence and numerical analysis of a family of two-step Steffensen's methods. Comput. Math. Appl. 2005, 49, 13–22.
  11. Soleymani, F.; Sharifi, M.; Shateyi, S.; Haghani, F.K. A class of Steffensen-type iterative methods for nonlinear systems. J. Appl. Math. 2014, 2014.
  12. Babajee, D.K.R.; Dauhoo, M.Z.; Darvishi, M.T.; Barati, A. A note on the local convergence of iterative methods based on Adomian decomposition method and 3-node quadrature rule. Appl. Math. Comput. 2008, 200, 452–458.
  13. Alaidarous, E.S.; Ullah, M.Z.; Ahmad, F.; Al-Fhaid, A.S. An efficient higher-order quasilinearization method for solving nonlinear BVPs. J. Appl. Math. 2013, 2013.
  14. Hanaç, E. The phase plane analysis of nonlinear equation. J. Math. Anal. 2018, 9, 89–97.
  15. Hasan, P.M.A.; Sulaiman, N.A. Numerical treatment of mixed Volterra-Fredholm integral equations using trigonometric functions and Laguerre polynomials. ZANCO J. Pure Appl. Sci. 2018, 30, 97–106.
  16. Qasim, U.; Ali, Z.; Ahmad, F.; Serra-Capizzano, S.; Ullah, M.Z.; Asma, M. Constructing frozen Jacobian iterative methods for solving systems of nonlinear equations, associated with ODEs and PDEs using the homotopy method. Algorithms 2016, 9, 18.
  17. Ahmad, F.; Soleymani, F.; Khaksar Haghani, F.; Serra-Capizzano, S. Higher order derivative-free iterative methods with and without memory for systems of nonlinear equations. Appl. Math. Comput. 2017, 314, 199–211.
  18. Bellavia, S.; Bertaccini, D.; Morini, B. Nonsymmetric preconditioner updates in Newton-Krylov methods for nonlinear systems. SIAM J. Sci. Comput. 2011, 33, 2595–2619.
  19. Bellavia, S.; Morini, B.; Porcelli, M. New updates of incomplete LU factorizations and applications to large nonlinear systems. Optim. Methods Softw. 2014, 29, 321–340.
  20. Bertaccini, D.; Durastante, F. Interpolating preconditioners for the solution of sequence of linear systems. Comput. Math. Appl. 2016, 72, 1118–1130.
  21. Sharma, J.R.; Kumar, D.; Argyros, I.K.; Magreñán, Á.A. On a bi-parametric family of fourth order composite Newton-Jarratt methods for nonlinear systems. Mathematics 2019, 7, 492.
  22. Cordero, A.; Hueso, J.L.; Martínez, E.; Torregrosa, J.R. A modified Newton-Jarratt's composition. Numer. Algorithms 2010, 55, 87–99.
  23. Montazeri, H.; Soleymani, F.; Shateyi, S.; Motsa, S.S. On a new method for computing the numerical solution of systems of nonlinear equations. J. Appl. Math. 2012, 2012, 1–15.
  24. Sánchez León, J.G. Mathematica Beyond Mathematics: The Wolfram Language in the Real World; Taylor & Francis Group: Boca Raton, FL, USA, 2017.
  25. Wagon, S. Mathematica in Action, 3rd ed.; Springer: Berlin, Germany, 2010.
  26. Soheili, A.R.; Soleymani, F. Iterative methods for nonlinear systems associated with finite difference approach in stochastic differential equations. Numer. Algorithms 2016, 71, 89–102.
  27. Soleymani, F.; Barfeie, M. Pricing options under stochastic volatility jump model: A stable adaptive scheme. Appl. Numer. Math. 2019, 145, 69–89.
Figure 1. The comparison of flops-like efficiency indices for various schemes with and without memory by changing m.
Figure 2. Error history for solving the integral equation in Example 2 using PM (performed by Mathematica).
Table 1. Comparison evidences for Example 1.

Met.   ||A(x^(3))||  ||A(x^(4))||  ||A(x^(5))||  ||A(x^(6))||  ||A(x^(7))||  ||A(x^(8))||  ||A(x^(9))||  ρ
NM     8.19E-1       2.73E-2       1.79E-5       1.28E-11      2.52E-23      8.28E-47      2.50E-94      2.02
SM     7.68E-1       1.83E-2       7.33E-6       5.17E-12      1.51E-24      2.03E-49      3.91E-99      1.99
AM     8.50E-2       6.50E-6       4.98E-17      3.68E-50      1.26E-149     4.83E-448     --            3.00
PM     6.80E-2       1.12E-7       5.15E-26      1.76E-86      6.47E-287     --            --            3.31

Rostami, M.; Lotfi, T.; Brahmand, A. A Fast Derivative-Free Iteration Scheme for Nonlinear Systems and Integral Equations. Mathematics 2019, 7, 637. https://doi.org/10.3390/math7070637