Article

Enhancing the Convergence Order from p to p + 3 in Iterative Methods for Solving Nonlinear Systems of Equations without the Use of Jacobian Matrices

by
Alicia Cordero
1,*,
Miguel A. Leonardo-Sepúlveda
2,3,
Juan R. Torregrosa
1 and
María P. Vassileva
2
1
Instituto de Matemática Multidisciplinar, Universitat Politècnica de València, Camino de Vera, s/n, 46022 Valencia, Spain
2
Área de Ciencia Básica y Ambiental, Instituto Tecnológico de Santo Domingo (INTEC), Av. Los Próceres, Gala, Santo Domingo 10602, Dominican Republic
3
Recinto Félix Evaristo Mejía (ISFODOSU), Av. Caonabo con esq. Leonardo Da Vinci, Restauradores, Santo Domingo 10114, Dominican Republic
*
Author to whom correspondence should be addressed.
Mathematics 2023, 11(20), 4238; https://doi.org/10.3390/math11204238
Submission received: 7 September 2023 / Revised: 28 September 2023 / Accepted: 5 October 2023 / Published: 10 October 2023
(This article belongs to the Special Issue Computational Mathematics and Numerical Analysis)

Abstract: In this paper, we present an innovative technique that improves the convergence order of iterative schemes that do not require the evaluation of Jacobian matrices. As far as we know, this is the first technique that achieves an increase of three units in the order of convergence, from p to p + 3, starting from any Jacobian-free scheme of order p. We first conduct comprehensive numerical tests on academic examples to validate the theoretical results, showing the efficiency and effectiveness of the new Jacobian-free schemes. Then, we apply them to a non-differentiable partial differential equation that models nutrient diffusion in a biological substrate.

1. Introduction

Let us consider the system of nonlinear equations $F(x)=0$, where $F:\Omega\subseteq\mathbb{R}^n\to\mathbb{R}^n$ and $f_i$, $i=1,2,\dots,n$, are the coordinate functions of $F$, with $F(x)=\left(f_1(x),f_2(x),\dots,f_n(x)\right)^T$. Solving nonlinear systems is a challenging task, typically requiring either the linearization of the nonlinear problem or the application of a fixed-point function $G:\Omega\subseteq\mathbb{R}^n\to\mathbb{R}^n$, which leads to a fixed-point iteration scheme. These schemes are applied in many areas of knowledge, including engineering, chemistry, and fluid dynamics, making them valuable tools for solving nonlinear and non-differentiable problems (see, for example, refs. [1,2,3]). Both strategies provide valuable insight and approximate solutions of nonlinear systems.
In many of these methods, the evaluation of Jacobian matrices at one or more points per iteration is required. However, a significant challenge arises from the computation of the Jacobian matrix, especially in cases where it may not exist or, for high-dimensional scenarios, its computation becomes excessively costly or even infeasible. As a result, certain authors have attempted to overcome this issue by eliminating the dependence on the Jacobian matrix and replacing it with alternative techniques.
There are numerous methods for solving systems of nonlinear equations. Among them, Steffensen's method, whose extension to systems was presented by Samanskii in [4], stands out as one of the most renowned techniques in the literature on Jacobian-free iterative methods. Its iterative scheme is given by the expression
$$x^{(k+1)} = x^{(k)} - \left[w^{(k)}, x^{(k)}; F\right]^{-1} F\left(x^{(k)}\right), \quad k = 0, 1, 2, \dots,$$
where $w^{(k)} = x^{(k)} + F\left(x^{(k)}\right)$, and $[\cdot,\cdot\,;F]:\Omega\times\Omega\subseteq\mathbb{R}^n\times\mathbb{R}^n\to\mathcal{L}\left(\mathbb{R}^n\right)$ is the divided difference operator of $F$ on $\mathbb{R}^n$, defined as (see [5])
$$[x,y;F](x-y) = F(x) - F(y), \quad \text{for any } x, y \in \Omega.$$
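For readers who wish to experiment, the operator in Equation (2) can be realized component-wise; this particular realization is the common one, but it is our choice here, since (2) alone does not determine the operator uniquely. A minimal NumPy sketch of this divided difference and of one Steffensen step (the test system is a hypothetical example of ours, not one from the paper):

```python
import numpy as np

def divided_difference(F, u, v):
    """Component-wise first-order divided difference [u, v; F].
    Column j mixes coordinates of u and v so that the telescoping sum
    gives [u, v; F](u - v) = F(u) - F(v) exactly."""
    n = u.size
    M = np.empty((n, n))
    for j in range(n):
        a = np.concatenate((u[:j + 1], v[j + 1:]))  # (u_1..u_j, v_{j+1}..v_n)
        b = np.concatenate((u[:j], v[j:]))          # (u_1..u_{j-1}, v_j..v_n)
        M[:, j] = (F(a) - F(b)) / (u[j] - v[j])
    return M

def steffensen_step(F, x):
    """One iteration of Steffensen's method (1): w = x + F(x)."""
    w = x + F(x)
    A = divided_difference(F, w, x)
    return x - np.linalg.solve(A, F(x))

# Hypothetical test system: f1 = x1^2 + x2 - 3, f2 = x1 + x2^2 - 3.
F = lambda x: np.array([x[0]**2 + x[1] - 3.0, x[0] + x[1]**2 - 3.0])
x = np.array([1.5, 1.5])
for _ in range(8):
    if np.linalg.norm(F(x)) < 1e-10:
        break
    x = steffensen_step(F, x)
```

The characteristic identity $[u,v;F](u-v)=F(u)-F(v)$ can be checked numerically for any pair of points with distinct coordinates.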
In this work, we explore innovative iterative methods for the resolution of nonlinear systems of equations without the need to compute Jacobian matrices. Different researchers in this field have made significant contributions to the development of these strategies.
Several authors have developed memory-based methods that also utilize the Kurchatov divided difference to enhance the efficiency of known methods. Additionally, as part of our strategy, we adopted the mth-order vectorial divided differences proposed by Amiri et al. in [6]. These strategies, found in the literature, circumvent the complexity and costs associated with Jacobian matrices, making them particularly valuable for large-scale systems. Among the most commonly used approaches are the finite difference method, as well as polynomial interpolators, divided differences, and other clever techniques that have also been employed in the literature to avoid the direct evaluation of derivatives. These additional techniques further expand the tools available for tackling derivative-related problems in complex systems.
The advances achieved by these different procedures are reflected in numerous works by different authors. Chicharro et al. and Cordero et al. in [7,8], respectively, proposed new parametric families for solving nonlinear systems. In a similar way, the authors of manuscripts [9,10,11] replaced the Jacobian matrix by a particular divided difference operator. The technique of weight functions (in this case, matrix functions) plays an important role in designing Jacobian-free schemes for solving nonlinear systems, as we can see in [12].
In this research, we present an approach that improves the convergence order from $p$ to $p+3$, applicable to Jacobian-free schemes. We replace the traditional Jacobian matrix of $F$ with the divided difference $\left[x^{(k)}+H\left(x^{(k)}\right), x^{(k)}; F\right]$, a concept introduced by Amiri et al. in [6], where $H:\Omega\subseteq\mathbb{R}^n\to\mathbb{R}^n$ with $H\left(x^{(k)}\right) = \left(f_1^m\left(x^{(k)}\right), f_2^m\left(x^{(k)}\right), \dots, f_n^m\left(x^{(k)}\right)\right)^T$ provides a remarkable approximation of the Jacobian matrix of $F$ at any given point. By leveraging this approach with $m=2$, we achieve a convergence order of $p+3$.
This novel approach not only circumvents the complexities associated with traditional Jacobian matrix computations, but also opens new avenues for improving the convergence rate of numerical methods, making it a valuable contribution to the field of numerical analysis and optimization.
The work is developed as follows: first, in Section 1, we introduce the basic concepts necessary for the development of the paper. Then, in Section 2, we prove in a general way that the scheme has a convergence order of $p+3$. To illustrate this, in Section 3, we solve some academic problems to confirm the reliability of the modified methods without Jacobian matrices. Finally, we address the solution of the problem entitled Modeling of nutrient diffusion in a biological substrate, which is formulated through a non-differentiable second-order elliptic nonlinear partial differential equation. To address this question, we discretize the equation by means of the finite difference method, transforming it into a system of nonlinear equations. We then employ the modified Traub method with increased order of convergence, which we denote as Traub-M2, to solve the resulting system. We conclude by presenting the matrix representing the approximate solution of this system of equations.

Preliminary Concepts

First, we introduce the concepts needed to develop the Taylor expansions used throughout the paper and to prove the order of convergence of the proposed iterative scheme.
Definition 1.
Let $\left\{x^{(k)}\right\}_{k\geq 0}$ be a sequence in $\mathbb{R}^n$, $n\geq 1$, convergent to $\xi$. Then, the convergence is
1. linear, if there exist $M\in\mathbb{R}$, $0 < M < 1$, and $k_0\in\mathbb{N}$, such that
$$\left\|x^{(k+1)}-\xi\right\| \leq M \left\|x^{(k)}-\xi\right\|, \quad \forall k\geq k_0;$$
2. of order $p$, $p > 1$, if there exist $M\in\mathbb{R}$, $M > 0$, and $k_0\in\mathbb{N}$, such that
$$\left\|x^{(k+1)}-\xi\right\| \leq M \left\|x^{(k)}-\xi\right\|^p, \quad \forall k\geq k_0.$$
Let $F:\Omega\subseteq\mathbb{R}^n\to\mathbb{R}^n$ be sufficiently differentiable in $\Omega$. The $q$th derivative of $F$ at $u\in\mathbb{R}^n$, $q\geq 1$, is the $q$-linear function $F^{(q)}(u):\mathbb{R}^n\times\cdots\times\mathbb{R}^n\to\mathbb{R}^n$ such that $F^{(q)}(u)\left(\omega_1,\dots,\omega_q\right)\in\mathbb{R}^n$. It is easy to see that:
  • $F^{(q)}(u)\left(\omega_1,\omega_2,\dots,\omega_{q-1},\cdot\right)$ is a linear operator, that is, $F^{(q)}(u)\left(\omega_1,\dots,\omega_{q-1},\cdot\right)\in\mathcal{L}\left(\mathbb{R}^n\right)$;
  • $F^{(q)}(u)\left(\omega_{\tau(1)},\dots,\omega_{\tau(q)}\right) = F^{(q)}(u)\left(\omega_1,\dots,\omega_q\right)$, for every permutation $\tau$ of $\{1,2,\dots,q\}$.
From the above properties, we define the following notation:
(a) $F^{(q)}(u)\left(\omega_1,\dots,\omega_q\right) = F^{(q)}(u)\,\omega_1\cdots\omega_q$;
(b) $F^{(q)}(u)\,\omega^{q-1}\,F^{(p)}(u)\,\omega^{p} = F^{(q)}(u)F^{(p)}(u)\,\omega^{q+p-1}$.
On the other hand, for $\xi + h\in\mathbb{R}^n$ lying in a neighborhood of a solution $\xi$ of $F(x)=0$, we can apply a Taylor expansion and, if the Jacobian matrix $F'(\xi)$ is nonsingular, express
$$F(\xi+h) = F'(\xi)\left[h + \sum_{j=2}^{p-1} C_j h^j\right] + O\left(h^p\right),$$
where $C_j = \frac{1}{j!}\left[F'(\xi)\right]^{-1} F^{(j)}(\xi)$, $j\geq 2$. We observe that $C_j h^j\in\mathbb{R}^n$, since $F^{(j)}(\xi)\in\mathcal{L}\left(\mathbb{R}^n\times\cdots\times\mathbb{R}^n, \mathbb{R}^n\right)$ and $\left[F'(\xi)\right]^{-1}\in\mathcal{L}\left(\mathbb{R}^n\right)$. In addition, we can express $F'(\xi+h)$ as
$$F'(\xi+h) = F'(\xi)\left[I + \sum_{j=2}^{p-1} j\,C_j h^{j-1}\right] + O\left(h^{p-1}\right),$$
where $I$ is the identity matrix. Therefore, $j\,C_j h^{j-1}\in\mathcal{L}\left(\mathbb{R}^n\right)$. From (3), we can obtain
$$\left[F'(\xi+h)\right]^{-1} = \left[I - X_2 h + X_3 h^2 - X_4 h^3 + \cdots\right]\left[F'(\xi)\right]^{-1} + O\left(h^p\right),$$
where
$$X_2 = 2C_2, \quad X_3 = 4C_2^2 - 3C_3, \quad X_4 = 8C_2^3 - 6C_2C_3 - 6C_3C_2 + 4C_4.$$
On the other hand, we denote by $e^{(k)} = x^{(k)} - \xi$ the error in the $k$th iteration. The equation
$$e^{(k+1)} = M\,{e^{(k)}}^{p} + O\left({e^{(k)}}^{p+1}\right),$$
where $M$ is a $p$-linear function, $M\in\mathcal{L}\left(\mathbb{R}^n\times\cdots\times\mathbb{R}^n, \mathbb{R}^n\right)$, is called the error equation, and $p$ is the order of convergence of the sequence $\left\{x^{(k)}\right\}$ generated by the iterative process. We observe that ${e^{(k)}}^{p}$ stands for $\left(e^{(k)}, e^{(k)}, \dots, e^{(k)}\right)$.
To estimate the convergence order, we use the Approximated Computational Order of Convergence (ACOC), whose expression we can see below.
Definition 2.
Let $\xi$ be a zero of the function $F$ and suppose that $x^{(k-2)}$, $x^{(k-1)}$, $x^{(k)}$ and $x^{(k+1)}$ are four consecutive iterations close to $\xi$. Then, the order of convergence $p$ can be approximated using the formula
$$p \approx \mathrm{ACOC} = \frac{\ln\left(\left\|x^{(k+1)}-x^{(k)}\right\| / \left\|x^{(k)}-x^{(k-1)}\right\|\right)}{\ln\left(\left\|x^{(k)}-x^{(k-1)}\right\| / \left\|x^{(k-1)}-x^{(k-2)}\right\|\right)}.$$
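A direct transcription of the ACOC formula above (the function name and the use of the last four stored iterates are our choices):

```python
import numpy as np

def acoc(iterates):
    """Approximated Computational Order of Convergence (Definition 2),
    computed from the last four iterates x^(k-2), ..., x^(k+1)."""
    x = [np.atleast_1d(np.asarray(v, dtype=float)) for v in iterates[-4:]]
    d = [np.linalg.norm(x[i + 1] - x[i]) for i in range(3)]
    return np.log(d[2] / d[1]) / np.log(d[1] / d[0])

# Synthetic sequence with exact quadratic error decay e_k = 10**(-2**k),
# so the estimate should be close to 2.
xs = [1.0 + 10.0 ** (-2 ** k) for k in range(5)]
```

Feeding `acoc` the iterates of a real method reproduces the ACOC columns of the tables in Section 3.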
To compare the different methods obtained by applying the results proposed in this work, we employ Ostrowski's efficiency index $I = p^{1/d}$ (see [13]), where $p$ is the order of convergence and $d$ is the total number of functional evaluations required by the method per iteration. This is the most commonly used index, but not the only one. In his book [14], Traub uses an operational index defined as $C = p^{1/op}$, where $op$ is the number of operations (products and quotients) per iteration. We recall that the number of products and quotients needed to solve $r$ linear systems with the same coefficient matrix, using the $LU$ factorization, is calculated as
$$\frac{1}{3}n^3 + r\,n^2 - \frac{1}{3}n.$$
We use a combination of both indices, $CI = p^{1/(d+op)}$, which is called the computational efficiency index.
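These indices are straightforward to tabulate; a small helper (the function names are ours) for reproducing such comparisons:

```python
def lu_cost(n, r):
    """Products/quotients to solve r linear systems sharing one LU factorization."""
    return n**3 / 3 + r * n**2 - n / 3

def ostrowski_index(p, d):
    """I = p^(1/d): order p, d functional evaluations per iteration."""
    return p ** (1.0 / d)

def computational_efficiency_index(p, d, op):
    """CI = p^(1/(d + op)): also counts the op products/quotients."""
    return p ** (1.0 / (d + op))
```

For instance, for $n = 3$ and one right-hand side, the $LU$ formula gives $9 + 9 - 1 = 17$ products and quotients.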
The Genocchi–Hermite formula (see [5]),
$$[x+h, x; F] = \int_0^1 F'(x+th)\,dt, \quad \text{for any } x, h \text{ such that } x, x+h \in \Omega\subseteq\mathbb{R}^n,$$
allows us to calculate the Taylor expansion of the divided difference operator in terms of the successive derivatives of $F$,
$$[x+h, x; F] = \sum_{j=0}^{p} \frac{1}{(j+1)!}\, F^{(j+1)}(x)\, h^j + O\left(h^{p+1}\right).$$
By denoting $y = x + h$ and using the errors at both points, $e = x - \xi$ and $e_y = y - \xi$, the Taylor expansion of the divided difference (1) can be written as
$$[y, x; F] = F'(\xi)\left[I + C_2\left(e_y + e\right) + C_3\left(e_y^2 + e_y e + e^2\right) + C_4\left(e_y^3 + e_y^2 e + e_y e^2 + e^3\right) + \cdots\right].$$
We use these expressions to prove the results below. In order to perform the Taylor expansion of $H\left(x^{(k)}\right)$, we use the following result, presented in the work of Amiri et al. [6].
Theorem 1.
Let $H:\Omega\subseteq\mathbb{R}^n\to\mathbb{R}^n$ be a nonlinear operator with coordinate functions $h_i$, $i=1,2,\dots,n$, and let $m\in\mathbb{N}$, $m\geq 1$. Let us consider the divided difference operator $\left[x^{(k)}+\lambda H\left(x^{(k)}\right), x^{(k)}; F\right]$, where $H\left(x^{(k)}\right) = \left(h_1^m\left(x^{(k)}\right), h_2^m\left(x^{(k)}\right), \dots, h_n^m\left(x^{(k)}\right)\right)^T$ and $\lambda\in\mathbb{R}$. Then, the order of $\left[x^{(k)}+\lambda H\left(x^{(k)}\right), x^{(k)}; F\right]$ as an approximation of the Jacobian matrix $F'\left(x^{(k)}\right)$ is $m$.
Proof. 
Let $h_i(x)$, $i=1,2,\dots,n$, be the coordinate functions of $H(x)$. Let us consider the Taylor expansion of $h_i(x)$ around $\xi$:
$$h_i(x) = h_i(\xi) + \sum_{j_1=1}^{n} \frac{\partial h_i(\xi)}{\partial x_{j_1}}\, e_{j_1} + \sum_{j_2=1}^{n}\sum_{j_1=1}^{n} \frac{\partial^2 h_i(\xi)}{\partial x_{j_2}\partial x_{j_1}}\, e_{j_1} e_{j_2} + \sum_{j_3=1}^{n}\sum_{j_2=1}^{n}\sum_{j_1=1}^{n} \frac{\partial^3 h_i(\xi)}{\partial x_{j_3}\partial x_{j_2}\partial x_{j_1}}\, e_{j_1} e_{j_2} e_{j_3} + \cdots + \sum_{j_l=1}^{n}\cdots\sum_{j_1=1}^{n} \frac{\partial^r h_i(\xi)}{\partial x_{j_1}^{r_1}\,\partial x_{j_2}^{r_2}\cdots\partial x_{j_l}^{r_l}}\, e_{j_1}^{r_1} e_{j_2}^{r_2}\cdots e_{j_l}^{r_l} + \cdots,$$
where $r_s\in\{1,2,\dots,r\}$ for $s=1,2,\dots,l$, $r = r_1 + r_2 + \cdots + r_l$, $e = x - \xi$, and $e_{j_s} = x_{j_s} - \xi_{j_s}$ is the $j_s$th coordinate of the error $e$. Since the components of $H$ are the powers $h_i^m$, we can write (7) as
$$h_i^m\left(x^{(k)}\right) = A_1^i\, e^{(k)} + A_2^i\, {e^{(k)}}^2 + \cdots + A_{m-1}^i\, {e^{(k)}}^{m-1} + A_m^i\, {e^{(k)}}^m + A_{m+1}^i\, {e^{(k)}}^{m+1} + \cdots,$$
with $A_t^i\in\mathcal{L}\left(\mathbb{R}^n,\mathbb{R}^n\right)$ for $t = 1, 2, \dots$ Since
$$\frac{\partial^r h_i^m(\xi)}{\partial x_{j_1}^{r_1}\,\partial x_{j_2}^{r_2}\cdots\partial x_{j_l}^{r_l}} = m(m-1)\cdots(m-r+1)\, h_i^{m-r}(\xi)\, \frac{\partial^r h_i(\xi)}{\partial x_{j_1}^{r_1}\,\partial x_{j_2}^{r_2}\cdots} + \cdots = 0, \quad \text{for all } r < m$$
(every term contains a positive power of $h_i(\xi) = 0$), we have $A_t^i = 0$ for $t = 1, 2, \dots, m-1$, and therefore
$$h_i^m\left(x^{(k)}\right) = A_m^i\, {e^{(k)}}^m + A_{m+1}^i\, {e^{(k)}}^{m+1} + O\left({e^{(k)}}^{m+2}\right).$$
By introducing the multilinear operators $A_t = \left(A_t^1, A_t^2, \dots, A_t^n\right)$ for $t = 1, 2, \dots$, we can express the Taylor series of $H\left(x^{(k)}\right)$ around $\xi$ as
$$H\left(x^{(k)}\right) = A_m\, {e^{(k)}}^m + A_{m+1}\, {e^{(k)}}^{m+1} + O\left({e^{(k)}}^{m+2}\right),$$
so the error at $w = x + \lambda H(x)$ is
$$e_w = w - \xi = x + \lambda H(x) - \xi = e + \lambda H(x).$$
Now, let
$$F(x) = F'(\xi)\left[e + C_2 e^2 + C_3 e^3 + C_4 e^4 + C_5 e^5 + C_6 e^6 + C_7 e^7\right] + O\left(e^8\right)$$
be the Taylor expansion of $F(x)$ around $\xi$. When we apply the Genocchi–Hermite Formula (6), we obtain
$$\begin{aligned}
[w, x; F] &= F'(\xi)\left[I + C_2\left(e_w + e\right) + C_3\left(e_w^2 + e_w e + e^2\right) + C_4\left(e_w^3 + e_w^2 e + e_w e^2 + e^3\right) + \cdots\right]\\
&= F'(\xi)\left[I + 2C_2 e + 3C_3 e^2 + \cdots + mC_m e^{m-1} + \left(\lambda C_2 A_m + (m+1)C_{m+1}\right) e^m + \left(\lambda C_2 A_{m+1} + 3\lambda C_3 A_m + (m+2)C_{m+2}\right) e^{m+1} + \cdots\right].
\end{aligned}$$
Since the Taylor expansions of $F'(x)$ and $[w, x; F]$ around $\xi$ coincide in their first $m$ terms, the order of the divided difference $[w, x; F]$ as an approximation of the Jacobian matrix is exactly $m$. □

2. Main Result

In the following, we present a technique that increases the order of convergence from $p$ to $p+3$ in Jacobian-free methods. To achieve this goal, we consider an arbitrary scheme, or class of two-step iterative methods, whose predictor step is Newton's scheme and whose second generic step $\varphi\left(x^{(k)}, y^{(k)}\right)$ has order $p$. We then add a third corrector step $x^{(k+1)}$, depending on real parameters $\alpha$, $\beta$ and $\gamma$, guaranteeing that any scheme satisfying these conditions reaches convergence order $p+3$; we state the corresponding result below.
In the following section, we introduce an iterative scheme that utilizes a Steffensen-type method as a predictor. This concept is inspired by the idea of divided differences $f[w_k, x_k]\approx f'(x_k)$, with $f[w_k, x_k] = \frac{f(w_k)-f(x_k)}{w_k - x_k}$ and $w_k = x_k + \lambda f(x_k)^m$, used to adapt various families of derivative-dependent iterative methods. This adaptation ensures consistent convergence orders when substituting derivatives with divided differences for different values of $m$, especially in the scalar case.
For the scalar case, the iterative expression of a Steffensen-type method is defined as
$$x_{k+1} = x_k - \frac{f\left(x_k\right)}{f\left[w_k, x_k\right]}, \quad w_k = x_k + \lambda f\left(x_k\right), \quad k = 0, 1, 2, \dots$$
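A scalar sketch of (10), including the generalized increment $w_k = x_k + \lambda f(x_k)^m$ used later (the tolerances and the sample equation $x^2 - 2 = 0$ are our choices for illustration):

```python
def steffensen_scalar(f, x0, lam=0.001, m=1, tol=1e-12, max_iter=50):
    """Steffensen-type iteration (10): no derivatives, only f-evaluations."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        w = x + lam * fx**m
        if w == x:          # increment below machine precision: stop
            break
        # divided difference f[w, x] = (f(w) - f(x)) / (w - x)
        x = x - fx * (w - x) / (f(w) - fx)
    return x

root = steffensen_scalar(lambda t: t**2 - 2.0, 1.5)
```

Starting from $x_0 = 1.5$, the iterates approach $\sqrt{2}$ with the expected Newton-like speed, without ever evaluating $f'$.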
By applying a similar concept and replacing Jacobian matrices with divided differences in the vectorial case, we create a generic scheme with an undetermined number of steps, where the first step is an extension of the Steffensen scheme (10) to systems. This extension enables an increase in the convergence order from p to p + 3 units.
In a more general way, we represent the scheme as follows:
$$\begin{aligned}
y^{(k)} &= x^{(k)} - \left[x^{(k)}+\lambda H\left(x^{(k)}\right), x^{(k)}; F\right]^{-1} F\left(x^{(k)}\right),\\
z^{(k)} &= \varphi\left(x^{(k)}, y^{(k)}\right),\\
x^{(k+1)} &= z^{(k)} - \left[\alpha I + G\left(x^{(k)}, y^{(k)}\right)\left(\beta I + \gamma\, G\left(x^{(k)}, y^{(k)}\right)\right)\right]\left[x^{(k)}+\lambda H\left(x^{(k)}\right), x^{(k)}; F\right]^{-1} F\left(z^{(k)}\right),
\end{aligned}$$
where
$$G\left(x^{(k)}, y^{(k)}\right) = \left[x^{(k)}+\lambda H\left(x^{(k)}\right), x^{(k)}; F\right]^{-1}\left[z^{(k)}, y^{(k)}; F\right]$$
and $H\left(x^{(k)}\right) = \left(f_1^m\left(x^{(k)}\right), f_2^m\left(x^{(k)}\right), \dots, f_n^m\left(x^{(k)}\right)\right)^T$, $f_1, f_2, \dots, f_n$ being the coordinate functions of $F:\mathbb{R}^n\to\mathbb{R}^n$. For $m=2$, the iterates converge to $\xi$ with convergence order $p+3$. We demonstrate this assertion below.
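Scheme (11) can be sketched as follows, taking the Jacobian-free Traub method as the second step $\varphi$ (so $p = 3$, and the expected order is 6). The component-wise divided difference, the forward-difference fallback for nearly coincident nodes, the test system and all names are our choices; the parameter values $\alpha = 13/4$, $\beta = -7/2$, $\gamma = 5/4$ are those of Theorem 2, with the signs obtained by solving the linear system that appears in its proof:

```python
import numpy as np

ALPHA, BETA, GAMMA = 13 / 4, -7 / 2, 5 / 4   # Theorem 2

def dd(F, u, v):
    """Component-wise divided difference [u, v; F]; when two nodes share a
    coordinate, the column degenerates to a partial derivative, which we
    approximate by a forward difference (our safeguard)."""
    n = u.size
    M = np.empty((n, n))
    for j in range(n):
        b = np.concatenate((u[:j], v[j:]))
        d = u[j] - v[j]
        if abs(d) < 1e-12:
            a = b.copy()
            a[j] = v[j] + 1e-7
            d = 1e-7
        else:
            a = np.concatenate((u[:j + 1], v[j + 1:]))
        M[:, j] = (F(a) - F(b)) / d
    return M

def step(F, x, lam=0.001):
    """One iteration of scheme (11) with a Traub-type second step."""
    n = x.size
    Fx = F(x)
    A = dd(F, x + lam * Fx**2, x)            # H(x) = (f_1^2,...,f_n^2), m = 2
    y = x - np.linalg.solve(A, Fx)           # Steffensen-type predictor
    z = x - np.linalg.solve(A, Fx + F(y))    # Traub-type step, order p = 3
    G = np.linalg.solve(A, dd(F, z, y))
    W = ALPHA * np.eye(n) + G @ (BETA * np.eye(n) + GAMMA * G)
    return z - W @ np.linalg.solve(A, F(z))

# Hypothetical test: the cyclic system x_j * x_{j+1} - 1 = 0 with n = 3.
F = lambda x: x * np.roll(x, -1) - 1.0
x = np.full(3, 1.5)
for _ in range(4):
    x = step(F, x)
```

No Jacobian is ever formed: every matrix in the iteration is a divided difference built purely from evaluations of $F$.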
Theorem 2.
Let $F:\Omega\subseteq\mathbb{R}^n\to\mathbb{R}^n$ be a function sufficiently differentiable at each point of a neighborhood of $\xi\in\mathbb{R}^n$ contained in the open convex set $\Omega$, such that $\xi$ is a solution of the system $F(x)=0$. Let $x^{(0)}$ be an initial estimate sufficiently close to $\xi$, and suppose that the Jacobian matrix $F'(x)$ is continuous and nonsingular at $\xi$.
Then, the method defined in (11) has a convergence order of $p+3$, where $p$ is the order of the arbitrary scheme $z^{(k)}$, whose first step is a Steffensen-type method, for $\alpha = \frac{13}{4}$, $\beta = -\frac{7}{2}$, and $\gamma = \frac{5}{4}$, and its error equation is as follows:
$$e^{(k+1)} = \frac{1}{2}\left[2C_2\left(C_2A_2 - A_2C_2 - 3C_3\right) + C_2^2C_2 + C_2C_3\right]M_p\,{e^{(k)}}^{p+3} + O\left({e^{(k)}}^{p+4}\right).$$
Proof. 
To perform a Taylor series expansion of the divided difference, we need to develop the Taylor series of $F\left(x^{(k)}\right)$, $F'\left(x^{(k)}\right)$, and $F''\left(x^{(k)}\right)$ around $\xi$:
$$F\left(x^{(k)}\right) = F'(\xi)\left[e^{(k)} + C_2{e^{(k)}}^2 + C_3{e^{(k)}}^3\right] + O\left({e^{(k)}}^4\right),$$
where $C_j = \frac{1}{j!}\left[F'(\xi)\right]^{-1}F^{(j)}(\xi)$, $j\geq 2$, and $e^{(k)} = x^{(k)} - \xi$. For the derivatives,
$$F'\left(x^{(k)}\right) = F'(\xi)\left[I + 2C_2e^{(k)} + 3C_3{e^{(k)}}^2 + 4C_4{e^{(k)}}^3\right] + O\left({e^{(k)}}^4\right),$$
$$F''\left(x^{(k)}\right) = F'(\xi)\left[2C_2 + 6C_3e^{(k)} + 12C_4{e^{(k)}}^2\right] + O\left({e^{(k)}}^3\right).$$
According to Theorem 1, $\left[x^{(k)}+\lambda H\left(x^{(k)}\right), x^{(k)}; F\right]$ for $m=2$ is a second-order approximation of the Jacobian matrix $F'\left(x^{(k)}\right)$. Using (14) and (15), its Taylor series expansion is as follows:
$$\begin{aligned}
\left[x^{(k)}+\lambda H\left(x^{(k)}\right), x^{(k)}; F\right] &= F'(\xi)\left[I + 2C_2e^{(k)} + 3C_3{e^{(k)}}^2 + 4C_4{e^{(k)}}^3\right] + \frac{1}{2}F'(\xi)\left[2C_2 + 6C_3e^{(k)} + 12C_4{e^{(k)}}^2\right]\lambda H\left(x^{(k)}\right) + \cdots\\
&= F'(\xi)\left[I + 2C_2e^{(k)} + \left(3C_3 + \lambda C_2A_2\right){e^{(k)}}^2 + \left(4C_4 + \lambda C_2A_3 + 3\lambda C_3A_2\right){e^{(k)}}^3\right] + O\left({e^{(k)}}^4\right).
\end{aligned}$$
Let us now calculate the Taylor series expansion of $\left[x^{(k)}+\lambda H\left(x^{(k)}\right), x^{(k)}; F\right]^{-1}$. Forcing that the equality
$$\left[x^{(k)}+\lambda H\left(x^{(k)}\right), x^{(k)}; F\right]^{-1}\left[x^{(k)}+\lambda H\left(x^{(k)}\right), x^{(k)}; F\right] = I$$
must be satisfied, we conjecture
$$\left[x^{(k)}+\lambda H\left(x^{(k)}\right), x^{(k)}; F\right]^{-1} = \left[I + X_2e^{(k)} + X_3{e^{(k)}}^2 + X_4{e^{(k)}}^3\right]\left[F'(\xi)\right]^{-1} + O\left({e^{(k)}}^4\right)$$
and, consequently,
$$I = \left[I + X_2e^{(k)} + X_3{e^{(k)}}^2 + X_4{e^{(k)}}^3\right]\left[I + 2C_2e^{(k)} + \left(3C_3 + \lambda C_2A_2\right){e^{(k)}}^2 + \left(4C_4 + \lambda C_2A_3 + 3\lambda C_3A_2\right){e^{(k)}}^3\right].$$
Therefore,
$$X_2 = -2C_2, \quad X_3 = 4C_2^2 - 3C_3 - \lambda C_2A_2, \quad X_4 = -\lambda C_2A_3 + \lambda\left(4C_2^2 - 3C_3\right)A_2 - 8C_2^3 + 6C_2C_3 + 6C_3C_2 - 4C_4.$$
Let us calculate the error equation for the first step of the scheme,
$$y^{(k)} = x^{(k)} - \left[x^{(k)}+\lambda H\left(x^{(k)}\right), x^{(k)}; F\right]^{-1}F\left(x^{(k)}\right).$$
It follows that
$$y^{(k)} - \xi = C_2{e^{(k)}}^2 - \left(X_2C_2 + X_3 + C_3\right){e^{(k)}}^3 + O\left({e^{(k)}}^4\right).$$
We denote $B_3 = -\left(X_2C_2 + X_3 + C_3\right)$.
Now, let us suppose that the second step $\varphi\left(x^{(k)}, y^{(k)}\right)$ has order $p$; then, the error in the $k$th iterate $z^{(k)}$ is
$$z^{(k)} - \xi = M_p{e^{(k)}}^p + M_{p+1}{e^{(k)}}^{p+1} + M_{p+2}{e^{(k)}}^{p+2} + M_{p+3}{e^{(k)}}^{p+3} + O\left({e^{(k)}}^{p+4}\right),$$
and therefore the Taylor expansion of $F\left(z^{(k)}\right)$ around $\xi$ is
$$F\left(z^{(k)}\right) = F'(\xi)\left[M_p{e^{(k)}}^p + M_{p+1}{e^{(k)}}^{p+1} + M_{p+2}{e^{(k)}}^{p+2} + M_{p+3}{e^{(k)}}^{p+3} + C_2M_p^2{e^{(k)}}^{2p}\right] + O\left({e^{(k)}}^{p+4}\right).$$
Thus, the Taylor series expansion of the Jacobian matrix $F'\left(y^{(k)}\right)$, approximated by the divided difference $\left[z^{(k)}, y^{(k)}; F\right]$, is as follows:
$$\left[z^{(k)}, y^{(k)}; F\right] = F'(\xi)\left[I + C_2^2{e^{(k)}}^2 + C_2B_3{e^{(k)}}^3 + C_2M_p{e^{(k)}}^p + C_2M_{p+1}{e^{(k)}}^{p+1} + C_2M_{p+2}{e^{(k)}}^{p+2} + C_2M_{p+3}{e^{(k)}}^{p+3}\right] + O\left({e^{(k)}}^{p+4}\right).$$
In order to calculate the error equation at the last step, we first calculate the product
$$G\left(x^{(k)}, y^{(k)}\right) = \left[x^{(k)}+\lambda H\left(x^{(k)}\right), x^{(k)}; F\right]^{-1}\left[z^{(k)}, y^{(k)}; F\right].$$
Using (17) and (21), we have
$$G\left(x^{(k)}, y^{(k)}\right) = I + X_2e^{(k)} + D_2{e^{(k)}}^2 + D_3{e^{(k)}}^3 + D_4{e^{(k)}}^p + D_5{e^{(k)}}^{p+1} + D_6{e^{(k)}}^{p+2} + D_7{e^{(k)}}^{p+3} + O\left({e^{(k)}}^{p+4}\right),$$
where
$$\begin{aligned}
D_2 &= X_3 + C_2^2, \quad D_3 = X_4 + C_2B_3 + X_2C_2^2, \quad D_4 = C_2M_p, \quad D_5 = C_2M_{p+1} + X_2C_2M_p,\\
D_6 &= C_2M_{p+2} + X_2C_2M_{p+1} + X_3C_2M_p, \quad D_7 = C_2M_{p+3} + X_2C_2M_{p+2} + X_3C_2M_{p+1} + X_4C_2M_p.
\end{aligned}$$
In addition,
$$G\left(x^{(k)}, y^{(k)}\right)^2 = I + 2X_2e^{(k)} + E_2{e^{(k)}}^2 + E_3{e^{(k)}}^3 + E_4{e^{(k)}}^p + E_5{e^{(k)}}^{p+1} + E_6{e^{(k)}}^{p+2} + E_7{e^{(k)}}^{p+3} + O\left({e^{(k)}}^{p+4}\right),$$
where
$$\begin{aligned}
E_2 &= 2D_2 + X_2^2, \quad E_3 = 2D_3 + X_2D_2 + D_2X_2, \quad E_4 = 2D_4, \quad E_5 = 2D_5 + X_2D_4 + D_4X_2,\\
E_6 &= 2D_6 + X_2D_5 + D_4D_2 + D_5X_2 + D_2D_4, \quad E_7 = 2D_7 + X_2D_6 + D_2D_5 + D_3D_4 + D_4D_3 + D_5D_2 + D_6X_2.
\end{aligned}$$
Finally, we calculate the product $\left[x^{(k)}+\lambda H\left(x^{(k)}\right), x^{(k)}; F\right]^{-1}F\left(z^{(k)}\right)$ using (17) and (20):
$$\left[x^{(k)}+\lambda H\left(x^{(k)}\right), x^{(k)}; F\right]^{-1}F\left(z^{(k)}\right) = M_p{e^{(k)}}^p + \left(M_{p+1} + X_2M_p\right){e^{(k)}}^{p+1} + \left(M_{p+2} + X_2M_{p+1} + X_3M_p\right){e^{(k)}}^{p+2} + \left(M_{p+3} + X_2M_{p+2} + X_3M_{p+1} + X_4M_p\right){e^{(k)}}^{p+3} + O\left({e^{(k)}}^{p+4}\right).$$
Therefore, the error in the third step $x^{(k+1)}$ is calculated by using (19), (22), (24) and (26) in
$$x^{(k+1)} = z^{(k)} - \left[\alpha I + G\left(x^{(k)}, y^{(k)}\right)\left(\beta I + \gamma\,G\left(x^{(k)}, y^{(k)}\right)\right)\right]\left[x^{(k)}+\lambda H\left(x^{(k)}\right), x^{(k)}; F\right]^{-1}F\left(z^{(k)}\right),$$
which gives
$$\begin{aligned}
e^{(k+1)} ={}& M_p{e^{(k)}}^p + M_{p+1}{e^{(k)}}^{p+1} + M_{p+2}{e^{(k)}}^{p+2} + M_{p+3}{e^{(k)}}^{p+3}\\
& - \left[(\alpha+\beta+\gamma)I + (\beta+2\gamma)X_2e^{(k)} + \left(\beta D_2+\gamma E_2\right){e^{(k)}}^2 + \left(\beta D_3+\gamma E_3\right){e^{(k)}}^3 + \left(\beta D_4+\gamma E_4\right){e^{(k)}}^p\right.\\
&\quad \left. + \left(\beta D_5+\gamma E_5\right){e^{(k)}}^{p+1} + \left(\beta D_6+\gamma E_6\right){e^{(k)}}^{p+2} + \left(\beta D_7+\gamma E_7\right){e^{(k)}}^{p+3}\right]\\
&\cdot \left[M_p{e^{(k)}}^p + \left(M_{p+1}+X_2M_p\right){e^{(k)}}^{p+1} + \left(M_{p+2}+X_2M_{p+1}+X_3M_p\right){e^{(k)}}^{p+2} + \left(M_{p+3}+X_2M_{p+2}+X_3M_{p+1}+X_4M_p\right){e^{(k)}}^{p+3}\right] + O\left({e^{(k)}}^{p+4}\right)\\
={}& (1-\alpha-\beta-\gamma)M_p{e^{(k)}}^p + \left[(1-\alpha-\beta-\gamma)M_{p+1} - (\alpha+2\beta+3\gamma)X_2M_p\right]{e^{(k)}}^{p+1}\\
& + \left[(1-\alpha-\beta-\gamma)M_{p+2} - (\alpha+2\beta+3\gamma)X_2M_{p+1} - (\alpha+2\beta+3\gamma)X_3M_p - (5\beta+14\gamma)C_2^2M_p\right]{e^{(k)}}^{p+2}\\
& + \left[(1-\alpha-\beta-\gamma)M_{p+3} - (\alpha+2\beta+3\gamma)X_2M_{p+2} - (\alpha+2\beta+3\gamma)X_3M_{p+1} - (\alpha+2\beta+3\gamma)X_4M_p\right.\\
&\quad \left. - (5\beta+14\gamma)C_2^2M_{p+1} + (3\beta+8\gamma)C_2X_3M_p - (\beta+3\gamma)X_3X_2M_p + (2\beta+10\gamma)C_2^3M_p + (\beta+2\gamma)C_2C_3M_p\right]{e^{(k)}}^{p+3} + O\left({e^{(k)}}^{p+4}\right).
\end{aligned}$$
From (27), we derive the system
$$1 - \alpha - \beta - \gamma = 0, \quad \alpha + 2\beta + 3\gamma = 0, \quad 5\beta + 14\gamma = 0,$$
whose solution is $\alpha = \frac{13}{4}$, $\beta = -\frac{7}{2}$ and $\gamma = \frac{5}{4}$. Simplifying the resulting error equation, we obtain
$$e^{(k+1)} = \frac{1}{2}\left[2C_2\left(C_2A_2 - A_2C_2 - 3C_3\right) + C_2^2C_2 + C_2C_3\right]M_p\,{e^{(k)}}^{p+3} + O\left({e^{(k)}}^{p+4}\right),$$
showing that the order of convergence is p + 3 . This ends our proof. □

3. Numerical Results

Below, we present several systems of nonlinear equations that we solve to confirm the reliability of the methods based on the conditions stated in Theorem 2, specifically the methods that do not require the evaluation of Jacobian matrices.
Each table displays the initial approximation x ( 0 ) used to find the solutions. Additionally, we indicate the number of iterations (Iter) required for the schemes to converge, the approximate computational order of convergence (ACOC) for each method, and the approximate computational time (e-time) required (in seconds).
Calculations are performed in MATLAB using variable precision arithmetic with 2000 digits of mantissa and an error tolerance of $\epsilon = 10^{-8}$. In addition, expression (5) is used to calculate the approximated computational order of convergence (ACOC). Each table includes the CPU time used by each method to obtain the solution. The computer used for these calculations has the following specifications: macOS Ventura operating system version 13.4.1, M1 Pro chip with a quad-core performance configuration, 16 GB of LPDDR5 RAM, and year of manufacture 2021. In addition, we apply the following stopping criterion:
$$\left\|x^{(k+1)} - x^{(k)}\right\| + \left\|F\left(x^{(k+1)}\right)\right\| < \epsilon.$$

3.1. Computational Efficiency

Now, we present five classes of iterative methods that we use to apply our technique and perform various numerical tests in relation to specific academic problems. Then, we apply the third step of our approach to each of these methods. In a first step, we compute the computational efficiency index of each of these methods in order to be able to compare the methods of order p with the schemes modified to p + 3 among the different classes of approaches.
The first scheme whose convergence order we increase, and whose efficiency we check, is a modification of the method presented by Cordero et al. in [15], in which we eliminate the Jacobian matrix. For simplicity, we denote it by $MET_{1,\lambda,4}$, where $\lambda$ represents the parameter selected in that scheme, 4 is the order of the method, and $m = 2$.
$$MET_{1,\lambda,4}: \quad \begin{aligned}
y^{(k)} &= x^{(k)} - \left[x^{(k)}+\lambda H\left(x^{(k)}\right), x^{(k)}; F\right]^{-1}F\left(x^{(k)}\right),\\
x^{(k+1)} &= x^{(k)} - \left[\frac{\beta}{6}G^3 + 2G^2 + G + I\right]\left[x^{(k)}+\lambda H\left(x^{(k)}\right), x^{(k)}; F\right]^{-1}F\left(x^{(k)}\right), \quad \text{for } \beta = 1,
\end{aligned}$$
where $G = G\left(x^{(k)}, y^{(k)}\right) = I - \left[x^{(k)}+\lambda H\left(x^{(k)}\right), x^{(k)}; F\right]^{-1}\left[x^{(k)}, y^{(k)}; F\right]$ and $k = 0, 1, 2, \dots$
The second test scheme we employ in this paper is Traub's method, introduced in [16]. The version transferred to systems, in which the Jacobian is also replaced by the divided difference with $m = 2$, is represented by the following expression:
$$MET_{2,\lambda,3}: \quad \begin{aligned}
y^{(k)} &= x^{(k)} - \left[x^{(k)}+\lambda H\left(x^{(k)}\right), x^{(k)}; F\right]^{-1}F\left(x^{(k)}\right),\\
x^{(k+1)} &= x^{(k)} - \left[x^{(k)}+\lambda H\left(x^{(k)}\right), x^{(k)}; F\right]^{-1}\left[F\left(x^{(k)}\right) + F\left(y^{(k)}\right)\right],
\end{aligned}$$
where $k = 0, 1, 2, \dots$
Chun’s method was initially introduced in [17]. Now, we present a modification of this scheme, in which we replace the Jacobian matrix with the divided difference for m = 2 .
$$MET_{3,\lambda,4}: \quad \begin{aligned}
y^{(k)} &= x^{(k)} - \left[x^{(k)}+\lambda H\left(x^{(k)}\right), x^{(k)}; F\right]^{-1}F\left(x^{(k)}\right),\\
x^{(k+1)} &= y^{(k)} - \left(3I - 2\Gamma^{(k)}\right)\left[x^{(k)}+\lambda H\left(x^{(k)}\right), x^{(k)}; F\right]^{-1}F\left(y^{(k)}\right),
\end{aligned}$$
being $\Gamma^{(k)} = \left[x^{(k)}+\lambda H\left(x^{(k)}\right), x^{(k)}; F\right]^{-1}\left[x^{(k)}, y^{(k)}; F\right]$, where $k = 0, 1, 2, \dots$
Ostrowski's method is presented in [13] and was transferred to systems by Grau et al. in [18]; as in the previous cases, we modify this method by replacing the Jacobian matrix by the divided difference proposed in this work for $m = 2$.
$$MET_{4,\lambda,4}: \quad \begin{aligned}
y^{(k)} &= x^{(k)} - \left[x^{(k)}+\lambda H\left(x^{(k)}\right), x^{(k)}; F\right]^{-1}F\left(x^{(k)}\right),\\
x^{(k+1)} &= y^{(k)} - \left(2\left[x^{(k)}, y^{(k)}; F\right] - \left[x^{(k)}+\lambda H\left(x^{(k)}\right), x^{(k)}; F\right]\right)^{-1}F\left(y^{(k)}\right),
\end{aligned}$$
where $k = 0, 1, 2, \dots$
Finally, we employ the family of iterative methods proposed by Cordero et al. in [19]. This class, characterized by convergence order $p = 6$ in three steps, allows us to apply the new technique stated in Theorem 2, transforming the scheme into a four-step method and increasing its convergence order up to $p = 9$. This scheme takes its new form after replacing the Jacobian matrix by the proposed divided difference for $m = 2$; its three-step iterative expression is as follows:
$$MET_{5,\lambda,6}: \quad \begin{aligned}
y^{(k)} &= x^{(k)} - \left[x^{(k)}+\lambda H\left(x^{(k)}\right), x^{(k)}; F\right]^{-1}F\left(x^{(k)}\right),\\
z^{(k)} &= y^{(k)} - \left(2\left[x^{(k)}, y^{(k)}; F\right] - \left[x^{(k)}+\lambda H\left(x^{(k)}\right), x^{(k)}; F\right]\right)^{-1}F\left(y^{(k)}\right),\\
x^{(k+1)} &= z^{(k)} - \Lambda^{(k)}F\left(z^{(k)}\right) - \left[(1+\gamma)I - \Gamma^{(k)}\right]\left[x^{(k)}+\lambda H\left(x^{(k)}\right), x^{(k)}; F\right]^{-1}F\left(z^{(k)}\right),
\end{aligned}$$
for $\Lambda^{(k)} = \gamma\left[x^{(k)}+\lambda H\left(x^{(k)}\right), x^{(k)}; F\right]^{-1} + (1-\gamma)\left[x^{(k)}, y^{(k)}; F\right]^{-1}$ and $\Gamma^{(k)} = \left[x^{(k)}+\lambda H\left(x^{(k)}\right), x^{(k)}; F\right]^{-1}\left[x^{(k)}, y^{(k)}; F\right]$.
We note that in each method we use the value $\lambda = 0.001$. After several numerical tests, we found that the iterative schemes converge faster to the solution when $\lambda$ is close to zero.
We apply to each of them the conditions of Theorem 2 in order to increase their orders of convergence from p to p + 3 .
In Table 1, we present the order of convergence, the number of scalar functional evaluations per iteration, the number of products/quotients per iteration, and the computational efficiency index (CI) for each method, both before and after applying our proposed third step. It is important to remember that evaluating the operators involved in one iteration requires the computation of scalar functions: $n^2 - n$ scalar evaluations for any divided difference $\left[x^{(k)}, y^{(k)}; F\right]$ and $n^2 + 2n$ for the expression $\left[x^{(k)}+\lambda H\left(x^{(k)}\right), x^{(k)}; F\right]$. Furthermore, we refer to each iterative class obtained after applying our scheme as $MET_{1,\lambda,p}$ through $MET_{5,\lambda,p}$, respectively, where $p$ represents the convergence order of each modified method.
In Figure 1 and Figure 2, we can observe the computational efficiency behavior of our technique, both for the methods M E T 1 , λ , p through M E T 5 , λ , p and for the modified methods by incorporating our third step when applied to small and large systems.

3.2. Some Academic Problems

Example 1.
The first case we present is a system of equations of size n = 20 ,
$$\left(2x_j^2 + 1\right) - 2\sum_{m=1}^{20} x_m^2 + \arctan x_j = 0, \quad j = 1, 2, \dots, n.$$
The solution of System (28) is $\xi \approx (0.175768, \dots, 0.175768)^T$ and the initial estimate used is $x^{(0)} = \left(\frac{1}{2}, \frac{1}{2}, \dots, \frac{1}{2}\right)^T$.
Example 2.
The second case presented is a system of size n = 30 ,
$$x_j - \cos\left(2x_j - \sum_{m=1}^{30} x_m\right) = 0, \quad j = 1, 2, \dots, n.$$
The solution of System (29) is $\xi \approx (0.486743, \dots, 0.486743)^T$ and the initial estimate used is $x^{(0)} = \left(\frac{1}{2}, \frac{1}{2}, \dots, \frac{1}{2}\right)^T$.
Example 3.
The third case presented is a system of size n = 30 ,
$$x_j^2\, x_{j+1} - 1 = 0, \quad j = 1, 2, \dots, n-1; \qquad x_n^2\, x_1 - 1 = 0,$$
whose exact solution is $\xi = (1, 1, \dots, 1)^T$, and the initial estimate is $x^{(0)} = \left(\frac{3}{2}, \frac{3}{2}, \dots, \frac{3}{2}\right)^T$.
Example 4.
The fourth system presented has size n = 40 ; the expression representing this system is given by
$$x_j\, x_{j+1} - 1 = 0, \quad j = 1, 2, \dots, n-1; \qquad x_n\, x_1 - 1 = 0.$$
The exact solution of System (31) is $\xi = (1, 1, \dots, 1)^T$, for which we use the initial estimate $x^{(0)} = \left(\frac{3}{2}, \frac{3}{2}, \dots, \frac{3}{2}\right)^T$.
Example 5.
Finally, we consider system
$$x_j \sin x_{j+1} - 1 = 0, \quad j = 1, 2, \dots, n-1; \qquad x_n \sin x_1 - 1 = 0.$$
It is a system of size $n = 40$, and we select the initial estimate $x^{(0)} = \left(\frac{3}{4}, \frac{3}{4}, \dots, \frac{3}{4}\right)^T$ to approximate the solution $\xi \approx (1.1141, 1.1141, \dots, 1.1141)^T$.
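As a quick sanity check (ours, not part of the paper), the approximate solutions quoted in Examples 1, 2, 4 and 5 can be substituted back into the corresponding systems; the residuals are small, at the precision of the quoted digits:

```python
import numpy as np

def F1(x):   # Example 1, n = 20
    return (2 * x**2 + 1) - 2 * np.sum(x**2) + np.arctan(x)

def F2(x):   # Example 2, n = 30
    return x - np.cos(2 * x - np.sum(x))

def F4(x):   # Example 4, n = 40, cyclic
    return x * np.roll(x, -1) - 1.0

def F5(x):   # Example 5, n = 40, cyclic
    return x * np.sin(np.roll(x, -1)) - 1.0

res1 = np.linalg.norm(F1(np.full(20, 0.175768)))
res2 = np.linalg.norm(F2(np.full(30, 0.486743)))
res4 = np.linalg.norm(F4(np.ones(40)))
res5 = np.linalg.norm(F5(np.full(40, 1.1141)))
```

The residual for Example 4 is exactly zero, since $\xi = (1, \dots, 1)^T$ is an exact solution; the others are limited only by the six quoted digits.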
Table 2, Table 3, Table 4, Table 5 and Table 6 provide strong confirmation of our theoretical findings, as the application of the proposed scheme to each class yielded a convergence order of $p + 3$. Remarkably, the $MET_{2,\lambda}$ class exhibited outstanding efficiency, particularly in terms of computational time, when compared to alternative Jacobian-free approaches. This observation reinforces the practical viability of our technique and its potential to significantly accelerate numerical computations, supporting its relevance in academic problem solving.

3.3. Application of the Finite Difference Method on a Model of Nutrient Diffusion in a Biological Substrate

Now, we focus on the detailed study of a nonlinear elliptic initial value and contour problem, which was previously treated in [20]. To enrich and improve the analysis, we introduce an additional term in the equation, namely $|u|$, which makes the problem non-differentiable. The proposed partial differential equation can model a wide range of nonlinear phenomena in physical, chemical and biological systems.
In the following, we delve into an applied problem concerning the modeling of nutrient diffusion in a biological substrate. We state the problem and discuss the potential implications that solving it could have for researchers in various fields and for farmers, as well as the interpretation of its solution.
In the context of agriculture and biotechnology, let us consider a two-dimensional biological substrate that represents a cultivation area or a growth medium for microorganisms. We aim to understand how nutrients diffuse and distribute within this substrate, impacting the growth and health of the organisms present,
∂²u/∂x² + ∂²u/∂y² = u(x, y)³ + |u(x, y)|,  with Ω = {(x, y) ∈ ℝ²: x, y ∈ [0, 1]},
u(x, 0) = 2x² − x + 1,  u(x, 1) = 2,  0 ≤ x ≤ 1,
u(0, y) = 2y² − y + 1,  u(1, y) = 2,  0 ≤ y ≤ 1.
In this scenario, u ( x , y ) represents the concentration of nutrients at each point ( x , y ) within the substrate. The equation reflects how nutrients diffuse within the substrate, while the term u ( x , y ) 3 + | u ( x , y ) | incorporates the interaction between nutrient concentration and the biochemical processes present in the substrate.
The boundary conditions are derived from the conditions at the edges of the cultivation area. The conditions at the lateral edges ( x = 0 and x = 1 ) could reflect the initial nutrient concentration or the constant input of nutrients into the substrate. The conditions at the upper and lower edges ( y = 0 and y = 1 ) could represent nutrient absorption by plant roots or interaction with microorganisms on the surface.
Solving this problem would enable researchers and farmers to understand how nutrients are distributed within the substrate and how they impact the growth and health of the organisms present, providing valuable insights for optimizing agricultural practices and enhancing biological yields.
As an illustrative example, we solve this equation for a small system, employing a block-wise approach to represent it as follows.
We create a mesh for discretizing the problem, with h = 1/(n + 1) and k = 1/(m + 1); the mesh points are (x_i, y_j), with x_i = ih, i = 0, …, n + 1, and y_j = jk, j = 0, …, m + 1, such that
u_xx(x_i, y_j) + u_yy(x_i, y_j) = u(x_i, y_j)³ + |u(x_i, y_j)|.
Therefore, by approximating the partial derivatives by central divided differences, we have
u ( x i + 1 , y j ) 2 u ( x i , y j ) + u ( x i 1 , y j ) h 2 + u ( x i , y j + 1 ) 2 u ( x i , y j ) + u ( x i , y j 1 ) k 2 = u ( x i , y j ) 3 + | u ( x i , y j ) | ,
for i = 1 , , n and j = 1 , , m .
Now, we denote u(x_i, y_j) = u_{i,j}. Simplifying the notation, we obtain
2(λ² + 1)u_{i,j} − λ²(u_{i,j+1} + u_{i,j−1}) − (u_{i+1,j} + u_{i−1,j}) + h²(u_{i,j}³ + |u_{i,j}|) = 0,
with λ = h/k, i = 1, …, n and j = 1, …, m.
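To make the discretization concrete, the following sketch evaluates this residual on a full grid that already contains the boundary values; the function name `residual` and the grid layout (first index for x, second for y) are our own choices, not taken from the paper.

```python
import numpy as np

def residual(U, h, k):
    # U is the full (n+2) x (m+2) grid, boundary values included; returns the
    # n*m interior values of
    #   2(lam^2+1) u_ij - lam^2 (u_{i,j+1} + u_{i,j-1})
    #     - (u_{i+1,j} + u_{i-1,j}) + h^2 (u_ij^3 + |u_ij|),
    # with lam = h/k; these vanish at a solution of the discretized PDE.
    lam = h / k
    inner = U[1:-1, 1:-1]
    R = (2.0 * (lam**2 + 1.0) * inner
         - lam**2 * (U[1:-1, 2:] + U[1:-1, :-2])   # y-neighbours
         - (U[2:, 1:-1] + U[:-2, 1:-1])            # x-neighbours
         + h**2 * (inner**3 + np.abs(inner)))
    return R.ravel()

# example: residual on a 4 x 4 interior grid sampled from u(x, y) = x^2 + y^2
x = np.linspace(0.0, 1.0, 6)
U = x[:, None]**2 + x[None, :]**2
r = residual(U, 0.2, 0.2)
```

A quick consistency check: for u(x, y) = x² + y², whose Laplacian is exactly 4 and which central differences reproduce exactly, the residual reduces to h²(−4 + u³ + |u|) at each interior node.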
Equation (34), together with the boundary conditions, forms a nonlinear system of size (nm) × (nm), given by
τu + h²ν(u) = W,
where
τ is the block-tridiagonal matrix with diagonal blocks A and sub- and superdiagonal blocks B, where A is the n × n tridiagonal matrix with diagonal entries 2(λ² + 1) and off-diagonal entries −1, and B = −λ² · I_{n×n}. Moreover, u = (u_1, …, u_d)^T and ν(u) = (u_1³, …, u_d³)^T + (|u_1|, …, |u_d|)^T, with d = nm. In addition, W denotes a vector containing the boundary conditions. We can set the nonlinear system as follows:
P(u) = τu + h²ν(u) − W = 0.
Let us define the differentiable part as F(u) = τu + h²(u_1³, …, u_d³)^T − W and the non-differentiable part as G(u) = h²(|u_1|, …, |u_d|)^T, so that P(u) = F(u) + G(u) = 0 holds.
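The block structure of τ lends itself to Kronecker products; the sketch below is our own construction (assuming the unknowns are ordered grid line by grid line in y, and absorbing the sign into B = −λ²·I), not code from the paper.

```python
import numpy as np

def build_tau(n, m, lam):
    # tau is block-tridiagonal: diagonal blocks A = tridiag(-1, 2(lam^2+1), -1)
    # of size n x n, off-diagonal blocks B = -lam^2 * I. Unknowns are ordered
    # line by line (j outer, i inner), so kron assembles the full matrix.
    A = (2.0 * (lam**2 + 1.0) * np.eye(n)
         - np.eye(n, k=1) - np.eye(n, k=-1))
    T = np.eye(m, k=1) + np.eye(m, k=-1)          # couples adjacent grid lines
    return np.kron(np.eye(m), A) + np.kron(T, -lam**2 * np.eye(n))

tau = build_tau(3, 2, 1.0)   # small example: a 6 x 6 matrix
```

Applying τ to a vector of interior unknowns reproduces exactly the linear part of the stencil, with zero contributions where a neighbour lies on the boundary (those terms are carried by W).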
To solve Equation (35), we employ the iterative method MET_{2,λ,6}^{Mod}. We take the initial approximation u^(0) = (1, 1, …, 1)^T of the exact solution. During the execution of the iterative process, we use variable-precision arithmetic with 100 digits. We define the stopping criterion in terms of the difference between two consecutive iterates, ‖u^(k+1) − u^(k)‖ < 10^−8, together with ‖P(u^(k))‖ ≤ 10^−8. The technical specifications of the computer used to solve this case are identical to those employed for the academic problems. In this instance, we select n = m = 25.
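We do not reproduce the MET_{2,λ,6}^{Mod} scheme here; as a stand-in, the following sketch solves the same discretized system with a basic Newton-type iteration whose derivative is approximated column by column with forward differences (so no analytic Jacobian is formed), applying both stopping criteria. All helper names are ours, and a small grid in double precision is used for speed instead of the paper's n = m = 25 with 100-digit arithmetic.

```python
import numpy as np

def P(u, n, m):
    # Residual of the discretized problem (the stencil of Equation (34)),
    # with the boundary conditions of the nutrient-diffusion model built in.
    h, k = 1.0 / (n + 1), 1.0 / (m + 1)
    lam = h / k
    x = np.linspace(0.0, 1.0, n + 2)
    y = np.linspace(0.0, 1.0, m + 2)
    U = np.zeros((n + 2, m + 2))
    U[:, 0] = 2*x**2 - x + 1          # u(x, 0)
    U[:, -1] = 2.0                    # u(x, 1)
    U[0, :] = 2*y**2 - y + 1          # u(0, y)
    U[-1, :] = 2.0                    # u(1, y)
    U[1:-1, 1:-1] = u.reshape(n, m)
    inner = U[1:-1, 1:-1]
    R = (2*(lam**2 + 1)*inner
         - lam**2*(U[1:-1, 2:] + U[1:-1, :-2])
         - (U[2:, 1:-1] + U[:-2, 1:-1])
         + h**2*(inner**3 + np.abs(inner)))
    return R.ravel()

def solve(n=8, m=8, tol=1e-8, maxiter=25, eps=1e-7):
    # Newton-type iteration with a forward-difference approximation of the
    # derivative of P; stops when both the step size and the residual norm
    # fall below tol, mirroring the paper's double stopping criterion.
    u = np.ones(n * m)
    for _ in range(maxiter):
        Pu = P(u, n, m)
        J = np.empty((n * m, n * m))
        for j in range(n * m):
            e = np.zeros(n * m)
            e[j] = eps
            J[:, j] = (P(u + e, n, m) - Pu) / eps
        step = np.linalg.solve(J, Pu)
        u = u - step
        if np.linalg.norm(step) < tol and np.linalg.norm(P(u, n, m)) < tol:
            break
    return u

u = solve()
```

Since the nonlinearity h²(u³ + |u|) is small relative to the linear part τu, the iteration converges in very few steps from the constant initial guess.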
Solving the system associated with this PDE, we display the computed solution in the following matrix.
u(x_i, y_j) =
1           0.96449704  0.911242    0.89349112  1.78106509  1.88757396  2
0.96449704  0.94077958  0.89904994  0.88363618  1.18118983  0.86468611  2
0.93491124  0.91837384  0.8853521   0.87175486  0.80447293  0.49872502  2
0.9112426   0.89904994  0.87168096  0.85914444  0.57490717  0.3266623   2
0.89349112  0.88363618  0.85914444  0.8468333   0.43058144  0.23355179  2
0.8816568   0.87257659  0.84851767  0.83559769  0.33553995  0.17732774  2
0.93491124  0.90648025  0.84363755  0.8109766   0.1225449   0.06200892  2
0.96449704  0.9290334   0.85300742  0.81435483  0.10830767  0.05468412  2
1.49704142  1.27264469  0.89476719  0.75181019  0.03252994  0.01627615  2
1.58579882  1.29317295  0.83884556  0.68304563  0.02596847  0.012988    2
1.68047337  1.27872037  0.73904355  0.5790819   0.01945358  0.00972662  2
1.78106509  1.18118983  0.57490717  0.43058144  0.01296163  0.00647928  2
1.88757396  0.86468611  0.3266623   0.23355179  0.00647928  0.00323844  2
2           2           2           2           2           2           2  2.
After three iterations, the obtained approximation satisfies ‖u^(3) − u^(2)‖ < 5.34 × 10^−5, and the norm of the nonlinear operator P at the last iterate satisfies ‖P(u^(3))‖ ≈ 7.035 × 10^−34. The approximate solution in ℝ^625, resized to a 25 × 25 matrix for i, j = 1, …, 25 and then embedded in the grid determined by the boundary conditions, is presented in the matrix above.
We can affirm that the values u ( x i , y j ) obtained fall within the range 0 < u ( x i , y j ) < 1.3 . In the context of the posed problem involving the diffusion of nutrients in a two-dimensional biological substrate, these observations take on fundamental significance. The concentration of nutrients u ( x i , y j ) demonstrates a trend in which values are bounded between 0 and 1.3 . This limitation in concentration implies a biological equilibrium in the system, where absorption, diffusion, and biochemical reactions harmonize. This characteristic suggests that the system is stable and well-regulated in terms of nutrient availability.
The coherence between the obtained values and the initial and boundary conditions of the problem reinforces the validity of the solution. This indicates that the nutrient distribution aligns with the influences of the boundary conditions, thus supporting the biological interpretation of the model.
The influence of the nonlinear term u(x, y)³ + |u(x, y)| in the differential equation is reflected in the limited concentration range. This implies that biochemical interactions between nutrients and the species present are influencing the nutrient distribution in the substrate.
In summary, the observation that the internal values of the solution matrix u ( x i , y j ) lie within the range of 0 to 1.3 suggests a balanced, stable, and regulated biological system in terms of nutrient concentration. The alignment with the conditions of the problem and the influence of biochemical interactions endorse the validity of the obtained results and their interpretation in the context of agriculture and biotechnology.

4. Conclusions

In this paper, we presented an innovative technique that increases the order of convergence of any Jacobian-free iterative scheme from p to p + 3. Our validation was solidly supported by consistently obtaining this three-unit increase in various academic problems: in each of the iterative classes used in our numerical tests, the modified schemes reached order p + 3, as the theory predicts. Finally, we successfully tackled the resolution of a problem modeled by a non-differentiable nonlinear partial differential equation describing the phenomenon of nutrient diffusion in a biological substrate.

Author Contributions

Conceptualization, A.C.; methodology, M.P.V.; software, M.A.L.-S.; validation, J.R.T.; formal analysis, M.A.L.-S.; writing—original draft preparation, M.A.L.-S.; writing—review and editing, A.C., J.R.T. and M.P.V. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Not applicable.

Acknowledgments

The authors would like to thank the anonymous reviewers for their comments and suggestions that have improved the final version of this manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Ali, R.; Ali, A.; Iqbal, S. Iterative methods for solving absolute value equations. J. Math. Comput. Sci. 2022, 26, 322–329. [Google Scholar] [CrossRef]
  2. Khan, W. Numerical simulation of Chun-Hui He’s iteration method with applications in engineering. Int. J. Numer. Methods Heat Fluid Flow 2022, 32, 944–955. [Google Scholar] [CrossRef]
  3. Khan, W.; Arif, M.; Mohammed, M.; Farooq, U.; Farooq, F.; Elbashir, M.; Rahman, J.; AlHussain, Z. Numerical and Theoretical Investigation to Estimate Darcy Friction Factor in Water Network Problem Based on Modified Chun-Hui He’s Algorithm and Applications. Math. Probl. Eng. 2022, 2022, 8116282. [Google Scholar] [CrossRef]
  4. Samanskii, V. On a modification of the Newton method. Ukr. Math. J. 1967, 19, 133–138. [Google Scholar]
  5. Ortega, J.M.; Rheinboldt, W.C. Iterative Solution of Nonlinear Equations in Several Variables; SIAM: Philadelphia, PA, USA, 2000. [Google Scholar]
  6. Amiri, A.; Cordero, A.; Darvishi, M.T.; Torregrosa, J.R. Preserving the order of convergence: Low-complexity Jacobian-free iterative schemes for solving nonlinear systems. J. Comput. Appl. Math. 2018, 337, 87–97. [Google Scholar] [CrossRef]
  7. Chicharro, F.I.; Cordero, A.; Garrido, N.; Torregrosa, J.R. A new efficient parametric family of iterative methods for solving nonlinear systems. J. Differ. Equ. Appl. 2019, 25, 1454–1467. [Google Scholar] [CrossRef]
  8. Cordero, A.; Villalba, E.G.; Torregrosa, J.R.; Triguero-Navarro, P. Convergence and stability of a parametric class of iterative schemes for solving nonlinear systems. Mathematics 2021, 9, 86. [Google Scholar] [CrossRef]
  9. Singh, A. An efficient fifth-order Steffensen-type method for solving systems of nonlinear equations. Int. J. Comput. Sci. Math. 2018, 9, 501–514. [Google Scholar] [CrossRef]
  10. Sharma, J.; Arora, H. Efficient derivative-free numerical methods for solving systems of nonlinear equations. Comput. Appl. Math. 2016, 35, 269–284. [Google Scholar] [CrossRef]
  11. Wang, X.; Zhang, T.; Qian, W.; Teng, M. Seventh-order derivative-free iterative method for solving nonlinear systems. Numer. Algorithms 2015, 10, 545–558. [Google Scholar] [CrossRef]
  12. Sharma, R.; Guha, J.; Sharma, R. An efficient fourth-order weighted-Newton method for systems of nonlinear equations. Numer. Algorithms 2013, 62, 307–323. [Google Scholar] [CrossRef]
  13. Ostrowski, A. Solution of Equations and Systems of Equations; Academic Press: New York, NY, USA, 1966. [Google Scholar]
  14. Traub, J.F. Iterative Methods for the Solution of Equations; American Mathematical Society: Providence, RI, USA, 1982; Volume 312. [Google Scholar]
  15. Cordero, A.; Leonardo Sepúlveda, M.A.; Torregrosa, J.R. Dynamics and Stability on a Family of Optimal Fourth-Order Iterative Methods. Algorithms 2022, 15, 387. [Google Scholar] [CrossRef]
  16. Traub, J. Iterative Methods for the Solution of Equations; Prentice-Hall: Englewood Cliffs, NJ, USA, 1964. [Google Scholar]
  17. Chun, C. Construction of Newton-like iterative methods for solving nonlinear equations. Numer. Math. 2006, 104, 297–315. [Google Scholar] [CrossRef]
  18. Grau-Sánchez, M.; Grau, Á.; Noguera, M. Ostrowski type methods for solving systems of nonlinear equations. Appl. Math. Comput. 2011, 218, 2377–2385. [Google Scholar] [CrossRef]
  19. Cordero, A.; Maimó, J.G.; Rodríguez-Cabral, A.; Torregrosa, J.R. Convergence and Stability of a New Parametric Class of Iterative Processes for Nonlinear Systems. Algorithms 2023, 16, 163. [Google Scholar] [CrossRef]
  20. Villalba, E.G.; Hernandez, M.; Hueso, J.L.; Martínez, E. Using decomposition of the nonlinear operator for solving non-differentiable problems. Math. Methods Appl. Sci. 2022. [Google Scholar] [CrossRef]
Figure 1. Efficiency index for methods of order p. (a) Efficiency index of order p for small systems of equations; (b) Efficiency index of order p for large systems of equations.
Figure 2. Efficiency index for methods of order p + 3 . (a) Efficiency index of order p + 3 for small systems of equations; (b) Efficiency index of order p + 3 for large systems of equations.
Table 1. Computational efficiency index of order p and p + 3 .
Original Methods

Scheme | C. Order | d | Op | CI
MET_{1,λ,4} | 4 | 2n² − 2n | (1/3)n³ + 12n² − (1/3)n | 4^(1/((1/3)n³ + 14n² − (7/3)n))
MET_{2,λ,3} | 3 | n² | (1/3)n³ + 2n² − (1/3)n | 3^(1/((1/3)n³ + 3n² − (1/3)n))
MET_{3,λ,4} | 4 | 2n² − n | (1/3)n³ + 4n² − (1/3)n | 4^(1/((1/3)n³ + 6n² − (4/3)n))
MET_{4,λ,4} | 4 | 2n² − n | (1/3)n³ + 2n² − (1/3)n | 4^(1/((1/3)n³ + 4n² − (4/3)n))
MET_{5,λ,6} | 6 | 2n² | (1/3)n³ + 6n² − (1/3)n | 6^(1/((1/3)n³ + 9n² − (1/3)n))

Schemes with Modified Order

Scheme | C. Order | d | Op | CI
MET_{1,λ,7}^{Mod} | 7 | 2n² − n | (1/3)n³ + 18n² − (1/3)n | 7^(1/((1/3)n³ + 20n² − (4/3)n))
MET_{2,λ,6}^{Mod} | 6 | n² + n | (1/3)n³ + 8n² − (1/3)n | 6^(1/((1/3)n³ + 9n² + (2/3)n))
MET_{3,λ,7}^{Mod} | 7 | 2n² | (1/3)n³ + 10n² − (1/3)n | 7^(1/((1/3)n³ + 12n² − (1/3)n))
MET_{4,λ,7}^{Mod} | 7 | 2n² | (1/3)n³ + 8n² − (1/3)n | 7^(1/((1/3)n³ + 10n² − (1/3)n))
MET_{5,λ,9}^{Mod} | 9 | 2n² + n | (1/3)n³ + 12n² − (1/3)n | 9^(1/((1/3)n³ + 20n² + (2/3)n))
Table 2. Numerical results for Example 1.
x^(0) = (1/2, …, 1/2)^T

Scheme | ‖x^(k+1) − x^(k)‖ | ‖F(x^(k+1))‖ | Iter | ACOC | e-Time
MET_{1,0.0001,7} | 5.97084 × 10^−9 | 2.28557 × 10^−56 | 3 | 7 | 36.042129
MET_{2,0.0001,6} | 1.49472 × 10^−37 | 5.57149 × 10^−220 | 4 | 6 | 29.970914
MET_{3,0.0001,7} | 6.24761 × 10^−9 | 3.19194 × 10^−56 | 3 | 7 | 35.809333
MET_{4,0.0001,7} | 3.21705 × 10^−11 | 6.09164 × 10^−73 | 3 | 7 | 32.057150
MET_{5,0.0001,9} | 2.59355 × 10^−16 | 2.57919 × 10^−139 | 3 | 9 | 35.072059
Table 3. Numerical results for Example 2.
x^(0) = (1/2, …, 1/2)^T

Scheme | ‖x^(k+1) − x^(k)‖ | ‖F(x^(k+1))‖ | Iter | ACOC | e-Time
MET_{1,0.0001,7} | 4.79372 × 10^−43 | 1.05769 × 10^−292 | 3 | 7 | 62.517649
MET_{2,0.0001,6} | 4.0445 × 10^−34 | 1.62857 × 10^−197 | 3 | 6 | 41.767678
MET_{3,0.0001,7} | 4.99649 × 10^−43 | 1.43001 × 10^−292 | 3 | 7 | 60.174395
MET_{4,0.0001,7} | 3.79676 × 10^−44 | 9.34063 × 10^−301 | 3 | 7 | 59.730706
MET_{5,0.0001,9} | 8.03053 × 10^−69 | 1.7569 × 10^−608 | 3 | 9 | 61.974947
Table 4. Numerical results for Example 3.
x^(0) = (3/2, …, 3/2)^T

Scheme | ‖x^(k+1) − x^(k)‖ | ‖F(x^(k+1))‖ | Iter | ACOC | e-Time
MET_{1,0.0001,7} | 4.29798 × 10^−13 | 1.74729 × 10^−89 | 3 | 7 | 49.968165
MET_{2,0.0001,6} | 7.04681 × 10^−10 | 1.88785 × 10^−57 | 3 | 6 | 33.644034
MET_{3,0.0001,7} | 4.59589 × 10^−13 | 2.84399 × 10^−89 | 3 | 7 | 47.877513
MET_{4,0.0001,7} | 7.00027 × 10^−17 | 7.71875 × 10^−117 | 3 | 7 | 48.574717
MET_{5,0.0001,9} | 3.93811 × 10^−25 | 4.0315 × 10^−224 | 3 | 9 | 47.829303
Table 5. Numerical results for Example 4.
x^(0) = (3/2, …, 3/2)^T

Scheme | ‖x^(k+1) − x^(k)‖ | ‖F(x^(k+1))‖ | Iter | ACOC | e-Time
MET_{1,0.0001,7} | 3.42900 × 10^−23 | 1.73936 × 10^−162 | 3 | 7 | 85.570920
MET_{2,0.0001,6} | 2.12985 × 10^−17 | 1.49899 × 10^−104 | 3 | 6 | 56.147216
MET_{3,0.0001,7} | 3.23467 × 10^−23 | 2.78702 × 10^−162 | 3 | 7 | 83.842336
MET_{4,0.0001,7} | 4.28905 × 10^−27 | 1.69352 × 10^−190 | 3 | 7 | 81.009858
MET_{5,0.0001,9} | 6.93376 × 10^−42 | 8.80969 × 10^−378 | 3 | 9 | 81.163539
Table 6. Numerical results for Example 5.
x^(0) = (3/4, …, 3/4)^T

Scheme | ‖x^(k+1) − x^(k)‖ | ‖F(x^(k+1))‖ | Iter | ACOC | e-Time
MET_{1,0.0001,7} | 6.85093 × 10^−36 | 4.40923 × 10^−255 | 3 | 7 | 88.183026
MET_{2,0.0001,6} | 7.36696 × 10^−31 | 1.36910 × 10^−189 | 3 | 6 | 57.656010
MET_{3,0.0001,7} | 7.34097 × 10^−36 | 7.15417 × 10^−255 | 3 | 7 | 84.166798
MET_{4,0.0001,7} | 2.15009 × 10^−37 | 1.29829 × 10^−265 | 3 | 7 | 82.999383
MET_{5,0.0001,9} | 3.52145 × 10^−57 | 4.99971 × 10^−519 | 3 | 9 | 83.408546
