1. Introduction
Most problems in applied science are nonlinear, because nature itself is nonlinear rather than simple or linear. Solving nonlinear problems is considerably harder than solving linear ones. Therefore, we consider a nonlinear problem of the following form:
(f is an analytic function). Such equations originate from applied and computer science, engineering, statistics, economics, chemistry, biology, physics, etc. (see details in [1,2,3]). Iterative methods are also applied to compute approximate solutions of stationary and evolutionary problems associated with ordinary and partial differential equations (more details in [4,5]). Exact solutions of such problems are almost non-existent. Thus, we have to rely on approximate solutions, which can be obtained with the help of iterative methods. One of the most famous schemes is Newton’s method, which is given by
Undoubtedly, this scheme has second-order convergence and is a widely used method for nonlinear equations. However, it suffers from several problems. Some of the major ones are: it is a one-point method (for the resulting convergence and efficiency limitations, see [1,2,3]); it converges only linearly for multiple zeros; and it requires the calculation of the first-order derivative at each substep. Evaluating the derivative can be quite burdensome because, in some cases, it consumes a large amount of time or does not exist at all.
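The loss of quadratic convergence at a multiple zero is easy to observe numerically. The sketch below (a plain illustration, not a method from this paper) applies Newton's method and the classical modified Newton step x − m·f(x)/f′(x) to f(x) = (x − 1)², whose zero has multiplicity m = 2:

```python
def newton(f, df, x, m=1, iters=1):
    """Newton-type iteration; m = 1 is plain Newton, m > 1 is the
    modified step for a zero of known multiplicity m."""
    for _ in range(iters):
        x = x - m * f(x) / df(x)
    return x

f  = lambda x: (x - 1.0) ** 2          # double zero at x = 1
df = lambda x: 2.0 * (x - 1.0)

# Plain Newton only halves the error at each step (linear convergence):
print(abs(newton(f, df, 2.0, m=1, iters=10) - 1.0))   # ~ 2**-10, still far from 0

# The modified step with m = 2 lands on the zero at once (f is exactly quadratic):
print(abs(newton(f, df, 2.0, m=2, iters=1) - 1.0))    # 0.0
```

For a general f, the modified step is only quadratically convergent rather than exact, but the contrast with the linear behavior of plain Newton remains.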
Therefore, higher-order optimal derivative-free methods came into demand, and several scholars have suggested methods with optimal fourth-order convergence. Some of the most important members are given below.
In 2015, Hueso et al. [6] suggested
where  is denoted by .
In 2019, Sharma et al. [7] proposed
where  is denoted by . The suggested scheme (4) is one of the best methods among those proposed by Sharma et al. [7].
In 2019, Sharma et al. [8] gave
where  and , with . Expression (5) is one of the best schemes among the methods presented by Sharma et al. [8]. We call it SM2.
In 2020, Kumar et al. [9] presented
where , which is called KM. Expression (6) is one of the best schemes among those given by Kumar et al. [9].
In 2020, Behl et al. [10] suggested
where  and , which is called BM. Some other higher-order derivative-free techniques can be found in [11,12,13,14,15].
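A building block shared by several of these derivative-free schemes is the Traub–Steffensen idea: replace f′(x_n) by the divided difference f[w_n, x_n] = (f(w_n) − f(x_n))/(w_n − x_n), where w_n = x_n + β f(x_n). The following is a minimal sketch of this substep for a zero of known multiplicity m; the test function and the choice β = 1 are illustrative and not taken from the cited papers.

```python
def traub_steffensen(f, x, m, beta=1.0, iters=8):
    """Derivative-free iteration x <- x - m*f(x)/f[w, x] with w = x + beta*f(x)."""
    for _ in range(iters):
        fx = f(x)
        if fx == 0.0:
            return x
        w = x + beta * fx                  # auxiliary point, no derivative required
        if w == x:                         # f(x) below machine resolution: converged
            return x
        dd = (f(w) - fx) / (w - x)         # divided difference approximating f'(x)
        x = x - m * fx / dd
    return x

g = lambda x: (x - 2.0) ** 2 * (x + 1.0)   # double zero at x = 2, simple zero at x = -1
root = traub_steffensen(g, 2.5, m=2)
print(abs(root - 2.0) < 1e-8)              # True: converges to the double zero
```

With the multiplicity supplied, this single substep already converges quadratically at multiple zeros; the two-step methods above add a second substep and a weight function to reach optimal fourth order.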
We aspire to suggest a new two-step, more general and cost-effective family of iterative methods. The new scheme is derivative-free and has optimal fourth-order convergence. The derivation of this two-step scheme is based on the weight-function technique. Further, we present three main results, Theorems 1–3, which demonstrate the fourth-order convergence when the multiplicity of the zero is known in advance. The applicability of our methods is illustrated on four numerical problems: two of them are real-life problems, the third is a root clustering problem (which originates from applied mathematics) and the last is an academic problem. The numerical outcomes demonstrate preferable results in terms of absolute residual errors, CPU time, approximated zeros and the absolute difference between two consecutive iterations, in contrast to previous studies.
The rest of the paper is organized as follows. Section 2 includes the construction as well as the convergence analysis of our scheme; the convergence analysis is studied thoroughly in Theorems 1–3. Section 3 is devoted to the numerical experiments, where we illustrate the efficiency and convergence of our scheme. In addition, we propose three weight functions that satisfy the hypotheses of Theorems 1–3, and four numerical problems are chosen to confirm the theoretical results. Finally, the concluding remarks are presented in Section 4.
2. Construction of Higher-Order Scheme
We suggest a new iterative scheme that has fourth-order optimal convergence for multiple zeros, which is given by
where  and m is the known multiplicity of the needed zero. Further, the maps  and  are weight functions, analytic in a neighborhood of the origin. Moreover, we consider  and  to be two multi-valued maps. We assume that the principal root (see [16]) is given by , with  for . The choice of  for  agrees with that of , as depicted in the numerical section. In an analogous way, we obtain , where .
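The two-step shape of such schemes can be sketched generically. The fragment below is only a structural illustration under the assumption that the multiplicity m is known: it composes two derivative-free substeps of Traub–Steffensen type, whereas scheme (8) instead corrects the second substep with the weight functions analyzed in Theorems 1–3.

```python
def two_step_sketch(f, x, m, beta=1.0, iters=5):
    """Two-step derivative-free iteration for a zero of known multiplicity m.

    Structural sketch only; scheme (8) replaces the second divided
    difference with a weight-function correction to reach optimal order."""
    for _ in range(iters):
        for _substep in range(2):          # predictor, then corrector
            fx = f(x)
            w = x + beta * fx
            if fx == 0.0 or w == x:        # converged to machine resolution
                return x
            dd = (f(w) - fx) / (w - x)     # divided difference in place of f'
            x = x - m * fx / dd
    return x

g = lambda x: (x - 2.0) ** 2 * (x + 1.0)   # double zero at x = 2
print(abs(two_step_sketch(g, 2.5, m=2) - 2.0) < 1e-8)   # True
```

This composition uses four function evaluations per full step and is therefore not optimal in the Kung–Traub sense; the weight functions of scheme (8) exist precisely to recover fourth order from only three evaluations.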
In Theorems 1–3, we demonstrate the convergence analysis of our scheme (8) without adopting any extra value of f at other points.
Theorem 1. We assume that ξ is a multiple zero of order two (m = 2) of the function f. Consider the map , which is analytic in a neighborhood of the needed zero ξ. Then, our scheme (8) attains fourth-order convergence if  where . The scheme (8) satisfies the following error equation:  where .
Proof. We assume that  and  denote the error at the nth iteration and the asymptotic constant, respectively. We adopt the Taylor series expansions of f at the two points  and  in a neighborhood of , under the hypotheses  and . This yields expansions (10) and (11), respectively, with .
By inserting expressions (10) and (11) into scheme (8), we have
It is clear from expression (12) that . Thus, we can easily expand  in the neighborhood of the origin in the following way:
By adopting expressions (12) and (13) in (8), we obtain
From (14), we observe that the scheme attains at least second-order convergence when
By adopting expression (15) in (14), we obtain
With the help of Taylor series expansions, we obtain
By adopting (12), (13) and (17), we further obtain
and
where .
From expression (18), we have . Thus, we expand  in the neighborhood of the origin , which is defined as
By inserting expressions (10)–(20) into expression (8), we obtain
where , e.g., , etc.
The coefficients of ,  and  should vanish simultaneously in order to deduce fourth-order convergence. This is easily achieved by the following values:
Adopting (22) in (21), we have the following error equation:
and . We deduce from expression (23) that our scheme (8) attains fourth-order convergence for  and . In addition, we have attained this convergence order without adopting any extra value of f at other points. Hence, (8) is an optimal scheme. □
Theorem 2. Suppose that ξ is a multiple zero of order three (m = 3) of the function f. Consider the map , which is analytic in a neighborhood of the needed zero ξ. Then, our scheme (8) attains fourth-order convergence if  where  and . Scheme (8) satisfies the following error equation:  where .
Proof. We assume that  and  denote the error at the nth iteration and the asymptotic constant, respectively. We adopt the Taylor series expansions of f at the two points  and  in a neighborhood of , under the hypotheses  and . This yields expansions (25) and (26), respectively, with .
By using expressions (25) and (26) in scheme (8), we obtain
Next, from expression (27), we have . Thus, we expand the weight function  in the neighborhood of the origin in the following way:
By using expressions (27) and (28) in scheme (8), we have
From (29), we observe that the scheme attains at least second-order convergence when
Substituting expression (30) in (29), we have
Again, with the help of Taylor series expansions, we obtain
From expressions (25), (26) and (32), we further obtain
From expression (33), we have . Thus, we expand  in the neighborhood of the origin  as:
By using expressions (25)–(35) in scheme (8), we have
where . For example, the first coefficient is explicitly written as , etc.
The coefficients of ,  and  should vanish simultaneously in order to deduce fourth-order convergence. This is easily achieved by the following values:
Adopting (37) in (36), we have the following error equation:
We deduce from expression (38) that our scheme (8) attains fourth-order convergence for  and . In addition, we have attained this convergence order without adopting any extra value of f at other points. Hence, (8) is an optimal scheme. □
3. Numerical Experiments
We demonstrate the efficiency and convergence of some members of our scheme (8). Therefore, we choose the following three weight functions:
Clearly, all three of the above weight functions satisfy the conditions provided in Theorems 1–3. Now, we use these weight functions in our scheme (8) and call the resulting methods PM1, PM2 and PM3, respectively. We consider two applied science problems, one root clustering problem and one academic problem for the numerical tests.
There are no fixed criteria for the comparison of two different iterative methods. However, we consider the following six aspects for the comparison:
The values of the iterate at ;
The absolute residual error;
The differences between two consecutive iterations;
CPU timing;
The number of iterations for attaining accuracy up to ;
Computational order of convergence (COC) based on the accuracy.
The values of the above-mentioned parameters are depicted in Table 1, Table 2, Table 3, Table 4, Table 5, Table 6, Table 7 and Table 8, along with the initial guesses. The values of , , COC and  were calculated in Mathematica 9 with a minimum of 3000 significant digits, which minimizes round-off error. However, we display these values up to 15 (with exponent), 2 (with exponent), 6 and 2 (with exponent) significant digits, respectively.
We adopted the standard rules
COC ≈ ln(|x_{k+1} − ξ| / |x_k − ξ|) / ln(|x_k − ξ| / |x_{k−1} − ξ|)
and
ACOC ≈ ln(|x_{k+1} − x_k| / |x_k − x_{k−1}|) / ln(|x_k − x_{k−1}| / |x_{k−1} − x_{k−2}|)
in order to calculate the computational order of convergence (COC) and the approximate computational order of convergence (ACOC) [17], respectively. Further, the CPU timing was obtained by the command “AbsoluteTiming[]” in Mathematica. We executed the same program five times, and the average time is depicted in Table 7. The notation a(−b) stands for a × 10^(−b) in Table 1, Table 2, Table 3, Table 4, Table 5 and Table 6.
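Given a list of consecutive iterates, the ACOC can be estimated without knowing the exact zero. A small helper (the names here are illustrative) following the standard definition of [17]:

```python
import math

def acoc(xs):
    """Approximate computational order of convergence from consecutive iterates.

    rho ~ ln(|x_{k+1}-x_k| / |x_k-x_{k-1}|) / ln(|x_k-x_{k-1}| / |x_{k-1}-x_{k-2}|)
    """
    d = [abs(b - a) for a, b in zip(xs, xs[1:])]   # consecutive differences
    return math.log(d[-1] / d[-2]) / math.log(d[-2] / d[-3])

# Iterates of plain Newton on (x - 1)^2 behave linearly, so the ACOC is ~ 1:
xs, x = [], 2.0
for _ in range(6):
    xs.append(x)
    x = x - (x - 1.0) ** 2 / (2.0 * (x - 1.0))
xs.append(x)
print(round(acoc(xs), 2))          # 1.0 (linear convergence at a double zero)
```

A fourth-order method applied to the same problem would instead drive this estimate toward 4 before round-off dominates.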
The configurations and outline of the adopted laptop are defined as follows:
Processor: Intel(R) Core(TM)2 Duo CPU T6400 @ 2.00 GHz;
Manufacturer: HP;
Installed memory (RAM): 4.00 GB;
Windows edition: Windows 7 Professional;
System type: 64-bit-Operating System.
In order to maintain uniformity in the comparison of the iterative methods, we choose  in the existing as well as our methods. We consider five existing methods for comparison, namely (3)–(7). The details of these methods are given in the Introduction.
Remark 2. For the following specific values of the weight functions, we can obtain Behl’s scheme [18] as a special case of our method.
Example 1 (Eigenvalue problem).
Eigenvalue problems are among the most basic and challenging problems of linear algebra, and many properties of an object or system can be determined from them. Closed-form linear algebra techniques are not always applicable, so we rely on numerical techniques, which provide an approximate zero. Therefore, we choose the following square matrix of order , which has multiple eigenvalues:
whose characteristic equation is given below:
The function  has a multiple zero  with multiplicity . The computational results, along with the starting guesses, are depicted in Table 1 and Table 2. From Table 1, we conclude that the methods  and  display the most outstanding behavior among the mentioned methods in terms of the accurate iterate , the difference between two consecutive iterations and the absolute residual errors. Further, the other methods have almost two times larger residual errors than  and . We observe from Table 2 that the desired root is approached more closely from the second iteration onward in our suggested methods  and  as compared to the mentioned ones. In addition, the other existing methods have almost three times larger residual errors, which demonstrates the better performance of our methods  and .
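Since the matrix and its characteristic polynomial are specific to this example, the following toy version (an illustrative 2×2 matrix, not the one used here) shows the workflow: form the characteristic polynomial, note the repeated eigenvalue and apply the modified Newton step with the known multiplicity.

```python
def char_poly_2x2(a, b, c, d):
    """Characteristic polynomial p(t) = t^2 - (a + d) t + (ad - bc) of [[a, b], [c, d]]."""
    return lambda t: t * t - (a + d) * t + (a * d - b * c)

# [[2, 1], [0, 2]] has the double eigenvalue t = 2, so p(t) = (t - 2)^2 and m = 2
p = char_poly_2x2(2.0, 1.0, 0.0, 2.0)
dp = lambda t: 2.0 * t - 4.0            # p'(t) for this matrix

t = 3.0
for _ in range(4):                      # modified Newton: t <- t - m p(t)/p'(t)
    if p(t) == 0.0:
        break
    t = t - 2.0 * p(t) / dp(t)
print(t)                                # 2.0, the double eigenvalue
```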
Table 1.
Behavior of iterative methods on eigenvalue problem with .
Methods | | | | |
---|
HM | 1 | | | 5.3(−3) |
| 2 | | | |
| 3 | | | |
SM1 | 1 | | | |
| 2 | | | |
| 3 | | | |
SM2 | 1 | | | |
| 2 | | | |
| 3 | | | |
KM | 1 | | | |
| 2 | | | |
| 3 | | | |
BM | 1 | | | |
| 2 | | | |
| 3 | | | |
PM1 | 1 | | | |
| 2 | | | |
| 3 | | | |
PM2 | 1 | | | |
| 2 | | | |
| 3 | | | |
PM3 | 1 | | | |
| 2 | | | |
| 3 | | | |
Table 2.
Behavior of iterative methods on eigenvalue problem with .
Methods | | | | |
---|
HM | 1 | | | |
| 2 | | | |
| 3 | | | |
SM1 | 1 | | | |
| 2 | | | |
| 3 | | | |
SM2 | 1 | | | |
| 2 | | | |
| 3 | | | |
KM | 1 | | | |
| 2 | | | |
| 3 | | | |
BM | 1 | | | |
| 2 | | | |
| 3 | | | |
PM1 | 1 | | | |
| 2 | | | |
| 3 | | | |
PM2 | 1 | | | |
| 2 | | | |
| 3 | | | |
PM3 | 1 | | | |
| 2 | | | |
| 3 | | | |
Example 2 (Continuous stirred tank reactor (CSTR)).
Here, we consider another applied science problem, namely an isothermal continuous stirred tank reactor (CSTR). The components  and  are fed to the reactor at rates  and , respectively, which leads to the following reaction scheme (for details, see [19]):
Douglas [20] studied this model (55) while designing a simple feedback control system. He transformed expression (55) into the mathematical form
where  is the gain of the proportional controller. For a particular value of , we obtain
The solutions of  are called the poles of the open-loop transfer function. The zeros of  are . Among them,  is a multiple zero with multiplicity . The starting points and numerical results for  are illustrated in Table 3 and Table 4. From Table 3, we find that the lowest residual error among the existing methods is ; however, our methods ,  and  have ,  and , respectively. Thus, the existing methods have almost two times larger residual errors than our methods, which also indicates the faster convergence of our methods  and  as compared to the others. Our techniques  and  also perform much better in terms of  and  as compared to the existing ones. We observe from Table 4 that our method  has the lowest residual error as compared to  (which is the lowest among the other existing ones). This clearly indicates that  has the fastest convergence and the smallest residual error among the others. Our methods  and  have almost a two times lower error difference and a better  as compared to the existing ones.
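For reference, in the literature this CSTR design problem of Douglas [20] reduces, after substituting the controller gain, to a quartic whose factored form makes the double pole visible. The coefficients below are the ones commonly quoted for this example (an assumed form; they should be checked against [20]); a quick numerical check confirms that x = −2.85 is a double zero:

```python
# Quartic commonly quoted for this CSTR example (assumed form, cf. [20]):
# f(x) = (x + 1.45)(x + 2.85)^2(x + 4.35)
f  = lambda x: x**4 + 11.50 * x**3 + 47.49 * x**2 + 83.06325 * x + 51.23266875
df = lambda x: 4 * x**3 + 34.50 * x**2 + 94.98 * x + 83.06325

# x = -2.85 is a zero of multiplicity 2: both f and f' vanish there
print(abs(f(-2.85)), abs(df(-2.85)))   # both ~ 0 (up to rounding)
```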
Table 3.
Behavior of iterative methods on CSTR problem with .
Methods | | | | |
---|
HM | 1 | | | |
| 2 | | | |
| 3 | | | |
SM1 | 1 | | | |
| 2 | | | |
| 3 | | | |
SM2 | 1 | | | |
| 2 | | | |
| 3 | | | |
KM | 1 | | | |
| 2 | | | |
| 3 | | | |
BM | 1 | | | |
| 2 | | | |
| 3 | | | |
PM1 | 1 | | | |
| 2 | | | |
| 3 | | | |
PM2 | 1 | | | |
| 2 | | | |
| 3 | | | |
PM3 | 1 | | | |
| 2 | | | |
| 3 | | | |
Table 4.
Behavior of iterative methods on CSTR problem with .
Methods | | | | |
---|
HM | 1 | | | |
| 2 | | | |
| 3 | | | |
SM1 | 1 | | | |
| 2 | −2.85000000000000 | | |
| 3 | | | |
SM2 | 1 | | | |
| 2 | | | |
| 3 | | | |
KM | 1 | | | |
| 2 | | | |
| 3 | | | |
BM | 1 | | | |
| 2 | | | |
| 3 | | | |
PM1 | 1 | | | |
| 2 | | | |
| 3 | | | |
PM2 | 1 | | | |
| 2 | | | |
| 3 | | | |
PM3 | 1 | | | |
| 2 | | | |
| 3 | | | |
Example 3 (Root clustering problem).
We chose a root clustering problem similar to that of Zeng [21]:
The zeros of  are  and , of multiplicity  and , respectively. All of the zeros are quite close to each other; therefore, this is known as a root clustering problem. We chose  as the multiple zero of multiplicity 191 for the numerical experiment. The computational results are depicted in Table 5, along with the initial approximation. Undoubtedly,  demonstrates slightly better behavior than our methods and the other existing methods, as shown in Table 5. However, the difference is not as large as the advantage our methods show in the previous Table 1, Table 2, Table 3 and Table 4. Our results are also significantly close to  in terms of , with a difference of only four significant digits in the case of .
Table 5.
Behavior of iterative methods on root clustering problem with .
Methods | | | | |
---|
HM | 1 | | | |
| 2 | | | |
| 3 | | | |
SM1 | 1 | | | |
| 2 | | | |
| 3 | | | |
SM2 | 1 | | | |
| 2 | | | |
| 3 | | | |
KM | 1 | | | |
| 2 | | | |
| 3 | | | |
BM | 1 | | | |
| 2 | | | |
| 3 | | | |
PM1 | 1 | | | |
| 2 | | | |
| 3 | | | |
PM2 | 1 | | | |
| 2 | | | |
| 3 | | | |
PM3 | 1 | | | |
| 2 | | | |
| 3 | | | |
Example 4 (Academic problem).
We chose another academic problem, which is given by
The zero of  is , of multiplicity . The computational results are depicted in Table 6, along with the initial approximation. Undoubtedly,  demonstrates slightly better behavior than our methods and the other existing methods, as shown in Table 6. However, the difference is not as large as the advantage our methods show in the previous Table 1, Table 2, Table 3 and Table 4. Our results are also significantly close to  in terms of , with a difference of only four significant digits in the case of .
Table 6.
Behavior of iterative methods on with .
Methods | | | | |
---|
HM | 1 | | | |
| 2 | | | |
| 3 | | | |
SM1 | 1 | | | |
| 2 | | | |
| 3 | | | |
SM2 | 1 | | | |
| 2 | | | |
| 3 | | | |
KM | 1 | | | |
| 2 | | | |
| 3 | | | |
BM | 1 | | | |
| 2 | | | |
| 3 | | | |
PM1 | 1 | | | |
| 2 | | | |
| 3 | | | |
PM2 | 1 | | | |
| 2 | | | |
| 3 | | | |
PM3 | 1 | | | |
| 2 | | | |
| 3 | | | |
Remark 3. From Table 7, we find that  has the lowest average execution time for attaining the desired accuracy. The average execution times of methods  and  are, respectively, almost two and three times those of  and . Further,  and  also consume less CPU time (on average) as compared to  and .
Table 7.
CPU timing on the basis of number of iterations.
Methods | Ex. (1) | Ex. (1) | Ex. (2) | Ex. (2) | Ex. (3) | Ex. (4) | Total | Average |
---|---|---|---|---|---|---|---|---|
HM | 0.060000 | 0.350000 | 0.015001 | 0.060000 | 17.610078 | 0.0023191 | 18.0973921 | 3.01623202 |
SM1 | 0.062001 | 0.340000 | 0.010000 | 0.045003 | 25.445229 | 0.0015465 | 25.9037795 | 4.31729658 |
SM2 | 0.050000 | 0.340000 | 0.010000 | 0.046001 | 10.761069 | 0.001541 | 11.208611 | 1.86810183 |
KM | 0.060000 | 0.332004 | 0.010000 | 0.048003 | 10.118055 | 0.0077654 | 10.5758274 | 1.7626379 |
BM | 0.050000 | 0.331000 | 0.010000 | 0.040000 | 10.063077 | 0.0014246 | 10.4955016 | 1.74925027 |
PM1 | 0.050000 | 0.320000 | 0.002000 | 0.031000 | 7.608033 | 0.0014013 | 8.0124343 | 1.33540572 |
PM2 | 0.050000 | 0.316002 | 0.003000 | 0.040000 | 7.762041 | 0.0014151 | 8.1724581 | 1.36207635 |
PM3 | 0.240000 | 0.320000 | 0.004000 | 0.040000 | 7.605034 | 0.0018232 | 8.2108572 | 1.3684762 |
Remark 4. On the basis of the number of iterations reported in Table 8, we conclude that  requires the fewest average iterations (in order to attain the desired accuracy) as compared to the existing methods. In addition, the average number of iterations of our methods  and  is also lower than that of  (which is the lowest among the existing methods). Thus, we deduce that our method  is the fastest among the mentioned methods.
Table 8.
Number of iterations required in order to attain the desired accuracy.
Methods | Ex. (1) | Ex. (1) | Ex. (2) | Ex. (2) | Ex. (3) | Ex. (4) | Total | Average |
---|---|---|---|---|---|---|---|---|
HM | 6 | 6 | 7 | 7 | 4 | 5 | 35 | 5.83 |
SM1 | 5 | 4 | 5 | 4 | 4 | 3 | 26 | 4.3 |
SM2 | 5 | 4 | 5 | 5 | 4 | 3 | 26 | 4.3 |
KM | 5 | 4 | 5 | 5 | 4 | 3 | 26 | 4.3 |
BM | 5 | 4 | 5 | 5 | 4 | 3 | 26 | 4.3 |
PM1 | 5 | 4 | 4 | 4 | 4 | 3 | 24 | 4 |
PM2 | 4 | 4 | 4 | 4 | 4 | 3 | 23 | 3.83 |
PM3 | 4 | 4 | 4 | 4 | 4 | 4 | 24 | 4 |
Remark 5. From Table 9, it is evident that the methods  and  exhibit a consistent COC (except in Example 4), in contrast to the other existing methods. The calculation of the COC is based on the number of iterations (depicted in Table 8 for the corresponding methods and examples).
Table 9.
COC based on the number of iterations required in order to attain the desired accuracy.
Methods | Ex. (1) | Ex. (1) | Ex. (2) | Ex. (2) | Ex. (3) | Ex. (4) |
---|---|---|---|---|---|---|
HM | 4.000 | 4.000 | 2.000 | 2.000 | 4.000 | 3.000 |
SM1 | 4.000 | 4.000 | 1.325 | 1.321 | 5.883 | 5.000 |
SM2 | 4.000 | 4.000 | 1.330 | 6.012 | 4.000 | 5.000 |
KM | 4.000 | 4.000 | 1.330 | 6.014 | 4.000 | 5.000 |
BM | 4.000 | 4.000 | 1.329 | 6.017 | 4.000 | 5.000 |
PM1 | 4.000 | 4.000 | 4.000 | 4.000 | 4.000 | 5.000 |
PM2 | 4.000 | 4.000 | 4.000 | 4.000 | 4.000 | 5.000 |
PM3 | 4.000 | 4.000 | 4.000 | 4.000 | 4.000 | 5.000 |