Article

An Inexact Optimal Hybrid Conjugate Gradient Method for Solving Symmetric Nonlinear Equations

by Jamilu Sabi’u 1,2, Kanikar Muangchoo 3,*, Abdullah Shah 1, Auwal Bala Abubakar 4,5 and Kazeem Olalekan Aremu 5,6

1 Department of Mathematics, COMSATS University Islamabad, Islamabad 44000, Pakistan
2 Department of Mathematics, Yusuf Maitama Sule University Kano, Kano 700241, Nigeria
3 Faculty of Science and Technology, Rajamangala University of Technology Phra Nakhon (RMUTP), 1381, Pracharat 1 Road, Wongsawang, Bang Sue, Bangkok 10800, Thailand
4 Department of Mathematical Sciences, Faculty of Physical Sciences, Bayero University, Kano 700241, Nigeria
5 Department of Mathematics and Applied Mathematics, Sefako Makgatho Health Sciences University, Ga-Rankuwa, Pretoria, Medunsa 0204, South Africa
6 Department of Mathematics, Usmanu Danfodiyo University Sokoto, Sokoto P.M.B. 2346, Nigeria
* Author to whom correspondence should be addressed.
Symmetry 2021, 13(10), 1829; https://doi.org/10.3390/sym13101829
Submission received: 2 March 2021 / Revised: 22 March 2021 / Accepted: 23 March 2021 / Published: 1 October 2021
(This article belongs to the Special Issue Symmetry in Abstract Differential Equations)

Abstract: This article presents an inexact optimal hybrid conjugate gradient (CG) method for solving symmetric nonlinear systems. The method is a convex combination of the optimal Dai–Liao (DL) and the extended three-term Polak–Ribière–Polyak (PRP) CG methods. Two different formulas for selecting the convex parameter are derived: one from the conjugacy condition and one by matching the proposed direction with the default Newton direction. The proposed method is derivative-free, so Jacobian information is not required throughout the iteration process. Furthermore, the global convergence of the proposed method is established under appropriate assumptions. Finally, the numerical performance of the method is demonstrated on a set of symmetric nonlinear test problems and compared with some existing CG solvers for symmetric nonlinear equations.

1. Introduction

Consider the symmetric nonlinear system
$$F(x) = 0, \quad x \in \mathbb{R}^n, \tag{1}$$
where $F:\mathbb{R}^n \to \mathbb{R}^n$ is a continuously differentiable mapping. The symmetry of $F(x)$ means that the Jacobian of $F(x)$ is symmetric. Such problems arise from the gradient mapping of an unconstrained optimization problem, the Karush–Kuhn–Tucker (KKT) conditions of an equality-constrained optimization problem, the discretized two-point boundary value problem, the saddle point problem, the discretized elliptic boundary value problem, and so on; see [1,2,3]. The Newton method and its modifications are among the common methods for solving (1) despite their drawbacks [4,5,6]. Chief among these drawbacks is that if the Jacobian of $F(x)$ is singular, the Newton method may not be useful for finding a solution of (1). Moreover, the need to solve a linear system, or to store the Jacobian inverse, at each iteration is a further drawback of both Newton and quasi-Newton methods.
The conjugate gradient method is known to be a remedy for iterative methods that require matrix storage when finding the solution of (1); see [7]. The method performs the iteration
$$x_0 \in \mathbb{R}^n, \quad x_k = x_{k-1} + \gamma_k d_{k-1}, \quad k = 1, 2, \ldots, \tag{2}$$
where $\gamma_k$ is the step size to be determined via any suitable line search and $d_k$ is the CG direction defined by
$$d_0 = -F(x_0), \qquad d_k = -F(x_k) + \beta_k d_{k-1}, \quad k \ge 1. \tag{3}$$
The term $\beta_k$ in (3) is a scalar known as the CG parameter, and this scalar is what distinguishes the various CG methods [8,9,10,11,12]. However, the CG parameter proposed by Dai and Liao (DL) is considered one of the most efficient whenever its non-negative constant is appropriately selected [13]. The DL CG parameter is given by
$$\beta_k^{DL} = \frac{F_k^T (y_{k-1} - t s_{k-1})}{d_{k-1}^T y_{k-1}}, \tag{4}$$
with $t \ge 0$, $s_{k-1} = x_k - x_{k-1}$, $y_{k-1} = F_k - F_{k-1}$, and $F_k = F(x_k)$. Moreover, in an attempt to ensure an appropriate and optimal selection of $t$, Babaie-Kafaki and Ghanbari [14] proposed the following choices:
$$t_k^1 = \frac{\|y_{k-1}\|}{\|s_{k-1}\|}, \tag{5}$$
and
$$t_k^2 = \frac{y_{k-1}^T s_{k-1}}{\|s_{k-1}\|^2} + \frac{\|y_{k-1}\|}{\|s_{k-1}\|}. \tag{6}$$
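For illustration, here is a minimal sketch of these two parameter choices, assuming NumPy vectors s and y stand for $s_{k-1}$ and $y_{k-1}$ (the function name is ours):

```python
import numpy as np

def optimal_dl_parameters(s, y):
    """The two optimal Dai-Liao parameter choices (5) and (6) of
    Babaie-Kafaki and Ghanbari, given s_{k-1} and y_{k-1}."""
    t1 = np.linalg.norm(y) / np.linalg.norm(s)   # Eq. (5)
    t2 = (y @ s) / (s @ s) + t1                  # Eq. (6)
    return t1, t2
```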
Notwithstanding, the CG parameter proposed by Polak, Ribière, and Polyak (PRP) is also numerically efficient and possesses a restart feature that avoids jamming [8,9]. The PRP parameter is given by
$$\beta_k^{PRP} = \frac{F_k^T y_{k-1}}{\|F_{k-1}\|^2}. \tag{7}$$
The following descent modification of the PRP parameter was proposed by Babaie-Kafaki and Ghanbari [10], based on the Dai–Liao approach [13]:
$$\beta_k^{DPRP} = \frac{F_k^T y_{k-1}}{\|F_{k-1}\|^2} - \zeta \frac{F_k^T d_{k-1}}{\|F_{k-1}\|^2}, \tag{8}$$
where $\zeta$ is a real constant. Based on a comprehensive singular value analysis, the authors suggested
$$\zeta_k^* = \frac{d_{k-1}^T y_{k-1}}{\|d_{k-1}\|^2} \tag{9}$$
to be the optimal choice of the real constant $\zeta$; see [15]. Moreover, for symmetric nonlinear equations, Li and Wang [7] proposed an effective algorithm by combining the modified Fletcher–Reeves CG method [16] with the inexact gradient relation introduced in [1]. The reported numerical experiments illustrated that their algorithm is promising for handling large-scale symmetric nonlinear equations. This result inspired further studies on conjugate gradient methods for solving symmetric nonlinear equations. Zhou and Chen combined the modified three-term CG method [17] with the approximate gradient and suggested an HS-type CG method for symmetric nonlinear equations [18]. Thereafter, several efficient CG methods for unconstrained optimization problems were incorporated with the approximate gradient relation to solve large-scale symmetric nonlinear equations; examples include the inexact PRP method [5], the norm descent CG method [19], and the derived CG algorithm for symmetric nonlinear systems [20]; see [12,21,22,23] for more details. In this article, motivated by the efficiency of both the optimal DL CG method [14] and the optimal PRP CG method [15], we propose an inexact optimal hybrid method for solving symmetric nonlinear equations. The proposed method is derivative-free and matrix-free, and can therefore handle large-scale symmetric nonlinear systems efficiently.
The remainder of this paper is organized as follows. The next section presents the derivation and details of the proposed method, followed by the convergence analysis, numerical experiments, and conclusions.

2. A Class of Optimal Hybrid CG Method

This section describes the proposed optimal hybrid CG method for solving symmetric nonlinear equations with two different choices for the selection of the convex parameter at every iteration. Recall that Li and Fukushima [1] suggested the following approximation:
$$h_k = \frac{F(x_k + \gamma_k F_k) - F_k}{\gamma_k}, \tag{10}$$
for the gradient $\nabla f(x_k)$, where $\gamma_k$ is an arbitrary scalar and the function $f(x)$ is specified by
$$f(x) := \frac{1}{2}\|F(x)\|^2. \tag{11}$$
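As a concrete illustration, the following minimal Python sketch (the names are ours; F is any callable implementing the symmetric system) evaluates the approximation (10) and the merit function (11):

```python
import numpy as np

def approx_gradient(F, x, gamma):
    """Li-Fukushima approximation h_k of the gradient of f at x_k,
    Eq. (10); no Jacobian is formed, only two evaluations of F."""
    Fx = F(x)
    return (F(x + gamma * Fx) - Fx) / gamma

def merit(F, x):
    """Merit function f(x) = 0.5 * ||F(x)||^2, Eq. (11)."""
    Fx = F(x)
    return 0.5 * (Fx @ Fx)
```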
Now, we propose the optimal hybrid CG parameter as
$$\beta_k^{H*} = (1 - \omega_k)\,\beta_k^{DL*} + \omega_k\,\beta_k^{PRP*}, \quad \omega_k \in [0, 1], \tag{12}$$
where
$$\beta_k^{DL*} = \frac{h_k^T (y_{k-1} - t_k^* s_{k-1})}{d_{k-1}^T y_{k-1}}, \tag{13}$$
$$\beta_k^{PRP*} = \frac{h_k^T y_{k-1}}{\|h_{k-1}\|^2} - \zeta_k^* \frac{h_k^T d_{k-1}}{\|h_{k-1}\|^2}, \tag{14}$$
where $y_{k-1} = h_k - h_{k-1}$, $s_{k-1} = x_k - x_{k-1}$, $t_k^* = t_k^1$ or $t_k^2$, and $\gamma_k$ is to be determined such that
$$f(x_k + \gamma_k d_k) - f(x_k) \le -\tau_1 \|\gamma_k F(x_k)\|^2 - \tau_2 \|\gamma_k d_k\|^2 + \xi_k f(x_k), \tag{15}$$
with $\tau_1, \tau_2 \in (0, 1)$, where $\{\xi_k\}$ is a non-negative sequence satisfying
$$\sum_{k=0}^{\infty} \xi_k < \infty. \tag{16}$$
Moreover, we define our optimal hybrid direction as
$$d_0 = -h_0, \qquad d_k = -h_k + \beta_k^{H*} d_{k-1}, \quad k \ge 1. \tag{17}$$
However, an optimal selection of $\omega_k \in [0, 1]$ in (12) yields an optimal $\beta_k^{H*}$. Therefore, in order to obtain an optimal choice of $\omega_k \in [0, 1]$ at every iteration, we proceed as follows.

2.1. The First Choice

The Newton method is known to capture the full information of the Jacobian matrix; therefore, the first hybridization parameter is derived by combining the proposed direction with the Newton direction. Recall that the default Newton direction is defined by
$$d_k = -J_k^{-1} F_k, \tag{18}$$
where $J_k^{-1}$ is the Jacobian inverse. Using the approximate gradient (10), the Newton direction can be rewritten as
$$d_k = -J_k^{-1} h_k. \tag{19}$$
Combining (19) with (17), we get
$$-J_k^{-1} h_k = -h_k + \beta_k^{H*} d_{k-1}. \tag{20}$$
Multiplying (20) by $s_{k-1}^T J_k$, we obtain
$$-s_{k-1}^T h_k = -s_{k-1}^T J_k h_k + \beta_k^{H*} s_{k-1}^T J_k d_{k-1}. \tag{21}$$
Equation (21) still involves the Jacobian matrix; therefore, to eliminate the Jacobian we employ the following secant equation:
$$y_{k-1} = J_k s_{k-1}. \tag{22}$$
Now, using (22) in (21) together with the symmetry of the Jacobian matrix, we have
$$-s_{k-1}^T h_k = -y_{k-1}^T h_k + \beta_k^{H*}\, y_{k-1}^T d_{k-1}, \tag{23}$$
which yields
$$-s_{k-1}^T h_k = -y_{k-1}^T h_k + \left[(1 - \omega_k)\beta_k^{DL*} + \omega_k \beta_k^{PRP*}\right] y_{k-1}^T d_{k-1}. \tag{24}$$
Now, solving for $\omega_k$, we have
$$\omega_k^1 = \frac{y_{k-1}^T h_k - s_{k-1}^T h_k - \beta_k^{DL*}\, y_{k-1}^T d_{k-1}}{\left(\beta_k^{PRP*} - \beta_k^{DL*}\right) y_{k-1}^T d_{k-1}}. \tag{25}$$

2.2. The Second Choice

The second choice of the hybridization parameter is obtained by utilizing the conjugacy condition. Recall that the conjugacy condition is given by
$$y_{k-1}^T d_k = 0, \quad \forall k. \tag{26}$$
Now, using the direction (17) in Equation (26), we obtain
$$-y_{k-1}^T h_k + \beta_k^{H*}\, y_{k-1}^T d_{k-1} = 0, \tag{27}$$
and using the definition of $\beta_k^{H*}$ in (27), we get
$$-y_{k-1}^T h_k + \beta_k^{DL*}\, y_{k-1}^T d_{k-1} + \omega_k \left(\beta_k^{PRP*} - \beta_k^{DL*}\right) y_{k-1}^T d_{k-1} = 0. \tag{28}$$
Therefore, solving for $\omega_k$ in (28), we get the second formula for computing $\omega_k$ at every iteration:
$$\omega_k^2 = \frac{y_{k-1}^T h_k - \beta_k^{DL*}\, y_{k-1}^T d_{k-1}}{\left(\beta_k^{PRP*} - \beta_k^{DL*}\right) y_{k-1}^T d_{k-1}}. \tag{29}$$
Furthermore, note that the proposed choices for $\omega_k$ in (25) and (29) may fall outside the interval $[0, 1]$. To maintain the convex combination in (12), we therefore set $\omega_k = 1$ whenever $\omega_k > 1$, and $\omega_k = 0$ whenever $\omega_k < 0$. An interesting feature of the proposed method is that for $\omega_k = 1$ it reduces to a PRP-type method [24], while for $\omega_k = 0$ we obtain an inexact Dai–Liao CG method for symmetric nonlinear equations, which has not yet been presented in the literature. A sketch of the clamped hybrid parameter is given below, followed by the proposed Algorithm 1.
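The following is a minimal sketch of the hybrid parameter computation with the clamping just described; it assumes the choice $t_k^* = t_k^1$, follows the sign conventions of the reconstructed formulas (13), (14), (25), and (29), and all names are ours:

```python
import numpy as np

def hybrid_beta(h, h_prev, d_prev, y_prev, s_prev, choice=1):
    """Hybrid CG parameter (12) with omega_k clamped to [0, 1]."""
    dy = d_prev @ y_prev
    t_star = np.linalg.norm(y_prev) / np.linalg.norm(s_prev)   # t_k^1, Eq. (5)
    zeta = dy / (d_prev @ d_prev)                              # zeta_k^*, Eq. (9)
    beta_dl = h @ (y_prev - t_star * s_prev) / dy              # Eq. (13)
    beta_prp = (h @ y_prev - zeta * (h @ d_prev)) / (h_prev @ h_prev)  # Eq. (14)
    if choice == 1:                      # Newton-based choice, Eq. (25)
        num = y_prev @ h - s_prev @ h - beta_dl * dy
    else:                                # conjugacy-based choice, Eq. (29)
        num = y_prev @ h - beta_dl * dy
    omega = num / ((beta_prp - beta_dl) * dy)
    omega = min(max(omega, 0.0), 1.0)    # keep the combination convex
    return (1.0 - omega) * beta_dl + omega * beta_prp          # Eq. (12)
```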
Algorithm 1: Optimal hybrid CG method (OHCG).
Step 0: Select $x_0 \in \mathbb{R}^n$ and initialize the constants $s \in (0, 1)$, $\epsilon > 0$, and $\tau_1, \tau_2 \in (0, 1)$. Set $k = 0$ and choose the positive sequence $\{\xi_k\}$ satisfying (16).
Step 1: Whenever $\|F_k\| \le \epsilon$, stop; otherwise, go to Step 2.
Step 2: Compute the search direction via (17), using either of the optimal choices (25) or (29).
Step 3: Determine $\gamma_k = \max\{1, s, s^2, \ldots\}$ satisfying
$$f(x_k + \gamma_k d_k) - f(x_k) \le -\tau_1 \|\gamma_k F(x_k)\|^2 - \tau_2 \|\gamma_k d_k\|^2 + \xi_k f(x_k). \tag{30}$$
Step 4: Compute the next iterate using (2).
Step 5: Set $k = k + 1$ and go to Step 1.
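For concreteness, here is a minimal Python sketch of Algorithm 1 built on the helper sketches above. The summable sequence $\xi_k = 1/(k+1)^2$ is our assumption (the algorithm only requires (16)), as is reusing the accepted step size as the scalar inside (10):

```python
import numpy as np

def ohcg(F, x0, s=0.3, tau1=1e-4, tau2=1e-4, eps=1e-4,
         gamma0=0.01, max_iter=1000):
    """Sketch of Algorithm 1 (OHCG); uses approx_gradient and
    hybrid_beta from the earlier sketches."""
    x = np.asarray(x0, dtype=float)
    h = approx_gradient(F, x, gamma0)          # h_0 via (10)
    d = -h                                     # d_0 = -h_0, Eq. (17)
    for k in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx) <= eps:          # Step 1
            break
        xi = 1.0 / (k + 1) ** 2                # assumed summable xi_k
        f0 = 0.5 * (Fx @ Fx)
        gamma = 1.0                            # Step 3: gamma in {1, s, s^2, ...}
        while True:
            Fn = F(x + gamma * d)
            rhs = (-tau1 * gamma ** 2 * (Fx @ Fx)
                   - tau2 * gamma ** 2 * (d @ d) + xi * f0)
            if 0.5 * (Fn @ Fn) - f0 <= rhs:    # line search (30)
                break
            gamma *= s
        x_new = x + gamma * d                  # Step 4, Eq. (2)
        h_new = approx_gradient(F, x_new, gamma)
        y, sv = h_new - h, x_new - x           # y_{k-1}, s_{k-1}
        beta = hybrid_beta(h_new, h, d, y, sv) # Eq. (12) with clamping
        d = -h_new + beta * d                  # Eq. (17)
        x, h = x_new, h_new
    return x
```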

3. Global Convergence

This section proves the global convergence of Algorithm 1 under the following assumptions. Start by defining the level set
$$\chi = \left\{ x \mid f(x) \le e^{\xi} f(x_0) \right\}, \tag{31}$$
where $\xi$ denotes the (finite) sum of the sequence fulfilling condition (16).
Assumption 1.
1. The level set (31) is bounded.
2. In a neighborhood $W$ of $\chi$, the Jacobian $h'(x)$ of $h(x)$ is bounded and symmetric positive definite; i.e., there exist constants $q \ge q_2 > 0$ such that
$$\|h'(x)\| \le q, \quad \forall x \in W, \tag{32}$$
and
$$q_2 \|g\|^2 \le g^T h'(x)\, g, \quad \forall x \in W, \; g \in \mathbb{R}^n. \tag{33}$$
Assumption 1 above implies that there exist $R_1, R_2, q_1 > 0$ such that
$$\|J(x)\| \le R_1, \quad \|F(x)\| \le R_2, \quad \forall x \in W, \tag{34}$$
and
$$\|\nabla f(x) - \nabla f(z)\| \le q_1 \|x - z\|, \quad \forall x, z \in W. \tag{35}$$
Lemma 1.
Let $\{x_k\}$ be generated by Algorithm 1 (OHCG). Then, $\{\|F_k\|\}$ converges and $x_k \in \chi$ for all $k$.
Proof. 
From the step length condition (30) and the definition (11), it is not difficult to see that
$$\|F_{k+1}\|^2 \le (1 + \xi_k)\|F_k\|^2. \tag{36}$$
As the sequence $\{\xi_k\}$ is summable, we can apply Lemma 3.3 in [25], and hence $\{\|F_k\|\}$ converges. Again, by the arithmetic–geometric mean inequality,
$$\|F_{k+1}\| \le (1 + \xi_k)^{\frac{1}{2}} \|F_k\| \le \prod_{i=0}^{k} (1 + \xi_i)^{\frac{1}{2}}\, \|F_0\| \le \left[ \frac{1}{k+1} \sum_{i=0}^{k} (1 + \xi_i) \right]^{\frac{k+1}{2}} \|F_0\| = \left[ 1 + \frac{1}{k+1} \sum_{i=0}^{k} \xi_i \right]^{\frac{k+1}{2}} \|F_0\| \le \left[ 1 + \frac{\xi}{k+1} \right]^{\frac{k+1}{2}} \|F_0\| \le e^{\xi/2}\, \|F_0\|, \tag{37}$$
where $\xi = \sum_{i=0}^{\infty} \xi_i$. This implies $x_k \in \chi$. □
Lemma 2.
Suppose Assumption 1 holds. Then,
$$\lim_{k \to \infty} \|\gamma_k d_k\| = \lim_{k \to \infty} \|s_k\| = 0 \quad \text{and} \quad \lim_{k \to \infty} \|\gamma_k F_k\| = 0. \tag{38}$$
Proof. 
The proof follows directly from (16) and (30). □
Theorem 1.
Algorithm 1 (OHCG) converges globally whenever Assumption 1 holds; i.e.,
$$\liminf_{k \to \infty} \|\nabla f(x_k)\| = 0. \tag{39}$$
Proof. 
Suppose (39) is not true. Then there exists $\alpha > 0$ such that
$$\|\nabla f(x_k)\| \ge \alpha, \quad \forall k \ge 0. \tag{40}$$
As $\nabla f(x_k) = J_k^T F_k$, we can deduce from (40) that there exists $\alpha_1 > 0$ satisfying
$$\|F_k\| \ge \alpha_1, \quad \forall k \ge 0. \tag{41}$$
CASE I: $\limsup_{k \to \infty} \gamma_k > 0$. Then (38) implies that $\liminf_{k \to \infty} \|F_k\| = 0$. This fact together with Lemma 1 implies that $\lim_{k \to \infty} \|F_k\| = 0$, which contradicts (41).
CASE II: $\limsup_{k \to \infty} \gamma_k = 0$. Since $\gamma_k \ge 0$, this shows that
$$\lim_{k \to \infty} \gamma_k = 0. \tag{42}$$
Let us consider the definition of $h_k$ in (10) together with (34); then
$$\|h_k\| = \left\| \int_0^1 J(x_k + l\gamma_{k-1} F_k)\, dl \; F_k \right\| \le R_1 R_2, \quad \forall k \ge 0. \tag{43}$$
Now, using (32), we have
$$\|y_k\| = \left\| \int_0^1 h'(x_k + l s_k)\, dl \; s_k \right\| \le q \|s_k\|. \tag{44}$$
Furthermore, by the mean value theorem, we have
$$y_{k-1}^T s_{k-1} = s_{k-1}^T (h_k - h_{k-1}) = s_{k-1}^T h'(\vartheta)\, s_{k-1} \ge q_2 \|s_{k-1}\|^2, \tag{45}$$
where $\vartheta = x_{k-1} + \upsilon(x_k - x_{k-1})$, $\upsilon \in (0, 1)$. Again, from the definitions of $\beta_k^{DL*}$ and $\beta_k^{PRP*}$, we get
$$\begin{aligned}
\|d_k\| &= \left\| -h_k + \beta_k^{H*} d_{k-1} \right\| = \left\| -h_k + \left[ (1-\omega_k)\beta_k^{DL*} + \omega_k \beta_k^{PRP*} \right] d_{k-1} \right\| \\
&= \left\| -h_k + \left[ (1-\omega_k) \frac{h_k^T (y_{k-1} - t_k^1 s_{k-1})}{d_{k-1}^T y_{k-1}} + \omega_k \frac{h_k^T y_{k-1}}{\|h_{k-1}\|^2} - \omega_k \zeta_k^* \frac{h_k^T d_{k-1}}{\|h_{k-1}\|^2} \right] d_{k-1} \right\| \\
&\le \|h_k\| + \left[ (1-\omega_k) \frac{\|h_k\| \left( \|y_{k-1}\| + t_k^1 \|s_{k-1}\| \right)}{d_{k-1}^T y_{k-1}} + \omega_k \frac{\|h_k\| \|y_{k-1}\|}{\|h_{k-1}\|^2} + \omega_k |\zeta_k^*| \frac{\|h_k\| \|d_{k-1}\|}{\|h_{k-1}\|^2} \right] \|d_{k-1}\| \\
&\le R_1 R_2 + \left[ (1-\omega_k) \frac{2 R_1 R_2\, q}{q_2} + 2\omega_k \frac{R_1 R_2\, q \|s_{k-1}\|}{\alpha^2} \right] \|d_{k-1}\|,
\end{aligned} \tag{46}$$
where we used the Cauchy–Schwarz inequality in the first inequality and the bounds (43)–(45) in the last inequality. Now, from (38), there exists $\hat{c} \in (0, 1)$ such that $\|s_{k-1}\| \le \hat{c}$; thus we have
$$\|d_k\| \le R_1 R_2 + M \|d_{k-1}\| \le R_1 R_2 \left( 1 + M + M^2 + \cdots + M^k \right) + M^k \|d_0\| \le \frac{R_1 R_2}{1 - M} + R_1 R_2 = \frac{R_1 R_2 (2 - M)}{1 - M}, \tag{47}$$
where $M = (1-\omega_k) \frac{2 R_1 R_2\, q}{q_2} + 2\omega_k \frac{R_1 R_2\, \hat{c}\, q}{\alpha^2} < 1$. This shows that our direction is bounded. As $\lim_{k \to \infty} \gamma_k = 0$, the trial step $\gamma_k' = \gamma_k / s$ does not satisfy (30), that is to say,
$$f(x_k + \gamma_k' d_k) - f(x_k) > -\tau_1 \|\gamma_k' F(x_k)\|^2 - \tau_2 \|\gamma_k' d_k\|^2 + \xi_k f(x_k), \tag{48}$$
which, since $\xi_k f(x_k) \ge 0$, means that
$$\frac{f(x_k + \gamma_k' d_k) - f(x_k)}{\gamma_k'} > -\tau_1 \gamma_k' \|F(x_k)\|^2 - \tau_2 \gamma_k' \|d_k\|^2. \tag{49}$$
By the mean value theorem, there exists $\delta \in (0, 1)$ such that
$$\frac{f(x_k + \gamma_k' d_k) - f(x_k)}{\gamma_k'} = \nabla f(x_k + \delta \gamma_k' d_k)^T d_k. \tag{50}$$
As $\chi$ is bounded and $\{x_k\} \subset \chi$, we may assume that $x_k \to x^*$ and, using (10) and (17), obtain
$$\lim_{k \to \infty} d_k = -\lim_{k \to \infty} h_k + \lim_{k \to \infty} \beta_k^{H*} d_{k-1} = -\nabla f(x^*). \tag{51}$$
On the other hand,
$$\lim_{k \to \infty} \nabla f(x_k + \delta \gamma_k' d_k) = \nabla f(x^*).$$
Therefore, combining (48)–(51) and letting $k \to \infty$, we get $-\nabla f(x^*)^T \nabla f(x^*) \ge 0$, which indicates that $\nabla f(x^*) = 0$. This contradicts (40), and thus the proof is complete. □

4. Numerical Experiment

This section provides a numerical comparison between the proposed algorithm and some efficient CG solvers for symmetric nonlinear equations. For the numerical experiments with the proposed algorithm, we chose the optimal parameter $t_k^1$ defined by (5) for the optimal DL method and $\zeta_k^*$ defined by (9) for the optimal PRP method. The algorithms were written in Matlab R2014a and run on a 1.6 GHz CPU with 8 GB of RAM. For a fair comparison, the parameters for the NDAS method [22] and the ICGM method [21] were set as in the reference papers, while for our proposed methods we set $s = 0.3$, $\tau_1 = \tau_2 = 0.0001$, $\epsilon = 10^{-4}$, and $\gamma_{k-1} = 0.01$ as the initial scalar in (10). In our experiment, we considered the following problems:
Problem 1 ([26]). The function $F(x)$ is given by
$$F_i(x) = e^{x_i} - 1, \quad i = 1, 2, \ldots, n.$$
Problem 2 ([27]). The function $F(x)$ is given by
$$F_1(x) = x_1(x_1^2 + x_2^2) - 1,$$
$$F_i(x) = x_i(x_{i-1}^2 + 2x_i^2 + x_{i+1}^2) - 1, \quad i = 2, 3, \ldots, n-1,$$
$$F_n(x) = x_n(x_{n-1}^2 + x_n^2).$$
Problem 3 ([5]). The function $F(x)$ is given by
$$F_i(x) = \frac{i}{10}\left(1 - x_i^2 - e^{-x_i^2}\right), \quad i = 1, 2, \ldots, n-1,$$
$$F_n(x) = \frac{n}{10}\left(1 - e^{-x_n^2}\right).$$
Problem 4 ([28]). The function $F(x)$ is given by
$$F_1(x) = 3x_1^3 + 2x_2 - 5 + \sin(x_1 - x_2)\sin(x_1 + x_2),$$
$$F_i(x) = -x_{i-1} e^{x_{i-1} - x_i} + x_i(4 + 3x_i^2) + \sin(x_i - x_{i+1})\sin(x_i + x_{i+1}), \quad i = 2, 3, \ldots, n-1,$$
$$F_n(x) = -x_{n-1} e^{x_{n-1} - x_n} + 4x_n - 3.$$
Problem 5 ([27]). The function $F(x)$ is given by
$$F_1(x) = x_1 - e^{\cos\left(\frac{x_1 + x_2}{n+1}\right)},$$
$$F_i(x) = x_i - e^{\cos\left(\frac{x_{i-1} + x_i + x_{i+1}}{n+1}\right)}, \quad i = 2, 3, \ldots, n-1,$$
$$F_n(x) = x_n - e^{\cos\left(\frac{x_{n-1} + x_n}{n+1}\right)}.$$
Problem 6 ([29]). The function $F(x)$ is given by
$$F_i(x) = \log(x_i + 1) - \frac{x_i}{n}, \quad i = 1, 2, \ldots, n.$$
Problem 7 ([29]). The function $F(x)$ is given by
$$F_i(x) = 2x_i - \sin x_i, \quad i = 1, 2, \ldots, n.$$
Problem 8 ([29]). The function $F(x)$ is given by, with $h = \frac{1}{n+1}$,
$$F_1(x) = x_1 + e^{\cos(h(x_1 + x_2))},$$
$$F_i(x) = x_i + e^{\cos(h(x_{i-1} + x_i + x_{i+1}))}, \quad i = 2, 3, \ldots, n-1,$$
$$F_n(x) = x_n + e^{\cos(h(x_{n-1} + x_n))}.$$
Problem 9 ([29]). The function $F(x)$ is given by
$$F_i(x) = x_i - \sin|x_i - 1|, \quad i = 1, 2, \ldots, n.$$
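To make the test setup concrete, here are two of the problems coded as vectorized residual functions, together with a hypothetical run of the ohcg sketch given earlier (a hedged illustration; the remaining problems follow the same pattern):

```python
import numpy as np

def problem1(x):
    """Problem 1: F_i(x) = exp(x_i) - 1."""
    return np.exp(x) - 1.0

def problem7(x):
    """Problem 7: F_i(x) = 2*x_i - sin(x_i)."""
    return 2.0 * x - np.sin(x)

# Example run with the first initial point x1 = (0.1, ..., 0.1):
x0 = 0.1 * np.ones(50_000)
sol = ohcg(problem1, x0)
print(np.linalg.norm(problem1(sol)))  # <= 1e-4 at successful termination
```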
The numerical efficiency of our algorithm in terms of the number of iterations and the CPU time in seconds is shown in Table 1, Table 2, Table 3, Table 4 and Table 5. The term “ITR” denotes the number of iterations, “TIME” the CPU time, and “NORM” the norm of the function value at the stopping point. The iteration ends whenever $\|F_k\| \le 10^{-4}$ or the number of iterations reaches 1000. Throughout the experiment, the following initial points were used: $x_1 = (0.1, 0.1, \ldots, 0.1)$, $x_2 = (0.2, 0.2, \ldots, 0.2)$, $x_3 = (0.3, 0.3, \ldots, 0.3)$, $x_4 = \left(1, \frac{1}{2}, \frac{2}{3}, \ldots, \frac{n-1}{n}\right)$, $x_5 = \left(1-1, 1-\frac{1}{2}, 1-\frac{2}{3}, \ldots, 1-\frac{n-1}{n}\right)$, $x_6 = (-0.1, -0.1, \ldots, -0.1)$, $x_7 = 0.1\left(1, \frac{1}{2}, \frac{2}{3}, \ldots, \frac{n-1}{n}\right)$, and $x_8 = -\left(1-1, 1-\frac{1}{2}, 1-\frac{2}{3}, \ldots, 1-\frac{n-1}{n}\right)$.
From Table 1, which covers Problems 1 and 2, it can be observed that our algorithm wins in both the number of iterations and the CPU time, followed by the ICGM and NDAS algorithms. Note, however, that the NDAS method failed to solve Problem 1 for six of the initial points, while it has an advantage over the ICGM method on Problem 2 in both the number of iterations and the computing time. Similarly, for all remaining problems, our algorithm requires fewer iterations and less CPU time, except for Problems 5 and 8. For these two problems, the NDAS and ICGM methods have substantially less computing time and compete with the proposed algorithm in the number of iterations. We summarize the overall performance for both the number of iterations and the CPU time using the Dolan and Moré [30] performance profile; in this profile, the best method is the one whose curve sits in the top left corner. It can be observed that our algorithm attains the top left curves in Figure 1 and Figure 2 for the number of iterations and the CPU time, respectively. This clearly shows that our algorithm is the best in terms of the number of iterations and the CPU time compared to the NDAS and ICGM algorithms.
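As a sketch of how such a profile can be computed (our own minimal implementation of the Dolan–Moré ratio $r_{p,s} = t_{p,s} / \min_s t_{p,s}$; names are ours):

```python
import numpy as np

def performance_profile(T):
    """Dolan-More performance profile [30]. T is an array of shape
    (n_problems, n_solvers) holding costs (iterations or CPU time),
    with np.inf marking failures. Returns thresholds tau and the
    fraction rho[t, s] of problems solver s solves within ratio tau."""
    ratios = T / T.min(axis=1, keepdims=True)      # r_{p,s}
    taus = np.unique(ratios[np.isfinite(ratios)])
    rho = np.array([(ratios <= t).mean(axis=0) for t in taus])
    return taus, rho
```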

5. Conclusions

This paper presented an inexact optimal hybrid CG algorithm for solving systems of symmetric nonlinear equations. The hybrid method is a convex combination of the optimal DL method and the optimal three-term PRP CG method, using the Li and Fukushima approximate gradient relation. The method is matrix-free and derivative-free, enabling it to handle large-scale systems of symmetric nonlinear equations effectively. Moreover, some mild assumptions were used to prove the global convergence of the method. Furthermore, the numerical efficiency of the method was demonstrated on some test problems by comparing the number of iterations and the CPU time with those of the NDAS method [22] and the ICGM method [21]. Its overall success shows that the proposed approach is a better alternative for solving symmetric nonlinear equations in terms of the number of iterations and the CPU time. Generally, CG methods for symmetric nonlinear systems can be improved by devising new efficient CG parameters or by modifying existing CG directions using appropriate techniques. As a possibility for future research, time complexity analysis and comparison with evolutionary optimization algorithms for global optimization problems could be looked into.

Author Contributions

Conceptualization, J.S.; methodology, J.S.; software, A.B.A.; validation, K.M. and A.S.; formal analysis, K.M. and K.O.A.; investigation, K.M. and A.B.A.; resources, K.M.; data curation, A.B.A. and A.S.; writing—original draft preparation, J.S.; writing—review and editing, A.B.A.; visualization, K.O.A.; supervision, K.M.; project administration, A.S.; funding acquisition, K.O.A. All authors have read and agreed to the published version of the manuscript.

Funding

The first author is grateful to TWAS/CUI for the Award of FR number: 3240299486.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The second author was financially supported by Rajamangala University of Technology Phra Nakhon (RMUTP) Research Scholarship. The last two authors acknowledge with thanks the Department of Mathematics and Applied Mathematics at the Sefako Makgatho Health Sciences University.

Conflicts of Interest

The authors declare no conflict of interest.


References

1. Li, D.; Fukushima, M. A Globally and Superlinearly Convergent Gauss–Newton-Based BFGS Method for Symmetric Nonlinear Equations. SIAM J. Numer. Anal. 1999, 37, 152–172.
2. Gu, G.Z.; Li, D.H.; Qi, L.; Zhou, S.Z. Descent directions of quasi-Newton methods for symmetric nonlinear equations. SIAM J. Numer. Anal. 2002, 40, 1763–1774.
3. Waziri, M.Y.; Sabi’u, J. A derivative-free conjugate gradient method and its global convergence for solving symmetric nonlinear equations. Int. J. Math. Math. Sci. 2015, 2015, 961487.
4. Yuan, G.; Lu, X.; Wei, Z. BFGS trust-region method for symmetric nonlinear equations. J. Comput. Appl. Math. 2009, 230, 44–58.
5. Zhou, W.; Shen, D. An inexact PRP conjugate gradient method for symmetric nonlinear equations. Numer. Funct. Anal. Optim. 2014, 35, 370–388.
6. Waziri, M.Y.; Sabi’u, J. An alternative conjugate gradient approach for large-scale symmetric nonlinear equations. J. Math. Comput. Sci. 2016, 6, 855–874.
7. Li, D.H.; Wang, X.L. A modified Fletcher–Reeves-type derivative-free method for symmetric nonlinear equations. Numer. Algebra Control Optim. 2011, 1, 71–82.
8. Polyak, B.T. The conjugate gradient method in extreme problems. USSR Comput. Math. Math. Phys. 1969, 9, 94–112.
9. Polak, E.; Ribière, G. Note on the convergence of methods of conjugate directions. RIRO 1969, 3, 35–43.
10. Babaie-Kafaki, S.; Ghanbari, R. A descent extension of the Polak–Ribière–Polyak conjugate gradient method. Comput. Math. Appl. 2014, 68, 2005–2011.
11. Yu, G.; Guan, L.; Li, G. Global convergence of modified Polak–Ribière–Polyak conjugate gradient methods with sufficient descent property. J. Ind. Manag. Optim. 2008, 4, 565.
12. Sabi’u, J. Enhanced derivative-free conjugate gradient method for solving symmetric nonlinear equations. Int. J. Adv. Appl. Sci. 2016, 5, 50–57.
13. Dai, Y.H.; Liao, L.Z. New conjugacy conditions and related nonlinear conjugate gradient methods. Appl. Math. Optim. 2001, 43, 87–101.
14. Babaie-Kafaki, S.; Ghanbari, R. The Dai–Liao nonlinear conjugate gradient method with optimal parameter choices. Eur. J. Oper. Res. 2014, 234, 625–630.
15. Babaie-Kafaki, S.; Ghanbari, R. An optimal extension of the Polak–Ribière–Polyak conjugate gradient method. Numer. Funct. Anal. Optim. 2017, 38, 1115–1124.
16. Zhang, L.; Zhou, W.; Li, D.H. Global convergence of a modified Fletcher–Reeves conjugate gradient method with Armijo-type line search. Numer. Math. 2006, 104, 561–572.
17. Zhang, L.; Zhou, W.; Li, D. Some descent three-term conjugate gradient methods and their global convergence. Optim. Methods Softw. 2007, 22, 697–711.
18. Zhou, W.; Chen, X. On the convergence of a derivative-free HS type method for symmetric nonlinear equations. Adv. Model. Optim. 2012, 3, 645–654.
19. Xiao, Y.; Wu, C.; Wu, S.Y. Norm descent conjugate gradient methods for solving symmetric nonlinear equations. J. Glob. Optim. 2015, 62, 751–762.
20. Dauda, M.K.; Mamat, M.; Mohamed, M.A.; Mohamad, F.S.; Waziri, M.Y. Derived Conjugate Gradient Parameter for Solving Symmetric Systems of Nonlinear Equations. Far East J. Math. Sci. (FJMS) 2017, 102, 2017.
21. Liu, J.K.; Feng, Y.M. A norm descent derivative-free algorithm for solving large-scale nonlinear symmetric equations. J. Comput. Appl. Math. 2018, 344, 89–99.
22. Abubakar, A.B.; Kumam, P.; Awwal, A.M. An inexact conjugate gradient method for symmetric nonlinear equations. Comput. Math. Methods 2019, 1, e1065.
23. Waziri, M.Y.; Yusuf Kufena, M.; Halilu, A.S. Derivative-Free Three-Term Spectral Conjugate Gradient Method for Symmetric Nonlinear Equations. Thai J. Math. 2020, 18, 1417–1431.
24. Sabi’u, J.; Muangchoo, K.; Shah, A.; Abubakar, A.B.; Jolaoso, L.O. A Modified PRP-CG Type Derivative-Free Algorithm with Optimal Choices for Solving Large-Scale Nonlinear Symmetric Equations. Symmetry 2021, 13, 234.
25. Dennis, J.E.; Moré, J.J. A characterization of superlinear convergence and its application to quasi-Newton methods. Math. Comp. 1974, 28, 549–560.
26. Yakubu, U.A.; Mamat, M.; Mohamad, M.A.; Rivaie, M.; Sabi’u, J. A recent modification on Dai–Liao conjugate gradient method for solving symmetric nonlinear equations. Far East J. Math. Sci. 2018, 12, 1961–1974.
27. Sabi’u, J.; Gadu, A.M. A Projected Hybrid Conjugate Gradient Method for Solving Large-scale Systems of Nonlinear Equations. Malays. J. Comput. Appl. Math. 2018, 1, 10–20.
28. Cruz, W.L.; Martínez, J.; Raydan, M. Spectral residual method without gradient information for solving large-scale nonlinear systems of equations. Math. Comput. 2006, 75, 1429–1448.
29. Mohammad, H.; Abubakar, A.B. A descent derivative-free algorithm for nonlinear monotone equations with convex constraints. RAIRO Oper. Res. 2020, 54, 489–505.
30. Dolan, E.D.; Moré, J.J. Benchmarking optimization software with performance profiles. Math. Program. 2002, 91, 201–213.
Figure 1. Performance of Algorithm 1 with the choices $\beta_k^{H*}(\omega_k^1)$ and $\beta_k^{H*}(\omega_k^2)$ versus the NDAS method [22] and the ICGM method [21] for the number of iterations.
Figure 2. Performance of Algorithm 1 with the choices $\beta_k^{H*}(\omega_k^1)$ and $\beta_k^{H*}(\omega_k^2)$ versus the NDAS method [22] and the ICGM method [21] for the CPU time in seconds.
Table 1. Numerical efficiency of Algorithm 1 with the choices $\beta_k^{H*}(\omega_k^1)$ and $\beta_k^{H*}(\omega_k^2)$ versus the NDAS method [22] and the ICGM method [21]. The ITR/TIME/NORM column triples refer, in order, to $\beta_k^{H*}(\omega_k^1)$, $\beta_k^{H*}(\omega_k^2)$, NDAS, and ICGM.

Problem 1:

| DIM | GUESS | ITR | TIME | NORM | ITR | TIME | NORM | ITR | TIME | NORM | ITR | TIME | NORM |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 50,000 | x1 | 4 | 0.051017 | 1.32e-10 | 5 | 0.711194 | 9.38e-11 | 1000 | 5.358113 | 0.000606 | 10 | 0.083581 | 7.98e-5 |
| 50,000 | x2 | 4 | 0.03615 | 1.27e-5 | 6 | 0.127425 | 6.69e-6 | 1000 | 5.769465 | 0.001268 | 12 | 0.08618 | 4.37e-5 |
| 50,000 | x3 | 5 | 0.064414 | 8.12e-7 | 7 | 0.136223 | 4.81e-7 | 1000 | 6.545712 | 0.001127 | 13 | 0.090501 | 5.89e-5 |
| 50,000 | x4 | 37 | 0.320713 | 2.01e-8 | 90 | 1.087917 | 2.48e-8 | 51 | 0.313294 | 6.54e-5 | 45 | 0.312344 | 9.66e-6 |
| 50,000 | x5 | 37 | 0.373899 | 1.33e-8 | 90 | 0.952427 | 2.05e-8 | 51 | 0.385819 | 6.47e-5 | 45 | 0.307783 | 1.04e-5 |
| 50,000 | x6 | 9 | 0.097553 | 6.76e-7 | 13 | 0.160268 | 7.64e-10 | 1000 | 7.526905 | 0.000645 | 18 | 0.117476 | 3.26e-5 |
| 50,000 | x7 | 3 | 0.035575 | 4.37e-5 | 4 | 0.1074 | 5.78e-6 | 1000 | 6.622959 | 0.002336 | 8 | 0.069154 | 6.78e-5 |
| 50,000 | x8 | 7 | 0.073464 | 1.92e-6 | 12 | 0.184662 | 3.25e-10 | 1000 | 6.60593 | 0.000566 | 10 | 0.072791 | 1.17e-5 |
| 100,000 | x1 | 4 | 0.08817 | 1.86e-10 | 5 | 0.216573 | 1.57e-10 | 1000 | 12.85136 | 0.000858 | 11 | 0.163569 | 3.39e-5 |
| 100,000 | x2 | 4 | 0.112781 | 1.8e-5 | 6 | 0.18873 | 1.31e-5 | 1000 | 12.02928 | 0.001794 | 12 | 0.161687 | 6.18e-5 |
| 100,000 | x3 | 5 | 0.133655 | 1.15e-6 | 8 | 0.241027 | 1.19e-7 | 1000 | 12.12703 | 0.001593 | 13 | 0.200714 | 8.33e-5 |
| 100,000 | x4 | 37 | 0.958361 | 2.56e-8 | 90 | 2.439265 | 1.41e-5 | 51 | 0.670645 | 9.21e-5 | 45 | 0.571358 | 1.39e-5 |
| 100,000 | x5 | 37 | 0.881123 | 2.08e-8 | 90 | 2.389912 | 1.31e-5 | 51 | 0.812228 | 9.16e-5 | 45 | 0.573077 | 1.44e-5 |
| 100,000 | x6 | 9 | 0.241214 | 9.56e-7 | 13 | 0.329319 | 2.73e-7 | 1000 | 13.23539 | 0.000913 | 18 | 0.230604 | 4.61e-5 |
| 100,000 | x7 | 3 | 0.080855 | 6.18e-5 | 4 | 0.118452 | 1.32e-5 | 1000 | 12.86233 | 0.003303 | 8 | 0.110558 | 9.59e-5 |
| 100,000 | x8 | 7 | 0.176245 | 2.71e-6 | 12 | 0.339665 | 1.73e-9 | 1000 | 12.07158 | 0.000801 | 10 | 0.160622 | 1.65e-5 |

Problem 2:

| DIM | GUESS | ITR | TIME | NORM | ITR | TIME | NORM | ITR | TIME | NORM | ITR | TIME | NORM |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 50,000 | x1 | 49 | 1.388361 | 9.42e-5 | 73 | 2.309985 | 6.85e-5 | 7 | 0.247055 | NaN | 537 | 10.94105 | 9.91e-5 |
| 50,000 | x2 | 46 | 1.211725 | 9.52e-5 | 63 | 2.034998 | 9.98e-5 | 133 | 3.125014 | 9.48e-5 | 630 | 12.987 | 9.96e-5 |
| 50,000 | x3 | 68 | 1.560972 | 8.81e-5 | 73 | 2.637633 | 9.47e-5 | 128 | 3.220816 | 9.87e-5 | 543 | 11.74005 | 9.94e-5 |
| 50,000 | x4 | 129 | 2.932939 | 8.14e-5 | 134 | 3.832334 | 9.47e-5 | 246 | 5.627216 | 9.44e-5 | 373 | 7.827305 | 9.93e-5 |
| 50,000 | x5 | 117 | 2.406448 | 6.01e-5 | 115 | 3.223662 | 8.55e-5 | 176 | 3.771749 | 3.71e-5 | 681 | 14.5844 | 9.99e-5 |
| 50,000 | x6 | 72 | 1.532064 | 9.72e-5 | 81 | 2.325501 | 8.45e-5 | 138 | 3.11174 | 9.54e-5 | 620 | 12.9883 | 9.87e-5 |
| 50,000 | x7 | 137 | 3.046806 | 5.47e-5 | 64 | 2.250733 | 9.24e-5 | 142 | 3.320312 | 9.39e-5 | 175 | 3.640726 | 9.91e-5 |
| 50,000 | x8 | 153 | 3.155947 | 4.9e-5 | 129 | 4.421972 | 6.39e-5 | 51 | 1.257545 | 9.36e-5 | 490 | 10.21439 | 9.98e-5 |
| 100,000 | x1 | 56 | 2.134486 | 9.31e-5 | 79 | 6.900653 | 9.86e-5 | 7 | 0.58915 | NaN | 573 | 22.36094 | 9.95e-5 |
| 100,000 | x2 | 75 | 3.026543 | 7.88e-5 | 75 | 6.693699 | 9.03e-5 | 119 | 6.103506 | 9.78e-5 | 653 | 26.02026 | 9.99e-5 |
| 100,000 | x3 | 68 | 2.695133 | 8.66e-5 | 72 | 6.2076 | 9.2e-5 | 96 | 4.540488 | 9.76e-5 | 376 | 14.5441 | 9.88e-5 |
| 100,000 | x4 | 134 | 5.237932 | 8.17e-5 | 151 | 8.914932 | 8.62e-5 | 287 | 11.58659 | 9.12e-5 | 622 | 23.51092 | 9.93e-5 |
| 100,000 | x5 | 127 | 4.708571 | 8.25e-5 | 119 | 6.758494 | 7.34e-5 | 179 | 7.320956 | 5.78e-5 | 645 | 24.98121 | 9.91e-5 |
| 100,000 | x6 | 85 | 3.38715 | 9.14e-5 | 79 | 3.983438 | 9.18e-5 | 81 | 3.417891 | 9.45e-5 | 501 | 18.89237 | 9.95e-5 |
| 100,000 | x7 | 170 | 6.837026 | 9.79e-5 | 71 | 3.73959 | 9.81e-5 | 128 | 5.434381 | 9.72e-5 | 505 | 19.19699 | 9.93e-5 |
| 100,000 | x8 | 138 | 5.192739 | 7.29e-5 | 128 | 6.579588 | 9.15e-5 | 46 | 1.912222 | 6.6e-5 | 487 | 19.58842 | 9.86e-5 |
Table 2. Numerical efficiency of Algorithm 1 with the choices $\beta_k^{H*}(\omega_k^1)$ and $\beta_k^{H*}(\omega_k^2)$ versus the NDAS method [22] and the ICGM method [21]. The ITR/TIME/NORM column triples refer, in order, to $\beta_k^{H*}(\omega_k^1)$, $\beta_k^{H*}(\omega_k^2)$, NDAS, and ICGM.

Problem 3:

| DIM | GUESS | ITR | TIME | NORM | ITR | TIME | NORM | ITR | TIME | NORM | ITR | TIME | NORM |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 50,000 | x1 | 7 | 0.130459 | 9.01e-6 | 7 | 0.326768 | 1.05e-5 | 5 | 0.147254 | 9.59e-5 | 6 | 0.161476 | 9.33e-6 |
| 50,000 | x2 | 3 | 0.097567 | 4.49e-5 | 7 | 0.208194 | 5.53e-5 | 3 | 0.100919 | 3.96e-5 | 5 | 0.205028 | 1.46e-5 |
| 50,000 | x3 | 4 | 0.137949 | 3.04e-5 | 7 | 0.269881 | 1.38e-5 | 8 | 0.304496 | 1.99e-5 | 10 | 0.297645 | 9.95e-8 |
| 50,000 | x4 | 5 | 0.156842 | 9.3e-5 | 8 | 0.316907 | 2.48e-5 | 9 | 0.36357 | 2.23e-6 | 6 | 0.176784 | 6.58e-6 |
| 50,000 | x5 | 0 | 0.006896 | 0 | 0 | 0.01035 | 0 | 0 | 0.007833 | 0 | 0 | 0.006028 | 0 |
| 50,000 | x6 | 7 | 0.271075 | 4.55e-5 | 3 | 0.175715 | 1.79e-5 | 3 | 0.202744 | NaN | 7 | 0.285133 | 4.01e-6 |
| 50,000 | x7 | 7 | 0.190334 | 9.01e-6 | 7 | 0.296095 | 1.05e-5 | 5 | 0.2407 | 9.59e-5 | 6 | 0.157174 | 9.33e-6 |
| 50,000 | x8 | 0 | 0.007218 | 0 | 0 | 0.008271 | 0 | 0 | 0.008564 | 0 | 0 | 0.00537 | 0 |
| 100,000 | x1 | 4 | 0.298726 | 1.8e-6 | 4 | 0.349914 | 3.16e-7 | 7 | 0.595264 | 4.7e-5 | 6 | 0.437588 | 1.72e-5 |
| 100,000 | x2 | 6 | 0.376712 | 4.15e-8 | 7 | 0.739255 | 3.14e-6 | 7 | 0.661026 | 2.1e-5 | 4 | 0.336146 | 2.27e-5 |
| 100,000 | x3 | 8 | 0.49952 | 2.45e-6 | 7 | 0.64952 | 3.11e-5 | 8 | 0.730984 | 4.59e-6 | 12 | 0.774873 | 8.18e-5 |
| 100,000 | x4 | 10 | 0.734979 | 2.85e-6 | 13 | 1.327325 | 5.86e-6 | 6 | 0.423914 | 2.68e-8 | 6 | 0.390528 | 4.64e-5 |
| 100,000 | x5 | 0 | 0.015278 | 0 | 0 | 0.01714 | 0 | 0 | 0.017242 | 0 | 0 | 0.011946 | 0 |
| 100,000 | x6 | 8 | 0.74008 | 2.32e-5 | 13 | 1.602305 | 1.73e-6 | 2 | 0.396171 | NaN | 7 | 0.606307 | 3.21e-5 |
| 100,000 | x7 | 4 | 0.267311 | 1.8e-6 | 4 | 0.363102 | 3.16e-7 | 7 | 0.609314 | 4.7e-5 | 6 | 0.393849 | 1.72e-5 |
| 100,000 | x8 | 0 | 0.015626 | 0 | 0 | 0.017721 | 0 | 0 | 0.017815 | 0 | 0 | 0.015042 | 0 |

Problem 4:

| DIM | GUESS | ITR | TIME | NORM | ITR | TIME | NORM | ITR | TIME | NORM | ITR | TIME | NORM |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 50,000 | x1 | 44 | 2.690291 | 8.41e-5 | 43 | 4.617571 | 9.04e-5 | 12 | 1.863923 | NaN | 62 | 4.524161 | 8.63e-5 |
| 50,000 | x2 | 45 | 2.663571 | 9.78e-5 | 44 | 4.521443 | 9.53e-5 | 5 | 3.054371 | NaN | 89 | 7.671063 | 9.14e-5 |
| 50,000 | x3 | 44 | 2.575053 | 8.38e-5 | 49 | 4.750722 | 7.64e-5 | 3 | 0.230201 | NaN | 6 | 1.01848 | NaN |
| 50,000 | x4 | 31 | 1.944946 | 6.56e-5 | 36 | 2.844192 | 9.48e-5 | 75 | 5.669829 | 6.84e-5 | 43 | 3.052905 | 8.39e-5 |
| 50,000 | x5 | 46 | 2.77079 | 7.82e-5 | 40 | 3.092822 | 7.46e-5 | 102 | 6.978015 | 8.05e-5 | 47 | 3.349462 | 8.76e-5 |
| 50,000 | x6 | 53 | 2.979479 | 5.87e-5 | 56 | 4.566533 | 9.46e-5 | 4 | 0.600135 | NaN | 1000 | 178.8212 | 46.85852 |
| 50,000 | x7 | 41 | 2.302363 | 8.68e-5 | 46 | 3.301815 | 8.57e-5 | 5 | 0.549722 | NaN | 61 | 4.607346 | 8.1e-5 |
| 50,000 | x8 | 47 | 2.698441 | 9.37e-5 | 203 | 12.16855 | 7.1e-5 | 4 | 0.327506 | NaN | 1000 | 128.2877 | 3.78474 |
| 100,000 | x1 | 41 | 4.972517 | 7.93e-5 | 49 | 5.813092 | 7.83e-5 | 8 | 2.774162 | NaN | 63 | 10.42985 | 7.91e-5 |
| 100,000 | x2 | 45 | 5.210847 | 7.98e-5 | 40 | 5.038291 | 4.01e-5 | 5 | 5.590234 | NaN | 70 | 11.08061 | 8.1e-5 |
| 100,000 | x3 | 45 | 4.763892 | 8.99e-5 | 50 | 6.62348 | 9.66e-5 | 3 | 0.353785 | NaN | 6 | 2.195849 | NaN |
| 100,000 | x4 | 32 | 3.455663 | 7.19e-5 | 37 | 4.694169 | 9.81e-5 | 50 | 6.988386 | 5.91e-5 | 40 | 6.260133 | 8.2e-5 |
| 100,000 | x5 | 48 | 5.294363 | 9.67e-5 | 44 | 5.659528 | 7.97e-5 | 123 | 14.07205 | 8.1e-5 | 47 | 6.353586 | 8.95e-5 |
| 100,000 | x6 | 49 | 5.302793 | 6.83e-5 | 54 | 6.698918 | 8.51e-5 | 4 | 1.395233 | NaN | 234 | 62.21842 | 7.96e-5 |
| 100,000 | x7 | 43 | 4.615465 | 7.44e-5 | 48 | 5.789132 | 6.57e-5 | 5 | 1.312244 | NaN | 79 | 11.94492 | 9.2e-5 |
| 100,000 | x8 | 45 | 4.827933 | 7.53e-5 | 202 | 23.19352 | 6.59e-5 | 4 | 0.682058 | NaN | 240 | 36.76278 | 7.83e-5 |
Table 3. Numerical efficiency of Algorithm 1 with the choices $\beta_k^{H*}(\omega_k^1)$ and $\beta_k^{H*}(\omega_k^2)$ versus the NDAS method [22] and the ICGM method [21]. The ITR/TIME/NORM column triples refer, in order, to $\beta_k^{H*}(\omega_k^1)$, $\beta_k^{H*}(\omega_k^2)$, NDAS, and ICGM.

Problem 5:

| DIM | GUESS | ITR | TIME | NORM | ITR | TIME | NORM | ITR | TIME | NORM | ITR | TIME | NORM |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 50,000 | x1 | 1 | 0.052829 | 8.57e-6 | 1 | 0.028501 | 8.57e-6 | 1 | 0.039928 | 8.57e-6 | 1 | 0.038071 | 8.57e-6 |
| 50,000 | x2 | 1 | 0.03692 | 9.07e-6 | 1 | 0.029346 | 9.07e-6 | 1 | 0.035869 | 9.07e-6 | 1 | 0.026299 | 9.07e-6 |
| 50,000 | x3 | 1 | 0.036227 | 9.51e-6 | 1 | 0.038833 | 9.51e-6 | 1 | 0.035661 | 9.51e-6 | 1 | 0.027196 | 9.51e-6 |
| 50,000 | x4 | 1 | 0.036327 | 9.94e-6 | 1 | 0.039655 | 9.94e-6 | 1 | 0.035812 | 9.94e-6 | 1 | 0.027108 | 9.94e-6 |
| 50,000 | x5 | 1 | 0.035525 | 9.94e-6 | 1 | 0.043298 | 9.94e-6 | 1 | 0.036387 | 9.94e-6 | 1 | 0.027746 | 9.94e-6 |
| 50,000 | x6 | 1 | 0.036985 | 1.3e-6 | 1 | 0.042555 | 1.3e-6 | 1 | 0.036066 | 1.3e-6 | 1 | 0.027322 | 1.3e-6 |
| 50,000 | x7 | 1 | 0.035547 | 8.29e-6 | 1 | 0.042826 | 8.29e-6 | 1 | 0.03719 | 8.29e-6 | 1 | 0.026749 | 8.29e-6 |
| 50,000 | x8 | 1 | 0.036086 | 4.74e-6 | 1 | 0.057876 | 4.74e-6 | 1 | 0.037694 | 4.74e-6 | 1 | 0.029947 | 4.74e-6 |
| 100,000 | x1 | 1 | 0.068163 | 3.03e-6 | 1 | 0.128598 | 3.03e-6 | 1 | 0.072784 | 3.03e-6 | 1 | 0.059565 | 3.03e-6 |
| 100,000 | x2 | 1 | 0.063426 | 3.21e-6 | 1 | 0.104958 | 3.21e-6 | 1 | 0.07371 | 3.21e-6 | 1 | 0.067507 | 3.21e-6 |
| 100,000 | x3 | 1 | 0.070829 | 3.36e-6 | 1 | 0.103859 | 3.36e-6 | 1 | 0.073861 | 3.36e-6 | 1 | 0.056567 | 3.36e-6 |
| 100,000 | x4 | 1 | 0.072002 | 3.52e-6 | 1 | 0.100046 | 3.52e-6 | 1 | 0.072974 | 3.52e-6 | 1 | 0.055419 | 3.52e-6 |
| 100,000 | x5 | 1 | 0.071199 | 3.52e-6 | 1 | 0.103975 | 3.52e-6 | 1 | 0.074358 | 3.52e-6 | 1 | 0.050225 | 3.52e-6 |
| 100,000 | x6 | 1 | 0.070771 | 4.59e-7 | 1 | 0.10815 | 4.59e-7 | 1 | 0.077524 | 4.59e-7 | 1 | 0.05661 | 4.59e-7 |
| 100,000 | x7 | 1 | 0.07475 | 2.93e-6 | 1 | 0.111332 | 2.93e-6 | 1 | 0.077837 | 2.93e-6 | 1 | 0.054052 | 2.93e-6 |
| 100,000 | x8 | 1 | 0.074773 | 1.68e-6 | 1 | 0.117776 | 1.68e-6 | 1 | 0.07581 | 1.68e-6 | 1 | 0.050772 | 1.68e-6 |

Problem 6:

| DIM | GUESS | ITR | TIME | NORM | ITR | TIME | NORM | ITR | TIME | NORM | ITR | TIME | NORM |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 50,000 | x1 | 3 | 0.062624 | 5.81e-5 | 7 | 0.270178 | 3.31e-8 | 1000 | 8.22562 | 0.056283 | 10 | 0.121959 | 6.44e-5 |
| 50,000 | x2 | 4 | 0.048661 | 9.16e-7 | 7 | 0.200956 | 2.12e-5 | 1000 | 10.06622 | 0.069926 | 11 | 0.111443 | 8.61e-5 |
| 50,000 | x3 | 5 | 0.067382 | 5.23e-9 | 8 | 0.257818 | 5.23e-8 | 1000 | 9.859871 | 0.071884 | 12 | 0.121548 | 7.02e-5 |
| 50,000 | x4 | 6 | 0.07856 | 7.36e-5 | 11 | 0.318332 | 1.49e-7 | 6 | 0.061333 | 1.18e-5 | 14 | 0.136175 | 3.66e-5 |
| 50,000 | x5 | 6 | 0.080187 | 7.36e-5 | 11 | 0.347429 | 1.49e-7 | 6 | 0.070442 | 1.17e-5 | 14 | 0.149883 | 3.66e-5 |
| 50,000 | x6 | 8 | 0.138834 | 6.23e-6 | 10 | 0.238758 | 2.98e-13 | 1000 | 11.3173 | 0.07266 | 15 | 0.146064 | 4.63e-5 |
| 50,000 | x7 | 3 | 0.053878 | 5.54e-5 | 5 | 0.137401 | 4.42e-5 | 5 | 0.051718 | 1.07e-7 | 8 | 0.080437 | 3.88e-5 |
| 50,000 | x8 | 6 | 0.09511 | 7.47e-8 | 10 | 0.24638 | 8.29e-5 | 9 | 0.100924 | 6.67e-6 | 10 | 0.10552 | 1.77e-5 |
| 100,000 | x1 | 3 | 0.108583 | 7.98e-5 | 7 | 0.470761 | 2.9e-8 | 1000 | 19.14354 | 0.079596 | 10 | 0.22311 | 9.1e-5 |
| 100,000 | x2 | 4 | 0.133273 | 1.06e-6 | 7 | 0.455045 | 3.77e-5 | 1000 | 18.09004 | 0.098893 | 12 | 0.229377 | 3.65e-5 |
| 100,000 | x3 | 5 | 0.160068 | 3.72e-9 | 8 | 0.513187 | 4.32e-8 | 1000 | 17.21491 | 0.101663 | 12 | 0.242816 | 9.92e-5 |
| 100,000 | x4 | 7 | 0.22096 | 1.96e-9 | 11 | 0.568055 | 3.09e-7 | 6 | 0.113298 | 1.67e-5 | 14 | 0.271454 | 5.17e-5 |
| 100,000 | x5 | 7 | 0.250454 | 1.96e-9 | 11 | 0.568372 | 3.09e-7 | 6 | 0.148285 | 1.67e-5 | 14 | 0.284436 | 5.17e-5 |
| 100,000 | x6 | 8 | 0.329174 | 1.01e-5 | 10 | 0.540962 | 2.59e-5 | 1000 | 19.08673 | 0.102761 | 15 | 0.298349 | 6.56e-5 |
| 100,000 | x7 | 3 | 0.127818 | 7.87e-5 | 7 | 0.415979 | 2.38e-9 | 5 | 0.096884 | 1.43e-7 | 8 | 0.169584 | 5.47e-5 |
| 100,000 | x8 | 6 | 0.227238 | 1.08e-7 | 11 | 0.579637 | 1.2e-8 | 9 | 0.215587 | 9.58e-6 | 10 | 0.195059 | 2.5e-5 |
Table 4. Numerical efficiency of Algorithm 1 with the choices $\beta_k^{H*}(\omega_k^1)$ and $\beta_k^{H*}(\omega_k^2)$ versus the NDAS method [22] and the ICGM method [21]. The ITR/TIME/NORM column triples refer, in order, to $\beta_k^{H*}(\omega_k^1)$, $\beta_k^{H*}(\omega_k^2)$, NDAS, and ICGM.

Problem 7:

| DIM | GUESS | ITR | TIME | NORM | ITR | TIME | NORM | ITR | TIME | NORM | ITR | TIME | NORM |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 50,000 | x1 | 3 | 0.075366 | 8.32e-7 | 3 | 0.08244 | 8.32e-7 | 10 | 0.142476 | 4.43e-5 | 9 | 0.08795 | 7.07e-5 |
| 50,000 | x2 | 5 | 0.092936 | 7.6e-16 | 6 | 0.142385 | 4.95e-17 | 12 | 0.175039 | 4.6e-5 | 11 | 0.101039 | 5.09e-5 |
| 50,000 | x3 | 5 | 0.090729 | 4.11e-11 | 6 | 0.155189 | 2.87e-12 | 13 | 0.212816 | 5.58e-5 | 12 | 0.135433 | 5.15e-5 |
| 50,000 | x4 | 10 | 0.184006 | 7.24e-10 | 10 | 0.231393 | 8.25e-13 | 19 | 0.321698 | 6.65e-5 | 20 | 0.243073 | 8.12e-5 |
| 50,000 | x5 | 10 | 0.178508 | 7.24e-10 | 10 | 0.236107 | 8.25e-13 | 19 | 0.316433 | 6.65e-5 | 20 | 0.211838 | 8.05e-5 |
| 50,000 | x6 | 7 | 0.133278 | 8.2e-7 | 9 | 0.208276 | 1.58e-10 | 17 | 0.304249 | 8.33e-5 | 14 | 0.148348 | 4.43e-5 |
| 50,000 | x7 | 3 | 0.060296 | 7.62e-5 | 4 | 0.108939 | 1.41e-6 | 13 | 0.230127 | 6.01e-5 | 12 | 0.14197 | 6.93e-5 |
| 50,000 | x8 | 9 | 0.163411 | 4.97e-5 | 19 | 0.4887 | 9.6e-5 | 20 | 0.336258 | 5.31e-5 | 18 | 0.249333 | 6.81e-5 |
| 100,000 | x1 | 3 | 0.115519 | 1.18e-6 | 3 | 0.15752 | 1.18e-6 | 10 | 0.365394 | 6.26e-5 | 9 | 0.216016 | 9.99e-5 |
| 100,000 | x2 | 5 | 0.186664 | 1.16e-15 | 6 | 0.314727 | 1.03e-16 | 12 | 0.421284 | 6.5e-5 | 11 | 0.245532 | 7.2e-5 |
| 100,000 | x3 | 5 | 0.183213 | 5.81e-11 | 6 | 0.276542 | 4.06e-12 | 13 | 0.449368 | 7.89e-5 | 12 | 0.241829 | 7.28e-5 |
| 100,000 | x4 | 10 | 0.363477 | 1.02e-9 | 10 | 0.48296 | 1.17e-12 | 19 | 0.640634 | 9.41e-5 | 21 | 0.535285 | 5.45e-5 |
| 100,000 | x5 | 10 | 0.360938 | 1.02e-9 | 10 | 0.450866 | 1.17e-12 | 19 | 0.649932 | 9.41e-5 | 21 | 0.519055 | 5.44e-5 |
| 100,000 | x6 | 7 | 0.264049 | 1.16e-6 | 9 | 0.455213 | 6.79e-9 | 18 | 0.634508 | 4.24e-5 | 14 | 0.31158 | 6.27e-5 |
| 100,000 | x7 | 4 | 0.147267 | 4.94e-6 | 4 | 0.185584 | 2.66e-7 | 13 | 0.456271 | 8.49e-5 | 12 | 0.275253 | 9.8e-5 |
| 100,000 | x8 | 9 | 0.319735 | 7.03e-5 | 15 | 0.697124 | 8.88e-5 | 20 | 0.677796 | 7.71e-5 | 18 | 0.477871 | 9.63e-5 |

Problem 8:

| DIM | GUESS | ITR | TIME | NORM | ITR | TIME | NORM | ITR | TIME | NORM | ITR | TIME | NORM |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 50,000 | x1 | 1 | 0.033714 | 8.57e-6 | 1 | 0.055597 | 8.57e-6 | 1 | 0.029796 | 8.57e-6 | 1 | 0.028779 | 8.57e-6 |
| 50,000 | x2 | 1 | 0.028881 | 9.07e-6 | 1 | 0.048449 | 9.07e-6 | 1 | 0.028738 | 9.07e-6 | 1 | 0.027902 | 9.07e-6 |
| 50,000 | x3 | 1 | 0.03261 | 9.51e-6 | 1 | 0.047632 | 9.51e-6 | 1 | 0.02972 | 9.51e-6 | 1 | 0.024409 | 9.51e-6 |
| 50,000 | x4 | 1 | 0.035589 | 9.94e-6 | 1 | 0.045574 | 9.94e-6 | 1 | 0.032915 | 9.94e-6 | 1 | 0.029429 | 9.94e-6 |
| 50,000 | x5 | 1 | 0.035606 | 9.94e-6 | 1 | 0.046116 | 9.94e-6 | 1 | 0.035012 | 9.94e-6 | 1 | 0.031084 | 9.94e-6 |
| 50,000 | x6 | 1 | 0.03751 | 1.3e-6 | 1 | 0.046566 | 1.3e-6 | 1 | 0.034991 | 1.3e-6 | 1 | 0.028503 | 1.3e-6 |
| 50,000 | x7 | 1 | 0.037605 | 8.29e-6 | 1 | 0.045682 | 8.29e-6 | 1 | 0.035925 | 8.29e-6 | 1 | 0.027888 | 8.29e-6 |
| 50,000 | x8 | 1 | 0.03789 | 4.74e-6 | 1 | 0.050778 | 4.74e-6 | 1 | 0.036782 | 4.74e-6 | 1 | 0.02298 | 4.74e-6 |
| 100,000 | x1 | 1 | 0.076523 | 3.03e-6 | 1 | 0.110874 | 3.03e-6 | 1 | 0.076747 | 3.03e-6 | 1 | 0.04965 | 3.03e-6 |
| 100,000 | x2 | 1 | 0.075618 | 3.21e-6 | 1 | 0.098932 | 3.21e-6 | 1 | 0.071353 | 3.21e-6 | 1 | 0.046196 | 3.21e-6 |
| 100,000 | x3 | 1 | 0.073683 | 3.36e-6 | 1 | 0.089028 | 3.36e-6 | 1 | 0.071895 | 3.36e-6 | 1 | 0.056912 | 3.36e-6 |
| 100,000 | x4 | 1 | 0.075373 | 3.52e-6 | 1 | 0.097382 | 3.52e-6 | 1 | 0.07301 | 3.52e-6 | 1 | 0.063382 | 3.52e-6 |
| 100,000 | x5 | 1 | 0.075046 | 3.52e-6 | 1 | 0.107567 | 3.52e-6 | 1 | 0.072606 | 3.52e-6 | 1 | 0.055507 | 3.52e-6 |
| 100,000 | x6 | 1 | 0.074018 | 4.59e-7 | 1 | 0.092188 | 4.59e-7 | 1 | 0.072067 | 4.59e-7 | 1 | 0.057577 | 4.59e-7 |
| 100,000 | x7 | 1 | 0.076113 | 2.93e-6 | 1 | 0.097692 | 2.93e-6 | 1 | 0.071651 | 2.93e-6 | 1 | 0.055431 | 2.93e-6 |
| 100,000 | x8 | 1 | 0.076265 | 1.68e-6 | 1 | 0.108356 | 1.68e-6 | 1 | 0.072278 | 1.68e-6 | 1 | 0.049841 | 1.68e-6 |
Table 5. Numerical efficiency of Algorithm 1 with the choices $\beta_k^{H*}(\omega_k^1)$ and $\beta_k^{H*}(\omega_k^2)$ versus the NDAS method [22] and the ICGM method [21]. The ITR/TIME/NORM column triples refer, in order, to $\beta_k^{H*}(\omega_k^1)$, $\beta_k^{H*}(\omega_k^2)$, NDAS, and ICGM.

Problem 9:

| DIM | GUESS | ITR | TIME | NORM | ITR | TIME | NORM | ITR | TIME | NORM | ITR | TIME | NORM |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 50,000 | x1 | 5 | 0.066557 | 7.42e-5 | 7 | 0.189046 | 2.75e-5 | 7 | 0.119231 | 8.69e-5 | 11 | 0.126884 | 2.96e-5 |
| 50,000 | x2 | 5 | 0.06624 | 5.83e-5 | 9 | 0.224013 | 8.41e-5 | 8 | 0.151618 | 1.25e-5 | 11 | 0.151836 | 2.69e-5 |
| 50,000 | x3 | 5 | 0.083632 | 1.92e-5 | 7 | 0.177367 | 1.33e-5 | 7 | 0.138794 | 8.75e-5 | 10 | 0.128026 | 8.97e-5 |
| 50,000 | x4 | 9 | 0.145705 | 3.32e-5 | 11 | 0.314592 | 5.97e-5 | 14 | 0.26809 | 1.99e-5 | 18 | 0.21681 | 4.32e-5 |
| 50,000 | x5 | 9 | 0.166716 | 2.08e-5 | 11 | 0.302915 | 3.68e-5 | 12 | 0.223362 | 4.23e-5 | 18 | 0.198065 | 4.32e-5 |
| 50,000 | x6 | 6 | 0.119915 | 8.96e-5 | 9 | 0.261753 | 8.82e-5 | 10 | 0.186299 | 7.51e-5 | 12 | 0.137626 | 4.19e-5 |
| 50,000 | x7 | 9 | 0.168087 | 7.22e-5 | 11 | 0.255316 | 7.07e-5 | 9 | 0.177588 | 7.87e-5 | 18 | 0.193596 | 4.86e-5 |
| 50,000 | x8 | 42 | 0.873011 | 8.91e-6 | 11 | 0.257521 | 8.54e-5 | 17 | 0.312101 | 4.43e-5 | 24 | 0.262022 | 5.32e-5 |
| 100,000 | x1 | 6 | 0.240266 | 5.41e-6 | 8 | 0.39591 | 7.43e-5 | 8 | 0.323746 | 1.5e-5 | 11 | 0.247176 | 4.19e-5 |
| 100,000 | x2 | 5 | 0.211107 | 8.24e-5 | 8 | 0.416573 | 2.59e-5 | 8 | 0.326042 | 1.76e-5 | 11 | 0.27141 | 3.81e-5 |
| 100,000 | x3 | 5 | 0.201544 | 2.72e-5 | 8 | 0.397299 | 4.02e-5 | 8 | 0.321037 | 1.51e-5 | 11 | 0.244046 | 2.88e-5 |
| 100,000 | x4 | 9 | 0.355143 | 4.39e-5 | 12 | 0.587486 | 4.35e-5 | 13 | 0.48467 | 9.19e-5 | 18 | 0.393596 | 6.11e-5 |
| 100,000 | x5 | 9 | 0.362649 | 3.53e-5 | 11 | 0.542996 | 5.21e-5 | 13 | 0.486746 | 7.21e-5 | 18 | 0.383299 | 6.11e-5 |
| 100,000 | x6 | 7 | 0.278369 | 6.54e-6 | 10 | 0.500734 | 6.47e-6 | 11 | 0.419632 | 2.05e-5 | 12 | 0.2268 | 5.93e-5 |
| 100,000 | x7 | 10 | 0.37336 | 5.27e-6 | 12 | 0.545659 | 9.29e-5 | 10 | 0.394767 | 1.36e-5 | 18 | 0.382106 | 6.87e-5 |
| 100,000 | x8 | 48 | 1.895847 | 6.76e-5 | 12 | 0.578936 | 5.08e-5 | 17 | 0.612865 | 6.27e-5 | 24 | 0.51844 | 7.52e-5 |