Article

Scaled Three-Term Conjugate Gradient Methods for Solving Monotone Equations with Application

1 Department of Mathematics, Faculty of Science, Yusuf Maitama Sule University, Kano P.M.B. 3099, Nigeria
2 Numerical Optimization Research Group, Bayero University, Gwarzo Road, Kano P.M.B. 3011, Nigeria
3 Department of Mathematics and Applied Mathematics, Sefako Makgatho Health Sciences University, Ga-Rankuwa, Pretoria, Medunsa 0204, South Africa
4 Department of Mathematics, Usmanu Danfodiyo University, Sokoto P.M.B. 2346, Nigeria
5 Mathematics Department, College of Science, Taif University, P.O. Box 11099, Taif 21944, Saudi Arabia
6 Department of Mathematics, COMSATS University, Islamabad Park Road, Islamabad 45550, Pakistan
* Author to whom correspondence should be addressed.
Symmetry 2022, 14(5), 936; https://doi.org/10.3390/sym14050936
Submission received: 19 March 2022 / Revised: 9 April 2022 / Accepted: 14 April 2022 / Published: 5 May 2022
(This article belongs to the Special Issue Symmetry in Functional Equations and Analytic Inequalities III)

Abstract

In this paper, we derive a modified conjugate gradient (CG) parameter by adopting the Birgin and Martínez strategy, using the descent three-term CG direction together with the Newton direction. The proposed CG parameter is then used to build a robust algorithm for solving constrained monotone equations, with an application to image restoration problems. The global convergence of this algorithm is established under suitable assumptions. Finally, a numerical comparison with some existing algorithms shows that the proposed algorithm is a robust approach for solving large-scale systems of monotone equations. Additionally, the proposed CG parameter can be used to solve symmetric systems of nonlinear equations as well as other relevant classes of nonlinear equations.

1. Introduction

Consider the system
$$ F(x) = 0, \quad x \in \Omega, \tag{1} $$
where the function F : Ω → R^n is monotone and continuous. The monotonicity of F implies that
$$ \left( F(x) - F(z) \right)^T (x - z) \ge 0, \quad \forall\, x, z \in \Omega, \tag{2} $$
where Ω is a closed convex subset of R^n. The system (1) arises in areas such as Bregman distances [1], financial forecasting problems [2], compressive sensing [3], and monotone variational inequalities [4]. Additionally, whenever the Jacobian of F is symmetric, the system (1) is referred to as a symmetric system of nonlinear equations [5,6]. Such systems typically originate from the gradient mapping of unconstrained optimization problems and other relevant applications; more information on symmetric systems of equations can be found in [7,8] and the references therein. There are many methods for solving systems of nonlinear equations in the literature. The most prominent ones include the Newton method [9] and the quasi-Newton methods [10,11]. These methods require the computation and storage of the Jacobian matrix, or an approximation of it, at every iteration; as such, they are not efficient when dealing with large-scale problems and nonsmooth systems. This led to the development of matrix-free and derivative-free methods for solving systems of nonlinear equations; see [12,13,14,15,16,17,18,19] and the references therein.
Moreover, among the most robust matrix-free methods for solving the system (1) is the conjugate gradient (CG) method, which has the following iterative procedure:
$$ x_{k+1} = x_k + \alpha_k d_k, \tag{3} $$
where x_0 is the initial guess, α_k is a step length obtained using a line search procedure, and d_k is the CG search direction defined by
$$ d_k = \begin{cases} -F(x_k), & k = 0, \\ -F(x_k) + \beta_k d_{k-1}, & k \ge 1, \end{cases} \tag{4} $$
where β_k is the CG parameter that distinguishes one CG method from another. Some recent CG methods for solving monotone equations with convex constraints are the efficient three-term conjugate gradient method [20], the iterative derivative-free method [21], the spectral gradient projection method [22], the hybrid gradient method [23], the Rivaie, Mustafa, Ismail, and Leong (RMIL) CG-type method [24], the modified double-direction approach [25], the inertial two-hybrid spectral method [26], and the spectral projection CG method [27]. Each of these methods uses the projection technique and addresses large-scale systems of monotone equations efficiently, with some applications. Some recent papers on methods for solving convex constrained monotone systems can also be found in [28,29,30,31] and the references therein. Inspired by the success of these algorithms and the descent three-term conjugate gradient direction [32], we propose a modified CG parameter by adopting the Birgin and Martínez [33] strategy; we then use the proposed parameter to build a robust scaled three-term CG algorithm for solving large-scale monotone equations with convex constraints, with an application to image recovery.
The remainder of the paper is organized as follows: the next section presents the derivation of the modified CG parameter and the proposed scaled three-term CG algorithm. Section 3 contains the global convergence analysis of the proposed algorithm. Section 4 provides an experimental comparison of the proposed algorithm, with applications to systems of monotone equations with convex constraints and to image restoration problems. We close with some concluding remarks and future directions.

2. The Scaled Three-Term CG Method

The Birgin and Martínez [33] scaled two-term conjugate gradient (CG) method generates a sequence {x_k} of approximate solutions of (1), in which
$$ x_{k+1} = x_k + \alpha_k d_k, \tag{5} $$
$$ d_{k+1} = -\gamma_{k+1} F_{k+1} + \beta_k s_k, \tag{6} $$
where α_k satisfies some line search technique, F_k = F(x_k), β_k is a CG parameter, s_k = x_{k+1} − x_k, and γ_{k+1} is a scalar parameter (or a matrix) to be determined. The iterative process is initialized with an initial point x_0 and d_0 = −F_0. We now propose a scaled three-term CG direction for solving monotone equations, based on the sufficient-descent three-term direction of [32], in which
$$ d_{k+1} = -\gamma_{k+1} F_{k+1} + \beta_k s_k - \beta_k \frac{F_{k+1}^T s_k}{F_{k+1}^T p_k}\, p_k, \tag{7} $$
where p_k is any vector in R^n with F_{k+1}^T p_k ≠ 0. Observe that if γ_{k+1} = 1, we recover the default sufficient-descent three-term CG direction, whose behavior depends on the choice of the scalar β_k. On the other hand, if β_k = 0, we obtain another class of algorithms depending on γ_{k+1} (usually a scalar or a positive definite matrix). If γ_{k+1} is a positive scalar and β_k ≠ 0, then we have a scaled three-term conjugate gradient method.
Moreover, we consider the following procedure to determine the parameter β_k. The Newton method for solving systems of nonlinear equations generates the direction d_{k+1} = −J_{k+1}^{-1} F_{k+1} at every iteration, where J_{k+1} is the Jacobian of F at x_{k+1}. Thus, from the equality relation
$$ -J_{k+1}^{-1} F_{k+1} = -\gamma_{k+1} F_{k+1} + \beta_k s_k - \beta_k \frac{F_{k+1}^T s_k}{F_{k+1}^T p_k}\, p_k, \tag{8} $$
premultiplying both sides by s_k^T J_k (approximating J_{k+1} by J_k) and solving for β_k, we obtain
$$ \beta_k = \frac{\left( \gamma_{k+1}\, s_k^T J_k F_{k+1} - s_k^T F_{k+1} \right) F_{k+1}^T p_k}{s_k^T J_k s_k\, F_{k+1}^T p_k - s_k^T J_k p_k\, F_{k+1}^T s_k}. \tag{9} $$
Using a Taylor series expansion (which gives s_k^T J_k ≈ y_k^T) and adopting the Birgin and Martínez [33] strategy, we arrive at
$$ \beta_k = \frac{\left( \gamma_{k+1} y_k - s_k \right)^T F_{k+1}}{y_k^T s_k\, F_{k+1}^T p_k - y_k^T p_k\, F_{k+1}^T s_k}\, F_{k+1}^T p_k. \tag{10} $$
However, the denominator of (10) may be undefined whenever y_k^T s_k F_{k+1}^T p_k is equal to y_k^T p_k F_{k+1}^T s_k. Hence, we replace it with the positive quantity y_k^T s_k to obtain
$$ \beta_k = \frac{\left( \gamma_{k+1} y_k - s_k \right)^T F_{k+1}}{y_k^T s_k}\, F_{k+1}^T p_k, \tag{11} $$
where y_k = F_{k+1} − F_k + σ s_k with σ ∈ (0, 1), and s_k = x_{k+1} − x_k.
Furthermore, inspired by the Birgin and Martínez algorithm [33], we take the spectral parameter γ_{k+1} as
$$ \gamma_{k+1} = \frac{s_k^T s_k}{y_k^T s_k}. \tag{12} $$
The parameter γ_{k+1} is the inverse of a Rayleigh quotient and is always well-defined and positive, since the monotonicity of F together with the definition of y_k guarantees that y_k^T s_k ≥ σ s_k^T s_k > 0.
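As a small illustration of how (11) and (12) fit together, the helper below computes γ_{k+1} and β_k from one pair of iterates. This is a minimal sketch in Python/NumPy (the paper's own experiments are in MATLAB); the function name and arguments are illustrative, and p stands for the free vector p_k of (7).

```python
import numpy as np

def stcg_parameters(F_new, F_old, x_new, x_old, p, sigma=0.1):
    """Return (gamma_{k+1}, beta_k) from (12) and (11); assumes x_new != x_old."""
    s = x_new - x_old                    # s_k = x_{k+1} - x_k
    y = F_new - F_old + sigma * s        # y_k, so y_k^T s_k >= sigma ||s_k||^2 > 0
    ys = y @ s
    gamma = (s @ s) / ys                 # inverse Rayleigh quotient (12)
    beta = ((gamma * y - s) @ F_new) / ys * (F_new @ p)   # (11)
    return gamma, beta
```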
The projection operator P_Ω : R^n → Ω is defined by
$$ P_{\Omega}[x] = \arg\min \left\{ \| x - z \| : z \in \Omega \right\}, \quad x \in \mathbb{R}^n. \tag{13} $$
This operator satisfies the non-expansive property
$$ \| P_{\Omega}[x] - P_{\Omega}[z] \| \le \| x - z \|, \quad \forall\, x, z \in \mathbb{R}^n. \tag{14} $$
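For the feasible sets used in Section 4, the projection (13) has a simple closed form. The two helpers below are illustrative sketches (the box variant is an extra assumption, not a set used in the paper); both satisfy the non-expansive property (14).

```python
import numpy as np

def project_nonneg(x):
    """P_Omega for Omega = R^n_+ : componentwise max(x, 0)."""
    return np.maximum(x, 0.0)

def project_box(x, lo, hi):
    """P_Omega for a box Omega = [lo, hi]^n : componentwise clipping."""
    return np.clip(x, lo, hi)
```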
Using simple algebra, it is not difficult to see that F_k^T d_k ≤ −‖F_k‖² for all k ≥ 0. The Lipschitz continuity assumption will be used in the following section to establish the global convergence of Algorithm 1.
Algorithm 1 Scaled Three-term CG method (STCG)
Step 0 Initialize: ϵ ≥ 0, ζ > 0, λ ∈ (0, 1), τ > 0, σ ∈ (0, 1), and x_0 ∈ R^n. Set k = 0 and d_0 = −F_0.
Step 1 If ‖F_k‖ ≤ ϵ, stop; else go to Step 2.
Step 2 Compute the search direction d_k using (7) and (11).
Step 3 Set m_k = x_k + α_k d_k, where the step length α_k = max{ζλ^i : i = 0, 1, 2, …} satisfies
$$ -F(m_k)^T d_k \ge \tau\, \alpha_k\, \| F(m_k) \|\, \| d_k \|^2. \tag{15} $$
Step 4 If m_k ∈ Ω and ‖F(m_k)‖ = 0, stop; otherwise, compute
$$ x_{k+1} = P_{\Omega}\left[ x_k - q_k F(m_k) \right], \tag{16} $$
where
$$ q_k = \frac{F(m_k)^T (x_k - m_k)}{\| F(m_k) \|^2}. \tag{17} $$
Step 5 Set k = k + 1 and then go to Step 1.
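For concreteness, a minimal Python/NumPy sketch of Algorithm 1 follows. The paper's experiments were coded in MATLAB, so this port is purely illustrative: the function F, the projection operator project, and the caps max_iter and max_backtracks are assumptions of the sketch, while the default parameter values are those reported in Section 4. The direction for the next iteration is computed at the end of each pass, which is equivalent to performing Step 2 at the start of the following one.

```python
import numpy as np

def stcg(F, project, x0, eps=1e-8, zeta=1.0, lam=0.9, tau=1e-4, sigma=0.1,
         max_iter=2000, max_backtracks=60):
    """Scaled three-term CG method (STCG) sketch for F(x) = 0, x in Omega."""
    x = np.asarray(x0, dtype=float)
    Fx = F(x)
    d = -Fx                                         # Step 0: d_0 = -F_0
    for _ in range(max_iter):
        if np.linalg.norm(Fx) <= eps:               # Step 1: stopping test
            return x
        # Step 3: backtracking line search, alpha_k = max{zeta * lam^i} with (15)
        for i in range(max_backtracks):
            alpha = zeta * lam ** i
            m = x + alpha * d
            Fm = F(m)
            if -(Fm @ d) >= tau * alpha * np.linalg.norm(Fm) * np.linalg.norm(d) ** 2:
                break
        if np.linalg.norm(Fm) <= eps:               # Step 4: m_k solves the system
            return m
        q = (Fm @ (x - m)) / (Fm @ Fm)              # (17)
        x_new = project(x - q * Fm)                 # (16)
        F_new = F(x_new)
        if np.linalg.norm(F_new) <= eps:
            return x_new
        # Step 2 (direction for the next iteration): (7) with (11), (12),
        # and the choice p_k = F_{k+1} used in Lemma 3; assumes s != 0.
        s = x_new - x
        y = F_new - Fx + sigma * s                  # y_k
        ys = y @ s                                  # >= sigma ||s||^2 by monotonicity
        gamma = (s @ s) / ys                        # spectral parameter (12)
        beta = ((gamma * y - s) @ F_new) / ys * (F_new @ F_new)   # (11)
        d = (-gamma * F_new + beta * s
             - beta * (F_new @ s) / (F_new @ F_new) * F_new)      # (7)
        x, Fx = x_new, F_new
    return x
```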

3. Analysis of the Global Convergence

This section provides the global convergence analysis of the STCG algorithm under the assumption that the function F is Lipschitz continuous; i.e., there exists c > 0 such that
$$ \| F(x) - F(z) \| \le c\, \| x - z \|, \quad \forall\, x, z \in \mathbb{R}^n. \tag{18} $$
Lemma 1.
The line search procedure (15) is well-defined.
Proof. 
Suppose that there is an index k_0 ≥ 0 such that (15) does not hold for any non-negative integer i; then
$$ -F(x_{k_0} + \zeta \lambda^i d_{k_0})^T d_{k_0} < \tau\, \zeta \lambda^i\, \| F(x_{k_0} + \zeta \lambda^i d_{k_0}) \|\, \| d_{k_0} \|^2. \tag{19} $$
Letting i → ∞ in (19) and using the continuity of F, we have
$$ -F(x_{k_0})^T d_{k_0} \le 0. \tag{20} $$
Additionally, from the sufficient descent property of the STCG algorithm, we have
$$ -F(x_{k_0})^T d_{k_0} \ge \| F(x_{k_0}) \|^2 > 0. \tag{21} $$
Clearly (20) and (21) yield a contradiction. Therefore, the line search procedure (15) is well-defined. □
Lemma 2
([34]). If F is monotone and Lipschitz continuous, then
$$ \lim_{k \to \infty} \| x_k - m_k \| = \lim_{k \to \infty} \alpha_k \| d_k \| = 0, \tag{22} $$
$$ \lim_{k \to \infty} \| x_{k+1} - x_k \| = 0, \tag{23} $$
$$ \| x_k - \bar{x} \| \le \| x_0 - \bar{x} \|, \quad \forall\, k \ge 0, \tag{24} $$
where x̄ is an arbitrary solution of the system (1).
The proof is omitted because it is similar to the one given in [34].
Lemma 3.
The STCG direction generated using (7) and (11) is bounded.
Proof. 
Assume that x̄ is a solution of the system (1); then
$$ \| F(x_k) \| = \| F(x_k) - F(\bar{x}) \| \le c\, \| x_k - \bar{x} \| \le c\, \| x_0 - \bar{x} \| = \kappa. \tag{25} $$
Again, from the monotonicity and Lipschitz continuity of F,
$$ y_k^T s_k \ge \sigma \| s_k \|^2, \qquad \| y_k \| \le (c + \sigma) \| s_k \|; \tag{26} $$
therefore,
$$ \gamma_{k+1} \le \frac{1}{\sigma}. \tag{27} $$
Choosing p_k = F_{k+1}, we get from (11), (26), and (27) that
$$ |\beta_k| \le \frac{\gamma_{k+1} \| y_k \| + \| s_k \|}{\sigma \| s_k \|^2}\, \| F_{k+1} \|^3 \le \frac{\frac{1}{\sigma}(c + \sigma) + 1}{\sigma \| s_k \|}\, \| F_{k+1} \|^3. \tag{28} $$
Hence, from the above and (7), we get
$$ \| d_{k+1} \| \le \frac{1}{\sigma} \| F_{k+1} \| + 2\, \frac{\frac{1}{\sigma}(c + \sigma) + 1}{\sigma \| s_k \|}\, \| F_{k+1} \|^3 \| s_k \| \le \frac{1}{\sigma} \kappa + \frac{2 \left( \frac{1}{\sigma}(c + \sigma) + 1 \right)}{\sigma}\, \kappa^3 = p. \tag{29} $$
This shows the boundedness of the proposed direction. □
Theorem 1.
If the sequence {x_k} is generated by the STCG algorithm, then
$$ \liminf_{k \to \infty} \| F_k \| = 0. \tag{30} $$
Proof. 
Suppose, on the contrary, that there exists a constant a > 0 such that
$$ \| F_k \| > a, \quad \forall\, k \ge 0. \tag{31} $$
From the sufficient descent property and the Cauchy–Schwarz inequality,
$$ \| F(x_k) \|^2 \le -F(x_k)^T d_k \le \| F(x_k) \|\, \| d_k \|, \quad \forall\, k \ge 0. \tag{32} $$
Thus,
$$ \| d_k \| > a > 0. \tag{33} $$
On the other hand, ‖d_k‖ ≤ p for all k ≥ 0 by Lemma 3. It is then clear that, for α_k ≠ ζ, the trial step λ^{-1}α_k does not satisfy (15); i.e.,
$$ -F(x_k + \lambda^{-1} \alpha_k d_k)^T d_k < \tau\, \lambda^{-1} \alpha_k\, \| F(x_k + \lambda^{-1} \alpha_k d_k) \|\, \| d_k \|^2. \tag{34} $$
Also, by the sufficient descent property and letting m̄_k = x_k + λ^{-1}α_k d_k, we have
$$ \| F_k \|^2 \le -F_k^T d_k = \left( F(\bar{m}_k) - F_k \right)^T d_k - F(\bar{m}_k)^T d_k \le c\, \lambda^{-1} \alpha_k \| d_k \|^2 + \tau\, \lambda^{-1} \alpha_k\, \| F(\bar{m}_k) \|\, \| d_k \|^2, \tag{35} $$
where the last inequality follows from (34), the Lipschitz continuity of F, and the Cauchy–Schwarz inequality. Hence, from (35) we get
$$ \alpha_k \| d_k \| \ge \frac{\lambda\, \| F_k \|^2}{\left( c + \tau \| F(\bar{m}_k) \| \right) \| d_k \|^2}\, \| d_k \|. \tag{36} $$
Using the boundedness of {x_k} and {α_k d_k}, the sequence {x_k + λ^{-1}α_k d_k} is also bounded. Hence, by the continuity of F, there is a constant ω > 0 such that ‖F(m̄_k)‖ ≤ ω. Thus,
$$ \alpha_k \| d_k \| \ge \frac{\lambda\, a^2}{(c + \tau \omega)\, p} > 0, \tag{37} $$
which clearly contradicts (22). Hence, (30) is true. □

4. Numerical Experiment and Applications

This section presents numerical experiments with the STCG algorithm on systems of monotone nonlinear equations with convex constraints, as well as its application to image restoration problems.

4.1. Application to the Monotone Nonlinear Equations

This subsection provides a numerical comparison of the STCG algorithm with some relevant algorithms for solving monotone nonlinear equations with convex constraints. We compared the proposed algorithm with the projection method for convex constrained monotone nonlinear equations with applications (PCG) [34], the derivative-free spectral projection algorithm (DFSP) [27], and the modified spectral gradient projection (MSGP) algorithm [22]. For the proposed algorithm, we set the initial parameters σ = 0.1, λ = 0.9, ζ = 1, τ = 0.0001, and ϵ = 10^{-8}. The remaining algorithms are implemented with the same parameters as in their respective papers, except for the stopping criterion. The following test problems are considered:
Problem 1.
This problem is from reference [35], given by
$$ F_1(x) = e^{x_1} - 1, \qquad F_i(x) = e^{x_i} + x_{i-1} - 1, \quad i = 2, 3, \ldots, n, $$
and Ω = R_+^n.
Problem 2.
This problem is from reference [35], given by
$$ F_i(x) = \log(x_i + 1) - \frac{x_i}{n}, \quad i = 1, 2, \ldots, n, $$
and Ω = { x ∈ R^n : ∑_{i=1}^{n} x_i ≤ n, x_i > −1, i = 1, 2, …, n }.
Problem 3.
This problem is from reference [36], given by
$$ F_1(x) = \cos(x_1) - 9 + 3x_1 + 8\exp(x_2), \qquad F_i(x) = \cos(x_i) - 9 + 3x_i + 8\exp(x_{i-1}), \quad i = 2, 3, \ldots, n, $$
and Ω = R_+^n.
Problem 4.
This problem is from reference [35], given by
$$ F_i(x) = \min\left( \min(|x_i|, x_i^2),\ \max(|x_i|, x_i^3) \right), \quad i = 1, 2, \ldots, n, $$
and Ω = R_+^n.
Problem 5.
This problem is from reference [35], given by
$$ F_i(x) = e^{x_i} - 1, \quad i = 1, 2, \ldots, n, $$
and Ω = R_+^n.
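As an illustration of the setup above, the STCG sketch given after Algorithm 1 can be driven on Problem 5 from the first initial point; this hypothetical driver assumes the stcg and projection helpers sketched in Section 2.

```python
import numpy as np

F = lambda x: np.exp(x) - 1.0             # Problem 5: F_i(x) = e^{x_i} - 1
project = lambda x: np.maximum(x, 0.0)    # P_Omega for Omega = R^n_+

x = stcg(F, project, x0=np.ones(1000))    # x_1 = (1, 1, ..., 1), n = 1000
print(np.linalg.norm(F(x)))               # expected to be <= 1e-8 on success
```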
In Table 1, Table 2, Table 3, Table 4 and Table 5, ITER stands for the number of iterations, TIME represents the computing time, and FVAL and NORM denote the number of function evaluations and the norm of the function value at the termination point, respectively. We considered the initial points x_1 = (1, 1, …, 1), x_2 = (1, 2^{-2}, 2^{-3}, …, 2^{-n}), x_3 = (0.01, 0.01, …, 0.01), x_4 = (1/n, 2/n, …, 1), x_5 = (1 − 1/n, 1 − 2/n, …, 0), x_6 = (−1, −1, …, −1), x_7 = ((n−1)/n, (n−2)/n, …, 1/n), and x_8 = (1/2, 1/2^2, 1/2^3, …, 1/2^n).
Moreover, the numerical comparison on Problem 1 indicates that the PCG algorithm failed on one initial point despite a large number of iterations. The STCG algorithm recorded fewer iterations than the PCG, DFSP, and MSGP algorithms. In terms of the number of iterations, the STCG method won over 90% of the time on Problems 3 and 5, whereas for the remaining problems it required more than 50% fewer iterations. When comparing the number of function evaluations and the computing time, the STCG algorithm is likewise highly efficient on Problems 1, 3, and 5, and on average it is competitive with the remaining algorithms. In general, the STCG algorithm is the most efficient across all problems, followed by the MSGP algorithm and then the other two algorithms.
Furthermore, we summarized the numerical comparison by visualizing performance profiles for the number of iterations, the computing time, and the number of function evaluations using the well-known Dolan and Moré technique [37]. According to Figure 1, Figure 2 and Figure 3, the STCG method is best in terms of the number of iterations and is competitive with both the DFSP and MSGP algorithms for the smallest number of function evaluations and computing time.

4.2. Application in Signal Recovery

The basis pursuit denoising problem that arises in compressive sensing is described by
$$ \min_{x}\ s \| x \|_1 + \frac{1}{2} \| Qx - r \|_2^2, \tag{38} $$
where r ∈ R^k is an observation, x ∈ R^n, Q ∈ R^{k×n} (k ≪ n) is a linear operator, s is a nonnegative parameter, and ‖·‖₁ and ‖·‖₂ denote the ℓ₁ and ℓ₂ norms, respectively; see [3,38,39,40]. This problem can be reformulated as a bound-constrained quadratic program and, further, as the following system of monotone nonlinear equations:
$$ F(z) = \min \{ z,\ Az + b \} = 0, \tag{39} $$
with the minimum taken componentwise.
For more details about the transformation (39) and the Lipschitz continuity and monotonicity of F(z), see [41,42].
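For illustration only, the mapping (39) is straightforward to evaluate once the reformulation has been carried out; the matrix A and vector b below stand for the data produced by the transformation in [41] and are assumptions of this sketch.

```python
import numpy as np

def residual(z, A, b):
    """F(z) = min{z, Az + b}, with the minimum taken componentwise.

    F(z) = 0 encodes the complementarity conditions z >= 0, Az + b >= 0,
    and z^T (Az + b) = 0 of the bound-constrained quadratic reformulation.
    """
    return np.minimum(z, A @ z + b)
```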
We performed an experiment on the image restoration problem. In the experiment, we considered four different algorithms: the STCG algorithm, the Perry-type derivative-free method for solving nonlinear systems of equations (PSGM) [43], the conjugate gradient method for convex constrained monotone equations with applications in compressive sensing (CGD) [44], and the three-term Polak–Ribière–Polyak derivative-free method (TPRP) [45]. We applied these algorithms to four different benchmark images, namely the fox, Lena, horse, and frog images. All the experiments in this work were coded in MATLAB R2014a and run on an HP personal computer with an 8th-generation Intel Core i5 processor. We used the mean squared error (MSE) and the signal-to-noise ratio (SNR) to measure the quality of the restored images:
$$ \mathrm{MSE} = \frac{1}{n} \| x^* - x \|^2, \tag{40} $$
and,
$$ \mathrm{SNR} = 20 \times \log_{10} \frac{\| x^* \|}{\| x - x^* \|}, \tag{41} $$
where x^* denotes the original image and x the restored image.
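A direct reading of (40) and (41), assuming the images are stored as flattened NumPy arrays, is sketched below.

```python
import numpy as np

def mse(x_true, x_restored):
    """Mean squared error (40); x_true is the original image x*."""
    return np.linalg.norm(x_true - x_restored) ** 2 / x_true.size

def snr(x_true, x_restored):
    """Signal-to-noise ratio (41), in decibels."""
    return 20.0 * np.log10(np.linalg.norm(x_true)
                           / np.linalg.norm(x_restored - x_true))
```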
According to Figure 4, Figure 5, Figure 6 and Figure 7, the STCG algorithm restored all of the images with fewer iterations and less computing time than the PSGM, CGD, and TPRP methods. In addition, compared to the CGD and TPRP algorithms, the PSGM approach is highly promising in terms of the number of iterations and computing time. Moreover, compared to the other three algorithms, the STCG algorithm has the smallest MSE. Furthermore, the TPRP algorithm has the highest SNR values on all of the problems considered, followed by the CGD, PSGM, and STCG algorithms. These findings reveal that the STCG algorithm is a strong alternative for image restoration problems, requiring fewer iterations, a smaller MSE, and less computing time.

5. Conclusions

We proposed a modified conjugate gradient parameter by combining the descent three-term CG direction with the well-known Newton direction, adopting the Birgin and Martínez strategy. We demonstrated the efficacy of the proposed CG parameter by building a scaled three-term conjugate gradient algorithm for solving systems of monotone equations with convex constraints, with an application to image restoration problems. Furthermore, we established the global convergence of the proposed algorithm under the Lipschitz continuity assumption. The proposed algorithm is a viable alternative for solving convex constrained monotone equations with fewer iterations, and it was shown to restore blurred images with a smaller MSE, fewer iterations, and less computing time. We further highlight that the proposed CG parameter can be used in many areas where CG methods apply, such as symmetric systems of nonlinear equations, unconstrained optimization problems, and many more. The method's stability analysis will be considered in our future work.

Author Contributions

Formal analysis, K.O.A.; investigation, J.S.; software, A.A.; supervision, A.S. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by Taif University Researchers Supporting Project number (TURSP-2020/326), Taif University, Taif, Saudi Arabia.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The second author acknowledges with thanks, the Department of Mathematics and Applied Mathematics at the Sefako Makgatho Health Sciences University.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Iusem, A.N.; Solodov, M.V. Newton-type methods with generalized distances for constrained optimization. Optimization 1997, 41, 257–278.
  2. Dai, Z.; Zhou, H.; Wen, F.; He, S. Efficient predictability of stock return volatility: The role of stock market implied volatility. N. Am. J. Econ. Financ. 2020, 52, 101174.
  3. Figueiredo, M.; Nowak, R.; Wright, S.J. Gradient projection for sparse reconstruction: Application to compressed sensing and other inverse problems. IEEE J. Sel. Top. Signal Process. 2007, 1, 586–597.
  4. Zhao, Y.B.; Li, D.H. Monotonicity of fixed point and normal mapping associated with variational inequality and its application. SIAM J. Optim. 2001, 4, 962–973.
  5. Li, D.; Fukushima, M. A globally and superlinearly convergent Gauss–Newton-based BFGS method for symmetric nonlinear equations. SIAM J. Numer. Anal. 1999, 37, 152–172.
  6. Waziri, M.Y.; Sabi’u, J. A derivative-free conjugate gradient method and its global convergence for solving symmetric nonlinear equations. Int. J. Math. Math. Sci. 2015, 2015, 961487.
  7. Sabi’u, J.; Muangchoo, K.; Shah, A.; Abubakar, A.B.; Aremu, K.O. An inexact optimal hybrid conjugate gradient method for solving symmetric nonlinear equations. Symmetry 2021, 13, 1829.
  8. Sabi’u, J.; Muangchoo, K.; Shah, A.; Abubakar, A.B.; Jolaoso, L.O. A modified PRP-CG type derivative-free algorithm with optimal choices for solving large-scale nonlinear symmetric equations. Symmetry 2021, 13, 234.
  9. Ortega, J.M.; Rheinboldt, W.C. Iterative Solution of Nonlinear Equations in Several Variables; Academic Press: Cambridge, MA, USA, 1970.
  10. Zhou, G.; Toh, K.C. Superlinear convergence of a Newton-type algorithm for monotone equations. J. Optim. Theory Appl. 2005, 125, 205–221.
  11. Zhou, W.J.; Li, D.H. A globally convergent BFGS method for nonlinear monotone equations without any merit functions. Math. Comput. 2008, 77, 2231–2240.
  12. Sabi’u, J.; Shah, A.; Waziri, M.Y.; Ahmed, K. Modified Hager–Zhang conjugate gradient methods via singular value analysis for solving monotone nonlinear equations with convex constraint. Int. J. Comput. Methods 2020, 18, 2050043.
  13. Sabi’u, J.; Shah, A.; Waziri, M.Y. A modified Hager–Zhang conjugate gradient method with optimal choices for solving monotone nonlinear equations. Int. J. Comput. Math. 2022, 99, 332–354.
  14. Sabi’u, J.; Shah, A. An efficient three-term conjugate gradient-type algorithm for monotone nonlinear equations. RAIRO Oper. Res. 2021, 55, S1113–S1127.
  15. Waziri, M.Y.; Ahmed, K.; Sabi’u, J.; Halilu, A.S. Enhanced Dai–Liao conjugate gradient methods for systems of monotone nonlinear equations. SeMA J. 2021, 78, 15–51.
  16. Abubakar, A.B.; Sabi’u, J.; Kumam, P.; Shah, A. Solving nonlinear monotone operator equations via modified SR1 update. J. Appl. Math. Comput. 2021, 67, 343–373.
  17. Waziri, M.Y.; Hungu, K.A.; Sabi’u, J. Descent Perry conjugate gradient methods for systems of monotone nonlinear equations. Numer. Algorithms 2020, 85, 763–785.
  18. Waziri, M.Y.; Ahmed, K.; Sabi’u, J. A Dai–Liao conjugate gradient method via modified secant equation for system of nonlinear equations. Arab. J. Math. 2020, 9, 443–457.
  19. Sabi’u, J.; Shah, A.; Waziri, M.Y. Two optimal Hager–Zhang conjugate gradient methods for solving monotone nonlinear equations. Appl. Numer. Math. 2020, 153, 217–233.
  20. Gao, P.; He, C. An efficient three-term conjugate gradient method for nonlinear monotone equations with convex constraints. Calcolo 2018, 55, 1–17.
  21. Liu, J.; Feng, Y. A derivative-free iterative method for nonlinear monotone equations with convex constraints. Numer. Algorithms 2019, 82, 245–262.
  22. Zheng, L.; Yang, L.; Liang, Y. A modified spectral gradient projection method for solving non-linear monotone equations with convex constraints and its application. IEEE Access 2020, 8, 92677–92686.
  23. Halilu, A.S.; Majumder, A.; Waziri, M.Y.; Ahmed, K. Signal recovery with convex constrained nonlinear monotone equations through conjugate gradient hybrid approach. Math. Comput. Simul. 2021, 187, 520–539.
  24. Koorapetse, M.; Kaelo, P.; Lekoko, S.; Diphofu, T. A derivative-free RMIL conjugate gradient projection method for convex constrained nonlinear monotone equations with applications in compressive sensing. Appl. Numer. Math. 2021, 165, 431–441.
  25. Halilu, A.S.; Majumder, A.; Waziri, M.Y.; Awwal, A.M.; Ahmed, K. On solving double direction methods for convex constrained monotone nonlinear equations with image restoration. Comput. Appl. Math. 2021, 40, 1–27.
  26. Aji, S.; Kumam, P.; Awwal, A.M.; Yahaya, M.M.; Kumam, W. Two hybrid spectral methods with inertial effect for solving system of nonlinear monotone equations with application in robotics. IEEE Access 2021, 9, 30918–30928.
  27. Amini, K.; Faramarzi, P.; Bahrami, S. A spectral conjugate gradient projection algorithm to solve the large-scale system of monotone nonlinear equations with application to compressed sensing. Int. J. Comput. Math. 2022, 2047180.
  28. Waziri, M.Y.; Ahmed, K.; Halilu, A.S. A modified PRP-type conjugate gradient projection algorithm for solving large-scale monotone nonlinear equations with convex constraint. J. Comput. Appl. Math. 2022, 406, 114035.
  29. Waziri, M.Y.; Ahmed, K. Two descent Dai–Yuan conjugate gradient methods for systems of monotone nonlinear equations. J. Sci. Comput. 2022, 90, 1–53.
  30. Waziri, M.Y.; Ahmed, K.; Halilu, A.S.; Sabi’u, J. Two new Hager–Zhang iterative schemes with improved parameter choices for monotone nonlinear systems and their applications in compressed sensing. RAIRO Oper. Res. 2022, 56, 239–273.
  31. Meli, E.; Morini, B.; Porcelli, M.; Sgattoni, C. Solving nonlinear systems of equations via spectral residual methods: Stepsize selection and applications. J. Sci. Comput. 2022, 90, 1–41.
  32. Narushima, Y.; Yabe, H.; Ford, J.A. A three-term conjugate gradient method with sufficient descent property for unconstrained optimization. SIAM J. Optim. 2011, 21, 212–230.
  33. Birgin, E.; Martínez, J.M. A spectral conjugate gradient method for unconstrained optimization. Appl. Math. Optim. 2001, 43, 117–128.
  34. Liu, J.K.; Li, S.J. A projection method for convex constrained monotone nonlinear equations with applications. Comput. Math. Appl. 2015, 70, 2442–2453.
  35. La Cruz, W.; Martínez, J.; Raydan, M. Spectral residual method without gradient information for solving large-scale nonlinear systems of equations. Math. Comput. 2006, 75, 1429–1448.
  36. Hu, Y.; Wei, Z. A modified Liu–Storey conjugate gradient projection algorithm for nonlinear monotone equations. Int. Math. Forum 2014, 9, 1767–1777.
  37. Dolan, E.D.; Moré, J.J. Benchmarking optimization software with performance profiles. Math. Program. 2002, 91, 201–213.
  38. Hale, E.T.; Yin, W.; Zhang, Y. A fixed-point continuation method for l1-regularized minimization with applications to compressed sensing. SIAM J. Optim. 2008, 19, 1107–1130.
  39. Beck, A.; Teboulle, M. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2009, 2, 183–202.
  40. Van den Berg, E.; Friedlander, M.P. Probing the Pareto frontier for basis pursuit solutions. SIAM J. Sci. Comput. 2008, 31, 890–912.
  41. Xiao, Y.H.; Wang, Q.Y.; Hu, Q.J. Non-smooth equations based method for l1-norm problems with applications to compressed sensing. Nonlinear Anal. Theory Methods Appl. 2011, 74, 3570–3577.
  42. Pang, J.S. Inexact Newton methods for the nonlinear complementarity problem. Math. Program. 1986, 36, 54–71.
  43. Awwal, A.M.; Kumam, P.; Mohammad, H.; Watthayu, W.; Abubakar, A.B. A Perry-type derivative-free algorithm for solving nonlinear system of equations and minimizing l1 regularized problem. Optimization 2021, 70, 1231–1259.
  44. Xiao, Y.H.; Zhu, H. A conjugate gradient method to solve convex constrained monotone equations with applications in compressive sensing. J. Math. Anal. Appl. 2013, 405, 310–319.
  45. Ibrahim, A.H.; Deepho, J.; Abubakar, A.B.; Adamu, A. A three-term Polak–Ribière–Polyak derivative-free method and its application to image restoration. Sci. Afr. 2021, 13, e00880.
Figure 1. Performance profile of the STCG algorithm versus the PCG [34], DFSP [27], and MSGP [22] algorithms for the number of iterations.
Figure 2. Performance profile of the STCG algorithm versus the PCG [34], DFSP [27], and MSGP [22] algorithms for the CPU time.
Figure 3. Performance profile of the STCG algorithm versus the PCG [34], DFSP [27], and MSGP [22] algorithms for the number of function evaluations.
Figure 4. Original image, blurred image, and images restored by STCG (time: 0.58 s, iterations: 7, MSE: 1.6745 × 10², SNR: 20.86), PSGM (time: 1.05 s, iterations: 18, MSE: 7.3735 × 10, SNR: 24.42), CGD (time: 33.36 s, iterations: 474, MSE: 7.4512 × 10, SNR: 24.38), and TPRP (time: 250.69 s, iterations: 58, MSE: 7.7598 × 10, SNR: 24.20).
Figure 5. Original image, blurred image, and images restored by STCG (time: 0.59 s, iterations: 7, MSE: 1.6745 × 10², SNR: 20.86), PSGM (time: 1.94 s, iterations: 26, MSE: 5.8083 × 10, SNR: 18.38), CGD (time: 33.36 s, iterations: 491, MSE: 7.4512 × 10, SNR: 23.23), and TPRP (time: 809.14 s, iterations: 101, MSE: 5.9071 × 10, SNR: 23.16).
Figure 6. Original image, blurred image, and images restored by STCG (time: 0.86 s, iterations: 7, MSE: 1.9173 × 10², SNR: 17.48), PSGM (time: 0.95 s, iterations: 20, MSE: 5.8083 × 10, SNR: 18.38), CGD (time: 40.16 s, iterations: 645, MSE: 1.0997 × 10², SNR: 19.90), and TPRP (time: 552.25 s, iterations: 175, MSE: 1.0642 × 10², SNR: 20.04).
Figure 7. Original image, blurred image, and images restored by STCG (time: 0.38 s, iterations: 7, MSE: 1.9752 × 10², SNR: 12.06), PSGM (time: 1.44 s, iterations: 27, MSE: 1.3110 × 10², SNR: 13.84), CGD (time: 61.20 s, iterations: 935, MSE: 1.2850 × 10², SNR: 13.92), and TPRP (time: 770.35 s, iterations: 512, MSE: 1.2772 × 10², SNR: 13.95).
Table 1. Numerical comparison of the STCG algorithm versus the PCG [34], DFSP [27], and MSGP [22] algorithms on Problem 1, reporting ITER, FVAL, TIME, and NORM for each algorithm at dimensions 500, 1000, 10,000, 50,000, and 100,000 from the initial points x_1–x_8.
Table 2. Numerical comparison of the STCG algorithm versus the PCG [34], DFSP [27], and MSGP [22] algorithms on Problem 2, reporting ITER, FVAL, TIME, and NORM for each algorithm at dimensions 500, 1000, 10,000, 50,000, and 100,000 from the initial points x_1–x_8.
Table 3. Numerical comparison of the STCG algorithm versus the PCG [34], DFSP [27], and MSGP [22] algorithms on Problem 3, reporting ITER, FVAL, TIME, and NORM for each algorithm at dimensions 500, 1000, 10,000, 50,000, and 100,000 from the initial points x_1–x_8.
Table 4. Numerical comparison of the STCG algorithm versus the PCG [34], DFSP [27], and MSGP [22] algorithms on Problem 4, reporting ITER, FVAL, TIME, and NORM for each algorithm at dimensions 500, 1000, 10,000, 50,000, and 100,000 from the initial points x_1–x_8.
Table 5. Numerical comparison of the STCG algorithm versus the PCG [34], DFSP [27], and MSGP [22] algorithms on Problem 5, reporting ITER, FVAL, TIME, and NORM for each algorithm at dimensions 500, 1000, 10,000, 50,000, and 100,000 from the initial points x_1–x_8.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
