Article

A One-Parameter Memoryless DFP Algorithm for Solving System of Monotone Nonlinear Equations with Application in Image Processing

1
Department of Mathematics, COMSATS University Islamabad, Park Road, Islamabad 45550, Pakistan
2
Department of Applied Mathematics and Statistics, Stony Brook University, Stony Brook, New York, NY 11794, USA
3
Department of Mathematics, College of Computing and Mathematics, King Fahd University of Petroleum and Minerals, Dhahran 31261, Saudi Arabia
4
Department of Mathematics, Yusuf Maitama Sule University, Kano 700282, Nigeria
5
Department of Mathematics, Faculty of Science, Gombe State University (GSU), Gombe 760214, Nigeria
6
GSU-Mathematics for Innovative Research Group, Gombe State University (GSU), Gombe 760214, Nigeria
7
Mathematics and Computing Science Program, Faculty of Science and Technology, Phetchabun Rajabhat University, Phetchabun 67000, Thailand
8
Department of Physics, Abdul Wali Khan University Mardan, Mardan 23200, Pakistan
9
Research Group in Mathematics and Applied Mathematics, Department of Mathematics, Faculty of Science, Chiang Mai University, Chiang Mai 50200, Thailand
10
Data Science Research Center, Department of Mathematics, Faculty of Science, Chiang Mai University, Chiang Mai 50200, Thailand
*
Author to whom correspondence should be addressed.
Mathematics 2023, 11(5), 1221; https://doi.org/10.3390/math11051221
Submission received: 31 December 2022 / Revised: 21 February 2023 / Accepted: 23 February 2023 / Published: 2 March 2023
(This article belongs to the Special Issue Fixed Point, Optimization, and Applications II)

Abstract

In matrix analysis, the scaling technique reduces the chances of ill-conditioning of a matrix. This article proposes a one-parameter scaled memoryless Davidon–Fletcher–Powell (DFP) algorithm for solving a system of monotone nonlinear equations with convex constraints. A measure function that involves all the eigenvalues of the memoryless DFP matrix is minimized to obtain the optimal value of the scaling parameter. The resulting algorithm is matrix-free and derivative-free with low memory requirements and is globally convergent under some mild conditions. A numerical comparison shows that the algorithm is efficient in terms of the number of iterations, function evaluations, and CPU time. The performance of the algorithm is further illustrated by solving problems arising from image restoration.

1. Introduction

The classical quasi–Newton methods are numerically efficient due to their ability to use approximate Jacobian matrices. Consider the following system of monotone nonlinear equations (SMNE) with convex constraints
$F(x) = 0, \quad x \in X,$  (1)
where $x = (x_1, x_2, x_3, \ldots, x_n)^T$, $X \subseteq \mathbb{R}^n$ is a nonempty closed convex set, and $F: \mathbb{R}^n \to \mathbb{R}^n$ is a continuous and monotone function. The system $F = F_i\ (i = 1, 2, 3, \ldots, n)$ is monotone if
$(F(x) - F(y))^T (x - y) \ge 0, \quad \forall x, y \in \mathbb{R}^n.$  (2)
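The monotonicity condition (2) is easy to probe numerically. The following sketch (our own illustration, not part of the paper; the helper name `is_monotone_sample` is ours) checks the inequality on random point pairs for a sample mapping:

```python
import numpy as np

def is_monotone_sample(F, n, trials=200, seed=0):
    """Numerically probe the monotonicity condition
    (F(x) - F(y))^T (x - y) >= 0 on random point pairs."""
    rng = np.random.default_rng(seed)
    for _ in range(trials):
        x, y = rng.standard_normal(n), rng.standard_normal(n)
        if (F(x) - F(y)) @ (x - y) < -1e-12:
            return False
    return True

# F_i(x) = exp(x_i) - 1 is componentwise increasing, hence monotone
F = lambda x: np.exp(x) - 1.0
print(is_monotone_sample(F, n=5))
```

A random check of this kind cannot prove monotonicity, but it quickly exposes a mapping that violates it.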
The solution of system (1) is important in many fields of science and engineering such as control systems [1], signal recovery and image restoration in compressive sensing [2,3,4,5], communications and networking [6], data estimation and modeling [7], and geophysics [8].
Newton’s method is one of the oldest techniques for solving system (1) when the Jacobian matrix is invertible [9]. Davidon [10] originally derived a method, later reformulated by Fletcher and Powell [11] and now known as the Davidon–Fletcher–Powell (DFP) method, for approximating the Hessian matrix in unconstrained optimization problems. The method has been widely used for solving nonlinear optimization problems [12,13,14,15].
In recent years, there has been considerable interest in developing methods for solving convex-constrained nonlinear monotone problems. Some examples include: the Levenberg–Marquardt method [16], scaled trust–region method [17], interior global method [18], Dogleg method [19], Polak–Ribière–Polyak (PRP) method [20], Dai and Yuan method [21], descent derivative–free method [22], projection–based method [23], double projection method [24], multivariate spectral gradient method [25], extension of the CG_DESCENT projection method [26], new derivative–free SCG–type projection method [27], partially symmetrical derivative–free Liu–Storey projection method [28], modified spectral gradient projection method [29], efficient generalized conjugate gradient (CG) algorithm [30], modified Fletcher–Reeves method [31], modified descent three–term CG method [32], new hybrid CG projection method [33], self–adaptive three–term CG method [34], efficient three–term CG method [35], derivative–free RMIL CG method [36], and spectral three–term conjugate descent method [37]. Solodov and Svaiter [9] introduced an inexact Newton method with the attractive property that the whole sequence of iterates converges to a solution without additional regularity assumptions. Later, Zhou and Toh [38] proved that the Solodov and Svaiter method has superlinear convergence under some assumptions. Furthermore, this algorithm was extended by Zhou and Li [39,40] to the Broyden–Fletcher–Goldfarb–Shanno (BFGS) and limited-memory BFGS methods. Zhang and Zhou proposed the spectral gradient projection method [41] for the solution of system (1), which combines the spectral gradient method [42] and the projection method [9]. Wang et al. [43] extended the work of Solodov and Svaiter [9] to monotone equations with convex constraints. Yu et al. [44] extended the spectral gradient projection method to convex constraint problems.
Xiao and Zhu [45] extended the CG_DESCENT method [46,47] for large–scale nonlinear convex-constrained monotone equations. Moreover, Awwal et al. [48] derived a new hybrid spectral gradient projection method for the solution of the system (1). Currently, Sabi’u et al. [49] modified the Hager–Zhang CG method by using singular value analysis for solving the system (1).
Inspired by the work of Sabi’u et al. [50,51] on finding optimal choices of the non–negative constants involved in some nonlinear CG methods, and by the measure-function scaling techniques introduced by Neculai Andrei [52,53], the contributions of this paper are as follows:
  • We scaled one term of the DFP update formula and found the optimal value of the scaled parameter using the idea of measure function.
  • Based on the optimal value of the scaled parameter, we derived a new search direction for the DFP algorithm.
  • We proposed a projection-based DFP algorithm for solving large-scale systems of nonlinear monotone equations.
  • We provided the global convergence result for the proposed algorithm under some mild assumptions.
  • The algorithm is successfully implemented for solving some image restoration problems.
The remainder of this paper is organized as follows. The derivation of the proposed algorithm is given in Section 2. The global convergence of the algorithm is given in Section 3. The numerical results are presented in Section 4. Section 5 contains some applications from the image restoration with its physical explanation. The conclusion is provided in Section 6.

2. One-Parameter Scaled DFP Algorithm

Quasi-Newton schemes are efficient due to their ability to use a Jacobian approximation. The DFP update is one of the popular quasi-Newton schemes used for solving large-scale systems of algebraic nonlinear equations. In this section, we present a one-parameter scaled DFP algorithm for solving system (1). The basic iterative formula for the DFP algorithm is
$x_{k+1} = x_k + \alpha_k d_k, \quad k = 0, 1, 2, \ldots,$  (3)
where $x_k$ is the previous iterate, $x_{k+1}$ is the current iterate, $\alpha_k$ is the step length, and $d_k$ is the DFP direction defined as
$d_k = -H_k F_k, \quad k = 0, 1, 2, \ldots,$  (4)
with $F_k = F(x_k)$ and $H_k$ the DFP matrix at $x_k$. The updating formula for DFP can be found in [10,54] as
$H_{k+1} = H_k - \dfrac{H_k y_k y_k^T H_k}{y_k^T H_k y_k} + \dfrac{s_k s_k^T}{s_k^T y_k}, \quad k = 0, 1, 2, \ldots,$  (5)
where $s_k = x_{k+1} - x_k$ and $y_k = F_{k+1} - F_k$. The symmetry of $H_{k+1}$ follows directly from the symmetry of $H_k$. One of the well-known properties of the DFP update is that $H_{k+1}$ is positive definite whenever $s_k^T y_k > 0$ [54]. The main contribution of the scaling technique in the DFP update formula is that it reduces the chances of ill-conditioning of the matrix $H_k$ for all $k \ge 0$. Now, multiplying the third term on the right-hand side of (5) by a positive scalar $\gamma_k$, we have
$H_{k+1} = H_k - \dfrac{H_k y_k y_k^T H_k}{y_k^T H_k y_k} + \gamma_k \dfrac{s_k s_k^T}{s_k^T y_k}, \quad k = 0, 1, 2, \ldots,$  (6)
where γ k needs to be determined.
The memoryless concept is applied to the DFP formula to avoid computing and storing the matrix $H_k$ at each iteration. To this end, we replace $H_k$ with the identity matrix $I_n$ (where $n$ is the dimension of the problem) in (6), so that formula (6) becomes
$H_{k+1} = I_n - \dfrac{y_k y_k^T}{y_k^T y_k} + \gamma_k \dfrac{s_k s_k^T}{s_k^T y_k}, \quad k = 0, 1, 2, \ldots.$  (7)
Next, we apply to (7) the measure function $\varphi$ introduced by Byrd and Nocedal [55]:
$\varphi(H_{k+1}) = \operatorname{tr}(H_{k+1}) - \ln(\det(H_{k+1})),$  (8)
where $\operatorname{tr}$ denotes the trace of the positive definite matrix $H_{k+1}$, $\ln$ is the natural logarithm, and $\det$ is the determinant of $H_{k+1}$. The measure function $\varphi$ works with the trace and determinant of $H_{k+1}$ simultaneously to modify the quasi–Newton method and to collect information about its behavior. It is a measure of the matrix involving all the eigenvalues of $H_{k+1}$ [56]. The function $\varphi$ is strictly convex and attains its minimum at $H_{k+1} = I$ [57], while $\varphi$ becomes unbounded when $H_{k+1}$ approaches a singular matrix or its eigenvalues grow without bound.
Now, the determinant of $H_{k+1}$ can be calculated by the Sherman–Morrison formula [58], i.e.,
$\det(I + v_1 v_2^T + v_3 v_4^T) = (1 + v_1^T v_2)(1 + v_3^T v_4) - (v_1^T v_4)(v_2^T v_3),$  (9)
to obtain
$\det(H_{k+1}) = \gamma_k \dfrac{y_k^T s_k}{\|y_k\|^2}.$  (10)
Using simple algebra on (7), the trace of $H_{k+1}$ is given by
$\operatorname{tr}(H_{k+1}) = \operatorname{tr}(I_n) - \operatorname{tr}\!\left(\dfrac{y_k y_k^T}{y_k^T y_k}\right) + \gamma_k \operatorname{tr}\!\left(\dfrac{s_k s_k^T}{s_k^T y_k}\right) = n - \dfrac{\|y_k\|^2}{\|y_k\|^2} + \gamma_k \dfrac{\|s_k\|^2}{s_k^T y_k} = n - 1 + \gamma_k \dfrac{\|s_k\|^2}{s_k^T y_k}.$  (11)
Now, putting (10) and (11) into (8), we have
$\varphi(H_{k+1}) = n - 1 + \gamma_k \dfrac{\|s_k\|^2}{s_k^T y_k} - \ln\!\left(\gamma_k \dfrac{y_k^T s_k}{\|y_k\|^2}\right) = n - 1 + \gamma_k \dfrac{\|s_k\|^2}{s_k^T y_k} - \ln(\gamma_k) - \ln(y_k^T s_k) + \ln(\|y_k\|^2).$  (12)
Differentiating (12) with respect to $\gamma_k$, we have
$\dfrac{\partial \varphi}{\partial \gamma_k} = \dfrac{\|s_k\|^2}{s_k^T y_k} - \dfrac{1}{\gamma_k}.$  (13)
Using the optimality condition on (13), i.e., $\partial \varphi / \partial \gamma_k = 0$, we have
$\dfrac{\|s_k\|^2}{s_k^T y_k} = \dfrac{1}{\gamma_k},$  (14)
which implies that
$\gamma_k = \dfrac{s_k^T y_k}{\|s_k\|^2}.$  (15)
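As a quick numerical sanity check (our own illustration, not part of the paper; variable names are ours), the closed forms (10) and (11) and the minimizing property of the parameter (15) can be verified with NumPy on random vectors:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
s = rng.standard_normal(n)
y = rng.standard_normal(n)
if s @ y < 0:              # enforce the curvature condition s^T y > 0
    y = -y

def H(gamma):
    """Scaled memoryless DFP matrix (7)."""
    return np.eye(n) - np.outer(y, y) / (y @ y) + gamma * np.outer(s, s) / (s @ y)

gamma_star = (s @ y) / (s @ s)     # optimal scaling parameter (15)

# Check the closed forms (10) and (11) against direct computation
g = 0.7
assert np.isclose(np.linalg.det(H(g)), g * (y @ s) / (y @ y))
assert np.isclose(np.trace(H(g)), n - 1 + g * (s @ s) / (s @ y))

# The measure function phi = tr - ln(det) is minimized near gamma_star
phi = lambda g: np.trace(H(g)) - np.log(np.linalg.det(H(g)))
grid = np.linspace(0.2 * gamma_star, 5 * gamma_star, 400)
g_min = grid[np.argmin([phi(g) for g in grid])]
assert abs(g_min - gamma_star) < 0.1 * gamma_star
```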
Substituting (15) into (7), we have
$H_{k+1} = I_n - \dfrac{y_k y_k^T}{y_k^T y_k} + \dfrac{s_k^T y_k}{\|s_k\|^2} \cdot \dfrac{s_k s_k^T}{s_k^T y_k} = I_n - \dfrac{y_k y_k^T}{\|y_k\|^2} + \dfrac{s_k s_k^T}{\|s_k\|^2}.$  (16)
Next, using (16) in (4), the search direction is
$d_{k+1} = -\left(I_n - \dfrac{y_k y_k^T}{\|y_k\|^2} + \dfrac{s_k s_k^T}{\|s_k\|^2}\right) F_{k+1} = -F_{k+1} + \dfrac{y_k^T F_{k+1}}{\|y_k\|^2} y_k - \dfrac{s_k^T F_{k+1}}{\|s_k\|^2} s_k.$  (17)
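Because (17) involves only inner products, the direction can be evaluated without forming $H_{k+1}$. A minimal sketch (the function name `smdfp_direction` is ours):

```python
import numpy as np

def smdfp_direction(F_next, s, y):
    """Matrix-free SMDFP search direction (17):
    d = -F + (y^T F / ||y||^2) y - (s^T F / ||s||^2) s."""
    return (-F_next
            + (y @ F_next) / (y @ y) * y
            - (s @ F_next) / (s @ s) * s)

# The direction always satisfies F^T d <= 0 (Lemma 2 below)
rng = np.random.default_rng(0)
F_next, s, y = (rng.standard_normal(4) for _ in range(3))
assert F_next @ smdfp_direction(F_next, s, y) <= 1e-12
```

Only two dot products and three vector updates per iteration are needed, which is what makes the method attractive for large-scale problems.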
Furthermore, Solodov and Svaiter [9] proposed a predictor–corrector method, in which
$z_k = x_k + \alpha_k d_k \quad \text{(predictor)}$  (18)
and
$x_{k+1} = x_k - \lambda_k F(z_k) \quad \text{(corrector)},$  (19)
where
$\lambda_k = \dfrac{F(z_k)^T (x_k - z_k)}{\|F(z_k)\|^2}.$  (20)
In this work, we use the iterative scheme
$x_{k+1} = P_{\chi}\!\left[x_k - \xi \lambda_k F(z_k)\right],$  (21)
where $\xi$ is any positive constant and $P_{\chi}$ is the projection operator onto a convex subset $\chi$, defined by
$P_{\chi}[x] = \arg\min_{y \in \chi} \|x - y\|, \quad x \in \mathbb{R}^n.$  (22)
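For the feasible set $X = \mathbb{R}^n_+$ used in the numerical experiments below, the projection (22) has a closed form: clip each component at zero. A small sketch (our own illustration; the function name is ours):

```python
import numpy as np

def project_nonneg(x):
    """Projection onto the nonnegative orthant X = R^n_+ :
    argmin_{y >= 0} ||x - y|| is componentwise max(x, 0)."""
    return np.maximum(x, 0.0)

# Non-expansiveness: ||P[x] - P[y]|| <= ||x - y||  (Lemma 1 below)
rng = np.random.default_rng(0)
x, y = rng.standard_normal(8), rng.standard_normal(8)
assert (np.linalg.norm(project_nonneg(x) - project_nonneg(y))
        <= np.linalg.norm(x - y) + 1e-12)
```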
We refer the reader to [59] for further details on the advantages and applications of the projection operator. The One-parameter Scaled Memoryless DFP (SMDFP) algorithm for solving system (1) is summarized in Algorithm 1:
Algorithm 1: Scaled Memoryless DFP (SMDFP) Method
Step 0 Given a starting point $x_0 \in \mathbb{R}^n$.
Step 1 Choose values for $\xi, \epsilon, \rho > 0$ with $\theta \in (0, 1)$. Compute $d_0 = -F(x_0)$ and set $k = 0$.
Step 2 Compute $F(x_k)$ and test the stopping criterion: if $\|F_k\| \le \epsilon$, then stop; otherwise, move to the next step.
Step 3 Compute the step size $\alpha_k = \max\{\rho^i;\ i = 0, 1, 2, \ldots\}$ satisfying the line search
$-F(x_k + \alpha_k d_k)^T d_k \ge \theta \alpha_k \|F(x_k + \alpha_k d_k)\| \|d_k\|^2.$  (23)
Step 4 Compute the differences $s_k = x_{k+1} - x_k$ and $y_k = F_{k+1} - F_k$.
Step 5 Compute the search direction d k + 1 using (17).
Step 6 Compute iterative relations x k + 1 using (21).
Step 7 Set k = k + 1 and go to Step 2.
Remark 1. 
The proposed SMDFP algorithm is both matrix-free and derivative-free. These features make it well suited for solving large-scale problems.

3. Convergence Analysis

This section establishes the global convergence of the SMDFP algorithm under the following assumptions:
(a)
The solution set of system (1) is nonempty and the function F is monotone on $\mathbb{R}^n$, i.e.,
$(F(x) - F(y))^T (x - y) \ge 0, \quad \forall x, y \in \mathbb{R}^n.$  (24)
(b)
For a constant $\mu > 0$, the function F is Lipschitz continuous on $\mathbb{R}^n$, i.e.,
$\|F(x) - F(y)\| \le \mu \|x - y\|, \quad \forall x, y \in \mathbb{R}^n.$  (25)
(c)
Let $x^* \in X$ be the solution of problem (1), so that $F(x^*) = 0$.
We first recall the non–expansiveness property of the projection operator [60].
Lemma 1. 
Suppose χ is a nonempty closed and convex subset of $\mathbb{R}^n$. Then the projection operator $P_\chi$ satisfies
$\|P_{\chi}[x] - P_{\chi}[y]\| \le \|x - y\|, \quad \forall x, y \in \mathbb{R}^n,$  (26)
which shows that $P_\chi$ is Lipschitz continuous on $\mathbb{R}^n$ with $\mu = 1$.
Lemma 2. 
The search direction $d_{k+1}$ defined by (17) satisfies the descent condition, i.e.,
$F_{k+1}^T d_{k+1} \le 0, \quad \forall k \ge 0.$  (27)
Proof. 
Multiplying (17) by $F_{k+1}^T$, we have
$F_{k+1}^T d_{k+1} = -\|F_{k+1}\|^2 + \dfrac{(y_k^T F_{k+1})^2}{\|y_k\|^2} - \dfrac{(s_k^T F_{k+1})^2}{\|s_k\|^2} \le -\|F_{k+1}\|^2 + \dfrac{(y_k^T F_{k+1})^2}{\|y_k\|^2} \le -\|F_{k+1}\|^2 + \dfrac{\|y_k\|^2 \|F_{k+1}\|^2}{\|y_k\|^2} = -\|F_{k+1}\|^2 + \|F_{k+1}\|^2 = 0,$  (28)
where the first inequality drops the nonpositive last term and the second follows from the Cauchy–Schwarz inequality. Hence
$F_{k+1}^T d_{k+1} \le 0, \quad \forall k \ge 0.$  (29)
Lemma 3. 
Let assumptions (a), (b), and (c) hold. Then the sequences $\{z_k\}$ and $\{x_k\}$ generated by the SMDFP algorithm are bounded. Moreover, we have
$\lim_{k \to \infty} \|x_k - z_k\| = 0,$  (30)
and
$\lim_{k \to \infty} \|x_{k+1} - x_k\| = 0.$  (31)
Proof. 
Firstly, we show that the sequences $\{z_k\}$ and $\{x_k\}$ are bounded. Let $x^* \in X$ be any solution of problem (1). By the monotonicity of F, we get
$\langle F(z_k), x_k - x^* \rangle = \langle F(z_k), x_k - z_k \rangle + \langle F(z_k), z_k - x^* \rangle \ge \langle F(z_k), x_k - z_k \rangle + \langle F(x^*), z_k - x^* \rangle = \langle F(z_k), x_k - z_k \rangle,$  (32)
and from the definition of $z_k$ and the line search (23), we have
$\langle F(z_k), x_k - z_k \rangle = -\alpha_k \langle F(z_k), d_k \rangle \ge \theta \alpha_k^2 \|F(z_k)\| \|d_k\|^2 = \theta \|F(z_k)\| \|x_k - z_k\|^2 > 0.$  (33)
Now, by using (21) and (26), we get
$\|x_{k+1} - x^*\|^2 = \|P_X[x_k - \xi \lambda_k F(z_k)] - x^*\|^2 \le \|x_k - \xi \lambda_k F(z_k) - x^*\|^2 = \|x_k - x^*\|^2 - 2\xi\lambda_k \langle F(z_k), x_k - x^* \rangle + \xi^2 \lambda_k^2 \|F(z_k)\|^2.$  (34)
Putting (32) into (34), and then using the value of $\lambda_k$ from (20), we have
$\|x_{k+1} - x^*\|^2 \le \|x_k - x^*\|^2 - 2\xi\lambda_k \langle F(z_k), x_k - z_k \rangle + \xi^2 \lambda_k^2 \|F(z_k)\|^2 = \|x_k - x^*\|^2 - 2\xi \dfrac{\langle F(z_k), x_k - z_k \rangle^2}{\|F(z_k)\|^2} + \xi^2 \dfrac{\langle F(z_k), x_k - z_k \rangle^2}{\|F(z_k)\|^2} = \|x_k - x^*\|^2 - \xi(2 - \xi) \dfrac{\langle F(z_k), x_k - z_k \rangle^2}{\|F(z_k)\|^2}.$  (35)
Now, by using (33) in (35), we have
$\|x_{k+1} - x^*\|^2 \le \|x_k - x^*\|^2 - \xi(2 - \xi) \dfrac{\theta^2 \|F(z_k)\|^2 \|x_k - z_k\|^4}{\|F(z_k)\|^2} = \|x_k - x^*\|^2 - \xi(2 - \xi)\theta^2 \|x_k - z_k\|^4.$  (36)
Hence the sequence $\{\|x_k - x^*\|\}$ is decreasing and convergent; in particular, the sequence $\{x_k\}$ is bounded. Furthermore, from (36), we can write
$\|x_{k+1} - x^*\|^2 \le \|x_k - x^*\|^2, \quad \forall k \ge 0,$  (37)
and repeating the same process, we get
$\|x_k - x^*\|^2 \le \|x_0 - x^*\|^2, \quad \forall k \ge 0.$  (38)
Moreover, from assumption (b), we have
$\|F(x_k)\| = \|F(x_k) - F(x^*)\| \le \mu \|x_k - x^*\| \le \mu \|x_0 - x^*\|.$  (39)
Setting $\omega = \mu \|x_0 - x^*\|$, the sequence $\{\|F(x_k)\|\}$ is bounded, i.e.,
$\|F(x_k)\| \le \omega, \quad \forall k \ge 0.$  (40)
By the Cauchy–Schwarz inequality, the monotonicity of F, and inequality (33), it holds that
$0 < \theta \|F(z_k)\| \|x_k - z_k\|^2 \le \langle F(z_k), x_k - z_k \rangle \le \|F(z_k)\| \|x_k - z_k\|.$  (41)
From inequality (41), it follows that
$\theta \|x_k - z_k\| \le 1,$  (42)
which shows that the sequence $\{z_k\}$ is also bounded. Moreover, from inequality (36), it follows that
$\xi(2 - \xi)\theta^2 \sum_{k=1}^{\infty} \|x_k - z_k\|^4 \le \sum_{k=1}^{\infty} \left(\|x_k - x^*\|^2 - \|x_{k+1} - x^*\|^2\right) \le \|x_0 - x^*\|^2 < \infty,$  (43)
which implies that
$\lim_{k \to \infty} \|x_k - z_k\| = 0.$  (44)
Since $x_k \in X$, by using (26) and then substituting the value of $\lambda_k$ from (20), we have
$\|x_{k+1} - x_k\| = \|P_X[x_k - \xi \lambda_k F(z_k)] - P_X[x_k]\| \le \|\xi \lambda_k F(z_k)\| = \xi \lambda_k \|F(z_k)\| = \xi \dfrac{\langle F(z_k), x_k - z_k \rangle}{\|F(z_k)\|}.$  (45)
Now, using the Cauchy–Schwarz inequality on (45), we have
$\|x_{k+1} - x_k\| \le \xi \dfrac{\|F(z_k)\| \|x_k - z_k\|}{\|F(z_k)\|} = \xi \|x_k - z_k\|, \quad \forall k \ge 0.$  (46)
Thus, by using (30), we have
$\lim_{k \to \infty} \|x_{k+1} - x_k\| = 0.$  (47)
Hence, the proof of (31) is completed. □
Remark 2. 
Let the sequence $\{x_k\}$ be generated by the SMDFP method. Then, using (18) in (44), we have
$\lim_{k \to \infty} \|z_k - x_k\| = \lim_{k \to \infty} \|x_k + \alpha_k d_k - x_k\| = \lim_{k \to \infty} \alpha_k \|d_k\| = 0.$  (48)
Lemma 4. 
The direction generated by the SMDFP algorithm is bounded; that is,
$\|d_{k+1}\| \le q, \quad \forall k \ge 0,$  (49)
where q is some positive constant.
Proof. 
It follows from (17) that
$\|d_{k+1}\| = \left\| -F_{k+1} + \dfrac{y_k^T F_{k+1}}{\|y_k\|^2} y_k - \dfrac{s_k^T F_{k+1}}{\|s_k\|^2} s_k \right\| \le \|F_{k+1}\| + \dfrac{|y_k^T F_{k+1}|}{\|y_k\|^2} \|y_k\| + \dfrac{|s_k^T F_{k+1}|}{\|s_k\|^2} \|s_k\| \le \|F_{k+1}\| + \|F_{k+1}\| + \|F_{k+1}\| = 3\|F_{k+1}\| \le 3\omega = q,$  (50)
where the second inequality follows from the Cauchy–Schwarz inequality. Thus,
$\|d_{k+1}\| \le q, \quad \forall k \ge 0.$
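The bound $\|d_{k+1}\| \le 3\|F_{k+1}\|$ in (50) holds for any vectors $s_k, y_k$, which is easy to confirm numerically (our own illustration, not part of the paper):

```python
import numpy as np

# Numerical illustration of Lemma 4: the SMDFP direction (17) satisfies
# ||d|| <= 3 ||F|| by the triangle and Cauchy-Schwarz inequalities.
rng = np.random.default_rng(2)
for _ in range(100):
    F1, s, y = (rng.standard_normal(6) for _ in range(3))
    d = -F1 + (y @ F1) / (y @ y) * y - (s @ F1) / (s @ s) * s
    assert np.linalg.norm(d) <= 3 * np.linalg.norm(F1) + 1e-12
```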
Theorem 1. 
Let the sequences $\{z_k\}$ and $\{x_k\}$ be generated by the SMDFP algorithm. Then
$\liminf_{k \to \infty} \|F_k\| = 0.$  (51)
Proof. 
To prove that (51) holds, we consider the following two cases.
Case 1. Suppose the sequence $\{x_k\}$ is generated by the SMDFP method. Then, by Remark 2, we have
$\lim_{k \to \infty} \alpha_k \|d_k\| = 0.$  (52)
If
$\liminf_{k \to \infty} \|d_k\| = 0,$  (53)
then
$\liminf_{k \to \infty} \|F_k\| = 0.$  (54)
By the continuity of F, the sequence $\{x_k\}$ then has an accumulation point $x^*$ such that $F(x^*) = 0$. Since $\{\|x_k - x^*\|\}$ converges and $x^*$ is an accumulation point of $\{x_k\}$, the whole sequence $\{x_k\}$ converges to $x^*$.
Case 2. Suppose (51) is not true; then there exists some positive constant δ such that
$\|F_k\| \ge \delta > 0, \quad \forall k \ge 0.$  (55)
Using the Cauchy–Schwarz inequality on the descent condition, we have
$\|F_k\| \|d_k\| \ge -F_k^T d_k \ge \|F_k\|^2 \ge 0, \quad \forall k \ge 0,$
which implies that
$\|d_k\| \ge \|F_k\| > 0, \quad \forall k \ge 0.$  (56)
Using (55) and (56), we have
$\liminf_{k \to \infty} \|d_k\| > 0.$
Now, by using (52) and (56), we have
$\lim_{k \to \infty} \alpha_k = 0.$  (57)
Since (57) holds, by the definition of $\alpha_k$, the step size $\alpha_k \rho^{-1}$ does not satisfy the line search (23), i.e.,
$-F(x_k + \alpha_k \rho^{-1} d_k)^T d_k < \theta \alpha_k \rho^{-1} \|F(x_k + \alpha_k \rho^{-1} d_k)\| \|d_k\|^2.$  (58)
From the boundedness of $\{x_k\}$ and $\{d_k\}$, we can choose subsequences such that $x_k \to x^*$ and $d_k \to \bar{d}$; letting $k \to \infty$ in the above inequality, (58) becomes
$-F(x^*)^T \bar{d} \le 0.$  (59)
Moreover, letting $k \to \infty$ in the bound $-F_k^T d_k \ge \|F_k\|^2$ and using (55), we obtain
$-F(x^*)^T \bar{d} \ge \delta^2 > 0.$  (60)
Inequalities (59) and (60) contradict each other. Hence,
$\liminf_{k \to \infty} \|F_k\| = 0$
is true and the proof is complete. □

4. Numerical Experimentation

In this section, we perform some numerical experiments to validate the SMDFP algorithm by comparing the computed results with the conjugate gradient hybrid (CGH) method [61] and with the generalized hybrid CGPM–based (GHCGP) method [62]. All algorithms are written in Matlab R2014 on an HP Core i5 (Intel 8th Gen) personal computer. For a uniform comparison, we used the published initial values for the compared algorithms, while for the SMDFP algorithm we set θ = 0.0001 and ρ = 0.9. A run is stopped either when the iteration limit is exceeded or when $\|F(x_k)\| \le 10^{-11}$. The following test problems are taken from [63,64,65,66,67,68].
Problem 1 
([63,64]). Set X = R + n and function F ( x ) is described as
$F_i(x) = \exp(x_i) - 1, \quad i = 1, 2, 3, \ldots, n.$
Problem 2 
([65]). Set X = R + n and function F ( x ) is described as
$F_1(x) = \cos(x_1) + 3x_1 + 8\exp(x_2) - 9,$
$F_i(x) = \cos(x_i) + 3x_i + 8\exp(x_{i-1}) - 9, \quad i = 2, 3, 4, \ldots, n.$
Problem 3 
([66]). Set X = R + n and function F ( x ) is described as
$F_i(x) = x_i - 0.1\, x_{i+1}^2, \quad i = 1, 2, \ldots, n-1,$
$F_n(x) = x_n - 0.1\, x_1^2.$
Problem 4 
([67]). Set X = R + n and function F ( x ) is described as
F i ( x ) = x i cos ( x i 1 n ) x n sin ( x i ) 1 ( x i 1 ) 2 1 n i = 1 n x i , for i = 1 , 2 , 3 , , n .
Problem 5 
([68]). Set X = R + n and function F ( x ) is described as
$F_i(x) = x_i^2 + 10 x_i, \quad i = 1, 2, 3, \ldots, n.$
Problem 6 
([63]). Set X = R + n and function F ( x ) is described as
$F_i(x) = \log(x_i + 1) - \dfrac{x_i}{n}, \quad i = 1, 2, 3, \ldots, n.$
Problem 7 
([63,64]). Set X = R + n and function F ( x ) is described as
$F_i(x) = 2x_i - \sin(x_i), \quad i = 1, 2, 3, \ldots, n.$
Problem 8 
([63]). Set X = R + n and function F ( x ) is described as
$F_1(x) = \exp(x_1) - 1,$
$F_i(x) = \exp(x_i) + x_{i-1} - 1, \quad i = 2, 3, 4, \ldots, n.$
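For illustration, a few of the benchmark systems above can be coded directly; this is a hedged sketch assuming the componentwise definitions stated above, and the function names are ours:

```python
import numpy as np

def problem1(x):
    """Problem 1: F_i(x) = exp(x_i) - 1."""
    return np.exp(x) - 1.0

def problem7(x):
    """Problem 7: F_i(x) = 2 x_i - sin(x_i)."""
    return 2.0 * x - np.sin(x)

def problem8(x):
    """Problem 8: F_1 = exp(x_1) - 1, F_i = exp(x_i) + x_{i-1} - 1."""
    F = np.exp(x) - 1.0
    F[1:] += x[:-1]
    return F
```

All three systems vanish at the origin, so $x^* = 0$ lies in the solution set on $X = \mathbb{R}^n_+$.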
In Table 1, Table 2, Table 3, Table 4, Table 5, Table 6, Table 7 and Table 8, we used the initial points $x_1 = (1, 1, \ldots, 1)$, $x_2 = (1, \frac{1}{2}, \frac{1}{3}, \ldots, \frac{1}{n})$, $x_3 = (0.1, 0.1, \ldots, 0.1)$, $x_4 = (\frac{1}{n}, \frac{2}{n}, \ldots, 1)$, $x_5 = (1 - \frac{1}{n}, 1 - \frac{2}{n}, \ldots, 0)$, $x_6 = (1, 1, \ldots, 1)$, $x_7 = (\frac{n-1}{n}, \frac{n-2}{n}, \ldots, \frac{1}{n})$ and $x_8 = (\frac{1}{2}, 1, \frac{2}{3}, \ldots, \frac{2}{n})$. Moreover, the labels ITER, FEV, CPUT, and NORM stand for the number of iterations, the number of function evaluations, CPU time (in seconds), and the norm of the function evaluations, respectively. The failure of an algorithm is denoted by “_”. We further show the numerical performance of the SMDFP algorithm along with the CGH [61] and GHCGP [62] methods in terms of the number of iterations, function evaluations, CPU times, and error estimation.
In Table 1 and Table 2, the SMDFP algorithm requires fewer iterations and function evaluations, shorter CPU times, and achieves smaller errors than the CGH [61] and GHCGP [62] methods. In Table 3, Table 5 and Table 6, the SMDFP algorithm requires more iterations and function evaluations than the two compared methods but still achieves shorter CPU times and smaller errors. The CGH method failed for Problems 4 and 7, as shown in Table 4 and Table 7. Furthermore, Table 8 shows that the GHCGP algorithm failed for Problem 8. Overall, the SMDFP algorithm is more efficient than both the CGH and GHCGP algorithms in terms of the number of iterations, function evaluations, CPU times, and error estimation, as shown in Table 1, Table 2, Table 3, Table 4, Table 5, Table 6, Table 7 and Table 8.
Next, the robustness of the SMDFP algorithm is illustrated in Figure 1, Figure 2 and Figure 3, which show the performance profiles based on the procedure of Dolan and Moré [69]. In Figure 1, the top curve, corresponding to the SMDFP algorithm with respect to the number of iterations, leads the curves of the CGH [61] and GHCGP [62] methods. Similarly, for function evaluations and CPU times, the SMDFP algorithm performs best, as shown in Figure 2 and Figure 3, respectively. These profiles show that the proposed algorithm leads in all three measures: iterations, function evaluations, and CPU times.

5. Applications

5.1. Image Restoration

This section describes techniques for reducing noise and recovering lost image resolution, with applications in medical imaging [70], astronomical imaging [71], film restoration [72], and image coding [73]. Let $x^*$ be the original sparse signal, and $t \in \mathbb{R}^m$ be an observation satisfying
$t = U x^*,$  (61)
where $U \in \mathbb{R}^{m \times n}$ ($m \ll n$) is a linear operator. This problem amounts to finding solutions of a sparse, ill-conditioned linear system of equations. Following Bruckstein et al. [74], a function containing a quadratic ($\ell_2$) error term as well as a sparse $\ell_1$–regularization term is minimized:
$\min_x \dfrac{1}{2}\|t - Ux\|_2^2 + \beta \|x\|_1,$  (62)
where $x \in \mathbb{R}^n$, β is a nonnegative balance parameter, $\|x\|_2$ denotes the Euclidean norm of x, and $\|x\|_1$ is the $\ell_1$–norm of x. Problem (62) is a convex unconstrained minimization problem commonly used in compressive sensing when the original signal is sparse or nearly sparse in some orthogonal basis.
Many iterative techniques have been introduced in the literature, such as those of Figueiredo et al. [75], Hale et al. [76], Figueiredo et al. [77], Van den Berg et al. [78], Beck et al. [79], and Hager et al. [80], for solving (62). The GPSR method presents (62) as a quadratic problem via the following steps.
Let $x \in \mathbb{R}^n$ be split into its positive and negative parts:
$x = q - r, \quad q \ge 0, \quad r \ge 0,$  (63)
where $q_i = (x_i)_+$, $r_i = (-x_i)_+$ for all $i = 1, 2, \ldots, n$, and $(\cdot)_+ = \max\{0, \cdot\}$. By the definition of the $\ell_1$–norm, we have $\|x\|_1 = e_n^T q + e_n^T r$, where $e_n = (1, 1, \ldots, 1)^T \in \mathbb{R}^n$. Thus, (62) can be written as
$\min_{q, r} \dfrac{1}{2}\|t - U(q - r)\|_2^2 + \beta e_n^T q + \beta e_n^T r, \quad q \ge 0, \quad r \ge 0,$  (64)
which is a bound–constrained quadratic problem. Moreover, following Figueiredo et al. [77], a standard form of problem (64) can be written as
$\min_v \dfrac{1}{2} v^T A v + c^T v, \quad \text{such that } v \ge 0,$  (65)
where
$v = \begin{pmatrix} q \\ r \end{pmatrix}, \quad b = U^T t, \quad c = \beta e_{2n} + \begin{pmatrix} -b \\ b \end{pmatrix}, \quad A = \begin{pmatrix} U^T U & -U^T U \\ -U^T U & U^T U \end{pmatrix}.$
Problem (65) is convex quadratic because A is a positive semi-definite matrix, as proved by Xiao et al. [5], who also transformed (65) into a linear variational inequality problem equivalent to a linear complementarity problem. Moreover, v is a solution of the linear complementarity problem if and only if it is a solution of the nonlinear equation
$f(v) = \min\{v, Av + c\} = 0,$  (66)
which is continuous and monotone, as shown by Xiao et al. [5]. Hence, problem (66) can be solved using the SMDFP method.
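The residual (66) never requires forming the $2n \times 2n$ matrix A explicitly: every product $Av$ reduces to applications of U and $U^T$. A hedged sketch of this matrix-free evaluation (the function name `lcp_residual` is ours):

```python
import numpy as np

def lcp_residual(v, U, t, beta):
    """Residual f(v) = min(v, Av + c) of the reformulated problem (66),
    with A = [[G, -G], [-G, G]] for G = U^T U and c = beta*e_{2n} + [-b; b],
    b = U^T t. The product A v is computed matrix-free."""
    n = U.shape[1]
    q, r = v[:n], v[n:]
    b = U.T @ t
    Gqr = U.T @ (U @ (q - r))          # G (q - r) with two thin mat-vecs
    Av = np.concatenate([Gqr, -Gqr])   # A v = [G(q-r); -G(q-r)]
    c = beta * np.ones(2 * n) + np.concatenate([-b, b])
    return np.minimum(v, Av + c)

# v solves the linear complementarity problem iff lcp_residual(v, ...) == 0
```

For $U = I$, the minimizer of (62) is the soft-thresholded vector $x_i = \operatorname{sign}(b_i)\max(|b_i| - \beta, 0)$, and one can check that its split $(q, r)$ makes this residual vanish.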

5.2. Implementation

Table 9 shows a numerical comparison of four methods, namely the SMDFP method, the CGD method [45], the PSGM method [81], and the TPRP method [82], applied to seven different images, including baby, COMSATS, fox, horse, Lena, and Thai-Culture. The mean squared error (MSE) [45] between the original image $x^*$ and the restored image x is
$\mathrm{MSE} = \dfrac{1}{n}\|x^* - x\|^2,$  (67)
and the signal–to–noise ratio (SNR) [45] of the recovered images is
$\mathrm{SNR} = 20 \times \log_{10} \dfrac{\|x^*\|}{\|x^* - x\|}.$  (68)
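Both quality metrics are one-liners; a small sketch (our own illustration, with our function names) matching (67) and (68):

```python
import numpy as np

def mse(x_true, x_rec):
    """Mean squared error (67)."""
    return np.linalg.norm(x_true - x_rec) ** 2 / x_true.size

def snr(x_true, x_rec):
    """Signal-to-noise ratio (68), in dB; larger is better."""
    return 20.0 * np.log10(np.linalg.norm(x_true)
                           / np.linalg.norm(x_true - x_rec))

x_true = np.array([1.0, 0.0, 2.0, 0.0])
assert mse(x_true, x_true) == 0.0
```

Note that the SNR diverges for an exact reconstruction, so it is only evaluated on recovered signals with nonzero residual error.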
Using both MSE and SNR, we measure the image restoration quality, taking ρ = 6 and θ = 0.0001. We studied a compressive sensing situation in which the aim is to reconstruct a length-n sparse signal from m observations, where $m \ll n$. Due to the PC’s storage restrictions, we test a modest-size signal with $n = 2^{11}$ and $m = 2^{9}$, where the original signal has $2^{6}$ randomly placed non-zero elements. The random matrix U is the Gaussian matrix created by the Matlab command randn(m, n). The measurement t in this test involves noise,
$t = U x^* + \phi,$  (69)
where φ is Gaussian noise distributed as $N(0, \sigma^2 I)$ with $\sigma^2 = 10^{-4}$. We used $f(x) = \frac{1}{2}\|t - Ux\|_2^2 + \beta \|x\|_1$ as the merit function and $x_0 = U^T t$ as the starting point; β is compelled to decrease as adopted in [75], and the iteration terminates when
$\mathrm{Tolerance} = \dfrac{|f(x_k) - f(x_{k-1})|}{|f(x_{k-1})|} < 10^{-10}.$  (70)
All of the codes in this section were executed on the same machine and software described in Section 4. In summary, Table 9 shows that the SMDFP method outperforms the CGD [45], PSGM [81], and TPRP [82] methods in the number of iterations, CPU time, and the SNR quality of the recovered images. Consequently, Figure 4, Figure 5, Figure 6, Figure 7, Figure 8, Figure 9 and Figure 10 show that the proposed method restored all of the test images with fewer iterations and less CPU time.

6. Conclusions

This paper presents a one-parameter SMDFP method for solving a system of monotone nonlinear equations with convex constraints. We scaled one term of the DFP update formula and obtained the optimal value of the scaling parameter by minimizing a measure function. Based on this optimal value, we derived a modified search direction for the DFP algorithm. The proposed method is globally convergent under the monotonicity and Lipschitz continuity assumptions. Its robustness is demonstrated by solving large-scale monotone nonlinear equations and comparing against the related CGH and GHCGP methods. Lastly, the algorithm is successfully applied to some image restoration problems. The proposed scaled DFP direction can further be applied to unconstrained optimization problems, the motion control of two coplanar joint robot manipulators, and many other problems.

Author Contributions

Conceptualization, N.U., J.S., X.J., A.M.A., N.P. and B.P.; Methodology, N.U., A.S., X.J., A.M.A. and N.P.; Software, J.S., X.J., A.M.A. and N.P.; Validation, J.S., X.J., N.P. and S.K.S.; Formal analysis, A.S.; Investigation, N.U., A.S. and S.K.S.; Resources, A.M.A.; Data curation, N.U. and S.K.S.; Writing—original draft, N.U., J.S., X.J., A.M.A., N.P., S.K.S. and B.P.; Writing—review & editing, N.U., A.S., N.P. and B.P.; Visualization, A.M.A. and N.P.; Supervision, A.S., J.S., X.J. and B.P.; Project administration, N.P. and B.P.; Funding acquisition, N.P. and B.P. All authors have read and agreed to the published version of the manuscript.

Funding

The sixth author was supported by Phetchabun Rajabhat University and Thailand Science Research and Innovation (grant number 182093). The eighth author was partially supported by Chiang Mai University and Fundamental Fund 2023, Chiang Mai University, and the NSRF via the Program Management Unit for Human Resources and Institutional Development, Research and Innovation (grant number B05F640183).

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Prajna, S.; Parrilo, P.A.; Rantzer, A. Nonlinear control synthesis by convex optimization. IEEE Trans. Autom. Control 2004, 49, 310–314. [Google Scholar] [CrossRef] [Green Version]
  2. Abubakar, A.B.; Kumam, P.; Mohammad, H.; Awwal, A.M. An efficient conjugate gradient method for convex constrained monotone nonlinear equations with applications. Mathematics 2019, 7, 767. [Google Scholar] [CrossRef] [Green Version]
3. Hu, Y.; Wang, Y. An efficient projected gradient method for convex constrained monotone equations with applications in compressive sensing. J. Appl. Math. Phys. 2020, 8, 983–998.
4. Liu, J.K.; Du, X.L. A gradient projection method for the sparse signal reconstruction in compressive sensing. Appl. Anal. 2018, 97, 2122–2131.
5. Xiao, Y.; Wang, Q.; Hu, Q. Non-smooth equations based method for l1-norm problems with applications to compressive sensing. Nonlinear Anal. Theory Methods Appl. 2011, 74, 3570–3577.
6. Luo, Z.Q.; Yu, W. An introduction to convex optimization for communications and signal processing. IEEE J. Sel. Areas Commun. 2006, 24, 1426–1438.
7. Evgeniou, T.; Pontil, M.; Toubia, O. A convex optimization approach to modeling consumer heterogeneity in conjoint estimation. Mark. Sci. 2007, 26, 805–818.
8. Bello, L.; Raydan, M. Convex constrained optimization for the seismic reflection tomography problem. J. Appl. Geophys. 2007, 62, 158–166.
9. Solodov, M.V.; Svaiter, B.F. A globally convergent inexact Newton method for systems of monotone equations. In Reformulation: Nonsmooth, Piecewise Smooth, Semismooth and Smoothing Methods; Fukushima, M., Qi, L., Eds.; Springer: Boston, MA, USA, 1998; Volume 22, pp. 355–369.
10. Davidon, W.C. Variable metric method for minimization. SIAM J. Optim. 1991, 1, 1–17.
11. Fletcher, R.; Powell, M.J.D. A rapidly convergent descent method for minimization. Comput. J. 1963, 6, 163–168.
12. Dingguo, P. Superlinear convergence of the DFP algorithm without exact line search. Acta Math. Appl. Sin. 2001, 17, 430–432.
13. Dingguo, P.; Weiwen, T. A class of Broyden algorithms with revised search directions. Asia-Pac. J. Oper. Res. 1997, 14, 93–109.
14. Pu, D. Convergence of the DFP algorithm without exact line search. J. Optim. Theory Appl. 2002, 112, 187–211.
15. Pu, D.; Tian, W. The revised DFP algorithm without exact line search. J. Comput. Appl. Math. 2003, 154, 319–339.
16. Kanzow, C.; Yamashita, N.; Fukushima, M. Levenberg–Marquardt methods with strong local convergence properties for solving nonlinear equations with convex constraints. J. Comput. Appl. Math. 2005, 173, 321–343.
17. Bellavia, S.; Macconi, M.; Morini, B. A scaled trust-region solver for constrained nonlinear equations. Comput. Optim. Appl. 2004, 28, 31–50.
18. Bellavia, S.; Morini, B. An interior global method for nonlinear systems with simple bounds. Optim. Methods Softw. 2005, 20, 453–474.
19. Bellavia, S.; Morini, B.; Pieraccini, S. Constrained dogleg methods for nonlinear systems with simple bounds. Comput. Optim. Appl. 2012, 53, 771–794.
20. Yu, G. A derivative-free method for solving large-scale nonlinear systems of equations. J. Ind. Manag. Optim. 2010, 6, 149–160.
21. Liu, J.; Feng, Y. A derivative-free iterative method for nonlinear monotone equations with convex constraints. Numer. Algorithms 2019, 82, 245–262.
22. Mohammad, H.; Abubakar, A.B. A descent derivative-free algorithm for nonlinear monotone equations with convex constraints. RAIRO-Oper. Res. 2020, 54, 489–505.
23. Wang, C.; Wang, Y. A superlinearly convergent projection method for constrained systems of nonlinear equations. J. Glob. Optim. 2009, 44, 283–296.
24. Ma, F.; Wang, C. Modified projection method for solving a system of monotone equations with convex constraints. J. Appl. Math. Comput. 2010, 34, 47–56.
25. Yu, G.; Niu, S.; Ma, J. Multivariate spectral gradient projection method for nonlinear monotone equations with convex constraints. J. Ind. Manag. Optim. 2013, 9, 117–129.
26. Liu, J.K.; Li, S.J. A projection method for convex constrained monotone nonlinear equations with applications. Comput. Math. Appl. 2015, 70, 2442–2453.
27. Ou, Y.; Li, J. A new derivative-free SCG-type projection method for nonlinear monotone equations with convex constraints. J. Appl. Math. Comput. 2018, 56, 195–216.
28. Liu, J.K.; Xu, J.L.; Zhang, L.Q. Partially symmetrical derivative-free Liu–Storey projection method for convex constrained equations. Int. J. Comput. Math. 2019, 96, 1787–1798.
29. Zheng, L.; Yang, L.; Liang, Y. A modified spectral gradient projection method for solving non-linear monotone equations with convex constraints and its application. IEEE Access 2020, 8, 92677–92686.
30. Liu, Y.; Storey, C. Efficient generalized conjugate gradient algorithms, Part 1: Theory. J. Optim. Theory Appl. 1991, 69, 129–137.
31. Dai, Y.H.; Yuan, Y. A nonlinear conjugate gradient method with a strong global convergence property. SIAM J. Optim. 1999, 10, 177–182.
32. Liu, S.Y.; Huang, Y.Y.; Jiao, H.W. Sufficient descent conjugate gradient methods for solving convex constrained nonlinear monotone equations. Abstr. Appl. Anal. 2014, 2014, 305643.
33. Sun, M.; Liu, J. New hybrid conjugate gradient projection method for the convex constrained equations. Calcolo 2016, 53, 399–411.
34. Wang, X.Y.; Li, S.J.; Kou, X.P. A self-adaptive three-term conjugate gradient method for monotone nonlinear equations with convex constraints. Calcolo 2016, 53, 133–145.
35. Gao, P.; He, C. An efficient three-term conjugate gradient method for nonlinear monotone equations with convex constraints. Calcolo 2018, 55, 53.
36. Ibrahim, A.H.; Garba, A.I.; Usman, H.; Abubakar, J.; Abubakar, A.B. Derivative-free RMIL conjugate gradient method for convex constrained equations. Thai J. Math. 2019, 18, 212–232.
37. Abubakar, A.B.; Rilwan, J.; Yimer, S.E.; Ibrahim, A.H.; Ahmed, I. Spectral three-term conjugate descent method for solving nonlinear monotone equations with convex constraints. Thai J. Math. 2020, 18, 501–517.
38. Zhou, G.; Toh, K.C. Superlinear convergence of a Newton-type algorithm for monotone equations. J. Optim. Theory Appl. 2005, 125, 205–221.
39. Zhou, W.J.; Li, D.H. A globally convergent BFGS method for nonlinear monotone equations without any merit functions. Math. Comput. 2008, 77, 2231–2240.
40. Zhou, W.; Li, D. Limited memory BFGS method for nonlinear monotone equations. J. Comput. Math. 2007, 25, 89–96.
41. Zhang, L.; Zhou, W. Spectral gradient projection method for solving nonlinear monotone equations. J. Comput. Appl. Math. 2006, 196, 478–484.
42. Barzilai, J.; Borwein, J.M. Two-point step size gradient methods. IMA J. Numer. Anal. 1988, 8, 141–148.
43. Wang, C.; Wang, Y.; Xu, C. A projection method for a system of nonlinear monotone equations with convex constraints. Math. Methods Oper. Res. 2007, 66, 33–46.
44. Yu, Z.; Lin, J.; Sun, J.; Xiao, Y.; Liu, L.; Li, Z. Spectral gradient projection method for monotone nonlinear equations with convex constraints. Appl. Numer. Math. 2009, 59, 2416–2423.
45. Xiao, Y.; Zhu, H. A conjugate gradient method to solve convex constrained monotone equations with applications in compressive sensing. J. Math. Anal. Appl. 2013, 405, 310–319.
46. Hager, W.W.; Zhang, H. A new conjugate gradient method with guaranteed descent and an efficient line search. SIAM J. Optim. 2005, 16, 170–192.
47. Hager, W.W.; Zhang, H. CG_DESCENT, a conjugate gradient method with guaranteed descent. ACM Trans. Math. Softw. 2006, 32, 113–137.
48. Muhammed, A.A.; Kumam, P.; Abubakar, A.B.; Wakili, A.; Pakkaranang, N. A new hybrid spectral gradient projection method for monotone systems of nonlinear equations with convex constraints. Thai J. Math. 2018, 16, 125–147.
49. Sabi’u, J.; Shah, A.; Waziri, M.Y.; Ahmed, K. Modified Hager–Zhang conjugate gradient methods via singular value analysis for solving monotone nonlinear equations with convex constraint. Int. J. Comput. Methods 2021, 18, 2050043.
50. Sabi’u, J.; Shah, A.; Waziri, M.Y. A modified Hager–Zhang conjugate gradient method with optimal choices for solving monotone nonlinear equations. Int. J. Comput. Math. 2021, 99, 332–354.
51. Sabi’u, J.; Shah, A. An efficient three-term conjugate gradient-type algorithm for monotone nonlinear equations. RAIRO-Oper. Res. 2021, 55, 1113–1127.
52. Andrei, N. A double parameter self-scaling memoryless BFGS method for unconstrained optimization. Comput. Appl. Math. 2020, 39, 1–14.
53. Andrei, N. A note on memory-less SR1 and memory-less BFGS methods for large-scale unconstrained optimization. Numer. Algorithms 2021, 99, 223–240.
54. Fletcher, R. Practical Methods of Optimization, 2nd ed.; John Wiley & Sons: New York, NY, USA, 1990.
55. Byrd, R.H.; Nocedal, J. A tool for the analysis of quasi-Newton methods with application to unconstrained minimization. SIAM J. Numer. Anal. 1989, 26, 727–739.
56. Andrei, N. A double parameter scaled BFGS method for unconstrained optimization. J. Comput. Appl. Math. 2018, 332, 26–44.
57. Fletcher, R. An overview of unconstrained optimization. In Algorithms for Continuous Optimization: The State of the Art; Spedicato, E., Ed.; Kluwer Academic Publishers: Boston, MA, USA, 1994; pp. 109–143.
58. Sun, W.; Yuan, Y.X. Optimization Theory and Methods: Nonlinear Programming; Springer Science + Business Media: New York, NY, USA, 2006.
59. Behrens, R.T.; Scharf, L.L. Signal processing applications of oblique projection operators. IEEE Trans. Signal Process. 1994, 42, 1413–1424.
60. Zarantonello, E.H. Projections on convex sets in Hilbert space and spectral theory: Part I: Projections on convex sets; Part II: Spectral theory. Contrib. Nonlinear Funct. Anal. 1971, 5, 237–424.
61. Halilu, A.S.; Majumder, A.; Waziri, M.Y.; Ahmed, K. Signal recovery with convex constrained nonlinear monotone equations through conjugate gradient hybrid approach. Math. Comput. Simul. 2021, 187, 520–539.
62. Yin, J.H.; Jian, J.B.; Jiang, X.Z. A generalized hybrid CGPM-based algorithm for solving large-scale convex constrained equations with applications to image restoration. J. Comput. Appl. Math. 2021, 391, 113423.
63. Sabi’u, J.; Shah, A.; Waziri, M.Y. Two optimal Hager–Zhang conjugate gradient methods for solving monotone nonlinear equations. Appl. Numer. Math. 2020, 153, 217–233.
64. Ullah, N.; Sabi’u, J.; Shah, A. A derivative-free scaling memoryless Broyden–Fletcher–Goldfarb–Shanno method for solving a system of monotone nonlinear equations. Numer. Linear Algebra Appl. 2021, 28, e2374.
65. Abubakar, A.B.; Sabi’u, J.; Kumam, P.; Shah, A. Solving nonlinear monotone operator equations via modified SR1 update. J. Appl. Math. Comput. 2021, 67, 343–373.
66. Halilu, A.S.; Waziri, M.Y. A transformed double step length method for solving large-scale systems of nonlinear equations. J. Numer. Math. Stoch. 2017, 9, 20–32.
67. Waziri, M.Y.; Muhammad, L.; Sabi’u, J. A simple three-term conjugate gradient algorithm for solving symmetric systems of nonlinear equations. Int. J. Adv. Appl. Sci. 2016, 5, 118–127.
68. Birgin, E.G.; Martínez, J.M. A spectral conjugate gradient method for unconstrained optimization. Appl. Math. Optim. 2001, 43, 117–128.
69. Dolan, E.D.; Moré, J.J. Benchmarking optimization software with performance profiles. Math. Program. 2002, 91, 201–213.
70. Yasrib, A.; Suhaimi, M.A. Image processing in medical applications. J. Inf. Technol. 2003, 3, 63–68.
71. Collins, K.A.; Kielkopf, J.F.; Stassun, K.G.; Hessman, F.V. Image processing and photometric extraction for ultra-precise astronomical light curves. Astron. J. 2017, 153, 177.
72. Mishra, R.; Mittal, N.; Khatri, S.K. Digital image restoration using image filtering techniques. IEEE Int. Conf. Autom. Comput. Tech. Manag. 2019, 6, 268–272.
73. Sun, S.; He, T.; Chen, Z. Semantic structured image coding framework for multiple intelligent applications. IEEE Trans. Circuits Syst. Video Technol. 2021, 31, 3631–3642.
74. Bruckstein, A.M.; Donoho, D.L.; Elad, M. From sparse solutions of systems of equations to sparse modeling of signals and images. SIAM Rev. 2009, 51, 34–81.
75. Figueiredo, M.A.; Nowak, R.D. An EM algorithm for wavelet-based image restoration. IEEE Trans. Image Process. 2003, 12, 906–916.
76. Hale, E.T.; Yin, W.; Zhang, Y. A Fixed-Point Continuation Method for l1-Regularized Minimization with Applications to Compressed Sensing; Technical Report TR07-07; Rice University: Houston, TX, USA, 2007; Volume 43, 44p.
77. Figueiredo, M.A.; Nowak, R.D.; Wright, S.J. Gradient projection for sparse reconstruction: Application to compressed sensing and other inverse problems. IEEE J. Sel. Top. Signal Process. 2007, 1, 586–597.
78. Van Den Berg, E.; Friedlander, M.P. Probing the Pareto frontier for basis pursuit solutions. SIAM J. Sci. Comput. 2008, 31, 890–912.
79. Beck, A.; Teboulle, M. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2009, 2, 183–202.
80. Hager, W.W.; Phan, D.T.; Zhang, H. Gradient-based methods for sparse recovery. SIAM J. Imaging Sci. 2011, 4, 146–165.
81. Awwal, A.M.; Kumam, P.; Mohammad, H.; Watthayu, W.; Abubakar, A.B. A Perry-type derivative-free algorithm for solving nonlinear systems of equations and minimizing l1-regularized problems. Optimization 2021, 70, 1231–1259.
82. Ibrahim, A.H.; Deepho, J.; Abubakar, A.B.; Adamu, A. A three-term Polak–Ribière–Polyak derivative-free method and its application to image restoration. Sci. Afr. 2021, 13, e00880.
Figure 1. Performance of the SMDFP, CGH [61], and GHCGP [62] methods in terms of the number of iterations.
Figure 2. Performance of the SMDFP, CGH [61], and GHCGP [62] methods in terms of the number of function evaluations.
Figure 3. Performance of the SMDFP, CGH [61], and GHCGP [62] methods in terms of CPU time.
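Figures 1–3 are performance profiles in the sense of Dolan and Moré [69]. As an illustrative sketch only (not the authors' code), the following Python fragment shows how such a profile can be computed from a table of per-problem costs; the cost array `T` and the grid of tau values are hypothetical placeholders.

```python
import numpy as np

def performance_profile(T, taus):
    """Dolan-More performance profile.

    T is an (n_problems, n_solvers) array of a cost metric
    (iterations, function evaluations, or CPU time), with
    np.inf marking a failure. Returns an array of shape
    (len(taus), n_solvers): the fraction of problems each
    solver handles within a factor tau of the best solver.
    """
    # Performance ratios r_{p,s} = cost of solver s on problem p
    # divided by the best cost on problem p.
    ratios = T / T.min(axis=1, keepdims=True)
    n_problems = T.shape[0]
    return np.array([(ratios <= tau).sum(axis=0) / n_problems
                     for tau in taus])

# Hypothetical cost data: 3 test problems x 2 solvers
# (the second solver fails on the last problem).
T = np.array([[1.0, 2.0],
              [4.0, 2.0],
              [3.0, np.inf]])
rho = performance_profile(T, taus=[1.0, 2.0, 4.0])
```

For each tau, the profile value is the fraction of test problems a solver solves within a factor tau of the best solver on that problem; this is the quantity plotted on the vertical axis of the figures.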
Figure 4. Comparison of the original image, the blurred image, and the images restored by the SMDFP, CGD, PSGM, and TPRP methods, in terms of the number of iterations, CPU time, MSE, and SNR.
Figure 5. Comparison of the original image, the blurred image, and the images restored by the SMDFP, CGD, PSGM, and TPRP methods, in terms of the number of iterations, CPU time, MSE, and SNR.
Figure 6. Comparison of the original image, the blurred image, and the images restored by the SMDFP, CGD, PSGM, and TPRP methods, in terms of the number of iterations, CPU time, MSE, and SNR.
Figure 7. Comparison of the original image, the blurred image, and the images restored by the SMDFP, CGD, PSGM, and TPRP methods, in terms of the number of iterations, CPU time, MSE, and SNR.
Figure 8. Comparison of the original image, the blurred image, and the images restored by the SMDFP, CGD, PSGM, and TPRP methods, in terms of the number of iterations, CPU time, MSE, and SNR.
Figure 9. Comparison of the original image, the blurred image, and the images restored by the SMDFP, CGD, PSGM, and TPRP methods, in terms of the number of iterations, CPU time, MSE, and SNR.
Figure 10. Comparison of the original image, the blurred image, and the images restored by the SMDFP, CGD, PSGM, and TPRP methods, in terms of the number of iterations, CPU time, MSE, and SNR.
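The MSE and SNR values reported in Figures 4–10 are standard restoration-quality metrics. The sketch below shows one common way to compute them (an assumption: the paper may use a slightly different normalization, e.g. PSNR); the toy arrays `x` and `y` are hypothetical stand-ins for the original and restored images.

```python
import numpy as np

def mse(x_true, x_restored):
    # Mean squared error between the original and the restored image.
    return np.mean((x_true - x_restored) ** 2)

def snr(x_true, x_restored):
    # Signal-to-noise ratio in dB:
    # 10 * log10( ||x||^2 / ||x - x_restored||^2 ).
    return 10.0 * np.log10(np.sum(x_true ** 2)
                           / np.sum((x_true - x_restored) ** 2))

# Toy 4x4 "image" and a uniformly perturbed restoration of it.
x = np.ones((4, 4))
y = x + 0.1
```

Lower MSE and higher SNR indicate a restoration closer to the original image.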
Table 1. Numerical comparison of the SMDFP, CGH [61], and GHCGP [62] methods.
Problem 1
DIMENSION | INITIAL POINT | SMDFP: ITER, FEV, CPUT, NORM | CGH: ITER, FEV, CPUT, NORM | GHCGP: ITER, FEV, CPUT, NORM
1000 x 1 130.035122013016802.2364768.41 × 10 9 30610.1430567.2 × 10 9
x 2 130.0147510130.007694031640.0300849.34 × 10 9
x 3 130.008640130.004303032680.0283659.12 × 10 9
x 4 130.0046090130.003516031660.0262676.23 × 10 9
x 5 130.0042320130.0034990130.0039610
x 6 130.0045010713920.1414678.73 × 10 9 30610.025298.66 × 10 9
x 7 130.00422409512360.1478449.5 × 10 9 29590.0251365.33 × 10 9
x 8 130.0042190130.0033110130.0040240
5000 x 1 130.01969013417321.8262288.17 × 10 9 31630.2820298.05 × 10 9
x 2 130.016550130.007681033680.4600685.22 × 10 9
x 3 130.0132030130.006739034720.4610735.1 × 10 9
x 4 130.035350130.006651032680.6056446.97 × 10 9
x 5 130.0119060130.0204840130.0133280
x 6 130.016778011114440.636658.47 × 10 9 31630.5484169.68 × 10 9
x 7 130.0126909912880.620489.22 × 10 9 30610.6293815.95 × 10 9
x 8 130.0156830130.0065590130.0170170
15,000 x 1 130.01634013517451.4343419.38 × 10 9 32650.9863645.69 × 10 9
x 2 130.0170120130.007016033681.1363867.38 × 10 9
x 3 130.0212890130.007927034721.1534977.21 × 10 9
x 4 130.0147810130.009047032681.0395889.86 × 10 9
x 5 130.0155410130.0068130130.0251950
x 6 130.015866011214571.408399.72 × 10 9 32651.2964896.85 × 10 9
x 7 130.020727010113141.663468.59 × 10 9 30611.4072968.42 × 10 9
x 8 130.0140680130.0144230130.028780
50,000 x 1 130.035269013917978.6601529.1 × 10 9 33672.6989266.36 × 10 9
x 2 130.0364290130.026383034702.6205598.25 × 10 9
x 3 130.040110130.02646035742.3676718.06 × 10 9
x 4 130.0332720130.029999034722.2396525.51 × 10 9
x 5 130.0324130130.1587570130.0802050
x 6 130.0694920116150939.250099.44 × 10 9 33672.2841777.65 × 10 9
x 7 130.031998010513668.3756918.34 × 10 9 31632.4398089.42 × 10 9
x 8 130.0426040130.0241090130.04870
100,000 x 1 130.0729120141182315.782748.48 × 10 9 33672.8000379 × 10 9
x 2 130.0905270130.044907035723.2626035.84 × 10 9
x 3 130.0786450130.045901036763.0945765.7 × 10 9
x 4 130.0753460130.054425034722.7373217.79 × 10 9
x 5 130.1266570130.0517590130.1284150
x 6 130.0712920118153514.856328.8 × 10 9 34693.0821055.41 × 10 9
x 7 130.0918890106137912.699579.57 × 10 9 32652.5895036.66 × 10 9
x 8 130.1115110130.036060130.0846730
Table 2. Numerical comparison of the SMDFP, CGH [61], and GHCGP [62] methods.
Problem 2
DIMENSION | INITIAL POINT | SMDFP: ITER, FEV, CPUT, NORM | CGH: ITER, FEV, CPUT, NORM | GHCGP: ITER, FEV, CPUT, NORM
1000 x 1 130.0051110130.201878013550.0574291.93 × 10 9
x 2 130.0049340130.004667012500.022922.44 × 10 9
x 3 130.0048620130.004516014610.028382 × 10 9
x 4 130.004810130.004824015690.0307711.66 × 10 9
x 5 130.0047740130.004738012550.0239972.48 × 10 9
x 6 130.0048730130.004792012500.0229781.78 × 10 9
x 7 130.0045630130.004552011450.019461.91 × 10 9
x 8 130.0057620130.004795014670.0283112.31 × 10 9
5000 x 1 130.0181520130.016958013551.2301514.31 × 10 9
x 2 130.019620130.054983012500.9882195.45 × 10 9
x 3 130.0213610130.071534014612.2083044.46 × 10 9
x 4 130.0495040130.028522015692.3341713.72 × 10 9
x 5 130.0288370130.026218012550.9764655.54 × 10 9
x 6 130.0144890130.023154012501.1200863.97 × 10 9
x 7 130.0208180130.036165011451.0117874.27 × 10 9
x 8 130.0356580130.023837014671.6453715.18 × 10 9
15,000 x 1 130.0243380130.042924013551.5542336.09 × 10 9
x 2 130.0259280130.0635012502.3110097.71 × 10 9
x 3 130.0286520130.033185014611.5746.31 × 10 9
x 4 130.0270350130.049415015691.4089585.26 × 10 9
x 5 130.0285360130.231511012551.1765727.84 × 10 9
x 6 130.0201130130.03914012501.1318565.62 × 10 9
x 7 130.0339930130.086277011451.1106476.04 × 10 9
x 8 130.0247690130.102031014671.0899787.32 × 10 9
50,000 x 1 130.1005480130.081128014591.8408331.63 × 10 9
x 2 130.0596290130.107223013541.4953332.07 × 10 9
x 3 130.0604390130.089753015652.1813821.69 × 10 9
x 4 130.0667350130.083639016732.3072871.41 × 10 9
x 5 130.0574370130.09488013591.9425672.1 × 10 9
x 6 130.0608670130.111695013541.8331591.51 × 10 9
x 7 130.0640290130.156286012491.235581.62 × 10 9
x 8 130.0659880130.326879015712.2453181.96 × 10 9
100,000 x 1 130.1190220130.172609014592.5017672.31 × 10 9
x 2 130.1275090130.314452013542.9472862.93 × 10 9
x 3 130.1323270130.224496015652.8869442.4 × 10 9
x 4 130.1208370130.28043016733.2759972 × 10 9
x 5 130.1630170130.218818013592.9792612.97 × 10 9
x 6 130.1472340130.317258013541.9774792.13 × 10 9
x 7 130.1143790130.164473012492.3497582.29 × 10 9
x 8 130.1551110130.164645015712.8607742.78 × 10 9
Table 3. Numerical comparison of the SMDFP, CGH [61], and GHCGP [62] methods.
Problem 3
DIMENSION | INITIAL POINT | SMDFP: ITER, FEV, CPUT, NORM | CGH: ITER, FEV, CPUT, NORM | GHCGP: ITER, FEV, CPUT, NORM
1000 x 1 273520.1173647.01 × 10 9 10413530.3183949.22 × 10 9 32650.0615229.06 × 10 9
x 2 283650.1235487.87 × 10 9 2160.006879034690.0319745.68 × 10 9
x 3 293780.1207016.79 × 10 9 2160.00681035710.0316235.47 × 10 9
x 4 303910.1340759.73 × 10 9 2160.007006036730.0325788.43 × 10 9
x 5 334300.1464457.54 × 10 9 9730.0186689.97 × 10 9 39790.0363478.49 × 10 9
x 6 263390.1128257.14 × 10 9 10213270.2972558.12 × 10 9 31630.0281168.15 × 10 9
x 7 243130.1005426.87 × 10 9 9412230.2856199.71 × 10 9 29590.02846.01 × 10 9
x 8 ------------
5000 x 1 283652.1392386.82 × 10 9 108140513.726128.95 × 10 9 34692.2149685.06 × 10 9
x 2 293782.2038597.66 × 10 9 2160.095661035712.0195776.35 × 10 9
x 3 303912.1793316.61 × 10 9 2160.100748036731.7223366.12 × 10 9
x 4 314041.6936199.47 × 10 9 2160.168732037751.4935529.43 × 10 9
x 5 344431.4190287.34 × 10 9 9730.6119322.23 × 10 9 40813.06259.49 × 10 9
x 6 273520.7467126.95 × 10 9 10513667.8249719.71 × 10 9 32652.2773459.11 × 10 9
x 7 253260.7613516.69 × 10 9 9812755.2461639.43 × 10 9 30611.5970366.72 × 10 9
x 8 ------------
10,000 x 1 283651.483549.65 × 10 9 11014316.3555678.34 × 10 9 34691.9589657.16 × 10 9
x 2 303911.3900344.71 × 10 9 2160.074192035712.8655968.98 × 10 9
x 3 303911.4312569.35 × 10 9 2160.067912036732.3827788.65 × 10 9
x 4 324171.536835.83 × 10 9 2160.052701038772.5294816.66 × 10 9
x 5 354561.3527574.52 × 10 9 9730.5847483.15 × 10 9 41833.0486916.71 × 10 9
x 6 273520.9240189.83 × 10 9 10713925.417079.05 × 10 9 33672.1227356.44 × 10 9
x 7 253260.8985039.46 × 10 9 10013014.892728.78 × 10 9 30611.5775859.5 × 10 9
x 8 ------------
50,000 x 1 293784.1407489.39 × 10 9 113147011.605759.98 × 10 9 35712.6547838.01 × 10 9
x 2 314044.1579864.58 × 10 9 2160.120349037752.5372685.02 × 10 9
x 3 314043.6554499.1 × 10 9 2160.140539037752.572479.67 × 10 9
x 4 334303.8918955.67 × 10 9 2160.14212039792.6417957.45 × 10 9
x 5 364694.2139194.39 × 10 9 9730.7309197.05 × 10 9 42853.0881767.51 × 10 9
x 6 283653.4386469.56 × 10 9 111144410.966638.78 × 10 9 34692.174347.2 × 10 9
x 7 263393.2964829.2 × 10 9 10413539.5679168.53 × 10 9 32651.8412475.31 × 10 9
x 8 ------------
100,000 x 1 303917.6757925.78 × 10 9 115149618.448369.29 × 10 9 36733.1913285.66 × 10 9
x 2 314047.1273166.48 × 10 9 2160.173612037753.533667.1 × 10 9
x 3 324177.3651495.6 × 10 9 2160.260983038773.1794146.84 × 10 9
x 4 334307.3232628.02 × 10 9 2160.238622040813.4794035.27 × 10 9
x 5 364697.9544146.21 × 10 9 9731.2615949.97 × 10 9 43874.1375745.31 × 10 9
x 6 293786.6416385.89 × 10 9 113147021.350798.19 × 10 9 35712.872595.09 × 10 9
x 7 273526.0819615.66 × 10 9 105136615.937759.79 × 10 9 32652.9298857.51 × 10 9
x 8 ------------
Table 4. Numerical comparison of the SMDFP, CGH [61], and GHCGP [62] methods.
Problem 4
DIMENSION | INITIAL POINT | SMDFP: ITER, FEV, CPUT, NORM | CGH: ITER, FEV, CPUT, NORM | GHCGP: ITER, FEV, CPUT, NORM
1000 x 1 130.0050960----25750.0743885.14 × 10 9
x 2 130.0053380130.003663026780.0468034.09 × 10 9
x 3 130.0058160130.005606026790.047166.99 × 10 9
x 4 130.005750130.005469026800.0463847.42 × 10 9
x 5 130.0058510130.005245026810.0476345.28 × 10 9
x 6 130.0059180130.005187026790.0481494.88 × 10 9
x 7 130.0048530130.005062023700.0426557.88 × 10 9
x 8 130.0057340130.005934026840.0510137.34 × 10 9
5000 x 1 130.0384060----26782.1507754.61 × 10 9
x 2 130.0261290130.021583026781.4833849.18 × 10 9
x 3 130.0199680130.017991027822.7516386.28 × 10 9
x 4 130.0304780130.027568027832.0817536.66 × 10 9
x 5 130.0343820130.023907027842.0406734.75 × 10 9
x 6 130.0197770130.025307027822.466024.37 × 10 9
x 7 130.0359940130.040065024732.2287557.05 × 10 9
x 8 130.0313550130.039022027871.8303326.58 × 10 9
10,000 x 1 130.0282430----26781.6777166.53 × 10 9
x 2 130.029320130.021768027812.0646595.2 × 10 9
x 3 130.0336280130.043117027822.8129768.88 × 10 9
x 4 130.0291820130.027671027832.7477619.42 × 10 9
x 5 130.050260130.023297027841.7918716.72 × 10 9
x 6 130.0295880130.017539027822.4404356.18 × 10 9
x 7 130.0318050130.032294024731.6484559.97 × 10 9
x 8 130.0381520130.02327027871.7104349.31 × 10 9
50,000 x 1 130.0785730----27812.7426835.84 × 10 9
x 2 130.1035230130.052539028843.5099854.65 × 10 9
x 3 130.0963280130.060336028853.070017.95 × 10 9
x 4 130.0874260130.055083028862.8487448.43 × 10 9
x 5 130.0949590130.059838028873.4963316.01 × 10 9
x 6 130.0928290130.054967028853.2664815.53 × 10 9
x 7 130.0926970130.063267025762.2691988.92 × 10 9
x 8 130.0963160130.074379028902.7836918.33 × 10 9
100,000 x 1 130.1501450----27814.251158.26 × 10 9
x 2 130.18470130.084738028844.9379266.58 × 10 9
x 3 130.1749930130.096997029884.2787574.5 × 10 9
x 4 130.2486370130.124909029894.491344.77 × 10 9
x 5 130.1782610130.126066028874.3228998.5 × 10 9
x 6 130.1557950130.125207028854.356027.81 × 10 9
x 7 130.1532260130.120531026794.0622145.04 × 10 9
x 8 130.1828430130.288392029934.2998524.71 × 10 9
Table 5. Numerical comparison of the SMDFP, CGH [61], and GHCGP [62] methods.
Problem 5
DIMENSION | INITIAL POINT | SMDFP: ITER, FEV, CPUT, NORM | CGH: ITER, FEV, CPUT, NORM | GHCGP: ITER, FEV, CPUT, NORM
1000 x 1 140.0047420130.095815015610.0191065.85 × 10 9
x 2 1100.0061290130.0042630130.0040590
x 3 1140.0079350170.0049390130.0040830
x 4 1140.00732501120.0062230130.0040350
x 5 1140.00735501140.0063780130.0038650
x 6 130.0044340130.004388015610.0186633.98 × 10 9
x 7 130.0040130.004763014570.0172734.93 × 10 9
x 8 1140.00755501140.0062070130.0053590
5000 x 1 140.0372560130.030271016650.6759612.62 × 10 9
x 2 1100.0824440130.03170130.0222370
x 3 1140.0847660170.0714990130.0229720
x 4 1140.1031301120.1042740130.0265110
x 5 130.03183601140.1060670130.0487990
x 6 130.0347840130.025469015610.8871188.9 × 10 9
x 7 130.0499760130.030696015611.1634822.2 × 10 9
x 8 130.03977301140.1008210130.0257770
10,000 x 1 140.0540140130.397306016651.0785133.7 × 10 9
x 2 1100.0994020130.4003620130.0297860
x 3 1140.1330060170.0962160130.0422650
x 4 1140.11441101120.1481450130.0403030
x 5 130.03353501140.2486530130.0311490
x 6 130.0309940130.24238016651.2921462.52 × 10 9
x 7 130.0261890130.059157015611.2008333.12 × 10 9
x 8 130.0261401140.1399920130.0439850
50,000 x 1 140.0710460130.099884016651.5726048.28 × 10 9
x 2 1100.1697750130.0836290130.0906890
x 3 1140.2278580170.1879580130.0990920
x 4 130.08154301120.5109940130.32730
x 5 130.06503101140.1932260130.0672680
x 6 130.0943220130.077699016653.2556435.63 × 10 9
x 7 130.0601920130.06505015612.1916396.97 × 10 9
x 8 130.05995101140.4984530130.1368170
100,000 x 1 140.1363620130.112411017693.4763772.34 × 10 9
x 2 1100.339910130.4841940130.1163410
x 3 130.1206040170.2379960130.103360
x 4 130.12323401120.3238030130.2999150
x 5 130.10549401140.2914970130.1070720
x 6 130.107250130.1356016652.429267.96 × 10 9
x 7 130.0951510130.412831015611.7041929.85 × 10 9
x 8 130.12342601140.2934790130.0844750
Table 6. Numerical comparison of the SMDFP, CGH [61], and GHCGP [62] methods.
Problem 6
DIMENSION | INITIAL POINT | SMDFP: ITER, FEV, CPUT, NORM | CGH: ITER, FEV, CPUT, NORM | GHCGP: ITER, FEV, CPUT, NORM
1000 x 1 283650.1778636.4 × 10 9 10113140.5095688.63 × 10 9 33670.0345288.84 × 10 9
x 2 303910.189425.21 × 10 9 10113140.3908289.95 × 10 9 35710.0363228.9 × 10 9
x 3 314040.1938116.75 × 10 9 10113140.4125859.85 × 10 9 37750.0374836.28 × 10 9
x 4 334300.2125297.34 × 10 9 10013010.4334229.8 × 10 9 39790.0395438.22 × 10 9
x 5 364690.2292795.1 × 10 9 9312100.3655248.57 × 10 9 42850.0436537.71 × 10 9
x 6 273520.1747784.76 × 10 9 9912880.3901799.68 × 10 9 32650.032585.96 × 10 9
x 7 243130.1618377.74 × 10 9 9412230.3681788.53 × 10 9 29590.0299216.68 × 10 9
x 8 374820.246537.75 × 10 9 3400.014419044890.0457356.28 × 10 9
5000 x 1 293782.9808866.05 × 10 9 105136616.496038.52 × 10 9 34692.0162129.63 × 10 9
x 2 314043.3196024.92 × 10 9 10513669.6043199.83 × 10 9 36731.8711939.68 × 10 9
x 3 324172.9931086.36 × 10 9 10513666.8393249.74 × 10 9 38772.4286326.81 × 10 9
x 4 344432.7983186.89 × 10 9 10413534.0961099.74 × 10 9 40813.5889298.89 × 10 9
x 5 374822.7242224.76 × 10 9 9712622.3616679.43 × 10 9 43873.4212948.29 × 10 9
x 6 283651.8216984.51 × 10 9 10313402.7114089.55 × 10 9 33672.1890476.5 × 10 9
x 7 253261.4644417.35 × 10 9 9812752.8860328.4 × 10 9 30611.8211437.3 × 10 9
x 8 384951.9088817.2 × 10 9 3400.079459045912.7891996.72 × 10 9
10,000 x 1 293781.8987848.53 × 10 9 10613794.3062179.8 × 10 9 35712.2329176.79 × 10 9
x 2 314042.0695266.93 × 10 9 10713924.2915529.18 × 10 9 37752.1959536.82 × 10 9
x 3 324171.8316738.96 × 10 9 10713924.3232279.1 × 10 9 38773.496479.6 × 10 9
x 4 344431.8795579.69 × 10 9 10613793.9354849.1 × 10 9 41832.1093026.26 × 10 9
x 5 374822.0732776.69 × 10 9 9912883.0945138.92 × 10 9 44892.1722125.83 × 10 9
x 6 283651.652926.35 × 10 9 10513663.43798.91 × 10 9 33671.4880579.17 × 10 9
x 7 263391.4620724.51 × 10 9 9912883.2607199.66 × 10 9 31632.1130365.15 × 10 9
x 8 395082.1369444.4 × 10 9 3400.098955045912.3227949.46 × 10 9
50,000 x 1 303915.5102348.28 × 10 9 110143112.021049.53 × 10 9 36732.3105237.57 × 10 9
x 2 324175.4220926.72 × 10 9 111144411.200528.93 × 10 9 38772.4296777.6 × 10 9
x 3 334305.2118468.69 × 10 9 111144410.381668.85 × 10 9 40812.631475.35 × 10 9
x 4 354565.4188869.4 × 10 9 11014319.8649328.86 × 10 9 42852.5923656.97 × 10 9
x 5 384955.9112436.48 × 10 9 10313409.2432948.77 × 10 9 45912.8872716.5 × 10 9
x 6 293784.5478136.16 × 10 9 10914189.684578.67 × 10 9 35713.0692085.11 × 10 9
x 7 273524.5096854.38 × 10 9 10313408.9836119.39 × 10 9 32652.1500555.74 × 10 9
x 8 395086.31069.8 × 10 9 3400.315881047953.4158 5.26 × 10 9
100,000 x 1 314049.1248795.09 × 10 9 112145719.998888.88 × 10 9 37753.5327235.35 × 10 9
x 2 324179.2676319.5 × 10 9 113147018.604118.32 × 10 9 39793.6154995.37 × 10 9
x 3 344439.7692365.34 × 10 9 113147019.533588.25 × 10 9 40813.7120927.56 × 10 9
x 4 3646910.32975.78 × 10 9 112145721.292438.25 × 10 9 42854.3543429.86 × 10 9
x 5 3849510.863449.16 × 10 9 105136620.037868.18 × 10 9 45914.1655789.18 × 10 9
x 6 293788.4105248.71 × 10 9 110143118.726439.95 × 10 9 35713.3006637.23 × 10 9
x 7 273527.7378036.19 × 10 9 105136616.810348.76 × 10 9 32653.3130938.12 × 10 9
x 8 4052111.2756.03 × 10 9 3400.561181047954.4305737.44 × 10 9
Table 7. Numerical comparison of the SMDFP, CGH [61], and GHCGP [62] methods.
Problem 7
DIMENSION | INITIAL POINT | SMDFP: ITER, FEV, CPUT, NORM | CGH: ITER, FEV, CPUT, NORM | GHCGP: ITER, FEV, CPUT, NORM
1000 x 1 130.004846010814050.3493768.21 × 10 9 32650.0303795.96 × 10 9
x 2 130.004706011915480.3564979.04 × 10 9 32650.0301916.4 × 10 9
x 3 130.0051360----29590.0283138.3 × 10 9
x 4 130.00510130.003102031640.0285239.58 × 10 9
x 5 130.0043440----32650.0307166.9 × 10 9
x 6 130.00465010313400.3393878.52 × 10 9 31630.0288046.97 × 10 9
x 7 130.00437109512360.2775418.16 × 10 9 29590.0271495.88 × 10 9
x 8 130.0051420130.004269035740.033326.36 × 10 9
5000 x 1 130.020114011114444.9098279.82 × 10 9 33671.9296456.66 × 10 9
x 2 130.013025012316006.3376278.77 × 10 9 33672.0865617.15 × 10 9
x 3 130.026390----30611.8935469.28 × 10 9
x 4 130.0162530130.010931033681.3907245.36 × 10 9
x 5 130.0266790----33672.234397.72 × 10 9
x 6 130.027297010713921.2593048.27 × 10 9 32651.9531127.79 × 10 9
x 7 130.03161309812751.8401779.76 × 10 9 30611.3497746.57 × 10 9
x 8 130.0367520130.009336036762.0605817.11 × 10 9
10,000 x 1 130.031447011314703.1369759.15 × 10 9 33671.8187389.42 × 10 9
x 2 130.033583012516263.8510128.18 × 10 9 34691.9540615.06 × 10 9
x 3 130.0243030----31632.3572446.56 × 10 9
x 4 130.0522240130.011473033683.2379997.57 × 10 9
x 5 130.0466820----34691.5867115.46 × 10 9
x 6 130.046979010814052.0247269.49 × 10 9 33671.7046475.51 × 10 9
x 7 130.057642010013013.1258119.09 × 10 9 30611.6699279.29 × 10 9
x 8 130.0259220130.011375037782.0683575.03 × 10 9
50,000 x 1 130.0649740117152210.187138.89 × 10 9 35712.755235.27 × 10 9
x 2 130.0617010128166510.092259.78 × 10 9 35712.3871135.66 × 10 9
x 3 130.0643870----32652.0558577.34 × 10 9
x 4 130.0671860130.019284034701.994988.47 × 10 9
x 5 130.0899110----35712.4150856.1 × 10 9
x 6 130.059953011214575.9881959.21 × 10 9 34692.6270246.16 × 10 9
x 7 130.059474010413537.0269848.83 × 10 9 32652.1610085.19 × 10 9
x 8 130.1024860130.028383038802.5333435.62 × 10 9
100,000 x 1 130.1504960119154818.044428.28 × 10 9 35713.7093177.45 × 10 9
x 2 130.1192970130169117.325839.11 × 10 9 35714.1062318 × 10 9
x 3 130.1204140----33673.1931295.19 × 10 9
x 4 130.1673580130.055669035723.4869735.99 × 10 9
x 5 130.147360----35713.1041158.63 × 10 9
x 6 130.1238770114148312.506948.59 × 10 9 34693.5586068.71 × 10 9
x 7 130.1132690106137914.347358.22 × 10 9 32653.1085057.35 × 10 9
x 8 130.1119480130.046828038803.5328057.95 × 10 9
Table 8. Numerical comparison of the SMDFP, CGH [61], and GHCGP [62] methods.
Problem 8

| DIM | x0 | SMDFP ITER | FEV | CPUT | NORM | CGH ITER | FEV | CPUT | NORM | GHCGP ITER | FEV | CPUT | NORM |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1000 | x1 | 2 | 5 | 0.006257 | 0 | 7 | 26 | 0.064518 | 0 | -- | -- | -- | -- |
| | x2 | 1 | 3 | 0.004575 | 0 | 9 | 30 | 0.013856 | 0 | 50 | 108 | 0.060925 | 0 |
| | x3 | 2 | 5 | 0.005677 | 0 | 6 | 24 | 0.011035 | 0 | -- | -- | -- | -- |
| | x4 | 2 | 5 | 0.006564 | 0 | 131 | 1671 | 0.415251 | 8.78e-9 | -- | -- | -- | -- |
| | x5 | 2 | 5 | 0.005898 | 0 | 7 | 26 | 0.012663 | 0 | 4 | 17 | 0.007869 | 0 |
| | x6 | 2 | 5 | 0.005661 | 0 | 7 | 26 | 0.013313 | 0 | -- | -- | -- | -- |
| | x7 | 2 | 5 | 0.006256 | 0 | 135 | 1679 | 0.398362 | 8.98e-9 | -- | -- | -- | -- |
| | x8 | 3 | 7 | 0.006335 | 0 | 5 | 11 | 0.00916 | 0 | -- | -- | -- | -- |
| 5000 | x1 | 2 | 5 | 0.071744 | 0 | 7 | 26 | 0.278915 | 0 | -- | -- | -- | -- |
| | x2 | 1 | 3 | 0.032757 | 0 | 9 | 30 | 0.335638 | 0 | 19 | 46 | 0.321704 | 6.24e-9 |
| | x3 | 2 | 5 | 0.074475 | 0 | 6 | 24 | 0.227703 | 0 | -- | -- | -- | -- |
| | x4 | 2 | 5 | 0.150227 | 0 | 139 | 1786 | 10.12413 | 9.92e-9 | 17 | 42 | 0.140003 | 7.16e-9 |
| | x5 | 2 | 5 | 0.054634 | 0 | 5 | 22 | 0.309578 | 0 | 5 | 20 | 0.054782 | 0 |
| | x6 | 2 | 5 | 0.102503 | 0 | 7 | 26 | 0.290641 | 0 | -- | -- | -- | -- |
| | x7 | 2 | 5 | 0.067849 | 0 | 129 | 1623 | 9.42968 | 8.65e-9 | -- | -- | -- | -- |
| | x8 | 2 | 5 | 0.12588 | 0 | 4 | 9 | 0.179896 | 0 | 6 | 26 | 0.033651 | 0 |
| 10,000 | x1 | 2 | 5 | 0.109853 | 0 | 7 | 26 | 0.429911 | 0 | -- | -- | -- | -- |
| | x2 | 1 | 3 | 0.064967 | 0 | 9 | 30 | 0.433819 | 0 | 18 | 44 | 0.113456 | 6.24e-9 |
| | x3 | 2 | 5 | 0.109943 | 0 | 6 | 24 | 0.249119 | 0 | -- | -- | -- | -- |
| | x4 | 2 | 5 | 0.101969 | 0 | 140 | 1799 | 11.30977 | 9.87e-9 | 16 | 40 | 0.094243 | 7.16e-9 |
| | x5 | 2 | 5 | 0.066711 | 0 | 5 | 22 | 0.339357 | 0 | 5 | 20 | 0.047766 | 0 |
| | x6 | 2 | 5 | 0.069338 | 0 | 7 | 26 | 0.247131 | 0 | -- | -- | -- | -- |
| | x7 | 2 | 5 | 0.086838 | 0 | 9 | 30 | 0.300432 | 0 | -- | -- | -- | -- |
| | x8 | 2 | 5 | 0.083051 | 0 | 4 | 9 | 0.115857 | 0 | 7 | 29 | 0.056413 | 0 |
| 50,000 | x1 | 2 | 5 | 0.131377 | 0 | 7 | 26 | 3.688616 | 0 | -- | -- | -- | -- |
| | x2 | 1 | 3 | 0.070502 | 0 | 9 | 30 | 0.623855 | 0 | 15 | 38 | 0.261414 | 9.98e-9 |
| | x3 | 2 | 5 | 0.257994 | 0 | 6 | 24 | 0.577014 | 0 | -- | -- | -- | -- |
| | x4 | 2 | 5 | 0.15874 | 0 | 135 | 1734 | 17.44768 | 9.73e-9 | 14 | 36 | 0.241642 | 5.73e-9 |
| | x5 | 2 | 5 | 0.144073 | 0 | 5 | 22 | 0.37145 | 0 | 6 | 23 | 0.157215 | 0 |
| | x6 | 2 | 5 | 0.128614 | 0 | 7 | 26 | 0.426915 | 0 | -- | -- | -- | -- |
| | x7 | 2 | 5 | 0.152141 | 0 | 9 | 30 | 3.662605 | 0 | -- | -- | -- | -- |
| | x8 | 2 | 5 | 0.171881 | 0 | 5 | 22 | 3.374986 | 0 | 7 | 29 | 0.172393 | 0 |
| 100,000 | x1 | 2 | 5 | 0.383221 | 0 | 7 | 26 | 12.88737 | 0 | -- | -- | -- | -- |
| | x2 | 1 | 3 | 0.256944 | 0 | 9 | 30 | 0.769409 | 0 | 14 | 36 | 0.546488 | 9.98e-9 |
| | x3 | 2 | 5 | 0.231476 | 0 | 6 | 24 | 0.52902 | 0 | -- | -- | -- | -- |
| | x4 | 2 | 5 | 0.327778 | 0 | 133 | 1708 | 22.37772 | 8.54e-9 | 13 | 34 | 0.497945 | 5.73e-9 |
| | x5 | 2 | 5 | 0.25519 | 0 | 5 | 22 | 0.330482 | 0 | 6 | 23 | 0.358825 | 0 |
| | x6 | 2 | 5 | 0.312419 | 0 | 7 | 26 | 0.487292 | 0 | -- | -- | -- | -- |
| | x7 | 2 | 5 | 0.228052 | 0 | 9 | 30 | 11.78943 | 0 | -- | -- | -- | -- |
| | x8 | 2 | 5 | 0.320283 | 0 | 5 | 22 | 7.510984 | 0 | 7 | 29 | 0.39418 | 0 |
Table 9. Numerical comparison of SMDFP, CGD [45], PSGM [81], and TPRP [82] methods.
| Pictures | SMDFP ITER | CPUT | MSE | SNR | SSIM | CGD ITER | CPUT | MSE | SNR | SSIM | PSGM ITER | CPUT | MSE | SNR | SSIM | TPRP ITER | CPUT | MSE | SNR | SSIM |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Baby | 69 | 4.55 | 3.88e+1 | 22.79 | 0.92 | 713 | 42.19 | 4.46e+1 | 22.19 | 0.93 | 27 | 1.55 | 3.87e+1 | 22.80 | 0.93 | 317 | 1366.58 | 4.56e+1 | 22.09 | 0.92 |
| COMSATS | 14 | 0.64 | 1.83e+2 | 23.90 | 0.87 | 671 | 49.02 | 1.69e+2 | 24.24 | 0.86 | 23 | 1.88 | 1.52e+2 | 24.71 | 0.87 | 50 | 1020.66 | 1.87e+2 | 23.79 | 0.87 |
| Fox | 39 | 4.25 | 7.33e+1 | 24.45 | 0.84 | 474 | 62.83 | 7.45e+1 | 24.38 | 0.86 | 18 | 2.36 | 7.37e+1 | 24.42 | 0.84 | 58 | 228.41 | 7.76e+1 | 24.20 | 0.84 |
| Horse | 16 | 1.16 | 1.20e+2 | 19.52 | 0.81 | 645 | 44.06 | 1.06e+2 | 20.07 | 0.85 | 42 | 2.59 | 1.07e+2 | 20.00 | 0.83 | 109 | 248.86 | 1.10e+2 | 19.90 | 0.81 |
| Lena | 14 | 0.78 | 6.30e+1 | 22.88 | 0.84 | 491 | 34.63 | 6.05e+1 | 23.05 | 0.88 | 26 | 2.05 | 5.81e+1 | 23.23 | 0.87 | 101 | 1175.60 | 5.91e+1 | 23.16 | 0.84 |
| Marwat | 34 | 2.02 | 1.52e+2 | 20.43 | 0.81 | 856 | 58.33 | 1.53e+2 | 20.40 | 0.84 | 38 | 2.42 | 1.46e+2 | 20.61 | 0.82 | 47 | 2707.05 | 1.61e+2 | 20.18 | 0.81 |
| Thai fabric pattern in Phetchabun | 12 | 0.97 | 2.46e+2 | 17.69 | 0.66 | 565 | 48.23 | 2.47e+2 | 17.67 | 0.70 | 22 | 1.84 | 2.38e+2 | 17.83 | 0.68 | 156 | 1445.32 | 2.31e+2 | 17.97 | 0.69 |
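The image-restoration comparison above is scored with mean squared error (MSE), signal-to-noise ratio (SNR, in dB), and structural similarity (SSIM). As a sketch under the usual definitions (the paper's exact scaling conventions are an assumption here), MSE and SNR can be computed as below; SSIM is typically taken from an off-the-shelf routine such as `skimage.metrics.structural_similarity` rather than re-implemented, so it is omitted to keep the sketch dependency-free. The 64×64 constant image is a hypothetical toy input, not one of the test pictures.

```python
import numpy as np

def mse(original, restored):
    # Mean squared error between the reference image and the restoration
    diff = np.asarray(original, float) - np.asarray(restored, float)
    return float(np.mean(diff ** 2))

def snr_db(original, restored):
    # Signal-to-noise ratio in decibels: 10 * log10(||x||^2 / ||x - x_hat||^2)
    x = np.asarray(original, float)
    err = x - np.asarray(restored, float)
    return float(10.0 * np.log10(np.sum(x ** 2) / np.sum(err ** 2)))

# Toy check: a hypothetical constant image corrupted by additive Gaussian noise.
rng = np.random.default_rng(0)
x = 100.0 * np.ones((64, 64))
x_hat = x + rng.normal(0.0, 5.0, size=x.shape)
# mse(x, x_hat) is close to the noise variance (25), and snr_db(x, x_hat)
# is close to 10 * log10(100**2 / 25), i.e. roughly 26 dB
```

Lower MSE and higher SNR/SSIM indicate a better restoration, which is the reading convention for the table above.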
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Ullah, N.; Shah, A.; Sabi’u, J.; Jiao, X.; Awwal, A.M.; Pakkaranang, N.; Shah, S.K.; Panyanak, B. A One-Parameter Memoryless DFP Algorithm for Solving System of Monotone Nonlinear Equations with Application in Image Processing. Mathematics 2023, 11, 1221. https://doi.org/10.3390/math11051221

