Article

An Inertial Extragradient Direction Method with Self-Adaptive Step Size for Solving Split Minimization Problems and Its Applications to Compressed Sensing

by
Nattakarn Kaewyong
1 and
Kanokwan Sitthithakerngkiet
2,*
1
Department of Mathematics, Faculty of Applied Science, King Mongkut’s University of Technology North Bangkok (KMUTNB), Wongsawang, Bangsue, Bangkok 10800, Thailand
2
Intelligent and Nonlinear Dynamic Innovations Research Center, Department of Mathematics, Faculty of Applied Science, King Mongkut’s University of Technology North Bangkok (KMUTNB), Wongsawang, Bangsue, Bangkok 10800, Thailand
*
Author to whom correspondence should be addressed.
Mathematics 2022, 10(6), 874; https://doi.org/10.3390/math10060874
Submission received: 10 February 2022 / Revised: 3 March 2022 / Accepted: 6 March 2022 / Published: 9 March 2022
(This article belongs to the Section Mathematics and Computer Science)

Abstract

The purpose of this work is to construct iterative methods for solving a split minimization problem using a self-adaptive step size, conjugate gradient direction, and inertia technique. We introduce and prove a strong convergence theorem in the framework of Hilbert spaces. We then demonstrate numerically how the extrapolation factor ( θ n ) in the inertia term and a step size parameter affect the performance of our proposed algorithm. Additionally, we apply our proposed algorithms to solve the signal recovery problem. Finally, we compared our algorithm’s recovery signal quality performance to that of three previously published works.

1. Introduction

Let $C$ and $Q$ be two closed convex subsets of two real Hilbert spaces $H_1$ and $H_2$, respectively, and denote the metric projection onto $C$ by $\mathrm{Proj}_C$. Assume that $A: H_1 \to H_2$ is a bounded linear operator and that $A^*$ is its adjoint operator.
In 1994, Censor and Elfving [1] defined the split feasibility problem (SFP) as
finding $x^* \in C$ such that $Ax^* \in Q$.
This problem was designed to model inverse problems, which lie at the core of many real-world applications, such as phase retrieval, image restoration, and intensity-modulated radiation therapy (IMRT), as well as signal processing tasks, including signal recovery, which is used to transmit data in almost every field imaginable. Signal processing is essential for X-rays, MRIs, and CT scans, as it enables complex data processing techniques for analyzing and deciphering medical images. As a result, a great amount of research has been motivated by solving the SFP (1); see [2,3,4].
Byrne [5] popularized the CQ algorithm for finding a solution of the SFP (1). This algorithm is based on the fixed-point equation:
$\mathrm{Proj}_C\big(I - \kappa_n A^*(I - \mathrm{Proj}_Q)A\big)x^* = x^*,$
where $\kappa_n$ is a constant. Other iterative methods for solving the split feasibility problem (1) are described in [6,7,8,9,10,11,12,13] and the references therein. However, these iterative methods require prior knowledge of the operator norm $\|A\|$.
Yang [14] proposed the relaxed CQ algorithm in 2005 to avoid computing the operator norm of a bounded linear operator, but this algorithm remains subject to many conditions. As a result, many researchers have proposed and studied self-adaptive step size algorithms to obtain fewer conditions. Further information is available from [15,16,17,18].
In 2012, López et al. [18] introduced an intriguing self-adaptive step size algorithm, which is worth further investigation. This algorithm generates the sequence $\{x_n\}$ in the following manner:
$\kappa_n = \dfrac{\varrho_n\,\omega(x_n)}{\|\nabla\omega(x_n)\|^2}, \qquad x_{n+1} = \mathrm{Proj}_C\big(I - \kappa_n A^*(I - \mathrm{Proj}_Q)A\big)x_n, \quad n \ge 1,$
where $x_1 \in H_1$ is arbitrarily selected, $\omega(x) = \|(I - \mathrm{Proj}_Q)Ax\|^2/2$ is the convex objective function, $\nabla\omega(x) = A^*(I - \mathrm{Proj}_Q)Ax$ is its Lipschitz-continuous gradient, and $\{\varrho_n\}$ is a sequence satisfying $0 < \varrho_n < 4$ and $\inf_n \varrho_n(4 - \varrho_n) > 0$.
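To make the self-adaptive rule concrete, the following is a minimal numerical sketch in finite dimensions; it assumes the projections proj_C and proj_Q are supplied by the caller, and the function name and the zero-gradient guard are illustrative rather than part of [18]:

```python
import numpy as np

def self_adaptive_cq(x, A, proj_C, proj_Q, rho=2.0, iters=100, eps=1e-12):
    """CQ iteration (2) with the self-adaptive step size (3); no estimate of ||A|| is needed."""
    for _ in range(iters):
        residual = A @ x - proj_Q(A @ x)           # (I - Proj_Q) A x
        omega = 0.5 * np.dot(residual, residual)   # omega(x) = ||(I - Proj_Q) A x||^2 / 2
        grad = A.T @ residual                      # grad omega(x) = A^T (I - Proj_Q) A x
        grad_norm2 = np.dot(grad, grad)
        if grad_norm2 < eps:                       # x already (nearly) solves the SFP
            break
        kappa = rho * omega / grad_norm2           # step size with 0 < rho < 4
        x = proj_C(x - kappa * grad)               # Proj_C (I - kappa_n A^*(I - Proj_Q)A) x
    return x
```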
We now describe the variational inequality problem (VIP), as variational inequality theory is a major tool for studying a broad class of problems that arise in a wide variety of fields of the pure and applied sciences. Assume that $p: H_1 \to H_1$ is a monotone operator. The VIP is defined mathematically as follows:
find $x^* \in C$ such that $\langle p(x^*), x - x^* \rangle \ge 0$ for all $x \in C$.
The projected gradient method [19] is the simplest iterative method for solving the VIP. This algorithm generates sequence { x n } in the following fashion:
$x_{n+1} = \mathrm{Proj}_C(I - \kappa_n p)x_n, \quad n \ge 1,$
where $x_1 \in H_1$ is arbitrarily selected and $\kappa_n$ is a positive real number. However, to use the projected gradient method, we must know the closed-form expression of the metric projection $\mathrm{Proj}_C$, which is not always available. As a result, steepest descent algorithms have been introduced and further developed in recent years. Following that, methods for accelerating the steepest descent method have been constructed and developed. When $p: C \to \mathbb{R}$ is a convex function that is also Fréchet differentiable, the conjugate gradient direction [20] of $p$ at $x_n$ is defined as follows:
$d_1 = -\nabla p(x_1), \qquad d_{n+1} = -\nabla p(x_n) + b_n d_n, \quad n \ge 1,$
where $x_1 \in C$ is arbitrarily selected, $\{b_n\}$ is a sequence satisfying $0 < b_n < \infty$, and $\nabla p$ is the gradient of the function $p$.
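As a rough illustration of how the direction (6) accelerates steepest descent, consider the sketch below; the fixed step length and the constant value of $b_n$ are illustrative choices, not prescribed by [20]:

```python
def conjugate_direction_descent(grad_p, x, step=0.1, b=0.5, iters=100):
    """Descent along conjugate-gradient-type directions: d_{n+1} = -grad p(x_n) + b_n d_n."""
    d = -grad_p(x)                 # d_1 = -grad p(x_1)
    for _ in range(iters):
        x = x + step * d           # move along the current direction
        d = -grad_p(x) + b * d     # combine the new gradient with the previous direction
    return x
```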
Sakurai and Iiduka [21] introduced and investigated an iterative method for finding a fixed point of a nonexpansive mapping. This method is based on the conjugate gradient directions (6), which can be used to accelerate the steepest descent method, and it generates the sequence $\{x_n\}$ as follows:
$d_1 = \dfrac{(S - I)x_1}{\alpha}, \quad d_{n+1} = \dfrac{(S - I)x_n}{\alpha} + b_n d_n, \quad y_n = x_n + \alpha d_{n+1}, \quad x_{n+1} = \mu\alpha_n x_n + (1 - \mu\alpha_n)y_n, \quad n \ge 1,$
where $x_1 \in C$ is arbitrarily selected, $S$ is a nonexpansive mapping on $C$, $\alpha > 0$, $\mu$ is a constant satisfying $0 \le \mu < 1$, $\{\alpha_n\}$ is a sequence in $(0, 1)$, and $\{b_n\}$ is a sequence satisfying $0 \le b_n < \infty$. A strong convergence theorem was established by Sakurai and Iiduka. Additionally, they demonstrated the performance of their algorithm, which significantly reduced the time required to find a solution.
Additionally, numerous researchers have concentrated their efforts on accelerating the convergence rate of algorithms. Polyak [22] pioneered this concept by introducing the heavy ball method, a critical technique for increasing the convergence rate of algorithms. Following that, modified heavy ball methods were introduced and developed to accelerate algorithmic convergence. Nesterov [23] introduced one of the most versatile modified heavy ball methods, which is defined as follows:
$y_n = x_n + \theta_n(x_n - x_{n-1}), \qquad x_{n+1} = y_n - \kappa_n p(y_n), \quad n \ge 2,$
where $x_1, x_2 \in H_1$ are arbitrarily selected, $\kappa_n$ is a positive constant, $\theta_n$ is an extrapolation factor satisfying $0 \le \theta_n < 1$, and the term $\theta_n(x_n - x_{n-1})$ is referred to as the inertial term. For additional information, the reader can consult [24,25,26].
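The effect of the inertial term can be sketched as follows; the particular extrapolation factor $\theta_n = (n-1)/(n+2)$ and the fixed step are common illustrative choices, not the ones analyzed later in this paper:

```python
def inertial_gradient(grad_f, x0, x1, step=0.05, iters=200):
    """Heavy-ball/Nesterov-type iteration (8): extrapolate with the inertial term, then step."""
    x_prev, x = x0, x1
    for n in range(2, iters + 2):
        theta = (n - 1.0) / (n + 2.0)          # extrapolation factor, 0 <= theta_n < 1
        y = x + theta * (x - x_prev)           # y_n = x_n + theta_n (x_n - x_{n-1})
        x_prev, x = x, y - step * grad_f(y)    # x_{n+1} = y_n - kappa_n p(y_n), with p = grad_f
    return x
```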
Consider two proper, convex, and lower-semicontinuous functions $f: H_1 \to \mathbb{R}$ and $g: H_2 \to \mathbb{R}$. Recently, Moudafi and Thakur [9] published an interesting paper on the split proximal feasibility problem, that is, to find $x^*$ such that
$\min_{x^* \in H_1} \{ f(x^*) + g_\lambda(Ax^*) \},$
where $\lambda$ is a positive constant and $g_\lambda(Ax^*) = \min_{y \in H_2}\{ g(y) + \frac{1}{2\lambda}\|y - Ax^*\|^2 \}$ is the Moreau–Yosida approximation of $g$.
This problem is important for applications and future work such as algorithms for designing large dimensional equiangular tight frames, additive parameters for deep face recognition, deep learning methods for compressed sensing, a unit softmax with Laplacian, and water allocation optimization. See, e.g., [27,28].
As can be seen in Rockafellar [29], the subdifferential of the Moreau–Yosida approximation can be written as follows:
$\partial\big(f(x^*) + g_\lambda(Ax^*)\big) = \partial f(x^*) + A^*\nabla g_\lambda(Ax^*) = \partial f(x^*) + A^*\left(\dfrac{I - \mathrm{prox}_{\lambda g}}{\lambda}\right)(Ax^*).$
Equation (10) implies that the following inclusion problem characterizes the optimality condition of problem (9):
$0 \in \lambda\,\partial f(x^*) + A^*(I - \mathrm{prox}_{\lambda g})(Ax^*),$
where $\mathrm{prox}_{\lambda g}(Ax^*) = \arg\min_{y \in H_2}\{ g(y) + \frac{1}{2\lambda}\|y - Ax^*\|^2 \}$ is the proximity operator of the function $g$ of order $\lambda$ and $\partial f(x^*) = \{\bar{x} \in H_1 : f(u) \ge f(x^*) + \langle \bar{x}, u - x^* \rangle, \ \forall u \in H_1\}$ is the subdifferential of the function $f$ at the point $x^*$.
The problem (9) is referred to as the split minimization problem (SMP) when $\arg\min f \cap A^{-1}(\arg\min g) \ne \emptyset$. It was defined by Moudafi and Thakur [9] as finding a point
$x^* \in \arg\min f \ \text{ such that } \ Ax^* \in \arg\min g,$
where $\arg\min f = \{\tilde{x} \in H_1 : f(\tilde{x}) \le f(x), \ \forall x \in H_1\}$ and $\arg\min g = \{\tilde{y} \in H_2 : g(\tilde{y}) \le g(y), \ \forall y \in H_2\}$. The significance of this problem is that it is a generalization of the SFP (1), as the SMP (12) can be reduced to the SFP (1) by setting the functions $f$ and $g$ to be the indicator functions of the nonempty closed convex subsets $C$ and $Q$, respectively.
Assume that
$p(x) = \frac{1}{2}\|(I - \mathrm{prox}_{\lambda g})Ax\|^2,$
$q(x) = \frac{1}{2}\|(I - \mathrm{prox}_{\lambda\kappa_n f})x\|^2.$
The gradients of functions p and q are obtained as follows:
$\nabla p(x) = A^*(I - \mathrm{prox}_{\lambda g})Ax,$
$\nabla q(x) = (I - \mathrm{prox}_{\lambda\kappa_n f})x.$
We can verify that p and q are weakly lower semicontinuous, convex, and differentiable; see [30].
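In finite dimensions, the quantities (13)–(16) can be evaluated directly from the two proximal operators, as in the following sketch (the function names are illustrative):

```python
import numpy as np

def p_and_grad(x, A, prox_g):
    """p(x) = 0.5*||(I - prox_{lambda g}) A x||^2 and grad p(x) = A^T (I - prox_{lambda g}) A x."""
    r = A @ x - prox_g(A @ x)
    return 0.5 * np.dot(r, r), A.T @ r

def q_and_grad(x, prox_f):
    """q(x) = 0.5*||(I - prox_{lambda kappa_n f}) x||^2 and grad q(x) = (I - prox_{lambda kappa_n f}) x."""
    s = x - prox_f(x)
    return 0.5 * np.dot(s, s), s
```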
Abbas and Alshahrani [31] published two iterative algorithms in 2018 for determining the minimum-norm solution to a split minimization problem. These algorithms are denoted by the following formulas:
$\tau_n = \varrho_n\dfrac{p(x_n) + q(x_n)}{\|\nabla p(x_n)\|^2 + \|\nabla q(x_n)\|^2}, \qquad x_{n+1} = \mathrm{prox}_{\lambda\kappa_n f}\big((1 - \epsilon_n)x_n - \kappa_n A^*(I - \mathrm{prox}_{\lambda g})Ax_n\big), \quad n \ge 1,$
and
$\tau_n = \varrho_n\dfrac{p(x_n) + q(x_n)}{\|\nabla p(x_n)\|^2 + \|\nabla q(x_n)\|^2}, \qquad x_{n+1} = (1 - \epsilon_n)\,\mathrm{prox}_{\lambda\kappa_n f}\big(x_n - \kappa_n A^*(I - \mathrm{prox}_{\lambda g})Ax_n\big), \quad n \ge 1,$
where $x_1 \in H_1$ is arbitrarily selected, $0 < \varrho_n < 4$, and $\{\epsilon_n\}$ is a sequence in $(0, 1)$. They established two strong convergence theorems for their proposed algorithms under some mild conditions.
Recently, Kaewyong and Sitthithakerngkiet [32] introduced a self-adaptive step size algorithm for resolving a split minimization problem. It is defined by the algorithm described below:
$u_n = x_n + \theta_n(x_n - x_{n-1}), \quad \tau_n = \varrho_n\dfrac{p(u_n)}{\|\nabla p(u_n)\|^2 + \|\nabla q(u_n)\|^2}, \quad y_n = \mathrm{prox}_{\lambda\kappa_n f}\big(u_n - \kappa_n A^*(I - \mathrm{prox}_{\lambda g})Au_n\big), \quad x_{n+1} = \alpha_n x_n + (1 - \alpha_n)S[\delta_n v + (1 - \delta_n)y_n], \quad n \ge 2,$
where $x_1, x_2, v \in H_1$ are arbitrarily selected, $0 < \varrho_n < 4$, $\{\alpha_n\}$ and $\{\delta_n\}$ are sequences in $(0, 1)$, and $S$ is a nonexpansive mapping on $H_1$. Under some appropriate conditions, a strong convergence theorem was established.
By studying the advantages and disadvantages of all the above works, we aim in this work to construct new algorithms for solving the split minimization problem by developing the above algorithms based on useful techniques. The algorithms we construct combine the following techniques: (1) a self-adaptive step size technique to avoid computing the operator norm of a bounded linear operator, which is difficult, if not impossible, to calculate or even estimate; (2) inertia and conjugate gradient direction techniques to speed up the convergence rate. In the numerical examples, to demonstrate the performance of our proposed algorithms, we show that the extrapolation factor ($\theta_n$) in the inertial term results in a faster rate of convergence and that the step size parameter affects the rate of convergence as well. Additionally, we apply our proposed algorithm to solve the signal recovery problem and compare its performance to that of three other strong convergence algorithms published before our work.

2. Preliminaries

In this section, we review some basic facts, definitions, and lemmas that will be necessary to prove our main result. The collection of proper convex lower semicontinuous functions on $H_2$ is denoted by $\Gamma(H_2)$.
Lemma 1.
Let $a$ and $b$ be any two elements in $H_1$. Then, the following assertions hold:
1. 
$\|a - b\|^2 = \|a\|^2 - \|b\|^2 + 2\langle b - a, b \rangle$;
2. 
$\|a + b\|^2 \le \|a\|^2 + 2\langle b, b + a \rangle$;
3. 
$\|\xi a + (1 - \xi)b\|^2 = \xi\|a\|^2 + (1 - \xi)\|b\|^2 - \xi(1 - \xi)\|a - b\|^2$, where $\xi \in [0, 1]$.
Proposition 1.
Suppose that $S: C \to H_1$ is a mapping and let $a$ and $b$ be any two elements in $C$. Then,
1. 
$S$ is monotone if $\langle a - b, Sa - Sb \rangle \ge 0$;
2. 
$S$ is nonexpansive if $\|Sa - Sb\| \le \|a - b\|$;
3. 
$S$ is firmly nonexpansive if $\|Sa - Sb\|^2 \le \langle Sa - Sb, a - b \rangle$.
It is well known that the metric projection $\mathrm{Proj}_C$ is a firmly nonexpansive mapping, i.e.,
$\|\mathrm{Proj}_C a - \mathrm{Proj}_C b\|^2 \le \langle \mathrm{Proj}_C a - \mathrm{Proj}_C b, a - b \rangle, \quad \forall a, b \in H_1.$
Lemma 2.
Let $\mathrm{Proj}_C: H_1 \to C$ be the metric projection. Then, the inequalities stated below hold:
1. 
$\|a - \mathrm{Proj}_C a\|^2 + \|b - \mathrm{Proj}_C a\|^2 \le \|a - b\|^2, \quad \forall a \in H_1, \ b \in C$;
2. 
$\|a - b\|^2 - \|\mathrm{Proj}_C a - \mathrm{Proj}_C b\|^2 \ge \|(a - b) - (\mathrm{Proj}_C a - \mathrm{Proj}_C b)\|^2, \quad \forall a \in H_1, \ b \in C$.
Definition 1
([33,34]). Let $g$ be any function in $\Gamma(H_2)$ and let $x$ be any element of $H_2$. The proximal operator of $g$ is defined by
$\mathrm{prox}_g(x) = \arg\min_{b \in H_2}\left\{ g(b) + \frac{1}{2}\|b - x\|^2 \right\}.$
Furthermore,
$\mathrm{prox}_{\lambda g}(x) = \arg\min_{b \in H_2}\left\{ g(b) + \frac{1}{2\lambda}\|b - x\|^2 \right\}$
provides the proximal operator of $g$ of order $\lambda$.
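Two standard closed-form instances may help fix ideas in the Euclidean setting: the proximal operator of $\lambda\|\cdot\|_1$ is componentwise soft thresholding, and the proximal operator of the indicator of a closed convex set is the metric projection (cf. Proposition 2 below). The following is a minimal sketch:

```python
import numpy as np

def prox_l1(x, lam):
    """prox of lam*||.||_1: componentwise soft thresholding."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def prox_indicator_ball(x, center, radius):
    """prox of the indicator of a closed ball = projection onto the ball."""
    d = x - center
    nrm = np.linalg.norm(d)
    return x if nrm <= radius else center + radius * d / nrm
```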
Proposition 2
([35,36]). Let $g$ be any function in $\Gamma(H_2)$. Assume that $\lambda$ is a positive constant and $Q$ is a nonempty closed convex subset of $H_2$. Then, the following assertions hold:
1. 
If $g = \delta_Q$, then for all $\lambda > 0$, the proximal operator $\mathrm{prox}_{\lambda g} = \mathrm{Proj}_Q$, where $\delta_Q$ is the indicator function of $Q$;
2. 
$\mathrm{prox}_{\lambda g}$ is firmly nonexpansive;
3. 
$\mathrm{prox}_{\lambda g}$ is the resolvent of the subdifferential $\partial g$ of $g$, that is, $\mathrm{prox}_{\lambda g} = (I + \lambda\,\partial g)^{-1} = J_\lambda^{\partial g}$.
Lemma 3
([37]). Let g be any function in Γ ( H 2 ) . Then, the following assertions hold:
1. 
$\arg\min_{H_2} g = \mathrm{Fix}(\mathrm{prox}_g)$;
2. 
$\mathrm{prox}_g$ and $I - \mathrm{prox}_g$ are both firmly nonexpansive.
Lemma 4
([38]). Assume that $\{b_n\} \subset [0, \infty)$ satisfies
$b_{n+1} \le (1 - \delta_n)b_n + \delta_n\varrho_n + \sigma_n, \quad n \ge 0,$
where $\{\delta_n\}$ and $\{\sigma_n\}$ are sequences in $(0, 1)$ and $\{\varrho_n\}$ is a real sequence. If
1. 
$\sum_{n=0}^{\infty} \delta_n = \infty$;
2. 
$\limsup_{n\to\infty} \varrho_n \le 0$ or $\sum_{n=0}^{\infty} |\delta_n\varrho_n| < \infty$;
3. 
$\sum_{n=0}^{\infty} \sigma_n < \infty$;
then $\lim_{n\to\infty} b_n = 0$.
Lemma 5
([39]). Let $\{\Gamma_n\}$ be a real sequence that does not decrease at infinity, in the sense that there is a subsequence $\{\Gamma_{n_i}\}$ of $\{\Gamma_n\}$ with $\Gamma_{n_i} < \Gamma_{n_i + 1}$ for all $i \ge 0$. Define the sequence of integers $\{\eta(n)\}_{n \ge n_0}$ by
$\eta(n) = \max\{ k \le n \ | \ \Gamma_k \le \Gamma_{k+1} \}.$
Then, $\{\eta(n)\}_{n \ge n_0}$ is a nondecreasing sequence verifying $\lim_{n\to\infty}\eta(n) = \infty$ and, for all $n \ge n_0$,
$\max\{\Gamma_{\eta(n)}, \Gamma_n\} \le \Gamma_{\eta(n)+1}.$
Lemma 6.
Let $D$ be a strongly positive bounded linear operator on $H_1$ with coefficient $\bar{\gamma} > 0$ and $0 < \mu \le \|D\|^{-1}$. Then, $\|I - \mu D\| \le 1 - \mu\bar{\gamma}$.

3. Results

This section introduces and analyzes the algorithm for solving split minimization problems. Additionally, $\Omega_{SMP}$ is used to denote the solution set of the split minimization problem (SMP) (12).
Condition 1.
Let $\{a_n\} \subset (0, 1)$ and $\{b_n\} \subset [0, \frac{1}{2})$. Let $\{\theta_n\}$ and $\{\varrho_n\}$ be positive sequences. The sequences $\{a_n\}$, $\{b_n\}$, $\{\theta_n\}$, and $\{\varrho_n\}$ satisfy:
(C1) 
$\lim_{n\to\infty} a_n = 0$ and $\sum_{n=1}^{\infty} a_n = \infty$.
(C2) 
$b_n < a_n^2$.
(C3) 
$\lim_{n\to\infty} \frac{\theta_n}{a_n}\|x_n - x_{n-1}\| = 0$ and $\lim_{n\to\infty} \theta_n\|x_n - x_{n-1}\| = 0$.
(C4) 
$0 < \varrho_n < 4$ and $\inf_n \varrho_n(4 - \varrho_n) > 0$.
Theorem 1.
Let $A: H_1 \to H_2$ be a bounded linear operator with adjoint operator $A^*: H_2 \to H_1$. Let $f$ and $g$ be two proper, convex, and lower semicontinuous functions such that (12) is consistent (i.e., $\Omega_{SMP} \ne \emptyset$) and that $\{(I - \mathrm{prox}_{\lambda g})Au_n\}$ is bounded. Assume that $\{a_n\}$, $\{b_n\}$, $\{\theta_n\}$, and $\{\varrho_n\}$ are sequences that satisfy Condition 1. Let $D: H_1 \to H_1$ and $F: H_1 \to H_1$ be a strongly positive bounded linear operator and an $L$-Lipschitz mapping with $L > 0$, respectively. Assume that $\xi > 0$ and that $0 < \xi L < \bar{\gamma}$. Then, the sequence $\{x_n\}$ generated by Algorithm 1 converges strongly to a solution of the SMP, which is also the unique solution of the variational inequality:
$\langle (D - \xi F)\acute{z}, \acute{z} - x \rangle \le 0, \quad \forall x \in \Omega_{SMP}.$
Algorithm 1: Algorithm for solving split minimization problem.
Initialization: Set $\{b_n\} \subset [0, \frac{1}{2})$. Choose positive sequences $\{\varrho_n\}$ and $\{a_n\}$ satisfying $0 < \varrho_n < 4$ and $a_n \in (0, 1)$, respectively. Select arbitrary starting points $x_1, x_2 \in H_1$.
Iterative step: Given $\alpha > 0$ and $x_n, d_n \in H_1$, compute $x_{n+1}$ as follows:
$u_n = x_n + \theta_n(x_n - x_{n-1}),$
$d_{n+1} = -\dfrac{\kappa_n A^*(I - \mathrm{prox}_{\lambda g})Au_n}{\alpha} + b_n d_n,$
$y_n = u_n + \alpha d_{n+1},$
$x_{n+1} = a_n\xi F(x_n) + (I - a_n D)\,\mathrm{prox}_{\lambda\kappa_n f}\,y_n,$
where
$d_1 = -\dfrac{\kappa_1 A^*(I - \mathrm{prox}_{\lambda g})Au_1}{\alpha},$
and
$\kappa_n = \begin{cases} \varrho_n\dfrac{p(u_n)}{\|\nabla p(u_n)\|^2 + \|\nabla q(u_n)\|^2}, & \|\nabla p(u_n)\|^2 + \|\nabla q(u_n)\|^2 \ne 0, \\ 0, & \text{otherwise}, \end{cases}$
where $p$, $q$, $\nabla p$, and $\nabla q$ are defined as in Equations (13)–(16).
Stopping criterion: If $p(u_n) = q(u_n) = 0$, then stop. Otherwise, set $n = n + 1$ and return to the Iterative step.
Proof. 
To begin with, we demonstrate that the sequence $\{d_n\}$ is bounded using mathematical induction. For $n = 1$, the boundedness of the sequence $\{d_n\}$ is trivial. We obtain $\lim_{n\to\infty} b_n = 0$ by applying conditions (C1) and (C2). This implies the existence of $n_0 \in \mathbb{N}$ such that $b_n \le \frac{1}{2}$ for all $n \ge n_0$. Assume that $M_1 = \max\{\|d_{n_0}\|,\ \frac{2}{\alpha}\sup_{n \ge 1}\|\kappa_n A^*(I - \mathrm{prox}_{\lambda g})Au_n\|\} < \infty$. We start with the assumption that $\|d_n\| \le M_1$, which holds true for some $n \ge n_0$, and demonstrate that it continues to hold true for $n + 1$. The triangle inequality ensures that
$\|d_{n+1}\| = \left\| -\dfrac{\kappa_n A^*(I - \mathrm{prox}_{\lambda g})Au_n}{\alpha} + b_n d_n \right\| \le \dfrac{1}{\alpha}\|\kappa_n A^*(I - \mathrm{prox}_{\lambda g})Au_n\| + b_n\|d_n\| \le \dfrac{1}{2}\cdot\dfrac{2}{\alpha}\|\kappa_n A^*(I - \mathrm{prox}_{\lambda g})Au_n\| + \dfrac{M_1}{2} \le \dfrac{1}{2}M_1 + \dfrac{1}{2}M_1 = M_1.$
This implies that $\|d_n\| \le M_1$ for all $n \ge n_0$. As a result, $\{d_n\}$ is bounded.
Next, we demonstrate that the sequences $\{x_n\}$, $\{y_n\}$, $\{F(x_n)\}$, and $\{u_n\}$ are bounded. Given that $\Omega_{SMP} \ne \emptyset$, we can take $\acute{z} \in \Omega_{SMP}$. As a result, we have $A\acute{z} = \mathrm{prox}_{\lambda g}A\acute{z}$ and $\acute{z} = \mathrm{prox}_{\lambda\kappa_n f}\acute{z}$. By exploiting the fact that $(I - \mathrm{prox}_{\lambda g})$ is firmly nonexpansive, we conclude that
$\|u_n - \acute{z} - \kappa_n A^*(I - \mathrm{prox}_{\lambda g})Au_n\|^2 = \|u_n - \acute{z} - \kappa_n A^*(I - \mathrm{prox}_{\lambda g})Au_n + \kappa_n A^*(I - \mathrm{prox}_{\lambda g})A\acute{z}\|^2 = \|u_n - \acute{z}\|^2 - 2\kappa_n\langle\nabla p(u_n) - \nabla p(\acute{z}), u_n - \acute{z}\rangle + \kappa_n^2\|\nabla p(u_n) - \nabla p(\acute{z})\|^2 = \|u_n - \acute{z}\|^2 - 2\kappa_n\langle\nabla p(u_n), u_n - \acute{z}\rangle + \kappa_n^2\|\nabla p(u_n)\|^2.$
It follows that
$\langle u_n - \acute{z}, \nabla p(u_n)\rangle = \langle u_n - \acute{z}, \nabla p(u_n) - \nabla p(\acute{z})\rangle = \langle Au_n - A\acute{z}, (I - \mathrm{prox}_{\lambda g})Au_n - (I - \mathrm{prox}_{\lambda g})A\acute{z}\rangle \ge \|(I - \mathrm{prox}_{\lambda g})Au_n\|^2 = 2p(u_n).$
Following from Equations (22) and (23), one gets
$\|u_n - \acute{z} - \kappa_n A^*(I - \mathrm{prox}_{\lambda g})Au_n\|^2 \le \|u_n - \acute{z}\|^2 - \dfrac{4\varrho_n p^2(u_n)}{\|\nabla p(u_n)\|^2 + \|\nabla q(u_n)\|^2} + \left(\dfrac{\varrho_n p(u_n)}{\|\nabla p(u_n)\|^2 + \|\nabla q(u_n)\|^2}\right)^2\|\nabla p(u_n)\|^2 \le \|u_n - \acute{z}\|^2 - \varrho_n(4 - \varrho_n)\dfrac{p^2(u_n)}{\|\nabla p(u_n)\|^2 + \|\nabla q(u_n)\|^2} \le \|u_n - \acute{z}\|^2.$
We can see from (20) and the previously stated inequality that
$\|y_n - \acute{z}\| = \|u_n - \acute{z} + \alpha d_{n+1}\| \le \|u_n - \acute{z} - \kappa_n A^*(I - \mathrm{prox}_{\lambda g})Au_n\| + \alpha b_n\|d_n\| \le \|u_n - \acute{z}\| + \alpha M_1 b_n,$
and
$\|y_n - \acute{z}\|^2 = \|u_n - \acute{z} + \alpha d_{n+1}\|^2 = \|u_n - \acute{z} - \kappa_n A^*(I - \mathrm{prox}_{\lambda g})Au_n + \alpha b_n d_n\|^2 \le \|u_n - \acute{z} - \kappa_n A^*(I - \mathrm{prox}_{\lambda g})Au_n\|^2 + 2\alpha b_n\langle y_n - \acute{z}, d_n\rangle \le \|u_n - \acute{z}\|^2 - \varrho_n(4 - \varrho_n)\dfrac{p^2(u_n)}{\|\nabla p(u_n)\|^2 + \|\nabla q(u_n)\|^2} + 2\alpha b_n\langle y_n - \acute{z}, d_n\rangle \le \|u_n - \acute{z}\|^2 - \varrho_n(4 - \varrho_n)\dfrac{p^2(u_n)}{\|\nabla p(u_n)\|^2 + \|\nabla q(u_n)\|^2} + 2\alpha b_n\|y_n - \acute{z}\|\,\|d_n\|.$
Additionally, we can see from (20) that
$\|u_n - \acute{z}\| \le \|x_n - \acute{z}\| + \theta_n\|x_n - x_{n-1}\|,$
and
$\|u_n - \acute{z}\|^2 = \|(x_n - \acute{z}) + \theta_n(x_n - x_{n-1})\|^2 \le \|x_n - \acute{z}\|^2 + 2\theta_n\langle x_n - x_{n-1}, u_n - \acute{z}\rangle \le \|x_n - \acute{z}\|^2 + 2\theta_n\|x_n - x_{n-1}\|\,\|u_n - \acute{z}\|.$
Because $\lim_{n\to\infty} a_n = 0$, we can safely assume that $a_n < \|D\|^{-1}$ for all $n \in \mathbb{N}$. As a result, we can deduce from Equations (24), (26), and Lemma 6 that
$\begin{aligned} \|x_{n+1} - \acute{z}\| &= \|a_n\xi F(x_n) + (I - a_n D)\,\mathrm{prox}_{\lambda\kappa_n f}\,y_n - \acute{z}\| \\ &\le a_n\|\xi F(x_n) - D\acute{z}\| + \|(I - a_n D)\,\mathrm{prox}_{\lambda\kappa_n f}\,y_n - (I - a_n D)\acute{z}\| \\ &\le a_n\|\xi F(x_n) - D\acute{z}\| + (1 - a_n\bar{\gamma})\|\mathrm{prox}_{\lambda\kappa_n f}\,y_n - \acute{z}\| \\ &\le a_n\xi\|F(x_n) - F(\acute{z})\| + a_n\|\xi F(\acute{z}) - D\acute{z}\| + (1 - a_n\bar{\gamma})\|y_n - \acute{z}\| \\ &\le a_n\xi L\|x_n - \acute{z}\| + a_n\|\xi F(\acute{z}) - D\acute{z}\| + (1 - a_n\bar{\gamma})\big(\|u_n - \acute{z}\| + \alpha M_1 b_n\big) \\ &\le a_n\xi L\|x_n - \acute{z}\| + a_n\|\xi F(\acute{z}) - D\acute{z}\| + (1 - a_n\bar{\gamma})\big(\|x_n - \acute{z}\| + \theta_n\|x_n - x_{n-1}\| + \alpha M_1 b_n\big) \\ &= \big(1 - a_n(\bar{\gamma} - \xi L)\big)\|x_n - \acute{z}\| + a_n\|\xi F(\acute{z}) - D\acute{z}\| + (1 - a_n\bar{\gamma})\theta_n\|x_n - x_{n-1}\| + (1 - a_n\bar{\gamma})\alpha M_1 b_n \\ &\le \big(1 - a_n(\bar{\gamma} - \xi L)\big)\|x_n - \acute{z}\| + a_n(\bar{\gamma} - \xi L)\,\frac{\|\xi F(\acute{z}) - D\acute{z}\| + \frac{\theta_n}{a_n}\|x_n - x_{n-1}\| + \alpha M_1}{\bar{\gamma} - \xi L} \\ &\le \max\left\{\|x_n - \acute{z}\|,\ \frac{\alpha M_1 + \frac{\theta_n}{a_n}\|x_n - x_{n-1}\| + \|\xi F(\acute{z}) - D\acute{z}\|}{\bar{\gamma} - \xi L}\right\}. \end{aligned}$
Thus,
$\|x_{n+1} - \acute{z}\| \le \max\left\{\|x_1 - \acute{z}\|,\ \dfrac{\alpha M_1 + \frac{\theta_1}{a_1}\|x_1 - x_0\| + \|\xi F(\acute{z}) - D\acute{z}\|}{\bar{\gamma} - \xi L}\right\}.$
This inequality is obtained through the use of mathematical induction. Therefore, $\{x_n\}$ is bounded. Consequently, $\{y_n\}$, $\{F(x_n)\}$, and $\{u_n\}$ are all bounded. Additionally, we obtain from Equations (24) and (27) that
$\begin{aligned} \|x_{n+1} - \acute{z}\|^2 &= \|a_n\xi F(x_n) + (I - a_n D)\,\mathrm{prox}_{\lambda\kappa_n f}\,y_n - \acute{z}\|^2 \\ &\le \|(I - a_n D)(\mathrm{prox}_{\lambda\kappa_n f}\,y_n - \acute{z})\|^2 + 2a_n\langle\xi F(x_n) - D\acute{z}, x_{n+1} - \acute{z}\rangle \\ &\le (1 - a_n\bar{\gamma})^2\|y_n - \acute{z}\|^2 + 2a_n\langle\xi F(x_n) - \xi F(\acute{z}), x_{n+1} - \acute{z}\rangle + 2a_n\langle\xi F(\acute{z}) - D\acute{z}, x_{n+1} - \acute{z}\rangle \\ &\le (1 - a_n\bar{\gamma})^2\Big\{\|x_n - \acute{z}\|^2 + 2\theta_n\|x_n - x_{n-1}\|\,\|u_n - \acute{z}\| + 2\alpha a_n^2\|y_n - \acute{z}\|\,\|d_n\| - \varrho_n(4 - \varrho_n)\frac{p^2(u_n)}{\|\nabla p(u_n)\|^2 + \|\nabla q(u_n)\|^2}\Big\} \\ &\quad + 2a_n\xi L\|x_n - \acute{z}\|\,\|x_{n+1} - \acute{z}\| + 2a_n\langle\xi F(\acute{z}) - D\acute{z}, x_{n+1} - \acute{z}\rangle. \end{aligned}$
This implies that
$\begin{aligned} \|x_{n+1} - \acute{z}\|^2 &\le (1 - a_n\bar{\gamma})^2\Big\{\|x_n - \acute{z}\|^2 + 2\theta_n\|x_n - x_{n-1}\|\,\|u_n - \acute{z}\| + 2\alpha a_n^2\|y_n - \acute{z}\|\,\|d_n\| - \varrho_n(4 - \varrho_n)\frac{p^2(u_n)}{\|\nabla p(u_n)\|^2 + \|\nabla q(u_n)\|^2}\Big\} \\ &\quad + a_n\xi L\big(\|x_n - \acute{z}\|^2 + \|x_{n+1} - \acute{z}\|^2\big) + 2a_n\langle\xi F(\acute{z}) - D\acute{z}, x_{n+1} - \acute{z}\rangle \\ &\le (1 - a_n\bar{\gamma})^2\|x_n - \acute{z}\|^2 + a_n\xi L\|x_n - \acute{z}\|^2 + a_n\xi L\|x_{n+1} - \acute{z}\|^2 + 2a_n\langle\xi F(\acute{z}) - D\acute{z}, x_{n+1} - \acute{z}\rangle \\ &\quad + 2\theta_n\|x_n - x_{n-1}\|\,\|u_n - \acute{z}\| + 2\alpha a_n^2\|y_n - \acute{z}\|\,\|d_n\|. \end{aligned}$
Rearranging the terms and dividing by $1 - a_n\xi L$, one obtains
$\|x_{n+1} - \acute{z}\|^2 \le \left(1 - \frac{2(\bar{\gamma} - \xi L)a_n}{1 - a_n\xi L}\right)\|x_n - \acute{z}\|^2 + \frac{a_n^2\bar{\gamma}^2}{1 - a_n\xi L}\|x_n - \acute{z}\|^2 + \frac{2a_n}{1 - a_n\xi L}\Big\{\langle\xi F(\acute{z}) - D\acute{z}, x_{n+1} - \acute{z}\rangle + \frac{\theta_n}{a_n}\|x_n - x_{n-1}\|\,\|u_n - \acute{z}\| + \alpha a_n\|y_n - \acute{z}\|\,\|d_n\|\Big\}.$
It follows that
$\|x_{n+1} - \acute{z}\|^2 \le \left(1 - \frac{2(\bar{\gamma} - \xi L)a_n}{1 - a_n\xi L}\right)\|x_n - \acute{z}\|^2 + \frac{2(\bar{\gamma} - \xi L)a_n}{1 - a_n\xi L}\left\{\frac{a_n\bar{\gamma}^2}{2(\bar{\gamma} - \xi L)}\|x_n - \acute{z}\|^2 + \frac{1}{\bar{\gamma} - \xi L}\langle\xi F(\acute{z}) - D\acute{z}, x_{n+1} - \acute{z}\rangle + \frac{\theta_n}{(\bar{\gamma} - \xi L)a_n}\|x_n - x_{n-1}\|\,\|u_n - \acute{z}\| + \frac{2a_n\alpha}{\bar{\gamma} - \xi L}\|y_n - \acute{z}\|\,\|d_n\|\right\}.$
In another direction, we obtain from Equation (28) that
$\varrho_n(4 - \varrho_n)\dfrac{p^2(u_n)}{\|\nabla p(u_n)\|^2 + \|\nabla q(u_n)\|^2} \le \|x_n - \acute{z}\|^2 - \|x_{n+1} - \acute{z}\|^2 + 2\theta_n\|x_n - x_{n-1}\|\,\|u_n - \acute{z}\| + 2\alpha a_n^2\|y_n - \acute{z}\|\,\|d_n\| + 2a_n\xi L\|x_n - \acute{z}\|\,\|x_{n+1} - \acute{z}\| + 2a_n\|\xi F(\acute{z}) - D\acute{z}\|\,\|x_{n+1} - \acute{z}\|.$
Following that, two different cases for the convergence of the sequence { x n z ´ 2 } are considered.
Case I. Assume that the sequence $\{\|x_n - \acute{z}\|^2\}$ is eventually nonincreasing. In that case, there exists $n_0 \in \mathbb{N}$ such that $\|x_{n+1} - \acute{z}\|^2 \le \|x_n - \acute{z}\|^2$ for every $n \ge n_0$. Thus, the sequence $\{\|x_n - \acute{z}\|^2\}$ converges, and
$\lim_{n\to\infty}\big(\|x_{n+1} - \acute{z}\|^2 - \|x_n - \acute{z}\|^2\big) = 0.$
By applying conditions (C1), (C3), and Equation (31) to Formula (30), we obtain that
$\lim_{n\to\infty}\varrho_n(4 - \varrho_n)\dfrac{p^2(u_n)}{\|\nabla p(u_n)\|^2 + \|\nabla q(u_n)\|^2} = 0.$
Additionally, by using the assumption on the sequence $\{\varrho_n\}$ and the boundedness of $\nabla p(u_n)$ and $\nabla q(u_n)$, we obtain that
$\lim_{n\to\infty} p(u_n) = 0.$
Since $\mathrm{prox}_{\lambda\kappa_n f}$ is firmly nonexpansive, we observe that
$\|\mathrm{prox}_{\lambda\kappa_n f}y_n - y_n\|^2 = \|\mathrm{prox}_{\lambda\kappa_n f}y_n - \acute{z} - (y_n - \acute{z})\|^2 = \|\mathrm{prox}_{\lambda\kappa_n f}y_n - \acute{z}\|^2 - 2\langle\mathrm{prox}_{\lambda\kappa_n f}y_n - \acute{z}, y_n - \acute{z}\rangle + \|y_n - \acute{z}\|^2 \le \|y_n - \acute{z}\|^2 - \|\mathrm{prox}_{\lambda\kappa_n f}y_n - \acute{z}\|^2 \le \|x_n - \acute{z}\|^2 + \dfrac{\theta_n}{a_n}\|x_n - x_{n-1}\|\,\|u_n - \acute{z}\| + 2\alpha a_n^2\|y_n - \acute{z}\|\,\|d_n\| - \varrho_n(4 - \varrho_n)\dfrac{p^2(u_n)}{\|\nabla p(u_n)\|^2 + \|\nabla q(u_n)\|^2} - \|\mathrm{prox}_{\lambda\kappa_n f}y_n - \acute{z}\|^2.$
Additionally, we observe that
$\|x_{n+1} - \acute{z}\|^2 = \|a_n\xi F(x_n) + (I - a_n D)\,\mathrm{prox}_{\lambda\kappa_n f}\,y_n - \acute{z}\|^2 = \|a_n(\xi F(x_n) - D\acute{z}) + (I - a_n D)(\mathrm{prox}_{\lambda\kappa_n f}\,y_n - \acute{z})\|^2 \le \|(I - a_n D)(\mathrm{prox}_{\lambda\kappa_n f}\,y_n - \acute{z})\|^2 + 2a_n\langle\xi F(x_n) - D\acute{z}, x_{n+1} - \acute{z}\rangle \le (1 - a_n\bar{\gamma})^2\|\mathrm{prox}_{\lambda\kappa_n f}\,y_n - \acute{z}\|^2 + 2a_n\|\xi F(x_n) - D\acute{z}\|\,\|x_{n+1} - \acute{z}\| \le \|\mathrm{prox}_{\lambda\kappa_n f}\,y_n - \acute{z}\|^2 + 2a_n\|\xi F(x_n) - D\acute{z}\|\,\|x_{n+1} - \acute{z}\|.$
It follows that
$-\|\mathrm{prox}_{\lambda\kappa_n f}\,y_n - \acute{z}\|^2 \le -\|x_{n+1} - \acute{z}\|^2 + 2a_n\|\xi F(x_n) - D\acute{z}\|\,\|x_{n+1} - \acute{z}\|.$
By combining Equations (33) and (34), we obtain that
$\|\mathrm{prox}_{\lambda\kappa_n f}\,y_n - y_n\|^2 \le \|x_n - \acute{z}\|^2 - \|x_{n+1} - \acute{z}\|^2 + 2\alpha a_n^2\|y_n - \acute{z}\|\,\|d_n\| + 2a_n\|\xi F(x_n) - D\acute{z}\|\,\|x_{n+1} - \acute{z}\| + \dfrac{\theta_n}{a_n}\|x_n - x_{n-1}\|\,\|u_n - \acute{z}\|.$
Additionally, by utilizing conditions (C1), (C2), and (31), we obtain that
$\lim_{n\to\infty}\|\mathrm{prox}_{\lambda\kappa_n f}\,y_n - y_n\| = 0.$
By using (20), we obtain that
$\|x_{n+1} - \mathrm{prox}_{\lambda\kappa_n f}\,y_n\| = \|a_n\xi F(x_n) + (I - a_n D)\,\mathrm{prox}_{\lambda\kappa_n f}\,y_n - \mathrm{prox}_{\lambda\kappa_n f}\,y_n\| = a_n\|\xi F(x_n) - D\,\mathrm{prox}_{\lambda\kappa_n f}\,y_n\|.$
Then, we get that
$\lim_{n\to\infty}\|x_{n+1} - \mathrm{prox}_{\lambda\kappa_n f}\,y_n\| = 0.$
Clearly, by using condition (C3) and Equation (32), we obtain that
$\lim_{n\to\infty}\|x_n - u_n\| = 0, \qquad \lim_{n\to\infty}\|u_n - y_n\| = 0.$
From the foregoing, we can immediately conclude that
$\lim_{n\to\infty}\|x_n - y_n\| = 0.$
Consider the following inequality:
$\|x_{n+1} - x_n\| \le \|x_{n+1} - \mathrm{prox}_{\lambda\kappa_n f}\,y_n\| + \|\mathrm{prox}_{\lambda\kappa_n f}\,y_n - y_n\| + \|x_n - y_n\|.$
Using this information and Equations (35)–(37), we obtain that
$\lim_{n\to\infty}\|x_{n+1} - x_n\| = 0.$
This implies that the sequence $\{x_n\}$ is asymptotically regular.
Next, we demonstrate that $\bar{z} \in \Omega_{SMP}$. Let $\bar{z}$ be a weak cluster point of $\{x_n\}$; then there is a subsequence $\{x_{n_i}\}$ such that $x_{n_i} \rightharpoonup \bar{z}$. By utilizing the weak lower semicontinuity of $p$, we obtain that
$0 \le p(\bar{z}) \le \liminf_{i\to\infty} p(x_{n_i}) = \lim_{n\to\infty} p(x_n) = 0.$
This implies that $p(\bar{z}) = \frac{1}{2}\|(I - \mathrm{prox}_{\lambda g})A\bar{z}\|^2 = 0$, i.e., $(I - \mathrm{prox}_{\lambda g})A\bar{z} = 0$. We can therefore deduce that $A\bar{z}$ is a fixed point of the proximal mapping of $g$, or that $0 \in \partial g(A\bar{z})$. As a result, $A\bar{z}$ is a minimizer of $g$.
Similarly, by utilizing the weak lower semicontinuity of $q$, we obtain that
$0 \le q(\bar{z}) \le \liminf_{i\to\infty} q(x_{n_i}) = \lim_{n\to\infty} q(x_n) = 0.$
That is, $q(\bar{z}) = \frac{1}{2}\|(I - \mathrm{prox}_{\lambda\kappa_n f})\bar{z}\|^2 = 0$, i.e., $(I - \mathrm{prox}_{\lambda\kappa_n f})\bar{z} = 0$. Therefore, we deduce that $\bar{z}$ is a fixed point of the proximal mapping of $f$, or that $0 \in \partial f(\bar{z})$. Thus, $\bar{z}$ is a minimizer of $f$. This implies that $\bar{z} \in \Omega_{SMP}$.
We now demonstrate that $\limsup_{n\to\infty}\langle\xi F(\acute{z}) - D\acute{z}, x_{n+1} - \acute{z}\rangle \le 0$, where $\acute{z} = P_{\Omega_{SMP}}(I - D + \xi F)\acute{z}$ is the unique solution to the variational inequality:
$\langle (D - \xi F)\acute{z}, x - \acute{z}\rangle \ge 0, \quad \forall x \in \Omega_{SMP}.$
By using the properties of the metric projection, we obtain that
$\limsup_{n\to\infty}\langle\xi F(\acute{z}) - D\acute{z}, x_{n+1} - \acute{z}\rangle = \lim_{i\to\infty}\langle\xi F(\acute{z}) - D\acute{z}, x_{n_i} - \acute{z}\rangle = \langle\xi F(\acute{z}) - D\acute{z}, \bar{z} - \acute{z}\rangle \le 0.$
Following that, we show that the sequence { x n } strongly converges to the point z ´ , which is the unique solution to the variational inequality:
$\langle (D - \xi F)\acute{z}, x - \acute{z}\rangle \ge 0, \quad \forall x \in \Omega_{SMP}.$
By observing Equation (29), we obtain that
$\|x_{n+1} - \acute{z}\|^2 \le (1 - \delta_n)\|x_n - \acute{z}\|^2 + \delta_n\varrho_n,$
where $\delta_n = \dfrac{2(\bar{\gamma} - \xi L)a_n}{1 - a_n\xi L}$ and $\varrho_n = \dfrac{a_n\bar{\gamma}^2}{2(\bar{\gamma} - \xi L)}\|x_n - \acute{z}\|^2 + \dfrac{1}{\bar{\gamma} - \xi L}\langle\xi F(\acute{z}) - D\acute{z}, x_{n+1} - \acute{z}\rangle + \dfrac{\theta_n}{(\bar{\gamma} - \xi L)a_n}\|x_n - x_{n-1}\|\,\|u_n - \acute{z}\| + \dfrac{2a_n\alpha}{\bar{\gamma} - \xi L}\|y_n - \acute{z}\|\,\|d_n\|$.
By applying conditions (C1) and (C3) to $\delta_n$ and $\varrho_n$, we obtain that $\lim_{n\to\infty}\delta_n = 0$ and $\limsup_{n\to\infty}\varrho_n \le 0$. As a result of applying Lemma 4 to Equation (38), we obtain that $\lim_{n\to\infty}\|x_n - \acute{z}\| = 0$. This implies that $x_n \to \acute{z}$, with $\acute{z} = P_{\Omega_{SMP}}(I - D + \xi F)\acute{z}$.
Case II. Assume that the sequence $\{\|x_n - \acute{z}\|^2\}$ does not decrease at infinity, and set $\Gamma_n = \|x_n - \acute{z}\|^2$, $n \ge 1$. For $n \ge n_0$ (where $n_0$ is sufficiently large), define the map $\eta: \mathbb{N} \to \mathbb{N}$ by
$\eta(n) = \max\{ k \in \mathbb{N} \ | \ k \le n, \ \Gamma_k \le \Gamma_{k+1}\}.$
Then, $\eta$ is a nondecreasing sequence with $\eta(n) \to \infty$ as $n \to \infty$, and
$0 \le \Gamma_{\eta(n)} \le \Gamma_{\eta(n)+1}, \quad \forall n \ge n_0.$
Similarly to Case I, it is obvious that
$\lim_{n\to\infty}\|x_{\eta(n)+1} - \mathrm{prox}_{\lambda\kappa_n f}\,y_{\eta(n)}\| = 0, \qquad \lim_{n\to\infty}\|\mathrm{prox}_{\lambda\kappa_n f}\,y_{\eta(n)} - y_{\eta(n)}\| = 0, \qquad \lim_{n\to\infty}\|x_{\eta(n)} - y_{\eta(n)}\| = 0,$
and
$\lim_{n\to\infty}\|x_{\eta(n)+1} - x_{\eta(n)}\| = 0.$
Because $\{x_{\eta(n)}\}$ is bounded, a weakly convergent subsequence of $\{x_{\eta(n)}\}$ exists. Assume that $x_{\eta(n)} \rightharpoonup \bar{z} \in \Omega_{SMP}$. Similarly to Case I, we obtain $\lim_{n\to\infty}\delta_{\eta(n)} = 0$ and $\limsup_{n\to\infty}\varrho_{\eta(n)} \le 0$. Then, using (29), we determine that
$\Gamma_{\eta(n)+1} \le (1 - \delta_{\eta(n)})\Gamma_{\eta(n)} + \delta_{\eta(n)}\varrho_{\eta(n)}.$
This information implies that
$\Gamma_{\eta(n)} \le \varrho_{\eta(n)},$
and
$\limsup_{n\to\infty}\Gamma_{\eta(n)} \le 0.$
As a result,
$\lim_{n\to\infty}\Gamma_{\eta(n)} = 0.$
Additionally, we obtain from Equation (39) that
$\limsup_{n\to\infty}\Gamma_{\eta(n)+1} \le \limsup_{n\to\infty}\Gamma_{\eta(n)}.$
As a result,
$\lim_{n\to\infty}\Gamma_{\eta(n)+1} = 0.$
As a result of Lemma 5, we have
$0 \le \Gamma_n \le \max\{\Gamma_n, \Gamma_{\eta(n)}\} \le \Gamma_{\eta(n)+1}.$
As a result, $\lim_{n\to\infty}\Gamma_n = 0$. This implies that $x_n \to \acute{z} = P_{\Omega_{SMP}}(I - D + \xi F)\acute{z}$, completing the proof.    □

4. Numerical Examples

In this section, we demonstrate the performance of our proposed algorithm using two examples. The first example experiments with various step sizes to demonstrate how the step size selection affects the proposed algorithm's convergence in an infinite-dimensional space. In the second example, we demonstrate how our proposed algorithm can solve the LASSO problem, thereby resolving the signal recovery problem. Additionally, we compare our algorithm to three previously published strong convergence algorithms to demonstrate our algorithm's performance in terms of recovery signal quality.
To begin with, we describe how Algorithm 1 can be applied to the split minimization problem, which can be modeled to solve the signal recovery problem.
Recall the SFP defined as follows:
find $x \in C$ such that $Ax \in Q$.
$\Upsilon$ denotes the collection of SFP solutions. We now set $f = \delta_C$ and $g = \delta_Q$, where $\delta_C$ and $\delta_Q$ are the indicator functions of two nonempty, closed, and convex sets $C$ and $Q$. Then, $\mathrm{prox}_{\lambda\kappa_n f} = \mathrm{Proj}_C$ and $\mathrm{prox}_{\lambda g} = \mathrm{Proj}_Q$. The following result is then obtained:
Corollary 1.
Let $A: H_1 \to H_2$ be a bounded linear operator with adjoint operator $A^*: H_2 \to H_1$. Let $f$ and $g$ be two proper, convex, and lower-semicontinuous functions such that (40) is consistent (i.e., $\Upsilon \ne \emptyset$) and that $\{(I - \mathrm{prox}_{\lambda g})Ax_n\}$ is bounded. Assume that
$f = \delta_C, \quad \delta_C(x) = \begin{cases} 0, & \text{if } x \in C, \\ +\infty, & \text{otherwise}, \end{cases}$
and
$g = \delta_Q, \quad \delta_Q(x) = \begin{cases} 0, & \text{if } x \in Q, \\ +\infty, & \text{otherwise}. \end{cases}$
Assume that $\{a_n\}$, $\{b_n\}$, $\{\theta_n\}$, and $\{\varrho_n\}$ are sequences that satisfy Condition 1. Let $D: H_1 \to H_1$ and $F: H_1 \to H_1$ be a strongly positive bounded linear operator and an $L$-Lipschitz mapping with $L > 0$, respectively. Assume that $\xi > 0$ and that $0 < \xi L < \bar{\gamma}$. Then, the sequence $\{x_n\}$ generated by Algorithm 2 converges strongly to a solution of the SFP (40), which is also the unique solution of the variational inequality:
$\langle (D - \xi F)\acute{z}, \acute{z} - x\rangle \le 0, \quad \forall x \in \Upsilon.$
Algorithm 2: An iterative algorithm for solving split feasibility problem.
Initialization: Set $\{b_n\} \subset [0, \frac{1}{2})$. Choose positive sequences $\{\varrho_n\}$ and $\{a_n\}$ satisfying $0 < \varrho_n < 4$ and $a_n \in (0, 1)$, respectively. Select arbitrary starting points $x_1, x_2 \in H_1$.
Iterative step: Given $\alpha > 0$ and $x_n, d_n \in H_1$, compute $x_{n+1}$ as follows:
$u_n = x_n + \theta_n(x_n - x_{n-1}),$
$d_{n+1} = -\dfrac{\kappa_n A^*(I - \mathrm{Proj}_Q)Au_n}{\alpha} + b_n d_n,$
$y_n = u_n + \alpha d_{n+1},$
$x_{n+1} = a_n\xi F(x_n) + (I - a_n D)\,\mathrm{Proj}_C\, y_n,$
where
$d_1 = -\dfrac{\kappa_1 A^*(I - \mathrm{Proj}_Q)Au_1}{\alpha},$
and
$\kappa_n = \begin{cases} \varrho_n\dfrac{p(u_n)}{\|\nabla p(u_n)\|^2 + \|\nabla q(u_n)\|^2}, & \|\nabla p(u_n)\|^2 + \|\nabla q(u_n)\|^2 \ne 0, \\ 0, & \text{otherwise}, \end{cases}$
where $p$, $q$, $\nabla p$, and $\nabla q$ are defined as in Equations (13)–(16).
Stopping criterion: If $p(u_n) = q(u_n) = 0$, then stop. Otherwise, set $n = n + 1$ and return to the Iterative step.
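For the experiments below, a finite-dimensional sketch of Algorithm 2 may be helpful. It fixes $F \equiv D \equiv I$ and the parameter choices reported later ($a_n = 1/(n+1)$, $b_n = 1/(2n+1)^2$, $\theta_n = 1/n^2$); it is an illustration under these assumptions, not the authors' reference implementation:

```python
import numpy as np

def algorithm_2(A, proj_C, proj_Q, x1, x2, xi=0.75, alpha=1.0, rho=2.0,
                max_iter=200, tol=1e-4):
    """Sketch of Algorithm 2 with F = D = I and the parameter sequences used in Section 4."""
    x_prev, x, d = x1.astype(float), x2.astype(float), None
    for n in range(2, max_iter + 2):
        a_n, b_n, theta_n = 1.0 / (n + 1), 1.0 / (2 * n + 1) ** 2, 1.0 / n ** 2
        u = x + theta_n * (x - x_prev)                    # inertial step
        rp = A @ u - proj_Q(A @ u)                        # (I - Proj_Q) A u_n
        grad_p, grad_q = A.T @ rp, u - proj_C(u)          # grad p(u_n), grad q(u_n)
        denom = np.dot(grad_p, grad_p) + np.dot(grad_q, grad_q)
        kappa = rho * 0.5 * np.dot(rp, rp) / denom if denom > 0 else 0.0
        step = -kappa * grad_p / alpha                    # extragradient-type direction
        d = step if d is None else step + b_n * d         # conjugate gradient update
        y = u + alpha * d
        x_next = a_n * xi * x + (1.0 - a_n) * proj_C(y)   # a_n xi F(x_n) + (I - a_n D) Proj_C y_n
        if np.linalg.norm(x_next - x) < tol:              # Cauchy-error stopping rule
            return x_next
        x_prev, x = x, x_next
    return x
```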
Example 1.
Consider the function space $L_2([m, n])$. Assume $H_1 = L_2([m, n]) = H_2$ with the norm $\|\cdot\|_{L_2}$. As in [31], assume that
$C := \{ z \in L_2([m, n]) \ | \ \langle s, z\rangle \le t \},$
with $0 \ne s \in L_2([m, n])$ and $t \in \mathbb{R}$. Therefore,
$\mathrm{Proj}_C(z) = \begin{cases} \dfrac{t - \langle s, z\rangle}{\|s\|_{L_2}^2}\, s + z, & \text{if } \langle s, z\rangle > t, \\ z, & \text{otherwise}. \end{cases}$
Additionally, assume that
$Q := \{ z \in L_2([m, n]) \ | \ \|z - d\|_{L_2} \le r \},$
with $d \in L_2([m, n])$ and $r > 0$. Therefore,
$\mathrm{Proj}_Q(z) = \begin{cases} \dfrac{z - d}{\|z - d\|_{L_2}}\, r + d, & \text{if } z \notin Q, \\ z, & \text{otherwise}. \end{cases}$
In this experiment, we set the parameters $a_n = \frac{1}{n+1}$, $b_n = \frac{1}{(2n+1)^2}$, $\xi = 0.75$, and $\alpha = 1$. Additionally, we set the operators $D \equiv F \equiv I$, $A(x) = \frac{1}{4}x$, and the sets
$C = \left\{ z \in L_2([0, 1]) \ \Big| \ \int_0^1 e^t z(t)\,dt \le 1 \right\}$
and
$Q = \left\{ z \in L_2([0, 1]) \ \Big| \ \int_0^1 |z(t) - e^t|\,dt \le 4 \right\}.$
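A discretized sketch of the two projections used in this experiment is given below. It assumes a uniform grid on $[0, 1]$, a trapezoidal quadrature for the inner product, and treats $Q$ as the $L_2$ ball around $d(t) = e^t$ of radius 4 so that the displayed projection formula applies; these discretization choices are illustrative:

```python
import numpy as np

t = np.linspace(0.0, 1.0, 1001)             # uniform grid discretizing [0, 1]
s = np.exp(t)                                # s(t) = e^t defining the half-space C

def inner(u, v):
    return np.trapz(u * v, t)                # discretized L2([0, 1]) inner product

def proj_C(z, bound=1.0):
    """Half-space projection: C = {z : <s, z> <= bound}."""
    val = inner(s, z)
    return z if val <= bound else z + (bound - val) / inner(s, s) * s

def proj_Q(z, d=s, r=4.0):
    """Ball projection: Q = {z : ||z - d|| <= r} (L2-ball form assumed)."""
    dist = np.sqrt(inner(z - d, z - d))
    return z if dist <= r else d + r * (z - d) / dist
```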
For different initial functions, we compared the computational performance of Algorithm 2 with and without the extrapolation factor ($\theta_n$) for a variety of values of $\varrho_n$. To facilitate the experiment, we divided it into three distinct cases.
Case 1: x 1 = 4 t 2 ( t 1 ) , x 2 = ( t 1 ) 3 .
   Case 1.1: θ n = 1 n 2 .
    Case 1.1.1: ϱ n = 0.1 .
    Case 1.1.2: ϱ n = 2 .
    Case 1.1.3: ϱ n = 3.99 .
   Case 1.2: θ n = 0 .
    Case 1.2.1: ϱ n = 0.1 .
    Case 1.2.2: ϱ n = 2 .
    Case 1.2.3: ϱ n = 3.99 .
Case 2: x 1 = 5 t 3 ( t 1 ) , x 2 = 10 t e 5 t .
   Case 2.1: θ n = 1 n 2 .
    Case 2.1.1: ϱ n = 0.1 .
    Case 2.1.2: ϱ n = 2 .
    Case 2.1.3: ϱ n = 3.99 .
   Case 2.2: θ n = 0 .
    Case 2.2.1: ϱ n = 0.1 .
    Case 2.2.2: ϱ n = 2 .
    Case 2.2.3: ϱ n = 3.99 .
Case 3: x 1 = t e t , x 2 = 100 t 7 ( t 1 ) 3 .
   Case 3.1: θ n = 1 n 2 .
    Case 3.1.1: ϱ n = 0.1 .
    Case 3.1.2: ϱ n = 2 .
    Case 3.1.3: ϱ n = 3.99 .
   Case 3.2: θ n = 0 .
    Case 3.2.1: ϱ n = 0.1 .
    Case 3.2.2: ϱ n = 2 .
    Case 3.2.3: ϱ n = 3.99 .
In this experiment, the stopping criterion is the Cauchy error $\|x_{n+1} - x_n\| < 10^{-4}$. The initial functions for each case are depicted in Figure 1. Figure 2, Figure 3 and Figure 4 illustrate the convergence behaviors of the sequence $\{x_n(t)\}$ with and without the extrapolation factor ($\theta_n$) for Cases 1–3. Cauchy error plots for the sequence $\{x_n(t)\}$ with and without the extrapolation factor ($\theta_n$) are shown in Figure 5, Figure 6 and Figure 7 for $\varrho_n = 0.1$, $2$, and $3.99$, respectively. Additionally, we demonstrate our algorithm's performance in terms of iteration counts and CPU time for generating the sequence $\{x_n(t)\}$ in each case, as shown in Table 1.
Remark 1.
1. 
As illustrated in Figure 2, Figure 3 and Figure 4, the convergence behavior of the sequence $\{x_n(t)\}$ generated by our proposed algorithm is observable. It always converges to zero, regardless of the initial functions and parameters chosen.
2. 
Figure 5, Figure 6 and Figure 7 illustrate the Cauchy error generated by our proposed algorithm for the sequence $\{x_n(t)\}$. They demonstrate that, regardless of the parameters chosen, the algorithm with the extrapolation factor ($\theta_n$) has a higher rate of convergence than the algorithm without it.
3. 
The advantages of our proposed algorithm in terms of iteration counts and CPU times are shown in Table 1. It demonstrates that the algorithm incorporating the extrapolation factor ($\theta_n$) is more efficient than the algorithm without it.
Example 2.
Consider the following linear inverse problem: $y = Ax + b$, where $x \in \mathbb{R}^N$ is the signal to recover, $b \in \mathbb{R}^M$ is a noise vector, and $A: \mathbb{R}^N \to \mathbb{R}^M$ ($M < N$) models the acquisition device. To solve this inverse problem, we can convert it to the LASSO problem described below:
$\min_{x \in \mathbb{R}^N} \frac{1}{2}\|y - Ax\|_2^2 \quad \text{subject to} \quad \|x\|_1 \le r,$
where $r$ is a positive constant. Assume that $C = \{ z \in \mathbb{R}^N \ | \ \|z\|_1 \le r \}$ and that $Q = \{y\}$. The LASSO problem (44) then transforms into the split feasibility problem (1). Thus, as described in Corollary 1, Algorithm 2 can be used to recover a sparse signal $x \in \mathbb{R}^N$ with $k$ ($k \ll N$) nonzero elements.
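In this setting $\mathrm{Proj}_Q$ is trivial ($\mathrm{Proj}_Q(z) = y$ for every $z$), while $\mathrm{Proj}_C$ is the Euclidean projection onto the $\ell_1$ ball, for which the standard sorting-based routine sketched below can be used (the threshold computation follows the usual simplex-projection argument; this is an illustrative helper, not code from the paper):

```python
import numpy as np

def proj_l1_ball(x, r):
    """Euclidean projection onto C = {z : ||z||_1 <= r}."""
    if np.sum(np.abs(x)) <= r:
        return x
    u = np.sort(np.abs(x))[::-1]                       # sorted magnitudes, descending
    css = np.cumsum(u)
    k = np.nonzero(u * np.arange(1, x.size + 1) > css - r)[0][-1]
    tau = (css[k] - r) / (k + 1.0)                     # soft-threshold level
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)
```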
The purpose of this experiment is to demonstrate that Algorithm 2 can be used to solve the LASSO problem (44). Following that, we compare our algorithm to two strong convergence algorithms proposed by Abbas and Alshahrani [31]. These algorithms are referred to as Abbas-1 (17) and Abbas-2 (18), respectively. Additionally, we compare the algorithms to Kaewyong’s self-adaptive step size algorithm [32]. Nattakarn (19) is the name given to this algorithm.
Throughout all algorithms, we set the regularization parameter $\lambda = 1$ and $\varrho_n = 2$. In our algorithm, we set $a_n = \frac{1}{n+1}$, $b_n = \frac{1}{(2n+1)^2}$, $D \equiv F \equiv I$, $\xi = 1$, $\alpha = 1$, and $\theta_n = \frac{1}{n^2}$. In Nattakarn (19), we set $S \equiv I$, $\alpha_n = 0.25$, $\delta_n = \frac{1}{n+1}$, $v = 0.5$, and $\theta_n = \frac{1}{n^2}$. In Abbas-1 (17) and Abbas-2 (18), we set $\epsilon_n = \frac{1}{n+1}$.
We evaluated the computational performance of each algorithm for a variety of different dimensions $N$ and sparsity levels $k$ (Case 1: $N = 500$, $k = 12$; Case 2: $N = 500$, $k = 20$; Case 3: $N = 1000$, $k = 30$; Case 4: $N = 1000$, $k = 50$). A maximum of 200 iterations is used as the stopping criterion. Additionally, we use the signal-to-noise ratio $\mathrm{SNR} = 20\log_{10}\dfrac{\|x_0\|_2}{\|x_0 - x_{n+1}\|_2}$, where $x_0$ is the original signal, to quantify the quality of recovery, with a higher SNR indicating a higher-quality recovery.
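For reference, the SNR reported in Table 2 can be computed with the following short helper (a sketch; the base-10 logarithm is assumed):

```python
import numpy as np

def snr(x0, x_rec):
    """Recovery quality: SNR = 20 * log10(||x0||_2 / ||x0 - x_rec||_2)."""
    return 20.0 * np.log10(np.linalg.norm(x0) / np.linalg.norm(x0 - x_rec))
```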
Figure 8 illustrates the original and noise signals in various dimensions N and sparsities k. Figure 9, Figure 10, Figure 11 and Figure 12 illustrate the recovery performance of all algorithms in a variety of scenarios. Table 2 shows the SNR comparison results.
Remark 2.
1. 
As demonstrated in Example 2, our proposed algorithm is capable of solving the LASSO problem (44).
2. 
As shown in Table 2 and Figure 9, Figure 10, Figure 11 and Figure 12, our algorithm outperforms the other algorithms for solving the LASSO problem (44). Additionally, the dimensions and sparsity levels have no effect on the computational performance of our proposed algorithm.

5. Conclusions

In this work, the split minimization problem was described in the framework of Hilbert spaces. By studying the advantages and disadvantages of some previous works, we constructed new algorithms for solving the split minimization problem by developing previously published algorithms based on useful techniques. The algorithms we constructed combine the following techniques: (1) a self-adaptive step size technique to avoid computing the operator norm of a bounded linear operator, which is difficult, if not impossible, to calculate or even estimate, and (2) inertia and conjugate gradient direction techniques to speed up the convergence rate. In the numerical examples, we showed that the extrapolation factor ($\theta_n$) in the inertial term results in a faster rate of convergence and that the step size parameter affects the rate of convergence as well. Additionally, we applied our proposed algorithm to solve the signal recovery problem. We compared our algorithm's recovery signal quality performance to that of three previously published works, which showed that our proposed algorithm outperformed the other algorithms for solving the signal recovery problem.

Author Contributions

N.K. and K.S. contributed equally in writing this article. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by King Mongkut's University of Technology North Bangkok, Contract No. KMUTNB-PHD-62-03.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The authors would like to thank the Department of Mathematics, Faculty of Applied Science, King Mongkut's University of Technology North Bangkok.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Censor, Y.; Elfving, T. A multiprojection algorithm using Bregman projections in a product space. Numer. Algorithms 1994, 8, 221–239.
2. Shi, Z.; Zhou, C.; Gu, Y.; Goodman, N.A.; Qu, F. Source estimation using coprime array: A sparse reconstruction perspective. IEEE Sens. J. 2016, 17, 755–765.
3. Zhou, C.; Gu, Y.; Zhang, Y.D.; Shi, Z.; Jin, T.; Wu, X. Compressive sensing-based coprime array direction-of-arrival estimation. IET Commun. 2017, 11, 1719–1724.
4. Zheng, H.; Shi, Z.; Zhou, C.; Haardt, M.; Chen, J. Coupled coarray tensor CPD for DOA estimation with coprime L-shaped array. IEEE Signal Process. Lett. 2021, 28, 1545–1549.
5. Byrne, C. Iterative oblique projection onto convex sets and the split feasibility problem. Inverse Probl. 2002, 18, 441.
6. Ansari, Q.H.; Rehan, A. Split feasibility and fixed point problems. In Nonlinear Analysis; Springer: Heidelberg, Germany, 2014; pp. 281–322.
7. Boikanyo, O.A. A strongly convergent algorithm for the split common fixed point problem. Appl. Math. Comput. 2015, 265, 844–853.
8. Ceng, L.C.; Ansari, Q.; Yao, J.C. Relaxed extragradient methods for finding minimum-norm solutions of the split feasibility problem. Nonlinear Anal. Theory Methods Appl. 2012, 75, 2116–2125.
9. Moudafi, A.; Thakur, B. Solving proximal split feasibility problems without prior knowledge of operator norms. Optim. Lett. 2014, 8, 2099–2110.
10. Shehu, Y.; Cai, G.; Iyiola, O.S. Iterative approximation of solutions for proximal split feasibility problems. Fixed Point Theory Appl. 2015, 2015, 1–18.
11. Shehu, Y.; Iyiola, O.S.; Enyi, C.D. An iterative algorithm for solving split feasibility problems and fixed point problems in Banach spaces. Numer. Algorithms 2016, 72, 835–864.
12. Shehu, Y.; Ogbuisi, F.; Iyiola, O. Convergence analysis of an iterative algorithm for fixed point problems and split feasibility problems in certain Banach spaces. Optimization 2016, 65, 299–323.
13. Xu, H.K. Iterative methods for the split feasibility problem in infinite-dimensional Hilbert spaces. Inverse Probl. 2010, 26, 105018.
14. Yang, Q. On variable-step relaxed projection algorithm for variational inequalities. J. Math. Anal. Appl. 2005, 302, 166–179.
15. Gibali, A.; Mai, D.T. A new relaxed CQ algorithm for solving split feasibility problems in Hilbert spaces and its applications. J. Ind. Manag. Optim. 2019, 15, 963.
16. Moudafi, A.; Gibali, A. l1-l2 regularization of split feasibility problems. Numer. Algorithms 2018, 78, 739–757.
17. Shehu, Y.; Iyiola, O.S. Convergence analysis for the proximal split feasibility problem using an inertial extrapolation term method. J. Fixed Point Theory Appl. 2017, 19, 2483–2510.
18. López, G.; Martín-Márquez, V.; Wang, F.; Xu, H.K. Solving the split feasibility problem without prior knowledge of matrix norms. Inverse Probl. 2012, 28, 085004.
19. Zeidler, E. Nonlinear Functional Analysis and Its Applications: III: Variational Methods and Optimization; Springer Science & Business Media: New York, NY, USA, 2013.
20. Nocedal, J.; Wright, S. Numerical Optimization; Springer Science & Business Media: New York, NY, USA, 2006.
21. Sakurai, K.; Iiduka, H. Acceleration of the Halpern algorithm to search for a fixed point of a nonexpansive mapping. Fixed Point Theory Appl. 2014, 2014, 1–11.
22. Polyak, B.T. Some methods of speeding up the convergence of iteration methods. USSR Comput. Math. Math. Phys. 1964, 4, 1–17.
23. Nesterov, Y. A method of solving a convex programming problem with convergence rate O(1/k^2). Sov. Math. Dokl. 1983, 27, 372–376.
24. Dang, Y.; Sun, J.; Xu, H. Inertial accelerated algorithms for solving a split feasibility problem. J. Ind. Manag. Optim. 2017, 13, 1383–1394.
25. Lorenz, D.A.; Pock, T. An inertial forward-backward algorithm for monotone inclusions. J. Math. Imaging Vis. 2015, 51, 311–325.
26. Kitkuan, D.; Kumam, P.; Martínez-Moreno, J.; Sitthithakerngkiet, K. Inertial viscosity forward–backward splitting algorithm for monotone inclusions and its application to image restoration problems. Int. J. Comput. Math. 2020, 97, 482–497.
27. Ul Rahman, J.; Chen, Q.; Yang, Z. Additive Parameter for Deep Face Recognition. Commun. Math. Stat. 2020, 8, 203–217.
28. Jyothi, R.; Babu, P. TELET: A Monotonic Algorithm to Design Large Dimensional Equiangular Tight Frames for Applications in Compressed Sensing. arXiv 2021, arXiv:2110.12182.
29. Rockafellar, R.T.; Wets, R.J.B. Variational Analysis; Springer Science & Business Media: New York, NY, USA, 2009; Volume 317.
30. Aubin, J.P. Optima and Equilibria: An Introduction to Nonlinear Analysis; Springer Science & Business Media: New York, NY, USA, 2013; Volume 140.
31. Abbas, M.; AlShahrani, M.; Ansari, Q.H.; Iyiola, O.S.; Shehu, Y. Iterative methods for solving proximal split minimization problems. Numer. Algorithms 2018, 78, 193–215.
32. Kaewyong, N.; Sitthithakerngkiet, K. A Self-Adaptive Algorithm for the Common Solution of the Split Minimization Problem and the Fixed Point Problem. Axioms 2021, 10, 109.
33. Moreau, J.J. Propriétés des applications «prox». Comptes Rendus Hebdomadaires des Séances de l'Académie des Sci. 1963, 256, 1069–1071.
34. Moreau, J.J. Proximité et dualité dans un espace hilbertien. Bull. de la Société Mathématique de Fr. 1965, 93, 273–299.
35. Combettes, P.L.; Wajs, V.R. Signal recovery by proximal forward-backward splitting. Multiscale Model. Simul. 2005, 4, 1168–1200.
36. Micchelli, C.A.; Shen, L.; Xu, Y. Proximity algorithms for image models: Denoising. Inverse Probl. 2011, 27, 045009.
37. Bauschke, H.H.; Combettes, P.L. Convex Analysis and Monotone Operator Theory in Hilbert Spaces; Springer: Berlin, Germany, 2011; Volume 408.
38. Xu, H.K. Iterative algorithms for nonlinear operators. J. Lond. Math. Soc. 2002, 66, 240–256.
39. Maingé, P.E. Strong convergence of projected subgradient methods for nonsmooth and nonstrictly convex minimization. Set-Valued Anal. 2008, 16, 899–912.
Figure 1. The initial functions x 1 and x 2 for each case in Example 1.
Figure 2. Convergence behavior of the sequence { x n ( t ) } with and without the extrapolation factor ( θ n ) for Case 1.
Figure 3. Convergence behavior of the sequence { x n ( t ) } with and without the extrapolation factor ( θ n ) for Case 2.
Figure 4. Convergence behavior of the sequence { x n ( t ) } with and without the extrapolation factor ( θ n ) for Case 3.
Figure 5. Cauchy error plots for the sequence { x n ( t ) } with and without the extrapolation factor ( θ n ) when ϱ n = 0.1 .
Figure 6. Cauchy error plots for the sequence { x n ( t ) } with and without the extrapolation factor ( θ n ) when ϱ n = 2 .
Figure 7. Cauchy error plots for the sequence { x n ( t ) } with and without the extrapolation factor ( θ n ) when ϱ n = 3.99 .
Figure 8. Original and noise signals in various dimensions N and sparsities k in Example 2.
Figure 9. Recovery result by using Algorithm 2 in Example 2.
Figure 10. Recovery result by using Abbas-1 (17) in Example 2.
Figure 11. Recovery result by using algorithm Abbas-2 (18) in Example 2.
Figure 12. Recovery result by using Nattakarn (19) in Example 2.
Table 1. Comparison of iteration counts and CPU times for the sequence { x n ( t ) } of Example 1 in each case.
Cases   | ϱ_n = 0.1, θ_n = 1/n² | ϱ_n = 0.1, θ_n = 0 | ϱ_n = 2, θ_n = 1/n² | ϱ_n = 2, θ_n = 0 | ϱ_n = 3.99, θ_n = 1/n² | ϱ_n = 3.99, θ_n = 0
        | Iter. No. / CPU Time (s) | Iter. No. / CPU Time (s) | Iter. No. / CPU Time (s) | Iter. No. / CPU Time (s) | Iter. No. / CPU Time (s) | Iter. No. / CPU Time (s)
Case 1  | 151 / 30.1342 | 161 / 31.4323 | 9 / 4.3982  | 10 / 4.4201 | 11 / 4.6591 | 28 / 8.9243
Case 2  | 150 / 30.0023 | 160 / 31.1148 | 10 / 4.4520 | 11 / 4.4982 | 8 / 4.3988  | 28 / 8.9564
Case 3  | 124 / 26.1125 | 150 / 29.8753 | 9 / 4.3318  | 11 / 4.4885 | 10 / 4.5302 | 28 / 8.7685
Table 2. SNR comparison results between Algorithm 2, Abbas-1 (17), Abbas-2 (18), and Nattakarn (19).
Cases   | N    | k  | Algorithm 2 | Abbas-1 (17) | Abbas-2 (18) | Nattakarn (19)
Case 1  | 500  | 12 | 43.1603     | 11.4874      | 11.5084      | 9.2145
Case 2  | 500  | 20 | 42.6409     | 14.3631      | 14.6021      | 12.4844
Case 3  | 1000 | 30 | 42.4700     | 14.3620      | 14.5633      | 10.4168
Case 4  | 1000 | 50 | 42.4067     | 12.9191      | 12.9341      | 12.4989
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
