Article

On an Extension of a Sparse Regularization Model

by
Abdellatif Moudafi
CNRS-L.I.S UMR 7296, Aix Marseille Université, Campus Universitaire de Saint-Jérôme, 13397 Marseille, France
Mathematics 2023, 11(20), 4285; https://doi.org/10.3390/math11204285
Submission received: 10 September 2023 / Revised: 9 October 2023 / Accepted: 12 October 2023 / Published: 14 October 2023
(This article belongs to the Special Issue New Trends in Nonlinear Analysis)

Abstract

In this paper, we first promote an interesting idea for identifying a local minimizer of a non-convex optimization problem with the global minimizer of a convex one. Secondly, we give an extension of a recently proposed sparse regularization model for inverting incomplete Fourier transforms. Thirdly, following the same lines, we develop a convergence-guaranteed, efficient iterative algorithm for solving the resulting nonsmooth and nonconvex optimization problem, here using tools from applied nonlinear analysis. These tools both simplify the proofs and allow us to make a connection with classical works in this field through a startling comment.

1. Introduction

Compressed sensing (see, for example, [1,2,3,4,5,6,7]) was used to invert incomplete Fourier transforms in the context of sparse signal/image processing, and the $\ell_1$-norm was applied as a regularization for reconstructing an object from randomly selected, incomplete frequency samples. Both the sparse regularization method and the compressed sensing method use the $\ell_1$-norm as a regularization to impose sparsity on the reconstructed signal under certain transforms. Because the models based on the $\ell_1$-norm are convex, they can be solved efficiently by available algorithms. Recently, the application of non-convex metrics as alternatives to the $\ell_1$-norm has been favored; see, for example, [8,9,10,11]. The main goal of this paper is to suggest the use of the Moreau envelope associated with the $\ell_0$-norm as a regularization. Note that the sparsity of a vector is originally measured by the $\ell_0$-norm of the vector, i.e., the number of its nonzero components. However, the $\ell_0$-norm is discontinuous at the origin, which is not appropriate from a computational point of view. The envelope of the $\ell_0$-norm is a Lipschitz surrogate of the $\ell_0$-norm, which is nonconvex. Thanks to [7], a local minimizer of a function that is the sum of a convex function and the $\ell_0$-norm can be identified with a global minimizer of a convex function, which permits the algorithmic machinery of convex optimization to be used. For inverting incomplete Fourier transforms, the use of the $\ell_0$-norm allows us to formulate a sparsity regularization model that can reduce artifacts and outliers in the reconstructed signal. It also allows us to design an efficient algorithm for the resulting nonconvex and nonsmooth optimization problem by means of a fixed-point formulation. Moreover, the link between this minimization problem and the related convex minimization problem will permit us to prove convergence of the proposed algorithm. Furthermore, a connection with proximal/projection gradient methods is also provided by appealing to two key formulas.

2. A Sparse Regularization Model

In order to focus on the essential information to share, we follow the same paper outline as in [6], and we assume the reader has some basic knowledge of monotone operator theory and convex analysis, as can be found, for example, in [12,13,14,15].
In what follows, we propose an extension of the sparse regularization model based on the Moreau envelope of the $\ell_0$-norm for inverting incomplete Fourier transforms considered in [6]. Likewise, relying on properties of the Moreau envelope of the $\ell_0$-norm, we obtain an equivalent formulation favorable for algorithmic development.
Given two Euclidean spaces of dimensions N and d, a nonempty, closed and convex subset $Q \subseteq \mathbb{R}^d$, and a matrix $K:\mathbb{R}^N \to \mathbb{R}^d$, we are interested in this work in the following problem:
$$\text{Find } y \in \mathbb{R}^N \ \text{such that}\ Ky \in Q.$$
This formalism is also at the heart of the modeling of many inverse problems, such as those arising in phase recovery and other real-world applications; see [16] and the references therein.
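To fix ideas with the motivating application (an illustration added here, not taken from the original text): for inverting incomplete Fourier transforms, one may take K to consist of d distinct rows of the N-point unitary DFT matrix and Q = {r}, where r collects the observed frequency samples; such a K automatically satisfies KK* = I, an identity used repeatedly below. A minimal NumPy sketch, with illustrative names such as make_partial_dft:

import numpy as np

def make_partial_dft(N, rows):
    # Unitary N-point DFT matrix restricted to the selected rows.
    F = np.fft.fft(np.eye(N)) / np.sqrt(N)
    return F[rows, :]                      # K has shape (d, N)

N, d = 64, 16
rng = np.random.default_rng(0)
rows = rng.choice(N, size=d, replace=False)
K = make_partial_dft(N, rows)

# Distinct rows of a unitary matrix are orthonormal, hence K K* = I_d.
assert np.allclose(K @ K.conj().T, np.eye(d), atol=1e-10)

# A sparse ground truth and the corresponding observed samples r = K y.
y_true = np.zeros(N)
y_true[[3, 17, 40]] = [1.0, -2.0, 0.5]
r = K @ y_true                             # Q = {r} in problem (1)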
Our aim is to describe the sparse regularization model for Equation (1) in order to obtain a sparse vector y. The $\ell_0$-norm, which counts the number of nonzero components of a vector $x\in\mathbb{R}^N$, is naturally used to measure its sparsity and is defined by
$$\|x\|_0 = \sum_{i=1}^{N} |x_i|^0,$$
with $|x_i|^0 = 1$ if $x_i \neq 0$ and $|x_i|^0 = 0$ if $x_i = 0$.
Now, let $P_Q$ be the projection from $\mathbb{R}^d$ onto the set Q. Since the constraint is equivalent to $Ky - P_Q(Ky) = 0$, we derive the following equivalent Lagrangian formulation
$$\min_{y\in\mathbb{R}^N}\ \frac12\|(I-P_Q)Ky\|^2 + \gamma\|y\|_0,$$
with $\gamma>0$ a Lagrange multiplier.
Both the non-convexity and the discontinuity of the $\ell_0$-norm at the origin lead to computational difficulties. To overcome these problems, we use a Lipschitz regularization of the $\ell_0$-norm by its Moreau envelope. According to [14,17], for a positive number $\lambda$, the Moreau envelope of $\|\cdot\|_0$ with index $\lambda$ at $x\in\mathbb{R}^N$ is defined by
$$\operatorname{env}_{\lambda\|\cdot\|_0}(x) = \min_{z\in\mathbb{R}^N}\Big\{\|z\|_0 + \frac{1}{2\lambda}\|x-z\|^2\Big\}.$$
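Although not spelled out above, the minimization defining the envelope decouples across components: for each i, the optimal $z_i$ is either $x_i$ (at cost 1) or 0 (at cost $x_i^2/(2\lambda)$). Hence, directly from definition (3),
$$\operatorname{env}_{\lambda\|\cdot\|_0}(x) = \sum_{i=1}^{N}\min\Big\{1,\ \frac{x_i^2}{2\lambda}\Big\}.$$
In particular, $\operatorname{env}_{\lambda\|\cdot\|_0}$ is Lipschitz continuous and coincides with $\|\cdot\|_0$ at every x whose nonzero entries satisfy $|x_i|\geq\sqrt{2\lambda}$.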
$\operatorname{env}_{\lambda\|\cdot\|_0}$ is continuous and locally convex near the origin. Moreover, since $\lim_{\lambda\to 0}\operatorname{env}_{\lambda\|\cdot\|_0} = \|\cdot\|_0$, $\operatorname{env}_{\lambda\|\cdot\|_0}$ is a good approximation of $\|\cdot\|_0$ when $\lambda$ is small enough. Therefore, with an appropriate choice of the parameter $\lambda$, $\operatorname{env}_{\lambda\|\cdot\|_0}$ can be used as a measure of sparsity and allows one to avoid the drawbacks of $\|\cdot\|_0$. For a fixed $Q\subseteq\mathbb{R}^d$ and for $y\in\mathbb{R}^N$, we let
$$H(y) = \frac12\|(I-P_Q)Ky\|^2 + \gamma\,\operatorname{env}_{\lambda\|\cdot\|_0}(y),$$
where $\gamma$ is a positive parameter.
To recover a sparse vector y from (1), we now propose the following sparse regularization model based on the Moreau envelope of the $\ell_0$-norm:
$$\bar y = \operatorname*{arg\,min}_{y\in\mathbb{R}^N}\ H(y).$$
Since $\operatorname{env}_{\lambda\|\cdot\|_0}$ is an approximation of $\|\cdot\|_0$, we expect that the proposed model enjoys nice properties and can be solved by efficient iterative algorithms.
As was pointed out earlier, $\operatorname{env}_{\lambda\|\cdot\|_0}$ is an excellent sparsity-promoting function. Therefore, we adopt $\operatorname{env}_{\lambda\|\cdot\|_0}(y)$ as the sparsity-promoting term in this paper.
We reformulate problem (5) to obtain a problem that is well suited and favorable for computation. Relying on definition (3) of $\operatorname{env}_{\lambda\|\cdot\|_0}$ in problem (5), we introduce the following function:
$$F(x,y) = \frac12\|(I-P_Q)Ky\|^2 + \frac{\gamma}{2\lambda}\|x-y\|^2 + \gamma\|x\|_0.$$
The non-convex function F ( x , y ) is a special case of those considered in [7]. We then consider the problem
$$(\bar x,\bar y) = \operatorname*{arg\,min}_{(x,y)\in\mathbb{R}^N\times\mathbb{R}^N}\ F(x,y).$$
Next, we prove that problems (5) and (7) are essentially equivalent. A global minimizer of either of these problems will also be called a solution of the problem. We first present a relation between H(y) and F(x,y). Recall that, for $\lambda>0$, the proximity operator of $\|\cdot\|_0$ at $z\in\mathbb{R}^N$ is defined by
$$\operatorname{prox}_{\lambda\|\cdot\|_0}(z) = \operatorname*{arg\,min}_{x\in\mathbb{R}^N}\Big\{\|x\|_0 + \frac{1}{2\lambda}\|x-z\|^2\Big\}.$$
Clearly, if $x \in \operatorname{prox}_{\lambda\|\cdot\|_0}(z)$, then we have that
$$\operatorname{env}_{\lambda\|\cdot\|_0}(z) = \|x\|_0 + \frac{1}{2\lambda}\|x-z\|^2.$$
By relation (9), we obtain
$$H(y) = F(x,y),\qquad \forall\, x \in \operatorname{prox}_{\lambda\|\cdot\|_0}(y)\ \text{and}\ y\in\mathbb{R}^N.$$
We now give a direct proof of [6], Proposition 1.
Proposition 1.
Let $\lambda>0$ and $\gamma>0$. A pair $(\bar x,\bar y)$ solves problem (7) if, and only if, $\bar y$ solves problem (5) with $\bar x$ verifying the following relation:
$$\bar x \in \operatorname{prox}_{\lambda\|\cdot\|_0}(\bar y).$$
Proof. 
This follows directly from the following successive equalities.
$$\begin{aligned}
\inf_{(x,y)\in\mathbb{R}^N\times\mathbb{R}^N} F(x,y) &= \inf_{(x,y)\in\mathbb{R}^N\times\mathbb{R}^N}\Big\{\tfrac12\|(I-P_Q)Ky\|^2 + \tfrac{\gamma}{2\lambda}\|x-y\|^2 + \gamma\|x\|_0\Big\}\\
&= \inf_{y\in\mathbb{R}^N}\Big\{\tfrac12\|(I-P_Q)Ky\|^2 + \gamma \inf_{x\in\mathbb{R}^N}\big\{\tfrac{1}{2\lambda}\|x-y\|^2 + \|x\|_0\big\}\Big\}\\
&= \inf_{y\in\mathbb{R}^N}\Big\{\tfrac12\|(I-P_Q)Ky\|^2 + \gamma\,\operatorname{env}_{\lambda\|\cdot\|_0}(y)\Big\}.
\end{aligned}$$
   □
Based on the fact that problems (5) and (7) are essentially equivalent, it suffices to establish that a local minimizer of the nonconvex problem (7) is a minimizer of a convex problem on a subdomain. To that end, we first present a convex optimization problem on a proper subdomain of $\mathbb{R}^N\times\mathbb{R}^N$ related to problem (7), and recall the notion of the support of a vector $x\in\mathbb{R}^N$, denoted by $\mathbb{N}(x)$, namely the index set on which the components of x are nonzero, that is, $\mathbb{N}(x)=\{i : x_i\neq 0\}$. Note that when the support of x in problem (7) is specified, the non-convex problem (7) reduces to a convex one. Based on this observation, we introduce a convex function by
$$G(x,y) = \frac12\|(I-P_Q)Ky\|^2 + \frac{\gamma}{2\lambda}\|x-y\|^2,\qquad (x,y)\in\mathbb{R}^N\times\mathbb{R}^N.$$
Clearly, $F(x,y)=G(x,y)+\gamma\|x\|_0$, and $G(x,y)$ is convex and differentiable on $\mathbb{R}^N\times\mathbb{R}^N$.
We now define, for a given index set $\mathbb{N}$, a subspace of $\mathbb{R}^N$ by setting
$$B_{\mathbb{N}} = \{x\in\mathbb{R}^N : \mathbb{N}(x)\subseteq \mathbb{N}\}.$$
$B_{\mathbb{N}}$ is convex and closed (see [6]), and we then consider the minimization problem on $B_{\mathbb{N}}\times\mathbb{R}^N$ defined by
$$\operatorname*{arg\,min}\big\{G(x,y) : (x,y)\in B_{\mathbb{N}}\times\mathbb{R}^N\big\}.$$
Problem (13) is convex, thanks to the convexity of both the function G and the set $B_{\mathbb{N}}\times\mathbb{R}^N$. Next, we will show the equivalence between the non-convex problem (7) and the convex problem (13) with an appropriate choice of the index set $\mathbb{N}$. To this end, we investigate properties of the support set of certain sequences in $\mathbb{R}^N$ and, for a given index set $\mathbb{N}$, we define an operator $P_{B_{\mathbb{N}}}:\mathbb{R}^N\to B_{\mathbb{N}}$ by
$$\big(P_{B_{\mathbb{N}}}(y)\big)_i = \begin{cases} y_i & \text{if } i\in\mathbb{N},\\ 0 & \text{otherwise.}\end{cases}$$
This operator is indeed the orthogonal projection from $\mathbb{R}^N$ onto $B_{\mathbb{N}}$ (see [6], Lemma 3). A convenient identification of the proximity operator of the $\ell_0$-norm with the projection $P_{B_{\mathbb{N}}}$, together with some properties of the sequence generated by $\operatorname{prox}_{\lambda\|\cdot\|_0}$ (in particular, the existence of an integer, denoted $\bar\kappa$, beyond which the support of the iterates is invariant), were developed in ([6], Lemmas 4–7 and Proposition 2), and they remain valid in our context.
Recall also the closed-form expression of the proximity operator of the $\ell_0$-norm: for all $z\in\mathbb{R}^N$ and every component index i,
$$\big(\operatorname{prox}_{\lambda\|\cdot\|_0}(z)\big)_i = \begin{cases} \{z_i\} & \text{if } |z_i| > \sqrt{2\lambda},\\ \{z_i,\,0\} & \text{if } |z_i| = \sqrt{2\lambda},\\ \{0\} & \text{otherwise.}\end{cases}$$
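For concreteness, the following NumPy sketch (illustrative code, not from the original paper) evaluates this proximity operator and the Moreau envelope above; at the threshold $|z_i|=\sqrt{2\lambda}$ the operator is set-valued, and the sketch arbitrarily keeps $z_i$:

import numpy as np

def prox_l0(z, lam):
    # Componentwise hard thresholding: keep z_i when |z_i| >= sqrt(2*lam),
    # set it to zero otherwise (the tie |z_i| = sqrt(2*lam) is broken by keeping z_i).
    out = z.copy()
    out[np.abs(z) < np.sqrt(2.0 * lam)] = 0.0
    return out

def env_l0(x, lam):
    # Moreau envelope of the l0-norm, evaluated componentwise.
    return float(np.sum(np.minimum(1.0, np.abs(x) ** 2 / (2.0 * lam))))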
A connection between problems (7) and (13) is given by the following result.
Theorem 1.
Let $\lambda,\gamma>0$ and $(\bar x,\bar y)\in\mathbb{R}^N\times\mathbb{R}^N$ be given. The pair $(\bar x,\bar y)$ is a local minimizer of the non-convex problem (7) if, and only if, $(\bar x,\bar y)$ is a minimizer of the convex problem (13) with $\mathbb{N}:=\mathbb{N}(\bar x)$.
Proof. 
This follows directly by applying ([6], Corollary 4.9) with $\varphi(y):=\frac12\|(I-P_Q)Ky\|^2$, $\mu:=\frac{\gamma}{2\lambda}$ and $D:=I$.   □
Following the same lines as ([6], Propositions 1 and 3), we can identify and connect global and local minimizers of (7) with those of (5).

3. A Fixed Point Approach

We will propose an iterative method for finding a local minimizer of (7) relying on a fixed-point formulation. For all the facts we will use, we refer to [14].
Let us begin with a characterization of the convex problem (13).
Proposition 2.
Suppose $\lambda,\gamma>0$. If $\mathbb{C}\subseteq\{1,2,\dots,N\}$, then problem (13) with $\mathbb{N}:=\mathbb{C}$ has a solution, and a pair $(\bar x,\bar y)\in\mathbb{R}^N\times\mathbb{R}^N$ solves (13) with $\mathbb{N}:=\mathbb{C}$ if, and only if,
$$\bar x = P_{B_{\mathbb{N}(\bar x)}}(\bar y) \quad\text{and}\quad \bar y = \bar x - \frac{\lambda}{\gamma}K^*(I-P_Q)K\bar y.$$
Proof. 
The existence of solutions follows from the closedness of $B_{\mathbb{N}}$ together with the coercivity of G with respect to the second variable. On the other hand, the optimality condition of the minimization problem
$$\min_{(x,y)\in B_{\mathbb{N}}\times\mathbb{R}^N}\ \frac12\|(I-P_Q)Ky\|^2 + \frac{\gamma}{2\lambda}\|x-y\|^2,$$
reads as
$$(0,0) \in \Big(K^*(I-P_Q)K\bar y - \frac{\gamma}{\lambda}(\bar x-\bar y),\ \frac{\gamma}{\lambda}(\bar x-\bar y) + N_{B_{\mathbb{N}}}(\bar x)\Big),$$
or, equivalently,
$$\bar y = \bar x - \frac{\lambda}{\gamma}K^*(I-P_Q)K\bar y \quad\text{and}\quad \bar y \in \bar x + \frac{\lambda}{\gamma}N_{B_{\mathbb{N}}}(\bar x) \iff \bar x = \Big(I + \frac{\lambda}{\gamma}N_{B_{\mathbb{N}}}\Big)^{-1}(\bar y) = P_{B_{\mathbb{N}}}(\bar y).$$
   □
Application of both Theorem 1 and Proposition 2 leads to the following characterization of a local minimizer of the problem (7).
Theorem 2.
Let $\lambda,\gamma>0$ be fixed. A pair $(\bar x,\bar y)\in\mathbb{R}^N\times\mathbb{R}^N$ is a local minimizer of (7) if, and only if, $(\bar x,\bar y)$ verifies (16).
Let us now give another characterization of a local minimizer of (7).
Theorem 3.
Let $\lambda,\gamma>0$ be fixed. If a pair $(\bar x,\bar y)\in\mathbb{R}^N\times\mathbb{R}^N$ is a local minimizer of (7), then $(\bar x,\bar y)$ satisfies the relations
$$\bar x \in \operatorname{prox}_{\lambda\|\cdot\|_0}(\bar y) \quad\text{and}\quad \bar y = \bar x - \frac{\lambda}{\gamma}K^*(I-P_Q)K\bar y.$$
Conversely, if a pair $(\bar x,\bar y)$ verifies (17), then $(\bar x,\bar y)$ is a local minimizer of (7).
Proof. 
The optimality condition of the minimization problem
$$\min_{(x,y)\in\mathbb{R}^N\times\mathbb{R}^N}\ \frac12\|(I-P_Q)Ky\|^2 + \frac{\gamma}{2\lambda}\|x-y\|^2 + \gamma\|x\|_0,$$
reads as
$$(0,0) \in \Big(K^*(I-P_Q)K\bar y - \frac{\gamma}{\lambda}(\bar x-\bar y),\ \frac{\gamma}{\lambda}(\bar x-\bar y) + \gamma\,\partial\|\cdot\|_0(\bar x)\Big),$$
or, equivalently,
$$\bar y = \bar x - \frac{\lambda}{\gamma}K^*(I-P_Q)K\bar y \quad\text{and}\quad \bar y \in \bar x + \lambda\,\partial\|\cdot\|_0(\bar x) \iff \bar x \in \big(I + \lambda\,\partial\|\cdot\|_0\big)^{-1}(\bar y) = \operatorname{prox}_{\lambda\|\cdot\|_0}(\bar y).$$
   □
In view of Theorem 3, we propose the following explicit–implicit algorithm for solving problem (7):
$$x^{k+1} \in \operatorname{prox}_{\lambda\|\cdot\|_0}(y^k) \quad\text{and}\quad y^{k+1} = x^{k+1} - \frac{\lambda}{\gamma}K^*(I-P_Q)Ky^{k+1}.$$
When the projection $P_Q$ can be computed efficiently, the updates of both variables x and y in Algorithm (18) can be implemented efficiently at each iteration.
Proposition 3.
If $\lambda,\gamma>0$, then the operator $I + \frac{\lambda}{\gamma}K^*(I-P_Q)K$ is invertible. Hence, the second part of (18) reads as
$$y^{k+1} := J_{\frac{\lambda}{\gamma}K^*(I-P_Q)K}(x^{k+1}) = \Big(I + \frac{\lambda}{\gamma}K^*(I-P_Q)K\Big)^{-1}(x^{k+1}).$$
Proof. 
It is well known that $K^*(I-P_Q)K$ is a maximal monotone operator (more precisely, it is an inverse strongly monotone operator; see the beginning of the proof of Theorem 4). Hence, $I + \frac{\lambda}{\gamma}K^*(I-P_Q)K$ is invertible by Minty's theorem, and its inverse, the so-called resolvent operator, is single-valued and firmly nonexpansive.   □
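The sketch below illustrates one possible implementation of iteration (18) for a general closed convex set Q given through its projection; it is a sketch under our own naming, not the paper's prescribed solver. The implicit y-update is the resolvent of Proposition 3, evaluated here by an inner Picard iteration; since $K^*(I-P_Q)K$ is 1-Lipschitz, this inner map is a contraction whenever $\lambda/\gamma<1$, which holds in particular under the stepsize condition used in Section 4.

import numpy as np

def prox_l0(z, lam):
    # Hard thresholding (a selection of the set-valued proximity operator of lam*||.||_0).
    out = z.copy()
    out[np.abs(z) < np.sqrt(2.0 * lam)] = 0.0
    return out

def resolvent_y_step(x_next, K, proj_Q, lam, gamma, n_inner=500, tol=1e-12):
    # Evaluate y = (I + (lam/gamma) K*(I - P_Q)K)^{-1}(x_next) by the Picard iteration
    # y <- x_next - (lam/gamma) K*(Ky - P_Q(Ky)); contracts when lam/gamma < 1.
    y = x_next.copy()
    for _ in range(n_inner):
        Ky = K @ y
        y_new = x_next - (lam / gamma) * (K.conj().T @ (Ky - proj_Q(Ky)))
        if np.linalg.norm(y_new - y) <= tol:
            break
        y = y_new
    return y_new

def algorithm_18(K, proj_Q, lam, gamma, y0, n_iter=100):
    # Explicit-implicit iteration (18): prox step in x, resolvent step in y.
    y = y0
    for _ in range(n_iter):
        x = prox_l0(y, lam)
        y = resolvent_y_step(x, K, proj_Q, lam, gamma)
    return x, y

For problem (1) with Q = {r}, one can simply pass proj_Q = lambda z: r.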

4. Convergence Analysis

In this section, we investigate the convergence behavior of Algorithm (18). As in [6], after a finite number of iterations, the support of the sparse variable $x^k$ defined by Algorithm (18) will remain unchanged; hence, solving the non-convex optimization problem (7) by Algorithm (18) reduces to solving a convex optimization problem on that support.
First, we consider a function E, which is closely related to both functions F and G, defined at $y\in\mathbb{R}^N$ by
$$E:\mathbb{R}^N\to\mathbb{R},\qquad E(y) := \frac{L}{2}\|(I-P_Q)Ky\|^2,$$
where $L := 1 + \frac{\lambda}{\gamma}$; the gradient $\nabla E(y)$ will be denoted, for short, by $E'(y)$.
Now, we prove a convergence result of Algorithm (18).
Theorem 4.
Let $(x^k,y^k)_{k\in\mathbb{N}}$ be a sequence generated by Algorithm (18) for problem (7) from an initial point $(x^0,y^0)\in\mathbb{R}^N\times\mathbb{R}^N$. If $\lambda,\gamma$ are positive numbers, then we have the following properties:
1. $F(x^{k+1},y^{k+1}) \le F(x^k,y^k)$ for all $k\geq 0$, and the sequence $(F(x^k,y^k))_{k\in\mathbb{N}}$ is convergent;
2. The sequence $(x^k,y^k)_{k\in\mathbb{N}}$ has finite length, namely
$$\sum_{k=0}^{+\infty}\|x^{k+1}-x^k\|^2 < +\infty \quad\text{and}\quad \sum_{k=0}^{+\infty}\|y^{k+1}-y^k\|^2 < +\infty;$$
3. The sequence $(x^k,y^k)_{k\in\mathbb{N}}$ is asymptotically regular, that is,
$$\lim_{k\to+\infty}\|x^{k+1}-x^k\| = 0 \quad\text{and}\quad \lim_{k\to+\infty}\|y^{k+1}-y^k\| = 0.$$
Proof. 
The function E is differentiable with an $L$-Lipschitz continuous gradient. Indeed,
$$\begin{aligned}
\langle K^*(I-P_Q)K(x) - K^*(I-P_Q)K(y),\ x-y\rangle &= \langle (I-P_Q)K(x) - (I-P_Q)K(y),\ Kx - Ky\rangle\\
&\geq \|(I-P_Q)K(x) - (I-P_Q)K(y)\|^2\\
&\geq \|K^*(I-P_Q)K(x) - K^*(I-P_Q)K(y)\|^2.
\end{aligned}$$
This shows that $K^*(I-P_Q)K$ is firmly nonexpansive, hence 1-Lipschitz continuous, and therefore $\nabla E = L\,K^*(I-P_Q)K$ is $L$-Lipschitz continuous.
On the other hand, since $y^{k+1} = x^{k+1} - \frac{\lambda}{\gamma}K^*(I-P_Q)Ky^{k+1}$, we can write
$$\frac{\gamma}{2\lambda}\|x^{k+1}-y^{k+1}\|^2 = \frac{\gamma}{2\lambda}\Big\|\frac{\lambda}{\gamma}K^*(I-P_Q)Ky^{k+1}\Big\|^2.$$
Taking into account the definition of the norm $\|\cdot\|$ and the fact that $KK^*=I$, we obtain that
$$\frac{\gamma}{2\lambda}\|x^{k+1}-y^{k+1}\|^2 = \frac{\lambda}{2\gamma}\|(I-P_Q)Ky^{k+1}\|^2.$$
Hence,
$$F(x^{k+1},y^{k+1}) = \frac12\Big(1+\frac{\lambda}{\gamma}\Big)\|(I-P_Q)Ky^{k+1}\|^2 + \gamma\|x^{k+1}\|_0 = E(y^{k+1}) + \gamma\|x^{k+1}\|_0.$$
Using the celebrated descent lemma (see, for example, [12]), we can write
$$F(x^{k+1},y^{k+1}) \le E(y^k) + \langle E'(y^k),\, y^{k+1}-y^k\rangle + \frac{L}{2}\|y^{k+1}-y^k\|^2 + \gamma\|x^{k+1}\|_0.$$
Now, we have
$$\begin{aligned}
\frac{\gamma}{\lambda+\gamma}\langle E'(y^k),\, x^{k+1}-x^k\rangle
&= \Big\langle K^*(I-P_Q)Ky^k,\ y^{k+1}-y^k + \frac{\lambda}{\gamma}\big(K^*(I-P_Q)Ky^{k+1} - K^*(I-P_Q)Ky^k\big)\Big\rangle\\
&= \langle K^*(I-P_Q)Ky^k,\, y^{k+1}-y^k\rangle + \frac{\lambda}{\gamma}\langle K^*(I-P_Q)Ky^k,\, y^{k+1}-y^k\rangle\\
&\qquad - \frac{\lambda}{\gamma}\langle (I-P_Q)Ky^k,\, P_QKy^{k+1} - P_QKy^k\rangle\\
&\geq \Big(1+\frac{\lambda}{\gamma}\Big)\langle K^*(I-P_Q)Ky^k,\, y^{k+1}-y^k\rangle = \langle E'(y^k),\, y^{k+1}-y^k\rangle.
\end{aligned}$$
The characterization of the orthogonal projection, namely
$$\langle y - P_Qy,\ z - P_Qy\rangle \le 0 \qquad \forall\, z\in Q,$$
assures that
$$\langle (I-P_Q)Ky^k,\ P_QKy^{k+1} - P_QKy^k\rangle \le 0,$$
and thus
$$\langle E'(y^k),\, y^{k+1}-y^k\rangle \le \frac{\gamma}{\lambda+\gamma}\langle E'(y^k),\, x^{k+1}-x^k\rangle.$$
Now, by using the second equation of (18) and by taking into account the fact that $I-P_Q$ is firmly nonexpansive and that $KK^*=I$, we can write
$$\begin{aligned}
\|x^{k+1}-x^k\|^2 &= \Big\|y^{k+1}-y^k + \frac{\lambda}{\gamma}\big(K^*(I-P_Q)Ky^{k+1} - K^*(I-P_Q)Ky^k\big)\Big\|^2\\
&= \|y^{k+1}-y^k\|^2 + \frac{2\lambda}{\gamma}\big\langle K^*(I-P_Q)Ky^{k+1} - K^*(I-P_Q)Ky^k,\ y^{k+1}-y^k\big\rangle\\
&\qquad + \frac{\lambda^2}{\gamma^2}\big\|K^*(I-P_Q)Ky^{k+1} - K^*(I-P_Q)Ky^k\big\|^2\\
&\geq \|y^{k+1}-y^k\|^2 + \frac{2\lambda}{\gamma}\big\|(I-P_Q)Ky^{k+1} - (I-P_Q)Ky^k\big\|^2 + \frac{\lambda^2}{\gamma^2}\big\|(I-P_Q)Ky^{k+1} - (I-P_Q)Ky^k\big\|^2.
\end{aligned}$$
This yields
$$\|x^{k+1}-x^k\|^2 \geq \|y^{k+1}-y^k\|^2 + \frac{\lambda}{\gamma}\Big(2+\frac{\lambda}{\gamma}\Big)\big\|(I-P_Q)Ky^{k+1} - (I-P_Q)Ky^k\big\|^2,$$
hence
$$\|y^{k+1}-y^k\|^2 \le \|x^{k+1}-x^k\|^2.$$
Taking into account the fact that $0 < \frac{\lambda}{\gamma} < \frac{\sqrt{5}-1}{2}$, we have $L = 1+\frac{\lambda}{\gamma} < \frac{\gamma}{\lambda}$, which, combined with the last inequality, ensures that
$$L\,\|y^{k+1}-y^k\|^2 \le \frac{\gamma}{\lambda}\|x^{k+1}-x^k\|^2.$$
Combining (22), (23) and (25) yields
$$F(x^{k+1},y^{k+1}) \le E(y^k) + \frac{\gamma}{\gamma+\lambda}\langle E'(y^k),\, x^{k+1}-x^k\rangle + \frac{\gamma}{2\lambda}\|x^{k+1}-x^k\|^2 + \gamma\|x^{k+1}\|_0.$$
To prove that the sequence $(F(x^k,y^k))_{k\in\mathbb{N}}$ is non-increasing, we first notice that the second part of (18) can be read as $y^k = x^k - \frac{\lambda}{\lambda+\gamma}E'(y^k)$. Now, by applying the definition of the proximal operator of $\lambda\|\cdot\|_0$ at $y^k$, we have
$$x^{k+1} \in \operatorname*{arg\,min}_{x\in\mathbb{R}^N}\Big\{\|x\|_0 + \frac{1}{2\lambda}\Big\|x - x^k + \frac{\lambda}{\lambda+\gamma}E'(y^k)\Big\|^2\Big\},$$
or, equivalently,
$$x^{k+1} \in \operatorname*{arg\,min}_{x\in\mathbb{R}^N}\Big\{\|x\|_0 + \frac{1}{2\lambda}\|x-x^k\|^2 + \frac{1}{\lambda+\gamma}\langle E'(y^k),\, x-x^k\rangle\Big\},$$
which ensures that
$$\|x^{k+1}\|_0 + \frac{1}{2\lambda}\|x^{k+1}-x^k\|^2 + \frac{1}{\lambda+\gamma}\langle E'(y^k),\, x^{k+1}-x^k\rangle \le \|x^k\|_0.$$
Finally, from (26) and (28), we deduce that
$$F(x^{k+1},y^{k+1}) \le E(y^k) + \gamma\|x^k\|_0 = F(x^k,y^k).$$
It follows from (27) that
$$\gamma\|x^{k+1}\|_0 + \frac{\gamma}{2\lambda}\|x^{k+1}-x^k\|^2 + \frac{\gamma}{\lambda+\gamma}\langle E'(y^k),\, x^{k+1}-x^k\rangle \le F(x^k,y^k).$$
In addition, from (22), we obtain that
$$E(y^k) + \langle E'(y^k),\, y^{k+1}-y^k\rangle + \frac{L}{2}\|y^{k+1}-y^k\|^2 + \gamma\|x^{k+1}\|_0 \geq F(x^{k+1},y^{k+1}).$$
Summing the above two inequalities and using (23), we obtain
$$\frac{\gamma}{2\lambda}\|x^{k+1}-x^k\|^2 - \frac{L}{2}\|y^{k+1}-y^k\|^2 \le F(x^k,y^k) - F(x^{k+1},y^{k+1}).$$
This, combined with (24), yields
$$\Big(\frac{\gamma}{2\lambda} - \frac{L}{2}\Big)\Big[\|y^{k+1}-y^k\|^2 + \frac{\lambda}{\gamma}\Big(2+\frac{\lambda}{\gamma}\Big)\big\|(I-P_Q)Ky^{k+1} - (I-P_Q)Ky^k\big\|^2\Big] \le F(x^k,y^k) - F(x^{k+1},y^{k+1}).$$
By summing the last inequality and by taking into account the convergence of the sequence $(F(x^k,y^k))_{k\in\mathbb{N}}$ together with the fact that $\frac{\gamma}{2\lambda} - \frac{L}{2} > 0$, we first deduce that
$$\sum_{k=0}^{\infty}\|y^{k+1}-y^k\|^2 < +\infty \quad\text{and}\quad \sum_{k=0}^{\infty}\big\|(I-P_Q)Ky^{k+1}-(I-P_Q)Ky^k\big\|^2 < +\infty.$$
The property $\sum_{k=0}^{\infty}\|x^{k+1}-x^k\|^2 < +\infty$ then follows from relation (24). The latter properties clearly ensure the asymptotic regularity of the sequence $(x^k,y^k)_{k\in\mathbb{N}}$.   □
As in ([6], Lemma 12), in our setting the invariant support set of the sequence defined by Algorithm (18) also exists for the nonconvex problem (7). Now, let us prove, more directly than in [6], the convergence of the sequence $(x^k,y^k)_{k\in\mathbb{N}}$ generated by (18), relying on averaged operators and the Krasnoselskii–Mann theorem. Averaged mappings are convenient for studying the convergence of sequences generated by iterative algorithms for fixed-point problems, thanks to the following celebrated theorem; see, for example, [12,13].
Theorem 5
(Krasnoselskii–Mann theorem). Let $M:\mathbb{R}^N\to\mathbb{R}^N$ be averaged and assume $\operatorname{Fix} M \neq \emptyset$. Then, for any starting point $x^0$, the sequence $\{M^kx^0\}$ converges weakly to a fixed point of M.
Recall also the definitions of nonexpansive and averaged operators, which appear naturally when using iterative algorithms for solving fixed-point problems and which are commonly encountered in the literature; see, for instance, [13]. A mapping $T:\mathbb{R}^N\to\mathbb{R}^N$ is said to be nonexpansive if, for all $x,y\in\mathbb{R}^N$, $\|Tx-Ty\|\le\|x-y\|$; it is firmly nonexpansive if $2T-I$ is nonexpansive, or equivalently, if $\langle Tx-Ty,\,x-y\rangle \geq \|Tx-Ty\|^2$ for all $x,y\in\mathbb{R}^N$. It is well known that T is firmly nonexpansive if and only if T can be written as $T=\frac12(I+S)$, where $S:\mathbb{R}^N\to\mathbb{R}^N$ is nonexpansive. Recall also that a mapping $T:\mathbb{R}^N\to\mathbb{R}^N$ is said to be averaged if it can be expressed as $T=(1-\alpha)I+\alpha S$, with $S:\mathbb{R}^N\to\mathbb{R}^N$ a nonexpansive mapping and $\alpha\in(0,1)$. Thus, firmly nonexpansive mappings (e.g., projections onto nonempty closed convex subsets and resolvents of maximal monotone operators) are averaged mappings.
Mimicking the analysis in [6], $(\bar x,\bar y)$ is a solution of (13) if, and only if, $(\bar x,\bar y)$ satisfies (16), and thus $\bar x$ verifies
$$\bar x = P_{B_{\mathbb{N}}}\,J_{\frac{\lambda}{\gamma}K^*(I-P_Q)K}(\bar x).$$
Similarly, using the same arguments, we derive that the sequence $(x^k,y^k)$ generated by (18) satisfies
$$x^{k+1} = P_{B_{\mathbb{N}}}(y^k) \qquad \forall\, k\geq\bar\kappa,$$
and thus, for all $k\geq\bar\kappa$, it satisfies
$$x^{k+1} = P_{B_{\mathbb{N}}}(y^k) \quad\text{and}\quad y^{k+1} = x^{k+1} - \frac{\lambda}{\gamma}K^*(I-P_Q)Ky^{k+1}.$$
Consequently,
$$\bar x = P_{B_{\mathbb{N}}}\,J_{\frac{\lambda}{\gamma}K^*(I-P_Q)K}(\bar x),$$
and
$$x^{k+1} = P_{B_{\mathbb{N}}}\,J_{\frac{\lambda}{\gamma}K^*(I-P_Q)K}(x^k).$$
It is well known that firmly nonexpansive mappings (including orthogonal projections onto nonempty closed convex subsets and resolvent mappings of maximal monotone operators) are averaged operators. In view of the fact that the composition of finitely many averaged mappings is averaged (see, for instance, [12]), and by applying the Krasnoselskii–Mann theorem, we deduce the convergence of the sequence $(x^k)_{k\geq\bar\kappa}$ to a solution $\bar x$ of (29). Moreover, $\bar x\in B_{\mathbb{N}}$, since $(x^k)_{k\geq\bar\kappa}\subseteq B_{\mathbb{N}}$, which is closed, and we also have $\mathbb{N} = \mathbb{N}(x^{\bar\kappa}) = \mathbb{N}(\bar x)$ in view of ([6], Lemma 7). In addition, since the resolvent of a maximal monotone operator is nonexpansive, for all $m>n>\bar\kappa$, we can write
$$\|y^m - y^n\| = \big\|J_{\frac{\lambda}{\gamma}K^*(I-P_Q)K}(x^m) - J_{\frac{\lambda}{\gamma}K^*(I-P_Q)K}(x^n)\big\| \le \|x^m - x^n\|.$$
The sequence $(x^k)_{k\geq\bar\kappa}$ being convergent, it is a Cauchy sequence, and thus so is $(y^k)_{k\geq\bar\kappa}$, which in turn converges to some limit $\bar y$. Now, by passing to the limit in $y^{k+1} = J_{\frac{\lambda}{\gamma}K^*(I-P_Q)K}(x^{k+1})$ and taking into account the continuity of the resolvent, we obtain $\bar y = J_{\frac{\lambda}{\gamma}K^*(I-P_Q)K}(\bar x)$; hence the pair $(\bar x,\bar y)$ satisfies (16), which ensures that $(\bar x,\bar y)$ is a solution of (13) with $\mathbb{N} = \mathbb{N}(\bar x)$.
The above discussion leads to the following theorem.
Theorem 6.
Let $(x^k,y^k)\subseteq\mathbb{R}^N\times\mathbb{R}^N$ be a sequence generated by (18) from an initial point $(x^0,y^0)$. If $\lambda,\gamma>0$ are chosen such that $0<\frac{\lambda}{\gamma}<\frac{\sqrt{5}-1}{2}$, then $(x^k,y^k)$ converges to a local minimizer $(\bar x,\bar y)$ of (7). Moreover, $(F(x^k,y^k))_{k\in\mathbb{N}}$ is a convergent sequence and if, in addition, $|\bar y_j|\geq\sqrt{2\beta}$ for all $j\in\mathbb{N}(\bar y)$, then $\bar y$ is a local minimizer of (5).
Finally, let us point out that, since K K * = I , the fixed point iteration in (18) turns into
$$y^{k+1} = x^{k+1} - \frac{\lambda}{2\gamma}K^*(I-P_Q)(Kx^{k+1}).$$
In particular, when Q = { r } , this reduces to
$$y^{k+1} = x^{k+1} - \frac{\lambda}{2\gamma}K^*(Kx^{k+1} - r).$$
Indeed, as K K * = I ,
$$\begin{aligned}
J_{\frac{\lambda}{\gamma}K^*(I-P_Q)K}(x^{k+1}) &= x^{k+1} - \frac{\lambda}{\gamma}K^*(I-P_Q)_1(Kx^{k+1}) = x^{k+1} - \frac{\lambda}{\gamma}K^*\big((\partial i_Q)_1\big)_1(Kx^{k+1})\\
&= x^{k+1} - \frac{\lambda}{\gamma}K^*(\partial i_Q)_2(Kx^{k+1}) = x^{k+1} - \frac{\lambda}{2\gamma}K^*(I-P_Q)(Kx^{k+1}).
\end{aligned}$$
Here, we used both the fact that, for any maximal monotone operator A and $\nu>0$,
$$J_{\nu K^*AK}(x) = x - \nu K^*A_1(Kx),$$
$A_1 = I - J_{A}$ being the Yosida approximation of A with parameter 1, and the fact that, for all $\nu,\mu>0$,
$$(A_{\nu})_{\mu} = A_{\nu+\mu}.$$
We also used the fact that the Yosida approximation, with parameter 2, of the subdifferential of the indicator function of Q (the latter being nothing but the normal cone to Q) is exactly $\frac{I-P_Q}{2}$.
Therefore, the proposed algorithms are nothing other than a proximal gradient and a projection gradient algorithm, respectively. Nevertheless, the crucial idea (namely, that the support of the sparse main variable generated by the algorithm remains unchanged after a finite number of iterations), which permits us to identify a local minimizer of the nonconvex optimization problem with a global minimizer of a convex one, deserves great interest.
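As a minimal sketch of this proximal-gradient reading in the case Q = {r} (again with illustrative names of our own, assuming NumPy), where the stepsize tau is taken to be $\lambda/(2\gamma)$, the value obtained in the computation above:

import numpy as np

def prox_l0(z, lam):
    # Hard thresholding (a selection of the set-valued proximity operator of lam*||.||_0).
    out = z.copy()
    out[np.abs(z) < np.sqrt(2.0 * lam)] = 0.0
    return out

def explicit_scheme(K, r, lam, gamma, y0, n_iter=200):
    # Hard-thresholding step in x followed by an explicit gradient-type step in y:
    # the proximal-gradient reading of the method when Q = {r} and K K* = I.
    tau = lam / (2.0 * gamma)   # coefficient derived in the text above
    y = y0
    for _ in range(n_iter):
        x = prox_l0(y, lam)
        y = x - tau * (K.conj().T @ (K @ x - r))
    return x, y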
Clearly, the analysis developed here can be extended to split feasibility problems, namely
$$\text{Find } y\in C \ \text{such that}\ Ky\in Q,$$
with $C\subseteq\mathbb{R}^N$ and $Q\subseteq\mathbb{R}^d$ two closed, convex subsets and $K:\mathbb{R}^N\to\mathbb{R}^d$ a given matrix. Since the sum of a maximal monotone operator (the normal cone to C) and a monotone Lipschitz one (the operator $\frac{\lambda}{2\gamma}K^*(I-P_Q)K$) is still maximal monotone [14], this can be naturally extended to the following general minimization problem by means of its regularized version, i.e.,
$$\min_{y\in\mathbb{R}^N}\ f(y) + g(Ky) + \gamma\|y\|_0 \qquad\text{through}\qquad \min_{y\in\mathbb{R}^N}\ f(y) + g_{\nu}(Ky) + \gamma\,(\|\cdot\|_0)_{\lambda}(y),$$
with f and g two proper, convex, lower semicontinuous functions defined on $\mathbb{R}^N$ and $\mathbb{R}^d$, respectively, $\nu,\lambda>0$, and $g_\nu$, $(\|\cdot\|_0)_\lambda$ denoting the corresponding Moreau envelopes. The proximity operator of g and the subdifferential of f will play the roles of the projection onto the set Q and of the normal cone to C, respectively, since they share the same properties.

5. Conclusions

Based on an interesting idea developed in [6], which leads to identifying a local minimizer of a nonconvex minimization problem with a global minimizer of a convex optimization one, we provide an extension of the sparse regularization model for inverting incomplete Fourier transforms. Next, we propose an efficient, convergence-guaranteed iterative algorithm for solving the resulting non-convex and non-smooth optimization problem. The fixed-point approach is preferred, as it enables us to develop efficient algorithms with guaranteed convergence. Combined with tools from applied nonlinear analysis, this leads both to a simplification of the proofs and to a connection with classical works such as split convex feasibility problems. With this generalization, the proposed approach may be applicable to other real-world applications, such as the inverse problem of intensity-modulated radiation therapy (IMRT) treatment planning [18]. It can be applied equally to cooperative wireless sensor network positioning [19] or adaptive image denoising [20]. Thus, the proposed method is expected to work efficiently for problems that can be reformulated as sparse optimization and convex feasibility problems. We will consider numerical applications, as well as other potential extensions, for example in a non-convex framework, as a future project. These, in turn, will pave the way for other real-world applications, as is the case, for example, in [21].

Funding

This research received no external funding.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Acknowledgments

We would like to thank the anonymous reviewers for their meticulous reading of the paper and for their pertinent comments.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Byrne, C. A unified treatment of some iterative algorithms in signal processing and image reconstruction. Inverse Probl. 2004, 20, 103–120. [Google Scholar] [CrossRef]
  2. Candes, E.J.; Romberg, J.; Tao, T. Stable signal recovery from incomplete and inaccurate measurements. Commun. Pure Appl. Math. 2006, 59, 1207–1223. [Google Scholar] [CrossRef]
  3. Donoho, D.L. Compressed sensing. IEEE Trans. Inf. Theory 2006, 52, 1289–1306. [Google Scholar] [CrossRef]
  4. Chen, F.; Shen, L.; Xu, Y.; Zeng, X. The Moreau envelope approach for the L1/TV image denoising model. Inverse Probl. Imaging 2014, 8, 53–77. [Google Scholar]
  5. Wu, T.; Shen, L.; Xu, Y. Fixed-point proximity algorithms solving an incomplete Fourier transform model for seismic wavefield modeling. J. Comput. Appl. Math. 2021, 385, 113208. [Google Scholar] [CrossRef]
  6. Wu, T.; Xu, Y. Inverting Incomplete Fourier Transforms by a Sparse Regularization Model and Applications in Seismic Wavefield Modeling. J. Sci. Comput. 2022, 92, 48. [Google Scholar] [CrossRef]
  7. Xu, Y. Sparse regularization with the l0-norm. Anal. Appl. 2023, 21, 901–929. [Google Scholar] [CrossRef]
  8. Cao, W.; Xu, H.K. ℓ1 − ℓp DC regularization method for compressed sensing. J. Nonlinear Convex Anal. 2020, 9, 1889–1901. [Google Scholar]
  9. Lou, Y.; Yan, M. Fast ℓ1 − ℓ2 Minimization via a proximal operator. J. Sci. Comput. 2017, 74, 767–785. [Google Scholar] [CrossRef]
  10. Moudafi, A.; Gibali, A. ℓ1 − ℓ2 regularization of split feasibility problems. Numer. Algorithms 2018, 78, 739–757. [Google Scholar] [CrossRef]
  11. Moudafi, A.; Xu, H.-K. A DC Regularization of Split Minimization Problems. Appl. Anal. Optim. 2018, 2, 285–297. [Google Scholar]
  12. Bauschke, H.H.; Combettes, P.L. Convex Analysis and Monotone Operator Theory in Hilbert Spaces, 2nd ed.; Springer: Berlin/Heidelberg, Germany, 2017. [Google Scholar]
  13. Cegielski, A. Iterative Methods for Fixed Point Problems in Hilbert Spaces. In Lecture Notes in Mathematics; LNM; Springer: Berlin/Heidelberg, Germany, 2013; Volume 2057. [Google Scholar]
  14. Rockafellar, R.T.; Wets, R.J.-B. Variational Analysis; Springer: Berlin/Heidelberg, Germany, 1998. [Google Scholar]
  15. Yamada, I.; Yukawa, M.; Yamagishi, M. Minimizing the Moreau Envelope of Nonsmooth Convex Functions over the Fixed Point Set of Certain Quasi-Nonexpansive Mappings. In Fixed-Point Algorithms for Inverse Problems in Science and Engineering; Springer: New York, NY, USA, 2011; pp. 345–390. [Google Scholar]
  16. Censor, Y.; Bortfeld, T.; Martin, B.; Trofimov, A. A unified approach for inversion problems in intensity-modulated radiation therapy. Phys. Med. Biol. 2006, 51, 2353–2365. [Google Scholar] [CrossRef] [PubMed]
  17. Moreau, J.-J. Fonctions convexes duales et points proximaux dans un espace hilbertien. Comptes Rendus Acad. Sci. Paris Sér. A Math. 1962, 255, 2897–2899. [Google Scholar]
  18. Censor, Y.; Elfving, T.; Kopf, N.; Bortfeld, T. The multiple-sets split feasibility problem and its applications for inverse problems. Inverse Probl. 2005, 21, 2071–2084. [Google Scholar] [CrossRef]
  19. Gholami, M.R.; Tetruashvili, L.; Strom, E.G.; Censor, Y. Cooperative Wireless Sensor Network Positioning via Implicit Convex Feasibility. IEEE Trans. Signal Process. 2013, 61, 5830–5840. [Google Scholar] [CrossRef]
  20. Censor, Y.; Gibali, A.; Lenzen, F.; Schnorr, C. The Implicit Convex Feasibility Problem and Its Application to Adaptive Image Denoising. J. Comp. Math. 2016, 34, 610–625. [Google Scholar] [CrossRef]
  21. Brooke, M.; Censor, Y.; Gibali, A. Dynamic string-averaging CQ-methods for the split feasibility problem with percentage violation constraints arising in radiation therapy treatment planning. Int. Trans. Oper. Res. 2020, 30, 181–205. [Google Scholar] [CrossRef] [PubMed]