Article

Convergence of Parameterized Variable Metric Three-Operator Splitting with Deviations for Solving Monotone Inclusions

Yanni Guo * and Yinan Yan
College of Science, Civil Aviation University of China, Tianjin 300300, China
* Author to whom correspondence should be addressed.
Axioms 2023, 12(6), 508; https://doi.org/10.3390/axioms12060508
Submission received: 17 April 2023 / Revised: 19 May 2023 / Accepted: 22 May 2023 / Published: 24 May 2023
(This article belongs to the Special Issue Numerical Analysis and Optimization)

Abstract

In this paper, we propose a parameterized variable metric three-operator algorithm for finding a zero of the sum of three monotone operators in a real Hilbert space. Under some appropriate conditions, we prove the strong convergence of the proposed algorithm. Furthermore, we propose a parameterized variable metric three-operator algorithm with a multi-step inertial term and prove its strong convergence. Finally, we illustrate the effectiveness of the proposed algorithm with numerical examples.

1. Introduction

Let $H$ be a real Hilbert space with inner product $\langle\cdot,\cdot\rangle$ and induced norm $\|\cdot\|$. We consider the following monotone inclusion problem for the sum of three operators: find $x \in H$ such that
\[
0 \in Ax + Bx + Cx, \tag{1}
\]
where $A, B\colon H \to 2^H$ are maximally monotone operators and $C\colon H \to H$ is a $\beta$-cocoercive operator, $\beta > 0$. Numerous algorithms have been developed for solving problem (1) when $B = 0$, owing to the wide applications of this problem in compressed sensing, image recovery, sparse optimization, machine learning, etc.; see [1,2,3,4,5,6,7], to name a few. Although in theory these algorithms can be used to solve problem (1) by bundling $A + B + C$ as $T + C$ with $T = A + B$, the resolvent $(\mathrm{Id} + T)^{-1}$ is hard to compute in practice. For the past few years, problem (1) has therefore received a lot of attention, and several algorithms have been constructed for solving it.
In 2017, Davis and Yin [8] proposed the following three-operator algorithm:
\[
\begin{cases}
z_k = J_{\gamma A}(x_k),\\
y_k = J_{\gamma B}(2z_k - x_k - \gamma Cz_k),\\
x_{k+1} = x_k + \lambda_k(y_k - z_k).
\end{cases} \tag{2}
\]
Here $J_{\gamma A} = (\mathrm{Id} + \gamma A)^{-1}$, $J_{\gamma B} = (\mathrm{Id} + \gamma B)^{-1}$, $\gamma \in (0, 2\beta)$ and $\lambda_k \in \big(0, \frac{4\beta-\gamma}{2\beta}\big)$. Expression (2) also has the form
\[
x_{k+1} = (1 - \lambda_k)x_k + \lambda_k Tx_k, \tag{3}
\]
where the averaged operator $T$ is defined by
\[
T = J_{\gamma B}\circ(2J_{\gamma A} - \mathrm{Id} - \gamma C\circ J_{\gamma A}) + \mathrm{Id} - J_{\gamma A}. \tag{4}
\]
Utilizing the averagedness of $T$, the authors of [8] proved that the sequence $\{x_k\}$ generated by (2) converges weakly to a fixed point $x^*$ of $T$ under some suitable conditions. In turn, $J_{\gamma A}(x^*)$ solves problem (1).
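To make the scheme concrete, here is a minimal NumPy sketch of the Davis–Yin iteration (2); the callables prox_gA, prox_gB and C are hypothetical stand-ins for the resolvents $J_{\gamma A}$, $J_{\gamma B}$ and the cocoercive operator $C$, which must be supplied for a given problem:

```python
import numpy as np

def davis_yin(x0, prox_gA, prox_gB, C, gamma, lam, tol=1e-8, max_iter=10000):
    """A sketch of the Davis-Yin three-operator iteration (2)."""
    x = x0.copy()
    for _ in range(max_iter):
        z = prox_gA(x)                         # z_k = J_{gamma A}(x_k)
        y = prox_gB(2*z - x - gamma*C(z))      # y_k = J_{gamma B}(2 z_k - x_k - gamma C z_k)
        x_next = x + lam*(y - z)               # x_{k+1} = x_k + lambda_k (y_k - z_k)
        if np.linalg.norm(x_next - x) < tol:   # stop once the iterates stagnate
            x = x_next
            break
        x = x_next
    return prox_gA(x)                          # J_{gamma A}(x*) solves problem (1)
```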
Soon after, Cui, Tang and Yang [9] put forward the inertial version of algorithm (2),
\[
\begin{cases}
w_k = x_k + \theta_k(x_k - x_{k-1}),\\
z_k = J_{\gamma A}(w_k),\\
y_k = J_{\gamma B}(2z_k - w_k - \gamma Cz_k),\\
x_{k+1} = w_k + \lambda_k(y_k - z_k),
\end{cases} \tag{5}
\]
to improve the convergence speed of algorithm (2). For problem (1) in which $C = 0$ and $B$ is monotone and $L$-Lipschitz continuous, Malitsky and Tam [10] proposed the forward-reflected-backward algorithm, which consists of iterating
\[
x_{k+1} = J_{\gamma A}(x_k - 2\gamma Bx_k + \gamma Bx_{k-1} - \gamma Cx_k). \tag{6}
\]
Under some appropriate conditions, they proved the convergence of (6). Furthermore, Zong et al. [11] introduced the inertial semi-forward-reflected-backward splitting algorithm to address (1):
\[
x_{k+1} = J_{\gamma A}\big(x_k - \gamma_k Bx_k - \gamma_{k-1}(Bx_k - Bx_{k-1}) + \alpha(x_k - x_{k-1}) - \gamma_k Cx_k\big), \tag{7}
\]
and proved the weak convergence of the algorithm. Zhang and Chen [12] proposed the parameterized three-operator splitting algorithm:
\[
\begin{cases}
z_k = J_{\gamma A}(x_k),\\
y_k = J_{\gamma B}\big([2 - \gamma(2-\alpha)]z_k - x_k - \gamma Cz_k\big),\\
x_{k+1} = x_k + \lambda_k(y_k - z_k),
\end{cases} \tag{8}
\]
which generalizes the parameterized Douglas–Rachford algorithm [13]. They proved that the sequence generated by the regularization of (8) converges to the least-norm solution of problem (1). Other algorithms for solving (1) have also been investigated in recent years; see [14,15,16].
In order to speed up the convergence of iterative algorithms, scholars often use acceleration techniques. The inertial extrapolation technique and the variable metric technique are two popular acceleration methods. The inertial extrapolation technique, or heavy ball method [17], has been widely studied in past decades; see [18,19,20,21,22,23] and the references therein. The variable metric technique improves the convergence speed of the corresponding algorithm by changing the step size in each iteration, and it has been widely used in various optimization problems over the past decades. To solve the monotone inclusion problem of the sum of two operators, Chen and Rockafellar [24] first combined the variable metric strategy with the forward-backward algorithm in 1997. For the same problem, Combettes [25] proposed the following variable metric forward-backward splitting algorithm,
\[
\begin{cases}
y_k = x_k - \gamma_k U_k(Bx_k + b_k),\\
x_{k+1} = x_k + \lambda_k\big(J_{\gamma_k U_k A}(y_k) + a_k - x_k\big),
\end{cases} \tag{9}
\]
where $\{a_k\}$ and $\{b_k\}$ are absolutely summable sequences in $H$ and $\{U_k\}$ is a sequence of bounded linear self-adjoint positive operators on $H$, and proved its strong convergence. Bonettini et al. [26] proposed an inexact inertial variable metric proximal gradient algorithm. In [27], the author studied the variable metric forward-backward splitting algorithm for convex minimization problems without the Lipschitz continuity assumption on the gradient and proved the weak convergence of the iteration. Further, in [28], Repetti and Wiaux proposed a variable metric forward-backward-based algorithm for solving problems involving the sum of two nonconvex functions. In [29], the authors addressed the weak convergence of a nonlinearly preconditioned forward-backward splitting method for the sum of a maximally hypermonotone operator $A$ and a hypercocoercive operator $B$. For other applications of variable metric techniques, see [30,31,32] and the references therein.
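As an illustration only (the scheme of [25] additionally allows the error sequences $a_k$, $b_k$ and general operators $U_k$), one error-free step of (9) with a diagonal metric might be sketched as follows; res_A is a hypothetical callable computing $J_{\gamma_k U_k A}$:

```python
import numpy as np

def vm_fb_step(x, B, res_A, u, gamma, lam):
    """One error-free variable metric forward-backward step of (9),
    under the simplifying assumptions U_k = diag(u) and a_k = b_k = 0."""
    y = x - gamma * u * B(x)          # forward step scaled by the metric U_k
    return x + lam * (res_A(y) - x)   # relaxed backward (resolvent) step
```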
Inspired by the above work, we propose a parameterized variable metric three-operator algorithm and a multi-step inertial parameterized variable metric three-operator algorithm. Furthermore, instead of requiring the averagedness or nonexpansiveness of the underlying operator, we directly analyze the convergence of the proposed iterative algorithms.
The rest of this paper is organized as follows. In Section 2, we recall some basic definitions and important lemmas. In Section 3, we propose the parameterized variable metric three-operator algorithm and prove its strong convergence. Then we introduce a multi-step inertial parameterized variable metric three-operator algorithm and prove a strong convergence result of it. In Section 4, we show the performance of the proposed iterative algorithm under different parameters and illustrate the feasibility of the algorithm with numerical examples.

2. Preliminaries

In this section, we list the necessary symbols, notations, definitions and lemmas and establish some results used in this paper. Let $\mathcal{B}(H)$ be the space of bounded linear operators from $H$ to $H$. Set $\mathcal{S}(H) = \{L \in \mathcal{B}(H) \mid L = L^*\}$, where $L^*$ denotes the adjoint of $L$, and $\mathcal{P}_\alpha(H) = \{L \in \mathcal{S}(H) \mid \langle Lx, x\rangle \ge \alpha\langle x, x\rangle\}$, $\alpha > 0$. Given $M \in \mathcal{P}_\alpha(H)$, define the $M$-inner product $\langle\cdot,\cdot\rangle_M$ on $H$ by $\langle x, y\rangle_M = \langle Mx, y\rangle$ for all $x, y \in H$, so the corresponding $M$-norm on $H$ is defined as $\|x\|_M = \sqrt{\langle Mx, x\rangle}$ for all $x \in H$. The strong convergence of a sequence $\{z_n\}$ is denoted by $\to$ and the symbol $\rightharpoonup$ means weak convergence. $\mathrm{Id}$ denotes the identity operator. $\ell_+^1(\mathbb{N})$ denotes the set of sequences $\{\eta_n\}$ in $[0, +\infty)$ such that $\sum_{n \in \mathbb{N}}\eta_n < +\infty$. Let $T\colon H \to H$ be an operator. We denote the fixed point set of $T$ by $\mathrm{Fix}(T)$, that is, $\mathrm{Fix}(T) = \{x \in H \mid Tx = x\}$. Let $A\colon H \to 2^H$ be a set-valued operator. We denote its graph and domain by $\mathrm{gra}\,A = \{(x, u) \in H \times H \mid u \in Ax\}$ and $\mathrm{dom}\,A = \{x \in H \mid Ax \neq \varnothing\}$, respectively. In this paper, let $S$ be the solution set of problem (1); we always assume that $S \neq \varnothing$.
Definition 1.
Let $T\colon H \to H$ be an operator. $T$ is said to be
(i) ([33]) $\tau$-cocoercive, $\tau > 0$, if
\[
\langle Tx - Ty, x - y\rangle \ge \tau\|Tx - Ty\|^2, \quad \forall x, y \in H;
\]
(ii) ([34]) demiregular at $x \in H$ if, for every $\{x_n\} \subset H$ with $x_n \rightharpoonup x$ and $Tx_n \to Tx$ as $n \to \infty$, it follows that $x_n \to x$ as $n \to \infty$.
Definition 2
([33]). Let $A\colon H \to 2^H$ be an operator. $A$ is said to be
(i) monotone if
\[
\langle x - y, u - v\rangle \ge 0, \quad \forall (x, u) \in \mathrm{gra}\,A,\ (y, v) \in \mathrm{gra}\,A;
\]
(ii) strictly monotone if
\[
x \neq y \implies \langle x - y, u - v\rangle > 0, \quad \forall (x, u) \in \mathrm{gra}\,A,\ (y, v) \in \mathrm{gra}\,A;
\]
(iii) $\beta$-strongly monotone if
\[
\langle x - y, u - v\rangle \ge \beta\|x - y\|^2, \quad \forall (x, u) \in \mathrm{gra}\,A,\ (y, v) \in \mathrm{gra}\,A;
\]
(iv) maximally monotone if, for every $(x, u) \in H \times H$,
\[
(x, u) \in \mathrm{gra}\,A \iff \langle x - y, u - v\rangle \ge 0, \quad \forall (y, v) \in \mathrm{gra}\,A;
\]
(v) uniformly monotone with modulus $\phi\colon \mathbb{R}_+ \to [0, +\infty]$ if $\phi$ is increasing, vanishes only at $0$, and
\[
\langle x - y, u - v\rangle \ge \phi(\|x - y\|), \quad \forall (x, u) \in \mathrm{gra}\,A,\ (y, v) \in \mathrm{gra}\,A.
\]
Definition 3
([33]). Let $D$ be a nonempty subset of $H$ and let $\{x_n\}$ be a sequence in $H$. $\{x_n\}$ is called Fejér monotone with respect to $D$ if
\[
\|x_{n+1} - \hat{x}\| \le \|x_n - \hat{x}\|, \quad \forall \hat{x} \in D,\ \forall n \ge 0.
\]
Lemma 1
([33]). Let $A\colon H \to 2^H$ be a maximally monotone operator and let $M\colon H \to H$ be a linear bounded self-adjoint and strongly monotone operator. Then
(i) $M^{-1}A$ is a maximally monotone operator;
(ii) for every $\lambda > 0$, $J_{\lambda M^{-1}A} = (\mathrm{Id} + \lambda M^{-1}A)^{-1}$ is firmly nonexpansive with respect to the $M$-norm, that is,
\[
\langle J_{\lambda M^{-1}A}x - J_{\lambda M^{-1}A}y, x - y\rangle_M \ge \|J_{\lambda M^{-1}A}x - J_{\lambda M^{-1}A}y\|_M^2, \quad \forall x, y \in H;
\]
(iii) $J_{M^{-1}A} = (M + A)^{-1}M$.
Lemma 2
([35]). For all $x, y, z \in H$, we have
\[
2\langle x - y, x - z\rangle = \|x - y\|^2 + \|x - z\|^2 - \|y - z\|^2.
\]
Lemma 3
([33]). Let $\Omega$ be a nonempty subset of $H$ and let $\{x_n\}$ be a sequence in $H$. If the following conditions hold:
(i) for every $x \in \Omega$, $\lim_{n\to\infty}\|x_n - x\|$ exists;
(ii) every weak sequential cluster point of $\{x_n\}$ is in $\Omega$;
then the sequence $\{x_n\}$ converges weakly to a point in $\Omega$.
Lemma 4
([36]). Let $\{\alpha_n\}$ be a sequence in $[0, +\infty)$ and let $\{\eta_n\} \in \ell_+^1(\mathbb{N})$, $\{\delta_n\} \in \ell_+^1(\mathbb{N})$ be such that
\[
\alpha_{n+1} \le (1 + \eta_n)\alpha_n + \delta_n, \quad \forall n \in \mathbb{N}.
\]
Then $\{\alpha_n\}$ converges.

3. Iterative Algorithms and Convergence Analyses

In this section, we propose the parameterized variable metric three-operator algorithm and the multi-step inertial parameterized variable metric three-operator algorithm for solving problem (1), and we obtain weak and strong convergence results. Both algorithms require the following assumptions:
Assumption 1.
$A, B\colon H \to 2^H$ are maximally monotone operators and $C\colon H \to H$ is a $\beta$-cocoercive operator, $\beta > 0$.
Assumption 2.
$D \in \mathcal{B}(H)$ and $\mathrm{zer}(A + B + C + (2\,\mathrm{Id} - D)) \neq \varnothing$.
The following Proposition is mainly used to prove the convergence of the proposed algorithm.
Proposition 1.
Let $A, B\colon H \to 2^H$ be maximally monotone operators and let $C\colon H \to H$ be a $\beta$-cocoercive operator, $\beta > 0$. Let $D \in \mathcal{B}(H)$, $M \in \mathcal{P}_\alpha(H)$ and $\gamma > 0$. Denote
\[
J_{\gamma A}^M = (M + \gamma A)^{-1}, \qquad K_M = \big\{z \in H \mid J_{\gamma A}^M z = J_{\gamma B}^M\big(2MJ_{\gamma A}^M z - z - \gamma CJ_{\gamma A}^M z - \gamma(2\,\mathrm{Id} - D)J_{\gamma A}^M z\big)\big\}.
\]
Then $\mathrm{zer}(A + B + C + (2\,\mathrm{Id} - D)) = J_{\gamma A}^M(K_M)$ and $K_M = \mathrm{Fix}(T)$, where $T = \mathrm{Id} - J_{\gamma A}^M + J_{\gamma B}^M\big(2MJ_{\gamma A}^M - \mathrm{Id} - \gamma CJ_{\gamma A}^M - \gamma(2\,\mathrm{Id} - D)J_{\gamma A}^M\big)$. Moreover,
\[
\mathrm{Fix}(T) = \big\{Mx + \gamma a \mid x \in \mathrm{zer}(A + B + C + (2\,\mathrm{Id} - D)),\ a \in Ax \cap \big({-Bx} - Cx - (2\,\mathrm{Id} - D)x\big)\big\}. \tag{10}
\]
Proof of Proposition 1.
Let $x \in \mathrm{zer}(A + B + C + (2\,\mathrm{Id} - D))$. We have
\[
\begin{aligned}
0 \in (A + B + C + (2\,\mathrm{Id} - D))x
&\iff 0 \in \gamma(A + B + C + (2\,\mathrm{Id} - D))x\\
&\iff \exists\, z \in H \text{ with } z - Mx \in \gamma Ax \text{ such that } Mx - z \in \gamma Bx + \gamma Cx + \gamma(2\,\mathrm{Id} - D)x\\
&\iff x = J_{\gamma A}^M z \text{ and } 2Mx - z - \gamma Cx - \gamma(2\,\mathrm{Id} - D)x \in Mx + \gamma Bx\\
&\iff x = J_{\gamma A}^M z \text{ and } x = J_{\gamma B}^M(2Mx - z - \gamma Cx - \gamma(2\,\mathrm{Id} - D)x)\\
&\iff z \in K_M \text{ and } x \in J_{\gamma A}^M(K_M).
\end{aligned}
\]
In turn, if $x \in J_{\gamma A}^M(K_M)$, there exists $z \in K_M$ such that $x = J_{\gamma A}^M z$. Each of the above steps can be worked backward. It follows that $\mathrm{zer}(A + B + C + (2\,\mathrm{Id} - D)) = J_{\gamma A}^M(K_M)$.
That  K M = Fix ( T )  is a trivial result.
Next, we show that (10) holds.
\[
\begin{aligned}
z \in K_M &\iff \exists\, x \in \mathrm{zer}(A + B + C + (2\,\mathrm{Id} - D)),\ x = J_{\gamma A}^M z,\ x = J_{\gamma B}^M(2Mx - z - \gamma Cx - \gamma(2\,\mathrm{Id} - D)x)\\
&\iff z - Mx \in \gamma Ax,\ Mx - z \in \gamma Bx + \gamma Cx + \gamma(2\,\mathrm{Id} - D)x\\
&\iff \exists\, a \in Ax \cap \big({-Bx} - Cx - (2\,\mathrm{Id} - D)x\big),\ z = Mx + \gamma a\\
&\iff z \in \mathrm{Fix}(T).
\end{aligned}
\]
The proof is completed. □
Proposition 2.
Let $A, B\colon H \to 2^H$ be maximally monotone operators and let $C\colon H \to H$ be an operator. Let $\alpha > 0$, $\gamma > 0$ and $M_1, M_2 \in \mathcal{P}_\alpha(H)$. Given $z, \hat{z} \in H$, denote
\[
\begin{aligned}
x &= J_{\gamma A}^{M_1}z, \qquad \hat{x} = J_{\gamma A}^{M_2}\hat{z},\\
y &= J_{\gamma B}^{M_1}(2M_1x - z - \gamma Cx - \gamma(2\,\mathrm{Id} - D)x), \qquad \hat{y} = J_{\gamma B}^{M_2}(2M_2\hat{x} - \hat{z} - \gamma C\hat{x} - \gamma(2\,\mathrm{Id} - D)\hat{x}).
\end{aligned} \tag{11}
\]
Then we have
\[
\begin{aligned}
0 \le{}& \langle z - \hat{z}, (x - y) - (\hat{x} - \hat{y})\rangle - \alpha\|(x - y) - (\hat{x} - \hat{y})\|^2 + \gamma\langle y - \hat{y}, (2\,\mathrm{Id} - D)\hat{x} - (2\,\mathrm{Id} - D)x\rangle\\
&- \gamma\langle y - \hat{y}, Cx - C\hat{x}\rangle + \langle y - \hat{y}, (M_2 - M_1)(\hat{y} - \hat{x})\rangle + \langle (M_2 - M_1)\hat{x}, (x - y) - (\hat{x} - \hat{y})\rangle.
\end{aligned} \tag{12}
\]
Further, if $A$ or $B$ is uniformly monotone with modulus $\phi$, then the above inequality holds with the $0$ on the left replaced by $\gamma\phi(\|x - \hat{x}\|)$ or $\gamma\phi(\|y - \hat{y}\|)$, respectively.
Proof of Proposition 2.
According to (11), we have
\[
\begin{aligned}
&z - M_1x \in \gamma Ax, \qquad \hat{z} - M_2\hat{x} \in \gamma A\hat{x},\\
&2M_1x - z - \gamma Cx - \gamma(2\,\mathrm{Id} - D)x - M_1y \in \gamma By, \qquad 2M_2\hat{x} - \hat{z} - \gamma C\hat{x} - \gamma(2\,\mathrm{Id} - D)\hat{x} - M_2\hat{y} \in \gamma B\hat{y}.
\end{aligned}
\]
By the monotonicity of $A$ and $B$, we obtain
\[
0 \le \langle x - \hat{x},\, z - M_1x - (\hat{z} - M_2\hat{x})\rangle,
\]
\[
\begin{aligned}
0 &\le \big\langle y - \hat{y},\, 2M_1x - z - \gamma Cx - \gamma(2\,\mathrm{Id}-D)x - M_1y - \big(2M_2\hat{x} - \hat{z} - \gamma C\hat{x} - \gamma(2\,\mathrm{Id}-D)\hat{x} - M_2\hat{y}\big)\big\rangle\\
&= -\langle y - \hat{y},\, z - M_1x - (\hat{z} - M_2\hat{x})\rangle - \gamma\langle y - \hat{y}, Cx - C\hat{x}\rangle + \big\langle y - \hat{y},\, M_2\hat{y} - M_2\hat{x} + \gamma(2\,\mathrm{Id}-D)\hat{x} - \big(M_1y - M_1x + \gamma(2\,\mathrm{Id}-D)x\big)\big\rangle.
\end{aligned}
\]
Adding the above two inequalities, we obtain
\[
\begin{aligned}
0 \le{}& \langle z - M_1x - (\hat{z} - M_2\hat{x}),\, (x - y) - (\hat{x} - \hat{y})\rangle - \gamma\langle y - \hat{y}, Cx - C\hat{x}\rangle + \big\langle y - \hat{y},\, M_2\hat{y} - M_2\hat{x} + \gamma(2\,\mathrm{Id}-D)\hat{x} - \big(M_1y - M_1x + \gamma(2\,\mathrm{Id}-D)x\big)\big\rangle\\
={}& \langle z - \hat{z}, (x - y) - (\hat{x} - \hat{y})\rangle - \langle M_1x - M_2\hat{x}, (x - y) - (\hat{x} - \hat{y})\rangle - \gamma\langle y - \hat{y}, Cx - C\hat{x}\rangle\\
&+ \big\langle y - \hat{y},\, M_2\hat{y} - M_2\hat{x} - (M_1y - M_1x)\big\rangle + \gamma\langle y - \hat{y}, (2\,\mathrm{Id}-D)(\hat{x} - x)\rangle.
\end{aligned}
\]
Notice that
\[
\begin{aligned}
\langle M_1x - M_2\hat{x}, (x - y) - (\hat{x} - \hat{y})\rangle
&= \langle M_1x - M_1\hat{x}, (x - y) - (\hat{x} - \hat{y})\rangle + \langle M_1\hat{x} - M_2\hat{x}, (x - y) - (\hat{x} - \hat{y})\rangle\\
&= \langle x - \hat{x}, (x - y) - (\hat{x} - \hat{y})\rangle_{M_1} - \langle (M_2 - M_1)\hat{x}, (x - y) - (\hat{x} - \hat{y})\rangle
\end{aligned}
\]
and that
\[
\begin{aligned}
\langle y - \hat{y},\, M_2\hat{y} - M_2\hat{x} - (M_1y - M_1x)\rangle
&= \langle y - \hat{y},\, M_1\hat{y} - M_1\hat{x} - (M_1y - M_1x)\rangle + \langle y - \hat{y},\, M_2\hat{y} - M_1\hat{y} - (M_2\hat{x} - M_1\hat{x})\rangle\\
&= \langle y - \hat{y}, (x - y) - (\hat{x} - \hat{y})\rangle_{M_1} + \langle y - \hat{y}, (M_2 - M_1)(\hat{y} - \hat{x})\rangle.
\end{aligned}
\]
By $M_1 \in \mathcal{P}_\alpha(H)$, we have
\[
\begin{aligned}
0 \le{}& \langle z - \hat{z}, (x - y) - (\hat{x} - \hat{y})\rangle - \|(x - y) - (\hat{x} - \hat{y})\|_{M_1}^2 + \gamma\langle y - \hat{y}, (2\,\mathrm{Id}-D)\hat{x} - (2\,\mathrm{Id}-D)x\rangle - \gamma\langle y - \hat{y}, Cx - C\hat{x}\rangle\\
&+ \langle y - \hat{y}, (M_2 - M_1)(\hat{y} - \hat{x})\rangle + \langle (M_2 - M_1)\hat{x}, (x - y) - (\hat{x} - \hat{y})\rangle\\
\le{}& \langle z - \hat{z}, (x - y) - (\hat{x} - \hat{y})\rangle - \alpha\|(x - y) - (\hat{x} - \hat{y})\|^2 + \gamma\langle y - \hat{y}, (2\,\mathrm{Id}-D)\hat{x} - (2\,\mathrm{Id}-D)x\rangle - \gamma\langle y - \hat{y}, Cx - C\hat{x}\rangle\\
&+ \langle y - \hat{y}, (M_2 - M_1)(\hat{y} - \hat{x})\rangle + \langle (M_2 - M_1)\hat{x}, (x - y) - (\hat{x} - \hat{y})\rangle.
\end{aligned}
\]
Further, if $A$ or $B$ is uniformly monotone with modulus $\phi$, we only need to replace the $0$ on the left-hand side of (12) by $\gamma\phi(\|x - \hat{x}\|)$ or $\gamma\phi(\|y - \hat{y}\|)$, respectively. □

3.1. Parameterized Variable Metric Three-Operator Algorithm

In this subsection, we study the following parameterized variable metric three-operator algorithm and its convergence.
Pick any $z_0 \in H$ and iterate
\[
\begin{cases}
x_n = J_{\gamma A}^{M_n}z_n,\\
y_n = J_{\gamma B}^{M_n}(2M_nx_n - z_n - \gamma Cx_n - \gamma(2\,\mathrm{Id} - D)x_n),\\
z_{n+1} = z_n + \mu_n(y_n - x_n), \quad n \ge 0,
\end{cases} \tag{13}
\]
where $J_{\gamma A}^{M_n} = (M_n + \gamma A)^{-1}$, $M_n \in \mathcal{P}_\alpha(H)$ for all $n \in \mathbb{N}$, $\alpha > 0$ and $\gamma > 0$.
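Assuming the metric resolvents $J_{\gamma A}^{M_n} = (M_n + \gamma A)^{-1}$ and $J_{\gamma B}^{M_n}$ are available as callables (res_A and res_B below, both hypothetical), a minimal sketch of Algorithm (13) with a fixed metric $M_n = M$ reads:

```python
import numpy as np

def pvmto(z0, res_A, res_B, C, M, D, gamma, mu, tol=1e-5, max_iter=100000):
    """A sketch of Algorithm (13) (PVMTO) with a constant metric M_n = M."""
    z = z0.copy()
    for _ in range(max_iter):
        x = res_A(z)                              # x_n = J^M_{gamma A} z_n
        y = res_B(2*(M @ x) - z - gamma*C(x)      # y_n = J^M_{gamma B}(2 M x_n - z_n
                  - gamma*(2*x - D @ x))          #       - gamma C x_n - gamma (2 Id - D) x_n)
        z_next = z + mu*(y - x)                   # z_{n+1} = z_n + mu_n (y_n - x_n)
        if np.linalg.norm(z_next - z) < tol:      # stopping criterion from Section 4
            z = z_next
            break
        z = z_next
    return res_A(z), z                            # the x-iterate approximates the zero
```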
Theorem 1.
Let $\{z_n\}$ be generated by Algorithm (13). Suppose that the following conditions hold:
(C1) $\|D\| \in [0, 2)$, $\gamma \in \Big(0, \frac{4\alpha\beta(2 - \|D\|)}{2 - \|D\| + \beta\|2\,\mathrm{Id}-D\|^2}\Big)$;
(C2) $\sum_{n=0}^{\infty}\mu_n = +\infty$, $0 < \mu_n$, $\limsup_{n\to\infty}\mu_n = \bar{\mu} \le 2\alpha - \gamma\Big(\frac{1}{2\beta} + \frac{\|2\,\mathrm{Id}-D\|^2}{2(2 - \|D\|)}\Big)$;
(C3) $M \in \mathcal{P}_\alpha(H)$, $\sum_{n=0}^{\infty}\frac{1}{\mu_n}\|M - M_n\| < +\infty$.
Then we have:
1. $\{z_n\}$ is bounded;
2. $y_n - x_n \to 0$, $z_n \rightharpoonup z^* \in K_M$, $x_n \rightharpoonup x^*$, $y_n \rightharpoonup x^*$ and $Cx_n \to Cx^*$, where $x^* = J_{\gamma A}^M z^* \in \mathrm{zer}(A + B + C + (2\,\mathrm{Id}-D))$ and $K_M$ is defined as in Proposition 1;
3. Suppose that one of the following holds:
(a) $A$ is uniformly monotone on every nonempty bounded subset of $\mathrm{dom}\,A$;
(b) $B$ is uniformly monotone on every nonempty bounded subset of $\mathrm{dom}\,B$;
(c) for every $x \in \mathrm{zer}(A + B + C + (2\,\mathrm{Id}-D))$, $C$ is demiregular at $x$;
then $x_n, y_n \to x^*$.
Proof of Theorem 1.
1. Set $w_n = y_n - x_n$. By (13), we have
\[
(x_n,\, z_n - M_nx_n) \in \mathrm{gra}\,\gamma A, \qquad (y_n,\, 2M_nx_n - z_n - \gamma Cx_n - \gamma(2\,\mathrm{Id}-D)x_n - M_ny_n) \in \mathrm{gra}\,\gamma B. \tag{14}
\]
Let $x \in \mathrm{zer}(A + B + C + (2\,\mathrm{Id}-D))$. Then there exist $p \in K_M$ and $p_n \in K_{M_n}$, respectively, such that $x = J_{\gamma A}^M(p)$ and $x = J_{\gamma A}^{M_n}(p_n)$, in view of Proposition 1, where $K_{M_n} = \{p_n \in H \mid J_{\gamma A}^{M_n}p_n = J_{\gamma B}^{M_n}(2M_nJ_{\gamma A}^{M_n}p_n - p_n - \gamma CJ_{\gamma A}^{M_n}p_n - \gamma(2\,\mathrm{Id}-D)J_{\gamma A}^{M_n}p_n)\}$. By taking $z = p_n$, $\hat{z} = z_n$, $M_1 = M_2 = M_n$ and noting that $\hat{x} = x_n$, $\hat{y} = y_n$ and $x = y$ in Proposition 2, we obtain
\[
0 \le \langle p_n - z_n, w_n\rangle - \alpha\|w_n\|^2 - \gamma\langle x - y_n, Cx - Cx_n\rangle + \gamma\langle x - y_n, (2\,\mathrm{Id}-D)x_n - (2\,\mathrm{Id}-D)x\rangle. \tag{15}
\]
Multiplying the first two terms on the right of (15) by $2\mu_n$, we obtain
\[
\begin{aligned}
2\mu_n\big(\langle p_n - z_n, w_n\rangle - \alpha\|w_n\|^2\big) &= 2\langle p_n - z_n, z_{n+1} - z_n\rangle - 2\alpha\mu_n\|w_n\|^2\\
&= \|z_n - p_n\|^2 - \|z_{n+1} - p_n\|^2 + \mu_n(\mu_n - 2\alpha)\|w_n\|^2.
\end{aligned} \tag{16}
\]
Since $C$ is a $\beta$-cocoercive operator, it follows from Young's inequality that
\[
\begin{aligned}
-\gamma\langle x - y_n, Cx - Cx_n\rangle &= -\gamma\langle x - x_n, Cx - Cx_n\rangle + \gamma\langle w_n, Cx - Cx_n\rangle\\
&\le -\beta\gamma\|Cx - Cx_n\|^2 + \gamma\langle w_n, Cx - Cx_n\rangle\\
&\le -\beta\gamma\|Cx - Cx_n\|^2 + \beta\gamma\|Cx - Cx_n\|^2 + \frac{\gamma}{4\beta}\|w_n\|^2 = \frac{\gamma}{4\beta}\|w_n\|^2.
\end{aligned} \tag{17}
\]
Furthermore, the last term of (15) can be estimated by utilizing Young's inequality again as
\[
\begin{aligned}
\gamma\langle x - y_n, (2\,\mathrm{Id}-D)x_n - (2\,\mathrm{Id}-D)x\rangle
={}& -\gamma\langle y_n - x_n, (2\,\mathrm{Id}-D)x_n - (2\,\mathrm{Id}-D)x\rangle - \gamma\langle x_n - x, (2\,\mathrm{Id}-D)x_n - (2\,\mathrm{Id}-D)x\rangle\\
\le{}& \gamma\langle w_n, (2\,\mathrm{Id}-D)x - (2\,\mathrm{Id}-D)x_n\rangle - 2\gamma\|x_n - x\|^2 + \gamma\|D\|\,\|x_n - x\|^2\\
\le{}& \gamma\Big(\frac{\epsilon}{2}\|w_n\|^2 + \frac{1}{2\epsilon}\|2\,\mathrm{Id}-D\|^2\|x - x_n\|^2\Big) + (\gamma\|D\| - 2\gamma)\|x_n - x\|^2\\
={}& \gamma\Big[\|D\| - 2 + \frac{1}{2\epsilon}\|2\,\mathrm{Id}-D\|^2\Big]\|x_n - x\|^2 + \frac{\gamma\epsilon}{2}\|w_n\|^2 \le \frac{\gamma\epsilon}{2}\|w_n\|^2,
\end{aligned} \tag{18}
\]
where the positive constant $\epsilon$ satisfies
\[
\frac{\|2\,\mathrm{Id}-D\|^2}{2(2 - \|D\|)} \le \epsilon \le \frac{4\alpha\beta - \gamma - 2\beta\mu_n}{2\beta\gamma}, \tag{19}
\]
which implies that $\|D\| - 2 + \frac{1}{2\epsilon}\|2\,\mathrm{Id}-D\|^2 \le 0$.
Now, substituting (16)–(18) into (15), we derive
\[
0 \le \|z_n - p_n\|^2 - \|z_{n+1} - p_n\|^2 + \mu_n\Big(\mu_n - 2\alpha + \frac{\gamma}{2\beta}\Big)\|w_n\|^2 + \gamma\epsilon\mu_n\|w_n\|^2.
\]
Thereby,
\[
\|z_{n+1} - p_n\|^2 + \mu_n\Big(2\alpha - \frac{\gamma}{2\beta} - \mu_n - \gamma\epsilon\Big)\|w_n\|^2 \le \|z_n - p_n\|^2. \tag{20}
\]
It yields
\[
\|z_{n+1} - p_n\| \le \|z_n - p_n\|,
\]
since $2\alpha - \frac{\gamma}{2\beta} - \mu_n - \gamma\epsilon > 0$ by (19). Hence, we obtain from $x = J_{\gamma A}^M(p) = J_{\gamma A}^{M_n}(p_n)$ and Proposition 1 that $p_n - p = (M_n - M)x$ and
\[
\|z_{n+1} - p\| \le \|z_{n+1} - p_n\| + \|p_n - p\| \le \|z_n - p\| + 2\|p_n - p\| = \|z_n - p\| + 2\|(M_n - M)x\|.
\]
Note that $\sum_{n=0}^{\infty}\|M - M_n\| < +\infty$ because $\sum_{n=0}^{\infty}\frac{1}{\bar{\mu}}\|M - M_n\| \le \sum_{n=0}^{\infty}\frac{1}{\mu_n}\|M - M_n\| < +\infty$. Thus $\lim_{n\to\infty}\|z_n - p\|$ exists by Lemma 4. As a result, $\{z_n\}$ is bounded.
In addition, by applying Lemma 1 we have
\[
\begin{aligned}
\|x_n - x\|^2 &= \|J_{\gamma A}^{M_n}z_n - J_{\gamma A}^{M_n}p_n\|^2 \le \frac{1}{\alpha}\|J_{\gamma A}^{M_n}z_n - J_{\gamma A}^{M_n}p_n\|_{M_n}^2 = \frac{1}{\alpha}\|J_{\gamma M_n^{-1}A}M_n^{-1}z_n - J_{\gamma M_n^{-1}A}M_n^{-1}p_n\|_{M_n}^2\\
&\le \frac{1}{\alpha}\|M_n^{-1}z_n - M_n^{-1}p_n\|_{M_n}^2 \le \frac{1}{\alpha}\|M_n^{-1}\|\,\|z_n - p_n\|^2 \le \frac{1}{\alpha^2}\big(\|z_n - p\| + \|p - p_n\|\big)^2 < \infty.
\end{aligned}
\]
So  { x n }  is a bounded sequence. Similarly,  { y n }  is bounded.
2. In Proposition 2, we take $z = z_{n+1}$, $\hat{z} = z_n$, $M_1 = M_{n+1}$ and $M_2 = M_n$; then $x = x_{n+1}$, $y = y_{n+1}$, $\hat{x} = x_n$ and $\hat{y} = y_n$. Thus,
\[
\begin{aligned}
0 \le{}& \langle z_{n+1} - z_n, w_n - w_{n+1}\rangle - \alpha\|w_{n+1} - w_n\|^2 - \gamma\langle y_{n+1} - y_n, Cx_{n+1} - Cx_n\rangle + \gamma\langle y_{n+1} - y_n, (2\,\mathrm{Id}-D)(x_n - x_{n+1})\rangle\\
&+ \langle y_{n+1} - y_n, (M_n - M_{n+1})(y_n - x_n)\rangle + \langle (M_n - M_{n+1})x_n, w_n - w_{n+1}\rangle\\
\le{}& \langle z_{n+1} - z_n, w_n - w_{n+1}\rangle - \alpha\|w_{n+1} - w_n\|^2 - \gamma\langle y_{n+1} - y_n, Cx_{n+1} - Cx_n\rangle\\
&+ \|M_{n+1} - M_n\|\big[\|y_n - y_{n+1}\|\,\|w_n\| + \|x_n\|\,\|w_{n+1} - w_n\|\big] + \gamma\langle y_{n+1} - y_n, (2\,\mathrm{Id}-D)(x_n - x_{n+1})\rangle.
\end{aligned} \tag{21}
\]
Similar to (17) and (18), we obtain the following estimates of the third and the last terms on the right side of (21), respectively:
\[
\begin{aligned}
-\gamma\langle y_{n+1} - y_n, Cx_{n+1} - Cx_n\rangle &= -\gamma\langle w_{n+1} - w_n, Cx_{n+1} - Cx_n\rangle - \gamma\langle x_{n+1} - x_n, Cx_{n+1} - Cx_n\rangle\\
&\le \beta\gamma\|Cx_{n+1} - Cx_n\|^2 + \frac{\gamma}{4\beta}\|w_{n+1} - w_n\|^2 - \beta\gamma\|Cx_{n+1} - Cx_n\|^2 = \frac{\gamma}{4\beta}\|w_{n+1} - w_n\|^2
\end{aligned} \tag{22}
\]
and
\[
\begin{aligned}
\gamma\langle y_{n+1} - y_n, (2\,\mathrm{Id}-D)(x_n - x_{n+1})\rangle ={}& -\gamma\langle w_{n+1} - w_n, (2\,\mathrm{Id}-D)(x_{n+1} - x_n)\rangle - \gamma\langle x_{n+1} - x_n, (2\,\mathrm{Id}-D)(x_{n+1} - x_n)\rangle\\
\le{}& \gamma\Big(\frac{\epsilon}{2}\|w_{n+1} - w_n\|^2 + \frac{1}{2\epsilon}\|2\,\mathrm{Id}-D\|^2\|x_{n+1} - x_n\|^2\Big) - 2\gamma\|x_{n+1} - x_n\|^2 + \gamma\|D\|\,\|x_{n+1} - x_n\|^2\\
={}& \gamma\Big(\frac{1}{2\epsilon}\|2\,\mathrm{Id}-D\|^2 + \|D\| - 2\Big)\|x_{n+1} - x_n\|^2 + \frac{\gamma\epsilon}{2}\|w_{n+1} - w_n\|^2 \le \frac{\gamma\epsilon}{2}\|w_{n+1} - w_n\|^2,
\end{aligned} \tag{23}
\]
where $\epsilon$ is given by (19).
Multiplying (21) by 2 and substituting (22) and (23) into (21), we conclude that
\[
\begin{aligned}
0 \le{}& 2\langle z_{n+1} - z_n, w_n - w_{n+1}\rangle - 2\alpha\|w_{n+1} - w_n\|^2 + \frac{\gamma}{2\beta}\|w_{n+1} - w_n\|^2 + \gamma\epsilon\|w_{n+1} - w_n\|^2\\
&+ 2\|M_{n+1} - M_n\|\big[\|y_n - y_{n+1}\|\,\|w_n\| + \|x_n\|\,\|w_{n+1} - w_n\|\big]\\
={}& 2\mu_n\langle w_n, w_n - w_{n+1}\rangle + \Big(\frac{\gamma}{2\beta} + \gamma\epsilon - 2\alpha\Big)\|w_{n+1} - w_n\|^2 + 2\|M_{n+1} - M_n\|\big[\|y_n - y_{n+1}\|\,\|w_n\| + \|x_n\|\,\|w_{n+1} - w_n\|\big]\\
={}& \mu_n\|w_n\|^2 - \mu_n\|w_{n+1}\|^2 + \Big(\frac{\gamma}{2\beta} + \gamma\epsilon - 2\alpha + \mu_n\Big)\|w_{n+1} - w_n\|^2 + 2\|M_{n+1} - M_n\|\big[\|y_n - y_{n+1}\|\,\|w_n\| + \|x_n\|\,\|w_{n+1} - w_n\|\big].
\end{aligned}
\]
From $\mu_n > 0$, we have
\[
\begin{aligned}
\|w_{n+1}\|^2 &\le \|w_n\|^2 - \frac{1}{\mu_n}\Big(2\alpha - \frac{\gamma}{2\beta} - \gamma\epsilon - \mu_n\Big)\|w_{n+1} - w_n\|^2 + \frac{2}{\mu_n}\|M_{n+1} - M_n\|\big[\|y_n - y_{n+1}\|\,\|w_n\| + \|x_n\|\,\|w_{n+1} - w_n\|\big]\\
&\le \|w_n\|^2 + \frac{2}{\mu_n}\|M_{n+1} - M_n\|\big[\|y_n - y_{n+1}\|\,\|w_n\| + \|x_n\|\,\|w_{n+1} - w_n\|\big],
\end{aligned}
\]
since $2\alpha - \frac{\gamma}{2\beta} - \gamma\epsilon - \mu_n \ge 0$ by (19). Let
\[
\delta_n = \frac{2}{\mu_n}\|M_{n+1} - M_n\|\big[\|y_n - y_{n+1}\|\,\|w_n\| + \|x_n\|\,\|w_{n+1} - w_n\|\big].
\]
We have $\sum_{n=0}^{\infty}\delta_n < +\infty$ by virtue of the fact that $\sum_{n=0}^{\infty}\frac{1}{\mu_n}\|M_{n+1} - M_n\| < +\infty$ and $\{x_n\}$, $\{y_n\}$, $\{w_n\}$ are bounded sequences. Therefore, $\{\|w_n\|\}$ converges by Lemma 4.
Next, we show that $w_n = y_n - x_n$ converges to $0$ as $n$ goes to infinity.
Rearranging terms in (20), we have
\[
\begin{aligned}
\mu_n\Big(2\alpha - \frac{\gamma}{2\beta} - \mu_n - \gamma\epsilon\Big)\|w_n\|^2 &\le \|z_n - p_n\|^2 - \|z_{n+1} - p_n\|^2\\
&\le \|z_n - p_n\|^2 - \|z_{n+1} - p_{n+1}\|^2 - \|p_n - p_{n+1}\|^2 + 2\|z_{n+1} - p_{n+1}\|\,\|p_n - p_{n+1}\|\\
&\le \|z_n - p_n\|^2 - \|z_{n+1} - p_{n+1}\|^2 - \|(M_n - M_{n+1})x\|^2 + 2\big[\|z_{n+1} - p\| + \|p - p_{n+1}\|\big]\|(M_n - M_{n+1})x\|\\
&= \|z_n - p_n\|^2 - \|z_{n+1} - p_{n+1}\|^2 - \|(M_n - M_{n+1})x\|^2 + 2\big[\|z_{n+1} - p\| + \|(M - M_{n+1})x\|\big]\|(M_n - M_{n+1})x\|\\
&\le \|z_n - p_n\|^2 - \|z_{n+1} - p_{n+1}\|^2 + \Omega\|(M_n - M_{n+1})x\|,
\end{aligned} \tag{24}
\]
where $\Omega = \sup_n\big\{2\big[\|z_{n+1} - p\| + \|(M - M_{n+1})x\|\big]\big\} < \infty$ due to the boundedness of $\{z_n\}$ and (C2), (C3). Summing both sides of (24) from $0$ to $k$ and letting $k$ go to infinity, we obtain
\[
\sum_{n=0}^{\infty}\mu_n\Big(2\alpha - \frac{\gamma}{2\beta} - \mu_n - \gamma\epsilon\Big)\|w_n\|^2 \le \|z_0 - p_0\|^2 + \Omega\sum_{n=0}^{\infty}\|(M_n - M_{n+1})x\| < \infty. \tag{25}
\]
By $\sum_{n=0}^{\infty}\mu_n = +\infty$ and (19), we have $\sum_{n=0}^{\infty}\mu_n\big(2\alpha - \frac{\gamma}{2\beta} - \mu_n - \gamma\epsilon\big) = +\infty$. Thus, $\liminf_{n\to\infty}\|w_n\| = 0$ in view of (25). Hence, $\lim_{n\to\infty}\|w_n\| = 0$; that is, $y_n - x_n \to 0$ as $n \to \infty$.
For simplicity, denote $v_n = \gamma C(x_n)$. From 1., $\{z_n\}$, $\{x_n\}$ and $\{v_n\}$ are bounded sequences. Assume that $(z^*, x^*, v^*)$ is a weak limit point of the sequence $\{(z_n, x_n, v_n)\}$. Then there exists a subsequence $\{(z_{n_k}, x_{n_k}, v_{n_k})\}$ such that $(z_{n_k}, x_{n_k}, v_{n_k}) \rightharpoonup (z^*, x^*, v^*)$.
Define $F\colon H^3 \to 2^{H^3}$ by
\[
F = \begin{pmatrix} (\gamma A)^{-1}\\ (\gamma C)^{-1}\\ \gamma B + \gamma(2\,\mathrm{Id}-D) \end{pmatrix} + \begin{pmatrix} 0 & 0 & -\mathrm{Id}\\ 0 & 0 & -\mathrm{Id}\\ \mathrm{Id} & \mathrm{Id} & 0 \end{pmatrix}.
\]
Then F is a maximally monotone operator (see [33] (Example 20.35, Corollary 25.5(i))). From (14), we obtain
\[
\begin{pmatrix} x_{n_k} - y_{n_k}\\ x_{n_k} - y_{n_k}\\ M_{n_k}(x_{n_k} - y_{n_k}) + \gamma(2\,\mathrm{Id}-D)(y_{n_k} - x_{n_k}) \end{pmatrix} \in F\begin{pmatrix} z_{n_k} - M_{n_k}x_{n_k}\\ v_{n_k}\\ y_{n_k} \end{pmatrix}.
\]
Noting that $z_{n_k} - M_{n_k}x_{n_k} \rightharpoonup z^* - Mx^*$, $v_{n_k} \rightharpoonup v^*$, $y_{n_k} \rightharpoonup x^*$ by $w_{n_k} \to 0$, and that $\mathrm{gra}\,F$ is sequentially closed in $H^3_{\mathrm{weak}} \times H^3_{\mathrm{strong}}$, we deduce
\[
\begin{pmatrix} 0\\ 0\\ 0 \end{pmatrix} \in \left[\begin{pmatrix} (\gamma A)^{-1}\\ (\gamma C)^{-1}\\ \gamma B + \gamma(2\,\mathrm{Id}-D) \end{pmatrix} + \begin{pmatrix} 0 & 0 & -\mathrm{Id}\\ 0 & 0 & -\mathrm{Id}\\ \mathrm{Id} & \mathrm{Id} & 0 \end{pmatrix}\right]\begin{pmatrix} z^* - Mx^*\\ v^*\\ x^* \end{pmatrix}.
\]
In other words,
\[
x^* = J_{\gamma A}^M z^*, \qquad v^* = \gamma Cx^*, \qquad x^* = J_{\gamma B}^M(2Mx^* - z^* - \gamma Cx^* - \gamma(2\,\mathrm{Id}-D)x^*).
\]
Thus $z^* \in K_M$. This gives $z_n \rightharpoonup z^* \in K_M$ according to Lemma 3. Furthermore, $x^* = J_{\gamma A}^M z^* \in \mathrm{zer}(A + B + C + (2\,\mathrm{Id}-D))$ by Proposition 1. Since $z^*$ is the unique weak limit point of $\{z_n\}$, $x^* = J_{\gamma A}^M z^*$ and $v^* = \gamma Cx^*$ are the unique weak limit points of $\{x_n\}$ and $\{v_n\}$, respectively. So $x_n \rightharpoonup x^*$ and $v_n \rightharpoonup v^*$. By $w_n \to 0$, we have $y_n \rightharpoonup x^*$.
Taking $z = z^*$, $\hat{z} = z_n$, $M_1 = M$ and $M_2 = M_n$ in Proposition 2 and applying (17) and (18), we have
\[
0 \le \langle z^* - z_n, w_n\rangle - \alpha\|w_n\|^2 + \frac{\gamma\epsilon}{2}\|w_n\|^2 - \beta\gamma\|Cx^* - Cx_n\|^2 + \gamma\langle w_n, Cx^* - Cx_n\rangle + \langle x^* - y_n, (M_n - M)w_n\rangle + \langle (M_n - M)x_n, w_n\rangle.
\]
Then,
\[
\beta\gamma\|Cx^* - Cx_n\|^2 \le \langle z^* - z_n, w_n\rangle + \Big(\frac{\gamma\epsilon}{2} - \alpha\Big)\|w_n\|^2 + \gamma\langle w_n, Cx^* - Cx_n\rangle + \langle x^* - y_n, (M_n - M)w_n\rangle + \langle (M_n - M)x_n, w_n\rangle \to 0, \quad \text{as } n \to \infty.
\]
That is, $Cx_n \to Cx^*$ as $n \to \infty$.
3. As seen in 2., there exists $x^* \in \mathrm{zer}(A + B + C + (2\,\mathrm{Id}-D))$ such that $x_n \rightharpoonup x^*$ as $n \to \infty$.
(a) Set $O = \{x^*\} \cup \{x_n\}$. Obviously, $O$ is a bounded subset of $\mathrm{dom}\,A$. Since $A$ is uniformly monotone on $O$, there exists an increasing function $\phi_A\colon \mathbb{R}_+ \to [0, +\infty]$ vanishing only at $0$ such that, setting $z = z^*$, $\hat{z} = z_n$, $M_1 = M$ and $M_2 = M_n$ in Proposition 2, its conclusion becomes
\[
\begin{aligned}
\gamma\phi_A(\|x^* - x_n\|) \le{}& \langle z^* - z_n, w_n\rangle - \alpha\|w_n\|^2 - \gamma\langle x^* - y_n, Cx^* - Cx_n\rangle + \gamma\langle x^* - y_n, (2\,\mathrm{Id}-D)(x_n - x^*)\rangle\\
&+ \langle x^* - y_n, (M_n - M)w_n\rangle + \langle (M_n - M)x_n, w_n\rangle.
\end{aligned}
\]
Hence, using (18) and the results of 2., we have
\[
\begin{aligned}
0 \le \gamma\phi_A(\|x^* - x_n\|) \le{}& \langle z^* - z_n, w_n\rangle - \alpha\|w_n\|^2 - \gamma\langle x^* - y_n, Cx^* - Cx_n\rangle + \frac{\gamma\epsilon}{2}\|w_n\|^2\\
&+ \langle x^* - y_n, (M_n - M)w_n\rangle + \langle (M_n - M)x_n, w_n\rangle \to 0, \quad \text{as } n \to \infty,
\end{aligned}
\]
where $\epsilon$ is a positive constant satisfying (19). Then $\lim_{n\to\infty}\|x_n - x^*\| = 0$ follows from $\lim_{n\to\infty}\phi_A(\|x^* - x_n\|) = 0$ and the definition of $\phi_A$. Moreover, $\{y_n\}$ converges strongly to $x^*$ due to $\lim_{n\to\infty}\|y_n - x_n\| = 0$.
(b) The proof is the same as (a).
(c) From 2., we have $x_n \rightharpoonup x^* \in \mathrm{zer}(A + B + C + (2\,\mathrm{Id}-D))$ and $Cx_n \to Cx^*$. The demiregularity of $C$ at $x^*$ guarantees $x_n \to x^*$ as $n \to \infty$. □
Remark 1.
(i) The operator $T$ defined in Proposition 1 is similar in form to the operators that appeared in [8] (Proposition 2.1) and [12] (Lemma 3.4), but it is no longer an averaged operator. We prove the weak and strong convergence results of the proposed algorithm in this more general setting.
(ii) Algorithm (13) can be seen as a generalization of (8). In fact, if we choose $M_n = \mathrm{Id}$ and $D = \alpha\,\mathrm{Id}$, $\alpha \in (0, 2)$, Algorithm (13) becomes (8).

3.2. Multi-Step Inertial Parameterized Variable Metric Three-Operator Algorithm

In this subsection, we present the multi-step inertial parameterized variable metric three-operator algorithm, which combines Algorithm (13) and the multi-step inertial technique. Meanwhile, we show the convergence of the proposed algorithm.
Let $s \in \mathbb{N}_+$ and $U = \{0, \ldots, s-1\}$. Let $\{\theta_{i,n}\}_{i \in U} \subset (-1, 1)^s$. Choose $z_0 \in H$, set $z_{-i-1} = z_0$ for $i \in U$, and iterate
\[
\begin{cases}
v_n = z_n + \sum_{i \in U}\theta_{i,n}(z_{n-i} - z_{n-i-1}),\\
x_n = J_{\gamma A}^{M_n}v_n,\\
y_n = J_{\gamma B}^{M_n}(2M_nx_n - v_n - \gamma Cx_n - \gamma(2\,\mathrm{Id}-D)x_n),\\
z_{n+1} = v_n + \mu_n(y_n - x_n), \quad n \ge 0,
\end{cases} \tag{26}
\]
where $J_{\gamma A}^{M_n} = (M_n + \gamma A)^{-1}$ and $M_n \in \mathcal{P}_\alpha(H)$.
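Under the same assumptions as the sketch of Algorithm (13), the multi-step inertial variant (26) only adds the extrapolated point $v_n$; theta below is a hypothetical user-supplied rule returning $\theta_{i,n}$ (for instance, the online rule of Remark 2 below):

```python
import numpy as np
from collections import deque

def mipvmto(z0, res_A, res_B, C, M, D, gamma, mu, theta, s=2,
            tol=1e-5, max_iter=100000):
    """A sketch of Algorithm (26) (MIPVMTO) with a constant metric M_n = M."""
    hist = deque([z0.copy()] * (s + 1), maxlen=s + 1)  # z_n, z_{n-1}, ..., z_{n-s}
    for n in range(max_iter):
        z = hist[0]
        # v_n = z_n + sum_{i in U} theta_{i,n} (z_{n-i} - z_{n-i-1})
        v = z + sum(theta(i, n, hist) * (hist[i] - hist[i + 1]) for i in range(s))
        x = res_A(v)                                   # x_n = J^M_{gamma A} v_n
        y = res_B(2*(M @ x) - v - gamma*C(x) - gamma*(2*x - D @ x))
        z_next = v + mu*(y - x)                        # z_{n+1} = v_n + mu_n (y_n - x_n)
        if np.linalg.norm(z_next - z) < tol:
            return res_A(z_next), z_next
        hist.appendleft(z_next)                        # shift the inertial history
    return res_A(hist[0]), hist[0]
```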
Theorem 2.
Let $\{z_n\}$ be defined by Algorithm (26) and let $K_M$ be defined as in Proposition 1. Set $\|D\| \in [0, 2)$ and $\gamma \in \Big(0, \frac{4\alpha\beta(2 - \|D\|)}{2 - \|D\| + \beta\|2\,\mathrm{Id}-D\|^2}\Big)$, and suppose that the following conditions hold:
(C3) $\sum_{n=0}^{\infty}\mu_n = +\infty$, $0 < \underline{\mu} \le \mu_n \le 2\alpha - \frac{\gamma}{2\beta}\Big(1 + \frac{\beta\|2\,\mathrm{Id}-D\|^2}{2 - \|D\|}\Big)$;
(C4) $M_n \in \mathcal{P}_\alpha(H)$ such that $\sum_{n=0}^{\infty}\|M - M_n\| < +\infty$;
(C5) $\bar{\theta}_i = \sup_{n \in \mathbb{N}}|\theta_{i,n}|$, $\sum_{i \in U}\bar{\theta}_i < 1$, $\lim_{n\to\infty}\theta_{i,n} = 0$ and
\[
\sum_{n=1}^{+\infty}\max_{i \in U}|\theta_{i,n}|\sum_{i \in U}\|z_{n-i} - z_{n-i-1}\| < +\infty. \tag{27}
\]
Then the following assertions hold:
1. For every $q \in K_M$, $\lim_{n\to\infty}\|z_n - q\|$ exists;
2. $\{z_n\}$ and $\{v_n\}$ converge weakly to the same point of $K_M$; $\{x_n\}$ and $\{y_n\}$ converge weakly to the same point of $\mathrm{zer}(A + B + C + (2\,\mathrm{Id}-D))$;
3. Suppose that one of the following holds:
(a) $A$ is uniformly monotone on every nonempty bounded subset of $\mathrm{dom}\,A$;
(b) $B$ is uniformly monotone on every nonempty bounded subset of $\mathrm{dom}\,B$;
(c) for every $x \in \mathrm{zer}(A + B + C + (2\,\mathrm{Id}-D))$, $C$ is demiregular at $x$;
then $\{x_n\}$ and $\{y_n\}$ converge strongly to the same point of $\mathrm{zer}(A + B + C + (2\,\mathrm{Id}-D))$.
Proof of Theorem 2.
1. Given $q \in K_M$ arbitrarily, there exists $\bar{x} \in \mathrm{zer}(A + B + C + (2\,\mathrm{Id}-D))$ such that $\bar{x} = J_{\gamma A}^M(q)$, and for this $\bar{x}$ there exists $q_n \in K_{M_n}$ such that $\bar{x} = J_{\gamma A}^{M_n}(q_n)$, according to Proposition 1. We choose $z = q_n$, $\hat{z} = v_n$, $M_1 = M_2 = M_n$ in Proposition 2. Then we have $x = y = \bar{x}$, $\hat{x} = x_n$, $\hat{y} = y_n$, and
\[
0 \le \langle q_n - v_n, y_n - x_n\rangle - \alpha\|y_n - x_n\|^2 - \gamma\langle \bar{x} - y_n, C\bar{x} - Cx_n\rangle + \gamma\langle \bar{x} - y_n, (2\,\mathrm{Id}-D)x_n - (2\,\mathrm{Id}-D)\bar{x}\rangle. \tag{28}
\]
Applying Lemma 2, we have
\[
\begin{aligned}
2\mu_n\big(\langle q_n - v_n, y_n - x_n\rangle - \alpha\|y_n - x_n\|^2\big) &= 2\langle q_n - v_n, z_{n+1} - v_n\rangle - 2\alpha\mu_n\|y_n - x_n\|^2\\
&= \|v_n - q_n\|^2 - \|z_{n+1} - q_n\|^2 + \mu_n(\mu_n - 2\alpha)\|y_n - x_n\|^2.
\end{aligned} \tag{29}
\]
In the same way as (17) and (18), we obtain
\[
-\gamma\langle \bar{x} - y_n, C\bar{x} - Cx_n\rangle \le -\beta\gamma\|C\bar{x} - Cx_n\|^2 + \gamma\langle y_n - x_n, C\bar{x} - Cx_n\rangle \le \frac{\gamma}{4\beta}\|y_n - x_n\|^2 \tag{30}
\]
and
\[
\gamma\langle \bar{x} - y_n, (2\,\mathrm{Id}-D)x_n - (2\,\mathrm{Id}-D)\bar{x}\rangle \le \frac{\gamma\epsilon}{2}\|y_n - x_n\|^2, \tag{31}
\]
where $\frac{\|2\,\mathrm{Id}-D\|^2}{2(2 - \|D\|)} \le \epsilon \le \frac{4\alpha\beta - \gamma - 2\beta\mu_n}{2\beta\gamma}$.
Combining (28)–(31), we have
\[
0 \le \|v_n - q_n\|^2 - \|z_{n+1} - q_n\|^2 + \mu_n\Big(\mu_n - 2\alpha + \frac{\gamma}{2\beta} + \gamma\epsilon\Big)\|y_n - x_n\|^2,
\]
from which it follows that
\[
\|z_{n+1} - q_n\|^2 + \mu_n\Big(2\alpha - \frac{\gamma}{2\beta} - \mu_n - \gamma\epsilon\Big)\|y_n - x_n\|^2 \le \|v_n - q_n\|^2.
\]
Since $2\alpha - \frac{\gamma}{2\beta} - \mu_n - \gamma\epsilon \ge 0$, we have
\[
\|z_{n+1} - q_n\|^2 \le \|v_n - q_n\|^2,
\]
which implies
\[
\begin{aligned}
\|z_{n+1} - q_n\| \le \|v_n - q_n\| &= \Big\|z_n - q_n + \sum_{i \in U}\theta_{i,n}(z_{n-i} - z_{n-i-1})\Big\|\\
&\le \|z_n - q\| + \|q - q_n\| + \max_{i \in U}|\theta_{i,n}|\sum_{i \in U}\|z_{n-i} - z_{n-i-1}\|\\
&\le \|z_n - q\| + \|(M - M_n)\bar{x}\| + \max_{i \in U}|\theta_{i,n}|\sum_{i \in U}\|z_{n-i} - z_{n-i-1}\|.
\end{aligned}
\]
By (C4), (C5) and Lemma 4, $\lim_{n\to\infty}\|z_n - q\|$ exists and $\{z_n\}$ is bounded. Hence, $\{v_n\}$, $\{x_n\}$ and $\{\gamma Cx_n\}$ are bounded.
2. Since $\lim_{n\to\infty}\theta_{i,n} = 0$ and (27) holds, for all $i \in U$ we have
\[
\lim_{n\to\infty}|\theta_{i,n}|\,\|z_{n-i} - z_{n-i-1}\| = 0. \tag{32}
\]
In particular,
\[
\lim_{n\to\infty}\|z_{n+1} - z_n\| = 0. \tag{33}
\]
Further, we have
\[
\|z_{n+1} - v_n\| \le \|z_{n+1} - z_n\| + \sum_{i \in U}|\theta_{i,n}|\,\|z_{n-i} - z_{n-i-1}\| \to 0, \quad \text{as } n \to \infty.
\]
By (32) and (33), we have
\[
0 \le \|z_n - v_n\| = \Big\|\sum_{i \in U}\theta_{i,n}(z_{n-i} - z_{n-i-1})\Big\| \le \sum_{i \in U}|\theta_{i,n}|\,\|z_{n-i} - z_{n-i-1}\|,
\]
which indicates that $z_n - v_n \to 0$. Moreover, since $\lim_{n\to\infty}\|y_n - x_n\| = \lim_{n\to\infty}\frac{1}{\mu_n}\|z_{n+1} - v_n\| = 0$, we obtain $y_n - x_n \to 0$.
Set $u_n = \gamma Cx_n$. Let $(\bar{z}, \bar{x}, \bar{u})$ be a weak limit point of the sequence $\{(z_n, x_n, u_n)\}$. Then there exists a subsequence $\{(z_{n_k}, x_{n_k}, u_{n_k})\}$ such that $(z_{n_k}, x_{n_k}, u_{n_k}) \rightharpoonup (\bar{z}, \bar{x}, \bar{u})$. Without loss of generality, we assume that the corresponding subsequence $\{v_{n_k}\}$ of $\{v_n\}$ satisfies $v_{n_k} \rightharpoonup \bar{z}$. In addition, we have $M_{n_k} \to M$ as $k \to \infty$ by (C4).
Define a maximally monotone operator $F\colon H^3 \to 2^{H^3}$ by
\[
F = \begin{pmatrix} (\gamma A)^{-1}\\ (\gamma C)^{-1}\\ \gamma B + \gamma(2\,\mathrm{Id}-D) \end{pmatrix} + \begin{pmatrix} 0 & 0 & -\mathrm{Id}\\ 0 & 0 & -\mathrm{Id}\\ \mathrm{Id} & \mathrm{Id} & 0 \end{pmatrix}.
\]
By (26), we have
\[
(x_n,\, v_n - M_nx_n) \in \mathrm{gra}\,\gamma A, \qquad (y_n,\, 2M_nx_n - v_n - \gamma Cx_n - \gamma(2\,\mathrm{Id}-D)x_n - M_ny_n) \in \mathrm{gra}\,\gamma B. \tag{34}
\]
Therefore, from (34), we obtain
\[
\begin{pmatrix} x_{n_k} - y_{n_k}\\ x_{n_k} - y_{n_k}\\ M_{n_k}(x_{n_k} - y_{n_k}) + \gamma(2\,\mathrm{Id}-D)(y_{n_k} - x_{n_k}) \end{pmatrix} \in F\begin{pmatrix} v_{n_k} - M_{n_k}x_{n_k}\\ u_{n_k}\\ y_{n_k} \end{pmatrix}.
\]
Since $\mathrm{gra}\,F$ is sequentially closed in $H^3_{\mathrm{weak}} \times H^3_{\mathrm{strong}}$,
\[
\begin{pmatrix} 0\\ 0\\ 0 \end{pmatrix} \in \left[\begin{pmatrix} (\gamma A)^{-1}\\ (\gamma C)^{-1}\\ \gamma B + \gamma(2\,\mathrm{Id}-D) \end{pmatrix} + \begin{pmatrix} 0 & 0 & -\mathrm{Id}\\ 0 & 0 & -\mathrm{Id}\\ \mathrm{Id} & \mathrm{Id} & 0 \end{pmatrix}\right]\begin{pmatrix} \bar{z} - M\bar{x}\\ \bar{u}\\ \bar{x} \end{pmatrix},
\]
which yields
\[
\bar{x} = J_{\gamma A}^M\bar{z}, \qquad \bar{u} = \gamma C\bar{x}, \qquad \bar{x} = J_{\gamma B}^M(2M\bar{x} - \bar{z} - \gamma C\bar{x} - \gamma(2\,\mathrm{Id}-D)\bar{x}).
\]
Hence, $\bar{z} \in K_M$. Using Lemma 3, $v_n \rightharpoonup \bar{z} \in K_M$. Meanwhile, $z_n \rightharpoonup \bar{z} \in K_M$ by $z_n - v_n \to 0$. Furthermore, $\bar{x} = J_{\gamma A}^M\bar{z} \in \mathrm{zer}(A + B + C + (2\,\mathrm{Id}-D))$ by Proposition 1. From $z_n \rightharpoonup \bar{z}$, $\bar{x} = J_{\gamma A}^M\bar{z}$ and $\bar{u} = \gamma C\bar{x}$, we conclude that $x_n \rightharpoonup \bar{x}$ and $u_n \rightharpoonup \bar{u}$. Using $y_n - x_n \to 0$, we obtain $y_n \rightharpoonup \bar{x}$.
By (30) and (31) and Proposition 2 with $z = \bar{z}$, $\hat{z} = v_n$, $M_1 = M$, $M_2 = M_n$, we have
\[
\begin{aligned}
\beta\gamma\|C\bar{x} - Cx_n\|^2 \le{}& \langle \bar{z} - v_n, y_n - x_n\rangle + \Big(\frac{\gamma\epsilon}{2} - \alpha\Big)\|y_n - x_n\|^2 + \gamma\langle y_n - x_n, C\bar{x} - Cx_n\rangle\\
&+ \langle \bar{x} - y_n, (M_n - M)(y_n - x_n)\rangle + \langle (M_n - M)x_n, y_n - x_n\rangle \to 0, \quad \text{as } n \to \infty.
\end{aligned}
\]
Hence, $Cx_n \to C\bar{x}$.
3. Taking $z = \bar{z}$, $\hat{z} = v_n$, $M_1 = M$, $M_2 = M_n$ and using arguments similar to those in the proof of part 3 of Theorem 1, we obtain $x_n \to \bar{x}$ and $y_n \to \bar{x}$, where $\bar{x} \in \mathrm{zer}(A + B + C + (2\,\mathrm{Id}-D))$. □
Remark 2.
If $\theta_{i,n} \in [0, 1)$, (27) reduces to
\[
\sum_{n=1}^{+\infty}\max_{i \in U}\theta_{i,n}\sum_{i \in U}\|z_{n-i} - z_{n-i-1}\| < +\infty. \tag{35}
\]
(35) can be implemented by the following simple online updating rule:
\[
\theta_{i,n} = \min\{\theta_i, b_{i,n}\},
\]
where $\theta_i \in [0, 1)$ and, for each $i \in U$, $\{b_{i,n}\}$ and $\{\max_{i \in U}b_{i,n}\sum_{i \in U}\|z_{n-i} - z_{n-i-1}\|\}$ are summable sequences. For example, one can choose
\[
b_{i,n} = \frac{b_i}{n^{1+\delta}\sum_{i \in U}\|z_{n-i} - z_{n-i-1}\|}, \qquad b_i > 0,\ \delta > 0.
\]
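A hedged sketch of this updating rule (theta_max plays the role of $\theta_i$, and b, delta are the assumed constants $b_i$, $\delta$, taken equal for all $i$ for simplicity; the history argument matches the inertial sketch above):

```python
import numpy as np

def theta_rule(i, n, hist, theta_max=0.1, b=1.0, delta=1e-3, tiny=1e-12):
    """theta_{i,n} = min{theta_i, b_{i,n}} with
    b_{i,n} = b_i / (n^{1+delta} * sum_{i in U} ||z_{n-i} - z_{n-i-1}||)."""
    s = len(hist) - 1
    denom = sum(np.linalg.norm(hist[j] - hist[j + 1]) for j in range(s))
    if n == 0 or denom < tiny:   # guard: rule is undefined at n = 0 or when iterates stall
        return 0.0
    return min(theta_max, b / (n ** (1 + delta) * denom))
```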

4. Numerical Results

We consider the following LASSO problem with a nonnegativity constraint:
\[
\min_{x \in \mathbb{R}^N}\Big\{\frac{1}{2}\|Ax - b\|^2 + \|x\|_1 + \iota_C(x)\Big\}, \tag{36}
\]
where $A \in \mathbb{R}^{J \times N}$, $b \in \mathbb{R}^{J \times 1}$ and $C = \{x \in \mathbb{R}^N \mid x_i \ge 0,\ i = 1, 2, \ldots, N\}$. Let $\iota_C(x)$ be the indicator function of the closed convex set $C$, that is,
\[
\iota_C(x) = \begin{cases} 0, & x \in C,\\ +\infty, & \text{otherwise.} \end{cases}
\]
Set $f_1(x) = \|x\|_1$, $f_2(x) = \iota_C(x)$ and $f_3(x) = \frac{1}{2}\|Ax - b\|^2$. Obviously, $\nabla f_3(x) = A^T(Ax - b)$ and $\nabla f_3$ is $\|A^TA\|$-Lipschitz continuous. With $M = \mathrm{diag}(m_1, m_2, \ldots, m_N)$, the $i$-th component of $J_{\gamma_n \partial f_1}^M x$ is
\[
(J_{\gamma_n \partial f_1}^M x)_i = (J_{\gamma_n M^{-1}\partial f_1}(M^{-1}x))_i = (\mathrm{prox}_{\gamma_n f_1}^M(M^{-1}x))_i = \begin{cases} \dfrac{x_i - \gamma_n}{m_i}, & \dfrac{x_i}{m_i} > \dfrac{\gamma_n}{m_i},\\[1mm] \dfrac{x_i + \gamma_n}{m_i}, & \dfrac{x_i}{m_i} < -\dfrac{\gamma_n}{m_i},\\[1mm] 0, & \text{otherwise.} \end{cases}
\]
Similarly, we have
\[
J_{\gamma_n \partial f_2}^M(x) = J_{\gamma_n M^{-1}\partial f_2}(M^{-1}x) = \mathrm{prox}_{\gamma_n f_2}^M(M^{-1}x) = P_C(M^{-1}x).
\]
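With the diagonal metric $M = \mathrm{diag}(m_1, \ldots, m_N)$, both resolvents reduce to the componentwise formulas above; a short sketch:

```python
import numpy as np

def res_f1(x, gamma, m):
    """J^M_{gamma df1} for f1 = ||.||_1: soft-threshold at level gamma,
    then divide componentwise by the metric weights m_i."""
    return np.sign(x) * np.maximum(np.abs(x) - gamma, 0.0) / m

def res_f2(x, m):
    """J^M_{gamma df2} for f2 = indicator of the nonnegative orthant:
    P_C(M^{-1} x), i.e., max(x_i / m_i, 0) componentwise."""
    return np.maximum(x / m, 0.0)
```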
Problem (36) is equivalent to the problem of finding $x \in \mathbb{R}^N$ such that
\[
0 \in \partial f_1(x) + \partial f_2(x) + \nabla f_3(x).
\]
Select any initial values $z_0, z_{-1}, z_{-2} \in \mathbb{R}^N$, take $U = \{1, 2\}$ and, for $i \in U$, $\theta_{1,n} = \theta_{2,n} = \min\big\{0.1, \frac{n-1}{n+2}\big\}$. Set $J = N = 20$, let $E \in \mathbb{R}^{20\times 20}$ denote the identity matrix, and take $M_n = M = 2E$, $\alpha = \frac{1}{2}$, $L = \|A^TA\|$ and $\zeta = \frac{4\alpha(2 - \|D\|)}{L(2 - \|D\|) + \|2E - D\|^2}$, with $\|z_{n+1} - z_n\| < \mathrm{eps}$ as the stopping criterion. In the following, we denote Algorithm (13) by PVMTO and Algorithm (26) by MIPVMTO.
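Putting the pieces together, the experiment might be driven as follows (random data for illustration; pvmto, res_f1 and res_f2 are the sketches given earlier, so this is an assumed reconstruction of the setup, not the authors' verbatim code):

```python
import numpy as np

rng = np.random.default_rng(0)
J_dim, N = 20, 20
A = rng.standard_normal((J_dim, N))
b = rng.standard_normal(J_dim)

m = 2.0 * np.ones(N)                       # M = 2E, stored as a diagonal
k = 0.009                                  # D = kE
D = k * np.eye(N)
L = np.linalg.norm(A.T @ A, 2)
zeta = 4*0.5*(2 - k) / (L*(2 - k) + np.linalg.norm(2*np.eye(N) - D, 2)**2)
gamma, mu = 0.5*zeta, 0.499                # gamma = zeta/2, mu_n = 0.499

C = lambda x: A.T @ (A @ x - b)            # C = grad f3
res_A = lambda z: res_f1(z, gamma, m)      # J^M_{gamma df1}
res_B = lambda z: res_f2(z, m)             # J^M_{gamma df2}

x_sol, _ = pvmto(np.zeros(N), res_A, res_B, C, np.diag(m), D, gamma, mu)
print(np.linalg.norm(x_sol))               # compare with the ||x_n|| column of Table 1
```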
In Algorithms (13) and (26), take $D = kE$, $k \in [0.009, 1.999]$, $\gamma = \frac{1}{2}\zeta$, $\mu_n = 0.499$ and eps $= 10^{-5}$. In Figure 1 and Figure 2, we show the effect of the parameter $D$ on the CPU time and the iteration numbers of PVMTO and MIPVMTO. As can be seen from the figures, compared with PVMTO, MIPVMTO achieves a clear improvement in both the number of iterations and the CPU time. Furthermore, we list the CPU time of PVMTO and MIPVMTO under different stopping criteria and different $D$ in Table 1.
In Algorithms (13) and (26), take $D = 0.009E$, $e \in \big[\frac{0.999}{20}, \frac{19.999}{20}\big]$, $\mu_n = 0.999(1 - e)$, $\gamma = e\zeta$ and eps $= 10^{-5}$. The effect of choosing different $\gamma$ on the number of iterations of the two algorithms is shown in Figure 3. It can be concluded that, when $\gamma = \frac{1}{2}\zeta$, the number of iterations of our two proposed algorithms is the smallest, and the number of iterations of MIPVMTO is smaller than that of PVMTO.
In Algorithms (13) and (26), take $D = 0.009E$, $M_n = M = 2E$, $\gamma = \frac{1}{2}\zeta$ and $\mu_n = 0.499$. In Figure 4, we compare the number of iterations of PVMTO and MIPVMTO under different stopping criteria; it can be seen that the parameterized variable metric three-operator algorithm with inertial terms has a clear advantage.

5. Conclusions

In this paper, we propose a parameterized variable metric three-operator algorithm to solve the monotone inclusion problem involving the sum of three operators and prove the strong convergence of the algorithm under appropriate conditions. The multi-step inertial parameterized variable metric three-operator algorithm is also proposed, and its strong convergence is analyzed, in order to speed up the parameterized variable metric three-operator algorithm. To a certain extent, the proposed algorithms can be seen as generalizations of the parameterized three-operator algorithm [12]. The constructed numerical examples show the efficiency of the proposed algorithms and the effect of the choice of the parameter operator $D$ on the running time. In future work, one may consider proving that the regularization of the parameterized variable metric three-operator algorithm converges to the least-norm solution of the inclusion for the sum of three maximally monotone operators, as shown in [12], and demonstrating real applications of the algorithms in practice. Another direction is to study self-adaptive versions of the proposed algorithms.

Author Contributions

All authors contributed equally to this article. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the Fundamental Research Funds for the Central Universities under Grant No. 3122019185.

Data Availability Statement

The data that support the findings of this study are available from the corresponding author upon reasonable request.

Acknowledgments

The authors sincerely thank the reviewers and the editors for their valuable comments and suggestions, which have made this paper more readable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Qin, X.; An, N.T. Smoothing algorithms for computing the projection onto a Minkowski sum of convex sets. Comput. Optim. Appl. 2019, 74, 821–850.
  2. Combettes, P.L.; Wajs, V.R. Signal recovery by proximal forward-backward splitting. Multiscale Model. Simul. 2005, 4, 1168–1200.
  3. Izuchukwu, C.; Reich, S.; Shehu, Y. Strong convergence of forward-reflected-backward splitting methods for solving monotone inclusions with applications to image restoration and optimal control. J. Sci. Comput. 2023, 94, 73.
  4. Briceño-Arias, L.M.; Combettes, P.L. Monotone operator methods for Nash equilibria in non-potential games. Comput. Anal. Math. 2013, 50, 143–159.
  5. An, N.T.; Nam, N.M.; Qin, X. Solving k-center problems involving sets based on optimization techniques. J. Glob. Optim. 2020, 76, 189–209.
  6. Nemirovski, A.; Juditsky, A.B.; Lan, G.; Shapiro, A. Robust stochastic approximation approach to stochastic programming. SIAM J. Optim. 2009, 19, 1574–1609.
  7. Tang, Y.; Wen, M.; Zeng, T. Preconditioned three-operator splitting algorithm with applications to image restoration. J. Sci. Comput. 2022, 92, 106.
  8. Davis, D.; Yin, W. A three-operator splitting scheme and its optimization applications. Set-Valued Var. Anal. 2017, 25, 829–858.
  9. Cui, F.; Tang, Y.; Yang, Y. An inertial three-operator splitting algorithm with applications to image inpainting. arXiv 2019, arXiv:1904.11684.
  10. Malitsky, Y.; Tam, M.K. A forward-backward splitting method for monotone inclusions without cocoercivity. SIAM J. Optim. 2020, 30, 1451–1472.
  11. Zong, C.; Tang, Y.; Zhang, G. An inertial semi-forward-reflected-backward splitting and its application. Acta Math. Sin. Engl. Ser. 2022, 38, 443–464.
  12. Zhang, C.; Chen, J. A parameterized three-operator splitting algorithm and its expansion. J. Nonlinear Var. Anal. 2021, 5, 211–226.
  13. Wang, D.; Wang, X. A parameterized Douglas–Rachford algorithm. Comput. Optim. Appl. 2021, 164, 263–284.
  14. Ryu, E.K.; Vũ, B.C. Finding the forward-Douglas–Rachford-forward method. J. Optim. Theory Appl. 2020, 184, 858–876.
  15. Yan, M. A primal-dual three-operator splitting scheme. arXiv 2016, arXiv:1611.09805v1.
  16. Briceño-Arias, L.M.; Davis, D. Forward-backward-half forward algorithm for solving monotone inclusions. SIAM J. Optim. 2018, 28, 2839–2871.
  17. Polyak, B.T. Some methods of speeding up the convergence of iteration methods. USSR Comput. Math. Math. Phys. 1964, 4, 1–17.
  18. Chen, C.; Chan, R.H.; Ma, S.; Yang, J. Inertial proximal ADMM for linearly constrained separable convex optimization. SIAM J. Imaging Sci. 2015, 8, 2239–2267.
  19. Combettes, P.L.; Glaudin, L.E. Quasi-nonexpansive iterations on the affine hull of orbits: From Mann's mean value algorithm to inertial methods. SIAM J. Optim. 2017, 27, 2356–2380.
  20. Qin, X.; Wang, L.; Yao, J.C. Inertial splitting method for maximal monotone mappings. J. Nonlinear Convex Anal. 2020, 21, 2325–2333.
  21. Dey, S. A hybrid inertial and contraction proximal point algorithm for monotone variational inclusions. Numer. Algorithms 2023, 93, 1–25.
  22. Ochs, P.; Chen, Y.; Brox, T.; Pock, T. iPiano: Inertial proximal algorithm for nonconvex optimization. SIAM J. Imaging Sci. 2014, 7, 1388–1419.
  23. Dong, Q.L.; Lu, Y.Y.; Yang, J.F. The extragradient algorithm with inertial effects for solving the variational inequality. Optimization 2016, 65, 2217–2226.
  24. Chen, G.H.G.; Rockafellar, R.T. Convergence rates in forward-backward splitting. SIAM J. Optim. 1997, 7, 421–444.
  25. Combettes, P.L.; Vũ, B.C. Variable metric forward-backward splitting with applications to monotone inclusions in duality. Optimization 2014, 63, 1289–1318.
  26. Bonettini, S.; Porta, F.; Ruggiero, V. A variable metric forward-backward method with extrapolation. SIAM J. Sci. Comput. 2016, 38, A2558–A2584.
  27. Salzo, S. The variable metric forward-backward splitting algorithm under mild differentiability assumptions. SIAM J. Optim. 2017, 27, 2153–2181.
  28. Repetti, A.; Wiaux, Y. Variable metric forward-backward algorithm for composite minimization problems. SIAM J. Optim. 2021, 31, 1215–1241.
  29. Vũ, B.C.; Papadimitriou, D. A nonlinearly preconditioned forward-backward splitting method and applications. Numer. Funct. Anal. Optim. 2022, 42, 1880–1895.
  30. Bonettini, S.; Rebegoldi, S.; Ruggiero, V. Inertial variable metric techniques for the inexact forward-backward algorithm. SIAM J. Sci. Comput. 2018, 40, A3180–A3210.
  31. Lorenz, D.; Pock, T. An inertial forward-backward algorithm for monotone inclusions. J. Math. Imaging Vis. 2015, 51, 311–325.
  32. Cui, F.; Tang, Y.; Zhu, C. Convergence analysis of a variable metric forward-backward splitting algorithm with applications. J. Inequal. Appl. 2019, 141, 1–27.
  33. Bauschke, H.H.; Combettes, P.L. Convex Analysis and Monotone Operator Theory in Hilbert Spaces, 2nd ed.; CMS Books in Mathematics; Springer: Berlin/Heidelberg, Germany, 2017.
  34. Aragón-Artacho, F.J.; Torregrosa-Belén, D. A direct proof of convergence of Davis–Yin splitting algorithm allowing larger stepsizes. Set-Valued Var. Anal. 2022, 30, 1011–1029.
  35. Marino, G.; Xu, H.K. Convergence of generalized proximal point algorithms. Commun. Pure Appl. Anal. 2004, 3, 791–808.
  36. Combettes, P.L.; Vũ, B.C. Variable metric quasi-Fejér monotonicity. Nonlinear Anal. 2012, 78, 17–31.
Figure 1. The effect of $D = kE$ on CPU time (s).
Figure 2. The effect of $D = kE$ on iteration numbers.
Figure 3. Different $\gamma$ with $\|z_{n+1} - z_n\| < 10^{-5}$.
Figure 4. Numerical results with different stopping criteria.
Table 1. Numerical results of the PVMTO (Algorithm (13)) and the MIPVMTO (Algorithm (26)). For each stopping tolerance eps and each $D$, we report the iteration count (Iter), $\|x_n\|$ and the CPU time (s) of both algorithms.

| eps | D | PVMTO Iter | PVMTO $\|x_n\|$ | PVMTO CPU (s) | MIPVMTO Iter | MIPVMTO $\|x_n\|$ | MIPVMTO CPU (s) |
|---|---|---|---|---|---|---|---|
| $10^{-3}$ | 0.00999E | 531 | 0.892097 | 0.075116 | 473 | 0.896813 | 0.101615 |
| | 0.50659E | 564 | 0.909472 | 0.079004 | 526 | 0.917201 | 0.078163 |
| | 1.00499E | 624 | 0.932223 | 0.083093 | 592 | 0.942767 | 0.080023 |
| | 1.50599E | 725 | 0.965979 | 0.098467 | 615 | 0.969368 | 0.088909 |
| | 1.91199E | 745 | 0.987424 | 0.103177 | 668 | 0.994628 | 0.093426 |
| | 1.93599E | 746 | 0.988654 | 0.102805 | 676 | 0.996532 | 0.095638 |
| | 1.95699E | 747 | 0.989739 | 0.106307 | 685 | 0.998393 | 0.092109 |
| | 1.97899E | 749 | 0.990939 | 0.097910 | 696 | 1.000491 | 0.094137 |
| | 1.98999E | 750 | 0.991522 | 0.098771 | 702 | 1.001613 | 0.093303 |
| | 1.99999E | 750 | 0.991941 | 0.098490 | 707 | 1.002606 | 0.094370 |
| | 2.00000E | 750 | 0.991941 | 0.113851 | 750 | 0.991941 | 0.113851 |
| $10^{-4}$ | 0.00999E | 1114 | 0.869743 | 0.159736 | 958 | 0.870160 | 0.131419 |
| | 0.50659E | 1223 | 0.880358 | 0.165580 | 1063 | 0.880900 | 0.142245 |
| | 1.00499E | 1487 | 0.893068 | 0.193978 | 1314 | 0.893768 | 0.175073 |
| | 1.50599E | 1878 | 0.909894 | 0.245709 | 1675 | 0.910860 | 0.224609 |
| | 1.91199E | 2523 | 0.928138 | 0.328125 | 2249 | 0.929321 | 0.302159 |
| | 1.93599E | 2538 | 0.929595 | 0.342751 | 2297 | 0.931024 | 0.324058 |
| | 1.95699E | 2586 | 0.931119 | 0.350296 | 2302 | 0.932462 | 0.309686 |
| | 1.97899E | 2635 | 0.932768 | 0.347120 | 2357 | 0.934105 | 0.315834 |
| | 1.98999E | 2623 | 0.933456 | 0.341721 | 2388 | 0.934962 | 0.318405 |
| | 1.99999E | 2652 | 0.934230 | 0.349495 | 2417 | 0.935759 | 0.321433 |
| | 2.00000E | 2652 | 0.934231 | 0.314799 | 2652 | 0.934231 | 0.314799 |
| $10^{-5}$ | 0.00999E | 2017 | 0.893855 | 0.518356 | 1692 | 0.893916 | 0.237726 |
| | 0.50659E | 2370 | 0.915375 | 0.312055 | 1990 | 0.915430 | 0.267332 |
| | 1.00499E | 2998 | 0.939446 | 0.389901 | 2520 | 0.939509 | 0.340453 |
| | 1.50599E | 4365 | 0.968586 | 0.563529 | 3667 | 0.968661 | 0.485276 |
| | 1.91199E | 6464 | 0.999046 | 0.832374 | 5530 | 0.999158 | 0.735078 |
| | 1.93599E | 6717 | 1.001373 | 0.864062 | 5748 | 1.001488 | 0.765171 |
| | 1.95699E | 6957 | 1.003502 | 1.488362 | 5956 | 1.003622 | 2.570363 |
| | 1.97899E | 7229 | 1.005861 | 3.010892 | 6190 | 1.005987 | 2.856389 |
| | 1.98999E | 7374 | 1.007065 | 3.209643 | 6315 | 1.007196 | 2.881690 |
| | 1.99999E | 7511 | 1.008205 | 3.351171 | 6433 | 1.008341 | 2.996621 |
| | 2.00000E | 7511 | 1.008206 | 3.024831 | 7511 | 1.008206 | 3.024831 |