Article

Primal-Dual Splitting Algorithms for Solving Structured Monotone Inclusion with Applications

1 Department of Mathematics, Nanchang University, Nanchang 330031, China
2 Tianjin Key Laboratory for Advanced Signal Processing and College of Science, Civil Aviation University of China, Tianjin 300300, China
* Author to whom correspondence should be addressed.
Jinjian Chen and Xingyu Luo are the co-first authors of this paper.
Symmetry 2021, 13(12), 2415; https://doi.org/10.3390/sym13122415
Submission received: 15 November 2021 / Revised: 7 December 2021 / Accepted: 8 December 2021 / Published: 13 December 2021
(This article belongs to the Special Issue Symmetry in Nonlinear Analysis and Fixed Point Theory)

Abstract: This work proposes two different primal-dual splitting algorithms for solving structured monotone inclusions containing a cocoercive operator and the parallel sum of maximally monotone operators. In particular, the parallel sum is symmetric. The proposed primal-dual splitting algorithms are derived from two approaches: one is the preconditioned forward–backward splitting algorithm, and the other is the forward–backward–half-forward splitting algorithm. Both algorithms have a simple computational framework: the single-valued operators are processed via explicit steps, while the set-valued operators are evaluated via their resolvents. Numerical experiments on constrained image denoising problems are presented to show the performance of the proposed algorithms.

1. Introduction

In the last decade, there has been great interest in primal-dual splitting algorithms for solving structured monotone inclusions. The reason is that many convex minimization problems arising in image processing, statistical learning, and economic management can be modelled by such monotone inclusion problems. From the perspective of operator splitting, these primal-dual algorithms can be roughly divided into four categories: (i) forward–backward splitting type [1,2,3,4]; (ii) Douglas–Rachford splitting type [5,6,7]; (iii) forward–backward–forward splitting type [8,9,10,11,12]; and (iv) projective splitting type [13,14,15,16,17]. In 2014, Becker and Combettes [11] first studied the following structured monotone inclusion problem:
Problem 1.
Let $\mathcal{H}$ be a real Hilbert space, $z \in \mathcal{H}$, and let $m > 0$ be an integer. Let $A : \mathcal{H} \to 2^{\mathcal{H}}$ be maximally monotone and $C : \mathcal{H} \to \mathcal{H}$ be monotone and $L$-Lipschitz continuous for some $L > 0$. For every $i = 1, \dots, m$, let $\mathcal{X}_i$ and $\mathcal{Y}_i$ be real Hilbert spaces, let $B_i : \mathcal{X}_i \to 2^{\mathcal{X}_i}$ and $D_i : \mathcal{Y}_i \to 2^{\mathcal{Y}_i}$ be maximally monotone operators, and let $K_i : \mathcal{H} \to \mathcal{X}_i$ and $M_i : \mathcal{H} \to \mathcal{Y}_i$ be nonzero bounded linear operators. The problem is to solve the primal inclusion
\[
\text{find } x \in \mathcal{H} \text{ such that } z \in Ax + \sum_{i=1}^m \big( K_i^* B_i K_i \big) \,\Box\, \big( M_i^* D_i M_i \big)\, x + Cx, \tag{1}
\]
together with its dual inclusion
\[
\begin{aligned}
&\text{find } p_i \in \mathcal{X}_i,\ q_i \in \mathcal{Y}_i,\ y_i \in \mathcal{H},\ i = 1, \dots, m, \text{ such that } \exists\, x \in \mathcal{H}:\\
&\qquad z - \sum_{i=1}^m K_i^* p_i \in Ax + Cx,\quad K_i (x - y_i) \in B_i^{-1} p_i,\quad M_i y_i \in D_i^{-1} q_i,\quad K_i^* p_i = M_i^* q_i,\ i = 1, \dots, m. \tag{2}
\end{aligned}
\]
Based on the method of [10], they proposed a primal-dual splitting algorithm to solve (1) and (2). Moreover, they applied the algorithm to solve an image restoration model with infimal convolution terms mixing first- and second-order total variation, which was initially studied in [18] and further explored in [19]. The advantage of this model is that it can reduce the staircase effects caused by the first-order total variation. Furthermore, Boţ and Hendrich [4] studied a more general monotone inclusion problem as follows.
Problem 2.
Let $\mathcal{H}$ be a real Hilbert space, $z \in \mathcal{H}$, and let $m > 0$ be an integer. Let $A : \mathcal{H} \to 2^{\mathcal{H}}$ be maximally monotone and let $C : \mathcal{H} \to \mathcal{H}$ be a $\mu^{-1}$-cocoercive operator for some $\mu > 0$. For every $i = 1, \dots, m$, let $\mathcal{G}_i$, $\mathcal{X}_i$, $\mathcal{Y}_i$ be real Hilbert spaces, let $r_i \in \mathcal{G}_i$, let $B_i : \mathcal{X}_i \to 2^{\mathcal{X}_i}$ and $D_i : \mathcal{Y}_i \to 2^{\mathcal{Y}_i}$ be maximally monotone operators, and let $L_i : \mathcal{H} \to \mathcal{G}_i$, $K_i : \mathcal{G}_i \to \mathcal{X}_i$ and $M_i : \mathcal{G}_i \to \mathcal{Y}_i$ be nonzero bounded linear operators. The problem is to solve the primal inclusion
\[
\text{find } x \in \mathcal{H} \text{ such that } z \in Ax + \sum_{i=1}^m L_i^* \Big[ \big( K_i^* B_i K_i \big) \,\Box\, \big( M_i^* D_i M_i \big) \Big] ( L_i x - r_i ) + Cx, \tag{3}
\]
together with its dual inclusion
\[
\begin{aligned}
&\text{find } p_i \in \mathcal{X}_i,\ q_i \in \mathcal{Y}_i,\ y_i \in \mathcal{G}_i,\ i = 1, \dots, m, \text{ such that } \exists\, x \in \mathcal{H}:\\
&\qquad z - \sum_{i=1}^m L_i^* K_i^* p_i \in Ax + Cx,\quad K_i ( L_i x - y_i - r_i ) \in B_i^{-1} p_i,\quad M_i y_i \in D_i^{-1} q_i,\quad K_i^* p_i = M_i^* q_i,\ i = 1, \dots, m. \tag{4}
\end{aligned}
\]
It is easy to see that Problem 1 can be viewed as a special case of Problem 2 by letting $L_i = \mathrm{Id}$ and $r_i = 0$ for every $i = 1, \dots, m$. They proposed two different primal-dual algorithms for solving the primal-dual pair of monotone inclusions (3) and (4). The first algorithm is of forward–backward splitting type and is defined as follows. Let $x_0 \in \mathcal{H}$ and, for any $i = 1, \dots, m$, let $p_{i,0} \in \mathcal{X}_i$, $q_{i,0} \in \mathcal{Y}_i$ and $z_{i,0}, y_{i,0}, v_{i,0} \in \mathcal{G}_i$, and set
\[
(\forall n \ge 0)\quad
\begin{cases}
\tilde{x}_n = J_{\tau A}\big( x_n - \tau \big( C x_n + \sum_{i=1}^m L_i^* v_{i,n} - z \big) \big)\\
\text{for } i = 1, \dots, m\\
\quad \tilde{p}_{i,n} = J_{\theta_{1,i} B_i^{-1}}\big( p_{i,n} + \theta_{1,i} K_i z_{i,n} \big)\\
\quad \tilde{q}_{i,n} = J_{\theta_{2,i} D_i^{-1}}\big( q_{i,n} + \theta_{2,i} M_i y_{i,n} \big)\\
\quad u_{1,i,n} = z_{i,n} + \gamma_{1,i} \big( K_i^* ( p_{i,n} - 2 \tilde{p}_{i,n} ) + v_{i,n} + \sigma_i ( L_i ( 2 \tilde{x}_n - x_n ) - r_i ) \big)\\
\quad u_{2,i,n} = y_{i,n} + \gamma_{2,i} \big( M_i^* ( q_{i,n} - 2 \tilde{q}_{i,n} ) + v_{i,n} + \sigma_i ( L_i ( 2 \tilde{x}_n - x_n ) - r_i ) \big)\\
\quad \tilde{z}_{i,n} = \frac{1 + \sigma_i \gamma_{2,i}}{1 + \sigma_i ( \gamma_{1,i} + \gamma_{2,i} )} \Big( u_{1,i,n} - \frac{\sigma_i \gamma_{1,i}}{1 + \sigma_i \gamma_{2,i}} u_{2,i,n} \Big)\\
\quad \tilde{y}_{i,n} = \frac{1}{1 + \sigma_i \gamma_{2,i}} \big( u_{2,i,n} - \sigma_i \gamma_{2,i} \tilde{z}_{i,n} \big)\\
\quad \tilde{v}_{i,n} = v_{i,n} + \sigma_i \big( L_i ( 2 \tilde{x}_n - x_n ) - r_i - \tilde{z}_{i,n} - \tilde{y}_{i,n} \big)\\
x_{n+1} = x_n + \lambda_n ( \tilde{x}_n - x_n )\\
\text{for } i = 1, \dots, m\\
\quad p_{i,n+1} = p_{i,n} + \lambda_n ( \tilde{p}_{i,n} - p_{i,n} ),\quad q_{i,n+1} = q_{i,n} + \lambda_n ( \tilde{q}_{i,n} - q_{i,n} ),\\
\quad z_{i,n+1} = z_{i,n} + \lambda_n ( \tilde{z}_{i,n} - z_{i,n} ),\quad y_{i,n+1} = y_{i,n} + \lambda_n ( \tilde{y}_{i,n} - y_{i,n} ),\quad v_{i,n+1} = v_{i,n} + \lambda_n ( \tilde{v}_{i,n} - v_{i,n} ),
\end{cases} \tag{5}
\]
where, for any $i = 1, \dots, m$, $\tau, \theta_{1,i}, \theta_{2,i}, \gamma_{1,i}, \gamma_{2,i}$ and $\sigma_i$ are strictly positive real numbers such that
\[
2 \mu^{-1} ( 1 - \bar{\alpha} ) \min_{i = 1, \dots, m} \Big\{ \frac{1}{\tau}, \frac{1}{\theta_{1,i}}, \frac{1}{\theta_{2,i}}, \frac{1}{\gamma_{1,i}}, \frac{1}{\gamma_{2,i}}, \frac{1}{\sigma_i} \Big\} > 1 \tag{6}
\]
for
\[
\bar{\alpha} = \max \Big\{ \tau \sum_{i=1}^m \sigma_i \| L_i \|^2,\ \max_{j = 1, \dots, m} \big\{ \theta_{1,j} \gamma_{1,j} \| K_j \|^2,\ \theta_{2,j} \gamma_{2,j} \| M_j \|^2 \big\} \Big\}. \tag{7}
\]
In addition, $\{ \lambda_n \} \subset [ \varepsilon, 1 ]$ for some $\varepsilon \in (0, 1)$. The second algorithm is of forward–backward–forward splitting type. Let $x_0 \in \mathcal{H}$ and, for any $i = 1, \dots, m$, let $p_{i,0} \in \mathcal{X}_i$, $q_{i,0} \in \mathcal{Y}_i$ and $z_{i,0}, y_{i,0}, v_{i,0} \in \mathcal{G}_i$, and set
\[
(\forall n \ge 0)\quad
\begin{cases}
\tilde{x}_n = J_{\gamma_n A}\big( x_n - \gamma_n \big( C x_n + \sum_{i=1}^m L_i^* v_{i,n} - z \big) \big)\\
\text{for } i = 1, \dots, m\\
\quad \tilde{p}_{i,n} = J_{\gamma_n B_i^{-1}}\big( p_{i,n} + \gamma_n K_i z_{i,n} \big)\\
\quad \tilde{q}_{i,n} = J_{\gamma_n D_i^{-1}}\big( q_{i,n} + \gamma_n M_i y_{i,n} \big)\\
\quad u_{1,i,n} = z_{i,n} - \gamma_n \big( K_i^* p_{i,n} - v_{i,n} - \gamma_n ( L_i x_n - r_i ) \big)\\
\quad u_{2,i,n} = y_{i,n} - \gamma_n \big( M_i^* q_{i,n} - v_{i,n} - \gamma_n ( L_i x_n - r_i ) \big)\\
\quad \tilde{z}_{i,n} = \frac{1 + \gamma_n^2}{1 + 2 \gamma_n^2} \Big( u_{1,i,n} - \frac{\gamma_n^2}{1 + \gamma_n^2} u_{2,i,n} \Big)\\
\quad \tilde{y}_{i,n} = \frac{1}{1 + \gamma_n^2} \big( u_{2,i,n} - \gamma_n^2 \tilde{z}_{i,n} \big)\\
\quad \tilde{v}_{i,n} = v_{i,n} + \gamma_n \big( L_i x_n - r_i - \tilde{z}_{i,n} - \tilde{y}_{i,n} \big)\\
x_{n+1} = \tilde{x}_n + \gamma_n \big( C x_n - C \tilde{x}_n + \sum_{i=1}^m L_i^* ( v_{i,n} - \tilde{v}_{i,n} ) \big)\\
\text{for } i = 1, \dots, m\\
\quad p_{i,n+1} = \tilde{p}_{i,n} - \gamma_n K_i ( z_{i,n} - \tilde{z}_{i,n} ),\quad q_{i,n+1} = \tilde{q}_{i,n} - \gamma_n M_i ( y_{i,n} - \tilde{y}_{i,n} ),\\
\quad z_{i,n+1} = \tilde{z}_{i,n} + \gamma_n K_i^* ( p_{i,n} - \tilde{p}_{i,n} ),\quad y_{i,n+1} = \tilde{y}_{i,n} + \gamma_n M_i^* ( q_{i,n} - \tilde{q}_{i,n} ),\quad v_{i,n+1} = \tilde{v}_{i,n} - \gamma_n L_i ( x_n - \tilde{x}_n ),
\end{cases} \tag{8}
\]
where $\{ \gamma_n \} \subset \big[ \varepsilon, \frac{1 - \varepsilon}{\beta} \big]$ with $\varepsilon \in \big( 0, \frac{1}{\beta + 1} \big)$ and
\[
\beta = \mu + \bigg( \max \Big\{ \sum_{i=1}^m \| L_i \|^2,\ \max_{j = 1, \dots, m} \big\{ \| K_j \|^2, \| M_j \|^2 \big\} \Big\} \bigg)^{\frac{1}{2}}.
\]
The first algorithm (5) can be viewed as a preconditioned forward–backward splitting algorithm [20], while the second algorithm (8) is an instance of the forward–backward–forward splitting algorithm proposed by Tseng [21]. Observe that the operators $B_i^{-1}$ and $D_i^{-1}$ enter both algorithms symmetrically. We call the first algorithm (5) the FB_BH algorithm and the second algorithm (8) the FBF_BH algorithm.
In this paper, we continue the study of primal-dual splitting algorithms for solving the structured monotone inclusions (3) and (4). First, we establish a new convergence theorem for the primal-dual forward–backward splitting type algorithm (5): we relax the conditions on the iteration parameters and expand the selection range of the relaxation parameters. Since the primal-dual forward–backward–forward splitting type algorithm (8) does not make full use of the cocoercivity of the single-valued operator, we introduce a new primal-dual splitting algorithm for solving (3) and (4), based on the forward–backward–half-forward splitting algorithm [22]. This new algorithm is not only less computationally expensive than the original algorithm but also allows a larger range of parameter selection. To show the advantages of the proposed algorithms, we apply them to image denoising problems.
This paper is organized as follows. In Section 2, we recall some preliminary results in monotone operator theory. In Section 3, we present the main results. We study the convergence of two primal-dual splitting algorithms for solving (3) and (4). Furthermore, we employ the obtained algorithms to solve convex minimization problems. In Section 4, we perform numerical experiments on image denoising problems. Finally, we present conclusions.

2. Preliminaries

Throughout this paper, $\mathcal{H}$ is a real Hilbert space equipped with inner product $\langle \cdot, \cdot \rangle$ and associated norm $\| \cdot \| = \sqrt{\langle \cdot, \cdot \rangle}$. Let $2^{\mathcal{H}}$ be the power set of $\mathcal{H}$. Let $L : \mathcal{H} \to \mathcal{G}$ be a bounded linear operator, where $\mathcal{G}$ is another real Hilbert space; the operator $L^* : \mathcal{G} \to \mathcal{H}$ is its adjoint if $\langle L x, y \rangle = \langle x, L^* y \rangle$ holds for all $x \in \mathcal{H}$ and all $y \in \mathcal{G}$. Most of the definitions are taken from [23].
Let $A : \mathcal{H} \to 2^{\mathcal{H}}$ be a set-valued operator. Let $\operatorname{zer} A = \{ x \in \mathcal{H} : 0 \in A x \}$ be the set of its zeros, $\operatorname{ran} A = \{ u \in \mathcal{H} : \exists\, x \in \mathcal{H},\ u \in A x \}$ its range, and $\operatorname{gra} A = \{ ( x, u ) \in \mathcal{H} \times \mathcal{H} : u \in A x \}$ its graph. The inverse of $A$ is defined by $A^{-1} : \mathcal{H} \to 2^{\mathcal{H}} : u \mapsto \{ x \in \mathcal{H} : u \in A x \}$.
Definition 1.
Let $A : \mathcal{H} \to 2^{\mathcal{H}}$ be a set-valued operator. $A$ is said to be monotone if $\langle x - y, u - v \rangle \ge 0$ for all $( x, u ), ( y, v ) \in \operatorname{gra} A$. Furthermore, $A$ is said to be maximally monotone if it is monotone and there exists no monotone operator $B : \mathcal{H} \to 2^{\mathcal{H}}$ such that $\operatorname{gra} B$ properly contains $\operatorname{gra} A$.
Definition 2.
Let $T : \mathcal{H} \to \mathcal{H}$ be a single-valued operator.
(i) $T$ is said to be $L$-Lipschitzian, for some $L > 0$, if $\| T x - T y \| \le L \| x - y \|$ for all $x, y \in \mathcal{H}$.
(ii) $T$ is said to be $\mu$-cocoercive, for some $\mu > 0$, if $\langle x - y, T x - T y \rangle \ge \mu \| T x - T y \|^2$ for all $x, y \in \mathcal{H}$.
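As a quick numerical illustration (ours, not part of the paper), the gradient of the convex quadratic $h(x) = \frac{1}{2} \langle x, A x \rangle$ is $1/\lambda_{\max}(A)$-cocoercive; the following sketch checks the inequality of Definition 2(ii) on random points:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
B = rng.standard_normal((n, n))
A = B.T @ B                        # symmetric positive semidefinite matrix
lam_max = np.linalg.eigvalsh(A)[-1]
mu = 1.0 / lam_max                 # candidate cocoercivity constant

def T(x):
    """Gradient of h(x) = 0.5 * <x, A x>."""
    return A @ x

for _ in range(100):
    x, y = rng.standard_normal(n), rng.standard_normal(n)
    lhs = (x - y) @ (T(x) - T(y))
    rhs = mu * np.linalg.norm(T(x) - T(y)) ** 2
    assert lhs >= rhs - 1e-10      # Definition 2(ii) holds
```

The same constant is what the Baillon–Haddad theorem (invoked later in Theorem 3) guarantees for gradients of convex functions with Lipschitz gradient.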
Definition 3.
Let $A : \mathcal{H} \to 2^{\mathcal{H}}$; the resolvent of $A$ with index $\lambda > 0$ is defined by
\[
J_{\lambda A} = ( \mathrm{Id} + \lambda A )^{-1},
\]
where $\mathrm{Id}$ denotes the identity operator on $\mathcal{H}$.
Definition 4.
Let $A_1, A_2 : \mathcal{H} \to 2^{\mathcal{H}}$ be two set-valued operators. The sum and the parallel sum of $A_1$ and $A_2$ are defined as follows:
\[
A_1 + A_2 : \mathcal{H} \to 2^{\mathcal{H}},\quad ( A_1 + A_2 )( x ) = A_1 ( x ) + A_2 ( x );\qquad
A_1 \Box A_2 : \mathcal{H} \to 2^{\mathcal{H}},\quad A_1 \Box A_2 = \big( A_1^{-1} + A_2^{-1} \big)^{-1}.
\]
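For invertible positive definite matrices the parallel sum can be evaluated directly from the formula above; the small sketch below (our illustration) also checks that the parallel sum is symmetric in its arguments, $A_1 \Box A_2 = A_2 \Box A_1$, which is the symmetry highlighted in the abstract:

```python
import numpy as np

def parallel_sum(A1, A2):
    """Parallel sum (A1^{-1} + A2^{-1})^{-1} of invertible matrices."""
    return np.linalg.inv(np.linalg.inv(A1) + np.linalg.inv(A2))

# For positive scalars the parallel sum reduces to a*b/(a+b).
a, b = 2.0, 3.0
assert np.isclose(parallel_sum(np.array([[a]]), np.array([[b]]))[0, 0], a * b / (a + b))

# Symmetry: A1 parallel-sum A2 equals A2 parallel-sum A1.
rng = np.random.default_rng(1)
M1 = rng.standard_normal((3, 3))
M2 = rng.standard_normal((3, 3))
A1 = M1.T @ M1 + np.eye(3)        # symmetric positive definite
A2 = M2.T @ M2 + np.eye(3)
assert np.allclose(parallel_sum(A1, A2), parallel_sum(A2, A1))
```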
Let $f : \mathcal{H} \to \overline{\mathbb{R}} := \mathbb{R} \cup \{ \pm \infty \}$. We denote its effective domain by $\operatorname{dom} f := \{ x \in \mathcal{H} : f ( x ) < + \infty \}$; $f$ is said to be proper if $\operatorname{dom} f \neq \emptyset$ and $f ( x ) > - \infty$ for all $x \in \mathcal{H}$. Furthermore, we define $\Gamma_0 ( \mathcal{H} ) := \{ f : \mathcal{H} \to \overline{\mathbb{R}} \mid f \text{ is proper, convex, and lower semicontinuous (lsc)} \}$.
The conjugate function of $f$ is defined by $f^* : \mathcal{H} \to \overline{\mathbb{R}}$, $f^* ( p ) = \sup \{ \langle p, x \rangle - f ( x ) : x \in \mathcal{H} \}$ for all $p \in \mathcal{H}$. If $f \in \Gamma_0 ( \mathcal{H} )$, then $f^{**} = f$.
Let $f \in \Gamma_0 ( \mathcal{H} )$; the subdifferential of $f$ is defined by $\partial f : \mathcal{H} \to 2^{\mathcal{H}} : x \mapsto \{ u \in \mathcal{H} : f ( y ) \ge f ( x ) + \langle u, y - x \rangle \ \text{for all } y \in \mathcal{H} \}$. If $f \in \Gamma_0 ( \mathcal{H} )$, then $\partial f$ is maximally monotone.
For two proper functions $f, h : \mathcal{H} \to \overline{\mathbb{R}}$,
\[
f \Box h : \mathcal{H} \to \overline{\mathbb{R}},\quad ( f \Box h )( x ) = \inf_{y \in \mathcal{H}} \{ f ( y ) + h ( x - y ) \},
\]
denotes their infimal convolution.
Let $f \in \Gamma_0 ( \mathcal{H} )$ and $\gamma > 0$; the proximity operator of $f$ is defined by
\[
\operatorname{prox}_{\gamma f} : \mathcal{H} \to \mathcal{H} : x \mapsto \operatorname*{argmin}_{y \in \mathcal{H}} \Big\{ f ( y ) + \frac{1}{2 \gamma} \| x - y \|^2 \Big\}.
\]
It follows from $f \in \Gamma_0 ( \mathcal{H} )$ that $\operatorname{prox}_{\gamma f} ( x ) = J_{\gamma \partial f} ( x )$.
The following lemma shows the relationship between the proximity operator of f and its convex conjugate f * .
Lemma 1. 
(Moreau's decomposition) Let $\gamma > 0$ and $f \in \Gamma_0 ( \mathcal{H} )$; then
\[
\operatorname{prox}_{\gamma f} ( x ) + \gamma \operatorname{prox}_{\frac{1}{\gamma} f^*} \Big( \frac{x}{\gamma} \Big) = x,\quad \forall x \in \mathcal{H}.
\]
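Moreau's decomposition can be checked numerically for $f = \| \cdot \|_1$, whose proximity operator is soft-thresholding and whose conjugate is the indicator of the unit $\ell_\infty$ ball, so the conjugate prox is a clip. The sketch below (our illustration) verifies the identity:

```python
import numpy as np

def prox_l1(x, gamma):
    """Soft-thresholding: prox of gamma * ||.||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - gamma, 0.0)

def prox_linf_ball(x):
    """Projection onto the unit l_inf ball = prox of the conjugate of ||.||_1."""
    return np.clip(x, -1.0, 1.0)

rng = np.random.default_rng(2)
x = rng.standard_normal(10)
gamma = 0.7
# Moreau: prox_{gamma f}(x) + gamma * prox_{f*/gamma}(x / gamma) = x
lhs = prox_l1(x, gamma) + gamma * prox_linf_ball(x / gamma)
assert np.allclose(lhs, x)
```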
Let $C \subset \mathcal{H}$ be a nonempty closed convex set; the indicator function of $C$ is defined by
\[
\delta_C : x \mapsto \begin{cases} 0, & \text{if } x \in C,\\ + \infty, & \text{otherwise}. \end{cases}
\]
The orthogonal projection onto $C$ is denoted by $P_C$, that is, $P_C ( x ) = \operatorname*{argmin}_{y \in C} \| x - y \|$. For any $\gamma > 0$, $\operatorname{prox}_{\gamma \delta_C} ( x ) = P_C ( x )$.

3. Main Results

In this section, we study primal-dual splitting algorithms for solving (3) and (4) and discuss their asymptotic convergence. First, we provide a technical lemma, which shows that the primal-dual monotone inclusions (3) and (4) are equivalent to finding a zero of the sum of three operators. In the following, let $\boldsymbol{\mathcal{H}} = \mathcal{H}_1 \oplus \cdots \oplus \mathcal{H}_m$ be the direct sum of real Hilbert spaces $\{ \mathcal{H}_i \}_{i=1}^m$. For $\boldsymbol{v} = ( v_1, \dots, v_m ) \in \boldsymbol{\mathcal{H}}$ and $\boldsymbol{q} = ( q_1, \dots, q_m ) \in \boldsymbol{\mathcal{H}}$, the inner product and associated norm on $\boldsymbol{\mathcal{H}}$ are defined as
\[
\langle \boldsymbol{v}, \boldsymbol{q} \rangle_{\boldsymbol{\mathcal{H}}} = \sum_{i=1}^m \langle v_i, q_i \rangle,\qquad \| \boldsymbol{v} \|_{\boldsymbol{\mathcal{H}}} = \sqrt{ \sum_{i=1}^m \| v_i \|^2 }.
\]
Lemma 2.
Let $\mathcal{H}$, $A$, $C$, $\mathcal{X}_i$, $\mathcal{Y}_i$, $\mathcal{G}_i$, $B_i$, $D_i$, $L_i$, $K_i$, $M_i$, $i = 1, \dots, m$, be defined as in Problem 2, and let
\[
\begin{aligned}
&\boldsymbol{\mathcal{X}} := \mathcal{X}_1 \oplus \cdots \oplus \mathcal{X}_m,\quad \boldsymbol{\mathcal{Y}} := \mathcal{Y}_1 \oplus \cdots \oplus \mathcal{Y}_m,\quad \boldsymbol{\mathcal{G}} := \mathcal{G}_1 \oplus \cdots \oplus \mathcal{G}_m,\quad \boldsymbol{\mathcal{K}} := \mathcal{H} \oplus \boldsymbol{\mathcal{X}} \oplus \boldsymbol{\mathcal{Y}} \oplus \boldsymbol{\mathcal{G}} \oplus \boldsymbol{\mathcal{G}} \oplus \boldsymbol{\mathcal{G}},\\
&\boldsymbol{p} = ( p_1, \dots, p_m ) \in \boldsymbol{\mathcal{X}},\quad \boldsymbol{q} = ( q_1, \dots, q_m ) \in \boldsymbol{\mathcal{Y}},\quad \boldsymbol{z} = ( z_1, \dots, z_m ) \in \boldsymbol{\mathcal{G}},\\
&\boldsymbol{y} = ( y_1, \dots, y_m ) \in \boldsymbol{\mathcal{G}},\quad \boldsymbol{v} = ( v_1, \dots, v_m ) \in \boldsymbol{\mathcal{G}},\quad \boldsymbol{r} = ( r_1, \dots, r_m ) \in \boldsymbol{\mathcal{G}},\\
&\boldsymbol{B} : \boldsymbol{\mathcal{X}} \to 2^{\boldsymbol{\mathcal{X}}} : \boldsymbol{p} \mapsto B_1 p_1 \times \cdots \times B_m p_m,\qquad \boldsymbol{D} : \boldsymbol{\mathcal{Y}} \to 2^{\boldsymbol{\mathcal{Y}}} : \boldsymbol{q} \mapsto D_1 q_1 \times \cdots \times D_m q_m,\\
&\widetilde{M} : \boldsymbol{\mathcal{G}} \to \boldsymbol{\mathcal{Y}} : \boldsymbol{y} \mapsto ( M_1 y_1, \dots, M_m y_m ),\qquad \widetilde{K} : \boldsymbol{\mathcal{G}} \to \boldsymbol{\mathcal{X}} : \boldsymbol{y} \mapsto ( K_1 y_1, \dots, K_m y_m ),\\
&\boldsymbol{M} : \boldsymbol{\mathcal{K}} \to 2^{\boldsymbol{\mathcal{K}}} : ( x, \boldsymbol{p}, \boldsymbol{q}, \boldsymbol{z}, \boldsymbol{y}, \boldsymbol{v} ) \mapsto ( - z + A x ) \times \boldsymbol{B}^{-1} \boldsymbol{p} \times \boldsymbol{D}^{-1} \boldsymbol{q} \times \{ ( - \boldsymbol{v}, - \boldsymbol{v}, \boldsymbol{r} + \boldsymbol{z} + \boldsymbol{y} ) \},\\
&\boldsymbol{S} : \boldsymbol{\mathcal{K}} \to \boldsymbol{\mathcal{K}} : ( x, \boldsymbol{p}, \boldsymbol{q}, \boldsymbol{z}, \boldsymbol{y}, \boldsymbol{v} ) \mapsto \Big( \sum_{i=1}^m L_i^* v_i,\ - \widetilde{K} \boldsymbol{z},\ - \widetilde{M} \boldsymbol{y},\ \widetilde{K}^* \boldsymbol{p},\ \widetilde{M}^* \boldsymbol{q},\ - L_1 x, \dots, - L_m x \Big),\\
&\boldsymbol{Q} : \boldsymbol{\mathcal{K}} \to \boldsymbol{\mathcal{K}} : ( x, \boldsymbol{p}, \boldsymbol{q}, \boldsymbol{z}, \boldsymbol{y}, \boldsymbol{v} ) \mapsto ( C x, 0, 0, 0, 0, 0 ).
\end{aligned}
\]
Then the following conclusions hold:
(i) $\boldsymbol{M}$ is maximally monotone.
(ii) $\boldsymbol{S}$ is monotone and $l$-Lipschitzian, where
\[
l = \Big( \max \Big\{ \max_{i = 1, \dots, m} \| K_i \|^2,\ \max_{i = 1, \dots, m} \| M_i \|^2,\ \sum_{i=1}^m \| L_i \|^2 \Big\} \Big)^{\frac{1}{2}}. \tag{16}
\]
(iii) $\boldsymbol{Q}$ is $\mu^{-1}$-cocoercive.
(iv) $\bar{x} \in \mathcal{H}$ is a solution to Problem 2 if and only if there exists $( \bar{\boldsymbol{p}}, \bar{\boldsymbol{q}}, \bar{\boldsymbol{z}}, \bar{\boldsymbol{y}}, \bar{\boldsymbol{v}} )$ such that $( \bar{x}, \bar{\boldsymbol{p}}, \bar{\boldsymbol{q}}, \bar{\boldsymbol{z}}, \bar{\boldsymbol{y}}, \bar{\boldsymbol{v}} ) \in \operatorname{zer} ( \boldsymbol{M} + \boldsymbol{S} + \boldsymbol{Q} )$.
Proof. 
(i) Since $A$, $\boldsymbol{B}$, and $\boldsymbol{D}$ are maximally monotone, it follows from [23] (Propositions 20.22 and 20.23) that the set-valued operator $\boldsymbol{M}$ is maximally monotone.
(ii) Taking two arbitrary elements $\boldsymbol{x} = ( x, \boldsymbol{p}, \boldsymbol{q}, \boldsymbol{z}, \boldsymbol{y}, \boldsymbol{v} )$ and $\hat{\boldsymbol{x}} = ( \hat{x}, \hat{\boldsymbol{p}}, \hat{\boldsymbol{q}}, \hat{\boldsymbol{z}}, \hat{\boldsymbol{y}}, \hat{\boldsymbol{v}} )$ in $\boldsymbol{\mathcal{K}}$, we obtain
\[
\begin{aligned}
\langle \boldsymbol{x} - \hat{\boldsymbol{x}}, \boldsymbol{S} \boldsymbol{x} - \boldsymbol{S} \hat{\boldsymbol{x}} \rangle
&= \sum_{i=1}^m \langle x - \hat{x}, L_i^* ( v_i - \hat{v}_i ) \rangle - \sum_{i=1}^m \langle p_i - \hat{p}_i, K_i ( z_i - \hat{z}_i ) \rangle - \sum_{i=1}^m \langle q_i - \hat{q}_i, M_i ( y_i - \hat{y}_i ) \rangle\\
&\quad + \sum_{i=1}^m \langle z_i - \hat{z}_i, K_i^* ( p_i - \hat{p}_i ) \rangle + \sum_{i=1}^m \langle y_i - \hat{y}_i, M_i^* ( q_i - \hat{q}_i ) \rangle - \sum_{i=1}^m \langle v_i - \hat{v}_i, L_i ( x - \hat{x} ) \rangle = 0,
\end{aligned}
\]
which means that $\boldsymbol{S}$ is monotone. It follows from the Cauchy–Schwarz inequality that
\[
\begin{aligned}
\| \boldsymbol{S} \boldsymbol{x} - \boldsymbol{S} \hat{\boldsymbol{x}} \|
&= \Big( \Big\| \sum_{i=1}^m L_i^* ( v_i - \hat{v}_i ) \Big\|^2 + \sum_{i=1}^m \| K_i ( z_i - \hat{z}_i ) \|^2 + \sum_{i=1}^m \| M_i ( y_i - \hat{y}_i ) \|^2\\
&\qquad + \sum_{i=1}^m \| K_i^* ( p_i - \hat{p}_i ) \|^2 + \sum_{i=1}^m \| M_i^* ( q_i - \hat{q}_i ) \|^2 + \sum_{i=1}^m \| L_i ( x - \hat{x} ) \|^2 \Big)^{\frac{1}{2}}\\
&\le \Big( \Big( \sum_{i=1}^m \| L_i \|^2 \Big) \sum_{i=1}^m \| v_i - \hat{v}_i \|^2 + \max_{i = 1, \dots, m} \| K_i \|^2 \sum_{i=1}^m \| z_i - \hat{z}_i \|^2 + \max_{i = 1, \dots, m} \| M_i \|^2 \sum_{i=1}^m \| y_i - \hat{y}_i \|^2\\
&\qquad + \max_{i = 1, \dots, m} \| K_i \|^2 \sum_{i=1}^m \| p_i - \hat{p}_i \|^2 + \max_{i = 1, \dots, m} \| M_i \|^2 \sum_{i=1}^m \| q_i - \hat{q}_i \|^2 + \Big( \sum_{i=1}^m \| L_i \|^2 \Big) \| x - \hat{x} \|^2 \Big)^{\frac{1}{2}}\\
&\le l\, \| \boldsymbol{x} - \hat{\boldsymbol{x}} \|.
\end{aligned}
\]
Hence, $\boldsymbol{S}$ is monotone and $l$-Lipschitzian.
(iii) Let $\boldsymbol{x} = ( x, \boldsymbol{p}, \boldsymbol{q}, \boldsymbol{z}, \boldsymbol{y}, \boldsymbol{v} ) \in \boldsymbol{\mathcal{K}}$ and $\hat{\boldsymbol{x}} = ( \hat{x}, \hat{\boldsymbol{p}}, \hat{\boldsymbol{q}}, \hat{\boldsymbol{z}}, \hat{\boldsymbol{y}}, \hat{\boldsymbol{v}} ) \in \boldsymbol{\mathcal{K}}$. Since $C$ is $\mu^{-1}$-cocoercive, we have
\[
\langle \boldsymbol{x} - \hat{\boldsymbol{x}}, \boldsymbol{Q} \boldsymbol{x} - \boldsymbol{Q} \hat{\boldsymbol{x}} \rangle = \langle x - \hat{x}, C x - C \hat{x} \rangle \ge \mu^{-1} \| C x - C \hat{x} \|^2 = \mu^{-1} \| \boldsymbol{Q} \boldsymbol{x} - \boldsymbol{Q} \hat{\boldsymbol{x}} \|^2.
\]
Hence, the operator $\boldsymbol{Q}$ is $\mu^{-1}$-cocoercive.
(iv) Let $\bar{x} \in \mathcal{H}$; then
\[
\begin{aligned}
\bar{x} \text{ solves } (3)
&\iff \exists\, ( \bar{\boldsymbol{p}}, \bar{\boldsymbol{q}}, \bar{\boldsymbol{y}} ) \in \boldsymbol{\mathcal{X}} \oplus \boldsymbol{\mathcal{Y}} \oplus \boldsymbol{\mathcal{G}} :
\begin{cases}
z - \sum_{i=1}^m L_i^* K_i^* \bar{p}_i \in A \bar{x} + C \bar{x},\\
K_i ( L_i \bar{x} - \bar{y}_i - r_i ) \in B_i^{-1} \bar{p}_i, & i = 1, \dots, m,\\
M_i \bar{y}_i \in D_i^{-1} \bar{q}_i, & i = 1, \dots, m,\\
K_i^* \bar{p}_i = M_i^* \bar{q}_i, & i = 1, \dots, m,
\end{cases}\\
&\iff \exists\, ( \bar{\boldsymbol{z}}, \bar{\boldsymbol{y}}, \bar{\boldsymbol{v}} ) \in \boldsymbol{\mathcal{G}} \oplus \boldsymbol{\mathcal{G}} \oplus \boldsymbol{\mathcal{G}} :
\begin{cases}
0 \in - z + A \bar{x} + \sum_{i=1}^m L_i^* \bar{v}_i + C \bar{x},\\
0 \in - K_i \bar{z}_i + B_i^{-1} \bar{p}_i, & i = 1, \dots, m,\\
0 \in - M_i \bar{y}_i + D_i^{-1} \bar{q}_i, & i = 1, \dots, m,\\
0 = K_i^* \bar{p}_i - \bar{v}_i,\quad 0 = M_i^* \bar{q}_i - \bar{v}_i, & i = 1, \dots, m,\\
0 = r_i + \bar{z}_i + \bar{y}_i - L_i \bar{x}, & i = 1, \dots, m,
\end{cases}\\
&\iff ( \bar{x}, \bar{\boldsymbol{p}}, \bar{\boldsymbol{q}}, \bar{\boldsymbol{z}}, \bar{\boldsymbol{y}}, \bar{\boldsymbol{v}} ) \in \operatorname{zer} ( \boldsymbol{M} + \boldsymbol{S} + \boldsymbol{Q} ).
\end{aligned}
\]
Therefore, if $( \bar{\boldsymbol{p}}, \bar{\boldsymbol{q}}, \bar{\boldsymbol{z}}, \bar{\boldsymbol{y}}, \bar{\boldsymbol{v}} )$ is a solution of (4), then there exists $\bar{x} \in \mathcal{H}$ such that $( \bar{x}, \bar{\boldsymbol{p}}, \bar{\boldsymbol{q}}, \bar{\boldsymbol{z}}, \bar{\boldsymbol{y}}, \bar{\boldsymbol{v}} )$ is a primal-dual solution of Problem 2.  □

3.1. Primal-Dual Forward–Backward Splitting Type Algorithm

In this subsection, we prove the convergence of the primal-dual forward–backward splitting type algorithm (5). By Lemma 2, $\boldsymbol{M} + \boldsymbol{S}$ is maximally monotone and $\boldsymbol{Q}$ is cocoercive, so it is natural to use the forward–backward splitting algorithm. However, the resolvent of $\boldsymbol{M} + \boldsymbol{S}$ does not have a closed form. To overcome this difficulty, Boţ and Hendrich [4] introduced a special preconditioner into the forward–backward splitting algorithm and obtained the primal-dual splitting algorithm (5). In the following, we present an improved convergence analysis of the primal-dual forward–backward splitting type algorithm (5), which sharpens the selection of the iterative parameters.
Theorem 1.
Consider Problem 2 and suppose that
\[
z \in \operatorname{ran} \Big( A + \sum_{i=1}^m L_i^* \big[ ( K_i^* B_i K_i ) \,\Box\, ( M_i^* D_i M_i ) \big] ( L_i \cdot - r_i ) + C \Big).
\]
For any $i = 1, \dots, m$, let $\tau, \theta_{1,i}, \theta_{2,i}, \gamma_{1,i}, \gamma_{2,i}$ and $\sigma_i$ be strictly positive real numbers and $\{ \lambda_n \} \subset \big[ 0, 2 - \frac{1}{2 \beta} \big]$, satisfying the following conditions:
(i) $2 \beta > 1$, where $\beta = \mu^{-1} \big( \frac{1}{\tau} - \sum_{i=1}^m \sigma_i \| L_i \|^2 \big)$;
(ii) $( 1 - \bar{\alpha} ) \min_{i = 1, \dots, m} \big\{ \frac{1}{\tau}, \frac{1}{\theta_{1,i}}, \frac{1}{\theta_{2,i}}, \frac{1}{\gamma_{1,i}}, \frac{1}{\gamma_{2,i}}, \frac{1}{\sigma_i} \big\} > 0$, where $\bar{\alpha}$ is defined by
\[
\bar{\alpha} = \max \Big\{ \tau \sum_{i=1}^m \sigma_i \| L_i \|^2,\ \max_{j = 1, \dots, m} \big\{ \theta_{1,j} \gamma_{1,j} \| K_j \|^2,\ \theta_{2,j} \gamma_{2,j} \| M_j \|^2 \big\} \Big\};
\]
(iii) $\sum_{n=0}^{+\infty} \lambda_n \big( 2 - \frac{1}{2 \beta} - \lambda_n \big) = + \infty$.
Consider the iterative sequences generated by (5). Then, there exists a primal-dual solution $( \bar{x}, \bar{\boldsymbol{p}}, \bar{\boldsymbol{q}}, \bar{\boldsymbol{z}}, \bar{\boldsymbol{y}}, \bar{\boldsymbol{v}} )$ to Problem 2 such that $x_n \rightharpoonup \bar{x}$, $p_{i,n} \rightharpoonup \bar{p}_i$, $q_{i,n} \rightharpoonup \bar{q}_i$, $z_{i,n} \rightharpoonup \bar{z}_i$, $y_{i,n} \rightharpoonup \bar{y}_i$, and $v_{i,n} \rightharpoonup \bar{v}_i$ for any $i = 1, \dots, m$ as $n \to + \infty$.
Proof. 
Let $\mathcal{H}$, $A$, $C$, $\mathcal{X}_i$, $\mathcal{Y}_i$, $\mathcal{G}_i$, $B_i$, $D_i$, $L_i$, $K_i$, $M_i$, $i = 1, \dots, m$, be defined as in Problem 2. Let the real Hilbert space $\boldsymbol{\mathcal{K}} = \mathcal{H} \oplus \boldsymbol{\mathcal{X}} \oplus \boldsymbol{\mathcal{Y}} \oplus \boldsymbol{\mathcal{G}} \oplus \boldsymbol{\mathcal{G}} \oplus \boldsymbol{\mathcal{G}}$ and
\[
\boldsymbol{p} = ( p_1, \dots, p_m ),\quad \boldsymbol{q} = ( q_1, \dots, q_m ),\quad \boldsymbol{y} = ( y_1, \dots, y_m ),\quad \boldsymbol{z} = ( z_1, \dots, z_m ),\quad \boldsymbol{v} = ( v_1, \dots, v_m ),\quad \boldsymbol{r} = ( r_1, \dots, r_m ).
\]
For strictly positive real numbers $\tau, \theta_{1,i}, \theta_{2,i}, \gamma_{1,i}, \gamma_{2,i}, \sigma_i$, $i = 1, \dots, m$, introduce the notations
\[
\frac{\boldsymbol{p}}{\theta_1} = \Big( \frac{p_1}{\theta_{1,1}}, \dots, \frac{p_m}{\theta_{1,m}} \Big),\quad \frac{\boldsymbol{q}}{\theta_2} = \Big( \frac{q_1}{\theta_{2,1}}, \dots, \frac{q_m}{\theta_{2,m}} \Big),\quad \frac{\boldsymbol{z}}{\gamma_1} = \Big( \frac{z_1}{\gamma_{1,1}}, \dots, \frac{z_m}{\gamma_{1,m}} \Big),\quad \frac{\boldsymbol{y}}{\gamma_2} = \Big( \frac{y_1}{\gamma_{2,1}}, \dots, \frac{y_m}{\gamma_{2,m}} \Big),\quad \frac{\boldsymbol{v}}{\sigma} = \Big( \frac{v_1}{\sigma_1}, \dots, \frac{v_m}{\sigma_m} \Big),
\]
and define
\[
\boldsymbol{V} : \boldsymbol{\mathcal{K}} \to \boldsymbol{\mathcal{K}} : ( x, \boldsymbol{p}, \boldsymbol{q}, \boldsymbol{z}, \boldsymbol{y}, \boldsymbol{v} ) \mapsto \Big( \frac{x}{\tau} - \sum_{i=1}^m L_i^* v_i,\ \frac{\boldsymbol{p}}{\theta_1} + \widetilde{K} \boldsymbol{z},\ \frac{\boldsymbol{q}}{\theta_2} + \widetilde{M} \boldsymbol{y},\ \frac{\boldsymbol{z}}{\gamma_1} + \widetilde{K}^* \boldsymbol{p},\ \frac{\boldsymbol{y}}{\gamma_2} + \widetilde{M}^* \boldsymbol{q},\ \frac{\boldsymbol{v}}{\sigma} - ( L_1 x, \dots, L_m x ) \Big).
\]
Then, (5) can be rewritten in the form
\[
(\forall n \ge 0)\quad
\begin{cases}
\dfrac{x_n - \tilde{x}_n}{\tau} - \displaystyle\sum_{i=1}^m L_i^* ( v_{i,n} - \tilde{v}_{i,n} ) - C x_n \in - z + A \tilde{x}_n + \displaystyle\sum_{i=1}^m L_i^* \tilde{v}_{i,n}\\
\text{for } i = 1, \dots, m\\
\quad \dfrac{p_{i,n} - \tilde{p}_{i,n}}{\theta_{1,i}} + K_i ( z_{i,n} - \tilde{z}_{i,n} ) \in B_i^{-1} \tilde{p}_{i,n} - K_i \tilde{z}_{i,n}\\
\quad \dfrac{q_{i,n} - \tilde{q}_{i,n}}{\theta_{2,i}} + M_i ( y_{i,n} - \tilde{y}_{i,n} ) \in D_i^{-1} \tilde{q}_{i,n} - M_i \tilde{y}_{i,n}\\
\quad \dfrac{z_{i,n} - \tilde{z}_{i,n}}{\gamma_{1,i}} + K_i^* ( p_{i,n} - \tilde{p}_{i,n} ) = - \tilde{v}_{i,n} + K_i^* \tilde{p}_{i,n}\\
\quad \dfrac{y_{i,n} - \tilde{y}_{i,n}}{\gamma_{2,i}} + M_i^* ( q_{i,n} - \tilde{q}_{i,n} ) = - \tilde{v}_{i,n} + M_i^* \tilde{q}_{i,n}\\
\quad \dfrac{v_{i,n} - \tilde{v}_{i,n}}{\sigma_i} - L_i ( x_n - \tilde{x}_n ) = r_i + \tilde{z}_{i,n} + \tilde{y}_{i,n} - L_i \tilde{x}_n\\
x_{n+1} = x_n + \lambda_n ( \tilde{x}_n - x_n ).
\end{cases} \tag{22}
\]
Let
\[
\begin{aligned}
&\boldsymbol{p}_n = ( p_{1,n}, \dots, p_{m,n} ) \in \boldsymbol{\mathcal{X}},\quad \boldsymbol{q}_n = ( q_{1,n}, \dots, q_{m,n} ) \in \boldsymbol{\mathcal{Y}},\quad \boldsymbol{z}_n = ( z_{1,n}, \dots, z_{m,n} ) \in \boldsymbol{\mathcal{G}},\\
&\boldsymbol{y}_n = ( y_{1,n}, \dots, y_{m,n} ) \in \boldsymbol{\mathcal{G}},\quad \boldsymbol{v}_n = ( v_{1,n}, \dots, v_{m,n} ) \in \boldsymbol{\mathcal{G}},
\end{aligned}
\]
define $\tilde{\boldsymbol{p}}_n, \tilde{\boldsymbol{q}}_n, \tilde{\boldsymbol{z}}_n, \tilde{\boldsymbol{y}}_n, \tilde{\boldsymbol{v}}_n$ analogously, and set
\[
\boldsymbol{x}_n = ( x_n, \boldsymbol{p}_n, \boldsymbol{q}_n, \boldsymbol{z}_n, \boldsymbol{y}_n, \boldsymbol{v}_n ) \in \boldsymbol{\mathcal{K}},\qquad \tilde{\boldsymbol{x}}_n = ( \tilde{x}_n, \tilde{\boldsymbol{p}}_n, \tilde{\boldsymbol{q}}_n, \tilde{\boldsymbol{z}}_n, \tilde{\boldsymbol{y}}_n, \tilde{\boldsymbol{v}}_n ) \in \boldsymbol{\mathcal{K}}.
\]
Therefore, the iteration scheme (22) is equivalent to
\[
(\forall n \ge 0)\quad
\begin{cases}
\boldsymbol{V} ( \boldsymbol{x}_n - \tilde{\boldsymbol{x}}_n ) - \boldsymbol{Q} \boldsymbol{x}_n \in ( \boldsymbol{M} + \boldsymbol{S} ) \tilde{\boldsymbol{x}}_n\\
\boldsymbol{x}_{n+1} = \boldsymbol{x}_n + \lambda_n ( \tilde{\boldsymbol{x}}_n - \boldsymbol{x}_n ).
\end{cases} \tag{23}
\]
We introduce the notations
\[
A_{\boldsymbol{\mathcal{K}}} := \boldsymbol{V}^{-1} ( \boldsymbol{M} + \boldsymbol{S} )\quad \text{and}\quad B_{\boldsymbol{\mathcal{K}}} := \boldsymbol{V}^{-1} \boldsymbol{Q}.
\]
Then, for any $n \ge 0$, we have
\[
\begin{aligned}
\boldsymbol{V} ( \boldsymbol{x}_n - \tilde{\boldsymbol{x}}_n ) - \boldsymbol{Q} \boldsymbol{x}_n \in ( \boldsymbol{M} + \boldsymbol{S} ) \tilde{\boldsymbol{x}}_n
&\iff \boldsymbol{V} \boldsymbol{x}_n - \boldsymbol{Q} \boldsymbol{x}_n \in ( \boldsymbol{V} + \boldsymbol{M} + \boldsymbol{S} ) \tilde{\boldsymbol{x}}_n\\
&\iff \boldsymbol{x}_n - \boldsymbol{V}^{-1} \boldsymbol{Q} \boldsymbol{x}_n \in \big( \mathrm{Id} + \boldsymbol{V}^{-1} ( \boldsymbol{M} + \boldsymbol{S} ) \big) \tilde{\boldsymbol{x}}_n\\
&\iff \tilde{\boldsymbol{x}}_n = \big( \mathrm{Id} + A_{\boldsymbol{\mathcal{K}}} \big)^{-1} \big( \boldsymbol{x}_n - B_{\boldsymbol{\mathcal{K}}} \boldsymbol{x}_n \big),
\end{aligned}
\]
which can be written as
\[
\tilde{\boldsymbol{x}}_n = J_{A_{\boldsymbol{\mathcal{K}}}} \big( \boldsymbol{x}_n - B_{\boldsymbol{\mathcal{K}}} \boldsymbol{x}_n \big).
\]
Thus, the iterative scheme (23) becomes
\[
(\forall n \ge 0)\quad
\begin{cases}
\tilde{\boldsymbol{x}}_n = J_{A_{\boldsymbol{\mathcal{K}}}} \big( \boldsymbol{x}_n - B_{\boldsymbol{\mathcal{K}}} \boldsymbol{x}_n \big)\\
\boldsymbol{x}_{n+1} = \boldsymbol{x}_n + \lambda_n ( \tilde{\boldsymbol{x}}_n - \boldsymbol{x}_n ).
\end{cases} \tag{27}
\]
We then introduce the Hilbert space $\boldsymbol{\mathcal{K}}_{\boldsymbol{V}}$ with inner product and norm defined, for $\boldsymbol{x}, \boldsymbol{y} \in \boldsymbol{\mathcal{K}}$, by
\[
\langle \boldsymbol{x}, \boldsymbol{y} \rangle_{\boldsymbol{\mathcal{K}}_{\boldsymbol{V}}} = \langle \boldsymbol{x}, \boldsymbol{V} \boldsymbol{y} \rangle_{\boldsymbol{\mathcal{K}}}\quad \text{and}\quad \| \boldsymbol{x} \|_{\boldsymbol{\mathcal{K}}_{\boldsymbol{V}}} = \sqrt{ \langle \boldsymbol{x}, \boldsymbol{V} \boldsymbol{x} \rangle_{\boldsymbol{\mathcal{K}}} }.
\]
Since $\boldsymbol{M} + \boldsymbol{S}$ and $\boldsymbol{Q}$ are maximally monotone on $\boldsymbol{\mathcal{K}}$, the operators $A_{\boldsymbol{\mathcal{K}}}$ and $B_{\boldsymbol{\mathcal{K}}}$ are maximally monotone on $\boldsymbol{\mathcal{K}}_{\boldsymbol{V}}$. Moreover, since $\boldsymbol{V}$ is self-adjoint and $\rho$-strongly positive, weak and strong convergence in $\boldsymbol{\mathcal{K}}_{\boldsymbol{V}}$ are equivalent to weak and strong convergence in $\boldsymbol{\mathcal{K}}$, respectively. In the following, we prove that $B_{\boldsymbol{\mathcal{K}}}$ is $\beta$-cocoercive on $\boldsymbol{\mathcal{K}}_{\boldsymbol{V}}$. Indeed, for $\boldsymbol{x}, \boldsymbol{y} \in \boldsymbol{\mathcal{K}}_{\boldsymbol{V}}$, we have
\[
\begin{aligned}
\| B_{\boldsymbol{\mathcal{K}}} \boldsymbol{x} - B_{\boldsymbol{\mathcal{K}}} \boldsymbol{y} \|_{\boldsymbol{\mathcal{K}}_{\boldsymbol{V}}}^2
&= \langle \boldsymbol{Q} \boldsymbol{x} - \boldsymbol{Q} \boldsymbol{y}, \boldsymbol{V}^{-1} \boldsymbol{Q} \boldsymbol{x} - \boldsymbol{V}^{-1} \boldsymbol{Q} \boldsymbol{y} \rangle_{\boldsymbol{\mathcal{K}}}
= \Big\langle C x - C y, \Big( \frac{1}{\tau} \mathrm{Id} - \sum_{i=1}^m \sigma_i L_i^* L_i \Big)^{-1} ( C x - C y ) \Big\rangle\\
&\le \Big( \frac{1}{\tau} - \sum_{i=1}^m \sigma_i \| L_i \|^2 \Big)^{-1} \langle C x - C y, C x - C y \rangle
= \Big( \frac{1}{\tau} - \sum_{i=1}^m \sigma_i \| L_i \|^2 \Big)^{-1} \| C x - C y \|^2.
\end{aligned}
\]
From the above inequality, we obtain
\[
\langle \boldsymbol{x} - \boldsymbol{y}, B_{\boldsymbol{\mathcal{K}}} \boldsymbol{x} - B_{\boldsymbol{\mathcal{K}}} \boldsymbol{y} \rangle_{\boldsymbol{\mathcal{K}}_{\boldsymbol{V}}}
= \langle \boldsymbol{x} - \boldsymbol{y}, \boldsymbol{Q} \boldsymbol{x} - \boldsymbol{Q} \boldsymbol{y} \rangle_{\boldsymbol{\mathcal{K}}}
= \langle x - y, C x - C y \rangle
\ge \mu^{-1} \| C x - C y \|^2
\ge \beta\, \| B_{\boldsymbol{\mathcal{K}}} \boldsymbol{x} - B_{\boldsymbol{\mathcal{K}}} \boldsymbol{y} \|_{\boldsymbol{\mathcal{K}}_{\boldsymbol{V}}}^2,
\]
where $\beta = \mu^{-1} \big( \frac{1}{\tau} - \sum_{i=1}^m \sigma_i \| L_i \|^2 \big)$.
Since $2 \beta > 1$, the iteration scheme (27) can be viewed as a special case of the forward–backward splitting algorithm. By Corollary 28.9 of [23], the iterative sequence $\{ \boldsymbol{x}_n \}$ converges weakly to a point $\bar{\boldsymbol{x}} = ( \bar{x}, \bar{\boldsymbol{p}}, \bar{\boldsymbol{q}}, \bar{\boldsymbol{z}}, \bar{\boldsymbol{y}}, \bar{\boldsymbol{v}} ) \in \operatorname{zer} ( A_{\boldsymbol{\mathcal{K}}} + B_{\boldsymbol{\mathcal{K}}} )$. Observe that $\operatorname{zer} ( A_{\boldsymbol{\mathcal{K}}} + B_{\boldsymbol{\mathcal{K}}} ) = \operatorname{zer} \big( \boldsymbol{V}^{-1} ( \boldsymbol{M} + \boldsymbol{S} + \boldsymbol{Q} ) \big) = \operatorname{zer} ( \boldsymbol{M} + \boldsymbol{S} + \boldsymbol{Q} )$. Then, $x_n \rightharpoonup \bar{x}$, $p_{i,n} \rightharpoonup \bar{p}_i$, $q_{i,n} \rightharpoonup \bar{q}_i$, $z_{i,n} \rightharpoonup \bar{z}_i$, $y_{i,n} \rightharpoonup \bar{y}_i$, and $v_{i,n} \rightharpoonup \bar{v}_i$ for any $i = 1, \dots, m$ as $n \to + \infty$. This completes the proof.  □
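To make the relaxed forward–backward recursion concrete, the following Python sketch (our toy example; the matrix $K$, vector $b$, and box $C$ are illustrative choices, not the paper's setting) runs $x_{n+1} = x_n + \lambda_n \big( J_A ( x_n - B x_n ) - x_n \big)$ with $A$ the normal cone of a box, whose resolvent is the projection, and $B = \gamma \nabla h$ for $h(x) = \frac{1}{2} \| K x - b \|^2$. With $\gamma = 1 / \| K^\top K \|$, $B$ is $1$-cocoercive, so a constant relaxation $\lambda = 1.4 < 2 - \frac{1}{2 \beta} = 1.5$ is admissible:

```python
import numpy as np

rng = np.random.default_rng(3)
K = rng.standard_normal((20, 5))
b = rng.standard_normal(20)
lo, hi = -0.2, 0.2                      # box C = [lo, hi]^5

Lip = np.linalg.norm(K.T @ K, 2)        # Lipschitz constant of grad h
gamma = 1.0 / Lip                       # makes B = gamma * grad h 1-cocoercive
lam = 1.4                               # relaxation, admissible since 2 - 1/(2*1) = 1.5

x = np.zeros(5)
for _ in range(2000):
    forward = x - gamma * K.T @ (K @ x - b)     # forward (gradient) step
    x = x + lam * (np.clip(forward, lo, hi) - x)  # relaxed backward (projection) step

# The limit satisfies the fixed-point equation x = P_C(x - gamma * grad h(x)).
resid = x - np.clip(x - gamma * K.T @ (K @ x - b), lo, hi)
assert np.linalg.norm(resid) < 1e-8
```

The over-relaxation $\lambda > 1$ is exactly the extra freedom that Theorem 1 provides compared with the original restriction $\lambda_n \in (0, 1]$.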
Remark 1.
Theorem 1 improves the FB_BH algorithm (5) in the following aspects:
(i) The conditions on the parameters $\tau, \theta_{1,i}, \theta_{2,i}, \gamma_{1,i}, \gamma_{2,i}$ and $\sigma_i$, $i = 1, \dots, m$, are relaxed.
(ii) The range of the relaxation parameters $\{ \lambda_n \}$ has been expanded from $\{ \lambda_n \} \subset ( 0, 1 ]$ to $\{ \lambda_n \} \subset \big[ 0, 2 - \frac{1}{2 \beta} \big]$. Since $2 \beta > 1$, we have $2 - \frac{1}{2 \beta} > 1$.

3.2. Primal-Dual Forward–Backward-Half–Forward Splitting Type Algorithm

By Lemma 2, the primal-dual pair of monotone inclusions (3) and (4) is equivalent to the monotone inclusion for the sum $\boldsymbol{M} + \boldsymbol{S} + \boldsymbol{Q}$, where $\boldsymbol{M}$ is maximally monotone, $\boldsymbol{S}$ is monotone and Lipschitzian, and $\boldsymbol{Q}$ is cocoercive. It is well known that a $\mu^{-1}$-cocoercive operator is $\mu$-Lipschitz continuous. The forward–backward–forward splitting type algorithm (8) does not exploit the cocoercivity of $\boldsymbol{Q}$. In the following, we propose a forward–backward–half-forward splitting type algorithm for solving (3) and (4) and prove its convergence.
Theorem 2.
Consider Problem 2 and suppose that
\[
z \in \operatorname{ran} \Big( A + \sum_{i=1}^m L_i^* \big[ ( K_i^* B_i K_i ) \,\Box\, ( M_i^* D_i M_i ) \big] ( L_i \cdot - r_i ) + C \Big).
\]
Let $\{ \gamma_n \} \subset [ \eta, \chi - \eta ]$, where $\eta \in ( 0, \chi / 2 ]$, $l$ is defined by (16), and
\[
\chi := \frac{4 \mu^{-1}}{1 + \sqrt{1 + 16 \mu^{-2} l^2}}.
\]
Let $x_0 \in \mathcal{H}$ and, for any $i = 1, \dots, m$, let $p_{i,0} \in \mathcal{X}_i$, $q_{i,0} \in \mathcal{Y}_i$ and $z_{i,0}, y_{i,0}, v_{i,0} \in \mathcal{G}_i$. Set
\[
(\forall n \ge 0)\quad
\begin{cases}
\tilde{x}_n = J_{\gamma_n A}\big( x_n - \gamma_n \big( C x_n + \sum_{i=1}^m L_i^* v_{i,n} - z \big) \big)\\
\text{for } i = 1, \dots, m\\
\quad \tilde{p}_{i,n} = J_{\gamma_n B_i^{-1}}\big( p_{i,n} + \gamma_n K_i z_{i,n} \big)\\
\quad \tilde{q}_{i,n} = J_{\gamma_n D_i^{-1}}\big( q_{i,n} + \gamma_n M_i y_{i,n} \big)\\
\quad u_{1,i,n} = z_{i,n} - \gamma_n \big( K_i^* p_{i,n} - v_{i,n} - \gamma_n ( L_i x_n - r_i ) \big)\\
\quad u_{2,i,n} = y_{i,n} - \gamma_n \big( M_i^* q_{i,n} - v_{i,n} - \gamma_n ( L_i x_n - r_i ) \big)\\
\quad \tilde{z}_{i,n} = \frac{1 + \gamma_n^2}{1 + 2 \gamma_n^2} \Big( u_{1,i,n} - \frac{\gamma_n^2}{1 + \gamma_n^2} u_{2,i,n} \Big)\\
\quad \tilde{y}_{i,n} = \frac{1 + \gamma_n^2}{1 + 2 \gamma_n^2} \Big( u_{2,i,n} - \frac{\gamma_n^2}{1 + \gamma_n^2} u_{1,i,n} \Big)\\
\quad \tilde{v}_{i,n} = v_{i,n} + \gamma_n \big( L_i x_n - r_i - \tilde{z}_{i,n} - \tilde{y}_{i,n} \big)\\
x_{n+1} = \tilde{x}_n + \gamma_n \sum_{i=1}^m L_i^* ( v_{i,n} - \tilde{v}_{i,n} )\\
\text{for } i = 1, \dots, m\\
\quad p_{i,n+1} = \tilde{p}_{i,n} - \gamma_n K_i ( z_{i,n} - \tilde{z}_{i,n} ),\quad q_{i,n+1} = \tilde{q}_{i,n} - \gamma_n M_i ( y_{i,n} - \tilde{y}_{i,n} ),\\
\quad z_{i,n+1} = \tilde{z}_{i,n} + \gamma_n K_i^* ( p_{i,n} - \tilde{p}_{i,n} ),\quad y_{i,n+1} = \tilde{y}_{i,n} + \gamma_n M_i^* ( q_{i,n} - \tilde{q}_{i,n} ),\quad v_{i,n+1} = \tilde{v}_{i,n} - \gamma_n L_i ( x_n - \tilde{x}_n ).
\end{cases} \tag{32}
\]
Then there exists a primal-dual solution $( \bar{x}, \bar{\boldsymbol{p}}, \bar{\boldsymbol{q}}, \bar{\boldsymbol{z}}, \bar{\boldsymbol{y}}, \bar{\boldsymbol{v}} )$ to Problem 2 such that $x_n \rightharpoonup \bar{x}$ and, for $i = 1, \dots, m$, $p_{i,n} \rightharpoonup \bar{p}_i$, $q_{i,n} \rightharpoonup \bar{q}_i$, $z_{i,n} \rightharpoonup \bar{z}_i$, $y_{i,n} \rightharpoonup \bar{y}_i$, and $v_{i,n} \rightharpoonup \bar{v}_i$ as $n \to + \infty$.
Proof. 
Notice that (32) is equivalent to
\[
(\forall n \ge 0)\quad
\begin{cases}
x_n - \gamma_n \big( C x_n + \sum_{i=1}^m L_i^* v_{i,n} \big) \in \big( \mathrm{Id} + \gamma_n ( A \cdot - z ) \big) \tilde{x}_n\\
\text{for } i = 1, \dots, m\\
\quad p_{i,n} + \gamma_n K_i z_{i,n} \in ( \mathrm{Id} + \gamma_n B_i^{-1} ) \tilde{p}_{i,n}\\
\quad q_{i,n} + \gamma_n M_i y_{i,n} \in ( \mathrm{Id} + \gamma_n D_i^{-1} ) \tilde{q}_{i,n}\\
\quad z_{i,n} - \gamma_n K_i^* p_{i,n} = \tilde{z}_{i,n} - \gamma_n \tilde{v}_{i,n}\\
\quad y_{i,n} - \gamma_n M_i^* q_{i,n} = \tilde{y}_{i,n} - \gamma_n \tilde{v}_{i,n}\\
\quad v_{i,n} + \gamma_n L_i x_n = \tilde{v}_{i,n} + \gamma_n ( r_i + \tilde{z}_{i,n} + \tilde{y}_{i,n} )\\
x_{n+1} = \tilde{x}_n + \gamma_n \sum_{i=1}^m L_i^* ( v_{i,n} - \tilde{v}_{i,n} )\\
\text{for } i = 1, \dots, m\\
\quad p_{i,n+1} = \tilde{p}_{i,n} - \gamma_n K_i ( z_{i,n} - \tilde{z}_{i,n} ),\quad q_{i,n+1} = \tilde{q}_{i,n} - \gamma_n M_i ( y_{i,n} - \tilde{y}_{i,n} ),\\
\quad z_{i,n+1} = \tilde{z}_{i,n} + \gamma_n K_i^* ( p_{i,n} - \tilde{p}_{i,n} ),\quad y_{i,n+1} = \tilde{y}_{i,n} + \gamma_n M_i^* ( q_{i,n} - \tilde{q}_{i,n} ),\quad v_{i,n+1} = \tilde{v}_{i,n} - \gamma_n L_i ( x_n - \tilde{x}_n ).
\end{cases} \tag{33}
\]
Using the notations of Theorem 1, the iteration scheme (33) can be equivalently written as
\[
(\forall n \ge 0)\quad
\begin{cases}
\boldsymbol{x}_n - \gamma_n ( \boldsymbol{S} + \boldsymbol{Q} ) \boldsymbol{x}_n \in ( \mathrm{Id} + \gamma_n \boldsymbol{M} ) \tilde{\boldsymbol{x}}_n\\
\boldsymbol{x}_{n+1} = \tilde{\boldsymbol{x}}_n + \gamma_n ( \boldsymbol{S} \boldsymbol{x}_n - \boldsymbol{S} \tilde{\boldsymbol{x}}_n ),
\end{cases} \tag{34}
\]
which is equivalent to
\[
(\forall n \ge 0)\quad
\begin{cases}
\tilde{\boldsymbol{x}}_n = J_{\gamma_n \boldsymbol{M}} \big( \boldsymbol{x}_n - \gamma_n ( \boldsymbol{S} + \boldsymbol{Q} ) \boldsymbol{x}_n \big)\\
\boldsymbol{x}_{n+1} = \tilde{\boldsymbol{x}}_n + \gamma_n ( \boldsymbol{S} \boldsymbol{x}_n - \boldsymbol{S} \tilde{\boldsymbol{x}}_n ).
\end{cases} \tag{35}
\]
Therefore, (35) is an instance of the forward–backward–half-forward splitting algorithm in $\boldsymbol{\mathcal{K}}$, whose convergence was investigated in [22].
Let $( \bar{x}, \bar{\boldsymbol{p}}, \bar{\boldsymbol{q}}, \bar{\boldsymbol{z}}, \bar{\boldsymbol{y}}, \bar{\boldsymbol{v}} ) \in \operatorname{zer} ( \boldsymbol{M} + \boldsymbol{S} + \boldsymbol{Q} )$; then $( \bar{x}, \bar{\boldsymbol{p}}, \bar{\boldsymbol{q}}, \bar{\boldsymbol{z}}, \bar{\boldsymbol{y}}, \bar{\boldsymbol{v}} ) \in \mathcal{H} \oplus \boldsymbol{\mathcal{X}} \oplus \boldsymbol{\mathcal{Y}} \oplus \boldsymbol{\mathcal{G}} \oplus \boldsymbol{\mathcal{G}} \oplus \boldsymbol{\mathcal{G}}$ is a primal-dual solution to Problem 2. By Theorem 2.3 of [22], we have $x_n \rightharpoonup \bar{x}$ and, for $i = 1, \dots, m$, $p_{i,n} \rightharpoonup \bar{p}_i$, $q_{i,n} \rightharpoonup \bar{q}_i$, $z_{i,n} \rightharpoonup \bar{z}_i$, $y_{i,n} \rightharpoonup \bar{y}_i$, and $v_{i,n} \rightharpoonup \bar{v}_i$ as $n \to + \infty$. This completes the proof.  □
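A minimal instance of the forward–backward–half-forward scheme (35) can be run in $\mathbb{R}^2$. In the sketch below (our toy example, not the paper's setting), $\boldsymbol{M}$ is the normal cone of a box, so $J_{\gamma \boldsymbol{M}}$ is the projection, $\boldsymbol{S}$ is a skew-symmetric linear map (monotone and $\| S \|$-Lipschitz), and $\boldsymbol{Q}$ is the $1$-cocoercive gradient of $\frac{1}{2} \| x - b \|^2$, so the step bound is $\chi = 4 / ( 1 + \sqrt{1 + 16 l^2} )$ with $\mu = 1$:

```python
import numpy as np

S = np.array([[0.0, 1.0], [-1.0, 0.0]])   # skew-symmetric: monotone, Lipschitz
b = np.array([2.0, -1.0])
lo, hi = -10.0, 10.0                      # box C = [-10, 10]^2, contains the solution

l = np.linalg.norm(S, 2)                  # Lipschitz constant of S (= 1 here)
chi = 4.0 / (1.0 + np.sqrt(1.0 + 16.0 * l ** 2))
gamma = 0.5 * chi                         # constant step size inside (0, chi)

x = np.zeros(2)
for _ in range(5000):
    x_t = np.clip(x - gamma * (S @ x + x - b), lo, hi)  # forward-backward step
    x = x_t + gamma * (S @ x - S @ x_t)                 # half-forward correction on S only

# Since C contains it, the zero of N_C(.) + S(.) + (.) - b is (I + S)^{-1} b.
x_star = np.linalg.solve(np.eye(2) + S, b)
assert np.linalg.norm(x - x_star) < 1e-6
```

Note that the correction step re-evaluates only the Lipschitz part $\boldsymbol{S}$, not the cocoercive part; this is exactly the saving over the forward–backward–forward scheme discussed in Remark 2.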
Remark 2.
In contrast to the FBF_BH algorithm (8), the proposed algorithm (32) has two advantages:
(i) Each iteration of (8) requires two evaluations of the cocoercive operator, while the proposed algorithm (32) requires only one.
(ii) The range of the step-size parameter in the proposed algorithm (32) is larger than that in algorithm (8).

3.3. Applications to Convex Minimization Problems

In this subsection, we apply the proposed algorithms to solve the following convex minimization problem.
Problem 3.
Let $\mathcal{H}$ be a real Hilbert space, let $z \in \mathcal{H}$, and let $h : \mathcal{H} \to \mathbb{R}$ be differentiable with $\mu$-Lipschitzian gradient for some $\mu > 0$. Let $f \in \Gamma_0 ( \mathcal{H} )$. For every $i = 1, \dots, m$, let $\mathcal{G}_i$, $\mathcal{X}_i$, $\mathcal{Y}_i$ be real Hilbert spaces, $r_i \in \mathcal{G}_i$, let $g_i \in \Gamma_0 ( \mathcal{X}_i )$ and $l_i \in \Gamma_0 ( \mathcal{Y}_i )$, and consider the nonzero bounded linear operators $L_i : \mathcal{H} \to \mathcal{G}_i$, $K_i : \mathcal{G}_i \to \mathcal{X}_i$ and $M_i : \mathcal{G}_i \to \mathcal{Y}_i$. The primal optimization problem is
\[
\min_{x \in \mathcal{H}}\ f ( x ) + \sum_{i=1}^m \big( ( g_i \circ K_i ) \,\Box\, ( l_i \circ M_i ) \big) ( L_i x - r_i ) + h ( x ) - \langle x, z \rangle, \tag{36}
\]
together with its conjugate dual problem
\[
\max_{\substack{( \boldsymbol{p}, \boldsymbol{q} ) \in \boldsymbol{\mathcal{X}} \oplus \boldsymbol{\mathcal{Y}},\\ K_i^* p_i = M_i^* q_i,\ i = 1, \dots, m}}\ - ( f^* \,\Box\, h^* ) \Big( z - \sum_{i=1}^m L_i^* K_i^* p_i \Big) - \sum_{i=1}^m \big( g_i^* ( p_i ) + l_i^* ( q_i ) + \langle p_i, K_i r_i \rangle \big). \tag{37}
\]
Let $( \bar{x}, \bar{\boldsymbol{p}}, \bar{\boldsymbol{q}}, \bar{\boldsymbol{y}} ) \in \mathcal{H} \oplus \boldsymbol{\mathcal{X}} \oplus \boldsymbol{\mathcal{Y}} \oplus \boldsymbol{\mathcal{G}}$ be a solution of the following primal-dual system of monotone inclusions:
\[
z - \sum_{i=1}^m L_i^* K_i^* \bar{p}_i \in \partial f ( \bar{x} ) + \nabla h ( \bar{x} )\quad \text{and}\quad
\begin{cases}
K_i ( L_i \bar{x} - \bar{y}_i - r_i ) \in \partial g_i^* ( \bar{p}_i ),\\
M_i \bar{y}_i \in \partial l_i^* ( \bar{q}_i ),\\
K_i^* \bar{p}_i = M_i^* \bar{q}_i,
\end{cases}\quad i = 1, \dots, m, \tag{38}
\]
which implies that $\bar{x}$ is an optimal solution to (36) and $( \bar{\boldsymbol{p}}, \bar{\boldsymbol{q}} )$ is an optimal solution to (37).
For the primal-dual system (38), the iterative schemes derived from (5) and (32) and the corresponding convergence statements are presented as follows.
Algorithm 1: Primal-dual forward–backward splitting type algorithm for solving (36)
Let $x_0 \in \mathcal{H}$ and, for any $i = 1, \dots, m$, let $p_{i,0} \in \mathcal{X}_i$, $q_{i,0} \in \mathcal{Y}_i$ and $z_{i,0}, y_{i,0}, v_{i,0} \in \mathcal{G}_i$. Define
\[
(\forall n \ge 0)\quad
\begin{cases}
\tilde{x}_n = \operatorname{prox}_{\tau f}\big( x_n - \tau \big( \nabla h ( x_n ) + \sum_{i=1}^m L_i^* v_{i,n} - z \big) \big)\\
\text{for } i = 1, \dots, m\\
\quad \tilde{p}_{i,n} = \operatorname{prox}_{\theta_{1,i} g_i^*}\big( p_{i,n} + \theta_{1,i} K_i z_{i,n} \big)\\
\quad \tilde{q}_{i,n} = \operatorname{prox}_{\theta_{2,i} l_i^*}\big( q_{i,n} + \theta_{2,i} M_i y_{i,n} \big)\\
\quad u_{1,i,n} = z_{i,n} + \gamma_{1,i} \big( K_i^* ( p_{i,n} - 2 \tilde{p}_{i,n} ) + v_{i,n} + \sigma_i ( L_i ( 2 \tilde{x}_n - x_n ) - r_i ) \big)\\
\quad u_{2,i,n} = y_{i,n} + \gamma_{2,i} \big( M_i^* ( q_{i,n} - 2 \tilde{q}_{i,n} ) + v_{i,n} + \sigma_i ( L_i ( 2 \tilde{x}_n - x_n ) - r_i ) \big)\\
\quad \tilde{z}_{i,n} = \frac{1 + \sigma_i \gamma_{2,i}}{1 + \sigma_i ( \gamma_{1,i} + \gamma_{2,i} )} \Big( u_{1,i,n} - \frac{\sigma_i \gamma_{1,i}}{1 + \sigma_i \gamma_{2,i}} u_{2,i,n} \Big)\\
\quad \tilde{y}_{i,n} = \frac{1}{1 + \sigma_i \gamma_{2,i}} \big( u_{2,i,n} - \sigma_i \gamma_{2,i} \tilde{z}_{i,n} \big)\\
\quad \tilde{v}_{i,n} = v_{i,n} + \sigma_i \big( L_i ( 2 \tilde{x}_n - x_n ) - r_i - \tilde{z}_{i,n} - \tilde{y}_{i,n} \big)\\
x_{n+1} = x_n + \lambda_n ( \tilde{x}_n - x_n )\\
\text{for } i = 1, \dots, m\\
\quad p_{i,n+1} = p_{i,n} + \lambda_n ( \tilde{p}_{i,n} - p_{i,n} ),\quad q_{i,n+1} = q_{i,n} + \lambda_n ( \tilde{q}_{i,n} - q_{i,n} ),\\
\quad z_{i,n+1} = z_{i,n} + \lambda_n ( \tilde{z}_{i,n} - z_{i,n} ),\quad y_{i,n+1} = y_{i,n} + \lambda_n ( \tilde{y}_{i,n} - y_{i,n} ),\quad v_{i,n+1} = v_{i,n} + \lambda_n ( \tilde{v}_{i,n} - v_{i,n} ).
\end{cases}
\]
The convergence of Algorithm 1 is presented in the following theorem.
Theorem 3.
For the convex optimization problem (36), suppose that
\[
z \in \operatorname{ran} \Big( \partial f + \sum_{i=1}^m L_i^* \big[ ( K_i^* \partial g_i K_i ) \,\Box\, ( M_i^* \partial l_i M_i ) \big] ( L_i \cdot - r_i ) + \nabla h \Big), \tag{40}
\]
and consider the sequences generated by Algorithm 1. For any $i = 1, \dots, m$, let $\tau, \theta_{1,i}, \theta_{2,i}, \gamma_{1,i}, \gamma_{2,i}$ and $\sigma_i$ be strictly positive real numbers and let $\{ \lambda_n \}$ satisfy the conditions of Theorem 1. Then, there exist an optimal solution $\bar{x}$ to (36) and an optimal solution $( \bar{\boldsymbol{p}}, \bar{\boldsymbol{q}} )$ to (37) such that $x_n \rightharpoonup \bar{x}$ and, for $i = 1, \dots, m$, $p_{i,n} \rightharpoonup \bar{p}_i$ and $q_{i,n} \rightharpoonup \bar{q}_i$ as $n \to + \infty$.
Proof. 
In Theorem 1, let
\[
A = \partial f,\quad C = \nabla h,\quad \text{and}\quad B_i = \partial g_i,\quad D_i = \partial l_i,\quad i = 1, \dots, m. \tag{41}
\]
According to Theorem 20.25 of [23], the operators in (41) are maximally monotone. On the other hand, we have $B_i^{-1} = \partial g_i^*$ and $D_i^{-1} = \partial l_i^*$ for $i = 1, \dots, m$. Moreover, by the Baillon–Haddad theorem, $C = \nabla h$ is $\mu^{-1}$-cocoercive. By Theorem 1, we have $x_n \rightharpoonup \bar{x}$ and, for $i = 1, \dots, m$, $p_{i,n} \rightharpoonup \bar{p}_i$ and $q_{i,n} \rightharpoonup \bar{q}_i$.  □
The second algorithm is obtained from (32).
Algorithm 2: Primal-dual forward–backward-half–forward splitting type algorithm for solving (36)
Let $\{ \gamma_n \} \subset [ \eta, \chi - \eta ]$, where $\eta \in ( 0, \chi / 2 ]$, $l$ is defined by (16), and $\chi := \frac{4 \mu^{-1}}{1 + \sqrt{1 + 16 \mu^{-2} l^2}}$. Let $x_0 \in \mathcal{H}$ and, for any $i = 1, \dots, m$, let $p_{i,0} \in \mathcal{X}_i$, $q_{i,0} \in \mathcal{Y}_i$ and $z_{i,0}, y_{i,0}, v_{i,0} \in \mathcal{G}_i$. Set
\[
(\forall n \ge 0)\quad
\begin{cases}
\tilde{x}_n = \operatorname{prox}_{\gamma_n f}\big( x_n - \gamma_n \big( \nabla h ( x_n ) + \sum_{i=1}^m L_i^* v_{i,n} - z \big) \big)\\
\text{for } i = 1, \dots, m\\
\quad \tilde{p}_{i,n} = \operatorname{prox}_{\gamma_n g_i^*}\big( p_{i,n} + \gamma_n K_i z_{i,n} \big)\\
\quad \tilde{q}_{i,n} = \operatorname{prox}_{\gamma_n l_i^*}\big( q_{i,n} + \gamma_n M_i y_{i,n} \big)\\
\quad u_{1,i,n} = z_{i,n} - \gamma_n \big( K_i^* p_{i,n} - v_{i,n} - \gamma_n ( L_i x_n - r_i ) \big)\\
\quad u_{2,i,n} = y_{i,n} - \gamma_n \big( M_i^* q_{i,n} - v_{i,n} - \gamma_n ( L_i x_n - r_i ) \big)\\
\quad \tilde{z}_{i,n} = \frac{1 + \gamma_n^2}{1 + 2 \gamma_n^2} \Big( u_{1,i,n} - \frac{\gamma_n^2}{1 + \gamma_n^2} u_{2,i,n} \Big)\\
\quad \tilde{y}_{i,n} = \frac{1 + \gamma_n^2}{1 + 2 \gamma_n^2} \Big( u_{2,i,n} - \frac{\gamma_n^2}{1 + \gamma_n^2} u_{1,i,n} \Big)\\
\quad \tilde{v}_{i,n} = v_{i,n} + \gamma_n \big( L_i x_n - r_i - \tilde{z}_{i,n} - \tilde{y}_{i,n} \big)\\
x_{n+1} = \tilde{x}_n + \gamma_n \sum_{i=1}^m L_i^* ( v_{i,n} - \tilde{v}_{i,n} )\\
\text{for } i = 1, \dots, m\\
\quad p_{i,n+1} = \tilde{p}_{i,n} - \gamma_n K_i ( z_{i,n} - \tilde{z}_{i,n} ),\quad q_{i,n+1} = \tilde{q}_{i,n} - \gamma_n M_i ( y_{i,n} - \tilde{y}_{i,n} ),\\
\quad z_{i,n+1} = \tilde{z}_{i,n} + \gamma_n K_i^* ( p_{i,n} - \tilde{p}_{i,n} ),\quad y_{i,n+1} = \tilde{y}_{i,n} + \gamma_n M_i^* ( q_{i,n} - \tilde{q}_{i,n} ),\quad v_{i,n+1} = \tilde{v}_{i,n} - \gamma_n L_i ( x_n - \tilde{x}_n ).
\end{cases} \tag{42}
\]
As a direct consequence of Theorem 2, we have the following convergence theorem for (42). Since the proof is the same as that of Theorem 3, we omit it here.
Theorem 4.
For the convex optimization problem (36), suppose that
z ran f + i = 1 m L i * K i * g i K i M i * l i M i ( L i · r i ) + h ,
and consider the sequences generated by Algorithm 2. Then, there exist an optimal solution x̄ to (36) and an optimal solution (p̄, q̄) to (37) such that x_n ⇀ x̄ and, for i = 1, …, m, p_{i,n} ⇀ p̄_i and q_{i,n} ⇀ q̄_i as n → +∞.

4. Numerical Experiments

In this section, we present experimental results on image denoising problems under Gaussian noise. We compare the proposed algorithms with the FB_BH algorithm (5) and the FBF_BH algorithm (8). We refer to the proposed Algorithm 1 as the FB algorithm and to the proposed Algorithm 2 as the FBHF algorithm. All numerical experiments were implemented in Matlab R2016b on a Lenovo laptop with an Intel i7-6700 CPU (3.40 GHz) and 4 GB of memory.

4.1. Image Denoising Problems

In this subsection, we show how the proposed algorithms can be applied to solve image denoising problems.
Let b R n be the observed and vectorized noisy image of size M × N (with n = M N for greyscale and n = 3 M N for colored images). Let k 1 , and define
D_k :=
  ⎡ −1   1   0  …   0   0 ⎤
  ⎢  0  −1   1  …   0   0 ⎥
  ⎢  ⋮            ⋱        ⎥
  ⎢  0   0   0  …  −1   1 ⎥
  ⎣  0   0   0  …   0   0 ⎦  ∈ R^{k×k},
which models the discrete first-order derivative. We denote by A B the Kronecker product of the matrices A and B and define
D_x = I_N ⊗ D_M,  D_y = D_N ⊗ I_M  and  D_1 = [D_x; D_y] (the vertical concatenation of D_x and D_y),
where D_x and D_y represent the vertical and horizontal difference operators, respectively, and I_N and I_M are the identity matrices of sizes N and M, respectively. Further, we define the discrete second-order derivative matrices
D_xx = I_N ⊗ (D_M^T D_M),  D_yy = (D_N^T D_N) ⊗ I_M,  D_2 = [D_xx; D_yy],
and
L_1 = [ D_x^T  0 ; 0  D_y^T ]  (block diagonal).
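Assuming images are vectorized column by column, the operators above can be assembled directly from Kronecker products. The following numpy sketch mirrors the definitions of D_k, D_x, D_y, D_1 and D_2 on a tiny image (the helper name `diff_matrix` is ours; in practice these matrices would be built in sparse format):

```python
import numpy as np

def diff_matrix(k):
    """First-order difference matrix D_k: forward differences, zero last row."""
    D = -np.eye(k) + np.eye(k, k=1)
    D[-1, :] = 0.0
    return D

M, N = 3, 4                               # a tiny M x N image for illustration
Dx = np.kron(np.eye(N), diff_matrix(M))   # vertical differences, I_N (x) D_M
Dy = np.kron(diff_matrix(N), np.eye(M))   # horizontal differences, D_N (x) I_M
D1 = np.vstack([Dx, Dy])                  # first-order operator
Dxx = np.kron(np.eye(N), diff_matrix(M).T @ diff_matrix(M))
Dyy = np.kron(diff_matrix(N).T @ diff_matrix(N), np.eye(M))
D2 = np.vstack([Dxx, Dyy])                # second-order operator
```

With column-major vectorization, (I_N ⊗ D_M) vec(X) = vec(D_M X), so Dx applied to a vectorized image returns the vertical differences X[i+1, j] − X[i, j] with a zero last row, and Dy the corresponding horizontal differences.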
We mainly consider the following two constrained image denoising models:
(ℓ2-IC)   min_{x ∈ C} ½‖x − b‖² + ( (α_1‖·‖_1 ∘ D_1) □ (α_2‖·‖_1 ∘ D_2) )(x),
and
(ℓ2-MIC)   min_{x ∈ C} ½‖x − b‖² + ( α_1‖·‖_1 □ (α_2‖·‖_1 ∘ L_1) )( D_1 x ),
where α_1 > 0 and α_2 > 0 are the regularization parameters, and C is a nonempty closed convex set. By using the indicator function δ_C, both of the constrained problems ℓ2-IC and ℓ2-MIC can be reformulated as unconstrained optimization problems,
(ℓ2-IC)   min_{x ∈ R^n} ½‖x − b‖² + ( (α_1‖·‖_1 ∘ D_1) □ (α_2‖·‖_1 ∘ D_2) )(x) + δ_C(x),
and
(ℓ2-MIC)   min_{x ∈ R^n} ½‖x − b‖² + ( α_1‖·‖_1 □ (α_2‖·‖_1 ∘ L_1) )( D_1 x ) + δ_C(x).
It is easy to see that (50) and (51) are special cases of the general convex minimization problem (36). In fact, let m = 1, z = 0, and r_1 = 0. For the ℓ2-IC, let f(x) = δ_C(x), g_1(x) = α_1‖x‖_1, l_1(x) = α_2‖x‖_1, K_1 = D_1, M_1 = D_2, L_1 = I, and h(x) = ½‖x − b‖²; for the ℓ2-MIC, let f(x) = δ_C(x), g_1(x) = α_1‖x‖_1, l_1(x) = α_2‖x‖_1, K_1 = I, M_1 = L_1, L_1 = D_1, and h(x) = ½‖x − b‖².
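With these identifications, evaluating either algorithm only requires a handful of proximity operators, all available in closed form. A minimal numpy sketch (function names are ours): soft-thresholding for α‖·‖_1, the projection onto the ℓ∞-ball for the conjugate (α‖·‖_1)^* via the Moreau decomposition, and the box projection for δ_C.

```python
import numpy as np

def prox_l1(u, gam, alpha):
    """prox of gam * alpha * ||.||_1: componentwise soft-thresholding."""
    return np.sign(u) * np.maximum(np.abs(u) - gam * alpha, 0.0)

def prox_l1_conj(u, gam, alpha):
    """prox of gam * (alpha * ||.||_1)^*: projection onto the l-inf ball of
    radius alpha (independent of the step gam, since the conjugate is an
    indicator function)."""
    return np.clip(u, -alpha, alpha)

def proj_box(u, lo=0.0, hi=255.0):
    """prox of the indicator of C = {x : lo <= x_i <= hi}."""
    return np.clip(u, lo, hi)
```

The two ℓ1 operators are tied together by the Moreau identity u = prox_{γf}(u) + γ prox_{γ^{-1}f*}(u/γ), which here reads u = soft(u, γα) + clip(u, −γα, γα).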

4.2. Numerical Settings

The test images are shown in Figure 1. In our experiments, the test image is corrupted by additive Gaussian noise with zero mean and standard deviation σ_g. In the following experiments, we set C = { x ∈ R^n : 0 ≤ x_i ≤ 255, i = 1, …, n }.
We use the peak signal-to-noise ratio (PSNR) and the structural similarity index (SSIM) [24] to evaluate the quality of the restored images. They are defined by
PSNR = 20 log_10( 255√(mn) / ‖x − x̃‖ ),
and
SSIM = ( (2μ_1μ_2 + c_1)(2σ_{12} + c_2) ) / ( (μ_1² + μ_2² + c_1)(σ_1² + σ_2² + c_2) ),
where x R n is the original image, x ˜ R n is the restored image, c 1 > 0 and c 2 > 0 are small constants, μ 1 and μ 2 are the mean values of x and x ˜ , respectively; σ 1 and σ 2 are the variances of x and x ˜ , respectively; and σ 12 is the covariance of x and x ˜ .
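The two metrics can be coded directly from the displayed formulas. In the sketch below (function names ours), `ssim_global` evaluates the SSIM expression on the whole image; note that the reference implementation of [24] instead averages this statistic over local windows, and the default constants c_1, c_2 follow common practice and are an assumption here.

```python
import numpy as np

def psnr(x, xr):
    """PSNR (in dB) between two vectorized 8-bit images of length m*n."""
    return 20 * np.log10(255 * np.sqrt(x.size) / np.linalg.norm(x - xr))

def ssim_global(x, xr, c1=(0.01 * 255) ** 2, c2=(0.03 * 255) ** 2):
    """Single-window SSIM following the displayed formula."""
    m1, m2 = x.mean(), xr.mean()
    s1, s2 = x.var(), xr.var()               # variances sigma_1^2, sigma_2^2
    s12 = np.mean((x - m1) * (xr - m2))      # covariance sigma_12
    return ((2 * m1 * m2 + c1) * (2 * s12 + c2)) / \
           ((m1 ** 2 + m2 ** 2 + c1) * (s1 + s2 + c2))
```

For a uniform error of 255 per pixel the PSNR is 0 dB, and SSIM of an image with itself is 1, which gives quick sanity checks.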
The criterion for stopping all algorithms is that the relative error of two consecutive iterations satisfies the following inequality
‖x_{n+1} − x_n‖ / ‖x_n‖ < ε,
where ε > 0 is a given small constant.
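This relative-change stopping rule wraps around any of the iteration maps above; a generic driver (names ours, with a safety cap on the iteration count as an added precaution) might look like:

```python
import numpy as np

def run_until_tol(step, x0, eps=1e-5, max_iter=100000):
    """Iterate x_{n+1} = step(x_n) until ||x_{n+1} - x_n|| < eps * ||x_n||.

    Note the criterion is skipped while ||x_n|| = 0, so a nonzero starting
    point (or an absolute fallback) is advisable."""
    x = x0
    for n in range(max_iter):
        x_new = step(x)
        if np.linalg.norm(x_new - x) < eps * np.linalg.norm(x):
            return x_new, n + 1
        x = x_new
    return x, max_iter
```

For example, the averaging map x ↦ (x + b)/2 converges geometrically to b, so the driver stops after a few dozen iterations near b.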
We tune the regularization parameters α 1 and α 2 so as to maximize the PSNR values of the restored images. The choices of α 1 and α 2 are presented in Table 1.

4.3. Numerical Results and Discussion

In the first experiment, we discuss the influence of the choice of iterative parameters on the convergence speed of the compared algorithms. According to the convergence theorems, the admissible parameters of these algorithms are shown in Table 2. For the ℓ2-IC and ℓ2-MIC models, h(x) = ½‖x − b‖², so ∇h(x) = x − b and μ = 1.
We chose Castle in Figure 1 as the test image, with Gaussian noise level σ_g = 15. The numerical results of the FBF_BH algorithm and the FBHF algorithm with different selections of the parameters {γ_n} are reported in Table 3. It can be seen from Table 3 that, for both algorithms, the number of iterations gradually decreased as the step size parameter increased.
For the FB_BH algorithm and the FB algorithm, we selected several combinations of the parameters in Table 4.
According to the iteration parameters in Table 4, the obtained numerical results are shown in Table 5.
According to the results of Table 3 and Table 5, we chose the following parameters of the compared algorithms for the remaining experiments.
(1) For the FBF_BH algorithm, the best parameter for ℓ2-IC was γ_n = 0.15, and the best parameter for ℓ2-MIC was γ_n = 0.26.
(2) For the FBHF algorithm, the best parameter for ℓ2-IC was γ_n = 0.17, and the best parameter for ℓ2-MIC was γ_n = 0.32.
(3) For the FB_BH algorithm, the best parameters for ℓ2-IC were θ_1 = 0.3, γ_1 = 0.3, θ_2 = 0.15, γ_2 = 0.15, λ_n = 1, τ = 0.3, and σ = 0.3, and the best parameters for ℓ2-MIC were θ_1 = 0.4, γ_1 = 0.3, θ_2 = 0.2, γ_2 = 0.2, λ_n = 1, τ = 0.2, and σ = 0.3.
(4) For the FB algorithm, the best parameters for ℓ2-IC were θ_1 = 0.3, γ_1 = 0.3, θ_2 = 0.2, γ_2 = 0.1, λ_n = 1.8, τ = 0.2, and σ = 0.2, and the best parameters for ℓ2-MIC were θ_1 = 0.3, γ_1 = 0.3, θ_2 = 0.2, γ_2 = 0.2, λ_n = 1.8, τ = 0.2, and σ = 0.2.
In the second experiment, we tested the performance of the compared algorithms for solving ℓ2-IC and ℓ2-MIC. The numerical results of each algorithm are presented in Table 6.
From the experimental results in Table 6, we can see that the proposed FBHF algorithm converged faster than the FBF_BH algorithm in terms of the number of iterations, while achieving comparable or slightly higher PSNR and SSIM values. Meanwhile, the proposed FB algorithm also converged faster than the FB_BH algorithm. These results confirm that the proposed algorithms outperform those in [4]. Some of the recovered images are shown in Figure 2 and Figure 3, respectively.

5. Conclusions

In this paper, we studied the convergence of two primal-dual splitting algorithms for solving the monotone inclusions (3) and (4). First, we proved the convergence of the forward–backward type algorithm (5); our parameter conditions improve the results of Boţ and Hendrich [4]. Second, we proposed a new forward–backward–half-forward type algorithm (32). In contrast to the forward–backward–forward type algorithm (8), the proposed algorithm (32) evaluates the cocoercive operator only once per iteration, via a forward step. Finally, we applied the proposed algorithms to the image denoising problems (48) and (49). The numerical results demonstrate the advantages of the proposed algorithms.

Author Contributions

Formal analysis, Q.D.; Investigation, Q.D.; Methodology, Y.T.; Supervision, Y.T.; Writing—original draft, X.L.; Writing—review & editing, J.C. All authors have read and agreed to the published version of the manuscript.

Funding

This work was funded by the National Natural Science Foundation of China (12061045, 11661056).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Acknowledgments

The authors would like to thank the referees and the editor for their helpful comments and suggestions.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Vũ, B.C. A splitting algorithm for dual monotone inclusions involving cocoercive operators. Adv. Comput. Math. 2013, 38, 667–681.
  2. Pesquet, J.-C.; Repetti, A. A class of randomized primal-dual algorithms for distributed optimization. J. Nonlinear Convex Anal. 2015, 16, 2453–2490.
  3. Vũ, B.C. A splitting algorithm for coupled systems of primal-dual monotone inclusions. J. Optim. Theory Appl. 2015, 164, 993–1025.
  4. Boţ, R.I.; Hendrich, C. Solving monotone inclusions involving parallel sums of linearly composed maximally monotone operators. Inverse Probl. Imaging 2016, 10, 617–640.
  5. Boţ, R.I.; Hendrich, C. A Douglas-Rachford type primal-dual method for solving inclusions with mixtures of composite and parallel-sum type monotone operators. SIAM J. Optim. 2013, 23, 2541–2565.
  6. Boţ, R.I.; Csetnek, E.R.; Heinrich, A. A primal-dual splitting algorithm for finding zeros of sums of maximally monotone operators. SIAM J. Optim. 2013, 23, 2011–2036.
  7. Yang, Y.X.; Tang, Y.C.; Wen, M.; Zeng, T.Y. Preconditioned Douglas-Rachford type primal-dual method for solving composite monotone inclusion problems with applications. Inverse Probl. Imaging 2021, 15, 787–825.
  8. Briceño-Arias, L.M.; Combettes, P.L. A monotone + skew splitting model for composite monotone inclusions in duality. SIAM J. Optim. 2011, 21, 1230–1250.
  9. Combettes, P.L.; Pesquet, J.-C. Primal-dual splitting algorithm for solving inclusions with mixtures of composite, Lipschitzian, and parallel-sum type monotone operators. Set-Valued Var. Anal. 2012, 20, 307–330.
  10. Combettes, P.L. Systems of structured monotone inclusions: Duality, algorithms, and applications. SIAM J. Optim. 2013, 23, 2420–2447.
  11. Becker, S.R.; Combettes, P.L. An algorithm for splitting parallel sums of linearly composed monotone operators with applications to signal recovery. J. Nonlinear Convex Anal. 2014, 15, 137–159.
  12. Boţ, R.I.; Hendrich, C. Convergence analysis for a primal-dual monotone + skew splitting algorithm with applications to total variation minimization. J. Math. Imaging Vis. 2014, 49, 551–568.
  13. Alotaibi, A.; Combettes, P.L.; Shahzad, N. Solving coupled composite monotone inclusions by successive Fejér approximations of their Kuhn-Tucker set. SIAM J. Optim. 2014, 24, 2076–2095.
  14. Tran-Dinh, Q.; Vũ, B.C. A new splitting method for solving composite monotone inclusions involving parallel-sum operators. arXiv 2015, arXiv:1505.07946.
  15. Combettes, P.L.; Eckstein, J. Asynchronous block-iterative primal-dual decomposition methods for monotone inclusions. Math. Program. 2018, 168, 645–672.
  16. Bardaro, C.; Bevignani, G.; Mantellini, I.; Seracini, M. Bivariate generalized exponential sampling series and applications to seismic waves. Constr. Math. Anal. 2019, 2, 153–167.
  17. Johnstone, P.R.; Eckstein, J. Projective splitting with forward steps. Math. Program. 2020, 1–40.
  18. Chambolle, A.; Lions, P.L. Image recovery via total variation minimization and related problems. Numer. Math. 1997, 76, 167–188.
  19. Setzer, S.; Steidl, G.; Teuber, T. Infimal convolution regularization with discrete l1-type functionals. Commun. Math. Sci. 2011, 9, 797–827.
  20. Combettes, P.L.; Vũ, B.C. Variable metric forward-backward splitting with applications to monotone inclusions in duality. Optimization 2014, 63, 1289–1318.
  21. Tseng, P. A modified forward-backward splitting method for maximal monotone mappings. SIAM J. Control Optim. 2000, 38, 431–446.
  22. Briceño-Arias, L.M.; Davis, D. Forward-backward-half forward algorithm for solving monotone inclusions. SIAM J. Optim. 2018, 28, 2839–2871.
  23. Bauschke, H.H.; Combettes, P.L. Convex Analysis and Monotone Operator Theory in Hilbert Spaces, 2nd ed.; Springer: London, UK, 2017.
  24. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612.
Figure 1. Test images. (a) 481 × 321 “Castle” image, (b) 493 × 517 “Building” image.
Figure 2. Noisy and restored “Castle” images. (a) σ_g = 15. (b) σ_g = 25. (c) σ_g = 50. (d–f) ℓ2-IC/FBF_BH. (g–i) ℓ2-IC/FB_BH. (j–l) ℓ2-IC/FBHF. (m–o) ℓ2-IC/FB.
Figure 3. Noisy and restored “Building” images. (a) σ_g = 15. (b) σ_g = 25. (c) σ_g = 50. (d–f) ℓ2-IC/FBF_BH. (g–i) ℓ2-IC/FB_BH. (j–l) ℓ2-IC/FBHF. (m–o) ℓ2-IC/FB.
Table 1. The regularization parameter selection for the ℓ2-IC and ℓ2-MIC models.

Image    | Model  | σ_g = 15: α_1, α_2 | σ_g = 25: α_1, α_2 | σ_g = 50: α_1, α_2
Castle   | ℓ2-IC  | 7.7, 21.2 | 14.7, 29.7 | 35.5, 123.9
Castle   | ℓ2-MIC | 7.6, 21.1 | 14.8, 50.8 | 35.7, 115.9
Building | ℓ2-IC  | 6.1, 25.3 | 12.4, 33   | 31.3, 87.8
Building | ℓ2-MIC | 6.1, 27.8 | 12.5, 49.4 | 31.8, 140
Table 2. The parameter selection of the compared algorithms.

Model  | Method | Parameter
ℓ2-IC  | FB_BH  | λ_n ∈ (0, 1], 2(1 − ᾱ) min{1/τ, 1/θ_1, 1/θ_2, 1/γ_1, 1/γ_2, 1/σ} > 1, ᾱ = max{√(τσ), 2.8072√(θ_1γ_1), 5.6133√(θ_2γ_2)}
ℓ2-IC  | FBF_BH | γ_n ∈ (0, 0.1512)
ℓ2-IC  | FB     | λ_n ∈ (0, 2 − 1/(2β)), 2β > 1, (1 − ᾱ) min{1/τ, 1/θ_1, 1/θ_2, 1/γ_1, 1/γ_2, 1/σ} > 0, ᾱ = max{√(τσ), 2.8072√(θ_1γ_1), 5.6133√(θ_2γ_2)}, β = 1/(τσ)
ℓ2-IC  | FBHF   | γ_n ∈ (0, 0.1704)
ℓ2-MIC | FB_BH  | λ_n ∈ (0, 1], 2(1 − ᾱ) min{1/τ, 1/θ_1, 1/θ_2, 1/γ_1, 1/γ_2, 1/σ} > 1, ᾱ = max{2.8072√(τσ), √(θ_1γ_1), 1.9926√(θ_2γ_2)}
ℓ2-MIC | FBF_BH | γ_n ∈ (0, 0.2627)
ℓ2-MIC | FB     | λ_n ∈ (0, 2 − 1/(2β)), 2β > 1, (1 − ᾱ) min{1/τ, 1/θ_1, 1/θ_2, 1/γ_1, 1/γ_2, 1/σ} > 0, ᾱ = max{2.8072√(τσ), √(θ_1γ_1), 1.9926√(θ_2γ_2)}, β = 1/(7.8798τσ)
ℓ2-MIC | FBHF   | γ_n ∈ (0, 0.3259)
Table 3. Numerical results of the FBF_BH algorithm and the FBHF algorithm with different selections of parameters in terms of the PSNR, SSIM, and number of iterations (Iter). The first PSNR/SSIM/Iter triple corresponds to ε = 10^{-5}, the second to ε = 10^{-6}.

Method | Model  | γ_n  | PSNR | SSIM | Iter | PSNR | SSIM | Iter
FBF_BH | ℓ2-IC  | 0.03 | 30.5138 | 0.8410 | 1791 | 30.5346 | 0.8409 | 6572
FBF_BH | ℓ2-IC  | 0.05 | 30.5225 | 0.8410 | 1411 | 30.5356 | 0.8409 | 5203
FBF_BH | ℓ2-IC  | 0.07 | 30.5287 | 0.8410 | 1227 | 30.5361 | 0.8409 | 4445
FBF_BH | ℓ2-IC  | 0.09 | 30.5309 | 0.8410 | 1103 | 30.5370 | 0.8409 | 3994
FBF_BH | ℓ2-IC  | 0.11 | 30.5317 | 0.8410 | 1003 | 30.5379 | 0.8409 | 3730
FBF_BH | ℓ2-IC  | 0.13 | 30.5323 | 0.8410 | 946  | 30.5386 | 0.8409 | 3560
FBF_BH | ℓ2-IC  | 0.15 | 30.5330 | 0.8410 | 888  | 30.5391 | 0.8409 | 3372
FBF_BH | ℓ2-MIC | 0.03 | 30.5330 | 0.8384 | 2287 | 30.5434 | 0.8391 | 4984
FBF_BH | ℓ2-MIC | 0.07 | 30.5358 | 0.8386 | 1126 | 30.5451 | 0.8392 | 2824
FBF_BH | ℓ2-MIC | 0.11 | 30.5378 | 0.8387 | 796  | 30.5460 | 0.8393 | 2207
FBF_BH | ℓ2-MIC | 0.15 | 30.5390 | 0.8388 | 634  | 30.5365 | 0.8393 | 1896
FBF_BH | ℓ2-MIC | 0.19 | 30.5400 | 0.8388 | 538  | 30.5467 | 0.8393 | 1686
FBF_BH | ℓ2-MIC | 0.23 | 30.5409 | 0.8389 | 474  | 30.5468 | 0.8393 | 1518
FBF_BH | ℓ2-MIC | 0.26 | 30.5414 | 0.8389 | 439  | 30.5470 | 0.8393 | 1456
FBHF   | ℓ2-IC  | 0.03 | 30.5138 | 0.8410 | 1790 | 30.5346 | 0.8409 | 6474
FBHF   | ℓ2-IC  | 0.05 | 30.5225 | 0.8410 | 1411 | 30.5356 | 0.8409 | 5203
FBHF   | ℓ2-IC  | 0.07 | 30.5287 | 0.8410 | 1227 | 30.5361 | 0.8409 | 4449
FBHF   | ℓ2-IC  | 0.09 | 30.5309 | 0.8410 | 1103 | 30.5370 | 0.8409 | 3994
FBHF   | ℓ2-IC  | 0.11 | 30.5317 | 0.8410 | 1003 | 30.5379 | 0.8409 | 3729
FBHF   | ℓ2-IC  | 0.13 | 30.5323 | 0.8410 | 946  | 30.5386 | 0.8409 | 3561
FBHF   | ℓ2-IC  | 0.15 | 30.5330 | 0.8410 | 887  | 30.5391 | 0.8409 | 3371
FBHF   | ℓ2-IC  | 0.17 | 30.5336 | 0.8410 | 837  | 30.5393 | 0.8409 | 3169
FBHF   | ℓ2-MIC | 0.03 | 30.5330 | 0.8384 | 2286 | 30.5434 | 0.8391 | 4983
FBHF   | ℓ2-MIC | 0.07 | 30.5358 | 0.8386 | 1126 | 30.5452 | 0.8392 | 2824
FBHF   | ℓ2-MIC | 0.11 | 30.5378 | 0.8387 | 796  | 30.5460 | 0.8393 | 2207
FBHF   | ℓ2-MIC | 0.15 | 30.5390 | 0.8388 | 633  | 30.5465 | 0.8393 | 1896
FBHF   | ℓ2-MIC | 0.19 | 30.5401 | 0.8388 | 538  | 30.5467 | 0.8393 | 1686
FBHF   | ℓ2-MIC | 0.23 | 30.5410 | 0.8389 | 474  | 30.5468 | 0.8393 | 1519
FBHF   | ℓ2-MIC | 0.27 | 30.5416 | 0.8389 | 428  | 30.5470 | 0.8393 | 1432
FBHF   | ℓ2-MIC | 0.31 | 30.5421 | 0.8390 | 394  | 30.5471 | 0.8394 | 1346
FBHF   | ℓ2-MIC | 0.32 | 30.5422 | 0.8390 | 387  | 30.5471 | 0.8394 | 1317
Table 4. Parameter selection of the FB_BH algorithm and the FB algorithm.

Method | Model  | Case | θ_1 | θ_2  | γ_1 | γ_2  | τ    | σ    | λ_n
FB_BH  | ℓ2-IC  | 1    | 0.3 | 0.15 | 0.3 | 0.15 | 0.3  | 0.3  | 1
FB_BH  | ℓ2-IC  | 2    | 0.2 | 0.15 | 0.2 | 0.15 | 0.3  | 0.3  | 1
FB_BH  | ℓ2-IC  | 3    | 0.2 | 0.1  | 0.2 | 0.1  | 0.2  | 0.2  | 1
FB_BH  | ℓ2-IC  | 4    | 0.2 | 0.2  | 0.2 | 0.1  | 0.1  | 0.3  | 1
FB_BH  | ℓ2-MIC | 1    | 0.3 | 0.15 | 0.3 | 0.15 | 0.3  | 0.3  | 1
FB_BH  | ℓ2-MIC | 2    | 0.4 | 0.2  | 0.3 | 0.1  | 0.2  | 0.3  | 1
FB_BH  | ℓ2-MIC | 3    | 0.1 | 0.2  | 0.1 | 0.2  | 0.3  | 0.2  | 1
FB_BH  | ℓ2-MIC | 4    | 0.1 | 0.1  | 0.1 | 0.1  | 0.2  | 0.4  | 1
FB     | ℓ2-IC  | 1    | 0.3 | 0.15 | 0.3 | 0.15 | 0.3  | 0.3  | 1.4
FB     | ℓ2-IC  | 2    | 0.2 | 0.1  | 0.3 | 0.2  | 0.3  | 0.4  | 1.5
FB     | ℓ2-IC  | 3    | 0.3 | 0.2  | 0.3 | 0.1  | 0.2  | 0.2  | 1.8
FB     | ℓ2-IC  | 4    | 0.2 | 0.2  | 0.2 | 0.1  | 0.1  | 0.3  | 1.3
FB     | ℓ2-MIC | 1    | 0.5 | 0.4  | 0.5 | 0.4  | 0.3  | 0.3  | 1.4
FB     | ℓ2-MIC | 2    | 0.4 | 0.3  | 0.4 | 0.4  | 0.25 | 0.25 | 1.5
FB     | ℓ2-MIC | 3    | 0.3 | 0.2  | 0.3 | 0.2  | 0.2  | 0.2  | 1.8
FB     | ℓ2-MIC | 4    | 0.2 | 0.3  | 0.2 | 0.3  | 0.3  | 0.3  | 1.4
Table 5. Numerical results of the FB_BH algorithm and the FB algorithm with different parameters in terms of the PSNR, SSIM, and number of iterations (Iter). The first PSNR/SSIM/Iter triple corresponds to ε = 10^{-5}, the second to ε = 10^{-6}.

Method | Model  | Case | PSNR | SSIM | Iter | PSNR | SSIM | Iter
FB_BH  | ℓ2-IC  | 1 | 30.5386 | 0.8411 | 753  | 30.5405 | 0.8409 | 2846
FB_BH  | ℓ2-IC  | 2 | 30.5361 | 0.8411 | 824  | 30.5399 | 0.8409 | 3117
FB_BH  | ℓ2-IC  | 3 | 30.5379 | 0.8411 | 885  | 30.5397 | 0.8409 | 3277
FB_BH  | ℓ2-IC  | 4 | 30.5369 | 0.8409 | 818  | 30.5395 | 0.8408 | 3087
FB_BH  | ℓ2-MIC | 1 | 30.5386 | 0.8411 | 657  | 30.5411 | 0.8409 | 2481
FB_BH  | ℓ2-MIC | 2 | 30.5412 | 0.8389 | 534  | 30.5468 | 0.8393 | 1632
FB_BH  | ℓ2-MIC | 3 | 30.5253 | 0.8408 | 930  | 30.5364 | 0.8409 | 3603
FB_BH  | ℓ2-MIC | 4 | 30.5313 | 0.8410 | 1034 | 30.5375 | 0.8309 | 3875
FB     | ℓ2-IC  | 1 | 30.5387 | 0.8411 | 701  | 30.5408 | 0.8409 | 2640
FB     | ℓ2-IC  | 2 | 30.5382 | 0.8412 | 754  | 30.5419 | 0.8410 | 2514
FB     | ℓ2-IC  | 3 | 30.5389 | 0.8409 | 601  | 30.5407 | 0.8409 | 2242
FB     | ℓ2-IC  | 4 | 30.5377 | 0.8409 | 735  | 30.5398 | 0.8408 | 2725
FB     | ℓ2-MIC | 1 | 30.5442 | 0.8391 | 292  | 30.5475 | 0.8394 | 1047
FB     | ℓ2-MIC | 2 | 30.5435 | 0.8391 | 321  | 30.5474 | 0.8394 | 1140
FB     | ℓ2-MIC | 3 | 30.5447 | 0.8392 | 285  | 30.5476 | 0.8394 | 980
FB     | ℓ2-MIC | 4 | 30.5435 | 0.8391 | 372  | 30.5474 | 0.8394 | 1121
Table 6. Numerical results of the compared algorithms in terms of the PSNR, SSIM, and number of iterations (Iter).

Image    | Model  | σ_g | FBF_BH: PSNR / SSIM / Iter | FBHF: PSNR / SSIM / Iter
Castle   | ℓ2-IC  | 15 | 30.5330 / 0.8410 / 888  | 30.5336 / 0.8410 / 837
Castle   | ℓ2-IC  | 25 | 27.9370 / 0.7799 / 786  | 27.9376 / 0.7798 / 736
Castle   | ℓ2-IC  | 50 | 24.9407 / 0.7027 / 1364 | 24.9427 / 0.7026 / 1315
Castle   | ℓ2-MIC | 15 | 30.5414 / 0.8389 / 439  | 30.5422 / 0.8390 / 387
Castle   | ℓ2-MIC | 25 | 27.9352 / 0.7796 / 665  | 27.9379 / 0.7798 / 615
Castle   | ℓ2-MIC | 50 | 24.9332 / 0.7010 / 1169 | 24.9380 / 0.7014 / 1093
Building | ℓ2-IC  | 15 | 28.3617 / 0.8404 / 1365 | 28.3612 / 0.8404 / 1339
Building | ℓ2-IC  | 25 | 25.5939 / 0.7333 / 1020 | 25.5943 / 0.7333 / 971
Building | ℓ2-IC  | 50 | 22.6506 / 0.5558 / 1100 | 22.6510 / 0.5558 / 1045
Building | ℓ2-MIC | 15 | 28.3663 / 0.8405 / 492  | 28.3665 / 0.8405 / 431
Building | ℓ2-MIC | 25 | 25.5997 / 0.7332 / 623  | 25.6001 / 0.7332 / 570
Building | ℓ2-MIC | 50 | 22.6716 / 0.5565 / 1146 | 22.6724 / 0.5565 / 1070

Image    | Model  | σ_g | FB_BH: PSNR / SSIM / Iter | FB: PSNR / SSIM / Iter
Castle   | ℓ2-IC  | 15 | 30.5386 / 0.8411 / 753  | 30.5389 / 0.8409 / 601
Castle   | ℓ2-IC  | 25 | 27.9398 / 0.7799 / 691  | 27.9452 / 0.7797 / 548
Castle   | ℓ2-IC  | 50 | 24.9415 / 0.7027 / 1321 | 24.9485 / 0.7013 / 1039
Castle   | ℓ2-MIC | 15 | 30.5412 / 0.8389 / 534  | 30.5426 / 0.8391 / 398
Castle   | ℓ2-MIC | 25 | 27.9330 / 0.7795 / 734  | 27.9400 / 0.7799 / 601
Castle   | ℓ2-MIC | 50 | 24.9278 / 0.7707 / 1243 | 24.9418 / 0.7017 / 1085
Building | ℓ2-IC  | 15 | 28.3635 / 0.8405 / 1047 | 28.3634 / 0.8404 / 906
Building | ℓ2-IC  | 25 | 25.5952 / 0.7333 / 832  | 25.5956 / 0.7334 / 719
Building | ℓ2-IC  | 50 | 22.6051 / 0.5563 / 1018 | 22.6521 / 0.5564 / 849
Building | ℓ2-MIC | 15 | 28.3664 / 0.8405 / 623  | 28.3668 / 0.8405 / 420
Building | ℓ2-MIC | 25 | 25.5822 / 0.7329 / 584  | 25.6005 / 0.7322 / 554
Building | ℓ2-MIC | 50 | 22.6629 / 0.5577 / 1043 | 22.6653 / 0.5578 / 856
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
