
A Regularized Tseng Method for Solving Various Variational Inclusion Problems and Its Application to a Statistical Learning Model

Department of Mathematics, The Technion—Israel Institute of Technology, 3200003 Haifa, Israel
* Authors to whom correspondence should be addressed.
Axioms 2023, 12(11), 1037; https://doi.org/10.3390/axioms12111037
Submission received: 28 September 2023 / Revised: 30 October 2023 / Accepted: 30 October 2023 / Published: 6 November 2023
(This article belongs to the Section Hilbert’s Sixth Problem)

Abstract

We study three classes of variational inclusion problems in the framework of a real Hilbert space and propose a simple modification of Tseng’s forward-backward-forward splitting method for solving such problems. Our algorithm is obtained via a certain regularization procedure and uses self-adaptive step sizes. We show that the approximating sequences generated by our algorithm converge strongly to a solution of the problems under suitable assumptions on the regularization parameters. Furthermore, we apply our results to an elastic net penalty problem in statistical learning theory and to split feasibility problems. Moreover, we illustrate the usefulness and effectiveness of our algorithm by using numerical examples in comparison with some existing relevant algorithms that can be found in the literature.

1. Introduction

Variational inclusion problems have been widely studied because of their many valuable applications and generalizations. It is well known that many problems in applied sciences and engineering, mathematical optimization, machine learning, statistical learning, and optimal control can be modeled as variational inclusion problems. See, for example, [1,2,3] and references therein. In addition, under suitable assumptions, such problems encompass many important concepts in applied mathematics, such as convex minimization, split feasibility, fixed points, saddle points, and variational inequalities; see, for example, [4,5,6,7].
Let $H$ be a real Hilbert space, and let $S : H \to 2^{H}$ and $T : H \to H$ be maximal monotone and monotone operators, respectively. The variational inclusion problem (VIP) is to
find $u \in H$ such that $0 \in (T + S)u$. (1)
A foremost method proposed for solving (1) is the forward-backward splitting method (FBSM) [2]. The FBSM operates as follows:
$x_{n+1} = (I + \lambda S)^{-1}(x_n - \lambda T x_n) \quad \forall n \in \mathbb{N},$
where $\lambda \in \left(0, \frac{2}{L}\right)$ and $I : H \to H$ is the identity operator. The FBSM was proved to generate sequences that converge weakly to a solution of (1) under the assumption that $T$ is $\frac{1}{L}$-cocoercive (or inverse strongly monotone). Aside from the cocoercivity assumption, convergence is only guaranteed under a similar strong assumption such as the strong monotonicity of $S + T$ [8]. Interested readers could consult [9] for results on finding zeros of sums of maximal monotone operators using a similar forward-backward scheme.
As an improvement on the FBSM, Tseng [10] proposed the modified forward-backward splitting method (MFBSM) (also called the forward-backward-forward splitting method) for solving the VIP for a more general case, where T is monotone and L-Lipschitz continuous. The MFBSM has the following structure:
$y_n = (I + \lambda S)^{-1}(x_n - \lambda T x_n),$
$x_{n+1} = y_n - \lambda (T y_n - T x_n), \quad n \in \mathbb{N}.$
A weak convergence theorem was proved for this algorithm. While the implementation of the FBSM requires the prior knowledge of the Lipschitz constant of T, which makes the algorithm a bit restrictive, the MFBSM uses line search techniques to circumvent the onerous task of estimating the Lipschitz constant. Be that as it may, line search techniques are computationally expensive, as they require several extra computations per iteration; see, for example, [11]. For the inexact and the stochastic versions of the MFBSM, respectively, please see [12,13].
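To make the structure of the MFBSM concrete, the following MATLAB sketch runs Tseng's forward-backward-forward iteration with a simple backtracking line search on a small toy instance. The operator T, the set C (and hence the resolvent of S = N_C), the starting point, and the line-search parameters are all assumptions chosen purely for illustration and are not taken from the paper.

T    = @(x) [4 1; 1 3]*x - [1; 2];       % monotone affine operator T x = A x - b (A positive definite)
resS = @(x, lam) max(x, 0);              % resolvent of S = N_C for C = R_+^2, i.e., the projection onto C
x     = [5; -3];                          % arbitrary starting point
gamma0 = 1; ell = 0.5; mu = 0.9;          % line-search parameters (assumptions)
for n = 1:200
    lam = gamma0;
    y = resS(x - lam*T(x), lam);
    % backtrack until lam*||T x - T y|| <= mu*||x - y|| (Tseng's criterion)
    while norm(x - y) > 0 && lam*norm(T(x) - T(y)) > mu*norm(x - y)
        lam = ell*lam;
        y = resS(x - lam*T(x), lam);
    end
    xnew = y - lam*(T(y) - T(x));         % forward-backward-forward correction
    if norm(xnew - x) < 1e-10, x = xnew; break; end
    x = xnew;
end
disp(x')                                  % approximate solution of 0 in (T + S)x

The backtracking loop is exactly the extra per-iteration cost that the self-adaptive step sizes discussed below are designed to avoid.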
Motivated by the need to reduce the computational burden associated with the line search technique, and by the fact that strong convergence is more desirable than weak convergence in infinite dimensional Hilbert spaces and in applications, some authors have incorporated existing hybrid techniques in the MFBSM and have proposed hybrid-like strongly convergent methods with self-adaptive step sizes. See, for instance, [11,14] and references therein.
Let $S : H \to 2^{H}$ and $T_i : H \to H$ be maximal monotone and monotone operators, respectively, where $i \in [I] := \{1, 2, \ldots, I\}$. We recall the modified variational inclusion problem (MVIP) introduced in [15]:
find $u \in H$ such that $0 \in \Big(\sum_{i \in [I]} a_i T_i + S\Big)u$, (2)
where $a_i \in (0,1)$ and $\sum_{i \in [I]} a_i = 1$. Obviously, the MVIP is more general than the VIP in the sense that if $T_i = T$ for all $i \in [I]$, then the MVIP becomes the VIP. It is easy to see that if $x \in \bigcap_{i \in [I]} (T_i + S)^{-1}(0)$, then $0 \in \big(\sum_{i \in [I]} a_i T_i + S\big)x$. However, the converse is not true in general.
In addition, the MVIP is more general than the following common variational inclusion problem, which has recently been studied in [16]:
find $u \in H$ such that $0 \in \bigcap_{i \in [I]} (T_i + S)u$. (3)
In view of its generality, the MVIP has recently attracted the attention of some authors who studied it and proposed algorithms for solving it. The authors of [17] studied the problem for the case where the $T_i$'s are inverse strongly monotone. They proposed a Halpern-inertial forward-backward splitting algorithm for solving the problem and proved a strong convergence theorem. Moreover, the authors of [18] studied the MVIP in the case where the $T_i$'s are monotone and Lipschitz continuous and designed a modified Tseng method for solving it. By using some symmetry properties, they proved the weak convergence of the method they proposed. Furthermore, they provided a formulation of an image deblurring problem as an MVIP and used their results in order to solve this problem.
On the other hand, motivated by the work [16], the authors of [19] have recently studied the following common variational inclusion problem:
find $u \in H$ such that $0 \in \bigcap_{i \in [I]} (T_i + S_i)u$, (4)
where $S_i : H \to 2^{H}$ are maximal monotone operators and $T_i : H \to H$ are monotone and Lipschitz continuous operators. They combined the inertial technique, Tseng's method, and the shrinking projection method to design an iterative algorithm that solves (4).
Motivated by the above studies, in this paper, we propose a unified simple modification of the MFBSM, which we call the regularized MFBSM (RMFBSM), for solving (2)–(4) and establish a strong convergence theorem for the sequences it generates. To realize our objectives, we first examine a regularized MVIP and study the solution nets it generates. The novelty of our scheme lies in the fact that, unlike the existing modifications of the MFBSM in the literature, it yields strong convergence while preserving the two-step structure of the MFBSM. In the case where $T_i = T$ and $S_i = S$ for all $i = 1, 2, \ldots, I$, we apply our result to a statistical learning model and to split feasibility problems.
The organization of our paper is as follows: In Section 2, we recall some useful definitions and preliminary results, which are needed in our study and in the convergence analysis of our algorithm. In Section 3, we propose the regularized MFBSM and establish a strong convergence theorem for it. In Section 4, we give some applications of our main results. In Section 5, we present numerical examples to illustrate our method and compare it with some existing related algorithms in the literature. We conclude with Section 6.

2. Preliminaries

We start this section by stating some notations and recalling a number of important definitions.
Let $C$ be a nonempty, closed, and convex subset of a real Hilbert space $H$, the inner product and induced norm of which are denoted by $\langle \cdot, \cdot \rangle$ and $\|\cdot\|$, respectively. We denote by '$u_n \rightharpoonup u$' and '$u_n \to u$' the weak and the strong convergence, respectively, of the sequence $\{u_n\}$ to a point $u$. The following identity is well known:
$\|u \pm v\|^2 = \|u\|^2 \pm 2\langle u, v\rangle + \|v\|^2 \quad \forall u, v \in H.$
Definition 1
([1,20]). A mapping $T : H \to H$ is said to be
 (i) $L$-Lipschitz continuous if there exists a constant $L > 0$ such that
$\|Tu - Tv\| \le L\|u - v\| \quad \forall u, v \in H;$
 (ii) $\beta$-strongly monotone if there exists a constant $\beta > 0$ such that
$\langle Tu - Tv, u - v\rangle \ge \beta\|u - v\|^2 \quad \forall u, v \in H;$
 (iii) $\beta$-inverse strongly monotone if there exists a constant $\beta > 0$ such that
$\langle Tu - Tv, u - v\rangle \ge \beta\|Tu - Tv\|^2 \quad \forall u, v \in H;$
 (iv) monotone if
$\langle Tu - Tv, u - v\rangle \ge 0 \quad \forall u, v \in H;$
 (v) hemicontinuous if for every $u, v, w \in H$, we have
$\lim_{\alpha \to 0} \langle T(u + \alpha v), w\rangle = \langle Tu, w\rangle.$
Remark 1.
In view of the above definitions, it is clear that every strongly monotone mapping is monotone. In addition, Lipschitz continuous mappings are hemicontinuous.
Let $S : H \to 2^{H}$ be a set-valued operator. The graph of $S$, denoted by $gr(S)$, is defined by
$gr(S) := \{(u, v) \in H \times H : v \in Su\}.$
The set-valued operator $S$ is called a monotone operator if $\langle u - y, v - z\rangle \ge 0$ for all $(u, v), (y, z) \in gr(S)$, and a maximal monotone operator if the graph of $S$ is not a proper subset of the graph of any other monotone operator. For a maximal monotone operator $S$ and $\lambda > 0$, the resolvent of $S$ is defined by
$J_\lambda^S := (I + \lambda S)^{-1} : H \to H.$
It is well known that $J_\lambda^S$ is firmly nonexpansive (in particular, nonexpansive).
For each $u \in H$, there exists a unique nearest element of $C$, denoted by $P_C u \in C$ and called the metric projection of $H$ onto $C$ at $u$. That is,
$\|u - P_C u\| \le \|u - v\| \quad \forall v \in C.$
The indicator function of $C$, denoted by $i_C$, is defined by
$i_C(u) := \begin{cases} 0, & \text{if } u \in C, \\ +\infty, & \text{if } u \notin C. \end{cases}$
Recall that the subdifferential $\partial f$ of a proper convex function $f$ at $u \in H$ is defined by
$\partial f(u) := \{z \in H : f(u) + \langle z, y - u\rangle \le f(y) \ \forall y \in H\}.$
The normal cone of $C$ at the point $u \in H$, denoted by $N_C(u)$, is defined by
$N_C(u) := \begin{cases} \{x \in H : \langle x, y - u\rangle \le 0 \ \forall y \in C\}, & \text{if } u \in C, \\ \emptyset, & \text{otherwise}. \end{cases}$
We know that $\partial i_C$ is a maximal monotone operator, and we have $\partial i_C = N_C$. Furthermore, for each $\lambda > 0$,
$P_C u = (I + \lambda \partial i_C)^{-1} u.$ (5)
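As a quick sanity check of identity (5), the following MATLAB snippet verifies, for the interval C = [0,1] and a few test points and values of lambda (all of them assumptions made purely for illustration), that v = P_C u satisfies the resolvent inclusion u - v in lambda*N_C(v), so that (I + lambda*di_C)^{-1} u = P_C u independently of lambda.

PC = @(u) min(max(u, 0), 1);                      % projection onto C = [0,1]
for u = [-0.3, 0.4, 1.8]
    for lam = [0.1, 1, 10]
        v = PC(u);                                % candidate value of (I + lam*di_C)^{-1} u
        % N_C(v) is (-inf,0] at v = 0, {0} on (0,1), and [0,inf) at v = 1,
        % so u - v must lie in lam*N_C(v):
        ok = (v > 0 && v < 1 && abs(u - v) < 1e-12) || (v == 0 && u <= 0) || (v == 1 && u >= 1);
        fprintf('u = %5.2f, lambda = %5.2f, P_C(u) = %4.2f, inclusion holds: %d\n', u, lam, v, ok);
    end
end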
The following important lemmata are useful in our convergence analysis.
Lemma 1
([21]). Let $H$ be a real Hilbert space. Let $S : H \to 2^{H}$ be a maximal monotone operator and let $T : H \to H$ be a monotone and Lipschitz continuous mapping. Then, the mapping $M = S + T$ is a maximal monotone mapping.
Lemma 2
([22]). Let $H$ be a real Hilbert space. Let $K$ be a nonempty, closed, and convex subset of $H$, and let $F : H \to H$ be a hemicontinuous and monotone operator. Then, $\bar{u}$ is a solution to the variational inequality
find $\bar{u} \in K$ such that $\langle F\bar{u}, u - \bar{u}\rangle \ge 0 \quad \forall u \in K$
if and only if $\bar{u}$ is a solution to the following problem:
find $\bar{u} \in K$ such that $\langle Fu, u - \bar{u}\rangle \ge 0 \quad \forall u \in K.$
Lemma 3
([23]). Let $H$ be a real Hilbert space. Suppose that $F : H \to H$ is $\kappa$-Lipschitzian and $\beta$-strongly monotone over a closed and convex subset $K \subset H$. Then, the variational inequality problem
find $\bar{u} \in K$ such that $\langle F\bar{u}, v - \bar{u}\rangle \ge 0 \quad \forall v \in K$
has a unique solution $\bar{u} \in K$.
Lemma 4
([24]). Let $\{\Psi_n\}$ be a sequence of non-negative real numbers, $\{a_n\}$ be a sequence of real numbers in $(0,1)$ satisfying the condition $\sum_{n=1}^{\infty} a_n = \infty$, and $\{b_n\}$ be a sequence of real numbers. Assume that
$\Psi_{n+1} \le (1 - a_n)\Psi_n + a_n b_n, \quad n \ge 1.$
If $\limsup_{k \to \infty} b_{n_k} \le 0$ for every subsequence $\{\Psi_{n_k}\}$ of $\{\Psi_n\}$ satisfying $\liminf_{k \to \infty}(\Psi_{n_k+1} - \Psi_{n_k}) \ge 0$, then $\lim_{n \to \infty}\Psi_n = 0$.

3. Regularized Modified Forward-Backward Splitting Method

Let $S : H \to 2^{H}$ be a maximal monotone operator, $T_i : H \to H$ be a monotone and $L_i$-Lipschitz continuous operator for $i \in [I] := \{1, 2, \ldots, I\}$, and $a_i \in (0,1)$ be such that $\sum_{i \in [I]} a_i = 1$. We denote the solution set of problem (2) by $\Omega$, that is,
$\Omega := \Big\{u \in H : 0 \in \Big(\sum_{i \in [I]} a_i T_i + S\Big)u\Big\},$
and assume that $\Omega \neq \emptyset$. Let $F : H \to H$ be a $\gamma$-strongly monotone and $L$-Lipschitz continuous operator. We are seeking a solution $x^* \in H$ such that
$x^* \in \Omega$ and $\langle Fx^*, u - x^*\rangle \ge 0 \quad \forall u \in \Omega$. (6)
We denote the solution set of (6) by $\tilde{\Omega}$. In this connection, we consider the following regularized modified variational inclusion problem (RMVIP):
find $u \in H$ such that $0 \in \Big(\sum_{i \in [I]} a_i T_i + S\Big)u + \tau F u$, (7)
where $\tau > 0$ is a regularization parameter. Observe that by Lemma 1, $\sum_{i \in [I]} a_i T_i + S$ is a maximal monotone operator. In addition, $\big(\sum_{i \in [I]} a_i T_i + S\big) + \tau F$ is strongly monotone because $F$ is strongly monotone. Therefore, for each $\tau > 0$, problem (7) possesses a unique solution, which we denote by $u_\tau$. The following result concerning the solution net $\{u_\tau\}_{\tau \in (0,1)}$ can be deduced from Lemma 2 of [25] (see also Proposition 3.1 of [7]), but we give the proof for completeness.
Proposition 1.
Let $\{u_\tau\}_{\tau \in (0,1)}$ be the solution net of (7). Then,
 (a) $\{u_\tau\}_{\tau \in (0,1)}$ is bounded;
 (b) if $u_{\tau_j} \in \{u_\tau\}_{\tau \in (0,1)}$, $j = 1, 2$, then $\|u_{\tau_1} - u_{\tau_2}\| \le \frac{|\tau_1 - \tau_2|}{\tau_1}M$, where $M > 0$ is a constant;
 (c) $\lim_{\tau \to 0^+} u_\tau$ exists and belongs to $\tilde{\Omega}$.
Proof. 
(a) Let $u_\tau$ be a solution to (7). Then we have
$-\tau F u_\tau - \sum_{i \in [I]} a_i T_i u_\tau \in S u_\tau.$ (8)
Choose $u \in \Omega$. Then,
$-\sum_{i \in [I]} a_i T_i u \in S u.$ (9)
Using (8), (9), and the monotonicity of $S$, we obtain
$\Big\langle -\tau F u_\tau - \sum_{i \in [I]} a_i T_i u_\tau + \sum_{i \in [I]} a_i T_i u, \; u_\tau - u \Big\rangle \ge 0.$ (10)
Since each $T_i$ is monotone, it follows from (10) that
$\tau \langle F u_\tau, u - u_\tau \rangle \ge \sum_{i \in [I]} a_i \langle T_i u_\tau - T_i u, u_\tau - u \rangle \ge 0.$ (11)
Using (11) and the $\gamma$-strong monotonicity of $F$, we find that
$\gamma\|u - u_\tau\|^2 \le \langle F u - F u_\tau, u - u_\tau \rangle \le \langle F u, u - u_\tau \rangle \le \|F u\|\,\|u - u_\tau\|.$ (12)
It now follows from (12) that $\|u - u_\tau\| \le \frac{\|F u\|}{\gamma}$ and hence $\{u_\tau\}_{\tau \in (0,1)}$ is bounded.
(b) Let $u_{\tau_j} \in \{u_\tau\}_{\tau \in (0,1)}$, $j = 1, 2$. Then,
$0 \in \Big(\sum_{i \in [I]} a_i T_i + S\Big)u_{\tau_j} + \tau_j F u_{\tau_j}.$ (13)
Using the monotonicity of $\big(\sum_{i \in [I]} a_i T_i + S\big)$ and (13), we find that
$\big\langle -\tau_1 F u_{\tau_1} + \tau_2 F u_{\tau_1} - \tau_2 F u_{\tau_1} + \tau_2 F u_{\tau_2}, \; u_{\tau_1} - u_{\tau_2} \big\rangle = \big\langle -\tau_1 F u_{\tau_1} + \tau_2 F u_{\tau_2}, \; u_{\tau_1} - u_{\tau_2} \big\rangle \ge 0.$ (14)
Using (14), we see that
$(\tau_2 - \tau_1)\langle F u_{\tau_1}, u_{\tau_1} - u_{\tau_2} \rangle \ge \tau_2\langle F u_{\tau_1} - F u_{\tau_2}, u_{\tau_1} - u_{\tau_2} \rangle \ge \tau_2\gamma\|u_{\tau_1} - u_{\tau_2}\|^2.$ (15)
Therefore, it follows from (15) that
$\tau_2\gamma\|u_{\tau_1} - u_{\tau_2}\|^2 \le (\tau_2 - \tau_1)\langle F u_{\tau_1}, u_{\tau_1} - u_{\tau_2} \rangle \le |\tau_2 - \tau_1|\,\|F u_{\tau_1}\|\,\|u_{\tau_1} - u_{\tau_2}\|.$ (16)
Using (16), we obtain
$\|u_{\tau_1} - u_{\tau_2}\| \le \frac{|\tau_2 - \tau_1|\,\|F u_{\tau_1}\|}{\tau_2\gamma} = \frac{|\tau_2 - \tau_1|\,M}{\tau_2},$
where $M = \frac{\|F u_{\tau_1}\|}{\gamma}$.
(c) The boundedness of the net $\{u_\tau\}_{\tau \in (0,1)}$ implies that there exists a subsequence $\{u_{\tau_n}\}$ of $\{u_\tau\}$ such that $u_{\tau_n} \rightharpoonup \bar{u}$ as $n \to \infty$, where $(\tau_n)_{n \in \mathbb{N}}$ is a sequence in $(0,1)$ such that $\tau_n \to 0^+$ as $n \to \infty$. Note that $-\tau_n F u_{\tau_n} \in \big(\sum_{i \in [I]} a_i T_i + S\big)u_{\tau_n}$ for each $n$. Since the operator $\sum_{i \in [I]} a_i T_i + S$ is maximal monotone and hence its graph is sequentially closed in the weak-strong topology on $H \times H$, letting $n \to \infty$, we infer that $0 \in \big(\sum_{i \in [I]} a_i T_i + S\big)\bar{u}$. Therefore, $\bar{u} \in \Omega$. In addition, $0 \in \big(\sum_{i \in [I]} a_i T_i + S\big)x^*$ and $\langle F x^*, u - x^* \rangle \ge 0$ for all $u \in \Omega$. It then follows that
$\Big\langle -\tau_n F u_{\tau_n} - \sum_{i \in [I]} a_i T_i u_{\tau_n} + \sum_{i \in [I]} a_i T_i x^*, \; u_{\tau_n} - x^* \Big\rangle \ge 0.$ (17)
It now follows from (17) and the monotonicity of $T_i$ that
$-\tau_n\langle F u_{\tau_n}, u_{\tau_n} - x^* \rangle \ge \sum_{i \in [I]} a_i \langle T_i u_{\tau_n} - T_i x^*, u_{\tau_n} - x^* \rangle \ge 0.$ (18)
Using (18), we conclude that
$\langle F u_{\tau_n} - F x^*, u_{\tau_n} - x^* \rangle + \langle F x^*, u_{\tau_n} - x^* \rangle = \langle F u_{\tau_n}, u_{\tau_n} - x^* \rangle \le 0.$ (19)
Moreover, using (19) and the $\gamma$-strong monotonicity of $F$, we see that
$\gamma\|u_{\tau_n} - x^*\|^2 \le \langle F u_{\tau_n} - F x^*, u_{\tau_n} - x^* \rangle \le \langle F x^*, x^* - u_{\tau_n} \rangle.$ (20)
Therefore, using (20), we have
$\langle F x^*, x^* - u_{\tau_n} \rangle \ge 0.$ (21)
Passing to the limit in (21) as $n \to \infty$, we find that
$\langle F x^*, x^* - \bar{u} \rangle \ge 0 \quad \forall x^* \in \Omega, \; \bar{u} \in \Omega,$
and using Lemma 2, we arrive at the conclusion that $\bar{u}$ solves (6). The uniqueness of the solution to $\langle F x^*, u - x^* \rangle \ge 0$, according to Lemma 3, implies that $\bar{u} = x^*$. Thus, passing to the limit as $n \to \infty$ in (20), we conclude that $u_{\tau_n} \to x^*$ as $n \to \infty$.    □
Following an approach similar to the above analysis, we denote the solution set of problem (4) by $\Omega_2$ and consider the corresponding regularized common variational inclusion problem:
find $u \in H$ such that $0 \in (T_i + S_i)u + \tau F u$ for all $i \in [I]$. (22)
If $\{u_\tau\}_{\tau \in (0,1)}$ is the solution net of (22), then Proposition 1 (a) and (b) hold, and for (c), we see that $\lim_{\tau \to 0^+} u_\tau$ exists and belongs to $\tilde{\Omega}_2 := \{x^* \in \Omega_2 : \langle F x^*, u^* - x^* \rangle \ge 0 \ \forall u^* \in \Omega_2\}$.
We make the following assumptions in connection with our Algorithm 1.
Assumption 1.
 (A1) $\tau_n \in (0,1)$, $\lim_{n \to \infty}\tau_n = 0$, and $\sum_{n=1}^{\infty}\tau_n = \infty$;
 (A2) $\lim_{n \to \infty}\frac{|\tau_{n+1} - \tau_n|}{\tau_n^2} = 0$;
 (A3) $\{\mu_n\}$ and $\{\rho_n\}$ are two real sequences satisfying $\sum_{n=1}^{\infty}\mu_n < \infty$ and $\sum_{n=1}^{\infty}\rho_n < \infty$, respectively.
We next present a unified algorithm for solving the aforementioned problems.
Algorithm 1: Regularized Modified Forward-Backward Splitting Method (RMFBSM)
  Initialization: Let $u_1 \in H$, $\mu \in (0,1)$, and $\lambda_1 > 0$ be given.
  Iterative steps: Given $u_n$, calculate $u_{n+1}$ as follows:
  Step 1: Compute
$y_n^i = J_{\lambda_n}^{S_i}(u_n - \lambda_n T_i u_n - \lambda_n \tau_n F u_n), \quad i \in [I].$
  Step 2: Find $i_n := \arg\max_{i \in [I]}\{\|y_n^i - u_n\|\}$.
  Step 3: Compute
$u_{n+1} = y_n^{i_n} - \lambda_n(T_{i_n} y_n^{i_n} - T_{i_n} u_n).$
  Update
$\lambda_{n+1} = \begin{cases} \min\Big\{\lambda_n + \rho_n, \dfrac{(\mu + \mu_n)\|y_n^{i_n} - u_n\|}{\|T_{i_n} y_n^{i_n} - T_{i_n} u_n\|}\Big\}, & \text{if } T_{i_n} y_n^{i_n} \neq T_{i_n} u_n, \\ \lambda_n + \rho_n, & \text{otherwise}. \end{cases}$
  Set $n := n + 1$ and go back to Step 1.
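For readers who would like to experiment with Algorithm 1, the following MATLAB function is one possible minimal realization of the iteration. All names (rmfbsm, Tops, resS, Fop, tauf, rhof, muf) and the stopping rule based on successive iterates are our own assumptions for this sketch; it is not the exact code used for the experiments reported later in the paper.

function u = rmfbsm(u, Tops, resS, Fop, mu, lambda, tauf, rhof, muf, maxit, tol)
% Minimal sketch of Algorithm 1 (RMFBSM). Tops and resS are cell arrays of
% function handles: Tops{i}(u) evaluates the monotone Lipschitz operator T_i,
% and resS{i}(v, lambda) evaluates the resolvent J_lambda^{S_i}(v). Fop is the
% strongly monotone operator F, tauf(n) returns tau_n, and rhof(n), muf(n)
% return the summable perturbations rho_n and mu_n.
I = numel(Tops);
for n = 1:maxit
    tau = tauf(n);
    Y = cell(I, 1); res = zeros(I, 1);
    for i = 1:I                                           % Step 1
        Y{i} = resS{i}(u - lambda*Tops{i}(u) - lambda*tau*Fop(u), lambda);
        res(i) = norm(Y{i} - u);
    end
    [~, in] = max(res);                                   % Step 2
    y = Y{in};
    unew = y - lambda*(Tops{in}(y) - Tops{in}(u));        % Step 3
    d = norm(Tops{in}(y) - Tops{in}(u));                  % self-adaptive step-size update
    if d > 0
        lambda = min(lambda + rhof(n), (mu + muf(n))*norm(y - u)/d);
    else
        lambda = lambda + rhof(n);
    end
    if norm(unew - u) < tol, u = unew; return; end
    u = unew;
end
end

No Lipschitz constants and no line search enter the update; the step size adapts itself from the observed operator differences.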
Remark 2.
If $T_i = T$, $S_i = S$, and $\rho_n = 0 = \mu_n$ in Algorithm 1, and $d_n = 0$ in Algorithm 1 of Hieu et al. [25], then the two algorithms are the same. However, the problems we intend to solve are more general than the problem (VIP) studied by Hieu et al. The sequence of step sizes in Algorithm 1 is convergent, as shown in the next lemma.
Lemma 5.
The sequence $\{\lambda_n\}$ generated by Algorithm 1 is bounded, and $\lambda_n \in \big[\min\{\frac{\mu}{L_{i_n}}, \lambda_1\}, \lambda_1 + P\big]$. Moreover, there exists $\lambda \in \big[\min\{\frac{\mu}{L_{i_n}}, \lambda_1\}, \lambda_1 + P\big]$ such that $\lim_{n \to \infty}\lambda_n = \lambda$, where $P = \sum_{n=1}^{\infty}\rho_n$.
Proof. 
The proof follows from Lemma 2.1 of [26].    □
Lemma 6.
Let $\{u_n\}$ be the sequence generated by Algorithm 1. Then, there exists $n_0 \in \mathbb{N}$ such that
$\|u_{n+1} - u_{\tau_n}\|^2 \le (1 - \gamma\lambda_n\tau_n)\|u_n - u_{\tau_n}\|^2 - (1 - \mu)\|y_n^i - u_n\|^2 \quad \forall i \in [I], \; n \ge n_0,$ (23)
where $u_{\tau_n}$ solves the RMVIP (7) with $\tau$ replaced by $\tau_n$.
Proof. 
Using the definition of $y_n^i$, we obtain
$u_n - \lambda_n T_i u_n - \lambda_n \tau_n F u_n \in y_n^i + \lambda_n S_i y_n^i,$ (24)
from which it follows that
$u_n - \lambda_n(T_i u_n - T_i y_n^i) - \lambda_n \tau_n F u_n - y_n^i \in \lambda_n(T_i + S_i)y_n^i \quad \text{for each } i.$ (25)
Employing (24) and (25), in particular for $i_n$, we obtain
$u_n - u_{n+1} - \lambda_n \tau_n F u_n \in \lambda_n(T_{i_n} + S_{i_n})y_n^{i_n}.$ (26)
In addition,
$-\lambda_n \tau_n F u_{\tau_n} \in \lambda_n(T_{i_n} + S_{i_n})u_{\tau_n}.$ (27)
Therefore, it follows from (26) and (27) that
$\langle u_n - u_{n+1}, y_n^{i_n} - u_{\tau_n} \rangle - \lambda_n\tau_n\langle F u_n - F u_{\tau_n}, y_n^{i_n} - u_{\tau_n} \rangle \ge 0.$ (28)
Using the fact that $F$ is $\gamma$-strongly monotone, we infer from (28) that
$\langle u_n - y_n^{i_n}, y_n^{i_n} - u_{\tau_n} \rangle - \langle u_{n+1} - y_n^{i_n}, y_n^{i_n} - u_{\tau_n} \rangle \ge \lambda_n\tau_n\langle F u_n - F u_{\tau_n}, y_n^{i_n} - u_n \rangle + \lambda_n\tau_n\langle F u_n - F u_{\tau_n}, u_n - u_{\tau_n} \rangle \ge \lambda_n\tau_n\langle F u_n - F u_{\tau_n}, y_n^{i_n} - u_n \rangle + \lambda_n\tau_n\gamma\|u_n - u_{\tau_n}\|^2.$ (29)
Note that
$\langle u_n - y_n^{i_n}, y_n^{i_n} - u_{\tau_n} \rangle = \tfrac{1}{2}\big(\|u_n - u_{\tau_n}\|^2 - \|u_n - y_n^{i_n}\|^2 - \|y_n^{i_n} - u_{\tau_n}\|^2\big)$ (30)
and
$\langle u_{n+1} - y_n^{i_n}, y_n^{i_n} - u_{\tau_n} \rangle = \tfrac{1}{2}\big(\|u_{n+1} - u_{\tau_n}\|^2 - \|u_{n+1} - y_n^{i_n}\|^2 - \|y_n^{i_n} - u_{\tau_n}\|^2\big).$ (31)
Substituting (30) and (31) in (29) and multiplying throughout by 2, we obtain
$\|u_{n+1} - u_{\tau_n}\|^2 \le \|u_n - u_{\tau_n}\|^2 - \|u_n - y_n^{i_n}\|^2 + \|u_{n+1} - y_n^{i_n}\|^2 - 2\lambda_n\tau_n\langle F u_n - F u_{\tau_n}, y_n^{i_n} - u_n \rangle - 2\lambda_n\tau_n\gamma\|u_n - u_{\tau_n}\|^2 = (1 - 2\lambda_n\tau_n\gamma)\|u_n - u_{\tau_n}\|^2 - 2\lambda_n\tau_n\langle F u_n - F u_{\tau_n}, y_n^{i_n} - u_n \rangle - \|u_n - y_n^{i_n}\|^2 + \|u_{n+1} - y_n^{i_n}\|^2.$ (32)
Using (24), the definition of $u_{n+1}$ together with the step-size rule, and the Peter-Paul inequality, as well as (32), for any $\epsilon_1 > 0$, we have
$\|u_{n+1} - u_{\tau_n}\|^2 \le (1 - 2\lambda_n\tau_n\gamma)\|u_n - u_{\tau_n}\|^2 + \lambda_n\tau_n L\epsilon_1\|u_n - u_{\tau_n}\|^2 + \frac{\lambda_n\tau_n L}{\epsilon_1}\|y_n^{i_n} - u_n\|^2 - \|u_n - y_n^{i_n}\|^2 + \lambda_n^2\|T_{i_n} y_n^{i_n} - T_{i_n} u_n\|^2 \le \big(1 - \lambda_n\tau_n(2\gamma - L\epsilon_1)\big)\|u_n - u_{\tau_n}\|^2 - \Big(1 - \frac{\lambda_n\tau_n L}{\epsilon_1} - \frac{\lambda_n^2(\mu + \mu_n)^2}{\lambda_{n+1}^2}\Big)\|y_n^{i_n} - u_n\|^2.$ (33)
Choose $\epsilon_1 = \frac{\gamma}{L}$. Observe that from Lemma 5 and Assumption 1 (A1) and (A3),
$\lim_{n \to \infty}\Big(1 - \frac{\lambda_n\tau_n L^2}{\gamma} - \frac{\lambda_n^2(\mu + \mu_n)^2}{\lambda_{n+1}^2}\Big) = 1 - \mu^2 > 1 - \mu > 0.$ (34)
Therefore, there exists $n_0 \in \mathbb{N}$ such that $1 - \frac{\lambda_n\tau_n L^2}{\gamma} - \frac{\lambda_n^2(\mu + \mu_n)^2}{\lambda_{n+1}^2} > 1 - \mu$ for all $n \ge n_0$. Thus, it follows from (33) that
$\|u_{n+1} - u_{\tau_n}\|^2 \le (1 - \gamma\lambda_n\tau_n)\|u_n - u_{\tau_n}\|^2 - (1 - \mu)\|y_n^{i_n} - u_n\|^2 \le (1 - \gamma\lambda_n\tau_n)\|u_n - u_{\tau_n}\|^2 - (1 - \mu)\|y_n^i - u_n\|^2 \quad \forall i \in [I], \; n \ge n_0,$
as asserted.    □
Theorem 1.
The sequences generated by Algorithm 1 converge strongly to $x^* \in \tilde{\Omega}_2$.
Proof. 
Invoking Proposition 1(b) and the Peter-Paul inequality, we see that there exists $\epsilon_2 > 0$ such that
$\|u_{n+1} - u_{\tau_n}\|^2 = \|u_{n+1} - u_{\tau_{n+1}} + u_{\tau_{n+1}} - u_{\tau_n}\|^2 = \|u_{n+1} - u_{\tau_{n+1}}\|^2 + 2\langle u_{n+1} - u_{\tau_{n+1}}, u_{\tau_{n+1}} - u_{\tau_n}\rangle + \|u_{\tau_{n+1}} - u_{\tau_n}\|^2 \ge \|u_{n+1} - u_{\tau_{n+1}}\|^2 - \epsilon_2\|u_{n+1} - u_{\tau_{n+1}}\|^2 - \frac{\|u_{\tau_{n+1}} - u_{\tau_n}\|^2}{\epsilon_2} + \|u_{\tau_{n+1}} - u_{\tau_n}\|^2 = (1 - \epsilon_2)\|u_{n+1} - u_{\tau_{n+1}}\|^2 - \Big(\frac{1 - \epsilon_2}{\epsilon_2}\Big)\|u_{\tau_{n+1}} - u_{\tau_n}\|^2 \ge (1 - \epsilon_2)\|u_{n+1} - u_{\tau_{n+1}}\|^2 - \frac{1 - \epsilon_2}{\epsilon_2}\Big(\frac{\tau_{n+1} - \tau_n}{\tau_n}\Big)^2 M^2.$ (35)
Next, we define the sequence $\{\beta_n\}$ by
$\beta_n := \begin{cases} \tau_n, & \text{if } \gamma\lambda_n > 1, \\ \gamma\lambda_n\tau_n, & \text{if } \gamma\lambda_n \le 1. \end{cases}$
Then, from Lemma 6 and (35), it follows that for all $i \in [I]$ and for $n \ge n_0$,
$\|u_{n+1} - u_{\tau_{n+1}}\|^2 \le \frac{1 - \beta_n}{1 - \epsilon_2}\|u_n - u_{\tau_n}\|^2 + \frac{1}{\epsilon_2}\Big(\frac{\tau_{n+1} - \tau_n}{\tau_n}\Big)^2 M^2 - \frac{1 - \mu}{1 - \epsilon_2}\|y_n^i - u_n\|^2.$ (36)
Let $\epsilon_2 = 0.5\beta_n$. Then, from (36), it follows that
$\|u_{n+1} - u_{\tau_{n+1}}\|^2 \le \Big(1 - \frac{\epsilon_2}{1 - \epsilon_2}\Big)\|u_n - u_{\tau_n}\|^2 + \frac{\epsilon_2}{1 - \epsilon_2}\cdot\frac{1 - \epsilon_2}{\epsilon_2^2}\Big(\frac{\tau_{n+1} - \tau_n}{\tau_n}\Big)^2 M^2 - \frac{1 - \mu}{1 - \epsilon_2}\|y_n^i - u_n\|^2 = \Big(1 - \frac{\epsilon_2}{1 - \epsilon_2}\Big)\|u_n - u_{\tau_n}\|^2 + \frac{\epsilon_2}{1 - \epsilon_2}\cdot\frac{4(1 - \epsilon_2)}{\beta_n^2}\Big(\frac{\tau_{n+1} - \tau_n}{\tau_n}\Big)^2 M^2 - \frac{1 - \mu}{1 - \epsilon_2}\|y_n^i - u_n\|^2, \quad n \ge n_0.$ (37)
Let $\Theta_n = \|u_n - u_{\tau_n}\|^2$, $\delta_n = \frac{\epsilon_2}{1 - \epsilon_2}$, and
$b_n = \frac{4(1 - \epsilon_2)}{\beta_n^2}\Big(\frac{\tau_{n+1} - \tau_n}{\tau_n}\Big)^2 M^2.$
We see that $\delta_n \in (0,1)$ and $\sum_{n=1}^{\infty}\delta_n = \infty$. Then, from (37), we obtain
$\Theta_{n+1} \le (1 - \delta_n)\Theta_n + \delta_n b_n - \frac{1 - \mu}{1 - \epsilon_2}\|y_n^i - u_n\|^2, \quad n \ge n_0.$ (38)
It is not difficult to see that the sequence $\{b_n\}$ is bounded. To complete the proof, we invoke Lemma 4 by showing that $\limsup_{k \to \infty} b_{n_k} \le 0$ for every subsequence $\{\Theta_{n_k}\}$ of $\{\Theta_n\}$ that satisfies $\liminf_{k \to \infty}(\Theta_{n_k+1} - \Theta_{n_k}) \ge 0$. Therefore, let $\{\Theta_{n_k}\}$ be a subsequence of $\{\Theta_n\}$ such that $\liminf_{k \to \infty}(\Theta_{n_k+1} - \Theta_{n_k}) \ge 0$. It then follows from (38) that
$\limsup_{k \to \infty}\frac{1 - \mu}{1 - \epsilon_2}\|y_{n_k}^i - u_{n_k}\|^2 \le \limsup_{k \to \infty}\big[(\Theta_{n_k} - \Theta_{n_k+1}) + \delta_{n_k}(b_{n_k} - \Theta_{n_k})\big] \le -\liminf_{k \to \infty}(\Theta_{n_k+1} - \Theta_{n_k}) \le 0.$ (39)
From (39), we infer that
$\lim_{k \to \infty}\|y_{n_k}^i - u_{n_k}\| = 0.$ (40)
Moreover, from (24) and (40), it follows that
$\lim_{k \to \infty}\|u_{n_k+1} - u_{n_k}\| = 0.$
Indeed, it immediately follows from conditions (A1) and (A2) that $\limsup_{k \to \infty} b_{n_k} = 0$. Therefore, using Lemma 4, we obtain that $\lim_{n \to \infty}\Theta_n = 0$. Consequently, using Proposition 1 (c) and (40), we conclude that $\lim_{n \to \infty} u_n = \lim_{n \to \infty} y_n^i = x^*$.    □
We next present some consequences of the above result to confirm that our Algorithm 1 indeed provides a unified framework for solving the inclusion problems (2)–(4).
Corollary 1.
Let $S : H \to 2^{H}$ be a maximal monotone operator and $T_i : H \to H$ be monotone and Lipschitz continuous operators with constants $L_i$ for $i \in [I]$. Let $F : H \to H$ be a $\gamma$-strongly monotone and Lipschitz continuous operator with constant $L$. Assume that $\Omega_1 := \{x \in H : 0 \in \bigcap_{i \in [I]}(T_i + S)x\} \neq \emptyset$. Then, the sequences generated by Algorithm 2, under Assumption 1, converge strongly to $x^* \in \Omega_1$ satisfying $\langle Fx^*, u^* - x^*\rangle \ge 0$ for all $u^* \in \Omega_1$.
Algorithm 2: Regularized Modified Forward-Backward Splitting Method (RMFBSM)
  Initialization: Let $u_1 \in H$, $\mu \in (0,1)$, and $\lambda_1 > 0$ be given.
  Iterative steps: Given $u_n$, calculate $u_{n+1}$ as follows:
  Step 1: Compute
$y_n^i = J_{\lambda_n}^{S}(u_n - \lambda_n T_i u_n - \lambda_n \tau_n F u_n), \quad i \in [I].$
  Step 2: Find $i_n := \arg\max_{i \in [I]}\{\|y_n^i - u_n\|\}$.
  Step 3: Compute
$u_{n+1} = y_n^{i_n} - \lambda_n(T_{i_n} y_n^{i_n} - T_{i_n} u_n).$
  Update
$\lambda_{n+1} = \begin{cases} \min\Big\{\lambda_n + \rho_n, \dfrac{(\mu + \mu_n)\|y_n^{i_n} - u_n\|}{\|T_{i_n} y_n^{i_n} - T_{i_n} u_n\|}\Big\}, & \text{if } T_{i_n} y_n^{i_n} \neq T_{i_n} u_n, \\ \lambda_n + \rho_n, & \text{otherwise}. \end{cases}$
  Set $n := n + 1$ and go back to Step 1.
Proof. 
Note that Algorithm 2 is derived from Algorithm 1 by taking $S_i = S$ for all $i \in [I]$. Thus, the proof follows from the proof of Theorem 1.    □
Corollary 2.
Let $S : H \to 2^{H}$ be a maximal monotone operator and $T_j : H \to H$ be monotone and Lipschitz continuous operators with constants $L_j$ for $j \in [J]$. Let $F : H \to H$ be a $\gamma$-strongly monotone and Lipschitz continuous operator with constant $L$, and let $a_j \in (0,1)$ be such that $\sum_{j \in [J]} a_j = 1$. Assume that $\Omega := \{x \in H : 0 \in (\sum_{j \in [J]} a_j T_j + S)x\} \neq \emptyset$. Then, the sequences generated by Algorithm 3, under Assumption 1, converge strongly to $x^* \in \Omega$ satisfying $\langle Fx^*, u^* - x^*\rangle \ge 0$ for all $u^* \in \Omega$.
Algorithm 3: Regularized Modified Forward-Backward Splitting Method (RMFBSM)
  Initialization: Let $u_1 \in H$, $\mu \in (0,1)$, and $\lambda_1 > 0$ be given.
  Iterative steps: Given $u_n$, calculate $u_{n+1}$ as follows:
  Step 1: Compute
$y_n = J_{\lambda_n}^{S}\Big(u_n - \lambda_n \sum_{j=1}^{J} a_j T_j u_n - \lambda_n \tau_n F u_n\Big).$
  Step 2: Compute
$u_{n+1} = y_n - \lambda_n \sum_{j=1}^{J} a_j(T_j y_n - T_j u_n).$
  Update
$\lambda_{n+1} = \begin{cases} \min\Big\{\lambda_n + \rho_n, \dfrac{(\mu + \mu_n)\|y_n - u_n\|}{\|\sum_{j \in [J]} a_j(T_j y_n - T_j u_n)\|}\Big\}, & \text{if } \sum_{j \in [J]} a_j T_j y_n \neq \sum_{j \in [J]} a_j T_j u_n, \\ \lambda_n + \rho_n, & \text{otherwise}. \end{cases}$
  Set $n := n + 1$ and go back to Step 1.
Proof. 
Note that Algorithm 3 is derived from Algorithm 1 by taking $I = 1$ and setting $T_1 = T = \sum_{j \in [J]} a_j T_j$. Note that the new operator $T$ is monotone and Lipschitz continuous. Thus, the proof follows from the proof of Theorem 1.    □
Remark 3.
We note that the structure of some of the steps in Algorithm 1 differs from that of Algorithm 1 of [18]. For instance, the authors of [18] need to solve an optimization problem in order to compute $u_{n+1}$. Our iterative scheme (Algorithm 3) provides an alternative that bypasses this difficulty. However, we do not consider accelerated schemes in the present study because such results can easily be obtained from the results in our recent work [27].

4. Applications

In this section, in the case where $I = 1$, we consider two applications of our main results.

4.1. Split Feasibility Problems

Let $C$ and $Q$ be nonempty, closed, and convex subsets of the real Hilbert spaces $H_1$ and $H_2$, respectively, and let $A : H_1 \to H_2$ be a bounded linear operator, the adjoint of which is denoted by $A^* : H_2 \to H_1$. We now recall the split feasibility problem (SFP):
find $u \in C$ such that $Au \in Q$. (42)
We denote the set of solutions of (42) by $S$ and assume that $S \neq \emptyset$. The SFP was introduced by Censor and Elfving [28] in 1994 in the framework of Euclidean spaces. It has been successfully applied to model inverse problems in medical image reconstruction, phase retrieval, gene regulatory network inference, and intensity-modulated radiation therapy; see, for example, [29,30]. Since its conception, the SFP has been widely studied by many authors, who have also proposed various iterative schemes for solving it. Interested readers may consult [31,32,33,34,35,36,37,38] and the references therein for recent studies and generalizations of this problem.
One can verify that the solution set of the SFP (42) coincides with the solution set of the following constrained minimization problem; see, for example, [36]:
$\min_{u \in C} f(u) := \frac{1}{2}\|Au - P_Q Au\|^2.$ (43)
However, the minimization problem (43) is, in general, ill-posed and therefore calls for regularization. We consider the following Tikhonov regularization [38]:
$\min_{u \in C} f_\kappa(u) := \frac{1}{2}\|Au - P_Q Au\|^2 + \frac{1}{2}\kappa\|u\|^2.$ (44)
Equivalently, (44) can be written as the following unconstrained minimization problem:
$\min_{u \in H_1} F_\kappa(u) := \frac{1}{2}\|Au - P_Q Au\|^2 + \frac{1}{2}\kappa\|u\|^2 + i_C(u).$ (45)
Note that the function $f_\kappa$ is differentiable and its gradient is $\nabla f_\kappa(u) = A^*(Au - P_Q Au) + \kappa u$. Problem (45) is equivalent to the following inclusion problem:
find $u \in H_1$ such that $0 \in \nabla f_\kappa(u) + \partial i_C(u).$ (46)
It is not difficult to see that $\nabla f_\kappa$ is monotone and $(\|A\|^2 + \kappa)$-Lipschitz continuous. Therefore, since $\partial i_C$ is a maximal monotone operator, (46) assures us that we can apply our result to solving the SFP (42). Hence, we have the following result.
Theorem 2.
Let $C$ and $Q$ be nonempty, closed, and convex subsets of the real Hilbert spaces $H_1$ and $H_2$, respectively. Suppose that $A : H_1 \to H_2$ is a bounded linear operator and let $A^* : H_2 \to H_1$ be its adjoint operator. Assume that $S := \{u \in H_1 : u \in C \text{ and } Au \in Q\} \neq \emptyset$ and that Assumption 1 holds. Then, the sequence $\{u_n\}$ generated by Algorithm 4 converges strongly to a point $u^* \in S$ satisfying $\langle Fu^*, y - u^*\rangle \ge 0$ for all $y \in S$.
Algorithm 4: RMFBSM Method for Solving SFP
  Initialization: Let $u_1 \in H_1$, $\mu \in (0,1)$, and $\lambda_1 > 0$ be given.
  Iterative steps: Given $u_n$, calculate $u_{n+1}$ as follows:
  Step 1: Compute
$y_n = P_C(u_n - \lambda_n \nabla f_\kappa(u_n) - \lambda_n \tau_n F u_n).$
  Step 2: Compute
$u_{n+1} = y_n - \lambda_n(\nabla f_\kappa(y_n) - \nabla f_\kappa(u_n)).$
  Update
$\lambda_{n+1} = \begin{cases} \min\Big\{\lambda_n, \dfrac{\mu\|y_n - u_n\|}{\|\nabla f_\kappa(y_n) - \nabla f_\kappa(u_n)\|}\Big\}, & \text{if } \nabla f_\kappa(y_n) \neq \nabla f_\kappa(u_n), \\ \lambda_n, & \text{otherwise}. \end{cases}$
  Set $n := n + 1$ and go back to Step 1.
Proof. 
Set $S = \partial i_C$ and $T_i = T = \nabla f_\kappa$ for all $i \in [I]$ in Algorithm 1. Note that by (5), we have $J_{\lambda_n}^{S} = P_C$. Therefore, the proof can be obtained by following the steps of the proof of Theorem 1. □
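To make the construction above concrete, the following MATLAB sketch runs Algorithm 4 on a small random SFP instance in which C is a box and Q is a Euclidean ball. The problem data, the projections, and all parameter choices (kappa, mu, lambda_1, tau_n, F) are assumptions made for illustration; they do not reproduce any experiment reported in the paper.

% Toy SFP instance: find u in C = [-1,1]^10 with A*u in the ball of radius 5
rng(1);
A  = randn(20, 10);
PC = @(u) min(max(u, -1), 1);                 % projection onto the box [-1,1]^10
PQ = @(v) v*min(1, 5/norm(v));                % projection onto the ball of radius 5 in R^20
kappa = 1e-3;
gradf = @(u) A'*(A*u - PQ(A*u)) + kappa*u;    % gradient of the Tikhonov-regularized objective f_kappa
F  = @(u) u;                                  % a simple strongly monotone, Lipschitz continuous F
u  = randn(10, 1); lambda = 1; mu = 0.5;
for n = 1:2000
    tau = 1/(n + 2);
    y = PC(u - lambda*gradf(u) - lambda*tau*F(u));        % Step 1
    unew = y - lambda*(gradf(y) - gradf(u));              % Step 2
    d = norm(gradf(y) - gradf(u));                        % self-adaptive step-size update
    if d > 0, lambda = min(lambda, mu*norm(y - u)/d); end
    if norm(unew - u) < 1e-8, u = unew; break; end
    u = unew;
end
fprintf('feasibility gaps: dist to C = %.2e, dist(Au, Q) = %.2e\n', norm(u - PC(u)), norm(A*u - PQ(A*u)));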

4.2. Elastic Net Penalty Problem

We consider the following linear regression model used in statistical learning:
$y = A_1\bar{u}_1 + A_2\bar{u}_2 + \cdots + A_N\bar{u}_N + \epsilon,$
where $y$ is the response predicted by the $N$ predictors $A_1, A_2, \ldots, A_N$ and $\epsilon$ is a random error term. We assume that the model is sparse, with a limited number of nonzero coefficients. The model can be written in matrix form as follows:
$y = Au + \epsilon,$ (48)
where $y \in \mathbb{R}^M$, $A \in \mathbb{R}^{M \times N}$ is the predictor matrix, and $u = (\bar{u}_1, \bar{u}_2, \ldots, \bar{u}_N) \in \mathbb{R}^N$. In the statistics community, one solves the model by recasting (48) as the following penalized optimization problem:
$\min_{u \in \mathbb{R}^N}\|Au - y\|_2^2 + \mathrm{pen}_\sigma(u),$
where $\mathrm{pen}_\sigma(u)$, $\sigma \ge 0$, is a penalty function of $u$.
Penalized regression and variable selection are two key topics in linear regression analysis. They have recently attracted the attention of many authors who have proposed and analyzed various penalty functions (see [39] and references therein). A very popular penalized regression model for variable selection is LASSO (least absolute shrinkage and selection operator), which was proposed by Tibshirani [40]. It is an $\ell_1$-norm regularized least squares model given by
$\min_{u \in \mathbb{R}^N}\frac{1}{2}\|Au - y\|_2^2 + \sigma_1\|u\|_1,$
where $\sigma_1 \ge 0$ is a nonnegative regularization parameter. The $\ell_1$ penalty makes LASSO perform both continuous shrinkage and automatic variable selection simultaneously [41]. However, it tends to perform poorly when applied to a group of highly correlated variables [42]. In addition to LASSO, other approaches for variable selection with penalty functions more general than the $\ell_1$ penalty have been proposed (please see [43] and references therein for more details on other penalty functions and applications to SVM).
In this subsection, we focus on the penalty function which was proposed by Zou and Hastie [41] and is called the elastic net penalty function. The elastic net penalty is a combination of the $\ell_1$ and (squared) $\ell_2$ penalties and is defined by
$\mathrm{pen}_\sigma(u) := \sigma_1\|u\|_1 + \sigma_2\|u\|_2^2, \quad \sigma_1, \sigma_2 > 0.$
Therefore, we will consider the following elastic net penalty problem:
$\min_{u \in \mathbb{R}^N}\frac{1}{2}\|Au - y\|_2^2 + \sigma_1\|u\|_1 + \sigma_2\|u\|_2^2.$ (51)
Let
$f(u) = \frac{1}{2}\|Au - y\|_2^2 + \sigma_2\|u\|_2^2$
and
$g(u) = \sigma_1\|u\|_1.$
Then, $\nabla f(u) = A^\top(Au - y) + 2\sigma_2 u$, where $A^\top$ stands for the transpose of $A$. Thus, $\nabla f$ is monotone and Lipschitz continuous with constant $L + 2\sigma_2$, $L$ being the largest eigenvalue of the matrix $A^\top A$. Note that $g$ is a proper, lower semicontinuous, convex function, and so $\partial g$ is maximal monotone. The resolvent of the maximal monotone operator $\partial g$ is given by (see [20])
$J_\sigma^{\partial g} v = \arg\min_{u \in \mathbb{R}^N}\Big\{g(u) + \frac{1}{2\sigma}\|u - v\|_2^2\Big\}.$
By the optimality condition, the penalty problem (51) is equivalent to the following inclusion problem:
$0 \in \nabla f(u^*) + \partial g(u^*).$
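For $g(u) = \sigma_1\|u\|_1$, the resolvent above has the well-known componentwise closed form given by soft thresholding. The following MATLAB lines record this formula; the test vector and the parameter values are assumptions chosen only to illustrate it.

soft = @(v, t) sign(v).*max(abs(v) - t, 0);     % prox of t*||.||_1 (componentwise soft thresholding)
sigma  = 0.8;                                    % resolvent parameter (plays the role of lambda_n)
sigma1 = 0.6;                                    % l1 weight of the elastic net penalty
v = [2.0; -0.3; 0.9; -1.5];                      % arbitrary test vector
disp(soft(v, sigma*sigma1))                      % equals J_sigma^{dg}(v) for g = sigma1*||.||_1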
We apply our Algorithm 1 to solve the elastic net penalty problem (51), taking $S = \partial g$ and $T_i = T = \nabla f$ for each $i$, and then compare the effectiveness of our method with some existing methods in the literature. To this end, let $u \in \mathbb{R}^N$ be a sparse, randomly generated $N \times 1$ vector, and let $A \in \mathbb{R}^{M \times N}$ and $\epsilon \in \mathbb{R}^M$ be randomly generated matrices whose entries are normally distributed with mean 0 and variance 1. In our experiments, we choose different values for $u$, $u_1$, $A$, and $\epsilon$, as follows:
  • Case A: A = randn(50, 10), ϵ = randn(50, 1), u = sprandn(10, 1, 0.3), u_1 = sprandn(10, 1, 0.2).
  • Case B: A = randn(1000, 200), ϵ = randn(100, 1), u = sprandn(200, 1, 0.1), u_1 = sprandn(200, 1, 0.2).
  • Case C: A = randn(200, 1200), ϵ = randn(200, 1), u = sprandn(1200, 1, 1/60), u_1 = sprandn(1200, 1, 0.02).
  • Case D: A = randn(120, 512), ϵ = randn(120, 1), u = sprandn(512, 1, 1/64), u_1 = sprandn(512, 1, 1/64).
We also choose $\sigma_1 = 0.6$, $\sigma_2 = 0.4$, $\mu = 0.02$, $\tau_n = \frac{1}{n+2}$, and $Fu = 10u$ for all $u \in H$. Using MATLAB 2021b, we compare the performance of our Algorithm 1 (RMFBSM) with the MFBSM and Algorithm 2 of [14] (VTM). Table 1 and Figure 1 illustrate the outcome of our computations. The stopping criterion is $e_n = \frac{1}{n}\|u_n - u\|_2^2 \le 10^{-3}$.
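The following MATLAB fragment shows how data such as Case A above can be generated and handed to a solver. It reuses the hypothetical rmfbsm function and the soft-thresholding prox from the earlier sketches, and the random seed, iteration budget, and tolerance are assumptions of this illustration rather than the settings behind Table 1.

% Case A data, generated as described above
rng(0);
A = randn(50, 10); eps_ = randn(50, 1);
u_true = full(sprandn(10, 1, 0.3));
u1 = full(sprandn(10, 1, 0.2));                 % starting point
y = A*u_true + eps_;
sigma1 = 0.6; sigma2 = 0.4;
gradf = @(u) A'*(A*u - y) + 2*sigma2*u;         % gradient of the smooth part f
soft  = @(v, t) sign(v).*max(abs(v) - t, 0);
resg  = @(v, lam) soft(v, lam*sigma1);          % resolvent of lam*dg
F     = @(u) 10*u;                              % F u = 10 u, as chosen above
tauf  = @(n) 1/(n + 2);
zerof = @(n) 0;                                  % rho_n = mu_n = 0 in this illustration
u_hat = rmfbsm(u1, {gradf}, {resg}, F, 0.02, 1, tauf, zerof, zerof, 20000, 1e-6);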

5. Numerical Example

We present the following numerical example to further illustrate the effectiveness of our proposed Algorithm 1 (RMFBSM). The codes are written in MATLAB 2021b and run on an HP laptop running Windows 10 with an Intel(R) Core(TM) i5 CPU and 4 GB of RAM.
Example 1.
Consider $\ell_2(\mathbb{R}) := \{u = (u_1, u_2, \ldots, u_j, \ldots), u_j \in \mathbb{R} : \sum_{j=1}^{\infty}|u_j|^2 < \infty\}$ with $\|u\| = \big(\sum_{j=1}^{\infty}|u_j|^2\big)^{\frac{1}{2}}$ for $u \in \ell_2(\mathbb{R})$. Let $H = \ell_2(\mathbb{R})$, and let $S : H \to H$, $T_i : H \to H$, $i = 1, 2, 3$, and $F : H \to H$ be defined by $Su = (2u_1, 2u_2, \ldots, 2u_j, \ldots)$, $T_1 u = (u_1, \frac{u_2}{2}, \ldots, \frac{u_j}{2}, \ldots)$, $T_2 u = \frac{u}{2} + (1, 0, 0, \ldots)$, $T_3 u = \frac{u}{3} + (2, 0, 0, \ldots)$, and $Fu = (7u_1, 7u_2, \ldots, 7u_j, \ldots)$, respectively, for all $u \in H$. Choose $a_1 = \frac{1}{2}$, $a_2 = \frac{1}{5}$, and $a_3 = \frac{3}{10}$. One can see that $S$ is maximal monotone, the $T_i$'s are Lipschitz continuous and monotone, and $F$ is strongly monotone and Lipschitz continuous. In our computations, we choose, for each $n \in \mathbb{N}$, $\tau_n = \frac{1}{n+1}$, $\lambda_1 = 0.3$, and $\mu = 0.1$.
The following choices of the initial values u 1 are considered:
  • Case a: $u_1 = (1, \frac{1}{2}, \frac{1}{4}, \frac{1}{8}, \ldots)$;
  • Case b: $u_1 = (\frac{2}{3}, \frac{1}{9}, \frac{1}{54}, \frac{1}{324}, \ldots)$;
  • Case c: $u_1 = (100, 10, 1, 0.1, \ldots)$;
  • Case d: $u_1 = (9, 3\sqrt{3}, 3, \sqrt{3}, \ldots)$.
We compare the performance of our Algorithm 1 (RMFBSM) with the MFBSM and VTM. We choose $\alpha_n = \frac{1}{n+2}$ and $f(u) = \frac{u}{2}$ for VTM. The stopping criterion used in our computations is $e_n = \|u_n - J_{\lambda_n}^{S}(I - \lambda_n T)u_n\| < 10^{-8}$. Observe that $e_n = 0$ implies that $u_n$ is a solution to the inclusion problem of finding $u \in H$ such that $0 \in (S + T)u$. We plot the errors against the number of iterations in each case. The figures and numerical results are shown in Figure 2 and Table 2, respectively.
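Since the iterates in Example 1 decay rapidly, one convenient way to reproduce its flavour numerically is to truncate ℓ2(R) to R^m and reuse the hypothetical rmfbsm sketch from Section 3, as below. The truncation dimension m, the tolerance, and the single-operator formulation via T = a_1 T_1 + a_2 T_2 + a_3 T_3 (Corollary 2) are assumptions of this illustration.

% Truncated version of Example 1 in R^m
m  = 50; e1 = zeros(m, 1); e1(1) = 1;
T1 = @(u) [u(1); u(2:end)/2];             % T_1 u = (u_1, u_2/2, u_3/2, ...)
T2 = @(u) u/2 + e1;                       % T_2 u = u/2 + (1, 0, 0, ...)
T3 = @(u) u/3 + 2*e1;                     % T_3 u = u/3 + (2, 0, 0, ...)
a  = [1/2, 1/5, 3/10];
Tsum = @(u) a(1)*T1(u) + a(2)*T2(u) + a(3)*T3(u);   % combined operator of Corollary 2
resS = @(v, lam) v/(1 + 2*lam);           % resolvent of S u = 2u
F    = @(u) 7*u;
tauf = @(n) 1/(n + 1); zerof = @(n) 0;
u1   = (1/2).^(0:m-1)';                   % Case a initial point, truncated to R^m
u_hat = rmfbsm(u1, {Tsum}, {resS}, F, 0.1, 0.3, tauf, zerof, zerof, 10000, 1e-10);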
Remark 4.
The numerical results show that our algorithm performs better than the two algorithms with which we compared it, in terms of both computation time and the number of iterations. Nevertheless, its performance can be further improved by incorporating inertial extrapolation terms into the algorithm. A possible future direction of research concerns estimating the rate of convergence of our proposed algorithm and comparing it with those of the pertinent algorithms in the literature.

6. Conclusions

We study three classes of variational inclusion problems of finding a zero of the sum of monotone operators in a real Hilbert space. We propose a unified method with a simple structure, which combines a modification of the Tseng method with self-adaptive step sizes, and prove that the sequences it generates converge strongly to a solution of these problems. We apply our results to the elastic net penalty problem in statistical learning and to split feasibility problems. Numerical experiments are presented in order to illustrate the usefulness and effectiveness of our new method. Results related to accelerated versions of our method can be derived from the results of our recent work [27].

Author Contributions

Writing—original draft, A.T. and S.R. Both authors contributed equally to this work. All authors have read and agreed to the published version of the manuscript.

Funding

Adeolu Taiwo is grateful to the Department of Mathematics at the Technion—Israel Institute of Technology for granting him a Postdoctoral Research Fellowship. Simeon Reich was partially supported by the Israel Science Foundation (Grant No. 820/17), by the Fund for the Promotion of Research at the Technion (Grant 2001893) and by the Technion General Research Fund (Grant 2016723).

Data Availability Statement

Not applicable.

Acknowledgments

Both authors are grateful to four anonymous referees for their helpful comments and useful suggestions.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Alber, Y.; Ryazantseva, I. Nonlinear Ill-Posed Problems of Monotone Type; Springer: Dordrecht, The Netherlands, 2006; p. xiv+410. ISBN 978-1-4020-4395-6/1-4020-4395-3.
  2. Lions, P.L.; Mercier, B. Splitting algorithms for the sum of two nonlinear operators. SIAM J. Numer. Anal. 1979, 16, 964–979.
  3. Taiwo, A.; Alakoya, T.O.; Mewomo, O.T. Halpern-type iterative process for solving split common fixed point and monotone variational inclusion problem between Banach spaces. Numer. Algorithms 2020, 86, 1359–1389.
  4. Bello, A.U.; Yusuf, H.; Djitte, N. Single-step algorithm for variational inequality problems in 2-uniformly convex Banach spaces. Rend. Circ. Mat. Palermo II Ser. 2023, 72, 1463–1481.
  5. Rehman, H.; Kumam, P.; Ozdemir, M.; Yildirim, I.; Kumam, W. A class of strongly convergent subgradient extragradient methods for solving quasimonotone variational inequalities. Dem. Math. 2023, 56, 20220202.
  6. Reich, S.; Taiwo, A. Fast hybrid iterative schemes for solving variational inclusion problems. Math. Meth. Appl. Sci. 2023, 46, 17177–17198.
  7. Taiwo, A.; Reich, S. Bounded perturbation resilience of a regularized forward-reflected-backward splitting method for solving variational inclusion problems with applications. Optimization 2023.
  8. Chen, G.H.; Rockafellar, R.T. Convergence rates in forward-backward splitting. SIAM J. Optim. 1997, 7, 421–444.
  9. Passty, G.B. Ergodic convergence to a zero of the sum of monotone operators in Hilbert space. J. Math. Anal. Appl. 1979, 72, 383–390.
  10. Tseng, P. A modified forward-backward splitting method for maximal monotone mappings. SIAM J. Control Optim. 2000, 38, 431–446.
  11. Cholamjiak, P.; Hieu, D.V.; Cho, Y.J. Relaxed forward–backward splitting methods for solving variational inclusions and applications. J. Sci. Comput. 2021, 88, 85.
  12. Briceño-Arias, L.M.; Combettes, P.L. A monotone+skew splitting model for composite monotone inclusions in duality. SIAM J. Optim. 2011, 21, 1230–1250.
  13. Vũ, B.C. Almost sure convergence of the forward-backward-forward splitting algorithm. Optim. Lett. 2016, 10, 781–803.
  14. Gibali, A.; Thong, D.V. Tseng type methods for solving inclusion problems and its applications. Calcolo 2018, 55, 49.
  15. Khuangsatung, W.; Kangtunyakarn, A. Algorithm of a new variational inclusion problem and strictly pseudononspreading mapping with application. Fixed Point Theory Appl. 2014, 2014, 209.
  16. Cholamjiak, W.; Khan, S.A.; Yambangwal, D.; Kazmi, K.R. Strong convergence analysis of common variational inclusion problems involving an inertial parallel monotone hybrid method for a novel application to image restoration. RACSAM 2020, 114, 99.
  17. Sombut, K.; Sitthithakerngkiet, K.; Arunchai, A.; Seangwattana, T. An inertial forward-backward splitting method for solving modified variational inclusion problems and its applications. Mathematics 2023, 11, 2017.
  18. Seangwattana, T.; Sombut, K.; Arunchai, A.; Sitthithakerngkiet, K. A modified Tseng's method for solving the modified variational inclusion problems and its applications. Symmetry 2021, 13, 2250.
  19. Suparatulatorna, R.; Chaichana, K. A strongly convergent algorithm for solving common variational inclusion with application to image recovery problems. Appl. Numer. Math. 2022, 173, 239–248.
  20. Bauschke, H.H.; Combettes, P.L. Convex Analysis and Monotone Operator Theory in Hilbert Spaces; CMS Books in Mathematics; Springer: Berlin/Heidelberg, Germany, 2011.
  21. Brézis, H. Operateurs Maximaux Monotones; North-Holland Publishing Company: Amsterdam, The Netherlands, 1983; 183p.
  22. Cottle, R.W.; Yao, J.C. Pseudo-monotone complementarity problems in Hilbert space. J. Optim. Theory Appl. 1992, 75, 281–295.
  23. Yamada, I. The hybrid steepest descent method for the variational inequality problem over the intersection of fixed point sets of nonexpansive mappings. In Inherently Parallel Algorithms in Feasibility and Optimization and Their Applications; Stud. Comput. Math. 2001, 8, 473–504.
  24. Saejung, S.; Yotkaew, P. Approximation of zeros of inverse strongly monotone operators in Banach spaces. Nonlinear Anal. 2012, 75, 742–750.
  25. Hieu, D.V.; Anh, P.K.; Ha, N.H. Regularization proximal method for monotone variational inclusions. Netw. Spat. Econ. 2021, 21, 905–932.
  26. Wang, Z.; Lei, Z.; Long, X.; Chen, Z. Tseng splitting method with double inertial steps for solving monotone inclusion problems. arXiv 2022, arXiv:2209.11989v1.
  27. Taiwo, A.; Reich, S. Two regularized inertial Tseng methods for solving inclusion problems with applications to convex bilevel programming. Optimization 2023, under review.
  28. Censor, Y.; Elfving, T. A multiprojection algorithm using Bregman projections in a product space. Numer. Algorithms 1994, 8, 221–239.
  29. Censor, Y.; Elfving, T.; Kopf, N.; Bortfeld, T. The multiple-sets split feasibility problem and its applications for inverse problems. Inverse Probl. 2005, 21, 2071–2084.
  30. Wang, J.; Hu, Y.; Li, C.; Yao, J.-C. Linear convergence of CQ algorithms and applications in gene regulatory network inference. Inverse Probl. 2017, 33, 5.
  31. Byrne, C. A unified treatment of some iterative algorithms in signal processing and image reconstruction. Inverse Probl. 2004, 20, 103–120.
  32. Lu, T.; Zhao, L.; He, H. A multi-view on the CQ algorithm for split feasibility problems: From optimization lens. J. Appl. Numer. Optim. 2020, 2, 387–399.
  33. Moudafi, A. Byrne's extended CQ-algorithms in the light of Moreau-Yosida regularization. Appl. Set-Valued Anal. Optim. 2021, 3, 21–26.
  34. Reich, S.; Taiwo, A. A one-step Tikhonov regularization iterative scheme for solving split feasibility and fixed point problems. Minimax Theory Appl. 2023, accepted for publication.
  35. Reich, S.; Truong, M.T.; Mai, T.N.H. The split feasibility problem with multiple output sets in Hilbert spaces. Optim. Lett. 2020, 14, 2335–2353.
  36. Taiwo, A.; Reich, S.; Chinedu, I. Strong convergence of two regularized relaxed extragradient schemes for solving the split feasibility and fixed point problem with multiple output sets. Appl. Anal. 2023.
  37. Takahashi, W. The split feasibility problem and the shrinking projection method in Banach spaces. J. Nonlinear Convex Anal. 2015, 16, 1449–1459.
  38. Xu, H.-K. Iterative methods for the split feasibility problem in infinite-dimensional Hilbert spaces. Inverse Probl. 2010, 26, 10.
  39. Zeng, L.; Xie, J. Group variable selection via SCAD-L2. Statistics 2014, 48, 49–66.
  40. Tibshirani, R. Regression shrinkage and selection via the lasso. J. R. Stat. Soc. Ser. B 1996, 58, 267–288.
  41. Zou, H.; Hastie, T. Regularization and variable selection via the elastic net. J. R. Stat. Soc. Ser. B 2005, 67, 301–320.
  42. Zhang, C.; Huang, J. The sparsity and bias of the lasso selection in high-dimensional linear regression. Ann. Stat. 2008, 36, 1567–1594.
  43. Becker, N.; Toedt, G.; Lichter, P.; Benner, A. Elastic SCAD as a novel penalization method for SVM classification tasks in high-dimensional data. BMC Bioinform. 2011, 12, 138.
Figure 1. Top left: Case A; top right: Case B; bottom left: Case C; bottom right: Case D.
Figure 2. Top left: Case a; top right: Case b; bottom left: Case c; bottom right: Case d.
Table 1. Numerical results.

                           RMFBSM      MFBSM       VTM
Case A   CPU time (s)      0.0131      0.0200      0.0448
         No. of Iter.      590         936         2818
Case B   CPU time (s)      0.5782      0.6199      2.6694
         No. of Iter.      2441        2734        10,564
Case C   CPU time (s)      3.8084      5.3986      5.2824
         No. of Iter.      10,710      15,901      15,345
Case D   CPU time (s)      0.8748      1.1264      0.9293
         No. of Iter.      5412        6833        5530
Table 2. Numerical results.

                           RMFBSM      MFBSM       VTM
Case a   CPU time (s)      0.0145      0.0155      0.0154
         No. of Iter.      17          70          46
Case b   CPU time (s)      0.0040      0.0163      0.0050
         No. of Iter.      17          67          45
Case c   CPU time (s)      0.0019      0.0038      0.0048
         No. of Iter.      23          88          58
Case d   CPU time (s)      0.0019      0.0027      0.0359
         No. of Iter.      19          78          52