Article

An Algorithm That Adjusts the Stepsize to Be Self-Adaptive with an Inertial Term Aimed for Solving Split Variational Inclusion and Common Fixed Point Problems

by Matlhatsi Dorah Ngwepe 1, Lateef Olakunle Jolaoso 1,*, Maggie Aphane 1 and Ibrahim Oyeyemi Adenekan 2
1 Department of Mathematics and Applied Mathematics, Sefako Makgatho Health Sciences University, Pretoria 0204, South Africa
2 Department of Mathematics, University of Louisiana at Lafayette, Lafayette, LA 70504, USA
* Author to whom correspondence should be addressed.
Mathematics 2023, 11(22), 4708; https://doi.org/10.3390/math11224708
Submission received: 2 September 2023 / Revised: 29 September 2023 / Accepted: 13 November 2023 / Published: 20 November 2023
(This article belongs to the Special Issue Advances in Fixed Point Theory and Its Applications)

Abstract
In this research paper, we present a new inertial method with a self-adaptive technique for solving split variational inclusion and fixed point problems in real Hilbert spaces. The algorithm is designed to make an optimal choice of the inertial term at every iteration, and the stepsize is defined self-adaptively without a prior estimate of the Lipschitz constant. A strong convergence theorem is proved under mild conditions, and some numerical tests are given to showcase the suggested method's efficiency and precision. Moreover, the significance of the proposed method is demonstrated through its application to an image reconstruction problem.

1. Introduction

In this paper, we consider the Split Variational Inclusion Problem (SVIP) introduced by Moudafi [1]: the problem of finding a null point of a monotone operator in one Hilbert space whose image under a bounded linear operator is a null point of another monotone operator in a second Hilbert space. Mathematically, the problem is defined as follows: find
$$a^* \in H_1 \ \text{such that} \ 0 \in m_1(a^*) \tag{1}$$
and
$$b^* = B a^* \ \text{solves} \ 0 \in m_2(b^*), \tag{2}$$
where $0$ denotes the zero vector, $H_1$ and $H_2$ are real Hilbert spaces, $m_i : H_i \to 2^{H_i}$ ($i = 1, 2$) are multivalued maximal monotone mappings, and $B : H_1 \to H_2$ is a bounded linear operator. We denote the solution set of (1) and (2) by $\Gamma$.
An operator $m : H \to 2^H$ is called:
(i)
Monotone if
$$\langle k - x, a - b \rangle \ge 0 \quad \forall\, k \in m(a),\ x \in m(b),\ a, b \in H.$$
(ii)
Maximal monotone if the graph $G(m)$ of $m$,
$$G(m) = \{(a, k) \in H \times H \mid k \in m(a)\},$$
is not properly contained in the graph of any other monotone mapping.
(iii)
For a parameter $\lambda > 0$, the resolvent of $m$ is the operator $J_\lambda^m$ defined by
$$J_\lambda^m(a) = (I + \lambda m)^{-1}(a) \quad \forall\, a \in H.$$
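For intuition, when $m$ is a linear monotone operator represented by a positive semidefinite matrix, the resolvent reduces to a linear solve; this is exactly the situation in Example 1 of Section 4. The following MATLAB sketch (with illustrative data; not part of the general definition) computes $J_\lambda^m(a)$ in that special case:
```matlab
% Resolvent J_lambda^m(a) = (I + lambda*m)^{-1}(a) when m is a linear
% monotone operator given by a positive semidefinite matrix M.
% (For a general maximal monotone m, the resolvent must be derived
% analytically or computed by other means.)
M = diag([4 3 2]);                  % the operator m1 of Example 1
lambda = 0.5;                       % illustrative parameter
a = [1; 2; 3];                      % illustrative point
Ja = (eye(3) + lambda * M) \ a;     % J_lambda^{m1}(a) via a linear solve
disp(Ja)                            % equals a_i/(1 + lambda*m_ii) componentwise
```
For the diagonal operators of Example 1, this reproduces the closed-form resolvents stated there.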
Other nonlinear optimization problems, such as split feasibility problems, split minimization problems, split variational inequalities, split zero problems, and split equilibrium problems, can all be generalized by the SVIP; see [2,3,4]. Reducing SVIPs to split feasibility problems is important when modeling intensity-modulated radiation therapy (IMRT) treatment planning. Moreover, SVIPs play important roles in formulating many problems arising from engineering, economics, medicine, data compression, and sensor networks [5,6].
Recently, several authors have introduced iterative methods for solving SVIPs, which have improved over time. In 2012, Byrne et al. [7] first introduced a weak convergence method for solving SVIPs as follows:
$$a_{n+1} = J_\lambda^{m_1}\big(a_n + \gamma B^*(J_\lambda^{m_2} - I) B a_n\big), \tag{3}$$
where $\lambda > 0$, $B^*$ denotes the adjoint of $B$, $L = \|B^*B\|$, $\gamma \in (0, \frac{2}{L})$, and $J_\lambda^{m_i} = (I + \lambda m_i)^{-1}$ is the resolvent operator of $m_i$ ($i = 1, 2$). The sequence $\{a_n\}$ generated by (3) was proved to converge weakly to $a^*$ under certain conditions. Moudafi [1] proposed an iterative method for solving the SVIP with inverse strongly monotone operators and obtained weak convergence results using the following iteration:
$$a_{n+1} = U\big(a_n + \gamma B^*(F - I) B a_n\big) \quad \forall\, n \in \mathbb{N}, \tag{4}$$
where $\lambda > 0$, $\gamma \in (0, \frac{2}{L})$ with $L$ the spectral radius of the operator $B^*B$, $U = J_\lambda^{m_1}(I - \lambda\phi)$ and $F = J_\lambda^{m_2}(I - \lambda\varphi)$, $J_\lambda^{m_1}$ and $J_\lambda^{m_2}$ are the resolvent operators of $m_1$ and $m_2$, respectively, and $\phi : H_1 \to H_1$ and $\varphi : H_2 \to H_2$ are single-valued operators. Marino and Xu [8] presented an iterative scheme based on the strong convergence of the viscosity approximation method introduced by Moudafi [9]:
$$a_{n+1} = (I - \alpha_n G) F a_n + \alpha_n \beta f(a_n) \quad \forall\, n \ge 0, \tag{5}$$
where $f$ is a contraction on $H$ with coefficient $\alpha \in (0, 1)$, $G$ is a strongly positive bounded linear operator on $H$ with constant $\mu$, the parameter $\beta$ satisfies $0 < \beta < \frac{\mu}{\alpha}$, $F$ is a nonexpansive mapping, and $\{\alpha_n\}$ is a sequence in $(0, 1)$. The sequence $\{a_n\}$ generated by (5) was proved to converge strongly to the fixed point $a^* \in \operatorname{Fix}(F) := \{a^* : F(a^*) = a^*\}$, which is also the unique solution of the variational inequality:
$$\langle (G - \beta f) a^*, a - a^* \rangle \ge 0 \quad \forall\, a \in C, \tag{6}$$
which is the optimality condition for the minimization problem
$$\min_{a \in C} \frac{1}{2}\langle G a, a \rangle - h(a),$$
where $h$ is a potential function for $\beta f$, i.e., $h'(a) = \beta f(a)$ for $a \in H$.
In 2014, Kazmi and Rizvi [10], inspired by the work of Byrne et al. (3), proposed the following iteration for solving SVIPs. For a given $a_1 \in H_1$,
$$\begin{aligned} k_n &= J_\lambda^{m_1}\big(a_n + \gamma B^*(J_\lambda^{m_2} - I) B a_n\big), \\ a_{n+1} &= \alpha_n f(a_n) + (1 - \alpha_n) S k_n, \quad n \ge 1, \end{aligned} \tag{7}$$
where $\lambda > 0$, $\gamma \in (0, \frac{1}{L})$, $L$ is the spectral radius of the operator $B^*B$, and the sequence $\{\alpha_n\}$ satisfies the conditions $\lim_{n\to\infty}\alpha_n = 0$, $\sum_{n=0}^{\infty}\alpha_n = \infty$, and $\sum_{n=0}^{\infty}|\alpha_n - \alpha_{n-1}| < \infty$. The sequences $\{k_n\}, \{a_n\}$ generated by (7) converge strongly to $z \in \operatorname{Fix}(S) \cap \Gamma$. Note that algorithms (3), (4), and (7) contain a stepsize $\gamma$ that requires the norm of the bounded linear operator; this norm is in general difficult to compute, which makes these algorithms expensive to implement. The inertial technique has been gaining attention from researchers as a way to enhance the accuracy and performance of various algorithms. This technique plays a vital role in the convergence rate of algorithms and is based on a discrete version of a second-order dissipative dynamical system; see, for instance, [11,12,13,14,15,16,17,18,19,20]. In Hilbert spaces, Chuang [21] introduced a hybrid inertial proximal algorithm for solving SVIPs in 2017:
The proof of this proposed algorithm establishes that if $\{\lambda_n\} \subset [\lambda, \frac{\delta}{\|B\|^2}]$ and $\{a_n\}$ meets the requirement
$$\sum_{n=1}^{\infty}\|a_n - a_{n-1}\|^2 < \infty, \tag{8}$$
then the sequence $\{a_n\}$ generated by Algorithm 1 converges weakly to a solution of the SVIP.
Algorithm 1 Hybrid inertial proximal algorithm.
  • Initialization: Choose $\{\theta_n\} \subset [0, 1)$, $\{\beta_n\} \subset (0, 1)$. Let $a_0, a_1 \in H_1$ be arbitrary. Set $n = 1$.
  • Iterative steps: Calculate $a_{n+1}$ as follows:
  • Step 1. Set $v_n = a_n + \theta_n(a_n - a_{n-1})$ and compute
    $$b_n = J_{\beta_n}^{m_1}\big[v_n - \lambda_n B^*(I - J_{\beta_n}^{m_2}) B v_n\big],$$
    where $\lambda_n > 0$ satisfies
    $$\lambda_n\big\|B^*(I - J_{\beta_n}^{m_2}) B v_n - B^*(I - J_{\beta_n}^{m_2}) B b_n\big\| \le \delta\|v_n - b_n\|, \quad 0 < \delta < 1.$$
    If $b_n = v_n$, then stop: $b_n$ is a solution of the SVIP. Otherwise,
  • Step 2. Compute
    $$a_{n+1} = J_{\beta_n}^{m_1}\big(v_n - \alpha_n d(v_n, b_n)\big),$$
    where
    $$d(v_n, b_n) = v_n - b_n - \lambda_n\big[B^*(I - J_{\beta_n}^{m_2}) B v_n - B^*(I - J_{\beta_n}^{m_2}) B b_n\big],$$
    $$\alpha_n = \frac{\langle v_n - b_n, d(v_n, b_n) \rangle}{\|d(v_n, b_n)\|^2}.$$
    Set $n = n + 1$ and go to Step 1.
It is easy to see that Algorithm 1 depends on a prior estimate of the norm of the bounded operator, and Condition (8) is too strong to verify before computation.
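Since the stepsize in (3), (4), and (7) only enters through $L = \|B^*B\|$, a common workaround in practice is to estimate this norm numerically rather than compute it exactly. A minimal MATLAB sketch using power iteration (the matrix $B$ below is illustrative, and the iteration count is a heuristic choice):
```matlab
% Power iteration to estimate L = ||B'*B|| = ||B||^2, so that a valid
% stepsize gamma in (0, 2/L) can be chosen without computing the
% operator norm by hand.  Works with matrix-free operators as well.
B = [1 1 0; 1 2 0; 0 0 3];        % illustrative bounded linear operator
x = randn(size(B, 2), 1);
x = x / norm(x);
for k = 1:100
    y = B' * (B * x);             % apply B'B to the current vector
    L = norm(y);                  % Rayleigh-type estimate of ||B'*B||
    x = y / norm(y);
end
gamma = 1 / L;                    % any value in (0, 2/L) is admissible
```
This does not remove the drawback entirely (an extra preprocessing loop is needed), which motivates the self-adaptive stepsizes discussed next.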
Furthermore, Kesornprom and Cholamjiak [22] improved the contraction step of Algorithm 1 and introduced the following algorithm (Algorithm 2) for solving the SVIP:
Algorithm 2 Proximal type algorithms with linesearch and inertial methods.
  • Let $\zeta, \lambda \in (0, 1)$, $\delta > 0$, and sequences $\{\beta_n\}_{n \in \mathbb{N}} \subset (0, \infty)$, $\{\theta_n\}_{n \in \mathbb{N}} \subset [0, \theta) \subset [0, 1)$. Take $a_1 \in H_1$ arbitrarily and compute
    $$\begin{aligned} k_n &= a_n + \theta_n(a_n - a_{n-1}), \\ b_n &= J_{\beta_n}^{m_1}\big(k_n - \rho_n B^*(I - J_{\beta_n}^{m_2}) B k_n\big), \end{aligned}$$
    where $\rho_n = \delta\zeta^{r_n}$ and $r_n$ is the smallest non-negative integer such that
    $$\rho_n\big\|B^*(I - J_{\beta_n}^{m_2}) B k_n - B^*(I - J_{\beta_n}^{m_2}) B b_n\big\| \le \lambda\|k_n - b_n\|.$$
    Define
    $$a_{n+1} = k_n - \phi\,\alpha_n d(k_n, \rho_n),$$
    where $\phi \in (0, 2)$,
    $$d(k_n, \rho_n) = k_n - b_n - \rho_n\big(B^*(I - J_{\beta_n}^{m_2}) B k_n - B^*(I - J_{\beta_n}^{m_2}) B b_n\big)$$
    and
    $$\alpha_n = \frac{\langle k_n - b_n, d(k_n, \rho_n) \rangle + \rho_n\|(I - J_{\beta_n}^{m_2}) B b_n\|^2}{\|d(k_n, \rho_n)\|^2}.$$
They also proved a weak convergence result under conditions similar to those of Algorithm 1. Let us mention that both Algorithms 1 and 2 involve a line search procedure, which consumes extra computation time and memory during implementation. As a way to overcome this setback, Tang [23] recently introduced a self-adaptive technique for selecting the stepsize, without a prior estimate of the Lipschitz constant or a line search procedure, as follows (Algorithm 3):
Algorithm 3 Self-adaptive technique method.
  • Initialization: Choose a non-negative sequence $\{\zeta_n\}$ satisfying $0 < \zeta_n < 4$ and $\inf \zeta_n(4 - \zeta_n) > 0$. Select an arbitrary starting point $a_0$ and set $n = 0$.
  • Iterative step: Given the current iterate $a_n$ ($n \ge 0$), compute
    $$\tau_n = \frac{\zeta_n f(a_n)}{\|T(a_n)\|^2 + \|H(a_n)\|^2}$$
    and calculate the next iterate as
    $$a_{n+1} = J_\lambda^{m_1}\big(I - \tau_n B^*(I - J_\lambda^{m_2}) B\big) a_n.$$
    Stop criterion: If $a_{n+1} = a_n$, then stop the iteration. Otherwise, set $n = n + 1$ and go back to the iterative step.
where $f(a) = \frac{1}{2}\|(I - J_\lambda^{m_2}) B a\|^2$, $T(a) = B^*(I - J_\lambda^{m_2}) B a$, and $H(a) = (I - J_\lambda^{m_1}) a$. The author proved that the sequence generated by Algorithm 3 converges weakly to a solution of the SVIP. Tan, Qin, and Yao [24] introduced four self-adaptive iterative algorithms with inertial effects for solving SVIPs in real Hilbert spaces. These algorithms do not need any prior information about the operator norm; that is, their stepsizes are self-adaptive. The conditions assumed in proving the strong convergence of the four algorithms are as follows:
(C1)
The solution set of the SVIP is nonempty, i.e., $\Omega \neq \emptyset$.
(C2)
$H_1$ and $H_2$ are two real Hilbert spaces, and $B : H_1 \to H_2$ is a bounded linear operator with adjoint $B^* : H_2 \to H_1$.
(C3)
$T_i : H_i \to 2^{H_i}$, $i = 1, 2$, are set-valued maximal monotone mappings, and $f : H_1 \to H_1$ is a $p$-contraction with constant $p \in [0, 1)$.
(C4)
The positive sequence $\{\bar{\omega}_n\}$ satisfies $\lim_{n\to\infty}\frac{\bar{\omega}_n}{\sigma_n} = 0$, where $\{\sigma_n\} \subset (0, 1)$ satisfies $\lim_{n\to\infty}\sigma_n = 0$ and $\sum_{n=1}^{\infty}\sigma_n = \infty$.
The first iterative algorithm was inspired by several methods, namely Byrne et al.'s [7] method, the viscosity-type method, and the projection and contraction method. The resulting self-adaptive inertial projection and contraction method for solving the SVIP is described below (Algorithm 4):
Algorithm 4 Viscosity-type with projection and contraction method.
  • Initialization: Set $\lambda, x, \zeta > 0$, $\chi, \delta \in (0, 1)$, $\kappa \in (0, 2)$, and let $a_0, a_1 \in H$.
  • Iterative steps: Calculate $a_{n+1}$ as follows:
  • Step 1: Given the iterates $a_{n-1}$ and $a_n$ ($n \ge 1$), set $k_n = a_n + x_n(a_n - a_{n-1})$, where
    $$x_n = \begin{cases} \min\left\{\dfrac{\bar{\omega}_n}{\|a_n - a_{n-1}\|},\, x\right\} & \text{if } a_n \neq a_{n-1}, \\ x & \text{otherwise}. \end{cases} \tag{10}$$
  • Step 2. Compute $q_n = J_\lambda^{T_1}\big[k_n - \gamma_n B^*(I - J_\lambda^{T_2}) B k_n\big]$, where $\gamma_n = \zeta\chi^{w_n}$ and $w_n$ is the smallest non-negative integer such that
    $$\gamma_n\big\|B^*(I - J_\lambda^{T_2}) B k_n - B^*(I - J_\lambda^{T_2}) B q_n\big\| \le \delta\|k_n - q_n\|. \tag{11}$$
    If $k_n = q_n$, stop the process: $q_n$ is a solution of the SVIP. Otherwise, proceed to Step 3.
  • Step 3. Compute $g_n = k_n - \kappa\mu_n c_n$, where
    $$c_n = k_n - q_n - \gamma_n\big[B^*(I - J_\lambda^{T_2}) B k_n - B^*(I - J_\lambda^{T_2}) B q_n\big], \tag{12}$$
    $$\mu_n = \frac{\langle k_n - q_n, c_n \rangle}{\|c_n\|^2}.$$
  • Step 4. Compute $a_{n+1} = \eta_n f(a_n) + (1 - \eta_n) g_n$.
  • Go to Step 1 after setting $n = n + 1$.
Strong convergence was obtained. The second proposed algorithm is an inertial Mann-type projection and contraction algorithm to solve the SVIP, which is presented as follows (Algorithm 5):
Algorithm 5 Mann-type with projection and contraction method.
  • Initialization: Set $\lambda, x, \zeta > 0$, $\chi, \delta \in (0, 1)$, $\kappa \in (0, 2)$, and let $a_0, a_1 \in H$.
  • Iterative steps: To determine the next iterate $a_{n+1}$, compute:
    $$\begin{aligned} k_n &= a_n + x_n(a_n - a_{n-1}), \\ q_n &= J_\lambda^{T_1}\big[k_n - \gamma_n B^*(I - J_\lambda^{T_2}) B k_n\big], \\ g_n &= k_n - \kappa\mu_n c_n, \\ a_{n+1} &= (1 - \eta_n - \tau_n) k_n + \tau_n g_n, \end{aligned}$$
    where $\{x_n\}$, $\{\gamma_n\}$, and $\{c_n\}$ are defined in (10), (11), and (12), respectively.
   Strong convergence was obtained. The third proposed algorithm is an inertial Mann-type algorithm whereby the new stepsize does not require any line search process, making it a self-adaptive algorithm. The details of the iterative scheme are described below (Algorithm 6):
Algorithm 6 Inertial Mann-type with self-adaptive method.
  • Initialization: Set $\lambda, x > 0$, $\phi \in (0, 2)$ and let $a_0, a_1 \in H$.
  • Iterative steps: To determine the next iterate $a_{n+1}$, compute:
    $$\begin{aligned} k_n &= a_n + x_n(a_n - a_{n-1}), \\ g_n &= J_\lambda^{T_1}\big[k_n - \gamma_n B^*(I - J_\lambda^{T_2}) B k_n\big], \\ a_{n+1} &= (1 - \eta_n - \tau_n) k_n + \tau_n g_n, \end{aligned}$$
    where the sequence $\{x_n\}$ is given in (10) and the stepsize $\gamma_n$ is updated by the following formula:
    $$\gamma_n = \begin{cases} \dfrac{\phi_n\|(I - J_\lambda^{T_2}) B k_n\|^2}{\|B^*(I - J_\lambda^{T_2}) B k_n\|^2} & \text{if } B^*(I - J_\lambda^{T_2}) B k_n \neq 0, \\ 0 & \text{otherwise}. \end{cases} \tag{15}$$
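For illustration, the norm-free rule (15) can be evaluated in a few lines of MATLAB; the data below (operator `B`, iterate `k`, and the resolvent handle `J2` standing in for $J_\lambda^{T_2}$) are illustrative assumptions, not taken from [24]:
```matlab
% Sketch of the self-adaptive stepsize rule (15): gamma_n is computed
% from the current residual only, with no estimate of ||B||.
B   = [1 1 0; 1 2 0; 0 0 3];                    % illustrative operator
J2  = @(b) (eye(3) + 0.5 * diag([6 5 4])) \ b;  % stand-in resolvent
k   = randn(3, 1);                              % current iterate k_n
phi = 1.5;                                      % phi_n in (0, 2)
u = B * k - J2(B * k);                          % (I - J^{T2}) B k_n
w = B' * u;                                     % B'(I - J^{T2}) B k_n
if norm(w) > 0
    gamma = phi * norm(u)^2 / norm(w)^2;        % rule (15)
else
    gamma = 0;
end
```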
Strong convergence was obtained. The fourth proposed algorithm is a variation of Algorithm 6 that leverages the viscosity-type approach to establish the strong convergence of the proposed method. We present the algorithm as follows (Algorithm 7):
Algorithm 7 New inertial viscosity method.
  • Initialization: Set $\lambda, x > 0$, $\phi \in (0, 2)$ and let $a_0, a_1 \in H$.
  • Iterative steps: To determine the next iterate $a_{n+1}$, compute:
    $$\begin{aligned} k_n &= a_n + x_n(a_n - a_{n-1}), \\ g_n &= J_\lambda^{T_1}\big[k_n - \gamma_n B^*(I - J_\lambda^{T_2}) B k_n\big], \\ a_{n+1} &= \eta_n f(a_n) + (1 - \eta_n) g_n, \end{aligned}$$
    where $\{x_n\}$ and $\{\gamma_n\}$ are defined in (10) and (15), respectively.
Strong convergence was obtained. The four algorithms contain an inertial term that influences the convergence rate of Algorithms 4–7. Note that the strong convergence theorems for Algorithms 4–7 proposed by Tan, Qin, and Yao were obtained under some weaker conditions. Zhou, Tan, and Li [25] proposed a pair of adaptive hybrid steepest descent algorithms with an inertial extrapolation term for split monotone variational inclusion problems in infinite-dimensional Hilbert spaces. These algorithms benefit from combining two methods, the hybrid steepest descent method and the inertial method, which ensures strong convergence theorems. Moreover, the stepsizes of the two proposed algorithms are self-adaptive, which overcomes the difficulty of computing the operator norm. The details of the first algorithm are presented as follows (Algorithm 8):
Algorithm 8 Inertial hybrid steepest descent algorithm.
Requirements: Take arbitrary starting points $a_0, a_1 \in H_1$. Choose sequences $\{\alpha_n\} \subset [0, 1)$, $\{\eta_n\}$ and $\{\beta_n\}$ in $(0, 1)$, and $\gamma, \tau, \mu > 0$.
1.
Set $n = 1$ and compute $k_n = a_n + \alpha_n(a_n - a_{n-1})$ and the adaptive stepsize
$$\lambda_n = \begin{cases} \dfrac{\eta_n\|(I - W_2) B k_n\|^2}{\|B^*(I - W_2) B k_n\|^2} & \text{if } B k_n \notin \operatorname{Fix}(W_2), \\ 0 & \text{otherwise}. \end{cases}$$
2.
Compute $b_n = W_1\big(k_n - \lambda_n B^*(I - W_2) B k_n\big)$.
3.
If $b_n = k_n$, then stop. Otherwise, compute $a_{n+1} = \beta_n\tau h(b_n) + (I - \beta_n\mu D) b_n$.
4.
Set $n = n + 1$ and return to 1.
Strong convergence was obtained. The second proposed algorithm is presented as follows (Algorithm 9):
Algorithm 9 Self-adaptive hybrid steepest descent method.
Requirements: Take two arbitrary starting points $a_0, a_1 \in H_1$. Choose sequences $\{\alpha_n\} \subset [0, 1)$, $\{\eta_n\}$ and $\{\beta_n\}$ in $(0, 1)$, and $\gamma, \tau, \mu > 0$.
1.
Set $n = 1$ and compute $k_n = a_n + \alpha_n(a_n - a_{n-1})$, $z_n = W_1(k_n)$, and the adaptive stepsize
$$\lambda_n = \begin{cases} \dfrac{\eta_n\|(I - W_2) B z_n\|^2}{\|B^*(I - W_2) B z_n\|^2} & \text{if } B z_n \notin \operatorname{Fix}(W_2), \\ 0 & \text{otherwise}. \end{cases}$$
2.
Compute $b_n = z_n - \lambda_n B^*(I - W_2) B z_n$.
3.
If $b_n = z_n = k_n$, then stop. Otherwise, compute $a_{n+1} = \beta_n\tau h(b_n) + (I - \beta_n\mu D) b_n$.
4.
Set $n = n + 1$ and return to 1.
Strong convergence was obtained. The assumptions applied to Algorithms 8 and 9 are as follows. Let $H_1$ and $H_2$ denote two Hilbert spaces, let $B : H_1 \to H_2$ be a bounded linear operator, and let $B^*$ be the adjoint of $B$. Let $f_i : H_i \to H_i$ be $\nu_i$-inverse strongly monotone mappings and $m_i : H_i \to 2^{H_i}$ set-valued maximal monotone mappings, $i = 1, 2$. Let $D : H_1 \to H_1$ be $L_2$-Lipschitz continuous and $\eta$-strongly monotone with $L_2, \eta > 0$, and let $h : H_1 \to H_1$ be an $L_1$-Lipschitz continuous mapping with $L_1 > 0$. Moreover, Alakoya et al. [26] introduced a method combining an inertial extrapolation technique, viscosity approximation, and a self-adaptive stepsize; the resulting inertial self-adaptive algorithm for solving the SVIP reads as follows (Algorithm 10):
Algorithm 10 General viscosity with self-adaptive and inertial method.
  • Step 0: Select $a_0, a_1 \in H_1$, $\{\rho_n\} \subset (0, 4)$, $\{\beta_n\}, \{\alpha_n\} \subset (0, 1)$, $\{\theta_n\} \subset [0, \theta)$ for some $\theta > 0$.
  • Set $n = 1$.
  • Step 1: Given the $(n-1)$-th and $n$-th iterates, set
    $$v_n = a_n + \theta_n(a_n - a_{n-1}).$$
  • Step 2: Compute
    $$k_n = J_\lambda^{m_1}\big(v_n - \tau_n B^*(I - J_\lambda^{m_2}) B v_n\big),$$
    where
    $$\tau_n = \begin{cases} \dfrac{\rho_n g(v_n)}{\|T(v_n)\|^2 + \|H(v_n)\|^2} & \text{if } \|T(v_n)\|^2 + \|H(v_n)\|^2 \neq 0, \\ 0 & \text{otherwise}. \end{cases}$$
  • Step 3: Compute
    $$a_{n+1} = \alpha_n f(a_n) + \beta_n a_n + \big((1 - \beta_n) I - \alpha_n G\big) S k_n.$$
    Set $n = n + 1$ and return to Step 1.
where $S : H_1 \to H_1$ is a quasi-nonexpansive mapping, $G : H_1 \to H_1$ is a strongly positive mapping, and $f : H_1 \to H_1$ is a contraction mapping. The authors proved that the sequence $\{a_n\}$ generated by Algorithm 10 converges strongly to a common solution $z \in \Gamma \cap \operatorname{Fix}(S)$, provided that $\{\alpha_n\}$ and $\{\theta_n\}$ satisfy $\lim_{n\to\infty}\frac{\theta_n}{\alpha_n}\|a_n - a_{n-1}\| = 0$. It is clear that Algorithm 10 performs better than Algorithms 1–3 and other related methods. However, there is a need to improve the performance of Algorithm 10 by using an optimal choice of parameters for the inertial extrapolation term.
Based on the outcomes above, our paper presents a novel approach that utilizes an optimal selection of the inertial term and a self-adaptive technique for solving the SVIP and fixed point problems with multivalued demicontractive mappings in real Hilbert spaces. Our algorithm improves on Algorithms 1–3 and 10 and other associated results in the literature. We prove a strong convergence result, subject to certain mild conditions, and provide relevant numerical experiments to showcase the efficiency of the proposed method. We also consider an application of our algorithm to image deblurring problems to demonstrate the applicability of our results.

2. Preliminaries

In this section, we present certain definitions and fundamental results that will be employed in our subsequent analysis. Suppose that $H$ is a real Hilbert space and $C$ is a nonempty, closed, and convex subset of $H$. We use $a_n \to p$ and $a_n \rightharpoonup p$ to denote the strong and weak convergence, respectively, of a sequence $\{a_n\} \subset H$ to a point $p \in H$.
For every vector $\bar{k} \in H$, there exists a unique element $P_C\bar{k} \in C$ such that
$$\|P_C(\bar{k}) - \bar{k}\| = \min\{\|z - \bar{k}\| : z \in C\}.$$
The operator $P_C$ is called the metric projection from $H$ onto $C$; it is characterized by the following properties:
(i)
For $\bar{k} \in H$ and $z \in C$,
$$z = P_C(\bar{k}) \iff \langle \bar{k} - z, z - b \rangle \ge 0 \quad \forall\, b \in C;$$
(ii)
$$\langle \bar{k} - b, P_C(\bar{k}) - P_C(b) \rangle \ge \|P_C(\bar{k}) - P_C(b)\|^2 \quad \forall\, \bar{k}, b \in H;$$
(iii)
For each $\bar{k} \in H$ and $b \in C$,
$$\|b - P_C(\bar{k})\|^2 + \|\bar{k} - P_C(\bar{k})\|^2 \le \|\bar{k} - b\|^2.$$
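In Example 2 below, the resolvent of the indicator function $i_C$ is precisely the metric projection $P_C$. As a concrete illustration, the following MATLAB sketch implements the standard sort-based projection onto the $\ell_1$-ball $C = \{a : \|a\|_1 \le t\}$; this is an illustrative assumption, and the paper's experiments may compute the projection differently:
```matlab
% Metric projection P_C onto the l1-ball C = {a : ||a||_1 <= t},
% computed by the standard sort-and-threshold procedure.
function w = proj_l1(v, t)
    if norm(v, 1) <= t
        w = v;  return                     % v already lies in C
    end
    u  = sort(abs(v(:)), 'descend');       % sorted magnitudes
    sv = cumsum(u);
    r  = (1:numel(u))';
    rho   = find(u - (sv - t) ./ r > 0, 1, 'last');
    theta = (sv(rho) - t) / rho;           % soft-threshold level
    w = sign(v) .* max(abs(v) - theta, 0); % projected point
end
```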
An operator $F : H \to H$ is called:
(i)
$\alpha$-Lipschitz if there is a positive value $\alpha$ such that
$$\|F b - F a\| \le \alpha\|b - a\| \quad \forall\, b, a \in H,$$
and a contraction if $\alpha \in (0, 1)$;
(ii)
Nonexpansive if $F$ is 1-Lipschitz;
(iii)
Quasi-nonexpansive if its fixed point set is nonempty and
$$\|F b - p\| \le \|b - p\| \quad \forall\, b \in H,\ p \in \operatorname{Fix}(F);$$
(iv)
$k$-demicontractive if $\operatorname{Fix}(F) \neq \emptyset$ and there exists a constant $k \in [0, 1)$ such that
$$\|F a - p\|^2 \le \|a - p\|^2 + k\|a - F a\|^2 \quad \forall\, a \in H,\ p \in \operatorname{Fix}(F).$$
Note that the nonexpansive and quasi-nonexpansive mappings are contained in the class of $k$-demicontractive mappings. Analogous definitions apply to multivalued mappings $S : H \to 2^H$, with the distance taken in the Hausdorff sense.
Suppose we have a metric space $(X, d)$ and let $CB(X)$ denote the family of nonempty closed and bounded subsets of $X$. The metric $d$ induces the Hausdorff metric on $CB(X)$: for any two subsets $X, Y \in CB(X)$,
$$\mathcal{H}(X, Y) = \max\Big\{\sup_{x \in X} d(x, Y),\ \sup_{y \in Y} d(X, y)\Big\},$$
where $d(x, Y) = \inf_{y \in Y} d(x, y)$. A fixed point of a multivalued mapping $S : H \to CB(H)$ is a point $a \in H$ such that $a \in S a$. If $S(a) = \{a\}$, then we refer to $a$ as a strict fixed point of $S$. The study of strict fixed points for a specific class of contractive mappings was first conducted by Aubin and Siegel [27]; since then, this condition has been widely applied to various multivalued mappings, such as those in [28,29,30].
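For finite point sets, the Hausdorff metric can be evaluated directly from its definition. A small MATLAB sketch (sets stored column-wise; purely illustrative):
```matlab
% Hausdorff distance between two finite sets X, Y in R^d stored as
% d-by-m matrices:  H(X,Y) = max( sup_x d(x,Y), sup_y d(X,y) ).
function h = hausdorff(X, Y)
    dXY = zeros(1, size(X, 2));
    for i = 1:size(X, 2)
        dXY(i) = min(vecnorm(Y - X(:, i)));   % d(x_i, Y)
    end
    dYX = zeros(1, size(Y, 2));
    for j = 1:size(Y, 2)
        dYX(j) = min(vecnorm(X - Y(:, j)));   % d(X, y_j)
    end
    h = max(max(dXY), max(dYX));
end
```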
Lemma 1. 
The following inequalities hold in a Hilbert space $H$:
(i)
$$\|b - a\|^2 = \|b\|^2 - 2\langle b, a \rangle + \|a\|^2 \quad \forall\, b, a \in H;$$
(ii)
$$\|b + a\|^2 \le \|b\|^2 + 2\langle a, b + a \rangle \quad \forall\, b, a \in H.$$
We shall also make use of the lemmas in [31,32,33] in establishing our main results.

3. Main Results

In this section, we introduce our algorithm and provide its convergence analysis. First, we present the setting of our algorithm as follows:
Let $H_1, H_2$ be two real Hilbert spaces and let $m_i : H_i \to 2^{H_i}$, $i = 1, 2$, be multivalued maximal monotone operators. Let $B : H_1 \to H_2$ be a bounded linear operator with adjoint $B^* : H_2 \to H_1$. For $i = 1, \ldots, r$, let $S_i : H_1 \to CB(H_1)$ be a finite family of $k_i$-demicontractive mappings such that $I - S_i$ is demiclosed at zero with $S_i(q) = \{q\}$ for all $q \in \operatorname{Fix}(S_i)$, and set $k = \max\{k_i\}$. Suppose that the solution set
$$\Gamma = \{a^* \in H_1 : 0 \in m_1(a^*),\ 0 \in m_2(B a^*)\} \cap \bigcap_{i=1}^{r}\operatorname{Fix}(S_i) \neq \emptyset.$$
Let $g : H_1 \to H_1$ be a contraction mapping with constant $\sigma \in (0, 1)$ and let $D : H_1 \to H_1$ be a strongly positive operator with coefficient $\eta > 0$ such that $0 < \xi < \frac{\eta}{\sigma}$. Moreover, let $\{\epsilon_n\}, \{\rho_{n,i}\}, \{\lambda_n\}$ be non-negative sequences such that $0 < y \le \epsilon_n, \rho_{n,i}, \lambda_n \le u < 1$. Define the following functions:
$$f(a) = \frac{1}{2}\big\|(I - J_\sigma^{m_2}) B a\big\|^2 \tag{22}$$
and
$$T(a) = B^*(I - J_\sigma^{m_2}) B a, \qquad H(a) = (I - J_\sigma^{m_1}) a. \tag{23}$$
Now, we present our algorithm as follows (Algorithm 11):    
Algorithm 11 Proposed new inertial and self-adaptive method.
  • Step 0: Choose $\alpha > 3$, $\eta_n \in (0, 4)$ and select initial guesses $a_0, a_1 \in H_1$. Set $n = 1$.
  • Step 1: Given the $(n-1)$-th and $n$-th iterates, choose $\theta_n$ such that $0 \le \theta_n \le \bar{\theta}_n$, where
    $$\bar{\theta}_n = \begin{cases} \min\left\{\dfrac{n-1}{n+\alpha-1},\ \dfrac{\epsilon_n}{\max\{\|a_n - a_{n-1}\|,\ n^2\|a_n - a_{n-1}\|^2\}}\right\} & \text{if } a_n \neq a_{n-1}, \\[2mm] \dfrac{n-1}{n+\alpha-1} & \text{otherwise}. \end{cases} \tag{24}$$
    Set
    $$v_n = a_n + \theta_n(a_n - a_{n-1}).$$
  • Step 2: Compute
    $$\tau_n = \frac{\eta_n f(v_n)}{\|T(v_n)\|^2 + \|H(v_n)\|^2}$$
    and
    $$b_n = J_\sigma^{m_1}\big(I - \tau_n B^*(I - J_\sigma^{m_2}) B\big) v_n.$$
  • Step 3: Compute the next iterate via
    $$\begin{aligned} z_n &= \rho_{n,0} b_n + \sum_{i=1}^{r}\rho_{n,i}\chi_{n,i}, \\ a_{n+1} &= \lambda_n\xi g(a_n) + (I - \lambda_n D) z_n, \end{aligned}$$
    where $\chi_{n,i} \in S_i b_n$ and $\sum_{i=0}^{r}\rho_{n,i} = 1$. Set $n = n + 1$ and go back to Step 1.
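To make the steps concrete, here is a minimal MATLAB sketch of Algorithm 11 in the matrix setting of Example 1, with a single illustrative map $S(b) = -b/2$ (quasi-nonexpansive, hence 0-demicontractive, with $\operatorname{Fix}(S) = \{0\}$), $g(a) = a/4$, and $D = I$; the parameter values are illustrative choices, not the authors' exact implementation:
```matlab
% One run of the proposed Algorithm 11 (sketch) on Example 1 data.
B  = [1 1 0; 1 2 0; 0 0 3];
m1 = diag([4 3 2]);   m2 = diag([6 5 4]);
res = @(M, s, a) (eye(3) + s * M) \ a;     % resolvent J_s^M(a)
alpha = 3.5;  sigma = 0.5;  xi = 1;        % alpha > 3 as in Step 0
rho0 = 0.6;  rho1 = 0.4;                   % rho0 + rho1 = 1, r = 1
a_prev = ones(3, 1);  a = rand(3, 1);
for n = 1:200
    eps_n = 1/(n+1)^2;  eta_n = 2*n/(5*n+4);  lam_n = 1/(n+1);
    % Step 1: inertial parameter theta_n <= theta_bar_n of (24)
    d = norm(a - a_prev);
    if d > 0
        theta = min((n-1)/(n+alpha-1), eps_n / max(d, n^2 * d^2));
    else
        theta = (n-1)/(n+alpha-1);
    end
    v = a + theta * (a - a_prev);
    % Step 2: self-adaptive stepsize tau_n, then the resolvent step
    Bv = B * v;
    r2 = Bv - res(m2, sigma, Bv);          % (I - J^{m2}) B v_n
    fv = 0.5 * norm(r2)^2;                 % f(v_n)
    T  = B' * r2;                          % T(v_n)
    H  = v - res(m1, sigma, v);            % H(v_n)
    tau = eta_n * fv / max(norm(T)^2 + norm(H)^2, eps);
    b  = res(m1, sigma, v - tau * T);      % b_n
    % Step 3: combination with S and the viscosity step (D = I)
    z = rho0 * b + rho1 * (-b / 2);        % chi_n = S(b_n) = -b_n/2
    a_next = lam_n * xi * (a / 4) + (1 - lam_n) * z;
    a_prev = a;  a = a_next;
    if norm(a - a_prev) < 1e-6, break, end % stopping criterion
end
disp(a)   % approaches the unique solution 0 of Example 1
```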
To establish our convergence results, we assume that the control parameters $\lambda_n, \epsilon_n, \rho_{n,i}$ satisfy the following conditions:
(C1)
$\lim_{n\to\infty}\lambda_n = 0$ and $\sum_{n=0}^{\infty}\lambda_n = \infty$;
(C2)
$\liminf_{n\to\infty}(\rho_{n,0} - k)\rho_{n,i} > 0$ for each $i = 1, 2, \ldots, r$;
(C3)
$\epsilon_n = o(\lambda_n)$, i.e., $\lim_{n\to\infty}\frac{\epsilon_n}{\lambda_n} = 0$.
Remark 1. 
It is clear from (24) and Assumption (C3) that
$$\lim_{n\to\infty}\frac{\theta_n}{\lambda_n}\|a_n - a_{n-1}\|^2 \le \lim_{n\to\infty}\frac{\bar{\theta}_n}{\lambda_n}\|a_n - a_{n-1}\|^2 \le \lim_{n\to\infty}\frac{\epsilon_n}{\lambda_n}\cdot\frac{1}{n^2} = 0$$
and
$$\lim_{n\to\infty}\frac{\theta_n}{\lambda_n}\|a_n - a_{n-1}\| \le \lim_{n\to\infty}\frac{\bar{\theta}_n}{\lambda_n}\|a_n - a_{n-1}\| \le \lim_{n\to\infty}\frac{\epsilon_n}{\lambda_n} = 0.$$

Convergence Analysis

We begin the convergence analysis of Algorithm 11 by proving the following results.
Lemma 2. 
Consider the function $f$ defined in (22) and the operators $T$ and $H$ defined in (23); then $T$ and $H$ are Lipschitz continuous.
Proof. 
Since $T(a) = B^*(I - J_\sigma^{m_2}) B a$, we have
$$\begin{aligned} \|T(a) - T(b)\|^2 &= \big\langle B^*\big((I - J_\sigma^{m_2}) B a - (I - J_\sigma^{m_2}) B b\big),\ B^*\big((I - J_\sigma^{m_2}) B a - (I - J_\sigma^{m_2}) B b\big) \big\rangle \\ &= \big\langle (I - J_\sigma^{m_2}) B a - (I - J_\sigma^{m_2}) B b,\ B B^*\big((I - J_\sigma^{m_2}) B a - (I - J_\sigma^{m_2}) B b\big) \big\rangle \\ &\le L\big\|(I - J_\sigma^{m_2}) B a - (I - J_\sigma^{m_2}) B b\big\|^2, \end{aligned}$$
where $L = \|B^*B\|$. On the other hand,
$$\begin{aligned} \langle T(a) - T(b), a - b \rangle &= \big\langle B^*\big((I - J_\sigma^{m_2}) B a - (I - J_\sigma^{m_2}) B b\big),\ a - b \big\rangle \\ &= \big\langle (I - J_\sigma^{m_2}) B a - (I - J_\sigma^{m_2}) B b,\ B a - B b \big\rangle \\ &\ge \big\|(I - J_\sigma^{m_2}) B a - (I - J_\sigma^{m_2}) B b\big\|^2. \end{aligned}$$
Combining the two estimates above, we have:
$$\langle T(a) - T(b), a - b \rangle \ge \frac{1}{L}\|T(a) - T(b)\|^2,$$
i.e., $T$ is $\frac{1}{L}$-inverse strongly monotone, which implies that $T$ is $L$-Lipschitz continuous. Indeed, by the Cauchy–Schwarz inequality,
$$\langle T(a) - T(b), a - b \rangle \le \|T(a) - T(b)\|\,\|a - b\|,$$
hence
$$\|T(a) - T(b)\| \le L\|a - b\|.$$
Likewise, it can be observed that the function $H$ is Lipschitz continuous.    □
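The Lipschitz bound of Lemma 2 can also be checked numerically in the matrix setting of Example 1; the following MATLAB sketch (illustrative data) samples random pairs and verifies that the ratio never exceeds 1:
```matlab
% Numerical sanity check of Lemma 2: T(a) = B'(I - J_sigma^{m2})B a
% should be L-Lipschitz with L = ||B'*B||.
B  = [1 1 0; 1 2 0; 0 0 3];  m2 = diag([6 5 4]);  sigma = 0.5;
res = @(M, s, a) (eye(3) + s * M) \ a;
Tf  = @(a) B' * (B * a - res(m2, sigma, B * a));   % the operator T
L   = norm(B' * B);
worst = 0;
for trial = 1:1000
    a = randn(3, 1);  b = randn(3, 1);
    worst = max(worst, norm(Tf(a) - Tf(b)) / (L * norm(a - b)));
end
fprintf('max ||T(a)-T(b)|| / (L||a-b||): %.3f (expected <= 1)\n', worst);
```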
Lemma 3. 
The sequence $\{a_n\}$ generated by Algorithm 11 is bounded.
Proof. 
Let $q \in \Gamma$. Then
$$\|v_n - q\| = \|a_n + \theta_n(a_n - a_{n-1}) - q\| \le \|a_n - q\| + \theta_n\|a_n - a_{n-1}\|. \tag{28}$$
Since $\Gamma \neq \emptyset$, we have $q = J_\sigma^{m_1}(q)$, $B q = J_\sigma^{m_2}(B q)$, and $(I - J_\sigma^{m_2}) B q = B q - B q = 0$. Note that $T(v_n) = B^*(I - J_\sigma^{m_2}) B v_n$ and $I - J_\sigma^{m_2}$ is firmly nonexpansive; therefore, we obtain:
$$\begin{aligned} \langle T(v_n), v_n - q \rangle &= \langle B^*(I - J_\sigma^{m_2}) B v_n,\ v_n - q \rangle \\ &= \langle (I - J_\sigma^{m_2}) B v_n - (I - J_\sigma^{m_2}) B q,\ B v_n - B q \rangle \\ &\ge \|(I - J_\sigma^{m_2}) B v_n\|^2 = 2 f(v_n) \end{aligned}$$
and
$$\begin{aligned} \|b_n - q\|^2 &= \|J_\sigma^{m_1}\big(I - \tau_n B^*(I - J_\sigma^{m_2}) B\big) v_n - q\|^2 \\ &\le \|\big(I - \tau_n B^*(I - J_\sigma^{m_2}) B\big) v_n - q\|^2 = \|v_n - q - \tau_n T(v_n)\|^2 \\ &= \|v_n - q\|^2 + \tau_n^2\|T(v_n)\|^2 - 2\tau_n\langle T(v_n), v_n - q \rangle \\ &\le \|v_n - q\|^2 + \tau_n^2\|T(v_n)\|^2 - 4\tau_n f(v_n) \\ &\le \|v_n - q\|^2 - \eta_n(4 - \eta_n)\frac{f^2(v_n)}{\|T(v_n)\|^2 + \|H(v_n)\|^2}. \tag{30} \end{aligned}$$
Since $0 < \eta_n < 4$, it follows that $\|b_n - q\| \le \|v_n - q\|$. Using the lemma of Chidume and Ezeora [31], we obtain:
$$\begin{aligned} \|z_n - q\|^2 &= \Big\|\rho_{n,0} b_n + \sum_{i=1}^{r}\rho_{n,i}\chi_{n,i} - q\Big\|^2 \\ &\le \rho_{n,0}\|b_n - q\|^2 + \sum_{i=1}^{r}\rho_{n,i}\|\chi_{n,i} - q\|^2 - \sum_{i=1}^{r}\rho_{n,0}\rho_{n,i}\|b_n - \chi_{n,i}\|^2 \\ &= \rho_{n,0}\|b_n - q\|^2 + \sum_{i=1}^{r}\rho_{n,i}\,d(\chi_{n,i}, S_i q)^2 - \sum_{i=1}^{r}\rho_{n,0}\rho_{n,i}\|b_n - \chi_{n,i}\|^2 \\ &\le \rho_{n,0}\|b_n - q\|^2 + \sum_{i=1}^{r}\rho_{n,i}\,\mathcal{H}(S_i b_n, S_i q)^2 - \sum_{i=1}^{r}\rho_{n,0}\rho_{n,i}\|b_n - \chi_{n,i}\|^2 \\ &\le \rho_{n,0}\|b_n - q\|^2 + \sum_{i=1}^{r}\rho_{n,i}\big(\|b_n - q\|^2 + k_i\,d(b_n, S_i b_n)^2\big) - \sum_{i=1}^{r}\rho_{n,0}\rho_{n,i}\|b_n - \chi_{n,i}\|^2 \\ &\le \|b_n - q\|^2 - \sum_{i=1}^{r}(\rho_{n,0} - k)\rho_{n,i}\|b_n - \chi_{n,i}\|^2; \end{aligned}$$
thus, applying condition (C2), we have:
$$\|z_n - q\|^2 \le \|b_n - q\|^2. \tag{31}$$
It follows from (28), (30), and (31) that:
$$\begin{aligned} \|a_{n+1} - q\| &= \|\lambda_n(\xi g(a_n) - D q) + (I - \lambda_n D)(z_n - q)\| \\ &\le \lambda_n\|\xi g(a_n) - D q\| + (1 - \lambda_n\eta)\|z_n - q\| \\ &\le \lambda_n\|\xi(g(a_n) - g(q))\| + \lambda_n\|\xi g(q) - D q\| + (1 - \lambda_n\eta)\|z_n - q\| \\ &\le \lambda_n\xi\sigma\|a_n - q\| + \lambda_n\|\xi g(q) - D q\| + (1 - \lambda_n\eta)\big[\|a_n - q\| + \theta_n\|a_n - a_{n-1}\|\big] \\ &= \big(1 - \lambda_n(\eta - \xi\sigma)\big)\|a_n - q\| + (\eta - \xi\sigma)\lambda_n\left[\frac{\|\xi g(q) - D q\|}{\eta - \xi\sigma} + \frac{1 - \lambda_n\eta}{\eta - \xi\sigma}\cdot\frac{\theta_n}{\lambda_n}\|a_n - a_{n-1}\|\right]. \end{aligned}$$
Note that $\sup_{n\ge1}\frac{1 - \lambda_n\eta}{\eta - \xi\sigma}\cdot\frac{\theta_n}{\lambda_n}\|a_n - a_{n-1}\|$ exists by Remark 1, and let
$$M = \max\left\{\frac{\|\xi g(q) - D q\|}{\eta - \xi\sigma},\ \sup_{n\ge1}\frac{1 - \lambda_n\eta}{\eta - \xi\sigma}\cdot\frac{\theta_n}{\lambda_n}\|a_n - a_{n-1}\|\right\}.$$
Therefore, we have the following:
$$\|a_{n+1} - q\| \le \big(1 - \lambda_n(\eta - \xi\sigma)\big)\|a_n - q\| + \lambda_n(\eta - \xi\sigma) M.$$
Applying Lemma [32](i), we conclude that $\{\|a_n - q\|\}$ is bounded, and therefore $\{a_n\}$ is also bounded. Consequently, the sequences $\{v_n\}$, $\{z_n\}$, and $\{b_n\}$ are bounded.    □
Lemma 4. 
Let $\{a_n\}$ be the sequence generated by the proposed Algorithm 11. Put $s_n = \|a_n - q\|^2$, $\tilde{y}_n = \frac{2\lambda_n(\eta - \xi\sigma)}{1 - \lambda_n\xi\sigma}$, $u_n = \frac{1}{2(\eta - \xi\sigma)}\big(2\langle \xi g(q) - D q, a_{n+1} - q \rangle + \lambda_n M_1\big)$ for some $M_1 > 0$, and $c_n = \frac{\theta_n\|a_n - a_{n-1}\|}{1 - \lambda_n\xi\sigma} M_2$, where $M_2 = \sup_{n\ge1}\big((1 - \lambda_n\eta)^2(\|a_n - q\| + \|a_{n-1} - q\|) + 2(1 - \lambda_n\eta)^2\|a_n - a_{n-1}\|\big)$ and $q \in \Gamma$.
Then, the following conclusions hold:
(i)
$s_{n+1} \le (1 - \tilde{y}_n)s_n + c_n + \tilde{y}_n u_n$;
(ii)
$-1 \le \limsup_{n\to\infty} u_n < +\infty$.
Proof. 
From Algorithm 11, we have
$$\|v_n - q\|^2 = \|a_n + \theta_n(a_n - a_{n-1}) - q\|^2 = \|a_n - q\|^2 + 2\theta_n\langle a_n - q, a_n - a_{n-1} \rangle + \theta_n^2\|a_n - a_{n-1}\|^2. \tag{34}$$
Using Lemma 1(i), we determine that
$$2\langle a_n - q, a_n - a_{n-1} \rangle = \|a_n - q\|^2 - \|a_{n-1} - q\|^2 + \|a_n - a_{n-1}\|^2; \tag{35}$$
thus, substituting (35) into (34), we obtain
$$\begin{aligned} \|v_n - q\|^2 &= \|a_n - q\|^2 + \theta_n\big(\|a_n - q\|^2 - \|a_{n-1} - q\|^2 + \|a_n - a_{n-1}\|^2\big) + \theta_n^2\|a_n - a_{n-1}\|^2 \\ &\le \|a_n - q\|^2 + \theta_n\big(\|a_n - q\|^2 - \|a_{n-1} - q\|^2\big) + 2\theta_n\|a_n - a_{n-1}\|^2. \tag{36} \end{aligned}$$
Now, from Lemma 1(ii) we have
$$\|a_{n+1} - q\|^2 = \|\lambda_n(\xi g(a_n) - D q) + (I - \lambda_n D)(z_n - q)\|^2 \le (1 - \lambda_n\eta)^2\|z_n - q\|^2 + 2\lambda_n\langle \xi g(a_n) - D q, a_{n+1} - q \rangle. \tag{37}$$
It follows from (30), (31), (36), and (37) that
$$\begin{aligned} \|a_{n+1} - q\|^2 &\le (1 - \lambda_n\eta)^2\|v_n - q\|^2 + 2\lambda_n\langle \xi g(a_n) - D q, a_{n+1} - q \rangle \\ &\le (1 - \lambda_n\eta)^2\|a_n - q\|^2 + \theta_n(1 - \lambda_n\eta)^2\big(\|a_n - q\|^2 - \|a_{n-1} - q\|^2\big) + 2\theta_n(1 - \lambda_n\eta)^2\|a_n - a_{n-1}\|^2 \\ &\quad + 2\lambda_n\langle \xi g(a_n) - D q, a_{n+1} - q \rangle \\ &\le (1 - \lambda_n\eta)^2\|a_n - q\|^2 + \theta_n(1 - \lambda_n\eta)^2\big(\|a_n - q\| + \|a_{n-1} - q\|\big)\|a_n - a_{n-1}\| \\ &\quad + 2\theta_n(1 - \lambda_n\eta)^2\|a_n - a_{n-1}\|^2 + 2\lambda_n\langle \xi g(a_n) - D q, a_{n+1} - q \rangle. \tag{38} \end{aligned}$$
Also,
$$\begin{aligned} 2\langle \xi g(a_n) - D q, a_{n+1} - q \rangle &= 2\langle \xi(g(a_n) - g(q)), a_{n+1} - q \rangle + 2\langle \xi g(q) - D q, a_{n+1} - q \rangle \\ &\le 2\xi\sigma\|a_n - q\|\,\|a_{n+1} - q\| + 2\langle \xi g(q) - D q, a_{n+1} - q \rangle \\ &\le \xi\sigma\big(\|a_n - q\|^2 + \|a_{n+1} - q\|^2\big) + 2\langle \xi g(q) - D q, a_{n+1} - q \rangle. \tag{39} \end{aligned}$$
Substituting (39) into (38), we have
$$\begin{aligned} \|a_{n+1} - q\|^2 &\le \big[(1 - \lambda_n\eta)^2 + \lambda_n\xi\sigma\big]\|a_n - q\|^2 + \theta_n(1 - \lambda_n\eta)^2\big(\|a_n - q\| + \|a_{n-1} - q\|\big)\|a_n - a_{n-1}\| \\ &\quad + 2\theta_n(1 - \lambda_n\eta)^2\|a_n - a_{n-1}\|^2 + \lambda_n\xi\sigma\|a_{n+1} - q\|^2 + 2\lambda_n\langle \xi g(q) - D q, a_{n+1} - q \rangle \\ &= \big(1 - \lambda_n(2\eta - \xi\sigma)\big)\|a_n - q\|^2 + (\lambda_n\eta)^2\|a_n - q\|^2 \\ &\quad + \theta_n\big[(1 - \lambda_n\eta)^2(\|a_n - q\| + \|a_{n-1} - q\|) + 2(1 - \lambda_n\eta)^2\|a_n - a_{n-1}\|\big]\|a_n - a_{n-1}\| \\ &\quad + \lambda_n\xi\sigma\|a_{n+1} - q\|^2 + 2\lambda_n\langle \xi g(q) - D q, a_{n+1} - q \rangle \\ &\le \big(1 - \lambda_n(2\eta - \xi\sigma)\big)\|a_n - q\|^2 + \lambda_n\xi\sigma\|a_{n+1} - q\|^2 \\ &\quad + \theta_n\big[(1 - \lambda_n\eta)^2(\|a_n - q\| + \|a_{n-1} - q\|) + 2(1 - \lambda_n\eta)^2\|a_n - a_{n-1}\|\big]\|a_n - a_{n-1}\| \\ &\quad + \lambda_n\big(2\langle \xi g(q) - D q, a_{n+1} - q \rangle + \lambda_n M_1\big) \end{aligned}$$
for some $M_1 \ge 0$; hence
$$\begin{aligned} \|a_{n+1} - q\|^2 &\le \frac{1 - \lambda_n(2\eta - \xi\sigma)}{1 - \lambda_n\xi\sigma}\|a_n - q\|^2 + \frac{\theta_n}{1 - \lambda_n\xi\sigma}\|a_n - a_{n-1}\| M_2 + \frac{\lambda_n\big(2\langle \xi g(q) - D q, a_{n+1} - q \rangle + \lambda_n M_1\big)}{1 - \lambda_n\xi\sigma} \\ &= \left(1 - \frac{2\lambda_n(\eta - \xi\sigma)}{1 - \lambda_n\xi\sigma}\right)\|a_n - q\|^2 + \frac{\theta_n}{1 - \lambda_n\xi\sigma}\|a_n - a_{n-1}\| M_2 + \frac{2\lambda_n(\eta - \xi\sigma)}{1 - \lambda_n\xi\sigma}\cdot\frac{2\langle \xi g(q) - D q, a_{n+1} - q \rangle + \lambda_n M_1}{2(\eta - \xi\sigma)}, \end{aligned}$$
which proves (i).
Furthermore, from the boundedness of $\{a_n\}$, it is easy to see that
$$\sup_{n\ge0} u_n \le \sup_{n\ge0}\frac{1}{2(\eta - \xi\sigma)}\big(2\|\xi g(q) - D q\|\,\|a_{n+1} - q\| + M_1\big) < \infty.$$
Our next objective is to demonstrate that $\limsup_{n\to\infty} u_n \ge -1$. Suppose to the contrary that $\limsup_{n\to\infty} u_n < -1$; then there exists $n_0 \in \mathbb{N}$ such that $u_n \le -1$ for all $n \ge n_0$. Therefore, according to (i), we can conclude that
$$s_{n+1} \le (1 - \tilde{y}_n)s_n + c_n + \tilde{y}_n u_n \le (1 - \tilde{y}_n)s_n + c_n - \tilde{y}_n = c_n + s_n - \tilde{y}_n(s_n + 1) \le c_n + s_n - 2(\eta - \xi\sigma)\lambda_n.$$
By induction, we obtain
$$s_{n+1} \le s_{n_0} + \sum_{i=n_0}^{n} c_i - 2(\eta - \xi\sigma)\sum_{i=n_0}^{n}\lambda_i \quad \forall\, n \ge n_0.$$
Taking the limit superior of both sides of the last inequality and noting that $c_n = o(\lambda_n)$ (by Remark 1) while $\sum_n \lambda_n = \infty$, we obtain
$$\limsup_{n\to\infty} s_n \le s_{n_0} - \lim_{n\to\infty} 2(\eta - \xi\sigma)\sum_{i=n_0}^{n}\lambda_i = -\infty.$$
This contradicts the fact that $\{s_n\}$ is a non-negative real sequence. Therefore, we can conclude that $\limsup_{n\to\infty} u_n \ge -1$.    □
Remark 2. 
Since $\lim_{n\to\infty}\lambda_n = 0$, it is easy to verify that $\tilde{y}_n \to 0$ as well. In addition, by Remark 1, $c_n \to 0$ as $n \to \infty$.
Now, we present our strong convergence theorem.
Theorem 1. 
Let $\{a_n\}$ be the sequence generated by the proposed Algorithm 11 and suppose that Assumptions (C1)–(C3) are satisfied. Then, $\{a_n\}$ converges strongly to the unique point $p = P_\Gamma(I - D + \xi g)(p)$, which solves the variational inequality
$$\langle (D - \xi g) p, p - a \rangle \le 0 \quad \forall\, a \in \Gamma.$$
Proof. 
Let $q \in \Gamma$, and write $\Phi_n = \|a_n - q\|^2$. We consider the following two possible cases.
CASE A: Suppose there exists $n_0 \in \mathbb{N}$ such that $\{\Phi_n\}$ is monotonically decreasing for $n \ge n_0$. Then $\lim_{n\to\infty}(\Phi_n - \Phi_{n+1}) = 0$. We first show that $\|b_n - v_n\|$, $\|\chi_{n,i} - b_n\|$, and $\|a_{n+1} - a_n\|$ all tend to zero. Indeed,
$$\|v_n - a_n\| = \|a_n + \theta_n(a_n - a_{n-1}) - a_n\| = \theta_n\|a_n - a_{n-1}\| \to 0 \quad \text{as } n \to \infty;$$
thus, $\lim_{n\to\infty}\|v_n - a_n\| = 0$. From Equations (30) and (37), we have:
$$\begin{aligned} \eta_n(4 - \eta_n)\frac{f^2(v_n)}{\|T(v_n)\|^2 + \|H(v_n)\|^2} &\le \|v_n - q\|^2 - \|b_n - q\|^2 \\ &= \|v_n - q\|^2 - \|a_{n+1} - q\|^2 + \|a_{n+1} - q\|^2 - \|b_n - q\|^2 \\ &\le \|a_n - q\|^2 + \theta_n M\|a_n - a_{n-1}\| - \|a_{n+1} - q\|^2 \\ &\quad + (1 - \lambda_n\eta)\|z_n - q\|^2 + 2\lambda_n\langle \xi g(a_n) - D q, a_{n+1} - q \rangle - \|b_n - q\|^2 \\ &\le \Phi_n - \Phi_{n+1} + \theta_n M\|a_n - a_{n-1}\| + 2\lambda_n\langle \xi g(a_n) - D q, a_{n+1} - q \rangle \\ &\to 0 \quad \text{as } n \to \infty. \end{aligned}$$
Note that $\inf\eta_n(4 - \eta_n) > 0$ and $T$ and $H$ are Lipschitz continuous, so we obtain:
$$\lim_{n\to\infty} f^2(v_n) = 0.$$
Therefore, $f(v_n) \to 0$ and $\|b_n - v_n\| \to 0$ as $n \to \infty$. From (30), (31), (36), and (37), we obtain the following results:
$$\begin{aligned} \|a_{n+1} - q\|^2 &\le (1 - \lambda_n\eta)^2\|z_n - q\|^2 + 2\lambda_n\langle \xi g(a_n) - D q, a_{n+1} - q \rangle \\ &\le (1 - \lambda_n\eta)^2\Big(\|b_n - q\|^2 - \sum_{i=1}^{r}(\rho_{n,0} - k)\rho_{n,i}\|b_n - \chi_{n,i}\|^2\Big) + 2\lambda_n\langle \xi g(a_n) - D q, a_{n+1} - q \rangle \\ &\le (1 - \lambda_n\eta)^2\Big(\|a_n - q\|^2 + \theta_n\big(\|a_n - q\|^2 - \|a_{n-1} - q\|^2\big) + 2\theta_n\|a_n - a_{n-1}\|^2\Big) \\ &\quad - (1 - \lambda_n\eta)^2\sum_{i=1}^{r}(\rho_{n,0} - k)\rho_{n,i}\|b_n - \chi_{n,i}\|^2 + 2\lambda_n\langle \xi g(a_n) - D q, a_{n+1} - q \rangle. \end{aligned}$$
Hence,
$$\begin{aligned} (1 - \lambda_n\eta)^2\sum_{i=1}^{r}(\rho_{n,0} - k)\rho_{n,i}\|b_n - \chi_{n,i}\|^2 &\le \Phi_n - \Phi_{n+1} + \lambda_n M_3 + \theta_n(1 - \lambda_n\eta)^2\big(\Phi_n - \Phi_{n-1}\big) \\ &\quad + 2\theta_n(1 - \lambda_n\eta)^2\|a_n - a_{n-1}\|^2 + 2\lambda_n\langle \xi g(a_n) - D q, a_{n+1} - q \rangle \\ &\to 0 \quad \text{as } n \to \infty. \end{aligned}$$
Thus, by applying condition (C2), we obtain
$$\lim_{n\to\infty}\|b_n - \chi_{n,i}\| = 0. \tag{48}$$
We also have that
$$\|z_n - b_n\| = \Big\|\rho_{n,0} b_n + \sum_{i=1}^{r}\rho_{n,i}\chi_{n,i} - b_n\Big\| \le \sum_{i=1}^{r}\rho_{n,i}\|\chi_{n,i} - b_n\| \to 0, \quad n \to \infty;$$
thus, $\lim_{n\to\infty}\|z_n - b_n\| = 0$. Therefore,
$$\lim_{n\to\infty}\|z_n - a_n\| \le \lim_{n\to\infty}\big(\|z_n - b_n\| + \|b_n - a_n\|\big) = 0.$$
Finally,
$$\|a_{n+1} - z_n\| = \|\lambda_n\xi g(a_n) + (I - \lambda_n D) z_n - z_n\| = \lambda_n\|\xi g(a_n) - D z_n\| \to 0, \quad n \to \infty,$$
which results in the following:
$$\|a_{n+1} - a_n\| \le \|a_{n+1} - z_n\| + \|z_n - a_n\| \to 0 \quad \text{as } n \to \infty.$$
Since $\{a_n\}$ is bounded, there exists a subsequence $\{a_{n_k}\}$ that converges weakly to some $a^*$. Denote $F_{n_k} = I - \tau_{n_k} B^*(I - J_\sigma^{m_2}) B$; since $J_\sigma^{m_2}$ is firmly nonexpansive, $F_{n_k}$ and $J_\sigma^{m_1}(I - \tau_{n_k} B^*(I - J_\sigma^{m_2}) B)$ are averaged and nonexpansive. So the subsequence $\{v_{n_k}\}$ converges weakly to a fixed point $a^*$ of the operator $J_\sigma^{m_1} F_{n_k}$. We now show that $a^* \in \Gamma$, that is, $a^* \in m_1^{-1}(0)$ with $B a^* \in m_2^{-1}(0)$, and $a^* \in \bigcap_{i=1}^{r}\operatorname{Fix}(S_i)$. From (30), we have
$$\eta_n(4 - \eta_n)\frac{f^2(v_n)}{\|T(v_n)\|^2 + \|H(v_n)\|^2} \le \|v_n - q\|^2 - \|b_n - q\|^2;$$
since $T$ and $H$ are Lipschitz continuous, $T(v_n)$ and $H(v_n)$ are bounded. In addition, $\inf\eta_n(4 - \eta_n) > 0$; hence $f(v_n) \to 0$ as $n \to \infty$. Since the subsequence $\{a_{n_k}\}$ converges weakly to $a^*$, the function $f$ is weakly lower semicontinuous, and $\|v_n - a_n\| \to 0$ as $n \to \infty$, we can determine
$$0 \le f(a^*) \le \liminf_{k\to\infty} f(v_{n_k}) = \lim_{n\to\infty} f(v_n) = 0.$$
That is,
$$f(a^*) = \frac{1}{2}\|(I - J_\sigma^{m_2}) B a^*\|^2 = 0.$$
This implies that $B a^*$ is a fixed point of $J_\sigma^{m_2}$, i.e., $(I - J_\sigma^{m_2}) B a^* = 0$; hence $B a^* \in m_2^{-1}(0)$, i.e., $0 \in m_2(B a^*)$. Moreover, the point $a^*$ is a fixed point of the operator $J_\sigma^{m_1}(I - \tau_n B^*(I - J_\sigma^{m_2}) B)$, which means that $a^* = J_\sigma^{m_1}(I - \tau_n B^*(I - J_\sigma^{m_2}) B) a^*$. Since $(I - J_\sigma^{m_2}) B a^* = 0$, we have $(I - \tau_n B^*(I - J_\sigma^{m_2}) B) a^* = a^*$, and consequently $a^* = J_\sigma^{m_1} a^*$; in fact, $a^* \in m_1^{-1}(0)$. Furthermore, from (48) and the fact that $I - S_i$ is demiclosed at zero, $a^* \in \operatorname{Fix}(S_i)$ for $i = 1, \ldots, r$. Hence $a^* \in \Gamma$.
Next, we show that $\{a_n\}$ converges strongly to $a^*$, where $a^* = P_\Gamma(I - D + \xi g) a^*$ is the unique solution of the variational inequality
$$\langle (D - \xi g) a^*, a^* - a \rangle \le 0 \quad \forall\, a \in \Gamma. \tag{51}$$
To achieve this, we prove that $\limsup_{n\to\infty}\langle (D - \xi g) a^*, a^* - a_n \rangle \le 0$. Choose a subsequence $\{a_{n_j}\}$ of $\{a_n\}$ such that $\limsup_{n\to\infty}\langle (D - \xi g) a^*, a^* - a_n \rangle = \lim_{j\to\infty}\langle (D - \xi g) a^*, a^* - a_{n_j} \rangle$. Since $a_{n_j} \rightharpoonup \bar{a}$ and using the characterization of the metric projection $P_\Gamma$, we have
$$\limsup_{n\to\infty}\langle (D - \xi g) a^*, a^* - a_n \rangle = \lim_{j\to\infty}\langle (D - \xi g) a^*, a^* - a_{n_j} \rangle = \langle (D - \xi g) a^*, a^* - \bar{a} \rangle \le 0, \tag{52}$$
where the last inequality follows from (51) since $\bar{a} \in \Gamma$. Finally, we make use of Lemma 4(i), (51), and (52), together with Lemma [32], to obtain $\|a_n - a^*\| \to 0$, implying that the sequence $\{a_n\}$ converges strongly to $a^*$. Case A is concluded.
CASE B: Now assume that $\{\|a_n - q\|\}$ is not monotonically decreasing. Then there is an $n_0$ such that, for $n \ge n_0$, we can define $\phi : \mathbb{N} \to \mathbb{N}$ by:
$$\phi(n) = \max\{t \in \mathbb{N} : t \le n,\ \Phi_t \le \Phi_{t+1}\}.$$
Moreover, $\phi$ is nondecreasing with $\lim_{n\to\infty}\phi(n) = \infty$ and
$$0 \le \|a_{\phi(n)} - q\| \le \|a_{\phi(n)+1} - q\| \quad \forall\, n \ge n_0.$$
We can apply a similar argument to the one used in Case A and conclude that
$$\lim_{n\to\infty}\|b_{\phi(n)} - v_{\phi(n)}\| = \lim_{n\to\infty}\|\chi_{\phi(n),i} - b_{\phi(n)}\| = \lim_{n\to\infty}\|a_{\phi(n)+1} - a_{\phi(n)}\| = 0.$$
Thus, $\Omega_w(a_{\phi(n)}) \subset \Gamma$, where $\Omega_w(a_{\phi(n)})$ denotes the set of weak subsequential limits of $\{a_{\phi(n)}\}$. Also, we have
$$\limsup_{n\to\infty}\langle (D - \xi g) q, q - a_{\phi(n)} \rangle \le 0.$$
Thus, it follows from Lemma 4(i) that
$$\begin{aligned} \|a_{\phi(n)+1} - q\|^2 &\le \left(1 - \frac{2\lambda_{\phi(n)}(\eta - \xi\sigma)}{1 - \lambda_{\phi(n)}\xi\sigma}\right)\|a_{\phi(n)} - q\|^2 + \frac{2\lambda_{\phi(n)}(\eta - \xi\sigma)}{1 - \lambda_{\phi(n)}\xi\sigma}\cdot\frac{2\langle \xi g(q) - D q, a_{\phi(n)+1} - q \rangle + \lambda_{\phi(n)} M}{2(\eta - \xi\sigma)} \\ &\quad + \frac{\theta_{\phi(n)} M_2\|a_{\phi(n)} - a_{\phi(n)-1}\|}{1 - \lambda_{\phi(n)}\xi\sigma} \tag{53} \end{aligned}$$
for some $M > 0$, where
$$M_2 = \sup_{n\ge1}\Big((1 - \lambda_{\phi(n)}\eta)^2\big(\|a_{\phi(n)} - q\| + \|a_{\phi(n)-1} - q\|\big) + 2(1 - \lambda_{\phi(n)}\eta)^2\|a_{\phi(n)} - a_{\phi(n)-1}\|\Big).$$
Since $\|a_{\phi(n)} - q\|^2 \le \|a_{\phi(n)+1} - q\|^2$, it follows from (53) that:
$$\begin{aligned} 0 &\le \left(1 - \frac{2\lambda_{\phi(n)}(\eta - \xi\sigma)}{1 - \lambda_{\phi(n)}\xi\sigma}\right)\|a_{\phi(n)} - q\|^2 + \frac{2\lambda_{\phi(n)}(\eta - \xi\sigma)}{1 - \lambda_{\phi(n)}\xi\sigma}\cdot\frac{2\langle \xi g(q) - D q, a_{\phi(n)+1} - q \rangle + \lambda_{\phi(n)} M}{2(\eta - \xi\sigma)} \\ &\quad + \frac{\theta_{\phi(n)} M_2\|a_{\phi(n)} - a_{\phi(n)-1}\|}{1 - \lambda_{\phi(n)}\xi\sigma} - \|a_{\phi(n)} - q\|^2. \end{aligned}$$
Hence, we obtain:
$$\frac{2\lambda_{\phi(n)}(\eta - \xi\sigma)}{1 - \lambda_{\phi(n)}\xi\sigma}\|a_{\phi(n)} - q\|^2 \le \frac{2\lambda_{\phi(n)}(\eta - \xi\sigma)}{1 - \lambda_{\phi(n)}\xi\sigma}\cdot\frac{2\langle \xi g(q) - D q, a_{\phi(n)+1} - q \rangle + \lambda_{\phi(n)} M}{2(\eta - \xi\sigma)} + \frac{\theta_{\phi(n)} M_2\|a_{\phi(n)} - a_{\phi(n)-1}\|}{1 - \lambda_{\phi(n)}\xi\sigma}.$$
Therefore, we obtain the result below:
$$\|a_{\phi(n)} - q\|^2 \le \frac{2\langle \xi g(q) - D q, a_{\phi(n)+1} - q \rangle + \lambda_{\phi(n)} M}{2(\eta - \xi\sigma)} + \frac{\theta_{\phi(n)} M_2\|a_{\phi(n)} - a_{\phi(n)-1}\|}{2\lambda_{\phi(n)}(\eta - \xi\sigma)}.$$
Since the sequence $\{a_{\phi(n)}\}$ is bounded and $\lambda_{\phi(n)} \to 0$, it follows from the last inequality, (52), and Remark 1 that
$$\lim_{n\to\infty}\|a_{\phi(n)} - q\| = 0.$$
We can conclude that, for all $n \ge n_0$, the following holds:
$$0 \le \|a_n - q\|^2 \le \max\{\|a_{\phi(n)} - q\|^2,\ \|a_{\phi(n)+1} - q\|^2\} = \|a_{\phi(n)+1} - q\|^2.$$
Hence, $\lim_{n\to\infty}\|a_n - q\| = 0$; therefore, the sequence $\{a_n\}$ converges strongly to $q$. This completes the proof.    □
The result presented in Theorem 1 yields an improvement over the findings of [26]. It is important to recall that quasi-nonexpansive mappings are 0-demicontractive; therefore, the same results can be used to approximate a common solution of the SVIP and a finite family of multivalued quasi-nonexpansive mappings. The following remark highlights our contributions in this paper:
Remark 3. 
(i)
A new optimal choice of the inertial extrapolation term is introduced. This choice can also be adapted to other iterative algorithms to improve their performance.
(ii)
The algorithm achieves a strong convergence result without imposing stringent conditions on the control parameters.
(iii)
The self-adaptive technique removes the need for a prior estimate of the norm of the bounded linear operator at every iteration.
(iv)
With appropriate starting points, the algorithm produces solutions that approximate the entire solution set $\Gamma$ stated in (1). This feature sets it apart from Tikhonov-type regularization methods, which always converge to the same solution. We find this attribute particularly intriguing.

4. Numerical Illustrations

In this section, we provide some numerical examples that demonstrate the effectiveness and efficiency of the suggested algorithm. We compare the performance of Algorithm 11 with Algorithms 1, 2, 3, and 10. All codes were written in MATLAB R2020b and run on a desktop PC with an Intel(R) Core(TM) i7-6600U CPU @ 3.00 GHz and 32.00 GB RAM.
Example 1. 
Let $H_1 = H_2 = \mathbb{R}^3$ and let $B, m_1, m_2 : \mathbb{R}^3 \to \mathbb{R}^3$ be defined by
$$B = \begin{pmatrix} 1 & 1 & 0 \\ 1 & 2 & 0 \\ 0 & 0 & 3 \end{pmatrix}, \quad m_1 = \begin{pmatrix} 4 & 0 & 0 \\ 0 & 3 & 0 \\ 0 & 0 & 2 \end{pmatrix} \quad \text{and} \quad m_2 = \begin{pmatrix} 6 & 0 & 0 \\ 0 & 5 & 0 \\ 0 & 0 & 4 \end{pmatrix}.$$
It is easy to check that the resolvent operators with respect to $m_1$ and $m_2$ are given by
$$J_\sigma^{m_1}(a) = \left(\frac{a_1}{1 + 4\sigma},\ \frac{a_2}{1 + 3\sigma},\ \frac{a_3}{1 + 2\sigma}\right) \quad \text{and} \quad J_\sigma^{m_2}(a) = \left(\frac{a_1}{1 + 6\sigma},\ \frac{a_2}{1 + 5\sigma},\ \frac{a_3}{1 + 4\sigma}\right)$$
for $\sigma > 0$ and $a \in \mathbb{R}^3$. Also, let $F_j : \mathbb{R}^3 \to 2^{\mathbb{R}^3}$ be defined by
$$F_j a = \begin{cases} \left[-\dfrac{(3j+1)a}{3},\ -(j+1)a\right] & \text{if } a \le 0, \\[2mm] \left[-(j+1)a,\ -\dfrac{(3j+1)a}{3}\right] & \text{if } a > 0. \end{cases}$$
It is clear that $\operatorname{Fix}(F_j) = \{0\}$ and $\mathcal{H}(F_j a, F_j 0)^2 = (j+1)^2|a|^2$. Thus,
$$d(a, F_j a)^2 = \left|a + \frac{(3j+1)a}{3}\right|^2 = \left|\frac{(3j+4)a}{3}\right|^2 = \frac{9j^2 + 24j + 16}{9}|a|^2.$$
Furthermore,
$$\mathcal{H}(F_j a, F_j 0)^2 = (j+1)^2|a|^2 = |a - 0|^2 + (j^2 + 2j)|a - 0|^2 = |a - 0|^2 + \frac{9(j^2 + 2j)}{9j^2 + 24j + 16}\,d(a, F_j a)^2.$$
Hence, $F_j$ is $k$-demicontractive with $k = \frac{9(j^2 + 2j)}{9j^2 + 24j + 16} \in (0, 1)$. Moreover, the solution set $\Gamma = \{0\}$. We choose the following parameters for Algorithm 11: $\theta_n = \frac{1}{(n+1)^2}$, $\eta_n = \frac{2n}{5n+4}$, $\beta_n = \frac{1}{m+1}$, $\lambda_n = \frac{1}{n+1}$, $\xi = 1$, $\alpha = 0.2$, $g(a) = \frac{a}{4}$, $D(a) = a$. For Algorithm 1, we take $\theta_n = \frac{2n}{5n+4}$, $\delta = 0.03$; for Algorithm 2, we take $\theta_n = \frac{1}{(n+1)^2}$, $\delta = 0.04$, $\lambda = 0.03$; for Algorithm 3, we take $\eta_n = 0.04$; and for Algorithm 10, we take $\theta_n = \frac{2n}{5n+3}$, $\alpha_n = \frac{1}{n+1}$, $\sigma_n = \frac{1}{2\|B^*B\|^2}$. We test the algorithms using the following initial points:
  • Case I: $a_0$ = eye(3,1) and $a_1$ = rand(3,1);
  • Case II: $a_0$ = rand(3,1) and $a_1$ = rand(3,1);
  • Case III: $a_0$ = randn(3,1) and $a_1$ = randn(3,1);
  • Case IV: $a_0$ = ones(3,1) and $a_1$ = rand(3,1);
where “eye”, “randn”, “rand”, and “ones” are MATLAB functions. We used $\|a_{n+1} - a_n\| < 10^{-6}$ as the stopping criterion for all implementations. The numerical results are shown in Table 1 and Figure 1. Furthermore, we ran the algorithms for 100 randomly generated starting points to check their performance using the performance profile metric introduced by Dolan and Moré [34], which is widely accepted as a benchmark for comparing the performance of algorithms. The details of the setup of the performance profile can be found in [34]. In particular, for each algorithm $s \in \mathcal{S} = \{1, 2, \ldots, 5\}$ and case $p \in \mathcal{P} = \{1, \ldots, 100\}$, we define $t_{p,s}$ as the computation value of algorithm $s \in \mathcal{S}$ for solving problem case $p \in \mathcal{P}$, such as the number of iterations, the execution time, or the error value. The performance of each algorithm is scaled with respect to the best performance of any algorithm in $\mathcal{S}$, which yields the performance ratio
$$\eta_{p,s} = \frac{t_{p,s}}{\min\{t_{p,s} : s \in \mathcal{S}\}}.$$
We select a parameter $\eta_r$ such that $\eta_r \ge \eta_{p,s}$ for all $p$ and $s$, and set $\eta_{p,s} = \eta_r$ if and only if solver $s$ is unable to solve problem $p$. The choice of $\eta_r$ does not affect the performance evaluation, as explained in [34]. To determine an overall assessment of each solver's performance, we use the following measurement:
$$P_s(t) = \frac{1}{n_p}\,\operatorname{size}\{p \in \mathcal{P} : \eta_{p,s} \le t\},$$
so that $P_s(t)$ represents the probability that solver $s \in \mathcal{S}$ achieves a performance ratio $\eta_{p,s}$ within a factor $t \in \mathbb{R}$ of the best possible ratio. The performance profile $P_s : \mathbb{R} \to [0, 1]$ of a solver is a non-decreasing function, piecewise continuous from the right at each breakpoint, obtained as the cumulative distribution function of the performance ratio. The value $P_s(1)$ denotes the chance of the solver achieving the best performance among all solvers. The performance profile results (Figure 2) show that Algorithm 11 has the best performance in 100% of the cases considered in terms of the number of iterations, while Algorithm 3 has the worst performance. Moreover, Algorithm 10 performs better than Algorithms 1–3 even in the worst scenarios. Also, Algorithm 11 has the best performance in about 82% of the cases in terms of execution time, followed by Algorithm 10 in about 18% of the cases; in contrast, Algorithm 3 has the worst performance in terms of execution time. It is worth noting that, despite the self-adaptive technique used for selecting the stepsize in Algorithm 3, its performance is relatively worse than that of the other methods.
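For reference, the profile $P_s(t)$ can be computed in a few lines of MATLAB; this is a generic sketch of the Dolan and Moré construction [34], with a hypothetical input `t` (a problems-by-solvers cost matrix, with `Inf` marking failures):
```matlab
% Performance profiles: t(p, s) is the cost (e.g., iteration count) of
% solver s on problem p; P(k, s) approximates P_s(tgrid(k)).
function P = perf_profile(t, tgrid)
    best  = min(t, [], 2);                     % best cost per problem
    ratio = t ./ best;                         % ratios eta_{p,s}
    P = zeros(numel(tgrid), size(t, 2));
    for k = 1:numel(tgrid)
        P(k, :) = mean(ratio <= tgrid(k), 1);  % fraction solved within t
    end
end
```
Plotting the columns of `P` against `tgrid` then produces curves of the kind shown in Figure 2.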
Example 2. 
Our algorithm is applied to an image reconstruction problem that can be modeled as the Least Absolute Shrinkage and Selection Operator (LASSO) problem described in Tibshirani's work [35]. Alternatively, it can be modeled as an underdetermined linear system given by
$$z = m a + \epsilon,$$
where $a \in \mathbb{R}^N$ is the original image, $m$ is the blurring operator in $\mathbb{R}^{M \times N}$ ($M \ll N$), $\epsilon$ is noise, and $z$ is the degraded or blurred data from which the image must be recovered. Typically, this is reformulated as a convex unconstrained minimization problem given by
$$\min_{a \in \mathbb{R}^N} \frac{1}{2}\|m a - z\|_2^2 + \lambda\|a\|_1, \tag{62}$$
where $\lambda > 0$, $\|a\|_2$ is the Euclidean norm of $a$, and $\|a\|_1 = \sum_{i=1}^{N}|a_i|$ is the $\ell_1$-norm of $a$. Various scientific and engineering fields have found this problem to be a valuable tool. Over the years, several iterative techniques have been developed to solve Equation (62), the earliest being the gradient projection approach introduced by Figueiredo et al. [36]. Equivalently, the LASSO problem (62) can be expressed as an SVIP with $C = \{a \in \mathbb{R}^N : \|a\|_1 \le t\}$ and $Q = \{z\}$, $m_1 = i_C$, $m_2 = i_Q$, where $i_C$ and $i_Q$ are the indicator functions on $C$ and $Q$, respectively. We aim to reconstruct the original image $a$ from the information provided by the blurred image $z$. The images are greyscale with width $M$ pixels and height $N$ pixels, each pixel value lying in the range $[0, 255]$; the total number of pixels is $D = M \times N$. The quality of the restored image is evaluated by the signal-to-noise ratio, defined by
$$\mathrm{SNR} = 20 \times \log_{10}\left(\frac{\|a\|_2}{\|a - a^*\|_2}\right), \tag{63}$$
where $a$ and $a^*$ are the original and restored images, respectively. In image restoration, the quality of the restored image is typically measured by its signal-to-noise ratio (SNR), where a higher SNR indicates better quality. To evaluate the effectiveness of our approach, we conducted experiments using three test images: Cameraman (256 × 256), Medical Resonance Imaging (MRI) (128 × 128), and Pout (400 × 318), all of which were obtained from the Image Processing Toolbox in MATLAB. Specifically, we degraded each test image using a Gaussian 7 × 7 blur kernel with a standard deviation of 4. We ran the algorithms with the following control parameters. Algorithm 11: $\theta_n = \frac{1}{n^2}$, $\eta_n = \frac{2n-1}{8n+7}$, $\beta_n = \frac{1}{r+1}$, $\lambda_n = \frac{1}{100n+1}$, $\xi = 1$, $\alpha = 0.4$, $g(a) = \frac{a}{8}$, $D(a) = 2a$. For Algorithm 1, we take $\theta_n = \frac{n}{7n+3}$, $\lambda = 0.05$; for Algorithm 2, we take $\theta_n = \frac{1}{n^2}$, $\delta = 0.06$, $\lambda = 0.09$; for Algorithm 3, we take $\eta_n = 0.05$; and for Algorithm 10, we take $\theta_n = \frac{n}{7n+3}$, $\alpha_n = \frac{1}{100n+1}$, $\sigma_n = \frac{1}{2\|B^*B\|^2}$. We choose the initial values $a_0 = 0 \in \mathbb{R}^{M \times N}$ and $a_1 = 1 \in \mathbb{R}^{M \times N}$. The numerical results are shown in Figures 3–6 and Table 2. It is easy to see that all the algorithms efficiently reconstruct the blurred image. Although the quality of the reconstructed image varies between algorithms, Algorithm 11 reconstructs the images faster than the other algorithms used in the experiments, which again emphasizes the usefulness of the proposed algorithm.
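As an illustration of the experimental setup, the degradation and SNR evaluation of Equation (63) can be sketched in MATLAB using the Image Processing Toolbox (test image and kernel as described above; no extra noise is added in this sketch):
```matlab
% Blur a test image with the Gaussian 7x7 kernel (std 4) and report
% the SNR of the degraded image relative to the original.
a = im2double(imread('cameraman.tif'));   % original 256 x 256 image
K = fspecial('gaussian', [7 7], 4);       % Gaussian 7x7 kernel, std 4
z = imfilter(a, K, 'symmetric');          % blurred data z
snr_db = @(a, ar) 20 * log10(norm(a(:)) / norm(a(:) - ar(:)));
fprintf('SNR of blurred image: %.2f dB\n', snr_db(a, z));
```
The same `snr_db` function, applied to the restored image produced by each algorithm, computes SNR values like those reported in Table 2.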

5. Conclusions

Our paper proposes a novel inertial self-adaptive iterative technique that utilizes viscosity approximation to obtain a common solution for split variational inclusion problems and fixed point problems in real Hilbert spaces. We have selected an optimal inertial extrapolation term to enhance the algorithm’s accuracy. Additionally, we incorporated a self-adaptive technique that allows for stepsize adjustment without relying on prior knowledge of the norm of the bounded linear operator. Our method has been proven to converge strongly, and we have included numerical implementations to demonstrate its efficiency and effectiveness.

Author Contributions

Conceptualization, M.D.N. and L.O.J.; methodology, M.D.N. and L.O.J.; software, I.O.A.; validation, M.A. and L.O.J.; formal analysis, M.D.N. and L.O.J.; investigation, M.D.N.; resources, M.A.; data curation, I.O.A.; writing—original draft preparation, M.D.N.; writing—review and editing, L.O.J.; visualization, L.O.J. and I.O.A.; supervision, M.A. and L.O.J.; project administration, M.D.N.; funding acquisition, M.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Moudafi, A. Split monotone variational inclusions. J. Optim. Theory Appl. 2011, 150, 275–283. [Google Scholar] [CrossRef]
  2. Censor, Y.; Elfving, T. A multiprojection algorithm using Bregman projections in a product space. Numer. Algorithms 1994, 8, 221–239. [Google Scholar] [CrossRef]
  3. Censor, Y.; Bortfeld, T.; Martin, B.; Trofimov, A. A unified approach for inversion problems in intensity-modulated radiation therapy. Phys. Med. Biol. 2006, 51, 2353–2365. [Google Scholar] [CrossRef] [PubMed]
  4. Censor, Y.; Gibali, A.; Reich, S. Algorithms for the split variational inequality problem. Numer. Algorithms 2012, 59, 301–323. [Google Scholar] [CrossRef]
  5. Byrne, C. Iterative oblique projection onto convex subsets and the split feasibility problem. Inverse Probl. 2002, 18, 441–453. [Google Scholar] [CrossRef]
  6. Combettes, P.L. The convex feasibility problem in image recovery. In Advances in Imaging and Electron Physics; Hawkes, P., Ed.; Academic Press: New York, NY, USA, 1996; pp. 155–270. [Google Scholar]
  7. Byrne, C.; Censor, Y.; Gibali, A.; Reich, S. Weak and strong convergence of algorithms for the split common null point problem. J. Nonlinear Convex. Anal. 2012, 13, 759–775. [Google Scholar]
  8. Marino, G.; Xu, H.K. A general iterative method for nonexpansive mapping in Hilbert spaces. J. Math. Anal. Appl. 2006, 318, 43–52. [Google Scholar] [CrossRef]
  9. Moudafi, A. Viscosity approximation methods for fixed-points problems. J. Math. Anal. Appl. 2000, 241, 46–55. [Google Scholar] [CrossRef]
  10. Kazmi, K.R.; Rizvi, S.H. An iterative method for split variational inclusion problem and fixed point problem for a nonexpansive mapping. Optim. Lett. 2014, 8, 1113–1124. [Google Scholar] [CrossRef]
  11. Alvarez, F.; Attouch, H. An inertial proximal method for maximal monotone operators via discretization of a nonlinear oscillator with damping. Set-Valued Anal. 2001, 9, 3–11. [Google Scholar] [CrossRef]
  12. Polyak, B.T. Some methods of speeding up the convergence of iterative methods. Z. Vychisl. Mat. Mat. Fiz. 1964, 4, 1–17. [Google Scholar]
  13. Moudafi, A.; Oliny, M. Convergence of a splitting inertial proximal method for monotone operators. J. Comput. Appl. Math. 2003, 155, 447–454. [Google Scholar] [CrossRef]
  14. Deepho, J.; Kumam, P. The hybrid steepest descent method for split variational inclusion and constrained convex minimization problem. Abstr. Appl. Anal. 2014, 2014, 365203. [Google Scholar] [CrossRef]
  15. Anh, P.K.; Thong, D.V.; Dung, V.T. A strongly convergent Mann-type inertial algorithm for solving split variational inclusion problems. Optim. Eng. 2021, 22, 159–185. [Google Scholar] [CrossRef]
  16. Maingé, P.E. Regularized and inertial algorithms for common fixed points of nonlinear operators. J. Math. Anal. Appl. 2008, 34, 876–887. [Google Scholar] [CrossRef]
  17. Wangkeeree, R.; Rattanaseeha, K. The general iterative methods for split variational inclusion problem and fixed point problem in Hilbert spaces. J. Comput. Anal. Appl. 2018, 25, 19–31. [Google Scholar]
  18. Long, L.V.; Thong, D.V.; Dung, V.T. New algorithms for the split variational inclusion problems and application to split feasibility problems. Optimization 2019, 68, 2339–2367. [Google Scholar] [CrossRef]
  19. Dong, Q.L.; Lu, Y.Y.; Yang, J. The extragradient algorithm with inertial effects for solving the variational inequality. Optimization 2016, 65, 2217–2226. [Google Scholar] [CrossRef]
  20. Alakoya, T.O.; Jolaoso, L.O.; Mewomo, O.T. Modified inertial subgradient extragradient method with self-adaptive stepsize for solving monotone variational inequality and fixed point problems. Optimization 2020, 545–574. [Google Scholar] [CrossRef]
  21. Chuang, C.S. Hybrid inertial proximal algorithm for the split variational inclusion problem in Hilbert spaces with applications. Optimization 2017, 66, 777–792. [Google Scholar] [CrossRef]
  22. Kesornprom, S.; Cholamjiak, P. Proximal type algorithms involving linesearch and inertial technique for split variational inclusion problem in hilbert spaces with applications. Optimization 2019, 68, 2369–2395. [Google Scholar] [CrossRef]
  23. Tang, Y. Convergence analysis of a new iterative algorithm for solving split variational inclusion problem. J. Ind. Manag. Optim. 2020, 16, 235–259. [Google Scholar] [CrossRef]
  24. Tan, B.; Qin, X.; Yao, J.C. Strong convergence of self-adaptive inertial algorithms for solving split variational inclusion problems with applications. J. Sci. Comput. 2021, 87, 20. [Google Scholar] [CrossRef]
  25. Zhou, Z.; Tan, B.; Li, S. Adaptive hybrid steepest descent algorithms involving an inertial extrapolation term for split monotone variational inclusion problems. Math. Methods Appl. Sci. 2022, 45, 8835–8853. [Google Scholar] [CrossRef]
  26. Alakoya, T.O.; Jolaoso, L.O.; Mewomo, O.T. A self adaptive inertial algorithm for solving split variational inclusion and fixed point problems with applications. J. Ind. Manag. Optim. 2022, 18, 239–265. [Google Scholar] [CrossRef]
  27. Aubin, J.P.; Siegel, J. Fixed points and stationary points of dissipative multivalued maps. Proc. Am. Math. Soc. 1989, 78, 391–398. [Google Scholar] [CrossRef]
  28. Panyanak, B. Endpoints of multivalued nonexpansive mappings in geodesic spaces. Fixed Point Theory Appl. 2015, 2015, 147. [Google Scholar] [CrossRef]
  29. Kahn, M.S.; Rao, K.R.; Cho, Y.J. Common stationary points for set-valued mappings. Int. J. Math. Math. Sci. 1993, 16, 733–736. [Google Scholar] [CrossRef]
  30. Jailoka, P.; Suantai, S. The split common fixed point problem for multivalued demicontractive mappings and its applications. RACSAM 2019, 113, 689–706. [Google Scholar] [CrossRef]
  31. Chidume, C.E.; Ezeora, J.N. Krasnoselskii-type algorithm for family of multi-valued strictly pseudo-contractive mappings. Fixed Point Theory Appl. 2014, 2014, 111. [Google Scholar] [CrossRef]
  32. Maingé, P.E. Convergence theorems for inertial KM-type algorithms. J. Comput. Appl. Math. 2008, 219, 223–236. [Google Scholar] [CrossRef]
  33. Maingé, P.E. A hybrid extragradient viscosity method for monotone operators and fixed point problems. SIAM J. Control Optim. 2008, 49, 1499–1515. [Google Scholar] [CrossRef]
  34. Dolan, E.D.; Moré, J.J. Benchmarking optimization software with performance profiles. Math. Program 2002, 91, 201–213. [Google Scholar] [CrossRef]
  35. Tibshirani, R. Regression Shrinkage and Selection via the Lasso. J. R. Stat. Soc. Ser. B 1996, 58, 267–288. [Google Scholar] [CrossRef]
  36. Figueiredo, M.A.T.; Nowak, R.D.; Wright, S.J. Gradient projection for sparse reconstruction: Application to compressed sensing and other inverse problems. IEEE J. Sel. Top. Signal Process. 2007, 1, 586–597. [Google Scholar] [CrossRef]
Figure 1. Example 1, (Top Left): Case I; (Top Right): Case II; (Bottom Left): Case III; (Bottom Right): Case IV.
Figure 2. Performance profile results for Example 1 in terms of number of iterations (left) and time of execution (right).
Figure 3. Image reconstruction using the Cameraman (256 × 256) image.
Figure 4. Image reconstruction using the MRI (128 × 128) image.
Figure 5. Image reconstruction using the Pout (291 × 240) image.
Figure 6. Graphs of SNR against iteration number. Top Left: Cameraman; Top Right: MRI; and Bottom: Pout.
Table 1. Numerical results for Example 1.

Algorithm    | Metric       | Case I | Case II | Case III | Case IV
Algorithm 11 | No. of Iter. | 13     | 15      | 17       | 14
             | CPU time (s) | 0.0013 | 0.0076  | 0.0090   | 0.0018
Algorithm 1  | No. of Iter. | 43     | 50      | 63       | 48
             | CPU time (s) | 0.0282 | 0.0223  | 0.0114   | 0.0156
Algorithm 2  | No. of Iter. | 41     | 48      | 59       | 46
             | CPU time (s) | 0.0270 | 0.0191  | 0.0137   | 0.0166
Algorithm 3  | No. of Iter. | 104    | 140     | 160      | 73
             | CPU time (s) | 0.0305 | 0.0425  | 0.0278   | 0.0194
Algorithm 10 | No. of Iter. | 28     | 32      | 37       | 28
             | CPU time (s) | 0.0167 | 0.0161  | 0.0148   | 0.0034
Table 2. Computational results for Example 2.

Algorithm    | Cameraman Time (s) | Cameraman SNR | MRI Time (s) | MRI SNR | Pout Time (s) | Pout SNR
Algorithm 11 | 14.2169 | 34.3580 | 2.3627 | 26.4215 | 11.3969 | 40.2075
Algorithm 1  | 16.9989 | 34.3517 | 2.7461 | 26.9565 | 12.5370 | 37.7244
Algorithm 2  | 19.6273 | 34.3468 | 2.8171 | 25.8976 | 12.3931 | 40.9870
Algorithm 3  | 18.8111 | 34.4365 | 2.7333 | 26.4675 | 13.5479 | 40.3109
Algorithm 10 | 17.3022 | 31.5974 | 2.7074 | 24.6775 | 12.3144 | 36.2867