
# Self-Adaptive Method and Inertial Modification for Solving the Split Feasibility Problem and Fixed-Point Problem of Quasi-Nonexpansive Mapping

by Yuanheng Wang *, Tiantian Xu, Jen-Chih Yao and Bingnan Jiang

College of Mathematics and Computer Science, Zhejiang Normal University, Jinhua 321004, China

\* Author to whom correspondence should be addressed.
Mathematics 2022, 10(9), 1612; https://doi.org/10.3390/math10091612
Submission received: 25 March 2022 / Revised: 3 May 2022 / Accepted: 5 May 2022 / Published: 9 May 2022
(This article belongs to the Special Issue Fixed Point, Optimization, and Applications II)

## Abstract

The split feasibility problem (SFP) has many practical applications and has therefore attracted the attention of many authors. In this paper, we propose a new method for solving the SFP together with the fixed-point problem of quasi-nonexpansive mappings. We relax the conditions on the operator and incorporate an inertial iteration and an adaptive step size. Numerical examples show that the sequence generated by our new method converges and that its convergence rate greatly improves on that of previous algorithms.
MSC:
47H09; 47H10; 47H04

## 1. Introduction

Since Censor et al. [1] introduced the SFP, it has attracted increasing attention due to its various applications in resolving practical issues.
Throughout this paper, we suppose that $H_1$, $H_2$ are real Hilbert spaces, and C, Q are nonempty closed convex subsets of $H_1$, $H_2$, respectively. Let $A : H_1 → H_2$ be a bounded linear operator with $A ≠ 0$. The SFP can be stated in the following form [2,3,4,5,6,7,8,9]:
Find a point $q ∈ H_1$ such that
$q ∈ C , \quad A q ∈ Q .$
The solution set of (1) is denoted by $SFP ( C , Q )$:
$SFP ( C , Q ) := \{ q ∈ C : A q ∈ Q \} .$
We note that the $CQ$ algorithm of Byrne [2] is a very successful approach to (1), where the sequence $\{ q_n \}$ is generated by the following process:
For any initial estimate $q_1 ∈ H_1$,
$q_{n+1} = P_C ( q_n − τ_n A^* ( I − P_Q ) A q_n ) , \quad ∀ n ≥ 1 .$
Here, $P_C$ and $P_Q$ are the metric projections onto C and Q, and $A^*$ is the adjoint operator of A. The step size $τ_n$ is selected with $τ_n ∈ ( 0 , 2 / ∥ A ∥^2 )$. This selection of $τ_n$ depends on the operator norm, which is not easy to calculate.
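As a concrete illustration of the CQ iteration (3), the following sketch solves a toy SFP in which C and Q are boxes (our own illustrative data, not from the paper), so both metric projections reduce to componentwise clipping:

```python
import numpy as np

# Toy SFP: find q in C = [0, 5]^2 with Aq in Q = [1, 4]^2.
# The matrix A and the boxes are made-up illustrative data.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
P_C = lambda x: np.clip(x, 0.0, 5.0)   # metric projection onto C
P_Q = lambda y: np.clip(y, 1.0, 4.0)   # metric projection onto Q

# Fixed step size in (0, 2/||A||^2), as required by (3).
tau = 1.0 / np.linalg.norm(A, 2) ** 2
q = np.array([10.0, -10.0])            # initial estimate q_1
for _ in range(500):
    residual = A @ q - P_Q(A @ q)      # (I - P_Q)Aq
    q = P_C(q - tau * A.T @ residual)

print(q, A @ q)                        # q in C and Aq (approximately) in Q
```

Note that $∥ A ∥$ must be computed explicitly here; the variable step sizes discussed next remove exactly this requirement.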
The use of Formula (3) to solve (1) can be further optimized. We introduce the following function:
$f ( q ) := \frac{1}{2} ∥ ( I − P_Q ) A q ∥^2 .$
According to the above function, we can get the following equation:
$∇ f ( q ) = A * ( I − P Q ) A q .$
Therefore, (3) is also a particular case of a gradient projection algorithm. To overcome the difficulty of the numerical calculation, many authors have proposed variable step sizes that do not require computing the norm $∥ A ∥$. Building on these works, López et al. [4] put forward a new variable step size sequence $τ_n$, expressed in the following form:
$τ_n := \frac{ρ_n f ( q_n )}{∥ ∇ f ( q_n ) ∥^2} , \quad ∀ n ≥ 1 ,$
where $\{ ρ_n \}$ is a sequence of positive real numbers with $ρ_n ∈ ( 0 , 4 )$. If we select the step size (6), no prior knowledge of the norm $∥ A ∥$ is needed.
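A sketch of the CQ iteration equipped with the self-adaptive step size (6) might look as follows; the random matrix and the box sets are again made-up toy data:

```python
import numpy as np

# CQ iteration with the self-adaptive step size of Lopez et al. [4]:
# tau_n = rho_n * f(q_n) / ||grad f(q_n)||^2, so ||A|| is never computed.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))        # made-up operator
P_C = lambda x: np.clip(x, -1.0, 1.0)  # C = [-1, 1]^3
P_Q = lambda y: np.clip(y, -0.5, 0.5)  # Q = [-0.5, 0.5]^4 (q = 0 is feasible)

q = rng.standard_normal(3) * 5.0
for n in range(1, 5001):
    r = A @ q - P_Q(A @ q)             # (I - P_Q)Aq
    f = 0.5 * np.dot(r, r)             # f(q) = (1/2)||(I - P_Q)Aq||^2
    grad = A.T @ r                     # grad f(q) = A*(I - P_Q)Aq
    g2 = np.dot(grad, grad)
    if g2 == 0.0:                      # q already solves the SFP
        break
    tau = 2.0 * f / g2                 # rho_n = 2 is a valid choice in (0, 4)
    q = P_C(q - tau * grad)

print(q)
```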
In 2019, Qin et al. [5] introduced and studied a fixed point method to solve the SFP (1). Given $q_1 ∈ C$, the iteration is calculated as follows:
$y n = P C ( ( 1 − δ n ) ( q n − τ n A * ( I − P Q ) A q n ) + δ n S q n ) , q n + 1 = α n g ( q n ) + β n q n + γ n y n , n ≥ 1 ,$
where $g : C → C$ is a $κ$-contraction, $S : C → C$ is a nonexpansive mapping, and $Fix ( S )$ denotes the set of fixed points of S. $\{ α_n \}$, $\{ β_n \}$, $\{ γ_n \}$, $\{ δ_n \}$, and $\{ τ_n \}$ are real sequences in $( 0 , 1 )$ satisfying the following:
($C 1$)
$0 < lim inf n → ∞ β n ≤ lim sup n → ∞ β n < 1$;
($C 2$)
$lim n → ∞ | τ n − τ n + 1 | = 0$, $0 < lim inf n → ∞ τ n ≤ lim sup n → ∞ τ n < 2 ∥ A ∥ 2$;
($C 3$)
$lim n → ∞ α n = 0$, $∑ n = 1 ∞ α n = ∞$;
($C 4$)
$0 < lim inf n → ∞ δ n ≤ lim sup n → ∞ δ n < 1$, $lim n → ∞ | δ n − δ n + 1 | = 0$ ;
($C 5$)
$α n + β n + γ n = 1$.
Then, ${ q n }$ converges strongly to $x * ∈ Fix ( S ) ∩ SFP ( C , Q )$, and $x *$ is the unique solution of the following variational inequality:
$〈 q − x * , g ( x * ) − x * 〉 ≤ 0 , ∀ q ∈ Fix ( S ) ∩ SFP ( C , Q ) .$
In 2020, Kraikaew et al. [6] further weakened the conditions and simplified the process of proof. They showed that the sequence ${ q n }$ produced by (7) converges strongly to $q * ∈ Fix ( S ) ∩ SFP ( C , Q )$ when the following conditions are satisfied:
($C 1$)
$lim sup n → ∞ β n < 1$;
($C 2$)
$0 < lim inf n → ∞ τ n ≤ lim sup n → ∞ τ n < 2 ∥ A ∥ 2$;
($C 3$)
$lim n → ∞ α n = 0$, $∑ n = 1 ∞ α n = ∞$;
($C 4$)
$0 < lim inf n → ∞ δ n ≤ lim sup n → ∞ δ n < 1$ ;
($C 5$)
$α n + β n + γ n = 1$.
Based on these previous works, in this paper we further weaken the conditions and add an inertial term, so that the choice of the step size no longer requires the calculation of the operator norm.

## 2. Preliminaries

Throughout this paper, we suppose that H is a real Hilbert space and D is a nonempty closed convex subset of H. For a sequence $\{ q_n \}$ and a point q in H, we use $q_n → q$ to denote strong convergence and $q_n ⇀ q$ to denote weak convergence. $Fix ( T )$ denotes the set of fixed points of $T : H → H$.
The mapping $T : H → H$ is called:
(i)
A nonexpansive mapping if $∥ T x − T y ∥ ≤ ∥ x − y ∥$ for any $x , y ∈ H$;
(ii)
A quasi-nonexpansive mapping if $Fix ( T ) ≠ ∅$ and $∥ T x − y ∥ ≤ ∥ x − y ∥$ for every $x ∈ H$, $y ∈ Fix ( T )$;
(iii)
A firmly nonexpansive mapping if $∥ T x − T y ∥ 2 ≤ ∥ x − y ∥ 2 − ∥ ( I − T ) x − ( I − T ) y ∥ 2$ for any $x , y ∈ H$;
(iv)
A $ι −$Lipschitz continuous mapping if there is $ι > 0$ such that $∥ T x − T y ∥ ≤ ι ∥ x − y ∥$ for any $x , y ∈ H$;
(v)
A contraction mapping if there exists $κ ∈ [ 0 , 1 )$ such that $∥ T ( x ) − T ( y ) ∥ ≤ κ ∥ x − y ∥$, for any $x , y ∈ H$.
Lemma 1
([10,11]). For any $x , y ∈ H$:
(1)
$∥ x + y ∥^2 ≤ ∥ x ∥^2 + 2 ⟨ y , x + y ⟩ , ∀ x , y ∈ H$;
(2)
$∥ t x + ( 1 − t ) y ∥ 2 = t ∥ x ∥ 2 + ( 1 − t ) ∥ y ∥ 2 − t ( 1 − t ) ∥ x − y ∥ 2 , ∀ t ∈ [ 0 , 1 ]$.
Recall that $P D$ is the metric projection operator, that is:
$P_D y := \arg \min_{x ∈ D} ∥ x − y ∥^2 , \quad y ∈ H .$
Lemma 2
([12,13,14]). Given $x ∈ D$ and $y ∈ H$,
(1)
$x = P_D y$ is equivalent to $⟨ x − y , z − x ⟩ ≥ 0 , ∀ z ∈ D$;
(2)
$∥ x − P D y ∥ 2 ≤ ∥ x − y ∥ 2 − ∥ y − P D y ∥ 2$.
From Lemma 2, we can easily prove that $I − P D$ is firmly nonexpansive.
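This remark is easy to test numerically. The sketch below spot-checks inequality (iii) for $T = I − P_D$ at random points, with D the closed unit ball of $R^3$, whose projection is $P_D ( y ) = y / \max ( 1 , ∥ y ∥ )$; a random check is of course not a proof:

```python
import numpy as np

# Numerical spot-check that T = I - P_D is firmly nonexpansive, i.e.
# ||Tx - Ty||^2 <= ||x - y||^2 - ||(I - T)x - (I - T)y||^2,
# where D is the closed unit ball in R^3 and (I - T) = P_D.
rng = np.random.default_rng(1)
P_D = lambda y: y / max(1.0, np.linalg.norm(y))

ok = True
for _ in range(1000):
    x = rng.standard_normal(3) * 3.0
    y = rng.standard_normal(3) * 3.0
    Tx, Ty = x - P_D(x), y - P_D(y)
    lhs = np.linalg.norm(Tx - Ty) ** 2
    rhs = np.linalg.norm(x - y) ** 2 - np.linalg.norm(P_D(x) - P_D(y)) ** 2
    ok = ok and (lhs <= rhs + 1e-9)

print(ok)
```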
Lemma 3
([15]). Let ${ q n }$ be a non-negative number sequence, which satisfies:
$q n + 1 ≤ ( 1 − Γ n ) q n + Γ n Λ n , n ≥ 1 , q n + 1 ≤ q n − Ψ n + Φ n , n ≥ 1 ,$
where ${ Γ n }$ is a sequence in the open interval $( 0 , 1 )$, ${ Ψ n }$ is a non-negative real sequence, ${ Λ n }$, ${ Φ n }$ are two sequences on $R$, satisfying the following:
(1)
$∑ n = 0 ∞ Γ n = ∞$;
(2)
$lim n → ∞ Φ n = 0$;
(3)
$lim k → ∞ Ψ n k = 0$ implies $lim sup k → ∞ Λ n k ≤ 0$, where ${ n k }$ is a subsequence of ${ n }$.
Then, $lim n → ∞ q n = 0$.
Lemma 4
([16]). Let $f ( q ) = 1 2 ∥ ( I − P Q ) A q ∥ 2$. Then $∇ f$ is $∥ A ∥ 2 −$ Lipschitz continuous.
Definition 1
([17]). Let $T : H → H$ be a nonlinear operator with $Fix ( T ) ≠ ∅$, and let I be the identity operator. If the following implication holds for any sequence $\{ q_n \} ⊂ H$:
$q n ⇀ q a n d ( I − T ) q n → 0 ⇒ q ∈ Fix ( T ) ,$
then we say that $I − T$ is demiclosed at zero.
It is easy to see that this implication holds for Lipschitz continuous quasi-nonexpansive mappings (see [18]).

## 3. Main Results

Theorem 1.
Let $S : H 1 → H 1$ be a quasi-nonexpansive mapping. Suppose that $I − S$ is demiclosed at zero, and $g : H 1 → H 1$ is a κ-contraction. In addition, let ${ α n }$, ${ β n }$, ${ γ n }$, ${ δ n }$ be sequences in $[ 0 , 1 ]$, satisfying the following:
$( C 1 )$
$lim sup n → ∞ β n < 1$;
$( C 2 )$
$lim n → ∞ ϵ n α n = 0$;
$( C 3 )$
$lim n → ∞ α n = 0$, $∑ n = 1 ∞ α n = ∞$;
$( C 4 )$
$0 < lim inf n → ∞ δ n ≤ lim sup n → ∞ δ n < 1$;
$( C 5 )$
$α n + β n + γ n = 1$.
For each $n ≥ 1$, we define the following function:
$f ( w_n ) := \frac{1}{2} ∥ ( I − P_Q ) A w_n ∥^2 ,$
so that
$∇ f ( w n ) = A * ( I − P Q ) A w n .$
Let $\{ q_n \}$ be defined as follows: $q_0 , q_1 ∈ H_1$ are chosen arbitrarily, and
$w n = q n + μ n ( q n − q n − 1 ) , y n = P C ( ( 1 − δ n ) ( w n − τ n A * ( I − P Q ) A w n ) + δ n S w n ) , q n + 1 = α n g ( q n ) + β n w n + γ n y n , n ≥ 1 ,$
$μ_n = \begin{cases} \min \left\{ μ , \frac{ϵ_n}{∥ q_n − q_{n−1} ∥} \right\} , & \text{if } q_n ≠ q_{n−1} , \\ μ , & \text{otherwise} , \end{cases}$
where $μ ≥ 0$, $τ_n = \frac{ρ_n f ( w_n )}{∥ ∇ f ( w_n ) ∥^2}$, and $\{ ρ_n \}$ is a sequence of positive real numbers with $ρ_n ∈ ( 0 , 4 )$. If $∇ f ( w_n ) = 0$, then stop; otherwise, let $n := n + 1$ and compute the next iteration. Assuming that $Fix ( S ) ∩ SFP ( C , Q ) ≠ ∅$, then $\{ q_n \}$ converges strongly to $x^* ∈ Fix ( S ) ∩ SFP ( C , Q )$, and $x^*$ is the unique solution of the following variational inequality:
$〈 z ′ − x * , g ( x * ) − x * 〉 ≤ 0 , ∀ z ′ ∈ Fix ( S ) ∩ SFP ( C , Q ) .$
Proof.
From Lemma 2, we know that $x *$ is a solution of the following variational inequality:
$〈 z ′ − x * , g ( x * ) − x * 〉 ≤ 0 , ∀ z ′ ∈ Fix ( S ) ∩ SFP ( C , Q ) ,$
if and only if $x * = P Fix ( S ) ∩ SFP ( C , Q ) g ( x * )$. Since g is contractive and $P Fix ( S ) ∩ SFP ( C , Q )$ is nonexpansive, we know that $P Fix ( S ) ∩ SFP ( C , Q ) g$ is contractive. Hence, such $x *$ exists and is unique.
First, let $p ∈ SFP ( C , Q ) ∩ Fix ( S )$. Because $p ∈ C$, according to Lemma 1 and Lemma 2, we find that:
$∥ y n − p ∥ 2 = ∥ P C ( ( 1 − δ n ) ( w n − τ n A * ( I − P Q ) A w n ) + δ n S w n ) − p ∥ 2 ≤ ∥ ( ( 1 − δ n ) ( w n − τ n A * ( I − P Q ) A w n ) + δ n S w n ) − p ∥ 2 − ∥ ( I − P C ) ( ( 1 − δ n ) ( w n − τ n A * ( I − P Q ) A w n ) + δ n S w n ) ∥ 2 = ∥ δ n ( S w n − p ) + ( 1 − δ n ) ( w n − τ n A * ( I − P Q ) A w n − p ) ∥ 2 − ∥ ( I − P C ) ( ( 1 − δ n ) ( w n − τ n A * ( I − P Q ) A w n ) + δ n S w n ) ∥ 2 = δ n ∥ S w n − p ∥ 2 + ( 1 − δ n ) ∥ w n − τ n ∇ f ( w n ) − p ∥ 2 − δ n ( 1 − δ n ) ∥ S w n − w n + τ n A * ( I − P Q ) A w n ∥ 2 − ∥ ( I − P C ) ( ( 1 − δ n ) ( w n − τ n A * ( I − P Q ) A w n ) + δ n S w n ) ∥ 2 ≤ δ n ∥ w n − p ∥ 2 + ( 1 − δ n ) ∥ w n − τ n ∇ f ( w n ) − p ∥ 2 − δ n ( 1 − δ n ) ∥ S w n − w n + τ n A * ( I − P Q ) A w n ∥ 2 − ∥ ( I − P C ) ( ( 1 − δ n ) ( w n − τ n A * ( I − P Q ) A w n ) + δ n S w n ) ∥ 2 ≤ δ n ∥ w n − p ∥ 2 + ( 1 − δ n ) ( ∥ w n − p ∥ 2 + τ n 2 ∥ ∇ f ( w n ) ∥ 2 − 2 τ n 〈 ∇ f ( w n ) , w n − p 〉 ) − δ n ( 1 − δ n ) ∥ S w n − w n + τ n A * ( I − P Q ) A w n ∥ 2 − ∥ ( I − P C ) ( ( 1 − δ n ) ( w n − τ n A * ( I − P Q ) A w n ) + δ n S w n ) ∥ 2 ,$
and
$〈 ∇ f ( w n ) , w n − p 〉 = 〈 ( I − P Q ) A w n − ( I − P Q ) A p , A w n − A p 〉 ≥ ∥ ( I − P Q ) A w n ∥ 2 = 2 f ( w n ) .$
Therefore, by combining (10) and (11), we derive the following:
$∥ y n − p ∥ 2 ≤ ∥ w n − p ∥ 2 − 4 ( 1 − δ n ) τ n f ( w n ) + τ n 2 ( 1 − δ n ) ∥ ∇ f ( w n ) ∥ 2 − δ n ( 1 − δ n ) ∥ S w n − w n + τ n A * ( I − P Q ) A w n ∥ 2 − ∥ ( I − P C ) ( ( 1 − δ n ) ( w n − τ n A * ( I − P Q ) A w n ) + δ n S w n ) ∥ 2 = ∥ w n − p ∥ 2 − ρ n ( 4 − ρ n ) ( 1 − δ n ) f 2 ( w n ) ∥ ∇ f ( w n ) ∥ 2 − δ n ( 1 − δ n ) ∥ S w n − w n + τ n A * ( I − P Q ) A w n ∥ 2 − ∥ ( I − P C ) ( ( 1 − δ n ) ( w n − τ n A * ( I − P Q ) A w n ) + δ n S w n ) ∥ 2 .$
Note that $ρ n ∈ ( 0 , 4 )$, ${ δ n }$ is a sequence in $( 0 , 1 )$. We thus derive the following equation:
$∥ y n − p ∥ ≤ ∥ w n − p ∥ .$
Putting $z n = β n 1 − α n w n + γ n 1 − α n y n$, by Lemma 1, we can derive that:
$∥ z n − p ∥ 2 = β n 1 − α n w n + γ n 1 − α n y n − p 2 = β n 1 − α n ( w n − p ) + γ n 1 − α n ( y n − p ) 2 = β n 1 − α n ∥ w n − p ∥ 2 + γ n 1 − α n ∥ y n − p ∥ 2 − β n 1 − α n γ n 1 − α n ∥ w n − y n ∥ 2 .$
From the conditions imposed on ${ α n }$, ${ β n }$, ${ γ n }$ and (12), we have the following:
$∥ z n − p ∥ 2 ≤ β n 1 − α n ∥ w n − p ∥ 2 + γ n 1 − α n ∥ y n − p ∥ 2 ≤ β n 1 − α n ∥ w n − p ∥ 2 + γ n 1 − α n ∥ w n − p ∥ 2 − ( 1 − δ n ) γ n 1 − α n ρ n ( 4 − ρ n ) f 2 ( w n ) ∥ ∇ f ( w n ) ∥ 2 − γ n 1 − α n δ n ( 1 − δ n ) ∥ S w n − w n + τ n A * ( I − P Q ) A w n ∥ 2 − γ n 1 − α n ∥ ( I − P C ) ( ( 1 − δ n ) ( w n − τ n A * ( I − P Q ) A w n ) + δ n S w n ) ∥ 2 = ∥ w n − p ∥ 2 − ( 1 − δ n ) γ n 1 − α n ρ n ( 4 − ρ n ) f 2 ( w n ) ∥ ∇ f ( w n ) ∥ 2 − γ n 1 − α n δ n ( 1 − δ n ) ∥ S w n − w n + τ n A * ( I − P Q ) A w n ∥ 2 − γ n 1 − α n ∥ ( I − P C ) ( ( 1 − δ n ) ( w n − τ n A * ( I − P Q ) A w n ) + δ n S w n ) ∥ 2 .$
From the conditions imposed on ${ α n }$, ${ δ n }$ and ${ γ n }$, we have the following equation:
$∥ z n − p ∥ ≤ ∥ w n − p ∥ .$
Since $z n = β n 1 − α n w n + γ n 1 − α n y n$, we can get the equations below:
$q n + 1 = α n g ( q n ) + β n w n + γ n y n = α n g ( q n ) + ( 1 − α n ) z n .$
Since g is a $κ$-contraction and by using (15), we can get the following:
$∥ q n + 1 − p ∥ = ∥ α n g ( q n ) + ( 1 − α n ) z n − p ∥ = ∥ α n ( g ( q n ) − p ) + ( 1 − α n ) ( z n − p ) ∥ ≤ α n ∥ g ( q n ) − p ∥ + ( 1 − α n ) ∥ z n − p ∥ ≤ α n ∥ g ( q n ) − p ∥ + ( 1 − α n ) ∥ w n − p ∥ ≤ α n ∥ g ( q n ) − g ( p ) ∥ + α n ∥ g ( p ) − p ∥ + ( 1 − α n ) ∥ q n − p + μ n ( q n − q n − 1 ) ∥ ≤ α n κ ∥ q n − p ∥ + α n ∥ g ( p ) − p ∥ + ( 1 − α n ) ∥ q n − p ∥ + ( 1 − α n ) ∥ μ n ( q n − q n − 1 ) ∥ ≤ α n κ ∥ q n − p ∥ + α n ∥ g ( p ) − p ∥ + ( 1 − α n ) ∥ q n − p ∥ + μ n ∥ q n − q n − 1 ∥ ≤ α n κ ∥ q n − p ∥ + α n ∥ g ( p ) − p ∥ + ( 1 − α n ) ∥ q n − p ∥ + ϵ n = ( 1 − α n ( 1 − κ ) ) ∥ q n − p ∥ + α n ( 1 − κ ) ∥ g ( p ) − p ∥ 1 − κ + ϵ n α n ( 1 − κ ) .$
Since $\lim_{n → ∞} ϵ_n / α_n = 0$, the sequence $\{ ϵ_n / α_n \}$ is bounded, so $ϵ_n / α_n < M$ for a suitable positive constant M. Hence, we have the following:
$∥ q n + 1 − p ∥ ≤ ( 1 − α n ( 1 − κ ) ) ∥ q n − p ∥ + α n ( 1 − κ ) ∥ g ( p ) − p ∥ + M 1 − κ ≤ max ∥ q n − p ∥ , ∥ g ( p ) − p ∥ + M 1 − κ$
We can thus deduce that:
$∥ q n + 1 − p ∥ ≤ max ∥ q 1 − p ∥ , ∥ g ( p ) − p ∥ + M 1 − κ .$
Therefore, the sequence ${ ∥ q n − p ∥ }$ is bounded.
From Lemma 1, we can get the following:
$∥ w n − p ∥ 2 = ∥ q n + μ n ( q n − q n − 1 ) − p ∥ 2 ≤ ∥ q n − p ∥ 2 + 2 μ n 〈 q n − q n − 1 , w n − p 〉 ≤ ∥ q n − p ∥ 2 + 2 μ n ∥ q n − q n − 1 ∥ ∥ w n − p ∥ ≤ ∥ q n − p ∥ 2 + 2 ϵ n ∥ w n − p ∥ .$
We derive that:
$∥ w n − p ∥ 2 ≤ ∥ q n − p ∥ 2 + 2 ϵ n ∥ w n − p ∥ .$
Since p was chosen arbitrarily, we may take $p = x^*$ above. As g is a $κ$-contraction, we have the following:
$∥ q n + 1 − x * ∥ 2 = ∥ α n g ( q n ) + ( 1 − α n ) z n − x * ∥ 2 = ∥ α n ( g ( q n ) − x * ) + ( 1 − α n ) ( z n − x * ) ∥ 2 ≤ α n 2 ∥ g ( q n ) − x * ∥ 2 + ( 1 − α n ) 2 ∥ z n − x * ∥ 2 + 2 α n 〈 g ( q n ) − x * , z n − x * 〉 − 2 α n 2 〈 g ( q n ) − x * , z n − x * 〉 ≤ α n 2 ∥ g ( q n ) − x * ∥ 2 + ( 1 − α n ) 2 ∥ z n − x * ∥ 2 + 2 α n 〈 g ( q n ) − x * , z n − x * 〉 + 2 α n 2 ∥ g ( q n ) − x * ∥ ∥ z n − x * ∥ = α n 2 ∥ g ( q n ) − x * ∥ 2 + ( 1 − α n ) 2 ∥ z n − x * ∥ 2 + 2 α n 〈 g ( q n ) − g ( x * ) , z n − x * 〉 + 2 α n 〈 g ( x * ) − x * , z n − x * 〉 + 2 α n 2 ∥ g ( q n ) − x * ∥ ∥ z n − x * ∥ ≤ α n 2 ∥ g ( q n ) − x * ∥ 2 + ( 1 − α n ) 2 ∥ z n − x * ∥ 2 + 2 α n κ ∥ q n − x * ∥ ∥ z n − x * ∥ + 2 α n 〈 g ( x * ) − x * , z n − x * 〉 + 2 α n 2 ∥ g ( q n ) − x * ∥ ∥ z n − x * ∥ ≤ α n 2 ∥ g ( q n ) − x * ∥ 2 + 2 α n 2 ∥ g ( q n ) − x * ∥ ∥ z n − x * ∥ + ( 1 − α n ) 2 ∥ z n − x * ∥ 2 + α n κ ( ∥ q n − x * ∥ 2 + ∥ z n − x * ∥ 2 ) + 2 α n 〈 g ( x * ) − x * , z n − x * 〉 .$
From (15) and (17), we can derive that:
$∥ z n − x * ∥ 2 ≤ ∥ q n − x * ∥ 2 + 2 ϵ n ∥ w n − x * ∥ .$
It thus follows from (18) and (19) that:
$∥ q n + 1 − x * ∥ 2 ≤ α n 2 ∥ g ( q n ) − x * ∥ 2 + 2 α n 2 ∥ g ( q n ) − x * ∥ ∥ z n − x * ∥ + α n κ ( ∥ q n − x * ∥ 2 + ∥ q n − x * ∥ 2 + 2 ϵ n ∥ w n − x * ∥ ) + 2 α n 〈 g ( x * ) − x * , z n − x * 〉 + ( 1 − α n ) 2 ( ∥ q n − x * ∥ 2 + 2 ϵ n ∥ w n − x * ∥ ) = α n 2 ∥ g ( q n ) − x * ∥ 2 + 2 α n 2 ∥ g ( q n ) − x * ∥ ∥ z n − x * ∥ + ( α n 2 + ( 1 − 2 α n ( 1 − κ ) ) ) ∥ q n − x * ∥ 2 + ( 2 ϵ n ( 1 − α n ) 2 + 2 α n κ ϵ n ) ∥ w n − x * ∥ + 2 α n 〈 g ( x * ) − x * , z n − x * 〉 ≤ α n 2 ∥ g ( q n ) − x * ∥ 2 + 2 α n 2 ∥ g ( q n ) − x * ∥ ∥ z n − x * ∥ + α n 2 ∥ q n − x * ∥ 2 + ( 1 − 2 α n ( 1 − κ ) ) ∥ q n − x * ∥ 2 + 4 ϵ n ∥ w n − x * ∥ + 2 α n 〈 g ( x * ) − x * , z n − x * 〉 = ( 1 − 2 α n ( 1 − κ ) ) ∥ q n − x * ∥ 2 + α n ( α n ∥ g ( q n ) − x * ∥ 2 + 2 α n ∥ g ( q n ) − x * ∥ ∥ z n − x * ∥ + α n ∥ q n − x * ∥ 2 + 4 ϵ n α n ∥ w n − x * ∥ + 2 〈 g ( x * ) − x * , z n − x * 〉 ) = ( 1 − 2 α n ( 1 − κ ) ) ∥ q n − x * ∥ 2 + 2 α n ( 1 − κ ) 1 2 ( 1 − κ ) ( α n ∥ g ( q n ) − x * ∥ 2 + 2 α n ∥ g ( q n ) − x * ∥ ∥ z n − x * ∥ + α n ∥ q n − x * ∥ 2 + 4 ϵ n α n ∥ w n − x * ∥ + 2 〈 g ( x * ) − x * , z n − x * 〉 ) .$
On the other hand, by Lemma 1, we can derive that:
$∥ q_{n+1} − x^* ∥^2 = ∥ α_n ( g ( q_n ) − z_n ) + z_n − x^* ∥^2 ≤ ∥ z_n − x^* ∥^2 + 2 α_n ⟨ g ( q_n ) − z_n , q_{n+1} − x^* ⟩ .$
From (14), (17), (21), we find the following:
$∥ q n + 1 − x * ∥ 2 ≤ ∥ w n − x * ∥ 2 − ( 1 − δ n ) γ n 1 − α n ρ n ( 4 − ρ n ) f 2 ( w n ) ∥ ∇ f ( w n ) ∥ 2 − γ n 1 − α n δ n ( 1 − δ n ) ∥ S w n − w n + τ n A * ( I − P Q ) A w n ∥ 2 − γ n 1 − α n ∥ ( I − P C ) ( ( 1 − δ n ) ( w n − τ n A * ( I − P Q ) A w n ) + δ n S w n ) ∥ 2 + 2 α n 〈 g ( q n ) − z n , q n + 1 − x * 〉 ≤ ∥ q n − x * ∥ 2 + 2 ϵ n ∥ w n − x * ∥ − ( 1 − δ n ) γ n 1 − α n ρ n ( 4 − ρ n ) f 2 ( w n ) ∥ ∇ f ( w n ) ∥ 2 − γ n 1 − α n δ n ( 1 − δ n ) ∥ S w n − w n + τ n A * ( I − P Q ) A w n ∥ 2 − γ n 1 − α n ∥ ( I − P C ) ( ( 1 − δ n ) ( w n − τ n A * ( I − P Q ) A w n ) + δ n S w n ) ∥ 2 + 2 α n 〈 g ( x n ) − z n , q n + 1 − x * 〉 .$
Thus,
$∥ q n + 1 − x * ∥ 2 ≤ ∥ q n − x * ∥ 2 + 2 ϵ n ∥ w n − x * ∥ − ( 1 − δ n ) γ n 1 − α n ρ n ( 4 − ρ n ) f 2 ( w n ) ∥ ∇ f ( w n ) ∥ 2 − γ n 1 − α n δ n ( 1 − δ n ) ∥ S w n − w n + τ n A * ( I − P Q ) A w n ∥ 2 − γ n 1 − α n ∥ ( I − P C ) ( ( 1 − δ n ) ( w n − τ n A * ( I − P Q ) A w n ) + δ n S w n ) ∥ 2 + 2 α n 〈 g ( q n ) − z n , q n + 1 − x * 〉 .$
Set the following:
$Γ n = 2 α n ( 1 − κ ) , Λ n = 1 2 ( 1 − κ ) ( α n ∥ g ( q n ) − x * ∥ 2 + 2 α n ∥ g ( q n ) − x * ∥ ∥ z n − x * ∥ + α n ∥ q n − x * ∥ 2 + 2 ϵ n α n ∥ w n − x * ∥ + 2 〈 g ( x * ) − x * , z n − x * 〉 ) , Ψ n = ( 1 − δ n ) γ n 1 − α n ρ n ( 4 − ρ n ) f 2 ( w n ) ∥ ∇ f ( w n ) ∥ 2 + γ n 1 − α n δ n ( 1 − δ n ) ∥ S w n − w n + τ n A * ( I − P Q ) A w n ∥ 2 + γ n 1 − α n ∥ ( I − P C ) ( ( 1 − δ n ) ( w n − τ n A * ( I − P Q ) A w n ) + δ n S w n ) ∥ 2 , Φ n = 2 α n 〈 g ( q n ) − z n , q n + 1 − x * 〉 .$
Then, (20) and (22) can be rewritten as follows:
$∥ q n + 1 − x * ∥ 2 ≤ ( 1 − Γ n ) ∥ q n − x * ∥ 2 + Γ n Λ n , ∥ q n + 1 − x * ∥ 2 ≤ ∥ q n − x * ∥ 2 − Ψ n + Φ n .$
It is easy to see that $lim n → ∞ Γ n = 0$, $∑ n = 0 ∞ Γ n = ∞$, and $lim n → ∞ Φ n = 0$. Therefore, by Lemma 3, we prove that $lim n → ∞ ∥ q n − x * ∥ = 0$ if we show that $lim sup k → ∞ Λ n k ≤ 0$ whenever $lim k → ∞ Ψ n k = 0$ for any subsequence ${ n k } ⊂ { n }$.
Suppose that
$lim k → ∞ Ψ n k = 0 .$
By the conditions of ${ α n }$, ${ β n }$, ${ δ n }$, and ${ γ n }$, we have the following equations:
$lim k → ∞ ρ n k ( 4 − ρ n k ) f 2 ( w n k ) ∥ ∇ f ( w n k ) ∥ 2 = 0 ,$
$lim k → ∞ ∥ S w n k − w n k + τ n k A * ( I − P Q ) A w n k ∥ 2 = 0 ,$
$lim k → ∞ ∥ ( I − P C ) ( ( 1 − δ n k ) ( w n k − τ n k ∇ f ( w n k ) ) + δ n k S w n k ) ∥ = 0 .$
Equation (24) implies that:
$f 2 ( w n k ) ∥ ∇ f ( w n k ) ∥ 2 → 0 .$
From Lemma 4, since ${ ∥ ∇ f ( w n k ) ∥ }$ is bounded, we derive that $f ( w n k ) → 0$ as $k → ∞$, so $lim k → ∞ ∥ ( I − P Q ) A w n k ∥ = 0$. By using (27) and the conditions on ${ ρ n }$, we get the following:
$τ n k ∥ ∇ f ( w n k ) ∥ = ρ n k f ( w n k ) ∥ ∇ f ( w n k ) ∥ → 0 .$
Moreover, according to (25), we can get the equation below:
$∥ S w n k − w n k ∥ → 0 .$
From (26), by expanding the formula, since $y n k = P C ( ( 1 − δ n k ) ( w n k − τ n k ∇ f ( w n k ) ) + δ n k S w n k )$, we can get:
$∥ ( 1 − δ n k ) ( w n k − τ n k ∇ f ( w n k ) ) + δ n k S w n k − y n k ∥ → 0 .$
By expanding (30), we can get the following equation:
$∥ ( 1 − δ n k ) w n k − ( 1 − δ n k ) τ n k ∇ f ( w n k ) + δ n k S w n k − y n k ∥ → 0 .$
With (31) and (28), we can derive the equation below:
$∥ ( 1 − δ n k ) w n k + δ n k S w n k − y n k ∥ → 0 ,$
i.e.,
$∥ w n k − y n k + δ n k ( S w n k − w n k ) ∥ → 0 .$
Hence, we arrive at the following:
$∥ w n k − y n k ∥ = ∥ w n k − y n k + δ n k ( S w n k − w n k ) − δ n k ( S w n k − w n k ) ∥ ≤ ∥ w n k − y n k + δ n k ( S w n k − w n k ) ∥ + ∥ δ n k ( S w n k − w n k ) ∥ .$
Then, from (29) and (33), we can derive that:
$∥ w n k − y n k ∥ → 0 .$
From the definition of $z n$, we can see the following:
$∥ z n k − w n k ∥ = β n k 1 − α n k w n k + γ n k 1 − α n k y n k − w n k = − γ n k 1 − α n k w n k + γ n k 1 − α n k y n k = γ n k 1 − α n k ∥ y n k − w n k ∥ .$
By using (34), we can get the following:
$∥ z n k − w n k ∥ → 0 .$
Combining (29) and the fact that $I − S$ is demiclosed at zero, we know $ω w ( w n k ) ⊂ Fix ( S )$. We select a subsequence ${ w n k j }$ of ${ w n k }$ to satisfy the following equation:
$lim sup k → ∞ 〈 g ( x * ) − x * , w n k − x * 〉 = lim j → ∞ 〈 g ( x * ) − x * , w n k j − x * 〉 .$
Without loss of generality, we can assume that $w n k j ⇀ z ′$. According to $f ( w n k ) → 0$, we can derive that $0 ≤ f ( z ′ ) ≤ lim inf j → ∞ f ( w n k j ) = 0$, so $f ( z ′ ) = 0$, $A z ′ ∈ Q$. This means that $z ′ ∈ SFP ( C , Q )$ by combining with (34). Therefore, $z ′ ∈ Fix ( S ) ∩ SFP ( C , Q )$. By using (35), we have the following:
$lim sup k → ∞ 〈 g ( x * ) − x * , z n k − x * 〉 = lim sup k → ∞ 〈 g ( x * ) − x * , w n k − x * 〉 = lim j → ∞ 〈 g ( x * ) − x * , w n k j − x * 〉 = 〈 g ( x * ) − x * , z ′ − x * 〉 ≤ 0 .$
This means that:
$\limsup_{k → ∞} Λ_{n_k} ≤ 0 .$
The proof is finished. □

## 4. Numerical Experiments

Now, we give some numerical experiments. The programs were written in Matlab 9.0 and executed on a desktop PC with an Intel(R) Core(TM) i5-1035G1 CPU @ 1.00 GHz and 16.0 GB of RAM.
Example 1.
Solving the system of linear equations $A x = b$. We assume that $H 1 = H 2 = R 5$. In the following, we take:
$S = \begin{pmatrix} \frac{1}{3} & \frac{1}{3} & 0 & 0 & 0 \\ 0 & \frac{1}{3} & \frac{1}{3} & 0 & 0 \\ 0 & 0 & \frac{1}{3} & \frac{1}{3} & 0 \\ 0 & 0 & 0 & \frac{1}{3} & \frac{1}{3} \\ 0 & 0 & 0 & 0 & 1 \end{pmatrix} ,$
and $g = 0$. Consider $A x = b$, where
$A = \begin{pmatrix} 1 & 1 & 2 & 2 & 1 \\ 0 & 2 & 1 & 5 & −1 \\ 1 & 1 & 0 & 4 & −1 \\ 2 & 0 & 3 & 1 & 5 \\ 2 & 2 & 3 & 6 & 1 \end{pmatrix} , \quad b = \begin{pmatrix} \frac{43}{16} \\ 2 \\ \frac{19}{16} \\ \frac{51}{8} \\ \frac{41}{8} \end{pmatrix} .$
We give the parameters and initial values as follows: For (7) and (9), we choose $α n = 1 10 n$, $β n = 0.5$, $γ n = 0.5 − 1 10 n$, $δ n = 0.5$, $q 1 = ( 1 , 1 , 1 , 1 , 1 ) T$; for (7), we choose $τ n = 1 ∥ A ∥ 2$; for (9), we choose $ϵ n = 1 n 2$, $μ = 1$, $ρ n = 3 + 1 n + 1$, $q 0 = ( 1 , 1 , 1 , 1 , 1 ) T$. Denote $x *$ by the solution of $A x = b$. Then we have $x * = ( 1 16 , 1 8 , 1 4 , 1 2 , 1 ) T$. We can see that $x * ∈ Fix ( S )$. We can see the numerical results of the main algorithms in Table 1 and Figure 1.
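A sketch of algorithm (9) for this example might look as follows. We read "solving $A x = b$" as the SFP with $C = R^5$ and $Q = \{ b \}$ (the example does not state C and Q explicitly, so this is our assumption), giving $P_C = I$ and $P_Q ≡ b$:

```python
import numpy as np

# Sketch of algorithm (9) on Example 1, assuming C = R^5 and Q = {b}
# (so P_C = I, P_Q(y) = b). g = 0; S, A, b and all parameters follow
# the example.
S = np.array([[1/3, 1/3, 0,   0,   0],
              [0,   1/3, 1/3, 0,   0],
              [0,   0,   1/3, 1/3, 0],
              [0,   0,   0,   1/3, 1/3],
              [0,   0,   0,   0,   1.0]])
A = np.array([[1, 1, 2, 2,  1],
              [0, 2, 1, 5, -1],
              [1, 1, 0, 4, -1],
              [2, 0, 3, 1,  5],
              [2, 2, 3, 6,  1.0]])
b = np.array([43/16, 2.0, 19/16, 51/8, 41/8])
x_star = np.array([1/16, 1/8, 1/4, 1/2, 1.0])  # exact solution of Ax = b

mu = 1.0
q_prev = np.ones(5)   # q_0
q = np.ones(5)        # q_1
for n in range(1, 10001):
    eps = 1.0 / n ** 2
    d = np.linalg.norm(q - q_prev)
    mu_n = min(mu, eps / d) if d > 0 else mu        # inertia parameter
    w = q + mu_n * (q - q_prev)
    r = A @ w - b                                    # (I - P_Q)Aw
    grad = A.T @ r                                   # grad f(w)
    g2 = np.dot(grad, grad)
    if g2 == 0.0:                                    # w already solves Ax = b
        break
    tau = (3 + 1/(n + 1)) * 0.5 * np.dot(r, r) / g2  # rho_n = 3 + 1/(n+1)
    delta = 0.5
    y = (1 - delta) * (w - tau * grad) + delta * (S @ w)  # P_C = I
    beta, gamma = 0.5, 0.5 - 1 / (10 * n)
    q_prev, q = q, beta * w + gamma * y              # alpha_n * g(q_n) = 0

err = np.linalg.norm(q - x_star)
print(err)
```

With these parameters, the final error should be of the same order as the values reported in Table 1.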
From Table 1, we can see that as the number of iterations increases, $q_n$ gets closer to the exact solution $x^*$ and the errors approach zero. Hence, we can conclude that our algorithm is reliable. From Figure 1, we can see that our method needs fewer iterations than (7), so it has more advantages.
Example 2.
Seeking the solution to the following problem:
$min 1 2 ∥ A x − b ∥ 2 2 : x ∈ R s , ∥ x ∥ 1 ≤ τ ,$
where $A : R^s → R^m$ ($m < s$) is a bounded linear operator, $b ∈ R^m$, and $τ > 0$. A is a sparse matrix generated by a standard normal distribution. A real sparse signal $x^*$ is generated from the uniform distribution on the interval (−2, 2): p randomly chosen positions of $x^*$ are nonzero, and the remaining entries are zero. We then obtain the sample data $b = A x^*$.
The key is to seek the sparse solution of the linear system so that we can use method (9) to solve the problem.
We define $C = \{ x : ∥ x ∥_1 ≤ τ \}$ and $Q = \{ b \}$. Because the projection onto C has no closed-form solution, we consider the subgradient projection instead. The convex function $c ( x )$ and the level set $C_n$ are defined by the following:
$c ( x ) = ∥ x ∥_1 − τ , \quad C_n = \{ x : c ( q_n ) + ⟨ ς_n , x − q_n ⟩ ≤ 0 \} ,$
where $ς_n ∈ ∂ c ( q_n )$. We can then calculate the orthogonal projection onto $C_n$ according to the following formula:
$P_{C_n} ( x ) = \begin{cases} x , & \text{if } c ( q_n ) + ⟨ ς_n , x − q_n ⟩ ≤ 0 , \\ x − \frac{c ( q_n ) + ⟨ ς_n , x − q_n ⟩}{∥ ς_n ∥^2} ς_n , & \text{otherwise} . \end{cases}$
Note that, componentwise, the subdifferential $∂ c$ at $q_n$ is the following:
$( ∂ c ( q_n ) )_i = \begin{cases} \{ 1 \} , & \text{if } ( q_n )_i > 0 , \\ [ −1 , 1 ] , & \text{if } ( q_n )_i = 0 , \\ \{ −1 \} , & \text{if } ( q_n )_i < 0 . \end{cases}$
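This relaxed projection can be sketched directly; the vectors used in the demonstration below are made-up test data:

```python
import numpy as np

# Subgradient projection onto C_n = {x : c(q_n) + <s_n, x - q_n> <= 0},
# where c(x) = ||x||_1 - tau and s_n is a subgradient of c at q_n.
def subgrad_l1(q):
    # one element of the subdifferential of ||.||_1 (value 0 chosen at zeros)
    return np.sign(q)

def proj_C_n(x, q_n, tau):
    c_qn = np.linalg.norm(q_n, 1) - tau
    s = subgrad_l1(q_n)
    val = c_qn + np.dot(s, x - q_n)
    if val <= 0:
        return x               # x already lies in the half-space C_n
    return x - (val / np.dot(s, s)) * s

q_n = np.array([2.0, -1.0, 0.0])   # current iterate (made-up)
x = np.array([3.0, 0.0, 0.0])      # point to project (made-up)
p = proj_C_n(x, q_n, tau=1.0)
print(p)                           # lands on the boundary hyperplane of C_n
```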
Let $S = I$, $g = 0.4 I$. Take $1 2 ∥ A q n − b ∥ 2 2 ≤ 10 − 3$ as the stopping criterion. We give the parameters and initial values as follows: For (7) and (9), we choose $α n = 1 10 n$, $β n = 0.5$, $γ n = 0.5 − 1 10 n$, $δ n = 0.2$, $q 1 = ( 1 , 1 , ⋯ , 1 ) T$; for (7), we choose $τ n = 1 ∥ A ∥ 2$; for (9), we choose $ϵ n = 1 n 2$, $μ = 1$, $ρ n = 2$, $q 0 = ( 1 , 1 , ⋯ , 1 ) T$. We can see the numerical results of the main algorithms in Table 2. Figure 2 shows that when $( m , s , p ) = ( 240 , 1024 , 30 )$, we can obtain the relationship between the target function and the iterations.
From Table 2 and Figure 2, we can see that our iterative method has advantages in both running time and the number of iterations.
Example 3.
Let $H 1 = H 2 = L 2 [ 0 , 1 ]$, with the inner product given by the following:
$〈 f , g 〉 = ∫ 0 1 f ( t ) g ( t ) d t .$
Let $C = \{ x ∈ L^2 [ 0 , 1 ] : ∥ x ∥ ≤ 1 \}$, $Q = \{ x ∈ L^2 [ 0 , 1 ] : ⟨ x , t^2 ⟩ = 0 \}$, and $( A x ) ( t ) = \frac{x ( t )}{2}$. Let $S = I$, $g = 0.5 I$. Take $∥ q_n − P_C q_n ∥^2 + ∥ A q_n − P_Q A q_n ∥^2 ≤ 10^{−6}$ as the stopping criterion.
We then give the parameters and initial values as follows: For (7) and (9), we choose $α_n = 0.5 n^{−0.7}$, $β_n = 0.5$, $γ_n = 0.5 − 0.5 n^{−0.7}$, $δ_n = 0.5$; for (7), we choose $τ_n = \frac{1}{2 ∥ A ∥^2}$; for (9), we choose $ϵ_n = 0.25 n^{−1.4}$, $μ = 0.5$, $ρ_n = 1$, $q_0 = q_1$. The numerical results for each choice of $q_1$ are shown in Table 3. Figure 3 shows the error plot for $q_1 = 4 t^2 + t + 3$.
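The $L^2 [ 0 , 1 ]$ setting can be approximated on a grid. The sketch below discretizes Example 3 with a trapezoidal inner product (our own discretization, so the iteration count will differ slightly from Table 3):

```python
import numpy as np

# Discretized sketch of Example 3: L^2[0,1] approximated on a uniform grid
# with a trapezoidal inner product. C is the closed unit ball,
# Q = {x : <x, t^2> = 0}, (Ax)(t) = x(t)/2 (so A* = A), S = I, g = 0.5I.
t = np.linspace(0.0, 1.0, 2001)
wts = np.full_like(t, t[1] - t[0])
wts[0] *= 0.5
wts[-1] *= 0.5
ip = lambda u1, u2: np.sum(wts * u1 * u2)      # trapezoidal <u1, u2>
nrm = lambda u1: np.sqrt(ip(u1, u1))

u = t ** 2
P_C = lambda x: x / max(1.0, nrm(x))           # projection onto unit ball
P_Q = lambda y: y - (ip(y, u) / ip(u, u)) * u  # projection onto hyperplane

mu, delta = 0.5, 0.5
q_prev = 4 * t ** 2 + t + 3                    # q_0 = q_1
q = q_prev.copy()
for n in range(1, 5001):
    Aq = 0.5 * q
    crit = nrm(q - P_C(q)) ** 2 + nrm(Aq - P_Q(Aq)) ** 2
    if crit <= 1e-6:                           # stopping criterion
        break
    alpha = 0.5 * n ** -0.7
    beta, gamma = 0.5, 0.5 - 0.5 * n ** -0.7
    eps = 0.25 * n ** -1.4
    d = nrm(q - q_prev)
    mu_n = min(mu, eps / d) if d > 0 else mu
    w = q + mu_n * (q - q_prev)
    r = 0.5 * w - P_Q(0.5 * w)                 # (I - P_Q)Aw
    f = 0.5 * nrm(r) ** 2
    grad = 0.5 * r                             # A*(I - P_Q)Aw
    g2 = nrm(grad) ** 2
    tau = f / g2 if g2 > 0 else 0.0            # rho_n = 1
    y = P_C((1 - delta) * (w - tau * grad) + delta * w)  # S = I
    q_prev, q = q, alpha * 0.5 * q + beta * w + gamma * y

print(n, crit)
```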

## 5. Conclusions

In this paper, we proposed a new method to solve the SFP and the fixed-point problem involving quasi-nonexpansive mappings. Compared with algorithm (7), the conditions were relaxed, and the nonexpansive mapping was extended to a quasi-nonexpansive mapping. An inertial term was also added to further accelerate the convergence rate. In addition, the selection of the step size no longer depends on the operator norm.
By solving some examples, we have illustrated the effectiveness and practicability of the method. We compared all numerical implementations of this method with (7). As shown in Figures 1 and 2, algorithm (9) is more effective than (7).

## Author Contributions

Conceptualization, T.X. and J.-C.Y.; Data curation, T.X. and B.J.; Formal analysis, Y.W.; Funding acquisition, Y.W.; Investigation, T.X. and J.-C.Y.; Methodology, Y.W.; Project administration, Y.W. and J.-C.Y.; Resources, T.X., J.-C.Y. and B.J.; Software, B.J.; Supervision, Y.W.; Visualization, B.J.; Writing—original draft, T.X. All authors have read and agreed to the published version of the manuscript.

## Funding

This research was funded by the National Natural Science Foundation of China (No. 12171435).


## Data Availability Statement

The data used to support the findings of this study are included within the article.

## Acknowledgments

The authors thank the referees for their helpful comments, which notably improved the presentation of this paper.

## Conflicts of Interest

The authors declare no conflict of interest.

## References

1. Censor, Y.; Elfving, T. A multiprojection algorithm using Bregman projections in a product space. Numer. Algor. 1994, 8, 221–239.
2. Byrne, C. Iterative oblique projection onto convex sets and the split feasibility problem. Inverse Probl. 2002, 18, 441–453.
3. Xu, H.K. Iterative methods for the split feasibility problem in infinite-dimensional Hilbert spaces. Inverse Probl. 2010, 26, 105018.
4. López, G.; Martín-Márquez, V.; Wang, F.; Xu, H.K. Solving the split feasibility problem without prior knowledge of matrix norms. Inverse Probl. 2012, 28, 085004.
5. Qin, X.; Wang, L. A fixed point method for solving a split feasibility problem in Hilbert spaces. Rev. R. Acad. Cienc. Exactas Fís. Nat. Ser. A Mat. RACSAM 2019, 113, 315–325.
6. Kraikaew, R.; Saejung, S. A simple look at the method for solving split feasibility problems in Hilbert spaces. Rev. R. Acad. Cienc. Exactas Fís. Nat. Ser. A Mat. RACSAM 2020, 114, 117.
7. Kesornprom, S.; Pholasa, N.; Cholamjiak, P. On the convergence analysis of the gradient-CQ algorithms for the split feasibility problem. Numer. Algor. 2020, 84, 997–1017.
8. Dong, Q.L.; He, S.; Rassias, T.M. General splitting methods with linearization for the split feasibility problem. J. Global Optim. 2021, 79, 813–836.
9. Shehu, Y.; Dong, Q.L.; Liu, L.L. Global and linear convergence of alternated inertial methods for split feasibility problems. Rev. R. Acad. Cienc. Exactas Fís. Nat. Ser. A Mat. RACSAM 2021, 115, 53.
10. Yang, J.; Liu, H. Strong convergence result for solving monotone variational inequalities in Hilbert space. Numer. Algor. 2019, 80, 741–752.
11. Kraikaew, R.; Saejung, S. Strong convergence of the Halpern subgradient extragradient method for solving variational inequalities in Hilbert spaces. J. Optim. Theory Appl. 2014, 163, 399–412.
12. Xu, H.K. Averaged mappings and the gradient-projection algorithm. J. Optim. Theory Appl. 2011, 150, 360–378.
13. Wang, Y.; Yuan, M.; Jiang, B. Multi-step inertial hybrid and shrinking Tseng's algorithm with Meir–Keeler contractions for variational inclusion problems. Mathematics 2021, 9, 1548.
14. Jiang, B.; Wang, Y.; Yao, J.C. Multi-step inertial regularized methods for hierarchical variational inequality problems involving generalized Lipschitzian mappings. Mathematics 2021, 9, 2103.
15. He, S.; Yang, C. Solving the variational inequality problem defined on intersection of finite level sets. Abstr. Appl. Anal. 2013, 2013, 942315.
16. Byrne, C. A unified treatment of some iterative algorithms in signal processing and image reconstruction. Inverse Probl. 2004, 20, 103–120.
17. Tian, M.; Xu, G. Inertial modified Tseng's extragradient algorithms for solving monotone variational inequalities and fixed point problems. J. Nonlinear Funct. Anal. 2020, 2020, 35.
18. Wang, Y.H.; Xia, Y.H. Strong convergence for asymptotically pseudocontractions with the demiclosedness principle in Banach spaces. Fixed Point Theory Appl. 2012, 2012, 45.
Figure 1. Comparison of scheme (9) and scheme (7) in Example 1.
Figure 2. Comparison of scheme (9) and scheme (7) in Example 2, with $( m , s , p ) = ( 240 , 1024 , 30 )$.
Figure 3. Comparison of scheme (9) and scheme (7) in Example 3, with $q 1 = 4 t 2 + t + 3$.
Table 1. Numerical results of scheme (9) as regards Example 1.

| $n − 1$ | $q_n ( 1 )$ | $q_n ( 2 )$ | $q_n ( 3 )$ | $q_n ( 4 )$ | $q_n ( 5 )$ | $E_n$ |
|---|---|---|---|---|---|---|
| 0 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.5675 × 10⁰ |
| 10 | 0.2146 | 0.1868 | 0.2938 | 0.4159 | 0.8249 | 2.5806 × 10⁻¹ |
| 50 | 0.0704 | 0.1281 | 0.2543 | 0.4935 | 0.9827 | 2.0782 × 10⁻² |
| 100 | 0.0658 | 0.1263 | 0.2519 | 0.4971 | 0.9921 | 9.3812 × 10⁻³ |
| 500 | 0.0632 | 0.1253 | 0.2504 | 0.4994 | 0.9984 | 1.8944 × 10⁻³ |
| 1000 | 0.0628 | 0.1251 | 0.2502 | 0.4997 | 0.9992 | 9.4829 × 10⁻⁴ |
| 5000 | 0.0626 | 0.1250 | 0.2500 | 0.4999 | 0.9998 | 1.8983 × 10⁻⁴ |
| 10000 | 0.0625 | 0.1250 | 0.2500 | 0.5000 | 0.9999 | 9.4925 × 10⁻⁵ |
Table 2. Numerical results of scheme (9) and scheme (7) as regards Example 2.

| m | s | p | Scheme (9) Iter. | Scheme (9) Time (s) | Scheme (7) Iter. | Scheme (7) Time (s) |
|---|---|---|---|---|---|---|
| 240 | 1024 | 30 | 40 | 0.0584 | 181 | 1.7113 |
| 480 | 2048 | 60 | 98 | 0.0933 | 337 | 13.8633 |
| 720 | 3072 | 90 | 142 | 0.1578 | 455 | 50.5543 |
| 960 | 4096 | 120 | 117 | 0.2073 | 544 | 138.2107 |
| 1200 | 5120 | 150 | 246 | 0.3534 | 795 | 706.2521 |
| 1440 | 6144 | 180 | 291 | 0.5483 | 883 | 1029.8199 |
Table 3. Numerical results of scheme (9) and scheme (7) as regards Example 3.

| $q_1$ | Scheme (9) Iter. | Scheme (9) Time (s) | Scheme (7) Iter. | Scheme (7) Time (s) |
|---|---|---|---|---|
| $4 t^2 + t + 3$ | 43 | 0.0413 | 57 | 0.0261 |
| $e^t + 2 t$ | 37 | 0.0406 | 55 | 0.0249 |
| $2 t / 16$ | 10 | 0.0338 | 25 | 0.0146 |
| $t^3 + \sin t$ | 21 | 0.0363 | 46 | 0.0208 |