Article

Partially Projective Algorithm for the Split Feasibility Problem with Visualization of the Solution Set

by Andreea Bejenaru 1 and Mihai Postolache 1,2,3,*
1 Department of Mathematics and Informatics, University Politehnica of Bucharest, 060042 Bucharest, Romania
2 Center for General Education, China Medical University, Taichung 40402, Taiwan
3 Gh. Mihoc-C. Iacob Institute of Mathematical Statistics and Applied Mathematics, Romanian Academy, 050711 Bucharest, Romania
* Author to whom correspondence should be addressed.
Symmetry 2020, 12(4), 608; https://doi.org/10.3390/sym12040608
Submission received: 12 March 2020 / Revised: 4 April 2020 / Accepted: 6 April 2020 / Published: 11 April 2020
(This article belongs to the Special Issue Advance in Nonlinear Analysis and Optimization)

Abstract

This paper introduces a new three-step algorithm to solve the split feasibility problem. Its main advantage is that one of the projection operators intervenes only in the final step, resulting in fewer computations at each iteration. An example is provided to support the theoretical approach. The numerical simulation reveals that the newly introduced procedure outperforms other existing methods, including the classic CQ algorithm. An interesting outcome of the numerical modeling is an approximate visual image of the solution set.

1. Introduction

The split feasibility problem (abbreviated SFP) was first introduced by Censor and Elfving [1] for solving a class of inverse problems. In their paper, Censor and Elfving also provided consistent algorithms for this new type of problem. However, their procedure did not gain much popularity, as it required computing matrix inverses at each step, which made it inefficient.
Soon after, Byrne [2] (see also [3]) proposed a new iterative method, called the CQ-method, that involves only the orthogonal projections onto C and Q (subsets of the Euclidean spaces $\mathbb{R}^n$ and $\mathbb{R}^m$, respectively) and does not require matrix inversion. His study generated important developments in several directions, based on the fact that the procedure itself admits multiple interpretations. For instance, it is well known that the SFP can be naturally rephrased as a constrained minimization problem. From this perspective, the CQ algorithm is exactly the gradient projection algorithm applied to this optimization problem. Several studies provided alternative gradient (projection) algorithms using selection techniques (see [4,5] for the multiple-sets split equality or feasibility problems, [6] for split equality and split feasibility problems, or [7] for a self-adaptive method).
In addition, the SFP can be rephrased as a fixed point problem, the involved operator being nonexpansive. This opens the perspective of using alternative iteration procedures for computing a solution. For instance, in [8], Xu introduced a Krasnosel'skii–Mann algorithm; in [9], Dang and Gao introduced a three-step KM-CQ-like algorithm, inspired by Noor's three-step iteration procedure; similarly, Feng et al. [12] introduced a three-step SFP algorithm inspired by the TTP iterative procedure defined by Thakur et al. in [10].
Moreover, in [11], the possibility of seeing the CQ algorithm as a special case of a Krasnosel'skii–Mann type algorithm was pointed out. Basically, this ensures the weak convergence of the procedure. In fact, assuming the SFP is consistent (i.e., its solution set is nonempty), Byrne proved that his algorithm (running on Euclidean spaces) is strongly convergent to a solution. In [11], Wang and Xu pointed out that strong convergence is preserved for arbitrary finite-dimensional Hilbert spaces, but it is generally lost in infinite dimensions. Starting from the CQ algorithm, improved procedures were sought in order to ensure strong convergence toward a solution. For instance, Wang and Xu proposed in [11] a modified CQ algorithm based on the idea of using contractions to approximate nonexpansive mappings. The three-step procedures defined by Dang and Gao in [9] and by Feng et al. in [12] turned out to be strongly convergent under additional assumptions.
Inspired by Feng et al.'s three-step procedure (hereinafter referred to as the SFP-TTP algorithm), we define a new algorithm that uses one of the projection mappings only partially. Since fewer projections are performed at each step, the running time of the algorithm is expected to be lower. A numerical simulation will show the new algorithm to be more efficient than both the SFP-TTP and CQ algorithms. Moreover, an approximate visual image of the set of solutions will be obtained during the numerical modeling. Last but not least, we consider that this study opens new research perspectives: extensions to SFPs constrained by variational inequalities [13], fixed point problems [14,15,16], and zeros of nonlinear operators or equilibrium problems [17,18] could be challenging research topics.

2. Preliminaries

Let $H_1$ and $H_2$ be two real Hilbert spaces, let C and Q be closed, convex, and nonempty subsets of $H_1$ and $H_2$, respectively, and let $A : H_1 \to H_2$ be a bounded linear operator. The split feasibility problem consists of finding a point x in $H_1$ such that
$$x \in C \quad \text{and} \quad Ax \in Q. \tag{1}$$
In this paper, we assume that the split feasibility problem is consistent, meaning that the solution set
$$\Omega = \{ x \in C : Ax \in Q \} = C \cap A^{-1}(Q)$$
of the SFP (1) is nonempty. If so, it is not difficult to note that Ω is closed and convex.
The CQ algorithm (see [2,3]) relies on an iteration procedure defined as follows.
Algorithm 1.
For an arbitrarily chosen initial point $x_0 \in H_1$, the sequence $\{x_n\}$ is generated by
$$x_{n+1} = P_C[I - \gamma A^T(I - P_Q)A]x_n, \quad n \geq 0,$$
where $P_C$ and $P_Q$ denote the projections onto the sets C and Q, respectively, $0 < \gamma < 2/\rho(A^T A)$, with $A^T : H_2 \to H_1$ being the transpose of A, while $\rho(A^T A)$ is the spectral radius (i.e., the largest eigenvalue) of the selfadjoint operator $A^T A$.
Let us note that we may assume $x_0 \in C$; otherwise, after performing the first iteration step, we reach a point belonging to C. The CQ algorithm converges to a solution of the SFP, for any initial approximation $x_0 \in C$, whenever the SFP has solutions. When the SFP has no solutions, the CQ algorithm converges to a minimizer of the function
$$f(x) = \frac{1}{2}\|P_Q(Ax) - Ax\|^2$$
over the set C, provided such constrained minimizers exist. Therefore, the CQ algorithm is an iterative constrained optimization method. This algorithm can easily be extended to arbitrary (even infinite-dimensional) real Hilbert spaces (naturally, $A^T$ must be replaced by $A^*$, the adjoint operator); a significant difference is that, in this case, the algorithm is only weakly convergent.
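As an illustration of Algorithm 1, here is a minimal NumPy sketch for the case where C and Q are unit balls, so both projections are simple radial scalings. The toy matrix, step size, tolerance, and all function names below are our own illustrative assumptions, not part of the original paper.

```python
import numpy as np

def proj_ball(x, radius=1.0):
    """Metric projection onto the closed ball of the given radius centered at 0."""
    nrm = np.linalg.norm(x)
    return x if nrm <= radius else (radius / nrm) * x

def cq_algorithm(A, proj_C, proj_Q, x0, gamma, tol=1e-8, max_iter=20000):
    """CQ iteration: x_{n+1} = P_C[x_n - gamma * A^T (I - P_Q) A x_n]."""
    x = np.asarray(x0, dtype=float)
    for n in range(max_iter):
        Ax = A @ x
        x_new = proj_C(x - gamma * (A.T @ (Ax - proj_Q(Ax))))
        if np.linalg.norm(x_new - x) < tol:
            return x_new, n + 1
        x = x_new
    return x, max_iter

# Toy instance: C and Q unit balls, A a fixed 3x2 matrix.
A = np.array([[1.0, 0.5], [0.0, 0.5], [-1.0, 0.5]])
gamma = 0.5 / np.linalg.norm(A, 2) ** 2   # satisfies 0 < gamma < 2 / rho(A^T A)
x, iters = cq_algorithm(A, proj_ball, proj_ball, np.array([1.0, 1.0]), gamma)
print(np.linalg.norm(x) <= 1 + 1e-9, np.linalg.norm(A @ x) <= 1 + 1e-3)
```

With the tolerance above, the returned point satisfies both constraints up to a small numerical slack; the step-size bound mirrors the condition $0 < \gamma < 2/\rho(A^T A)$ from Algorithm 1.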
To make the presentation self-contained, we review some concepts and basic results that will be used later.
Definition 1
([3] and the references therein). Let H be a Hilbert space and $T : H \to H$ be a (possibly nonlinear) selfmapping of H. Then,
  • T is said to be nonexpansive if
    $$\|Tx - Ty\| \leq \|x - y\|, \quad \forall x, y \in H.$$
  • T is said to be an averaged operator if $T = (1 - \alpha)I + \alpha N$, where $\alpha \in (0, 1)$, I is the identity map, and $N : H \to H$ is a nonexpansive mapping.
  • T is called monotone if
    $$\langle Tx - Ty, x - y \rangle \geq 0, \quad \forall x, y \in H.$$
  • Assume $\nu > 0$. Then, T is called ν-inverse strongly monotone (ν-ism) if
    $$\langle Tx - Ty, x - y \rangle \geq \nu \|Tx - Ty\|^2, \quad \forall x, y \in H.$$
  • Any 1-ism T is also known as firmly nonexpansive; that is,
    $$\langle Tx - Ty, x - y \rangle \geq \|Tx - Ty\|^2, \quad \forall x, y \in H.$$
Several useful properties are worth mentioning next.
Lemma 1
([3] and the references herein). The following statements hold true on Hilbert spaces.
(i)
Each firmly nonexpansive mapping is averaged and each averaged operator is nonexpansive.
(ii)
T is a firmly nonexpansive mapping if and only if its complement $I - T$ is firmly nonexpansive.
(iii)
The composition of a finite number of averaged operators is averaged.
(iv)
An operator N is nonexpansive if and only if its complement $I - N$ is a $\frac{1}{2}$-ism.
(v)
An operator T is averaged if and only if its complement $I - T$ is a ν-ism for some $\nu > \frac{1}{2}$. Moreover, if $T = (1 - \alpha)I + \alpha N$, then $I - T$ is a $\frac{1}{2\alpha}$-ism.
(vi)
If T is a ν-ism and $\gamma > 0$, then $\gamma T$ is a $\frac{\nu}{\gamma}$-ism.
Let $C \subseteq H$ be a nonempty, closed, and convex subset of a Hilbert space. Then, for each $x \in H$, there exists a unique $x^* \in C$ such that
$$\|x - x^*\| = \inf_{c \in C} \|x - c\|.$$
The function assigning to each x the unique proximal point $x^*$ is usually denoted by $P_C$, and it is known as the metric projection (also called the nearest point projection, proximity mapping, or best approximation operator). Hence, one can define
$$P_C : H \to C, \quad P_C(x) = \arg\min_{c \in C} \|x - c\|.$$
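For concreteness, two common closed convex sets admit explicit projection formulas. The sketch below (our own illustration, not from the paper) implements the projections onto a ball and onto a box, and numerically checks the firm nonexpansiveness property from Lemma 2 (ii) on random samples.

```python
import numpy as np

def proj_ball(x, r=1.0):
    """P_C for C = closed ball of radius r centered at 0: scale back to the boundary if outside."""
    n = np.linalg.norm(x)
    return x if n <= r else (r / n) * x

def proj_box(x, lo=-1.0, hi=1.0):
    """P_C for the box [lo, hi]^d: clip each coordinate independently."""
    return np.clip(x, lo, hi)

# Firm nonexpansiveness: <P_C x - P_C y, x - y> >= ||P_C x - P_C y||^2.
rng = np.random.default_rng(0)
for _ in range(1000):
    x, y = rng.normal(size=3), rng.normal(size=3)
    for P in (proj_ball, proj_box):
        Px, Py = P(x), P(y)
        assert np.dot(Px - Py, x - y) >= np.linalg.norm(Px - Py) ** 2 - 1e-12
print("firm nonexpansiveness verified on random samples")
```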
The following Lemma lists some important properties of the metric projection.
Lemma 2.
Let C H be a nonempty closed convex subset of a Hilbert space and let P C denote the metric projection on C. Then,
(i)
$$\langle x - P_C(x), P_C(x) - c \rangle \geq 0, \quad \forall c \in C.$$
(ii)
P C is a firmly nonexpansive operator, hence also averaged and nonexpansive.
Lemma 3
([19]). If, in a Hilbert space H, the sequence $\{x_n\}$ is weakly convergent to a point x, then, for any $y \neq x$, the following inequality holds true:
$$\liminf_{n \to \infty} \|x_n - y\| > \liminf_{n \to \infty} \|x_n - x\|.$$
Lemma 4
([19]). In a Hilbert space H, for every nonexpansive mapping $T : C \to H$ defined on a closed convex subset $C \subseteq H$, the mapping $I - T$ is demiclosed at 0 (i.e., if $x_n \rightharpoonup x$ and $\lim_{n \to \infty} \|x_n - Tx_n\| = 0$, then $x = Tx$).
Lemma 5
([20], Lemma 1.3). Suppose that X is a uniformly convex Banach space and $0 < p \leq t_n \leq q < 1$ for all $n \geq 1$ (i.e., $\{t_n\}$ is bounded away from 0 and 1). Let $\{x_n\}$ and $\{y_n\}$ be two sequences in X such that $\limsup_{n \to \infty} \|x_n\| \leq r$, $\limsup_{n \to \infty} \|y_n\| \leq r$, and $\limsup_{n \to \infty} \|t_n x_n + (1 - t_n)y_n\| = r$ hold true for some $r \geq 0$. Then, $\lim_{n \to \infty} \|x_n - y_n\| = 0$.

3. Main Results

Feng et al. initiated in [12] a three-step algorithm to solve the split feasibility problem. Their starting point was the TTP three-step iterative procedure introduced in [10], adapted to a properly chosen projective-type nonexpansive mapping.
Algorithm 2
([12]). For an arbitrarily chosen initial point $x_0 \in C$, the sequence $\{x_n\}$ is generated by the procedure
$$u_n = (1 - \alpha_n)x_n + \alpha_n T x_n,$$
$$v_n = (1 - \beta_n)u_n + \beta_n T u_n,$$
$$x_{n+1} = (1 - \gamma_n)T u_n + \gamma_n T v_n,$$
where $T = P_C[I - \gamma A^*(I - P_Q)A]$, and $\{\alpha_n\}$, $\{\beta_n\}$, $\{\gamma_n\}$ are three real sequences in (0, 1).
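A compact sketch of this SFP-TTP scheme could read as follows. The toy operator, the parameter values, and the stopping rule based on successive iterates are our own illustrative assumptions; `delta` plays the role of the sequence $\gamma_n$ to avoid clashing with the step-size constant $\gamma$.

```python
import numpy as np

def proj_ball(x, r=1.0):
    """Metric projection onto the closed unit ball (radius r) centered at 0."""
    n = np.linalg.norm(x)
    return x if n <= r else (r / n) * x

def sfp_ttp(A, proj_C, proj_Q, x0, gamma, alpha=0.5, beta=0.5, delta=0.5,
            tol=1e-10, max_iter=1000):
    """Algorithm 2 (SFP-TTP): three TTP steps built on T = P_C[I - gamma A^T (I - P_Q) A]."""
    def T(x):
        Ax = A @ x
        return proj_C(x - gamma * (A.T @ (Ax - proj_Q(Ax))))
    x = np.asarray(x0, dtype=float)
    for n in range(max_iter):
        u = (1 - alpha) * x + alpha * T(x)
        v = (1 - beta) * u + beta * T(u)
        x_new = (1 - delta) * T(u) + delta * T(v)
        if np.linalg.norm(x_new - x) < tol:
            return x_new, n + 1
        x = x_new
    return x, max_iter

A = np.array([[1.0, 0.5], [0.0, 0.5], [-1.0, 0.5]])
x, iters = sfp_ttp(A, proj_ball, proj_ball, [1.0, 0.0], gamma=0.25)
print(np.linalg.norm(x) <= 1 + 1e-9, np.linalg.norm(A @ x) <= 1 + 1e-3)
```

Note that every step of this scheme evaluates the projection $P_C$ (inside T) three times per iteration; this is precisely the cost that the partially projective variant below reduces.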
Note that Feng et al.’s approach relies on the following two key elements:
(1)
A point $x \in C$ solves the SFP (1) if and only if it solves the fixed point equation (see [8])
$$P_C[I - \gamma A^*(I - P_Q)A]x = x, \quad x \in C,$$
where $\gamma > 0$ denotes a positive constant. Hence, $F(T) = \Omega$.
(2)
The iteration function inside Algorithm 2, namely $T = P_C[I - \gamma A^*(I - P_Q)A]$, is nonexpansive for properly chosen γ (see Lemma 3.1 in [12]).
Turning back to the CQ Algorithm 1, we may rewrite it as follows:
$$x_{n+1} = P_C\left[\left(1 - \frac{\gamma \|A\|^2}{2}\right)x_n + \frac{\gamma \|A\|^2}{2}\bigl(I - B^*(I - P_Q)B\bigr)x_n\right], \quad n \geq 0,$$
where
$$B : H_1 \to H_2, \quad B = \frac{\sqrt{2}}{\|A\|}A.$$
By introducing the mapping
$$S : H_1 \to H_1, \quad S = I - B^*(I - P_Q)B, \tag{3}$$
one finds
$$x_{n+1} = P_C\left[\left(1 - \frac{\gamma \|A\|^2}{2}\right)x_n + \frac{\gamma \|A\|^2}{2}Sx_n\right], \quad n \geq 0.$$
We notice that the resulting procedure has the pattern of a Krasnosel'skii iterative process. This inspired us to study an analogue involving the TTP procedure. Our algorithm is defined as follows.
Algorithm 3.
For an arbitrarily chosen initial point $x_0 \in C$, the sequence $\{x_n\}$ is generated by
$$u_n = (1 - \gamma_n)x_n + \gamma_n S x_n,$$
$$v_n = (1 - \beta_n)u_n + \beta_n S u_n,$$
$$x_{n+1} = P_C[(1 - \alpha_n)S u_n + \alpha_n S v_n],$$
where $\{\alpha_n\}$, $\{\beta_n\}$, $\{\gamma_n\}$ are three real sequences in (0, 1). We shall refer to this algorithm as the partially projective TTP iteration procedure, since it runs as a classical TTP iterative scheme, except for the last step, where a projection is included.
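A minimal NumPy sketch of this partially projective scheme, for the case where C and Q are unit balls, could look as follows; the constant parameter choices, the toy 3x2 matrix, and the stopping rule are our own illustrative assumptions.

```python
import numpy as np

def proj_ball(x, r=1.0):
    """Metric projection onto the closed ball of radius r centered at 0."""
    n = np.linalg.norm(x)
    return x if n <= r else (r / n) * x

def partially_projective_ttp(A, proj_C, proj_Q, x0, alpha=0.5, beta=0.5, gamma=0.5,
                             tol=1e-10, max_iter=100):
    """Algorithm 3: TTP-type three-step scheme where P_C enters only in the last step."""
    B = (np.sqrt(2) / np.linalg.norm(A, 2)) * A
    def S(x):                                # S = I - B^T (I - P_Q) B
        Bx = B @ x
        return x - B.T @ (Bx - proj_Q(Bx))
    x = proj_C(np.asarray(x0, dtype=float))  # the scheme assumes x0 in C
    for n in range(max_iter):
        u = (1 - gamma) * x + gamma * S(x)
        v = (1 - beta) * u + beta * S(u)
        x_new = proj_C((1 - alpha) * S(u) + alpha * S(v))
        if np.linalg.norm(x_new - x) < tol:
            return x_new, n + 1
        x = x_new
    return x, max_iter

A = np.array([[1.0, 0.5], [0.0, 0.5], [-1.0, 0.5]])
x, iters = partially_projective_ttp(A, proj_ball, proj_ball, [1.0, 1.0])
print(np.linalg.norm(x) <= 1 + 1e-9, np.linalg.norm(A @ x) <= 1 + 1e-3)
```

In contrast to a fully projective three-step scheme, $P_C$ is evaluated once per iteration rather than at every substep.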
Lemma 6.
$\Omega = F(P_C \circ S) = F(S) \cap C$.
Proof. 
According to key element (1) above, we have $\Omega = F(T)$ for each mapping $T = P_C[I - \gamma A^*(I - P_Q)A]$ with $\gamma > 0$. Looking more closely at $P_C \circ S$, we note that
$$P_C \circ S = P_C\bigl(I - B^*(I - P_Q)B\bigr) = P_C\left[I - \frac{2}{\|A\|^2}A^*(I - P_Q)A\right],$$
which proves the first equality. Let us prove next the inclusion $F(S) \cap C \subseteq F(P_C \circ S) = \Omega$. Indeed, if $x \in F(S) \cap C$, it follows that $x \in C$ and $Sx = x$. Then, $P_C S x = P_C x = x$, and the proof of this statement is complete.
Finally, let us also check that $\Omega \subseteq F(S) \cap C$. We start with an arbitrary point $x \in \Omega$. Then, $x \in C$ and $Ax \in Q$, resulting in
$$Sx = \bigl(I - B^*(I - P_Q)B\bigr)x = \left(I - \frac{2}{\|A\|^2}A^*(I - P_Q)A\right)x = x - \frac{2}{\|A\|^2}A^*\bigl(Ax - P_Q(Ax)\bigr) = x - \frac{2}{\|A\|^2}A^* 0 = x.$$
Hence, $x \in F(S) \cap C$ and the proof is complete. □
Lemma 7.
The mapping S introduced in (3) is nonexpansive.
Proof. 
We start by pointing out that the mapping $U = B^*(I - P_Q)B$ is a $\frac{1}{L}$-ism, where $L = \|B\|^2$ (see Lemma 3.1 in [12]). On the other hand,
$$L = \|B\|^2 = \left\|\frac{\sqrt{2}}{\|A\|}A\right\|^2 = 2,$$
hence U is in fact a $\frac{1}{2}$-ism. According to Lemma 1 (iv), $S = I - U$ is nonexpansive. □
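Lemma 7 is also easy to probe numerically. The sketch below (our own illustration) builds $S = I - B^*(I - P_Q)B$ for a randomly drawn matrix A, with $B = (\sqrt{2}/\|A\|)A$ so that $\|B\|^2 = 2$, takes Q to be the unit ball, and tests the nonexpansiveness inequality on random pairs of points.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(3, 2))                  # an arbitrary bounded linear operator
B = (np.sqrt(2) / np.linalg.norm(A, 2)) * A  # rescaled so that ||B||^2 = 2

def proj_ball(x, r=1.0):
    n = np.linalg.norm(x)
    return x if n <= r else (r / n) * x

def S(x):
    """S = I - B^T (I - P_Q) B, with Q the unit ball (B^T = B^* in the real case)."""
    Bx = B @ x
    return x - B.T @ (Bx - proj_ball(Bx))

for _ in range(1000):
    x, y = 3 * rng.normal(size=2), 3 * rng.normal(size=2)
    assert np.linalg.norm(S(x) - S(y)) <= np.linalg.norm(x - y) + 1e-12
print("||S x - S y|| <= ||x - y|| on all sampled pairs")
```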
Lemma 8.
Let $\{x_n\}$ be the sequence generated by Algorithm 3. Then, $\lim_{n \to \infty} \|x_n - p\|$ exists for any $p \in \Omega$.
Proof. 
Let $p \in \Omega = F(S) \cap C$ (according to Lemma 6). Since S is nonexpansive, it follows that S is also quasi-nonexpansive, i.e., $\|Sx - p\| \leq \|x - p\|$ for each $x \in H_1$. Thus, the procedure in Algorithm 3 leads to
$$\|u_n - p\| = \|(1 - \gamma_n)x_n + \gamma_n S x_n - p\| = \|(1 - \gamma_n)(x_n - p) + \gamma_n(S x_n - p)\| \leq (1 - \gamma_n)\|x_n - p\| + \gamma_n\|S x_n - p\| \leq (1 - \gamma_n)\|x_n - p\| + \gamma_n\|x_n - p\| = \|x_n - p\|,$$
therefore
$$\|u_n - p\| \leq \|x_n - p\|. \tag{5}$$
The same reasoning applies to $\|v_n - p\|$, and one obtains
$$\|v_n - p\| = \|(1 - \beta_n)u_n + \beta_n S u_n - p\| = \|(1 - \beta_n)(u_n - p) + \beta_n(S u_n - p)\| \leq (1 - \beta_n)\|u_n - p\| + \beta_n\|S u_n - p\| \leq (1 - \beta_n)\|u_n - p\| + \beta_n\|u_n - p\| = \|u_n - p\|.$$
Now, using inequality (5), one finds
$$\|v_n - p\| \leq \|x_n - p\|. \tag{6}$$
In addition, using the fact that the projection mapping $P_C$ is nonexpansive and that $p \in C$ (so that p is a fixed point of $P_C$), we find that
$$\|x_{n+1} - p\| = \|P_C[(1 - \alpha_n)S u_n + \alpha_n S v_n] - p\| \leq \|(1 - \alpha_n)S u_n + \alpha_n S v_n - p\| = \|(1 - \alpha_n)(S u_n - p) + \alpha_n(S v_n - p)\|,$$
therefore
$$\|x_{n+1} - p\| \leq (1 - \alpha_n)\|u_n - p\| + \alpha_n\|v_n - p\|. \tag{7}$$
Together with (5) and (6), one obtains
$$\|x_{n+1} - p\| \leq (1 - \alpha_n)\|x_n - p\| + \alpha_n\|x_n - p\| = \|x_n - p\|. \tag{8}$$
We conclude from (8) that $\{\|x_n - p\|\}$ is decreasing (and bounded below by 0) for every $p \in \Omega$. Hence, $\lim_{n \to \infty} \|x_n - p\|$ exists. □
Lemma 9.
Let { x n } be the sequence generated by Algorithm 3. Then,
$$\lim_{n \to \infty} \|x_n - S x_n\| = 0.$$
Proof. 
Let $p \in \Omega = F(S) \cap C$. By Lemma 8, it follows that $\lim_{n \to \infty} \|x_n - p\|$ exists. Let us denote
$$r = \lim_{n \to \infty} \|x_n - p\|. \tag{9}$$
From (5), it is known that $\|u_n - p\| \leq \|x_n - p\|$. Taking lim sup on both sides of the inequality, one obtains
$$\limsup_{n \to \infty} \|u_n - p\| \leq \limsup_{n \to \infty} \|x_n - p\| = r. \tag{10}$$
Again, since S is quasi-nonexpansive, one has
$$\limsup_{n \to \infty} \|S x_n - p\| \leq \limsup_{n \to \infty} \|x_n - p\| = r. \tag{11}$$
Now, inequality (7), combined with the estimate $\|v_n - p\| \leq \|u_n - p\|$ established in the proof of Lemma 8, leads to
$$\|x_{n+1} - p\| \leq (1 - \alpha_n)\|u_n - p\| + \alpha_n\|v_n - p\| \leq (1 - \alpha_n)\|u_n - p\| + \alpha_n\|u_n - p\|,$$
that is,
$$\|x_{n+1} - p\| \leq \|u_n - p\|. \tag{12}$$
Applying lim sup to (12) and using (9) together with (10), one obtains
$$r = \limsup_{n \to \infty} \|x_{n+1} - p\| \leq \limsup_{n \to \infty} \|u_n - p\| \leq r,$$
which implies
$$\limsup_{n \to \infty} \|u_n - p\| = r. \tag{13}$$
Relation (13) can be rewritten as
$$\limsup_{n \to \infty} \|u_n - p\| = \limsup_{n \to \infty} \|(1 - \gamma_n)x_n + \gamma_n S x_n - p\| = \limsup_{n \to \infty} \|(1 - \gamma_n)(x_n - p) + \gamma_n(S x_n - p)\| = r.$$
From (9), (11), (13), and Lemma 5, one finds $\lim_{n \to \infty} \|S x_n - x_n\| = 0$. □
Theorem 4.
Let { x n } be the sequence generated by Algorithm 3. Then, { x n } is weakly convergent to a point of Ω.
Proof. 
Let
$$\omega_w(x_n) = \{\, x \in C : \text{there exists a subsequence } \{x_{n_i}\} \text{ of } \{x_n\} \text{ weakly convergent to } x \,\}$$
denote the set of weak subsequential limits of the sequence $\{x_n\}$. One immediate consequence of Lemma 8 is that $\{x_n\}$ is bounded. Consequently, there exists at least one weakly convergent subsequence, hence $\omega_w(x_n)$ is nonempty. We prove next that it contains exactly one weak limit point. To start, let us assume the contrary: let $x, y \in \omega_w(x_n)$ with $x \neq y$, and let $\{x_{n_i}\} \rightharpoonup x$ and $\{x_{n_j}\} \rightharpoonup y$. By Lemma 9, we have $\lim_{i \to \infty} \|x_{n_i} - S x_{n_i}\| = 0$, where S is a nonexpansive mapping (see Lemma 7). Applying Lemma 4, we find that $Sx = x$, hence $x \in F(S)$. On the other hand, since C is closed and convex, it is also weakly closed, hence it contains the weak limits of all its weakly convergent sequences. Therefore, $x \in C$ and ultimately $x \in \Omega = F(S) \cap C$. Similar arguments provide $y \in \Omega = F(S) \cap C$. In general, $\omega_w(x_n) \subseteq \Omega$.
From Lemma 8, the sequences $\{\|x_n - x\|\}$ and $\{\|x_n - y\|\}$ are convergent. These properties, together with Lemma 3, generate the following inequalities:
$$\lim_{n \to \infty} \|x_n - x\| = \lim_{i \to \infty} \|x_{n_i} - x\| < \lim_{i \to \infty} \|x_{n_i} - y\| = \lim_{n \to \infty} \|x_n - y\| = \lim_{j \to \infty} \|x_{n_j} - y\| < \lim_{j \to \infty} \|x_{n_j} - x\| = \lim_{n \to \infty} \|x_n - x\|.$$
This provides the expected contradiction. Hence, $\omega_w(x_n)$ is a singleton, say $\omega_w(x_n) = \{p\}$. We still need to prove that $x_n \rightharpoonup p$. Assume the contrary. Then, for a certain point $y_0 \in H_1$, there exists $\epsilon > 0$ such that, for each $k \in \mathbb{N}$, one can find $n_k \geq k$ satisfying $|\langle x_{n_k} - p, y_0 \rangle| > \epsilon$. The resulting subsequence $\{x_{n_k}\}$ is itself bounded (since $\{x_n\}$ is bounded), hence it contains a weakly convergent subsequence $\{x_{n_{k_l}}\}$. However, this new subsequence is also a weakly convergent subsequence of $\{x_n\}$, hence its weak limit must be p. Taking $l \to \infty$ in the inequality
$$|\langle x_{n_{k_l}} - p, y_0 \rangle| > \epsilon,$$
one finds $0 \geq \epsilon > 0$, a contradiction. Hence, $x_n \rightharpoonup p \in \Omega$. □
The next two theorems provide sufficient conditions for strong convergence. In order to phrase these conditions, let us consider the mapping $T : C \to C$, $T = P_C \circ S$. Then, T is a nonexpansive mapping and $F(T) = F(P_C \circ S) = \Omega$. Moreover,
$$\|x_n - T x_n\| = \|P_C(x_n) - P_C(S x_n)\| \leq \|x_n - S x_n\|,$$
hence
$$\lim_{n \to \infty} \|T x_n - x_n\| = 0.$$
The arguments proving the following statements are identical to those used in the proofs of Theorems 3.2 and 3.3 in [12].
Theorem 5.
Let $\{x_n\}$ be the sequence defined by Algorithm 3. Then, $\{x_n\}$ is strongly convergent to a point in Ω if and only if $\liminf_{n \to \infty} d(x_n, \Omega) = 0$, where $d(x, \Omega) = \inf_{p \in \Omega} \|x - p\|$.
In [21], the so-called Condition (A) was defined in connection with nonexpansive mappings. A mapping $T : C \to C$, defined on a nonempty subset C of a normed space X, is said to satisfy Condition (A) if there exists a nondecreasing function $f : [0, \infty) \to [0, \infty)$ with $f(0) = 0$ and $f(r) > 0$ for all $r \in (0, \infty)$, such that $\|x - Tx\| \geq f(d(x, F(T)))$ for all $x \in C$.
Theorem 6.
Let { x n } be the sequence generated by Algorithm 3. If T = P C S satisfies Condition ( A ) , then { x n } is strongly convergent to a point in Ω.

4. Numerical Simulation

In the following, we shall provide an example to analyze the efficiency of the partially projective TTP algorithm compared to the SFP-TTP and the CQ algorithms.
Example 1.
Let $H_1 = \mathbb{R}^2$, $H_2 = \mathbb{R}^3$, $C = \{x \in \mathbb{R}^2 : \|x\| \leq 1\}$, and $Q = \{x \in \mathbb{R}^3 : \|x\| \leq 1\}$. The projection mappings $P_C$ and $P_Q$ are defined as follows:
$$P_C : \mathbb{R}^2 \to C, \quad P_C(x) = \begin{cases} x, & \|x\| \leq 1 \\ \frac{1}{\|x\|}x, & \|x\| > 1 \end{cases};$$
$$P_Q : \mathbb{R}^3 \to Q, \quad P_Q(x) = \begin{cases} x, & \|x\| \leq 1 \\ \frac{1}{\|x\|}x, & \|x\| > 1 \end{cases}.$$
Consider
$$A : \mathbb{R}^2 \to \mathbb{R}^3, \quad A(x_1, x_2) = \left(x_1 + \frac{1}{2}x_2,\; \frac{1}{2}x_2,\; -x_1 + \frac{1}{2}x_2\right).$$
The associated matrix of this linear operator is
$$A = \begin{pmatrix} 1 & \frac{1}{2} \\ 0 & \frac{1}{2} \\ -1 & \frac{1}{2} \end{pmatrix},$$
and its spectral norm is $\|A\| = \sqrt{2}$, hence $B = A$.
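These claims are easy to verify numerically; the short check below is our own illustration and assumes the sign pattern of the reconstructed matrix above, under which $A^T A$ is diagonal.

```python
import numpy as np

A = np.array([[ 1.0, 0.5],
              [ 0.0, 0.5],
              [-1.0, 0.5]])

print(A.T @ A)               # diagonal: diag(2, 3/4), so rho(A^T A) = 2
print(np.linalg.norm(A, 2))  # spectral norm ||A|| = sqrt(2) ~ 1.41421356
B = (np.sqrt(2) / np.linalg.norm(A, 2)) * A
print(np.allclose(B, A))     # B = (sqrt(2)/||A||) A coincides with A here
```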
The purpose of our numerical experiment is to apply the newly introduced algorithm in order to determine the number of iterations required to remain below an acceptable error, say $\epsilon = 10^{-10}$. Moreover, we apply the analysis to all the points of the domain $H_1$. An immediate consequence of this approach is an approximate visual image of the solution set Ω (as the set of points needing just one iterative step). Since we already established that it is sufficient to choose initial points inside the set C (the unit disc), we can limit our analysis to the $[-1, 1] \times [-1, 1]$ square. One possible problem could be the existence of initial points with very long orbits, which would slow down the performance of the algorithm considerably. However, these points are not particularly interesting to us (they do not belong to Ω). Hence, in order to increase the performance of the numerical algorithm, we add an additional exit criterion: if a solution is not found with error less than $\epsilon$, the algorithm breaks after $K = 30$ iterative steps.
Let us start by assigning values to the parameters involved in the experimental procedure. Consider, for instance, $\alpha_n = \beta_n = \gamma_n = \frac{1}{2}$. For each point in the selected square area, we apply the algorithm until one of the two previously mentioned exit criteria is satisfied ($\|x_{n+1} - x_n\| < \epsilon$ or $n > 30$). Meanwhile, we count the number of iterations performed. Depending on the number of iterative steps taken, we associate the starting point with a certain color. For instance, we assign olive to the points needing just one iteration step (the set Ω), a darker yellow-green to the points needing two iterations, and so on (all the color assignments are gathered into the right-side colorbar of each image; running the color band from bottom to top, we find the number of iterations corresponding to each color).
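The counting procedure just described can be sketched as follows; this is our own reconstruction, with an illustrative grid resolution, and the resulting integer array is the kind of data that gets color-mapped in the figures. Here $B = A$, so S can be written directly in terms of A.

```python
import numpy as np

A = np.array([[1.0, 0.5], [0.0, 0.5], [-1.0, 0.5]])   # operator of Example 1

def proj_ball(x, r=1.0):
    n = np.linalg.norm(x)
    return x if n <= r else (r / n) * x

def S(x):
    Ax = A @ x                                  # B = A here since ||A|| = sqrt(2)
    return x - A.T @ (Ax - proj_ball(Ax))

def iteration_count(x0, eps=1e-10, K=30, a=0.5, b=0.5, g=0.5):
    """Number of Algorithm 3 steps until ||x_{n+1} - x_n|| < eps, capped at K."""
    x = x0
    for n in range(1, K + 1):
        u = (1 - g) * x + g * S(x)
        v = (1 - b) * u + b * S(u)
        x_new = proj_ball((1 - a) * S(u) + a * S(v))
        if np.linalg.norm(x_new - x) < eps:
            return n
        x = x_new
    return K

# Iteration-count map over a grid on [-1,1]^2; the one-step points approximate Omega.
ts = np.linspace(-1, 1, 41)
counts = np.array([[iteration_count(np.array([x1, x2])) for x1 in ts] for x2 in ts])
print(counts.min(), counts.max())   # feed `counts` to a color map to draw the figure
```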
The final result is the image included in Figure 1. Similar numerical simulations, involving the SFP-TTP procedure in Algorithm 2 and the CQ Algorithm 1, provide the images in Figure 2 and Figure 3, respectively. For these two procedures, we have set the γ parameter to the value $\frac{1}{4}$.
Let us analyze the first image, i.e., the one corresponding to the partially projective TTP iterative process. The central elliptic disc is olive colored, meaning that its points require just one iteration to reach the requested error. This is an approximate visual representation of the solution set Ω. Identical visual images of Ω are obtained when using the other two algorithms. Nevertheless, apart from this central part, the three figures are completely distinct. In Figure 1, corresponding to the partially projective TTP procedure, all the points outside the central elliptic disc are colored yellow-green. Checking against the color bar, this means that they require two iteration steps to reach the exit criterion. This changes dramatically in Figure 2. We can see that these outer points take on all kinds of colors and shades, going up to light orange, which corresponds to 30 iterative steps. This shows that the partially projective TTP procedure generally converges faster than the SFP-TTP and CQ procedures. Going further and analyzing Figure 3, we notice that many points are black, meaning that they require more than 30 iteration steps. Hence, the CQ algorithm has the slowest convergence of all.
We point out again that the SFP-TTP algorithm, as well as the CQ algorithm, require an additional parameter, namely γ. For the example under consideration, this parameter must range in the interval (0, 1). Figure 2 and Figure 3 resulted for $\gamma = \frac{1}{4}$. We wondered whether changing this parameter would cause major changes in the convergence behavior of the algorithms. We performed this analysis for a fixed initial approximation $x_0 = (1, 0)$. Assigning various values to γ between 0.1 and 0.9 and counting the number of iteration steps performed by the three algorithms yields the graph in Figure 4. As expected, the partially projective TTP procedure does not depend on γ. The interesting fact is that the CQ algorithm also reveals a uniform distribution of the number of iteration steps, despite the change of γ. Again, Algorithm 3 appears to be the most efficient, while Algorithm 1 is the slowest. Moreover, the SFP-TTP algorithm seems to improve its efficiency as γ approaches 1.

5. Conclusions

Our paper finds its motivation in recently developed approaches to split feasibility, concerning the search for new algorithms as well as strong convergence results. The reasoning behind our study was to avoid the excessive interference of the projection mappings at each iteration step. Consequently, we provided a partially projective three-step procedure as a less resource-consuming alternative to other existing algorithms (including the classical CQ). In this regard, several weak and strong convergence results were stated and proved. In addition, a numerical simulation involving the new algorithm and other existing procedures revealed its advantages concerning convergence speed. Perhaps the most important novelty comes from generating an approximate visual representation of the solution set.

Author Contributions

Conceptualization: A.B. and M.P.; formal analysis: A.B. and M.P.; validation: A.B. and M.P. Both authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Censor, Y.; Elfving, T. A multiprojection algorithm using Bregman projections in a product space. Numer. Algorithms 1994, 8, 221–239.
  2. Byrne, C. Iterative oblique projection onto convex sets and the split feasibility problem. Inverse Probl. 2002, 18, 441–453.
  3. Byrne, C. A unified treatment of some iterative algorithms in signal processing and image reconstruction. Inverse Probl. 2004, 20, 103–120.
  4. Tian, D.; Jiang, L.; Shi, L. Gradient methods with selection technique for the multiple-sets split equality problem. Mathematics 2019, 7, 928.
  5. Yao, Y.; Postolache, M.; Zhu, Z. Gradient methods with selection technique for the multiple-sets split feasibility problem. Optimization 2020, 69, 269–281.
  6. Vuong, P.T.; Strodiot, J.J.; Nguyen, V.H. A gradient projection method for solving split equality and split feasibility problems in Hilbert spaces. Optimization 2014, 64, 2321–2341.
  7. Yao, Y.; Postolache, M.; Liou, Y.C. Strong convergence of a self-adaptive method for the split feasibility problem. Fixed Point Theory Appl. 2013, 2013, 201.
  8. Xu, H.K. A variable Krasnosel'skii–Mann algorithm and the multiple-set split feasibility problem. Inverse Probl. 2006, 22, 2021–2034.
  9. Dang, Y.Z.; Gao, Y. The strong convergence of a three-step algorithm for the split feasibility problem. Optim. Lett. 2013, 7, 1325–1339.
  10. Thakur, B.S.; Thakur, D.; Postolache, M. A new iteration scheme for approximating fixed points of nonexpansive mappings. Filomat 2016, 30, 2711–2720.
  11. Wang, F.; Xu, H.K. Approximating curve and strong convergence of the CQ algorithm for the split feasibility problem. J. Inequal. Appl. 2010, 102085.
  12. Feng, M.; Shi, L.; Chen, R. A new three-step iterative algorithm for solving the split feasibility problem. U. Politeh. Buch. Ser. A 2019, 81, 93–102.
  13. Yao, Y.; Postolache, M.; Yao, J.C. Iterative algorithms for generalized variational inequalities. U. Politeh. Buch. Ser. A 2019, 81, 3–16.
  14. Yao, Y.; Agarwal, R.P.; Postolache, M.; Liou, Y.C. Algorithms with strong convergence for the split common solution of the feasibility problem and fixed point problem. Fixed Point Theory Appl. 2014, 2014, 183.
  15. Yao, Y.; Leng, L.; Postolache, M.; Zheng, X. Mann-type iteration method for solving the split common fixed point problem. J. Nonlinear Convex Anal. 2017, 18, 875–882.
  16. Yao, Y.; Postolache, M.; Yao, J.C. Strong convergence of an extragradient algorithm for variational inequality and fixed point problems. U. Politeh. Buch. Ser. A 2020, 82, 3–12.
  17. Dadashi, V.; Postolache, M. Forward–backward splitting algorithm for fixed point problems and zeros of the sum of monotone operators. Arab. J. Math. 2020, 9, 89–99.
  18. Dadashi, V.; Postolache, M. Hybrid proximal point algorithm and applications to equilibrium problems and convex programming. J. Optim. Theory Appl. 2017, 174, 518–529.
  19. Opial, Z. Weak convergence of the sequence of successive approximations for nonexpansive mappings. Bull. Am. Math. Soc. 1967, 73, 591–597.
  20. Schu, J. Weak and strong convergence of fixed points of asymptotically nonexpansive mappings. Bull. Austral. Math. Soc. 1991, 43, 153–159.
  21. Senter, H.F.; Dotson, W.G. Approximating fixed points of nonexpansive mappings. Proc. Am. Math. Soc. 1974, 44, 375–380.
Figure 1. The partially projective TTP algorithm.
Figure 2. The SFP-TTP algorithm.
Figure 3. The CQ algorithm.
Figure 4. Number of iteration steps for variable γ parameter.
