Article

Modified Projection Method with Inertial Technique and Hybrid Stepsize for the Split Feasibility Problem

by Suthep Suantai 1,2, Suparat Kesornprom 3, Watcharaporn Cholamjiak 3 and Prasit Cholamjiak 3,*

1 Data Science Research Center, Department of Mathematics, Faculty of Science, Chiang Mai University, Chiang Mai 50200, Thailand
2 Research Group in Mathematics and Applied Mathematics, Department of Mathematics, Faculty of Science, Chiang Mai University, Chiang Mai 50200, Thailand
3 School of Science, University of Phayao, Phayao 56000, Thailand
* Author to whom correspondence should be addressed.
Mathematics 2022, 10(6), 933; https://doi.org/10.3390/math10060933
Submission received: 8 February 2022 / Revised: 10 March 2022 / Accepted: 12 March 2022 / Published: 15 March 2022
(This article belongs to the Special Issue Applied Functional Analysis and Applications)

Abstract

We design a modified projection method with a new condition on the inertial step and the step size for the split feasibility problem in Hilbert spaces. We show that the sequence generated by our method converges weakly to a solution. Finally, we give numerical examples and comparisons, with an application to signal recovery, to show the efficiency of our method.

1. Introduction

The convex feasibility problem (CFP) is to find a point in the intersection of finitely many closed convex sets. The CFP formalism is at the core of the modeling of many inverse problems in various areas of mathematics. The split equality problem (SEP) is a classical inverse problem that is formulated as follows [1]:
$$\text{find } s \in C,\ t \in Q \text{ such that } As = Bt,$$
where $C \subseteq \mathbb{R}^K$ and $Q \subseteq \mathbb{R}^P$ are nonempty closed convex sets, $A$ and $B$ are $M \times K$ and $M \times P$ real matrices, respectively, and $M, K, P$ are positive integers. See also [1,2,3,4].
Now, we consider the split feasibility problem (SFP), proposed by Censor and Elfving [5], which is formulated as
$$\text{find } s^* \in C \text{ such that } As^* \in Q,$$
where $A : X \to Y$ is a bounded linear operator, and $C$ and $Q$ are nonempty closed convex subsets of real Hilbert spaces $X$ and $Y$, respectively. The SEP is an extension of the SFP.
The SFP can be applied to real-world problems such as image processing, signal recovery, and data classification; see [5,6,7,8,9,10]. There are many methods for solving the split feasibility problem [11,12,13,14,15,16,17,18]. One of the most popular is the CQ method of Byrne [19]. In 2004, Yang [20] introduced the relaxed CQ method, which projects onto half-spaces. For these methods, the step sizes depend on the operator norm, which is generally not easy to compute.
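For illustration, the classical CQ iteration is $s_{k+1} = P_C\big(s_k - \gamma A^{\top}(A s_k - P_Q(A s_k))\big)$ with $\gamma \in (0, 2/\|A\|^2)$. A minimal numpy sketch follows; the box-shaped $C$ and $Q$ and all numeric values are our own toy choices, not from the paper:

```python
import numpy as np

def cq_method(A, proj_C, proj_Q, s0, gamma, iters=500):
    """Byrne's CQ iteration for the SFP: find s in C with A s in Q.

    s_{k+1} = P_C(s_k - gamma * A^T (A s_k - P_Q(A s_k))),
    with 0 < gamma < 2 / ||A||^2.
    """
    s = s0.astype(float)
    for _ in range(iters):
        As = A @ s
        s = proj_C(s - gamma * A.T @ (As - proj_Q(As)))
    return s

# Toy instance: C and Q are boxes, so both projections are
# coordinate-wise clipping.
A = np.array([[1.0, 2.0], [0.0, 1.0]])
proj_C = lambda s: np.clip(s, -1.0, 1.0)   # C = [-1, 1]^2
proj_Q = lambda t: np.clip(t, 0.0, 2.0)    # Q = [0, 2]^2
gamma = 1.0 / np.linalg.norm(A, 2) ** 2    # safely inside (0, 2/||A||^2)
s = cq_method(A, proj_C, proj_Q, np.array([5.0, -5.0]), gamma)
# s lies in C and A @ s lies in Q (up to numerical tolerance)
```

Note that computing `gamma` requires the spectral norm of `A`, which is exactly the practical drawback that the norm-free step sizes discussed below avoid.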
Afterwards, step-size rules that do not depend on the operator norm were introduced. In 2012, Zhao et al. [21] introduced a modified projection method for the SFP. In 2018, Gibali et al. [22] proposed a modified relaxed CQ algorithm for the SFP with an Armijo line search. In 2012, López et al. [23] suggested step sizes for the SFP that do not require prior knowledge of the matrix norm.
In addition, to speed up convergence, these methods have been improved by adding an inertial step to the iterative scheme; see Nesterov [24]. In 2020, Shehu and Gibali [25] introduced a relaxed CQ method with an alternated inertial step for the SFP.
The purpose of this work is to design a new projection method using an inertial step and the step size defined by López et al. [23] for solving the SFP. We prove the weak convergence of the generated sequence. To show its efficiency, we present a comparison with the algorithm of Gibali et al. [22] and the algorithm of Shehu and Gibali [25] in signal recovery.
The paper is organized as follows. Section 2 presents preliminaries and lemmas that are used throughout the paper. In Section 3, we describe our new relaxed CQ algorithm with an inertial step and prove a weak convergence theorem. In Section 4, we apply the proposed algorithm to signal recovery and compare it with the algorithm of Gibali et al. [22] and the algorithm of Shehu and Gibali [25]. Lastly, conclusions are given in Section 5.

2. Preliminaries and Lemmas

We now give some preliminaries and mathematical tools for proving our convergence theorem. The symbol ⇀ stands for weak convergence.
  • A mapping $T : X \to X$ is firmly nonexpansive if
    $$\langle Ts - Tt, s - t \rangle \ge \|Ts - Tt\|^2 \quad \text{for all } s, t \in X.$$
  • A differentiable function $g : X \to \mathbb{R}$ is convex if and only if
    $$g(w) \ge g(s) + \langle \nabla g(s), w - s \rangle \quad \text{for all } w, s \in X.$$
Lemma 1
([26]). Let C be a nonempty closed convex subset of a real Hilbert space X, and let $P_C$ be the metric projection onto C. Then:
(1) $\langle s - P_C s, w - P_C s \rangle \le 0$ for all $w \in C$;
(2) $\|P_C s - P_C t\|^2 \le \langle P_C s - P_C t, s - t \rangle$ for all $s, t \in X$;
(3) $\|P_C s - w\|^2 \le \|s - w\|^2 - \|P_C s - s\|^2$ for all $w \in C$.
Lemma 2
([27]). Let $\{a_k\}$, $\{b_k\}$, and $\{c_k\}$ be positive sequences such that
$$a_{k+1} \le (1 + c_k) a_k + b_k, \quad k \ge 1.$$
If $\sum_{k=1}^{\infty} c_k < +\infty$ and $\sum_{k=1}^{\infty} b_k < +\infty$, then $\lim_{k \to +\infty} a_k$ exists.
Lemma 3
([28]). Let $\{a_k\}$ and $\{\beta_k\}$ be positive sequences such that
$$a_{k+1} \le (1 + \beta_k) a_k + \beta_k a_{k-1}, \quad k \ge 1.$$
Then $a_{k+1} \le N \cdot \prod_{i=1}^{k} (1 + 2\beta_i)$, where $N = \max\{a_1, a_2\}$. Moreover, if $\sum_{k=1}^{\infty} \beta_k < +\infty$, then $\{a_k\}$ is bounded.
Lemma 4
([29]). Let Ω be a nonempty subset of a real Hilbert space X, and let $\{s_k\}$ be a sequence in X such that
(i) $\lim_{k \to \infty} \|s_k - s\|$ exists for each $s \in \Omega$;
(ii) every sequential weak limit point of $\{s_k\}$ is in Ω.
Then $\{s_k\}$ converges weakly to a point in Ω.

3. Main Results

Next, we propose a new relaxed CQ algorithm with an inertial step and prove a weak convergence theorem. Let Ω be the set of solutions of the SFP, and write $C = \{ s \in X : c(s) \le 0 \}$ and $Q = \{ t \in Y : q(t) \le 0 \}$, where $c : X \to \mathbb{R}$ and $q : Y \to \mathbb{R}$ are convex functions. Given the inertial point $r_k$ constructed in Method 1 below, define the half-spaces $C_k$ and $Q_k$ by
$$C_k = \{ s \in X : c(r_k) \le \langle \eta_k, r_k - s \rangle \},$$
where $\eta_k \in \partial c(r_k)$, and
$$Q_k = \{ t \in Y : q(A r_k) \le \langle \varphi_k, A r_k - t \rangle \},$$
where $\varphi_k \in \partial q(A r_k)$. By construction, $C \subseteq C_k$ and $Q \subseteq Q_k$ for all k.
Since c and q are subdifferentiable on C and Q, respectively, $\partial c$ and $\partial q$ are bounded on bounded sets.
Set
$$g_k(s) = \frac{1}{2} \|(I - P_{Q_k}) A s\|^2, \quad k \ge 1.$$
We then have
$$\nabla g_k(s) = A^* (I - P_{Q_k}) A s,$$
where $A^*$ is the adjoint operator of A.
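Since $C_k$ and $Q_k$ are half-spaces, the projections onto them have closed forms, and $\nabla g_k$ is obtained from a single projection. A minimal numpy sketch (the function names are our own, for illustration only):

```python
import numpy as np

def proj_halfspace(s, a, b):
    """Closed-form projection of s onto the half-space H = {x : <a, x> <= b}."""
    viol = a @ s - b
    if viol <= 0:
        return s.copy()              # s is already in H
    return s - (viol / (a @ a)) * a  # move onto the bounding hyperplane

def grad_g(A, proj_Qk, s):
    """Gradient of g(s) = 0.5 * ||(I - P_{Q_k}) A s||^2,
    namely A^T (I - P_{Q_k}) A s (A^T is the adjoint on R^n)."""
    As = A @ s
    return A.T @ (As - proj_Qk(As))

# <a, s> = 4 > b = 2, so s is projected onto the bounding hyperplane.
a = np.array([1.0, 1.0])
p = proj_halfspace(np.array([2.0, 2.0]), a, 2.0)   # -> [1.0, 1.0]
```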
Method 1.
A relaxed CQ method with an inertial step.
Let $\rho_1 > 0$, $\mu \in (0, \frac{1}{2})$, $0 < \sigma_k < 4$, and $\beta_k > 0$. Choose $s_0, s_1 \in X$ and set $k = 1$.
Step 1. Construct the inertial step:
$$r_k = s_k + \beta_k (s_k - s_{k-1}).$$
Step 2. Compute the relaxed CQ iteration:
$$t_k = P_{C_k}\big(r_k - \rho_k \nabla g_k(r_k)\big).$$
Step 3. Calculate the next iterate via
$$s_{k+1} = t_k - \alpha_k \nabla g_k(t_k),$$
where
$$\alpha_k = \frac{\sigma_k\, g_k(t_k)}{\|\nabla g_k(t_k)\|^2}.$$
Step 4. Compute the step size
$$\rho_{k+1} = \begin{cases} \min\left\{ \rho_k, \dfrac{\mu \|t_k - r_k\|}{\|\nabla g_k(t_k) - \nabla g_k(r_k)\|} \right\} & \text{if } \nabla g_k(t_k) - \nabla g_k(r_k) \ne 0, \\[2mm] \rho_k & \text{otherwise}. \end{cases}$$
Set $k = k + 1$ and go to Step 1.
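The four steps above can be sketched in Python as follows. This is a minimal illustration under our own assumptions: the default parameter values, the toy sets, and the constant inertial weight are our choices, not the paper's (the theorem below requires summable $\beta_k$):

```python
import numpy as np

def method1(A, proj_Ck, proj_Qk, s0, s1, rho1=1.0, mu=0.4,
            sigma=0.8, beta=0.2, iters=100):
    """Sketch of Method 1. proj_Ck(k, x) and proj_Qk(k, y) are the
    (possibly iteration-dependent) projections onto C_k and Q_k."""
    s_prev, s, rho = s0.astype(float), s1.astype(float), rho1

    def g(k, x):          # g_k(x) = 0.5 * ||(I - P_{Q_k}) A x||^2
        res = A @ x - proj_Qk(k, A @ x)
        return 0.5 * (res @ res)

    def grad_g(k, x):     # grad g_k(x) = A^T (I - P_{Q_k}) A x
        return A.T @ (A @ x - proj_Qk(k, A @ x))

    for k in range(1, iters + 1):
        r_k = s + beta * (s - s_prev)              # Step 1: inertial point
        gr = grad_g(k, r_k)
        t_k = proj_Ck(k, r_k - rho * gr)           # Step 2: relaxed CQ step
        gt = grad_g(k, t_k)
        gt_norm2 = gt @ gt
        if gt_norm2 == 0.0:                        # t_k already solves the SFP
            return t_k
        alpha = sigma * g(k, t_k) / gt_norm2       # Step 3: second gradient step
        s_prev, s = s, t_k - alpha * gt
        diff = np.linalg.norm(gt - gr)             # Step 4: hybrid stepsize update
        if diff > 0.0:
            rho = min(rho, mu * np.linalg.norm(t_k - r_k) / diff)
    return s

# Toy instance (our own): A = I, C = [-1, 1]^2, Q = [0.5, 2]^2.
A = np.eye(2)
sol = method1(A,
              lambda k, x: np.clip(x, -1.0, 1.0),
              lambda k, y: np.clip(y, 0.5, 2.0),
              np.array([3.0, -3.0]), np.array([3.0, -3.0]))
# sol lies in C and A @ sol lies in Q
```

Note that the step size never uses $\|A\|$: it is updated adaptively from the observed gradients, which is the point of the hybrid rule in Step 4.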
Remark 1.
In Step 1, the inertial term is $\beta_k (s_k - s_{k-1})$, which is efficient in speeding up the convergence rate of such algorithms. See [24,30].
Lemma 5.
Let $\{s_k\}$ be generated by Method 1. Then, for all $w \in \Omega$,
$$\langle t_k - w, \nabla g_k(t_k) \rangle \ge 2 g_k(t_k)$$
and
$$\langle r_k - w, \nabla g_k(r_k) \rangle \ge 2 g_k(r_k).$$
Proof. 
Let $w \in \Omega$. Hence $w = P_{C_k}(w)$, $A w = P_{Q_k}(A w)$, and $\nabla g_k(w) = 0$. Since $I - P_{Q_k}$ is firmly nonexpansive, we obtain
$$\begin{aligned}
\langle t_k - w, \nabla g_k(t_k) \rangle &= \langle t_k - w, \nabla g_k(t_k) - \nabla g_k(w) \rangle \\
&= \langle t_k - w, A^*(I - P_{Q_k}) A t_k - A^*(I - P_{Q_k}) A w \rangle \\
&= \langle A t_k - A w, (I - P_{Q_k}) A t_k - (I - P_{Q_k}) A w \rangle \\
&\ge \|(I - P_{Q_k}) A t_k\|^2 = 2 g_k(t_k).
\end{aligned}$$
Similarly,
$$\langle r_k - w, \nabla g_k(r_k) \rangle \ge 2 g_k(r_k). \qquad \square$$
Lemma 6.
Let $\{s_k\}$ be generated by Method 1. Then
$$-2 \rho_k \langle t_k - r_k, \nabla g_k(r_k) \rangle \le \frac{2 \mu \rho_k}{\rho_{k+1}} \|t_k - r_k\|^2 + 2 \rho_k g_k(r_k).$$
Proof. 
Since $g_k : X \to \mathbb{R}$ is convex, the gradient inequality gives $\langle t_k - r_k, \nabla g_k(t_k) \rangle \ge g_k(t_k) - g_k(r_k)$. Hence
$$\begin{aligned}
-2 \rho_k \langle t_k - r_k, \nabla g_k(r_k) \rangle &= 2 \rho_k \langle t_k - r_k, \nabla g_k(t_k) - \nabla g_k(r_k) \rangle - 2 \rho_k \langle t_k - r_k, \nabla g_k(t_k) \rangle \\
&\le 2 \rho_k \|t_k - r_k\| \|\nabla g_k(t_k) - \nabla g_k(r_k)\| - 2 \rho_k \big(g_k(t_k) - g_k(r_k)\big) \\
&\le \frac{2 \rho_k}{\rho_{k+1}} \cdot \rho_{k+1} \|t_k - r_k\| \|\nabla g_k(t_k) - \nabla g_k(r_k)\| + 2 \rho_k g_k(r_k) \\
&\le \frac{2 \mu \rho_k}{\rho_{k+1}} \|t_k - r_k\|^2 + 2 \rho_k g_k(r_k),
\end{aligned}$$
where the second inequality drops the nonnegative term $2 \rho_k g_k(t_k)$, and the last uses the definition of $\rho_{k+1}$ in Step 4 (if $\nabla g_k(t_k) = \nabla g_k(r_k)$, the bound holds trivially). $\square$
Lemma 7.
Let $\{s_k\}$ be generated by Method 1. Then, for all $w \in \Omega$,
$$\|s_{k+1} - w\|^2 \le \|r_k - w\|^2 - 2 \rho_k g_k(r_k) - \left(1 - \frac{2 \mu \rho_k}{\rho_{k+1}}\right) \|t_k - r_k\|^2 - \frac{(4 - \sigma_k)\, \sigma_k\, g_k^2(t_k)}{\|\nabla g_k(t_k)\|^2}.$$
Proof. 
Let $w \in \Omega$. From Lemma 5 and the definition of $\alpha_k$, we have
$$\begin{aligned}
\|s_{k+1} - w\|^2 &= \|t_k - \alpha_k \nabla g_k(t_k) - w\|^2 \\
&= \|t_k - w\|^2 + \alpha_k^2 \|\nabla g_k(t_k)\|^2 - 2 \alpha_k \langle t_k - w, \nabla g_k(t_k) \rangle \\
&\le \|t_k - w\|^2 + \frac{\sigma_k^2 g_k^2(t_k)}{\|\nabla g_k(t_k)\|^2} - \frac{2 \sigma_k g_k(t_k)}{\|\nabla g_k(t_k)\|^2} \cdot 2 g_k(t_k) \\
&= \|t_k - w\|^2 - \frac{(4 - \sigma_k)\, \sigma_k\, g_k^2(t_k)}{\|\nabla g_k(t_k)\|^2}.
\end{aligned}$$
From Lemma 1(3), Lemma 5, and Lemma 6, we obtain
$$\begin{aligned}
\|t_k - w\|^2 &= \|P_{C_k}(r_k - \rho_k \nabla g_k(r_k)) - w\|^2 \\
&\le \|r_k - \rho_k \nabla g_k(r_k) - w\|^2 - \|t_k - r_k + \rho_k \nabla g_k(r_k)\|^2 \\
&= \|r_k - w\|^2 - 2 \rho_k \langle r_k - w, \nabla g_k(r_k) \rangle - \|t_k - r_k\|^2 - 2 \rho_k \langle t_k - r_k, \nabla g_k(r_k) \rangle \\
&\le \|r_k - w\|^2 - 4 \rho_k g_k(r_k) - \|t_k - r_k\|^2 + \frac{2 \mu \rho_k}{\rho_{k+1}} \|t_k - r_k\|^2 + 2 \rho_k g_k(r_k) \\
&= \|r_k - w\|^2 - 2 \rho_k g_k(r_k) - \left(1 - \frac{2 \mu \rho_k}{\rho_{k+1}}\right) \|t_k - r_k\|^2.
\end{aligned}$$
Combining the two estimates yields the claim. $\square$
Lemma 8.
Let $\{s_k\}$ be generated by Method 1. If $\sum_{k=1}^{\infty} \beta_k < \infty$, then $\lim_{k \to \infty} \|s_k - w\|$ exists for all $w \in \Omega$.
Proof. 
Let $w \in \Omega$. By Lemma 7, we obtain
$$\|s_{k+1} - w\| \le \|r_k - w\| = \|s_k + \beta_k (s_k - s_{k-1}) - w\| \le \|s_k - w\| + \beta_k \|s_k - s_{k-1}\| \le \|s_k - w\| + \beta_k \big(\|s_k - w\| + \|s_{k-1} - w\|\big).$$
It follows that
$$\|s_{k+1} - w\| \le (1 + \beta_k) \|s_k - w\| + \beta_k \|s_{k-1} - w\|.$$
From Lemma 3, we obtain
$$\|s_{k+1} - w\| \le N \prod_{i=1}^{k} (1 + 2 \beta_i),$$
where $N = \max\{\|s_1 - w\|, \|s_2 - w\|\}$. Since $\sum_{k=1}^{\infty} \beta_k < +\infty$, Lemma 3 shows that $\{\|s_k - w\|\}$ is bounded, and hence $\sum_{k=1}^{\infty} \beta_k \|s_k - s_{k-1}\| < +\infty$. Applying Lemma 2 to the inequality $\|s_{k+1} - w\| \le \|s_k - w\| + \beta_k \|s_k - s_{k-1}\|$, we conclude that $\lim_{k \to \infty} \|s_k - w\|$ exists. $\square$
Theorem 1.
Let $\{s_k\}$ be generated by Method 1. If $\sum_{k=1}^{\infty} \beta_k < \infty$ and $0 < \liminf_{k \to \infty} \sigma_k \le \limsup_{k \to \infty} \sigma_k < 4$, then $\{s_k\}$ converges weakly to a point in Ω.
Proof. 
From the definition of $r_k$, we have
$$\|r_k - w\|^2 = \|s_k + \beta_k (s_k - s_{k-1}) - w\|^2 = \|s_k - w\|^2 + 2 \beta_k \langle s_k - w, s_k - s_{k-1} \rangle + \beta_k^2 \|s_k - s_{k-1}\|^2 \le \|s_k - w\|^2 + 2 \beta_k \|s_k - w\| \|s_k - s_{k-1}\| + \beta_k^2 \|s_k - s_{k-1}\|^2.$$
Combining this with Lemma 7, we get
$$\|s_{k+1} - w\|^2 \le \|s_k - w\|^2 + 2 \beta_k \|s_k - w\| \|s_k - s_{k-1}\| + \beta_k^2 \|s_k - s_{k-1}\|^2 - 2 \rho_k g_k(r_k) - \left(1 - \frac{2 \mu \rho_k}{\rho_{k+1}}\right) \|t_k - r_k\|^2 - \frac{(4 - \sigma_k)\, \sigma_k\, g_k^2(t_k)}{\|\nabla g_k(t_k)\|^2}.$$
Since $\lim_{k \to \infty} \|s_k - w\|$ exists and $\sum_{k=1}^{\infty} \beta_k < \infty$, we get
$$\lim_{k \to \infty} \frac{g_k^2(t_k)}{\|\nabla g_k(t_k)\|^2} = 0.$$
It is easy to check that $\{\nabla g_k(t_k)\}$ is bounded. Therefore,
$$\lim_{k \to \infty} g_k(t_k) = \lim_{k \to \infty} \|(I - P_{Q_k}) A t_k\| = 0$$
and
$$\lim_{k \to \infty} g_k(r_k) = \lim_{k \to \infty} \|(I - P_{Q_k}) A r_k\| = 0.$$
From the same inequality, we also have
$$\lim_{k \to \infty} \|t_k - r_k\| = 0.$$
From the definitions of $r_k$ and $s_{k+1}$, it follows that
$$\lim_{k \to \infty} \|r_k - s_k\| = 0 \quad \text{and} \quad \lim_{k \to \infty} \|s_{k+1} - t_k\| = 0.$$
Consequently,
$$\|A r_k - P_{Q_k} A t_k\| \le \|A r_k - A t_k\| + \|A t_k - P_{Q_k} A t_k\| \le \|A\| \|r_k - t_k\| + \|A t_k - P_{Q_k} A t_k\| \to 0 \text{ as } k \to \infty.$$
Since $\{s_k\}$ is bounded, there is a subsequence $\{s_{k_n}\}$ of $\{s_k\}$ such that $s_{k_n} \rightharpoonup w^*$. From $t_{k_n} \in C_{k_n}$, we have
$$c(r_{k_n}) \le \langle \eta_{k_n}, r_{k_n} - t_{k_n} \rangle, \quad \eta_{k_n} \in \partial c(r_{k_n}).$$
Since $\{\eta_{k_n}\}$ is bounded and $\|r_{k_n} - t_{k_n}\| \to 0$, we obtain
$$c(r_{k_n}) \le \|\eta_{k_n}\| \|r_{k_n} - t_{k_n}\| \to 0 \text{ as } n \to \infty,$$
and the weak lower semicontinuity of c gives $c(w^*) \le 0$; that is, $w^* \in C$. From $P_{Q_{k_n}}(A t_{k_n}) \in Q_{k_n}$, we have
$$q(A r_{k_n}) \le \langle \varphi_{k_n}, A r_{k_n} - P_{Q_{k_n}} A t_{k_n} \rangle, \quad \varphi_{k_n} \in \partial q(A r_{k_n}).$$
Since $\{\varphi_{k_n}\}$ is bounded and $\|A r_{k_n} - P_{Q_{k_n}} A t_{k_n}\| \to 0$, we have
$$q(A r_{k_n}) \le \|\varphi_{k_n}\| \|A r_{k_n} - P_{Q_{k_n}} A t_{k_n}\| \to 0 \text{ as } n \to \infty.$$
Hence $q(A w^*) \le 0$; that is, $A w^* \in Q$. Therefore $w^* \in \Omega$, and Lemma 4 gives that $\{s_k\}$ converges weakly to a point in Ω. $\square$

4. Numerical Experiments

Next, we compare our method with the algorithm of Gibali et al. [22] and the algorithm of Shehu and Gibali [25] on a signal recovery problem, which is modeled as follows:
$$\min_{s \in \mathbb{R}^P} \frac{1}{2} \|t - A s\|^2 \quad \text{subject to} \quad \|s\|_1 \le e,$$
where $s \in \mathbb{R}^P$ is the vector to be recovered, with m nonzero components, $t \in \mathbb{R}^K$ is the observed data, $e > 0$ is a given constant, and A is a $K \times P$ matrix with $K < P$.
If we let $C = \{ s \in \mathbb{R}^P : \|s\|_1 \le e \}$ and $Q = \{ t \}$, then this problem reduces to the SFP.
The sparse vector $s^* \in \mathbb{R}^P$ is constructed from the uniform distribution on $[-2, 2]$ with m nonzero elements. The matrix A is generated from the normal distribution with mean zero and variance one. The observation t is corrupted by white Gaussian noise with SNR = 40. We let $e = m$ and take the initial points $s_0 = s_1 = (0, 0, \ldots, 0) \in \mathbb{R}^P$. We use the mean squared error (MSE)
$$\mathrm{MSE} = \frac{1}{P} \|s_k - s^*\|^2,$$
where $s_k$ is the estimated signal of $s^*$.
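The test-problem construction above can be sketched in numpy as follows. This is our own reading of the setup (the random seed and the reduced dimensions in the demo call are illustrative, not the paper's):

```python
import numpy as np

def make_instance(P=2048, K=1024, m=30, snr_db=40, rng=None):
    """Generate the sparse recovery test problem: an m-sparse s* with
    entries uniform on [-2, 2], a Gaussian K x P matrix A (mean 0,
    variance 1), and noisy observations t at the given SNR (dB)."""
    rng = np.random.default_rng(rng)
    s_star = np.zeros(P)
    support = rng.choice(P, size=m, replace=False)
    s_star[support] = rng.uniform(-2.0, 2.0, size=m)
    A = rng.standard_normal((K, P))
    clean = A @ s_star
    # Add white Gaussian noise scaled to the requested SNR.
    noise_power = np.mean(clean ** 2) / (10 ** (snr_db / 10))
    t = clean + np.sqrt(noise_power) * rng.standard_normal(K)
    return A, s_star, t

def mse(s_k, s_star):
    """MSE = (1/P) * ||s_k - s*||^2."""
    return np.sum((s_k - s_star) ** 2) / s_star.size

A, s_star, t = make_instance(P=256, K=128, m=10, rng=0)
```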
In the algorithm of Gibali et al. [22] and the algorithm of Shehu and Gibali [25], we choose $\gamma = 1$ and $\ell = \mu = 0.5$. In Method 1, $\beta_k$ is defined by the fast iterative shrinkage-thresholding algorithm (FISTA) rule [31]. We choose $\rho_1 = 0.1$, $\mu = 0.4$, $\sigma_k = 0.8$, and set
$$\beta_k = \begin{cases} \dfrac{a_{k-1} - 1}{a_k} & \text{if } 1 \le k \le 1000, \\[2mm] 0 & \text{otherwise}, \end{cases}$$
where $a_0 = 1$ and $a_k = \dfrac{1 + \sqrt{1 + 4 a_{k-1}^2}}{2}$. Numerical experiments were carried out in MATLAB R2020b on a MacBook Pro M1 with 8 GB of RAM. The numerical results are given in the following tables.
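The FISTA inertial weights can be generated with a short recursion; a sketch in Python (the function name is ours):

```python
import numpy as np

def fista_betas(n, cutoff=1000):
    """Inertial weights from the FISTA rule:
    a_0 = 1, a_k = (1 + sqrt(1 + 4 a_{k-1}^2)) / 2,
    beta_k = (a_{k-1} - 1) / a_k for 1 <= k <= cutoff, and 0 afterwards."""
    betas = []
    a_prev = 1.0
    for k in range(1, n + 1):
        a = (1.0 + np.sqrt(1.0 + 4.0 * a_prev ** 2)) / 2.0
        betas.append((a_prev - 1.0) / a if k <= cutoff else 0.0)
        a_prev = a
    return betas

betas = fista_betas(5)
# beta_1 = 0 (since a_0 = 1); later weights grow toward 1 but stay below it
```

The cutoff at k = 1000 truncates the weights to zero, which keeps the sequence summable as required by Theorem 1.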
Table 1 and Table 2 show that Method 1 required fewer iterations and less CPU time, and attained lower objective function and MSE values, than the algorithm of Shehu and Gibali [25] and the algorithm of Gibali et al. [22] for different m-sparse signals. This reveals that our algorithm converges faster than the other methods.
We next present the convergence behavior: MSE, number of iterations, objective function values, and CPU time.
In Figure 1, Figure 2, Figure 3 and Figure 4, we see that Method 1 converged to a solution faster than the algorithms in [22,25].
We next illustrate the original signal and the signals recovered by Method 1, the algorithm of Shehu and Gibali [25], and the algorithm of Gibali et al. [22] when MSE $< 10^{-3}$.
Figure 5 shows the signals recovered via Method 1 and the algorithms in [22,25].

5. Conclusions

In this work, we improved the projection method with a new inertial step and a new hybrid step size. We gave a weak convergence theorem under suitable conditions for solving the split feasibility problem. We applied the result to signal recovery and provided a comparison with other methods. The results showed that our method is more efficient than the other methods in terms of iterations and CPU time.

Author Contributions

S.S.: supervision and investigation; S.K.: writing, original draft; W.C.: software; P.C.: formal analysis and methodology. All authors have read and agreed to the published version of the manuscript.

Funding

The first author received funding support from the NSRF via the Program Management Unit for Human Resources and Institutional Development, Research and Innovation (grant number B05F640183) and Chiang Mai University. P. Cholamjiak was supported by National Research Council of Thailand (NRCT) under grant no. N41A640094. Furthermore, W. Cholamjiak was supported by Thailand Science Research and Innovation grant no. FF65-UoE002.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

We would like to thank the editor and reviewers for the valuable comments to improve the original manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Byrne, C.; Moudafi, A. Extensions of the CQ algorithm for the split feasibility and split equality problems. Doc. Trav. 2013, 18, 1485–1496.
  2. Censor, Y.; Gibali, A.; Lenzen, F.; Schnörr, C. The implicit convex feasibility problem and its application to adaptive image denoising. J. Comput. Math. 2016, 34, 610–625.
  3. O'Hara, J.G.; Pillay, P.; Xu, H.K. Iterative approaches to convex feasibility problems in Banach spaces. Nonlinear Anal. Theory Methods Appl. 2006, 64, 2022–2042.
  4. Tian, D.; Jiang, L. Two-step methods and relaxed two-step methods for solving the split equality problem. Comput. Appl. Math. 2021, 40, 83.
  5. Censor, Y.; Elfving, T. A multiprojection algorithm using Bregman projections in a product space. Numer. Algorithms 1994, 8, 221–239.
  6. Ansari, Q.H.; Rehan, A. An iterative method for split hierarchical monotone variational inclusions. Fixed Point Theory Appl. 2015, 2015, 121.
  7. Censor, Y.; Elfving, T.; Kopf, N.; Bortfeld, T. The multiple-sets split feasibility problem and its applications for inverse problems. Inverse Probl. 2005, 21, 2071.
  8. Censor, Y.; Bortfeld, T.; Martin, B.; Trofimov, A. A unified approach for inversion problems in intensity-modulated radiation therapy. Phys. Med. Biol. 2006, 51, 2353.
  9. Censor, Y.; Motova, A.; Segal, A. Perturbed projections and subgradient projections for the multiple-sets split feasibility problem. J. Math. Anal. Appl. 2007, 327, 1244–1256.
  10. Gu, R.; Dogandžić, A. Projected Nesterov's proximal-gradient algorithm for sparse signal recovery. IEEE Trans. Signal Process. 2017, 65, 3510–3525.
  11. Ceng, L.C.; Ansari, Q.H.; Yao, J.C. An extragradient method for solving split feasibility and fixed point problems. Comput. Math. Appl. 2012, 64, 633–642.
  12. Dang, Y.; Gao, Y. The strong convergence of a KM–CQ-like algorithm for a split feasibility problem. Inverse Probl. 2010, 27, 015007.
  13. Gibali, A.; Mai, D.T. A new relaxed CQ algorithm for solving split feasibility problems in Hilbert spaces and its applications. J. Ind. Manag. Optim. 2019, 15, 963.
  14. Kesornprom, S.; Pholasa, N.; Cholamjiak, P. On the convergence analysis of the gradient-CQ algorithms for the split feasibility problem. Numer. Algorithms 2020, 84, 997–1017.
  15. Kesornprom, S.; Cholamjiak, P. On inertial relaxation CQ algorithm for split feasibility problems. Commun. Math. Appl. 2019, 10, 245–255.
  16. Qu, B.; Xiu, N. A new halfspace-relaxation projection method for the split feasibility problem. Linear Algebra Appl. 2008, 428, 1218–1229.
  17. Suparatulatorn, R.; Cholamjiak, W.; Gibali, A.; Mouktonglang, T. A parallel Tseng's splitting method for solving common variational inclusion applied to signal recovery problems. Adv. Differ. Equ. 2021, 2021, 492.
  18. Yambangwai, D.; Khan, S.A.; Dutta, H.; Cholamjiak, W. Image restoration by advanced parallel inertial forward-backward splitting methods. Soft Comput. 2021, 25, 6029–6042.
  19. Byrne, C. Iterative oblique projection onto convex sets and the split feasibility problem. Inverse Probl. 2002, 18, 441.
  20. Yang, Q. The relaxed CQ algorithm solving the split feasibility problem. Inverse Probl. 2004, 20, 1261.
  21. Zhao, J.; Zhang, Y.; Yang, Q. Modified projection methods for the split feasibility problem and the multiple-sets split feasibility problem. Appl. Math. Comput. 2012, 219, 1644–1653.
  22. Gibali, A.; Liu, L.W.; Tang, Y.C. Note on the modified relaxation CQ algorithm for the split feasibility problem. Optim. Lett. 2018, 12, 817–830.
  23. López, G.; Martín-Márquez, V.; Wang, F.; Xu, H.K. Solving the split feasibility problem without prior knowledge of matrix norms. Inverse Probl. 2012, 28, 085004.
  24. Nesterov, Y.E. A method for solving the convex programming problem with convergence rate O(1/k^2). Dokl. Akad. Nauk SSSR 1983, 269, 543–547.
  25. Shehu, Y.; Gibali, A. New inertial relaxed method for solving split feasibilities. Optim. Lett. 2021, 15, 2109–2126.
  26. Bauschke, H.H.; Combettes, P.L. Convex Analysis and Monotone Operator Theory in Hilbert Spaces; Springer: New York, NY, USA, 2011; Volume 408.
  27. Osilike, M.O.; Aniagbosor, S.C.; Akuchu, B.G. Fixed points of asymptotically demicontractive mappings in arbitrary Banach spaces. Panam. Math. J. 2002, 12, 77–88.
  28. Hanjing, A.; Suantai, S. A fast image restoration algorithm based on a fixed point and optimization method. Mathematics 2020, 8, 378.
  29. Bauschke, H.H.; Combettes, P.L. A weak-to-strong convergence principle for Fejér-monotone methods in Hilbert spaces. Math. Oper. Res. 2001, 26, 248–264.
  30. Alvarez, F.; Attouch, H. An inertial proximal method for maximal monotone operators via discretization of a nonlinear oscillator with damping. Set-Valued Anal. 2001, 9, 3–11.
  31. Beck, A.; Teboulle, M. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2009, 2, 183–202.
Figure 1. Graph of MSE and number of iterations for P = 2048, K = 1024, m = 30 when MSE < 10^-5.
Figure 2. Graph of objective function values and number of iterations for P = 2048, K = 1024, m = 30 when MSE < 10^-5.
Figure 3. Graph of MSE values and CPU time for P = 2048, K = 1024, m = 30 when MSE < 10^-5.
Figure 4. Graph of objective function values and CPU time for P = 2048, K = 1024, m = 30 when MSE < 10^-5.
Figure 5. Original signal and recovered signal by Method 1, the algorithm of Shehu and Gibali, and the algorithm of Gibali et al., respectively, when P = 2048, K = 1024, m = 30 and MSE < 10^-3. (a) Recovered signal by Method 1. (b) Recovered signal by the algorithm of Shehu and Gibali. (c) Recovered signal by the algorithm of Gibali et al.
Table 1. Comparison of the number of iterations (Iter) for P = 2048, K = 1024 and different m-sparse signals, with stopping criteria MSE < 10^-3 and MSE < 10^-5.

m-Sparse  Methods                              Iter (MSE < 10^-3)  Iter (MSE < 10^-5)
m = 10    Method 1                                     6                  11
          Algorithm of Shehu and Gibali [25]          15                  73
          Algorithm of Gibali et al. [22]             16                  77
m = 20    Method 1                                     8                  17
          Algorithm of Shehu and Gibali [25]          29                  89
          Algorithm of Gibali et al. [22]             30                  94
m = 30    Method 1                                     9                  19
          Algorithm of Shehu and Gibali [25]          41                 141
          Algorithm of Gibali et al. [22]             43                 148
Table 2. MSE values, objective function values, and CPU time in seconds for each method at selected iterations (P = 2048, K = 1024, m = 30, stopping at MSE < 10^-5).

Methods                              Iter   MSE            CPU      Obj            CPU
Method 1                               1    0.0185         0.0002   1.8832 x 10^4  0.0007
                                       2    0.0130         0.0052   3.8504 x 10^3  0.0050
                                       3    0.0097         0.0094   1.8248 x 10^3  0.0091
                                       4    0.0070         0.0126   1.0359 x 10^3  0.0125
                                      19    6.4529 x 10^-6 0.0524   0.0746         0.0522
Algorithm of Shehu and Gibali [25]     1    0.0185         0.0006   1.8832 x 10^4  0.0007
                                       2    0.0159         0.0372   1.2047 x 10^4  0.0364
                                       3    0.0144         0.0676   7.4679 x 10^3  0.0665
                                       4    0.0128         0.0864   5.5536 x 10^3  0.0862
                                     141    9.8423 x 10^-6 2.5043   0.6112         2.5041
Algorithm of Gibali et al. [22]        1    0.0185         0.0002   1.8832 x 10^4  0.0011
                                       2    0.0159         0.0312   1.2047 x 10^4  0.0303
                                       3    0.0144         0.0593   7.4679 x 10^3  0.0582
                                       4    0.0130         0.0766   5.7134 x 10^3  0.0765
                                     148    9.9682 x 10^-6 2.3068   0.6227         2.3067
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

MDPI and ACS Style

Suantai, S.; Kesornprom, S.; Cholamjiak, W.; Cholamjiak, P. Modified Projection Method with Inertial Technique and Hybrid Stepsize for the Split Feasibility Problem. Mathematics 2022, 10, 933. https://doi.org/10.3390/math10060933

AMA Style

Suantai S, Kesornprom S, Cholamjiak W, Cholamjiak P. Modified Projection Method with Inertial Technique and Hybrid Stepsize for the Split Feasibility Problem. Mathematics. 2022; 10(6):933. https://doi.org/10.3390/math10060933

Chicago/Turabian Style

Suantai, Suthep, Suparat Kesornprom, Watcharaporn Cholamjiak, and Prasit Cholamjiak. 2022. "Modified Projection Method with Inertial Technique and Hybrid Stepsize for the Split Feasibility Problem" Mathematics 10, no. 6: 933. https://doi.org/10.3390/math10060933

Note that from the first issue of 2016, this journal uses article numbers instead of page numbers. See further details here.

Article Metrics

Back to TopTop