Article

A Modified Inertial Parallel Viscosity-Type Algorithm for a Finite Family of Nonexpansive Mappings and Its Applications

by Suthep Suantai 1,2, Kunrada Kankam 3, Damrongsak Yambangwai 3,* and Watcharaporn Cholamjiak 3

1 Research Group in Mathematics and Applied Mathematics, Faculty of Science, Chiang Mai University, Chiang Mai 50200, Thailand
2 Data Science Research Center, Department of Mathematics, Faculty of Science, Chiang Mai University, Chiang Mai 50200, Thailand
3 School of Science, University of Phayao, Phayao 56000, Thailand
* Author to whom correspondence should be addressed.
Mathematics 2022, 10(23), 4422; https://doi.org/10.3390/math10234422
Submission received: 21 September 2022 / Revised: 10 November 2022 / Accepted: 15 November 2022 / Published: 23 November 2022

Abstract: In this work, we prove the strong convergence of the sequence generated by a modified inertial parallel viscosity-type algorithm for finding a common fixed point of a finite family of nonexpansive mappings under mild conditions in real Hilbert spaces. Moreover, we present numerical experiments on linear systems and differential problems using the Gauss–Seidel, weighted Jacobi, and successive over-relaxation methods. Furthermore, we apply our algorithm to LASSO problems in signal recovery to demonstrate its efficiency and implementation. Numerical comparisons show that the proposed algorithm is more efficient than several existing algorithms.

1. Introduction

Let $K$ be a nonempty, closed, and convex subset of a real Hilbert space $H$, and let $T : K \to K$ be a mapping with fixed point set $F(T)$, i.e., $F(T) = \{x \in K : Tx = x\}$. A mapping $T$ is said to be:
(1) contractive if there exists a constant $k \in (0,1)$ such that $\|Tx - Ty\| \le k\|x - y\|$ for all $x, y \in K$;
(2) nonexpansive if $\|Tx - Ty\| \le \|x - y\|$ for all $x, y \in K$.
Many problems in optimization, such as minimization problems, variational inequalities, and variational inclusions, can be solved by solving the fixed point problem of a nonexpansive mapping [1,2]. Thus, nonexpansive mappings have been studied extensively, including the construction of algorithms for finding their fixed points [3,4]. Constructing an algorithm that achieves strong convergence is a critical issue on which many mathematicians focus. One such method is the viscosity approximation method for selecting a particular fixed point of a given nonexpansive mapping, proposed by Moudafi [5] in 2000. Later, a class of viscosity-type methods was introduced by many authors; see [6,7]. One of the modified viscosity methods, introduced by Aoyama and Kimura [8], is the viscosity approximation method (VAM) for a countable family of nonexpansive mappings; it was applied to a variational problem and a zero point problem. When the contraction mappings are replaced by fixed vectors, the VAM reduces to a Halpern-type iteration method (HTI). To improve the convergence of such methods, one of the most commonly used techniques is the inertial method. In 1964, Polyak [9] introduced an algorithm that speeds up gradient descent, and its modification was made immensely popular by Nesterov's accelerated gradient algorithm, proposed in 1983 [10]. This well-known technique, which improves the convergence rate, is known as inertial iteration. Many researchers have developed various acceleration techniques (see, e.g., [11,12]) to obtain faster convergence.
Many real-world problems can be modeled as common fixed point problems. Therefore, the study of methods for solving such problems is important and has received the attention of many mathematicians. In 2015, Anh and Hieu [13] introduced a parallel monotone hybrid method (PMHM), and Khatibzadeh and Ranjbar [14] introduced Halpern-type iterations to approximate a finite family of quasi-nonexpansive mappings in Banach spaces. Recently, there has been research involving parallel methods for solving many problems. It has been shown that such methods can be applied to real-world problems such as image deblurring and related applications [15,16].
Inspired by these works, we present a viscosity modification combined with the parallel monotone algorithm for a finite family of nonexpansive mappings. We establish a strong convergence theorem for the proposed algorithm and provide numerical experiments for solving linear systems, differential problems, and signal recovery problems. The efficiency of the proposed algorithm is demonstrated by comparison with existing algorithms.

2. Preliminaries

In this section, we give some definitions and lemmas that play an essential role in our analysis. The strong and weak convergence of $\{u_n\}$ to $x$ will be denoted by $u_n \to x$ and $u_n \rightharpoonup x$, respectively. The projection of $s$ onto $A$ is defined by
$$P_A(s) = \operatorname*{argmin}_{t \in A} \|s - t\|,$$
where $A$ is a nonempty, closed, and convex set.
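To make the projection concrete, here is a minimal numerical illustration (our own example, not from the paper): the projection onto a closed box in $\mathbb{R}^n$ has a componentwise closed form, and one can check the nonexpansiveness inequality $\|P_A(s) - P_A(r)\| \le \|s - r\|$ directly.

```python
import numpy as np

def project_box(s, lo, hi):
    # Metric projection P_A(s) onto the closed box A = [lo, hi]^n,
    # i.e., the minimizer of ||s - t|| over t in A (componentwise clipping).
    return np.clip(s, lo, hi)

rng = np.random.default_rng(0)
s, r = rng.normal(size=5), rng.normal(size=5)
# Nonexpansiveness check: ||P_A(s) - P_A(r)|| <= ||s - r||
assert (np.linalg.norm(project_box(s, -1, 1) - project_box(r, -1, 1))
        <= np.linalg.norm(s - r))
```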
Lemma 1 ([17]). Let $\{s_n\}$ be a sequence of nonnegative real numbers such that there exists a subsequence $\{s_{n_i}\}$ of $\{s_n\}$ satisfying $s_{n_i} < s_{n_i+1}$ for all $i \in \mathbb{N}$. Then, there exists a nondecreasing sequence $\{m_k\} \subset \mathbb{N}$ such that $\lim_{k\to\infty} m_k = \infty$ and the following properties are satisfied for all (sufficiently large) numbers $k \in \mathbb{N}$:
$$s_{m_k} \le s_{m_k+1} \quad\text{and}\quad s_k \le s_{m_k+1}.$$
Lemma 2 ([2]). Let $\{s_n\}$ be a sequence of nonnegative real numbers such that
$$s_{n+1} \le (1 - b_n)s_n + d_n,$$
where $\{b_n\} \subset (0,1)$ with $\sum_{n=0}^{\infty} b_n = \infty$ and $\{d_n\}$ satisfies $\limsup_{n\to\infty} d_n/b_n \le 0$ or $\sum_{n=0}^{\infty} |d_n| < \infty$. Then, $\lim_{n\to\infty} s_n = 0$.
Lemma 3 ([18]). Assume $\{s_n\}$ is a sequence of nonnegative real numbers such that
$$s_{n+1} \le (1 - \delta_n)s_n + \delta_n \tau_n, \quad n \ge 1,$$
and
$$s_{n+1} \le s_n - \eta_n + \rho_n, \quad n \ge 1,$$
where $\{\delta_n\} \subset (0,1)$, $\{\eta_n\}$ is a sequence of nonnegative real numbers, and $\{\tau_n\}$ and $\{\rho_n\}$ are real sequences such that:
  • $\sum_{n=1}^{\infty} \delta_n = \infty$;
  • $\lim_{n\to\infty} \rho_n = 0$;
  • $\lim_{k\to\infty} \eta_{n_k} = 0$ implies $\limsup_{k\to\infty} \tau_{n_k} \le 0$ for any subsequence $\{n_k\}$ of $\{n\}$.
Then, $\lim_{n\to\infty} s_n = 0$.
Proposition 1 ([19]). Let $H$ be a real Hilbert space. Let $m \in \mathbb{N}$ be fixed. Let $\{x_i\}_{i=1}^m \subset H$ and $t_i \ge 0$ for all $i = 1, 2, \ldots, m$ with $\sum_{i=1}^m t_i \le 1$. Then, we have
$$\Big\|\sum_{i=1}^m t_i x_i\Big\|^2 \le \sum_{i=1}^m t_i \|x_i\|^2 - \sum_{1 \le i < j \le m} t_i t_j \|x_i - x_j\|^2.$$

3. Main Results

In this section, we introduce a viscosity modification combined with the inertial parallel monotone algorithm for a finite family of nonexpansive mappings. Before proving the strong convergence theorem, we give the following definition.
Definition 1. Let $C$ be a nonempty subset of a real Hilbert space $H$. Let $T_i : C \to C$ be nonexpansive mappings for all $i = 1, 2, \ldots, N$. Then, $\{T_i\}_{i=1}^N$ is said to satisfy Condition (A) if, for each bounded sequence $\{z_n\} \subset C$, there exists a sequence $\{i_n\}$ with $i_n \in \{1, 2, \ldots, N\}$ for all $n \ge 1$ such that $\lim_{n\to\infty}\|z_n - T_{i_n} z_n\| = 0$ implies $\lim_{n\to\infty}\|z_n - T_i z_n\| = 0$ for all $i = 1, 2, \ldots, N$.
As an example of a family $\{T_i\}_{i=1}^N$ satisfying Condition (A), we can set $T_i = J_{r_i}^{B_i}(I - r_i A_i)$, where $J_{r_i}^{B_i} = (I + r_i B_i)^{-1}$, $A_i : H \to H$ is an $\alpha_i$-inverse strongly monotone operator, $B_i : H \to 2^H$ is a maximal monotone operator, and $r_i$ satisfies the assumptions in Theorem 3.1 of [1]. Assume that the following conditions hold:
C1. $\lim_{n\to\infty} \dfrac{\theta_n \|u_n - u_{n-1}\|}{\alpha_n} = 0$;
C2. $\lim_{n\to\infty} \alpha_n = 0$ and $\sum_{n=1}^{\infty} \alpha_n = \infty$;
C3. $\liminf_{n\to\infty} \beta_n\gamma_n > 0$ and $\liminf_{n\to\infty} \beta_n(1 - \alpha_n - \beta_n - \gamma_n) > 0$.
Theorem 1. Let $\{u_n\}$ be defined by Algorithm 1, and let $\{T_i\}_{i=1}^N$ satisfy Condition (A) with $F := \bigcap_{i=1}^N F(T_i) \neq \emptyset$. Then, the sequence $\{u_n\}$ converges strongly to $\bar z = P_F f(\bar z)$.
Algorithm 1: Let $C$ be a nonempty, closed, and convex subset of a real Hilbert space $H$, let $f : H \to H$ be a contraction mapping, and let $T_i : C \to C$ be nonexpansive mappings for all $i = 1, 2, \ldots, N$. Suppose that $\{\theta_n\} \subset [0, \theta]$ with $\theta \in [0, 1)$ and that $\{\alpha_n\}$, $\{\beta_n\}$, and $\{\gamma_n\}$ are sequences in $(0,1)$. Let $\{u_n\}$ be the sequence generated by $u_0, u_1 \in H$ and, for $n \ge 1$:
Step 1. Calculate the inertial step:
$$v_n = u_n + \theta_n(u_n - u_{n-1}).$$
Step 2. Compute:
$$s_{n,i} = \alpha_n f(v_n) + \beta_n v_n + \gamma_n T_i v_n + (1 - \alpha_n - \beta_n - \gamma_n) T_i(T_i v_n), \quad i = 1, 2, \ldots, N.$$
Step 3. Construct $u_{n+1}$ by
$$u_{n+1} = \operatorname*{argmax}\{\|s_{n,i} - v_n\| : i = 1, 2, \ldots, N\}.$$
Step 4. Set $n := n + 1$, and go to Step 1.
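To make the iteration concrete, the following is a minimal Python/NumPy sketch of Algorithm 1 (the paper's experiments use Matlab; the function name and the callables `f`, `T_list`, and the parameter schedules are our assumptions, to be supplied by the caller):

```python
import numpy as np

def inertial_parallel_viscosity(f, T_list, u0, u1, alpha, beta, gamma, theta,
                                max_iter=1000, tol=1e-7):
    # Sketch of Algorithm 1: inertial extrapolation, one viscosity-type
    # candidate per mapping T_i (computable in parallel), and the candidate
    # farthest from v_n is kept as the next iterate.
    u_prev, u = u0, u1
    for n in range(1, max_iter + 1):
        a, b, g = alpha(n), beta(n), gamma(n)
        v = u + theta(n, u, u_prev) * (u - u_prev)              # Step 1
        s = [a * f(v) + b * v + g * T(v) + (1 - a - b - g) * T(T(v))
             for T in T_list]                                   # Step 2
        u_next = max(s, key=lambda si: np.linalg.norm(si - v))  # Step 3
        if np.linalg.norm(u_next - u) < tol:
            return u_next
        u_prev, u = u, u_next                                   # Step 4
    return u
```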
Proof. 
For each $n \in \mathbb{N}$ and $i \in \{1, 2, \ldots, N\}$, we let $\{z_{n,i}\}$ be defined by $z_{1,i} \in H$ and
$$z_{n+1,i} = \alpha_n f(z_{n,i}) + \beta_n z_{n,i} + \gamma_n T_i z_{n,i} + (1 - \alpha_n - \beta_n - \gamma_n) T_i(T_i z_{n,i}). \tag{2}$$
Let $z \in F$. Then, for each $i \in \{1, 2, \ldots, N\}$, we have
$$\begin{aligned}
\|z_{n+1,i} - z\| &\le \alpha_n\|f(z_{n,i}) - z\| + \beta_n\|z_{n,i} - z\| + \gamma_n\|T_i z_{n,i} - z\| + (1 - \alpha_n - \beta_n - \gamma_n)\|T_i(T_i z_{n,i}) - z\| \\
&\le \alpha_n\|f(z_{n,i}) - f(z)\| + \alpha_n\|f(z) - z\| + (1 - \alpha_n)\|z_{n,i} - z\| \\
&\le (1 - \alpha_n(1 - k))\|z_{n,i} - z\| + \alpha_n\|f(z) - z\| \\
&= (1 - \alpha_n(1 - k))\|z_{n,i} - z\| + \alpha_n(1 - k)\frac{\|f(z) - z\|}{1 - k} \\
&\le \max\left\{\|z_{n,i} - z\|, \frac{\|f(z) - z\|}{1 - k}\right\} \le \cdots \le \max\left\{\|z_{1,i} - z\|, \frac{\|f(z) - z\|}{1 - k}\right\}.
\end{aligned}$$
This shows that $\{z_{n,i}\}$ is bounded for all $i = 1, 2, \ldots, N$. From the definition of $\{u_n\}$, there exists $i_n \in \{1, 2, \ldots, N\}$ such that
$$u_{n+1} = \alpha_n f(v_n) + \beta_n v_n + \gamma_n T_{i_n} v_n + (1 - \alpha_n - \beta_n - \gamma_n) T_{i_n}(T_{i_n} v_n).$$
Therefore, we obtain
$$\begin{aligned}
\|u_{n+1} - z_{n+1,i_n}\| &\le \alpha_n\|f(v_n) - f(z_{n,i_n})\| + \beta_n\|v_n - z_{n,i_n}\| + \gamma_n\|T_{i_n} v_n - T_{i_n} z_{n,i_n}\| \\
&\quad + (1 - \alpha_n - \beta_n - \gamma_n)\|T_{i_n}(T_{i_n} v_n) - T_{i_n}(T_{i_n} z_{n,i_n})\| \\
&\le (1 - \alpha_n(1 - k))\|v_n - z_{n,i_n}\| \\
&= (1 - \alpha_n(1 - k))\|u_n + \theta_n(u_n - u_{n-1}) - z_{n,i_n}\| \\
&\le (1 - \alpha_n(1 - k))\|u_n - z_{n,i_n}\| + \theta_n\|u_n - u_{n-1}\|.
\end{aligned}$$
By our assumptions and Lemma 2, we conclude that
$$\lim_{n\to\infty}\|u_{n+1} - z_{n+1,i_n}\| = 0. \tag{3}$$
By Proposition 1 and (2), we obtain
$$\begin{aligned}
\|z_{n+1,i_n} - z\|^2 &= \|\alpha_n(f(z_{n,i_n}) - f(z)) + \alpha_n(f(z) - z) + \beta_n(z_{n,i_n} - z) + \gamma_n(T_{i_n} z_{n,i_n} - z) \\
&\qquad + (1 - \alpha_n - \beta_n - \gamma_n)(T_{i_n} T_{i_n} z_{n,i_n} - z)\|^2 \\
&\le \|\alpha_n(f(z_{n,i_n}) - f(z)) + \beta_n(z_{n,i_n} - z) + \gamma_n(T_{i_n} z_{n,i_n} - z) \\
&\qquad + (1 - \alpha_n - \beta_n - \gamma_n)(T_{i_n} T_{i_n} z_{n,i_n} - z)\|^2 + \alpha_n\langle f(z) - z, z_{n+1,i_n} - z\rangle \\
&\le \alpha_n k\|z_{n,i_n} - z\|^2 + \beta_n\|z_{n,i_n} - z\|^2 + (1 - \alpha_n - \beta_n)\|z_{n,i_n} - z\|^2 \\
&\qquad - \beta_n\gamma_n\|T_{i_n} z_{n,i_n} - z_{n,i_n}\|^2 - \beta_n(1 - \alpha_n - \beta_n - \gamma_n)\|T_{i_n} T_{i_n} z_{n,i_n} - z_{n,i_n}\|^2 \\
&\qquad + \alpha_n\langle f(z) - z, z_{n+1,i_n} - z\rangle \\
&\le (1 - \alpha_n(1 - k))\|z_{n,i_n} - z\|^2 - \beta_n\gamma_n\|T_{i_n} z_{n,i_n} - z_{n,i_n}\|^2 \\
&\qquad - \beta_n(1 - \alpha_n - \beta_n - \gamma_n)\|T_{i_n} T_{i_n} z_{n,i_n} - z_{n,i_n}\|^2 + \alpha_n\langle f(z) - z, z_{n+1,i_n} - z\rangle.
\end{aligned} \tag{4}$$
Setting $s_n = \|z_{n,i_n} - z\|^2$ implies that
$$\begin{aligned}
s_{n+1} &\le (1 - \alpha_n(1 - k))s_n - \beta_n\gamma_n\|T_{i_n} z_{n,i_n} - z_{n,i_n}\|^2 \\
&\quad - \beta_n(1 - \alpha_n - \beta_n - \gamma_n)\|T_{i_n} T_{i_n} z_{n,i_n} - z_{n,i_n}\|^2 + \alpha_n\langle f(z) - z, z_{n+1,i_n} - z\rangle.
\end{aligned} \tag{5}$$
Assume that $u_n \rightharpoonup \bar z$; we show that $\bar z \in F$. We consider two possible cases for the sequence $\{s_n\}$:
Case 1. Suppose that there exists $n_0 \in \mathbb{N}$ such that $s_{n+1} \le s_n$ for all $n \ge n_0$. This implies that $\lim_{n\to\infty} s_n$ exists. From (5), we have
$$\beta_n\gamma_n\|T_{i_n} z_{n,i_n} - z_{n,i_n}\|^2 \le (1 - \alpha_n(1 - k))s_n + \alpha_n\langle f(z) - z, z_{n+1,i_n} - z\rangle - s_{n+1}. \tag{6}$$
From (6), the existence of $\lim_{n\to\infty} s_n$, Condition C2, and the boundedness of the sequences involved, we obtain
$$\lim_{n\to\infty} \beta_n\gamma_n\|T_{i_n} z_{n,i_n} - z_{n,i_n}\|^2 = 0. \tag{7}$$
From Condition C3 and (7), this implies that
$$\lim_{n\to\infty}\|T_{i_n} z_{n,i_n} - z_{n,i_n}\| = 0.$$
As $u_n \rightharpoonup \bar z$ and $\lim_{n\to\infty}\|u_{n+1} - z_{n+1,i_n}\| = 0$ by (3), it follows that $z_{n,i_n} \rightharpoonup \bar z$. Since $\{T_i\}_{i=1}^N$ satisfies Condition (A), we have $\lim_{n\to\infty}\|T_i z_{n,i_n} - z_{n,i_n}\| = 0$ for all $i = 1, 2, \ldots, N$. By the demiclosedness property of nonexpansive mappings, we obtain $\bar z \in F$.
Case 2. Suppose that there exists a subsequence $\{s_{n_j}\}$ of $\{s_n\}$ such that $s_{n_j} < s_{n_j+1}$ for all $j \in \mathbb{N}$. In this case, it follows from Lemma 1 that there is a nondecreasing sequence $\{m_k\} \subset \mathbb{N}$ such that $\lim_{k\to\infty} m_k = \infty$ and the following inequalities hold for all $k \in \mathbb{N}$:
$$s_{m_k} \le s_{m_k+1} \quad\text{and}\quad s_k \le s_{m_k+1}.$$
Similar to Case 1, we obtain $\lim_{k\to\infty}\|T_{i_{m_k}} z_{m_k,i_{m_k}} - z_{m_k,i_{m_k}}\| = 0$. Since $u_n \rightharpoonup \bar z$, we also have $u_{m_k} \rightharpoonup \bar z$. Therefore, $\bar z \in F$. We next show that $\bar z = P_F f(\bar z)$. From (4), we see that
$$\|z_{n+1,i_n} - z\|^2 \le (1 - \alpha_n(1 - k))\|z_{n,i_n} - z\|^2 + \alpha_n\langle f(z) - z, z_{n+1,i_n} - z\rangle.$$
Since $\{z_{n,i_n}\}$ is bounded, by Lemma 3 it suffices to show that, for every subsequence $\{\|z_{n_k,i_{n_k}} - z\|\}$ of $\{\|z_{n,i_n} - z\|\}$ satisfying
$$\liminf_{k\to\infty}\big(\|z_{n_k+1,i_{n_k}} - z\| - \|z_{n_k,i_{n_k}} - z\|\big) \ge 0,$$
we have $\limsup_{k\to\infty}\langle f(z) - z, z_{n_k+1,i_{n_k}} - z\rangle \le 0$.
For this purpose, assume that $\{\|z_{n_k,i_{n_k}} - z\|\}$ is a subsequence of $\{\|z_{n,i_n} - z\|\}$ such that $\liminf_{k\to\infty}(\|z_{n_k+1,i_{n_k}} - z\| - \|z_{n_k,i_{n_k}} - z\|) \ge 0$. This implies that
$$\liminf_{k\to\infty}\big(\|z_{n_k+1,i_{n_k}} - z\|^2 - \|z_{n_k,i_{n_k}} - z\|^2\big) = \liminf_{k\to\infty}\Big(\big(\|z_{n_k+1,i_{n_k}} - z\| - \|z_{n_k,i_{n_k}} - z\|\big)\big(\|z_{n_k+1,i_{n_k}} - z\| + \|z_{n_k,i_{n_k}} - z\|\big)\Big) \ge 0.$$
From the definition of $z_{n,i}$, we obtain
$$\begin{aligned}
\|z_{n_k+1,i_{n_k}} - z_{n_k,i_{n_k}}\| &= \|\alpha_{n_k}(f(z_{n_k,i_{n_k}}) - z_{n_k,i_{n_k}}) + \gamma_{n_k}(T_{i_{n_k}} z_{n_k,i_{n_k}} - z_{n_k,i_{n_k}}) \\
&\qquad + (1 - \alpha_{n_k} - \beta_{n_k} - \gamma_{n_k})(T_{i_{n_k}} T_{i_{n_k}} z_{n_k,i_{n_k}} - z_{n_k,i_{n_k}})\| \\
&\le \alpha_{n_k}\|f(z_{n_k,i_{n_k}}) - z_{n_k,i_{n_k}}\| + \gamma_{n_k}\|T_{i_{n_k}} z_{n_k,i_{n_k}} - z_{n_k,i_{n_k}}\| \\
&\qquad + (1 - \alpha_{n_k} - \beta_{n_k} - \gamma_{n_k})\|T_{i_{n_k}} T_{i_{n_k}} z_{n_k,i_{n_k}} - z_{n_k,i_{n_k}}\| \\
&\le \alpha_{n_k}\big(k\|z_{n_k,i_{n_k}} - z\| + \|f(z) - z_{n_k,i_{n_k}}\|\big) + \gamma_{n_k}\|T_{i_{n_k}} z_{n_k,i_{n_k}} - z_{n_k,i_{n_k}}\| \\
&\qquad + (1 - \alpha_{n_k} - \beta_{n_k} - \gamma_{n_k})\|T_{i_{n_k}} T_{i_{n_k}} z_{n_k,i_{n_k}} - z_{n_k,i_{n_k}}\|.
\end{aligned}$$
By Cases 1 and 2, there exists a subsequence of $\{z_{n_k,i_{n_k}}\}$ such that
$$\lim_{k\to\infty}\|T_{i_{n_k}} z_{n_k,i_{n_k}} - z_{n_k,i_{n_k}}\| = 0 \tag{8}$$
and
$$\lim_{k\to\infty}\|T_{i_{n_k}} T_{i_{n_k}} z_{n_k,i_{n_k}} - z_{n_k,i_{n_k}}\| = 0. \tag{9}$$
By the boundedness of $\{z_{n,i}\}$, (8), and (9), we have
$$\lim_{k\to\infty}\|z_{n_k+1,i_{n_k}} - z_{n_k,i_{n_k}}\| = 0. \tag{10}$$
Since $\{z_{n,i}\}$ is bounded, there exists a subsequence $\{z_{n_{k_j}},{}_{i_{n_{k_j}}}\}$ of $\{z_{n_k,i_{n_k}}\}$ converging weakly to some $\bar z \in H$. Without loss of generality, we replace $\{z_{n_{k_j}}\}$ by $\{z_{n_k}\}$, so that
$$\limsup_{n\to\infty}\langle f(z) - z, z_{n,i_n} - z\rangle = \limsup_{k\to\infty}\langle f(z) - z, z_{n_k,i_{n_k}} - z\rangle = \langle f(z) - z, \bar z - z\rangle. \tag{11}$$
Since $\bar z \in F$ and $z = P_F f(z)$, the characterization of the metric projection gives $\langle f(z) - z, \bar z - z\rangle \le 0$. From (10) and (11), we obtain
$$\limsup_{k\to\infty}\langle f(z) - z, z_{n_k+1,i_{n_k}} - z\rangle \le \limsup_{k\to\infty}\langle f(z) - z, z_{n_k,i_{n_k}} - z\rangle + \limsup_{k\to\infty}\langle f(z) - z, z_{n_k+1,i_{n_k}} - z_{n_k,i_{n_k}}\rangle = \langle f(z) - z, \bar z - z\rangle \le 0.$$
Therefore, $\lim_{n\to\infty}\|z_{n,i_n} - \bar z\| = 0$ by Lemma 3 and (5). Since $\lim_{n\to\infty}\|u_{n+1} - z_{n+1,i_n}\| = 0$, this implies that $\lim_{n\to\infty}\|u_n - \bar z\| = 0$, which completes the proof. □

4. Numerical Experiments

In this section, we present numerical experiments applying our algorithm to linear systems and differential problems. All computational experiments were written in Matlab 2022b and run on an Intel(R) Core(TM) i7-9700 CPU @ 3.00 GHz with 8 cores and 8 logical processors.

4.1. Linear System Problems

We now consider the linear system
$$A u = b, \tag{12}$$
where $A : \mathbb{R}^l \to \mathbb{R}^l$ is a linear and positive operator and $u, b \in \mathbb{R}^l$. Then, the linear system (12) has a unique solution. There are many different ways of rearranging Equation (12) into the fixed point form $Tu = u$. For example, the well-known weighted Jacobi (WJ) and successive over-relaxation (SOR) methods [12,20,21] recast the linear system (12) as the fixed point equations $T_{\mathrm{WJ}} u = u$ and $T_{\mathrm{SOR}} u = u$ (see Table 1).
In Table 1, $\omega$ is the weight parameter, $D$ is the diagonal part of the matrix $A$, and $L$ is the lower triangular part of the matrix $D - A$.
Setting $T(u) = S(u) + c$, where $u, c \in \mathbb{R}^l$, we can see that
$$\|T(u) - T(t)\| = \|S(u) - S(t)\| \le \|S\|\,\|u - t\| \le \|u - t\|, \quad u, t \in \mathbb{R}^l, \tag{13}$$
where $S$ is an operator with $\|S\| < 1$. We write the operators $T_{\mathrm{WJ}}$ and $T_{\mathrm{SOR}}$ in the form $T_{\mathrm{WJ}} u = S_{\mathrm{WJ}} u + c_{\mathrm{WJ}}$, where
$$S_{\mathrm{WJ}} = I - \omega D^{-1} A, \qquad c_{\mathrm{WJ}} = \omega D^{-1} b,$$
and $T_{\mathrm{SOR}} u = S_{\mathrm{SOR}} u + c_{\mathrm{SOR}}$, where
$$S_{\mathrm{SOR}} = I - \omega (D - \omega L)^{-1} A, \qquad c_{\mathrm{SOR}} = \omega (D - \omega L)^{-1} b.$$
It follows from (13) that $T_{\mathrm{WJ}}$ and $T_{\mathrm{SOR}}$ are nonexpansive mappings provided their weight parameters are appropriately adjusted: the weight parameter $\omega$ implemented for the operator $S$ of the WJ and SOR methods must yield a norm less than one. Moreover, the optimal weight parameter $\omega_o$ giving the smallest norm for each type of operator $S$ is indicated in Table 2. The parameters $\lambda_{\max}(D^{-1}A)$ and $\lambda_{\min}(D^{-1}A)$ are the maximum and minimum eigenvalues of the matrix $D^{-1}A$, respectively, and $\rho$ is the spectral radius of the iteration matrix of the Jacobi method ($S_{\mathrm{WJ}}$ with $\omega = 1$). Thus, we can convert this linear system into fixed point equations to obtain the solution of the linear system (12).
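As a sketch of how the fixed point mappings of Table 1 can be assembled (our own Python rendering of the formulas for $S_{\mathrm{WJ}}$, $c_{\mathrm{WJ}}$, $S_{\mathrm{SOR}}$, and $c_{\mathrm{SOR}}$ above; the norm check mirrors the requirement $\|S\| < 1$ from (13)):

```python
import numpy as np

def wj_sor_mappings(A, b, omega_wj, omega_sor):
    # Affine fixed point maps T(u) = S u + c for the WJ and SOR splittings.
    D = np.diag(np.diag(A))                 # diagonal part of A
    L = -np.tril(A, k=-1)                   # lower triangular part of D - A
    S_wj = np.eye(len(A)) - omega_wj * np.linalg.solve(D, A)
    c_wj = omega_wj * np.linalg.solve(D, b)
    M = D - omega_sor * L
    S_sor = np.eye(len(A)) - omega_sor * np.linalg.solve(M, A)
    c_sor = omega_sor * np.linalg.solve(M, b)
    for S in (S_wj, S_sor):                 # nonexpansiveness requires ||S|| < 1
        assert np.linalg.norm(S, 2) < 1.0
    return (lambda u: S_wj @ u + c_wj), (lambda u: S_sor @ u + c_sor)
```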
We obtain the fixed point equations
$$T_i u = u, \quad i = 1, 2, \ldots, M, \tag{14}$$
where $u$ is the common solution of Equation (14). By utilizing the nonexpansive mappings $T_i$, $i = 1, 2, \ldots, M$, we provide a new parallel iterative method for finding the common solution of Equation (14). The generated sequence $\{u_n\}$ is produced from two initial points $u_0, u_1 \in \mathbb{R}^l$ by
$$\begin{cases}
z_n = u_n + \theta_n(u_n - u_{n-1}), \\
y_{i,n} = \alpha_n f(z_n) + \beta_n z_n + \gamma_n T_i z_n + \big(1 - (\alpha_n + \beta_n + \gamma_n)\big) T_i T_i z_n, \quad i = 1, \ldots, M, \\
u_{n+1} = \operatorname*{argmax}\{\|y_{i,n} - z_n\| : i = 1, \ldots, M\}, \quad n \ge 1,
\end{cases} \tag{15}$$
where $\{\alpha_n\}$, $\{\beta_n\}$, and $\{\gamma_n\}$ are appropriate real sequences in $[0,1]$ and $f$ is a contraction mapping. The stopping criterion is
$$\|u_{n+1} - u_n\|_2 < \epsilon_l;$$
until it is met, we set $u_{n-1} = u_n$ and $u_n = u_{n+1}$ and repeat.
Next, the proposed method (15) was compared with the well-known WJ, SOR, and Gauss–Seidel (the SOR method with $\omega = 1$, called the GS method) methods for the linear system
$$\begin{bmatrix}
4 & 1 & 0 & 1 & & & \\
1 & 4 & 1 & 0 & 1 & & \\
0 & 1 & 4 & 1 & 0 & \ddots & \\
1 & 0 & 1 & 4 & 1 & \ddots & 1 \\
 & 1 & 0 & 1 & 4 & \ddots & 0 \\
 & & \ddots & \ddots & \ddots & \ddots & 1 \\
 & & & 1 & 0 & 1 & 4
\end{bmatrix}_{l \times l}
\begin{bmatrix} u_1 \\ u_2 \\ u_3 \\ \vdots \\ u_{l-2} \\ u_{l-1} \\ u_l \end{bmatrix}_{l \times 1}
= \begin{bmatrix} 1 \\ 1 \\ 1 \\ \vdots \\ 1 \\ 1 \\ 1 \end{bmatrix}_{l \times 1}, \tag{16}$$
with $u_0 = [1\ 1\ \cdots\ 1]^T_{l\times 1}$ and $u_1 = [0.5\ 0.5\ \cdots\ 0.5]^T_{l\times 1}$, where $l = 50, 100$. For simplicity, the proposed method (15) was run with $M \le 3$, the nonexpansive mappings $T$ chosen from $T_{\mathrm{WJ}}$, $T_{\mathrm{SOR}}$, and $T_{\mathrm{GS}}$, and $f(u) = u$. The results of the WJ, GS, SOR, and proposed methods are given for the following cases:
Case 1. The proposed method with $T_{\mathrm{WJ}}$;
Case 2. The proposed method with $T_{\mathrm{GS}}$;
Case 3. The proposed method with $T_{\mathrm{SOR}}$;
Case 4. The proposed method with $T_{\mathrm{WJ}}T_{\mathrm{GS}}$;
Case 5. The proposed method with $T_{\mathrm{WJ}}T_{\mathrm{SOR}}$;
Case 6. The proposed method with $T_{\mathrm{GS}}T_{\mathrm{SOR}}$;
Case 7. The proposed method with $T_{\mathrm{WJ}}T_{\mathrm{GS}}T_{\mathrm{SOR}}$.
These cases are demonstrated and discussed for solving the linear system (16). The weight parameter $\omega$ of the proposed methods is set to its optimal value $\omega_o$ defined in Table 2. We used the following parameters:
$$\alpha_n = \begin{cases} 1/n^2, & \text{if } 1 \le n < \tilde N, \\ 1/n, & \text{otherwise}, \end{cases} \tag{17}$$
$$\beta_n = \begin{cases} 1/(2n+1), & \text{if } 1 \le n < \tilde N, \\ n/(2n+1), & \text{otherwise}, \end{cases} \tag{18}$$
$\gamma_n = \beta_n$, and
$$\theta_n = \begin{cases} \min\left\{\dfrac{1}{n^2\|u_n - u_{n-1}\|_2},\ 0.15\right\}, & \text{if } u_n \ne u_{n-1}, \\ 0.15, & \text{otherwise}, \end{cases} \tag{19}$$
where $\tilde N$ is the number of iterations at which we want to stop, with $\epsilon_l = 10^{-7}$. The estimated error per iteration step for all cases was measured by the relative error $\|u_n - u\|_2/\|u\|_2$. Figure 1 shows the estimated error per iteration step for all cases with $l = 50$ and $l = 100$.
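For reproducibility, a short sketch (again in Python rather than the authors' Matlab) of the banded test matrix in (16) and the parameter schedules (17)–(19) could look as follows; the band structure of (16) is read off the display above and should be treated as our reconstruction:

```python
import numpy as np

def test_matrix(l):
    # Matrix of (16): 4 on the main diagonal, 1 on the first and third
    # sub-/superdiagonals (as reconstructed from the displayed rows).
    A = 4.0 * np.eye(l)
    for k in (1, 3):
        A += np.eye(l, k=k) + np.eye(l, k=-k)
    return A

def alpha(n, N_tilde):       # Eq. (17)
    return 1.0 / n**2 if n < N_tilde else 1.0 / n

def beta(n, N_tilde):        # Eq. (18); gamma_n = beta_n
    return 1.0 / (2 * n + 1) if n < N_tilde else n / (2 * n + 1)

def theta(n, u_n, u_prev):   # Eq. (19): inertial weight capped at 0.15
    diff = np.linalg.norm(u_n - u_prev)
    return min(1.0 / (n**2 * diff), 0.15) if diff > 0 else 0.15
```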
The trend of the number of iterations for the WJ, GS, and SOR methods and all case studies of the proposed methods in solving the linear system (16) with l = 50 and l = 100 is shown in Figure 2.
Figure 1 and Figure 2 show that the proposed method using $T_{\mathrm{WJ}}$ was better than the WJ method, the proposed method with $T_{\mathrm{GS}}$ was better than the GS method, and the proposed method with $T_{\mathrm{SOR}}$ was better than the SOR method in terms of both the speed of convergence and the number of iterations. We also found that, when the proposed method was used with $M > 1$ (the parallel algorithm), the number of iterations matched the minimum number of iterations among the corresponding nonparallel proposed methods. That is, as can be seen from Figure 2, the number of iterations of the proposed method with $T_{\mathrm{WJ}}T_{\mathrm{GS}}$ was the same as that with $T_{\mathrm{GS}}$, and the number of iterations of the proposed method with $T_{\mathrm{WJ}}T_{\mathrm{SOR}}$, $T_{\mathrm{GS}}T_{\mathrm{SOR}}$, or $T_{\mathrm{WJ}}T_{\mathrm{GS}}T_{\mathrm{SOR}}$ was the same as that with $T_{\mathrm{SOR}}$. Consequently, the parallel algorithms in which $T_{\mathrm{SOR}}$ appears as a component (the proposed methods with $T_{\mathrm{WJ}}T_{\mathrm{SOR}}$, $T_{\mathrm{GS}}T_{\mathrm{SOR}}$, and $T_{\mathrm{WJ}}T_{\mathrm{GS}}T_{\mathrm{SOR}}$) give the fastest convergence.
Figure 3 shows that the CPU time of the SOR method was better than that of the other methods. However, the CPU time of the proposed method using the parallel technique $T_{\mathrm{WJ}}T_{\mathrm{GS}}T_{\mathrm{SOR}}$ approached that of the SOR method as the size of the matrix $A$ increased.
Next, we provide a comparison of the proposed algorithm with the PMHM, HTI, and VAM (where $T_n$ is the $W_n$-mapping introduced by Shimoji and Takahashi [22], with $\alpha_n = \frac{n}{n+1}$). For the parameter in the HTI and VAM, we chose $\alpha_n = \frac{n}{n+1}$. Let $f_n(u_n) = u_n/8$ in the VAM algorithm and $f_n(u_n) = 0.7u_0$ in the HTI algorithm. The results are reported in Table 3 and Figure 4.
From Table 3 and Figure 4, we see that the CPU time and the number of iterations of the proposed algorithm were better than those of the PMHM, HTI, and VAM.

4.2. Differential Problems

Consider the following simple and well-known periodic heat problem with Dirichlet boundary conditions (DBCs) and initial data:
$$\begin{aligned}
u_t &= \vartheta u_{xx} + f(x,t), && 0 < x < l,\ t > 0, \\
u(x,0) &= u_0(x), && 0 < x < l, \\
u(0,t) &= \psi_1(t), \quad u(l,t) = \psi_2(t), && t > 0,
\end{aligned} \tag{20}$$
where $\vartheta$ is a constant, $u(x,t)$ represents the temperature at the point $(x,t)$, and $f(x,t)$, $\psi_1(t)$, and $\psi_2(t)$ are sufficiently smooth functions. Below, we use the notations $u_n^i$ and $(u_{xx})_n^i$ to represent the approximate numerical values of $u(x_i, t_n)$ and $u_{xx}(x_i, t_n)$, where $t_n = n\Delta t$ and $\Delta t$ denotes the size of the temporal mesh. The following well-known Crank–Nicolson-type scheme [12,21] is the foundation for a set of schemes used to solve the heat problem (20):
$$\frac{u_{n+1}^i - u_n^i}{\Delta t} = \frac{\vartheta}{2}\Big[(u_{xx})_{n+1}^i + (u_{xx})_n^i\Big] + f_{n+1/2}^i, \quad i = 2, \ldots, N-1, \tag{21}$$
with initial data
$$u_0^i = u_0(x_i), \quad i = 2, \ldots, N-1,$$
and DBCs
$$u_{n+1}^1 = \psi_1(t_{n+1}), \quad u_{n+1}^N = \psi_2(t_{n+1}).$$
To approximate the terms $(u_{xx})_k^i$, $k = n, n+1$, we used the standard centered discretization in space. The matrix form of the well-known second-order finite difference scheme (FDS) for solving the heat problem (20) can be written as
$$A u_{n+1} = G_n, \tag{24}$$
where $G_n = B u_n + f_{n+1/2}$,
$$A = \begin{bmatrix}
1+\eta & -\frac{\eta}{2} & & \\
-\frac{\eta}{2} & 1+\eta & \ddots & \\
 & \ddots & \ddots & -\frac{\eta}{2} \\
 & & -\frac{\eta}{2} & 1+\eta
\end{bmatrix}, \qquad
B = \begin{bmatrix}
1-\eta & \frac{\eta}{2} & & \\
\frac{\eta}{2} & 1-\eta & \ddots & \\
 & \ddots & \ddots & \frac{\eta}{2} \\
 & & \frac{\eta}{2} & 1-\eta
\end{bmatrix},$$
$$u_n = \begin{bmatrix} u_n^2 \\ u_n^3 \\ \vdots \\ u_n^{N-2} \\ u_n^{N-1} \end{bmatrix}, \qquad
f_{n+1/2} = \begin{bmatrix}
\frac{\eta}{2}\psi_{n+1/2}^1 + \Delta t\, f_{n+1/2}^2 \\
\Delta t\, f_{n+1/2}^3 \\
\vdots \\
\Delta t\, f_{n+1/2}^{N-2} \\
\frac{\eta}{2}\psi_{n+1/2}^2 + \Delta t\, f_{n+1/2}^{N-1}
\end{bmatrix},$$
$\eta = \vartheta\Delta t/\Delta x^2$, $\psi_{n+1/2}^i = \psi_i(t_{n+1/2})$, $i = 1, 2$, and $f_{n+1/2}^i = f(x_i, t_{n+1/2})$, $i = 2, \ldots, N-1$. According to Equation (24), the matrix $A$ is square and symmetric positive definite. This scheme uses a three-point stencil and reaches second-order accuracy in time and space. The scheme (21) is consistent with the problem (20). The necessary and sufficient criterion for the stability of the scheme (21) is $\|A^{-1}B\| \le 1$ (see [12]).
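A minimal sketch of one Crank–Nicolson step (21)/(24), under the sign conventions reconstructed above and with the spatial domain $[0, 1]$ assumed; `f`, `psi1`, and `psi2` are caller-supplied callables:

```python
import numpy as np

def crank_nicolson_step(u_int, t_n, dt, dx, vartheta, f, psi1, psi2):
    # Advance the interior values u^2..u^{N-1} one time step by solving
    # A u_{n+1} = B u_n + f_{n+1/2} as in (24).
    m = len(u_int)                         # m = N - 2 interior nodes
    eta = vartheta * dt / dx**2
    off = np.ones(m - 1)
    A = np.diag((1 + eta) * np.ones(m)) - (eta / 2) * (np.diag(off, 1) + np.diag(off, -1))
    B = np.diag((1 - eta) * np.ones(m)) + (eta / 2) * (np.diag(off, 1) + np.diag(off, -1))
    t_half = t_n + dt / 2
    x_int = dx * np.arange(1, m + 1)       # interior grid points x_2..x_{N-1}
    G = B @ u_int + dt * f(x_int, t_half)
    G[0] += (eta / 2) * psi1(t_half)       # Dirichlet boundary contributions
    G[-1] += (eta / 2) * psi2(t_half)
    return np.linalg.solve(A, G)           # solved directly here; the paper
                                           # solves (24) with the iterative methods
```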
The discretization (24) of the considered problem has traditionally been solved using iterative methods. Here, the well-known WJ and SOR methods were chosen as examples (see Table 4). The weight parameter is $\omega$; the diagonal component of the matrix $A$ is $D$; the lower triangular part of the matrix $D - A$ is $L$. Moreover, the optimal weight parameter $\omega_o$ is given by the same formulas as in Table 2. The step sizes in time play an important role in the stability needed for the WJ and SOR methods when solving the linear systems (24) generated from the discretization of the considered problem (20). A discussion of the stability of the WJ and SOR methods in solving the linear systems (24) can be found in [20,21].
Let us consider the linear system
$$A u = G, \tag{25}$$
where $A : \mathbb{R}^l \to \mathbb{R}^l$ is a linear and positive operator and $u, G \in \mathbb{R}^l$. We transform this linear system into the fixed point form $T(u) = u$ to determine the solution of the linear system (25). For example, the well-known WJ, SOR, and GS approaches recast the linear system (25) as a fixed point equation (see Table 5).
We introduce a new parallel iterative method using the nonexpansive mappings $T_j$, $j = 1, 2, \ldots, M$. The generated sequence $\{u_n\}$ is created by employing two initial points $u^{(0,1)}, u^{(1,1)} \in \mathbb{R}^l$ and
$$\begin{cases}
t^{(n,s+1)} = u^{(n,s+1)} + \theta_n\big(u^{(n,s+1)} - u^{(n,s)}\big), \\
v_j^{(n,s+1)} = \alpha_n f\big(t^{(n,s+1)}\big) + \beta_n t^{(n,s+1)} + \gamma_n T_j t^{(n,s+1)} + \big(1 - (\alpha_n + \beta_n + \gamma_n)\big) T_j T_j t^{(n,s+1)}, \quad j = 1, \ldots, M, \\
u^{(n+1,s+1)} = \operatorname*{argmax}\big\{\|v_j^{(n,s+1)} - t^{(n,s+1)}\| : j = 1, \ldots, M\big\}, \quad n \ge 1,
\end{cases} \tag{26}$$
where the second superscript $s$, $s = 1, 2, \ldots, \hat S_n$, denotes the number of iterations, $\{\alpha_n\}$, $\{\beta_n\}$, and $\{\gamma_n\}$ are appropriate real sequences in $[0,1]$, and $f$ is a contraction mapping. The following stopping criterion was employed:
$$\big\|u^{(n+1,\hat S_n+1)} - u^{(n+1,\hat S_n)}\big\|_2 < \epsilon_d,$$
where $\hat S_n$ denotes the last iteration at time $t_n$; then, we set
$$u^{(n,1)} = u^{(n-1,1)}, \qquad u^{(n+1,\hat S_n+1)} = u^{(n,1)}.$$
Next, the proposed method (26) for obtaining the solution of the problem (24), generated from the discretization of the heat problem with DBCs and initial data (20), was compared with the well-known WJ, GS, and SOR methods with their optimal parameters. The proposed method (26) was run with $M \le 3$ and the nonexpansive mappings $T$ chosen from $T_{\mathrm{WJ}}$, $T_{\mathrm{SOR}}$, and $T_{\mathrm{GS}}$.
Let us consider the simple heat problem:
$$\begin{cases}
u_t = \vartheta u_{xx} + 0.4\vartheta(4\pi^2 - 1)e^{-4\vartheta t}\cos(4\pi x), & 0 \le x \le 1,\ 0 < t < t_s, \\
u(x,0) = \cos(4\pi x)/10, \\
u(0,t) = e^{-4\vartheta t}/10, \quad u(1,t) = e^{-4\vartheta t}/10,
\end{cases} \tag{27}$$
with exact solution $u(x,t) = e^{-4\vartheta t}\cos(4\pi x)/10$.
The results of the WJ, GS, and SOR methods were compared with all case studies of the proposed methods, in the same way as in Section 4.1. Because we focus on the convergence of the proposed method, the stability analysis for selecting the time step sizes is not described in depth. The time step size of the proposed methods was based on the smallest step size selected from the WJ and SOR methods in solving the problem (24) obtained from the discretization of the considered problem (27).
All computations were carried out on a uniform grid of $N$ nodes, which corresponds to the solution of the problem (24) with an $(N-2) \times (N-2)$ matrix $A$ and $\Delta x = 1/(N-1)$. The weight parameter $\omega$ of the proposed method is set to the optimal weight parameter $\omega_o$ in Table 2.
We used $\vartheta = 25$, $\Delta t = \Delta x^2/10$, $\epsilon_d = 10^{-7}$, the default parameters $\alpha_n$, $\beta_n$, $\gamma_n$ and the function $f$ set as in Equations (17)–(19), and
$$\theta_n = \begin{cases} \min\left\{\dfrac{1}{n^2\|u_n - u_{n-1}\|_2},\ 0.121\right\}, & \text{if } u_n \ne u_{n-1}, \\ 0.121, & \text{otherwise}, \end{cases} \tag{28}$$
where $\tilde N$ is the number of iterations at which we want to stop. For testing purposes only, all computations were carried out for $0 \le t \le 0.01$ (when $t \ge 0.05$, $u(x,t) \approx 0$). Figure 5 shows the approximate solution of the problem (27) with 101 nodes at $t = 0.01$ obtained by the WJ, GS, and SOR methods and the proposed methods.
It can be seen from Figure 5 that all numerical solutions match the analytical solution reasonably well. Figure 6 shows the trend of the iteration number for the WJ, GS, and SOR methods and the proposed methods in solving the problem (24) generated from the discretization of the considered problem (27) with 101 nodes. It was found that the proposed method with $T_{\mathrm{WJ}}$ was better than the WJ method, the proposed method with $T_{\mathrm{GS}}$ was better than the GS method, and the proposed method with $T_{\mathrm{SOR}}$ was better than the SOR method when the number of iterations was compared.
We see that the number of iterations of the proposed method with $1 < M \le 3$ depends on the minimum number of iterations of the nonparallel proposed methods used. That is, the number of iterations at each time step of the proposed method with $T_{\mathrm{WJ}}T_{\mathrm{GS}}$ was the same as that with $T_{\mathrm{GS}}$, and the number of iterations at each time step of the proposed methods with $T_{\mathrm{WJ}}T_{\mathrm{SOR}}$, $T_{\mathrm{GS}}T_{\mathrm{SOR}}$, and $T_{\mathrm{WJ}}T_{\mathrm{GS}}T_{\mathrm{SOR}}$ was the same as that with $T_{\mathrm{SOR}}$.
Next, the proposed method with $T_{\mathrm{WJ}}T_{\mathrm{GS}}T_{\mathrm{SOR}}$ was chosen for solving the problem (24) generated from the discretization of the considered problem (27) in order to test and verify the order of accuracy of the presented FDS for the heat problem. All computations were carried out on uniform grids of 11, 21, 41, 81, and 161 nodes, which correspond to the solution of the discretization of the heat problem (27) with $\Delta x$ = 0.1, 0.05, 0.025, 0.0125, and 0.00625, respectively. The evolution of the relative error $\|u_n - u\|_2/\|u\|_2$ at each time step, reached under the acceptable tolerance $\epsilon_d = 10^{-7}$, for the numerical solution of the heat problem with various grid sizes is shown in Figure 7.
When the distance between the graphs for all computational grid sizes was examined, the proposed method using $T_{\mathrm{WJ}}T_{\mathrm{GS}}T_{\mathrm{SOR}}$ was shown to have second-order accuracy. That is, the order of accuracy of the proposed method using $T_{\mathrm{WJ}}T_{\mathrm{GS}}T_{\mathrm{SOR}}$ corresponds to the FDS construction. Figure 8 shows the trend of the iteration number for the WJ, GS, and SOR methods compared with all cases of the proposed methods in solving the discretization of the considered problem (27) with varying grid sizes. It can be seen that, when the grid size was small, the parallel algorithms in which $T_{\mathrm{GS}}$ or $T_{\mathrm{SOR}}$ was used as a component gave the lowest number of iterations under the accepted tolerance.
From Figure 9, we see that the CPU time of the SOR method was better than that of the other methods. However, the CPU time of the proposed method using the parallel technique $T_{\mathrm{WJ}}T_{\mathrm{GS}}T_{\mathrm{SOR}}$ approached that of the SOR method as the size of the matrix $A$ increased.
Next, we provide a comparison of the proposed algorithm with the PMHM, HTI, and VAM (where $T_n$ is the $W_n$-mapping introduced by Shimoji and Takahashi [22], with $\alpha_n = \frac{n}{n+1}$). As the parameter in the HTI and VAM, we chose $\alpha_n = \frac{n}{n+1}$. Let $f_n(u_n) = u_n/8$ in the VAM algorithm and $f_n(u_n) = 0.7u_0$ in the HTI algorithm. The results are reported in Figure 10 and Figure 11.
From Figure 10 and Figure 11, we see that the CPU time and the number of iterations of the proposed algorithm were better than those of the PMHM, HTI, and VAM.
Moreover, our method can solve many real-world problems, such as image and signal processing, optimal control, regression, and classification problems, by setting $T_i$ as a proximal gradient operator. Therefore, we present examples of signal recovery next.

4.3. Signal Recovery

In this part, we present some numerical examples of signal recovery by the proposed methods. A signal recovery problem can be modeled as the following underdetermined linear system:
$$b = Au + \epsilon, \tag{29}$$
where $u \in \mathbb{R}^N$ is the original signal and $b \in \mathbb{R}^M$ is the observed signal, which is corrupted by the filter matrix $A : \mathbb{R}^N \to \mathbb{R}^M$ and noise $\epsilon$. It is well known that the problem (29) can be solved via the LASSO problem:
$$\min_{u \in \mathbb{R}^N} \frac{1}{2}\|b - Au\|_2^2 + \lambda\|u\|_1, \tag{30}$$
where $\lambda > 0$. As a result, various techniques and iterative schemes have been developed over the years to solve the LASSO problem; see [15,16]. In this case, we set $Tu_n = \operatorname{prox}_{\lambda g}(u_n - \lambda\nabla f(u_n))$, where $f(u) = \frac{1}{2}\|b - Au\|_2^2$ and $g(u) = \|u\|_1$. It is known that $T$ is a nonexpansive mapping when $\lambda \in (0, 2/L)$, where $L$ is the Lipschitz constant of $\nabla f$.
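A minimal sketch of this forward-backward mapping (here we take $g(u) = \|u\|_1$ so that $\operatorname{prox}_{\lambda g}$ is soft-thresholding at level $\lambda$, a slight normalization of the notation in the text):

```python
import numpy as np

def soft_threshold(x, tau):
    # prox of tau * ||.||_1: componentwise shrinkage toward zero.
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def forward_backward_operator(A, b, lam):
    # T u = prox_{lam g}(u - lam * grad f(u)) for f(u) = 0.5 ||b - A u||^2,
    # g(u) = ||u||_1; T is nonexpansive for lam in (0, 2/L), L = ||A||^2.
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of grad f
    assert 0.0 < lam < 2.0 / L
    def T(u):
        grad = A.T @ (A @ u - b)           # gradient of f at u
        return soft_threshold(u - lam * grad, lam)
    return T
```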
The goal in this paper is to remove noise without knowing the type of filter and noise. Thus, we are interested in the following family of problems:
$$\min_{u \in \mathbb{R}^N} \frac{1}{2}\|A_1 u - b_1\|_2^2 + \lambda_1\|u\|_1, \quad \min_{u \in \mathbb{R}^N} \frac{1}{2}\|A_2 u - b_2\|_2^2 + \lambda_2\|u\|_1, \quad \ldots, \quad \min_{u \in \mathbb{R}^N} \frac{1}{2}\|A_N u - b_N\|_2^2 + \lambda_N\|u\|_1, \tag{31}$$
where $u$ is the original signal, $A_i$ is a bounded linear operator, and $b_i$ is an observed signal with noise for all $i = 1, 2, \ldots, N$.
We can apply Algorithm 1 to solve the problem (31) by setting $T_i u_n = \operatorname{prox}_{\lambda_i g_i}(u_n - \lambda_i \nabla f_i(u_n))$, as sketched below.
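Under the step size rule stated below ($\lambda_i = 1.999/\max_i\|A_i\|^2$), the family $\{T_i\}$ can be wired to the Algorithm 1 sketch from Section 3 as follows (hypothetical glue code reusing `forward_backward_operator` and `inertial_parallel_viscosity` from the earlier sketches):

```python
import numpy as np

def make_T_list(As, bs):
    # One forward-backward mapping per data pair (A_i, b_i) of (31),
    # sharing the common step size lambda = 1.999 / max_i ||A_i||^2.
    lam = 1.999 / max(np.linalg.norm(A, 2) ** 2 for A in As)
    return [forward_backward_operator(A, b, lam) for A, b in zip(As, bs)]

# Example wiring (shapes are illustrative):
# T_list = make_T_list([A1, A2, A3], [b1, b2, b3])
# u = inertial_parallel_viscosity(lambda x: 0.9 * x, T_list, u0, u1,
#                                 alpha, beta, gamma, theta)
```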
In our experiment, the sparse vector $u \in \mathbb{R}^N$ was generated from the uniform distribution on $[-2, 2]$ with $m$ nonzero elements. The observations $b_1, b_2, b_3$ were generated by the normally distributed matrices $A_1, A_2, A_3 \in \mathbb{R}^{M\times N}$, respectively, with white Gaussian noise such that the signal-to-noise ratio (SNR) is 40. The initial point $u_1$ was picked randomly. We used the mean-squared error (MSE) for estimating the restoration accuracy, with the stopping criterion
$$\mathrm{MSE} = \frac{1}{N}\|u_n - u^*\|_2^2 < 10^{-4},$$
where $u^*$ is the estimated signal of $u$.
In what follows, let the step size parameter be $\lambda_i = \dfrac{1.999}{\max_{1\le i\le 3}\|A_i\|^2}$ for all $i = 1, 2, 3$, and let the contraction mapping $f : H \to H$ be defined by $f(x) = 0.9x$, $x \in H$. We study the convergence behavior of the inertial parameter $\theta_n$ given by
$$\theta_n = \begin{cases} \varepsilon_n, & \text{if } n \le K \text{ and } u_n \ne u_{n-1}, \\ \dfrac{\alpha_n}{n^2\|u_n - u_{n-1}\|}, & \text{if } n > K \text{ and } u_n \ne u_{n-1}, \\ 0.4, & \text{otherwise}, \end{cases} \tag{32}$$
where $K$ is the number of iterations at which we want to stop.
The iterative scheme was varied by choosing different $\varepsilon_n$ in the following cases:
Case 1. $\varepsilon_n = 0$;
Case 2. $\varepsilon_n = \min\left\{0.13, \dfrac{1}{\|u_1 - u_0\|}\right\}$;
Case 3. $\varepsilon_n = \min\left\{0.45, \dfrac{1}{\|u_1 - u_0\|}\right\}$;
Case 4. $\varepsilon_n = \min\left\{0.87, \dfrac{1}{\|u_1 - u_0\|}\right\}$;
Case 5. $\varepsilon_n = 0.45$;
Case 6. $\varepsilon_n = 0.89$.
We set the number of iterations at which we wanted to stop to $K = 10{,}000$ and, in all cases, set $\alpha_n = \frac{1}{5n+1}$, $\beta_n = 0.3$, and $\gamma_n = \beta_n$. The results are reported in Table 6.
From Table 6, we observe that the choice $\varepsilon_n = 0$ requires more iterations and CPU time for our algorithm than the other choices of $\varepsilon_n$. Furthermore, we see that the case of inputting $A_1A_2A_3$ had a lower number of iterations and CPU time than the cases of inputting two operators $A_iA_j$ and one operator $A_i$, $i = 1, 2, 3$, across all cases. This means that the efficiency of the proposed algorithm improves as the number of subproblems increases.
Next, we provide a comparison of the proposed algorithm with the PMHM, HTI, and VAM (where $T_n$ is the $W_n$-mapping introduced by Shimoji and Takahashi [22], with $\alpha_n = \frac{n}{n+1}$). We set the parameter in the PMHM algorithm as $\alpha_n = 1 - \frac{n}{n+1}$. The parameters in the HTI and VAM were chosen as $\alpha_n = \frac{n}{n+1}$ and $\beta_n = \frac{n}{n+1}$. Let $f_n(u_n) = u_n/8$ in the VAM algorithm and $f_n(u_n) = 0.7u_0$ in the HTI algorithm. We plot the number of iterations versus the mean-squared error (MSE) and show the original signal, the observed data, and the recovered signal for one case with $N = 2560$, $M = 1280$, and $m = 210$. The results are reported in Table 7.
From Table 7, we see from the MSE values that our algorithm using the parallel method was faster than the PMHM, HTI, and VAM in terms of both the number of iterations and the CPU time.
From Figure 12, it can be seen that the MSE value of Algorithm 1 with $A_1A_2A_3$ decreased faster than that of Algorithm 1 with two operators $A_iA_j$, which in turn decreased faster than that of Algorithm 1 with a single operator $A_i$, $i = 1, 2, 3$.
The original signal, the observed data, and the recovered signals are shown in Figure 13, Figure 14, Figure 15 and Figure 16.
From Figure 16 and Figure 17, it can be seen that Algorithm 1 with $A_1A_2A_3$ converged faster than the PMHM, HTI, and VAM.

5. Conclusions

In this work, we introduced a viscosity modification combined with the parallel monotone algorithm for a finite family of nonexpansive mappings and established a strong convergence theorem. We provided numerical experiments of our algorithm for solving linear systems and differential problems and showed the efficiency of the proposed algorithm. In the signal recovery problem, it was found that our algorithm has better convergence behavior than the other algorithms. In the future, the proposed algorithm can be extended to generalized nonexpansive mappings and applied to many real-world problems, such as image and signal processing, optimal control, regression, and classification problems.

Author Contributions

Funding acquisition and supervision, S.S.; writing—review and editing, W.C.; writing—original draft, K.K.; software, D.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by Chiang Mai University, Thailand and the NSRI via the program Management Unit for Human Resources & Institutional Development, Research and Innovation (Grant No. B05F640183). W. Cholamjiak was supported by the Thailand Science Research and Innovation Fund and University of Phayao (Grant No. FF66-UoE). D. Yambangwai was supported by the School of Science, University of Phayao (Grant No. PBTSC65008).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The authors would like to thank reviewers and the editor for valuable comments for improving the original manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Cholamjiak, W.; Dutta, H. Viscosity modification with parallel inertial two steps forward-backward splitting methods for inclusion problems applied to signal recovery. Chaos Solitons Fractals 2022, 157, 111858.
  2. Osilike, M.O.; Aniagbosor, S.C. Weak and strong convergence theorems for fixed points of asymptotically nonexpansive mappings. Math. Comput. Model. 2000, 32, 1181–1191.
  3. Browder, F.E. Convergence of approximants to fixed points of nonexpansive nonlinear mappings in Banach spaces. Arch. Ration. Mech. Anal. 1967, 24, 82–90.
  4. Halpern, B. Fixed points of nonexpanding maps. Bull. Am. Math. Soc. 1967, 73, 957–961.
  5. Moudafi, A. Viscosity approximation methods for fixed-points problems. J. Math. Anal. Appl. 2000, 241, 46–55.
  6. Kankam, K.; Cholamjiak, P. Strong convergence of the forward–backward splitting algorithms via linesearches in Hilbert spaces. Appl. Anal. 2021, 1–20.
  7. Xu, H.K. Viscosity approximation methods for nonexpansive mappings. J. Math. Anal. Appl. 2004, 298, 279–291.
  8. Aoyama, K.; Kimura, Y. Viscosity approximation methods with a sequence of contractions. Cubo 2014, 16, 9–20.
  9. Polyak, B.T. Some methods of speeding up the convergence of iteration methods. USSR Comput. Math. Math. Phys. 1964, 4, 1–17.
  10. Nesterov, Y.E. A method for solving the convex programming problem with convergence rate O(1/k²). Dokl. Akad. Nauk SSSR 1983, 269, 543–547.
  11. Maingé, P.E. Inertial iterative process for fixed points of certain quasi-nonexpansive mappings. Set-Valued Anal. 2007, 15, 67–79.
  12. Yambangwai, D.; Moshkin, N. Deferred correction technique to construct high-order schemes for the heat equation with Dirichlet and Neumann boundary conditions. Eng. Lett. 2013, 21, 61–67.
  13. Anh, P.K.; Van Hieu, D. Parallel and sequential hybrid methods for a finite family of asymptotically quasi φ-nonexpansive mappings. J. Appl. Math. Comput. 2015, 48, 241–263.
  14. Khatibzadeh, H.; Ranjbar, S. Halpern type iterations for strongly quasi-nonexpansive sequences and its applications. Taiwan. J. Math. 2015, 19, 1561–1576.
  15. Suantai, S.; Kankam, K.; Cholamjiak, W.; Yajai, W. Parallel Hybrid Algorithms for a Finite Family of G-Nonexpansive Mappings and Its Application in a Novel Signal Recovery. Mathematics 2022, 10, 2140.
  16. Suantai, S.; Kankam, K.; Cholamjiak, P.; Cholamjiak, W. A parallel monotone hybrid algorithm for a finite family of G-nonexpansive mappings in Hilbert spaces endowed with a graph applicable in signal recovery. Comput. Appl. Math. 2021, 40, 1–17.
  17. Maingé, P.E. Strong convergence of projected subgradient methods for nonsmooth and nonstrictly convex minimization. Set-Valued Anal. 2008, 16, 899–912.
  18. He, S.; Yang, C. Solving the variational inequality problem defined on intersection of finite level sets. Abstr. Appl. Anal. 2013, 2013, 942315.
  19. Cholamjiak, P. A generalized forward-backward splitting method for solving quasi inclusion problems in Banach spaces. Numer. Algorithms 2016, 71, 915–932.
  20. Grzegorski, S.M. On optimal parameter not only for the SOR method. Appl. Comput. Math. 2019, 8, 82–87.
  21. Yambangwai, D.; Cholamjiak, W.; Thianwan, T.; Dutta, H. On a new weight tridiagonal iterative method and its applications. Soft Comput. 2021, 25, 725–740.
  22. Shimoji, K.; Takahashi, W. Strong convergence to common fixed points of infinite nonexpansive mappings and applications. Taiwan. J. Math. 2001, 5, 387–404.
Figure 1. Relative error of the GS, WJ, and SOR methods and all cases of the proposed methods in solving the problem (16) with $l = 50$ and $l = 100$, respectively.
Figure 2. The progression in the number of iterations for the GS, WJ, and SOR methods and the proposed methods in solving the problem (12) with $l = 50$ and $l = 100$, respectively.
Figure 3. The progression of the CPU time for the GS, WJ, and SOR methods and the proposed methods in solving the problem (12) with $l = 50$ and $l = 100$, respectively.
Figure 4. The progression of the number of iterations for the proposed methods and the PMHM, HTI, and VAM in solving the problem (12) with $l = 50$ and $l = 100$, respectively.
Figure 5. Approximate solutions of the GS, WJ, and SOR methods and all cases of the proposed methods in solving the problem (27) with 101 nodes.
Figure 6. The evolution of the number of iterations for the WJ, GS, and SOR methods and the proposed methods in solving the problem (20) with 101 nodes and $t \in (0, 1]$.
Figure 7. The evolution of the relative error in obtaining the numerical solution of the problem (27) with various grid sizes by using the proposed method with $T_{\mathrm{WJ}}T_{\mathrm{GS}}T_{\mathrm{SOR}}$.
Figure 8. The evolution of the iteration number for the GS, WJ, and SOR methods and the proposed methods in solving the problem (20) with $\vartheta = 25$ and $t \in (0, 1]$.
Figure 9. The evolution of the CPU time for the GS, WJ, and SOR methods and the proposed method in solving the problem (20) with $\vartheta = 25$ and $t \in (0, 1]$.
Figure 10. The average number of iterations for the proposed methods and the PMHM, HTI, and VAM.
Figure 11. The CPU time for the proposed methods and the PMHM, HTI, and VAM.
Figure 12. The graphs of the MSE for Algorithm 1 with the inputs $A_i$, $i = 1, 2, 3$.
Figure 13. The original signal of size $N = 2560$, $M = 1280$, with 250 spikes, and the measured values with $A_i$, $i = 1, 2, 3$, SNR = 40, respectively.
Figure 14. The recovered signal with $m = 210$ by $A_1$ (503 Iter, CPU = 1.6346), $A_2$ (474 Iter, CPU = 1.5908), and $A_3$ (427 Iter, CPU = 1.3930), respectively.
Figure 15. The recovered signal with $m = 210$ by $A_1A_2$ (78 Iter, CPU = 1.5538), $A_1A_3$ (82 Iter, CPU = 1.3578), and $A_2A_3$ (75 Iter, CPU = 1.2515), respectively.
Figure 16. The recovered signal with $m = 210$ by $A_1A_2A_3$ (37 Iter, CPU = 1.8094), PMHM (264 Iter, CPU = 2.4789), HTI (130 Iter, CPU = 2.0365), and VAM (60 Iter, CPU = 2.0194), respectively.
Figure 17. The graphs of the MSE for Algorithm 1 with $A_1A_2A_3$ and the PMHM algorithm, respectively.
Table 1. The different ways of rearranging the linear system (12) into the form $u = T(u)$.

| Linear System | Fixed Point Mapping $T(u)$ |
|---|---|
| $Au = b$ | $T_{\mathrm{WJ}} u = (I - \omega D^{-1}A)u + \omega D^{-1}b$ |
| | $T_{\mathrm{SOR}} u = (I - \omega(D - \omega L)^{-1}A)u + \omega(D - \omega L)^{-1}b$ |
Table 2. Implemented weight parameter and optimal weight parameter of the operator $S$.

| Type of Operator $S$ | Implemented Weight Parameter $\omega$ | Optimal Weight Parameter $\omega_o$ |
|---|---|---|
| $S_{\mathrm{WJ}}$ | $0 < \omega < 2\min\left\{\dfrac{\lambda_{\min}(D)}{\lambda_{\min}(A)}, \dfrac{\lambda_{\max}(D)}{\lambda_{\max}(A)}\right\}$ | $\omega_o = \dfrac{2}{\lambda_{\min}(A) + \lambda_{\max}(A)}$ |
| $S_{\mathrm{SOR}}$ | $0 < \omega < 2$ | $\omega_o = \dfrac{2d}{d + \sqrt{\lambda_{\min}(A)\lambda_{\max}(A)}}$ |
Table 3. The progression of the CPU time for the linear system problem.

| Setting | Algorithm | Proposed | PMHM | HTI | VAM |
|---|---|---|---|---|---|
| $l = 50$, eps $= 10^{-4}$ | CPU time | 0.0052 | 0.0072 | 0.3167 | 0.8132 |
| $l = 100$, eps $= 10^{-4}$ | CPU time | 0.0063 | 0.0074 | 1.1320 | 3.5399 |
Table 4. The specific names of the WJ and SOR methods in solving the discretization of the considered problem (24).

| Consideration Problem (24) | Iterative Method | Specific Name |
|---|---|---|
| $A u_{n+1} = G_n$ | $D u^{(n+1,s+1)} = (D - \omega A)u^{(n+1,s)} + \omega G_n$ | WJ |
| | $(D - \omega L) u^{(n+1,s+1)} = \big((D - \omega L) - \omega A\big)u^{(n+1,s)} + \omega G_n$ | SOR |
Table 5. The alternative methods of rearranging the linear system (25) into the form $u = T(u)$.

| Linear System | Fixed Point Mapping $T(u)$ |
|---|---|
| $Au = G$ | $T_{\mathrm{WJ}} u = (I - \omega D^{-1}A)u + \omega D^{-1}G$ |
| | $T_{\mathrm{GS}} u = (I - (D - L)^{-1}A)u + (D - L)^{-1}G$ |
| | $T_{\mathrm{SOR}} u = (I - \omega(D - \omega L)^{-1}A)u + \omega(D - \omega L)^{-1}G$ |
Table 6. The convergence of Algorithm 1 with each $\varepsilon_n$ for the parameter $\theta_n$.

| Input | | Case 1 | Case 2 | Case 3 | Case 4 | Case 5 | Case 6 |
|---|---|---|---|---|---|---|---|
| $A_1$ | CPU | 7.3927 | 7.0153 | 4.2289 | 1.8295 | 5.3543 | 3.0668 |
| | Iter | 3830 | 3661 | 2076 | 451 | 2684 | 1402 |
| $A_2$ | CPU | 7.5717 | 6.5441 | 3.8712 | 1.3047 | 5.4360 | 2.9778 |
| | Iter | 3993 | 3436 | 1902 | 438 | 2762 | 1419 |
| $A_3$ | CPU | 7.4972 | 6.6584 | 4.0925 | 1.3244 | 5.4799 | 3.3457 |
| | Iter | 4009 | 3509 | 1971 | 453 | 2836 | 1447 |
| $A_1A_2$ | CPU | 2.9380 | 2.5944 | 2.0119 | 1.3225 | 2.5928 | 2.1553 |
| | Iter | 499 | 402 | 259 | 73 | 410 | 259 |
| $A_1A_3$ | CPU | 2.8789 | 2.7574 | 1.9361 | 1.3796 | 2.5989 | 2.3089 |
| | Iter | 475 | 459 | 248 | 76 | 409 | 249 |
| $A_2A_3$ | CPU | 2.9561 | 2.7747 | 2.1093 | 1.333 | 2.6121 | 2.1656 |
| | Iter | 503 | 447 | 278 | 77 | 394 | 231 |
| $A_1A_2A_3$ | CPU | 1.9907 | 1.9641 | 1.7686 | 1.7525 | 1.8102 | 3.2931 |
| | Iter | 74 | 68 | 41 | 37 | 45 | 171 |
Table 7. The computational results for solving the LASSO problem ($N = 2560$, $M = 1280$).

| Algorithm | | $m = 210$ | $m = 230$ | $m = 250$ |
|---|---|---|---|---|
| $A_1$ | CPU | 1.3953 | 1.4232 | 1.6346 |
| | Iter | 443 | 449 | 503 |
| $A_2$ | CPU | 1.3978 | 1.4164 | 1.5908 |
| | Iter | 440 | 474 | 474 |
| $A_3$ | CPU | 1.2974 | 1.2871 | 1.3930 |
| | Iter | 411 | 419 | 427 |
| $A_1A_2$ | CPU | 1.3888 | 1.4732 | 1.5538 |
| | Iter | 75 | 75 | 78 |
| $A_1A_3$ | CPU | 1.4487 | 1.3061 | 1.3578 |
| | Iter | 72 | 70 | 82 |
| $A_2A_3$ | CPU | 1.3728 | 1.2877 | 1.2515 |
| | Iter | 73 | 68 | 75 |
| $A_1A_2A_3$ | CPU | 1.7252 | 1.8706 | 1.8094 |
| | Iter | 37 | 38 | 37 |
| PMHM | CPU | 2.6205 | 2.7398 | 2.4789 |
| | Iter | 252 | 262 | 264 |
| HTI | CPU | 2.0454 | 2.0043 | 2.0365 |
| | Iter | 121 | 127 | 130 |
| VAM | CPU | 1.7917 | 2.2685 | 2.0194 |
| | Iter | 58 | 64 | 60 |