
Tikhonov Regularization Terms for Accelerating Inertial Mann-Like Algorithm with Applications

1
Department of Mathematics, Faculty of Science, Sohag University, Sohag 82524, Egypt
2
Department of Mathematics, King Mongkut's University of Technology Thonburi, Bangkok 10140, Thailand
3
Department of Mathematics, College of Sciences, Jazan University, Jazan 45142, Saudi Arabia
*
Author to whom correspondence should be addressed.
Symmetry 2021, 13(4), 554; https://doi.org/10.3390/sym13040554
Submission received: 29 January 2021 / Revised: 19 March 2021 / Accepted: 22 March 2021 / Published: 27 March 2021
(This article belongs to the Section Mathematics)

Abstract

In this manuscript, we accelerate the modified inertial Mann-like algorithm by involving Tikhonov regularization terms. Strong convergence to fixed points of nonexpansive mappings in real Hilbert spaces is established for the proposed algorithm. From this, the strong convergence of a forward–backward algorithm involving Tikhonov regularization terms is derived, which amounts to finding a solution of the monotone inclusion problem and the variational inequality problem. Finally, some numerical experiments are presented to illustrate the effectiveness of our algorithm.

1. Introduction

Let ℶ be a real HS equipped with the inner product ⟨·, ·⟩ and the induced norm ‖·‖, and let A be a non-empty CCS of ℶ. A mapping P : A → A is called, for all φ, ℓ ∈ A,
  • L-Lipschitzian if and only if ‖Pφ − Pℓ‖ ≤ L‖φ − ℓ‖ for some L > 0;
  • NE if and only if ‖Pφ − Pℓ‖ ≤ ‖φ − ℓ‖.
The set of FPs of the mapping P is denoted by Ω(P), that is, Ω(P) = {φ ∈ A : φ = Pφ}. Mann's algorithm is one of the most successful iteration schemes for approximating FPs of NEMs. It is written as follows: let φ_0 be an arbitrary point in A; for all n ≥ 0,
φ_{n+1} = (1 − η_n)φ_n + η_n Pφ_n,  (1)
where {η_n} is a sequence of real numbers in (0, 1). Researchers in this direction proved that, under the hypothesis Ω(P) ≠ ∅ and suitable stipulations forced on {η_n}, the sequence {φ_n} created by (1) converges weakly to an FP of P.
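For readers who wish to experiment, the following minimal Python sketch runs iteration (1); the map P = cos and the constant step η_n = 0.5 are illustrative choices of ours (cos is 1-Lipschitz, hence NE on ℝ), not taken from the literature above, and the function name is ours.

```python
import numpy as np

# Mann iteration (1): phi_{n+1} = (1 - eta_n) phi_n + eta_n P(phi_n).
def mann(P, phi0, eta, n_iter=100):
    phi = phi0
    for n in range(n_iter):
        phi = (1.0 - eta(n)) * phi + eta(n) * P(phi)
    return phi

# cos is nonexpansive on R (its derivative is bounded by 1 in absolute value)
print(mann(np.cos, phi0=1.0, eta=lambda n: 0.5))  # ~0.7391, the fixed point of cos
```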
Approximating FPs of NEMs has many vital applications. Many problems can be cast as FP problems for NEMs, such as convex optimization problems, image restoration problems, monotone variational inequalities, and convex feasibility problems [1,2,3]. A large number of symmetrical iteration methods (of Mann type) for FP problems of NEMs have been presented in the literature; see, for example, [4,5,6,7,8,9]. The search for efficient and stable methods built on the Mann algorithm has therefore attracted tremendous interest, producing, for example, the forward–backward algorithm [10] and the Douglas–Rachford algorithm [11]. All these symmetric algorithms converge only weakly, which is a disadvantage. Despite this defect, they have many applications in infinite-dimensional settings, such as quantum physics and image reconstruction, where a weak convergence is disappointing and strong convergence is needed to save time and effort. Several authors obtained strong convergence by placing solid restrictions on the involved mapping: in optimization they considered strong convexity, and in monotone inclusions they considered strong monotonicity. Since in many cases such assumptions fail and strong convergence cannot be obtained this way, it was necessary to investigate new effective algorithms. Recently, several mathematicians were able to establish the strong convergence of algorithms; see [12,13,14,15].
In 2019, Boţ et al. [16] proposed a new form of Mann's algorithm to overcome the deficiency described above, formulated as follows: let φ_0 be an arbitrary point in ℶ; for all n ≥ 0,
φ_{n+1} = η_n^2 φ_n + η_n^1 (P(η_n^2 φ_n) − η_n^2 φ_n).  (2)
Under mild stipulations on {η_n^1} and {η_n^2}, they proved that the iterative sequence {φ_n} generated by (2) is strongly convergent. In addition, they applied (2) to obtain the strong convergence of the forward–backward algorithm for MIPs. The sequence {η_n^1} in scheme (2) plays an important role in accelerating convergence and is called a Tikhonov regularization sequence. On the other hand, the Tikhonov method generates a sequence {φ_n} by the rule
φ_n = J_{ζ_n P} u, for all n ∈ ℕ,
where the resolvent J is defined in the next section, u ∈ ℶ, and ζ_n > 0.
Many theoretical and numerical results on strong convergence via the Tikhonov regularization technique have been provided; see, for example, [16,17,18,19,20,21].
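As a hedged illustration, the sketch below implements scheme (2) as written above; the choices η_n^1 = 0.5 and η_n^2 = 1 − 1/(n + 1) are our own assumptions (the damping factor tends to 1 while its complements are not summable), and the function name is ours.

```python
import numpy as np

# Scheme (2): phi_{n+1} = eta2_n*phi_n + eta1_n*(P(eta2_n*phi_n) - eta2_n*phi_n).
def bot_mann(P, phi0, n_iter=2000):
    phi = phi0
    for n in range(1, n_iter + 1):
        eta1 = 0.5                   # relaxation sequence (assumption)
        eta2 = 1.0 - 1.0 / (n + 1)   # damping factor, tends to 1 (assumption)
        z = eta2 * phi
        phi = z + eta1 * (P(z) - z)
    return phi

print(bot_mann(np.cos, 1.0))  # again tends to the fixed point of cos (~0.7391)
```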
Regularization terms also intervene in many applications through artificial regularization. For various gradient flows, the artificial regularization term has played an essential role in energy stability analysis, for example in the epitaxial thin film model either with or without slope selection, in a second-order energy stable BDF method for the epitaxial thin film equation with slope selection, and in a third-order exponential time differencing numerical scheme for the no-slope-selection epitaxial thin film model. For more details about this topic, see [22,23,24,25,26,27,28].
Recently, new types of fast algorithms have emerged. One of them, the inertial algorithm, was first proposed by Polyak [29], who presented an inertial extrapolation technique for minimizing a smooth convex function. A common characteristic of inertial algorithms is that the next iterate depends on a combination of the previous two iterates. This slight change significantly improves the effectiveness and performance of these algorithms. After the emergence of this idea, authors introduced further terms into the inertial algorithm with the aim of increasing the acceleration and studying more in-depth applications, such as inertial extragradient algorithms [9,30,31], inertial projection algorithms [15,32,33], inertial Mann algorithms [33,34], and inertial forward–backward splitting algorithms [35,36,37]. All of these algorithms share one symmetrical convergence form, which is stronger and faster than the plain inertial algorithms.
According to the above results, this manuscript aims to accelerate the inertial algorithm by introducing a modified inertial Mann-like algorithm. The algorithm is used here to study the strong convergence to FPs of NEMs in real HSs. Forward–backward terms are involved in studying the strong convergence to the minimal norm solution in the zero set under symmetrical conditions. Finally, our algorithm's performance and efficiency are illustrated by some numerical comparisons with previous algorithms; the proposed algorithm converges faster, which indicates the success of our method.

2. Preliminaries

In this section, we introduce some preliminary results that greatly help in understanding our paper. Throughout this manuscript, the notations ⟶ and ⇒ denote strong convergence and multivalued mappings, respectively, G(P) denotes the graph of the mapping P, and A is a non-empty CCS in a real HS ℶ.
Lemma 1 ([38]).
Let κ, ω, and υ be points in an HS ℶ and ρ ∈ (0, 1). Then
‖ρκ + (1 − ρ)ω − υ‖² = ρ‖κ − υ‖² + (1 − ρ)‖ω − υ‖² − ρ(1 − ρ)‖κ − ω‖².
Lemma 2 ([39]).
Let ℶ be a real HS. Then, for each κ, ω ∈ ℶ,
(i) ‖κ − ω‖² = ‖κ‖² + ‖ω‖² − 2⟨κ, ω⟩;
(ii) ‖κ + ω‖² ≤ ‖κ‖² + 2⟨ω, κ + ω⟩.
Definition 1.
Let η_1 ∈ (0, 1) be fixed. We say that a mapping V : ℶ → ℶ is η_1-averaged if V can be written as V = (1 − η_1)I + η_1P, where I is the identity mapping and P : ℶ → ℶ is an NEM.
It is easy to prove that η_1-averaged operators are also NE.
Definition 2.
Assume that Γ : ℶ ⇒ ℶ is a multi-valued operator whose graph is described as G(Γ) = {(φ, n) ∈ ℶ × ℶ : n ∈ Γφ}. The operator Γ is called:
(1) monotone, if for all (φ, n), (ℓ, m) ∈ G(Γ), ⟨φ − ℓ, n − m⟩ ≥ 0;
(2) MM, if it is monotone and its graph G(Γ) is not a proper subset of the graph of any other monotone mapping;
(3) ϑ-ISM, if there is a constant ϑ > 0 so that for all φ, ℓ ∈ ℶ, ⟨φ − ℓ, Γφ − Γℓ⟩ ≥ ϑ‖Γφ − Γℓ‖².
The resolvent of Γ, J_Γ : ℶ → ℶ, is described by J_Γ = (I + Γ)^{−1}, and the reflected resolvent of Γ, V_Γ : ℶ → ℶ, is defined by V_Γ = 2J_Γ − I. The mapping J_Γ is single-valued, maximally monotone, and NE if Γ is maximally monotone [1]. In addition, 0 ∈ (Γ + Ξ)φ iff φ = J_{τΓ}(I − τΞ)(φ), τ > 0. If Ξ is ϑ-ISM with 0 < τ < 2ϑ, then J_{τΓ}(I − τΞ) is averaged.
We recall that, if the function r : ℶ → (−∞, +∞] is convex, proper, and lower-semicontinuous, then ∂r : ℶ ⇒ ℶ denotes the subdifferential of r, described by
∂r(φ) = {t ∈ ℶ : r(ℓ) ≥ r(φ) + ⟨t, ℓ − φ⟩ for all ℓ ∈ ℶ},
for φ ∈ ℶ with r(φ) < +∞. We define the proximal operator of r as
prox_r : ℶ → ℶ,   prox_r(φ) = argmin_{ℓ ∈ ℶ} { (1/2)‖φ − ℓ‖² + r(ℓ) }.
It is worth mentioning that prox_r = J_{∂r}, i.e., the proximal operator of r coincides with the resolvent of ∂r [1]. The indicator function δ_A(φ) of a non-empty closed and convex set A is given by
δ_A(φ) = 0 if φ ∈ A, and δ_A(φ) = +∞ if φ ∉ A.
By the Baillon–Haddad theorem ([1], Corollary 18.16), ∇s : ℶ → ℶ is a ϑ-ISM operator provided that the function s : ℶ → ℝ is convex and Fréchet differentiable with a (1/ϑ)-Lipschitz gradient.
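To make the prox/resolvent identity concrete, here is a small sketch with two standard closed forms: the prox of r = λ|·| (soft-thresholding) and the prox of the indicator δ_A of an interval, which is exactly the projection onto A. The function names are ours.

```python
import numpy as np

# prox of r(l) = lam * |l|: argmin_l 0.5*(phi - l)^2 + lam*|l| (soft-thresholding)
def prox_abs(phi, lam):
    return np.sign(phi) * np.maximum(np.abs(phi) - lam, 0.0)

# prox of the indicator delta_A for A = [lo, hi]: the projection onto A
def prox_box(phi, lo, hi):
    return np.clip(phi, lo, hi)

print(prox_abs(np.array([3.0, -0.2]), 1.0))       # [ 2. -0.]
print(prox_box(np.array([7.0, -1.0]), 0.0, 5.0))  # [ 5.  0.]
```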
Lemma 3
(Demi-closedness property). Assume that A is a non-empty CCS of a real HS ℶ. Let P : A → ℶ be an NEM, let {φ_n} be a sequence in A, and let φ ∈ ℶ be such that φ_n converges weakly to φ and ‖Pφ_n − φ_n‖ → 0 as n → +∞. Then φ ∈ Ω(P).
Lemma 4 ([40]).
Assume that {e_n} is a sequence of non-negative real numbers so that
e_{n+1} ≤ (1 − ϖ_n)e_n + q_n + v_n,   n ≥ 1,
where {ϖ_n} ⊂ (0, 1), {q_n} is a real sequence, and Σ_n v_n < ∞. Then the assertions below hold:
(1) For some α ≥ 0, if q_n ≤ ϖ_n α, then the sequence {e_n} is bounded;
(2) If Σ_n ϖ_n = ∞ and lim sup_{n→∞} q_n/ϖ_n ≤ 0, then lim_{n→∞} e_n = 0.

3. The Strong Convergence of a Modified Inertial Mann-Like Algorithm

In this part, we shall discuss the strong convergence for our proposed algorithm under mild conditions.
Theorem 1.
Let A be a CCS of a real HS ℶ and P : A → A be an NE mapping so that Ω(P) ≠ ∅. In addition, suppose the hypotheses below are fulfilled:
(i) lim_{n→+∞} η_n^1 = 0 = lim_{n→+∞} η_n^2, Σ_n η_n^1 = ∞, Σ_n η_n^2 = ∞, η_n^1, η_n^2 ∈ (0, 1) with η_n^1 ≥ η_n^2, and η_n^3 ∈ [a, b] ⊂ (0, 1];
(ii) lim_{n→+∞} (ξ_n/η_n^1)‖φ_n − φ_{n−1}‖ = 0 = lim_{n→+∞} (ξ_n/ρ_n)‖φ_n − φ_{n−1}‖, where ρ_n ∈ (0, 1) and ρ_n = η_n^1 − η_n^1η_n^3 + η_n^2η_n^3.
Set φ_0, φ_1 ∈ ℶ arbitrary. Define the sequence {φ_n} by the following iterative scheme:
Λ_n = φ_n + ξ_n(φ_n − φ_{n−1}),
u_n = (1 − η_n^1)Λ_n,
v_n = (1 − η_n^2)Λ_n,
φ_{n+1} = (1 − η_n^3)u_n + η_n^3 P v_n,   n ≥ 1.  (3)
Then the sequence {φ_n} defined by algorithm (3) converges in norm to a point of Ω(P).
Proof. 
We split the proof into the following steps:
Step 1. Show that {φ_n} is bounded. Indeed, for any σ ∈ Ω(P), as P is NE and by the definition of {φ_{n+1}}, we have
‖φ_{n+1} − σ‖ = ‖(1 − η_n^3)u_n + η_n^3Pv_n − σ‖ ≤ (1 − η_n^3)‖u_n − σ‖ + η_n^3‖Pv_n − σ‖ ≤ (1 − η_n^3)‖u_n − σ‖ + η_n^3‖v_n − σ‖.  (4)
From (3), we get
‖u_n − σ‖ ≤ (1 − η_n^1)‖Λ_n − σ‖ + η_n^1‖σ‖ ≤ (1 − η_n^1)‖φ_n − σ‖ + η_n^1‖σ‖ + (1 − η_n^1)ξ_n‖φ_n − φ_{n−1}‖.  (5)
Similarly, we can obtain
‖v_n − σ‖ ≤ (1 − η_n^2)‖φ_n − σ‖ + η_n^2‖σ‖ + (1 − η_n^2)ξ_n‖φ_n − φ_{n−1}‖.  (6)
Using (5) and (6) in (4), together with hypothesis (ii), we get
‖φ_{n+1} − σ‖ ≤ (1 − η_n^3)[(1 − η_n^1)‖φ_n − σ‖ + η_n^1‖σ‖ + (1 − η_n^1)ξ_n‖φ_n − φ_{n−1}‖] + η_n^3[(1 − η_n^2)‖φ_n − σ‖ + η_n^2‖σ‖ + (1 − η_n^2)ξ_n‖φ_n − φ_{n−1}‖]
= [(1 − η_n^3)(1 − η_n^1) + η_n^3(1 − η_n^2)]‖φ_n − σ‖ + [η_n^1(1 − η_n^3) + η_n^3η_n^2]‖σ‖ + [(1 − η_n^3)(1 − η_n^1) + η_n^3(1 − η_n^2)]ξ_n‖φ_n − φ_{n−1}‖
= (1 − (η_n^1 − η_n^1η_n^3 + η_n^2η_n^3))‖φ_n − σ‖ + (η_n^1 − η_n^1η_n^3 + η_n^2η_n^3)‖σ‖ + (1 − (η_n^1 − η_n^1η_n^3 + η_n^2η_n^3))ξ_n‖φ_n − φ_{n−1}‖
= (1 − ρ_n)‖φ_n − σ‖ + ρ_n‖σ‖ + (1 − ρ_n)ξ_n‖φ_n − φ_{n−1}‖.  (7)
Since sup_{n≥1} (ξ_n/ρ_n)‖φ_n − φ_{n−1}‖ exists by hypothesis (ii), consider
Θ = 2 max{ ‖σ‖, sup_{n≥1} (1 − ρ_n)(ξ_n/ρ_n)‖φ_n − φ_{n−1}‖ }.
Then, inequality (7) reduces to
‖φ_{n+1} − σ‖ ≤ (1 − ρ_n)‖φ_n − σ‖ + ρ_nΘ.
By Lemma 4(1), we conclude that {φ_n} is bounded.
On the other side, by the definition of Λ_n, we have
‖Λ_n − σ‖² = ‖φ_n − σ‖² + 2ξ_n⟨φ_n − φ_{n−1}, φ_n − σ⟩ + ξ_n²‖φ_n − φ_{n−1}‖².  (8)
Therefore, by the boundedness of {φ_n} and hypothesis (ii), we see that {Λ_n} is bounded, and thus {u_n} and {v_n} are bounded too. From (3), one has
Pv_n − u_n = (1/η_n^3)(φ_{n+1} − u_n),  (9)
and
‖Λ_n − φ_{n+1}‖² = ‖φ_n − φ_{n+1}‖² + 2ξ_n⟨φ_n − φ_{n−1}, φ_n − φ_{n+1}⟩ + ξ_n²‖φ_n − φ_{n−1}‖².  (10)
Using (4), Lemma 1, and Lemma 2(i), we have
‖φ_{n+1} − σ‖² = ‖(1 − η_n^3)u_n + η_n^3Pv_n − σ‖²
= η_n^3‖Pv_n − σ‖² + (1 − η_n^3)‖u_n − σ‖² − η_n^3(1 − η_n^3)‖u_n − Pv_n‖²
= η_n^3‖(Pv_n − u_n) − (σ − u_n)‖² + (1 − η_n^3)‖u_n − σ‖² − η_n^3(1 − η_n^3)‖u_n − Pv_n‖²
≤ η_n^3‖u_n − Pv_n‖² + η_n^3‖u_n − σ‖² − 2⟨Pv_n − u_n, σ − u_n⟩ + (1 − η_n^3)‖u_n − σ‖² − η_n^3(1 − η_n^3)‖u_n − Pv_n‖²
≤ ‖u_n − σ‖² + (η_n^3)²‖u_n − Pv_n‖² − 2⟨Pv_n − u_n, σ − u_n⟩
= ‖u_n − σ‖² + ‖φ_{n+1} − u_n‖² − (2/η_n^3)⟨φ_{n+1} − u_n, σ − u_n⟩.  (11)
From Lemma 2, one gets
−2⟨φ_{n+1} − u_n, σ − u_n⟩ ≤ ‖φ_{n+1} − σ‖² − ‖u_n − σ‖².  (12)
Applying (12) in (11), one can write
‖φ_{n+1} − σ‖² ≤ ‖u_n − σ‖² + ‖φ_{n+1} − u_n‖² + (1/η_n^3)(‖φ_{n+1} − σ‖² − ‖u_n − σ‖²),
which implies that
(1 − 1/η_n^3)‖φ_{n+1} − σ‖² ≤ (1 − 1/η_n^3)‖u_n − σ‖² + ‖φ_{n+1} − u_n‖².
Setting k = η_n^3/(η_n^3 − 1), one can obtain
‖φ_{n+1} − σ‖² ≤ ‖u_n − σ‖² − k‖φ_{n+1} − u_n‖².  (13)
From (8), (10), and (13) and Lemma 2(i), we have
‖φ_{n+1} − σ‖² ≤ ‖u_n − σ‖² − k‖u_n − φ_{n+1}‖²
= ‖(Λ_n − σ) − η_n^1Λ_n‖² − k‖(Λ_n − φ_{n+1}) − η_n^1Λ_n‖²
= ‖Λ_n − σ‖² − 2η_n^1⟨Λ_n, Λ_n − σ⟩ + (η_n^1)²‖Λ_n‖² − k‖Λ_n − φ_{n+1}‖² + 2kη_n^1⟨Λ_n, Λ_n − φ_{n+1}⟩ − k(η_n^1)²‖Λ_n‖²
= ‖Λ_n − σ‖² − k‖Λ_n − φ_{n+1}‖² + η_n^1[−2⟨Λ_n, Λ_n − σ⟩ + 2k⟨Λ_n, Λ_n − φ_{n+1}⟩ + (1 − k)η_n^1‖Λ_n‖²]
= ‖φ_n − σ‖² − k‖φ_n − φ_{n+1}‖² + 2ξ_n⟨φ_n − φ_{n−1}, φ_n − σ⟩ − 2kξ_n⟨φ_n − φ_{n−1}, φ_n − φ_{n+1}⟩ + (1 − k)ξ_n²‖φ_n − φ_{n−1}‖² + η_n^1[−2⟨Λ_n, Λ_n − σ⟩ + 2k⟨Λ_n, Λ_n − φ_{n+1}⟩ + (1 − k)η_n^1‖Λ_n‖²].  (14)
Since {φ_n} and {Λ_n} are bounded, there is a constant Θ_1 ≥ 0 so that
−2⟨Λ_n, Λ_n − σ⟩ + 2k⟨Λ_n, Λ_n − φ_{n+1}⟩ + (1 − k)η_n^1‖Λ_n‖² ≤ Θ_1, for all n ≥ 0.
Set
Θ_2 = 2ξ_n⟨φ_n − φ_{n−1}, φ_n − σ⟩ − 2kξ_n⟨φ_n − φ_{n−1}, φ_n − φ_{n+1}⟩ + (1 − k)ξ_n²‖φ_n − φ_{n−1}‖².
Then, inequality (14) reduces to
‖φ_{n+1} − σ‖² − ‖φ_n − σ‖² + k‖φ_n − φ_{n+1}‖² ≤ η_n^1Θ_1 + Θ_2.  (15)
Step 2. Prove that {φ_n} converges strongly to σ. In order to reach this, we discuss the two cases below:
Case 1. If the sequence {‖φ_n − σ‖} is monotonically decreasing, then it is convergent. Therefore, we obtain
‖φ_{n+1} − σ‖² − ‖φ_n − σ‖² → 0, as n → ∞,
which implies, together with (15) and hypotheses (i) and (ii), that
‖φ_n − φ_{n+1}‖ → 0, as n → ∞.  (16)
On the other hand, it is easy to see that
‖Λ_n − φ_{n+1}‖ ≤ ‖φ_n − φ_{n+1}‖ + ξ_n‖φ_n − φ_{n−1}‖ → 0, as n → ∞,  (17)
and
‖u_n − Λ_n‖ = η_n^1‖Λ_n‖ → 0, as n → ∞.  (18)
Similarly, one has
‖v_n − Λ_n‖ = η_n^2‖Λ_n‖ → 0, as n → ∞.  (19)
It follows from (17) and (18) and the triangle inequality that
‖u_n − φ_{n+1}‖ ≤ ‖u_n − Λ_n‖ + ‖Λ_n − φ_{n+1}‖ → 0, as n → ∞.  (20)
In the same manner, using (17) and (19), one can obtain
‖v_n − φ_{n+1}‖ ≤ ‖v_n − Λ_n‖ + ‖Λ_n − φ_{n+1}‖ → 0, as n → ∞.  (21)
Applying (16) and (20), we get
‖u_n − φ_n‖ ≤ ‖u_n − φ_{n+1}‖ + ‖φ_{n+1} − φ_n‖ → 0, as n → ∞.  (22)
In addition, applying (16) and (21), we have
‖v_n − φ_n‖ ≤ ‖v_n − φ_{n+1}‖ + ‖φ_{n+1} − φ_n‖ → 0, as n → ∞.  (23)
Going back to Equation (9) and using (20), we can write
‖Pv_n − u_n‖ = (1/η_n^3)‖φ_{n+1} − u_n‖ → 0, as n → ∞.  (24)
Combining (22)–(24), one sees that
‖φ_n − Pφ_n‖ ≤ ‖φ_n − u_n‖ + ‖u_n − Pv_n‖ + ‖Pv_n − Pφ_n‖ ≤ ‖φ_n − u_n‖ + ‖u_n − Pv_n‖ + ‖v_n − φ_n‖ → 0, as n → ∞.  (25)
Using the fact that I − P is demi-closed, we deduce that {φ_n} converges weakly to an FP of P.
It remains to prove that {φ_n} converges strongly to an FP of P. In view of (14), we have
‖φ_{n+1} − σ‖² ≤ ‖u_n − σ‖² − k‖u_n − φ_{n+1}‖²
= ‖(1 − η_n^1)(Λ_n − σ) − η_n^1σ‖² − k‖u_n − φ_{n+1}‖²
≤ (1 − η_n^1)‖Λ_n − σ‖² − 2η_n^1⟨u_n − σ, σ⟩ − k‖u_n − φ_{n+1}‖²
≤ (1 − η_n^1)‖φ_n − σ‖² + 2ξ_n⟨φ_n − φ_{n−1}, φ_n − σ⟩ + ξ_n²‖φ_n − φ_{n−1}‖² − 2η_n^1⟨u_n − σ, σ⟩ − k‖u_n − φ_{n+1}‖².  (26)
By hypotheses (i) and (ii), we deduce that
2ξ_n⟨φ_n − φ_{n−1}, φ_n − σ⟩ + ξ_n²‖φ_n − φ_{n−1}‖² − 2η_n^1⟨u_n − σ, σ⟩ → 0, as n → ∞.  (27)
Applying (20) and (27) in (26) after taking the limit as n → ∞, we get
‖φ_{n+1} − σ‖² ≤ (1 − η_n^1)‖φ_n − σ‖².  (28)
Thus, by Lemma 4, we conclude that φ_n → σ.
Case 2. Assume that {‖φ_n − σ‖} is not a monotonically decreasing sequence. Put Φ_n = ‖φ_n − σ‖ and suppose that χ : ℕ → ℕ is the mapping defined by
χ(n) = max{ϑ ∈ ℕ : ϑ ≤ n, Φ_ϑ ≤ Φ_{ϑ+1}}
for all n ≥ n_0 (for some large enough n_0). It is clear that χ(n) is non-decreasing with χ(n) → ∞ as n → ∞ and Φ_{χ(n)} ≤ Φ_{χ(n)+1} for all n ≥ n_0. By applying (15), we have
‖φ_{χ(n)} − φ_{χ(n)+1}‖² ≤ (η_{χ(n)}^1Θ_1 + Θ_2)/k → 0, as n → ∞,
which leads to ‖φ_{χ(n)} − φ_{χ(n)+1}‖ → 0 as n → ∞. In the same manner as (17)–(27) in Case 1, we get directly that {φ_{χ(n)}} converges weakly to σ as χ(n) → ∞. Using (26) for n ≥ n_0, we get
Φ_{χ(n)+1}² ≤ (1 − η_{χ(n)}^1)Φ_{χ(n)}² + 2ξ_{χ(n)}⟨φ_{χ(n)} − φ_{χ(n)−1}, φ_{χ(n)} − σ⟩ + ξ_{χ(n)}²‖φ_{χ(n)} − φ_{χ(n)−1}‖² − 2η_{χ(n)}^1⟨u_{χ(n)} − σ, σ⟩ − k‖u_{χ(n)} − φ_{χ(n)+1}‖²
≤ (1 − η_{χ(n)}^1)Φ_{χ(n)}² + 2ξ_{χ(n)}⟨φ_{χ(n)} − φ_{χ(n)−1}, φ_{χ(n)} − σ⟩ + ξ_{χ(n)}²‖φ_{χ(n)} − φ_{χ(n)−1}‖² − 2η_{χ(n)}^1⟨u_{χ(n)} − σ, σ⟩;
in another form, we have
0 ≤ Φ_{χ(n)+1}² − Φ_{χ(n)}² ≤ η_{χ(n)}^1[2⟨σ − u_{χ(n)}, σ⟩ − Φ_{χ(n)}²] + 2ξ_{χ(n)}⟨φ_{χ(n)} − φ_{χ(n)−1}, φ_{χ(n)} − σ⟩ + ξ_{χ(n)}²‖φ_{χ(n)} − φ_{χ(n)−1}‖²,
which leads to
Φ_{χ(n)}² ≤ 2⟨σ − u_{χ(n)}, σ⟩ + (2ξ_{χ(n)}/η_{χ(n)}^1)⟨φ_{χ(n)} − φ_{χ(n)−1}, φ_{χ(n)} − σ⟩ + (ξ_{χ(n)}²/η_{χ(n)}^1)‖φ_{χ(n)} − φ_{χ(n)−1}‖².
Hence, by hypotheses (i), (ii), and (28), we can obtain
lim_{n→∞} ‖φ_{χ(n)} − σ‖ = 0.
Therefore, lim_{n→∞} Φ_{χ(n)} = lim_{n→∞} Φ_{χ(n)+1} = 0. Further, for any n ≥ n_0, it can easily be seen that Φ_n ≤ Φ_{χ(n)+1} if n ≠ χ(n) (that is, n > χ(n)), because Φ_l ≥ Φ_{l+1} for χ(n) + 1 ≤ l ≤ n. Based on this, we find that for each n ≥ n_0,
0 ≤ Φ_n ≤ max{Φ_{χ(n)}, Φ_{χ(n)+1}} = Φ_{χ(n)+1}.
Consequently, we conclude that lim_{n→∞} Φ_n = 0, that is, {φ_n} → σ in norm. This finishes the proof. □
Remark 1.
We record the observations below about algorithm (3):
(1) If ξ_n = 0 and η_n^1 = η_n^2 for each n ≥ 1 in algorithm (3), then we recover the results of Bot et al. [16].
(2) For the Tikhonov regularization sequences {η_n^1} and {η_n^2}, one can set η_n^1 = η_n^2 = 1/(n + 1) for any n ≥ 1.
(3) Algorithm (3) reduces to the inertial Mann algorithm introduced by Maingé [34] if we take η_n^1 = η_n^2 = 0.
(4) Hypothesis (ii) of our theorem is easy to check and not complicated, because the value ‖φ_n − φ_{n−1}‖ is known before choosing ξ_n; this plays an important role in the numerical discussions (see the sketch after this remark). For special options, the parameter ξ_n in the proposed algorithm can be taken as
0 ≤ ξ_n ≤ ξ̂_n, where ξ̂_n = min{ ε_n/‖φ_n − φ_{n−1}‖, (n − 1)/(n + ϑ − 1) } if φ_n ≠ φ_{n−1}, and ξ̂_n = (n − 1)/(n + ϑ − 1) otherwise,
for some ϑ ≥ 3, where {ε_n} is a positive sequence so that lim_{n→+∞} ε_n/η_n^1 = 0 = lim_{n→+∞} ε_n/ρ_n. This concept was introduced by Beck and Teboulle [41] for the inertial extrapolated step.
(5) If we set η_n^1 = η_n^2 in our algorithm, we recover the results of Tan and Cho [42].
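A minimal Python sketch of algorithm (3) with the adaptive inertial step of point (4) is given below; the sequences η_n^1 = 1/(n + 2), η_n^2 = η_n^1/2, η_n^3 = 0.9 and ε_n = η_n^1/n are illustrative assumptions patterned on the numerical section, not prescriptions of Theorem 1, and the function name is ours.

```python
import numpy as np

# Algorithm (3) with the adaptive xi_n of Remark 1(4).
def algorithm3(P, phi0, phi1, n_iter=200, theta=4.0):
    phi_prev, phi = np.asarray(phi0, dtype=float), np.asarray(phi1, dtype=float)
    for n in range(1, n_iter + 1):
        eta1 = 1.0 / (n + 2)          # Tikhonov sequence: -> 0, not summable
        eta2 = eta1 / 2.0             # second Tikhonov sequence, eta1 >= eta2
        eta3 = 0.9                    # relaxation in [a, b] subset (0, 1]
        eps = eta1 / n                # eps_n / eta1_n -> 0 (assumption)
        gap = np.linalg.norm(phi - phi_prev)
        cap = (n - 1.0) / (n + theta - 1.0)
        xi = min(eps / gap, cap) if gap > 0 else cap
        Lam = phi + xi * (phi - phi_prev)              # inertial extrapolation
        u, v = (1.0 - eta1) * Lam, (1.0 - eta2) * Lam  # Tikhonov-damped points
        phi_prev, phi = phi, (1.0 - eta3) * u + eta3 * P(v)
    return phi

# e.g. a nonexpansive map with unique fixed point 0:
print(algorithm3(lambda t: np.maximum(0.0, -t), [50.0], [50.0]))
```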
The result below is very important in the next section, where it plays a prominent role in obtaining strong convergence of the forward–backward method equipped with Tikhonov regularization terms.
Corollary 1.
Assume that A is a non-empty CCS of a real HS ℶ and V : A → A is an η_1-averaged mapping, where η_1 ∈ (0, 1), with Ω(V) ≠ ∅. Suppose that the assumptions below hold:
(C1) lim_{n→+∞} η_n^1 = 0 = lim_{n→+∞} η_n^2, Σ_n η_n^1 = ∞, Σ_n η_n^2 = ∞, η_n^1, η_n^2 ∈ (0, 1) with η_n^1 ≥ η_n^2, and η_n^3 ∈ [a, b] ⊂ (0, 1/η_1];
(C2) lim_{n→+∞} (ξ_n/η_n^1)‖φ_n − φ_{n−1}‖ = 0 = lim_{n→+∞} (ξ_n/ρ_n)‖φ_n − φ_{n−1}‖, where ρ_n ∈ (0, 1) and ρ_n = η_n^1 − η_n^1η_n^3 + η_n^2η_n^3.
Set φ_0, φ_1 ∈ ℶ arbitrary and define the sequence {φ_n} by:
Λ_n = φ_n + ξ_n(φ_n − φ_{n−1}),
u_n = (1 − η_n^1)Λ_n,
v_n = (1 − η_n^2)Λ_n,
φ_{n+1} = (1 − η_n^3)u_n + η_n^3 V v_n,   n ≥ 1.  (29)
Then the iterative sequence {φ_n} generated by (29) converges in norm to an FP of V.
Proof. 
Algorithm (29) is equivalent to algorithm (3) via the identity V = (1 − η_1)I + η_1P. Since V : A → A is an η_1-averaged mapping, the mapping P is NE. Therefore, Ω(P) = Ω(V), and the result follows immediately from Theorem 1. □

4. Applications

4.1. Solve Monotone Inclusion Problem

Building on the general algorithm of Theorem 1, in this part we propose a strongly convergent inertial forward–backward algorithm with Tikhonov regularization terms for solving the MIP below:
Find φ ∈ ℶ such that 0 ∈ Γφ + Ξφ,  (30)
where Γ : ℶ ⇒ ℶ is an MM operator and Ξ : ℶ → ℶ is a ϑ-ISM operator. MIP (30) has wonderful applications in linear inverse problems, machine learning, and image processing.
Theorem 2.
Let Γ : ℶ ⇒ ℶ be an MM operator and Ξ : ℶ → ℶ be a ϑ-ISM operator so that (Γ + Ξ)^{−1}(0) ≠ ∅. Assume that τ ∈ (0, 2ϑ]. Let the assumptions below hold:
(MI1) lim_{n→+∞} η_n^1 = 0 = lim_{n→+∞} η_n^2, Σ_n η_n^1 = ∞, Σ_n η_n^2 = ∞, η_n^1, η_n^2 ∈ (0, 1) with η_n^1 ≥ η_n^2, and η_n^3 ∈ [a, b] ⊂ (0, (4ϑ − τ)/(2ϑ)];
(MI2) lim_{n→+∞} (ξ_n/η_n^1)‖φ_n − φ_{n−1}‖ = 0 = lim_{n→+∞} (ξ_n/ρ_n)‖φ_n − φ_{n−1}‖, where ρ_n ∈ (0, 1) and ρ_n = η_n^1 − η_n^1η_n^3 + η_n^2η_n^3.
Set φ_0, φ_1 ∈ ℶ arbitrary and define the sequence {φ_n} by the iterative scheme below:
Λ_n = φ_n + ξ_n(φ_n − φ_{n−1}),
u_n = (1 − η_n^1)Λ_n,
v_n = (1 − η_n^2)Λ_n,
φ_{n+1} = (1 − η_n^3)u_n + η_n^3 J_{τΓ}(v_n − τΞ(v_n)),   n ≥ 1.  (31)
Then the sequence {φ_n} generated by (31) converges in norm to a point of (Γ + Ξ)^{−1}(0).
Proof. 
Since the mapping P : ℶ → ℶ can be described by P = J_{τΓ}(I − τΞ), algorithm (31) can be written in the form (3). To discuss the convergence, we consider two situations. The first one is τ ∈ (0, 2ϑ). According to ([1], Remark 4.24(iii) and Corollary 23.8), J_{τΓ} is 1/2-averaged. Moreover, by means of ([1], Proposition 4.33), I − τΞ is τ/(2ϑ)-averaged. This implies, by ([16], Proposition 6), that P is 2ϑ/(4ϑ − τ)-averaged. By noticing that Ω(P) = (Γ + Ξ)^{−1}(0), the desired result follows from Corollary 1. The second situation is τ = 2ϑ. Since I − τΞ is then NE, so is P = J_{τΓ}(I − τΞ). Therefore, the desired conclusion follows immediately from Theorem 1. □
Remark 2.
The applications below follow from Theorem 2:
  • If η_n^3 = 1, η_n^1 = η_n^2, and Ξ = 0, then algorithm (31) turns into φ_{n+1} = J_{τΓ}((1 − η_n^1)Λ_n). This equation takes the equivalent form
    Λ_n ∈ (1/(1 − η_n^1))φ_{n+1} + (τ/(1 − η_n^1))Γφ_{n+1} = (I + γ_nI + (τ/(1 − η_n^1))Γ)φ_{n+1},
    where γ_n = 1/(1 − η_n^1) − 1 > 0 and lim_{n→+∞} γ_n = 0; the term γ_nI is called the Tikhonov regularization term. This term yields rapid convergence of the iterative sequence {φ_n} to the minimal norm solution. For more details about solving MIPs by Tikhonov-like methods, see [16,43,44].
  • Suppose that the function r : ℶ → (−∞, +∞] is convex, proper, and lower-semicontinuous, and the function s : ℶ → ℝ is convex and Fréchet differentiable with a (1/ϑ)-Lipschitz continuous gradient so that argmin(r + s) ≠ ∅. Take τ ∈ (0, 2ϑ]. For initial values φ_0, φ_1 ∈ ℶ and real sequences {η_n^1}, {η_n^2}, {η_n^3}, and {ξ_n} verifying assumptions (MI1) and (MI2) of Theorem 2, the sequence {φ_n} created by the iterative scheme
    Λ_n = φ_n + ξ_n(φ_n − φ_{n−1}),
    u_n = (1 − η_n^1)Λ_n,
    v_n = (1 − η_n^2)Λ_n,
    φ_{n+1} = (1 − η_n^3)u_n + η_n^3 prox_{τr}(v_n − τ∇s(v_n)),   n ≥ 1,
    converges in norm to a point of argmin(r + s); a concrete sketch follows this remark.
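The following sketch instantiates the scheme above for a concrete pair, assuming s(φ) = (1/2)‖Mφ − b‖² (so ∇s(φ) = Mᵀ(Mφ − b), with Lipschitz constant ‖M‖² and hence ϑ = 1/‖M‖²) and r = λ‖·‖₁ (whose prox is soft-thresholding). M, b, λ, the parameter sequences, and the function name are our own illustrative choices, not data from the paper.

```python
import numpy as np

# Forward-backward with Tikhonov terms for min r(phi) + s(phi).
def fb_tikhonov(M, b, lam, n_iter=500):
    L = np.linalg.norm(M, 2) ** 2          # Lipschitz constant of grad s
    tau = 1.0 / L                          # tau in (0, 2*theta], theta = 1/L
    phi_prev = phi = np.zeros(M.shape[1])
    for n in range(1, n_iter + 1):
        eta1, eta3 = 1.0 / (n + 2), 0.9    # eta3 <= (4*theta - tau)/(2*theta) = 1.5
        eta2 = eta1 / 2.0
        xi = min(10.0 / (n + 1) ** 2, 0.5) # simple inertial weights
        Lam = phi + xi * (phi - phi_prev)
        u, v = (1 - eta1) * Lam, (1 - eta2) * Lam
        w = v - tau * (M.T @ (M @ v - b))  # forward (gradient) step
        prox = np.sign(w) * np.maximum(np.abs(w) - tau * lam, 0.0)  # backward step
        phi_prev, phi = phi, (1 - eta3) * u + eta3 * prox
    return phi

rng = np.random.default_rng(0)
M, b = rng.standard_normal((20, 5)), rng.standard_normal(20)
print(fb_tikhonov(M, b, lam=0.1))
```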

4.2. Solve Variational Inequality Problem

There are many applications of variational inequality theory, which is why the number of researchers working in this direction is constantly increasing. Among the problems treated by variational inequality techniques are contact problems in elasticity [45], economic equilibrium [46], transportation problems [47,48], and fluid flow through porous media [49]. The ideas behind these formulations have led to the development of powerful new techniques for solving a wide class of linear and nonlinear problems.
In this part, we reformulate algorithm (3) of Theorem 1 to find a solution to the VIP:
Find φ̂ ∈ A such that ⟨Γφ̂, ℓ − φ̂⟩ ≥ 0, for all ℓ ∈ A,  (32)
where Γ : ℶ → ℶ is a mapping and A is a non-empty CCS of ℶ. It is well known that, for an arbitrary positive constant υ, φ̂ is a solution of (32) if and only if φ̂ = P_A(φ̂ − υΓφ̂), where P_A denotes the metric projection onto A.
Theorem 3.
Suppose that Γ : ℶ → ℶ is a monotone and L-Lipschitz continuous operator on a non-empty CCS A and υ = 1/L. Put P = P_A(I − υΓ). Let the assumptions below be fulfilled:
(i) lim_{n→+∞} η_n^1 = 0 = lim_{n→+∞} η_n^2, Σ_n η_n^1 = ∞, Σ_n η_n^2 = ∞, η_n^1, η_n^2, η_n^3 ∈ (0, 1) with η_n^1 ≥ η_n^2;
(ii) lim_{n→+∞} (ξ_n/η_n^1)‖φ_n − φ_{n−1}‖ = 0 = lim_{n→+∞} (ξ_n/ρ_n)‖φ_n − φ_{n−1}‖, where ρ_n ∈ (0, 1) and ρ_n = η_n^1 − η_n^1η_n^3 + η_n^2η_n^3.
Set φ_0, φ_1 ∈ ℶ arbitrary; then the iterative sequence {φ_n} created by
Λ_n = φ_n + ξ_n(φ_n − φ_{n−1}),
u_n = (1 − η_n^1)Λ_n,
v_n = (1 − η_n^2)Λ_n,
φ_{n+1} = (1 − η_n^3)u_n + η_n^3 P_A(I − υΓ)v_n,   n ≥ 1,  (33)
converges in norm to a point of Ω(P).
Proof. 
Since P = P_A(I − υΓ) is NE, the proof is finished by Theorem 1. □
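As a quick sanity check of Theorem 3, the sketch below runs scheme (33) on a one-dimensional VIP; Γ(φ) = φ − 1 on A = [0, 5] is an illustrative monotone, 1-Lipschitz choice of ours whose VI solution is φ̂ = 1, and the function name is ours.

```python
import numpy as np

# Scheme (33): phi_{n+1} = (1 - eta3) u_n + eta3 * proj_A((I - ups*Gamma) v_n).
def vip_solver(Gamma, proj, L, phi0, phi1, n_iter=500):
    ups = 1.0 / L
    phi_prev, phi = float(phi0), float(phi1)
    for n in range(1, n_iter + 1):
        eta1, eta3 = 1.0 / (n + 2), 0.5
        eta2 = eta1 / 2.0
        xi = 1.0 / (n + 1) ** 2
        Lam = phi + xi * (phi - phi_prev)
        u, v = (1 - eta1) * Lam, (1 - eta2) * Lam
        phi_prev, phi = phi, (1 - eta3) * u + eta3 * proj(v - ups * Gamma(v))
    return phi

print(vip_solver(lambda t: t - 1.0, lambda t: np.clip(t, 0.0, 5.0),
                 L=1.0, phi0=4.0, phi1=4.0))  # tends to 1.0
```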

5. Numerical Results

In this section, we include some experimental studies in finite- and infinite-dimensional spaces to illustrate the computational efficiency of the suggested algorithm and to compare it with certain previously reported algorithms. The MATLAB codes were run in MATLAB version 9.5 (R2018b) on a PC with an Intel(R) Core(TM) i5-6200 CPU @ 2.30 GHz 2.40 GHz and 8.00 GB RAM.
Example 1.
Consider a fixed-point problem taken from [50] in which the HS ℶ = ℝ is the usual space of real numbers. A mapping P : A → A is defined by
P(φ) = (5φ² − 2φ + 48)^{1/3},   φ ∈ A,
where the set A is defined by A := {φ : 0 ≤ φ ≤ 50}. The control conditions for both algorithms are taken as follows:
  • Algorithm (3) in [42] (shortly, Algorithm 1): ξ_n = 10/(n + 1)², η = 4.00, β_n = 0.90, α_n = 1/(100(n + 2)).
  • Algorithm in (3) (shortly, Algorithm 2): ξ_n = 10/(n + 1)², η = 4.00, η_n^3 = 0.90, η_n^1 = 1/(100(n + 2)), η_n^2 = 1/(200(n + 2)).
To illustrate the behaviour in this example, see Tables 1–4 and Figures 1 and 2.
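One can roughly replay this example with the algorithm3 sketch from Section 3 (which uses simplified parameter sequences rather than the exact ones listed above); note that 6 is the exact fixed point here, since 6³ = 5·6² − 2·6 + 48 = 216.

```python
import numpy as np

# The cube-root map of Example 1; its fixed point on [0, 50] is phi = 6.
P = lambda t: np.cbrt(5.0 * t**2 - 2.0 * t + 48.0)
print(algorithm3(P, [10.0], [10.0], n_iter=37))  # close to 6, cf. Table 2
print(algorithm3(P, [50.0], [50.0], n_iter=56))  # close to 6, cf. Table 4
```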
Example 2.
Let P : A → A be the mapping defined by
P(φ) = max{0, −φ}.
Then P is NE and has a unique fixed point φ = 0. The set A is defined by A := {φ : −100 ≤ φ ≤ 100}. The control conditions for both algorithms are taken as follows:
  • Algorithm 1: ξ_n = 15/(n + 1)², η = 3.00, β_n = 0.85, α_n = 1/(10(n + 2)).
  • Algorithm 2: ξ_n = 15/(n + 1)², η = 3.00, η_n^3 = 0.85, η_n^1 = 1/(10(n + 2)), η_n^2 = 1/(20(n + 2)).
To illustrate the behaviour in this example, see Figures 3–6.
Example 3.
Let P : ℶ → ℶ be the mapping defined by
P(φ) = φ + 2,   φ ∈ ℝ.
Then P is non-expansive and has a unique fixed point φ = 0. The set A is defined by A := {φ : −100 ≤ φ ≤ 100}. The control conditions for both algorithms are taken as follows:
  • Algorithm 1: ξ_n = 30/(n + 1)², η = 6.50, β_n = 0.75, α_n = 1/(50(n + 2)).
  • Algorithm 2: ξ_n = 30/(n + 1)², η = 6.50, η_n^3 = 0.75, η_n^1 = 1/(50(n + 2)), η_n^2 = 1/(100(n + 2)).
To illustrate the behaviour in this example, see Figures 7 and 8.
Example 4.
Let F : A → ℶ be an operator and consider the variational inequality problem defined in the following way:
Find φ* ∈ A such that ⟨F(φ*), y − φ*⟩ ≥ 0, for all y ∈ A.
Assume that P : A → A is the mapping defined by P := P_A(I − λF), where 0 < λ < 2/L and L is the Lipschitz constant of the mapping F. Consider the constraint set A described by A = {φ ∈ ℝ⁴ : 1 ≤ φ_i ≤ 5, i = 1, 2, 3, 4} and the mapping F : ℝ⁴ → ℝ⁴ evaluated componentwise by
F(φ)_i = (φ_1 + φ_2 + φ_3 + φ_4)/4 − ∏_{j ≠ i} φ_j,   i = 1, 2, 3, 4.
To illustrate the behaviour in this example, see Tables 5 and 6; a runnable sketch is given after the parameter lists below.
  • Algorithm 1: ξ_n = 10/(n + 1)², η = 4.00, β_n = 0.90, α_n = 1/(100(n + 2)).
  • Algorithm 2: ξ_n = 10/(n + 1)², η = 4.00, η_n^3 = 0.90, η_n^1 = 1/(100(n + 2)), η_n^2 = 1/(200(n + 2)).
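The sketch below reuses algorithm3 for this example, assuming the componentwise form of F displayed above; λ = 0.01 is an illustrative step size of ours (the exact Lipschitz constant of F on the box is not computed here).

```python
import numpy as np

# F from Example 4: the coordinate average minus the product of the other three.
def F(phi):
    avg = np.sum(phi) / 4.0
    return avg - np.prod(phi) / phi   # prod(phi)/phi_i = prod_{j != i} phi_j

proj = lambda phi: np.clip(phi, 1.0, 5.0)   # projection onto the box [1, 5]^4
lam = 0.01                                  # illustrative 0 < lam < 2/L
P = lambda phi: proj(phi - lam * F(phi))
print(algorithm3(P, [1.0, 2.0, 3.0, 4.0], [1.0, 2.0, 3.0, 4.0], n_iter=98))
# the iterates approach (5, 5, 5, 5), cf. Tables 5 and 6
```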

6. Conclusions

In the subject of algorithms, effectiveness is measured by two main factors: reaching the desired point in the fewest possible iterations, and the elapsed time; when the time taken to obtain strong convergence is short, the results are good. There is no doubt that the paper [42] addressed many algorithms and proved, under symmetrical conditions, that its algorithm accelerates better than the previous ones. Based on the tables and figures above, we verified that our algorithm converges faster than the algorithm from [42], and hence faster than all the algorithms included in [42]. Additionally, the numerical results (tables and figures) show that our algorithm needs fewer iterations and less time to achieve the desired goal, and this is what makes our method successful in obtaining strong convergence to the fixed point compared with its symmetrical counterparts in the previous literature.

Author Contributions

H.A.H. contributed to conceptualization, investigation, methodology, validation, and writing the original draft; H.u.R. contributed to conceptualization and writing the original draft; H.A. contributed to funding acquisition, methodology, project administration, and writing and editing. All authors contributed equally and significantly to writing this article. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data sharing is not applicable to this article as no data sets are generated or analyzed during the current study.

Conflicts of Interest

The authors declare that they have no competing interests concerning the publication of this article.

Abbreviations

HSs   Hilbert spaces
CCS   convex closed subset
FPs   fixed points
NEMs  non-expansive mappings
MIPs  monotone inclusion problems
VIPs  variational inequality problems
MM    maximal monotone
ISM   inverse strongly monotone
BDF   backward differentiation formula

References

  1. Bauschke, H.H.; Combettes, P.L. Convex Analysis and Monotone Operator Theory in Hilbert Spaces; Springer: Berlin, Germany, 2011.
  2. Bauschke, H.H.; Borwein, J.M. On projection algorithms for solving convex feasibility problems. SIAM Rev. 1996, 38, 367–426.
  3. Chen, P.; Huang, J.; Zhang, X. A primal-dual fixed point algorithm for convex separable minimization with applications to image restoration. Inverse Probl. 2013, 29, 025011.
  4. Picard, E. Memoire sur la theorie des equations aux derivees partielles et la methode des approximations successives. J. Math. Pures Appl. 1890, 6, 145–210.
  5. Halpern, B. Fixed points of nonexpanding maps. Bull. Amer. Math. Soc. 1967, 73, 957–961.
  6. He, S.; Yang, C. Boundary point algorithms for minimum norm fixed points of nonexpansive mappings. Fixed Point Theory Appl. 2014, 2014, 1–9.
  7. An, N.T.; Nam, N.M.; Qin, X. Solving k-center problems involving sets based on optimization techniques. J. Global Optim. 2020, 76, 189–209.
  8. Liu, L. A hybrid steepest descent method for solving split feasibility problems involving nonexpansive mappings. J. Nonlinear Convex Anal. 2019, 20, 471–488.
  9. Hammad, H.A.; Rehman, H.U.; la Sen, M.D. Advanced algorithms and common solutions to variational inequalities. Symmetry 2020, 12, 1198.
  10. Combettes, P.L.; Wajs, V.R. Signal recovery by proximal forward–backward splitting. Multiscale Model. Simul. 2005, 4, 1168–1200.
  11. Douglas, J.; Rachford, H.H. On the numerical solution of the heat conduction problem in 2 and 3 space variables. Trans. Am. Math. Soc. 1956, 82, 421–439.
  12. Ansari, Q.H.; Islam, M.; Yao, J.C. Nonsmooth variational inequalities on Hadamard manifolds. Appl. Anal. 2020, 99, 340–358.
  13. Chang, S.S.; Wen, C.F.; Yao, J.C. Common zero point for a finite family of inclusion problems of accretive mappings in Banach spaces. Optimization 2018, 67, 1183–1196.
  14. Dang, Y.; Sun, J.; Xu, H. Inertial accelerated algorithms for solving a split feasibility problem. J. Ind. Manag. Optim. 2017, 13, 1383–1394.
  15. Hammad, H.A.; Rehman, H.U.; la Sen, M.D. Shrinking projection methods for accelerating relaxed inertial Tseng-type algorithm with applications. Math. Probl. Eng. 2020, 2020, 14.
  16. Boţ, R.I.; Csetnek, E.R.; Meier, D. Inducing strong convergence into the asymptotic behaviour of proximal splitting algorithms in Hilbert spaces. Optim. Methods Softw. 2019, 34, 489–514.
  17. Noeiaghdam, S.; Araghi, M.A.F. A novel approach to find optimal parameter in the homotopy-regularization method for solving integral equations. Appl. Math. Inf. Sci. 2020, 14, 1–8.
  18. Tikhonov, A.N. Regularization of incorrectly posed problems. Sov. Math. Dokl. 1963, 4, 1624–1627.
  19. Tikhonov, A.N. On the solution of incorrectly posed problem and the method of regularization. Sov. Math. 1963, 4, 1035–1038.
  20. Attouch, H. Viscosity solutions of minimization problems. SIAM J. Optim. 1996, 6, 769–806.
  21. Sahu, D.R.; Yao, J.C. The prox-Tikhonov regularization method for the proximal point algorithm in Banach spaces. J. Global Optim. 2011, 51, 641–655.
  22. Zheng, K.; Qiao, Z.; Cheng, W. A third order exponential time differencing numerical scheme for no-slope-selection epitaxial thin film model with energy stability. J. Sci. Comput. 2019, 81, 154–185.
  23. Feng, W.; Wang, C.; Wise, S.M.; Zhang, Z. A second-order energy stable backward differentiation formula method for the epitaxial thin film equation with slope selection. Numer. Methods Partial Differ. Equ. 2018, 34, 1975–2007.
  24. Yan, Y.; Chen, W.; Wang, C.; Wise, S.M. A second-order energy stable BDF numerical scheme for the Cahn–Hilliard equation. Commun. Comput. Phys. 2018, 23, 572–602.
  25. Wang, C.; Chen, W.; Li, W.; Luo, Z.; Wang, X. A stabilized second order exponential time differencing multistep method for thin film growth model without slope selection. ESAIM Math. Model. Numer. Anal. 2019, 54, 727–750.
  26. Meng, X.; Qiao, Z.; Wang, C.; Zhang, Z. Artificial regularization parameter analysis for the no-slope-selection epitaxial thin film model. CSIAM Trans. Appl. Math. 2020, 1, 441–462.
  27. Chen, W.; Wang, C.; Wang, X.; Wise, S.M. A linear iteration algorithm for a second-order energy stable scheme for a thin film model without slope selection. J. Sci. Comput. 2014, 59, 574–601.
  28. Cheng, K.; Wang, C.; Wise, S.M.; Yue, X. A second-order, weakly energy-stable pseudo-spectral scheme for the Cahn–Hilliard equation and its solution by the homogeneous linear iteration method. J. Sci. Comput. 2016, 69, 1083–1114.
  29. Polyak, B.T. Some methods of speeding up the convergence of iteration methods. USSR Comput. Math. Math. Phys. 1964, 4, 1–17.
  30. Dong, Q.L.; Lu, Y.Y.; Yang, J. The extragradient algorithm with inertial effects for solving the variational inequality. Optimization 2016, 65, 2217–2226.
  31. Fan, J.; Liu, L.; Qin, X. A subgradient extragradient algorithm with inertial effects for solving strongly pseudomonotone variational inequalities. Optimization 2020, 69, 2199–2215.
  32. Shehu, Y.; Li, X.H.; Dong, Q.L. An efficient projection-type method for monotone variational inequalities in Hilbert spaces. Numer. Algorithms 2020, 84, 365–388.
  33. Tan, B.; Xu, S.; Li, S. Inertial shrinking projection algorithms for solving hierarchical variational inequality problems. J. Nonlinear Convex Anal. 2020, 21, 871–884; 2193–2206.
  34. Maingé, P.E. Convergence theorems for inertial KM-type algorithms. J. Comput. Appl. Math. 2008, 219, 223–236.
  35. Dong, Q.L.; Yuan, H.B.; Cho, Y.J.; Rassias, T.M. Modified inertial Mann algorithm and inertial CQ-algorithm for nonexpansive mappings. Optim. Lett. 2018, 12, 87–102.
  36. Tuyen, T.M.; Hammad, H.A. Effect of shrinking projection and CQ-methods on two inertial forward–backward algorithms for solving variational inclusion problems. Rend. Circ. Mat. Palermo Ser. 2 2021, 1–15.
  37. Hammad, H.A.; Rahman, H.U.; Gaba, Y.U. Solving a split feasibility problem by the strong convergence of two projection algorithms in Hilbert spaces. J. Funct. Spaces 2021, 2021, 5562694.
  38. Berinde, V. Iterative Approximation of Fixed Points; Efemeride: Baia Mare, Romania, 2002.
  39. Takahashi, W. Nonlinear Functional Analysis; Yokohama Publishers: Yokohama, Japan, 2000.
  40. Xu, H.K. Iterative algorithms for nonlinear operators. J. Lond. Math. Soc. 2002, 66, 240–256.
  41. Beck, A.; Teboulle, M. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2009, 2, 183–202.
  42. Tan, B.; Cho, S.Y. An inertial Mann-like algorithm for fixed points of nonexpansive mappings in Hilbert spaces. J. Appl. Numer. Optim. 2020, 2, 335–351.
  43. Lehdili, N.; Moudafi, A. Combining the proximal algorithm and Tikhonov regularization. Optimization 1996, 37, 239–252.
  44. Sahu, D.R.; Ansari, Q.H.; Yao, J.C. The prox-Tikhonov-like forward-backward method and applications. Taiwanese J. Math. 2015, 19, 481–503.
  45. Kikuchi, N.; Oden, J.T. Contact Problems in Elasticity; SIAM: Philadelphia, PA, USA, 1988.
  46. Dafermos, S. Exchange price equilibria and variational inequalities. Math. Program. 1990, 46, 391–402.
  47. Bertsekas, D.P.; Gafni, E.M. Projection methods for variational inequalities with applications to the traffic assignment problem. Math. Prog. Study 1982, 17, 139–159.
  48. Harker, P.T. Predicting Intercity Freight Flows; VNU Science Press: Utrecht, The Netherlands, 1987.
  49. Baiocchi, C.; Capelo, A. Variational and Quasi-Variational Inequalities; Wiley: New York, NY, USA, 1984.
  50. Shatanawi, W.; Bataihah, A.; Tallafha, A. Four-step iteration scheme to approximate fixed point for weak contractions. Comput. Mater. Contin. 2020, 64, 1491–1504.
Figure 1. (For Example 1): Numerical illustration of Algorithm 1 and Algorithm 2 with φ_0 = φ_1 = 10.
Figure 2. (For Example 1): Numerical illustration of Algorithm 1 and Algorithm 2 with φ_0 = φ_1 = 50.
Figure 3. (For Example 2): Numerical illustration of Algorithm 1 and Algorithm 2 with φ_0 = φ_1 = 20.
Figure 4. (For Example 2): Numerical illustration of Algorithm 1 and Algorithm 2 with φ_0 = φ_1 = 40.
Figure 5. (For Example 2): Numerical illustration of Algorithm 1 and Algorithm 2 with φ_0 = φ_1 = 60.
Figure 6. (For Example 2): Numerical illustration of Algorithm 1 and Algorithm 2 with φ_0 = φ_1 = 100.
Figure 7. (For Example 3): Numerical illustration of Algorithm 1 and Algorithm 2 with φ_0 = φ_1 = 50.
Figure 8. (For Example 3): Numerical illustration of Algorithm 1 and Algorithm 2 with φ_0 = φ_1 = 100.
Table 1. Example 1: Numerical study of Algorithm 1 with φ_0 = φ_1 = 10.

Iter (n)  φ_n               Elapsed time (s)
1         9.52906283555083  0.0003853
2         8.98011598373419  0.0004156
3         8.54688915734646  0.0004351
4         8.20841316725670  0.0004468
5         7.92926653937987  0.0004531
6         7.69282498118123  0.0004591
...       ...               ...
53        5.99022147329775  0.0007752
54        5.98986811411892  0.0007816
55        5.98960142820344  0.0007885
56        5.98940627063036  0.0007957
57        5.98926993349447  0.0008036
58        5.98918181370462  0.0008099

CPU time in seconds: 0.001018
Table 2. Example 1: Numerical study of Algorithm 2 with φ_0 = φ_1 = 10.

Iter (n)  φ_n               Elapsed time (s)
1         9.31859425332624  0.0023392
2         8.56685420324029  0.0032773
3         8.05137989020016  0.0041990
4         8.20841316725670  0.0042091
5         7.35921191232196  0.0042124
6         7.11497686074145  0.0042160
...       ...               ...
32        5.99074431688176  0.0044811
33        5.99000145749959  0.0044869
34        5.98951950276241  0.0044931
35        5.98922945392330  0.0044997
36        5.98907905077440  0.0045057
37        5.98902923939168  0.0045116

CPU time in seconds: 0.004607
Table 3. Example 1: Numerical study of Algorithm 1 with φ_0 = φ_1 = 50.

Iter (n)  φ_n               Elapsed time (s)
1         43.8258084339244  0.0039262
2         38.4664933517316  0.0044479
3         34.0650599167521  0.0051613
4         30.3707650173503  0.0055962
5         27.2384850931689  0.0105704
6         24.5651883090573  0.0123661
...       ...               ...
81        5.99300078418602  0.0129177
82        5.99272976847512  0.0129221
83        5.99251398870649  0.0129267
84        5.99234530749335  0.0129311
85        5.99221614282412  0.0129354
86        5.99211987429995  0.0129405

CPU time in seconds: 0.012987
Table 4. Example 1: Numerical study of Algorithm 2 with φ_0 = φ_1 = 50.

Iter (n)  φ_n               Elapsed time (s)
1         40.8637126508866  0.0012314
2         33.6942279764521  0.0017103
3         28.2476004881787  0.0023184
4         24.0192826082008  0.0029645
5         20.6928823067080  0.0062987
6         18.0487192661908  0.0090428
...       ...               ...
51        5.99321372817750  0.0094243
52        5.99275867152001  0.0094308
53        5.99244504784734  0.0094372
54        5.99223879898860  0.0094432
55        5.99211264487947  0.0094495
56        5.99204555921263  0.0094557

CPU time in seconds: 0.009512
Table 5. Example 4: Numerical illustration of Algorithm 1 with φ_0 = φ_1 = (1, 2, 3, 4)^T.

Iter (n)  φ_1               φ_2               φ_3               φ_4
1         7.88110105259549  11.1335921052147  11.6608026315329  11.4916973684078
2         2.72517069971347  5.52196307009909  5.91062617486984  5.78231681677249
3         2.74650358169779  5.23892462205654  5.43540972965547  5.37058363048288
4         2.79589652551700  5.10560097798145  5.20481731038818  5.17209179402342
5         2.84815300477526  5.04321208318383  5.09330195427214  5.07678246381620
6         2.90242325540275  5.01444727730850  5.03972755805987  5.03139070422338
7         2.95876979857588  5.00154113107122  5.01429541422714  5.01008946137975
...       ...               ...               ...               ...
94        4.99949159554096  4.99949159554096  4.99949159554096  4.99949159554096
95        4.99949420145518  4.99949420145518  4.99949420145518  4.99949420145518
96        4.99949678078979  4.99949678078979  4.99949678078979  4.99949678078979
97        4.99949933394941  4.99949933394941  4.99949933394941  4.99949933394941
98        4.99950186133047  4.99950186133047  4.99950186133047  4.99950186133047

CPU time in seconds: 1.011633
Table 6. Example 4: Numerical illustration of Algorithm 2 with φ_0 = φ_1 = (1, 2, 3, 4)^T.

Iter (n)  φ_1                φ_2               φ_3               φ_4
1         4.94814814709884   20.1659074073104  20.2289444443514  18.9075370362809
2         −8.0467085849302   12.4597307883609  12.5054021946827  11.3787988702116
3         1.03972935440247   4.90132095088790  4.89452802770933  4.98533018397129
4         1.11973537635742   4.89218078676053  4.88537889965809  4.95475431398367
5         1.20128523014071   4.89169908830941  4.88493057181313  4.94635475329384
6         1.27774539110779   4.89625929375273  4.88953484996449  4.94790178020963
7         1.34983811312425   4.90378394656115  4.89710336213719  4.95376528318611
8         1.41896397454886   4.91334870977828  4.90670981629961  4.96199173340817
...       ...                ...               ...               ...
44        4.99948911782781   4.99948911782781  4.99948911782781  4.99948911782781
45        4.99949261719944   4.99949261719944  4.99949261719944  4.99949261719944
46        4.99949606895811   4.99949606895811  4.99949606895811  4.99949606895811
47        4.99949947406902   4.99949947406902  4.99949947406902  4.99949947406902
48        4.99950283347142   4.99950283347142  4.99950283347142  4.99950283347142

CPU time in seconds: 0.7115419