Article

Inertial Accelerated Algorithm for Fixed Point of Asymptotically Nonexpansive Mapping in Real Uniformly Convex Banach Spaces

by Murtala Haruna Harbau 1,2, Godwin Chidi Ugwunnadi 3,4,*, Lateef Olakunle Jolaoso 4 and Ahmad Abdulwahab 5

1 Department of Mathematical Sciences, Bayero University, Kano, Nigeria
2 Department of Science and Technology Education, Bayero University, Kano, Nigeria
3 Department of Mathematics, Faculty of Science and Engineering, University of Eswatini, Private Bag 4, Kwaluseni M201, Eswatini
4 Department of Mathematics and Applied Mathematics, Sefako Makgatho Health Sciences University, P.O. Box 94, Medunsa, Pretoria 0204, South Africa
5 Department of Mathematics, Federal College of Education, Katsina, Nigeria
* Author to whom correspondence should be addressed.
Axioms 2021, 10(3), 147; https://doi.org/10.3390/axioms10030147
Submission received: 15 February 2021 / Revised: 5 April 2021 / Accepted: 12 April 2021 / Published: 3 July 2021
(This article belongs to the Collection Mathematical Analysis and Applications)

Abstract

In this work, we introduce a new inertial accelerated Mann algorithm for finding a point in the set of fixed points of an asymptotically nonexpansive mapping in a real uniformly convex Banach space. We also establish weak and strong convergence theorems for the scheme. Finally, we present a numerical experiment to validate the performance of our algorithm and compare it with some existing methods. Our results generalize and improve some recent results in the literature.

1. Introduction

Let X be a real Banach space and C a nonempty closed and convex subset of X. Let $T : C \to C$ be a mapping. A point $x \in C$ is called a fixed point of T if $Tx = x$. We denote by $Fix(T)$ the set of all fixed points of T, that is, $Fix(T) := \{x \in C : Tx = x\}$. Then, the mapping $T : C \to C$ is said to be:
(i)
Nonexpansive if $\|Tx - Ty\| \le \|x - y\|$ for all $x, y \in C$;
(ii)
Asymptotically nonexpansive (see [1]) if there exists a sequence $\{k_n\} \subset [0, \infty)$ with $\lim_{n \to \infty} k_n = 0$ such that
$$\|T^n x - T^n y\| \le (1 + k_n)\|x - y\| \quad \text{for all } x, y \in C \text{ and } n \ge 1;$$
and
(iii)
Uniformly L-Lipschitzian if there exists a constant $L > 0$ such that $\|T^n x - T^n y\| \le L\|x - y\|$ for all $x, y \in C$ and $n \ge 1$.
The class of asymptotically nonexpansive mappings was first introduced and studied by Goebel and Kirk [1] as a generalization of the class of nonexpansive mappings. They proved that if C is a nonempty closed convex and bounded subset of a real uniformly convex Banach space and T is an asymptotically nonexpansive mapping on C, then T has a fixed point.
Many problems in pure and applied sciences, such as those arising in the theory of differential equations, optimization, game theory, image recovery, and signal processing (see [2,3,4,5,6] and the references therein), can be formulated as fixed-point problems for nonexpansive mappings. Iterative methods for approximating fixed points of nonexpansive and asymptotically nonexpansive mappings using the Mann and Ishikawa iterative processes have been studied by many authors. The Mann and Ishikawa methods were first studied for nonexpansive mappings and later modified for the convergence analysis of fixed points of asymptotically nonexpansive mappings; see, for example, [7,8,9,10,11] and the references therein. In 1978, Bose [12] initiated the study of iterative methods for approximating fixed points of an asymptotically nonexpansive mapping on a nonempty bounded closed convex subset C of a uniformly convex Banach space satisfying Opial's condition. Bose [12] proved that the sequence $\{T^n x\}$ converges weakly to a fixed point of the asymptotically nonexpansive mapping T, provided T is asymptotically regular at $x \in C$; that is, $\lim_{n \to \infty}\|T^{n+1}x - T^n x\| = 0$. Later, Schu [13,14] was the first to study the following modified Mann iteration process for approximating the fixed point of an asymptotically nonexpansive mapping T on a nonempty closed convex and bounded subset C of a Hilbert space and (resp.) a uniformly convex Banach space with Opial's condition. The modified Mann sequence $\{x_n\}$ is generated from an arbitrary $x_1 \in C$ and a control sequence $\{\alpha_n\}$ in $[0, 1]$ as follows:
$$x_{n+1} = (1 - \alpha_n)x_n + \alpha_n T^n x_n, \quad n \ge 1.$$
In 2000, Osilike and Aniagbosor [15] proved that the theorems of Schu [13,14] remain true without the boundedness condition imposed on C, provided that the fixed-point set of the asymptotically nonexpansive mapping is nonempty. Later, in 2015, Dong and Yuan [16] accelerated the convergence rate of Mann's iterative method [9] by combining Picard's method [10] with the conjugate gradient method [17]. Consequently, they obtained the following fast algorithm for a nonexpansive mapping in Hilbert space:
$$\begin{cases} d_{n+1} = \frac{1}{\lambda}\big(T(x_n) - x_n\big) + \beta_n d_n, \\ y_n = x_n + \lambda d_{n+1}, \\ x_{n+1} = \mu\gamma_n x_n + (1 - \mu\gamma_n)y_n, \quad n \ge 0, \end{cases} \tag{2}$$
where $\mu \in (0, 1]$, $\lambda > 0$, and $\{\gamma_n\}$ and $\{\beta_n\}$ are real nonnegative sequences. They proved weak convergence of the sequence $\{x_n\}$ in (2) under the following conditions:
(C1)
$\sum_{n=0}^{\infty}\mu\gamma_n(1 - \mu\gamma_n) = \infty$.
(C2)
$\sum_{n=0}^{\infty}\beta_n < \infty$.
(C3)
$\{T(x_n) - x_n\}$ is bounded.
Finally, they provided some numerical examples to validate that the accelerated Mann algorithm (2) is more efficient than the Mann algorithm.
On the other hand, inertial-type iterative methods, which are based on a discrete version of a second-order dissipative dynamical system [18,19,20], have been shown to improve the performance and increase the rate of convergence of iterative sequences (see [21,22,23,24,25,26] and the references therein). In [22], Dong et al. combined the accelerated Mann algorithm (2) with an inertial-type extrapolation step and proposed the following modified inertial Mann algorithm for nonexpansive mappings in Hilbert space:
$$\begin{cases} x_0, x_1 \in H, \\ w_n = x_n + \alpha_n(x_n - x_{n-1}), \\ d_{n+1} = \frac{1}{\lambda}\big(T(w_n) - w_n\big) + \beta_n d_n, \\ y_n = w_n + \lambda d_{n+1}, \\ x_{n+1} = \mu\gamma_n w_n + (1 - \mu\gamma_n)y_n, \quad n \ge 1, \end{cases} \tag{3}$$
where $\alpha_n \in [0, \alpha]$ is nonincreasing with $\alpha_1 = 0$ and $0 \le \alpha < 1$, and $\{\gamma_n\}$ satisfies
$$\delta > \frac{\alpha^2(1 + \alpha) + \alpha\delta}{1 - \alpha^2}$$
and
$$0 < 1 - \mu\gamma \le 1 - \mu\gamma_n \le \frac{\delta - \alpha\big[\alpha(1 + \alpha) + \alpha\delta + \sigma\big]}{\delta + \alpha\big[\alpha(1 + \alpha) + \alpha\delta + \sigma\big]},$$
where $\gamma, \sigma, \delta > 0$. Under the assumption that the sequence $\{w_n\}$ satisfies:
  • (D1) $\{Tw_n - w_n\}$ is bounded; and
  • (D2) $\{Tw_n - y\}$ is bounded for any $y \in Fix(T)$,
they proved that $\{x_n\}$ converges weakly to a point in $Fix(T)$.
Inspired and motivated by the above results, our purpose in this paper is to extend and generalize the result of Dong et al. [22] from nonexpansive mappings to asymptotically nonexpansive mappings in the setting of a real uniformly convex Banach space, which is more general than a Hilbert space. We use an inertial parameter which is different from the one in [22]. Finally, we give some numerical examples to validate the convergence of our algorithm.

2. Preliminaries

We use the following notations:
(i)
$\rightharpoonup$ for weak convergence and $\to$ for strong convergence.
(ii)
$\omega_w(x_n) = \{x : x_{n_k} \rightharpoonup x\}$ to denote the set of weak cluster points of $\{x_n\}$.
Definition 1.
A normed linear space X is said to be a uniformly convex Banach space if for any $\epsilon \in (0, 2]$ there exists $\delta(\epsilon) > 0$ such that for any $x, y \in X$ with $\|x\| \le 1$, $\|y\| \le 1$ and $\|x - y\| \ge \epsilon$, we have $\left\|\frac{x + y}{2}\right\| \le 1 - \delta(\epsilon)$.
Remark 1.
We observe from Definition 1 that every Hilbert space is a uniformly convex Banach space.
Definition 2.
Let X be a Banach space and $X^*$ its dual space. A mapping $J_\varphi : X \to 2^{X^*}$ associated with a gauge function $\varphi$ and defined by
$$J_\varphi(x) = \{x^* \in X^* : \langle x, x^* \rangle = \|x\|\,\|x^*\|,\ \|x^*\| = \varphi(\|x\|)\}$$
is called the generalized duality mapping, where $\varphi$ is defined by $\varphi(t) = t^{p-1}$ for all $t \ge 0$ and $1 < p < \infty$. In particular, if $p = 2$, then $J_2$ is known as the normalized duality map, written as J, which is defined by
$$J(x) = \{x^* \in X^* : \langle x, x^* \rangle = \|x\|^2,\ \|x^*\| = \|x\|\}.$$
The space X is said to have a weakly sequentially continuous duality map if $J_\varphi$ is single-valued and sequentially continuous from X with the weak topology to $X^*$ with the weak* topology.
Definition 3
(Browder, [27]). The duality mapping J is said to be weakly sequentially continuous if for any sequence $\{x_n\}$ in X such that $x_n \rightharpoonup x$, it follows that $J(x_n) \rightharpoonup J(x)$, where $\rightharpoonup$ denotes weak convergence.
Definition 4.
A mapping $T : C \to C$ is said to be:
(1) 
Demiclosed at $y_0 \in C$ if for any sequence $\{x_n\}$ in C which converges weakly to $x_0 \in C$ and $Tx_n \to y_0$, it holds that $Tx_0 = y_0$;
(2) 
Semicompact if for any bounded sequence $\{x_n\}$ in C such that $\lim_{n \to \infty}\|x_n - Tx_n\| = 0$, there exists a subsequence $\{x_{n_k}\} \subset \{x_n\}$ such that $x_{n_k} \to x \in C$.
The following lemmas will be needed in the proof of the main results.
Lemma 1
(see [28], Opial's property). If in a Banach space X having a weakly continuous duality mapping J the sequence $\{x_n\}$ converges weakly to $x_0$, then for any $x \in X$:
$$\liminf_{n \to \infty}\|x_n - x\| \ge \liminf_{n \to \infty}\|x_n - x_0\|.$$
In particular, if the space X is uniformly convex, then equality holds if and only if $x = x_0$.
It is known that every Hilbert space and every $\ell^p$ space, $1 \le p < \infty$, satisfies Opial's condition; however, $L^p$ with $p \ne 2$ does not satisfy this condition (see [29] for more details). Additionally, it is clear from [30] that every Banach space with a weakly sequentially continuous duality mapping satisfies Opial's condition. An example of a space with a weakly sequentially continuous duality map is $\ell^p$ ($1 < p < \infty$).
Lemma 2
(see [11]). Let X be a real uniformly convex Banach space, let C be a nonempty closed convex subset of X, and let $T : C \to C$ be an asymptotically nonexpansive mapping with a sequence $\{k_n\} \subset [0, \infty)$ and $\lim_{n \to \infty}k_n = 0$. Then, the mapping $(I - T)$ is demiclosed at zero.
Lemma 3
(see [15], Lemma 1). Let $\{a_n\}$, $\{b_n\}$, and $\{c_n\}$ be nonnegative sequences such that
$$a_{n+1} \le (1 + c_n)a_n + b_n \quad \text{for all } n \ge 0,$$
with $\sum_{n=0}^{\infty}b_n < +\infty$ and $\sum_{n=0}^{\infty}c_n < +\infty$. Then
(i) 
The sequence $\{a_n\}$ converges.
(ii) 
In particular, if $\liminf_{n \to \infty}a_n = 0$, then $\lim_{n \to \infty}a_n = 0$.
Lemma 4
(see [31]). Let $r > 0$ be a fixed number. Then, a real Banach space X is uniformly convex if and only if there exists a continuous and strictly increasing function $g : [0, \infty) \to [0, \infty)$ with $g(0) = 0$ such that
$$\|\lambda x + (1 - \lambda)y\|^2 \le \lambda\|x\|^2 + (1 - \lambda)\|y\|^2 - \lambda(1 - \lambda)g(\|x - y\|)$$
for all $x, y \in B_r = \{x \in X : \|x\| \le r\}$ and $\lambda \in [0, 1]$.
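For orientation, in a real Hilbert space the inequality of Lemma 4 holds with $g(t) = t^2$ and, in fact, with equality. The following short Python snippet is an illustrative check added here (it is not part of the original argument); the random vectors and the dimension are arbitrary choices.

```python
import numpy as np

# In a Hilbert space: ||l*x + (1-l)*y||^2 = l*||x||^2 + (1-l)*||y||^2 - l*(1-l)*||x - y||^2,
# i.e. Lemma 4 holds with g(t) = t^2 and with equality.
rng = np.random.default_rng(1)
x, y = rng.standard_normal(5), rng.standard_normal(5)
for lam in (0.25, 0.5, 0.9):
    lhs = np.linalg.norm(lam * x + (1 - lam) * y) ** 2
    rhs = (lam * np.linalg.norm(x) ** 2 + (1 - lam) * np.linalg.norm(y) ** 2
           - lam * (1 - lam) * np.linalg.norm(x - y) ** 2)
    print(np.isclose(lhs, rhs))  # True for every lam
```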

3. Main Results

In this section, we prove weak and strong convergence theorems for an asymptotically nonexpansive mapping in a real uniformly convex Banach space.
Weak Convergence Theorem
Assumption 1.
Let X be a real uniformly convex Banach space.
(i) 
Choose sequences $\{\alpha_n\} \subset (0, 1)$ and $\{\beta_n\}, \{\delta_n\} \subset [0, \infty)$ with $\sum_{n=1}^{\infty}\delta_n < \infty$ and $\delta_n = o(\beta_n)$, which means $\lim_{n \to \infty}\frac{\delta_n}{\beta_n} = 0$.
(ii) 
Let $x_0, x_1 \in X$ be arbitrary points. Given the iterates $x_{n-1}$ and $x_n$, for each $n \ge 1$ choose $\theta_n$ such that $0 \le \theta_n \le \bar{\theta}_n$, where, for any $\eta \ge 3$,
$$\bar{\theta}_n := \begin{cases} \min\left\{\dfrac{n - 1}{n + \eta - 1},\ \dfrac{\delta_n}{\|x_n - x_{n-1}\|}\right\}, & \text{if } x_n \ne x_{n-1}; \\[2mm] \dfrac{n - 1}{n + \eta - 1}, & \text{otherwise}. \end{cases}$$
This idea was obtained from the recent inertial extrapolation step introduced in [32].
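As an illustration of Assumption 1(ii), the following Python snippet (a minimal sketch added here; the paper's own experiments are in MATLAB) computes the bound $\bar{\theta}_n$. The arguments delta_n and eta are assumed to be supplied by the user.

```python
import numpy as np

def theta_bar(n, x_n, x_prev, delta_n, eta=3.0):
    """Bound for the inertial parameter theta_n in Assumption 1 (eta >= 3).

    Returns min{(n-1)/(n+eta-1), delta_n/||x_n - x_{n-1}||} if x_n != x_{n-1},
    and (n-1)/(n+eta-1) otherwise; any theta_n in [0, theta_bar] is admissible.
    """
    base = (n - 1) / (n + eta - 1)
    diff = np.linalg.norm(np.asarray(x_n) - np.asarray(x_prev))
    return min(base, delta_n / diff) if diff > 0 else base
```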
Remark 2.
It is easy to see from Assumption 1 that for each $n \ge 1$, we have
$$\theta_n\|x_n - x_{n-1}\| \le \delta_n,$$
which, together with $\sum_{n=1}^{\infty}\delta_n < \infty$ and $\lim_{n \to \infty}\frac{\delta_n}{\beta_n} = 0$, respectively gives
$$\sum_{n=1}^{\infty}\theta_n\|x_n - x_{n-1}\| < \infty \tag{5}$$
and
$$\lim_{n \to \infty}\frac{\theta_n}{\beta_n}\|x_n - x_{n-1}\| \le \lim_{n \to \infty}\frac{\delta_n}{\beta_n} = 0. \tag{6}$$
Theorem 1.
Let X be a real uniformly convex Banach space with Opial's property. Let $T : X \to X$ be an asymptotically nonexpansive mapping with sequence $\{k_n\} \subset [0, \infty)$ such that $\sum_{n=0}^{\infty}k_n < \infty$ and $Fix(T) \ne \emptyset$. Let $\{x_n\}$ be the sequence generated as follows:
$$\begin{cases} x_0, x_1 \in X, \\ w_n = x_n + \theta_n(x_n - x_{n-1}), \\ d_{n+1} = \frac{1}{\lambda}\big(T^n(w_n) - w_n\big) + \beta_n d_n, \\ y_n = w_n + \lambda d_{n+1}, \\ x_{n+1} = \mu\alpha_n w_n + (1 - \mu\alpha_n)y_n, \quad n \ge 1, \end{cases} \tag{7}$$
where $\mu \in (0, 1]$ and $\lambda > 0$, assuming that Assumption 1 holds, and set $d_1 = \frac{1}{\lambda}(T^1w_0 - w_0)$. Then, the sequence $\{x_n\}$ converges weakly to a point $x \in Fix(T)$, provided that the following conditions hold:
(C1) $\sum_{n=0}^{\infty}\beta_n < \infty$.
(C2) $\liminf_{n \to \infty}\mu\alpha_n(1 - \mu\alpha_n) > 0$.
Moreover, $\{w_n\}$ satisfies
(C3) $\{T^n w_n - w_n\}$ is bounded.
Proof. 
We divide the proof into the following steps:
Step (i): We show that $\{d_n\}$ is bounded.
We have from (C1) that $\lim_{n \to \infty}\beta_n = 0$; thus, there exists $n_0 \in \mathbb{N}$ such that $\beta_n \le \frac{1}{2}$ for all $n \ge n_0$. Let $M_1$ be defined as follows:
$$M_1 := \max\left\{\max_{1 \le k \le n_0}\|d_k\|,\ \frac{2}{\lambda}\sup_{n \in \mathbb{N}}\|T^n w_n - w_n\|\right\}.$$
Then, by (C3), we have $M_1 < \infty$. Assume that $\|d_n\| \le M_1$ for some $n \ge n_0$; then
$$\|d_{n+1}\| = \left\|\frac{1}{\lambda}(T^n w_n - w_n) + \beta_n d_n\right\| \le \frac{1}{\lambda}\|T^n w_n - w_n\| + \beta_n\|d_n\| \le M_1.$$
This implies that
$$\|d_n\| \le M_1 \quad \text{for all } n \ge 1, \tag{8}$$
and consequently $\{d_n\}$ is bounded.
Step (ii): We show that $\lim_{n \to \infty}\|x_n - p\|$ exists for any $p \in Fix(T)$.
From the scheme (7), we have
$$y_n = w_n + \lambda d_{n+1} = w_n + \lambda\left[\frac{1}{\lambda}(T^n w_n - w_n) + \beta_n d_n\right] = T^n w_n + \lambda\beta_n d_n. \tag{9}$$
By (8), (9), and for any $p \in Fix(T)$, we have
$$\|y_n - p\| = \|T^n w_n + \lambda\beta_n d_n - p\| \le \|T^n w_n - p\| + \lambda\beta_n\|d_n\| \le (1 + k_n)\|w_n - p\| + \lambda M_1\beta_n. \tag{10}$$
Additionally,
$$\|w_n - p\| = \|x_n - p + \theta_n(x_n - x_{n-1})\| \le \|x_n - p\| + \theta_n\|x_n - x_{n-1}\|. \tag{11}$$
Combining (10) and (11), we obtain
$$\|y_n - p\| \le (1 + k_n)\big[\|x_n - p\| + \theta_n\|x_n - x_{n-1}\|\big] + \lambda M_1\beta_n = (1 + k_n)\|x_n - p\| + \beta_n\left[(1 + k_n)\frac{\theta_n}{\beta_n}\|x_n - x_{n-1}\| + \lambda M_1\right].$$
By (6) in Remark 2, we know that the sequence $\frac{\theta_n}{\beta_n}\|x_n - x_{n-1}\|$ converges, and since $\sum_{n=1}^{\infty}k_n < \infty$, the sequence $\{k_n\}$ also converges; so there exists some constant, say $M_2 > 0$, such that for all $n \ge 1$
$$(1 + k_n)\frac{\theta_n}{\beta_n}\|x_n - x_{n-1}\| + \lambda M_1 \le M_2;$$
thus
$$\|y_n - p\| \le (1 + k_n)\|x_n - p\| + \beta_n M_2. \tag{12}$$
Now, using (10), (12), and for some $M_3 > 0$ and any $p \in Fix(T)$, we have
$$\begin{aligned} \|x_{n+1} - p\| &= \|\mu\alpha_n w_n + (1 - \mu\alpha_n)y_n - p\| \le \mu\alpha_n\|w_n - p\| + (1 - \mu\alpha_n)\|y_n - p\| \\ &\le \mu\alpha_n\|x_n - p\| + \mu\alpha_n\theta_n\|x_n - x_{n-1}\| + (1 - \mu\alpha_n)\big[(1 + k_n)\|x_n - p\| + \beta_n M_2\big] \\ &\le \big[1 + (1 - \mu\alpha_n)k_n\big]\|x_n - p\| + \beta_n M_3. \end{aligned}$$
Therefore,
$$\|x_{n+1} - p\| \le (1 + k_n)\|x_n - p\| + \beta_n M_3. \tag{13}$$
Hence, using the fact that $\sum_{n=1}^{\infty}k_n < \infty$ together with condition (C1) and Lemma 3 in (13), we get that $\lim_{n \to \infty}\|x_n - p\|$ exists. Consequently, the sequence $\{x_n\}$ is bounded.
Step (iii): Next we show that $\lim_{n \to \infty}\|x_n - Tx_n\| = 0$.
Since the sequence $\{x_n\}$ is bounded, it follows that $\{w_n\}$ is bounded and consequently $\{T^n w_n\}$ is bounded. Let $r = \sup_{n \ge 1}\{\|w_n\|, \|T^n w_n\|\}$. Then, for any $p \in Fix(T)$ and since X is a uniformly convex Banach space, by Lemma 4 there exists a continuous and strictly increasing function $g : [0, \infty) \to [0, \infty)$ with $g(0) = 0$ such that
$$\|\lambda x + (1 - \lambda)y\|^2 \le \lambda\|x\|^2 + (1 - \lambda)\|y\|^2 - \lambda(1 - \lambda)g(\|x - y\|)$$
for all $x, y \in B_r = \{x \in X : \|x\| \le r\}$ and $\lambda \in [0, 1]$.
Therefore,
$$\begin{aligned}
\|x_{n+1} - p\|^2 &= \|\mu\alpha_n w_n + (1 - \mu\alpha_n)y_n - p\|^2 \\
&= \|\mu\alpha_n w_n + (1 - \mu\alpha_n)(T^n w_n + \lambda\beta_n d_n) - p\|^2 \\
&= \|\mu\alpha_n(w_n - p) + (1 - \mu\alpha_n)(T^n w_n - p) + (1 - \mu\alpha_n)\lambda\beta_n d_n\|^2 \\
&\le \big[\|\mu\alpha_n(w_n - p) + (1 - \mu\alpha_n)(T^n w_n - p)\| + (1 - \mu\alpha_n)\lambda\beta_n\|d_n\|\big]^2 \\
&= \|\mu\alpha_n(w_n - p) + (1 - \mu\alpha_n)(T^n w_n - p)\|^2 + (1 - \mu\alpha_n)^2\lambda^2\beta_n^2\|d_n\|^2 \\
&\quad + 2(1 - \mu\alpha_n)\lambda\beta_n\|d_n\|\,\|\mu\alpha_n(w_n - p) + (1 - \mu\alpha_n)(T^n w_n - p)\| \\
&\le \mu\alpha_n\|w_n - p\|^2 + (1 - \mu\alpha_n)\|T^n w_n - p\|^2 - \mu\alpha_n(1 - \mu\alpha_n)g(\|w_n - T^n w_n\|) \\
&\quad + 2(1 - \mu\alpha_n)\lambda\beta_n\|d_n\|\,\|\mu\alpha_n(w_n - p) + (1 - \mu\alpha_n)(T^n w_n - p)\| + (1 - \mu\alpha_n)^2\lambda^2\beta_n^2\|d_n\|^2 \\
&\le \mu\alpha_n\|w_n - p\|^2 + (1 - \mu\alpha_n)(1 + k_n)^2\|w_n - p\|^2 - \mu\alpha_n(1 - \mu\alpha_n)g(\|w_n - T^n w_n\|) \\
&\quad + 2(1 - \mu\alpha_n)\lambda\beta_n\|d_n\|\big[\mu\alpha_n\|w_n - p\| + (1 - \mu\alpha_n)\|T^n w_n - p\|\big] + (1 - 2\mu\alpha_n + \mu^2\alpha_n^2)\lambda^2\beta_n^2\|d_n\|^2 \\
&\le \mu\alpha_n\|w_n - p\|^2 + (1 - \mu\alpha_n)(1 + 2k_n + k_n^2)\|w_n - p\|^2 - \mu\alpha_n(1 - \mu\alpha_n)g(\|w_n - T^n w_n\|) + \lambda^2\beta_n^2\|d_n\|^2 \\
&\quad + 2(1 - \mu\alpha_n)\lambda\beta_n\|d_n\|\big[\mu\alpha_n\|w_n - p\| + (1 - \mu\alpha_n)(1 + k_n)\|w_n - p\|\big] - 2\mu\alpha_n\lambda^2\beta_n^2\|d_n\|^2 + \mu^2\alpha_n^2\lambda^2\beta_n^2\|d_n\|^2 \\
&\le \|w_n - p\|^2 + 2k_n\|w_n - p\|^2 + k_n^2\|w_n - p\|^2 - \mu\alpha_n(1 - \mu\alpha_n)g(\|w_n - T^n w_n\|) \\
&\quad + 2(1 - \mu\alpha_n)\lambda\beta_n\|d_n\|\big[\|w_n - p\| + k_n\|w_n - p\| - \mu\alpha_n k_n\|w_n - p\|\big] + \lambda^2\beta_n^2\|d_n\|^2 + \mu^2\lambda^2\alpha_n^2\beta_n^2\|d_n\|^2 \\
&= \|w_n - p\|^2 + 2k_n\|w_n - p\|^2 + k_n^2\|w_n - p\|^2 - \mu\alpha_n(1 - \mu\alpha_n)g(\|w_n - T^n w_n\|) \\
&\quad + 2\lambda\beta_n\|d_n\|\|w_n - p\| + 2\lambda\beta_n k_n\|d_n\|\|w_n - p\| - 2\lambda\beta_n\mu\alpha_n k_n\|d_n\|\|w_n - p\| - 2\mu\alpha_n\lambda\beta_n\|d_n\|\|w_n - p\| \\
&\quad - 2\mu\alpha_n\lambda\beta_n k_n\|d_n\|\|w_n - p\| + \lambda^2\beta_n^2\|d_n\|^2 + 2\mu^2\alpha_n^2\lambda\beta_n k_n\|d_n\|\|w_n - p\| + \mu^2\alpha_n^2\lambda^2\beta_n^2\|d_n\|^2 \\
&\le \|w_n - p\|^2 + 2k_n\|w_n - p\|^2 + k_n^2\|w_n - p\|^2 - \mu\alpha_n(1 - \mu\alpha_n)g(\|w_n - T^n w_n\|) \\
&\quad + 2\lambda\beta_n\|d_n\|\|w_n - p\| + 2\lambda\beta_n k_n\|d_n\|\|w_n - p\| + \lambda^2\beta_n^2\|d_n\|^2 + \mu^2\alpha_n^2\lambda^2\beta_n^2\|d_n\|^2 \\
&\le \|w_n - p\|^2 + 2k_n\|w_n - p\|^2 + k_n^2\|w_n - p\|^2 - \mu\alpha_n(1 - \mu\alpha_n)g(\|w_n - T^n w_n\|) \\
&\quad + 2\lambda\beta_n\|d_n\|\|w_n - p\| + 2\lambda\beta_n k_n\|d_n\|\|w_n - p\| + 2\lambda^2\beta_n^2\|d_n\|^2.
\end{aligned}$$
Since $\lim_{n \to \infty}\|x_n - p\|$ exists for any $p \in Fix(T)$, using (5) it follows from (11) that there exists $L > 0$ such that $\|w_n - p\| \le L$ for any $p \in Fix(T)$; hence, using (8), we have
$$\begin{aligned} \|x_{n+1} - p\|^2 &\le \|w_n - p\|^2 + 2L^2k_n + L^2k_n^2 - \mu\alpha_n(1 - \mu\alpha_n)g(\|w_n - T^n w_n\|) + 2\lambda M_1L\beta_n + 2\lambda M_1Lk_n\beta_n + 2\lambda^2M_1^2\beta_n^2 \\ &\le \big(\|x_n - p\| + \theta_n\|x_n - x_{n-1}\|\big)^2 + 2L^2k_n + L^2k_n^2 - \mu\alpha_n(1 - \mu\alpha_n)g(\|w_n - T^n w_n\|) + 2\lambda M_1L\beta_n + 2\lambda M_1Lk_n\beta_n + 2\lambda^2M_1^2\beta_n^2 \\ &= \|x_n - p\|^2 + 2\theta_n\|x_n - x_{n-1}\|\,\|x_n - p\| + \big(\theta_n\|x_n - x_{n-1}\|\big)^2 + 2L^2k_n + L^2k_n^2 - \mu\alpha_n(1 - \mu\alpha_n)g(\|w_n - T^n w_n\|) \\ &\quad + 2\lambda M_1L\beta_n + 2\lambda M_1Lk_n\beta_n + 2\lambda^2M_1^2\beta_n^2. \end{aligned}$$
Since $\lim_{n \to \infty}\|x_n - p\|$ exists for any $p \in Fix(T)$, the sequence $\{\|x_n - p\|\}$ is bounded; therefore, there exists $H > 0$ such that $\|x_n - p\| \le H$ for all $n \ge 1$. Hence,
$$\begin{aligned} \|x_{n+1} - p\|^2 &\le \|x_n - p\|^2 + 2\theta_n\|x_n - x_{n-1}\|H + \big(\theta_n\|x_n - x_{n-1}\|\big)^2 + 2L^2k_n + L^2k_n^2 - \mu\alpha_n(1 - \mu\alpha_n)g(\|w_n - T^n w_n\|) \\ &\quad + 2\lambda M_1L\beta_n + 2\lambda M_1Lk_n\beta_n + 2\lambda^2M_1^2\beta_n^2. \end{aligned}$$
Therefore,
$$\begin{aligned} \mu\alpha_n(1 - \mu\alpha_n)g(\|w_n - T^n w_n\|) &\le \|x_n - p\|^2 - \|x_{n+1} - p\|^2 + 2H\theta_n\|x_n - x_{n-1}\| + \big(\theta_n\|x_n - x_{n-1}\|\big)^2 + 2L^2k_n + L^2k_n^2 \\ &\quad + 2\lambda M_1L\beta_n + 2\lambda M_1Lk_n\beta_n + 2\lambda^2M_1^2\beta_n^2. \end{aligned}$$
Hence,
$$\begin{aligned} \sum_{n=0}^{\infty}\mu\alpha_n(1 - \mu\alpha_n)g(\|w_n - T^n w_n\|) &\le \sum_{n=0}^{\infty}\big(\|x_n - p\|^2 - \|x_{n+1} - p\|^2\big) + 2H\sum_{n=0}^{\infty}\theta_n\|x_n - x_{n-1}\| + \sum_{n=0}^{\infty}\big(\theta_n\|x_n - x_{n-1}\|\big)^2 \\ &\quad + 2L^2\sum_{n=0}^{\infty}k_n + L^2\sum_{n=0}^{\infty}k_n^2 + 2\lambda M_1L\sum_{n=0}^{\infty}\beta_n + 2\lambda M_1L\sum_{n=0}^{\infty}k_n\beta_n + 2\lambda^2M_1^2\sum_{n=0}^{\infty}\beta_n^2 \\ &\le \|x_0 - p\|^2 + 2H\sum_{n=0}^{\infty}\theta_n\|x_n - x_{n-1}\| + \sum_{n=0}^{\infty}\big(\theta_n\|x_n - x_{n-1}\|\big)^2 + 2L^2\sum_{n=0}^{\infty}k_n + L^2\sum_{n=0}^{\infty}k_n^2 \\ &\quad + 2\lambda M_1L\sum_{n=0}^{\infty}\beta_n + 2\lambda M_1L\sum_{n=0}^{\infty}k_n\beta_n + 2\lambda^2M_1^2\sum_{n=0}^{\infty}\beta_n^2. \end{aligned}$$
Using (C1), (5), and $\sum_{n=0}^{\infty}k_n < \infty$, we obtain
$$\sum_{n=0}^{\infty}\mu\alpha_n(1 - \mu\alpha_n)g(\|w_n - T^n w_n\|) < \infty,$$
which implies that
$$\lim_{n \to \infty}\mu\alpha_n(1 - \mu\alpha_n)g(\|w_n - T^n w_n\|) = 0.$$
By (C2), we have
$$\lim_{n \to \infty}g(\|w_n - T^n w_n\|) = 0.$$
Using the property of g, we have
$$\lim_{n \to \infty}\|w_n - T^n w_n\| = 0. \tag{14}$$
Now,
$$\|w_n - x_n\| = \|x_n + \theta_n(x_n - x_{n-1}) - x_n\| = \theta_n\|x_n - x_{n-1}\|.$$
Taking the sum over n of both sides and considering (5), we have
$$\sum_{n=0}^{\infty}\|w_n - x_n\| = \sum_{n=0}^{\infty}\theta_n\|x_n - x_{n-1}\| < \infty,$$
which implies that
$$\lim_{n \to \infty}\|w_n - x_n\| = 0. \tag{15}$$
Since T is asymptotically nonexpansive, we obtain
$$\begin{aligned} \|x_n - T^n x_n\| &= \|x_n - w_n + w_n - T^n w_n + T^n w_n - T^n x_n\| \\ &\le \|x_n - w_n\| + \|w_n - T^n w_n\| + \|T^n w_n - T^n x_n\| \\ &\le \|x_n - w_n\| + \|w_n - T^n w_n\| + (1 + k_n)\|w_n - x_n\| \\ &= (2 + k_n)\|w_n - x_n\| + \|w_n - T^n w_n\|. \end{aligned} \tag{16}$$
Using (14) and (15) in (16), we have
$$\lim_{n \to \infty}\|x_n - T^n x_n\| = 0. \tag{17}$$
On the other hand,
$$\begin{aligned} \|w_n - y_n\| &= \|w_n - (w_n + \lambda d_{n+1})\| = \|\lambda d_{n+1}\| = \lambda\|d_{n+1}\| \\ &\le \lambda\left[\frac{1}{\lambda}\|w_n - T^n w_n\| + \beta_n\|d_n\|\right] = \|w_n - T^n w_n\| + \lambda\beta_n\|d_n\| \le \|w_n - T^n w_n\| + \lambda M_1\beta_n. \end{aligned}$$
Using (14) and (C1), we have
$$\lim_{n \to \infty}\|w_n - y_n\| = 0. \tag{18}$$
Similarly, using (8), we have
$$\begin{aligned} \|x_{n+1} - T^n x_{n+1}\| &= \|\mu\alpha_n w_n + (1 - \mu\alpha_n)y_n - T^n x_{n+1}\| \\ &= \|\mu\alpha_n(w_n - T^n w_n) + (1 - \mu\alpha_n)\lambda\beta_n d_n + (T^n w_n - T^n x_{n+1})\| \\ &\le \mu\alpha_n\|w_n - T^n w_n\| + (1 - \mu\alpha_n)\lambda\beta_n\|d_n\| + \|T^n w_n - T^n x_{n+1}\| \\ &\le \mu\alpha_n\|w_n - T^n w_n\| + \lambda\beta_n\|d_n\| + (1 + k_n)\|w_n - x_{n+1}\| \\ &= \mu\alpha_n\|w_n - T^n w_n\| + \lambda\beta_n\|d_n\| + (1 + k_n)(1 - \mu\alpha_n)\|w_n - y_n\| \\ &\le \mu\alpha_n\|w_n - T^n w_n\| + \lambda M_1\beta_n + (1 + k_n)\|w_n - y_n\|. \end{aligned}$$
It follows from (14), (18), and (C1) that
$$\lim_{n \to \infty}\|x_{n+1} - T^n x_{n+1}\| = 0. \tag{19}$$
Thus,
$$\begin{aligned} \|x_{n+1} - Tx_{n+1}\| &= \|x_{n+1} - T^{n+1}x_{n+1} + T^{n+1}x_{n+1} - Tx_{n+1}\| \\ &\le \|x_{n+1} - T^{n+1}x_{n+1}\| + \|Tx_{n+1} - T^{n+1}x_{n+1}\| \\ &= \|x_{n+1} - T^{n+1}x_{n+1}\| + \|Tx_{n+1} - T(T^n x_{n+1})\| \\ &\le \|x_{n+1} - T^{n+1}x_{n+1}\| + (1 + k_1)\|x_{n+1} - T^n x_{n+1}\|. \end{aligned} \tag{20}$$
From (17) and (20), we have
$$\lim_{n \to \infty}\|x_n - Tx_n\| = 0. \tag{21}$$
This completes the proof of (iii).
Since $\{x_n\}$ is bounded and X is a reflexive Banach space, there exists a subsequence $\{x_{n_k}\}$ of $\{x_n\}$ which converges weakly to a point $p \in X$. Therefore, from (21), it follows that $\lim_{k \to \infty}\|x_{n_k} - Tx_{n_k}\| = 0$, and consequently, by Lemma 2, we have $Tp = p$. Therefore, we obtain that $\omega_w(x_n) \subseteq Fix(T)$.
Now, to prove that the sequence { x n } converges weakly to a fixed point of T, it suffices to show that ω w ( x n ) is a singleton. To do that, we proceed as follows.
Since X satisfies Opial's property, using Lemma 1, take $p_1, p_2 \in \omega_w(x_n)$ and let $\{x_{n_i}\}$ and $\{x_{n_j}\}$ be subsequences of $\{x_n\}$ such that $x_{n_i} \rightharpoonup p_1$ and $x_{n_j} \rightharpoonup p_2$. Then, for $p_1 \ne p_2$, we have
$$\lim_{n \to \infty}\|x_n - p_1\| = \lim_{i \to \infty}\|x_{n_i} - p_1\| < \liminf_{i \to \infty}\|x_{n_i} - p_2\| = \lim_{n \to \infty}\|x_n - p_2\| = \lim_{j \to \infty}\|x_{n_j} - p_2\| < \liminf_{j \to \infty}\|x_{n_j} - p_1\| = \lim_{n \to \infty}\|x_n - p_1\|.$$
This is a contradiction, showing that ω w ( x n ) is a singleton. This completes the proof. □
Now we prove a strong convergence theorem.
Theorem 2.
If in addition to all the hypotheses of Theorem 1, the map T is semicompact, then the iterative sequence { x n } generated by (7) converges strongly to a fixed point of T.
Proof. 
Assume that T is semicompact. From Steps (ii) and (iii) in the proof of Theorem 1, we know that the sequence $\{x_n\}$ is bounded and $\lim_{n \to \infty}\|x_n - Tx_n\| = 0$; hence there exists a subsequence $\{x_{n_k}\}$ of $\{x_n\}$ such that $x_{n_k} \to x$ as $k \to \infty$. Therefore $x_{n_k} \rightharpoonup x$ and so $x \in \omega_w(x_n) \subseteq Fix(T)$. From Step (ii) in the proof of Theorem 1, $\lim_{n \to \infty}\|x_n - x\|$ exists; then
$$\lim_{n \to \infty}\|x_n - x\| = \lim_{k \to \infty}\|x_{n_k} - x\| = 0,$$
which means that $x_n \to x \in Fix(T)$. This completes the proof. □
If in Theorem 1 we assume that T is nonexpansive, we obtain the following corollary.
Corollary 1.
Let X be a real uniformly convex Banach space with Opial's property and let $T : X \to X$ be a nonexpansive mapping with $Fix(T) \ne \emptyset$. Let $\{x_n\}$ be the sequence generated as follows:
$$\begin{cases} x_0, x_1 \in X, \\ w_n = x_n + \theta_n(x_n - x_{n-1}), \\ d_{n+1} = \frac{1}{\lambda}\big(T(w_n) - w_n\big) + \beta_n d_n, \\ y_n = w_n + \lambda d_{n+1}, \\ x_{n+1} = \mu\alpha_n w_n + (1 - \mu\alpha_n)y_n, \quad n \ge 1, \end{cases}$$
where $\mu \in (0, 1]$ and $\lambda > 0$, assuming that Assumption 1 holds, and set $d_0 := \frac{1}{\lambda}(Tw_0 - w_0)$. Then, the sequence $\{x_n\}$ converges weakly to a point $x \in Fix(T)$, provided that the following conditions hold:
(C1) $\sum_{n=0}^{\infty}\beta_n < \infty$.
(C2) $\liminf_{n \to \infty}\mu\alpha_n(1 - \mu\alpha_n) > 0$.
Moreover, $\{w_n\}$ satisfies
(C3) $\{Tw_n - w_n\}$ is bounded.
If in Theorem 1 we assume that X is a real Hilbert space, we get the following corollary.
Corollary 2.
Let H be a real Hilbert space. Let $T : H \to H$ be an asymptotically nonexpansive mapping with sequence $\{k_n\} \subset [0, \infty)$ such that $\sum_{n=0}^{\infty}k_n < \infty$ and $Fix(T) \ne \emptyset$. Let $\{x_n\}$ be the sequence generated as follows:
$$\begin{cases} x_0, x_1 \in H, \\ w_n = x_n + \theta_n(x_n - x_{n-1}), \\ d_{n+1} = \frac{1}{\lambda}\big(T^n(w_n) - w_n\big) + \beta_n d_n, \\ y_n = w_n + \lambda d_{n+1}, \\ x_{n+1} = \mu\alpha_n w_n + (1 - \mu\alpha_n)y_n, \quad n \ge 1, \end{cases}$$
where $\mu \in (0, 1]$ and $\lambda > 0$, assuming that Assumption 1 holds, and set $d_0 := \frac{1}{\lambda}(Tw_0 - w_0)$. Then, the sequence $\{x_n\}$ converges weakly to a point $x \in Fix(T)$, provided that the following conditions hold:
(C1) $\sum_{n=0}^{\infty}\beta_n < \infty$.
(C2) $\liminf_{n \to \infty}\mu\alpha_n(1 - \mu\alpha_n) > 0$.
Moreover, $\{w_n\}$ satisfies
(C3) $\{T^n w_n - w_n\}$ is bounded.
If in Corollary 2 we assume that T is nonexpansive, we obtain the following Corollary.
Corollary 3.
Let H be a real Hilbert space and let $T : H \to H$ be a nonexpansive mapping with $Fix(T) \ne \emptyset$. Let $\{x_n\}$ be the sequence generated as follows:
$$\begin{cases} x_0, x_1 \in H, \\ w_n = x_n + \theta_n(x_n - x_{n-1}), \\ d_{n+1} = \frac{1}{\lambda}\big(T(w_n) - w_n\big) + \beta_n d_n, \\ y_n = w_n + \lambda d_{n+1}, \\ x_{n+1} = \mu\alpha_n w_n + (1 - \mu\alpha_n)y_n, \quad n \ge 1, \end{cases}$$
where $\mu \in (0, 1]$ and $\lambda > 0$, assuming that Assumption 1 holds, and set $d_0 := \frac{1}{\lambda}(Tx_0 - x_0)$. Then, the sequence $\{x_n\}$ converges weakly to a point $x \in Fix(T)$, provided that the following conditions hold:
(C1) $\sum_{n=0}^{\infty}\beta_n < \infty$.
(C2) $\liminf_{n \to \infty}\mu\alpha_n(1 - \mu\alpha_n) > 0$.
Moreover, $\{w_n\}$ satisfies
(C3) $\{Tw_n - w_n\}$ is bounded.
Remark 3.
Our results extend and generalize many results in the literature for this important class of nonlinear mappings. In particular, Theorem 1 extends Theorem 3.1 of Dong et al. [22] to a more general class of asymptotically nonexpansive mappings in the setting of a real uniformly convex Banach space, more general than a real Hilbert space.

4. Numerical Examples

In this section, we present numerical examples to illustrate the behavior of the sequences generated by the iterative scheme (7). The numerical implementation was done with the aid of MATLAB 2019b on a PC with an AMD Ryzen 5 3500U processor, 2.10 GHz, and 8.00 GB RAM.
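Since the MATLAB code itself is not reproduced in the paper, the following Python sketch of one run of scheme (7) is given here purely for illustration. The callable Tn(n, x) returning $T^n x$, the use of the Euclidean norm, and the default parameter values are assumptions made for the sketch, not part of the authors' implementation.

```python
import numpy as np

def inertial_accelerated_mann(Tn, x0, x1, alpha, beta, delta,
                              lam=5.0, mu=1.0 / 13, eta=3.0,
                              tol=1e-4, max_iter=1000):
    """Illustrative sketch of scheme (7).

    Tn(n, x) should return T^n x; alpha(n), beta(n), delta(n) supply the
    control sequences of Assumption 1 and conditions (C1)-(C2).
    """
    x_prev, x = np.asarray(x0, dtype=float), np.asarray(x1, dtype=float)
    d = (Tn(1, x_prev) - x_prev) / lam           # d_1, taking w_0 := x_0 (an assumption)
    for n in range(1, max_iter + 1):
        # inertial extrapolation with theta_n <= theta_bar_n (Assumption 1)
        diff = np.linalg.norm(x - x_prev)
        theta = (n - 1) / (n + eta - 1)
        if diff > 0:
            theta = min(theta, delta(n) / diff)
        w = x + theta * (x - x_prev)
        # conjugate-gradient-like direction, then the relaxed Mann step
        d = (Tn(n, w) - w) / lam + beta(n) * d   # d_{n+1}
        y = w + lam * d                          # y_n
        x_prev, x = x, mu * alpha(n) * w + (1 - mu * alpha(n)) * y
        if np.linalg.norm(x - x_prev) < tol:     # stopping rule used in the examples
            break
    return x, n
```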
Example 1.
Let $X = \ell^4(\mathbb{R})$, where
$$\ell^4(\mathbb{R}) = \left\{u = (u_1, u_2, \ldots, u_k, \ldots),\ u_k \in \mathbb{R} : \sum_{k=1}^{\infty}|u_k|^4 < \infty\right\},$$
with
$$\|u\|_4 = \left(\sum_{k=1}^{\infty}|u_k|^4\right)^{\frac{1}{4}} \quad \text{for all } u \in \ell^4(\mathbb{R}).$$
The duality mapping with respect to $\ell^4(\mathbb{R})$ is defined by (see [33])
$$J_p(u) = \left(|u_1|^3\,\mathrm{sgn}(u_1),\ |u_2|^3\,\mathrm{sgn}(u_2),\ \ldots\right).$$
Moreover, X is not a real Hilbert space. Let $T : X \to X$ be defined by $T^n u = \frac{10n^2 + 1}{10n^2}u$. We take $\mu = \frac{1}{13}$, $\lambda = 5$, $\alpha_n = \frac{1}{2} + \frac{1}{n}$, $\theta_n = \frac{1}{n^2 + 1}$, and $\beta_n = \frac{97n - 1}{100(n + 1)}$. Then, all the conditions of Theorem 1 are satisfied with $k_n = \frac{1}{10n^2}$. Then, from (7) we get
$$\begin{cases} x_0, x_1 \in X, \\ w_n = x_n + \frac{1}{n^2 + 1}(x_n - x_{n-1}), \\ d_{n+1} = \frac{w_n}{50n^2} + \frac{97n - 1}{100(n + 1)}d_n, \\ y_n = w_n + 5d_{n+1}, \\ x_{n+1} = \frac{n + 2}{26n}w_n + \frac{25n - 2}{26n}y_n, \quad n \in \mathbb{N}, \end{cases} \tag{25}$$
where $d_1 = \frac{w_1}{50}$. We compare the performance of (25) with the methods of Pan and Wang [34] and Vaish and Ahmad [35], which are given respectively by
$$x_{n+1} = \alpha_n x_n + \beta_n f(x_n) + \gamma_n T^n\big(t_n x_n + (1 - t_n)x_{n+1}\big), \quad n \in \mathbb{N} \tag{26}$$
and
$$x_{n+1} = \rho_n g(x_n) + \sigma_n x_n + \delta_n T^n\big(\eta_n x_n + (1 - \eta_n)x_{n+1}\big), \quad n \in \mathbb{N}, \tag{27}$$
where $\alpha_n, \beta_n, \gamma_n, t_n, \rho_n, \sigma_n, \delta_n$, and $\eta_n$ are sequences in $(0, 1)$ such that $\alpha_n + \beta_n + \gamma_n = 1$ and $\rho_n + \sigma_n + \delta_n = 1$, $f : X \to X$ is a Meir–Keeler contraction mapping, and $g : X \to X$ is a contraction mapping with coefficient $\alpha \in (0, 1)$. In our computation, we take $t_n = \eta_n = \frac{3n}{3n + 1}$, $\alpha_n = \rho_n = \frac{2n}{10(n + 1)}$, $\gamma_n = \delta_n = \frac{97n - 1}{100(n + 1)}$, $\beta_n = 1 - \alpha_n - \gamma_n$, $\sigma_n = 1 - \rho_n - \delta_n$, $f(x) = \frac{x}{20}$, and $g(x) = \frac{x}{8}$. We test the iterative methods with the following initial points:
Case I:
$x_0 = \left(\frac{1}{2}, \frac{1}{3}, \frac{1}{4}, \ldots\right)$ and $x_1 = \left(1, \frac{1}{2}, \frac{1}{3}, \ldots\right)$;
Case II:
$x_0 = (2, 2, 2, \ldots)$ and $x_1 = (5, 5, 5, \ldots)$;
Case III:
$x_0 = (1, 3, 5, \ldots)$ and $x_1 = \left(\frac{1}{4}, \frac{1}{16}, \frac{1}{64}, \ldots\right)$;
Case IV:
$x_0 = (3, 9, 27, \ldots)$ and $x_1 = (2, 4, 8, \ldots)$.
We used $\|x_{n+1} - x_n\|_4 < 10^{-4}$ as the stopping criterion for all the algorithms. The numerical results are shown in Table 1 and Figure 1.
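For illustration only, the data of Example 1 (Case II) can be fed into the inertial_accelerated_mann sketch given at the beginning of this section. Truncating the $\ell^4$ sequences to finitely many coordinates, replacing $\|\cdot\|_4$ by the Euclidean norm, and deriving $\theta_n$ from $\delta_n$ (rather than setting $\theta_n = \frac{1}{n^2 + 1}$ directly) are simplifying assumptions, so the iteration counts need not match Table 1.

```python
import numpy as np

K = 200                                        # truncation level (an assumption)
x0 = np.full(K, 2.0)                           # (2, 2, 2, ...)
x1 = np.full(K, 5.0)                           # (5, 5, 5, ...)

Tn    = lambda n, u: (10 * n**2 + 1) / (10 * n**2) * u   # T^n of Example 1
alpha = lambda n: 0.5 + 1.0 / n
beta  = lambda n: (97 * n - 1) / (100 * (n + 1))
delta = lambda n: 1.0 / (n**2 + 1)

x_star, iters = inertial_accelerated_mann(Tn, x0, x1, alpha, beta, delta,
                                          lam=5.0, mu=1.0 / 13)
print(iters, np.linalg.norm(x_star))           # each T^n has the zero sequence as fixed point
```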
Example 2.
Let $X = \mathbb{R}^4$, endowed with the inner product $\langle x, y\rangle = x_1y_1 + x_2y_2 + x_3y_3 + x_4y_4$ and the norm $\|x\| = \left(\sum_{i=1}^{4}|x_i|^2\right)^{\frac{1}{2}}$ for all $x = (x_1, x_2, x_3, x_4), y = (y_1, y_2, y_3, y_4) \in \mathbb{R}^4$. Define $T : \mathbb{R}^4 \to \mathbb{R}^4$ as follows:
$$Tx = \left(-x_1,\ 1 + \frac{x_2}{2},\ 1 + \frac{x_3}{3},\ \frac{x_4}{2}\right), \quad x = (x_1, x_2, x_3, x_4) \in \mathbb{R}^4.$$
Then, clearly $Fix(T) = \left\{\left(0, 2, \frac{3}{2}, 0\right)\right\}$ and, for all $x, y \in \mathbb{R}^4$, it is easy to see that
$$T^2x = T(Tx) = \left(x_1,\ 1 + \frac{1}{2} + \frac{x_2}{2^2},\ 1 + \frac{1}{3} + \frac{x_3}{3^2},\ \frac{x_4}{2^2}\right).$$
In general, for any $n \ge 1$ we have
$$T^nx = \left((-1)^n x_1,\ \sum_{j=0}^{n-1}\frac{1}{2^j} + \frac{x_2}{2^n},\ \sum_{j=0}^{n-1}\frac{1}{3^j} + \frac{x_3}{3^n},\ \frac{x_4}{2^n}\right).$$
So
$$\begin{aligned} \|T^nx - T^ny\| &= \left(|x_1 - y_1|^2 + \left(\frac{1}{2^n}\right)^2|x_2 - y_2|^2 + \left(\frac{1}{3^n}\right)^2|x_3 - y_3|^2 + \left(\frac{1}{2^n}\right)^2|x_4 - y_4|^2\right)^{\frac{1}{2}} \\ &\le \left(|x_1 - y_1|^2 + \left(\frac{1}{2^n}\right)^2|x_2 - y_2|^2 + \left(\frac{1}{2^n}\right)^2|x_3 - y_3|^2 + \left(\frac{1}{2^n}\right)^2|x_4 - y_4|^2\right)^{\frac{1}{2}} \\ &\le \left(\left(1 + \frac{1}{2^n}\right)\left[|x_1 - y_1|^2 + |x_2 - y_2|^2 + |x_3 - y_3|^2 + |x_4 - y_4|^2\right]\right)^{\frac{1}{2}} = \left(1 + \frac{1}{2^n}\right)^{\frac{1}{2}}\|x - y\| \\ &= \left(1 + \frac{1}{2}\cdot\frac{1}{2^n} + \frac{\frac{1}{2}\left(\frac{1}{2} - 1\right)}{2!}\cdot\left(\frac{1}{2^n}\right)^2 + \frac{\frac{1}{2}\left(\frac{1}{2} - 1\right)\left(\frac{1}{2} - 2\right)}{3!}\cdot\left(\frac{1}{2^n}\right)^3 + \cdots\right)\|x - y\| \\ &\le \left(1 + \frac{1}{2^n} + \left(\frac{1}{2^n}\right)^3 + \left(\frac{1}{2^n}\right)^5 + \cdots\right)\|x - y\| = \left(1 + \frac{\frac{1}{2^n}}{1 - \left(\frac{1}{2^n}\right)^2}\right)\|x - y\| \\ &\le \left(1 + \frac{1}{2^n}\right)\|x - y\|. \end{aligned}$$
This implies that T is an asymptotically nonexpansive mapping with $k_n = \frac{1}{2^n} \to 0$ as $n \to \infty$. Similarly, we compare the performance of (7) with that of Pan and Wang [34] and Vaish and Ahmad [35]. For (7), we choose $\mu = \frac{1}{8}$, $\lambda = 2$, $\delta_n = \frac{1}{n + 1}$, and $\beta_n = \delta_n^2$. For the Pan and Wang algorithm, we take $f(x) = 2x$, $\beta_n = \frac{1}{5(n + 1)}$, $\alpha_n = \frac{2n}{5n + 8}$, and $t_n = \frac{n}{3(n + 1)}$. For the Vaish and Ahmad algorithm, we take $g(x) = \frac{x}{2}$, $\rho_n = \frac{2n}{5n + 8}$, $\sigma_n = \frac{1}{10(n + 1)}$, $\delta_n = 1 - \sigma_n - \rho_n$, and $\eta_n = \frac{n}{3n + 3}$. We test the algorithms using the following initial points:
Case I:
$x_0 = (2, 2, 2, 2)$, $x_1 = (5, 5, 5, 5)$;
Case II:
$x_0 = (1, 3, 3, 1)$, $x_1 = (0.5, 1, 1.5, 3)$;
Case III:
$x_0 = (2, 0, 0, 2)$, $x_1 = (8, 3, 3, 8)$;
Case IV:
$x_0 = (3, 3, 3, 3)$, $x_1 = (10, 10, 10, 10)$.
We use $\|x_{n+1} - x_n\| < 10^{-4}$ as the stopping criterion. The numerical results are shown in Table 2 and Figure 2.
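The closed-form expression for $T^n$ in Example 2 can be checked by direct iteration. The following Python snippet is an illustrative sketch (under the sign conventions reconstructed above) that also verifies the asymptotic-nonexpansiveness bound numerically for one pair of points; it is not part of the reported experiments.

```python
import numpy as np

def T(x):
    """Mapping of Example 2 on R^4."""
    x1, x2, x3, x4 = x
    return np.array([-x1, 1.0 + x2 / 2.0, 1.0 + x3 / 3.0, x4 / 2.0])

def T_power(n, x):
    """Closed form for T^n x."""
    x1, x2, x3, x4 = x
    s2 = sum(1.0 / 2.0 ** j for j in range(n))   # partial geometric sums
    s3 = sum(1.0 / 3.0 ** j for j in range(n))
    return np.array([(-1.0) ** n * x1, s2 + x2 / 2.0 ** n, s3 + x3 / 3.0 ** n, x4 / 2.0 ** n])

x, y = np.array([2.0, 2.0, 2.0, 2.0]), np.zeros(4)
z = x.copy()
for _ in range(5):
    z = T(z)
print(np.allclose(z, T_power(5, x)))             # True: closed form matches iteration
lhs = np.linalg.norm(T_power(5, x) - T_power(5, y))
print(lhs <= (1 + 2.0 ** -5) * np.linalg.norm(x - y))   # True: asymptotic nonexpansiveness
```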
Example 3.
Finally, we apply our algorithm to solve an image restoration problem, which involves the reconstruction of an image degraded by blur and additive noise. We solve the $l_1$-norm regularization problem; that is, we find a solution of the following optimization problem:
$$\min_{x \in \mathbb{R}^N}\left\{\|x\|_1 : Ax = b\right\}, \tag{28}$$
where b is a vector in $\mathbb{R}^M$, A is a matrix of dimension $M \times N$ ($M \ll N$), and $\|x\|_1 = \sum_{i=1}^{N}|x_i|$ is the $l_1$-norm of x. The problem in (28) can be reformulated as the following least absolute selection and shrinkage operator (LASSO) problem [36,37]:
$$\min_{x \in \mathbb{R}^N}\ \omega\|x\|_1 + \frac{1}{2}\|b - Ax\|_2^2, \tag{29}$$
where $\omega > 0$ is a balancing parameter. Clearly, (29) is a convex unconstrained minimization problem which appears in compressed sensing and image reconstruction, where the original signal (or image) is sparse in some orthogonal basis and is degraded by the process
$$b = Ax + \eta,$$
where x is the original signal (or image), A is the blurring operator, $\eta$ is noise, and b is the degraded or blurred data from which x needs to be recovered. Many iterative methods have been proposed for solving (29), the earliest being the gradient projection method of Figueiredo et al. [36]. Note that the LASSO problem (29) can be expressed as a variational inequality problem, that is, finding $x \in \mathbb{R}^N$ such that $\langle F(x), y - x\rangle \ge 0$ for all $y \in \mathbb{R}^N$, where $F = A^T(Ax - b)$ (see [38]). Equivalently, we can rewrite (29) as a fixed-point problem with $T \equiv P_{\mathbb{R}^N}(I - \lambda F)$ (for $\lambda > 0$), which is nonexpansive. Our aim here is to recover the original image x given the data of the blurred image b. We consider a greyscale image of M pixels in width and N pixels in height, where each pixel value is known to lie in the range [0, 255]. Let $D = M \times N$. The quality of the restored image is measured by the signal-to-noise ratio, defined as
$$SNR = 20 \times \log_{10}\left(\frac{\|x\|_2}{\|x - x^*\|_2}\right),$$
where x is the original image and $x^*$ is the restored image. Typically, the larger the SNR, the better the quality of the restored image. In our experiments, we use the greyscale test images Cameraman ($256 \times 256$) and Pout ($291 \times 240$) from the Image Processing Toolbox in MATLAB, and each test image is degraded by a Gaussian $7 \times 7$ blur kernel with standard deviation 4. For our iterative scheme (7), we choose $\alpha_n = \frac{1}{2n + 1}$, $\mu = \frac{1}{8}$, $\beta_n = \frac{1}{n^{0.5}}$, $\delta_n = \beta_n^2$, and $\eta = 3.5$, while for the Pan and Wang algorithm [34] and the Vaish and Ahmad algorithm [35] we take $t_n = \eta_n = \frac{19n}{20n + 21}$, $\gamma_n = \delta_n = \frac{1}{2} + \frac{5n}{19n + 20}$, $\alpha_n = \rho_n = \frac{1}{n + 1}$, $\beta_n = 1 - \alpha_n - \gamma_n$, $\sigma_n = 1 - \rho_n - \delta_n$, $f(x) = 2x$, and $g(x) = \frac{x}{2}$. The initial values $x_0, x_1 \in \mathbb{R}^D$ are chosen arbitrarily. Figure 3 and Figure 4 show the original, blurred, and restored images using the algorithms. Figure 5 shows the graphs of SNR against the number of iterations for each algorithm, and in Table 3 we report the time (in seconds) taken by each algorithm in the experiments.
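As a rough illustration of the fixed-point reformulation described above, the sketch below builds the operator $T = P_{\mathbb{R}^N}(I - \lambda F)$ with $F(x) = A^T(Ax - b)$ on small synthetic data. The projection onto $\mathbb{R}^N$ is the identity; the synthetic sizes and the step $\lambda = 1/\|A\|^2$ (which keeps $I - \lambda A^TA$ nonexpansive) are our assumptions, and this is not the authors' MATLAB image-restoration code.

```python
import numpy as np

rng = np.random.default_rng(0)
M, N = 64, 256                                   # synthetic sizes (assumptions)
A = rng.standard_normal((M, N))
x_true = np.zeros(N)
x_true[rng.choice(N, 8, replace=False)] = rng.standard_normal(8)
b = A @ x_true + 0.01 * rng.standard_normal(M)   # degradation model b = Ax + noise

lam_F = 1.0 / np.linalg.norm(A, 2) ** 2          # spectral norm; keeps I - lam_F*A^T A nonexpansive

def T(x):
    """Fixed-point operator T = P_{R^N}(I - lam_F*F), with F(x) = A^T(Ax - b)."""
    return x - lam_F * (A.T @ (A @ x - b))       # projection onto R^N is the identity

# T is nonexpansive (k_n = 0), so it can be fed to the scheme-(7) driver sketched
# at the start of this section, e.g. with Tn = lambda n, x: T(x).
```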
From the numerical results, we observe that all the algorithms are able to restore the degraded images. Algorithm (7) performs better than the other algorithms in terms of the SNR (quality) of the restored image, although it takes more time.

5. Conclusions

We studied a modified inertial accelerated Mann algorithm in real uniformly convex Banach spaces. Weak and strong convergence theorems were proved for approximating a fixed point of an asymptotically nonexpansive mapping. Finally, we applied our results to an image restoration problem and presented some numerical experiments to demonstrate the efficiency of our proposed iterative method compared with some existing methods in the literature.

Author Contributions

All the authors (M.H.H., G.C.U., L.O.J. and A.A.) contributed equally in the development of this work. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Sefako Makgatho Health Sciences University Postdoctoral Research Fund, and the APC was funded by the Department of Mathematics and Applied Mathematics, Sefako Makgatho Health Sciences University, Pretoria, South Africa.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Goebel, K.; Kirk, W.A. A fixed point theorem for asymptotically nonexpansive mappings. Proc. Am. Math. Soc. 1972, 35, 171–174. [Google Scholar] [CrossRef]
  2. Ali, B.; Harbau, M.H. Convergence theorems for pseudomonotone equilibrium problems, split feasibility problems and multivalued strictly pseudocontractive mappings. Numer. Funct. Anal. Opt. 2019, 40. [Google Scholar] [CrossRef]
  3. Castella, M.; Pesquet, J.-C.; Marmin, A. Rational optimization for nonlinear reconstruction with approximate 0 penalization. IEEE Trans. Signal Process. 2019, 67, 1407–1417. [Google Scholar] [CrossRef]
  4. Combettes, P.L.; Eckstein, J. Asynchronous block-iterative primal-dual decomposition methods for monotone inclusions. Math. Program. 2018, B168, 645–672. [Google Scholar] [CrossRef] [Green Version]
  5. Noor, M.A. New approximation Schemes for General Variational Inequalities. J. Math. Anal. Appl. 2000, 251, 217–229. [Google Scholar] [CrossRef] [Green Version]
  6. Xu, H.K. A variable Krasnosel’skii-Mann algorithm and the multiple-set split feasibility problem. Inverse Probl. 2006, 22, 2021–2034. [Google Scholar] [CrossRef]
  7. Cai, G.; Shehu, Y.; Iyiola, O.S. Iterative algorithms for solving variational inequalities and fixed point problems for asymptotically nonexpansive mappings in Banach spaces. Numer. Algorithms 2016, 73, 869–906. [Google Scholar] [CrossRef]
  8. Halpern, B. Fixed points of nonexpansive maps. Bull. Am. Math. Soc. 1967, 73, 957–961. [Google Scholar] [CrossRef] [Green Version]
  9. Mann, W.R. Mean value methods in iteration. Proc. Am. Math. Soc. 1953, 4, 506–510. [Google Scholar] [CrossRef]
  10. Picard, E. Mémoire sur la théorie des équations aux dérivées partielles et la méthode des approximations successives. J. Math. Pures Appl. 1890, 6, 145–210. [Google Scholar]
  11. Tan, K.K.; Xu, H.K. Fixed Point Iteration Processes for Asymptotically Nonexpansive Mappings. Proc. Am. Math. Soc. 1994, 122, 733–739. [Google Scholar] [CrossRef]
  12. Bose, S.C. Weak convergence to the fixed point of an asymptotically nonexpansive map. Proc. Am. Math. Soc. 1978, 68, 305–308. [Google Scholar] [CrossRef]
  13. Schu, J. Weak and strong convergence to fixed points of asymptotically nonexpansive mappings. Bull. Aust. Math. Soc. 1991, 43, 153–159. [Google Scholar] [CrossRef] [Green Version]
  14. Schu, J. Iterative construction of fixed points of asymptotically nonexpansive mappings. J. Math. Anal. Appl. 1991, 158, 407–413. [Google Scholar] [CrossRef] [Green Version]
  15. Osilike, M.O.; Aniagbosor, S.C. Weak and Strong Convergence Theorems for Fixed Points of Asymptotically Nonexpansive Mappings. Math. Comput. Model. 2000, 32, 1181–1191. [Google Scholar] [CrossRef]
  16. Dong, Q.L.; Yuan, H.B. Accelerated Mann and CQ algorithms for finding a fixed point of a nonexpansive mapping. Fixed Point Theory Appl. 2015, 125. [Google Scholar] [CrossRef] [Green Version]
  17. Nocedal, J.; Wright, S.J. Numerical Optimization, 2nd ed.; Springer Series in Operations Research and Financial Engineering; Springer: Berlin, Germany, 2006. [Google Scholar]
  18. Attouch, H.; Peypouquet, J.; Redont, P. A dynamical approach to an inertial forward–backward algorithm for convex minimization. SIAM J. Optim. 2014, 24, 232–256. [Google Scholar] [CrossRef]
  19. Attouch, H.; Goudon, X.; Redont, P. The heavy ball with friction. I. The continuous dynamical system. Commun. Contemp. Math. 2000, 2, 1–34. [Google Scholar] [CrossRef]
  20. Attouch, H.; Peypouquet, J. The rate of convergence of Nesterov's accelerated forward-backward method is actually faster than 1/k². SIAM J. Optim. 2016, 26, 1824–1834. [Google Scholar] [CrossRef]
  21. Bot, R.I.; Csetnek, E.R.; Hendrich, C. Inertial Douglas-Rachford splitting for monotone inclusion problems. Appl. Math. Comput. 2015, 256, 472–487. [Google Scholar] [CrossRef] [Green Version]
  22. Dong, Q.L.; Yuan, H.B.; Cho, Y.J.; Rassias, T.M. Modified inertial Mann algorithm and inertial CQ-algorithm for nonexpansive mappings. Optim. Lett. 2016, 12, 87–102. [Google Scholar] [CrossRef]
  23. Dong, Q.L.; Kazmi, K.R.; Ali, R.; Li, H.X. Inertial Krasnosel'skii-Mann type hybrid algorithms for solving hierarchical fixed point problems. J. Fixed Point Theory Appl. 2019, 21, 57. [Google Scholar] [CrossRef]
  24. Lorenz, D.A.; Pock, T. An inertial forward-backward algorithm for monotone inclusions. J. Math. Imaging Vis. 2015, 51, 311–325. [Google Scholar] [CrossRef] [Green Version]
  25. Polyak, B.T. Some methods of speeding up the convergence of iteration methods. USSR Comput. Math. Math. Phys. 1964, 4, 1–17. [Google Scholar] [CrossRef]
  26. Shehu, Y.; Gibali, A. Inertial Krasnosel'skii-Mann method in Banach spaces. Mathematics 2020, 8, 638. [Google Scholar] [CrossRef] [Green Version]
  27. Browder, F.E. Convergence theorems for sequence of nonlinear mappings in Hilbert spaces. Math. Z. 1967, 100, 201–225. [Google Scholar] [CrossRef]
  28. Opial, Z. Weak convergence of the sequence of successive approximations for nonexpansive mappings. Bull. Am. Math. Soc. 1967, 73, 591–597. [Google Scholar] [CrossRef] [Green Version]
  29. Browder, F.E. Fixed point theorems for nonlinear semicontractive mappings in Banach spaces. Arch. Ration. Mech. Anal. 1966, 21, 259–269. [Google Scholar] [CrossRef]
  30. Gossez, J.-P.; Dozo, E.L. Some geometric properties related to the fixed point theory for nonexpansive mappings. Pac. J. Math. 1972, 40, 565–573. [Google Scholar] [CrossRef] [Green Version]
  31. Xu, H.K. Inequalities in Banach Spaces with Applications. Nonlinear Anal. 1991, 16, 1127–1138. [Google Scholar] [CrossRef]
  32. Beck, A.; Teboulle, M. A fast iterative shrinkage thresholding algorithm for linear inverse problem. SIAM J. Imaging Sci. 2009, 2, 183–202. [Google Scholar] [CrossRef] [Green Version]
  33. Agarwal, R.P.; O'Regan, D.; Sahu, D.R. Fixed Point Theory for Lipschitzian-Type Mappings with Applications; Springer: Berlin, Germany, 2009. [Google Scholar]
  34. Pan, C.; Wang, Y. Generalized viscosity implicit iterative process for asymptotically non-expansive mappings in Banach spaces. Mathematics 2019, 7, 379. [Google Scholar] [CrossRef] [Green Version]
  35. Vaish, R.; Ahmad, M.K. Generalized viscosity implicit scheme with Meir-Keeler contraction for asymptotically nonexpansive mapping in Banach spaces. Numer. Algorithms 2020, 84, 1217–1237. [Google Scholar] [CrossRef]
  36. Figueiredo, M.A.T.; Nowak, R.D.; Wright, S.J. Gradient projection for sparse reconstruction: Application to compressed sensing and other inverse problems. IEEE J. Sel. Top. Signal Process. 2007, 1, 586–597. [Google Scholar] [CrossRef] [Green Version]
  37. Shehu, Y.; Vuong, P.T.; Cholamjiak, P. A self-adaptive projection method with an inertial technique for split feasibility problems in Banach spaces with applications to image restoration problems. J. Fixed Point Theory Appl. 2019, 21, 1–24. [Google Scholar] [CrossRef]
  38. Shehu, Y.; Iyiola, O.S.; Ogbuisi, F.U. Iterative method with inertial terms for nonexpansive mappings, Applications to compressed sensing. Numer. Algorithms 2020, 83, 1321–1347. [Google Scholar] [CrossRef]
Figure 1. Example 1. Top left: Case I; top right: Case II; bottom left: Case III; bottom right: Case IV.
Figure 2. Example 2. Top left: Case I; top right: Case II; bottom left: Case III; bottom right: Case IV.
Figure 3. Example 3. The top row shows the original Cameraman image (left) and the degraded Cameraman image (right). The bottom row shows the images recovered by our algorithm, by the algorithm of Pan and Wang, and by that of Vaish and Ahmad.
Figure 4. Example 3. The top row shows the original Pout image (left) and the degraded Pout image (right). The bottom row shows the images recovered by our algorithm, by that of Pan and Wang, and by that of Vaish and Ahmad.
Figure 5. Example 3: Graphs of signal-to-noise ratio (SNR) values against the number of iterations for Cameraman (left) and Pout (right).
Table 1. Computational results showing the performance of the algorithms.

            Alg. (25)              Alg. (26)              Alg. (27)
            Iter.    CPU (sec)     Iter.    CPU (sec)     Iter.    CPU (sec)
Case I      32       0.0063        84       0.0750        69       0.0102
Case II     33       0.0065        74       0.0724        69       0.0120
Case III    38       0.0084        87       0.0847        78       0.0103
Case IV     48       0.0095        123      0.0781        89       0.0131
Table 2. Computational results showing the performance of the algorithms for Example 2.

            Our Alg.               Pan and Wang Alg.      Vaish and Ahmad Alg.
            Iter.    CPU (sec)     Iter.    CPU (sec)     Iter.    CPU (sec)
Case I      35       0.0103        88       0.0382        58       0.0156
Case II     22       0.0067        43       0.0163        40       0.0104
Case III    30       0.0090        70       0.0487        45       0.0175
Case IV     30       0.0072        74       0.0368        45       0.0175
Table 3. Time (s) for restoring the images for each algorithm.

                      Our Alg.    Pan and Wang Alg.    Vaish and Ahmad Alg.
Cameraman image       2.7928      2.6422               2.6709
Pout image            4.8237      4.4248               3.45630
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
