Article

Extending the Domain with Application of Four-Step Nonlinear Scheme with Average Lipschitz Conditions

by
Akanksha Saxena
1,
Jai Prakash Jaiswal
2,*,
Kamal Raj Pardasani
1 and
Ioannis K. Argyros
3,*
1
Department of Mathematics, Maulana Azad National Institute of Technology, Bhopal 462003, MP, India
2
Department of Mathematics, Guru Ghasidas Vishwavidyalaya (A Central University), Bilaspur 495009, CG, India
3
Department of Computing and Mathematical Sciences, Cameron University, Lawton, OK 73505, USA
*
Authors to whom correspondence should be addressed.
Mathematics 2023, 11(8), 1774; https://doi.org/10.3390/math11081774
Submission received: 21 February 2023 / Revised: 20 March 2023 / Accepted: 5 April 2023 / Published: 7 April 2023
(This article belongs to the Special Issue Computational Methods in Analysis and Applications 2023)

Abstract

A novel local and semi-local convergence theorem for the four-step nonlinear scheme is presented. Earlier studies on local convergence were conducted without a particular assumption on the Lipschitz constant. In the first part, the main local convergence theorems are obtained with a weak ϰ-average (assuming it to be a positive integrable function and dropping the essential non-decreasing (ND) property). In the second part, in comparison with previous research, we employ majorizing sequences that are more accurate in their precision along with a certain form of the ϰ-average Lipschitz criterion. Finer local and semi-local convergence criteria, boosting the utility of the scheme, are derived by relaxing the assumptions. Applications in engineering to a variety of specific cases, such as object motion governed by a system of differential equations, are illustrated.

1. Introduction

Let T be a nonlinear operator mapping a convex open subset D of a Banach space χ into another Banach space Y, and assume that T is Fréchet differentiable with Fréchet derivative T′. We consider the equation
T ( x ) = 0 .
Computational sciences have advanced significantly in mathematics, economic equilibrium theory, and the engineering sciences. Iteration techniques are also used to solve optimization problems. In computer science, the discipline of numerical analysis for determining such solutions is fundamentally linked to variants of Newton's method,
x_{n+1} = x_n − [T′(x_n)]^{-1} T(x_n),  n ≥ 0,
which is widely chosen despite its relatively slow convergence speed. A survey of Newton's method [1] can be found in Kantorovich [2] and in the references by Rall [3].
There is an extensive literature on the local convergence of the Newton, Jarratt, Weerakoon and related schemes in Banach spaces; see refs. [4,5,6,7,8,9,10,11]. Our objective here is a local convergence study of a four-step nonlinear scheme (FSS) under the generalized/weak Lipschitz criteria developed by Wang [12], where a non-decreasing positive integrable function (NDPIF) is incorporated rather than a Lipschitz constant. Later, Wang and Li [13] obtained new conclusions on the convergence of Newton's method (NM) in Banach spaces, where T′ meets the radius/center Lipschitz criteria but with a relaxed ϰ-average. Shakhno [14] explored the local convergence of a Secant-type method [2] for a non-differentiable operator whose first-order divided differences satisfy the generalized/weak Lipschitz conditions.
We shall use the ϰ-average condition to study the local convergence of the classical FSS [15], which is expressed as:
y_n = x_n − [T′(x_n)]^{-1} T(x_n),
z_n = y_n − [T′(x_n)]^{-1} T(y_n),
q_n = z_n − [T′(x_n)]^{-1} T(z_n),
x_{n+1} = q_n − [T′(x_n)]^{-1} T(q_n),  n ≥ 0.
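To make the structure of scheme (3) concrete, the following minimal NumPy sketch applies the four sub-steps to a small test system; the test operator, its Jacobian and the starting point are hypothetical choices of ours, and a practical implementation would factor T′(x_n) once per outer iteration rather than form its inverse.

```python
import numpy as np

def fss_step(T, dT, x):
    """One pass of the four-step scheme (3); the same Jacobian T'(x_n) is reused
    for all four corrections, which is what keeps the scheme inexpensive."""
    J_inv = np.linalg.inv(dT(x))   # for illustration only; prefer an LU solve in practice
    y = x - J_inv @ T(x)
    z = y - J_inv @ T(y)
    q = z - J_inv @ T(z)
    return q - J_inv @ T(q)

# Hypothetical 2-D test problem T(x) = (x1^2 - 4, x1*x2 - 1)^t with solution (2, 0.5)^t.
T  = lambda x: np.array([x[0]**2 - 4.0, x[0]*x[1] - 1.0])
dT = lambda x: np.array([[2.0*x[0], 0.0], [x[1], x[0]]])

x = np.array([2.5, 0.8])
for _ in range(3):
    x = fss_step(T, dT, x)
print(x)   # converges rapidly toward (2, 0.5)
```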
The method (3) is notable for being among the simplest and most efficient fifth-order iterative procedures. In the literature, we find studies using ω-continuity conditions on T′. Methods of higher R-order of convergence are often not implemented routinely, despite their great speed of convergence, because of their high operational cost. That being said, in stiff system challenges, where quick convergence is necessary, methods of higher R-order of convergence can be used, as cited in [2].
We are strongly motivated by the captivating study [13], which gave us the possibility of relaxing the ϰ-average Lipschitz condition and the ND property of ϰ considered essential for the convergence of the fifth-order FSS. In [16], we also illustrated the local convergence of a third-order Newton-like method under the same ϰ-average Lipschitz condition. Using such considerations, we derive a new local convergence study for the scheme (3), which enables us to enlarge the convergence ball by dropping additional assumptions, along with an error/distance estimate. In addition, a few corollaries with numerical examples are also stated.
In the literature, L.V. Kantorovich first investigated semi-local convergence results in [2]. Many other scholars have since examined the enhancement of such outcomes based on majorizing sequences and their variants [1,3,17,18,19,20], described as follows [21]:
Definition 1
(Majorized sequence). Let {a_n} be a sequence in a Banach space X and {t_n} an increasing scalar sequence. We say that {a_n} is majorized by {t_n} if ‖a_{n+1} − a_n‖ ≤ t_{n+1} − t_n for each n = 0, 1, 2, ….
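As a small illustration of Definition 1, the sketch below checks the majorization inequality numerically for a toy vector sequence and a scalar sequence of our own choosing; both are hypothetical and serve only to show how the definition is used.

```python
import numpy as np

# Toy pair (ours): a_n = partial sums of a geometric series in R^2, whose increments
# have norm 0.5**n, majorized by t_n = 2(1 - 0.5**n) with t_{n+1} - t_n = 0.5**n.
a = [np.zeros(2)]
t = [0.0]
for n in range(10):
    a.append(a[-1] + 0.5**n * np.array([0.6, 0.8]))   # ||a_{n+1} - a_n|| = 0.5**n
    t.append(2.0 * (1.0 - 0.5**(n + 1)))

for n in range(10):
    assert np.linalg.norm(a[n + 1] - a[n]) <= t[n + 1] - t[n] + 1e-12
print("the toy sequence {a_n} is majorized by {t_n}")
```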
It is also important to provide a unified semi-local convergence analysis for the FSS (3) along-with the uniqueness of the solution. This analysis can improve existing results through specialization.
The structure of the presentation is as follows. Section 2 comprises some conditions and a preliminary lemma for the weak ϰ-average conditions. In Section 3 and Section 4, we provide the local convergence of the FSS, with its domain of uniqueness, while relaxing the assumption that T′ should satisfy the radius/center Lipschitz criteria under a weak ϰ-average; that is, ϰ (respectively, ϰ_0) is only assumed to belong to a family of positively integrable functions (PIF), which are not necessarily ND, for the convergence-related theorems. This work unifies the semi-local analysis of the FSS in Section 5 under majorizing sequences and weaker Lipschitz-type conditions than before. Finally, applications and further corollaries are given in order to justify the significance of the findings.

2. Notions and Preliminary Results

To make the work as self-contained as possible, we recall some essential concepts and findings [12,13]. Let M(Σ*, ρ) = {r : ‖r − Σ*‖ < ρ} be the ball of radius ρ centered at Σ*.
The notions about Lipschitz criteria are defined as follows.
Definition 2.
The operator T satisfies the radius Lipschitz criterion if
‖T′(r) − T′(s_θ)‖ ≤ ϰ (1 − θ)(‖r − Σ*‖ + ‖s − Σ*‖),  r, s ∈ M(Σ*, ρ),
in which s_θ = Σ* + θ(s − Σ*), 0 ≤ θ ≤ 1. This definition has previously been used by researchers with a constant ϰ.
Definition 3.
The operator T satisfies the center Lipschitz criterion if
‖T′(r) − T′(Σ*)‖ ≤ 2 ϰ_0 ‖r − Σ*‖,  r ∈ M(Σ*, ρ),
with the constant ϰ_0, in which ϰ_0 ≤ ϰ. It turns out that substituting ϰ_0 for ϰ when ϰ_0 < ϰ [4,9,22] leads to:
(i) 
Larger convergence radius/domain.
(ii) 
At least as precise information on the location of the solution Σ*.
(iii) 
Tighter error bounds on the distances ‖r_{n+1} − r_n‖ and ‖r_n − Σ*‖.
The novelty of our work is that the ϰ used in the Lipschitz criteria is not required to be a constant; rather, it may take the form of a positive integrable function. In that case, condition (4) is replaced by:
Definition 4.
The operator T satisfies the ϰ-average or generalized/weak Lipschitz criterion, if
‖T′(r) − T′(s_θ)‖ ≤ ∫_{θ(ϱ(r)+ϱ(s))}^{ϱ(r)+ϱ(s)} ϰ(u) du,  r, s ∈ M(Σ*, ρ), 0 ≤ θ ≤ 1,
And condition (5), respectively, is substituted with:
Definition 5.
The operator T satisfies the center ϰ-average criterion, if
‖T′(r) − T′(Σ*)‖ ≤ ∫_0^{2ϱ(r)} ϰ_0(u) du,  r ∈ M(Σ*, ρ),
in which ϱ(r) = ‖r − Σ*‖ and ϰ_0(u) ≤ ϰ(u).
As an illustration of motivation, assume that the motion of a three-dimensional object is regulated by a system of differential equations
f_1′(p) − f_1(p) − 1 = 0,  f_2′(q) − (e − 1) q − 1 = 0,  f_3′(r) − 1 = 0.
Let χ = Y = ℝ³, ω = M̄(0, 1), and let the solution be Σ* = (0, 0, 0)^t. Define the function T on ω for o = (p, q, r)^t as
T(o) = (e^p − 1, ((e − 1)/2) q² + q, r)^t.
We find the Fréchet derivative as
T′(o) = diag(e^p, (e − 1) q + 1, 1).
Thus, ϰ = e/2 and ϰ_0 = (e − 1)/2, where ϰ_0 < ϰ (as per Definitions 2 and 3). As a result, substituting ϰ_0 for ϰ in the denominator enhances the convergence radius, as shown in Example 1. When ϰ and ϰ_0 are viewed as functions rather than constants, we can take ϰ_0(u) = (e − 1)/2, ϰ(u) = e/2 and ϰ̄(u) = e^{1/(e−1)}/2 (as per Definitions 4 and 5 and Remark 1).
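The sketch below, a check of our own, evaluates these three constants numerically and confirms the ordering ϰ_0 < ϰ̄ < ϰ that drives the enlarged radii reported in Example 1.

```python
import numpy as np

# Lipschitz-type constants of the motivational system on the unit ball
# (values as discussed around Definitions 2-5 and Remark 1).
kappa    = np.e / 2.0                         # radius Lipschitz constant
kappa0   = (np.e - 1.0) / 2.0                 # center Lipschitz constant
kappabar = np.exp(1.0 / (np.e - 1.0)) / 2.0   # tighter bound on the restricted ball (Remark 1)

print(f"kappa0 = {kappa0:.6f} < kappabar = {kappabar:.6f} < kappa = {kappa:.6f}")
assert kappa0 < kappabar < kappa
```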
Next, in Lemma 1 we evaluate, through a change of variables, the two major double integrals that will be used in the main results.
Lemma 1.
Assume that T is continuously differentiable in M(Σ*, ρ), T(Σ*) = 0 and [T′(Σ*)]^{-1} exists.
(a) 
If the center average Lipschitz condition under the ϰ 0 -average is satisfied by [ T ( Σ * ) ] 1 T :
‖[T′(Σ*)]^{-1}(T′(r_θ) − T′(Σ*))‖ ≤ ∫_0^{2θϱ(r)} ϰ_0(u) du,  r ∈ M(Σ*, ρ), 0 ≤ θ ≤ 1,
in which ϱ(r) = ‖r − Σ*‖ and ϰ_0 is ND; thereby, we see
∫_0^1 ‖[T′(Σ*)]^{-1}(T′(r_θ) − T′(Σ*))‖ ϱ(r) dθ ≤ ∫_0^{2ϱ(r)} ϰ_0(u) (ϱ(r) − u/2) du.
(b) 
If the radius average Lipschitz condition under the ϰ-average is satisfied by [ T ( Σ * ) ] 1 T :
‖[T′(Σ*)]^{-1}(T′(r) − T′(s_θ))‖ ≤ ∫_{θ(ϱ(r)+ϱ(s))}^{ϱ(r)+ϱ(s)} ϰ(u) du,  r, s ∈ M(Σ*, ρ), 0 ≤ θ ≤ 1,
in which s_θ = Σ* + θ(s − Σ*) and ϰ is positively integrable. Then,
∫_0^1 ‖[T′(Σ*)]^{-1}(T′(r) − T′(s_θ))‖ ϱ(s) dθ ≤ ∫_0^{ϱ(r)+ϱ(s)} ϰ(u) (u ϱ(s))/(ϱ(r)+ϱ(s)) du.
Proof. 
The average Lipschitz criteria (11) and (9), respectively, imply that
∫_0^1 ‖[T′(Σ*)]^{-1}(T′(r) − T′(s_θ))‖ ϱ(s) dθ ≤ ∫_0^1 [∫_{θ(ϱ(r)+ϱ(s))}^{ϱ(r)+ϱ(s)} ϰ(u) du] ϱ(s) dθ = ∫_0^{ϱ(r)+ϱ(s)} ϰ(u) (u ϱ(s))/(ϱ(r)+ϱ(s)) du,
∫_0^1 ‖[T′(Σ*)]^{-1}(T′(r_θ) − T′(Σ*))‖ ϱ(r) dθ ≤ ∫_0^1 [∫_0^{2θϱ(r)} ϰ_0(u) du] ϱ(r) dθ = ∫_0^{2ϱ(r)} ϰ_0(u) (ϱ(r) − u/2) du,
where r_θ = Σ* + θ(r − Σ*).  □

3. Local Convergence Results for Four-Step Scheme (3)

In this section, we present the key findings on local convergence as well as improved error/distance estimates.
Let ρ satisfy the relation:
∫_0^{2ρ} ϰ_0(u) du ≤ 1  and  ∫_0^{2ρ} ϰ(u) u du / [2ρ (1 − ∫_0^{2ρ} ϰ_0(u) du)] ≤ 1.
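As a quick numerical sanity check of relation (13), the sketch below solves the second inequality as an equality for ρ using generic quadrature and root finding; with the constant choices ϰ(u) = ϰ_0(u) = e/2 of the motivational example it reproduces the value ρ_0 ≈ 0.245253 quoted in Example 1. The helper name and the bracket are our own, and for non-constant ϰ the first inequality of (13) should be checked separately.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def radius_from_condition_13(kappa, kappa0, bracket=(1e-8, 1.0)):
    """rho making the second inequality of (13) an equality:
       int_0^{2 rho} kappa(u) u du = 2 rho (1 - int_0^{2 rho} kappa0(u) du)."""
    def gap(rho):
        i0 = quad(kappa0, 0.0, 2.0 * rho)[0]
        i1 = quad(lambda u: kappa(u) * u, 0.0, 2.0 * rho)[0]
        return i1 - 2.0 * rho * (1.0 - i0)
    return brentq(gap, *bracket)   # assumes a single sign change in the bracket

# Constant kappa = kappa0 = e/2 recovers rho_0 = 2/(3e) of Example 1.
print(radius_from_condition_13(lambda u: np.e / 2.0, lambda u: np.e / 2.0))  # ~0.245253
```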
Lemma 2
([13]). Assume that ϰ is a PIF and that the function ϰ_α given by expression (46) is ND for some α with 0 ≤ α ≤ 1. Then, for each β ≥ 0, the function ψ_{β,α} of the form
ψ_{β,α}(t) = (1/t^{α+β}) ∫_0^t u^β ϰ(u) du,
is also ND.
Lemma 3.
Assume that ϰ is an NDPIF. Then, the function defined by (1/t²) ∫_0^t ϰ(u) u du is also ND with respect to t.
Proof. 
Since ϰ is non-decreasing, for 0 < t_1 < t_2 we arrive at
(1/t_2²) ∫_0^{t_2} ϰ(u) u du − (1/t_1²) ∫_0^{t_1} ϰ(u) u du = (1/t_2²) ∫_{t_1}^{t_2} ϰ(u) u du + (1/t_2² − 1/t_1²) ∫_0^{t_1} ϰ(u) u du ≥ ϰ(t_1) [(1/t_2²) ∫_{t_1}^{t_2} u du + (1/t_2² − 1/t_1²) ∫_0^{t_1} u du] = ϰ(t_1) [(1/t_2²) ∫_0^{t_2} u du − (1/t_1²) ∫_0^{t_1} u du] = 0.
Thus, (1/t²) ∫_0^t ϰ(u) u du is ND with respect to t.  □
Theorem 1.
Assume that T is continuously differentiable in M(Σ*, ρ), T(Σ*) = 0, [T′(Σ*)]^{-1} exists, Definitions 4 and 5 are satisfied by [T′(Σ*)]^{-1}T′ with ϰ and ϰ_0 ND, and ρ satisfies the relation (13). Then, the FSS (3) converges for every x_0 ∈ M(Σ*, ρ), with
‖y_n − Σ*‖ ≤ ∫_0^{2ϱ(x_n)} ϰ(u) u du / [2 (1 − ∫_0^{2ϱ(x_n)} ϰ_0(u) du)] ≤ (C_1/ϱ(x_0)) ϱ(x_n)²,
‖z_n − Σ*‖ ≤ [∫_0^{ϱ(x_n)+ϱ(y_n)} ϰ(u) u du / ((ϱ(x_n)+ϱ(y_n)) (1 − ∫_0^{2ϱ(x_n)} ϰ_0(u) du))] ϱ(y_n) ≤ (C_1²/ϱ(x_0)²) ϱ(x_n)³,
‖q_n − Σ*‖ ≤ [∫_0^{ϱ(y_n)+ϱ(z_n)} ϰ(u) u du / ((ϱ(y_n)+ϱ(z_n)) (1 − ∫_0^{2ϱ(y_n)} ϰ_0(u) du))] ϱ(z_n) ≤ (C_1³/ϱ(x_0)³) ϱ(x_n)⁴,
‖x_{n+1} − Σ*‖ ≤ [∫_0^{ϱ(z_n)+ϱ(q_n)} ϰ(u) u du / ((ϱ(z_n)+ϱ(q_n)) (1 − ∫_0^{2ϱ(z_n)} ϰ_0(u) du))] ϱ(q_n) ≤ (C_1⁴/ϱ(x_0)⁴) ϱ(x_n)⁵,
in which the quantities
C_1 = ∫_0^{2ϱ(x_0)} ϰ(u) u du / [2ϱ(x_0) (1 − ∫_0^{2ϱ(x_0)} ϰ_0(u) du)],
C_2 = ∫_0^{ϱ(x_0)+ϱ(y_0)} ϰ(u) u du / [(ϱ(x_0)+ϱ(y_0)) (1 − ∫_0^{2ϱ(x_0)} ϰ_0(u) du)],
C_3 = ∫_0^{ϱ(x_0)+ϱ(z_0)} ϰ(u) u du / [(ϱ(x_0)+ϱ(z_0)) (1 − ∫_0^{2ϱ(x_0)} ϰ_0(u) du)],
C_4 = ∫_0^{ϱ(x_0)+ϱ(q_0)} ϰ(u) u du / [(ϱ(x_0)+ϱ(q_0)) (1 − ∫_0^{2ϱ(x_0)} ϰ_0(u) du)],
are found to be less than 1. In addition,
‖x_n − Σ*‖ ≤ C_1^{5^n − 1} ‖x_0 − Σ*‖,  n = 1, 2, ….
Then, we propose a uniqueness theorem with a center-average Lipschitz condition for FSS, (3).
Theorem 2.
Assume that T is continuously differentiable in M(Σ*, ρ), T(Σ*) = 0, [T′(Σ*)]^{-1} exists, Definition 5 is satisfied by [T′(Σ*)]^{-1}T′ and ρ satisfies the relation
∫_0^{2ρ} ϰ_0(u) (2ρ − u) du / (2ρ) ≤ 1.
Then, Σ* is the unique solution of T(x) = 0 in M(Σ*, ρ).
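For intuition about the size of this uniqueness ball, the following small check of our own solves condition (22) as an equality for the constant center function ϰ_0(u) = (e − 1)/2 of the motivational example; the resulting value ρ = 2/(e − 1) ≈ 1.16 is our computation and is not reported in the article.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def uniqueness_radius(kappa0, bracket=(1e-8, 10.0)):
    """rho with int_0^{2 rho} kappa0(u) (2 rho - u) du = 2 rho (condition (22) tight)."""
    def gap(rho):
        val = quad(lambda u: kappa0(u) * (2.0 * rho - u), 0.0, 2.0 * rho)[0]
        return val - 2.0 * rho
    return brentq(gap, *bracket)   # assumes one sign change in the bracket

print(uniqueness_radius(lambda u: (np.e - 1.0) / 2.0))   # ~1.1639 = 2/(e - 1)
```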
Next, we provide proofs for the two core results.
Proof of Theorem 1.
Clearly, if x ∈ M(Σ*, ρ), then with the help of the center-average Lipschitz condition under the ϰ_0-average, along with the assumption (13), we have:
‖[T′(Σ*)]^{-1}[T′(x) − T′(Σ*)]‖ ≤ ∫_0^{2ϱ(x)} ϰ_0(u) du < 1.
By virtue of the Banach lemma [2] applied to the identity
‖(I − (I − [T′(Σ*)]^{-1}T′(x)))^{-1}‖ = ‖[T′(x)]^{-1}T′(Σ*)‖,
and using the expression (21), we arrive at the following inequality:
‖[T′(x)]^{-1}T′(Σ*)‖ ≤ 1 / (1 − ∫_0^{2ϱ(x)} ϰ_0(u) du).
Without loss of generality, pick x_0 ∈ M(Σ*, ρ). With C_1, C_2, C_3 and C_4 given by the relation (19) and ρ fulfilling the inequality (13), it can be proved that:
C 1 = 0 2 ϱ ( x 0 ) ϰ ( u ) u d u 2 ϱ ( x 0 ) 2 ( 1 0 2 ϱ ( x 0 ) ϰ 0 ( u ) d u ) ϱ ( x 0 ) 0 2 ρ ϰ ( u ) u d u 2 ρ 2 ( 1 0 2 ρ ϰ 0 ( u ) d u ) ϱ ( x 0 ) | | x 0 Σ * | | ρ < 1 , C 2 = 0 ϱ ( x 0 ) + ϱ ( y 0 ) ϰ ( u ) u d u ( ϱ ( x 0 ) + ϱ ( y 0 ) ) 2 ( 1 0 2 ϱ ( x 0 ) ϰ 0 ( u ) d u ) ( ϱ ( x 0 ) + ϱ ( y 0 ) ) 0 2 ρ ϰ ( u ) u d u 2 ρ 2 ( 1 0 2 ρ ϰ 0 ( u ) d u ) ( ϱ ( x 0 ) + ϱ ( y 0 ) ) | | x 0 Σ * | | + | | y 0 Σ * | | 2 ρ < 1 C 3 = 0 ϱ ( x 0 ) + ϱ ( z 0 ) ϰ ( u ) u d u ( ϱ ( x 0 ) + ϱ ( z 0 ) ) 2 ( 1 0 2 ϱ ( x 0 ) ϰ 0 ( u ) d u ) ( ϱ ( x 0 ) + ϱ ( z 0 ) ) 0 2 ρ ϰ ( u ) u d u 2 ρ 2 ( 1 0 2 ρ ϰ 0 ( u ) d u ) ( ϱ ( x 0 ) + ϱ ( z 0 ) ) | | x 0 Σ * | | + | | z 0 Σ * | | 2 ρ < 1 C 4 = 0 ϱ ( x 0 ) + ϱ ( q 0 ) ϰ ( u ) u d u ( ϱ ( x 0 ) + ϱ ( q 0 ) ) 2 ( 1 0 2 ϱ ( x 0 ) ϰ 0 ( u ) d u ) ( ϱ ( x 0 ) + ϱ ( q 0 ) ) 0 2 ρ ϰ ( u ) u d u 2 ρ 2 ( 1 0 2 ρ ϰ 0 ( u ) d u ) ( ϱ ( x 0 ) + ϱ ( q 0 ) ) | | x 0 Σ * | | + | | q 0 Σ * | | 2 ρ < 1 .
In what follows, if x_n ∈ M(Σ*, ρ), then we have from the scheme (3)
‖y_n − Σ*‖ = ‖x_n − Σ* − [T′(x_n)]^{-1} T(x_n)‖ = ‖[T′(x_n)]^{-1} [T′(x_n)(x_n − Σ*) − T(x_n) + T(Σ*)]‖.
Through Taylor's expansion of T(x_n) about Σ*, we obtain:
T(Σ*) − T(x_n) + T′(x_n)(x_n − Σ*) = T′(Σ*) ∫_0^1 [T′(Σ*)]^{-1} [T′(x_n) − T′(x_{nθ})] dθ (x_n − Σ*),
where x_{nθ} = Σ* + θ(x_n − Σ*).
So, combining expressions (23) and (24) along with Definition 4,
‖y_n − Σ*‖ ≤ ‖[T′(x_n)]^{-1} T′(Σ*)‖ · ‖∫_0^1 [T′(Σ*)]^{-1} [T′(x_n) − T′(x_{nθ})] dθ‖ · ‖x_n − Σ*‖ ≤ [1 / (1 − ∫_0^{2ϱ(x_n)} ϰ_0(u) du)] ∫_0^1 [∫_{2θϱ(x_n)}^{2ϱ(x_n)} ϰ(u) du] ϱ(x_n) dθ.
That gives the first inequality of relation (15) with Lemma 1. With the method's second sub-step in (3) and a parallel argument, we see that
‖z_n − Σ*‖ ≤ ‖[T′(x_n)]^{-1} T′(Σ*)‖ · ‖∫_0^1 [T′(Σ*)]^{-1} [T′(x_n) − T′(y_{nθ})] dθ‖ · ‖y_n − Σ*‖ ≤ [1 / (1 − ∫_0^{2ϱ(x_n)} ϰ_0(u) du)] ∫_0^1 [∫_{θ(ϱ(x_n)+ϱ(y_n))}^{ϱ(x_n)+ϱ(y_n)} ϰ(u) du] ϱ(y_n) dθ.
That gives the first inequality of relation (16) with Lemma 1. With the method's third sub-step in (3) and a similar argument, we obtain
‖q_n − Σ*‖ ≤ ‖[T′(x_n)]^{-1} T′(Σ*)‖ · ‖∫_0^1 [T′(Σ*)]^{-1} [T′(x_n) − T′(z_{nθ})] dθ‖ · ‖z_n − Σ*‖ ≤ [1 / (1 − ∫_0^{2ϱ(x_n)} ϰ_0(u) du)] ∫_0^1 [∫_{θ(ϱ(x_n)+ϱ(z_n))}^{ϱ(x_n)+ϱ(z_n)} ϰ(u) du] ϱ(z_n) dθ.
In view of Lemma 1, we obtain the relation (17). Finally, from the last sub-step of the scheme (3), we obtain
‖x_{n+1} − Σ*‖ ≤ ‖[T′(x_n)]^{-1} T′(Σ*)‖ · ‖∫_0^1 [T′(Σ*)]^{-1} [T′(x_n) − T′(q_{nθ})] dθ‖ · ‖q_n − Σ*‖ ≤ [1 / (1 − ∫_0^{2ϱ(x_n)} ϰ_0(u) du)] ∫_0^1 [∫_{θ(ϱ(x_n)+ϱ(q_n))}^{ϱ(x_n)+ϱ(q_n)} ϰ(u) du] ϱ(q_n) dθ.
That gives the expression (18). Moreover, ϱ(x_n), ϱ(y_n), ϱ(z_n) and ϱ(q_n) are monotonically decreasing; hence, for n = 0, 1, …, we see
‖y_n − Σ*‖ ≤ ∫_0^{2ϱ(x_n)} ϰ(u) u du / [2 (1 − ∫_0^{2ϱ(x_n)} ϰ_0(u) du)] ≤ [∫_0^{2ϱ(x_0)} ϰ(u) u du / (2ϱ(x_0)² (1 − ∫_0^{2ϱ(x_n)} ϰ_0(u) du))] ϱ(x_n)² ≤ (C_1/ϱ(x_0)) ϱ(x_n)².
Setting n = 0 above gives ‖y_0 − Σ*‖ ≤ C_1 ϱ(x_0) < ϱ(x_0). This shows that y_0 ∈ M(Σ*, ρ), and the process can therefore be repeated as per Equation (3). Furthermore, all iterates y_n belong to M(Σ*, ρ) by mathematical induction, and ϱ(y_n) = ‖y_n − Σ*‖ decreases monotonically.
By some computation in the first part of expression (15) and (16), we obtain
‖z_n − Σ*‖ ≤ [∫_0^{ϱ(x_n)+ϱ(y_n)} ϰ(u) u du / ((ϱ(x_n)+ϱ(y_n))² (1 − ∫_0^{2ϱ(x_n)} ϰ_0(u) du))] ϱ(y_n) [ϱ(x_n)+ϱ(y_n)] ≤ [C_2/(ϱ(x_0)+ϱ(y_0))] [ϱ(x_n)+ϱ(y_n)] ϱ(y_n).
By simplifying further,
‖z_n − Σ*‖ ≤ (C_1²/ϱ(x_0)²) ϱ(x_n)³.
Setting n = 0 in inequality (29) gives ‖z_0 − Σ*‖ ≤ C_2 ϱ(y_0) < ϱ(x_0). This shows that z_0 ∈ M(Σ*, ρ), and the process can therefore be repeated as per Equation (3). Furthermore, all iterates z_n belong to M(Σ*, ρ) by mathematical induction, and ϱ(z_n) = ‖z_n − Σ*‖ decreases monotonically.
‖q_n − Σ*‖ ≤ [∫_0^{ϱ(x_n)+ϱ(z_n)} ϰ(u) u du / ((ϱ(x_n)+ϱ(z_n))² (1 − ∫_0^{2ϱ(x_n)} ϰ_0(u) du))] ϱ(z_n) [ϱ(x_n)+ϱ(z_n)] ≤ [C_3/(ϱ(x_0)+ϱ(z_0))] [ϱ(x_n)+ϱ(z_n)] ϱ(z_n).
By simplifying further,
‖q_n − Σ*‖ ≤ (C_1³/ϱ(x_0)³) ϱ(x_n)⁴.
Setting n = 0 in inequality (29) gives ‖q_0 − Σ*‖ ≤ C_3 ϱ(z_0) < ϱ(x_0). This shows that q_0 ∈ M(Σ*, ρ), and the process can therefore be repeated as per Equation (3). Furthermore, all iterates q_n belong to M(Σ*, ρ) by mathematical induction, and ϱ(q_n) = ‖q_n − Σ*‖ decreases monotonically. In addition, the last expression (18) gives
‖x_{n+1} − Σ*‖ ≤ [∫_0^{ϱ(x_n)+ϱ(q_n)} ϰ(u) u du / ((ϱ(x_n)+ϱ(q_n))² (1 − ∫_0^{2ϱ(x_n)} ϰ_0(u) du))] ϱ(q_n) [ϱ(x_n)+ϱ(q_n)] ≤ [C_4/(ϱ(x_0)+ϱ(q_0))] [ϱ(x_n)+ϱ(q_n)] ϱ(q_n).
By simplifying further,
‖x_{n+1} − Σ*‖ ≤ (C_1⁴/ϱ(x_0)⁴) ϱ(x_n)⁵.
That is all regarding inequalities (15)–(18). It remains to check (20); for that, we use mathematical induction. For n = 0 , we have by the relation (16),
‖x_{n+1} − Σ*‖ ≤ (C_1⁴/ϱ(x_0)⁴) ϱ(x_n)⁵.
Subsequently, the aforementioned inequality can be transformed into an alternative form:
‖x_1 − Σ*‖ ≤ C_1^{5^1 − 1} ϱ(x_0).
That means the inequality (20) holds for n = 1. Next, assume the relation (20) holds for some integer n > 1. Then the aforementioned inequality yields:
‖x_{n+1} − Σ*‖ ≤ (C_1⁴/ϱ(x_0)⁴) (C_1^{5^n − 1} ϱ(x_0))⁵ = C_1^{5^{n+1} − 1} ϱ(x_0).
 □
We are now prepared to demonstrate the uniqueness result.
Proof of Theorem 2.
Without loss of generality, pick a solution Σ_1* ∈ M(Σ*, ρ) with Σ_1* ≠ Σ* and consider the scheme; we obtain
‖Σ_1* − Σ*‖ = ‖Σ_1* − Σ* − [T′(Σ*)]^{-1} T(Σ_1*)‖ = ‖[T′(Σ*)]^{-1} [T′(Σ*)(Σ_1* − Σ*) − T(Σ_1*) + T(Σ*)]‖.
Through Taylor’s expansion, we obtain from expansion of T ( Σ 1 * ) along Σ * :
T ( Σ * ) T ( Σ 1 * ) + T ( Σ * ) ( Σ 1 * Σ * ) = 0 1 [ T ( Σ * ) ] 1 [ T ( Σ 1 * ) θ T ( Σ * ) ] d θ ( Σ 1 * Σ * ) .
So, combining expressions (35) and (36) along with Definition 5,
| | Σ 1 * Σ * | | | | [ T ( Σ * ) ] 1 T ( Σ * ) | | . | | 0 1 [ T ( Σ * ) ] 1 [ T ( Σ 1 * ) θ T ( Σ * ) ] d θ | | . | | ( Σ 1 * Σ * ) | | 0 1 0 2 θ ϱ ( Σ 1 * ) ϰ 0 ( u ) d u ϱ ( Σ 1 * ) d θ ·
Looking at the relation (37) with Lemma 1, we have
| | Σ 1 * Σ * | | 1 2 ϱ ( Σ 1 * ) 0 2 ϱ ( Σ 1 * ) ϰ 0 ( u ) [ 2 ϱ ( Σ 1 * ) u ] d u ( Σ 1 * Σ * ) 0 2 ρ ϰ 0 ( u ) ( 2 ρ u ) d u 2 ρ ϱ ( Σ 1 * ) | | Σ 1 * Σ * | | .
However, that is found to be a contradiction. Hence, we find that Σ 1 * = Σ * . This gives the core result for this part.  □
Specifically, assuming that ϰ and ϰ 0 are constants, we can obtain the usual results on the Lipschitz condition.

4. Local Convergence with Weak ϰ -Average

We now provide local convergence results by re-examining the hypotheses of the first theorem and weakening them, in that ϰ is no longer taken to be an ND function. This weakening results in a convergence study under milder assumptions, at the cost of a decreased convergence order.
Theorem 3.
Assume that T is continuously differentiable in M(Σ*, ρ), T(Σ*) = 0, [T′(Σ*)]^{-1} exists, Definitions 4 and 5 are satisfied by [T′(Σ*)]^{-1}T′ with ϰ and ϰ_0 PIF, and ρ satisfies the relation
∫_0^{2ρ} ϰ_0(u) du ≤ 1  and  ∫_0^{2ρ} (ϰ(u) + ϰ_0(u)) du ≤ 1.
Then, the FSS (3) converges for every x_0 ∈ M(Σ*, ρ), with
‖y_n − Σ*‖ ≤ ∫_0^{2ϱ(x_n)} ϰ(u) u du / [2 (1 − ∫_0^{2ϱ(x_n)} ϰ_0(u) du)] ≤ M_1 ϱ(x_n),
‖z_n − Σ*‖ ≤ [∫_0^{ϱ(x_n)+ϱ(y_n)} ϰ(u) u du / ((ϱ(x_n)+ϱ(y_n)) (1 − ∫_0^{2ϱ(x_n)} ϰ_0(u) du))] ϱ(y_n) ≤ M_2 M_1 ϱ(x_n),
‖q_n − Σ*‖ ≤ [∫_0^{ϱ(y_n)+ϱ(z_n)} ϰ(u) u du / ((ϱ(y_n)+ϱ(z_n)) (1 − ∫_0^{2ϱ(y_n)} ϰ_0(u) du))] ϱ(z_n) ≤ M_3 M_2 M_1 ϱ(x_n),
‖x_{n+1} − Σ*‖ ≤ [∫_0^{ϱ(z_n)+ϱ(q_n)} ϰ(u) u du / ((ϱ(z_n)+ϱ(q_n)) (1 − ∫_0^{2ϱ(z_n)} ϰ_0(u) du))] ϱ(q_n) ≤ M_4 M_3 M_2 M_1 ϱ(x_n),
in which the quantities
M_1 = ∫_0^{2ϱ(x_0)} ϰ(u) du / (1 − ∫_0^{2ϱ(x_0)} ϰ_0(u) du),
M_2 = ∫_0^{ϱ(x_0)+ϱ(y_0)} ϰ(u) du / (1 − ∫_0^{2ϱ(x_0)} ϰ_0(u) du),
M_3 = ∫_0^{ϱ(x_0)+ϱ(z_0)} ϰ(u) du / (1 − ∫_0^{2ϱ(x_0)} ϰ_0(u) du),
M_4 = ∫_0^{ϱ(x_0)+ϱ(q_0)} ϰ(u) du / (1 − ∫_0^{2ϱ(x_0)} ϰ_0(u) du),
are found to be less than 1. In addition,
‖x_n − Σ*‖ ≤ (M_1 M_2 M_3 M_4)^n ‖x_0 − Σ*‖,  n = 1, 2, ….
Moreover, assuming the function ϰ α given as
ϰ_α(t) = t^{1−α} ϰ(t),
is ND for some α with 0 α 1 and ρ satisfies
(1/(2ρ)) ∫_0^{2ρ} (2ρ ϰ_0(u) + u ϰ(u)) du ≤ 1.
Then, the FSS (3) converges for every x_0 ∈ M(Σ*, ρ), with
‖x_n − Σ*‖ ≤ m_1^{((1+4α)^n − 1)/α} ‖x_0 − Σ*‖,  n = 1, 2, …,
in which quantity m 1 is the same as C 1 given in inequality (19) and less than 1.
Proof. 
Clearly, if x ∈ M(Σ*, ρ), then with the help of the center-average Lipschitz condition under the weak average, along with the assumption (39), we have:
‖[T′(Σ*)]^{-1}[T′(x) − T′(Σ*)]‖ ≤ ∫_0^{2ϱ(x)} ϰ_0(u) du < 1,  x ∈ M(Σ*, ρ).
Using the Banach lemma [2] applied to the identity
‖(I − (I − [T′(Σ*)]^{-1}T′(x)))^{-1}‖ = ‖[T′(x)]^{-1}T′(Σ*)‖,
and expression (49), we arrive at the following inequality:
‖[T′(x)]^{-1}T′(Σ*)‖ ≤ 1 / (1 − ∫_0^{2ϱ(x)} ϰ_0(u) du).
Without loss of generality, pick x_0 ∈ M(Σ*, ρ). With M_1, M_2, M_3 and M_4 given by the relation (44) and ρ fulfilling the inequality (39), it can be proved that:
M 1 = 0 2 ϱ ( x 0 ) ϰ ( u ) d u ( 1 0 2 ϱ ( x 0 ) ϰ 0 ( u ) d u ) 0 ρ ϰ ( u ) d u ( 1 0 2 ρ ϰ 0 ( u ) d u ) < 1 , M 2 = 0 ϱ ( x 0 ) + ϱ ( y 0 ) ϰ ( u ) d u ( 1 0 2 ϱ ( x 0 ) ϰ 0 ( u ) d u ) 0 2 ρ ϰ ( u ) d u ( 1 0 2 ρ ϰ 0 ( u ) d u ) < 1 M 3 = 0 ϱ ( x 0 ) + ϱ ( z 0 ) ϰ ( u ) d u ( 1 0 2 ϱ ( x 0 ) ϰ 0 ( u ) d u ) 0 2 ρ ϰ ( u ) d u ( 1 0 2 ρ ϰ 0 ( u ) d u ) < 1 M 4 = 0 ϱ ( x 0 ) + ϱ ( q 0 ) ϰ ( u ) d u ( 1 0 2 ϱ ( x 0 ) ϰ 0 ( u ) d u ) 0 2 ρ ϰ ( u ) d u ( 1 0 2 ρ ϰ 0 ( u ) d u ) < 1 .
In what follows, if x_n ∈ M(Σ*, ρ), then the first inequalities in the relations (40)–(43) follow from the scheme (3) exactly as in Theorem 1. Additionally, ϱ(x_n), ϱ(y_n), ϱ(z_n) and ϱ(q_n) are monotonically decreasing; hence, for n = 0, 1, …, we obtain
| | y n Σ * | | 0 2 ϱ ( x n ) ϰ ( u ) u d u 2 ( 1 0 2 ϱ ( x n ) ϰ 0 ( u ) d u ) 0 2 ϱ ( x 0 ) ϰ ( u ) d u ( 1 0 2 ϱ ( x n ) ϰ 0 ( u ) d u ) ϱ ( x n ) m 1 ϱ ( x n ) .
By some computation in first part of expression (41)–(43), we obtain
| | z n Σ * | | 0 ϱ ( x n ) + ϱ ( y n ) ϰ ( u ) u d u ( ϱ ( x n ) + ϱ ( y n ) ) ( 1 0 2 ϱ ( x n ) ϰ 0 ( u ) d u ) ϱ ( y n ) 0 ϱ ( x 0 ) + ϱ ( y 0 ) ϰ ( u ) d u ( 1 0 2 ϱ ( x 0 ) ϰ 0 ( u ) d u ) ϱ ( y n ) m 2 m 1 ρ ( x n ) . | | q n Σ * | | 0 ϱ ( x n ) + ϱ ( z n ) ϰ ( u ) u d u ( ϱ ( x n ) + ϱ ( z n ) ) ( 1 0 2 ϱ ( x n ) ϰ 0 ( u ) d u ) ϱ ( z n ) 0 ϱ ( x 0 ) + ϱ ( z 0 ) ϰ ( u ) d u ( 1 0 2 ϱ ( x 0 ) ϰ 0 ( u ) d u ) ϱ ( y n ) m 3 m 2 m 1 ρ ( x n ) . | | x n + 1 Σ * | | 0 ϱ ( x n ) + ϱ ( q n ) ϰ ( u ) u d u ( ϱ ( x n ) + ϱ ( y n ) ) ( 1 0 2 ϱ ( x n ) ϰ 0 ( u ) d u ) ϱ ( q n ) 0 ϱ ( x 0 ) + ϱ ( q 0 ) ϰ ( u ) d u ( 1 0 2 ϱ ( x 0 ) ϰ 0 ( u ) d u ) ϱ ( q n ) m 4 m 3 m 2 m 1 ρ ( x n ) .
We also can easily derive the inequality (45) through the aforementioned result. Assuming the function ϰ α given by the relation (46) is ND for some α with 0 α 1 and ρ is given by expression (47), in view of Lemma 2 and the first part of inequality of relation (40), we see
| | y n Σ * | | ψ 1 , α ( 2 ϱ ( x n ) ) 2 α ( 1 0 2 ϱ ( x n ) ϰ 0 ( u ) d u ) ϱ ( x n ) α + 1 ψ 1 , α ( 2 ϱ ( x 0 ) ) 2 α ( 1 0 2 ϱ ( x n ) ϰ 0 ( u ) d u ) ϱ ( x n ) α + 1 = m 1 ϱ ( x 0 ) α ϱ ( x n ) α + 1 .
Following that, when we see Lemma 2 and the initial part of inequality of (41)–(43), we find
| | z n Σ * | | ψ 1 , α ( ϱ ( x n ) + ϱ ( y n ) ) ( ϱ ( x n ) + ϱ ( y n ) ) α ( 1 0 2 ϱ ( x n ) ϰ 0 ( u ) d u ) ϱ ( y n ) , ψ 1 , α ( ϱ ( x 0 ) + ϱ ( y 0 ) ) ( ϱ ( x n ) + ϱ ( y n ) ) α ( 1 0 2 ϱ ( x n ) ϰ 0 ( u ) d u ) ϱ ( y n ) , m 1 ( 2 ϱ ( x 0 ) ) α ( ϱ ( x n ) + ϱ ( y n ) ) α ϱ ( y n ) , m 1 2 ϱ ( x 0 ) 2 α ϱ ( x n ) 2 α + 1 . | | q n Σ * | | ψ 1 , α ( ϱ ( x n ) + ϱ ( z n ) ) ( ϱ ( x n ) + ϱ ( z n ) ) α ( 1 0 2 ϱ ( x n ) ϰ 0 ( u ) d u ) ϱ ( z n ) , ψ 1 , α ( ϱ ( x 0 ) + ϱ ( z 0 ) ) ( ϱ ( x n ) + ϱ ( z n ) ) α ( 1 0 2 ϱ ( x n ) ϰ 0 ( u ) d u ) ϱ ( z n ) , m 1 ( 2 ϱ ( x 0 ) ) α ( ϱ ( x n ) + ϱ ( z n ) ) α ϱ ( z n ) , m 1 3 ϱ ( x 0 ) 3 α ϱ ( x n ) 3 α + 1 . | | x n + 1 Σ * | | ψ 1 , α ( ϱ ( x n ) + ϱ ( q n ) ) ( ϱ ( x n ) + ϱ ( q n ) ) α ( 1 0 2 ϱ ( x n ) ϰ 0 ( u ) d u ) ϱ ( q n ) , ψ 1 , α ( ϱ ( x 0 ) + ϱ ( q 0 ) ) ( ϱ ( x n ) + ϱ ( q n ) ) α ( 1 0 2 ϱ ( x n ) ϰ 0 ( u ) d u ) ϱ ( q n ) , m 1 ( 2 ϱ ( x 0 ) ) α ( ϱ ( x n ) + ϱ ( q n ) ) α ϱ ( q n ) , m 1 4 ϱ ( x 0 ) 4 α ϱ ( x n ) 4 α + 1 .
in which (19) proves m 1 < 1 . Moving forward the derivation of inequality (48), the method of mathematical induction will be employed. Initially, when n = 0 , the inequality transforms into the following expression:
| | x 1 Σ * | | m 1 4 . ϱ ( x 0 ) .
Consequently, the expression (48) holds true for n = 1 . To continue, let us assume that the inequality (48) is valid for an arbitrary integer n > 1 . By utilizing the inequalities (48) for n = n , and rearranging its terms, the inequality retains its form:
| | x n + 1 Σ * | | m 1 4 . ϱ ( x k ) ( 1 + 4 α ) ϱ ( x 0 ) 4 α m 1 ( 1 + 4 α ) n 1 α ϱ ( x 0 ) .
This demonstrates that the result holds true for n + 1 and clearly implies that {x_n} converges to Σ*. Hence, the proof is complete.  □
Theorem 4.
Assume that T is continuously differentiable in M(Σ*, ρ), T(Σ*) = 0, [T′(Σ*)]^{-1} exists, Definition 5 is satisfied by [T′(Σ*)]^{-1}T′ with ϰ_0 a PIF, and ρ satisfies the relation
∫_0^{2ρ} ϰ_0(u) du ≤ 1/3.
Then, the FSS (3) converges for every x_0 ∈ M(Σ*, ρ), and
‖x_n − Σ*‖ ≤ (η_1 η_2 η_3 η_4)^n ‖x_0 − Σ*‖,  n = 1, 2, …,
holds for
η_1 = 2 ∫_0^{2ϱ(x_0)} ϰ_0(u) du / (1 − ∫_0^{2ϱ(x_0)} ϰ_0(u) du),
η_2 = [∫_0^{2ϱ(x_0)} ϰ_0(u) du + ∫_0^{2ϱ(y_0)} ϰ_0(u) du] / (1 − ∫_0^{2ϱ(x_0)} ϰ_0(u) du),
η_3 = [∫_0^{2ϱ(x_0)} ϰ_0(u) du + ∫_0^{2ϱ(z_0)} ϰ_0(u) du] / (1 − ∫_0^{2ϱ(x_0)} ϰ_0(u) du),
η_4 = [∫_0^{2ϱ(x_0)} ϰ_0(u) du + ∫_0^{2ϱ(q_0)} ϰ_0(u) du] / (1 − ∫_0^{2ϱ(x_0)} ϰ_0(u) du).
Additionally, assuming the function ϰ α given by the inequality (46) to be ND for some α when 0 α 1 , we see
‖x_n − Σ*‖ ≤ η_1^{((1+4α)^n − 1)/α} ‖x_0 − Σ*‖,  n = 1, 2, ….
Proof. 
Without loss of generality, pick x_0 ∈ M(Σ*, ρ), with η_1, η_2, η_3 and η_4 given by the relation (55) and ρ fulfilling the inequality (53). In what follows, if x_n ∈ M(Σ*, ρ), then from the scheme (3) we are able to bound the distance norms as in Theorem 3. Looking at Definition 5 together with the relation (24), we obtain
| | y n Σ * | | | | [ T ( x n ) ] 1 T ( Σ * ) | | . | | 0 1 [ T ( Σ * ) ] 1 [ T ( x n ) T ( Σ * ) + T ( Σ * ) T ( x ) θ ] d θ | | . | | ( x n Σ * ) | | 1 ( 1 0 2 ϱ ( x n ) ϰ 0 ( u ) d u 0 1 0 2 θ ϱ ( x n ) ϰ 0 ( u ) d u ϱ ( x n ) d θ + 0 1 0 2 ϱ ( x n ) ϰ 0 ( u ) d u ϱ ( x n ) d θ .
Looking at the Lemma 1, the aforementioned inequality becomes
| | y n Σ * | | 2 0 2 ϱ ( x n ) ϰ 0 ( u ) d u ϱ ( x n ) 1 2 0 2 ϱ ( x n ) ϰ 0 ( u ) u d u 1 0 2 ϱ ( x n ) ϰ 0 ( u ) d u 2 0 2 ϱ ( x n ) ϰ 0 ( u ) d u 1 0 2 ϱ ( x n ) ϰ 0 ( u ) d u ϱ ( x n ) = η 1 ϱ ( x n ) ,
The method’s remaining sub-step (3) and a parallel analogy gives
| | z n Σ * | | | | [ T ( x n ) ] 1 T ( Σ * ) | | | | 0 1 [ T ( Σ * ) ] 1 [ T ( x n ) T ( Σ * ) ] d θ | | . | | ( y n Σ * ) | | + | | 0 1 [ T ( Σ * ) ] 1 [ T ( Σ * ) T ( y θ ) ] d θ | | . | | ( y n Σ * ) | | 1 ( 1 0 2 ϱ ( x n ) ϰ 0 ( u ) d u 0 1 0 2 θ ϱ ( y n ) ϰ 0 ( u ) d u ϱ ( y n ) d θ + 0 1 0 2 ϱ ( x n ) ϰ 0 ( u ) d u ϱ ( y n ) d θ .
Looking at Lemma 1, the aforementioned inequality becomes
| | z n Σ * | | 0 2 ϱ ( x n ) ϰ 0 ( u ) d u ϱ ( y n ) + 0 2 ϱ ( y n ) ϰ 0 ( u ) d u ϱ ( y n ) 1 2 0 2 ϱ ( y n ) ϰ 0 ( u ) u d u 1 0 2 ϱ ( x n ) ϰ 0 ( u ) d u 0 2 ϱ ( x n ) ϰ 0 ( u ) d u ϱ ( y n ) + 0 2 ϱ ( y n ) ϰ 0 ( u ) d u ϱ ( y n ) 1 0 2 ϱ ( x n ) ϰ 0 ( u ) d u = η 2 η 1 ϱ ( x n )
| | q n Σ * | | 0 2 ϱ ( x n ) ϰ 0 ( u ) d u ϱ ( z n ) + 0 2 ϱ ( z n ) ϰ 0 ( u ) d u ϱ ( z n ) 1 2 0 2 ϱ ( z n ) ϰ 0 ( u ) u d u 1 0 2 ϱ ( x n ) ϰ 0 ( u ) d u 0 2 ϱ ( x n ) ϰ 0 ( u ) d u ϱ ( z n ) + 0 2 ϱ ( z n ) ϰ 0 ( u ) d u ϱ ( z n ) 1 0 2 ϱ ( x n ) ϰ 0 ( u ) d u = η 3 η 2 η 1 ϱ ( x n )
| | x n + 1 Σ * | | 0 2 ϱ ( x n ) ϰ 0 ( u ) d u ϱ ( q n ) + 0 2 ϱ ( q n ) ϰ 0 ( u ) d u ϱ ( q n ) 1 2 0 2 ϱ ( q n ) ϰ 0 ( u ) u d u 1 0 2 ϱ ( x n ) ϰ 0 ( u ) d u 0 2 ϱ ( x n ) ϰ 0 ( u ) d u ϱ ( q n ) + 0 2 ϱ ( q n ) ϰ 0 ( u ) d u ϱ ( q n ) 1 0 2 ϱ ( x n ) ϰ 0 ( u ) d u = η 4 η 3 η 2 η 1 ϱ ( x n ) ,
in which expression (55) gives η 1 < 1 , η 2 , η 3 and η 4 < 1 . The relations proved above yield inequality (54) proving x n is convergent to Σ * . Assuming the function ϰ α given by the relation (46) is ND for some α with 0 α 1 and ρ is given by expression (53), in view of Lemma 2 and the aforementioned relations, we have
| | y n Σ * | | 2 ψ 0 , α ( 2 ϱ ( x n ) ) 2 α ( 1 0 2 ϱ ( x n ) ϰ 0 ( u ) d u ) ϱ ( x n ) α + 1 , 2 ψ 0 , α ( 2 ϱ ( x 0 ) ) 2 α ( 1 0 2 ϱ ( x 0 ) ϰ 0 ( u ) d u ) ϱ ( x n ) α + 1 = η 1 ϱ ( x 0 ) α ϱ ( x n ) α + 1 .
Following that, Lemma 2 gives
| | z n Σ * | | ψ 0 , α ( 2 ϱ ( x n ) ) + ψ 0 , α ( 2 ϱ ( y n ) ) . ( 2 ϱ ( x n ) ) α . ϱ ( y n ) ( 1 0 2 ϱ ( x n ) ϰ 0 ( u ) d u ) ϱ ( y n ) , ψ 0 , α ( 2 ϱ ( x 0 ) ) + ψ 0 , α ( 2 ϱ ( y 0 ) ) . ( 2 ϱ ( x n ) ) α . ϱ ( y n ) ( 1 0 2 ϱ ( x n ) ϰ 0 ( u ) d u ) ϱ ( y n ) , η 1 ( 2 ϱ ( x 0 ) ) α ( ϱ ( x n ) + ϱ ( y n ) ) α ϱ ( y n ) , η 1 2 ϱ ( x 0 ) 2 α ϱ ( x n ) 2 α + 1 . | | q n Σ * | | ψ 0 , α ( 2 ϱ ( x n ) ) + ψ 0 , α ( 2 ϱ ( z n ) ) . ( 2 ϱ ( x n ) ) α . ϱ ( z n ) ( 1 0 2 ϱ ( x n ) ϰ 0 ( u ) d u ) ϱ ( z n ) , ψ 0 , α ( 2 ϱ ( x 0 ) ) + ψ 0 , α ( 2 ϱ ( z 0 ) ) . ( 2 ϱ ( x n ) ) α . ϱ ( z n ) ( 1 0 2 ϱ ( x n ) ϰ 0 ( u ) d u ) ϱ ( z n ) , η 1 ( 2 ϱ ( x 0 ) ) α ( ϱ ( x n ) + ϱ ( z n ) ) α ϱ ( z n ) , η 1 3 ϱ ( x 0 ) 3 α ϱ ( x n ) 3 α + 1 . | | x n + 1 Σ * | | ψ 0 , α ( 2 ϱ ( x n ) ) + ψ 0 , α ( 2 ϱ ( q n ) ) . ( 2 ϱ ( x n ) ) α . ϱ ( q n ) ( 1 0 2 ϱ ( x n ) ϰ 0 ( u ) d u ) ϱ ( q n ) , ψ 0 , α ( 2 ϱ ( x 0 ) ) + ψ 0 , α ( 2 ϱ ( q 0 ) ) . ( 2 ϱ ( x n ) ) α . ϱ ( q n ) ( 1 0 2 ϱ ( x n ) ϰ 0 ( u ) d u ) ϱ ( q n ) , η 1 ( 2 ϱ ( x 0 ) ) α ( ϱ ( x n ) + ϱ ( q n ) ) α ϱ ( q n ) , η 1 4 ϱ ( x 0 ) 4 α ϱ ( x n ) 4 α + 1 .
Continuing the derivation further results in the inequality (56) indicating that x n is convergent to Σ * .  □
Next, we will give special cases and the applications to our novel improved theorems with few specific functions on ϰ , and the results through Theorems 3 and 4 are reconstructed.
Corollary 1.
Assume that T is continuously differentiable in M(Σ*, ρ), T(Σ*) = 0, [T′(Σ*)]^{-1} exists and Definitions 4 and 5 are satisfied by [T′(Σ*)]^{-1}T′ with ϰ(u) = ζ a u^{a−1} and ϰ_0(u) = ζ_0 a u^{a−1}; that is,
‖[T′(Σ*)]^{-1}(T′(x) − T′(y_θ))‖ ≤ ζ (1 − θ^a)(‖x − Σ*‖ + ‖y − Σ*‖)^a,
with
‖[T′(Σ*)]^{-1}(T′(x) − T′(Σ*))‖ ≤ ζ_0 2^a ‖x − Σ*‖^a,
for all x, y ∈ M(Σ*, ρ) and 0 ≤ θ ≤ 1, where y_θ = Σ* + θ(y − Σ*), 0 < a < 1, ζ > 0 and ζ_0 > 0, and ρ satisfies the relation
ρ = [(a + 1) / (2^a (ζ_0 (a + 1) + ζ a))]^{1/a}.
Then, the FSS (3) converges for every x_0 ∈ M(Σ*, ρ), with
‖y_n − Σ*‖ ≤ ∫_0^{2ϱ(x_n)} ϰ(u) u du / [2 (1 − ∫_0^{2ϱ(x_n)} ϰ_0(u) du)] ≤ M_1 ϱ(x_n),
‖z_n − Σ*‖ ≤ [∫_0^{ϱ(x_n)+ϱ(y_n)} ϰ(u) u du / ((ϱ(x_n)+ϱ(y_n)) (1 − ∫_0^{2ϱ(x_n)} ϰ_0(u) du))] ϱ(y_n) ≤ M_2 M_1 ϱ(x_n),
‖q_n − Σ*‖ ≤ [∫_0^{ϱ(y_n)+ϱ(z_n)} ϰ(u) u du / ((ϱ(y_n)+ϱ(z_n)) (1 − ∫_0^{2ϱ(y_n)} ϰ_0(u) du))] ϱ(z_n) ≤ M_3 M_2 M_1 ϱ(x_n),
‖x_{n+1} − Σ*‖ ≤ [∫_0^{ϱ(z_n)+ϱ(q_n)} ϰ(u) u du / ((ϱ(z_n)+ϱ(q_n)) (1 − ∫_0^{2ϱ(z_n)} ϰ_0(u) du))] ϱ(q_n) ≤ M_4 M_3 M_2 M_1 ϱ(x_n),
holds for
M_1 = ζ a 2^a ϱ(x_0)^a / [(1 + a)(1 − 2^a ζ_0 ϱ(x_0)^a)],
M_2 = ζ a (ϱ(x_0) + ϱ(y_0))^a / [(a + 1)(1 − 2^a ζ_0 ϱ(x_0)^a)],
M_3 = ζ a (ϱ(x_0) + ϱ(z_0))^a / [(a + 1)(1 − 2^a ζ_0 ϱ(x_0)^a)],
M_4 = ζ a (ϱ(x_0) + ϱ(q_0))^a / [(a + 1)(1 − 2^a ζ_0 ϱ(x_0)^a)].
Hence,
‖x_n − Σ*‖ ≤ (M_1 M_2 M_3 M_4)^n ‖x_0 − Σ*‖,  n = 1, 2, ….
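The closed-form radius of Corollary 1 is easy to evaluate; the one-liner below does so for illustrative parameter values of our own choosing (a, ζ and ζ_0 are hypothetical and not taken from the article).

```python
def corollary1_radius(a: float, zeta: float, zeta0: float) -> float:
    """Radius rho = [(a+1) / (2**a * (zeta0*(a+1) + zeta*a))]**(1/a) of Corollary 1
    for kappa(u) = zeta*a*u**(a-1) and kappa0(u) = zeta0*a*u**(a-1)."""
    return ((a + 1.0) / (2.0**a * (zeta0 * (a + 1.0) + zeta * a))) ** (1.0 / a)

# Hypothetical parameter values, for illustration only.
print(corollary1_radius(a=0.5, zeta=1.0, zeta0=0.5))   # ~0.72
```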
Corollary 2.
Assume that T is continuously differentiable in M(Σ*, ρ), T(Σ*) = 0, [T′(Σ*)]^{-1} exists and Definition 5 is satisfied by [T′(Σ*)]^{-1}T′ with ϰ_0(u) = ζ_0 a u^{a−1}; that is,
‖[T′(Σ*)]^{-1}(T′(x) − T′(Σ*))‖ ≤ ζ_0 2^a ‖x − Σ*‖^a,  x ∈ M(Σ*, ρ),
in which 0 < a < 1 and ζ_0 > 0, and ρ satisfies the relation
ρ = [1 / (3 ζ_0 2^a)]^{1/a}.
Then, the FSS (3) converges for every x_0 ∈ M(Σ*, ρ), with
η_1 = ζ_0 2^{a+1} ϱ(x_0)^a / (1 − 2^a ζ_0 ϱ(x_0)^a),
η_2 = ζ_0 2^a (ϱ(x_0)^a + ϱ(y_0)^a) / (1 − 2^a ζ_0 ϱ(x_0)^a),
η_3 = ζ_0 2^a (ϱ(x_0)^a + ϱ(z_0)^a) / (1 − 2^a ζ_0 ϱ(x_0)^a),
η_4 = ζ_0 2^a (ϱ(x_0)^a + ϱ(q_0)^a) / (1 − 2^a ζ_0 ϱ(x_0)^a),
‖x_n − Σ*‖ ≤ (η_1 η_2 η_3 η_4)^n ‖x_0 − Σ*‖,  n = 1, 2, ….
Remark 1.
(i) 
If ϰ_0 = ϰ, then our results reduce to those proved by earlier researchers [5,8,12,13,23]. Thus, the results under the condition mentioned above are special cases of ours. However, if ϰ_0 < ϰ, a wider convergence radius is achieved in our results due to the weakening of the Lipschitz continuity conditions (as illustrated in Examples 1 and 3 in the next section).
(ii) 
The extension of the scope of application of our results is described below. Assume that the equation 2 ϰ_0(u) u − 1 = 0 has a minimal positive root ρ̄ and that (5) holds. Set M̃ = M(Σ*, ρ) ∩ M(Σ*, ρ̄). Furthermore, set
‖T′(x) − T′(y_θ)‖ ≤ ∫_{θ(ϱ(x)+ϱ(y))}^{ϱ(x)+ϱ(y)} ϰ̄(u) du,
in which x, y ∈ M̃, 0 ≤ θ ≤ 1, and ϰ̄ plays the role of ϰ. We see that
ϰ̄(u) ≤ ϰ(u),  u ∈ [0, min{ρ, ρ̄}].
So, according to the above proofs, ϰ ¯ can replace ϰ in all the results under ϰ. However, then if
ϰ ¯ ( u ) < ϰ ( u )
then the advantages mentioned in (i) above can be extended even further. Thus, in the motivational example, by choosing the lower upper bound ϰ̄ for ϰ(u), which further enhances the convergence radius, we obtain
ϰ_0 < ϰ̄ = e^{1/(e−1)}/2 < ϰ.

5. Semi-Local Convergence

This section presents semi-local convergence results based on highly general majorizing sequences for the FSS (3). The study of iterative methods highly values majorizing sequences, as they significantly contribute to the analysis of a given scheme: they provide a way to bound the error of the iterative method, which is crucial in understanding its convergence properties. By providing a tight upper bound on the error, majorizing sequences can be used to establish convergence results. We introduce an extensive majorizing sequence. Suppose that there exists a real function κ_0 defined on the interval [0, +∞) such that the equation κ_0(t) − 1 = 0 has a smallest positive solution ρ. Let also κ be a real function defined on the interval [0, ρ). Let a_0 = 0 and let b_0 be a non-negative parameter. Then, define the sequence {a_t} by
c t = b t + 0 1 κ ( ( 1 Θ ) ( b t a t ) ) d Θ ( b t a t ) 1 κ 0 ( a t ) , α t = c 1 + 0 1 κ 0 ( b t + Θ ( c t b t ) ) d Θ ( c t b t ) + 0 1 κ ( ( 1 Θ ) ( b t a t ) ) d Θ ( b t a t ) , d t = c t + α t 1 κ 0 ( a t ) , l t = c 1 + 0 1 κ 0 ( b t + Θ ( d t b t ) ) d Θ ( d t b t ) + 0 1 κ ( ( 1 Θ ) ( b t a t ) ) d Θ ( b t a t ) , a t + 1 = d t + l t 1 κ 0 ( a t ) , r t + 1 = 0 1 κ ( ( 1 Θ ) ( a t + 1 a t ) ) d Θ ( a t + 1 a t ) + ( 1 κ 0 ( a t ) ) ( a t + 1 a t ) ,
and
b_{t+1} = a_{t+1} + r_{t+1} / (1 − κ_0(a_{t+1})).
The sequence {a_t} is shown to be majorizing for the sequence {x_t} in Theorem 5. Let us first develop convergence criteria for the sequence {a_t}.
Lemma 4.
Suppose there exists ρ 0 [ 0 , ρ ) such that for each t = 0 , 1 , 2 ,
κ_0(a_t) < 1  and  a_t ≤ ρ_0.
Then, the following assertions hold
0 ≤ a_t ≤ b_t ≤ c_t ≤ d_t ≤ a_{t+1} ≤ ρ_0,
and there exists a* ∈ [0, ρ_0] such that a_t ≤ a* ≤ ρ_0 and lim_{t→∞} a_t = a*.
Proof. 
By the definition of the sequence { a t } given by the formula (73) and the conditions (74), we see that the assertion (75) holds. Hence, the rest of the assertions also hold.  □
Remark 2.
If the function κ_0 is strictly increasing, then set ρ_0 = κ_0^{-1}(1). The functions κ_0, κ and the limit point a* are associated with the method (3). Suppose:
(A1) 
There exists a parameter b_0 ≥ 0 and a point x_0 ∈ Ω such that the linear operator T′(x_0) is invertible and ‖[T′(x_0)]^{-1} T(x_0)‖ ≤ b_0.
(A2) 
‖[T′(x_0)]^{-1}(T′(u) − T′(x_0))‖ ≤ κ_0(‖u − x_0‖) for each u ∈ Ω. Set U_1 = U(x_0, ρ) ∩ Ω.
(A3) 
‖[T′(x_0)]^{-1}(T′(u_2) − T′(u_1))‖ ≤ κ(‖u_2 − u_1‖) for each u_1, u_2 ∈ U_1.
(A4) 
The conditions in (74) hold and
(A5) 
U[x_0, a*] ⊆ Ω.
Remark 3.
(1) 
The parameter ρ can replace the limit point a * in the condition ( A 5 ) .
(2) 
Suppose that
(A3)′ ‖[T′(x_0)]^{-1} T′(x)‖ ≤ κ_1(‖x − x_0‖) for each x ∈ U_1, where κ_1 is a continuous and non-decreasing real function defined on the interval [0, ρ].
Then, under the conditions ( A 2 ) , we obtain in turn
‖[T′(x_0)]^{-1}(T′(x) − T′(x_0) + T′(x_0))‖ ≤ 1 + ‖[T′(x_0)]^{-1}(T′(x) − T′(x_0))‖ ≤ 1 + κ_0(‖x − x_0‖).
That is, we can choose κ_1(t) = 1 + κ_0(t), and then the condition (A3)′ holds for this choice. However, the function κ_1 can be smaller than 1 + κ_0 in some examples. As an example, define the real function T(x) = sin x. Then, we obtain κ_1(t) = t < 1 + κ_0(t). In practice, we shall use the smaller of the functions κ_1 and 1 + κ_0. Moreover, if κ_1 is smaller, then (A3)′ should be added to the conditions (A1)–(A5), since (A2) implies (A3)′ but not necessarily vice versa.
The main result for the semi-local convergence of the FSS (3) is:
Theorem 5.
Suppose that the conditions (A1)–(A5) hold. Then, the sequence {x_t} generated by the method (3) is well defined in the ball U(x_0, a*), remains in U(x_0, a*) for each t = 0, 1, 2, … and is convergent to a solution x* ∈ U[x_0, a*] of the equation T(x) = 0. Additionally, the following assertions hold for each t = 0, 1, 2, …
‖y_t − x_t‖ ≤ b_t − a_t,
‖z_t − y_t‖ ≤ c_t − b_t,
‖q_t − z_t‖ ≤ d_t − c_t,
‖x_{t+1} − q_t‖ ≤ a_{t+1} − d_t,
and
‖x_t − x*‖ ≤ a* − a_t.
Proof. 
Induction shall determine the assertions. The condition ( A 1 ) and the method (3) for t = 0 give
‖y_0 − x_0‖ = ‖[T′(x_0)]^{-1} T(x_0)‖ ≤ b_0 = b_0 − a_0 < a*.
Thus, the iterate y 0 U ( x 0 , a * ) and the assertion (76) is established for t = 0 . Let u U ( x 0 , a * ) . Then, by the condition ( A 2 ) , it follows
‖[T′(x_0)]^{-1}(T′(u) − T′(x_0))‖ ≤ κ_0(‖u − x_0‖) ≤ κ_0(a*) < 1,
thus
‖[T′(u)]^{-1} T′(x_0)‖ ≤ 1 / (1 − κ_0(‖u − x_0‖)).
We can write by the first sub-step
T ( y k ) = T ( y k ) T ( x k ) + T ( x k ) = T ( y k ) T ( x k ) T ( x k ) ( y k x k ) = 0 1 [ T ( x k + Θ ( y k x k ) ) d Θ T ( x k ) ] ( y k x k ) .
Hence, by ( A 2 ) ,
[ T ( x 0 ) ] 1 T ( y k ) 0 1 κ ( ( 1 Θ ) y k x k ) d Θ y k x k , 0 1 κ ( ( 1 Θ ) y k x k ) d Θ y k x k , 0 1 κ ( ( 1 Θ ) ( b k a k ) d Θ ( b k a k ) ,
and by the second sub-step
z k x k [ T ( x k ) ] 1 T ( x 0 ) . [ T ( x 0 ) ] 1 T ( y k ) 0 1 κ ( ( 1 Θ ) ( b k a k ) d Θ ( b k a k ) 1 κ 0 ( a k ) = c k b k ,
and
‖z_k − x_0‖ ≤ ‖z_k − y_k‖ + ‖y_k − x_0‖ ≤ c_k − b_k + b_k − a_0 = c_k < a*,
where we also used (76) for u = x k . Hence, the iterate z k U ( x 0 , a * ) and the assertion (77) holds. Then, we can write
T ( z k ) = T ( z k ) T ( y k ) + T ( y k ) = 0 1 [ T ( y k + Θ ( z k y k ) ) d Θ ] ( z k y k ) + T ( y k ) .
Therefore,
[ T ( x 0 ) ] 1 T ( z k ) ( 0 1 κ 0 ( y k x 0 + Θ z k y k ) d Θ z k y k + [ T ( x 0 ) ] 1 T ( y k ) ( 0 1 κ 0 ( b k + Θ ( c k b k ) ) d Θ ) ( c k b k ) + ( 0 1 κ ( ( 1 Θ ) ( b k a k ) ) d Θ ) ( b k a k ) ,
Consequently,
q k z k [ T ( x k ) ] 1 T ( x 0 ) . [ T ( x 0 ) ] 1 T ( z k ) α k 1 κ 0 ( a k ) = d k c k ,
and
‖q_k − x_0‖ ≤ ‖q_k − z_k‖ + ‖z_k − x_0‖ ≤ d_k − c_k + c_k − a_0 = d_k < a*.
Hence, the iterate q k U ( x 0 , a * ) and the assertion (78) holds. Similarly, the last sub-step gives in turn
x k + 1 q k [ T ( x k ) ] 1 T ( x 0 ) . [ T ( x 0 ) ] 1 ( T ( q k ) T ( y k ) + T ( y k ) ( 1 + 0 1 κ 0 ( y k x 0 + Θ q k y k ) d Θ ) q k y k + [ T ( x 0 ) ] 1 T ( y k ) 1 κ 0 ( a k ) l k 1 κ 0 ( a k ) = a k + 1 d k ,
and
‖x_{k+1} − x_0‖ ≤ ‖x_{k+1} − q_k‖ + ‖q_k − x_0‖ ≤ a_{k+1} − d_k + d_k − a_0 = a_{k+1} < a*.
Hence, the iterate x k + 1 U ( x 0 , a * ) and the assertion (79) holds. Moreover, we can write by the first sub-step
T ( x k + 1 ) = T ( x k + 1 ) T ( x k ) + T ( x k ) = T ( x k + 1 ) T ( x k ) T ( x k ) ( x k + 1 x k ) + T ( x k ) ( x k + 1 x k ) T ( x k ) ( y k x k ) = T ( x k + 1 ) T ( x k ) T ( x k ) ( x k + 1 x k ) + T ( x k ) ( x k + 1 y k ) ,
so
[ T ( x 0 ) ] 1 T ( x k + 1 ) 0 1 κ ( ( 1 Θ ) x k + 1 x k ) d Θ x k + 1 x k + ( 1 + κ 0 ( x k x 0 ) ) x k + 1 y k ( 0 1 κ ( ( 1 Θ ) ( a k + 1 a k ) ) d Θ ( a k + 1 a k ) + ( 1 + κ 0 ( a k ) ) ( a k + 1 b k ) = r k + 1 ,
Consequently,
y k + 1 x k + 1 [ T ( x k + 1 ) ] 1 T ( x 0 ) . [ T ( x 0 ) ] 1 T ( x k + 1 ) r k + 1 1 κ 0 ( a k + 1 ) = b k + 1 a k + 1
and
‖y_{k+1} − x_0‖ ≤ ‖y_{k+1} − x_{k+1}‖ + ‖x_{k+1} − x_0‖ ≤ b_{k+1} − a_{k+1} + a_{k+1} − a_0 = b_{k+1} < a*.
Hence, the iterate y k + 1 U ( x 0 , a * ) and the assertion (79) holds. The induction is terminated. Therefore, it is established that the sequence a k majorizes the sequence x k . Moreover, the sequence a k is complete as convergent by the condition ( A 4 ) . Thus, the sequence x k is also complete in Banach space . Hence, there exists x * U ( x 0 , a * ) such that lim k x k = x * . Furthermore, if k in (82), then we conclude from lim k r k + 1 = 0 , that T ( x * ) = 0 . Finally, let i in the estimate
‖x_{t+i} − x_t‖ ≤ a_{t+i} − a_t,
to obtain the assertion (80).  □
The determination of the solution region’s uniqueness follows.
Proposition 1.
Suppose there exists a solution y* ∈ U(x_0, ρ_1) of the equation T(x) = 0 for some ρ_1 > 0, the condition (A2) holds in the ball U(x_0, ρ_1), and there exists ρ_2 ≥ ρ_1 such that
∫_0^1 κ_0((1 − Θ) ρ_1 + Θ ρ_2) dΘ < 1.
Set U_2 = U(x_0, ρ_2) ∩ Ω. Then, y* is the only solution of the equation T(x) = 0 in the region U_2.
Proof. 
Let z* ∈ U_2 be such that T(z*) = 0. Then, by applying (A2) and the condition (83), we obtain in turn, for M = ∫_0^1 T′(y* + Θ(z* − y*)) dΘ, that
‖[T′(x_0)]^{-1}(M − T′(x_0))‖ ≤ ∫_0^1 κ_0((1 − Θ)‖y* − x_0‖ + Θ‖z* − x_0‖) dΘ ≤ ∫_0^1 κ_0((1 − Θ) ρ_1 + Θ ρ_2) dΘ < 1.
Thus, the linear operator M is invertible. Hence, we obtain z * y * = M 1 ( T ( z * ) T ( y * ) ) = M 1 ( 0 ) = 0 . Therefore, we conclude z * = y * .  □
Remark 4.
If all the conditions ( A 1 ) ( A 5 ) hold in Proposition 1, then choose y * = x * and ρ 1 = a * .

6. Applications

The applications are illustrated by the examples.
Example 1.
Looking back at the motivational example, we are able to apply our hypotheses and see that all the assumptions hold. Applying (13) and T′(Σ*) = diag(1, 1, 1), we find the following:
For the old case, corresponding to the outcomes of past researchers [1,22], ϰ_0(u) = ϰ(u) = e/2 gives
ρ 0 = 0.245253 .
Narrowing ϰ_0(u) down further, we have the following two cases. The case ϰ_0(u) = (e − 1)/2 and ϰ(u) = e/2 gives
ρ 1 = 0.324947
The case ϰ_0(u) = (e − 1)/2 and ϰ̄(u) = e^{1/(e−1)}/2 gives
ρ 2 = 0.382692
We clearly notice that
ρ 0 < ρ 1 < ρ 2 .
Therefore, we are able to justify the advantages mentioned in Remark 1; that is, the weakening leads to an enlarged convergence domain for the proposed scheme.
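The three radii quoted above follow from making condition (13) an equality for each pair of constants; for constant ϰ and ϰ_0, (13) reduces (by our own elementary computation) to ρ ≤ 1/(ϰ + 2ϰ_0), and the short script below reproduces the reported values.

```python
import numpy as np

def radius_constant(k, k0):
    """Radius implied by condition (13) when kappa(u) = k and kappa0(u) = k0 are
    constants; the second inequality of (13) reduces to rho <= 1/(k + 2*k0)."""
    return 1.0 / (k + 2.0 * k0)

e = np.e
print(radius_constant(e / 2, e / 2))                            # rho_0 ~ 0.245253
print(radius_constant(e / 2, (e - 1) / 2))                      # rho_1 ~ 0.324947
print(radius_constant(np.exp(1 / (e - 1)) / 2, (e - 1) / 2))    # rho_2 ~ 0.382692
```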
Example 2.
Let χ = Y = ℝ. We take
T(u) = ∫_0^u (1 + 2u sin(π/u)) du,  u ∈ ℝ.
So
T′(u) = 1 + 2u sin(π/u) for u ≠ 0, and T′(0) = 1.
Clearly, Σ* = 0 is a root of T, and T′ fulfills
‖[T′(Σ*)]^{-1}(T′(u) − T′(Σ*))‖ = |2u sin(π/u)| ≤ 2 |u − Σ*|,  u ∈ ℝ.
In view of Theorem 4, for any x_0 ∈ M(Σ*, 1/6), we obtain the expression
| | x n Σ * | | C 5 n 1 | | x 0 Σ * | | , n = 1 , 2 , , C = ( 8 | x 0 | 4 ) . ( | x 0 | + | y 0 | ) . ( | x 0 | + | z 0 | ) ( 1 2 | x 0 | ) 3 . | y 0 | . | z 0 | . | q 0 | .
Meanwhile, there is no PIF ϰ that satisfies the inequality (6). Note that
‖[T′(Σ*)]^{-1}(T′(u) − T′(v_θ))‖ = |2u sin(π/u) − 2v_θ sin(π/v_θ)| = 4/(2i + 1),
for u = 1/i, v = 1/i, θ = 2i/(2i + 1) and i = 1, 2, …. Hence, if there were a positively integrable function ϰ such that the relation (6) holds on M(Σ*, ρ) for some ρ > 0, it would follow that there exists some i_0 > 1 such that
∫_0^{2ρ} ϰ(u) du ≥ Σ_{i=i_0}^{+∞} ∫_{4/(2i+1)}^{2/i} ϰ(u) du ≥ Σ_{i=i_0}^{+∞} 4/(2i + 1) = +∞,
which is a contradiction. This example illustrates that Theorem 4 is a critical enhancement of Theorem 3, provided the size of the convergence radius is ignored.
Example 3
([15]). Choose χ = Y = C[0, 1], Υ = M̄(0, 1) and Σ* = 0. Define T on Υ by
T(p)(u) = p(u) − 5 ∫_0^1 u θ p(θ)³ dθ.
Then,
T′(p)(s)(u) = s(u) − 15 ∫_0^1 u θ p(θ)² s(θ) dθ  for all s ∈ Υ.
Therefore, we arrive at
ϰ_0(u) = (15/2) u < ϰ(u) = ϰ̄(u) = 15 u.
Thereby, this leads to the same advantages as in Example 1 by solving (13), and hence it extends the scope of application of the scheme. In addition, compared with the previous work described in [15], we have expanded the convergence domain, making our findings more beneficial.
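To show scheme (3) at work on the integral operator of Example 3, the sketch below discretizes [0, 1] with a composite trapezoidal rule and runs the four-step iteration on the resulting nonlinear system; the grid size, quadrature rule and starting function are our own illustrative choices, not part of the example in the article.

```python
import numpy as np

n = 41                                   # quadrature nodes (our choice)
theta = np.linspace(0.0, 1.0, n)
w = np.full(n, 1.0 / (n - 1)); w[0] = w[-1] = 0.5 / (n - 1)   # trapezoidal weights

def T(p):
    """Discretized T(p)(u_i) = p_i - 5 u_i * int_0^1 theta p(theta)^3 dtheta."""
    return p - 5.0 * theta * np.sum(w * theta * p**3)

def dT(p):
    """Discretized Frechet derivative: (dT)_{ij} = delta_ij - 15 u_i w_j theta_j p_j^2."""
    return np.eye(n) - 15.0 * np.outer(theta, w * theta * p**2)

p = 0.1 * np.ones(n)                     # small starting function p_0(u) = 0.1 (ours)
for _ in range(3):
    J_inv = np.linalg.inv(dT(p))
    y = p - J_inv @ T(p)
    z = y - J_inv @ T(y)
    q = z - J_inv @ T(z)
    p = q - J_inv @ T(q)
print(np.max(np.abs(p)))                 # tends to 0, the solution Sigma* = 0
```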

7. Conclusions

To estimate a locally unique solution, local convergence criteria are successfully proposed for the FSS using the new idea of a weak ϰ-average for a high-order scheme and the combination of weak/average radius-Lipschitz/center-Lipschitz criteria. In comparison with the previous work in [15], our analysis is more beneficial in terms of the following advantages: sufficiently weaker convergence criteria and a broader convergence domain. However, the scheme considered here is without a coefficient, which is a limitation; this issue can be addressed by modifying the assumptions on the radius, which the authors intend to take up in the future. This work has further scope for enhancing the conditions for the scheme considered in this theory, to make it applicable in semi-local and global domains. The proposed convergence criteria are superior to the existing convergence criteria for the fifth-order FSS. By providing semi-local convergence results for highly general majorizing sequences, the emphasis is on the broad applicability of the results and their potential significance in the study of iterative schemes. In all, this work contributes new research directions in computational methods and numerical functional analysis.

Author Contributions

A.S. and J.P.J. wrote the framework and the original draft of this paper. K.R.P. and I.K.A. reviewed and validated the paper. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Not applicable.

Acknowledgments

The authors would like to pay their sincere thanks to the reviewer for their useful suggestions. The second author is also thankful to the Department of Science and Technology, New Delhi, India for approving the proposal under the scheme FIST program (Ref. No. SR/FST/MS/2022 dated 19 December 2022).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Ortega, J.M.; Rheinboldt, W.C. Iterative Solution of Nonlinear Equations in Several Variables; Society for Industrial and Applied Mathematics: Philadelphia, PA, USA, 2000. [Google Scholar]
  2. Kantorovich, L.V.; Akilov, G.P. Functional Analysis; Pergamon Press: Oxford, UK, 1982. [Google Scholar]
  3. Rall, L.B. Computational Solution of Nonlinear Operator Equations; Robert, E., Ed.; Krieger Publishing Company: Blaufelden, Germany, 1979. [Google Scholar]
  4. Argyros, I.K.; Cho, Y.J.; George, S. Local convergence for some third-order iterative methods under weak conditions. J. Korean Math. Soc. 2016, 53, 781–793. [Google Scholar] [CrossRef] [Green Version]
  5. Chen, J.; Li, W. Convergence behaviour of inexact Newton methods under weak Lipschitz condition. J. Comput. Appl. Math. 2006, 191, 143–164. [Google Scholar] [CrossRef] [Green Version]
  6. Homeier, H.H.H. On Newton-type methods with cubic convergence. J. Comput. Appl. Math. 2005, 176, 425–432. [Google Scholar] [CrossRef] [Green Version]
  7. Kanwar, V.; Kukreja, V.K.; Singh, S. On some third-order iterative methods for solving nonlinear equations. Appl. Math. Comput. 2005, 171, 272–280. [Google Scholar]
  8. Kou, J.; Li, Y.; Wang, X. A modification of Newton method with third-order convergence. Appl. Math. Comput. 2006, 181, 1106–1111. [Google Scholar] [CrossRef]
  9. Magreñán, Á.A.; Argyros, I. A Contemporary Study of Iterative Methods; Academic Press: New York, NY, USA, 2018. [Google Scholar]
  10. Nazeer, W.; Tanveer, M.; Kang, S.M.; Naseem, A. A new Householder’s method free from second derivatives for solving nonlinear equations and polynomiography. J. Nonlinear Sci. Appl. 2016, 9, 998–1007. [Google Scholar] [CrossRef] [Green Version]
  11. Sharma, D.; Parhi, S.K. On the local convergence of modified Weerakoon’s method in Banach spaces. J. Anal. 2020, 28, 867–877. [Google Scholar] [CrossRef]
  12. Wang, X. Convergence of Newton’s method and uniqueness of the solution of equations in Banach space. IMA J. Numer. Anal. 2000, 20, 123–134. [Google Scholar] [CrossRef] [Green Version]
  13. Wang, X.H.; Li, C. Convergence of Newton’s method and uniqueness of the solution of equations in Banach spaces II. Acta Math. Sin. 2003, 19, 405–412. [Google Scholar] [CrossRef]
  14. Shakhno, S. On a two-step iterative process under generalized Lipschitz conditions for first-order divided differences. J. Math. Sci. 2010, 168, 576–584. [Google Scholar] [CrossRef]
  15. Regmi, S.; Argyros, C.I.; Argyros, I.K.; George, S. Efficient Fifth Convergence Order Methods for Solving Equations. Trans. Math. Program. Appl. Math. 2021, 9, 23–34. [Google Scholar]
  16. Saxena, A.; Jaiswal, J.P.; Pardasani, K.R.; Argyros, I.K. Convergence Criteria of a Three-Step Scheme under the Generalized Lipschitz Condition in Banach Spaces. Mathematics 2022, 10, 3946. [Google Scholar] [CrossRef]
  17. Argyros, I.K.; Cho, Y.J.; Hilout, S. Numerical Methods for Equations and Its Applications; CRC Press: Boca Raton, FL, USA, 2012. [Google Scholar]
  18. Fernández, J.A.E.; Verón, M.Á.H. Newton’s Method: An Updated Approach of Kantorovich’s Theory; Birkhäuser: Cham, Switzerland; Geneva, Switzerland, 2017. [Google Scholar]
  19. Potra, F.A.; Pták, V. Sharp error bounds for Newton’s process. Numer. Math. 1980, 34, 63–72. [Google Scholar] [CrossRef]
  20. Zabrejko, P.P.; Nguen, D.F. The majorant method in the theory of Newton-Kantorovich approximations and the Ptak error estimates. Numer. Funct. Anal. Optim. 1987, 9, 671–684. [Google Scholar] [CrossRef]
  21. Moccari, M.; Lotfi, T. Using majorizing sequences for the semi-local convergence of a high-order and multipoint iterative method along with stability analysis. J. Math. Ext. 2021, 15, 1–32. [Google Scholar]
  22. Traub, J.F. Iterative Methods for the Solution of Equations; Chelsea Publishing Company: New York, NY, USA, 1977. [Google Scholar]
  23. Magrenan Ruiz, A.A.; Argyros, I.K. Two-step Newton methods. J. Complex. 2014, 30, 533–553. [Google Scholar] [CrossRef]