Article

Symmetric-Type Multi-Step Difference Methods for Solving Nonlinear Equations

Ioannis K. Argyros 1, Stepan Shakhno 2, Samundra Regmi 3, Halyna Yarmola 4 and Michael I. Argyros 5

1 Department of Computing and Mathematical Sciences, Cameron University, Lawton, OK 73505, USA
2 Department of Theory of Optimal Processes, Ivan Franko National University of Lviv, Universytetska Str. 1, 79000 Lviv, Ukraine
3 Department of Mathematics, University of Houston, Houston, TX 77205, USA
4 Department of Computational Mathematics, Ivan Franko National University of Lviv, Universytetska Str. 1, 79000 Lviv, Ukraine
5 Department of Computer Science, University of Oklahoma, Norman, OK 73501, USA
* Author to whom correspondence should be addressed.
Symmetry 2024, 16(3), 330; https://doi.org/10.3390/sym16030330
Submission received: 24 February 2024 / Revised: 4 March 2024 / Accepted: 6 March 2024 / Published: 8 March 2024
(This article belongs to the Special Issue Advances in Mathematical Models and Partial Differential Equations)

Abstract
Symmetric-type methods (STMs) without derivatives have been used extensively to solve nonlinear equations in various spaces. In particular, multi-step STMs of a higher order of convergence are very useful. By freezing the divided differences in the method and using a weight operator, a method using m steps (m a natural number) of convergence order 2m is generated. This method avoids a large increase in the number of operator evaluations. However, there are several problems with the conditions used to show convergence: the existence of high-order derivatives, which do not appear in the method, is assumed; and there are no a priori results for the error distances or information on the uniqueness of the solution. Therefore, the earlier studies cannot guarantee the convergence of the method for solving nondifferentiable equations, even though the method may converge. Thus, the convergence conditions can be weakened. These problems arise because the convergence order is determined using the Taylor series, which requires the existence of high-order derivatives that are not present in the method and may not even exist. These concerns are our motivation for authoring this article. Moreover, the novelty of this article is that all the aforementioned problems are addressed positively, using conditions related only to the divided differences appearing in the method. Furthermore, the more challenging and important semi-local analysis of convergence is presented, utilizing majorizing sequences in combination with the concept of the generalized continuity of the divided differences involved. The convergence is also extended from the Euclidean to the Banach space setting. We have chosen to demonstrate our technique on the present method, but it can be applied to other studies that use the Taylor series to show convergence. The applicability of other single- or multi-step methods using inverses of linear operators, with or without derivatives, can also be extended with the same methodology along the same lines. Several examples are provided to test the theoretical results and validate the performance of the method.

1. Introduction

Let E stand for a Banach space and D ⊂ E for an open set. Suppose that P : D → E is a continuous operator. A plethora of applications are thus reduced to solving the equation
P(x) = 0.    (1)
A solution x* ∈ D to this equation is needed in closed form. But this is attainable only in special situations. That is why most solution approaches are iterative: a sequence is developed that converges to x* under some conditions on the operator P and the starting point x₀ ∈ D.
Existence and uniqueness results for the solution are usually developed for an equation like (1). In this case, Equation (1) has solutions, and so it is necessary to present and analyze iterative methods (ITs).
No matter which IT is utilized, there are three concerns. The first is that the iterates must exist in the domain D. That is, if the IT requires the evaluation of the operator at each x_k, it must be assured that these iterates remain in the domain of the operator P. If we refer to Newton's method, the Fréchet derivative P′(x_k), which is a linear operator, as well as the inverse P′(x_k)^{-1}, must be well defined at each x_k. That is why we usually provide conditions ensuring that the iterates exist provided that the IT initiates from a suitable initial point x₀. Another, more challenging, concern is the convergence of the sequence {x_k}, and whether its limit is indeed a solution to Equation (1). A plethora of such results exists [1,2]. The first type is usually called local convergence: one starts by assuming the existence of some x* ∈ D and then provides a neighborhood of x*, called a convergence ball, such that, for any initial point in it, all of the iterates produced by the IT exist and converge to x*. The second type of convergence, usually called semilocal, does not rely on the existence of the solution x*, but instead on certain, usually difficult to verify, conditions on the operator P and the initial point x₀. Under these conditions, the convergence of the sequence to a solution x* is assured. In the semilocal case, we also provide computable error estimates for the distances ‖x_{k+1} − x_k‖ or ‖x* − x_k‖, which are not given in the local convergence theorems. However, even these estimates are usually pessimistic.
A widely used single-step method is Newton's (NA), defined as [1,3]
x₀ ∈ D, x_{k+1} = x_k − P′(x_k)^{-1}P(x_k).
The convergence order of NA is two [1]. But the inversion of the derivative P′(x_k) is required at each step. This inversion may not be possible or may be very expensive to carry out.
The modified Newton's method [1,2] is defined by
x₀ ∈ D, x_{k+1} = x_k − P′(x₀)^{-1}P(x_k).
This method requires only the inversion of P′(x₀). But its convergence order is only one.
To avoid the generally expensive computation of the Fréchet derivative P′(x_k) of the operator P, and to achieve a convergence order higher than one, other ITs have been developed [1,3,4,5,6,7] using divided differences of order one [1,2]. The method of chords (also called Regula falsi or the Secant method) is one of the most used ITs for solving Equation (1). It is defined by
x_{-1}, x₀ ∈ D, x_{k+1} = x_k − [x_k, x_{k−1}; P]^{-1}P(x_k),
where L(E, E) denotes the space of bounded linear mappings of E into E, and [·, ·; P] : D × D → L(E, E) is called a divided difference of order one. But the R-order of convergence is (1 + √5)/2, which is larger than one but smaller than two. However, it is possible to still use divided differences and obtain a convergence order greater than (1 + √5)/2.
By utilizing the symmetric difference formula
P′(x) ≈ [x + h, x − h; P],
we derive the symmetric Steffensen-type method (SSTM)
x_{k+1} = x_k − [x_k + P(x_k), x_k − P(x_k); P]^{-1}P(x_k),
which is also of convergence order two [4,5]. But the divided difference [·, ·; P] is used instead of the derivative P′. Other Steffensen-type methods are studied in [6,7,8,9]. Generalizations of the SSTM have been suggested to increase the convergence order, in combination with weight operators and frozen divided differences.
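For example, for E = R the divided difference of order one reduces to [u, v; P] = (P(u) − P(v))/(u − v), so one SSTM step takes the explicit form
x_{k+1} = x_k − 2P(x_k)² / (P(x_k + P(x_k)) − P(x_k − P(x_k))),
which can be read as a central-difference analogue of Steffensen's method.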
Symmetries play a central role in the dynamics of physical systems. Symmetry principles are at the core of quantum physics. Symmetries do not appear naturally only in geometry; they appear every time a mathematical object stays unchanged under transformations. The even and odd functions studied in calculus are examples of symmetry. Symmetric matrices and graphs are other examples. Symmetries characterize the solutions of differential and integral equations. That is why it makes sense to consider iterative methods of a symmetric nature to solve such equations.
Let us revisit the family of methods defined by [5]
u_k = x_k + aP(x_k), A_k = [u_k, x_k; P],
y_k^(1) = x_k − A_k^{-1}P(x_k),
z_k = y_k^(1) + bP(y_k^(1)), s_k = A_k^{-1}[y_k^(1), z_k; P],
y_k^(2) = y_k^(1) − B(s_k)A_k^{-1}P(y_k^(1)),
⋮
y_k^(m−1) = y_k^(m−2) − B(s_k)A_k^{-1}P(y_k^(m−2)),
x_{k+1} = y_k^(m) = y_k^(m−1) − B(s_k)A_k^{-1}P(y_k^(m−1)),    (2)
where k = 0, 1, 2, … is the iteration index; m is the number of steps; a, b are real numbers; and B(s_k) is a linear weight operator [10,11]. The weight B(s) is a matrix function satisfying B(0) = 1, B(1) = 1, and ‖B′(0)‖ < ∞. The convergence order 2m is proven in [5] using the Taylor series for E = R^n. A detailed favorable comparison with other competing methods using similar information has been carried out in [5,10,11], including the computation of the CPU time. It is because of these advantages that we picked method (2) to demonstrate our technique. But we noticed that there are also limitations to the approach of [5,10,11].
Motivation
(L1) Although method (2) is derivative-free, Theorem 1 in [5] can only be used provided that the derivatives P^(q), q = 1, 2, …, 5, exist. But these derivatives do not appear in (2). Let us consider the real function f : D ⊂ R → R defined by f(t) = θ₁t⁷ + θ₂t⁶ + θ₃t⁴ log t for t ≠ 0 and f(t) = 0 for t = 0, where θ₁, θ₂, θ₃ are real parameters satisfying θ₁ + θ₂ = 0 and θ₃ ≠ 0. Choose D to be any interval containing 0 and 1. Then, t* = 1 solves the equation f(t) = 0. But the fourth derivative f^(4) is not continuous at t = 0 ∈ D. Hence, there is no assurance from Theorem 1 in [5] that the sequence {x_k} converges to t* = 1. However, this sequence does converge to t* = 1 if, for example, a = θ₁ = 1, b = θ₂ = −1, B = I, m = 2, and x₀ = 1.2. Hence, the conditions of Theorem 1 in [5] can be weakened.
(L2) There are no computable a priori estimates for ‖x* − x_k‖. So, the number of iterations to be carried out to reach a desired error tolerance is not known in advance.
(L3) The uniqueness of the solution is not known in a neighborhood containing it.
(L4) The radius of convergence is unknown. Thus, the selection of an x₀ assuring the convergence of the sequence {x_k} to x* is a very difficult task.
(L5) The results in [5] are restricted to R^n.
(L6) The more important and challenging semi-local analysis of convergence of method (2) has not been studied previously.
The limitations (L1)–(L6) are the motivation for writing this article. Addressing these limitations is the novelty of this paper:
Novelty
( L 1 ) The local analysis of convergence only uses conditions based on the operators in (2).
(L2) The required number of iterations to reach an error tolerance is known in advance, since a priori error estimates for ‖x* − x_k‖ become available.
(L3) A neighborhood of x* is determined containing no other solution.
(L4) A computable radius of convergence is provided. Thus, the selection of x₀ becomes possible.
(L5) The results are valid in the more general setting of a Banach space.
(L6) The semi-local analysis of convergence is provided, making use of majorizing sequences.
It is worth noting that the concerns (L1)–(L6) always appear in the study of iterative methods that use the Taylor series to show convergence, such as the ones in [1,3,4,5,6,8,9,10,11,12,13,14,15]. But our technique avoids the Taylor series and uses conditions only on the operators appearing in the method. This way, the benefits (L1)–(L6) become possible.
Both types of analysis of convergence rely on the concept of the generalized continuity of the divided difference. This is how we extend the applicability of method (2). It is worth noting that the approach in this paper may be used to extend the applicability of other methods making use of inverses of linear operators along the same lines [1,3,4,12,13,14]. A similar approach was used for studying the convergence of some methods in [15].
The divided difference in both the local and the semilocal convergence analysis is controlled by a majorant real function. In particular, the semilocal convergence relies on scalar majorizing sequences constructed a priori. The frozen divided differences avoid the, in general, expensive inversion of linear operators at each substep of the iteration and still increase the convergence order. This way, the convergence of the method is governed by real functions and sequences that are easier to handle. The monotone convergence of method (2) has not been addressed in this paper for Banach space valued operators. But we plan to study this type of convergence in future research in the setting of Hilbert spaces or partially ordered topological spaces, using the extension of fixed point theory in these spaces. The results are expected to be fruitful, since it has already been established in [4,5,6,7,8,9] that STMs have advantages over existing methods.
Notice also that, due to the a priori estimates we have provided, the minimum number of iterations needed to reach a desired tolerance is known in advance (see the error estimates (7) and (8) in Theorem 1, and (19) and (20) in Theorem 2).
The rest of the article is organized as follows: Section 2 and Section 3 present the local and the semilocal analyses of convergence, respectively. The numerical experiments appear in Section 4 and the concluding remarks in Section 5.

2. Convergence 1: Local

Some concepts related to the order of convergence of iterative methods should be mentioned.
Let {x_k} be a sequence in E which converges to x*. Then, the convergence is of order q, q > 1, if there exist a constant C > 0 and a natural number k̄ such that
‖x_{k+1} − x*‖ ≤ C‖x_k − x*‖^q for each k ≥ k̄,
or, for e_k = ‖x_k − x*‖,
e_{k+1} ≤ Ce_k^q for each k ≥ k̄.
The convergence is said to be linear if q = 1 and such a constant C exists with C ∈ (0, 1). Moreover, the error in the k-th iteration for E = R^n satisfies
e_{k+1} = Me_k^q + O(e_k^{q+1}),
where q is the order of convergence and M is a q-linear function, i.e., M ∈ L(R^n × R^n × ⋯ × R^n, R^n).
The computational efficiency of an iterative method is Ω = q^{1/p} or Ω = (log q)/p, where q is the order of convergence and p is the computational cost per iteration. Moreover, if x_{k−2}, x_{k−1}, x_k, x_{k+1} are four consecutive iterates of an iterative method approximating the solution x* of the equation P(x) = 0, then the following orders of convergence have been suggested in [16] and [3], respectively:
δ₁ ≈ ln(‖x_{k+1} − x*‖ / ‖x_k − x*‖) / ln(‖x_k − x*‖ / ‖x_{k−1} − x*‖),
δ₂ ≈ ln(‖x_{k+1} − x_k‖ / ‖x_k − x_{k−1}‖) / ln(‖x_k − x_{k−1}‖ / ‖x_{k−1} − x_{k−2}‖).
The former is usually called the computational order and the latter the approximate computational order of convergence. It is worth noticing that, since the solution x * is usually unknown, the second formula is more useful.
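Since δ₂ involves only computable quantities, it can be evaluated alongside any of the methods discussed below. The following is a minimal NumPy sketch (the function name and interface are ours):

import numpy as np

def acoc(x_km2, x_km1, x_k, x_kp1):
    # Approximate computational order of convergence (delta_2),
    # computed from four consecutive iterates.
    e0 = np.linalg.norm(x_km1 - x_km2)
    e1 = np.linalg.norm(x_k - x_km1)
    e2 = np.linalg.norm(x_kp1 - x_k)
    return np.log(e2 / e1) / np.log(e1 / e0)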
Define the open ball V(x, a) = {y ∈ E : ‖y − x‖ < a}, where x ∈ E stands for the center of the ball and a > 0 for the radius. Moreover, define the closed ball V[x, a] = {y ∈ E : ‖y − x‖ ≤ a}.
Let M = [0, +∞). The local analysis of convergence for method (2) relies on the following conditions.
Suppose:
(H1) There exist continuous and nondecreasing functions (CND) f₁ : M → M, φ₀ : M × M → M such that the equation φ₀(f₁(t), t) − 1 = 0 admits a smallest solution which is positive (SSP). Denote such a solution by ϱ. Set M₀ = [0, ϱ).
(H2) There exist a point x* ∈ D solving the equation P(x) = 0 and an invertible linear operator T such that, for all x ∈ D and u = x + aP(x),
‖u − x*‖ ≤ f₁(‖x − x*‖)
and
‖T^{-1}([u, x; P] − T)‖ ≤ φ₀(‖u − x*‖, ‖x − x*‖).
Set D₀ = V(x*, ϱ) ∩ D.
The definition of ϱ and the condition (H2) imply
‖T^{-1}([u, x; P] − T)‖ ≤ φ₀(‖u − x*‖, ‖x − x*‖) < 1.
Thus, the operator A = [u, x; P] is invertible by the Banach perturbation lemma on linear operators [2] (see also Lemma 1).
(H3) There exist CND f₂ : M₀ → M, φ : M₀ × M₀ → M, φ₁ : M₀ × M₀ × M₀ × M₀ → M and φ₂ : M₀ → M such that, for each v, x, y ∈ D₀, u = x + aP(x), z = y + bP(y), A = [u, x; P], s = A^{-1}[y, z; P], and B(s) the weight operator,
‖T^{-1}([u, x; P] − [x, x*; P])‖ ≤ φ(‖u − x*‖, ‖x − x*‖),
‖z − x*‖ ≤ f₂(‖x − x*‖),
‖I − B(s)A^{-1}[v, x*; P]‖ ≤ φ₁(‖x − x*‖, ‖u − x*‖, ‖z − x*‖, ‖v − x*‖),
and
‖T^{-1}([x, x*; P] − T)‖ ≤ φ₂(‖x − x*‖).
Define the functions h_i, i = 1, 2, …, m, on the interval M₀ by
h₁(t) = φ(f₁(t), t) / (1 − φ₀(f₁(t), t)),    (3)
and, for j = 2, …, m,
h_j(t) = φ₁(t, h_{j−1}(t)t, f₁(t), f₂(t))h_{j−1}(t).    (4)
(H4) The equations h_j(t) − 1 = 0 admit SSPs in the interval M₀. Denote such solutions by s_j, respectively.
Set
s* = min{s_j} and M* = [0, s*).    (5)
These definitions assure that, for each t ∈ M*,
0 ≤ φ₀(f₁(t), t) < 1
and
0 ≤ h_j(t) < 1.
(H5) V[x*, s*] ⊂ D.
Remark 1.
(1) The functions f₁ and f₂ can be defined as follows:
f₁(t) = (‖I + aT‖ + |a|‖T‖φ₂(t))t
and
f₂(t) = (‖I + bT‖ + |b|‖T‖φ₂(h₁(t)t))h₁(t)t.
The justification for these choices follows, in turn, from the calculations
u − x* = x − x* + aTT^{-1}[x, x*; P](x − x*)
       = (I + aTT^{-1}[x, x*; P])(x − x*)
       = (I + aTT^{-1}([x, x*; P] − T + T))(x − x*)
       = [(I + aT) + aTT^{-1}([x, x*; P] − T)](x − x*).
Thus, it follows that
‖u − x*‖ ≤ (‖I + aT‖ + |a|‖T‖φ₂(‖x − x*‖))‖x − x*‖,
which justifies the choice of the function f₁.
Similarly, we can write
z − x* = [(I + bT) + bTT^{-1}([y, x*; P] − T)](y − x*),
leading to
‖z − x*‖ ≤ (‖I + bT‖ + |b|‖T‖φ₂(‖y − x*‖))‖y − x*‖,
which justifies the choice of the function f₂.
(2) A popular pick is T = P′(x*). But this choice implies the invertibility of P′(x*). Moreover, x* is a simple solution in this case. We do not assume the invertibility of P′(x*), and it is not necessarily implied by any of the conditions.
This way, method (2) can be employed to approximate solutions for the equation P ( x ) = 0 which are not necessarily simple [5]. Other choices are possible (see also the Numerical Section).
The following is a useful lemma [2].
Lemma 1.
Let Q be a linear operator satisfying ‖Q‖ < 1. Then, the linear operator I − Q is invertible and
‖(I − Q)^{-1}‖ ≤ 1 / (1 − ‖Q‖).
Recall that Q₁ is an approximate inverse of Q if ‖I − Q₁Q‖ < 1, provided that Q₁ is also a linear operator. Moreover, in this case, Q and Q₁ are both invertible and
‖Q^{-1}‖ ≤ ‖Q₁‖ / (1 − ‖I − Q₁Q‖),
and
‖Q₁^{-1}‖ ≤ ‖Q‖ / (1 − ‖I − Q₁Q‖).
This result is also mentioned as the Banach lemma [2].
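As a numerical sanity check of Lemma 1 (an illustration only, not part of the analysis), one may verify the bound for a small matrix:

import numpy as np

Q = np.array([[0.0, 0.4],
              [0.3, 0.0]])
q = np.linalg.norm(Q, 2)                                  # spectral norm, q = 0.4 < 1
bound = 1.0 / (1.0 - q)                                   # Lemma 1 bound, about 1.67
actual = np.linalg.norm(np.linalg.inv(np.eye(2) - Q), 2)  # about 1.54
assert q < 1 and actual <= bound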
Next, the local analysis of convergence for method (2) uses the conditions (H1)–(H5) as well as the preceding notation.
Theorem 1.
Suppose that the conditions (H1)–(H5) hold and pick x₀ ∈ V(x*, s*) \ {x*}. Then, the following assertions hold:
{x_k} ⊂ V(x*, s*),    (6)
‖y_k^(1) − x*‖ ≤ h₁(‖x_k − x*‖)‖x_k − x*‖ ≤ ‖x_k − x*‖ < s*,    (7)
for j = 2, …, m,
‖y_k^(j) − x*‖ ≤ h_j(‖x_k − x*‖)‖x_k − x*‖ ≤ ‖x_k − x*‖,    (8)
and
lim_{k→∞} x_k = x*.    (9)
In particular, for j = m,
‖x_{k+1} − x*‖ = ‖y_k^(m) − x*‖ ≤ h_m(‖x_k − x*‖)‖x_k − x*‖ ≤ c‖x_k − x*‖,    (10)
where c = h_m(‖x₀ − x*‖) ∈ [0, 1).
Proof. 
Mathematical induction shall establish these assertions. According to the hypothesis, x₀ ∈ V(x*, s*) \ {x*} ⊂ V(x*, s*), so the assertion (6) holds for k = 0. The application of the conditions (H1) and (H2) gives, in turn,
‖T^{-1}(A₀ − T)‖ ≤ φ₀(‖u₀ − x*‖, ‖x₀ − x*‖) ≤ φ₀(f₁(‖x₀ − x*‖), ‖x₀ − x*‖) ≤ φ₀(f₁(s*), s*) < 1.
Thus, A₀^{-1} exists, and
‖A₀^{-1}T‖ ≤ 1 / (1 − φ₀(f₁(‖x₀ − x*‖), ‖x₀ − x*‖)).    (11)
It also follows that the iterate y₀^(1) is well defined by the first substep of method (2). We can also write
y₀^(1) − x* = x₀ − x* − A₀^{-1}P(x₀) = A₀^{-1}(A₀ − [x₀, x*; P])(x₀ − x*),    (12)
leading, by (3), (11), and (H3), to
‖y₀^(1) − x*‖ ≤ φ(f₁(‖x₀ − x*‖), ‖x₀ − x*‖)‖x₀ − x*‖ / (1 − φ₀(f₁(‖x₀ − x*‖), ‖x₀ − x*‖)) ≤ h₁(‖x₀ − x*‖)‖x₀ − x*‖ ≤ ‖x₀ − x*‖ < s*.    (13)
Hence, the iterate y₀^(1) ∈ V(x*, s*), and the assertion (7) holds for k = 0.
Concerning the rest of the substeps of method (2), since the iterates y₀^(2), …, y₀^(j) are well defined, we obtain, in turn,
y₀^(j) − x* = y₀^(j−1) − x* − B(s₀)A₀^{-1}[y₀^(j−1), x*; P](y₀^(j−1) − x*) = (I − B(s₀)A₀^{-1}[y₀^(j−1), x*; P])(y₀^(j−1) − x*),
from which we deduce
‖y₀^(j) − x*‖ ≤ φ₁(‖x₀ − x*‖, ‖y₀^(j−1) − x*‖, ‖u₀ − x*‖, ‖z₀ − x*‖)‖y₀^(j−1) − x*‖ ≤ h_j(‖x₀ − x*‖)‖x₀ − x*‖ ≤ ‖x₀ − x*‖.
The induction for the assertion (8) is completed for k = 0. In particular, for j = m, we obtain
‖x₁ − x*‖ = ‖y₀^(m) − x*‖ ≤ h_m(‖x₀ − x*‖)‖x₀ − x*‖ ≤ c‖x₀ − x*‖ ≤ ‖x₀ − x*‖,    (14)
resulting in (6) and (10) for k = 1. The inductions for (6)–(8) and (10) are completed for k = 0. But these calculations can be repeated with x_i replacing x₀ in the preceding estimations.
Therefore, we have, as in (14), that
‖x_{k+1} − x*‖ ≤ c‖x_k − x*‖ ≤ ⋯ ≤ c^{k+1}‖x₀ − x*‖ < ‖x₀ − x*‖ < s*,    (15)
which implies that the iterate x_{k+1} ∈ V(x*, s*) and that lim_{k→∞} x_k = x*. □
The uniqueness of the solution x * is discussed in the following result.
Proposition 1.
Suppose that the condition (H2) holds in the ball V(x*, ϱ₁) for some ϱ₁ > 0, and that there exists ϱ₂ ≥ ϱ₁ such that the last condition in (H3) holds and
φ₂(ϱ₂) < 1.    (16)
Set D₁ = V[x*, ϱ₂] ∩ D. Then, the equation P(x) = 0 is uniquely solvable by x* in the region D₁.
Proof. 
Suppose that there exists w* ∈ D₁ solving the equation P(x) = 0 with w* ≠ x*. Define the divided difference L₁ = [w*, x*; P]. Then, the last condition in (H3) and (16) imply
‖T^{-1}(L₁ − T)‖ ≤ φ₂(‖w* − x*‖) ≤ φ₂(ϱ₂) < 1.
Hence, L₁^{-1} exists. Finally, from the identity w* − x* = L₁^{-1}(P(w*) − P(x*)) = L₁^{-1}(0) = 0, we conclude that w* = x*. □
Remark 2.
It is clear that one can take ϱ₁ = s*, provided that all the conditions of Theorem 1 hold in Proposition 1.

3. Convergence 2: Semi-Local

The semi-local analysis of convergence relies on majorizing sequences and computations similar to those of the local analysis. But the roles of the solution x* and of the function "φ" are now played by x₀ and the function "ψ", respectively.
Recall that a sequence {s_k} ⊂ [0, +∞) for which
‖x_{k+1} − x_k‖ ≤ s_{k+1} − s_k, k = 0, 1, 2, …,
holds is majorizing for {x_k}. Moreover, suppose that lim_{k→+∞} s_k = s* < +∞ exists.
Then, it follows that lim_{k→+∞} x_k = x* and
‖x* − x_k‖ ≤ s* − s_k, k = 0, 1, 2, ….
Therefore, the study of the convergence of the sequence { x k } is reduced to the study of the scalar sequence { s k } [2].
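For instance, if ‖x₁ − x₀‖ ≤ η and ‖x_{k+1} − x_k‖ ≤ c‖x_k − x_{k−1}‖ for some c ∈ (0, 1), then s_k = η(1 − c^k)/(1 − c) is a majorizing sequence for {x_k}, with limit s* = η/(1 − c) and the a priori estimate ‖x* − x_k‖ ≤ ηc^k/(1 − c).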
Suppose:
(C1) There exist CND g₁ : M → M, ψ₀ : M × M → M such that the equation ψ₀(g₁(t), t) − 1 = 0 has an SSP. Denote such a solution by p₀. Set N = [0, p₀).
There exist CND ψ : N × N × N → M, g₂ : N → M, ψ₁ : N × N × N × N → M and ψ₂ : N × N → M.
Define the scalar sequences, for α₀^(0) = 0, some α₀^(1) ≥ 0, i = 1, 2, …, m and each k = 0, 1, 2, …, by
b_k^(i) = ψ(α_k^(0), α_k^(i), g₁(α_k^(0)))(α_k^(i) − α_k^(0)),
γ_k^(i) = ψ₁(α_k^(0), α_k^(i), g₁(α_k^(0)), g₂(α_k^(0))),
α_k^(i+1) = α_k^(i) + γ_k^(i)b_k^(i),
b_k^(m−2) = (1 + ψ₂(α_k^(1), α_k^(m−2)))(α_k^(m−2) − α_k^(1)) + b_k^(1),
α_k^(m−1) = α_k^(m−2) + γ_k^(m−2)b_k^(m−2),
α_{k+1}^(0) = α_k^(m−1) + γ_k^(m−1)b_k^(m−1),
δ_{k+1} = (1 + ψ₂(α_{k+1}^(0), α_k^(0)))(α_{k+1}^(0) − α_k^(0)) + (1 + ψ₂(α_{k+1}^(0), g₂(α_k^(0))))(α_{k+1}^(1) − α_{k+1}^(0))    (17)
and
α_{k+1}^(1) = α_{k+1}^(0) + δ_{k+1} / (1 − ψ₀(g₁(α_{k+1}^(0)), α_{k+1}^(0))).
This sequence is shown to be majorizing for { x k } in Theorem 2. But first, a convergence condition is required for this sequence.
(C2) There exists a parameter p ∈ [0, p₀) such that, for each i = 0, 1, …, m and k = 0, 1, 2, …,
ψ₂(g₁(α_k^(0)), α_k^(0)) < 1 and α_k^(i) < p.
This condition and the formula (17) imply that the sequence {α_k^(i)} is nonnegative, nondecreasing, and bounded from above by p₀. Consequently, the sequence {α_k^(i)} converges to its least upper bound, which is a unique number. Denote this number by α*.
As in the local analysis, the scalar parameters and sequences relate to the functions in method (2).
(C3) A point x₀ ∈ D and an invertible linear operator T exist such that, for each x ∈ D, u = x + aP(x), z = x + bP(x),
‖u − x₀‖ ≤ g₁(‖x − x₀‖), ‖z − x₀‖ ≤ g₂(‖x − x₀‖),
and
‖T^{-1}([u, x; P] − T)‖ ≤ ψ₀(‖u − x₀‖, ‖x − x₀‖).
Set D₂ = V(x₀, p₀) ∩ D.
This condition and the definition of p₀ imply, for x = x₀, that
‖T^{-1}([u₀, x₀; P] − T)‖ ≤ ψ₀(g₁(0), 0) < 1.
Thus, A₀ is invertible. Hence, we can take α₀^(1) ≥ ‖A₀^{-1}P(x₀)‖.
(C4) For each x ∈ D₂, u = x + aP(x), y = x − A^{-1}P(x), v = y + bP(y), and w ∈ D₂,
‖z − x₀‖ ≤ g₁(‖x − x₀‖), ‖v − x₀‖ ≤ g₂(‖x − x₀‖),
‖T^{-1}([u, x; P] − [v, x; P])‖ ≤ ψ(‖x − x₀‖, ‖u − x₀‖, ‖v − x₀‖),
‖B(s)A^{-1}T‖ ≤ ψ₁(‖x − x₀‖, ‖u − x₀‖, ‖z − x₀‖, ‖v − x₀‖),
and
‖T^{-1}([x, w; P] − T)‖ ≤ ψ₂(‖x − x₀‖, ‖w − x₀‖).
(C5) V[x₀, α*] ⊂ D.
Remark 3.
(i) The functions g₁ and g₂ can be chosen to be
g₁(t) = (‖I + aT‖ + |a|‖T‖ψ₂(t, 0))t + |a|‖P(x₀)‖
and
g₂(t) = (‖I + bT‖ + |b|‖T‖ψ₂(t, 0))t + |b|‖P(x₀)‖.
As in the local analysis, the motivational calculations are
u − x₀ = x − x₀ + a(P(x) − P(x₀) + P(x₀)) = [(I + aT) + aTT^{-1}([x, x₀; P] − T)](x − x₀) + aP(x₀),
so
‖u − x₀‖ ≤ [‖I + aT‖ + |a|‖T‖ψ₂(‖x − x₀‖, 0)]‖x − x₀‖ + |a|‖P(x₀)‖,
justifying the choice of the function g₁.
justifying the choice of the function g 1 .
Similarly,
z_k − x₀ = y_k^(1) − x₀ + b(P(y_k^(1)) − P(x₀) + P(x₀));
thus,
‖z_k − x₀‖ ≤ (‖I + bT‖ + |b|‖T‖ψ₂(‖y_k^(1) − x₀‖, 0))‖y_k^(1) − x₀‖ + |b|‖P(x₀)‖,
which justifies the choice of the function g₂.
(ii) A popular choice is T = P′(x₀). The rest of the comments are omitted, as they are similar to the ones in Remark 1.
The semilocal analysis of convergence is provided in the next result.
Theorem 2.
Suppose that the conditions (C1)–(C5) hold. Then, the sequence {x_k} generated by method (2) is well defined in V(x₀, α*), remains in V(x₀, α*) for each k = 0, 1, 2, …, and converges to a solution x* ∈ V[x₀, α*] of the equation P(x) = 0 such that
‖x* − x_k‖ ≤ α* − α_k^(0).    (18)
Proof. 
As in the local analysis, mathematical induction is utilized to show the assertions
‖y_k^(1) − x_k‖ ≤ α_k^(1) − α_k^(0)    (19)
and
‖y_k^(i+1) − y_k^(i)‖ ≤ α_k^(i+1) − α_k^(i).    (20)
The assertion (19) holds for k = 0, since
‖y₀^(1) − x₀‖ ≤ ‖A₀^{-1}P(x₀)‖ ≤ α₀^(1) = α₀^(1) − α₀^(0) < α*,
and the iterate y₀^(1) ∈ V(x₀, α*). Then, we have the estimates
P(y_k^(1)) = P(y_k^(1)) − P(x_k) − A_k(y_k^(1) − x_k) = ([y_k^(1), x_k; P] − A_k)(y_k^(1) − x_k).
So,
‖T^{-1}P(y_k^(1))‖ ≤ ψ(‖x_k − x₀‖, ‖y_k^(1) − x₀‖, ‖u_k − x₀‖)‖y_k^(1) − x_k‖ ≤ ψ(α_k^(0), α_k^(1), g₁(α_k^(0)))(α_k^(1) − α_k^(0)) = b_k^(1),
y_k^(2) − y_k^(1) = −B(s_k)A_k^{-1}P(y_k^(1)),
‖y_k^(2) − y_k^(1)‖ ≤ ‖B(s_k)A_k^{-1}T‖‖T^{-1}P(y_k^(1))‖ ≤ ψ₁(‖x_k − x₀‖, ‖z_k − x₀‖, ‖y_k^(1) − x₀‖, ‖u_k − x₀‖)b_k^(1) ≤ γ_k^(1)b_k^(1) = α_k^(2) − α_k^(1)
and
‖y_k^(2) − x₀‖ ≤ ‖y_k^(2) − y_k^(1)‖ + ‖y_k^(1) − x₀‖ ≤ α_k^(2) − α_k^(1) + α_k^(1) − α₀^(0) = α_k^(2) < α*.
Hence, the iterate y_k^(2) ∈ V(x₀, α*) and the assertion (20) holds for i = 1.
Then, we can write
P(y_k^(m−2)) = P(y_k^(m−2)) − P(y_k^(1)) + P(y_k^(1)) = [y_k^(m−2), y_k^(1); P](y_k^(m−2) − y_k^(1)) + P(y_k^(1));
thus,
‖T^{-1}P(y_k^(m−2))‖ ≤ ‖T^{-1}([y_k^(m−2), y_k^(1); P] − T + T)‖‖y_k^(m−2) − y_k^(1)‖ + ‖T^{-1}P(y_k^(1))‖ ≤ (1 + ψ₂(α_k^(1), α_k^(m−2)))(α_k^(m−2) − α_k^(1)) + b_k^(1) = b_k^(m−2),
leading to
‖y_k^(m−1) − y_k^(m−2)‖ ≤ ψ₁(‖x_k − x₀‖, ‖u_k − x₀‖, ‖z_k − x₀‖, ‖y_k^(m−2) − x₀‖)b_k^(m−2) ≤ γ_k^(m−2)b_k^(m−2) = α_k^(m−1) − α_k^(m−2),
‖y_k^(m−1) − x₀‖ ≤ ‖y_k^(m−1) − y_k^(m−2)‖ + ‖y_k^(m−2) − x₀‖ ≤ α_k^(m−1) − α_k^(m−2) + α_k^(m−2) − α₀^(0) = α_k^(m−1) < α*,
‖x_{k+1} − y_k^(m−1)‖ = ‖y_k^(m) − y_k^(m−1)‖ ≤ ψ₁(‖x_k − x₀‖, ‖z_k − x₀‖, ‖u_k − x₀‖, ‖y_k^(m−1) − x₀‖)b_k^(m−1) ≤ γ_k^(m−1)b_k^(m−1) = α_k^(m) − α_k^(m−1) = α_{k+1}^(0) − α_k^(m−1)
and
‖x_{k+1} − x₀‖ ≤ ‖x_{k+1} − y_k^(m−1)‖ + ‖y_k^(m−1) − x₀‖ ≤ α_{k+1}^(0) − α_k^(m−1) + α_k^(m−1) − α₀^(0) = α_{k+1}^(0) < α*.
Hence, the iterates y_k^(i) ∈ V(x₀, α*) and the assertion (20) holds.
Then, we can also write
P(x_{k+1}) = P(x_{k+1}) − P(x_k) − A_{k+1}(y_{k+1}^(1) − x_{k+1}) = P(x_{k+1}) − P(x_k) − [x_{k+1}, x_k; P](x_{k+1} − x_k) + [x_{k+1}, x_k; P](x_{k+1} − x_k) − A_{k+1}(y_{k+1}^(1) − x_{k+1}) = [x_{k+1}, x_k; P](x_{k+1} − x_k) − A_{k+1}(y_{k+1}^(1) − x_{k+1}).    (21)
But
‖T^{-1}([x_{k+1}, x_k; P] − T + T)‖ ≤ 1 + ψ₂(α_{k+1}^(0), α_k^(0))
and, similarly,
‖T^{-1}(A_{k+1} − T + T)‖ ≤ 1 + ψ₂(α_{k+1}^(0), g₂(α_{k+1}^(0))),
leading to
‖T^{-1}P(x_{k+1})‖ ≤ (1 + ψ₂(α_{k+1}^(0), α_k^(0)))(α_{k+1}^(0) − α_k^(0)) + (1 + ψ₂(α_{k+1}^(0), g₂(α_{k+1}^(0))))(α_{k+1}^(1) − α_{k+1}^(0)) = δ_{k+1},
so
‖y_{k+1}^(1) − x_{k+1}‖ ≤ ‖A_{k+1}^{-1}T‖‖T^{-1}P(x_{k+1})‖ ≤ δ_{k+1} / (1 − ψ₀(g₁(α_{k+1}^(0)), α_{k+1}^(0))) = α_{k+1}^(1) − α_{k+1}^(0)
and
‖y_{k+1}^(1) − x₀‖ ≤ ‖y_{k+1}^(1) − x_{k+1}‖ + ‖x_{k+1} − x₀‖ ≤ α_{k+1}^(1) − α_{k+1}^(0) + α_{k+1}^(0) − α₀^(0) = α_{k+1}^(1) < α*.
Hence, the induction for the assertions (19) and (20) is completed, and all the iterates of method (2) belong to V(x₀, α*). The condition (C2) implies that the sequence {α_k^(i)} is Cauchy, as it is convergent to α*. Then, by (19) and (20), the sequence {x_k} is also Cauchy in the Banach space E and, as such, it converges to some x* ∈ V[x₀, α*]. By letting k → +∞ in (21) and using the continuity of P, we deduce that P(x*) = 0. Finally, the assertion (18) follows from the estimate
‖x_{k+j} − x_k‖ ≤ α_{k+j}^(0) − α_k^(0)
by letting j → +∞. □
The uniqueness of the solution follows.
Proposition 2.
Suppose that there exists a solution w₀ ∈ V(x₀, ϱ₃) of the equation P(x) = 0 for some ϱ₃ > 0; that the last condition in (C4) holds in the ball V(x₀, ϱ₃); and that there exists ϱ₄ > ϱ₃ such that
ψ₂(ϱ₃, ϱ₄) < 1.    (22)
Set D₃ = V[x₀, ϱ₄] ∩ D.
Then, the only solution of the equation P(x) = 0 in the region D₃ is w₀.
Proof. 
Suppose that there exists w ∈ D₃ solving the equation P(x) = 0 with w ≠ w₀. Define T₂ = [w₀, w; P]. It follows from the last condition in (C4) and (22) that
‖T^{-1}(T₂ − T)‖ ≤ ψ₂(‖w₀ − x₀‖, ‖w − x₀‖) ≤ ψ₂(ϱ₃, ϱ₄) < 1,
so T₂^{-1} exists. Then, as in Proposition 1, from the identity
w₀ − w = T₂^{-1}(P(w₀) − P(w)) = T₂^{-1}(0) = 0,
we conclude that w = w₀. □
Remark 4.
(1) Under all the conditions of Theorem 2, we can take w₀ = x* and ϱ₃ = α* in Proposition 2.
(2) The limit α* can be replaced by p₀ in Theorem 2.

4. Numerical Experiments

In this section, we present numerical examples that confirm the local theoretical results and show the results of testing the method on systems of nonlinear equations. The calculations are carried out in GNU Octave 7.3.0. Systems of nonlinear algebraic and transcendental equations arise as a result of applying the difference method for solving boundary value problems or the quadrature method for solving integral equations.
Let us compute the radii of convergence for Algorithm 1 for different values of the real parameters a and b. For this, consider the following nonlinear equation.
Algorithm 1: The algorithm from method (2) for solving a system of nonlinear equations consists of the following steps:
  1. Select the starting approximation x₀, the real parameters a, b, and the tolerance ε.
  2. For k = 0, 1, 2, …, while ‖x_{k+1} − x_k‖ > ε or (and) ‖P(x_{k+1})‖ > ε, do:
    2.1. calculate P(x_k);
    2.2. calculate u_k = x_k + aP(x_k);
    2.3. calculate A_k = [u_k, x_k; P];
    2.4. calculate y_k^(1) = x_k − A_k^{-1}P(x_k);
     if m ≥ 2 then
     2.5. calculate z_k = y_k^(1) + bP(y_k^(1));
     2.6. calculate s_k = A_k^{-1}[y_k^(1), z_k; P] and B(s_k);
     2.7. calculate L_k = B(s_k)A_k^{-1};
     2.8. for j = 2, …, m:
      2.8.1. calculate P(y_k^(j−1));
      2.8.2. calculate y_k^(j) = y_k^(j−1) − L_k P(y_k^(j−1));
    2.9. set x_{k+1} = y_k^(m).
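For completeness, here is a minimal NumPy sketch of Algorithm 1 (the paper's computations were carried out in GNU Octave; the function names, the componentwise realization of the divided difference, and the default weight are our illustrative choices):

import numpy as np

def divided_difference(P, x, y):
    # First-order divided difference [x, y; P] for P : R^n -> R^n,
    # built columnwise (a standard choice; see, e.g., [2]); assumes x_j != y_j.
    n = x.size
    A = np.empty((n, n))
    for j in range(n):
        w1 = np.concatenate((x[:j + 1], y[j + 1:]))   # (x_1,...,x_j, y_{j+1},...,y_n)
        w2 = np.concatenate((x[:j], y[j:]))           # (x_1,...,x_{j-1}, y_j,...,y_n)
        A[:, j] = (P(w1) - P(w2)) / (x[j] - y[j])
    return A

def stm(P, x0, a=0.1, b=0.1, m=2, B=None, eps=1e-10, kmax=100):
    # Sketch of Algorithm 1 / method (2); B maps the matrix s_k to the weight B(s_k).
    n = x0.size
    if B is None:
        B = lambda s: np.eye(n)                       # case (b) below: B(s) = I
    x = x0.astype(float)
    for k in range(kmax):
        u = x + a * P(x)                              # step 2.2
        A = divided_difference(P, u, x)               # step 2.3 (frozen for all substeps)
        y = x - np.linalg.solve(A, P(x))              # step 2.4: y_k^(1)
        if m >= 2:
            z = y + b * P(y)                          # step 2.5
            s = np.linalg.solve(A, divided_difference(P, y, z))   # step 2.6
            L = B(s) @ np.linalg.inv(A)               # step 2.7: L_k = B(s_k) A_k^{-1}
            for j in range(2, m + 1):                 # step 2.8: y_k^(2), ..., y_k^(m)
                y = y - L @ P(y)
        if np.linalg.norm(y - x) <= eps:              # stopping criterion from step 2
            return y, k + 1
        x = y                                         # step 2.9: x_{k+1} = y_k^(m)
    return x, kmax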
Example 1.
Let E = R, D = V(x*, 0.5), c ∈ [0, 1), and
P(x) = x³ − c = 0,
with exact solution x* = c^{1/3}.
Let T = P′(x*) and c = 0.9. Then, for the function P(x), we have P′(x) = 3x² and [x, y; P] = x² + xy + y². To define the functions f₁ and f₂, we use the equalities
u − x* = x − x* + a[x, x*; P](x − x*) = (1 + a[x, x*; P])(x − x*)
and
z − x* = y − x* + b[y, x*; P](y − x*) = (1 + b[y, x*; P])(y − x*),
respectively. For m = 2 we obtain the following radii:
for a = b = 0.1 , s * = min { 0.1909 , 0.053751 } = 0.053751 ;
for a = b = 0.5 , s * = min { 0.088357 , 0.030085 } = 0.030085 ;
for a = b = 1 , s * = min { 0.048933 , 0.018782 } = 0.018782 .
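These radii can be checked with the stm sketch given after Algorithm 1: by Theorem 1, any starting point within s* of x* yields a convergent sequence. A hypothetical scalar run for a = b = 0.1:

P1 = lambda x: x**3 - 0.9              # Example 1 with c = 0.9
x_star = 0.9 ** (1.0 / 3.0)            # exact solution x* = c^(1/3)
x0 = np.array([x_star + 0.05])         # |x0 - x*| = 0.05 < s* = 0.053751
x, iters = stm(P1, x0, a=0.1, b=0.1, m=2)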
Now let us analyze the behavior of the method depending on the choice of function B ( s ) . We considered the following cases:
(a)
If B(s) = I − (s − I) + (s − I)(s − I), then this method was considered in [5];
(b)
If B ( s ) = I , then we have a multi-step Steffensen-type method.
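In code, and assuming the stm sketch given after Algorithm 1, the weight of case (a), as reconstructed above, can be written as:

def B_case_a(s):
    # Case (a): B(s) = I - (s - I) + (s - I)(s - I), a truncated Neumann
    # series approximating s^{-1} near the identity (cf. [5]).
    E = s - np.eye(s.shape[0])
    return np.eye(s.shape[0]) - E + E @ E

# usage: x, iters = stm(P, x0, B=B_case_a); case (b) is the default B(s) = I.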
Example 2.
Consider the system of n equations
∑_{j=1}^{n} x_j + e^{x_i} − 1 = 0, i = 1, …, n.
Here, E = R^n, D ⊂ R^n, and the exact solution is x* = (0, …, 0)^T.
Let us choose n = 10, m = 2, and the starting approximation x₀ = (2, …, 2)^T. We use the following stopping criterion for the iterative process:
‖x_{k+1} − x_k‖ ≤ 10^{-10}.
Here, ‖·‖ denotes the Euclidean norm.
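With the stm sketch given after Algorithm 1, the setup of this example can be reproduced as follows (illustrative Python, not the Octave programs behind Figures 1 and 2):

def P_ex2(x):
    # P_i(x) = sum_j x_j + exp(x_i) - 1, i = 1,...,n; solution x* = (0,...,0)^T.
    return np.sum(x) + np.exp(x) - 1.0

x0 = 2.0 * np.ones(10)                          # n = 10, x_0 = (2,...,2)^T
x, iters = stm(P_ex2, x0, a=0.1, b=0.1, m=2, eps=1e-10)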
Figure 1 and Figure 2 show the change in the correction’s norm at each iteration. The results are given for different values of parameters a and b.
The number of iterations of method (2) differs little between the considered functions B(s). However, the following differences can be noted. Method (2)-(a) converges no more slowly than method (2)-(b); however, the computational complexity of one iteration is higher by O(n³) operations. Starting from a certain iteration, the correction norm of method (2)-(a) decreases faster than that of method (2)-(b). In addition, the study of the considered method on different examples showed that it is advisable to choose the parameters a and b close to zero. In this case, method (2) converges for a larger set of initial approximations.
Example 3.
Consider the boundary value problem
y″(t) − y′(t)tg(t) + 2y²(t)/sin(t) = 0, 0 < t < π/2, y(0) = 0, y(π/2) = 1.
One of the exact solutions is y*(t) = sin(t). Denote υ_i ≈ y(t_i), i = 0, …, n + 1, where t_i = ih and h = π/(2(n + 1)). Using the approximations for the second- and first-order derivatives
υ″_i ≈ (υ_{i−1} − 2υ_i + υ_{i+1})/h², υ′_i ≈ (υ_{i+1} − υ_{i−1})/(2h), i = 1, …, n,
we obtain the following system of nonlinear equations:
P₁(x) = −2υ₁ + υ₂ − (h/2)υ₂ tg(t₁) + 2h²υ₁²/sin(t₁) = 0,
P_i(x) = υ_{i−1} − 2υ_i + υ_{i+1} − (h/2)(υ_{i+1} − υ_{i−1})tg(t_i) + 2h²υ_i²/sin(t_i) = 0, i = 2, …, n − 1,
P_n(x) = υ_{n−1} − 2υ_n + 1 − (h/2)(1 − υ_{n−1})tg(t_n) + 2h²υ_n²/sin(t_n) = 0,
with x = (υ₁, …, υ_n)^T. Let a = b = 0.1, n = 19, and the initial approximation be x_{0,i} = sin(t_i) + 0.5, i = 1, …, n. We use ‖P(x_k)‖ ≤ 10^{-10} as the stopping criterion for this problem. The norm of the residual at each iteration is presented in Table 1.
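The discretized system can be assembled as below (an illustrative sketch consistent with the reconstruction above; for this problem the stopping test uses the residual norm rather than the correction):

def make_P_ex3(n):
    # Finite-difference residual of the BVP, with boundary values v_0 = 0, v_{n+1} = 1.
    h = np.pi / (2 * (n + 1))
    t = h * np.arange(1, n + 1)
    def P(v):
        w = np.concatenate(([0.0], v, [1.0]))     # attach the boundary values
        d2 = w[:-2] - 2.0 * w[1:-1] + w[2:]       # h^2 * y''(t_i)
        d1 = (h / 2.0) * (w[2:] - w[:-2])         # h^2 * y'(t_i)
        return d2 - d1 * np.tan(t) + 2.0 * h**2 * v**2 / np.sin(t)
    return P, t

P3, t = make_P_ex3(19)                            # n = 19
x0 = np.sin(t) + 0.5                              # initial approximation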

5. Conclusions

A new methodology has been developed to study the convergence of methods whose earlier analyses rely on derivatives that are not related to the divided differences or derivatives actually appearing in the methods.
In this article, using this methodology, we positively address concerns limiting the applicability of methods whose convergence is shown using the Taylor series. Other limitations, such as the lack of information on the uniqueness of the solution and of a priori estimates of ‖x* − x_k‖ or ‖x_{k+1} − x_k‖, are also addressed in this article under weak conditions.
The methodology is demonstrated on symmetric Steffensen-type multi-step and derivative-free methods using frozen divided differences. But it can be used on other single- as well as multi-step methods, as long as they use linear operators that are invertible.
In our future research we plan to utilize this methodology in other methods [1,2,4,6,7,8,9,10,11,12,13,14,15,16].

Author Contributions

Conceptualization, I.K.A., S.S., S.R., H.Y. and M.I.A.; methodology, I.K.A., S.S., S.R., H.Y. and M.I.A.; software, I.K.A., S.S., S.R., H.Y. and M.I.A.; validation, I.K.A., S.S., S.R., H.Y. and M.I.A.; formal analysis, I.K.A., S.S., S.R., H.Y. and M.I.A.; investigation, I.K.A., S.S., S.R., H.Y. and M.I.A.; resources, I.K.A., S.S., S.R., H.Y. and M.I.A.; data curation, I.K.A., S.S., S.R., H.Y. and M.I.A.; writing—original draft preparation, I.K.A., S.S., S.R., H.Y. and M.I.A.; writing—review and editing, I.K.A., S.S., S.R., H.Y. and M.I.A.; visualization, I.K.A., S.S., S.R., H.Y. and M.I.A.; supervision, I.K.A., S.S., S.R., H.Y. and M.I.A.; project administration, I.K.A., S.S., S.R., H.Y. and M.I.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Traub, J.F. Iterative Methods for the Solution of Equations; Prentice-Hall: Upper Saddle River, NJ, USA, 1964.
  2. Potra, F.; Pták, V. Nondiscrete Induction and Iterative Processes; Pitman Publishing: Lanham, MD, USA, 1984.
  3. Cordero, A.; Torregrosa, J.R. Variants of Newton's method using fifth-order quadrature formulas. Appl. Math. Comput. 2007, 190, 686–698.
  4. Amat, S.; Ezquerro, J.; Hernández, M. On a Steffensen-like method for solving nonlinear equations. Calcolo 2016, 53, 171–188.
  5. Cordero, A.; Villalba, E.G.; Torregrosa, J.R.; Triguero-Navarro, P. Introducing memory to a family of multi-step multidimensional iterative methods with weight function. Expo. Math. 2023, 42, 398–417.
  6. Alarcón, V.; Amat, S.; Busquier, S.; López, D. Steffensen's type method in Banach spaces with applications on boundary-value problems. J. Comput. Appl. Math. 2008, 216, 243–250.
  7. Amat, S.; Argyros, I.; Busquier, S.; Hernández-Verón, M.; Magreñán, A.; Martínez, E. A multistep Steffensen-type method for solving nonlinear systems of equations. Math. Methods Appl. Sci. 2020, 43, 7518–7536.
  8. Bhalla, S.; Kumar, S.; Argyros, I.; Behl, R.; Motsa, S. High-order modification of Steffensen's method for solving system of nonlinear equations. Comput. Appl. Math. 2018, 37, 1913–1940.
  9. Narang, M.; Bhatia, S.; Alshomrani, A.S.; Kanwar, V. General efficient class of Steffensen type methods with memory for solving systems of nonlinear equations. J. Comput. Appl. Math. 2019, 352, 23–39.
  10. Chicharro, F.I.; Cordero, A.; Garrido, N.; Torregrosa, J.R. Stability and applicability of iterative methods with memory. J. Math. Chem. 2019, 57, 1282–1300.
  11. Chun, C.; Neta, B.; Kozdon, J.; Scott, M. Choosing weight functions in iterative methods for simple roots. Appl. Math. Comput. 2014, 227, 788–800.
  12. George, S. On convergence of regularized modified Newton's method for nonlinear ill-posed problems. J. Inv. Ill-Posed Probl. 2010, 18, 133–146.
  13. George, S.; Nair, M.T. An a posteriori parameter choice for simplified regularization of ill-posed problems. Integral Equ. Oper. Theory 1993, 16, 392–399.
  14. Kung, H.T.; Traub, J.F. Optimal order of one-point and multi-point iteration. J. Assoc. Comput. Mach. 1974, 21, 643–651.
  15. Argyros, I.K.; Shakhno, S. Extended local convergence for the combined Newton–Kurchatov method under the generalized Lipschitz conditions. Mathematics 2019, 7, 207.
  16. Weerakoon, S.; Fernando, T.G.I. A variant of Newton's method with accelerated third-order convergence. Appl. Math. Lett. 2000, 13, 87–93.
Figure 1. Example 2: norm of the correction at each iteration.
Figure 2. Example 2: norm of the correction at each iteration.
Table 1. Results for Example 3: residual norm ‖P(x_k)‖ at iteration k.

k    Method (2)-(a)       Method (2)-(b)
1    7.9038 × 10^{-3}     2.1947 × 10^{-2}
2    1.9599 × 10^{-5}     1.1784 × 10^{-4}
3    4.5432 × 10^{-13}    2.0348 × 10^{-8}
4    —                    3.0205 × 10^{-16}

