Article

Semi-Local Analysis and Real Life Applications of Higher-Order Iterative Schemes for Nonlinear Systems

1
Department of Mathematics, King Abdulaziz University, Jeddah 21589, Saudi Arabia
2
Department of Mathematics, Chandigarh University, Gharuan 140413, Mohali, Punjab, India
3
Department of Mathematical Sciences, Cameron University, Lawton, OK 73505, USA
4
School of Mathematics, Thapar Institute of Engineering and Technology University, Patiala 147004, Punjab, India
*
Author to whom correspondence should be addressed.
Mathematics 2020, 8(1), 92; https://doi.org/10.3390/math8010092
Submission received: 25 November 2019 / Revised: 22 December 2019 / Accepted: 31 December 2019 / Published: 6 January 2020
(This article belongs to the Special Issue Computational Methods in Analysis and Applications 2020)

Abstract
Our aim is to improve the applicability of the family suggested by Bhalla et al. (Computational and Applied Mathematics, 2018) for the approximation of solutions of nonlinear systems. The semi-local convergence analysis relies only on conditions involving first-order derivatives and Lipschitz constants, in contrast to other works requiring higher-order derivatives that do not appear in these schemes. Hence, the applicability of these schemes is extended. Moreover, a variety of real-world problems, namely Bratu's 1D, Bratu's 2D and Fisher's problems, are solved in order to illustrate the utilization of the family and to test the theoretical results, adopting variable-precision arithmetic in Mathematica 10. On account of these examples, it is concluded that the family is more efficient and shows better performance than the existing ones.

1. Introduction

Many problems in the applied sciences and engineering reduce to finding the solutions of systems of nonlinear equations. Generally, the zeros of a nonlinear system cannot be expressed in closed form; therefore, iterative methods are the most frequently used techniques for approximating the solutions of such systems. This topic has always been of paramount importance in mathematics for approximating the roots of a nonlinear system of the form
$$\Gamma(\vartheta) = 0,$$
where $\Gamma(\vartheta) = (f_1(\vartheta), f_2(\vartheta), \ldots, f_n(\vartheta))^T$, $\vartheta = (\vartheta_1, \vartheta_2, \ldots, \vartheta_n)^T$ and $\Gamma : \mathbb{R}^n \to \mathbb{R}^n$ is a sufficiently differentiable vector function. One of the most basic and common iterative methods for approximating the solutions of a nonlinear system is Newton's method [1,2,3,4,5,6], defined as follows:
$$\vartheta_{\sigma+1} = \vartheta_\sigma - \Gamma'(\vartheta_\sigma)^{-1}\Gamma(\vartheta_\sigma), \quad \sigma = 0, 1, 2, \ldots,$$
where $\Gamma'(\vartheta_\sigma)^{-1}$ is the inverse of the first-order Fréchet derivative of $\Gamma$ at $\vartheta_\sigma$; the method converges quadratically. Some higher-order Newton-like schemes can be found in [7,8,9,10,11,12,13,14,15,16]. However, these schemes require $\Gamma'(\vartheta_\sigma)^{-1}$, which is very expensive to obtain or demands a large amount of computation time. That is why Traub [17] established a derivative-free second-order convergent scheme, defined as follows:
$$\vartheta_{\sigma+1} = \vartheta_\sigma - [\Lambda_\sigma, \vartheta_\sigma; \Gamma]^{-1}\Gamma(\vartheta_\sigma),$$
where $\Lambda_\sigma = \vartheta_\sigma + \beta\,\Gamma(\vartheta_\sigma)$, $\beta \in \mathbb{R}\setminus\{0\}$. The finite difference $[\Lambda_\sigma, \vartheta_\sigma; \Gamma]$ (see [18,19]) is given by
$$[\Lambda_\sigma, \vartheta_\sigma; \Gamma]_{ij} = \frac{f_i(\lambda_1^\sigma, \ldots, \lambda_{j-1}^\sigma, \lambda_j^\sigma, \vartheta_{j+1}^\sigma, \ldots, \vartheta_n^\sigma) - f_i(\lambda_1^\sigma, \ldots, \lambda_{j-1}^\sigma, \vartheta_j^\sigma, \vartheta_{j+1}^\sigma, \ldots, \vartheta_n^\sigma)}{\lambda_j^\sigma - \vartheta_j^\sigma},$$
where $\vartheta_\sigma = (\vartheta_1^\sigma, \ldots, \vartheta_n^\sigma)$, $\Lambda_\sigma = (\lambda_1^\sigma, \ldots, \lambda_n^\sigma)$ and $1 \le i, j \le n$. For $\beta = 1$, Traub's scheme reduces to Steffensen's scheme [20].
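As a concrete illustration, the component-wise divided difference above and one Traub/Steffensen step can be sketched in a few lines of Python with NumPy; the function names and the toy system are our own choices, not taken from the paper.

```python
import numpy as np

def divided_difference(F, lam, x):
    """First-order divided difference [lam, x; F], assembled column by column
    following the component-wise formula above."""
    n = x.size
    M = np.empty((n, n))
    for j in range(n):
        hi = np.concatenate([lam[:j + 1], x[j + 1:]])  # (l_1..l_j, x_{j+1}..x_n)
        lo = np.concatenate([lam[:j], x[j:]])          # (l_1..l_{j-1}, x_j..x_n)
        M[:, j] = (F(hi) - F(lo)) / (lam[j] - x[j])
    return M

def traub_step(F, x, beta=1.0):
    """One step of Traub's derivative-free scheme; beta = 1 gives Steffensen."""
    lam = x + beta * F(x)
    return x - np.linalg.solve(divided_difference(F, lam, x), F(x))

# Toy system with root (1, 1): F(x) = (x_1^2 - 1, x_2^2 - 1).
F = lambda x: np.array([x[0] ** 2 - 1.0, x[1] ** 2 - 1.0])
x = np.array([1.3, 0.8])
for _ in range(20):
    if np.linalg.norm(F(x)) < 1e-12:   # stop before the denominators vanish
        break
    x = traub_step(F, x)
```

The stopping test is checked before each step because the denominators $\lambda_j - \vartheta_j = \beta f_j(\vartheta)$ degenerate once a component residual is exactly zero.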
Bhalla et al. [21] have also presented the following scheme:
$$\begin{aligned}
\psi_1^\sigma &= \vartheta_\sigma - [\vartheta_\sigma + \Gamma(\vartheta_\sigma), \vartheta_\sigma - \Gamma(\vartheta_\sigma); \Gamma]^{-1}\Gamma(\vartheta_\sigma),\\
\psi_2^\sigma &= \psi_1^\sigma - \eta\,\Gamma(\psi_1^\sigma),\\
\psi_3^\sigma &= \psi_2^\sigma - \eta\,\Gamma(\psi_2^\sigma),\\
\psi_4^\sigma &= \psi_3^\sigma - \eta\,\Gamma(\psi_3^\sigma),\\
&\;\;\vdots\\
\psi_{t-1}^\sigma &= \psi_{t-2}^\sigma - \eta\,\Gamma(\psi_{t-2}^\sigma),\\
\psi_t^\sigma &= \psi_{t-1}^\sigma - \eta\,\Gamma(\psi_{t-1}^\sigma),
\end{aligned}$$
where
$$\eta = \tau^{-1}\Big[(\alpha_2^2 - 2\alpha_1\alpha_2)[\psi_1^\sigma, \vartheta_\sigma; \Gamma] + (\alpha_1^2 + \alpha_2^2 - 3\alpha_1\alpha_2)[\vartheta_\sigma + \Gamma(\vartheta_\sigma), \vartheta_\sigma - \Gamma(\vartheta_\sigma); \Gamma]\Big],$$
$$\begin{aligned}
\tau &= (2\alpha_1\alpha_2 - \alpha_2^2)[\psi_1^\sigma, \vartheta_\sigma; \Gamma][\psi_1^\sigma, \vartheta_\sigma; \Gamma] - \alpha_1\alpha_2[\psi_1^\sigma, \vartheta_\sigma; \Gamma][\vartheta_\sigma + \Gamma(\vartheta_\sigma), \vartheta_\sigma - \Gamma(\vartheta_\sigma); \Gamma]\\
&\quad + (2\alpha_1^2 + \alpha_2^2 - 3\alpha_1\alpha_2)[\vartheta_\sigma + \Gamma(\vartheta_\sigma), \vartheta_\sigma - \Gamma(\vartheta_\sigma); \Gamma][\psi_1^\sigma, \vartheta_\sigma; \Gamma]\\
&\quad - (\alpha_1^2 - \alpha_1\alpha_2)[\vartheta_\sigma + \Gamma(\vartheta_\sigma), \vartheta_\sigma - \Gamma(\vartheta_\sigma); \Gamma][\vartheta_\sigma + \Gamma(\vartheta_\sigma), \vartheta_\sigma - \Gamma(\vartheta_\sigma); \Gamma],
\end{aligned}$$
and $t = 2, 3, 4, \ldots$.
Here, $\alpha_1$ and $\alpha_2$ are real (or complex) parameters, provided both parameters do not equal zero simultaneously. Globally convergent schemes for algebraic systems can be found in [22,23,24]. However, here we work on Banach spaces to determine only a locally unique solution.
Let us consider as a motivational example a nonlinear mixed Hammerstein-type integral equation appearing in numerous studies [9,10,11,12], given by
$$x(s) = \int_0^1 K(s, \zeta)\big(x(\zeta)^{5/3} + x(\zeta)^2\big)\, d\zeta, \quad 0 \le s \le 1,$$
where
$$K(s, \zeta) = \begin{cases} \zeta(1 - s), & \zeta \le s,\\ s(1 - \zeta), & s \le \zeta. \end{cases}$$
Solving Equation (7) is equivalent to solving $Q(x) = 0$, with $Q : D \subseteq C[0,1] \to C[0,1]$ given as
$$Q(x)(s) = x(s) - \int_0^1 K(s, \zeta)\big(x(\zeta)^{5/3} + x(\zeta)^2\big)\, d\zeta.$$
It follows that
$$Q'(x)y(s) = y(s) - \int_0^1 K(s, \zeta)\left(\tfrac{5}{3}x(\zeta)^{2/3} + 2x(\zeta)\right) y(\zeta)\, d\zeta.$$
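The operator $Q$ above can be discretized on a uniform grid, for instance with the trapezoidal rule; the following Python sketch (our own illustrative names, not from the paper) evaluates the discrete residual and uses the fact that the zero function trivially satisfies $Q(x) = 0$.

```python
import numpy as np

def green_kernel(s, z):
    """K(s, z) = z(1 - s) for z <= s and s(1 - z) for s <= z."""
    return np.where(z <= s, z * (1.0 - s), s * (1.0 - z))

def Q_residual(x, nodes, weights):
    """Discretization of Q(x)(s_i) = x(s_i) - sum_k w_k K(s_i, z_k)(x_k^{5/3} + x_k^2)."""
    S, Z = np.meshgrid(nodes, nodes, indexing="ij")
    K = green_kernel(S, Z)
    g = np.sign(x) * np.abs(x) ** (5.0 / 3.0) + x ** 2  # odd-power-safe x^{5/3}
    return x - K @ (weights * g)

n = 21
nodes = np.linspace(0.0, 1.0, n)
weights = np.full(n, 1.0 / (n - 1))
weights[0] = weights[-1] = 0.5 / (n - 1)  # trapezoidal weights
r0 = Q_residual(np.zeros(n), nodes, weights)  # x* = 0 gives a zero residual
```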
The local convergence analysis given in [21] uses Taylor expansions and derivatives up to the eighth order. Consequently, those results cannot be applied to $Q(x) = 0$, since the higher-order derivatives of $Q$ are unbounded. Such suppositions confine the usage of Scheme (3) and related schemes. Motivated by these problems, we develop a semi-local analysis for Scheme (5) with suppositions only on the first Fréchet derivative as well as divided differences of order one (see, in particular, the continuation of this example in Example 1). Equation (10) is used to determine how close the initial guess should be to the solution to assure convergence of the method. Moreover, the (H) conditions that follow indicate the constraints on the involved operators. Notice also that the work in [22,23,24] cannot be used to solve the motivational example.

2. Semi-Local Convergence

To introduce the semi-local analysis of Scheme (5), consider $\Gamma : D \subseteq B \to B$ to be a Fréchet-differentiable operator, where $B$ is a Banach space. Let $\alpha_1, \alpha_2 \in \mathbb{R}$ (or $\mathbb{C}$), $\gamma_0 \ge 0$, $d > 0$, $d_0 > 0$, $d_1 \ge 0$ and $d_2 > 0$ be parameters. Moreover, let $w_0 : [0, \infty)^2 \to [0, \infty)$, $w : [0, \infty)^2 \to [0, \infty)$ be continuous and nondecreasing functions in both variables. Define $\gamma$, $b$ and $b_i$, $i = 1, 2, 3, 4, 5$, as
$$\gamma = \frac{\gamma_0}{1 - w_0(d_1, d_1)},\quad b_1 = \alpha_1^2 + \alpha_1\alpha_2,\quad b_2 = 2\alpha_1^2 + \alpha_2^2 - 3\alpha_1\alpha_2,\quad b_3 = \alpha_1\alpha_2,\quad b_4 = 2\alpha_1\alpha_2 - \alpha_2^2,\quad b_5 = \alpha_1^2 + \alpha_2^2 - 3\alpha_1\alpha_2,\quad\text{and}\quad b = b_1 + b_2.$$
Suppose that
$$b \ne 0.$$
Define functions $v_0$, $v_1$, $v_2$, $v_3$ and $v$ on the interval $[0, \infty)$ as follows:
$$\begin{aligned}
v_0(t) &= |b|^{-1}\left[|b_1|\, w_0(t + d_1, t + d_1) + |b_2|\, w_0(t, t) + \frac{(|b_4| + d_2|b_3|)\, d_0\, d}{1 - w_0(t + d_1, t + d_1)}\right],\\
v_1(t) &= \left[\frac{|b_4|\, d}{1 - w_0(t + d_1, t + d_1)} + |b_2|\right]\frac{w(\gamma + d_1, d_1)}{|b|\,(1 - v_0(t))},\\
v_2(t) &= \frac{|b_4|\, d}{1 - w_0(t + d_1, t + d_1)} + |b_2 + 1|,\\
v_3(t) &= \frac{d_0\, d_2}{|b|\,(1 - v_0(t))(1 - v_2(t))}\left[\frac{|b_4|\, d}{1 - w_0(t + d_1, t + d_1)} + |b_2|\right]\left[\frac{d\,(2|b_4| + |b_3|)}{1 - w_0(t + d_1, t + d_1)} + 2|b_2| + |b_1|\right],\\
\text{and}\quad v(t) &= \frac{\gamma}{1 - v_3(t)} - t.
\end{aligned}$$
Suppose that the function $v$ has a smallest zero $R > 0$ with
w 0 ( R + d 1 , R + d 1 ) < 1 ,
v 0 ( R ) < 1 ,
v 1 ( R ) < 1 ,
v 2 ( R ) < 1
and
v 3 ( R ) < 1 .
Define parameter q by
q = max { v 1 ( R ) , v 3 ( R ) } .
By the definition of $q$, $v_1(R)$ and $v_3(R)$, we conclude that $q \in (0, 1)$. Moreover, it follows from Equations (9)–(15) that the "v" functions are well defined. Next, we demonstrate the convergence of Scheme (5) by adopting the preceding notation.
Let the ( H ) conditions be used in the semi-local convergence in the following way
(H1) Let $\Gamma : D \subseteq B \to B$ be a continuous operator that is Fréchet differentiable at some $\vartheta_0 \in D$ with $\Gamma'(\vartheta_0)^{-1} \in L(B)$.
(H2) There exists a divided difference of order one $[\cdot, \cdot; \Gamma] : D \times D \to L(B)$.
(H3) There exist $\gamma_0 \ge 0$, $d_0 > 0$, $d_1 \ge 0$, $d_2 > 0$ and $d > 0$ such that for each $\vartheta, \Lambda \in D$
$$\|\Gamma'(\vartheta_0)^{-1}\Gamma(\vartheta_0)\| \le \gamma_0,\quad \|[\vartheta, \Lambda; \Gamma]\| \le d_0,\quad \|\Gamma(\vartheta)\| \le d_1,\quad \|\Gamma'(\vartheta_0)^{-1}\| \le d_2 \quad\text{and}\quad \|\Gamma'(\vartheta_0)^{-1}[\vartheta, \Lambda; \Gamma]\| \le d.$$
(H4) There exist continuous and nondecreasing functions $w_0, w : [0, \infty) \times [0, \infty) \to [0, \infty)$ such that for each $\vartheta, \Lambda, Z, W \in D$
$$\|\Gamma'(\vartheta_0)^{-1}([\vartheta, \Lambda; \Gamma] - \Gamma'(\vartheta_0))\| \le w_0(\|\vartheta - \vartheta_0\|, \|\Lambda - \vartheta_0\|),$$
$$\|\Gamma'(\vartheta_0)^{-1}([\vartheta, \Lambda; \Gamma] - [Z, W; \Gamma])\| \le w(\|\vartheta - Z\|, \|\Lambda - W\|).$$
(H5) The function $v$ defined by Equation (10) has a smallest positive zero denoted by $R$.
(H6) Conditions (9) and (11)–(15) hold.
(H7) $\bar{U}(\vartheta_0, R + d_1) \subseteq D$.
Let $U(\gamma, \rho)$ and $\bar{U}(\gamma, \rho)$ denote, respectively, the open and closed balls in $B$ centered at $\gamma \in B$ and of radius $\rho > 0$. Next, the semi-local convergence analysis of Scheme (5) under the (H) conditions follows:
Theorem 1.
Assume that the (H) conditions are satisfied. Then, the sequence generated by Scheme (5) is well defined, remains in $\bar{U}(\vartheta_0, R)$ and converges to a solution $\vartheta^*$ of the equation $\Gamma(\vartheta) = 0$.
Proof. 
We first show that Scheme (5) is well defined and remains in U ¯ ( ϑ 0 , R ) . Using the third condition in ( H 3 ) and ( H 7 ) , we have
$$\|\vartheta + \Gamma(\vartheta) - \vartheta_0\| \le \|\Gamma(\vartheta)\| + \|\vartheta - \vartheta_0\| < d_1 + R \quad\text{and}\quad \|\vartheta - \Gamma(\vartheta) - \vartheta_0\| \le \|\vartheta - \vartheta_0\| + \|\Gamma(\vartheta)\| < d_1 + R,$$
for each $\vartheta \in U(\vartheta_0, R + d_1)$.
Hence, we have
$$\vartheta + \Gamma(\vartheta),\; \vartheta - \Gamma(\vartheta) \in U(\vartheta_0, R + d_1).$$
In particular,
$$\vartheta_0 + \Gamma(\vartheta_0),\; \vartheta_0 - \Gamma(\vartheta_0) \in U(\vartheta_0, R + d_1).$$
By the first condition in (H4) and Equation (11), we obtain
$$\|\Gamma'(\vartheta_0)^{-1}([\vartheta + \Gamma(\vartheta), \vartheta - \Gamma(\vartheta); \Gamma] - \Gamma'(\vartheta_0))\| \le w_0(\|\vartheta + \Gamma(\vartheta) - \vartheta_0\|, \|\vartheta - \Gamma(\vartheta) - \vartheta_0\|) \le w_0(\|\vartheta - \vartheta_0\| + \|\Gamma(\vartheta)\|, \|\vartheta - \vartheta_0\| + \|\Gamma(\vartheta)\|) \le w_0(R + d_1, R + d_1) < 1.$$
It follows from Equation (17) and the Banach lemma on invertible operators [10,11] that $[\vartheta + \Gamma(\vartheta), \vartheta - \Gamma(\vartheta); \Gamma]^{-1} \in L(B)$ and
$$\|[\vartheta + \Gamma(\vartheta), \vartheta - \Gamma(\vartheta); \Gamma]^{-1}\Gamma'(\vartheta_0)\| \le \frac{1}{1 - w_0(\|\vartheta - \vartheta_0\| + \|\Gamma(\vartheta)\|, \|\vartheta - \vartheta_0\| + \|\Gamma(\vartheta)\|)} \le \frac{1}{1 - w_0(R + d_1, R + d_1)}.$$
By Equation (18) for $\vartheta = \vartheta_0$ and the first sub-step of Scheme (5), $\psi_1^0$ is well defined. Then, by the first condition in (H3) and Equation (18), we have the estimate
$$\|\psi_1^0 - \vartheta_0\| = \|[\vartheta_0 + \Gamma(\vartheta_0), \vartheta_0 - \Gamma(\vartheta_0); \Gamma]^{-1}\Gamma(\vartheta_0)\| \le \|[\vartheta_0 + \Gamma(\vartheta_0), \vartheta_0 - \Gamma(\vartheta_0); \Gamma]^{-1}\Gamma'(\vartheta_0)\|\,\|\Gamma'(\vartheta_0)^{-1}\Gamma(\vartheta_0)\| \le \frac{\gamma_0}{1 - w_0(d_1, d_1)} := \gamma.$$
We can write
$$\Gamma(\psi_1^0) = \Gamma(\psi_1^0) - \Gamma(\vartheta_0) - [\vartheta_0 + \Gamma(\vartheta_0), \vartheta_0 - \Gamma(\vartheta_0); \Gamma](\psi_1^0 - \vartheta_0) = \big([\psi_1^0, \vartheta_0; \Gamma] - [\vartheta_0 + \Gamma(\vartheta_0), \vartheta_0 - \Gamma(\vartheta_0); \Gamma]\big)(\psi_1^0 - \vartheta_0),$$
since
$$[\vartheta, \Lambda; \Gamma](\vartheta - \Lambda) = \Gamma(\vartheta) - \Gamma(\Lambda).$$
By the second conditions in ( H 3 ) , ( H 4 ) and Equation (20), we obtain
$$\|\Gamma'(\vartheta_0)^{-1}\Gamma(\psi_1^0)\| = \|\Gamma'(\vartheta_0)^{-1}\big([\psi_1^0, \vartheta_0; \Gamma] - [\vartheta_0 + \Gamma(\vartheta_0), \vartheta_0 - \Gamma(\vartheta_0); \Gamma]\big)(\psi_1^0 - \vartheta_0)\| \le w(\|\psi_1^0 - \vartheta_0 - \Gamma(\vartheta_0)\|, \|\vartheta_0 - \vartheta_0 + \Gamma(\vartheta_0)\|)\,\|\psi_1^0 - \vartheta_0\| \le w(\|\psi_1^0 - \vartheta_0\| + \|\Gamma(\vartheta_0)\|, \|\Gamma(\vartheta_0)\|)\,\|\psi_1^0 - \vartheta_0\| \le w(\gamma + d_1, d_1)\,\|\psi_1^0 - \vartheta_0\|.$$
Set
$$A_\sigma = b_4[\vartheta_\sigma + \Gamma(\vartheta_\sigma), \vartheta_\sigma - \Gamma(\vartheta_\sigma); \Gamma]^{-1}[\psi_1^\sigma, \vartheta_\sigma; \Gamma] + b_2 I$$
and
$$\begin{aligned}
B_\sigma &= b_1[\vartheta_\sigma + \Gamma(\vartheta_\sigma), \vartheta_\sigma - \Gamma(\vartheta_\sigma); \Gamma] + b_2[\psi_1^\sigma, \vartheta_\sigma; \Gamma]\\
&\quad + b_3[\vartheta_\sigma + \Gamma(\vartheta_\sigma), \vartheta_\sigma - \Gamma(\vartheta_\sigma); \Gamma]^{-1}[\psi_1^\sigma, \vartheta_\sigma; \Gamma][\vartheta_\sigma + \Gamma(\vartheta_\sigma), \vartheta_\sigma - \Gamma(\vartheta_\sigma); \Gamma]\\
&\quad + b_4\big([\vartheta_\sigma + \Gamma(\vartheta_\sigma), \vartheta_\sigma - \Gamma(\vartheta_\sigma); \Gamma]^{-1}[\psi_1^\sigma, \vartheta_\sigma; \Gamma]\big)^2.
\end{aligned}$$
We must show that $A_\sigma^{-1}, B_\sigma^{-1} \in L(B)$. By Equations (14), (18) and (23) and the last condition in (H3), we get that
$$\|A_0 + I\| \le |b_4|\,\|[\vartheta_0 + \Gamma(\vartheta_0), \vartheta_0 - \Gamma(\vartheta_0); \Gamma]^{-1}\Gamma'(\vartheta_0)\|\,\|\Gamma'(\vartheta_0)^{-1}[\psi_1^0, \vartheta_0; \Gamma]\| + |b_2 + 1| \le \frac{|b_4|\, d}{1 - w_0(R + d_1, R + d_1)} + |b_2 + 1| = v_2(R) < 1.$$
It follows from Equation (25) that
$$\|A_0^{-1}\| \le \frac{1}{1 - v_2(R)}.$$
Similarly, by Equations (9) and (12), and ( H 3 ) and ( H 4 ) , we get that
$$\begin{aligned}
&\|b^{-1}\,\Gamma'(\vartheta_0)^{-1}\big(B_0 - (b_1 + b_2)\Gamma'(\vartheta_0)\big)\|\\
&\quad\le |b|^{-1}\Big[|b_1|\,w_0(\|\vartheta_0 + \Gamma(\vartheta_0) - \vartheta_0\|, \|\vartheta_0 - \Gamma(\vartheta_0) - \vartheta_0\|) + |b_2|\,w_0(\|\psi_1^0 - \vartheta_0\|, 0)\\
&\qquad\quad + \frac{|b_3|\, d_0\, d_2\, d}{1 - w_0(R + d_1, R + d_1)} + \frac{|b_4|\, d_0\, d}{1 - w_0(R + d_1, R + d_1)}\Big]\\
&\quad\le |b|^{-1}\left[|b_1|\, w_0(R + d_1, R + d_1) + |b_2|\, w_0(R, R) + \frac{(|b_4| + d_2|b_3|)\, d_0\, d}{1 - w_0(R + d_1, R + d_1)}\right] = v_0(R) < 1.
\end{aligned}$$
Hence, by Equation (27), we get that
$$\|B_0^{-1}\Gamma'(\vartheta_0)\| \le \frac{1}{|b|\,(1 - v_0(R))}.$$
Then, by Equations (13), (16), (25) and (28), we have that
$$\|\psi_2^0 - \psi_1^0\| \le \|A_0\|\,\|B_0^{-1}\Gamma'(\vartheta_0)\|\,\|\Gamma'(\vartheta_0)^{-1}\Gamma(\psi_1^0)\| \le \left[\frac{|b_4|\, d}{1 - w_0(R + d_1, R + d_1)} + |b_2|\right]\frac{w(\gamma + d_1, d_1)}{|b|\,(1 - v_0(R))}\,\|\psi_1^0 - \psi_0^0\| = v_1(R)\,\|\psi_1^0 - \psi_0^0\| \le q\,\|\psi_1^0 - \psi_0^0\|.$$
Since ϑ 0 = ψ 0 0 , we have that
$$\|\psi_2^0 - \psi_0^0\| = \|\psi_2^0 - \vartheta_0\| \le \|\psi_2^0 - \psi_1^0\| + \|\psi_1^0 - \vartheta_0\| \le \frac{1 - q^2}{1 - q}\,\gamma < \frac{\gamma}{1 - q} \le R,$$
so $\psi_2^0 \in \bar{U}(\vartheta_0, R)$, and we obtain
$$\Gamma(\psi_2^0) = \Gamma(\psi_2^0) - \Gamma(\psi_1^0) - \eta^{-1}(\psi_2^0 - \psi_1^0) = A_0^{-1}\big(A_0[\psi_2^0, \psi_1^0; \Gamma] - B_0\big)(\psi_2^0 - \psi_1^0).$$
By Equations (23), (24), (26) and (31), and ( H 3 ) and ( H 4 ) , we have
$$\|\Gamma'(\vartheta_0)^{-1}\Gamma(\psi_2^0)\| \le \|A_0^{-1}\|\,\big(\|A_0\|\,\|\Gamma'(\vartheta_0)^{-1}[\psi_2^0, \psi_1^0; \Gamma]\| + \|\Gamma'(\vartheta_0)^{-1}B_0\|\big)\,\|\psi_2^0 - \psi_1^0\| \le \frac{d_2}{1 - v_2(R)}\left[\frac{d_0\, d\,|b_4|}{1 - w_0(R + d_1, R + d_1)} + d_0(|b_1| + |b_2|) + \frac{d_0\, d\,|b_4|}{1 - w_0(R + d_1, R + d_1)}\right]\|\psi_2^0 - \psi_1^0\|.$$
Then, using Equations (15), (16), (28) and (32), we get that
$$\|\psi_3^0 - \psi_2^0\| \le \|A_0\|\,\|B_0^{-1}\Gamma'(\vartheta_0)\|\,\|\Gamma'(\vartheta_0)^{-1}\Gamma(\psi_2^0)\| \le v_3(R)\,\|\psi_2^0 - \psi_1^0\| \le q\,\|\psi_2^0 - \psi_1^0\|.$$
We also have that
$$\|\psi_3^0 - \vartheta_0\| \le \|\psi_3^0 - \psi_2^0\| + \|\psi_2^0 - \psi_1^0\| + \|\psi_1^0 - \vartheta_0\| \le (1 + q + q^2)\,\|\psi_1^0 - \vartheta_0\| \le \frac{1 - q^3}{1 - q}\,\gamma < R.$$
That is, $\psi_3^0 \in U(\vartheta_0, R)$. By simply replacing $\psi_2^0, \psi_3^0$ by $\psi_k^m, \psi_{k+1}^m$, $k = 1, 2, 3, \ldots$, $m = 0, 1, 2, \ldots$, in the preceding estimates, we get that
$$\|\psi_{k+1}^m - \psi_k^m\| \le q\,\|\psi_k^m - \psi_{k-1}^m\|$$
and
$$\|\psi_{k+1}^m - \vartheta_0\| \le R,$$
which completes the induction. The sequence generated by Scheme (5) is then Cauchy in the Banach space $B$ and, by Equation (35) and $\psi_{k+1}^m \in \bar{U}(\vartheta_0, R)$, converges to some $\vartheta^* \in \bar{U}(\vartheta_0, R)$. Moreover, there exists (see Equations (14), (15) and (32)) $\delta \ge 0$ with
$$\|\Gamma'(\vartheta_0)^{-1}\Gamma(\psi_i^j)\| \le \delta\,\|\psi_i^j - \psi_{i-1}^j\|, \quad i = 1, 2, \ldots, p,\; j = 0, 1, 2, \ldots.$$
By letting $j \to \infty$ in Equation (37), we deduce that $\Gamma(\vartheta^*) = 0$. Hence, $\psi_p^0 \in U(\vartheta_0, R)$. □
Next, a uniqueness result is presented.
Proposition 1.
Adopting the hypotheses (H), suppose that there exists $R_1 \ge R$ such that
w 0 ( R , R 1 ) < 1 .
Then, ϑ * is the unique solution of the expression Γ ( ϑ ) = 0 in U ¯ ( ϑ 0 , R 1 ) .
Proof. 
The existence of the solution $\vartheta^*$ was established in Theorem 1. Let $S = [\vartheta^*, \Lambda^*; \Gamma]$, where $\Lambda^* \in \bar{U}(\vartheta_0, R_1)$ with $\Gamma(\Lambda^*) = 0$. Then, using the first condition in (H4) and Equation (38), we get that
$$\|\Gamma'(\vartheta_0)^{-1}([\vartheta^*, \Lambda^*; \Gamma] - \Gamma'(\vartheta_0))\| \le w_0(\|\vartheta^* - \vartheta_0\|, \|\Lambda^* - \vartheta_0\|) \le w_0(R, R_1) < 1.$$
It follows from Equation (39) that $S^{-1} \in L(B)$. Then, $\vartheta^* = \Lambda^*$ is deduced from $0 = \Gamma(\vartheta^*) - \Gamma(\Lambda^*) = S(\vartheta^* - \Lambda^*)$. □

3. Numerical Results

Several real life problems are provided to illustrate the convergence behavior of Scheme (5). We consider seven higher-order methods out of our proposed scheme, namely (ψ2 for α1 = ±10^20 & α2 = ±10^1000), (ψ2 for α1 = ±3 & α2 = ±10^2000), (ψ3 for α1 = ±10^20 & α2 = ±10^1000), (ψ3 for α1 = ±3 & α2 = ±10^2000), (ψ4 for α1 ∈ ℝ∖{0} & α2 = 0), (ψ4 for α1 = ±10^20 & α2 = ±10^1000) and (ψ4 for α1 = ±3 & α2 = ±10^2000), denoted by (OM1^4), (OM2^4), (OM3^6), (OM4^6), (OM5^8), (OM6^8) and (OM7^8), respectively, to test the convergence criteria. Of these seven schemes, the first two are of order four, the third and fourth are of order six, and the last three are of order eight. We compare them with the fourth-order schemes proposed by Sharma and Arora [25]; of these we consider their Equations (2) and (3), denoted by (SA1^4) and (SA2^4), respectively. In addition, we consider two higher-order methods of orders four and six presented by Grau et al. [18]; of these we choose their Equations (12) and (14), denoted by (GM1^4) and (GM2^6), respectively. Finally, we also compare with the seventh-order methods given by Wang and Zhang [26]; of these we consider their Equations (56) and (57), denoted by (WZ1^7) and (WZ2^7), respectively.
For a fair comparison of the schemes, we report the error between the exact and the estimated zero, ‖ϑ_m − ϑ*‖; COC stands for computational order of convergence, and CPU stands for CPU time, in Table 1, Table 2, Table 3 and Table 4. We used the following formulas [14]
$$\rho = \frac{\ln\big(\|\vartheta_{m+1} - \vartheta^*\| / \|\vartheta_m - \vartheta^*\|\big)}{\ln\big(\|\vartheta_m - \vartheta^*\| / \|\vartheta_{m-1} - \vartheta^*\|\big)}, \quad m = 1, 2, \ldots$$
or
$$\rho^* = \frac{\ln\big(\|\vartheta_{m+1} - \vartheta_m\| / \|\vartheta_m - \vartheta_{m-1}\|\big)}{\ln\big(\|\vartheta_m - \vartheta_{m-1}\| / \|\vartheta_{m-1} - \vartheta_{m-2}\|\big)}, \quad m = 2, 3, \ldots,$$
for the COC and the approximate computational order of convergence (ACOC), respectively.
It is vital to note that ρ and ρ* do not need any higher-order derivatives to compute the error bounds. The notation a1(±a2) denotes a1 × 10^(±a2).
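The ACOC formula above is easy to evaluate from a list of iterates; the following Python sketch (our own helper, not from the paper) applies it to Newton's method on a scalar equation, where the estimate should approach the theoretical order 2.

```python
import math

def acoc(xs):
    """Approximate computational order of convergence rho* from successive
    iterates, using e_m = |x_{m+1} - x_m| as in the formula above."""
    e = [abs(b - a) for a, b in zip(xs, xs[1:])]
    return math.log(e[-1] / e[-2]) / math.log(e[-2] / e[-3])

# Newton's method on f(x) = x^2 - 2 converges quadratically.
x, xs = 1.5, [1.5]
for _ in range(4):
    x = x - (x * x - 2.0) / (2.0 * x)
    xs.append(x)
rho_star = acoc(xs)
```

Only a few iterations are used, since in double precision further steps stall at machine accuracy and the difference quotients degenerate.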
The computational works are performed with programming software Mathematica-10 (Wolfram Research, Champaign, IL, USA) [27] and the configurations of our computer are given below:
  • A processor Intel(R) Core (TM) i5-3210M (Intel, Santa Clara, CA, USA)
  • CPU @ 2.50 GHz (64-bit machine) (Intel, Santa Clara, CA, USA)
  • Microsoft Windows 8 (Microsoft Corporation, Redmond, WA, USA).
We consider at least 1000 digits of mantissa in order to minimize the round-off errors. In addition, all the problems are first transformed into nonlinear systems of equations and then solved using the proposed iterative scheme.
Example 1.
Returning to Equation (7), for $[\vartheta, \Lambda; \Gamma] = \int_0^1 \Gamma'(\Lambda + v(\vartheta - \Lambda))\, dv$, we see that our results apply if we choose
$$\bar{w}_0(t) = \bar{w}_1(t) = d_2\left(\frac{5}{24}\, t^{2/3} + 2t\right), \quad\text{and}\quad w_0(s, t) = w(s, t) = \frac{1}{2}\big(\bar{w}_0(s) + \bar{w}_0(t)\big),$$
for ϑ 0 sufficiently close to the solution ϑ * = 0 .
Example 2. Bratu Problem: 
Here, we consider the well-known Bratu problem [28], which is given by
$$y'' + C e^y = 0, \quad y(0) = y(1) = 0.$$
It has a wide area of application, for example, in radioactive heat transfer, thermal reaction, the fuel ignition model of thermal combustion, chemical reactor theory, the Chandrasekhar model of the expansion of the universe and nanotechnology [28,29,30,31].
The finite difference discretization is used to convert the boundary value problem, Equation (42), into a nonlinear system of size 40 × 40 with step size h = 1/41. The second derivative is approximated by the central difference
$$y''_\sigma = \frac{\lambda_{\sigma-1} - 2\lambda_\sigma + \lambda_{\sigma+1}}{h^2}, \quad \sigma = 1, 2, \ldots, 40.$$
The computational comparison of the solution of this problem is shown in Table 1 and the graphical solution in Figure 1.
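As a rough illustration of how the 40 × 40 Bratu system is assembled, the following Python sketch builds the central-difference residual and solves it with a plain Newton iteration (not the paper's derivative-free family; the helper names are our own).

```python
import numpy as np

def bratu_residual(y, C, h):
    """F_s = (y_{s-1} - 2 y_s + y_{s+1})/h^2 + C e^{y_s}, with y_0 = y_41 = 0."""
    yp = np.concatenate([[0.0], y, [0.0]])     # pad with boundary values
    return (yp[:-2] - 2.0 * yp[1:-1] + yp[2:]) / h ** 2 + C * np.exp(y)

def bratu_jacobian(y, C, h):
    """Tridiagonal Jacobian: discrete Laplacian plus diag(C e^{y})."""
    n = y.size
    lap = (np.diag(np.full(n - 1, 1.0), -1) - 2.0 * np.eye(n)
           + np.diag(np.full(n - 1, 1.0), 1)) / h ** 2
    return lap + np.diag(C * np.exp(y))

n, C = 40, 3.0
h = 1.0 / (n + 1)                              # 41 subintervals, as in the text
y = np.zeros(n)
for _ in range(25):                            # plain Newton iteration
    F = bratu_residual(y, C, h)
    if np.linalg.norm(F) < 1e-10:
        break
    y -= np.linalg.solve(bratu_jacobian(y, C, h), F)
```

With C = 3 (below the critical Bratu parameter), Newton from the zero initial guess converges to the lower solution branch.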
Example 3. Bratu Problem in 2D: 
We choose a prominent 2D Bratu problem [32,33], which is defined by
$$u_{xx} + u_{tt} + C e^u = 0 \quad\text{on}\quad \Omega: \{(x, t) : 0 \le x \le 1,\; 0 \le t \le 1\}, \quad\text{with boundary condition } u = 0 \text{ on } \partial\Omega.$$
Let us assume that Γ_{i,j} = u(x_i, t_j) is the numerical solution at the grid points of the mesh. In addition, τ1 and τ2 denote the number of steps in the x and t directions, respectively, and h and k are the corresponding step sizes. In order to solve the partial differential equation (PDE) (43), we adopt the following approach
$$u_{xx}(x_i, t_j) = \frac{\Gamma_{i+1,j} - 2\Gamma_{i,j} + \Gamma_{i-1,j}}{h^2}, \quad C = 0.1, \quad t \in [0, 1],$$
which further yields the succeeding system of nonlinear equation (SNE)
$$\Gamma_{i,j+1} + \Gamma_{i,j-1} - 4\Gamma_{i,j} + \Gamma_{i+1,j} + \Gamma_{i-1,j} + h^2 C \exp(\Gamma_{i,j}) = 0, \quad i = 1, 2, \ldots, \tau_1,\; j = 1, 2, \ldots, \tau_2.$$
By choosing τ1 = τ2 = 11, h = 1/11 and C = 0.1, we get a large SNE of size 100 × 100. The starting point is
$$\vartheta_0 = 0.1\big(\sin(\pi h)\sin(\pi k), \sin(2\pi h)\sin(2\pi k), \ldots, \sin(10\pi h)\sin(10\pi k)\big)^T$$
and the results are depicted in Table 2. Further, the estimated solution is plotted in Figure 2.
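The five-point discretization of the 2D Bratu problem can be assembled and solved in much the same way; the sketch below (our own illustrative code, using a plain Newton iteration rather than the paper's scheme) exploits the Kronecker-product structure of the discrete Laplacian.

```python
import numpy as np

def bratu2d_residual(U, C, h):
    """Five-point residual: U_{i,j+1} + U_{i,j-1} - 4 U_{i,j}
       + U_{i+1,j} + U_{i-1,j} + h^2 C exp(U_{i,j}), zero on the boundary."""
    m = U.shape[0]
    P = np.zeros((m + 2, m + 2))
    P[1:-1, 1:-1] = U                          # pad with zero boundary values
    lap = P[:-2, 1:-1] + P[2:, 1:-1] + P[1:-1, :-2] + P[1:-1, 2:] - 4.0 * U
    return lap + h ** 2 * C * np.exp(U)

m, C = 10, 0.1                                 # 10x10 interior grid -> 100 unknowns
h = 1.0 / (m + 1)
off = np.diag(np.ones(m - 1), 1) + np.diag(np.ones(m - 1), -1)
L = np.kron(np.eye(m), off - 4.0 * np.eye(m)) + np.kron(off, np.eye(m))
U = np.zeros((m, m))
for _ in range(20):                            # Newton: J = L + diag(h^2 C e^U)
    F = bratu2d_residual(U, C, h).ravel()
    if np.linalg.norm(F) < 1e-12:
        break
    J = L + np.diag(h ** 2 * C * np.exp(U).ravel())
    U -= np.linalg.solve(J, F).reshape(m, m)
```

The row-major ravel of U matches the Kronecker ordering: j-neighbors sit inside a block (kron(I, ·)) and i-neighbors one block apart (kron(·, I)).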
Example 4. Fisher’s Equation: 
Here, we assume another typical non-linear Fisher’s equation [34], which is given by
$$u_t = D u_{xx} + u(1 - u), \quad u(x, 0) = 1.5 + 0.5\cos(\pi x),\; 0 \le x \le 1, \quad u_x(0, t) = 0,\; u_x(1, t) = 0,\; t \ge 0,$$
where D is the diffusion coefficient. Let us assume that Γ_{i,j} = u(x_i, t_j) is the numerical solution at the grid points of the mesh. In addition, τ1 and τ2 denote the number of steps in the x and t directions, respectively, and h and k are the corresponding step sizes. In order to solve PDE (46), we adopt the following approach
$$u_{xx}(x_i, t_j) = \frac{\Gamma_{i+1,j} - 2\Gamma_{i,j} + \Gamma_{i-1,j}}{h^2}, \quad u_t(x_i, t_j) = \frac{\Gamma_{i,j} - \Gamma_{i,j-1}}{k}, \quad u_x(x_i, t_j) = \frac{\Gamma_{i+1,j} - \Gamma_{i,j}}{h},$$
which further yields the succeeding SNE
$$\frac{\Gamma_{i,j} - \Gamma_{i,j-1}}{k} - \Gamma_{i,j}(1 - \Gamma_{i,j}) - D\,\frac{\Gamma_{i+1,j} - 2\Gamma_{i,j} + \Gamma_{i-1,j}}{h^2} = 0, \quad i = 1, 2, \ldots, \tau_1,\; j = 1, 2, \ldots, \tau_2.$$
By choosing τ1 = τ2 = 21, h = 1/τ1 and k = 1/τ2, we get a large SNE of size 400 × 400. The starting point is
$$\vartheta_0 = \big(i/(\tau_1 - 1)^2\big)^T, \quad i = 1, 2, \ldots, \tau_1 - 1,$$
and results are mentioned in Table 3. The approximate solution has been plotted in Figure 3.
Example 5.
Finally, we deal with the following SNE
$$\Gamma(\vartheta):\quad \vartheta_j^2\,\vartheta_{j+1} - 1 = 0,\ \ 1 \le j \le \sigma - 1, \qquad \vartheta_\sigma^2\,\vartheta_1 - 1 = 0.$$
In order to obtain a large system of nonlinear equations of size 100 × 100, we pick σ = 100. In addition, we consider the following starting approximation for this problem:
$$\vartheta^{(0)} = (1.13, 1.13, \ldots, 1.13)^T \quad (100\ \text{times}),$$
which converges to ξ* = (1, 1, 1, …, 1)^T (100 times). The attained computational outcomes are illustrated in Table 4.
Remark 1.
It follows from Table 1, Table 2, Table 3 and Table 4 that our methods have a very small error between the exact and the approximated root as compared to the other mentioned methods. In addition, they have a more stable computational order of convergence and take less CPU time for better accuracy in the required zero.
In some tables the values of the error approximations ‖ϑ^(m) − ϑ*‖ appear the same for different α1 and α2, but they actually differ; if the errors were reported to more significant digits, the difference would be visible. However, due to limited page space, only three significant digits are shown for the different α1 and α2.

4. Concluding Remarks

In this work, a semi-local convergence analysis of the family proposed in [21] has been carried out by adopting Lipschitz constants and divided differences of order one in a Banach space setting under weak conditions, so that the applicability of Scheme (5) and other related schemes is expanded. The use of this family on real life problems, namely Bratu's 1D (SNE 40 × 40), Bratu's 2D (SNE 100 × 100), Fisher's problem (SNE 400 × 400) and polynomial equations (SNE 100 × 100), also confirms the applicability of this family.

Author Contributions

R.B. and I.K.A.: Conceptualization; Methodology; Validation; Writing–Original Draft Preparation; Writing–Review & Editing. S.B. and S.K.: Validation; Review & Editing. All authors have read and agreed to the published version of the manuscript.

Funding

Deanship of Scientific Research (DSR) at King Abdulaziz University, Jeddah, Saudi Arabia, under Grant No. D-405-130-1441.

Acknowledgments

This work was funded by the Deanship of Scientific Research (DSR), King Abdulaziz University, Jeddah, under grant No. (D-405-130-1441). The authors, therefore, acknowledge with thanks DSR technical and financial support.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Ezquerro, J.A.; Grau-Sánchez, M.; Hernández-veron, M.A.; Noguera, M. A family of iterative methods that uses divided differences of first and second orders. Numer. Algor. 2015, 70, 571–589. [Google Scholar] [CrossRef] [Green Version]
  2. Grau-Sánchez, M.; Noguera, M. Frozen divided difference scheme for solving system of nonlinear equations. J. Comput. Appl. Math. 2011, 235, 1739–1743. [Google Scholar] [CrossRef]
  3. Ortega, J.M.; Rheinboldt, W.C. Iterative Solution of Nonlinear Equations in Several Variables; Academic Press: New York, NY, USA, 1970. [Google Scholar]
  4. Ostrowski, A.M. Solutions of Equations and System of Equations; Academic Press: New York, NY, USA, 1960. [Google Scholar]
  5. Petković, M.S. Remarks on On a general class of multipoint root finding methods of high computational efficiency. SIAM J. Numer. Anal. 2011, 49, 1317–1319. [Google Scholar] [CrossRef]
  6. Potra, F.A.; Pták, V. Nondiscrete Induction and Iterative Processes; Research Notes in Mathematics; Pitman Advanced Pub. Program: Boston, MA, USA, 1984. [Google Scholar]
  7. Amat, S.; Busquier, S.; Plaza, S. Dynamics of the King and Jarratt iterations. Aequationes Math. 2005, 69, 212–223. [Google Scholar] [CrossRef]
  8. Amat, S.; Busquier, S.; Plaza, S. Chaotic dynamics of a third-order Newton-type method. J. Math. Anal. Appl. 2010, 366, 24–32. [Google Scholar] [CrossRef] [Green Version]
  9. Amat, S.; Hernández, M.A.; Romero, N. A modified Chebyshev’s iterative method with at least sixth order of convergence. Appl. Math. Comput. 2008, 206, 164–174. [Google Scholar] [CrossRef]
  10. Argyros, I.K. Convergence and Application of Newton-Type Iterations; Springer: New York, NY, USA, 2008. [Google Scholar]
  11. Argyros, I.K.; Hilout, S. Computational Methods in Nonlinear Analysis; World Scientific Publishing Company: Hackensack, NJ, USA, 2013. [Google Scholar]
  12. Argyros, I.K.; George, S.; Magrenan, A.A. Local convergence for multi-point-parametric Chebyshev-Halley-type methods of high convergence order. J. Comput. Appl. Math. 2015, 282, 215–224. [Google Scholar] [CrossRef]
  13. Behl, R.; Kanwar, V.; Sharma, K.K. Optimal equi-scaled families of Jarratt’s method. Int. J. Comput. Math. 2013, 290, 408–422. [Google Scholar] [CrossRef]
  14. Ezquerro, J.A.; Hernández, M.A. New iterations of R-order four with reduced computational cost. BIT Numer. Math. 2009, 49, 325–342. [Google Scholar] [CrossRef]
  15. Kelley, C.T. Solving Nonlinear Equations with Newton’s Method; SIAM: Philadelphia, PA, USA, 2003. [Google Scholar]
  16. Wang, X.; Zhang, T.; Qian, W.; Teng, M. Seventh-order derivative-free iterative method for solving nonlinear systems. Numer. Algor. 2015, 70, 545–558. [Google Scholar] [CrossRef]
  17. Traub, J.F. Iterative Methods for the Solution of Equations; Prentice-Hall: Englewood Cliffs, NJ, USA, 1964. [Google Scholar]
  18. Grau-Sánchez, M.; Noguera, M.; Amat, S. On the approximation of derivatives using divided difference operators preserving the local convergence order of iterative methods. J. Comput. Appl. Math. 2013, 237, 363–372. [Google Scholar] [CrossRef]
  19. Grau-Sánchez, M.; Noguera, M.; Diaz-Barrero, J.L. On the local convergence of a family of two-step iterative methods for solving nonlinear equations. J. Comput. Appl. Math. 2014, 255, 753–764. [Google Scholar] [CrossRef]
  20. Steffensen, J.F. Remarks on iteration. Skand. Aktuar. Tidskr 1933, 16, 64–72. [Google Scholar] [CrossRef]
  21. Bhalla, S.; Kumar, S.; Argyros, I.K.; Behl, R. A family of higher order derivative free methods for nonlinear systems with local convergence analysis. Comput. Appl. Math. 2018, 37, 5807–5827. [Google Scholar] [CrossRef]
  22. Aizenshtein, M.; Bartoň, M.; Elber, G. Global solutions of well-constrained transcendental systems using expression trees and a single solution test. Comput. Aided Geom. Des. 2012, 29, 265–279. [Google Scholar] [CrossRef]
  23. Van Sosin, B.; Elber, G. Solving piecewise polynomial constraint systems with decomposition and a subdivision-based solver. Comput. Aided Des. 2017, 90, 37–47. [Google Scholar] [CrossRef]
  24. Bartoň, M. Solving polynomial systems using no-root elimination blending schemes. Comput. Aided Des. 2011, 43, 1870–1878. [Google Scholar] [CrossRef]
  25. Sharma, J.R.; Arora, H. An efficient derivative free iterative method for solving systems of nonlinear equations. Appl. Anal. Discrete Math. 2013, 7, 390–403. [Google Scholar] [CrossRef] [Green Version]
  26. Wang, X.; Zhang, T. A family of Steffensen-type methods with seventh-order convergence. Numer. Algor. 2013, 62, 429–444. [Google Scholar] [CrossRef]
  27. Wolfram, S. The Mathematica Book, 5th ed.; Wolfram Media: Champaign, IL, USA, 2003. [Google Scholar]
  28. Gelfand, I.M. Some problems in the theory of quasi-linear equations. Trans. Amer. Math. Soc. Ser. 1963, 2, 295–381. [Google Scholar]
  29. Jacobsen, J.; Schmitt, K. The Liouville Bratu Gelfand problem for radial operators. J. Differ. Equ. 2002, 184, 283–298. [Google Scholar] [CrossRef] [Green Version]
  30. Jalilian, R. Non-polynomial spline method for solving Bratu’s problem. Comput. Phys. Commun. 2010, 181, 1868–1872. [Google Scholar] [CrossRef]
  31. Wan, Y.Q.; Guo, Q.; Pan, N. Thermo-electro-hydrodynamic model for electrospinning process. Int. J. Nonlinear Sci. Numer. Simul. 2004, 5, 5–8. [Google Scholar] [CrossRef]
  32. Kapania, R.K. A pseudo-spectral solution of 2-parameter Bratu’s equation. Comput. Mech. 1990, 6, 55–63. [Google Scholar] [CrossRef]
  33. Simpson, R.B. A method for the numerical determination of bifurcation states of nonlinear systems of equations. SIAM J. Numer. Anal. 1975, 12, 439–451. [Google Scholar] [CrossRef]
  34. Sauer, T. Numerical Analysis, 2nd ed.; Pearson: London, UK, 2012. [Google Scholar]
Figure 1. Approximated solution of the Bratu problem with C = 3 for t ∈ [0, 1].
Figure 2. Approximated solution of the 2D Bratu problem with C = 0.1, t ∈ [0, 1].
Figure 3. Approximated solution of Fisher's equation, t ∈ [0, 1].
Table 1. Comparison of distinct iterative schemes on Example 2.
Scheme ‖ϑ^(1) − ϑ*‖ ‖ϑ^(2) − ϑ*‖ ‖ϑ^(3) − ϑ*‖ ρ CPU Time (s)
SA1^4 4.8(−2) 1.1(−8) 2.8(−35) 3.997 17.59
SA2^4 1.0(−1) 3.6(−7) 5.4(−29) 3.993 21.36
GM1^4 4.8(−2) 1.1(−8) 3.3(−35) 3.997 15.70
OM1^4 4.8(−2) 1.1(−8) 3.3(−35) 3.997 15.29
OM2^4 4.8(−2) 1.1(−8) 3.3(−35) 3.997 15.92
GM2^6 6.4(−3) 4.4(−18) 4.5(−109) 5.999 15.90
OM3^6 6.4(−3) 4.4(−18) 4.5(−109) 5.999 15.34
OM4^6 6.4(−3) 4.4(−18) 4.5(−109) 5.999 16.15
WZ1^7 6.8(−5) 1.2(−35) 5.7(−251) 7.000 29.50
WZ2^7 5.1(−4) 2.4(−29) 1.4(−206) 6.999 29.56
OM5^8 8.6(−4) 6.3(−31) 5.4(−248) 8.000 15.64
OM6^8 8.6(−4) 6.3(−31) 5.4(−248) 8.000 15.37
OM7^8 8.6(−4) 6.3(−31) 5.4(−248) 8.000 16.20
Table 2. Comparison of distinct iterative schemes on the 2D Bratu problem (Example 3).
Scheme ‖ϑ^(1) − ϑ*‖ ‖ϑ^(2) − ϑ*‖ ‖ϑ^(3) − ϑ*‖ ρ CPU Time (s)
SA1^4 2.8(−10) 2.8(−47) 2.6(−195) 3.999 241.63
SA2^4 2.8(−10) 2.9(−47) 3.1(−195) 3.999 303.07
GM1^4 4.2(−10) 2.1(−46) 1.6(−191) 3.999 224.73
OM1^4 4.2(−10) 2.1(−46) 1.6(−191) 3.999 218.45
OM2^4 4.2(−10) 2.1(−46) 1.6(−191) 3.999 238.30
GM2^6 2.0(−15) 8.2(−102) 5.0(−620) 5.999 217.65
OM3^6 2.0(−15) 8.2(−102) 5.0(−620) 5.999 210.22
OM4^6 2.0(−15) 8.2(−102) 5.0(−620) 5.999 220.96
WZ1^7 1.9(−19) 2.4(−148) 1.1(−1050) 6.999 1343.25
WZ2^7 1.9(−19) 2.6(−148) 1.8(−1050) 6.999 1310.85
OM5^8 9.2(−21) 1.6(−178) 1.2(−1440) 7.999 395.73
OM6^8 9.2(−21) 1.6(−178) 1.2(−1440) 7.999 431.84
OM7^8 9.2(−21) 1.6(−178) 1.2(−1440) 7.999 441.66
Table 3. Comparison of distinct iterative schemes on Fisher's equation (Example 4).
Scheme ‖ϑ^(1) − ϑ*‖ ‖ϑ^(2) − ϑ*‖ ‖ϑ^(3) − ϑ*‖ ρ CPU Time (s)
SA1^4 div div div
SA2^4 div div div
GM1^4 1.2 4.3(−6) 3.5(−28) 4.049 1127.58
OM1^4 1.2 4.3(−6) 3.5(−28) 4.049 1409.30
OM2^4 1.2 4.3(−6) 3.5(−28) 4.049 1405.62
GM2^6 2.5(−1) 9.1(−14) 1.1(−88) 6.024 1376.54
OM3^6 2.5(−1) 9.1(−14) 1.1(−88) 6.024 1383.58
OM4^6 2.5(−1) 9.1(−14) 1.1(−88) 6.024 1189.68
WZ1^7 div div div
WZ2^7 div div div
OM5^8 4.8(−2) 1.6(−24) 2.0(−204) 8.0091 1093.62
OM6^8 4.8(−2) 1.6(−24) 2.0(−204) 8.0091 1356.34
OM7^8 4.8(−2) 1.6(−24) 2.0(−204) 8.0091 1359.12
Table 4. Comparison of distinct iterative schemes on Example 5.
Scheme ‖ϑ^(1) − ϑ*‖ ‖ϑ^(2) − ϑ*‖ ‖ϑ^(3) − ϑ*‖ ρ CPU Time (s)
SA1^4 4.43(−11) 2.06(−44) 9.53(−178) 4.000 28.238
SA2^4 3.41(−13) 1.80(−53) 1.80(−214) 4.000 38.889
GM1^4 4.60(−14) 5.53(−56) 1.16(−223) 4.000 34.314
OM1^4 4.60(−14) 5.53(−56) 1.16(−223) 4.000 35.634
OM2^4 4.60(−14) 5.53(−56) 1.16(−223) 4.000 37.161
GM2^6 6.32(−26) 8.87(−155) 6.81(−928) 6.000 50.954
OM3^6 6.32(−26) 8.87(−155) 6.81(−928) 6.000 52.323
OM4^6 6.32(−26) 8.87(−155) 6.81(−928) 6.000 53.823
WZ1^7 1.52(−34) 5.23(−242) 3.06(−1694) 7.000 72.207
WZ2^7 1.39(−37) 7.11(−264) 6.53(−1848) 7.000 163.046
OM5^8 2.93(−41) 8.70(−329) 5.21(−2629) 8.000 57.366
OM6^8 2.93(−41) 8.70(−329) 5.21(−2629) 8.000 57.079
OM7^8 2.93(−41) 8.70(−329) 5.21(−2629) 8.000 59.043
