Article

Comparison Methods for Solving Non-Linear Sturm–Liouville Eigenvalues Problems

Department of Mathematics and Statistics, Jordan University of Science and Technology, Irbid 22110, Jordan
*
Author to whom correspondence should be addressed.
Symmetry 2020, 12(7), 1179; https://doi.org/10.3390/sym12071179
Submission received: 6 June 2020 / Revised: 8 July 2020 / Accepted: 11 July 2020 / Published: 16 July 2020
(This article belongs to the Special Issue Iterative Numerical Functional Analysis with Applications)

Abstract

In this paper, we present a comparative study between the Sinc–Galerkin method and modified versions of the variational iteration method (VIM) for solving non-linear Sturm–Liouville eigenvalue problems. In the Sinc method, the problem under consideration is converted from a non-linear differential equation into a non-linear system of algebraic equations, which we solve with iterative techniques such as Newton's method. The other method under consideration is the VIM, which has been modified in two ways: through the use of the Laplace transform, and by replacing the non-linear term in the integral equation resulting from the well-known VIM with Adomian's polynomials. To explain the advantages of each method over the other, several problems have been studied, including one with an application in the field of spectral theory. The results for these problems, which are included in tables, show that the improved VIM performs better than the Sinc method, while the Sinc method offers some advantages over the VIM when dealing with singular problems.

1. Introduction

The problem under consideration in this study is of great importance in mathematics and physics. Many physical issues require an evaluation of the eigenvalues as well as finding the corresponding eigenfunctions to understand the physical interpretation, especially when dealing with vibrations and waves. A general non-linear Sturm–Liouville problem (NSLP) can be written as the following differential equation for a non-linear function F:
L(y) = \frac{d}{dx}\left[ p(x) \frac{dy(x)}{dx} \right] + \left[ \lambda r(x) - q(x) \right] F(y(x)) = f(x), \quad x \in (0,1)
subject to the boundary conditions
y ( 0 ) = 0 , y ( 1 ) = 0 ,
where the given functions p(x) > 0, r(x) > 0, q(x) \ge 0 and f(x) are assumed to be analytic on their domain (0,1). The constants that appear in the above equation are optional. Solving problem (1) with the boundary conditions (2), we obtain non-zero values of the parameter \lambda, called eigenvalues; they are infinite in number and characterized by many properties: they are real, increasing, and simple. For each eigenvalue there is a corresponding solution, say y(x), known as an eigenfunction; see [1].
Many previous studies have discussed solutions to problem (1), especially those with applications in physics and engineering. When starting an applied study, such as vibration and stability of deformable bodies, it is required to find the eigenvalues first. Engineers are interested in the location of the smallest eigenvalue, as this gives potentially the most visible structure of dynamical systems. The eigenvalues are also crucial in finding the stability region of solutions of the NSLP [2]. In [3], the collocation method of the weighted residual methods is investigated for the approximate computation of higher-order SLPs. Shannon sampling theory has been used to compute the eigenvalues of regular SLPs [1]. Asymptotic formulas for eigenvalues associated with Hill's equation have been studied in [4]. In [5], the Sinc–Galerkin method was used to approximate solutions of non-linear problems involving non-linear second-, fourth-, and sixth-order differential equations. The Sinc–Galerkin method was also used in [6,7] to solve two-point boundary value problems with applications to chemical reactor theory. The authors of [8,9] compared the performance of Sinc–Galerkin methods using Sinc bases for solving linear and non-linear second-order two-point boundary value problems. The VIM was discovered and developed by He [10,11,12]. Several studies dealt with modifications of the method [13,14], and several authors used the method to solve difficult problems with physical applications [15,16,17]. In [18], eigenvalues and eigenfunctions of singular two-interval Sturm–Liouville problems are computed, while in [19], fourth-order linear differential equations are solved. A fractional Sturm–Liouville problem based on the operational matrix method was presented in [20]. Some existence results for non-linear third-order integro-multi-point boundary value problems are discussed in [21].
In this paper, we introduce two methods for solving (1), (2): the Sinc–Galerkin method and the variational iteration method. Stenger [22] originally proposed the numerical solution of ordinary differential equations with the Sinc–Galerkin method. Excellent expositions of the use of Sinc functions to approximate differential equations may be found in [22,23,24]. A basis element may be transformed to any connected subset of the real line via composition with a suitable conformal map. In conjunction with the Galerkin method for differential equations, perhaps the most distinctive feature of the basis is its resulting exponential convergence rate of the error, O(\exp(-c\sqrt{N})), where c > 0 and where 2N+1 basis functions are used to build the approximation. Moreover, the convergence rate is maintained when the solution of the differential equation has boundary singularities. Of equal practical significance is that the technique's implementation requires no modification in the presence of singularities. Specifically, the statement of the quadrature, the mesh definition and the resulting matrix structure depend only on the parameters of the differential equation, whether it is singular or non-singular.
This paper is organized as follows. The Sinc solution together with the Galerkin method and the development of the scheme is treated in Section 2. In Section 3, we formulate an iterative procedure and provide modifications of the VIM to solve (1), (2). Numerical examples which demonstrate the convergence of the Sinc method and compare its performance with the VIM are introduced in the last section.

2. Sinc Function Approximation

In this part, we present an overview of some facts and concepts needed for the study of the Sinc function, and use it to solve problem (1); these concepts are taken from the books of Stenger and Lund [22,23].

2.1. Sinc Function Properties

The Sinc function is defined on \mathbb{R}, and because the interval on which the problem is considered is (0,1), we need to adapt the Sinc function to this interval through the use of a conformal mapping. The Sinc function (known as the band-limited function) is defined on the whole real line by
\operatorname{sinc}(x) = \begin{cases} \dfrac{\sin(\pi x)}{\pi x}, & x \neq 0 \\ 1, & x = 0. \end{cases}
We denote the equal spacing between points by h > 0, and we define the translated Sinc functions as
S(k,h)(x) = \operatorname{sinc}\left( \frac{x - kh}{h} \right) = \begin{cases} \dfrac{\sin\left( \frac{\pi}{h}(x - kh) \right)}{\frac{\pi}{h}(x - kh)}, & x \neq kh \\ 1, & x = kh, \end{cases} \qquad k = 0, \pm 1, \pm 2, \ldots
If the function f is defined on all real numbers and h > 0, then the following series, when it converges, is called the Whittaker cardinal series of f:
f(x) = C(f,h)(x) = \sum_{k=-\infty}^{\infty} f(kh)\, S(k,h)(x).
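As an illustration (not part of the original paper's code), the truncated cardinal series above can be evaluated directly with NumPy, whose `np.sinc(t)` is exactly \sin(\pi t)/(\pi t); the test function and parameters here are assumptions chosen for the sketch.

```python
import numpy as np

def sinc_interp(f, h, N, x):
    """Truncated Whittaker cardinal series: sum_{k=-N}^{N} f(kh) S(k,h)(x).
    np.sinc(t) = sin(pi*t)/(pi*t), so S(k,h)(x) = np.sinc((x - k*h)/h)."""
    total = 0.0
    for k in range(-N, N + 1):
        total += f(k * h) * np.sinc((x - k * h) / h)
    return total

# Off-grid evaluation of a rapidly decaying test function.
approx = sinc_interp(lambda t: np.exp(-t**2), h=0.5, N=20, x=0.3)
```

At a grid point x = kh the series reproduces f(kh) exactly, since S(k,h) vanishes at all other nodes.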
Definition 1.
For d > 0 we define the infinite open strip
D_d = \{ z = \xi + i\eta \in \mathbb{C} : |\eta| < d \}.
Definition 2.
Let D \subset \mathbb{C} be a simply connected domain with boundary \partial D, with a, b denoting two distinct points on \partial D. Let \phi be a conformal map of D onto D_d with \psi as its inverse map. Let \Gamma = \{ z \in \mathbb{C} : z = \psi(u), u \in \mathbb{R} \}. For given \phi, \psi and a constant h, set z_k = \psi(kh) for k = 0, \pm 1, \pm 2, \ldots
It is well known that the Sinc approximation has exponentially small error; to achieve this, the function under approximation has to satisfy a certain decay condition, so we introduce the following class of functions, denoted by B(D). For more details about the terms appearing in the next definitions, readers are referred to [22].
Definition 3.
Let B(D) be the class of complex-valued functions f defined on a path-connected domain D such that
\int_{\psi(t+L)} |f(w)\, dw| = O(|t|^a), \quad t \to \pm\infty,
where a \in [0,1) and L = \{ iy : |y| < d \}; and such that
N(f; D) = \lim_{C \to \partial D} \int_C |f(w)\, dw| < \infty,
where C is a simple closed curve in D.
The notion of exponential decay is defined below.
Definition 4.
For f \in B(D), the ratio f/\phi' is said to decay exponentially with respect to \phi if there exist positive constants K and \alpha such that
\left| \frac{f(t)}{\phi'(t)} \right| \le K \exp\left( -\alpha |\phi(t)| \right), \quad t \in \Gamma.
We will use integrals to calculate the residuals of functions that belong to the class B(D). The proof of the following theorem can be found in [22].
Theorem 1.
Suppose f belongs to the class B(D). Then we have
\left| \int_{\Gamma} f(z)\, dz - h \sum_{k=-\infty}^{\infty} \frac{f(z_k)}{\phi'(z_k)} \right| \le \frac{N(f;D)}{1 - e^{-2\pi d/h}}\, e^{-2\pi d/h}.
If f/\phi' decays exponentially with respect to \phi and belongs to B(D), then for h = \sqrt{2\pi d/(\alpha N)} we get
\left| \int_{\Gamma} f(z)\, dz - h \sum_{k=-N}^{N} \frac{f(z_k)}{\phi'(z_k)} \right| \le K e^{-\sqrt{2\pi d \alpha N}}.

2.2. The General Sinc–Galerkin Method

We will describe the general Galerkin method that is typically used to approximate the solution of an operator equation as a linear combination of the elements of a given linearly independent system. To solve the equation
L(y) = f, \quad y(a) = y(b) = 0,
we assume a suitable Hilbert space H with y \in H. Let \{\chi_i\} be a dense linearly independent set in H and let H_N = \mathrm{span}\{\chi_1, \ldots, \chi_N\}. For the solution y \in H we define the approximation y_N \in H_N as a linear combination
y N ( x ) = j = 1 N y j χ j ( x )
with \chi_j = S(j,h) \circ \phi(x). The coefficients \{y_j\} are to be determined. Galerkin's idea is to require that the residual
R N ( x ) = L ( y N ) f ( x ) ,
is orthogonal to each \chi_j \in H_N with respect to the inner product in H:
\langle R_N, \chi_j \rangle = 0, \quad 1 \le j \le N.
In matrix form:
\sum_{k=1}^{N} \langle L\chi_k, \chi_j \rangle\, y_k = \langle f, \chi_j \rangle, \quad 1 \le j \le N.
These equations determine all \{y_k\}. Taking the \chi_i to be Dirac delta functions, we see that Sinc collocation methods are a special case of Sinc–Galerkin methods. Galerkin's method is a powerful tool not only for finding approximate solutions, but also for proving existence theorems for solutions of linear and non-linear equations, especially in problems involving partial differential equations.

2.3. The Sinc Methodology

To solve the differential equation (1) using the Sinc methodology for all x in (0,1), and since the Sinc function is defined and built on \mathbb{R}, we have to adapt the Sinc basis via a conformal mapping. Suppose that the function f \in C(\mathbb{R}) is approximable on \mathbb{R}, and that it satisfies certain smoothness and boundedness conditions on a domain D that contains \mathbb{R}. In this manner, we are dealing with the "eye-shaped" region (see Figure 1.7, p. 68, of [22]) that contains our domain (0,1). Therefore, to start our approximation on the domain (0,1), we define the eye-shaped domain D_E in the z-plane:
D_E = \left\{ z = x + iy \in \mathbb{C} : \left| \arg\left( \frac{z}{1-z} \right) \right| < d \le \frac{\pi}{2} \right\}.
The eye-shaped domain D E is mapped conformally onto the infinite strip D d via
\phi(z) = \ln\left( \frac{z}{1-z} \right).
The conformal map \phi(z) maps D_E onto D_d, which is appropriate for using the Sinc methodology to approximate the solution of (1) with boundary conditions (2). Accordingly, we construct the new basis functions defined on (0,1) as the composition of two functions:
S_j(z) = S(j,h) \circ \phi(z) = \operatorname{sinc}\left( \frac{\phi(z) - jh}{h} \right), \quad z \in D_E.
To go back to the original domain, we use the inverse of u = \phi(z):
z = \phi^{-1}(u) = \psi(u) = \frac{e^u}{1 + e^u}.
For the equally spaced interpolation points \{x_j\}_{j=-\infty}^{\infty}, we define the inverse image of the real line \mathbb{R} under \phi as
\Gamma = \{ \psi(t) \in D_E : -\infty < t < \infty \} = (0,1),
with
x_j = \psi(jh) = \frac{e^{jh}}{1 + e^{jh}}, \quad j = 0, \pm 1, \pm 2, \ldots,
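The maps \phi and \psi and the resulting node set can be sketched numerically (an illustration, not the paper's code; the parameter values d = \pi/2, \alpha = 1/2 and N = 10 are taken as assumptions matching the numerical section):

```python
import numpy as np

# phi(z) = ln(z/(1-z)) carries (0,1) onto the real line; its inverse
# psi(u) = e^u / (1 + e^u) carries the uniform Sinc grid {jh} back into (0,1).
phi = lambda z: np.log(z / (1 - z))
psi = lambda u: np.exp(u) / (1 + np.exp(u))

N, d, alpha = 10, np.pi / 2, 0.5
h = np.sqrt(np.pi * d / (alpha * N))   # step size balancing the error terms
j = np.arange(-N, N + 1)
nodes = psi(j * h)                     # Sinc nodes, clustered near both endpoints
```

The clustering of the nodes near 0 and 1 is what lets the method cope with boundary singularities without modification.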
respectively. The discrete system for Equation (1) is obtained by rewriting it as
y''(x) + \tau(x)\, y'(x) + \lambda R(x)\, y(x) + Q(x)\, F(y(x)) = K(x),
where \tau(x) = p'(x)/p(x), R(x) = r(x)/p(x), Q(x) = -q(x)/p(x), and K(x) = f(x)/p(x). In order to establish the discrete system for (9), we replace y(x) and its derivatives by
y_N(x) = \sum_{j=-N}^{N} y_j\, S_j \circ \phi(x).
The y_j are constant coefficients, to be specified later. Recall that
R_N = L(y_N) - K,
and define the inner product \langle \cdot, \cdot \rangle by
\langle f, g \rangle = \int_0^1 f(x)\, g(x)\, w(x)\, dx.
Here w(x) is a weight function, chosen differently for different cases. Although other reasonable choices exist, the choice made here is a consequence of the requirement that the boundary terms vanish at the endpoints. It is appropriate to pick w(x) = 1/\phi'(x). For a thorough discussion of the choices of weight functions, we refer the reader to [22,23]. Orthogonalizing the residual with respect to
S = \left( S_{-N} \circ \phi,\; S_{-N+1} \circ \phi,\; \ldots,\; S_N \circ \phi \right) \quad \text{and} \quad S_w = \left( (S_{-N} \circ \phi)\, w,\; (S_{-N+1} \circ \phi)\, w,\; \ldots,\; (S_N \circ \phi)\, w \right)
yields the framework
\langle y_N'', S \rangle + \langle \tau\, y_N', S \rangle + \lambda \langle R\, y_N, S \rangle + \langle Q\, F(y_N), S \rangle = \langle K, S \rangle.
We now replace y_N by y. Following the Sinc–Galerkin procedure, the first term in Equation (14) is integrated by parts twice and the second term once. Imposing the condition that y vanishes at both ends of the interval, y(0) = 0 = y(1), Equation (14) reduces to
\int_0^1 y(x)\, [S w]''(x)\, dx - \int_0^1 y(x)\, [\tau S w]'(x)\, dx + \lambda \int_0^1 y(x)\, [R S w](x)\, dx + \int_0^1 F(y(x))\, [Q S w](x)\, dx = \int_0^1 K(x)\, [S w](x)\, dx.
To build an approximate solution by means of the Sinc–Galerkin method, we must evaluate the integrals in (15) to obtain a system to be solved. The standard Sinc quadrature formula will be used; for details of the quadrature formula and the conditions governing its error bounds, see [22]. For a function G(x) defined on (0,1) that fulfills the hypotheses of the quadrature formula, with x_k = \phi^{-1}(kh), we arrive at
\left| \int_0^1 G(x)\, dx - h \sum_{k=-N}^{N} \frac{G(x_k)}{\phi'(x_k)} \right| = O\left( e^{-\kappa \sqrt{N}} \right).
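The quadrature rule above is easy to try out numerically. A minimal sketch (not the paper's code), with 1/\phi'(x) = x(1-x) for the map \phi(z) = \ln(z/(1-z)), and d = \pi/2, \alpha = 1/2 assumed as in Section 4; note that the endpoint singularity of x^{-1/2} needs no special treatment:

```python
import numpy as np

# Sinc quadrature on (0,1): int_0^1 G(x) dx ~ h * sum_k G(x_k)/phi'(x_k),
# with phi(z) = ln(z/(1-z)), so 1/phi'(x) = x(1 - x).
def sinc_quadrature(G, N, d=np.pi / 2, alpha=0.5):
    h = np.sqrt(np.pi * d / (alpha * N))
    k = np.arange(-N, N + 1)
    x = np.exp(k * h) / (1 + np.exp(k * h))    # nodes x_k = psi(kh)
    return h * np.sum(G(x) * x * (1 - x))      # weights 1/phi'(x_k)

# Singular integrand x^{-1/2}; the exact integral is 2.
approx = sinc_quadrature(lambda x: 1.0 / np.sqrt(x), N=30)
```

The singular case converges at the same exponential rate as a smooth integrand, which is precisely the robustness property emphasized in the Introduction.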
The Sinc–Galerkin method requires the derivatives of the composite Sinc functions evaluated at the nodes. For a conformal map \phi of D_E onto D_d, we use the following notation:
\delta_{i,j}^{(0)} \equiv \left[ S_i \circ \phi(x) \right]\Big|_{x=x_j} = \begin{cases} 1, & i = j \\ 0, & i \neq j, \end{cases}
\delta_{i,j}^{(1)} \equiv h\, \frac{d}{d\phi} \left[ S_i \circ \phi(x) \right]\Big|_{x=x_j} = \begin{cases} 0, & i = j \\ \dfrac{(-1)^{j-i}}{j-i}, & i \neq j, \end{cases}
and
\delta_{i,j}^{(2)} \equiv h^2\, \frac{d^2}{d\phi^2} \left[ S_i \circ \phi(x) \right]\Big|_{x=x_j} = \begin{cases} -\dfrac{\pi^2}{3}, & i = j \\ \dfrac{-2(-1)^{j-i}}{(j-i)^2}, & i \neq j. \end{cases}
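These entries depend only on j - i, so the matrices they generate are Toeplitz. A small sketch (an illustration, not the paper's code) builds them directly from the formulas above:

```python
import numpy as np

def delta_matrices(N):
    """Toeplitz matrices I1[i,j] = delta^{(1)}_{i,j}, I2[i,j] = delta^{(2)}_{i,j}
    for i, j = -N..N (index-shifted to 0..2N)."""
    n = 2 * N + 1
    I1 = np.zeros((n, n))
    I2 = -np.pi**2 / 3 * np.eye(n)          # diagonal entries: -pi^2/3
    for i in range(n):
        for j in range(n):
            if i != j:
                k = j - i
                I1[i, j] = (-1.0) ** k / k
                I2[i, j] = -2.0 * (-1.0) ** k / k**2
    return I1, I2

I1, I2 = delta_matrices(5)
```

The skew-symmetry of I^{(1)} and symmetry of I^{(2)} carry over to the assembled discrete system.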
Thus, the terms in the i-th equation of (15) are approximated by
\int_0^1 y(x)\, [S_i \circ \phi\, w]''(x)\, dx \approx h \sum_{j=-N}^{N} \left[ \frac{1}{h^2} \delta_{i,j}^{(2)}\, \phi'(x_j)\, w(x_j) + \frac{1}{h} \delta_{i,j}^{(1)} \left( \frac{\phi''(x_j)}{\phi'(x_j)}\, w(x_j) + 2 w'(x_j) \right) + \delta_{i,j}^{(0)}\, \frac{w''(x_j)}{\phi'(x_j)} \right] y(x_j),
the second integral in Equation (15) can be written as
\int_0^1 y(x)\, [\tau\, S_i \circ \phi\, w]'(x)\, dx \approx h \sum_{j=-N}^{N} \left[ \frac{1}{h} \delta_{i,j}^{(1)}\, \tau(x_j)\, w(x_j) + \delta_{i,j}^{(0)}\, \frac{(\tau w)'(x_j)}{\phi'(x_j)} \right] y(x_j),
while the third integral in Equation (15) becomes
\lambda \int_0^1 y(x)\, [R\, S_i \circ \phi\, w](x)\, dx \approx h \lambda \sum_{j=-N}^{N} \delta_{i,j}^{(0)}\, \frac{R(x_j)\, w(x_j)}{\phi'(x_j)}\, y(x_j).
Finally, the last integral on the left-hand side of Equation (15) can be approximated by the finite sum
\int_0^1 F(y(x))\, [Q\, S_i \circ \phi\, w](x)\, dx \approx h \sum_{j=-N}^{N} \delta_{i,j}^{(0)}\, \frac{Q(x_j)\, w(x_j)}{\phi'(x_j)}\, F(y(x_j)),
while the integral on the right-hand side of Equation (15) has the form
\int_0^1 K(x)\, [S_i \circ \phi\, w](x)\, dx \approx h \sum_{j=-N}^{N} \delta_{i,j}^{(0)}\, \frac{w(x_j)}{\phi'(x_j)}\, K(x_j).
The following notation will be needed to construct the system. Define the (2N+1) \times (2N+1) matrices
I^{(p)} = \left[ \delta_{i,j}^{(p)} \right], \quad p = 1, 2.
For instance, the (i,j)-th component of the matrix I^{(2)} is given in (18). Denote D(y) = \mathrm{diag}[y(z_{-N}), \ldots, y(z_N)]; the superscript "T" below indicates the transpose of a matrix. The discrete Sinc–Galerkin system corresponding to (15) can then be written in the convenient form
\left[ U + V + \lambda\, D(R)\, D\!\left( \frac{w}{\phi'} \right) \right] Y_{2N+1} + D(Q)\, D\!\left( \frac{w}{\phi'} \right) F_{2N+1} = D\!\left( \frac{w}{\phi'} \right) K_{2N+1}.
The matrix U represents the Sinc approximation of the second derivative of y, written as
U = \frac{1}{h^2}\, I^{(2)}\, D(\phi' w) + \frac{1}{h}\, I^{(1)}\, D\!\left( \frac{\phi'' w}{\phi'} + 2 w' \right) + D\!\left( \frac{w''}{\phi'} \right),
while
V = \frac{1}{h}\, I^{(1)}\, D(\tau w) + D\!\left( \frac{w}{\phi'} \right) D(\tau') + D\!\left( \frac{w'}{\phi'} \right) D(\tau),
F_{2N+1} = \left[ F(y_{-N}),\; F(y_{-N+1}),\; \ldots,\; F(y_N) \right]^T, \qquad K_{2N+1} = \left[ K(x_{-N}),\; K(x_{-N+1}),\; \ldots,\; K(x_N) \right]^T.
With this in view, we have reached the following fact.
Theorem 2.
The discrete solution of the non-linear SLP via the Sinc–Galerkin method, for locating the constants \{y_j\}, j = -N, \ldots, N, is given by
\left[ U + V + \lambda\, D(R)\, D\!\left( \frac{w}{\phi'} \right) \right] Y_{2N+1} + D(Q)\, D\!\left( \frac{w}{\phi'} \right) F_{2N+1} = D\!\left( \frac{w}{\phi'} \right) K_{2N+1}.
In conclusion, we end up with a non-linear system of (2N+1) equations in the (2N+1) unknown constants \{y_j\}_{j=-N}^{N}. The non-linear system can be solved via Newton's method. The values found for \{y_j\}_{j=-N}^{N} give rise to the approximate Sinc solution for y(x).

2.4. Newton’s Method

Here we rewrite the system of non-linear Equations (20) in the form
M(y) = \begin{bmatrix} M_{-N}(y_{-N}, y_{-N+1}, \ldots, y_N) \\ M_{-N+1}(y_{-N}, y_{-N+1}, \ldots, y_N) \\ \vdots \\ M_N(y_{-N}, y_{-N+1}, \ldots, y_N) \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ \vdots \\ 0 \end{bmatrix}.
An appropriate method for solving (24) is Newton's method, due to the non-linear terms appearing in the system. Newton's method starts with an initial guess for the unknown vector at iteration i; let y^{(i)} be this guess, and let M^{(i)} denote the value of M evaluated at the i-th iteration. If the norm of M^{(i)} is not small enough, we look for an update vector \Delta y^{(i)}, with y^{(i+1)} = y^{(i)} + \Delta y^{(i)}, which can be written in components as
\begin{bmatrix} y_{-N}^{(i+1)} \\ y_{-N+1}^{(i+1)} \\ \vdots \\ y_N^{(i+1)} \end{bmatrix} = \begin{bmatrix} y_{-N}^{(i)} \\ y_{-N+1}^{(i)} \\ \vdots \\ y_N^{(i)} \end{bmatrix} + \begin{bmatrix} \Delta y_{-N}^{(i)} \\ \Delta y_{-N+1}^{(i)} \\ \vdots \\ \Delta y_N^{(i)} \end{bmatrix},
with the goal of reaching M(y^{(i+1)}) = 0. Taylor's theorem for functions M : \mathbb{R}^{2N+1} \to \mathbb{R}^{2N+1} can be used to approximate M(y) close to y^{(i)} as
M(y^{(i)} + \Delta y^{(i)}) = M(y^{(i)}) + M'(y^{(i)})\, \Delta y^{(i)} + O\left( \| \Delta y^{(i)} \|^2 \right),
where M'(y^{(i)}) denotes the Jacobian matrix given by
M'(y) = \begin{bmatrix} \frac{\partial M_{-N}}{\partial y_{-N}}(y) & \frac{\partial M_{-N}}{\partial y_{-N+1}}(y) & \cdots & \frac{\partial M_{-N}}{\partial y_N}(y) \\ \frac{\partial M_{-N+1}}{\partial y_{-N}}(y) & \frac{\partial M_{-N+1}}{\partial y_{-N+1}}(y) & \cdots & \frac{\partial M_{-N+1}}{\partial y_N}(y) \\ \vdots & \vdots & \ddots & \vdots \\ \frac{\partial M_N}{\partial y_{-N}}(y) & \frac{\partial M_N}{\partial y_{-N+1}}(y) & \cdots & \frac{\partial M_N}{\partial y_N}(y) \end{bmatrix}.
Ignoring the higher-order terms and evaluating the Jacobian M'(y) at y^{(i)}, we arrive at
M(y^{(i)} + \Delta y^{(i)}) \approx M(y^{(i)}) + M'(y^{(i)})\, \Delta y^{(i)} = 0,
or
M'(y^{(i)})\, \Delta y^{(i)} = -M(y^{(i)}).
The equation above is a system of 2N+1 linear equations in the 2N+1 unknowns \Delta y^{(i)}. We stop Newton's iteration whenever \| y^{(i+1)} - y^{(i)} \| \le \epsilon for a given \epsilon. We also mention that other iterative techniques, such as the secant method and quasi-Newton methods, can be used to solve the non-linear system (20).
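The iteration just described can be sketched in a few lines (an illustration, not the paper's implementation; the forward-difference Jacobian and the toy system are assumptions made for the example):

```python
import numpy as np

def newton_system(M, y0, eps=1e-7, max_iter=50):
    """Newton's method for M(y) = 0 in R^n: solve M'(y) dy = -M(y) with a
    forward-difference Jacobian, update y <- y + dy, stop when ||dy|| <= eps."""
    y = np.array(y0, dtype=float)
    n = y.size
    for _ in range(max_iter):
        My = M(y)
        J = np.empty((n, n))
        for j in range(n):                   # column j of the numerical Jacobian
            e = np.zeros(n)
            e[j] = 1e-8
            J[:, j] = (M(y + e) - My) / 1e-8
        dy = np.linalg.solve(J, -My)
        y = y + dy
        if np.linalg.norm(dy) <= eps:        # stopping rule from the text
            return y
    return y

# Toy system: x^2 + y^2 = 2 and x = y, whose root near (2, 0.5) is (1, 1).
root = newton_system(lambda v: np.array([v[0]**2 + v[1]**2 - 2.0, v[0] - v[1]]),
                     [2.0, 0.5])
```

For the actual Sinc–Galerkin system, M would be assembled from (24) and an analytic Jacobian would typically replace the finite-difference one.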

3. The Variational Iteration Method

The VIM, proposed by Ji-Huan He [10,11], is an analytical method based on Lagrange multipliers. It is a powerful, simple, and effective method for solving large classes of linear and non-linear problems. For linear problems, the exact solution can be obtained in a single iteration step because the Lagrange multiplier can be identified exactly. Unlike traditional methods, it requires no discretization and no perturbation.

3.1. Analysis of the VIM

To illustrate the basic concepts of He’s VIM, we consider the system
T y(x) = g(x), \quad x \in I.
Here T is a differential operator that acts on the function y defined on an interval I \subset \mathbb{R}, and g is a known analytic function defined for all x \in \mathbb{R}.
The VIM is based on dividing T into linear and non-linear operators:
L y ( x ) + N y ( x ) = g ( x ) .
Here L is a linear differential operator, N is a non-linear differential operator, and g(x) is the inhomogeneous term of the equation. The correction functional for Equation (26) can be constructed as follows:
y_{n+1}(x) = y_n(x) + \int_0^x \mu(s) \left\{ L y_n(s) + N \tilde{y}_n(s) - g(s) \right\} ds.
Here \mu is the Lagrange multiplier, which can be identified optimally via variational theory, y_n is the n-th order approximate solution, and \tilde{y}_n denotes a restricted variation, i.e., \delta \tilde{y}_n = 0. The successive approximations y_n, n \ge 1, are readily obtained using the determined \mu and any selective function y_0. Consequently, the solution is given by
y ( x ) = lim n y n ( x ) ,
where y_n has a limit as n \to \infty. The convergence proof for the series solution can be found in [10]. To increase the convergence rate of the truncated series solution derived from the VIM, we apply a simple modification, the Laplace VIM (LVIM), through the following steps (see [25]):
  • apply the Laplace transform to the truncated series obtained by the VIM;
  • approximate the result of the previous step using the Padé approximant;
  • apply the inverse Laplace transform to the output of the previous step.
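The three steps above can be sketched numerically. This is only an illustration, not the paper's computation: the target function y(x) = 1/(1+x) and the [4/4] Padé order are assumptions chosen so the Padé step is well posed, and SciPy's `pade` routine is used with an explicit partial-fraction inversion of the resulting rational transform.

```python
import numpy as np
from math import factorial
from scipy.interpolate import pade

# Truncated VIM-style series for y(x) = 1/(1+x): sum_k (-1)^k x^k, k = 0..7.
a = [(-1.0) ** k for k in range(8)]

# Step 1: termwise Laplace transform, L[x^k](s) = k!/s^{k+1}; substituting
# s = 1/t gives a power series in t with coefficient a_k * k! on t^{k+1}.
c = [0.0] + [a[k] * factorial(k) for k in range(8)]

# Step 2: [4/4] Pade approximant in t (p, q returned as numpy poly1d objects).
p, q = pade(c, 4)

# Step 3: back-substitute t = 1/s. With P(t) = sum p_k t^k, the transform is
# F(s) = P(1/s)/Q(1/s) = N(s)/D(s), where N(s) = s^4 P(1/s), D(s) = s^4 Q(1/s);
# the ascending coefficients of P, Q are the descending coefficients of N, D.
asc = lambda pol: np.pad(pol.c[::-1], (0, 5 - len(pol.c)))
Nc, Dc = asc(p), asc(q)
poles = np.roots(Dc)
resid = np.polyval(Nc, poles) / np.polyval(np.polyder(Dc), poles)

# Inverse Laplace transform of the partial fractions: sum of exponentials.
y = lambda x: np.real(np.sum(resid * np.exp(poles * x)))
y1 = y(1.0)   # approximates y(1) = 1/(1 + 1) = 0.5
```

Even though the underlying series only converges for |x| < 1, the Laplace–Padé resummation produces a usable approximation beyond that radius, which is the point of the LVIM acceleration.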
In the next subsection, we introduce our modification of the VIM, where the main idea is to combine the Laplace transform and Adomian polynomials with the original VIM. For more details, see [25].

3.2. Adomian–Variational Iteration Method (AVIM)

Considering the non-linear operator F ( y ( x ) ) , the first few Adomian polynomials are given by
A_0 = F(y_0), \quad A_1 = y_1 F'(y_0), \quad A_2 = y_2 F'(y_0) + \frac{1}{2!} y_1^2 F''(y_0), \quad A_3 = y_3 F'(y_0) + y_1 y_2 F''(y_0) + \frac{1}{3!} y_1^3 F'''(y_0).
The basic idea of the VIM linked with Adomian polynomials is to approximate the general differential Equation (1). The main idea of our modified method is to construct the correction functional via the VIM, with the non-linear term N y(x) replaced by the Adomian polynomials ([26]):
N y(x) = \sum_{k=0}^{\infty} A_k(x),
where the first few Adomian polynomials for a non-linear function F is given as above. Therefore, our approximate solution can be written in the form
y_{n+1}(x) = y_n(x) + \int_0^x \mu(s) \left\{ L y_n(s) + \sum_{k=0}^{n} A_k(s) - g(s) \right\} ds.
Choosing the initial approximation y_0 arbitrarily, our first approximate solution has the form
y_1(x) = y_0(x) + \int_0^x \mu(s) \left\{ L y_0(s) + A_0(s) - g(s) \right\} ds,
and
y_2(x) = y_1(x) + \int_0^x \mu(s) \left\{ L y_1(s) + \left( A_0(s) + A_1(s) \right) - g(s) \right\} ds.
Continuing in this way, we may compute as many terms as needed to reach a sufficiently good approximation. Some examples will be given to illustrate the idea. For more details on using Adomian's techniques for finding eigenvalues, see [27].
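The Adomian polynomials can be generated symbolically from the standard formula A_k = \frac{1}{k!} \frac{d^k}{d\lambda^k} F\!\left( \sum_i y_i \lambda^i \right)\big|_{\lambda=0}. A short SymPy sketch (an illustration, not the paper's code):

```python
import sympy as sp

def adomian_polynomials(F, n):
    """Generate A_0..A_n via A_k = (1/k!) d^k/d(lam)^k F(sum_i y_i lam^i)
    evaluated at lam = 0."""
    lam = sp.Symbol('lam')
    ys = sp.symbols(f'y0:{n + 1}')
    expansion = F(sum(y_i * lam**i for i, y_i in enumerate(ys)))
    return [sp.expand(sp.diff(expansion, lam, k).subs(lam, 0) / sp.factorial(k))
            for k in range(n + 1)]

# For the cubic non-linearity F(y) = y^3 used later in Example 1:
A = adomian_polynomials(lambda u: u**3, 3)
```

For F(y) = y^3 this reproduces A_0 = y_0^3, A_1 = 3 y_0^2 y_1, and A_2 = 3 y_0^2 y_2 + 3 y_0 y_1^2, matching the closed forms listed above.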

3.3. Lagrange Multiplier for Special Kind of Equations

Consider the following ODE
y''(x) + \frac{h'(x)}{h(x)}\, y'(x) + f(x, y(x)) = g(x), \quad y(0) = A, \quad y(1) = B,
where f(x, y(x)) and g(x) are continuous real-valued functions and h(x) is a continuous, differentiable function with h(x) \neq 0. The Bratu, Emden–Fowler, Lane–Emden, Poisson–Boltzmann, Lagerstrom, and many other equations are special cases of (33). For solving Equation (33) by the VIM, we construct the correction functional as follows:
y_{n+1}(x) = y_n(x) + \int_0^x \lambda(x,s) \left\{ y_n''(s) + \frac{h'(s)}{h(s)}\, y_n'(s) + f(s, y_n(s)) - g(s) \right\} ds.
Making the above correction functional stationary with respect to y_n, and noticing that \delta y_n(0) = 0, yields
\delta y_{n+1}(x) = \delta y_n(x) + \delta \int_0^x \lambda(x,s) \left\{ y_n''(s) + \frac{h'(s)}{h(s)}\, y_n'(s) + f(s, y_n(s)) - g(s) \right\} ds = \left[ 1 + \lambda(x,s) \frac{h'(s)}{h(s)} - \frac{\partial \lambda(x,s)}{\partial s} \right] \delta y_n(s) \Big|_{s=x} + \lambda(x,s)\, \delta y_n'(s) \Big|_{s=x} + \int_0^x \left[ \frac{\partial^2 \lambda(x,s)}{\partial s^2} - \frac{\partial}{\partial s} \left( \lambda(x,s) \frac{h'(s)}{h(s)} \right) \right] \delta y_n(s)\, ds = 0.
Therefore, the following stationary conditions are obtained:
\frac{\partial^2 \lambda(x,s)}{\partial s^2} - \frac{\partial}{\partial s} \left( \lambda(x,s) \frac{h'(s)}{h(s)} \right) = 0,
1 + \lambda(x,x) \frac{h'(x)}{h(x)} - \frac{\partial \lambda(x,s)}{\partial s} \Big|_{s=x} = 0,
\lambda(x,x) = 0.
Therefore, the Lagrange multiplier can be readily identified as
\lambda(x,s) = \frac{\int h(s)\, ds - \int h(x)\, dx}{h(x)}.

4. Numerical Results

To show the efficiency of the methods described in the previous sections, we present two examples. The first example is non-linear and is tested using the VIM and its modifications from Section 3.1 and Section 3.2; a Sinc solution is also provided. The second example is chosen to deal with finding the eigenvalues of the Titchmarsh equation, and some tabulated numerical results are demonstrated. In the Sinc–Galerkin method, we choose d = \pi/2 and \alpha = 1/2. The step size and the summation limit N are selected so that the error is asymptotically balanced: once N is chosen, the step size is determined by h = \sqrt{\pi d / (\alpha N)}. In Newton's iterative technique, we start with a zero initial guess, and we stop when \| y^{(i+1)} - y^{(i)} \| \le 10^{-7}.
Example 1.
We consider a version of the Duffing equation [25]:
y'' + 2y' + y + 8y^3 - e^{-3x} = 0,
with the boundary conditions
y(0) = 0.5 \quad \text{and} \quad y(1) = \frac{1}{2e}.
It is known that the exact solution is given by y(x) = 0.5\, e^{-x}.
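Before applying the iterative schemes, the claimed exact solution can be verified symbolically (a quick sanity check, with the equation's signs as read here; not part of the paper's computation):

```python
import sympy as sp

x = sp.symbols('x')
y = sp.exp(-x) / 2    # candidate exact solution y(x) = 0.5 e^{-x}

# Residual of y'' + 2y' + y + 8y^3 - e^{-3x} at the candidate solution.
residual = sp.simplify(sp.diff(y, x, 2) + 2 * sp.diff(y, x) + y
                       + 8 * y**3 - sp.exp(-3 * x))

bc0 = y.subs(x, 0)    # should equal 1/2
bc1 = y.subs(x, 1)    # should equal 1/(2e)
```

The cubic term 8y^3 = e^{-3x} cancels the inhomogeneity exactly, and the linear part of the operator annihilates 0.5 e^{-x}.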
For the VIM solution, we start with the correction functional for (40), written as
y_{n+1}(x) = y_n(x) + \int_0^x \lambda(x,s) \left\{ y_n''(s) + 2 y_n'(s) + y_n(s) + 8 \tilde{y}_n^3(s) - e^{-3s} \right\} ds.
By (39), the Lagrange multiplier is identified as
\lambda(x,s) = \frac{\int h(s)\, ds - \int h(x)\, dx}{h(x)}.
In our case,
\frac{h'(x)}{h(x)} = 2, \quad \text{so} \quad h = e^{2x}.
Using (43), we arrive at
\lambda(x,s) = \frac{1}{2} \left( e^{2s - 2x} - 1 \right).
We apply the LVIM, mentioned at the end of Section 3.1, to obtain the following iteration formula:
y_{n+1}(x) = y_n(x) + \int_0^x \frac{1}{2} \left( e^{2s-2x} - 1 \right) \left\{ y_n''(s) + 2 y_n'(s) + y_n(s) + 8 \tilde{y}_n^3(s) - e^{-3s} \right\} ds.
Since the initial guess may be chosen arbitrarily, we assume y_0 = A + Bx, so our first approximation is given by
y 1 ( x ) = 4 A 3 x 2 A 3 e 2 x + 2 A 3 6 A 2 B x 2 + 6 A 2 B x + 3 A 2 B e 2 x 3 A 2 B 4 A B 2 x 3 + 6 A B 2 x 2 6 A B 2 x 3 A B 2 e 2 x + 3 A B 2 A x 2 1 4 A e 2 x + 5 A 4 B 3 x 4 + 2 B 3 x 3 3 B 3 x 2 + 3 B 3 x + 3 2 B 3 e 2 x 3 B 3 2 B x 2 4 + B x 4 3 8 B e 2 x + 3 B 8 + e 3 x 3 e 2 x 2 + 1 6 .
We use the conditions in Equation (41) to get A = 0.5 and B = 0.5. In the same manner, using the obtained values of A and B, the second iteration is given by
y 2 ( x ) = 0.000600962 x 13 + 0.015625 x 12 0.204545 x 11 + 1.79531 x 10 0.0338542 e 2 . x x 9 11.8681 x 9 0.0416667 e 3 . x x 8 + 0.304688 e 2 . x x 8 + 62.793 x 8 + 0.0555556 e 3 . x x 7 1.74107 e 2 . x x 7 274.77 x 7 1.53704 e 3 . x x 6 + 5.89063 e 2 . x x 6 + 1008.25 x 6 3.24074 e 3 . x x 5 15.7625 e 2 . x x 5 3096.97 x 5 0.0138889 e 6 . x x 4 + 0.216667 e 5 . x x 4 0.990234 e 4 . x x 4 29.0957 e 3 . x x 4 + 27.3203 e 2 . x x 4 + 7829.67 x 4 + 0.0601852 e 6 . x x 3 0.837778 e 5 . x x 3 + 2.9707 e 4 . x x 3 97.072 e 3 . x x 3 41.099 e 2 . x x 3 15739.5 x 3 0.140046 e 6 . x x 2 + 1.95289 e 5 . x x 2 7.67432 e 4 . x x 2 311.183 e 3 . x x 2 + 34.4297 e 2 . x x 2 + 23663 . x 2 + 0.173804 e 6 . x x 2.34847 e 5 . x x + 8.04565 e 4 . x x 609.4 e 3 . x x 28.9792 e 2 . x x   23686.9 x 0.00470312 e 9 . x + 0.0902778 e 8 . x 0.603571 e 7 . x + 1.31535 e 6 . x + 1.59265 e 5 . x 6.24054 e 4 . x + e 3 x 3 613.749 e 3 . x 1.625 e 2 x 11231.4 e 2 . x + 11850.8 .
Applying the Laplace transform to both sides of Equation (47), we get
L [ y 1 ( x ) ] ( s ) = 3.7422 × 10 6 s 14 + 7.4844 × 10 6 s 13 8.1648 × 10 6 s 12 + 6.51483 × 10 6 s 11 4.30668 × 10 6 s 10 + 2.53181 × 10 6 s 9 1.38484 × 10 6 s 8 + 725942 . s 7 371637 . s 6 + 187912 . s 5 94437.3 s 4 + 47325.9 s 3 23686.9 s 2 1.625 s + 2 11231.4 s + 2 . 28.9792 ( s + 2 . ) 2 + 68.8594 ( s + 2 . ) 3 246.594 ( s + 2 . ) 4 + 655.688 ( s + 2 . ) 5 1891.5 ( s + 2 . ) 6 + 4241.25 ( s + 2 . ) 7 8775 . ( s + 2 . ) 8 + 12285 . ( s + 2 . ) 9 12285 . ( s + 2 . ) 10 + 1 3 ( s + 3 ) 613.749 s + 3 . 609.4 ( s + 3 . ) 2 622.366 ( s + 3 . ) 3 582.432 ( s + 3 . ) 4 698.296 ( s + 3 . ) 5 388.889 ( s + 3 . ) 6 1106.67 ( s + 3 . ) 7 + 280 . ( s + 3 . ) 8 1680 . ( s + 3 . ) 9 6.24054 s + 4 . + 8.04565 ( s + 4 . ) 2 15.3486 ( s + 4 . ) 3 + 17.8242 ( s + 4 . ) 4 23.7656 ( s + 4 . ) 5 + 1.59265 s + 5 . 2.34847 ( s + 5 . ) 2 + 3.90578 ( s + 5 . ) 3 5.02667 ( s + 5 . ) 4 + 5.2 ( s + 5 . ) 5 + 1.31535 s + 6 . + 0.173804 ( s + 6 . ) 2 0.280093 ( s + 6 . ) 3 + 0.361111 ( s + 6 . ) 4 0.333333 ( s + 6 . ) 5 0.603571 s + 7 . + 0.0902778 s + 8 . 0.00470312 s + 9 . + 11850.8 s .
To simplify matters, we substitute s = 1/t:
L [ y 1 ( x ) ] 1 t = 11231.4 2 . + 1 t 28.9792 2 . + 1 t 2 + 68.8594 2 . + 1 t 3 246.594 2 . + 1 t 4 + 655.688 2 . + 1 t 5 1891.5 2 . + 1 t 6 + 4241.25 2 . + 1 t 7 8775 . 2 . + 1 t 8 + 12285 . 2 . + 1 t 9 12285 . 2 . + 1 t 10 613.749 3 . + 1 t 609.4 3 . + 1 t 2 622.366 3 . + 1 t 3 582.432 3 . + 1 t 4 698.296 3 . + 1 t 5 388.889 3 . + 1 t 6 1106.67 3 . + 1 t 7 + 280 . 3 . + 1 t 8 1680 . 3 . + 1 t 9 6.24054 4 . + 1 t + 8.04565 4 . + 1 t 2 15.3486 4 . + 1 t 3 + 17.8242 4 . + 1 t 4 23.7656 4 . + 1 t 5 + 1.59265 5 . + 1 t 2.34847 5 . + 1 t 2 + 3.90578 5 . + 1 t 3 5.02667 5 . + 1 t 4 + 5.2 5 . + 1 t 5 + 1.31535 6 . + 1 t + 0.173804 6 . + 1 t 2 0.280093 6 . + 1 t 3 + 0.361111 6 . + 1 t 4 0.333333 6 . + 1 t 5 0.603571 7 . + 1 t + 0.0902778 8 . + 1 t 0.00470312 9 . + 1 t 3.7422 × 10 6 t 14 + 7.4844 × 10 6 t 13 8.1648 × 10 6 t 12 + 6.51483 × 10 6 t 11 4.30668 × 10 6 t 10 + 2.53181 × 10 6 t 9 1.38484 × 10 6 t 8 + 725942 . t 7 371637 . t 6 + 187912 . t 5 94437.3 t 4 + 47325.9 t 3 23686.9 t 2 + 11850.8 t 1.625 1 t + 2 + 1 3 1 t + 3 .
Equation (50) has its [3/3] Padé approximant in the form:
[3/3](t) = \frac{4.08696\, t^3 - 0.594203\, t^2 + 0.5\, t}{8.17391\, t^3 + 6.98551\, t^2 - 0.188406\, t + 1}.
As t = 1/s, Equation (51) can be written in terms of the variable s:
[3/3]\left( \frac{1}{s} \right) = \frac{0.5\, s^2 - 0.594203\, s + 4.08696}{s^3 - 0.188406\, s^2 + 6.98551\, s + 8.17391}.
To finalize the solution, we take the inverse Laplace transform and arrive at
\mathcal{L}^{-1}\left[ [3/3]\left( \tfrac{1}{s} \right) \right] = 0.5\, e^{-x} + e^{-2.79658 i x} \left( (1.86795 \times 10^{-14} - 5.21063 \times 10^{-13} i)\, e^{0.594203 x} - (1.86795 \times 10^{-14} - 5.21063 \times 10^{-13} i)\, e^{0.594203} \right).
Some numerical values of our approximate solutions using the VIM and LVIM are shown in Table 1. The obtained results indicate excellent agreement with the exact solution y(x).
We now calculate the solution of (40) using the improved method obtained by combining the Adomian method with the VIM, which we call the AVIM.
In (40), the non-linear term is f(y) = y^3(x). Therefore, the first few Adomian polynomials for f(y) are A_0 = y_0^3(x), A_1 = 3 y_1(x) y_0^2(x), and A_2 = 3 y_2(x) y_0^2(x) + 3 y_0(x) y_1^2(x). By the Adomian–variational iteration method (AVIM) and using (45), the correction functional of (40) can be written as
y_{n+1}(x) = y_n(x) + \int_0^x \frac{1}{2} \left( e^{2s-2x} - 1 \right) \left\{ y_n''(s) + 2 y_n'(s) + y_n(s) + 8 \sum_{k=0}^{n} A_k(s) - e^{-3s} \right\} ds.
Choosing our initial guess to be y_0 = A + Bx, the first approximation is given by
y 1 ( x ) = 4 A 3 x 2 A 3 e 2 x + 2 A 3 6 A 2 B x 2 + 6 A 2 B x + 3 A 2 B e 2 x 3 A 2 B 4 A B 2 x 3 + 6 A B 2 x 2 6 A B 2 x 3 A B 2 e 2 x + 3 A B 2 A x 2 1 4 A e 2 x + 5 A 4 B 3 x 4 + 2 B 3 x 3 3 B 3 x 2 + 3 B 3 x + 3 2 B 3 e 2 x 3 B 3 2 B x 2 4 + B x 4 3 8 B e 2 x + 3 B 8 + e 3 x 3 e 2 x 2 + 1 6 .
We use (41) to obtain the specific values A = 0.5 and B = 0.5. In the same manner, using (55), we calculate the next iteration as
y 2 ( x ) = 0.0535714 x 7 + 0.6875 x 6 4.25 x 5 + 16.4688 x 4 1.625 e 2 . x x 3 43.0625 x 3 0.666667 e 3 . x x 2 + 2.4375 e 2 . x x 2 + 75.5313 x 2 0.444444 e 3 . x x 3.25 e 2 . x x 82.3021 x + e 3 x 3 0.925926 e 3 . x 1.625 e 2 x 40.2344 e 2 . x + 42.952 .
We then calculate the rest of the iterations until we reach the fifth one; because of their length, we do not record these iterations here. The calculations in Table 2 are based on our fifth iteration. Table 1 shows that the LVIM solution agrees with the exact solution y(x) better than the AVIM solution does.
The notation E_method denotes the absolute error of that method. Referring to the calculations in the three tables, the LVIM is clearly the most accurate of the methods considered. We also note that the improved AVIM deviates noticeably from the exact solution away from the origin, whereas the Sinc–Galerkin method gives good results over the whole interval [ 0 , 1 ] .
Example 2. Titchmarsh Equation.
We consider the Titchmarsh model
\frac{d^2 y(x)}{dx^2} + \left( \lambda - x^{2m} \right) y(x) = 0, \quad y(0) = y(1) = 0,
where m is a nonnegative integer. For the Sinc solution, we follow the procedure outlined in Section 2 and find a numerical solution of (57). The first eigenvalues, listed in Table 4, are computed from the matrix system (20) with N = 40 for three different values of the parameter m. For the solution using the modified variational iteration method (LVIM), we follow the same steps as in the previous example. Solving for λ, we obtain an estimate of the first eigenvalue for different values of the parameter m. These results are also shown in Table 4. Similar results can be found in [9].
The first eigenvalue of the comparison equation y''(x) + λ y(x) = 0, y(0) = 0, y(1) = 0 is π². Table 4 then gives an estimate of the least eigenvalue λ_1 satisfying π² < λ_1 < 11, which is consistent with the results obtained in [9]. The eigenfunction corresponding to the first eigenvalue λ_1 of Equation (57) for m = 2 was computed at several points of the domain; the results are listed in Table 5.
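For intuition about this comparison argument, the sketch below computes the least Dirichlet eigenvalue of the comparison equation y'' + λy = 0 on [0, 1] by shooting with classical RK4 and bisection, recovering π²; replacing q(x) = 0 by x^{2m} gives the analogous estimate for (57). This is a plain shooting sketch, not the Sinc–Galerkin scheme of Section 2:

```python
import math

def q(x):
    return 0.0              # comparison equation; use x**(2*m) for (57)

def shoot(lam, n=2000):
    # integrate y'' = (q(x) - lam) * y from 0 to 1 with classical RK4,
    # starting from y(0) = 0, y'(0) = 1; returns y(1)
    h = 1.0 / n
    y, v, x = 0.0, 1.0, 0.0
    def f(x_, y_, v_):
        return v_, (q(x_) - lam) * y_
    for _ in range(n):
        k1y, k1v = f(x, y, v)
        k2y, k2v = f(x + h/2, y + h/2 * k1y, v + h/2 * k1v)
        k3y, k3v = f(x + h/2, y + h/2 * k2y, v + h/2 * k2v)
        k4y, k4v = f(x + h, y + h * k3y, v + h * k3v)
        y += h/6 * (k1y + 2*k2y + 2*k3y + k4y)
        v += h/6 * (k1v + 2*k2v + 2*k3v + k4v)
        x += h
    return y

# bisect on lambda: y(1; lam) changes sign across the first eigenvalue
lo, hi = 8.0, 12.0
for _ in range(60):
    mid = (lo + hi) / 2
    if shoot(lo) * shoot(mid) <= 0:
        hi = mid
    else:
        lo = mid

lam1 = (lo + hi) / 2
assert abs(lam1 - math.pi**2) < 1e-6   # first eigenvalue is pi^2 for q = 0
```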

Author Contributions

All authors contributed equally in this work. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Acknowledgments

We wish to thank Bozidar Ivankovic for critically reading the manuscript and providing insight and expertise that were very helpful in revising the paper. Our appreciation also goes to our university (JUST) for supporting the second author in completing the master's thesis [25], from which some of these results are taken.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Chanane, B. Computing the eigenvalues of singular Sturm-Liouville problems using the regularized sampling method. Appl. Math. Comput. 2007, 184, 972–978. [Google Scholar] [CrossRef]
  2. Bender, C.M.; Orszag, S.A. Advanced Mathematical Methods for Scientists and Engineers; McGraw-Hill International Editions; McGraw-Hill: New York, NY, USA, 1987. [Google Scholar]
  3. Celik, I. Approximate calculation of eigenvalues with the method of weighted residuals-collocation method. Appl. Math. Comput. 2005, 160, 401–410. [Google Scholar]
  4. Guseinov, G.S.; Karaca, I.Y. Instability intervals of a Hill’s equation with piecewise constant and alternating coefficient. Comput. Math. Appl. 2004, 47, 319–326. [Google Scholar] [CrossRef] [Green Version]
  5. El-Gamel, M.; Zayed, A.I. Sinc-Galerkin method for solving nonlinear boundary-value problems. Comput. Math. Appl. 2004, 48, 1285–1298. [Google Scholar] [CrossRef]
  6. Saadatmandi, A.; Razzaghi, M.; Dehghan, M. Sinc-Galerkin solution for nonlinear two-point boundary value problems with applications to chemical reactor theory. Math. Comput. Model. 2005, 42, 1237–1244. [Google Scholar] [CrossRef]
  7. Stenger, F. A Sinc-Galerkin method of solution of boundary value problems. Math. Comput. 1979, 33, 85–109. [Google Scholar]
  8. Mohsen, A.; El-Gamel, M. On the Galerkin and collocation methods for two-point boundary value problems using sinc bases. Comput. Math. Appl. 2008, in press. [Google Scholar] [CrossRef]
  9. Alquran, M.; Al-Khaled, K. Approximations of Sturm-Liouville eigenvalues using sinc-Galerkin and differential transform methods. Appl. Appl. Math. 2010, 5, 128–147. [Google Scholar]
  10. He, J.H. Variational iteration method—A kind of non-linear analytical technique: Some examples. Int. J. Non-Linear Mech. 1999, 34, 699–708. [Google Scholar] [CrossRef]
  11. He, J.H. Variational iteration method: Some recent results and new interpretations. J. Comput. Appl. Math. 2007, 207, 3–17. [Google Scholar] [CrossRef] [Green Version]
  12. He, J.H.; Wu, X.H. Variational iteration method: New development and applications. Comput. Math. Appl. 2007, 54, 881–894. [Google Scholar] [CrossRef]
  13. Abassy, T.A.; El-Tawil, M.A.; El Zoheiry, H. Toward a modified variational iteration method. J. Comput. Appl. Math. 2007, 207, 137–147. [Google Scholar] [CrossRef]
  14. Jin, L. Application of modified variational iteration method to the Bratu-type problems. Int. J. Contemp Math. Sci. 2010, 5, 153–158. [Google Scholar]
  15. Islam, S.U.; Haq, S.; Ali, J. Numerical solution of special 12th-order boundary value problems using differential transform method. Commun. Nonlinear Sci. Numer. Simul. 2009, in press. [Google Scholar] [CrossRef]
  16. Abbasbandy, S. A new application of He’s variational iteration method for quadratic Riccati differential equation by using Adomian’s polynomials. J. Comput. Appl. Math. 2007, 207, 59–63. [Google Scholar] [CrossRef] [Green Version]
  17. Wazwaz, A.M. The variational iteration method for solving two forms of Blasius equation on a half-infinite domain. Appl. Math. Comput. 2007, 188, 485–491. [Google Scholar] [CrossRef]
  18. Mukhtarov, O.S.; Yücel, M. A Study of the Eigenfunctions of the Singular Sturm-Liouville Problem Using the Analytical Method and the Decomposition Technique. Mathematics 2020, 8, 415. [Google Scholar] [CrossRef] [Green Version]
  19. Qadir, R.R.; Jwamer, K.H.F. Refinement Asymptotic Formulas of Eigenvalues and Eigenfunctions of a Fourth Order Linear Differential Operator with Transmission Condition and Discontinuous Weight Function. Symmetry 2019, 11, 1060. [Google Scholar] [CrossRef] [Green Version]
  20. Khashshan, M.M.; Syam, M.I.; Al Mokhmari, A. A Reliable Method for Solving Fractional Sturm-Liouville Problems. Mathematics 2018, 6, 176. [Google Scholar] [CrossRef]
  21. Alsaedi, A.; Alsulami, M.; Srivastava, H.M.; Ahmad, B.; Ntouyas, S.K. Existence Theory for Nonlinear Third-Order Ordinary Differential Equations with Nonlocal Multi-Point and Multi-Strip Boundary Conditions. Symmetry 2019, 11, 281. [Google Scholar] [CrossRef] [Green Version]
  22. Stenger, F. Numerical Methods Based on Sinc and Analytic Functions; Springer: New York, NY, USA, 1993. [Google Scholar]
  23. Eggert, N.; Jarratt, M.; Lund, J. Sinc function computation of the eigenvalues of Sturm-Liouville problems. J. Comput. Phys. 1987, 69, 209–229. [Google Scholar] [CrossRef]
  24. McArthur, K.M. A Collocative Variation of the Sinc-Galerkin Method for Second Order Boundary Value Problems. In Computation and Control; Progress in Systems and Control Theory; Birkhäuser: Boston, MA, USA, 1989; Volume 1. [Google Scholar]
  25. Hazaimeh, A. Solution of Sturm-Liouville Differential Equation via the Use of Variational Iteration Method. Master’s Thesis, Jordan University of Science and Technology, Ar-Ramtha, Jordan, May 2020. [Google Scholar]
  26. Adomian, G. A review of the decomposition method and some recent results for nonlinear equations. Math. Comput. Model. 1990, 13, 17–43. [Google Scholar] [CrossRef]
  27. Singh, N.; Kumar, M. Adomian decomposition method for computing eigen-values of singular Sturm-Liouville problems. Natl. Acad. Sci. Lett. 2013, 36, 311–318. [Google Scholar] [CrossRef]
Table 1. Numerical results for Example 1 by LVIM.

| x_i | y(x_i)   | LVIM     | VIM      | E_VIM              | E_LVIM             |
|-----|----------|----------|----------|--------------------|--------------------|
| 0   | 0.5      | 0.5      | 0.5      | 1.81899 × 10^{-12} | 1.81899 × 10^{-12} |
| 0.2 | 0.409365 | 0.409365 | 0.409364 | 1.28746 × 10^{-6}  | 2.52365 × 10^{-12} |
| 0.4 | 0.33516  | 0.33516  | 0.33511  | 0.0000497178       | 3.06966 × 10^{-12} |
| 0.6 | 0.274406 | 0.274406 | 0.274056 | 0.000350104        | 3.26966 × 10^{-12} |
| 0.8 | 0.224664 | 0.224664 | 0.223415 | 0.00124996         | 2.97354 × 10^{-12} |
| 1   | 0.18394  | 0.18394  | 0.180819 | 0.00312102         | 2.1437 × 10^{-12}  |
Table 2. Numerical results for Example 1 by AVIM.

| x_i | y(x_i)   | LVIM     | AVIM     | E_AVIM             | E_LVIM             |
|-----|----------|----------|----------|--------------------|--------------------|
| 0   | 0.5      | 0.5      | 0.5      | 7.10543 × 10^{-15} | 1.81899 × 10^{-12} |
| 0.2 | 0.409365 | 0.409365 | 0.409362 | 4.29152 × 10^{-7}  | 2.52365 × 10^{-12} |
| 0.4 | 0.33516  | 0.33516  | 0.335152 | 1.22625 × 10^{-6}  | 3.06966 × 10^{-12} |
| 0.6 | 0.274406 | 0.274406 | 0.274412 | 7.66562 × 10^{-5}  | 3.26966 × 10^{-12} |
| 0.8 | 0.224664 | 0.224664 | 0.224658 | 6.54347 × 10^{-4}  | 2.97354 × 10^{-12} |
| 1.0 | 0.18394  | 0.18394  | 0.184125 | 1.89237 × 10^{-2}  | 2.14370 × 10^{-12} |
Table 3. Numerical results for Example 1 by Sinc–Galerkin.

| x_i | y(x_i)   | Sinc–Galerkin | E_Sinc             |
|-----|----------|---------------|--------------------|
| 0   | 0.5      | 0.5           | 3.22370 × 10^{-8}  |
| 0.2 | 0.409365 | 0.409365      | 3.11190 × 10^{-7}  |
| 0.4 | 0.335160 | 0.335160      | 4.89625 × 10^{-7}  |
| 0.6 | 0.274406 | 0.274405      | 2.77628 × 10^{-6}  |
| 0.8 | 0.224664 | 0.224665      | 8.55219 × 10^{-7}  |
| 1.0 | 0.183940 | 0.183941      | 3.23908 × 10^{-7}  |
Table 4. An estimate to the first eigenvalue λ_1 of Equation (57).

| m | Sinc–Galerkin (N = 40) | LVIM (n = 3) |
|---|------------------------|--------------|
| 0 | 10.1363                | 10.1366      |
| 1 | 10.2754                | 10.3788      |
| 2 | 10.5655                | 10.5935      |
Table 5. Comparison of solutions corresponding to the first eigenvalue λ_1 of Equation (57).

| x     | Sinc solution (N = 40) | LVIM (n = 3) |
|-------|------------------------|--------------|
| 0     | 0                      | 0            |
| 0.125 | 0.5461                 | 0.5464       |
| 0.250 | 1.002                  | 1.004        |
| 0.375 | 1.321                  | 1.324        |
| 0.500 | 1.422                  | 1.426        |
| 0.625 | 1.321                  | 1.323        |
| 0.750 | 1.002                  | 1.003        |
| 0.875 | 0.5301                 | 0.5300       |
| 1.000 | 0                      | 0            |
