Article

Spectral Solutions of Even-Order BVPs Based on New Operational Matrix of Derivatives of Generalized Jacobi Polynomials

by
Waleed Mohamed Abd-Elhameed
1,*,
Badah Mohamed Badah
2,
Amr Kamel Amin
3,* and
Muhammad Mahmoud Alsuyuti
4
1
Department of Mathematics, Faculty of Science, Cairo University, Giza 12613, Egypt
2
Department of Mathematics, College of Science, University of Jeddah, Jeddah 23218, Saudi Arabia
3
Department of Basic Sciences, Adham University College, Umm AL-Qura University, Makkah 28653, Saudi Arabia
4
Department of Basic Science, Egyptian Academy for Engineering and Advanced Technology Affiliated to Ministry of Military Production, Cairo 32511, Egypt
*
Authors to whom correspondence should be addressed.
Symmetry 2023, 15(2), 345; https://doi.org/10.3390/sym15020345
Submission received: 30 December 2022 / Revised: 21 January 2023 / Accepted: 23 January 2023 / Published: 26 January 2023
(This article belongs to the Section Mathematics)

Abstract:
The primary focus of this article is on applying specific generalized Jacobi polynomials (GJPs) as basis functions to obtain the solution of linear and non-linear even-order two-point BVPs. These GJPs are orthogonal polynomials that are expressed as Legendre polynomial combinations. The linear even-order BVPs are treated using the Petrov–Galerkin method. In addition, a formula for the first-order derivative of these polynomials is expressed in terms of their original ones. This relation is the key to constructing an operational matrix of the GJPs that can be used to treat the non-linear two-point BVPs. In fact, a numerical approach is proposed using this operational matrix of derivatives to convert the non-linear differential equations into effectively solvable non-linear systems of equations. The convergence of the proposed generalized Jacobi expansion is investigated. To show the precision and viability of our suggested algorithms, some examples are given.

1. Introduction

The numerous applications of special functions and orthogonal polynomials in a variety of fields have made studying these polynomials increasingly important. They occur in the study of differential and integral equations; see, for example, [1,2,3,4]. Furthermore, orthogonal polynomials have been shown to be significant in both mathematical statistics and quantum physics. Many theoretical investigations regarding special functions have been performed; see, for example, [5,6,7,8]. The classical orthogonal polynomials, which include the Hermite, Laguerre, and Jacobi polynomials, are the most widely used orthogonal polynomials; see, for example, [9,10,11]. The Jacobi polynomials are among the most significant orthogonal polynomials used in numerical analysis. The most significant class of Jacobi polynomials is the class of Gegenbauer polynomials, which in turn includes the Legendre polynomials and the Chebyshev polynomials of the first and second kinds as special cases. All four kinds of Chebyshev polynomials are special Jacobi polynomials. The first and second kinds are ultraspherical polynomials, whereas the third and fourth kinds are not, since they are special cases of certain non-symmetric Jacobi polynomials; see, for example, [12,13].
BVPs play important roles since they appear throughout the applied sciences, from engineering to fluid mechanics to optimization theory. For some applications, one can consult [14]. High-order BVPs can be used to describe a variety of real-world phenomena, and even-order two-point BVPs in particular appear in many problems. A fourth-order ordinary differential equation [15] governs the free vibration analysis of beam structures. An ordinary differential equation of sixth order governs the vibrational behavior of rings; see [16]. For some other applications of even-order BVPs, one can refer to [17]. From a numerical perspective, there are several numerical algorithms utilized to solve different types of even-order BVPs. For example, the authors in [18] treated both linear and non-linear two-point BVPs of any even order using the generalized third-kind Chebyshev polynomials. The linear equations were solved via the Galerkin approach, while the non-linear even-order BVPs were treated by applying the standard collocation technique, based on a matrix of derivatives of the generalized third-kind Chebyshev polynomials. Some other algorithms in the literature were developed to treat such types of problems. Among these methods are the differential transform method in [19], perturbation and homotopy perturbation methods in [20], and the matrix method in [21].
Numerical analysis relies heavily on spectral methods, which have been used successfully to obtain numerical solutions of differential and integral equations. In spectral methods, the required approximate solution is expressed in terms of certain combinations of orthogonal polynomials that serve as basis functions. Tau, collocation, and Galerkin methods are the three best-known spectral methods. The Galerkin approach relies on selecting orthogonal polynomial combinations that meet the underlying conditions and then enforcing the residual of the equation to be orthogonal to a set of test functions that coincides with the set of trial functions; see, for example, [10,22,23,24,25]. The "Petrov–Galerkin" method is a variation of the Galerkin method. The primary distinction between the two is that, in contrast to the Galerkin method, the sets of trial and test functions in the Petrov–Galerkin approach are not the same; the Galerkin approach is therefore less flexible than the Petrov–Galerkin method. In comparison with the Galerkin and Petrov–Galerkin techniques, the tau method is more widely applicable, for an example, see [26,27]; this is because of the freedom with which the basis and test functions can be chosen. The collocation method, which can handle all different types of differential equations, is the most popular approach; see, for instance, [28,29,30,31]. A survey of spectral techniques and their uses can be found in [2,32].
Two papers by Shen [33,34] addressed the idea of combining orthogonal polynomials in order to deal with different types of differential equations. In [33], using the spectral Galerkin method, the author numerically treated the second- and fourth-order two-point BVPs by using orthogonal combinations of Legendre polynomials. The fundamental benefit of choosing such combinations is that they allow one to transform the differential equations, together with their underlying conditions, into algebraic systems that are specially structured. Additionally, it has been demonstrated that some types of differential equations can be transformed into diagonal systems, which considerably reduces the computational effort needed to solve them; see, for example, [24].
In the context of numerical analysis, the utilization of operational matrices of derivatives and integrals is beneficial. They are used to numerically solve almost all types of differential equations. For example, in [35], Napoli and Abd-Elhameed used harmonic numbers operational matrices of derivatives to treat the non-linear high-order initial value problems. A wide range of fractional differential equations can be solved using operational derivative matrices, see for example [36,37].
The principal objective of the current article is to employ the GJPs as basis functions to obtain spectral solutions of the linear and non-linear even-order BVPs. Two different approaches are utilized for proposing spectral solutions for these equations. The Petrov–Galerkin method is applied to treat the linear even-order BVPs. For the non-linear BVPs, the operational matrix of derivatives of such polynomials is first introduced, and after that, it is used to transform the differential equations governed by its governing boundary conditions (BCs) into systems of algebraic equations that can be efficiently solved.
The paper is organized as follows. The next section presents some preliminary information and some fundamental properties concerning the Legendre and GJPs. Section 3 develops new formulas concerned with the GJPs and their shifted ones. In Section 4, a numerical algorithm built on the application of the Petrov–Galerkin method is designed to solve the even-order two-point BVPs. Section 5 is devoted to presenting a matrix approach for handling the non-linear two-point BVPs. This algorithm is basically built on the application of the spectral collocation method. The convergence analysis of the proposed shifted generalized Jacobi polynomials is investigated in Section 6. Some illustrative examples are displayed in Section 7. Some concluding remarks are given in Section 8.

2. Preliminaries and Some Interesting Formulas

In this section, we discuss the basic characteristics of Legendre polynomials, their shifted polynomials, and a special class of polynomials called GJPs, which are represented as specific combinations of Legendre polynomials. We also introduce the shifted generalized Jacobi polynomials that are used later on.

2.1. An Account on Legendre Polynomials and the GJPs

Legendre polynomials are well known to form an orthogonal set of polynomials on $[-1,1]$ with respect to the unit weight function. Their orthogonality relation is given by
\[
\int_{-1}^{1}P_m(x)\,P_n(x)\,dx=\begin{cases}\dfrac{2}{2n+1}, & m=n,\\[4pt] 0, & m\neq n.\end{cases}
\]
These polynomials can be represented as ([38]):
\[
P_n(x)=\frac{1}{2^{n}}\sum_{m=0}^{\lfloor n/2\rfloor}\frac{(-1)^{m}\,(2n-2m)!}{m!\,(n-2m)!\,(n-m)!}\,x^{\,n-2m},
\]
and the corresponding inversion formula takes the form
\[
x^{\ell}=\frac{\sqrt{\pi}\,\ell!}{2^{\ell}}\sum_{m=0}^{\lfloor \ell/2\rfloor}\frac{\ell-2m+\frac{1}{2}}{m!\,\Gamma\!\left(\ell-m+\frac{3}{2}\right)}\,P_{\ell-2m}(x),\qquad \ell\ge 0,
\]
where $\lfloor z\rfloor$ denotes the largest integer less than or equal to $z$.
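As a quick numerical sanity check, the inversion formula above can be verified against direct monomial evaluation. The following sketch (the helper name `monomial_in_legendre` is ours, not the paper's) builds the Legendre coefficients of $x^{\ell}$ and compares pointwise:

```python
import math

import numpy as np
from numpy.polynomial import legendre as leg

def monomial_in_legendre(ell):
    """Legendre coefficients c such that sum_k c[k] * P_k(x) == x**ell."""
    c = np.zeros(ell + 1)
    pref = math.sqrt(math.pi) * math.factorial(ell) / 2.0 ** ell
    for m in range(ell // 2 + 1):
        deg = ell - 2 * m                      # index of P_{ell - 2m}
        c[deg] = pref * (deg + 0.5) / (math.factorial(m) * math.gamma(ell - m + 1.5))
    return c

x = np.linspace(-1.0, 1.0, 9)
for ell in range(8):
    assert np.allclose(leg.legval(x, monomial_in_legendre(ell)), x ** ell)
```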
The shifted Legendre polynomials on $[a,b]$ can be defined as follows:
\[
\tilde P_n(x)=P_n\!\left(\frac{2x-a-b}{b-a}\right).
\]
It is obvious that the $\tilde P_n(x)$ are orthogonal on $[a,b]$ in the following sense:
\[
\int_a^b \tilde P_m(x)\,\tilde P_n(x)\,dx=\begin{cases}\dfrac{b-a}{2n+1}, & m=n,\\[4pt] 0, & m\neq n.\end{cases}\tag{1}
\]
Guo et al. in [39] constructed classes of orthogonal polynomials that are considered the natural bases for certain even-order two-point BVPs. They called them generalized Jacobi polynomials ("GJPs"). It was shown in [39] that these polynomials are combinations of the Legendre polynomials. Doha et al. in [24] found an explicit expression for these polynomials in terms of Legendre polynomials. More recently, Abd-Elhameed in [40] studied these polynomials from a theoretical point of view and established several formulas concerning them and their connections with different orthogonal polynomials. In this paper, we employ these polynomials to treat the linear and non-linear even-order two-point BVPs numerically.
Let $\ell$ and $m$ be any two integers. Following [39], the GJPs are defined by
\[
G_r^{(\ell,m)}(x)=\begin{cases}(1-x)^{-\ell}(1+x)^{-m}\,R_{r-r_0}^{(-\ell,-m)}(x), & r_0=-(\ell+m),\ \ \ell,m\le -1,\\[2pt](1-x)^{-\ell}\,R_{r-r_0}^{(-\ell,m)}(x), & r_0=-\ell,\ \ \ell\le -1,\ m>-1,\\[2pt](1+x)^{-m}\,R_{r-r_0}^{(\ell,-m)}(x), & r_0=-m,\ \ \ell>-1,\ m\le -1,\\[2pt]R_{r-r_0}^{(\ell,m)}(x), & r_0=0,\ \ \ell,m>-1,\end{cases}\tag{2}
\]
for $r\ge r_0$,
where the $R_m^{(\delta,\mu)}(x)$ are the normalized Jacobi polynomials defined as ([40]):
\[
R_m^{(\delta,\mu)}(x)=\frac{m!}{(\delta+1)_m}\,P_m^{(\delta,\mu)}(x),
\]
where $P_m^{(\delta,\mu)}(x)$ denotes the well-known classical Jacobi polynomials.
From the definition in (2), it is clear that the polynomials defined as
\[
G_k^{\,n}(x)=G_{k+2n}^{(-n,-n)}(x)=(1-x^{2})^{n}\,R_k^{(n,n)}(x)\tag{3}
\]
fulfill the $(2n)$ BCs:
\[
D^{q}G_k^{\,n}(\pm 1)=0,\qquad q=0,1,\dots,n-1.
\]
Remark 1.
The polynomials defined in (3) coincide with those defined in [40] up to a constant factor.
The following lemma shows how the GJPs can be written in terms of Legendre polynomials.
Lemma 1.
The GJPs defined in (3) can be expressed in terms of Legendre polynomials as ([24])
\[
G_k^{\,n}(x)=\frac{n!}{2}\sum_{j=0}^{n}(-1)^{j}\,(2k+4j+1)\binom{n}{j}\frac{\Gamma\!\left(k+j+\frac{1}{2}\right)}{\Gamma\!\left(k+j+n+\frac{3}{2}\right)}\,P_{k+2j}(x).\tag{4}
\]
Now, from the orthogonality relation of the normalized symmetric Jacobi (ultraspherical) polynomials ([40]), it is clear that the orthogonality relation of the polynomials $G_k^{\,n}(x)$ over $[-1,1]$ with respect to the weight function $w(x)=(1-x^{2})^{-n}$ is given by
\[
\int_{-1}^{1}w(x)\,G_j^{\,n}(x)\,G_k^{\,n}(x)\,dx=\begin{cases}\dfrac{2^{2n+1}\,k!\,(n!)^{2}}{(2k+2n+1)\,(k+2n)!}, & j=k,\\[6pt]0, & j\neq k.\end{cases}
\]
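Under our reading of the expansion in Lemma 1 and of the weight $w(x)=(1-x^{2})^{-n}$, the orthogonality relation above can be checked by Gauss–Legendre quadrature; after cancellation the integrand is a polynomial, so the quadrature is exact up to rounding. A sketch (helper name and sizes are our choices):

```python
import math

import numpy as np
from numpy.polynomial import legendre as leg

def gjp_leg_coeffs(k, n):
    """Legendre coefficients of G_k^n(x) from the Lemma 1 expansion."""
    c = np.zeros(k + 2 * n + 1)
    for i in range(n + 1):
        c[k + 2 * i] = (0.5 * math.factorial(n) * (-1) ** i * (2 * k + 4 * i + 1)
                        * math.comb(n, i) * math.gamma(k + i + 0.5)
                        / math.gamma(k + i + n + 1.5))
    return c

n = 2
t, w = leg.leggauss(40)                      # interior nodes, so (1 - t^2) > 0
wt = w * (1 - t ** 2) ** (-n)                # quadrature weights times w(x)
for j in range(4):
    for k in range(4):
        Gj = leg.legval(t, gjp_leg_coeffs(j, n))
        Gk = leg.legval(t, gjp_leg_coeffs(k, n))
        val = np.sum(wt * Gj * Gk)
        hk = (2 ** (2 * n + 1) * math.factorial(k) * math.factorial(n) ** 2
              / ((2 * k + 2 * n + 1) * math.factorial(k + 2 * n)))
        assert abs(val - (hk if j == k else 0.0)) < 1e-8
```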

2.2. The Shifted Generalized Jacobi Polynomials

The GJPs defined in (3) can be extended to a general interval $[a,b]$. In this respect, the shifted generalized Jacobi polynomials (SGJPs) on $[a,b]$ can be defined as follows:
\[
SG_j^{\,n}(x)=(b-x)^{n}(x-a)^{n}\,\tilde R_j^{(n,n)}(x),\tag{5}
\]
where the $\tilde R_j^{(n,n)}(x)$ are the shifted normalized symmetric Jacobi polynomials on $[a,b]$, defined as
\[
\tilde R_j^{(n,n)}(x)=R_j^{(n,n)}\!\left(\frac{2x-a-b}{b-a}\right).
\]
The orthogonality relation of the GJPs on $[-1,1]$ can be easily transformed to give the corresponding orthogonality relation for the SGJPs on $[a,b]$. More precisely, the following orthogonality relation holds for the shifted polynomials $SG_j^{\,n}(x)$:
\[
\int_a^b \tilde w(x)\,SG_j^{\,n}(x)\,SG_k^{\,n}(x)\,dx=\begin{cases}\dfrac{(b-a)^{2n+1}\,k!\,(n!)^{2}}{(2k+2n+1)\,(k+2n)!}, & j=k,\\[6pt]0, & j\neq k,\end{cases}\tag{6}
\]
where $\tilde w(x)$ is given by $\tilde w(x)=(b-x)^{-n}(x-a)^{-n}$.
The following corollary exhibits the expression of the SGJPs in terms of the shifted Legendre polynomials.
Corollary 1.
In terms of shifted Legendre polynomials, the SGJPs can be represented as
\[
SG_k^{\,n}(x)=\frac{n!\,(b-a)^{2n}}{2^{2n+1}}\sum_{j=0}^{n}(-1)^{j}\,(2k+4j+1)\binom{n}{j}\frac{\Gamma\!\left(k+j+\frac{1}{2}\right)}{\Gamma\!\left(k+j+n+\frac{3}{2}\right)}\,\tilde P_{k+2j}(x).\tag{7}
\]
Proof. 
Formula (7) can be deduced from Formula (4) by simply replacing $x$ with $\frac{2x-a-b}{b-a}$.    □

3. Some New Formulas Concerned with the GJPs and Their Shifted Ones

This section is devoted to establishing some new formulas concerned with the GJPs and their shifted polynomials, which will be very useful to derive our two proposed algorithms to handle even-order linear and non-linear two-point BVPs.
In the first theorem, we state and prove an expression of the power form representation of the GJPs.
Theorem 1.
The analytic form of $G_j^{\,n}(x)$ is given by
\[
G_j^{\,n}(x)=\frac{(2n+1)!}{\Gamma\!\left(n+\frac{3}{2}\right)}\sum_{p=0}^{\lfloor j/2\rfloor+n}\frac{(-1)^{n-p}\,2^{\,j-2p-1}\,\Gamma\!\left(\frac{1}{2}+j+n-p\right)}{p!\,(j+2n-2p)!}\,x^{\,j+2n-2p},\qquad j\ge 0.
\]
Proof. 
The analytic form of $R_j^{(n,n)}(x)$ (see [40]) allows one to write
\[
R_j^{(n,n)}(x)=\frac{j!\,(2n+1)!}{(j+2n)!\,\Gamma\!\left(n+\frac{3}{2}\right)}\sum_{r=0}^{\lfloor j/2\rfloor}\frac{(-1)^{r}\,2^{\,j-2r-1}\,\Gamma\!\left(\frac{1}{2}+j+n-r\right)}{(j-2r)!\,r!}\,x^{\,j-2r},
\]
and therefore, we have
\[
G_j^{\,n}(x)=\frac{j!\,(2n+1)!}{(j+2n)!\,\Gamma\!\left(n+\frac{3}{2}\right)}\sum_{r=0}^{\lfloor j/2\rfloor}\frac{(-1)^{r}\,2^{\,j-2r-1}\,\Gamma\!\left(\frac{1}{2}+j+n-r\right)}{(j-2r)!\,r!}\left(1-x^{2}\right)^{n}x^{\,j-2r}.
\]
The previous formula is transformed into the following form by using the binomial theorem:
\[
G_j^{\,n}(x)=\frac{j!\,(2n+1)!}{(j+2n)!\,\Gamma\!\left(n+\frac{3}{2}\right)}\sum_{r=0}^{\lfloor j/2\rfloor}\frac{(-1)^{r}\,2^{\,j-2r-1}\,\Gamma\!\left(\frac{1}{2}+j+n-r\right)}{(j-2r)!\,r!}\sum_{\ell=0}^{n}(-1)^{\ell}\binom{n}{\ell}x^{\,j+2\ell-2r}.\tag{8}
\]
Some algebraic computations on (8) lead to the following formula:
\[
G_j^{\,n}(x)=\frac{j!\,(2n+1)!}{(j+2n)!\,\Gamma\!\left(n+\frac{3}{2}\right)}\sum_{p=0}^{\lfloor j/2\rfloor+n}(-1)^{n-p}\left[\sum_{\ell=0}^{p}\frac{2^{\,j-2\ell-1}\binom{n}{\ell+n-p}\Gamma\!\left(\frac{1}{2}+j+n-\ell\right)}{(j-2\ell)!\,\ell!}\right]x^{\,j+2n-2p}.
\]
The inner sum in the previous formula can now be written as
\[
\sum_{\ell=0}^{p}\frac{2^{\,j-2\ell-1}\binom{n}{\ell+n-p}\Gamma\!\left(\frac{1}{2}+j+n-\ell\right)}{(j-2\ell)!\,\ell!}=\frac{2^{\,j-1}\binom{n}{n-p}\Gamma\!\left(j+n+\frac{1}{2}\right)}{j!}\,{}_{3}F_{2}\!\left(\left.\begin{matrix}-p,\ \frac{1}{2}-\frac{j}{2},\ -\frac{j}{2}\\[2pt]\frac{1}{2}-j-n,\ 1+n-p\end{matrix}\right|1\right).\tag{9}
\]
The ${}_{3}F_{2}(1)$ that appears in (9) can be reduced by means of the Pfaff–Saalschütz identity (see [41]) to give
\[
{}_{3}F_{2}\!\left(\left.\begin{matrix}-p,\ \frac{1}{2}-\frac{j}{2},\ -\frac{j}{2}\\[2pt]\frac{1}{2}-j-n,\ 1+n-p\end{matrix}\right|1\right)=\frac{(j+2n-2p+1)_{2p}}{2^{2p}\left(j+n-p+\frac{1}{2}\right)_{p}\,(n-p+1)_{p}},
\]
and as a result, the power form representation of $G_j^{\,n}(x)$ shown below can be obtained:
\[
G_j^{\,n}(x)=\frac{(2n+1)!}{\Gamma\!\left(n+\frac{3}{2}\right)}\sum_{p=0}^{\lfloor j/2\rfloor+n}\frac{(-1)^{n-p}\,2^{\,j-2p-1}\,\Gamma\!\left(\frac{1}{2}+j+n-p\right)}{p!\,(j+2n-2p)!}\,x^{\,j+2n-2p}.
\]
Theorem 1 is now proved.    □
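The power form of Theorem 1 can be cross-checked against the Legendre expansion of Lemma 1 by converting the latter to monomial coefficients. A sketch under our reading of both formulas (helper names are ours):

```python
import math

import numpy as np
from numpy.polynomial import legendre as leg

def gjp_leg_coeffs(j, n):
    """Legendre coefficients of G_j^n(x) from the Lemma 1 expansion."""
    c = np.zeros(j + 2 * n + 1)
    for i in range(n + 1):
        c[j + 2 * i] = (0.5 * math.factorial(n) * (-1) ** i * (2 * j + 4 * i + 1)
                        * math.comb(n, i) * math.gamma(j + i + 0.5)
                        / math.gamma(j + i + n + 1.5))
    return c

def gjp_power_coeffs(j, n):
    """Monomial coefficients (ascending powers) of G_j^n(x) from Theorem 1."""
    c = np.zeros(j + 2 * n + 1)
    pref = math.factorial(2 * n + 1) / math.gamma(n + 1.5)
    for p in range(j // 2 + n + 1):
        deg = j + 2 * n - 2 * p
        c[deg] = (pref * (-1) ** (n - p) * 2.0 ** (j - 2 * p - 1)
                  * math.gamma(0.5 + j + n - p)
                  / (math.factorial(p) * math.factorial(deg)))
    return c

for n in (1, 2):
    for j in range(5):
        assert np.allclose(leg.leg2poly(gjp_leg_coeffs(j, n)),
                           gjp_power_coeffs(j, n))
```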
Our next objective is to present and prove a significant theorem that gives a Legendre-based expression for the high-order derivatives of G j n ( x ) .
Theorem 2.
Let $j$ and $q$ be two non-negative integers with $j+2n\ge q$. The $q$th derivative of the GJPs has the following Legendre expansion:
\[
D^{q}G_j^{\,n}(x)=(-1)^{n}\,2^{q}\,n!\,\Gamma\!\left(\tfrac{1}{2}+j+n\right)\sum_{r=0}^{\lfloor (j-q)/2\rfloor+n}\frac{(-1)^{r}\left(\tfrac{1}{2}+j+2n-2r-q\right)(1+n-r-q)_{r}}{r!\,\Gamma\!\left(\tfrac{3}{2}+j+2n-r-q\right)\left(\tfrac{1}{2}+j+n-r\right)_{r}}\,P_{j+2n-q-2r}(x).\tag{10}
\]
Proof. 
The analytic form of $G_j^{\,n}(x)$ in Theorem 1 allows one to write
\[
D^{q}G_j^{\,n}(x)=\frac{n!}{\sqrt{\pi}}\sum_{m=0}^{\lfloor j/2\rfloor+n}\frac{(-1)^{n-m}\,2^{\,j-2m+2n}\,\Gamma\!\left(\tfrac{1}{2}+j-m+n\right)}{m!\,(j-2m+2n-q)!}\,x^{\,j-2m+2n-q}.\tag{11}
\]
The application of the inversion formula of the Legendre polynomials turns Formula (11) into the following formula:
\[
D^{q}G_j^{\,n}(x)=2^{q}\,n!\sum_{m=0}^{\lfloor j/2\rfloor+n}\frac{(-1)^{n-m}\,\Gamma\!\left(\tfrac{1}{2}+j-m+n\right)}{m!}\sum_{r=0}^{\lfloor (j-q)/2\rfloor+n-m}\frac{\left(\tfrac{1}{2}+j-2m+2n-q-2r\right)}{r!\,\Gamma\!\left(\tfrac{3}{2}+j-2m+2n-q-r\right)}\,P_{j-2m+2n-q-2r}(x),
\]
which can be rewritten as follows:
\[
D^{q}G_j^{\,n}(x)=2^{q-1}\,n!\sum_{r=0}^{\lfloor (j-q)/2\rfloor+n}(1+2j+4n-4r-2q)\left[\sum_{\ell=0}^{r}\frac{(-1)^{\ell+n}\,\Gamma\!\left(\tfrac{1}{2}+j-\ell+n\right)}{\ell!\,(r-\ell)!\,\Gamma\!\left(\tfrac{3}{2}+j+2n-\ell-r-q\right)}\right]P_{j+2n-q-2r}(x).
\]
Now, noting the identity
\[
\sum_{\ell=0}^{r}\frac{(-1)^{\ell+n}\,\Gamma\!\left(\tfrac{1}{2}+j-\ell+n\right)}{\ell!\,(r-\ell)!\,\Gamma\!\left(\tfrac{3}{2}+j+2n-\ell-r-q\right)}=\frac{(-1)^{n}\,\Gamma\!\left(j+n+\tfrac{1}{2}\right)}{r!\,\Gamma\!\left(\tfrac{1}{2}(3+2j+4n-2r-2q)\right)}\,{}_{2}F_{1}\!\left(\left.\begin{matrix}-r,\ q+r-j-2n-\tfrac{1}{2}\\[2pt]\tfrac{1}{2}-j-n\end{matrix}\right|1\right),
\]
it is easy, using the Chu–Vandermonde identity ([41]), to express $D^{q}G_j^{\,n}(x)$ as
\[
D^{q}G_j^{\,n}(x)=(-1)^{n}\,2^{q}\,n!\,\Gamma\!\left(\tfrac{1}{2}+j+n\right)\sum_{r=0}^{\lfloor (j-q)/2\rfloor+n}\frac{(-1)^{r}\left(\tfrac{1}{2}+j+2n-2r-q\right)(1+n-r-q)_{r}}{r!\,\Gamma\!\left(\tfrac{3}{2}+j+2n-r-q\right)\left(\tfrac{1}{2}+j+n-r\right)_{r}}\,P_{j+2n-q-2r}(x).
\]
This proves Theorem 2.    □
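Theorem 2 can likewise be checked by differentiating the Lemma 1 expansion with `numpy.polynomial.legendre.legder` and comparing with the closed form. A sketch under our reading of both results (helper names are ours; the Pochhammer symbol is computed as a plain product so that non-positive arguments are handled safely):

```python
import math

import numpy as np
from numpy.polynomial import legendre as leg

def gjp_leg_coeffs(j, n):
    """Legendre coefficients of G_j^n(x) from the Lemma 1 expansion."""
    c = np.zeros(j + 2 * n + 1)
    for i in range(n + 1):
        c[j + 2 * i] = (0.5 * math.factorial(n) * (-1) ** i * (2 * j + 4 * i + 1)
                        * math.comb(n, i) * math.gamma(j + i + 0.5)
                        / math.gamma(j + i + n + 1.5))
    return c

def poch(x0, k):
    """Pochhammer symbol (x0)_k as a plain product."""
    out = 1.0
    for i in range(k):
        out *= x0 + i
    return out

def gjp_deriv_coeffs(j, n, q):
    """Legendre coefficients of D^q G_j^n(x) from the closed form of Theorem 2."""
    c = np.zeros(j + 2 * n - q + 1)
    pref = (-1) ** n * 2.0 ** q * math.factorial(n) * math.gamma(j + n + 0.5)
    for r in range((j - q) // 2 + n + 1):
        k = j + 2 * n - q - 2 * r              # index of P_{j+2n-q-2r}
        if k < 0:
            continue
        c[k] = pref * ((-1) ** r * (k + 0.5) * poch(n - q - r + 1, r)
                       / (math.factorial(r) * math.gamma(j + 2 * n - q - r + 1.5)
                          * poch(j + n - r + 0.5, r)))
    return c

for n in (1, 2):
    for j in range(4):
        for q in range(1, 2 * n + 1):
            assert np.allclose(leg.legder(gjp_leg_coeffs(j, n), q),
                               gjp_deriv_coeffs(j, n, q))
```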
Now, the shifted Legendre polynomials can be used to represent the high-order derivatives of $SG_j^{\,n}(x)$, as in the next corollary.
Corollary 2.
Let $j$ and $q$ be two non-negative integers with $j+2n\ge q$. The $q$th derivative of $SG_j^{\,n}(x)$ can be represented in terms of the shifted Legendre polynomials as
\[
D^{q}SG_j^{\,n}(x)=2^{2q-2n}\,(b-a)^{2n-q}\,n!\,\Gamma\!\left(j+n+\tfrac{1}{2}\right)\sum_{r=0}^{\lfloor (j-q)/2\rfloor+n}\frac{(-1)^{n+r}\left(j+2n-q-2r+\tfrac{1}{2}\right)(n-q-r+1)_{r}}{r!\,\Gamma\!\left(j+2n-q-r+\tfrac{3}{2}\right)\left(j+n-r+\tfrac{1}{2}\right)_{r}}\,\tilde P_{j+2n-q-2r}(x).\tag{12}
\]
Proof. 
Formula (12) can be deduced as a direct consequence of Formula (10) by replacing $x$ with $\frac{2x-a-b}{b-a}$.    □

4. Treating Linear High Even-Order Differential Equations via the Petrov–Galerkin Method

The focus of this section is on thoroughly analyzing a spectral solution to the high even-order BVPs presented below:
\[
(-1)^{n}D^{2n}u(x)+\sum_{q=1}^{2n-1}\xi_{q}\,d_{q}\,D^{q}u(x)+d_{0}\,u(x)=g(x),\qquad x\in I=[a,b],\quad n\ge 1,\tag{13}
\]
governed by the BCs:
\[
D^{k}u(a)=D^{k}u(b)=0,\qquad k=0,1,\dots,n-1,\tag{14}
\]
where $\{d_q,\ q=0,1,\dots,2n-1\}$ are arbitrary real constants and the $\xi_q$ are defined as
\[
\xi_{q}=\begin{cases}(-1)^{q/2}, & q\ \text{even},\\[2pt](-1)^{(q+1)/2}, & q\ \text{odd}.\end{cases}
\]
Now, we aim to solve (13)–(14) utilizing the SGJPs defined in (5) as basis functions. So, we select the following basis functions:
\[
\psi_{j,n}(x)=SG_j^{\,n}(x)=(b-x)^{n}(x-a)^{n}\,\tilde R_j^{(n,n)}(x),\qquad j=0,1,2,\dots.\tag{15}
\]
It should be noticed that the $\psi_{j,n}(x)$, $j=0,1,2,\dots$, are linearly independent and orthogonal with respect to the weight function $\tilde w(x)=(b-x)^{-n}(x-a)^{-n}$.
Consider the Sobolev spaces $H^{n}(I)$ $(n=0,1,2,\dots)$, with inner product denoted by $(\cdot,\cdot)_{n}$ and norm by $\|\cdot\|_{n}$ (see [42]), and consider the following space:
\[
H_{0}^{n}(I)=\left\{u\in H^{n}(I):u^{(q)}(a)=u^{(q)}(b)=0,\ 0\le q\le n-1\right\},\tag{16}
\]
where $u^{(q)}(x)=\frac{d^{q}u}{dx^{q}}$. Now, define the following subspace of $H_{0}^{n}(I)$:
\[
V_{N}=\operatorname{span}\{\psi_{0,n}(x),\psi_{1,n}(x),\dots,\psi_{N,n}(x)\}.
\]

4.1. Function Approximation

Assume now that the function $u(x)\in H_{0}^{n}(I)$ (defined in (16)) can be expanded in terms of the polynomials $\psi_{j,n}(x)$ as
\[
u(x)=\sum_{j=0}^{\infty}c_{j}\,\psi_{j,n}(x),
\]
where
\[
c_{j}=\frac{(2j+2n+1)\,(j+2n)!}{(b-a)^{2n+1}\,j!\,(n!)^{2}}\int_{a}^{b}\frac{u(x)\,\psi_{j,n}(x)}{(x-a)^{n}(b-x)^{n}}\,dx.
\]
Additionally, assume an approximate solution $u_{N}(x)\in V_{N}$ to (13)–(14) that can be expanded as
\[
u_{N}(x)=\sum_{j=0}^{N}c_{j}\,\psi_{j,n}(x).
\]
In the following subsection, we present how to obtain the proposed numerical solution u N ( x ) using a suitable spectral method.

4.2. Petrov–Galerkin Approximation to (13)–(14)

In this section, we are interested in applying the shifted generalized Jacobi Petrov–Galerkin method (SGJPGM) to solve (13)–(14). The idea is to pick two distinct sets of trial and test functions: the trial functions are selected so as to fulfill the BCs (14), while the test functions need not satisfy them; the residual of (13) is then enforced to be orthogonal to the selected test functions. It is appropriate to select the shifted Legendre polynomials as test functions, since it is evident from Formula (7) that the basis functions are given in terms of the shifted Legendre polynomials. In this regard, we pick the following test functions:
\[
\chi_{k}(x)=\tilde P_{k}(x).\tag{18}
\]
To apply the SGJPGM to solve (13)–(14), we have to find $u_{N}(x)\in V_{N}$ such that
\[
\left((-1)^{n}D^{2n}u_{N}(x),\tilde P_{k}(x)\right)+\sum_{q=1}^{2n-1}\xi_{q}\,d_{q}\left(D^{q}u_{N}(x),\tilde P_{k}(x)\right)+d_{0}\left(u_{N}(x),\tilde P_{k}(x)\right)=\left(g(x),\tilde P_{k}(x)\right),\qquad 0\le k\le N,\tag{19}
\]
where $\left(u(x),F(x)\right)=\int_{a}^{b}u(x)\,F(x)\,dx$ is the inner product of the space $L^{2}[a,b]$.
Now, if we denote
\[
g_{k}=\left(g(x),\tilde P_{k}(x)\right),\qquad \mathbf g=(g_{0},g_{1},\dots,g_{N})^{T},\qquad B^{(2n)}=\left(b_{kj}^{(2n)}\right)_{0\le k,j\le N},\qquad B^{(q)}=\left(b_{kj}^{(q)}\right)_{0\le k,j\le N},\ 1\le q\le 2n-1,\qquad B^{(0)}=\left(b_{kj}^{(0)}\right)_{0\le k,j\le N},
\]
then the matrix form corresponding to (19) is
\[
\left(B^{(2n)}+\sum_{q=1}^{2n-1}d_{q}\,B^{(q)}+d_{0}\,B^{(0)}\right)\mathbf c=\mathbf g,\tag{20}
\]
where $\mathbf c=(c_{0},c_{1},\dots,c_{N})^{T}$ and the nonzero elements of the matrices $B^{(2n)}$, $B^{(q)}$, $1\le q\le 2n-1$, and $B^{(0)}$ are given explicitly in the following theorem.
Theorem 3.
If the trial and test basis functions $\psi_{j,n}(x)$ and $\chi_{k}(x)$ are as selected in (15) and (18), and if we denote $b_{kj}^{(2n)}=\left((-1)^{n}D^{2n}\psi_{j,n}(x),\tilde P_{k}(x)\right)$, $b_{kj}^{(q)}=\xi_{q}\left(D^{q}\psi_{j,n}(x),\tilde P_{k}(x)\right)$, $1\le q\le 2n-1$, and $b_{kj}^{(0)}=\left(\psi_{j,n}(x),\tilde P_{k}(x)\right)$, then the nonzero elements of the matrices $B^{(2n)}$, $B^{(q)}$, $1\le q\le 2n-1$, and $B^{(0)}$ are given explicitly as follows:
\[
b_{kj}^{(2n)}=\frac{2^{2n-1}\,n\,(b-a)\left(\frac{j-k}{2}+n-1\right)!\,\Gamma\!\left(\frac{j+k+1}{2}+n\right)}{\left(\frac{j-k}{2}\right)!\,\Gamma\!\left(\frac{j+k+3}{2}\right)},\qquad j\ge k,\ (j+k)\ \text{even},\tag{21}
\]
\[
b_{kj}^{(q)}=\xi_{q}\,(-1)^{n}\,2^{q}\,n!\left(\frac{b-a}{2}\right)^{2n-q+1}\frac{\left(\frac{1}{2}(j-k+q)-1\right)!\,\Gamma\!\left(\frac{1}{2}(1+j+k+q)\right)}{\left(\frac{1}{2}(j-k+2n-q)\right)!\,\Gamma\!\left(\frac{1}{2}(3+j+k+2n-q)\right)(q-n-1)!},\qquad (j+k+q)\ \text{even},\ k\le j+2n-q,\ 1\le q\le 2n-1,\tag{22}
\]
\[
b_{kj}^{(0)}=(-1)^{\frac{k-j}{2}}\left(\frac{b-a}{2}\right)^{2n+1}n!\binom{n}{\frac{k-j}{2}}\frac{\Gamma\!\left(\frac{1}{2}(1+j+k)\right)}{\Gamma\!\left(\frac{1}{2}(3+j+k)+n\right)},\qquad 0\le k-j\le 2n.\tag{23}
\]
Proof. 
To compute the elements of the matrices $B^{(2n)}$, $B^{(q)}$, $1\le q\le 2n-1$, and $B^{(0)}$, it is required to compute $\left(D^{q}\psi_{j,n}(x),\tilde P_{k}(x)\right)$. Based on Formula (12), $D^{q}\psi_{j,n}(x)$ has the following expression:
\[
D^{q}\psi_{j,n}(x)=\sum_{r=0}^{\lfloor (j-q)/2\rfloor+n}A_{r,j,q}\,\tilde P_{j+2n-q-2r}(x),\tag{24}
\]
where the $A_{r,j,q}$ take the form
\[
A_{r,j,q}=(-1)^{n+r}\,2^{q}\,n!\left(\frac{b-a}{2}\right)^{2n-q}\Gamma\!\left(\tfrac{1}{2}+j+n\right)\frac{\left(\tfrac{1}{2}+j+2n-2r-q\right)(1+n-r-q)_{r}}{r!\,\Gamma\!\left(\tfrac{3}{2}+j+2n-r-q\right)\left(\tfrac{1}{2}+j+n-r\right)_{r}}.
\]
Now, Formula (24) along with the well-known orthogonality property of the shifted Legendre polynomials in (1) yields
\[
\left(D^{q}\psi_{j,n}(x),\tilde P_{k}(x)\right)=\sum_{r=0}^{\lfloor (j-q)/2\rfloor+n}A_{r,j,q}\,\delta_{j+2n-q-2r,k}\,h_{k},\tag{25}
\]
where $h_{k}=\frac{b-a}{2k+1}$ and $\delta_{j,k}$ is the well-known Kronecker delta function.
It is easy to see that Formula (25) reduces to the following formula:
\[
\left(D^{q}\psi_{j,n}(x),\tilde P_{k}(x)\right)=\begin{cases}(-1)^{n}\,2^{q}\,n!\left(\frac{b-a}{2}\right)^{2n-q+1}\frac{\left(\frac{1}{2}(j-k+q)-1\right)!\,\Gamma\left(\frac{1}{2}(1+j+k+q)\right)}{\left(\frac{1}{2}(j-k+2n-q)\right)!\,\Gamma\left(\frac{1}{2}(3+j+k+2n-q)\right)(q-n-1)!}, & (j+k+q)\ \text{even},\ k\le j+2n-q,\\[6pt]0, & \text{otherwise}.\end{cases}\tag{26}
\]
Now, regarding the entries of the matrix $B^{(2n)}$, it is clear from Formula (26) that
\[
b_{kj}^{(2n)}=\left((-1)^{n}D^{2n}\psi_{j,n}(x),\tilde P_{k}(x)\right)=\begin{cases}\frac{2^{2n-1}\,n\,(b-a)\left(\frac{j-k}{2}+n-1\right)!\,\Gamma\left(\frac{j+k+1}{2}+n\right)}{\left(\frac{j-k}{2}\right)!\,\Gamma\left(\frac{j+k+3}{2}\right)}, & j\ge k,\ (j+k)\ \text{even},\\[6pt]0, & \text{otherwise},\end{cases}
\]
and this proves Formula (21).
Now, it can be seen that Formula (22) is an immediate consequence of Formula (26). Formula (23) is an immediate result of Formula (7) together with the orthogonality relation (6).    □
Remark 2.
Equation (13) governed by non-homogeneous BCs can be converted into one governed by homogeneous BCs of the form (14) using a suitable transformation; see [18].
Remark 3.
Some of the characteristics and advantages of the SGJPGM that are used to solve (13)(14) can be listed as follows:
  • The Petrov–Galerkin method is used to turn the linear even-order BVPs in (13)(14) into linear systems of equations (20) that can be efficiently solved.
  • For some particular two-point BVPs (13), and in particular for d q = 0 , the system in (20) reduces to an upper triangular system that can be easily solved. This, of course, gives an advantage when applying this method to these types of equations.
Remark 4.
In the following Algorithm 1, we write the steps required to obtain the solution of (13)(14) using SGJPGM.
Algorithm 1: Required steps to solve (13)–(14) by the SGJPGM
  Step 1. Choose the basis functions ψ j , n ( x ) and χ k ( x ) as in (15) and (18).
  Step 2. Apply the Petrov–Galerkin method to (13)–(14) to obtain the variational formulation in (19).
  Step 3. Convert (19) into its corresponding matrix system (20).
  Step 4. Compute the unknown vector c in (20) by a suitable numerical solver.
  Step 5. Obtain the numerical solution: $u_N(x)=\sum_{j=0}^{N}c_j\,\psi_{j,n}(x)$.
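To illustrate Algorithm 1, the following sketch applies the SGJPGM to the simplest case $n=1$ of (13), namely $-u''+u=g$ on $[0,1]$ with $u(0)=u(1)=0$ and a right-hand side manufactured so that $u=\sin(\pi x)$. For brevity the matrices are assembled by quadrature rather than by the closed forms of Theorem 3; the test problem, tolerances, and helper names are our choices, not the paper's:

```python
import math

import numpy as np
from numpy.polynomial import legendre as leg

a, b, n, N = 0.0, 1.0, 1, 14

def basis_coeffs(j):
    """Shifted-Legendre coefficients of psi_j = SG_j^1 on [a, b] (Corollary 1)."""
    c = np.zeros(j + 2 * n + 1)
    pref = math.factorial(n) * (b - a) ** (2 * n) / 2.0 ** (2 * n + 1)
    for i in range(n + 1):
        c[j + 2 * i] = (pref * (-1) ** i * (2 * j + 4 * i + 1) * math.comb(n, i)
                        * math.gamma(j + i + 0.5) / math.gamma(j + i + n + 1.5))
    return c

# Gauss-Legendre quadrature mapped to [a, b]
t, w = leg.leggauss(60)
x = (b - a) / 2 * (t + 1) + a
w = w * (b - a) / 2

g = (math.pi ** 2 + 1) * np.sin(math.pi * x)            # manufactured source term
P = np.array([leg.legval(t, [0] * k + [1]) for k in range(N + 1)])  # test functions

A = np.zeros((N + 1, N + 1))
for j in range(N + 1):
    cj = basis_coeffs(j)
    psi = leg.legval(t, cj)
    psi2 = (2 / (b - a)) ** 2 * leg.legval(t, leg.legder(cj, 2))
    A[:, j] = P @ (w * (-psi2 + psi))                   # rows of (19)
rhs = P @ (w * g)

c = np.linalg.solve(A, rhs)
uN = sum(c[j] * leg.legval(t, basis_coeffs(j)) for j in range(N + 1))
assert np.max(np.abs(uN - np.sin(math.pi * x))) < 1e-8  # spectral accuracy
```

The exponentially small error reflects the spectral accuracy expected of Galerkin-type methods for smooth solutions.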

5. Treatment of the Non-Linear Differential Equations via a Matrix Approach

This section is devoted to analyzing and implementing in detail the algorithm designed to solve the non-linear even-order two-point BVPs. Now, consider the following non-linear $2n$th-order BVPs:
\[
Z^{(2n)}(x)=F\!\left(x,Z(x),Z^{(1)}(x),Z^{(2)}(x),\dots,Z^{(2n-1)}(x)\right),\qquad x\in I,\tag{27}
\]
governed by the homogeneous BCs:
\[
Z^{(i)}(a)=Z^{(i)}(b)=0,\qquad i=0,1,\dots,n-1.\tag{28}
\]
We consider an approximation to the solution $Z(x)$ of Equation (27) in the form:
\[
Z(x)\approx Z_{N}(x)=\sum_{i=0}^{N}c_{i}\,\psi_{i,n}(x)=C^{T}\,\Psi(x),\tag{29}
\]
where
\[
C^{T}=[c_{0},c_{1},\dots,c_{N}],\qquad \Psi(x)=[\psi_{0,n}(x),\psi_{1,n}(x),\dots,\psi_{N,n}(x)]^{T}.
\]
We tackle (27)–(28) by expressing the derivatives of the basis polynomials in terms of the basis polynomials themselves. This can be done via the operational matrix method. Thus, the main idea behind the derivation of our algorithm is based on utilizing the operational matrix of the SGJPs that are employed as basis functions. In the following subsection, we derive this new operational matrix of derivatives.

5.1. New Operational Matrix of Derivatives Based on the SGJPs

This section is devoted to introducing a new operational matrix of derivatives of the SGJPs. This matrix serves to solve the linear and non-linear even-order two-point BVPs using a unified approach.
We now present and demonstrate the fundamental theorem, which enables the introduction of a new operational derivative matrix.
Theorem 4.
If the polynomials $\psi_{i,n}(x)$ are selected as in (15), then the following relation holds for all $i\ge 1$:
\[
D\,\psi_{i,n}(x)=\frac{2}{b-a}\sum_{\substack{j=0\\ (i+j)\ \mathrm{odd}}}^{i-1}(2n+2j+1)\,\psi_{j,n}(x)+\rho_{i}(x),\tag{30}
\]
with
\[
\rho_{i}(x)=n\,(b-x)^{n-1}(x-a)^{n-1}\times\begin{cases}a+b-2x, & i\ \mathrm{even},\\[2pt]a-b, & i\ \mathrm{odd}.\end{cases}\tag{31}
\]
Proof. 
Without any loss of generality, we consider the case corresponding to $I=[-1,1]$. In this case, Formula (30) turns into the following formula:
\[
D\,\phi_{i,n}(x)=\sum_{\substack{j=0\\ (i+j)\ \mathrm{odd}}}^{i-1}(2n+2j+1)\,\phi_{j,n}(x)+\theta_{i}(x),\tag{32}
\]
where $\phi_{i,n}(x)$ is given by
\[
\phi_{i,n}(x)=(1-x^{2})^{n}\,R_{i}^{(n,n)}(x),
\]
and $\theta_{i}(x)$ is given by
\[
\theta_{i}(x)=2n\,(-1)^{n}\left(x^{2}-1\right)^{n-1}\times\begin{cases}x, & i\ \mathrm{even},\\[2pt]1, & i\ \mathrm{odd}.\end{cases}
\]
This formula was stated and proved in [40], taking Remark 1 into consideration. Now, replacing $x$ with $\frac{2x-a-b}{b-a}$ in Formula (32) yields the desired result.    □
Now, with the aid of Theorem 4, one can deduce that $\frac{d\Psi(x)}{dx}$ has the following expression:
\[
\frac{d\Psi(x)}{dx}=S\,\Psi(x)+\rho(x),\tag{33}
\]
where $\rho(x)=\left(\rho_{0}(x),\rho_{1}(x),\dots,\rho_{N}(x)\right)^{T}$ and $S=\left(s_{ij}\right)_{0\le i,j\le N}$ is an $(N+1)\times(N+1)$ matrix whose nonzero entries can be explicitly determined from relation (30) as
\[
s_{ij}=\begin{cases}\dfrac{2}{b-a}\,(2n+2j+1), & i>j,\ (i+j)\ \mathrm{odd},\\[4pt]0, & \mathrm{otherwise}.\end{cases}\tag{34}
\]
As an example, for $N=6$ and $n=4$, we have
\[
S=\frac{2}{b-a}\begin{pmatrix}0&0&0&0&0&0&0\\ 9&0&0&0&0&0&0\\ 0&11&0&0&0&0&0\\ 9&0&13&0&0&0&0\\ 0&11&0&15&0&0&0\\ 9&0&13&0&17&0&0\\ 0&11&0&15&0&19&0\end{pmatrix}_{7\times 7}.
\]
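The relation (33) and the matrix $S$ can be verified numerically. The sketch below builds the $\psi_j$ from the shifted-Legendre expansion of Corollary 1 (as we read it) and checks $d\Psi/dx=S\,\Psi+\rho$ pointwise for an illustrative choice $n=2$, $[a,b]=[0,2]$ (all sizes and names are ours):

```python
import math

import numpy as np
from numpy.polynomial import legendre as leg

a, b, n, N = 0.0, 2.0, 2, 6

def basis_coeffs(j):
    """Shifted-Legendre coefficients of psi_j = SG_j^n on [a, b] (Corollary 1)."""
    c = np.zeros(j + 2 * n + 1)
    pref = math.factorial(n) * (b - a) ** (2 * n) / 2.0 ** (2 * n + 1)
    for i in range(n + 1):
        c[j + 2 * i] = (pref * (-1) ** i * (2 * j + 4 * i + 1) * math.comb(n, i)
                        * math.gamma(j + i + 0.5) / math.gamma(j + i + n + 1.5))
    return c

# operational matrix (34)
S = np.zeros((N + 1, N + 1))
for i in range(N + 1):
    for j in range(i):
        if (i + j) % 2 == 1:
            S[i, j] = 2.0 / (b - a) * (2 * n + 2 * j + 1)

x = np.linspace(a + 0.1, b - 0.1, 25)
t = 2 * (x - a) / (b - a) - 1

Psi = np.array([leg.legval(t, basis_coeffs(j)) for j in range(N + 1)])
dPsi = np.array([2 / (b - a) * leg.legval(t, leg.legder(basis_coeffs(j)))
                 for j in range(N + 1)])
rho = np.array([n * (b - x) ** (n - 1) * (x - a) ** (n - 1)
                * ((a + b - 2 * x) if i % 2 == 0 else (a - b))
                for i in range(N + 1)])                     # Theorem 4, (31)

assert np.allclose(dPsi, S @ Psi + rho)
```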
Corollary 3.
The $q$th derivative of the vector $\Psi(x)$ is given by
\[
\frac{d^{q}\Psi(x)}{dx^{q}}=S^{q}\,\Psi(x)+\sum_{m=0}^{q-1}S^{\,q-m-1}\,\frac{d^{m}\rho(x)}{dx^{m}}.\tag{35}
\]
Proof. 
The repeated application of Formula (33) yields the expression in (35).    □

5.2. Our Proposed Matrix Method for Treating (27)–(28)

This section is confined to presenting a matrix approach for treating (27)–(28). The key to our proposed approach is employing the operational matrix of derivatives of the SGJPs derived in Section 5.1. More precisely, the shifted generalized Jacobi operational matrix method (SGJOMM) is employed to treat (27)–(28).
Now, if $Z(x)$ is approximated as in (29), then based on Formula (33), the derivatives of the approximate solution $Z_{N}(x)$ given by (29) can be represented as
\[
\begin{aligned}Z_{N}'(x)&=C^{T}\left(S\,\Psi(x)+\rho(x)\right),\\ Z_{N}''(x)&=C^{T}\left(S^{2}\,\Psi(x)+\rho_{2}(x)\right),\\ &\ \,\vdots\\ Z_{N}^{(2n-1)}(x)&=C^{T}\left(S^{2n-1}\,\Psi(x)+\rho_{2n-1}(x)\right),\\ Z_{N}^{(2n)}(x)&=C^{T}\left(S^{2n}\,\Psi(x)+\rho_{2n}(x)\right),\end{aligned}\tag{36}
\]
where $S$ is the operational matrix of derivatives whose elements are given explicitly in (34), and the vector $\rho_{q}(x)$ is given by the following formula:
\[
\rho_{q}(x)=\sum_{m=0}^{q-1}S^{\,q-m-1}\,\frac{d^{m}\rho(x)}{dx^{m}},\qquad 2\le q\le 2n,
\]
where (31) provides the components of the vector $\rho(x)$.
To find the residual of Equation (27), we use the representations in (36) to get
\[
R(x)=C^{T}\left(S^{2n}\Psi(x)+\rho_{2n}(x)\right)-F\!\left(x,\ C^{T}\Psi(x),\ C^{T}\!\left(S\Psi(x)+\rho(x)\right),\ C^{T}\!\left(S^{2}\Psi(x)+\rho_{2}(x)\right),\dots,C^{T}\!\left(S^{2n-1}\Psi(x)+\rho_{2n-1}(x)\right)\right).
\]
The philosophy behind the application of the typical collocation method to obtain the desired numerical solution $Z_{N}(x)$ is to enforce the residual to vanish at $(N+1)$ suitably selected points, say $x_{i}$, $0\le i\le N$; that is, we get
\[
C^{T}\left(S^{2n}\Psi(x_{i})+\rho_{2n}(x_{i})\right)=F\!\left(x_{i},\ C^{T}\Psi(x_{i}),\ C^{T}\!\left(S\Psi(x_{i})+\rho(x_{i})\right),\dots,C^{T}\!\left(S^{2n-1}\Psi(x_{i})+\rho_{2n-1}(x_{i})\right)\right),\qquad 0\le i\le N.\tag{37}
\]
There are numerous ways to choose these collocation points. A numerical approximation Z N ( x ) is generated for each option. These are some options for these collocation points:
1. The $(N+1)$ zeros of the shifted Legendre polynomial $\tilde P_{N+1}(x)$.
2. The $(N+1)$ zeros of the shifted Chebyshev polynomial of the first kind $\tilde T_{N+1}(x)$.
3. The $(N+1)$ zeros of the shifted Chebyshev polynomial of the second kind $\tilde U_{N+1}(x)$.
It is evident that, for every choice of collocation points, a set of $(N+1)$ non-linear equations in the expansion coefficients $c_i$ is produced. Newton's iterative method, widely used to solve non-linear systems, can be applied here, and the associated approximate solution $Z_N(x)$ can then be acquired.
Remark 5.
We mention here the advantages of the SGJOMM derived in Section 5 to solve (27)(28). They can be listed as follows:
  • The simplicity of the SGJOMM as well as its high efficiency.
  • Multiple solutions can be obtained using different collocation points.
  • The derived operational matrix approach can be used to solve both linear and non-linear two-point even-order BVPs.
Remark 6.
In Algorithm 2, we list the required steps to obtain the solution of the non-linear Equation (27) using the SGJOMM.
Algorithm 2: Required steps to solve (27)–(28) by the SGJOMM
  Step 1. Choose the basis functions ψ j , n as in (15).
  Step 2. Consider an approximate solution to (27) of the form $Z_N(x)=\sum_{i=0}^{N}c_i\,\psi_{i,n}(x)$.
  Step 3. Represent $Z_N(x)$ and $D^qZ_N(x)$, $1\le q\le 2n$, as in (29) and (36), respectively.
  Step 4. Apply the collocation method to obtain the system in (37).
  Step 5. Solve the system (37) by using Newton’s iterative method.
  Step 6. Obtain the approximate solution Z N ( x ) .
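Step 5 of Algorithm 2 relies on Newton's iteration for non-linear algebraic systems. A minimal, generic sketch with a forward-difference Jacobian (all names and tolerances are illustrative; in practice one would supply the collocated residual of (37) as F):

```python
import numpy as np

def newton_system(F, c0, tol=1e-12, maxit=50, h=1e-7):
    """Solve F(c) = 0 by Newton's iteration with a forward-difference
    Jacobian -- a minimal sketch of Step 5 of Algorithm 2."""
    c = np.asarray(c0, dtype=float)
    for _ in range(maxit):
        r = F(c)
        if np.linalg.norm(r) < tol:
            break
        J = np.empty((r.size, c.size))
        for j in range(c.size):
            d = np.zeros_like(c)
            d[j] = h
            J[:, j] = (F(c + d) - r) / h      # column j of the Jacobian
        c = c - np.linalg.solve(J, r)
    return c
```

For example, applied to the toy system x² + y² = 1, x = y from the starting guess (1, 0.5), the iteration converges to (√2/2, √2/2) in a handful of steps.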

6. Discussion on the Convergence Analysis

Here, we provide an extensive analysis of the proposed generalized Jacobi expansion, including a study of its convergence and an examination of the resulting global error. Two theorems are given and proved in this context. The first shows that the truncated generalized Jacobi expansion $u_N(x) = \sum_{j=0}^{N} c_j\,\psi_{j,n}(x)$ converges uniformly to u ( x ) . In addition, an upper bound for the global error is established in the second theorem. The following four lemmas are useful in the sequel.
Lemma 2
([43]). The shifted Legendre polynomials $P_j(x)$, $x \in I$, satisfy the following inequality:
$\sqrt{\sin\vartheta}\,\bigl|P_j(\cos\vartheta)\bigr| < \sqrt{\frac{2}{\pi j}}, \quad \vartheta \in [0,\pi].$
Lemma 3
([38]). For a non-zero real number α and any non-negative integer j, one has
$\Gamma(j+\alpha) = O\!\left(j^{\alpha-1}\, j!\right).$
Lemma 4
([38]). The shifted polynomials $\widetilde{R}_j^{(n,n)}(x)$ satisfy the following inequality:
$\bigl|\widetilde{R}_j^{(n,n)}(x)\bigr| \le 1, \quad x \in I.$
Lemma 5
([18]). If the m times repeated integrations of P j ( x ) are denoted by
$I_j^{(m)}(x) = \underbrace{\int\!\!\int\!\cdots\!\int}_{m\ \text{times}} P_j(x)\,\underbrace{dx\,dx\cdots dx}_{m\ \text{times}},$
then, I j ( m ) ( x ) can be expressed as
$I_j^{(m)}(x) = \sum_{k=0}^{m} (-1)^{k}\, \frac{(b-a)^{m}}{4^{m}} \binom{m}{k}\, \frac{j+m-2k+\frac{1}{2}}{\left(j-k+\frac{1}{2}\right)_{m+1}}\, P_{j+m-2k}(x) + \bar{\pi}_{m-1}(x),$
where $\bar{\pi}_{m-1}(x)$ is a polynomial of degree at most $(m-1)$.
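As a quick numerical sanity check, the Bernstein-type bound of Lemma 2 can be spot-checked with standard Legendre polynomials on [−1, 1] (an illustration, not part of the proof):

```python
import numpy as np
from numpy.polynomial.legendre import Legendre

# Spot-check the Bernstein-type bound of Lemma 2:
#   sqrt(sin t) * |P_j(cos t)| < sqrt(2 / (pi * j))   for t in [0, pi].
theta = np.linspace(0.0, np.pi, 20001)
for j in range(1, 40):
    lhs = np.sqrt(np.sin(theta)) * np.abs(Legendre.basis(j)(np.cos(theta)))
    assert lhs.max() < np.sqrt(2.0 / (np.pi * j))
```

The maximum of the left-hand side sits visibly below the bound for every degree tested, consistent with the sharper known constant involving j + 1/2.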
Theorem 5.
If $u(x) = (b-x)^{n}(x-a)^{n}\, g(x)$ is expanded in an infinite series of the basis functions $\psi_{j,n}(x)$ defined in (15), i.e.,
$u(x) = \sum_{j=0}^{\infty} c_j\, \psi_{j,n}(x), \quad x \in I,$
then this series converges uniformly to $u(x)$, and the expansion coefficients $c_j$ satisfy the following inequality:
$|c_j| < \frac{M}{j^{2m-4n-2}\,(j!)^{2}}, \quad j > m, \quad m > 2n+1,$
where $M$ is a positive constant with $\bigl|g^{(m)}(x)\bigr| \le M$.
Proof. 
The assumption (38) and the orthogonality relation (6) allow one to write
$c_j = \frac{1}{h_j} \int_a^b \tilde{w}(x)\, u(x)\, \psi_{j,n}(x)\, dx.$
Now, substituting $u(x) = (b-x)^{n}(x-a)^{n} g(x)$ and using (15), we have
$c_j = \frac{1}{h_j} \int_a^b g(x)\,(b-x)^{n}\,(x-a)^{n}\, \widetilde{R}_j^{(n,n)}(x)\, dx.$
By virtue of Formula (7), we can write
$c_j = \frac{1}{h_j} \sum_{i=0}^{n} \eta_{i,j,n} \int_a^b g(x)\, P_{2i+j}(x)\, dx,$
where $\eta_{i,j,n}$ takes the following form:
$\eta_{i,j,n} = \frac{(-1)^{i}\, n!\, (b-a)^{2n}\, (4i+2j+1)\, \binom{n}{i}\, \Gamma\!\left(i+j+\frac{1}{2}\right)}{2^{2n+1}\, \Gamma\!\left(i+j+n+\frac{3}{2}\right)}.$
Using the properties of the shifted Legendre polynomials together with Lemma 5, and integrating by parts m times, (39) can be rewritten in the form
$c_j = \frac{1}{h_j} \sum_{i=0}^{n} \eta_{i,j,n} \int_a^b g^{(m)}(x)\, I_{2i+j}^{(m)}(x)\, dx.$
If we use the change of variable $x = \frac{b-a}{2}\cos\vartheta + \frac{b+a}{2}$, i.e., $\vartheta = \cos^{-1}\!\left(\frac{2x-a-b}{b-a}\right)$, then we have
$c_j = \frac{1}{h_j} \sum_{i=0}^{n} \eta_{i,j,n} \int_0^{\pi} g^{(m)}\!\left(\frac{\pi}{2}\,(1-\cos\vartheta)\right) I_{2i+j}^{(m)}\!\left(\frac{\pi}{2}\,(1-\cos\vartheta)\right)\, \sin\vartheta\, d\vartheta.$
Taking into consideration the assumption $\left|g^{(m)}\!\left(\frac{\pi}{2}(1-\cos\vartheta)\right)\right| \le M$, we get
$|c_j| < \frac{M}{h_j} \sum_{i=0}^{n} \left|\eta_{i,j,n}\right| \int_0^{\pi} \left|I_{2i+j}^{(m)}\!\left(\frac{\pi}{2}(1-\cos\vartheta)\right)\right| \left|\sin\vartheta\right|\, d\vartheta.$
After some manipulation, the following inequality can be produced from Lemmas 2, 3, and 5:
$|c_j| < \frac{M}{j^{2m-4n-2}\,(j!)^{2}}, \quad j > m, \quad m > 2n+1,$
which proves the theorem. □
Theorem 6.
Let u ( x ) satisfy the assumptions of Theorem 5, and consider the approximate solution $u_N(x)$ as in (17). The following error estimate holds:
$\left\|u(x)-u_N(x)\right\|_{\infty} < \frac{M\,(b-a)^{2n}}{N^{2m-4n-1}\,(N!)^{2}}.$
Proof. 
From relation (17), we have, for all $x \in I$,
$u(x)-u_N(x) = \sum_{j=0}^{\infty} c_j\,\psi_{j,n}(x) - \sum_{j=0}^{N} c_j\,\psi_{j,n}(x),$
and hence
$u(x)-u_N(x) = \sum_{j=N+1}^{\infty} c_j\,\psi_{j,n}(x).$
By virtue of Formula (15), we can write
$\left|u(x)-u_N(x)\right| \le \sum_{j=N+1}^{\infty} \left|c_j\right|\, (x-a)^{n}\,(b-x)^{n}\, \left|\widetilde{R}_j^{(n,n)}(x)\right|.$
With the aid of Lemma 4 and Theorem 5, the following pointwise inequality can be obtained:
$\left|u(x)-u_N(x)\right| < \frac{M\,(b-a)^{2n}}{N^{2m-4n-1}\,(N!)^{2}},$
and accordingly, we have
$\left\|u(x)-u_N(x)\right\|_{\infty} < \frac{M\,(b-a)^{2n}}{N^{2m-4n-1}\,(N!)^{2}}.$
Theorem 6 is now proved. □
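The super-algebraic decay promised by Theorems 5 and 6 is easy to observe numerically. The sketch below computes the coefficients of a smooth function by Gauss-Legendre quadrature (using the plain Legendre basis for illustration, not the paper's GJP basis; the function name is ours):

```python
import numpy as np
from numpy.polynomial.legendre import Legendre, leggauss

def legendre_coeffs(f, N, quad_pts=200):
    """Legendre coefficients c_j of f on [-1, 1], c_j = (j + 1/2) * int f P_j,
    computed with Gauss-Legendre quadrature; for smooth f these decay
    super-algebraically, mirroring the bound of Theorem 5."""
    t, w = leggauss(quad_pts)
    fv = f(t)
    return np.array([(j + 0.5) * np.sum(w * fv * Legendre.basis(j)(t))
                     for j in range(N + 1)])

c = legendre_coeffs(np.exp, 20)   # coefficients of e^x decay extremely fast
```

For f(x) = eˣ the leading coefficient equals sinh(1), and by degree 20 the coefficients have already dropped below the double-precision roundoff floor.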

7. Illustrative Problems and Comparisons

Examples of both linear and non-linear BVPs are presented here to demonstrate the usefulness and precision of our two suggested techniques. The shifted generalized Jacobi Petrov–Galerkin method (SGJPGM) is used to solve the linear differential equations, while the shifted generalized Jacobi operational matrix method (SGJOMM) is used to solve the non-linear differential equations in the examples below. First, we define the maximum absolute error (MAE) and the error resulting from the least-squares measure (LSM), i.e., the L∞- and L²-errors for a function u ( x ) defined on I. They can be defined, respectively, by
$\mathrm{MAE}\ (L^{\infty}\text{-error}) = \max_{x \in I}\,\left|u(x)-u_N(x)\right|,$
$\mathrm{LSM}\ (L^{2}\text{-error}) = \left(\int_a^b \left(u(x)-u_N(x)\right)^{2} dx\right)^{1/2},$
where $u_N(x)$ is the approximate solution of $u(x)$.
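In practice, the two error measures can be approximated on a dense grid and by Gaussian quadrature, respectively. A minimal sketch (the function names are ours):

```python
import numpy as np

def linf_error(u, uN, a, b, n=10001):
    """Maximum absolute error (MAE / L-infinity error) on a dense grid."""
    x = np.linspace(a, b, n)
    return np.max(np.abs(u(x) - uN(x)))

def l2_error(u, uN, a, b, quad_pts=200):
    """L2 error via Gauss-Legendre quadrature mapped to [a, b]."""
    t, w = np.polynomial.legendre.leggauss(quad_pts)
    x = (b - a) / 2 * t + (b + a) / 2
    return np.sqrt((b - a) / 2 * np.sum(w * (u(x) - uN(x)) ** 2))
```

For instance, with u(x) = x approximated by the zero function on [0, 1], these return 1 and 1/√3, matching the definitions above.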
Example 1.
Consider the following eighth-order BVP [44,45,46,47]:
$u^{(8)}(x) - u(x) = -8(2x+6)\,e^{x}, \quad x \in [0,1],$
$u(0)=0,\; u'(0)=1,\; u''(0)=0,\; u'''(0)=-3,\; u(1)=0,\; u'(1)=-e,\; u''(1)=-4e,\; u'''(1)=-9e,$
with the exact solution $u(x) = x(1-x)\,e^{x}$.
We apply our algorithm, namely the SGJPGM, to solve problem (40). Table 1 illustrates a comparison of L∞- and L²-errors resulting from our algorithm with distinct N. Table 2 compares the L∞-errors resulting from our algorithm with those resulting from the application of the following methods:
  • The spline method (SM) in [44].
  • The non-polynomial spline method (NPSM) in [45].
  • The reproducing kernel method (RKM) in [46].
  • The Legendre matrix method (LMM) in [47].
Furthermore, Figure 1 shows the MAEs of our algorithm at N = 7, while Figure 2 shows the log₁₀(L∞-errors) and log₁₀(L²-errors) of our algorithm with distinct N.
Example 2.
Consider the following eighth-order BVP [44,45,46]:
$u^{(8)}(x) - u(x) = -8(2x\cos x + 7\sin x), \quad x \in [-1,1],$
$u(-1)=0,\; u'(-1)=2\sin(1),\; u''(-1)=-4\cos(1)-2\sin(1),\; u'''(-1)=6\cos(1)-6\sin(1),\; u(1)=0,\; u'(1)=2\sin(1),\; u''(1)=4\cos(1)+2\sin(1),\; u'''(1)=6\cos(1)-6\sin(1),$
with the exact solution $u(x) = (x^{2}-1)\sin x$.
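Since functions of the form p(x) sin x + q(x) cos x are closed under differentiation, the exact solution of Example 2 can be verified mechanically by iterating the product rule on polynomial coefficient arrays (an independent check, not part of the paper's algorithm):

```python
import numpy as np
from numpy.polynomial import polynomial as P

# Functions p(x)*sin x + q(x)*cos x are closed under d/dx:
#   (p sin + q cos)' = (p' - q) sin + (q' + p) cos.
# This checks that u(x) = (x^2 - 1) sin x satisfies
#   u^(8)(x) - u(x) = -8(2x cos x + 7 sin x).
# Coefficient arrays are lowest degree first.
p, q = np.array([-1.0, 0.0, 1.0]), np.array([0.0])   # u: p = x^2 - 1, q = 0
for _ in range(8):                                   # differentiate eight times
    p, q = P.polysub(P.polyder(p), q), P.polyadd(P.polyder(q), p)

sin_part = P.polysub(p, [-1.0, 0.0, 1.0])            # sin-coefficients of u^(8) - u
cos_part = P.polysub(q, [0.0])                       # cos-coefficients of u^(8) - u
# sin_part is [-56] and cos_part is [0, -16], i.e. -56 sin x - 16x cos x,
# which is exactly -8(2x cos x + 7 sin x).
```

The same two-array representation verifies all eight boundary values as well, since p and q can be evaluated at ±1 after any number of differentiation steps.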
Our proposed method (SGJPGM) is applied to solve problem (41). Table 3 illustrates a comparison of L∞- and L²-errors of SGJPGM for various values of N. Table 4 compares the L∞-errors resulting from our algorithm with those obtained if the methods in [44,45,46] are applied. In addition, Figure 3 displays the MAEs of our algorithm for N = 9, while Figure 4 shows the log₁₀(L∞-errors) and log₁₀(L²-errors) of our algorithm with distinct N.
Example 3.
Consider the BVP [48,49]:
$Z^{(8)}(x) = 7!\left(e^{-8Z(x)} - \frac{2}{(x+1)^{8}}\right), \quad x \in [0,\, e^{1/2}-1],$
subject to the BCs
$Z(0)=0,\; Z'(0)=1,\; Z''(0)=-1,\; Z'''(0)=2,\; Z\!\left(e^{1/2}-1\right)=\tfrac{1}{2},\; Z'\!\left(e^{1/2}-1\right)=e^{-1/2},\; Z''\!\left(e^{1/2}-1\right)=-e^{-1},\; Z'''\!\left(e^{1/2}-1\right)=2e^{-3/2},$
with the exact solution $Z(x) = \ln(x+1)$.
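With Z(x) = ln(x+1) we have e^{−8Z(x)} = (x+1)^{−8} and Z^{(8)}(x) = −7!/(x+1)^8, so the equation of Example 3, with its right-hand side read as 7!(e^{−8Z(x)} − 2/(x+1)^8), is satisfied identically. A grid check makes this concrete (this reading of the formula, and the grid itself, are illustrative):

```python
import numpy as np

# Sanity check for Example 3: d^8/dx^8 ln(x+1) = -7!/(x+1)^8, and with
# Z = ln(x+1) we get e^{-8Z} = (x+1)^{-8}, so
#   7! * (e^{-8Z} - 2/(x+1)^8) = -7!/(x+1)^8 = Z^(8)(x).
fact7 = 5040.0                                       # 7!
x = np.linspace(0.0, np.exp(0.5) - 1.0, 101)
Z = np.log(x + 1.0)
lhs = -fact7 / (x + 1.0) ** 8                        # Z^(8)(x) in closed form
rhs = fact7 * (np.exp(-8.0 * Z) - 2.0 / (x + 1.0) ** 8)
assert np.allclose(lhs, rhs)
```

The identity holds to machine precision at every grid point, confirming that the stated exact solution satisfies the differential equation.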
Our proposed method, namely the SGJOMM, is applied to solve problem (42). Table 5 illustrates a comparison of L∞- and L²-errors of SGJOMM for various values of N. Table 6 compares the L∞-errors resulting from our algorithm with those obtained by the following two methods:
  • The Galerkin septic B-splines method (GSBSM) in [48].
  • The Sinc–Galerkin method (SGM) in [49].
In addition, Figure 5 displays the MAEs of our algorithm at N = 11, while Figure 6 shows the log₁₀(L∞-errors) and log₁₀(L²-errors) of our algorithm with distinct N.
Example 4.
Consider the following non-linear two-point BVP [50,51]:
$Z^{(4)}(x) + \left(Z(x)\right)^{2} = \sin x + \sin^{2} x, \quad x \in [0,1],$
subject to the BCs:
$Z(0)=0,\; Z'(0)=1,\; Z(1)=\sin(1),\; Z'(1)=\cos(1),$
with the exact solution $Z(x) = \sin x$.
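To see the collocate-then-Newton workflow end to end, the sketch below solves Example 4 with a plain monomial basis and a finite-difference Newton iteration. This only illustrates the workflow; it is not the paper's SGJOMM, which uses the GJP basis and the operational matrix of derivatives:

```python
import numpy as np

# Global collocation for Z'''' + Z^2 = sin x + sin^2 x on [0, 1] with
# Z(0)=0, Z'(0)=1, Z(1)=sin 1, Z'(1)=cos 1 (monomial basis; illustrative only).
N = 9
k = np.arange(N - 3)
xc = 0.5 + 0.5 * np.cos((2 * k + 1) * np.pi / (2 * (N - 3)))  # interior points

def residual(c):
    hi = c[::-1]                             # np.poly* expect high->low order
    d1, d4 = np.polyder(hi, 1), np.polyder(hi, 4)
    bcs = np.array([np.polyval(hi, 0.0),
                    np.polyval(d1, 0.0) - 1.0,
                    np.polyval(hi, 1.0) - np.sin(1.0),
                    np.polyval(d1, 1.0) - np.cos(1.0)])
    ode = np.polyval(d4, xc) + np.polyval(hi, xc) ** 2 \
        - (np.sin(xc) + np.sin(xc) ** 2)
    return np.concatenate([bcs, ode])        # 4 BCs + (N-3) collocation equations

c = np.zeros(N + 1)
for _ in range(15):                          # Newton with a forward-difference Jacobian
    r = residual(c)
    J = np.array([(residual(c + 1e-7 * np.eye(N + 1)[j]) - r) / 1e-7
                  for j in range(N + 1)]).T
    c -= np.linalg.solve(J, r)

xg = np.linspace(0.0, 1.0, 201)
max_err = np.max(np.abs(np.polyval(c[::-1], xg) - np.sin(xg)))  # vs exact sin x
```

Even this crude monomial variant recovers the exact solution to high accuracy, which is what makes the better-conditioned GJP operational matrix formulation attractive at larger N.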
SGJOMM is applied to solve problem (43). Table 7 illustrates a comparison of L∞- and L²-errors of SGJOMM for various values of N using the following three types of collocation points:
  • The zeros of the shifted Legendre polynomials.
  • The zeros of the shifted Chebyshev polynomials of the first kind.
  • The zeros of the shifted symmetric Jacobi polynomials R ˜ j ( 2 , 2 ) ( x ) .
Table 8 compares the L∞-errors resulting from our algorithm with those obtained by the following two methods:
  • The variational iteration method (VIM) in [50].
  • The double decomposition method (DDM) in [51].
In addition, Figure 7 shows a comparison of the log₁₀(L∞-errors) of our algorithm using the Legendre, Chebyshev, and symmetric Jacobi collocation points with distinct N, while Figure 8 shows the corresponding comparison of the log₁₀(L²-errors) using the same sets of zeros.
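For the third choice of points, the zeros of the symmetric Jacobi polynomial R̃_j^(2,2)(x) can be obtained from the standard identity that the second derivative of the Legendre polynomial P_{j+2} is proportional to the Jacobi polynomial P_j^(2,2), so the two share the same zeros (the function name and shifting convention below are ours):

```python
import numpy as np
from numpy.polynomial.legendre import Legendre

def shifted_jacobi22_zeros(j, a=0.0, b=1.0):
    """Zeros of the shifted symmetric Jacobi polynomial R^(2,2)_j on [a, b],
    using d^2/dx^2 P_{j+2} = const * P^(2,2)_j (a standard Jacobi derivative
    identity), so both polynomials share the same zeros."""
    t = np.sort(np.real(Legendre.basis(j + 2).deriv(2).roots()))
    return (b - a) / 2 * t + (b + a) / 2
```

For j = 3 this gives the three points 1/2 and 1/2 ± 1/(2√3) on [0, 1], and by symmetry of the weight the point sets are always symmetric about the interval midpoint.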

8. Conclusions

A kind of orthogonal polynomial, namely the SGJPs, was utilized to treat both linear and non-linear BVPs. The linear ones were handled by applying the Petrov–Galerkin spectral method, while the non-linear ones were treated with the aid of the typical collocation method. To treat the linear BVPs, the expression for the derivatives of the SGJPs in terms of the shifted Legendre polynomials was established and utilized, while the algorithm designed for treating the non-linear BVPs was built on the operational matrix of derivatives of the SGJPs. The two proposed numerical algorithms were tested via illustrative examples, which show the high performance and accuracy of the suggested algorithms. We also mention that the codes were written and debugged in Mathematica 12 on a PC with an Intel(R) Core(TM) i5-8500 CPU @ 3.00 GHz and 12.00 GB of RAM.

Author Contributions

W.M.A.-E. contributed to conceptualization, methodology, software, validation, formal analysis, investigation, Writing—Original draft, Writing—review & editing, and Supervision. B.M.B. contributed to methodology, validation and investigation, A.K.A. contributed to validation and funding Acquisition. M.M.A. contributed to methodology, software, formal analysis, investigation, Writing—review & editing. All authors have read and agreed to the published version of the manuscript.

Funding

The third author, Amr Kamel Amin ([email protected]), is funded by the Deanship for Research and Innovation, Ministry of Education in Saudi Arabia through the project number IFP22UQU4331287DSR038.

Data Availability Statement

Not applicable.

Acknowledgments

The authors extend their appreciation to the Deanship for Research and Innovation, Ministry of Education in Saudi Arabia for funding this research work through the project number IFP22UQU4331287DSR038.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Mason, J.C.; Handscomb, D.C. Chebyshev Polynomials; CRC Press: Boca Raton, FL, USA, 2002.
  2. Boyd, J.P. Chebyshev and Fourier Spectral Methods; Courier Corporation: North Chelmsford, MA, USA, 2001.
  3. Shen, J.; Tang, T.; Wang, L.L. Spectral Methods: Algorithms, Analysis and Applications; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2011; Volume 41.
  4. Hesthaven, J.S.; Gottlieb, S.; Gottlieb, D. Spectral Methods for Time-Dependent Problems; Cambridge University Press: Cambridge, UK, 2007; Volume 21.
  5. Raza, N.; Zainab, U.; Araci, S.; Esi, A. Identities involving 3-variable Hermite polynomials arising from umbral method. Adv. Differ. Equ. 2020, 2020, 640.
  6. Boussayoud, A.; Kerada, M.; Araci, S.; Acikgoz, M.; Esi, A. Symmetric functions of binary products of Fibonacci and orthogonal polynomials. Filomat 2019, 33, 1495–1504.
  7. Khan, W.A.; Araci, S.; Acikgoz, M.; Esi, A. Laguerre-based Hermite-Bernoulli polynomials associated with bilateral series. Tbil. Math. J. 2018, 11, 111–121.
  8. Albosaily, S.; Khan, W.A.; Araci, S.; Iqbal, A. Fully degenerating of Daehee numbers and polynomials. Mathematics 2022, 10, 2528.
  9. Yari, A. Numerical solution for fractional optimal control problems by Hermite polynomials. J. Vib. Control 2021, 27, 698–716.
  10. Sun, Q.; Zhang, R.; Hu, Y. Domain decomposition-based discontinuous Galerkin time-domain method with weighted Laguerre polynomials. IEEE Trans. Antennas Propag. 2021, 69, 7999–8002.
  11. Abd-Elhameed, W.M.; Doha, E.H.; Alsuyuti, M.M. Numerical treatment of special types of odd-order boundary value problems using nonsymmetric cases of Jacobi polynomials. Prog. Fract. Differ. Appl. 2022, 8, 305–319.
  12. Sakran, M.R.A. Numerical solutions of integral and integro-differential equations using Chebyshev polynomials of the third kind. Appl. Math. Comput. 2019, 351, 66–82.
  13. Doha, E.H.; Abd-Elhameed, W.M.; Bassuony, M.A. On using third and fourth kinds Chebyshev operational matrices for solving Lane-Emden type equations. Rom. J. Phys. 2015, 60, 281–292.
  14. Agarwal, R.P. Boundary Value Problems for Higher Order Differential Equations; World Scientific: Singapore, 1986.
  15. Tomar, S. A computationally efficient iterative scheme for solving fourth-order boundary value problems. Int. J. Appl. Comput. Math. 2020, 6, 111.
  16. Baldwin, P. Asymptotic estimates of the eigenvalues of a sixth-order boundary-value problem obtained by using global phase-integral methods. Philos. Trans. R. Soc. Lond. Ser. A 1987, 322, 281–305.
  17. Chandrasekhar, S. Hydrodynamic and Hydromagnetic Stability; Courier Corporation: North Chelmsford, MA, USA, 2013.
  18. Abd-Elhameed, W.M.; Alkenedri, A.M. Spectral solutions of linear and nonlinear BVPs using certain Jacobi polynomials generalizing third- and fourth-kinds of Chebyshev polynomials. CMES Comput. Model. Eng. Sci. 2021, 126, 955–989.
  19. Islam, S.U.; Haq, S.; Ali, J. Numerical solution of special 12th-order boundary value problems using differential transform method. Commun. Nonlinear Sci. Numer. Simul. 2009, 14, 1132–1138.
  20. Golbabai, A.; Javidi, M. Application of homotopy perturbation method for solving eighth-order boundary value problems. Appl. Math. Comput. 2007, 191, 334–346.
  21. Izadi, M.; Srivastava, H. A novel matrix technique for multi-order pantograph differential equations of fractional order. Proc. R. Soc. 2021, 477, 20210321.
  22. Alsuyuti, M.M.; Doha, E.H.; Ezz-Eldien, S.S. Galerkin operational approach for multi-dimensions fractional differential equations. Commun. Nonlinear Sci. Numer. Simul. 2022, 114, 106608.
  23. Alsuyuti, M.M.; Doha, E.H.; Ezz-Eldien, S.S.; Bayoumi, B.I.; Baleanu, D. Modified Galerkin algorithm for solving multitype fractional differential equations. Math. Methods Appl. Sci. 2019, 42, 1389–1412.
  24. Doha, E.H.; Abd-Elhameed, W.M.; Bhrawy, A.H. New spectral-Galerkin algorithms for direct solution of high even-order differential equations using symmetric generalized Jacobi polynomials. Collect. Math. 2013, 64, 373–394.
  25. Abdelhakem, M.; Alaa-Eldeen, T.; Baleanu, D.; Alshehri, M.G.; El-Kady, M. Approximating real-life BVPs via Chebyshev polynomials’ first derivative pseudo-Galerkin method. Fractal Fract. 2021, 5, 165.
  26. Abd-Elhameed, W.M. Novel expressions for the derivatives of sixth-kind Chebyshev polynomials: Spectral solution of the non-linear one-dimensional Burgers’ equation. Fractal Fract. 2021, 5, 53.
  27. Tu, H.; Wang, Y.; Lan, Q.; Liu, W.; Xiao, W.; Ma, S. A Chebyshev-Tau spectral method for normal modes of underwater sound propagation with a layered marine environment. J. Sound Vib. 2021, 492, 115784.
  28. Singh, H. Jacobi collocation method for the fractional advection-dispersion equation arising in porous media. Numer. Methods Partial Differ. Equ. 2022, 38, 636–653.
  29. Rashidinia, J.; Eftekhari, T.; Maleknejad, K. Numerical solutions of two-dimensional nonlinear fractional Volterra and Fredholm integral equations using shifted Jacobi operational matrices via collocation method. J. King Saud Univ. Sci. 2021, 33, 101244.
  30. Khader, M.M.; Saad, K.M.; Hammouch, Z.; Baleanu, D. A spectral collocation method for solving fractional KdV and KdV-Burgers equations with non-singular kernel derivatives. Appl. Numer. Math. 2021, 161, 137–146.
  31. Kumbinarasaiah, S.; Preetham, M.P. Applications of the Bernoulli wavelet collocation method in the analysis of MHD boundary layer flow of a viscous fluid. J. Umm Al-Qura Univ. Appl. Sci. 2022.
  32. Trefethen, L. Spectral Methods in MATLAB; SIAM: Philadelphia, PA, USA, 2000.
  33. Shen, J. Efficient spectral-Galerkin method I. Direct solvers of second- and fourth-order equations using Legendre polynomials. SIAM J. Sci. Comput. 1994, 15, 1489–1505.
  34. Shen, J. Efficient spectral-Galerkin method II. Direct solvers of second- and fourth-order equations using Chebyshev polynomials. SIAM J. Sci. Comput. 1995, 16, 74–87.
  35. Napoli, A.; Abd-Elhameed, W.M. An innovative harmonic numbers operational matrix method for solving initial value problems. Calcolo 2017, 54, 57–76.
  36. Shloof, A.M.; Senu, N.; Ahmadian, A.; Salahshour, S. An efficient operation matrix method for solving fractal–fractional differential equations with generalized Caputo-type fractional–fractal derivative. Math. Comput. Simul. 2021, 188, 415–435.
  37. Youssri, Y.H. Orthonormal ultraspherical operational matrix algorithm for fractal–fractional Riccati equation with generalized Caputo derivative. Fractal Fract. 2021, 5, 100.
  38. Rainville, E.D. Special Functions; The Macmillan Company: New York, NY, USA, 1960.
  39. Guo, B.Y.; Shen, J.; Wang, L.L. Optimal spectral-Galerkin methods using generalized Jacobi polynomials. J. Sci. Comput. 2006, 27, 305–322.
  40. Abd-Elhameed, W.M. Novel formulae of certain generalized Jacobi polynomials. Mathematics 2022, 10, 4237.
  41. Andrews, G.E.; Askey, R.; Roy, R. Special Functions; Cambridge University Press: Cambridge, UK, 1999; Volume 71.
  42. Canuto, C.; Hussaini, M.Y.; Quarteroni, A.; Zang, T.A. Spectral Methods in Fluid Dynamics; Springer: Berlin/Heidelberg, Germany, 1988.
  43. Giordano, C.; Laforgia, A. On the Bernstein-type inequalities for ultraspherical polynomials. J. Comput. Appl. Math. 2003, 153, 243–248.
  44. Siddiqi, S.S.; Twizell, E.H. Spline solutions of linear eighth-order boundary-value problems. Comput. Methods Appl. Mech. Eng. 1996, 131, 309–325.
  45. Siddiqi, S.S.; Akram, G. Solution of eighth-order boundary value problems using the non-polynomial spline technique. Int. J. Comput. Math. 2007, 84, 347–368.
  46. Akram, G.; Rehman, H.U. Numerical solution of eighth order boundary value problems in reproducing kernel space. Numer. Algorithms 2013, 62, 527–540.
  47. Napoli, A.; Abd-Elhameed, W.M. Numerical solution of eighth-order boundary value problems by using Legendre polynomials. Int. J. Comput. Methods 2018, 15, 1750083.
  48. Ballem, S.; Viswanadham, K.N.S.K. Numerical solution of eighth order boundary value problems by Galerkin method with septic B-splines. Procedia Eng. 2015, 127, 1370–1377.
  49. El-Gamel, M.; Abdrabou, A. Sinc-Galerkin solution to eighth-order boundary value problems. SeMA J. 2019, 76, 249–270.
  50. Noor, M.A.; Mohyud-Din, S.T. Variational iteration method for fifth-order boundary value problems using He’s polynomials. Math. Probl. Eng. 2008, 2008, 954794.
  51. AL-Zaid, N.; AL-Refaidi, A.; Bakodah, H.; AL-Mazmumy, M. Solution of second- and higher-order nonlinear two-point boundary-value problems using double decomposition method. Mathematics 2022, 10, 3519.
Figure 1. MAEs of our algorithm at N = 7 for Example 1.
Figure 2. log₁₀(L∞-errors) and log₁₀(L²-errors) of our algorithm with distinct N for Example 1.
Figure 3. MAEs of our algorithm at N = 9 for Example 2.
Figure 4. log₁₀(L∞-errors) and log₁₀(L²-errors) of our algorithm with distinct N for Example 2.
Figure 5. MAEs of our algorithm at N = 11 for Example 3.
Figure 6. log₁₀(L∞-errors) and log₁₀(L²-errors) of our algorithm with distinct N for Example 3.
Figure 7. Comparison of log₁₀(L∞-errors) of our algorithm using Legendre, Chebyshev, and R̃_j^(2,2)(x) collocation points with distinct N for Example 4.
Figure 8. Comparison of log₁₀(L²-errors) of our algorithm using Legendre, Chebyshev, and R̃_j^(2,2)(x) collocation points with distinct N for Example 4.
Table 1. Comparison of L∞- and L²-errors of our algorithm with distinct N for Example 1.

N | L∞-errors | L²-errors
1 | 4.48011 · 10^-7 | 2.44031 · 10^-7
2 | 4.30610 · 10^-9 | 2.13197 · 10^-9
3 | 1.65350 · 10^-9 | 8.88121 · 10^-10
4 | 1.02447 · 10^-11 | 5.64667 · 10^-12
5 | 1.35607 · 10^-12 | 7.03195 · 10^-13
6 | 7.02861 · 10^-15 | 3.89473 · 10^-15
7 | 8.40090 · 10^-16 | 3.46545 · 10^-16
Table 2. Comparison of L∞-errors of our algorithm with those of methods in [44,45,46,47] for Example 1.

Error | SM in [44] | NPSM in [45] | RKM in [46] | LMM in [47] (N = 8) | LMM in [47] (N = 10) | SGJPGM (N = 7)
L∞ | 1.20 · 10^-5 | 1.02 · 10^-8 | 4.90 · 10^-9 | 8.95 · 10^-14 | 2.85 · 10^-16 | 8.40 · 10^-16
Table 3. Comparison of L∞- and L²-errors of our algorithm with distinct N for Example 2.

N | L∞-errors | L²-errors
1 | 3.72783 · 10^-6 | 3.16549 · 10^-6
3 | 4.44508 · 10^-8 | 3.73237 · 10^-8
5 | 1.28602 · 10^-10 | 1.05002 · 10^-10
7 | 1.10634 · 10^-13 | 8.23261 · 10^-14
9 | 5.00745 · 10^-14 | 3.73238 · 10^-14
Table 4. Comparison of L∞-errors of our algorithm with those of methods in [44,45,46] for Example 2.

Error | SM in [44] (h = 1/32) | NPSM in [45] (h = 1/32) | RKM in [46] (N = 10) | SGJPGM (N = 9)
L∞ | 1.20 · 10^-5 | 1.02 · 10^-8 | 4.90 · 10^-9 | 5.01 · 10^-14
Table 5. Comparison of L∞- and L²-errors of our algorithm with distinct N for Example 3.

N | L∞-errors | L²-errors
1 | 6.49854 · 10^-7 | 2.84248 · 10^-7
2 | 6.84114 · 10^-8 | 2.77812 · 10^-8
3 | 3.70217 · 10^-8 | 1.58289 · 10^-8
4 | 3.01981 · 10^-9 | 1.17252 · 10^-9
5 | 6.13366 · 10^-10 | 2.45326 · 10^-10
6 | 4.58591 · 10^-11 | 1.65191 · 10^-11
7 | 2.76063 · 10^-12 | 9.38161 · 10^-13
8 | 2.24598 · 10^-13 | 6.82216 · 10^-14
9 | 1.78746 · 10^-14 | 5.54010 · 10^-15
10 | 1.27676 · 10^-15 | 4.29149 · 10^-16
11 | 5.55112 · 10^-16 | 7.43891 · 10^-17
Table 6. Comparison of L∞-errors of our algorithm with those of methods in [48,49] for Example 3.

Error | GSBSM in [48] | SGM in [49] | SGJOMM (N = 11)
L∞ | 8.508563 · 10^-5 | 1.2303 · 10^-11 | 5.55112 · 10^-16
Table 7. Comparison of L∞- and L²-errors of our algorithm using Legendre, Chebyshev, and R̃_j^(2,2)(x) collocation points with distinct N for Example 4.

N | Legendre L∞ | Legendre L² | Chebyshev L∞ | Chebyshev L² | R̃_j^(2,2) L∞ | R̃_j^(2,2) L²
1 | 3.099 · 10^-5 | 1.917 · 10^-5 | 1.814 · 10^-5 | 1.099 · 10^-5 | 1.594 · 10^-6 | 8.994 · 10^-7
2 | 8.634 · 10^-7 | 5.173 · 10^-7 | 5.665 · 10^-7 | 3.354 · 10^-7 | 8.568 · 10^-8 | 4.853 · 10^-8
3 | 4.630 · 10^-8 | 2.536 · 10^-8 | 1.747 · 10^-8 | 8.598 · 10^-9 | 1.416 · 10^-9 | 7.770 · 10^-10
4 | 1.040 · 10^-9 | 5.920 · 10^-10 | 4.779 · 10^-10 | 2.553 · 10^-10 | 6.677 · 10^-11 | 3.720 · 10^-11
5 | 1.177 · 10^-11 | 5.603 · 10^-12 | 5.177 · 10^-12 | 2.726 · 10^-12 | 9.008 · 10^-13 | 4.917 · 10^-13
6 | 3.471 · 10^-13 | 1.820 · 10^-13 | 1.712 · 10^-13 | 9.264 · 10^-14 | 3.603 · 10^-14 | 1.983 · 10^-14
7 | 3.886 · 10^-15 | 1.757 · 10^-15 | 2.442 · 10^-15 | 9.810 · 10^-16 | 4.996 · 10^-16 | 2.212 · 10^-16
8 | 2.220 · 10^-16 | 5.449 · 10^-17 | 2.220 · 10^-16 | 3.080 · 10^-17 | 2.220 · 10^-16 | 7.638 · 10^-18
9 | 2.220 · 10^-16 | 4.844 · 10^-19 | 2.220 · 10^-16 | 3.224 · 10^-19 | 2.220 · 10^-16 | 8.795 · 10^-20
10 | 2.220 · 10^-16 | 3.847 · 10^-20 | 2.220 · 10^-16 | 3.685 · 10^-20 | 2.220 · 10^-16 | 4.775 · 10^-20
Table 8. Comparison of L∞-errors of our algorithm using Legendre, Chebyshev, and R̃_j^(2,2)(x) collocation points at N = 7 with those of methods in [50,51] for Example 4.

x | VIM in [50] | DDM in [51] | Legendre | Chebyshev | R̃_j^(2,2)(x)
0.0 | 9.5923 · 10^-14 | 1.6490 · 10^-25 | 0 | 0 | 0
0.1 | 7.7856 · 10^-8 | 1.2030 · 10^-10 | 6.1932 · 10^-16 | 3.2266 · 10^-16 | 1.7019 · 10^-16
0.2 | 2.7231 · 10^-7 | 4.0080 · 10^-10 | 5.2855 · 10^-17 | 6.7811 · 10^-16 | 2.5175 · 10^-16
0.3 | 5.2489 · 10^-7 | 7.3035 · 10^-10 | 2.6620 · 10^-15 | 1.1722 · 10^-15 | 2.0676 · 10^-16
0.4 | 7.7730 · 10^-7 | 1.0007 · 10^-9 | 3.8641 · 10^-16 | 2.2400 · 10^-16 | 1.8735 · 10^-16
0.5 | 9.7145 · 10^-7 | 1.1173 · 10^-9 | 3.7557 · 10^-15 | 2.3187 · 10^-15 | 4.1785 · 10^-16
0.6 | 1.0502 · 10^-6 | 1.0304 · 10^-9 | 2.0600 · 10^-17 | 2.7539 · 10^-17 | 2.4112 · 10^-16
0.7 | 9.6286 · 10^-7 | 7.6427 · 10^-10 | 2.8821 · 10^-15 | 1.1441 · 10^-15 | 8.8471 · 10^-17
0.8 | 6.8407 · 10^-7 | 4.1663 · 10^-10 | 1.9657 · 10^-16 | 9.5746 · 10^-16 | 3.7036 · 10^-16
0.9 | 2.7069 · 10^-7 | 1.2106 · 10^-10 | 6.6153 · 10^-16 | 2.7111 · 10^-16 | 3.0602 · 10^-16
1.0 | 1.5676 · 10^-13 | 1.4649 · 10^-23 | 0 | 0 | 0