Article

A Unified Formulation of Analytical and Numerical Methods for Solving Linear Fredholm Integral Equations

by
Efthimios Providas
Department of Environmental Sciences, Gaiopolis Campus, University of Thessaly, 415 00 Larissa, Greece
Algorithms 2021, 14(10), 293; https://doi.org/10.3390/a14100293
Submission received: 15 September 2021 / Revised: 3 October 2021 / Accepted: 6 October 2021 / Published: 10 October 2021

Abstract

This article is concerned with the construction of approximate analytic solutions to linear Fredholm integral equations of the second kind with general continuous kernels. A unified treatment of some classes of analytical and numerical classical methods, such as the Direct Computational Method (DCM), the Degenerate Kernel Methods (DKM), the Quadrature Methods (QM) and the Projection Methods (PM), is proposed. The problem is formulated as an abstract equation in a Banach space and a solution formula is derived. Then, several approximating schemes are discussed. In all cases, the method yields an explicit, albeit approximate, solution. Several examples are solved to illustrate the performance of the technique.

1. Introduction

Fredholm integral equations arise in the mathematical modeling of various processes in science and engineering, but also as reformulations of differential boundary value problems in applied mathematics. For example, in [1], a two-dimensional Stokes flow through a periodic channel is reformulated as an integral equation over the boundary of the domain and solved numerically; in [2], the solution of several boundary value problems and initial boundary value problems of interest to geomechanics through their reduction to integral equations is described, and many related references are cited; in [3], several different approaches to transforming second-order ordinary differential equations into integral equations are presented, and approximate solutions are derived via numerical quadrature methods; in [4], planar problems for Laplace's equation are reformulated as boundary integral equations and then solved numerically.
The general linear Fredholm integral equation of the second kind has the form
u(x) − λ ∫_a^b K(x,t) u(t) dt = f(x),  x ∈ [a,b],  (1)

where a, b ∈ ℝ, the kernel K(x,t) is a given complex-valued and continuous function on [a,b] × [a,b], the input or source function f(x) is assumed to be complex-valued and continuous on [a,b], λ is a complex parameter, and u(x) is the unknown continuous function to be determined. In this paper, for simplicity, we confine our investigations to one dimension with x ∈ [a,b] ⊂ ℝ, but the results obtained can be extended to other regions in two or more dimensions.
Integral equations of the type (1) have been studied by many researchers over the last century and continue to receive much attention in recent years. For their solution, a variety of methods have been developed; see [5,6,7,8,9] and others. Well-known classical solution techniques are the Direct Computational Method (DCM), the Degenerate Kernel Methods (DKM), the Quadrature Methods (QM), and the Projection Methods (PM); see, for example, the standard treatises [5,6,7], the traditional articles [10,11,12], and the recent papers [13,14,15,16,17,18,19,20,21]. The DCM has the advantage that it delivers the exact solution in closed form, but its application is limited to special cases where the kernel is separable (degenerate) and the integrals involved can be determined analytically. DKM utilize approximate finite representations for the kernel, and possibly the input function, and they are easy to manage and to perform error analysis. However, as with DCM, when the terms in the degenerate kernel are other than simple functions, then their integration has to be performed numerically. QM are very efficient, particularly when they are combined with Nyström’s interpolation, although their error analysis becomes more involved. The formulation of PM is more complicated, while some of these methods can be dealt with as the DKM. The above methods are treated separately in the literature.
Here, we present a common formulation suitable for symbolic computations for all of these techniques. Our approach is based on the idea that in all instances, the integral equation in (1) may be cast or reduced to an equation of the form
u(x) − λ Σ_{j=1}^{m} g_j(x) Ψ_j(u) = f(x),  x ∈ [a,b],  (2)

where, for m ≥ 1, g_j(x), j = 1,2,…,m, are known continuous functions on [a,b] and Ψ_j, j = 1,2,…,m, are linear bounded functionals, such as definite integrals or sums of values at some points contained in [a,b]. Both {g_j(x)} and {Ψ_j} are obtained through the separation or approximation of the kernel, the approximation of the integral, the approximation of the unknown function, or a combination of them. Equation (2), under certain conditions, which are associated with the existence and uniqueness of the solution of the integral equation in (1), can be solved symbolically to obtain an exact or approximate analytic solution of (1).
We implement the proposed method to construct exact closed-form solutions when the kernel K ( x , t ) is separable, approximate analytic solutions when K ( x , t ) is not separable, but it can be represented as a truncated power series or an interpolation polynomial, and semi-discrete solutions when the definite integral is replaced by a finite sum by using a quadrature rule. The economy and the efficiency of the method are revealed by solving several tests problems from the literature.
The paper is organized as follows. In Section 2, an abstract formulation of the problem in a Banach space is presented and a closed-form solution of (2) is derived. In Section 3, Section 4, Section 5 and Section 6, we elaborate on the cases where K ( x , t ) is separable, K ( x , t ) is approximated by a power series or a polynomial, the definite integral is replaced by a quadrature formula and the unknown function is approximated by a polynomial, respectively. Several examples are solved in Section 7. Finally, some conclusions are quoted in Section 8.

2. Formulation in Banach Space

Let X be a complex Banach space of functions and X* the adjoint space of X, i.e., the set of all complex-valued linear bounded functionals Ψ_j : X → ℂ, j ∈ ℕ. Let Ψ = col(Ψ_1, Ψ_2, …, Ψ_m) be a vector of linear bounded functionals Ψ_j, j = 1,2,…,m, and G = λ (g_1, g_2, …, g_m) a vector of functions g_j ∈ X, j = 1,2,…,m. Then Equation (2) may be written in the form

T u = f,  f ∈ X,  (3)

where the linear operator T : X → X is defined by

T u = u − G Ψ(u).  (4)
In (3) and (4), the components of the vectors G and Ψ are known, f is given and u has to be determined.
To examine the solvability and find the unique solution of (3), we state and prove the theorem below, but first, we explain some formulae and notations which we will use.
It is understood that Ψ(u) and Ψ(G) denote the m × 1 column vector and the m × m matrix

Ψ(u) = col( Ψ_1(u), Ψ_2(u), …, Ψ_m(u) ),   Ψ(G) =
[ Ψ_1(g_1)  Ψ_1(g_2)  …  Ψ_1(g_m) ]
[ Ψ_2(g_1)  Ψ_2(g_2)  …  Ψ_2(g_m) ]
[     ⋮          ⋮       ⋱      ⋮     ]
[ Ψ_m(g_1)  Ψ_m(g_2)  …  Ψ_m(g_m) ],

respectively. It can be easily verified that

Ψ(G c) = Ψ(G) c,

where c = col(c_1, c_2, …, c_m) is a constant vector. Bold lowercase and capital letters denote vectors and matrices, respectively, whose elements are numbers. By 0 and I_m, we denote the zero column vector and the identity matrix of order m, respectively.
Theorem 1.
Let the linear operator T : X → X be defined by (4). Then T is injective on D(T) = X if and only if

det V = det[ I_m − Ψ(G) ] ≠ 0.  (5)

In this case, the unique solution of Equation (3) is given by

u = T^{−1} f = f + G V^{−1} Ψ(f),  (6)

where T^{−1} : X → X denotes the inverse operator of T.
Proof. 
Suppose V is a non-singular matrix, i.e., det V ≠ 0, and let z ∈ ker T. Then,

T z = z − G Ψ(z) = 0.  (7)

Acting by the vector Ψ on both sides of (7), we obtain

Ψ(z − G Ψ(z)) = Ψ(z) − Ψ(G Ψ(z)) = Ψ(z) − (Ψ(G)) Ψ(z) = [I_m − Ψ(G)] Ψ(z) = V Ψ(z) = 0,

which implies that Ψ(z) = 0. Then, from (7) it follows that z = 0 and hence ker T = {0}, which means that the operator T is injective. Conversely, we will prove that if T is an injective operator then det V ≠ 0 or, equivalently, if det V = 0 then T is not injective. Let det V = 0. Then, there exists a vector c = col(c_1, c_2, …, c_m) ≠ 0 such that V c = 0. Let the element z = G c and note that z ≠ 0; otherwise, 0 = V c = [I_m − Ψ(G)] c = c − (Ψ(G)) c = c − Ψ(G c) = c, which contradicts the hypothesis that c ≠ 0. Substituting z into (7), we obtain

T z = G c − G Ψ(G c) = G [c − Ψ(G c)] = G [c − (Ψ(G)) c] = G [I_m − Ψ(G)] c = G V c = 0,

which means that ker T ≠ {0}, and so T is not injective.
Assume now that (5) is true. Applying the vector Ψ on both sides of (3), viz.

T u = u − G Ψ(u) = f,  (8)

and working as above, we have

Ψ(T u) = Ψ(u − G Ψ(u)) = [I_m − Ψ(G)] Ψ(u) = V Ψ(u) = Ψ(f).

Since det V ≠ 0, it follows that Ψ(u) = V^{−1} Ψ(f). After substituting into (8), we obtain

T u = u − G V^{−1} Ψ(f) = f,

and hence

u = f + G V^{−1} Ψ(f),
which is Equation (6), i.e., the solution of the problem (3). □
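The solution formula (6) is easy to prototype numerically. The following Python sketch is an illustration added here, not part of the paper's computations: it builds V = I_m − Ψ(G), solves V c = Ψ(f), and returns u = f + G c. The test problem, kernel K(x,t) = x t with λ = 1 and f(x) = x on [0,1], is a hypothetical example whose exact solution is u(x) = 3x/2; the functionals are evaluated with a simple composite trapezoidal rule.

```python
def trapezoid(fn, a, b, n=2000):
    """Composite trapezoidal rule, used here only to evaluate the functionals."""
    h = (b - a) / n
    return h * (0.5 * fn(a) + 0.5 * fn(b) + sum(fn(a + i * h) for i in range(1, n)))

def solve_linear(A, b):
    """Solve the m x m system A c = b by Gaussian elimination with partial pivoting."""
    m = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for k in range(m):
        p = max(range(k, m), key=lambda r: abs(M[r][k]))
        M[k], M[p] = M[p], M[k]
        for r in range(k + 1, m):
            fac = M[r][k] / M[k][k]
            for col in range(k, m + 1):
                M[r][col] -= fac * M[k][col]
    x = [0.0] * m
    for k in range(m - 1, -1, -1):
        x[k] = (M[k][m] - sum(M[k][j] * x[j] for j in range(k + 1, m))) / M[k][k]
    return x

def theorem1_solution(gs, psis, f):
    """Return u = f + G V^{-1} Psi(f) with V = I_m - Psi(G); gs already include λ."""
    m = len(gs)
    V = [[(1.0 if i == j else 0.0) - psis[i](gs[j]) for j in range(m)]
         for i in range(m)]
    c = solve_linear(V, [psi(f) for psi in psis])
    return lambda x: f(x) + sum(cj * g(x) for cj, g in zip(c, gs))

# Hypothetical test problem: u(x) - ∫_0^1 x t u(t) dt = x on [0,1], i.e.
# m = 1, g_1(x) = x, Psi_1(u) = ∫_0^1 t u(t) dt; exact solution u(x) = 3x/2.
u = theorem1_solution([lambda x: x],
                      [lambda v: trapezoid(lambda t: t * v(t), 0.0, 1.0)],
                      lambda x: x)
print(abs(u(0.7) - 1.05))
```

All of the schemes in the following sections reduce to this one computation; only the choice of the functions g_j and the functionals Ψ_j changes.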

3. Direct Computational Method (DCM)

In this Section, we consider the ideal case where the kernel K ( x , t ) in (1) is a separable function, i.e., has the special form
K(x,t) = Σ_{j=1}^{m} g_j(x) h_j(t),  x, t ∈ [a,b],  (9)

where the functions g_j(x), h_j(x), j = 1,2,…,m, are continuous on [a,b] and preferably, but not necessarily, linearly independent. Substituting (9) into (1), we obtain

u(x) − λ Σ_{j=1}^{m} g_j(x) ∫_a^b h_j(t) u(t) dt = f(x),  x ∈ [a,b].  (10)
Define the row vector of functions
G = λ [ g_1  g_2  …  g_m ],   g_j = g_j(x) ∈ C[a,b],  j = 1,2,…,m,  (11)

and the column vector of linear bounded functionals

Ψ(u) = col( Ψ_1(u), Ψ_2(u), …, Ψ_m(u) ),   Ψ_j(u) = ∫_a^b h_j(t) u(t) dt,  j = 1,2,…,m.  (12)

By means of (11) and (12) and after setting u = u(x) and f = f(x), Equation (10) may be put in the vector form

u − G Ψ(u) = f,  x ∈ [a,b].  (13)

Further, by taking X = C[a,b] and defining the operator T : X → X by T u = u − G Ψ(u), Equation (13) may be cast in the operator form (3), namely

T u = u − G Ψ(u) = f.  (14)
Provided condition (5) is fulfilled, the unique solution of (14) follows from Theorem 1, and specifically from Formula (6).
If the functions g_j(x), h_j(x), j = 1,2,…,m, and f(x) are such that the evaluation of Ψ(G) and Ψ(f) can be performed by analytic means, i.e., without resorting to numerical integration, then the solution of (14) is the exact closed-form solution of the integral Equation (1).
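When the moments Ψ_i(g_j) and Ψ_j(f) are available analytically, the whole DCM computation can even be carried out in exact rational arithmetic. Below is a minimal sketch under illustrative assumptions, not data from the paper: the kernel K(x,t) = 1 + x t on [0,1], λ = 1/2 and f(x) = 1, so that g_1 = 1, g_2 = x, h_1 = 1, h_2 = t and every moment ∫_0^1 t^k dt = 1/(k+1) is a rational number.

```python
from fractions import Fraction as F

lam = F(1, 2)   # λ

# Psi_i(λ g_j) = λ ∫_0^1 h_i(t) g_j(t) dt, all exact rationals here:
PsiG = [[lam * F(1, 1), lam * F(1, 2)],   # Psi_1(λ·1), Psi_1(λ·t)
        [lam * F(1, 2), lam * F(1, 3)]]   # Psi_2(λ·1), Psi_2(λ·t)
Psif = [F(1, 1), F(1, 2)]                 # ∫ f dt, ∫ t f dt with f = 1

# V = I_2 - Psi(G); solve V c = Psi(f) exactly by Cramer's rule.
V = [[1 - PsiG[0][0], -PsiG[0][1]],
     [-PsiG[1][0], 1 - PsiG[1][1]]]
det = V[0][0] * V[1][1] - V[0][1] * V[1][0]
c1 = (Psif[0] * V[1][1] - V[0][1] * Psif[1]) / det
c2 = (V[0][0] * Psif[1] - V[1][0] * Psif[0]) / det

# Exact closed-form solution u(x) = f(x) + λ(c1·1 + c2·x):
a0, a1 = 1 + lam * c1, lam * c2
print(a0, a1)   # rational coefficients of u(x) = a0 + a1 x
```

For this illustrative problem the script yields u(x) = 40/17 + (12/17)x, and substituting back into the integral equation reproduces f(x) = 1 exactly, which is the closed-form character of the DCM.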

4. Degenerate Kernel Methods (DKM)

If the kernel K ( x , t ) in (1) is not separable, then we can consider approximating it in a way that makes it separable. There are several mathematical means to accomplish this [7]. We discuss here two such common methods for general continuous kernels.

4.1. Power Series Approximation

Let us express K(x,t) as a power series in t at a point t_0, namely

K(x,t) = Σ_{k=0}^{∞} a_k(x) (t − t_0)^k,

where the coefficients a_k(x), k = 0,1,…, are continuous functions of x on [a,b]. We truncate this series and take the partial sum of the first n+1 terms, viz.

K_n(x,t) = Σ_{j=1}^{n+1} a_{j−1}(x) (t − t_0)^{j−1}.  (15)

We replace the kernel K(x,t) in (1) by (15) to obtain the degenerate integral equation

ũ_n(x) − λ Σ_{j=1}^{n+1} a_{j−1}(x) ∫_a^b (t − t_0)^{j−1} ũ_n(t) dt = f(x),  x ∈ [a,b],  (16)

where ũ_n(x) indicates an approximate solution of (1). By defining the vectors

G = λ [ g_1  g_2  …  g_{n+1} ] = λ [ a_0(x)  a_1(x)  …  a_n(x) ],

and

Ψ(ũ_n) = col( Ψ_1(ũ_n), Ψ_2(ũ_n), …, Ψ_{n+1}(ũ_n) ),   Ψ_j(ũ_n) = ∫_a^b (t − t_0)^{j−1} ũ_n(t) dt,  j = 1,2,…,n+1,

where ũ_n = ũ_n(x), Equation (16) may be written in the symbolic form

T_s ũ_n = ũ_n − G Ψ(ũ_n) = f,  (17)

wherein the operator T_s : X → X and X = C[a,b].
Furthermore, to facilitate the computation of Ψ ( G ) and Ψ ( f ) , without resorting to numerical integration techniques, we can approximate the functions { g j ( x ) } and f ( x ) , provided they are analytic in [ a , b ] , by power series of the same type as above.
Specifically, we can take
f_r(x) = Σ_{l=1}^{r+1} φ_{l−1} (x − x_0)^{l−1},  (18)

where φ_{l−1}, l = 1,2,…,r+1, are known constants, e.g., in a Taylor series expansion φ_{l−1} = f^{(l−1)}(x_0)/(l−1)!. Substituting (18) into Equation (17), we obtain

T_s ũ_n = ũ_n − G Ψ(ũ_n) = f_r,  (19)

where f_r = f_r(x). For similar modifications, one can look at [9,19].
Likewise, we may also express the functions { g j ( x ) } as partial sums of power series, namely
g_{jr}(x) = Σ_{l=1}^{r+1} γ_{j(l−1)} (x − x_0)^{l−1},  j = 1,2,…,n+1,  (20)

with γ_{j(l−1)}, l = 1,2,…,r+1, being known constants. Using (20), we may write

G_r = λ [ g_{1r}  g_{2r}  …  g_{(n+1)r} ],  (21)

and Equation (19) then becomes

T_ss ũ_n = ũ_n − G_r Ψ(ũ_n) = f_r,  (22)

where the operator T_ss : X → X and X = C[a,b].
Note that we will arrive at an equation similar to (22) if we approximate the kernel K(x,t) directly by a finite segment of a double power series, viz.

K_nn(x,t) = Σ_{l=1}^{n+1} Σ_{k=1}^{n+1} a_{(l−1)(k−1)} (x − x_0)^{l−1} (t − t_0)^{k−1},

where the coefficients a_{(l−1)(k−1)} are constants. This shows how involved computations can be handled efficiently by the present formulation.
All three Equations (17), (19) and (22) are of the kind (3) and can be solved explicitly through Theorem 1.
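A sketch of the scheme (17) in Python may help fix ideas; it is an illustration, not the paper's code. It assumes the kernel e^{xt} and the data of Example 2 below (λ_A = 2, exact solution u(x) = e^x), truncates the kernel's Taylor series in t at order n = 8, and evaluates the functionals Ψ_j by numerical quadrature, the situation discussed above where the integrals are not taken in closed form.

```python
import math

def trapezoid(fn, a, b, n=4000):
    """Composite trapezoidal rule for evaluating the functionals Psi_j."""
    h = (b - a) / n
    return h * (0.5 * fn(a) + 0.5 * fn(b) + sum(fn(a + i * h) for i in range(1, n)))

def solve_linear(A, b):
    """Gaussian elimination with partial pivoting for V c = Psi(f)."""
    m = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for k in range(m):
        p = max(range(k, m), key=lambda r: abs(M[r][k]))
        M[k], M[p] = M[p], M[k]
        for r in range(k + 1, m):
            fac = M[r][k] / M[k][k]
            for col in range(k, m + 1):
                M[r][col] -= fac * M[k][col]
    x = [0.0] * m
    for k in range(m - 1, -1, -1):
        x[k] = (M[k][m] - sum(M[k][j] * x[j] for j in range(k + 1, m))) / M[k][k]
    return x

lam = 0.5                                   # 1/λ_A with λ_A = 2
f = lambda x: math.exp(x) - (math.exp(x + 1) - 1) / (2 * (x + 1))   # exact u = e^x

n = 8   # truncation: e^{xt} ≈ Σ_{j=1}^{n+1} x^{j-1} t^{j-1}/(j-1)!
gs = [lambda x, j=j: lam * x ** j / math.factorial(j) for j in range(n + 1)]
psis = [lambda v, j=j: trapezoid(lambda t: t ** j * v(t), 0.0, 1.0)
        for j in range(n + 1)]

V = [[(1.0 if i == j else 0.0) - psis[i](gs[j]) for j in range(n + 1)]
     for i in range(n + 1)]
c = solve_linear(V, [p(f) for p in psis])
u = lambda x: f(x) + sum(cj * g(x) for cj, g in zip(c, gs))
err = max(abs(u(k / 20) - math.exp(k / 20)) for k in range(21))
print(err)   # maximum error over a grid in [0,1]; should be small
```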

4.2. Polynomial Approximation

An important and at the same time one of the simplest methods to construct degenerate kernels approximating given continuous ones is via interpolation. Among the several kinds of interpolation and the many interpolation basis functions, we choose here polynomial interpolation and, in particular, the Lagrange formula.
Let a = t_1 < t_2 < ⋯ < t_n < t_{n+1} = b be n+1 distinct ordered points in the interval [a,b]. The continuous kernel K(x,t) can be approximated by a polynomial of degree n of the form
K_n(x,t) = Σ_{j=1}^{n+1} ℓ_j(t) K(x,t_j),  (23)

where

ℓ_j(t) = Π_{i=1, i≠j}^{n+1} (t − t_i)/(t_j − t_i)

are known as Lagrange basis functions. By putting (23) into (1) in the place of the kernel K(x,t), we obtain

ũ_n(x) − λ Σ_{j=1}^{n+1} K(x,t_j) ∫_a^b ℓ_j(t) ũ_n(t) dt = f(x),  x ∈ [a,b].  (24)
Specifying the vectors
G = λ [ g_1  g_2  …  g_{n+1} ] = λ [ K(x,t_1)  K(x,t_2)  …  K(x,t_{n+1}) ],

and

Ψ(ũ_n) = col( Ψ_1(ũ_n), Ψ_2(ũ_n), …, Ψ_{n+1}(ũ_n) ),   Ψ_j(ũ_n) = ∫_a^b ℓ_j(t) ũ_n(t) dt,  j = 1,2,…,n+1,

Equation (24) may be written in the symbolic form

T_p ũ_n = ũ_n − G Ψ(ũ_n) = f,  (25)

where the operator T_p : X → X and X = C[a,b].
In (25), the computation of Ψ(G) and Ψ(f), except in some special cases, has to be performed numerically. To avoid numerical integration, we may approximate the functions {g_j(x)} and f(x) by polynomials of degree r which interpolate these functions at the r+1 points a = x_1 < x_2 < ⋯ < x_r < x_{r+1} = b, namely
f_r(x) = Σ_{l=1}^{r+1} ℓ_l(x) f(x_l),  (26)

and

g_{jr}(x) = Σ_{l=1}^{r+1} ℓ_l(x) g_j(x_l),  j = 1,2,…,n+1.  (27)

Then, we can substitute (26) into (25) to obtain

T_p ũ_n = ũ_n − G Ψ(ũ_n) = f_r.  (28)

In addition, we may use (27) to set up the vector

G_r = λ [ g_{1r}  g_{2r}  …  g_{(n+1)r} ],  (29)

and then, by substituting into (28), obtain

T_pp ũ_n = ũ_n − G_r Ψ(ũ_n) = f_r,  (30)

where the operator T_pp : X → X and X = C[a,b].
Note that we will arrive at an equation analogous to (30) if bilinear interpolation is used for interpolating the kernel K(x,t) at the Cartesian mesh nodes (x_l, t_j), where l varies from 1 to r+1 and j varies from 1 to n+1.
All three Equations (25), (28) and (30) are of the type (3), and thus their unique solutions may be obtained through Theorem 1.
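The polynomial scheme (25) can be sketched in the same style. The block below is an illustration with assumed data, again the kernel e^{xt} and right-hand side of Example 2 below (λ_A = 2, exact solution u(x) = e^x), with n = 4 interpolation nodes and the Ψ_j evaluated by quadrature.

```python
import math

def trapezoid(fn, a, b, n=3000):
    h = (b - a) / n
    return h * (0.5 * fn(a) + 0.5 * fn(b) + sum(fn(a + i * h) for i in range(1, n)))

def solve_linear(A, b):
    m = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for k in range(m):
        p = max(range(k, m), key=lambda r: abs(M[r][k]))
        M[k], M[p] = M[p], M[k]
        for r in range(k + 1, m):
            fac = M[r][k] / M[k][k]
            for col in range(k, m + 1):
                M[r][col] -= fac * M[k][col]
    x = [0.0] * m
    for k in range(m - 1, -1, -1):
        x[k] = (M[k][m] - sum(M[k][j] * x[j] for j in range(k + 1, m))) / M[k][k]
    return x

def lagrange_basis(nodes, j):
    """ℓ_j(t) = Π_{i≠j} (t - t_i)/(t_j - t_i) for the given nodes."""
    def ell(t):
        p = 1.0
        for i, ti in enumerate(nodes):
            if i != j:
                p *= (t - ti) / (nodes[j] - ti)
        return p
    return ell

lam = 0.5                                   # 1/λ_A with λ_A = 2
K = lambda x, t: math.exp(x * t)
f = lambda x: math.exp(x) - (math.exp(x + 1) - 1) / (2 * (x + 1))   # exact u = e^x

n = 4
ts = [j / n for j in range(n + 1)]          # equally spaced interpolation nodes
gs = [lambda x, tj=tj: lam * K(x, tj) for tj in ts]       # g_j(x) = λ K(x, t_j)
ells = [lagrange_basis(ts, j) for j in range(n + 1)]
psis = [lambda v, ell=ell: trapezoid(lambda t: ell(t) * v(t), 0.0, 1.0)
        for ell in ells]                    # Psi_j(u) = ∫ ℓ_j(t) u(t) dt

V = [[(1.0 if i == j else 0.0) - psis[i](gs[j]) for j in range(n + 1)]
     for i in range(n + 1)]
c = solve_linear(V, [p(f) for p in psis])
u = lambda x: f(x) + sum(cj * g(x) for cj, g in zip(c, gs))
err = max(abs(u(k / 20) - math.exp(k / 20)) for k in range(21))
print(err)
```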

5. Quadrature Methods (QM)

In this Section, we explore the use of some of the numerical integration techniques to approximate the integral operator in (1) and to thus obtain a semi-discrete equation of the kind (2).
A numerical integration or numerical quadrature formula may be written in the form
∫_a^b y(x) dx = Σ_{j=1}^{n+1} w_j y(x_j) + E_n(y),  (31)

where y(x) ∈ C[a,b]. The abscissas x_j, j = 1,2,…,n+1, usually equally spaced points, and the weights w_j, j = 1,2,…,n+1, are determined only by the quadrature rule that we apply and do not depend in any way upon the integrand y(x). E_n(y) denotes the quadrature error, which depends upon a, b, n and the value of a higher-order derivative of y(x) at some point between a and b [22].
Using (31), we may express the definite integral in (1) as
∫_a^b K(x,t) u(t) dt = Σ_{j=1}^{n+1} w_j K(x,t_j) u(t_j) + E_n(K,u),  (32)

where {t_j} is a set of n+1 points in [a,b], {w_j} is a specific set of positive weights not depending on x, and E_n(K,u) is an error function which depends upon x as well as a, b, n and the values of higher-order derivatives of K(x,t) and u(t) with respect to t at some point between a and b. Substituting (32) into (1), we find

u(x) − λ Σ_{j=1}^{n+1} w_j K(x,t_j) u(t_j) − λ E_n(K,u) = f(x),  x ∈ [a,b].

After disregarding the error term, we obtain the semi-discrete equation

ũ_n(x) − λ Σ_{j=1}^{n+1} w_j K(x,t_j) ũ_n(t_j) = f(x),  x ∈ [a,b],  (33)

where ũ_n denotes an approximate solution of u. By specifying the vectors

G = λ [ g_1  g_2  …  g_{n+1} ] = λ [ w_1 K(x,t_1)  w_2 K(x,t_2)  …  w_{n+1} K(x,t_{n+1}) ],

and

Ψ(ũ_n) = col( Ψ_1(ũ_n), Ψ_2(ũ_n), …, Ψ_{n+1}(ũ_n) ),   Ψ_j(ũ_n) = ũ_n(t_j),  j = 1,2,…,n+1,

Equation (33) may be recast into the symbolic form

T_q ũ_n = ũ_n − G Ψ(ũ_n) = f,  (34)

where the operator T_q : X → X and X = C[a,b]. This equation is of the type (3) and its unique solution for the entire interval [a,b] is

ũ_n = T_q^{−1} f = f + G [ I_{n+1} − Ψ(G) ]^{−1} Ψ(f),  (35)
by means of Theorem 1.
We observe that in (35), the evaluation of Ψ(G) and Ψ(f) consists merely of the computation of the functions g_j(x), j = 1,2,…,n+1, and f(x) at the quadrature points.
Moreover, Formula (35) corresponds to what is known as the natural interpolation form of Nyström, which is one of the most efficient methods for computing accurate approximate values of the true solution in the entire interval [ a , b ] from its approximate values at a set of nodes in [ a , b ] ; see [5,6] for more details.
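The Nyström form (35) is straightforward to implement. The sketch below is illustrative, using the data of Example 2 below (kernel e^{xt}, λ_A = 2, exact solution u(x) = e^x): it assembles the composite trapezoidal weights, solves the (n+1) × (n+1) system for the nodal values, and then evaluates the solution anywhere in [0,1].

```python
import math

def solve_linear(A, b):
    """Gaussian elimination with partial pivoting."""
    m = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for k in range(m):
        p = max(range(k, m), key=lambda r: abs(M[r][k]))
        M[k], M[p] = M[p], M[k]
        for r in range(k + 1, m):
            fac = M[r][k] / M[k][k]
            for col in range(k, m + 1):
                M[r][col] -= fac * M[k][col]
    x = [0.0] * m
    for k in range(m - 1, -1, -1):
        x[k] = (M[k][m] - sum(M[k][j] * x[j] for j in range(k + 1, m))) / M[k][k]
    return x

lam = 0.5                                   # 1/λ_A with λ_A = 2
K = lambda x, t: math.exp(x * t)
f = lambda x: math.exp(x) - (math.exp(x + 1) - 1) / (2 * (x + 1))   # exact u = e^x

n = 32
h = 1.0 / n
ts = [j * h for j in range(n + 1)]
w = [h / 2 if j in (0, n) else h for j in range(n + 1)]   # trapezoidal weights

# V = I_{n+1} - Psi(G), with Psi_i(g_j) = λ w_j K(t_i, t_j):
V = [[(1.0 if i == j else 0.0) - lam * w[j] * K(ts[i], ts[j])
      for j in range(n + 1)] for i in range(n + 1)]
c = solve_linear(V, [f(t) for t in ts])     # c_j ≈ ũ_n(t_j), the nodal values

# Nyström interpolation (35): valid for every x in [0,1], not only at the nodes.
u = lambda x: f(x) + sum(lam * w[j] * K(x, ts[j]) * c[j] for j in range(n + 1))
err = max(abs(u(k / 20) - math.exp(k / 20)) for k in range(21))
print(err)   # O(h²) for the trapezoidal rule
```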
As an alternative to this, we may use other interpolating schemes to construct an approximate solution of specific type throughout the interval [ a , b ] . We consider below two such cases where the functions { g j ( x ) } and f ( x ) are replaced by other, simpler functions, such as Taylor series and polynomials.
Let us approximate each of g_j(x), j = 1,2,…,n+1, and f(x) by partial sums of Taylor series as in (20) and (18), respectively. Then, by means of (21), Equation (34) becomes

T_qs ũ_n = ũ_n − G_r Ψ(ũ_n) = f_r,  (36)

where the operator T_qs : X → X and X = C[a,b].
Analogously, we may approximate each of g_j(x), j = 1,2,…,n+1, and f(x) by interpolating polynomials as in (27) and (26), respectively. Then, by using (29), Equation (34) reduces to

T_qp ũ_n = ũ_n − G_r Ψ(ũ_n) = f_r,  (37)

where the operator T_qp : X → X and X = C[a,b].
Both Equations (36) and (37) are of the form (3), and hence they can be solved explicitly by Theorem 1.

6. Projection Methods (PM)

Characteristic cases of projection methods are collocation and Galerkin methods. By way of illustration, we consider here the collocation method wherein the unknown function u ( x ) in (1) is approximated through the whole of [ a , b ] by the interpolating polynomial of degree n ,
ũ_n(x) = Σ_{j=1}^{n+1} ℓ_j(x) u(x_j),  (38)

where ℓ_j(x), j = 1,…,n+1, are the Lagrange basis functions defined in (23), and (38) interpolates u(x) at the n+1 distinct ordered points a = x_1 < x_2 < ⋯ < x_n < x_{n+1} = b. By using (38), Equation (1) degenerates to

ũ_n(x) − λ Σ_{j=1}^{n+1} ũ_n(x_j) ∫_a^b K(x,t) ℓ_j(t) dt = f(x),  x ∈ [a,b].  (39)
Setting up the vectors
G = λ [ g_1  g_2  …  g_{n+1} ],   g_j(x) = ∫_a^b K(x,t) ℓ_j(t) dt,  j = 1,2,…,n+1,

and

Ψ(ũ_n) = col( Ψ_1(ũ_n), Ψ_2(ũ_n), …, Ψ_{n+1}(ũ_n) ),   Ψ_j(ũ_n) = ũ_n(x_j),  j = 1,2,…,n+1,

Equation (39) may be written in the symbolic form

T_c ũ_n = ũ_n − G Ψ(ũ_n) = f,  (40)

where the operator T_c : X → X and X = C[a,b].
Furthermore, approximating { g j ( x ) } and f ( x ) by polynomials of degree n as above, viz.
g_{jn}(x) = Σ_{l=1}^{n+1} ℓ_l(x) ∫_a^b K(x_l,t) ℓ_j(t) dt,  j = 1,2,…,n+1,

and

f_n(x) = Σ_{l=1}^{n+1} ℓ_l(x) f(x_l),

and after introducing the vector

G_n = λ [ g_{1n}  g_{2n}  …  g_{(n+1)n} ],

Equation (40) is carried to

T_cc ũ_n = ũ_n − G_n Ψ(ũ_n) = f_n,  (41)

where the operator T_cc : X → X and X = C[a,b].
Equations (40) and (41) are of the type (3), and so their unique solutions may be obtained by Theorem 1.
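A sketch of the collocation scheme (40) follows; it is illustrative, with the g_j integrals evaluated by quadrature and the assumed test data of Example 2 below (kernel e^{xt}, λ_A = 1, exact solution u(x) = e^x).

```python
import math

def trapezoid(fn, a, b, n=2000):
    h = (b - a) / n
    return h * (0.5 * fn(a) + 0.5 * fn(b) + sum(fn(a + i * h) for i in range(1, n)))

def solve_linear(A, b):
    m = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for k in range(m):
        p = max(range(k, m), key=lambda r: abs(M[r][k]))
        M[k], M[p] = M[p], M[k]
        for r in range(k + 1, m):
            fac = M[r][k] / M[k][k]
            for col in range(k, m + 1):
                M[r][col] -= fac * M[k][col]
    x = [0.0] * m
    for k in range(m - 1, -1, -1):
        x[k] = (M[k][m] - sum(M[k][j] * x[j] for j in range(k + 1, m))) / M[k][k]
    return x

def lagrange_basis(nodes, j):
    """ℓ_j(x) = Π_{i≠j} (x - x_i)/(x_j - x_i) for the given nodes."""
    def ell(x):
        p = 1.0
        for i, xi in enumerate(nodes):
            if i != j:
                p *= (x - xi) / (nodes[j] - xi)
        return p
    return ell

lam = 1.0                                   # 1/λ_A with λ_A = 1
K = lambda x, t: math.exp(x * t)
f = lambda x: math.exp(x) - (math.exp(x + 1) - 1) / (x + 1)   # exact u = e^x

n = 4
xs = [j / n for j in range(n + 1)]          # collocation nodes
ells = [lagrange_basis(xs, j) for j in range(n + 1)]

# g_j(x) = λ ∫_0^1 K(x,t) ℓ_j(t) dt (by quadrature); Psi_j(u) = u(x_j).
def g(j, x):
    return lam * trapezoid(lambda t: K(x, t) * ells[j](t), 0.0, 1.0)

V = [[(1.0 if i == j else 0.0) - g(j, xs[i]) for j in range(n + 1)]
     for i in range(n + 1)]
c = solve_linear(V, [f(x) for x in xs])     # c_j ≈ ũ_n(x_j)
u = lambda x: f(x) + sum(c[j] * g(j, x) for j in range(n + 1))
err = max(abs(u(k / 10) - math.exp(k / 10)) for k in range(11))
print(err)
```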

7. Examples

To clarify the application of the proposed technique and to evaluate its performance, we consider from the literature five example integral equations with known exact solutions and construct approximate explicit solutions in several ways.
We emphasize that the solutions obtained with the proposed procedure have an explicit form. However, to avoid listing large expressions and to be able to compare these solutions, except in some cases, we convert all coefficients to floating-point numbers with six decimal places without rounding. For the error estimation between the exact solution u and the approximate solution ũ_n, we use the ∞-norm, i.e.,

ε_n = ‖u − ũ_n‖_∞ = max_{a ≤ x ≤ b} |u(x) − ũ_n(x)|.
Example 1.
Consider the integral equation [23]

u(x) − ∫_0^π sin(x − t) u(t) dt = 1,  x ∈ [0,π].  (42)

The kernel K(x,t) = sin(x − t) and the input function f(x) = 1 are continuous on [0,π] × [0,π] and [0,π], respectively, and we seek the unique solution u(x) ∈ C[0,π]. We solve this equation exactly as well as approximately.
DCM: Exact solution: 
Since
K(x,t) = sin(x − t) = sin x cos t − cos x sin t,

i.e., K(x,t) is separable, the integral equation in (42) is written as

u(x) − sin x ∫_0^π cos t u(t) dt + cos x ∫_0^π sin t u(t) dt = 1.  (43)

Following the procedure in Section 3, we set up the vectors

G = [ sin x   −cos x ],   Ψ(u) = col( ∫_0^π cos t u(t) dt, ∫_0^π sin t u(t) dt ),

and write (43) as

T u = u − G Ψ(u) = 1.

Condition (5) is fulfilled, specifically det V = π²/4 + 1 ≠ 0, and thus from (6) it follows that

u(x) = 1 − (4π/(π² + 4)) sin x − (8/(π² + 4)) cos x,
which is the exact solution of (42). 
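Since all entries of Ψ(G) and Ψ(f) are elementary integrals, this 2 × 2 computation is easy to check numerically. The following sketch (an illustration added here) reproduces the coefficients −4π/(π² + 4) and −8/(π² + 4) of the exact solution.

```python
import math

# Closed-form entries, with G = (sin x, -cos x) and f = 1:
# ∫_0^π cos t sin t dt = 0,  ∫_0^π cos² t dt = π/2,  ∫_0^π sin² t dt = π/2,
# ∫_0^π cos t dt = 0,        ∫_0^π sin t dt = 2.
V = [[1.0, math.pi / 2],
     [-math.pi / 2, 1.0]]                 # V = I_2 - Psi(G)
Psif = [0.0, 2.0]                         # Psi(f) for f = 1

det = V[0][0] * V[1][1] - V[0][1] * V[1][0]   # = 1 + π²/4 ≠ 0
c1 = (Psif[0] * V[1][1] - V[0][1] * Psif[1]) / det
c2 = (V[0][0] * Psif[1] - V[1][0] * Psif[0]) / det

# Exact solution u(x) = 1 + c1 sin x - c2 cos x
print(c1, c2)
```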
DKM: Taylor series: 
Let us now approximate the kernel K(x,t) by a truncated Taylor series in t about the point 0 in which all terms through t^n are included, i.e.,

K_n(x,t) = Σ_{j=1}^{⌊(n+2)/2⌋} ((−1)^{j−1} t^{2(j−1)} / (2(j−1))!) sin x − Σ_{j=1}^{⌊(n+1)/2⌋} ((−1)^{j−1} t^{2j−1} / (2j−1)!) cos x.

As an illustration, let n = 2. Then, after substituting K_2(x,t), Equation (42) degenerates to

ũ_2(x) − sin x ∫_0^π (1 − t²/2) ũ_2(t) dt + cos x ∫_0^π t ũ_2(t) dt = 1.

Following the steps in Section 4.1, we specify the vectors

G = [ sin x   −cos x ],   Ψ(ũ_2) = col( ∫_0^π (1 − t²/2) ũ_2(t) dt, ∫_0^π t ũ_2(t) dt ),

or alternatively

G = [ sin x   −cos x   −(sin x)/2 ],   Ψ(ũ_2) = col( ∫_0^π ũ_2(t) dt, ∫_0^π t ũ_2(t) dt, ∫_0^π t² ũ_2(t) dt ),

and put (42) in the form

T_s ũ_2 = ũ_2 − G Ψ(ũ_2) = 1.

Solving by (5) and (6), we acquire

ũ_2(x) = 1 − ((4π³ + 12π)/(6π² + 36)) sin x − ((π⁴ − 6π²)/(6π² + 36)) cos x.
In a similar manner, other approximate solutions of the same analytic form are derived for higher values of n. We tabulate some of these solutions in Table 1 and compare them against the exact answer. The sizes of the maximum errors and the error ratios between two successive approximations are also given. 
QM: Simpson’s rule:
According to Section 5, let us divide the interval [0,π] into n equal subintervals of length h = π/n, where n is an even integer, consider the n+1 abscissas x_j = (j − 1)h, j = 1,2,…,n+1, and employ Simpson's formula

∫_0^π y(x) dx ≈ (h/3) [ y(x_1) + 4 Σ_{j=2,4,6,…}^{n} y(x_j) + 2 Σ_{j=3,5,7,…}^{n−1} y(x_j) + y(x_{n+1}) ]

to approximate the integral in (42). By way of illustration, let n = 2. Then, Equation (42) assumes the semi-discrete form

ũ_2(x) − (π/6) [ sin x ũ_2(0) − 4 cos x ũ_2(π/2) − sin x ũ_2(π) ] = 1.  (46)

Assemble the vectors

G = [ (π sin x)/6   −(2π cos x)/3 ],   Ψ(ũ_2) = col( ũ_2(0) − ũ_2(π), ũ_2(π/2) ),

or alternatively

G = [ (π sin x)/6   −(2π cos x)/3   −(π sin x)/6 ],   Ψ(ũ_2) = col( ũ_2(0), ũ_2(π/2), ũ_2(π) ),

and write Equation (46) as

T_q ũ_2 = ũ_2 − G Ψ(ũ_2) = 1.

This equation is solved by means of (5) and (6) to obtain

ũ_2(x) = 1 − (2π²/(2π² + 9)) sin x − (6π/(2π² + 9)) cos x,
which is an approximate solution of (42). Other solutions of the same type for various values of n are recorded in Table 2. Clearly, the size of the error shows that the accuracy of the solutions is O(h⁴), and this accuracy is valid throughout the interval [0,π] and not only at the quadrature nodes.
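The Simpson-rule computation can be checked with a short script (an illustration added here): it assembles the Simpson weights and nodes for an even n, solves for the nodal values, and compares the interpolated solution, here with n = 6, against the exact closed-form solution obtained by the DCM above.

```python
import math

def solve_linear(A, b):
    """Gaussian elimination with partial pivoting."""
    m = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for k in range(m):
        p = max(range(k, m), key=lambda r: abs(M[r][k]))
        M[k], M[p] = M[p], M[k]
        for r in range(k + 1, m):
            fac = M[r][k] / M[k][k]
            for col in range(k, m + 1):
                M[r][col] -= fac * M[k][col]
    x = [0.0] * m
    for k in range(m - 1, -1, -1):
        x[k] = (M[k][m] - sum(M[k][j] * x[j] for j in range(k + 1, m))) / M[k][k]
    return x

K = lambda x, t: math.sin(x - t)
n = 6                                     # must be even for Simpson's rule
h = math.pi / n
ts = [j * h for j in range(n + 1)]
# Simpson weights: h/3 at the endpoints, 4h/3 at odd nodes, 2h/3 at even interior nodes.
w = [(h / 3) * (1 if j in (0, n) else 4 if j % 2 == 1 else 2) for j in range(n + 1)]

V = [[(1.0 if i == j else 0.0) - w[j] * K(ts[i], ts[j]) for j in range(n + 1)]
     for i in range(n + 1)]
c = solve_linear(V, [1.0] * (n + 1))      # Psi(f) = f(t_i) = 1

u = lambda x: 1.0 + sum(w[j] * K(x, ts[j]) * c[j] for j in range(n + 1))
exact = lambda x: (1 - 4 * math.pi / (math.pi ** 2 + 4) * math.sin(x)
                   - 8 / (math.pi ** 2 + 4) * math.cos(x))
err = max(abs(u(k * math.pi / 20) - exact(k * math.pi / 20)) for k in range(21))
print(err)
```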
In Figure 1, the Taylor series and Simpson's rule solutions for n = 6 are plotted and compared against the exact solution. The Simpson's rule solution almost coincides with the exact solution. From Figure 1 and the results in Table 1 and Table 2, it is evident that the Simpson's rule approximate solutions converge very rapidly and at a constant rate to the true solution, while the Taylor series approximation yields more accurate results only when an adequate number of terms (n = 16) is included in the truncated series.
Example 2.
Consider the integral equation [5]
u(x) − (1/λ_A) ∫_0^1 e^{xt} u(t) dt = f(x),  λ_A ≠ 0,  0 ≤ x ≤ 1,  (48)

with

f(x) = e^x − (e^{x+1} − 1)/(λ_A (x + 1)),

so that (48) has the unique solution u(x) = e^x. The kernel K(x,t) = e^{xt} is many times continuously differentiable on [0,1] × [0,1], but it is not separable. The input function f(x) is continuous on [0,1]. We apply several approximating schemes.
DKM: Taylor series: 
We formulate the given integral equation as in (19). Specifically, we replace the kernel K ( x , t ) and the input function f ( x ) by finite segments of Taylor series of degree n ( r = n ) about 0 in t and x, respectively, as follows
K_n(x,t) = Σ_{j=1}^{n+1} (x^{j−1}/(j−1)!) t^{j−1},   f_n(x) = Σ_{j=1}^{n+1} (1/(j−1)!) [d^{j−1}f(x)/dx^{j−1}]_{x=0} x^{j−1},

and solve via (5) and (6). For λ_A = 2 and n = 2, we get

ũ_2(x) = −((2145e − 7257)/1559) − ((5184e − 24420)/10913) x − ((4530e − 17550)/10913) x².
Analogous solutions are obtained for other larger values of n; e.g., for n = 4 and n = 8 , we have
ũ_4(x) = 0.997193 + 0.998193 x + 0.499304 x² + 0.166475 x³ + 0.041625 x⁴,
ũ_8(x) = 0.999999 + 0.999999 x + 0.499999 x² + 0.166666 x³ + 0.041666 x⁴ + 0.008333 x⁵ + 0.001388 x⁶ + 0.000198 x⁷ + 0.000024 x⁸.

Comparison of these approximate solutions with the exact solution

u(x) = e^x = 1 + x + x²/2 + x³/6 + x⁴/24 + x⁵/120 + x⁶/720 + x⁷/5040 + x⁸/40320 + ⋯
shows the excellent agreement accomplished even with small values of n. Further, the maximum errors between the approximate solutions for various values of n and the exact solution are given in Table 3. The maximum error is located at the point x = 1 in all cases. The results are distinguished for their high accuracy.
DKM: Polynomials: 
We follow the procedure in Section 4.2 and approximate K ( x , t ) and f ( x ) with interpolating polynomials of degree n ( r = n ). Expressing the integral equation in (48) in the form (28) and solving by means of Theorem 1 for λ A = 2 , we find
ũ_2(x) = 1.003217 + 0.877792 x + 0.847809 x²,
ũ_4(x) = 1.000005 + 0.998808 x + 0.509797 x² + 0.140256 x³ + 0.069434 x⁴,
ũ_8(x) = 1.000000 + 0.999999 x + 0.500000 x² + 0.166664 x³ + 0.041675 x⁴ + 0.008310 x⁵ + 0.001425 x⁶ + 0.000164 x⁷ + 0.000041 x⁸.
Table 4 shows the maximum errors between these approximate solutions and the exact solution and the points where they occur. As expected, the results are very accurate. Compared with those obtained by the Taylor series above, they are superior. 
QM: Trapezoidal rule and Taylor series: 
Let us use the composite trapezoidal rule
∫_a^b y(x) dx ≈ (h/2) [ y(x_1) + 2 Σ_{j=2}^{n} y(x_j) + y(x_{n+1}) ],  (49)
to approximate the integral in (48). Following the procedure in Section 5, we may initially express (48) in the form (34). However, to construct an explicit solution of a type such as a polynomial throughout the interval [ a , b ] , we substitute the components of the vector G and the function f ( x ) with finite segments of the Taylor series of degree r = n in x about 0, as in (36). After solving by means of (5) and (6) for λ A = 2 , and n = 2 , we obtain the solution
ũ_2(x) = −((3436e − 11712)/2391) − ((1268e − 5892)/2391) x − ((1096e − 4248)/2391) x².
Similarly, for n = 4 and n = 8 , we acquire the higher order solutions
ũ_4(x) = 1.026427 + 1.024317 x + 0.515273 x² + 0.172608 x³ + 0.043388 x⁴,
ũ_8(x) = 1.007331 + 1.006523 x + 0.503972 x² + 0.168193 x³ + 0.042106 x⁴ + 0.008433 x⁵ + 0.001407 x⁶ + 0.000201 x⁷ + 0.000025 x⁸.

The maximum errors between the approximate solutions corresponding to various values of n and the exact solution are listed in Table 5. Moreover, in Table 6, we give the results obtained when λ_A = 50. It is evident that the accuracy is O(h²) in the entire interval [0,1]. As the value of the parameter λ_A increases, the accuracy improves and the convergence to the exact solution is faster.
QM: Trapezoidal rule and Polynomials: 
Let us now consider the approximating scheme in (37), where we employ the composite trapezoidal rule to discretize the integral and interpolating polynomials of degree r = n to approximate the components of G and the function f(x). For λ_A = 2 and n = 2, n = 4 and n = 8, we obtain the corresponding solutions

ũ_2(x) = 1.131517 + 0.972464 x + 0.969511 x²,
ũ_4(x) = 1.029984 + 1.025406 x + 0.526503 x² + 0.145148 x³ + 0.072597 x⁴,
ũ_8(x) = 1.007331 + 1.006524 x + 0.503972 x² + 0.168191 x³ + 0.042115 x⁴ + 0.008410 x⁵ + 0.001444 x⁶ + 0.000167 x⁷ + 0.000041 x⁸.

In Table 7, we list the maximum errors between these approximate solutions and the exact solution. Additionally, in Table 8, we present the results for the value of the parameter λ_A = 50. The results are very close to those obtained above with the composite trapezoidal rule and Taylor series.
PM: Collocation with Polynomials: 
Finally, we solve the integral equation in (48) by employing a projection method, namely the simple collocation method given in Section 6, where the ordinary Lagrange basis functions are used. For λ_A = 1, by means of (41) and polynomials of degree n = 3, n = 4 and n = 5 interpolating u(x) at the equally spaced nodes x_j = (j − 1)/n, j = 1,2,…,n+1, we obtain the following solutions

ũ_3(x) = 0.999256 + 1.013577 x + 0.425543 x² + 0.278580 x³,
ũ_4(x) = 0.999985 + 0.998801 x + 0.509787 x² + 0.140276 x³ + 0.069415 x⁴,
ũ_5(x) = 0.999998 + 1.000081 x + 0.499067 x² + 0.170411 x³ + 0.0348642 x⁴ + 0.013855 x⁵,
respectively. In Table 9, we give the maximum errors between these approximate solutions and the exact solution. By way of illustration, we also compare these results with the maximum errors obtained in [24,25], where an advanced quasi-projection method based on B-spline tight framelets is utilized.
Example 3.
Consider the integral equation
u(x) + (1/λ_A) ∫₀¹ c u(t) / (c² + (x − t)²) dt = f(x),  0 ≤ x ≤ 1,
where λ_A ≠ 0 and c > 0. This equation appears in electrostatics and is known as Love's equation [26]. The kernel function
K(x, t) = c / (c² + (x − t)²),
is continuous on [0, 1] × [0, 1] with a peak at x = t when c is small. Figure 2 shows the shape of K(x, t) for various values of c and x = 0.5. The smaller c becomes, the more difficult it is to construct the solution to the problem. Let λ_A = 0.5, c = 0.1 and
f(x) = (1/50)(10x − 4) ln[(100x² − 200x + 101)/(100x² + 1)] + (1/50)(100x² − 80x + 5)(arctan(10x) − arctan(10x − 10)) + (1/50)(50x² − 40x + 13),
so that (50) has the exact solution u(x) = 0.06 − 0.8x + x² [26].
We noticed that Taylor series and interpolation polynomials are not generally efficient for problems with kernel functions of the type (51) when c is small (c < 0.5). Therefore, we apply here the quadrature method of Section 5, using the trapezoidal and Simpson's rules.
QM: Trapezoidal rule: 
Let us consider n + 1 equally spaced quadrature points 0 = x_1 < x_2 < ⋯ < x_{n+1} = 1, x_j = (j − 1)h, j = 1, 2, …, n + 1, where h = 1/n, in the interval [0, 1]. We use the quadrature formula in (49) to approximate the integral in (50) and then, through the steps in Section 5, write Equation (50) in the form (34), i.e.,
T_q ũ_n = ũ_n − G Ψ(ũ_n) = f,
where the vectors
G = −(1/n) [ (1/10)/(1/100 + x²),  (1/5)/(1/100 + (x − x₂)²),  …,  (1/10)/(1/100 + (x − 1)²) ]ᵀ,  Ψ(ũ_n) = [ ũ_n(0), ũ_n(x₂), …, ũ_n(1) ]ᵀ.
Solving (52) via (5) and (6), we obtain an approximate analytic solution to (50) in the interval [ 0 , 1 ] . For example, for n = 2 , we have
ũ_2(x) = f(x) − (4.052649 × 10⁻⁴)/(0.01 + x²) + 0.004199/(0.01 + (x − 0.5)²) − 0.005479/(0.01 + (x − 1.0)²).
In Table 10, we record the maximum errors for various values of n. The maximum error occurs at the point x = 1 except in the instance n = 8 , which is a poor approximation anyway, where it is located at x = 0.935 .
QM: Simpson’s rule: 
We now utilize the quadrature formula in (45), with n an even number, and repeat the steps above. As an illustration, for n = 16, we obtain the solution
ũ_16(x) = f(x) − (2.554573 × 10⁻⁴)/(x² + 0.01) − (2.269247 × 10⁻⁴)/((x − 0.0625)² + 0.01)
  + (2.119289 × 10⁻⁴)/((x − 0.125)² + 0.01) + (8.937448 × 10⁻⁴)/((x − 0.1875)² + 0.01)
  + (6.691821 × 10⁻⁴)/((x − 0.25)² + 0.01) + 0.001507/((x − 0.3125)² + 0.01)
  + (8.573774 × 10⁻⁴)/((x − 0.375)² + 0.01) + 0.001609/((x − 0.4375)² + 0.01)
  + (7.766917 × 10⁻⁴)/((x − 0.5)² + 0.01) + 0.001200/((x − 0.5625)² + 0.01)
  + (4.271220 × 10⁻⁴)/((x − 0.625)² + 0.01) + (2.803521 × 10⁻⁴)/((x − 0.6875)² + 0.01)
  − (1.912995 × 10⁻⁴)/((x − 0.75)² + 0.01) − 0.001151/((x − 0.8125)² + 0.01)
  − 0.001080/((x − 0.875)² + 0.01) − 0.003089/((x − 0.9375)² + 0.01)
  − 0.001106/((x − 1.0)² + 0.01).
Table 11 shows the errors of the approximate solutions for varying values of n. The maximum error occurs at the point x = 1, except in the instances n = 8 and n = 16, where it is located at x = 0.95 and x = 0.995, respectively. In the same table, some results obtained in [26] are quoted for comparison. The superiority of Simpson's rule over the trapezoidal rule is reaffirmed, and the results agree closely with those of the other formulations.
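The composite Simpson weights used in this scheme can be generated by a small helper (a generic sketch, not tied to a particular equation); by construction, the rule is exact for polynomials up to degree three.

```python
import numpy as np

def simpson_weights(n, a=0.0, b=1.0):
    # Composite Simpson weights w_j over n panels (n must be even), so that
    # sum_j w_j * g(x_j) approximates int_a^b g(x) dx at x_j = a + j*(b-a)/n
    if n % 2 != 0:
        raise ValueError("n must be even for the composite Simpson rule")
    w = np.ones(n + 1)
    w[1:-1:2] = 4.0        # odd interior nodes
    w[2:-1:2] = 2.0        # even interior nodes
    return w * (b - a) / (3.0 * n)

w = simpson_weights(16)
nodes = np.linspace(0.0, 1.0, 17)
cubic_integral = w @ nodes**3          # exact for cubics: 1/4
```

Replacing the trapezoidal weights by these in a Nyström scheme raises the nodal accuracy from O(h²) to O(h⁴), which matches the faster convergence seen in Table 11.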
Example 4.
Consider the integral equation [8]
u(x) + ∫_{−1}^{1} (x t² − x² t) u(t) dt = −x⁴,  x ∈ [−1, 1].
The kernel K(x, t) = x t² − x² t is continuous on [−1, 1] × [−1, 1] and separable. Thus, Equation (53) can be solved exactly by the DCM in Section 3 to obtain
u(x) = (30/133) x + (20/133) x² − x⁴.
We also solve this equation here by the collocation method (PM) explained in Section 6. Specifically, we use Lagrange interpolating polynomials of the type (38) to approximate u ( x ) and put the integral equation in (53) in the symbolic form (41). For n = 2 , n = 3 and n = 4 , we obtain
ũ_2(x) = (6/19) x − (15/19) x²,
ũ_3(x) = 1/9 + (50/171) x − (470/513) x²,
ũ_4(x) = (30/133) x + (20/133) x² − x⁴,
respectively. Notice that for n = 4 , as expected, the exact solution was fully recovered. This further validates the procedure in Section 6.
Example 5.
Consider the integral equation [9]
u(x) − λ ∫₀^{2π} e^{i(x−t)} u(t) dt = eˣ,  x ∈ [0, 2π].
The kernel K(x, t) = e^{i(x−t)} is continuous on [0, 2π] × [0, 2π] and separable, and therefore the integral equation in (55) can be written as
u(x) − λ e^{ix} ∫₀^{2π} e^{−it} u(t) dt = eˣ.
Following the steps in DCM in Section 3, we set
G = λ e^{ix},  Ψ(u) = ∫₀^{2π} e^{−it} u(t) dt,
and write (56) in the form (14). By means of (5), det V = 1 − 2πλ, and hence the integral Equation (55) has a unique solution if λ ≠ 1/(2π). In this case, substituting into (6), we obtain the unique solution
u(x) = eˣ + [λ(1 + i)(e^{2π} − 1)] / [2(1 − 2πλ)] e^{ix},  λ ≠ 1/(2π).
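The closed-form solution (57) can be checked numerically by substituting it back into the equation; λ = 0.1 is an arbitrary admissible value (λ ≠ 1/(2π)), and the integral is evaluated with a fine trapezoidal rule.

```python
import numpy as np

lam = 0.1   # arbitrary admissible value, lam != 1/(2*pi)
coef = lam * (1.0 + 1j) * (np.exp(2.0 * np.pi) - 1.0) / (2.0 * (1.0 - 2.0 * np.pi * lam))

def u(x):
    # Closed-form solution (57)
    return np.exp(x) + coef * np.exp(1j * x)

# Substitute u back into u(x) - lam * int_0^{2pi} e^{i(x-t)} u(t) dt = e^x
t = np.linspace(0.0, 2.0 * np.pi, 20001)      # fine trapezoidal grid
w = np.full(t.size, t[1] - t[0]); w[0] *= 0.5; w[-1] *= 0.5
x = np.array([0.0, 1.0, 2.5, 5.0])
integral = np.exp(1j * (x[:, None] - t[None, :])) @ (w * u(t))
residual = u(x) - lam * integral - np.exp(x)  # ~ 0 up to quadrature error
```

The residual is dominated by the trapezoidal error of the non-periodic factor e^{(1−i)t}, so it shrinks as O(h²) if the grid is refined further.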

8. Discussion

The main objective of the present article was to present a unified and versatile procedure suitable for symbolic computations for the construction of approximate analytic solutions to linear Fredholm integral equations of the second kind with general continuous kernels.
It was shown how some of the classical methods such as the Direct Computational Method (DCM), the Degenerate Kernel Methods (DKM), the Quadrature Methods (QM) and the Projection Methods (PM) can be incorporated in the proposed procedure. Additionally, it was demonstrated how complicated calculations such as double power series approximation, interpolation in two dimensions and combinations of different types of approximation can be handled in an effective way.
The technique was tested by solving several examples from the literature. Several approximating schemes for the kernel and the integral operator were used and their accuracy and convergence were evaluated.
In all cases, explicit solutions of high accuracy for the entire interval [ a , b ] were obtained, which converge to the true solution as n increases. The power series approximation and polynomial interpolation of the kernel yield excellent results when the kernel is smooth with no “peaks”. For continuous kernels with “peaks” or kernels of the convolution type, numerical integration of the integral operator is appropriate.
In this paper, the emphasis was on presenting a novel framework for solving integral equations and on establishing its versatility and reliability in practice. A proper convergence and error analysis is postponed to a sequel paper. In the proposed framework, other numerical methods such as piecewise projection methods, Galerkin methods and wavelets methods can be included. The technique can be extended to two or more dimensions.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The author would like to thank the anonymous reviewers for their valuable suggestions and comments.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Bystricky, L.; Pålsson, S.; Tornberg, A.-K. An accurate integral equation method for Stokes flow with piecewise smooth boundaries. BIT Numer. Math. 2021, 61, 309–335.
  2. Selvadurai, A.P.S. The Analytical Method in Geomechanics. Appl. Mech. Rev. 2007, 60, 87–106.
  3. Siedlecki, J.; Ciesielski, M.; Blaszczyk, T. Transformation of the second order boundary value problem into integral form - different approaches and a numerical solution. J. Appl. Math. Comput. Mech. 2015, 14, 103–108.
  4. Atkinson, K.; Han, W. Boundary Integral Equations. In Theoretical Numerical Analysis: A Functional Analysis Framework; Springer: New York, NY, USA, 2005; pp. 523–553.
  5. Atkinson, K.E. The Numerical Solution of Integral Equations of the Second Kind; Cambridge University Press: Cambridge, UK, 1997.
  6. Kress, R. Linear Integral Equations, 3rd ed.; Springer: New York, NY, USA, 2014.
  7. Polyanin, A.D.; Manzhirov, A.V. Handbook of Integral Equations, 2nd ed.; Chapman and Hall/CRC: New York, NY, USA, 2008.
  8. Wazwaz, A.M. Linear and Nonlinear Integral Equations; Springer: Berlin, Germany, 2011.
  9. Zemyan, S.M. The Classical Theory of Integral Equations; Birkhäuser: Basel, Switzerland, 2012.
  10. Kumar, S.; Sloan, I.H. A New Collocation-Type Method for Hammerstein Integral Equations. Math. Comput. 1987, 48, 585–593.
  11. Sloan, I.H.; Burn, B.J.; Datyner, N. A new approach to the numerical solution of integral equations. J. Comput. Phys. 1975, 18, 92–105.
  12. Sloan, I.H.; Noussair, E.; Burn, B.J. Projection methods for equations of the second kind. J. Math. Anal. Appl. 1979, 69, 84–103.
  13. Allouch, C.; Remogna, S.; Sbibih, D.; Tahrichi, M. Superconvergent methods based on quasi-interpolating operators for Fredholm integral equations of the second kind. Appl. Math. Comput. 2021, 404, 126227.
  14. Baiburin, M.M.; Providas, E. Exact solution to systems of linear first-order integro-differential equations with multipoint and integral conditions. In Mathematical Analysis and Applications; Rassias, T., Pardalos, P., Eds.; Springer Optimization and Its Applications; Springer: Cham, Switzerland, 2019; Volume 154, pp. 1–16.
  15. Dellwo, D.R. Accelerated degenerate-kernel methods for linear integral equations. J. Comput. Appl. Math. 1995, 58, 135–149.
  16. Fairbairn, A.I.; Kelmanson, M.A. Spectrally accurate Nyström-solver error bounds for 1-D Fredholm integral equations of the second kind. Appl. Math. Comput. 2017, 315, 211–223.
  17. Gao, R.X.; Tan, S.R.; Tsang, L.; Tong, M.S. A Nyström Method with Lagrange's Interpolation for Solving Electromagnetic Scattering by Dielectric Objects. In Proceedings of the 2019 PhotonIcs Electromagnetics Research Symposium—Spring (PIERS-Spring), Rome, Italy, 17–20 June 2019; pp. 1957–1960.
  18. Li, X.Y.; Wu, B.Y. Superconvergent kernel functions approaches for the second kind Fredholm integral equations. Appl. Numer. Math. 2021, 167, 202–210.
  19. Molabahrami, A. The relationship of degenerate kernel and projection methods on Fredholm integral equations of the second kind. Commun. Numer. Anal. 2017, 2017, 34–39.
  20. Parasidis, I.N.; Providas, E. Resolvent operators for some classes of integro-differential equations. In Mathematical Analysis, Approximation Theory and Their Applications; Rassias, T., Gupta, V., Eds.; Springer Optimization and Its Applications; Springer: Cham, Switzerland, 2016; Volume 111, pp. 535–558.
  21. Providas, E. Approximate solution of Fredholm integral and integro-differential equations with non-separable kernels. In Approximation and Computation in Science and Engineering; Daras, N.J., Rassias, T.M., Eds.; Springer Optimization and Its Applications; Springer: Berlin, Germany, 2021; Volume 180, pp. 697–712.
  22. Conte, S.D.; De Boor, C. Elementary Numerical Analysis: An Algorithmic Approach; SIAM: Philadelphia, PA, USA, 2017.
  23. Ramm, A.G. Integral Equations and Applications; Cambridge University Press: Cambridge, UK, 2015.
  24. Mohammad, M. A Numerical Solution of Fredholm Integral Equations of the Second Kind Based on Tight Framelets Generated by the Oblique Extension Principle. Symmetry 2019, 11, 854.
  25. Mohammad, M.; Trounev, A.; Alshbool, M. A Novel Numerical Method for Solving Fractional Diffusion-Wave and Nonlinear Fredholm and Volterra Integral Equations with Zero Absolute Error. Axioms 2021, 10, 165.
  26. Atkinson, K.E.; Shampine, L. Algorithm 876: Solving Fredholm Integral Equations of the Second Kind in Matlab. ACM Trans. Math. Softw. 2008, 34, 1–20.
Figure 1. Solution of integral Equation (42).
Figure 2. Function K ( x , t ) in (51), x = 0.5 .
Table 1. IE (42). DKM: Taylor series.

n     | Const | Coeff sin x | Coeff cos x | ε_n = ‖u − ũ_n‖ | Ratio
2     | 1.0   | −1.698469   | −0.401096   | 8.12 × 10⁻¹     |
4     | 1.0   | 0.013033    | −0.460320   | 9.26 × 10⁻¹     |
8     | 1.0   | −0.889403   | −0.572006   | 1.73 × 10⁻²     | 53.5
16    | 1.0   | −0.906036   | −0.576800   | 1.04 × 10⁻⁷     | 166,346.2
exact | 1.0   | −0.906036   | −0.576800   |                 |
Table 2. IE (42). QM: Simpson's rule.

n     | Const | Coeff sin x | Coeff cos x | ε_n = ‖u − ũ_n‖ | Ratio
2     | 1.0   | −0.686838   | −0.655882   | 2.33 × 10⁻¹     |
4     | 1.0   | −0.908102   | −0.578115   | 2.45 × 10⁻³     |
8     | 1.0   | −0.906158   | −0.576878   | 1.45 × 10⁻⁴     | 16.9
16    | 1.0   | −0.906044   | −0.576805   | 8.91 × 10⁻⁶     | 16.3
exact | 1.0   | −0.906036   | −0.576800   |                 |
Table 3. IE (48), λ_A = 2. DKM: Taylor series.

n  | ε_n = ‖u − ũ_n‖ | η = ε_n/‖u‖   | 100% × η      | Ratio
2  | 3.77 × 10⁻¹     | 1.39 × 10⁻¹   | 13.88%        |
4  | 1.55 × 10⁻²     | 5.70 × 10⁻³   | 0.57%         | 24.3
8  | 4.14 × 10⁻⁶     | 1.52 × 10⁻⁶   | 1.52 × 10⁻⁴%  | 3.7 × 10³
16 | 3.62 × 10⁻¹⁵    | 1.33 × 10⁻¹⁵  | 1.33 × 10⁻¹³% | 1.1 × 10⁹
Table 4. IE (48), λ_A = 2. DKM: Polynomials.

n | ε_n = ‖u − ũ_n‖ | η = ε_n/‖u‖  | 100% × η     | x at max ε_n
2 | 2.26 × 10⁻²     | 9.99 × 10⁻³  | 1.00%        | 0.8150
4 | 6.98 × 10⁻⁵     | 2.80 × 10⁻⁵  | 2.80 × 10⁻³% | 0.9150
8 | 2.17 × 10⁻¹⁰    | 8.28 × 10⁻¹¹ | 8.28 × 10⁻⁹% | 0.9625
Table 5. IE (48), λ_A = 2. QM: Trapezoidal rule and Taylor series.

n  | ε_n = ‖u − ũ_n‖ | η = ε_n/‖u‖ | 100% × η | Ratio
2  | 1.73 × 10⁻¹     | 6.36 × 10⁻² | 6.36%    |
4  | 6.37 × 10⁻²     | 2.34 × 10⁻² | 2.34%    | 2.7
8  | 1.99 × 10⁻²     | 7.33 × 10⁻³ | 0.73%    | 3.2
16 | 4.95 × 10⁻³     | 1.82 × 10⁻³ | 0.18%    | 4.0
Table 6. IE (48), λ_A = 50. QM: Trapezoidal rule and Taylor series.

n  | ε_n = ‖u − ũ_n‖ | η = ε_n/‖u‖  | 100% × η | Ratio
2  | 2.172 × 10⁻¹    | 7.989 × 10⁻² | 7.989%   |
4  | 8.710 × 10⁻³    | 3.204 × 10⁻³ | 0.320%   | 24.9
8  | 3.356 × 10⁻⁴    | 1.235 × 10⁻⁴ | 0.012%   | 26.0
16 | 8.473 × 10⁻⁵    | 3.117 × 10⁻⁵ | 0.003%   | 4.0
Table 7. IE (48), λ_A = 2. QM: Trapezoidal rule and Polynomials.

n  | ε_n = ‖u − ũ_n‖ | η = ε_n/‖u‖ | 100% × η | Ratio
2  | 3.55 × 10⁻¹     | 1.31 × 10⁻¹ | 13.07%   |
4  | 8.14 × 10⁻²     | 2.99 × 10⁻² | 2.99%    | 4.4
8  | 1.99 × 10⁻²     | 7.33 × 10⁻³ | 0.73%    | 4.1
16 | 4.95 × 10⁻³     | 1.82 × 10⁻³ | 0.18%    | 4.1
Table 8. IE (48), λ_A = 50. QM: Trapezoidal rule and Polynomials.

n  | ε_n = ‖u − ũ_n‖ | η = ε_n/‖u‖  | 100% × η | Ratio
2  | 5.351 × 10⁻³    | 1.968 × 10⁻³ | 0.197%   |
4  | 1.351 × 10⁻³    | 4.971 × 10⁻⁴ | 0.050%   | 4.0
8  | 3.387 × 10⁻⁴    | 1.246 × 10⁻⁴ | 0.012%   | 4.0
16 | 8.473 × 10⁻⁵    | 3.117 × 10⁻⁵ | 0.003%   | 4.0
Table 9. IE (48), λ_A = 1. PM: Collocation with Polynomials.

  |   [24], ε_n = ‖u − ũ_n‖₂    | present, ε_n = ‖u − ũ_n‖
n | N* | B₂-OEP      | B₄-OEP      | N* | ε_n
2 | 40  | 1.03 × 10⁻³ | 4.45 × 10⁻⁷ | 3 | 1.96 × 10⁻²
3 | 112 | 1.08 × 10⁻⁴ | 1.78 × 10⁻⁷ | 4 | 1.47 × 10⁻³
4 | 288 | 1.25 × 10⁻⁴ | 4.35 × 10⁻⁸ | 5 | 6.02 × 10⁻⁵
5 | 704 | 9.78 × 10⁻⁵ | 1.58 × 10⁻⁹ | 6 | 2.87 × 10⁻⁶
* N = order of the system of linear algebraic equations to solve.
Table 10. IE (50), λ_A = 0.5, c = 0.1. QM: Trapezoidal rule.

n   | ε_n = ‖u − ũ_n‖ | η = ε_n/‖u‖ | 100% × η | Ratio
8   | 5.20 × 10⁻²     | 2.79 × 10⁻¹ | 27.95%   |
16  | 7.32 × 10⁻³     | 2.82 × 10⁻² | 2.82%    |
32  | 2.02 × 10⁻³     | 7.78 × 10⁻³ | 0.78%    | 3.6
64  | 5.16 × 10⁻⁴     | 1.98 × 10⁻³ | 0.20%    | 3.9
128 | 1.30 × 10⁻⁴     | 4.98 × 10⁻⁴ | 0.05%    | 4.0
Table 11. IE (50), λ_A = 0.5, c = 0.1. QM: Simpson's rule.

n  | [26] ε_n   | ε_n = ‖u − ũ_n‖ | η = ε_n/‖u‖  | 100% × η | Ratio
8  |            | 6.855 × 10⁻²    | 3.385 × 10⁻¹ | 33.850%  |
16 |            | 5.630 × 10⁻³    | 2.216 × 10⁻² | 2.216%   |
32 |            | 1.679 × 10⁻⁴    | 6.456 × 10⁻⁴ | 0.065%   |
64 | 6.6 × 10⁻⁶ | 6.636 × 10⁻⁶    | 2.552 × 10⁻⁵ | 0.003%   | 25.3