Article

A Picard-Type Iterative Scheme for Fredholm Integral Equations of the Second Kind

by José M. Gutiérrez *,† and Miguel Á. Hernández-Verón †
Department of Mathematics and Computer Sciences, University of La Rioja, 26006 Logroño, Spain
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Mathematics 2021, 9(1), 83; https://doi.org/10.3390/math9010083
Submission received: 10 December 2020 / Revised: 27 December 2020 / Accepted: 28 December 2020 / Published: 1 January 2021
(This article belongs to the Special Issue Application of Iterative Methods for Solving Nonlinear Equations)

Abstract: In this work, we present an application of Newton's method for solving nonlinear equations in Banach spaces to a particular problem: the approximation of the inverse operators that appear in the solution of Fredholm integral equations. Therefore, we construct an iterative method with quadratic convergence that does not use either derivatives or inverse operators. Consequently, this new procedure is especially useful for solving non-homogeneous Fredholm integral equations of the second kind. We combine this method with a technique to find the solution of Fredholm integral equations with separable kernels to obtain a procedure that allows us to approach the solution when the kernel is non-separable.

1. Introduction

In general, an equation in which the unknown function appears under an integral sign is called an integral equation [1,2,3]. Both linear and nonlinear integral equations appear in numerous fields of science and engineering [4,5,6], because many physical processes and mathematical models can be described by them; these equations thus provide an important tool for modeling processes [7]. In particular, integral equations arise in fluid mechanics, biological models, solid state physics, chemical kinetics, etc. In addition, many initial and boundary value problems can be easily turned into integral equations.
The definition of an integral equation given previously is very general, so in this work, we focus on some particular integral equations that are widely applied, such as Fredholm integral equations [8]. This kind of integral equation appears frequently in mathematical physics, engineering, and mathematics [9].
Fredholm integral equations of the first kind have the form:
$$f(x) = \int_a^b N(x,t)\, y(t)\, dt, \quad x \in [a,b];$$
and those of the second kind can be written as:
$$y(x) = f(x) + \lambda \int_a^b N(x,t)\, y(t)\, dt, \quad x \in [a,b], \quad \lambda \in \mathbb{R}. \qquad (1)$$
In both cases, $-\infty < a < b < +\infty$, $\lambda \in \mathbb{R}$, $f(x) \in C[a,b]$ is a given function and $N(x,t)$, defined in $[a,b] \times [a,b]$, is called the kernel of the integral equation. $y(x) \in C[a,b]$ is the unknown function to be determined. The integral equation is said to be homogeneous when $f(x)$ is the zero function and non-homogeneous otherwise.
For the integral Equation (1), we can consider the operator $\mathcal{N}: C[a,b] \to C[a,b]$, given by:
$$[\mathcal{N}(y)](x) = \int_a^b N(x,t)\, y(t)\, dt, \quad x \in [a,b],$$
and the operator $F: C[a,b] \to C[a,b]$ given by:
$$[F(y)](x) = f(x) + \lambda \int_a^b N(x,t)\, y(t)\, dt = f(x) + \lambda [\mathcal{N}(y)](x), \quad x \in [a,b], \quad \lambda \in \mathbb{R}. \qquad (2)$$
If the operator $F$ defined in (2) is a contraction, the Banach fixed point theorem [10] guarantees the existence of a unique fixed point of $F$ in $C[a,b]$. In addition, this fixed point can be approximated by the iterative scheme:
$$y_0 \ \text{given in} \ C[a,b], \quad y_{n+1} = F(y_n), \quad n \geq 0. \qquad (3)$$
The operator $F$ defined in (2) is a contraction in $C[a,b]$ if there exists $\alpha \in (0,1)$ such that:
$$\|F(u) - F(v)\| \leq \alpha \|u - v\|, \quad \text{for all} \ u, v \in C[a,b].$$
If the operator $F$ is differentiable in $C[a,b]$, the condition $\|F'(y)\| < 1$, for all $y \in C[a,b]$, implies that $F$ is a contraction. Note that:
$$[F'(y)u](x) = \lambda \int_a^b N(x,t)\, u(t)\, dt,$$
so the operator $F$ has a unique fixed point $y^*$ in $C[a,b]$ if $|\lambda| \|N\| < 1$, with:
$$\|N\| = \max_{x \in [a,b]} \int_a^b |N(x,t)|\, dt.$$
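As a quick finite-dimensional illustration, the scheme (3) can be run by discretizing the integral with a quadrature rule. The data below (kernel $N(x,t) = xt$ on $[0,1]$, $\lambda = 1/2$, $f(x) = x$) are our own toy choices, not taken from the paper; for them $|\lambda|\|N\| = 1/4 < 1$, so $F$ is a contraction, and the exact solution is $y(x) = 6x/5$:

```python
# Successive approximations y_{n+1} = f + lambda * N(y_n) for a toy
# second-kind Fredholm equation on [0, 1], discretized by the trapezoidal rule.
# Toy data (our own choice): N(x, t) = x * t, lambda = 1/2, f(x) = x,
# whose exact solution is y(x) = 6x/5.

n = 200                                   # number of subintervals
h = 1.0 / n
xs = [i * h for i in range(n + 1)]
w = [h] * (n + 1)                         # trapezoidal weights
w[0] = w[-1] = h / 2

lam = 0.5
f = list(xs)
y = list(f)                               # y_0 = f

for _ in range(60):                       # |lam| * ||N|| = 1/4, so this is plenty
    # the kernel x*t is separable, so the integral of t*y(t) factors out
    integral = sum(wj * t * yj for wj, t, yj in zip(w, xs, y))
    y = [fi + lam * x * integral for fi, x in zip(f, xs)]

err = max(abs(yi - 1.2 * x) for yi, x in zip(y, xs))
print(f"max error vs exact solution 6x/5: {err:.2e}")
```

The remaining error is the quadrature error of the trapezoidal rule, since the fixed-point iteration itself has converged far below it.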
We can find in the literature many techniques to solve Fredholm integral equations, either exactly or approximately. For instance, the direct computation method and the method of successive approximations are amongst the most widely used for this purpose. Another well-known technique consists of transforming the Fredholm equation into an equivalent boundary value problem. We can add to this list other recently developed techniques, such as the Adomian decomposition method [11], the modified decomposition method [12], or the variational iteration method [13]. Although theorems of existence and convergence are important, the emphasis in this paper is on the use of iterative processes rather than on proving theoretical results: our concern is the actual determination of the solution $y(x)$ of Fredholm integral equations of the first and second kind.
In this paper, we present a Newton-type iterative method that shares many properties with Picard-type iterative methods; namely, it is derivative-free and does not use inverse operators, while preserving the quadratic order of convergence that characterizes Newton's method. These features allow us to design an efficient iterative method. In fact, with a very small number of iterations, we can find competitive approximations to the solution of the involved Fredholm integral equation. This is one of the main targets of our research: to justify that a few steps of our iterative procedure are enough to reach a good approximation to the solution.
The new iterative procedure is based on the following idea (see [14]): Newton’s method gives a sequence that does not use inverses when it is applied to the problem of approximating the inverse of a given operator.
The iterative scheme (3) is known as the method of successive approximations for the operator $F$. It converges to the fixed point $y^*$ for any function $y_0 \in C[a,b]$.
It is known that a fixed point approximation can be expressed by different iteration functions $y = H(y)$ with $H: C[a,b] \to C[a,b]$. In our case, instead of (3), we consider the well-known iterative scheme of Picard [15], which can be written in the form:
$$y_0 \ \text{given in} \ C[a,b], \quad y_{n+1} = P(y_n) = y_n - G(y_n), \quad n \geq 0,$$
where $G(y) = (I - F)(y)$. Obviously, a fixed point of $P$ is a fixed point of $F$ and vice versa. Moreover, both methods provide the same iterations, since $P(y) = y - G(y) = y - (I - F)(y) = F(y)$.
Note that the Banach fixed point theorem does not allow us to locate a fixed point in a domain smaller than the whole space $C[a,b]$. Neither the successive approximation method nor Picard's method needs inverse operators or derivative operators; as a consequence, they only reach a linear order of convergence. Our aim in this paper is to construct iterative processes with a quadratic order of convergence, but without using inverse operators or derivative operators. In addition, we obtain results that allow us to locate the fixed point in a subset of $C[a,b]$.
Notice that Equation (1) can be expressed as:
$$(I - \lambda \mathcal{N})\, y(x) = f(x). \qquad (4)$$
Therefore, the solution $y(x)$ of Equation (1) is given by:
$$y(x) = (I - \lambda \mathcal{N})^{-1} f(x). \qquad (5)$$
From a theoretical point of view, Formula (5) gives the exact solution of Equation (1) or (4). However, for practical purposes, computing the inverse $(I - \lambda \mathcal{N})^{-1}$ may be impossible or very complicated. For this reason, we propose the use of iterative methods to approximate this inverse and, therefore, the solution of the integral Equation (1).
In Section 2, we use Newton's method for this purpose to obtain a method with quadratic convergence for calculating inverse operators. In addition, we describe two procedures for approaching the solution of the integral Equation (1), one for separable kernels and another one for non-separable kernels. The approximated solutions obtained by the methods in Section 2 can be used as initial values for the iterative method explained in Section 3 for finding solutions of integral equations. Actually, by combining the techniques given in these two sections, we can obtain, with a low number of steps, good approximations of the solution of the integral Equation (1), especially in the non-separable case. We illustrate the theoretical results with some numerical examples in Section 4.

2. Newton’s Method for the Calculus of Inverse Operators

As we said, we can calculate the solution of (1) by Formula (5). Therefore, we consider the problem of approximating the inverse of the linear operator $A = I - \lambda \mathcal{N}$ by means of iterative methods for solving nonlinear equations.
To do this, we introduce the set:
$$GL(C[a,b], C[a,b]) = \{ H \in \mathcal{L}(C[a,b], C[a,b]) : H^{-1} \ \text{exists} \},$$
where $\mathcal{L}(C[a,b], C[a,b])$ is the set of bounded linear operators from the Banach space $C[a,b]$ into the Banach space $C[a,b]$.
Actually, in this section, we use Newton's method to approximate the inverse of a given linear operator $A \in GL(C[a,b], C[a,b])$, namely for solving:
$$\mathcal{T}(H) = 0, \quad \text{where} \quad \mathcal{T}(H) = H^{-1} - A.$$
Therefore, the Newton iteration in this case can be written in the following way:
$$H_0 \in \mathcal{L}(C[a,b], C[a,b]) \ \text{given}, \quad H_{m+1} = H_m - [\mathcal{T}'(H_m)]^{-1} \mathcal{T}(H_m), \quad m \geq 0,$$
or equivalently (to avoid inverse operators, as recommended in [16]):
$$\mathcal{T}'(H_m)(H_{m+1} - H_m) = -\mathcal{T}(H_m), \quad m \geq 0.$$
Indeed, to obtain the corresponding algorithm, we only need to compute $\mathcal{T}'(H_m)$. Therefore, given $H \in GL(C[a,b], C[a,b])$, as $H^{-1}$ exists, if:
$$0 < \varepsilon < \frac{1}{\|L\| \, \|H^{-1}\|},$$
we have $\|I - H^{-1}(H + \varepsilon L)\| < 1$ for $L \in GL(C[a,b], C[a,b])$. Therefore, it is known that $H + \varepsilon L \in GL(C[a,b], C[a,b])$, and then:
$$\mathcal{T}'(H)L = \lim_{\varepsilon \to 0} \frac{1}{\varepsilon} \left[ \mathcal{T}(H + \varepsilon L) - \mathcal{T}(H) \right] = -H^{-1} L H^{-1}$$
(see [17]). Note that $\mathcal{T}'(H)$ is intended as a Gateaux derivative, but since $\mathcal{T}$ is locally Lipschitz in $GL(C[a,b], C[a,b])$ and $\mathcal{T}'(H): GL(C[a,b], C[a,b]) \to GL(C[a,b], C[a,b])$ is linear, $\mathcal{T}$ is in fact Fréchet differentiable.
As a consequence, Newton's method is now given by the following algorithm:
$$H_0 \in \mathcal{L}(C[a,b], C[a,b]) \ \text{given}, \quad H_{m+1} = 2 H_m - H_m A H_m, \quad m \geq 0. \qquad (6)$$
Observe that, in this case, Newton's method does not use inverse operators for approximating the inverse operator $A^{-1} = (I - \lambda \mathcal{N})^{-1}$.
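In a finite-dimensional setting, where $A$ is a matrix, iteration (6) is the classical Newton–Schulz scheme for the inverse. The following sketch uses our own illustrative $2 \times 2$ matrix and seed, not data from the paper; the seed $H_0 = I/5$ works here because $A$ has eigenvalues 2 and 5, so the spectral radius of $I - H_0 A$ is $3/5 < 1$:

```python
import numpy as np

# Newton's method for T(H) = H^{-1} - A, i.e. H_{m+1} = 2 H_m - H_m A H_m,
# applied to a small matrix. No inverses are used inside the loop.
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])      # eigenvalues 2 and 5
H = np.eye(2) / 5.0             # seed: spectral radius of I - H0 A is 3/5 < 1

errors = []
for _ in range(8):
    H = 2 * H - H @ A @ H
    errors.append(np.linalg.norm(np.eye(2) - H @ A))

print(errors[:4])               # residuals ||I - H_m A|| shrink very fast
```

Since $I - H_{m+1}A = (I - H_m A)^2$, the printed residuals square at every step, which is exactly the quadratic convergence established below.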
Now, we prove a local convergence result for the sequence (6). To do this, we suppose that $A^{-1}$ exists or, equivalently, that $|\lambda| \|N\| < 1$. Therefore, we obtain the following result.
In the rest of this paper, we use the following notation for the open and closed balls centered at a point $x_0$ of a given Banach space $X$ and with radius $R$:
$$B(x_0, R) = \{ x \in X : \|x - x_0\| < R \}, \quad \overline{B(x_0, R)} = \{ x \in X : \|x - x_0\| \leq R \}.$$
Theorem 1.
Let $H_0 \in B\left(A^{-1}, \frac{\theta}{\|A\|}\right)$ with $\theta \in (0,1)$. Then, the sequence $\{H_m\}$ defined by (6) belongs to $B\left(A^{-1}, \frac{\theta}{\|A\|}\right)$ and converges quadratically to $A^{-1}$. In addition,
$$\|H_m - A^{-1}\| \leq \theta^{2^m - 1} \|H_0 - A^{-1}\|.$$
Proof.
Taking into account the definition of the sequence (6), we have:
$$H_{m+1} A - I = 2 H_m A - H_m A H_m A - I = -(H_m A - I)^2 = -(H_m - A^{-1}) A (H_m - A^{-1}) A. \qquad (7)$$
Then, if we apply this equality recursively, for $H_m \in B\left(A^{-1}, \frac{\theta}{\|A\|}\right)$, we have:
$$\|H_{m+1} - A^{-1}\| \leq \|(H_{m+1} A - I) A^{-1}\| \leq \|H_m - A^{-1}\|^2 \|A\| < \frac{\theta^2}{\|A\|} < \frac{\theta}{\|A\|},$$
so $H_{m+1} \in B\left(A^{-1}, \frac{\theta}{\|A\|}\right)$ for all $m \geq 0$.
We can apply (7) to obtain:
$$H_{m+1} - A^{-1} = (H_{m+1} A - I) A^{-1} = -(H_m A - I)^2 A^{-1} = -(H_m - A^{-1}) A (H_m - A^{-1}),$$
and, therefore:
$$\|H_{m+1} - A^{-1}\| \leq \|A\| \, \|H_m - A^{-1}\|^2. \qquad (8)$$
Consequently,
$$\|H_m - A^{-1}\| \leq \|H_{m-1} - A^{-1}\|^2 \|A\| \leq \cdots \leq \|H_0 - A^{-1}\|^{2^m} \|A\|^{2^{m-1} + \cdots + 2 + 1},$$
and therefore, by the hypothesis:
$$\|H_m - A^{-1}\| \leq \theta^{2^m - 1} \|H_0 - A^{-1}\|. \qquad \square$$
Next, we prove a semilocal convergence result, without assuming the existence of $A^{-1}$.
Theorem 2.
Let $H_0 \in \mathcal{L}(C[a,b], C[a,b])$ be such that $\|I - H_0 A\| \leq \delta$, with $\delta \in \left(0, \frac{\sqrt{5}-1}{2}\right)$. Then, the sequence $\{H_m\}$ defined by (6) belongs to $B\left(H_0, \frac{\|H_0\|}{1 - \delta(1+\delta)}\right)$ and converges quadratically to $H^*$, with $H^* A = I$. Moreover,
$$\|I - H_m A\| \leq \delta^{2^m - 1} \|I - H_0 A\|.$$
Proof.
A direct application of (7) gives us:
$$I - H_m A = (I - H_{m-1} A)^2,$$
and therefore:
$$\|I - H_m A\| \leq \|I - H_{m-1} A\|^2 \leq \cdots \leq \|I - H_0 A\|^{2^m} \leq \delta^{2^m}. \qquad (9)$$
On the other hand,
$$\|H_m\| = \|2 H_{m-1} - H_{m-1} A H_{m-1}\| \leq \left(1 + \|I - H_{m-1} A\|\right) \|H_{m-1}\|.$$
Now, by applying the previous inequality recursively and taking into account (9), we obtain:
$$\|H_m\| \leq \prod_{j=0}^{m-1} \left(1 + \delta^{2^j}\right) \|H_0\| < (1 + \delta)^m \|H_0\|. \qquad (10)$$
Consequently, by the definition of the sequence (6), (9), and (10), we have for $k \in \mathbb{N}$:
$$\|H_{m+k} - H_m\| \leq \|H_{m+k} - H_{m+k-1}\| + \cdots + \|H_{m+1} - H_m\| \leq \sum_{j=0}^{k-1} \|I - H_{m+j} A\| \, \|H_{m+j}\| \leq \sum_{j=0}^{k-1} \delta^{2^{m+j}} (1+\delta)^{m+j} \|H_0\| < \sum_{j=0}^{k-1} [\delta(1+\delta)]^{m+j} \|H_0\|. \qquad (11)$$
Therefore, as $\delta(1+\delta) < 1$, for $m = 0$, we have:
$$\|H_k - H_0\| < \frac{\|H_0\|}{1 - \delta(1+\delta)}.$$
Then, $H_k \in B\left(H_0, \frac{\|H_0\|}{1 - \delta(1+\delta)}\right)$ for $k \geq 1$. Moreover, we obtain:
$$\|H_{m+k} - H_m\| < \frac{[\delta(1+\delta)]^m - [\delta(1+\delta)]^{m+k}}{1 - \delta(1+\delta)} \|H_0\|,$$
and it follows that $\{H_m\}$ is a Cauchy sequence. Then, $\{H_m\}$ converges to $H^*$. Moreover, as:
$$\|I - H_m A\| \leq \|I - H_0 A\|^{2^m} \leq \delta^{2^m},$$
it follows that $\lim_{m \to \infty} (I - H_m A) = 0$ and then $H^* A = I$. $\square$
Notice that, if we prove that $A^{-1}$ exists, then $H^* = A^{-1}$. On the other hand, even if we do not suppose that $A^{-1}$ exists, if we consider $H_0$ such that $H_0 A = A H_0$, we have:
$$A H_1 = A (2 H_0 - H_0 A H_0) = 2 A H_0 - A H_0 A H_0 = 2 H_0 A - H_0 A H_0 A = H_1 A.$$
Therefore, by an inductive procedure, we obtain that $H_m A = A H_m$ and then $A H^* = I$. Therefore, in this case, $H^*$ is the inverse operator of $A$. However, if $H_0 A \neq A H_0$, then $H^* = \lim_{m \to \infty} H_m$ satisfies only $H^* A = I$, so that the sequence $\{H_m\}$ converges to a left inverse of $A$.
Our aim in the rest of the section is to address the computation of the inverse of the linear operator $A = I - \lambda \mathcal{N}$ that appears in the solution of the Fredholm integral Equation (1). We distinguish two situations, depending on whether the kernel $N$ is separable or not. In the first case, an exact solution can be obtained by means of algebraic procedures, whereas in the second case, we use the sequence defined by Newton's method (6) to approximate $A^{-1} = (I - \lambda \mathcal{N})^{-1}$.

2.1. Separable Kernels

In the first case, we assume that $N(x,t)$ is a separable kernel, that is:
$$N(x,t) = \sum_{i=1}^m \alpha_i(x) \beta_i(t).$$
If we denote $A_j = \int_a^b \beta_j(t)\, y(t)\, dt$, we have by (5):
$$(I - \lambda \mathcal{N})\, y(x) = y(x) - \lambda \sum_{j=1}^m \alpha_j(x) A_j = f(x),$$
and:
$$(I - \lambda \mathcal{N})^{-1} f(x) = y(x) = f(x) + \lambda \sum_{j=1}^m \alpha_j(x) A_j. \qquad (12)$$
In addition, the integrals $A_j$ can be calculated independently of $y$. To do this, we multiply the second equality of (12) by $\beta_i(x)$ and integrate in the $x$ variable. Therefore, we have:
$$A_i - \lambda \sum_{j=1}^m \left( \int_a^b \beta_i(x) \alpha_j(x)\, dx \right) A_j = \int_a^b \beta_i(x) f(x)\, dx.$$
Now, if we denote:
$$a_{ij} = \int_a^b \beta_i(x) \alpha_j(x)\, dx \quad \text{and} \quad b_i = \int_a^b \beta_i(x) f(x)\, dx,$$
we obtain the following linear system of equations:
$$A_i - \lambda \sum_{j=1}^m a_{ij} A_j = b_i, \quad i = 1, \ldots, m. \qquad (13)$$
This system has a unique solution if:
$$(-\lambda)^m \begin{vmatrix} a_{11} - \frac{1}{\lambda} & a_{12} & a_{13} & \cdots & a_{1m} \\ a_{21} & a_{22} - \frac{1}{\lambda} & a_{23} & \cdots & a_{2m} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ a_{m1} & a_{m2} & a_{m3} & \cdots & a_{mm} - \frac{1}{\lambda} \end{vmatrix} \neq 0.$$
Then, we assume that $\frac{1}{\lambda}$ is not an eigenvalue of the matrix $(a_{ij})$. Thus, if $A_1, A_2, \ldots, A_m$ is the solution of the system (13), we can obtain directly the solution:
$$y(x) = (I - \lambda \mathcal{N})^{-1} f(x) = f(x) + \lambda \sum_{j=1}^m \alpha_j(x) A_j. \qquad (14)$$
From a practical point of view, we can solve the systems defined in (13) by using classical techniques, such as $LU$ decomposition or iterative methods for solving linear systems. We can also use any kind of specific scientific software for this purpose. Notice that System (13) depends directly on the integrals that define the coefficients $a_{ij}$ and $b_i$. When analytical integration is not possible, a numerical integration formula must be used.
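The whole recipe of this subsection fits in a few lines: approximate $a_{ij}$ and $b_i$ by quadrature, solve the system (13), and assemble (14). The routine below is a sketch under our own toy data (kernel $N(x,t) = xt$, so $m = 1$, with $\lambda = 1/2$ and $f(x) = x$, whose exact solution is $y(x) = 6x/5$); none of the data comes from the paper:

```python
import numpy as np

def solve_separable(alphas, betas, f, lam, a, b, n=400):
    """Solve a second-kind Fredholm equation with separable kernel
    N(x,t) = sum_i alphas[i](x) * betas[i](t): build system (13) by
    trapezoidal quadrature, solve it, and assemble formula (14)."""
    xs = np.linspace(a, b, n + 1)
    w = np.full(n + 1, (b - a) / n)        # trapezoidal weights
    w[0] = w[-1] = (b - a) / (2 * n)
    m = len(alphas)
    amat = np.array([[np.sum(w * bi(xs) * aj(xs)) for aj in alphas]
                     for bi in betas])
    bvec = np.array([np.sum(w * bi(xs) * f(xs)) for bi in betas])
    # System (13): A_i - lam * sum_j a_ij A_j = b_i
    A = np.linalg.solve(np.eye(m) - lam * amat, bvec)
    return lambda x: f(x) + lam * sum(c * al(x) for c, al in zip(A, alphas))

# Toy data (our own, not from the paper): N(x,t) = x*t, lam = 1/2, f(x) = x.
# The exact solution is y(x) = 6x/5.
y = solve_separable([lambda x: x], [lambda t: t], lambda x: x, 0.5, 0.0, 1.0)
```

Up to the quadrature error in $a_{11}$ and $b_1$, the returned function matches the exact solution; with analytically computed integrals, it would be exact.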

2.2. Non-Separable Kernels

Now, we wonder what happens when the kernel is not separable. With the aim of reaching quadratic convergence, we can apply Newton's method to approximate the inverse of the operator $A = I - \lambda \mathcal{N}$ defined in (5) and thus obtain the solution $y(x)$ of the Fredholm integral Equation (1).
Our idea is to approximate $N(x,t)$ by a separable kernel $\tilde{N}(x,t)$, that is:
$$N(x,t) = \tilde{N}(x,t) + R(x,t), \qquad (15)$$
where $\tilde{N}(x,t) = \sum_{i=1}^m \alpha_i(x) \beta_i(t)$ and $R(x,t)$ is the error in the approximation.
We consider the operator:
$$[\tilde{\mathcal{N}}(y)](x) = \int_a^b \tilde{N}(x,t)\, y(t)\, dt, \quad x \in [a,b]. \qquad (16)$$
With this operator, we can consider:
$$H_0 = (I - \lambda \tilde{\mathcal{N}})^{-1} \qquad (17)$$
as the starting seed in the iterative process (6) to approximate $A^{-1}$.
To check that $H_0$ defined in (17) is a good choice, we can apply Theorem 2. We need to guarantee that:
$$\|I - H_0 A\| \leq \delta < \frac{\sqrt{5} - 1}{2}.$$
In this case, we have:
$$\|I - H_0 A\| \leq |\lambda| \, \|H_0\| \, \|\mathcal{N} - \tilde{\mathcal{N}}\| \leq |\lambda| \, \|H_0\| \, \|R\| \, (b - a),$$
where:
$$\|R\| = \max_{x \in [a,b]} \int_a^b |R(x,t)|\, dt$$
and $\|H_0\|$ is the norm induced by the max-norm in the space $\mathcal{L}(C[a,b], C[a,b])$.
Consequently, if the error $R$ in (15) is small enough, $H_0$ can be considered a good starting point for the iterative process (6). If, for example, $N(x,t)$ is sufficiently differentiable in one of its arguments, we can use a Taylor expansion to obtain the approximation given in (15); the remainder of the Taylor series then tells us how close $R$ is to zero. This approximation improves, in general, with the number of terms of the Taylor expansion.
Now, we compute $m$ steps of Newton's method (6) to approximate $A^{-1} = (I - \lambda \mathcal{N})^{-1}$. Therefore, once $H_m$ is calculated, we consider $H_m f(x)$ as an approximated solution of the Fredholm integral Equation (1). This is the main idea developed in the next section.
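This seed-quality check is easy to automate. The sketch below uses our own toy kernel $N(x,t) = e^{xt}$ on $[0,1]^2$ with $\lambda = 1/2$ (not one of the paper's examples), truncated after four Taylor terms; it estimates $\|\mathcal{N} - \tilde{\mathcal{N}}\| = \max_x \int_0^1 |R(x,t)|\, dt$ directly on a grid, so the $(b-a)$ factor is absorbed, and then verifies the semilocal condition $\delta < (\sqrt{5}-1)/2$:

```python
import math

# Toy setting (our own): N(x,t) = exp(x*t) on [0,1]^2 with lam = 1/2,
# truncated to the separable kernel Nt(x,t) = sum_{i<4} (x t)^i / i!.
lam = 0.5
n = 200
grid = [i / n for i in range(n + 1)]

def Nt(x, t):
    return sum((x * t) ** i / math.factorial(i) for i in range(4))

def trap(vals):                      # trapezoidal rule on the uniform grid
    return (sum(vals) - 0.5 * (vals[0] + vals[-1])) / n

# ||N - Nt|| = max_x  int |R(x,t)| dt   and   ||Nt|| = max_x  int |Nt(x,t)| dt
normR = max(trap([abs(math.exp(x * t) - Nt(x, t)) for t in grid]) for x in grid)
normNt = max(trap([abs(Nt(x, t)) for t in grid]) for x in grid)

# Banach-lemma bound ||H0|| <= 1/(1 - |lam| ||Nt||), valid since lam*normNt < 1
H0_bound = 1.0 / (1.0 - lam * normNt)          # here lam*normNt is about 0.85
delta = lam * H0_bound * normR
print(delta, delta < (math.sqrt(5) - 1) / 2)   # small delta: H0 is a valid seed
```

With only four Taylor terms, the remainder is already small enough that the seed (17) satisfies the hypothesis of Theorem 2 comfortably.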

3. A Picard-Type Iterative Scheme from the Newton Method

As we said in the previous section, our target now is to obtain the solution $y^*$ of Equation (1). As the limit $H^*$ of the sequence of linear operators defined in (6) satisfies $H^* A = I$, with $A = I - \lambda \mathcal{N}$, we have:
$$(I - \lambda \mathcal{N})\, y(x) = f(x),$$
$$H^* (I - \lambda \mathcal{N})\, y(x) = H^* f(x),$$
$$y(x) = H^* f(x).$$
Therefore, to approximate the solution $y^*$ of Equation (1), we can consider the following iterative scheme, for $A = I - \lambda \mathcal{N}$:
$$H_0 \in \mathcal{L}(C[a,b], C[a,b]) \ \text{given}, \quad y_0(x) = H_0 f(x), \quad H_m = 2 H_{m-1} - H_{m-1} A H_{m-1}, \quad y_m(x) = H_m f(x), \quad m \geq 1. \qquad (18)$$
From the previous theorems, it is easy to prove the local and semilocal convergence of the iterative scheme (18).
Theorem 3.
Suppose that $A^{-1}$ exists. Given $H_0 \in \mathcal{L}(C[a,b], C[a,b])$ such that $H_0 \in B\left(A^{-1}, \frac{\theta}{\|A\|}\right)$, with $\theta \in (0,1)$, then, for each $y_0 \in B\left(y^*, \frac{\theta \|f\|}{\|A\|}\right)$, the sequence $\{y_m\}$ defined by (18) belongs to $B\left(y^*, \frac{\theta \|f\|}{\|A\|}\right)$ and converges quadratically to $y^*$, the solution of Equation (1). In addition,
$$\|y_m - y^*\| \leq \theta^{2^m - 1} \|H_0 - A^{-1}\| \, \|f\|.$$
Proof.
From Equation (7) for $m = 0$, we obtain:
$$\|H_1 - A^{-1}\| = \|(H_1 A - I) A^{-1}\| \leq \|H_0 - A^{-1}\|^2 \|A\| < \frac{\theta^2}{\|A\|} < \frac{\theta}{\|A\|},$$
and:
$$\|y_1 - y^*\| = \|(H_1 - A^{-1}) f\| \leq \frac{\theta \|f\|}{\|A\|}.$$
Now, by mathematical induction, it is easy to prove that $y_m \in B\left(y^*, \frac{\theta \|f\|}{\|A\|}\right)$ for $m \in \mathbb{N}$.
On the other hand, taking into account (8), we have:
$$\|y_m - y^*\| \leq \|H_m - A^{-1}\| \, \|f\| \leq \theta^{2^m - 1} \|H_0 - A^{-1}\| \, \|f\|.$$
Then, the result follows directly. $\square$
Now, to prove a semilocal convergence result for the iterative scheme (18), we will not assume the existence of A 1 .
Theorem 4.
Let $H_0 \in \mathcal{L}(C[a,b], C[a,b])$ be such that $\|I - H_0 A\| \leq \delta$, with $\delta \in \left(0, \frac{\sqrt{5}-1}{2}\right)$. Then, the sequence $\{y_m\}$ defined by (18) belongs to $B\left(y_0, \frac{2 \|H_0\| \|f\|}{1 - \delta(1+\delta)}\right)$ and converges quadratically to $y^*$, the solution of Equation (1).
Proof.
If we consider the sequence $\{y_m\}$ given by (18), from (11), we have:
$$\|y_{m+k} - y_m\| \leq \left( \|H_{m+k} - H_{m+k-1}\| + \cdots + \|H_{m+1} - H_m\| \right) \|f\| < 2 \sum_{j=0}^{k-1} [\delta(1+\delta)]^{m+j} \|H_0\| \, \|f\|, \qquad (19)$$
and then, as before, we obtain:
$$\|y_{m+k} - y_m\| < 2\, \frac{[\delta(1+\delta)]^m - [\delta(1+\delta)]^{m+k}}{1 - \delta(1+\delta)} \|H_0\| \, \|f\|,$$
so that $\{y_m\}$ is a Cauchy sequence. Then, $\{y_m\}$ converges to $\tilde{y}$.
Moreover, taking $m = 0$ in (19), it follows that $\{y_m\} \subset B\left(y_0, \frac{2 \|H_0\| \|f\|}{1 - \delta(1+\delta)}\right)$.
Notice that $\tilde{y}$ is the solution of Equation (1) if we verify that $A \tilde{y}(x) = f(x)$. From Theorem 2, the sequence $\{H_m\}$ converges to $H^*$ with $H^* A = I$ and $\tilde{y}(x) = H^* f(x)$, so $y^*(x) = H^* A y^*(x) = H^* f(x) = \tilde{y}(x)$. Therefore, it follows that $\tilde{y}(x) = y^*(x)$ is the solution of Equation (1). $\square$
We would like to indicate that this result allows us to locate the solution of Equation (1) in the closed ball:
$$\overline{B\left(y_0, \frac{2 \|H_0\| \|f\|}{1 - \delta(1+\delta)}\right)}.$$
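On a quadrature grid, the scheme (18) becomes a plain matrix recursion with no inverses. The sketch below uses our own toy data (kernel $N(x,t) = xt$, $\lambda = 1/2$, $f(x) = x$, with exact solution $y(x) = 6x/5$) and the crude seed $H_0 = I$, for which $\|I - H_0 A\| = \|\lambda \mathcal{N}\| = 1/4 < (\sqrt{5}-1)/2$:

```python
import numpy as np

# Scheme (18) on a grid: A = I - lam*K with K_ij = N(x_i, x_j) * w_j,
# H_m = 2 H_{m-1} - H_{m-1} A H_{m-1},  y_m = H_m f.  No inverses are used.
# Toy data (ours): N(x,t) = x*t, lam = 1/2, f(x) = x, exact solution 6x/5.
n = 200
xs = np.linspace(0.0, 1.0, n + 1)
w = np.full(n + 1, 1.0 / n)
w[0] = w[-1] = 0.5 / n          # trapezoidal weights

lam = 0.5
K = np.outer(xs, xs) * w        # quadrature matrix for the kernel x*t
A = np.eye(n + 1) - lam * K
f = xs.copy()

H = np.eye(n + 1)               # seed: ||I - H0 A|| = ||lam K|| ~ 1/4 < (sqrt5-1)/2
for m in range(6):
    H = 2 * H - H @ A @ H
    y = H @ f                   # y_m(x) on the grid

err = np.max(np.abs(y - 1.2 * xs))
print(f"max error after 6 steps: {err:.2e}")
```

After a handful of steps, the iteration error is far below the quadrature error of the discretization, which is what remains in `err`.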

4. Examples

We illustrate the theoretical results obtained in the previous sections with some examples. Firstly, we examine a case with a separable kernel. In this case, the technique developed in Section 2.1 can be applied.
Example 1.
We consider the following linear Fredholm integral equation,
$$y(x) = 1 - \frac{3}{4} \cos(\pi x) - \frac{\pi}{16} \sin(\pi x) - \frac{\pi}{8} \int_0^1 \sin(\pi(x+t))\, y(t)\, dt. \qquad (20)$$
It is easy to check that $y(x) = 1 - \cos(\pi x)$ is the solution.
We can apply the procedure developed in Section 2.1 with:
$$f(x) = 1 - \frac{3}{4} \cos(\pi x) - \frac{\pi}{16} \sin(\pi x), \quad \lambda = -\frac{\pi}{8},$$
and the separable kernel:
$$N(x,t) = \sin(\pi(x+t)) = \sin(\pi x) \cos(\pi t) + \cos(\pi x) \sin(\pi t),$$
that is, $\alpha_1(x) = \sin(\pi x)$, $\alpha_2(x) = \cos(\pi x)$, $\beta_1(t) = \cos(\pi t)$, $\beta_2(t) = \sin(\pi t)$.
We have $b_1 = -3/8$, $b_2 = \frac{2}{\pi} - \frac{\pi}{32}$ and:
$$(a_{ij}) = \begin{pmatrix} 0 & 1/2 \\ 1/2 & 0 \end{pmatrix}.$$
Therefore, the solution of the linear system (13) is $A_1 = -1/2$, $A_2 = 2/\pi$, and by (14), we obtain the solution of the integral Equation (20):
$$y(x) = f(x) + \lambda \left( A_1 \alpha_1(x) + A_2 \alpha_2(x) \right) = 1 - \cos(\pi x).$$
In the following example, we establish a procedure for approximating the solution of an integral equation with a non-separable kernel. To this end, we approximate the given integral equation by another one with a separable kernel. Next, we find the exact solution of this last integral equation with the technique developed in Section 2.1. This exact solution provides a good approximation of the solution of the original integral equation with a non-separable kernel.
Example 2.
We consider the following Fredholm integral equation,
$$y(x) = \frac{4 \pi x - \sin(\pi x)}{2} + \frac{1}{2} \int_0^1 x \cos(\pi x t^2)\, y(t)\, dt, \qquad (21)$$
whose exact solution is $y(x) = 2 \pi x$.
The non-separable kernel $N(x,t) = x \cos(\pi x t^2)$ can be approached by the separable kernel $\tilde{N}(x,t)$ defined by:
$$\tilde{N}(x,t) = x - \frac{1}{2} \pi^2 t^4 x^3 + \frac{1}{4!} \pi^4 t^8 x^5 - \frac{1}{6!} \pi^6 t^{12} x^7. \qquad (22)$$
Consequently,
$$N(x,t) = \tilde{N}(x,t) + R(\theta, x, t), \quad \text{with} \quad R(\theta, x, t) = \frac{\sin(\pi x \theta^2)}{7!} x^8 t^{13}, \quad \theta \in (0, t).$$
Then, we consider the linear Fredholm integral equation:
$$y(x) = \frac{4 \pi x - \sin(\pi x)}{2} + \frac{1}{2} \int_0^1 \left( x - \frac{1}{2} \pi^2 t^4 x^3 + \frac{1}{4!} \pi^4 t^8 x^5 - \frac{1}{6!} \pi^6 t^{12} x^7 \right) y(t)\, dt. \qquad (23)$$
Obviously, the difference between the solutions of Equations (21) and (23) depends on the remainder $R(\theta, x, t)$. The more terms we consider in the expansion of $N(x,t)$, the smaller the remainder becomes, and therefore the closer the solutions of (21) and (23) are. Note that, moreover, Equation (23) has a separable kernel, and we can obtain its exact solution by following the procedure shown in Section 2.1.
We consider the real functions:
$$\alpha_1(x) = x, \quad \alpha_2(x) = x^3, \quad \alpha_3(x) = x^5, \quad \alpha_4(x) = x^7,$$
$$\beta_1(t) = 1, \quad \beta_2(t) = -\frac{1}{2} \pi^2 t^4, \quad \beta_3(t) = \frac{1}{4!} \pi^4 t^8, \quad \beta_4(t) = -\frac{1}{6!} \pi^6 t^{12}.$$
Now, we can obtain $y_0(x) = H_0 f(x)$, with $H_0 = (I - \lambda \tilde{\mathcal{N}})^{-1}$, as an approximated solution to our problem. We follow the steps indicated in Section 2.1 to get $y_0(x) = H_0 f(x)$. In this problem, we have $b_1 = 2.8233$, $b_2 = -4.9502$, $b_3 = 2.4843$, $b_4 = -0.5882$, and:
$$(a_{ij}) = \begin{pmatrix} 1/2 & 1/4 & 1/6 & 1/8 \\ -\pi^2/12 & -\pi^2/16 & -\pi^2/20 & -\pi^2/24 \\ \pi^4/240 & \pi^4/288 & \pi^4/336 & \pi^4/384 \\ -\pi^6/10{,}080 & -\pi^6/11{,}520 & -\pi^6/12{,}960 & -\pi^6/14{,}400 \end{pmatrix}.$$
The solution of the linear system (13) is $A_1 = 3.1379$, $A_2 = -5.1551$, $A_3 = 2.5421$, $A_4 = -0.5971$. Then, by (14), the solution of the integral Equation (23) is:
$$y_0(x) = f(x) + \lambda \left( A_1 \alpha_1(x) + A_2 \alpha_2(x) + A_3 \alpha_3(x) + A_4 \alpha_4(x) \right) = -0.298544 x^7 + 1.27105 x^5 - 2.57756 x^3 + 7.85213 x - 0.5 \sin(\pi x),$$
which provides a good approximation of the solution of (21), as we can see in Figure 1. Of course, if we increase the number of terms in the approximation given in (22), we can improve the approximation $y_0$ to the solution of Equation (21).
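A quick numerical check of this claim, using the coefficients of $y_0$ given above (with the signs of the alternating terms restored): we evaluate $y_0$ against the exact solution $2\pi x$ on a grid. The tolerance is our own choice; the largest deviation occurs near $x = 1$.

```python
import math

# Example 2: compare y0(x), obtained with the degree-7 separable truncation,
# with the exact solution y(x) = 2*pi*x on [0, 1].
def y0(x):
    return (-0.298544 * x**7 + 1.27105 * x**5 - 2.57756 * x**3
            + 7.85213 * x - 0.5 * math.sin(math.pi * x))

err = max(abs(y0(i / 100) - 2 * math.pi * i / 100) for i in range(101))
print(f"max |y0 - 2 pi x| on [0,1]: {err:.4f}")   # about 0.036, near x = 1
```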
Now, we consider a problem with a non-separable kernel. In this case, the technique developed in Section 2.2 and the algorithm given in (18) can be used.
Example 3.
We consider the following Fredholm integral equation,
$$y(x) = \frac{2 x^2 - 1}{3} + \frac{2}{3} e^x (x - 1) + \frac{1}{3} \int_0^1 x^3 e^{x t}\, y(t)\, dt. \qquad (24)$$
It is easy to check that $y(x) = x^2 - 1$ is the solution.
Obviously, in this case, $f(x) = \frac{2 x^2 - 1}{3} + \frac{2}{3} e^x (x - 1)$, $\lambda = \frac{1}{3}$, and the kernel $N(x,t) = x^3 e^{x t}$ is non-separable. Then, by Taylor's theorem, there exists $\theta \in (0, t)$ such that:
$$N(x,t) = x^3 e^{x t} = \sum_{i=0}^m \frac{x^{i+3} t^i}{i!} + R(\theta, x, t), \quad R(\theta, x, t) = \frac{e^{x \theta}}{(m+1)!} x^{m+4} t^{m+1}. \qquad (25)$$
Thus, if we consider $m = 3$, we have:
$$\tilde{N}(x,t) = \sum_{i=0}^3 \frac{x^{i+3} t^i}{i!} = \sum_{i=1}^4 \frac{x^{i+2} t^{i-1}}{(i-1)!},$$
and then:
$$N(x,t) = \tilde{N}(x,t) + R(\theta, x, t), \quad \text{with} \quad \tilde{N}(x,t) = \sum_{i=1}^4 \alpha_i(x) \beta_i(t) \quad \text{and} \quad R(\theta, x, t) = \frac{e^{x \theta}}{4!} x^7 t^4,$$
for the real functions:
$$\alpha_1(x) = x^3, \quad \alpha_2(x) = x^4, \quad \alpha_3(x) = x^5, \quad \alpha_4(x) = x^6,$$
$$\beta_1(t) = 1, \quad \beta_2(t) = t, \quad \beta_3(t) = t^2, \quad \beta_4(t) = t^3.$$
We consider as the initial function in the sequence (6), $H_0 = (I - \lambda \tilde{\mathcal{N}})^{-1}$, where:
$$[\tilde{\mathcal{N}}(y)](x) = \int_a^b \tilde{N}(x,t)\, y(t)\, dt, \quad x \in [a,b].$$
In our example,
$$\|I - H_0 A\| \leq |\lambda| \, \|H_0\| \, \|R\| \, (b - a) \leq \frac{1}{3} \frac{e}{4!} \|H_0\|.$$
Note that:
$$\|\lambda \tilde{\mathcal{N}}\| < \frac{1}{3} \left( 1 + 1 + \frac{1}{2} + \frac{1}{3!} \right) = \frac{8}{9} < 1,$$
so, by the Banach lemma on inverse operators, $H_0 = (I - \lambda \tilde{\mathcal{N}})^{-1}$ exists and $\|H_0\| \leq 9$. Consequently,
$$\|I - H_0 A\| \leq \frac{3 e}{4!} = \frac{e}{8} < \frac{\sqrt{5} - 1}{2}.$$
Now, we can use the algorithm (18) with this $H_0$ to approximate the solution of our problem. Actually, we can obtain the initial approximation $y_0(x) = H_0 f(x)$ by following the procedure shown in Section 2.1. In this case, we have $b_1 = (11 - 6e)/9$, $b_2 = 2(e - 3)/3$, $b_3 = (241 - 90e)/45$, $b_4 = (264e - 719)/36$ and:
$$(a_{ij}) = \begin{pmatrix} 1/4 & 1/5 & 1/6 & 1/7 \\ 1/5 & 1/6 & 1/7 & 1/8 \\ 1/6 & 1/7 & 1/8 & 1/9 \\ 1/7 & 1/8 & 1/9 & 1/10 \end{pmatrix}.$$
The solution of the linear system (13) is $A_1 = -0.6754$, $A_2 = -0.2575$, $A_3 = -0.1399$, $A_4 = -0.0892$. Then, by (14), the solution of the integral Equation (24) is:
$$y_0(x) = f(x) + \lambda \left( A_1 \alpha_1(x) + A_2 \alpha_2(x) + A_3 \alpha_3(x) + A_4 \alpha_4(x) \right) = \frac{2}{3} \left( -0.0446 x^6 - 0.07 x^5 - 0.1288 x^4 - 0.3377 x^3 + x^2 + e^x (x - 1) - 0.5 \right).$$
Next, to calculate the first approximation $y_1(x)$, we use (18) to get:
$$y_1(x) = 2 y_0(x) - H_0 y_0(x) + \lambda \mathcal{N} y_0(x).$$
We can compute $H_0 y_0(x)$ by the same technique described for separable kernels, just replacing $f(x)$ with $y_0(x)$, to obtain:
$$H_0 y_0(x) = \frac{1}{3} \left( -0.2432 x^6 - 0.3545 x^5 - 0.6031 x^4 - 1.4584 x^3 + 2 x^2 + 2 e^x (x - 1) - 1 \right).$$
As $N$ is a non-separable kernel, we can approximate $\mathcal{N} y_0(x)$ by $\tilde{\mathcal{N}} y_0(x)$ and follow the same procedure as in Section 2.1. In this way, we have:
$$\tilde{\mathcal{N}} y_0(x) = x^3 \left( -0.0584 x^3 - 0.0675 x^2 - 0.0799 x - 0.5011 e^{x/2} - 0.009 e^x - 0.2648 \right).$$
Consequently, we obtain the following approximation for $y_1(x)$:
$$y_1(x) = 2 y_0(x) - H_0 y_0(x) + \lambda \tilde{\mathcal{N}} y_0(x) = \frac{1}{3} \left( 0.0063 x^6 + 0.0071 x^5 + 0.0081 x^4 - 0.5011 e^{x/2} x^3 - 0.1573 x^3 + e^x \left( -0.009 x^3 + 2x - 2 \right) + 2 x^2 - 1 \right).$$
As we can see in Figure 2, $y_1(x)$ improves considerably on the initial approximation $y_0(x)$. On the left side of Figure 2, we can appreciate that $y_1(x)$ practically overlaps the exact solution $x^2 - 1$. On the right side, we plot the errors committed by $y_0(x)$ and $y_1(x)$ in approaching the exact solution, that is, the error functions:
$$E_i(x) = |y_i(x) - x^2 + 1|, \qquad (26)$$
where $y_i(x)$, $i = 0, 1$, are the functions defined above by following our procedure. If we consider more terms in (25), we can obtain a better approximation of the exact solution of (24).
Finally, we compare our procedure with the classical Picard iterative method defined in (3):
$$p_{i+1}(x) = f(x) + \frac{1}{3} \int_0^1 x^3 e^{x t}\, p_i(t)\, dt, \quad p_0(x) = f(x), \quad i \geq 0. \qquad (27)$$
On the left side of Figure 3, we plot the corresponding errors committed by Picard's method. In this case, the error functions are:
$$P_i(x) = |p_i(x) - x^2 + 1|, \quad i = 1, 2, 3.$$
On the right side of this figure, we can appreciate that the error committed by only one iteration of our method is smaller than the error obtained with three Picard iterations and is comparable to the error committed by four Picard iterations.
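This comparison is easy to reproduce on a grid. With the crude seed $H_0 = I$ (our choice here, instead of the seed (17) used in the paper), one can check that $m$ steps of (18) reproduce exactly $2^m - 1$ Picard iterations, since both errors are driven by powers of $\lambda K$; the paper's seed starts even closer. A sketch with the data of Example 3:

```python
import numpy as np

# Example 3 on a grid: N(x,t) = x^3 exp(x t), lam = 1/3, exact y(x) = x^2 - 1.
# We compare Picard iterates p_k with the Newton-type iterates y_m of (18),
# both started from the crude seed H0 = I (our choice, not the paper's (17)).
n = 200
xs = np.linspace(0.0, 1.0, n + 1)
w = np.full(n + 1, 1.0 / n)
w[0] = w[-1] = 0.5 / n

lam = 1.0 / 3.0
K = (xs[:, None] ** 3) * np.exp(np.outer(xs, xs)) * w   # K_ij = x_i^3 e^{x_i x_j} w_j
A = np.eye(n + 1) - lam * K
f = (2 * xs**2 - 1) / 3 + (2.0 / 3.0) * np.exp(xs) * (xs - 1)
exact = xs**2 - 1

p = f.copy()                       # Picard: p_{k+1} = f + lam K p_k
for _ in range(4):
    p = f + lam * (K @ p)

H = np.eye(n + 1)                  # Newton-type: three steps of (18)
for _ in range(3):
    H = 2 * H - H @ A @ H
y3 = H @ f

err_picard = np.max(np.abs(p - exact))
err_newton = np.max(np.abs(y3 - exact))
print(err_picard, err_newton)      # the Newton-type error is far smaller
```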

5. Conclusions

In this work, we consider the numerical solution of Fredholm integral equations of the second kind. We transform this problem into another one where the key is to approximate the inverse of a given operator, which then allows us to solve the integral equation. In this way, we construct an iterative procedure based on an important characteristic of Newton's method: it does not use inverses when it is applied to the nonlinear problem of calculating the inverse of an operator (see [14]). With this idea, we obtain a Picard-type iterative method, with quadratic convergence, that does not use either derivatives or inverse operators. This iterative method is more efficient and precise than the classical Picard iteration that is usually used for solving this kind of problem, at least when a discretization procedure is not used.
We also think that our method could be a good way of obtaining starting points, as first approximations of the solution of Fredholm integral equations. In this sense, our method can be seen as a good predictor method: in a first stage, it provides a starting point, which can then be refined by another, corrector, iterative method.

Author Contributions

Investigation, J.M.G. and M.A.H.-V. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Spanish Ministerio de Ciencia, Innovación y Universidades, Grant Number PGC2018-095896-B-C21.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data is contained within the article.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Ganesh, M.; Joshi, M.C. Numerical solvability of Hammerstein integral equations of mixed type. IMA J. Numer. Anal. 1991, 11, 21–31.
2. Hernández-Verón, M.A.; Martínez, E. On nonlinear Fredholm integral equations with non-differentiable Nemystkii operator. Math. Methods Appl. Sci. 2020, 43, 7961–7976.
3. Rashidinia, J.; Zarebnia, M. New approach for numerical solution of Hammerstein integral equations. Appl. Math. Comput. 2007, 185, 147–154.
4. Argyros, I.K. On a class of nonlinear integral equations arising in neutron transport. Aequ. Math. 1988, 36, 99–111.
5. Bruns, D.D.; Bailey, J.E. Nonlinear feedback control for operating a nonisothermal CSTR near an unstable steady state. Chem. Eng. Sci. 1977, 32, 257–264.
6. Chandrasekhar, S. Radiative Transfer; Dover: New York, NY, USA, 1960.
7. Argyros, I.K.; Regmi, S. Undergraduate Research at Cameron University on Iterative Procedures in Banach and Other Spaces; Nova Science Publisher: New York, NY, USA, 2019.
8. Porter, D.; Stirling, D.S.G. Integral Equations; Cambridge University Press: Cambridge, UK, 1990.
9. Davis, H.T. Introduction to Nonlinear Differential and Integral Equations; Dover: New York, NY, USA, 1962.
10. Berinde, V. Iterative Approximation of Fixed Points; Springer: New York, NY, USA, 2005.
11. Adomian, G. Solving Frontier Problems of Physics, The Decomposition Method; Kluwer: Boston, MA, USA, 1994.
12. Wazwaz, A.M. A reliable modification of the Adomian decomposition method. Appl. Math. Comput. 1999, 102, 77–86.
13. He, J.H. Some asymptotic methods for strongly nonlinear equations. Intern. J. Mod. Phys. B 2006, 20, 1141–1199.
14. Amat, S.; Ezquerro, J.A.; Hernández-Verón, M.A. Approximation of inverse operators by a new family of high-order iterative methods. Numer. Linear Algebra Appl. 2014, 21, 629–644.
15. Ezquerro, J.A.; Hernández-Verón, M.A. A modification of the convergence conditions for Picard's iteration. Comp. Appl. Math. 2004, 23, 55–65.
16. Rheinboldt, W.C. Methods for Solving Systems of Nonlinear Equations; SIAM: Philadelphia, PA, USA, 1974.
17. Kantorovich, L.V.; Akilov, G.P. Functional Analysis; Pergamon Press: Oxford, UK, 1982.
Figure 1. On the left, the graphs of the first approximation to the solution of the integral Equation (21) and the exact solution $y(x) = 2 \pi x$. On the right, the graph of the corresponding error.
Figure 2. On the left, the graphs of the first two approximations to the solution of the integral Equation (24) and the exact solution $y(x) = x^2 - 1$. On the right, the graphs of the corresponding errors $E_i(x)$ defined in (26).
Figure 3. On the left, errors committed by the first three iterates of Picard’s method (27). On the right, comparison among E 1 ( x ) , P 3 ( x ) , and P 4 ( x ) .
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
