Article

Constant Sign Solutions to Linear Fractional Integral Problems and Their Applications to the Monotone Method

by Daniel Cao Labora and Rosana Rodríguez-López *,†

Facultade de Matemáticas, Universidade de Santiago de Compostela, 15782 Santiago de Compostela, Spain
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Mathematics 2020, 8(2), 156; https://doi.org/10.3390/math8020156
Submission received: 23 December 2019 / Revised: 15 January 2020 / Accepted: 18 January 2020 / Published: 22 January 2020

Abstract: This manuscript provides some results concerning the sign of solutions for linear fractional integral equations with constant coefficients. This information is later used to prove the existence of solutions to some nonlinear problems, together with underestimates and overestimates. These results are obtained after applying suitable modifications to the classical process of monotone iterative techniques. Finally, we provide an example where we prove the existence of solutions, and we compute some estimates.

1. Introduction

The theory of equations of arbitrary order has been proposed as an adequate framework to deal with the heterogeneity and memory effects present in physical phenomena [1,2]. The study of fractional integral equations is relevant in itself, and also for the study of the properties of the solutions to fractional differential equations. Linear problems for fractional equations can be addressed by passing to integer order equations [3,4]. On the other hand, for nonlinear problems, one interesting approach is the development of iterative techniques based on the use of upper and lower solutions [5].
The main purpose of this manuscript is to provide some results concerning estimates of solutions to nonlinear fractional integral problems. The paper is structured as follows.
In the second section, we introduce some basic concepts involving fractional calculus, together with some fundamental and useful results. In the third section, we describe several theorems providing conditions ensuring that certain linear fractional integral equations with constant coefficients have nonnegative solutions. In the fourth section, we use the previous results to adapt the classical idea of the monotone iterative technique [5,6] to this setting. In the last section, we give an example of application to a specific nonlinear equation.

2. Preliminaries

In this section, we introduce some definitions and notation that will be used in the rest of the document. These concepts are, essentially, the fundamental notions of fractional calculus, together with some theorems involving them. These introductory results focus on conditions ensuring the existence and uniqueness of a solution to a fractional integral or differential equation.
We will assume that the reader is familiar with the basic notions of Banach spaces, together with their classical notation, for instance, the space $L^1[0,b]$. Readers interested in Banach spaces can consult [7,8].
In the rest of the document, $[0,b] \subset \mathbb{R}$ will denote a compact interval. In particular, any equation relating two expressions depending on t will be understood to hold if, and only if, it holds for every $t \in [0,b]$ except, at most, on a set of measure zero. It is important to notice that, when both expressions involved in the equation are continuous, if the equality holds for every $t \in [0,b]$ except on a set of measure zero, then it has to hold for every $t \in [0,b]$.
Thus, when we talk about nonnegative functions in $L^1[0,b]$, we have to understand that we are describing an element of the quotient space that admits a nonnegative representative. Analogously, a nondecreasing function in $L^1[0,b]$ will describe an element of the quotient space that admits a nondecreasing representative. In this framework, we recall the Dominated Convergence Theorem for the Lebesgue integral [9].
Theorem 1
(Lebesgue Dominated Convergence). Consider a sequence of measurable functions $f_n$ on $[0,b]$ converging pointwise to a function f. If there is an integrable function g such that $|f_n| \le g$, then f is integrable and, moreover, f is the limit of the sequence $(f_n)_{n\in\mathbb{N}}$ with respect to the $L^1[0,b]$ norm.
On the other hand, we shall introduce some basic concepts and notation concerning fractional calculus. Further information about fractional calculus, additional to that provided here, can be consulted in [1,2]. We begin by reproducing the definition of the Riemann–Liouville fractional integral, which is a natural generalization of the Cauchy formula for repeated integration.
Definition 1.
We define the Riemann–Liouville fractional integral of a function $f \in L^1(a,b)$ with initial point $a \in \mathbb{R}$ and order $\delta \in \mathbb{R}^+$ as
$$ I_{a^+}^{\delta} f(t) := \frac{1}{\Gamma(\delta)} \int_a^t (t-s)^{\delta-1} f(s)\, ds, $$
where the function $\Gamma(\delta)$, for $\delta > 0$, is defined as
$$ \Gamma(\delta) = \int_0^{\infty} x^{\delta-1} e^{-x}\, dx. $$
Moreover, the previous definition is extended to $\delta = 0$ as $I_{a^+}^{0} := \mathrm{Id}$.
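For readers who wish to experiment numerically with the objects defined above, the following minimal sketch (ours, not part of the original paper; the helper name rl_integral and the quadrature rule are illustrative assumptions) approximates the Riemann–Liouville integral and reproduces the identity $I_{0^+}^{\delta} 1 = t^{\delta}/\Gamma(1+\delta)$, which is used repeatedly later on.

import numpy as np
from math import gamma

def rl_integral(f, delta, t, n=4000):
    # Midpoint-rule approximation of the Riemann-Liouville integral I_{0^+}^delta f(t).
    # A rough numerical sketch only; not taken from the paper.
    if delta == 0.0:
        return float(f(np.array([t]))[0])   # the definition extends I^0 as the identity
    s = (np.arange(n) + 0.5) * (t / n)      # midpoints of the n subintervals of [0, t]
    w = (t - s) ** (delta - 1.0)            # kernel (t - s)^(delta - 1), integrable at s = t
    return (t / n) * np.sum(w * f(s)) / gamma(delta)

# Sanity check against the identity I^delta 1 = t^delta / Gamma(1 + delta):
one = lambda s: np.ones_like(s)
print(rl_integral(one, 1.5, 0.8), 0.8**1.5 / gamma(2.5))   # both approximately 0.538

The midpoint rule is crude near the integrable singularity at s = t, but it is enough to illustrate the behaviour of the operator on simple examples.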
Remark 1.
The main results involving the Riemann–Liouville fractional integral are that it is a continuous linear operator from $L^1[0,b]$ to itself, that it forms a continuous semigroup with respect to δ, and that the law of additivity of orders $I_{a^+}^{\delta_2} I_{a^+}^{\delta_1} = I_{a^+}^{\delta_1 + \delta_2}$ holds true for any $\delta_1, \delta_2 \in \mathbb{R}^+ \cup \{0\}$.
We also summarize the results in [3,4] about the existence and uniqueness of solutions to linear fractional integral problems, in the following theorem.
Theorem 2.
If $g \in L^1[0,b]$, $A_1, \dots, A_n \in \mathbb{R}$ and $\delta_1 > \dots > \delta_n > 0$, the fractional integral equation
$$ (A_1 I_{0^+}^{\delta_1} + \dots + A_n I_{0^+}^{\delta_n} + \mathrm{Id})\, y(t) = g(t) \qquad (1) $$
has exactly one solution in $L^1[0,b]$.

3. Inequalities

For the sake of simplicity, we rewrite (1) as
$$ (T + \mathrm{Id})\, y(t) = (T^+ + T^- + \mathrm{Id})\, y(t) = \sigma_0(t), \qquad (2) $$
where $T := A_1 I_{0^+}^{\delta_1} + \dots + A_n I_{0^+}^{\delta_n}$, $T^+$ is the part of the previous sum involving positive coefficients, $T^-$ is the part of the sum that involves negative coefficients, and we have denoted the source term by $\sigma_0(t)$. The question is how to impose conditions on T and $\sigma_0(t)$ ensuring that $y(t)$, which is the unique solution to the equation, is nonnegative on $[0,b]$. In the first subsection, we will state and prove a theorem for the case of negative coefficients, $T^+ = 0$. In the second subsection, we will develop a theorem for the case of positive coefficients, $T^- = 0$. Finally, a combined argument will allow us to give a result for the general case when $T^+ \ne 0$ and $T^- \ne 0$ simultaneously.

3.1. The Particular Case $T^+ = 0$

As we said before, we first want to establish a theorem that applies to the case $T = T^-$.
Theorem 3.
If $A_1, \dots, A_n < 0$ and $\sigma_0(t)$, $\eta_0(t)$ are functions in $L^1[0,b]$ such that $\sigma_0(t) \ge \eta_0(t)$ at almost every point, then the unique solution of Equation (2) associated with $\sigma_0(t)$ is greater than or equal, at almost every point, to the one associated with $\eta_0(t)$.
We will deduce the previous theorem after developing some partial results. We begin with the following lemma, which gives a result of nonnegativity, provided that the source term is continuous and nonnegative.
Lemma 1.
If $A_1, \dots, A_n < 0$ and $\sigma_0(t)$ is a nonnegative continuous function on $[0,b]$ with $\sigma_0(0) > 0$, then the unique solution y of Equation (2) is nonnegative.
Proof of Lemma 1.
First, we will show that y is continuous. It can be shown inductively that, for all $n \in \mathbb{Z}^+$,
$$ (T + \mathrm{Id})\left( y(t) - \sigma_0 + T\sigma_0 - \dots + (-1)^n T^{n-1}\sigma_0 \right) = (-1)^n T^n \sigma_0. $$
The right hand side (RHS) lies in the space $I_{0^+}^1 L^1[0,b]$, provided that n is big enough. Hence, by Theorem 2, the function $y(t) - \sigma_0 + T\sigma_0 - \dots + (-1)^n T^{n-1}\sigma_0$ belongs to $I_{0^+}^1 L^1[0,b]$ and is, therefore, continuous. Since $-\sigma_0 + T\sigma_0 - \dots + (-1)^n T^{n-1}\sigma_0$ is continuous, $y(t)$ is continuous on $[0,b]$.
Furthermore, a direct evaluation at $t = 0$ shows that $y(0) = \sigma_0(0) > 0$. Thus, it will be sufficient to show that y cannot have a zero in $[0,b]$.
If y had a zero, the infimum property over the set of zeros of y, together with the continuity of y, shows that we can choose the first zero $t_0 \in [0,b]$. However, the evaluation of (2) at $t_0$ gives a contradiction, as
$$ (T + \mathrm{Id})\, y(t_0) = T y(t_0) = \sigma_0(t_0) \ge 0, $$
whereas T is a negative operator: since y is nonnegative on $[0, t_0]$ and strictly positive on some interval $[0, \delta]$, we have $(T + \mathrm{Id})\, y(t_0) = T y(t_0) < 0$. □
Corollary 1.
If $A_1, \dots, A_n < 0$, and $\sigma_0(t)$, $\eta_0(t)$ are two nonnegative continuous functions on $[0,b]$ with $\sigma_0(t) \ge \eta_0(t)$ and $\sigma_0(0) > \eta_0(0)$, then the unique solution of Equation (2) associated with the source term $\sigma_0(t)$ is greater than or equal, at any point, to the one associated with $\eta_0(t)$.
Remark 2.
Lemma 1 is still valid if $\sigma_0(0) \ge 0$. If we replace the source term $\sigma_0(t)$ in (2) by the perturbation $\sigma_{0,\varepsilon}(t) = \sigma_0(t) + \varepsilon$, where $\varepsilon > 0$, we get that the associated solution $y_\varepsilon(t)$ is nonnegative. Moreover, $(T + \mathrm{Id})^{-1}$ is continuous (remember that $T + \mathrm{Id}$ was a continuous linear bijection between Banach spaces), so $y_\varepsilon(t)$ converges to y in the $L^1[0,b]$ norm as $\varepsilon \to 0$. Consequently, $y(t)$ is essentially nonnegative and, since it is continuous, $y(t)$ is nonnegative. It is also straightforward to check that Corollary 1 is still valid if $\sigma_0(0) \ge \eta_0(0)$.
Remark 3.
Lemma 1 is still valid if $\sigma_0$ is not continuous, but only in $L^1[0,b]$. If we replace the source term $\sigma_0(t)$ in (2) by a nonnegative continuous perturbation $\sigma_{0,\varepsilon}(t)$ such that $\|\sigma_{0,\varepsilon} - \sigma_0\| < \varepsilon$, the associated solution $y_\varepsilon(t)$ will be a nonnegative continuous function. As before, $y_\varepsilon(t)$ will converge to the solution $y(t)$ in the $L^1[0,b]$ norm. Consequently, we deduce that $y(t)$ is essentially nonnegative. It is also straightforward to check, from the previous arguments, that Corollary 1 can be adapted to the case where $\sigma_0, \eta_0$ are in $L^1[0,b]$, implying Theorem 3.

3.2. The Particular Case $T^- = 0$

Now, we describe a theorem that applies to the case where the operator T is positive, i.e., $T^- = 0$. The main idea is to describe the solution to Equation (2) as a functional series. The convergence of this series will be guaranteed by the first hypothesis in Theorem 4. Moreover, we will see that the solution is nonnegative by checking that the sum of the term at position $2k-1$ with the term at position $2k$ is nonnegative for any $k \in \mathbb{Z}^+$. This will be ensured by the second hypothesis in Theorem 4.
Theorem 4.
If $A_1, \dots, A_n > 0$ and $\sigma_0(t)$ is an essentially nonnegative integrable function on $[0,b]$, then the unique solution to Equation (2) is nonnegative, provided that the following conditions hold:
  • $\|T\| < 1$,
  • $\sigma_0(t) \ge \displaystyle\int_0^t \left( \frac{A_1}{\Gamma(\delta_1)}\,(t-s)^{\delta_1 - 1} + \dots + \frac{A_n}{\Gamma(\delta_n)}\,(t-s)^{\delta_n - 1} \right) \sigma_0(s)\, ds$.
Proof of Theorem 4.
From (2), it is possible to deduce the equation
$$ (T + \mathrm{Id})\,(y(t) - \sigma_0(t)) = \sigma_1(t) := -T(\sigma_0(t)). $$
One can proceed inductively and, in fact, the following identity will hold for any $m \in \mathbb{N}$:
$$ (T + \mathrm{Id})\,(y(t) - \sigma_0(t) - \dots - \sigma_m(t)) = \sigma_{m+1}(t), $$
where $\sigma_j = (-1)^j T^j \sigma_0$. It is obvious that $\sigma_m(t)$ tends to 0 in $L^1[0,b]$ when m goes to infinity, because $\|T\| < 1$. In fact, if $\sigma_0$ is essentially bounded, then the sequence converges uniformly. Provided that
$$ S(t) := \sum_{j=0}^{\infty} \sigma_j(t) $$
converges, the continuity of T ensures that $(T + \mathrm{Id})\,(y - S)(t) = 0$, which, due to Theorem 2, has the trivial function as its unique solution. So, in summary, provided that $S(t)$ exists, we have that $y(t) = S(t)$.
As we said before, the condition $\|T\| < 1$ is enough to ensure that $S(t)$ converges. It is enough to check that the Cauchy condition holds for the sequence of partial sums, as $L^1[0,b]$ is complete. To check the Cauchy condition, we have to see that $\|\sigma_n + \dots + \sigma_m\|$ can be arbitrarily small if $N \in \mathbb{N}$ is big enough and $m > n \ge N$. We use the trivial bound
$$ \|\sigma_n + \dots + \sigma_m\| \le (\|T\|^n + \dots + \|T\|^m) \cdot \|\sigma_0\|. $$
However, the last quantity can be bounded from above by
$$ \sum_{j=N}^{\infty} \|T\|^j \cdot \|\sigma_0\| = \frac{\|T\|^N}{1 - \|T\|}\, \|\sigma_0\|, $$
which can be arbitrarily small if N is big enough. In conclusion, $S(t)$ converges and it remains to prove that it is nonnegative. To complete this task, we will use the remaining hypothesis.
It is straightforward to see that $\sigma_j$ is nonnegative when j is even and that $\sigma_j$ is non-positive when j is odd. Thus, a good idea to prove that S is nonnegative is to show that $\sigma_{2j} + \sigma_{2j+1} \ge 0$ for any $j \ge 0$. However, $\sigma_{2j} + \sigma_{2j+1} = T^{2j}(\sigma_0 + \sigma_1)$ and, since T is a positive operator, it is enough to show that $\sigma_0(t) + \sigma_1(t) \ge 0$. This is immediate, as it is exactly the last condition in Theorem 4. □
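The construction in this proof can be mirrored numerically. The sketch below is our own illustration (the helper rl_matrix, the grid, and the sample coefficients are assumptions, not taken from the paper): it discretizes T as a lower-triangular matrix, sums the truncated series $\sum_j (-1)^j T^j \sigma_0$, and compares the result with a direct solve of $(T + \mathrm{Id})\, y = \sigma_0$, whose unique solvability is guaranteed by Theorem 2.

import numpy as np
from math import gamma

def rl_matrix(delta, b, n):
    # Lower-triangular matrix approximating I_{0^+}^delta on the grid t_j = j*h, j = 1..n,
    # with a piecewise-constant (product rectangle) rule; illustrative only.
    h = b / n
    t = h * np.arange(1, n + 1)
    M = np.zeros((n, n))
    for i in range(n):
        up = (t[i] - t[:i + 1] + h) ** delta     # (t_i - t_{j-1})^delta
        lo = (t[i] - t[:i + 1]) ** delta         # (t_i - t_j)^delta
        M[i, :i + 1] = (up - lo) / gamma(delta + 1)
    return M

b, n = 1.0, 300
A, deltas = [0.4, 0.3], [1.5, 0.75]              # positive coefficients with ||T|| < 1
T = sum(a * rl_matrix(d, b, n) for a, d in zip(A, deltas))
sigma0 = np.ones(n)                              # nonnegative, nondecreasing source term

# Truncated series y ~ sigma_0 - T sigma_0 + T^2 sigma_0 - ...
y_series, term = np.zeros(n), sigma0.copy()
for _ in range(60):
    y_series += term
    term = -T @ term

# Direct solve of (T + Id) y = sigma_0 for comparison.
y_direct = np.linalg.solve(T + np.eye(n), sigma0)
print(np.max(np.abs(y_series - y_direct)), bool(y_direct.min() >= 0))

With these illustrative values, the truncated series and the direct solution agree to high accuracy and the computed solution is nonnegative, as Theorem 4 predicts.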
Before facing the general case, where $T^+, T^- \ne 0$, we will discuss the two hypotheses in Theorem 4. On the one hand, we rewrite the first hypothesis in computable terms. On the other hand, we give a more restrictive, but much easier to check, version of the second hypothesis.
Remark 4
(About the first condition in Theorem 4). It is easy to compute explicitly the value of $\|T\|$, which is $\sum_{i=1}^n A_i \cdot \frac{b^{\delta_i}}{\Gamma(1+\delta_i)}$. First, we show that this quantity is an upper bound for $\|T\|$, since
$$ \|Tf\| \le \int_0^b \int_0^t \sum_{i=1}^n \frac{A_i\,(t-s)^{\delta_i - 1}}{\Gamma(\delta_i)}\, |f(s)|\, ds\, dt = \int_0^b \int_s^b \sum_{i=1}^n \frac{A_i\,(t-s)^{\delta_i - 1}}{\Gamma(\delta_i)}\, |f(s)|\, dt\, ds. $$
However, computing the RHS, together with a direct application of Hölder's inequality, gives
$$ \int_0^b |f(s)| \sum_{i=1}^n \frac{A_i\,(b-s)^{\delta_i}}{\Gamma(1+\delta_i)}\, ds \le \sum_{i=1}^n \frac{A_i\, b^{\delta_i}}{\Gamma(1+\delta_i)} \cdot \|f\|. $$
To conclude, we show that the previous upper bound is optimal. Given $\varepsilon \in (0,b)$, we define the function $\chi_\varepsilon$ that takes the value $\frac{1}{\varepsilon}$ on $[0,\varepsilon]$ and 0 on $(\varepsilon, b]$. It is obvious that $\|\chi_\varepsilon\| = 1$ and that
$$ \|T\chi_\varepsilon\| = \int_0^b \int_0^{t} \sum_{i=1}^n \frac{A_i\,(t-s)^{\delta_i - 1}}{\Gamma(\delta_i)}\, \chi_\varepsilon(s)\, ds\, dt \ \ge\ \int_\varepsilon^b \sum_{i=1}^n \frac{A_i\,(t-\varepsilon)^{\delta_i - 1}}{\Gamma(\delta_i)}\, dt. $$
However, it is trivial to compute the previous lower bound explicitly as
$$ \sum_{i=1}^n \frac{A_i\,(b-\varepsilon)^{\delta_i}}{\Gamma(1+\delta_i)}. $$
Finally, if ε goes to zero, the previous lower bound for $\|T\|$ goes to the upper bound $\sum_{i=1}^n A_i \cdot \frac{b^{\delta_i}}{\Gamma(1+\delta_i)}$, and we conclude that the first hypothesis in Theorem 4 can be rewritten as
$$ \|T\| = \sum_{i=1}^n \frac{A_i\, b^{\delta_i}}{\Gamma(1+\delta_i)} < 1. $$
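In computational terms, this closed formula turns the contraction condition into a one-line check. A minimal sketch of ours (the helper name T_norm and the sample values are illustrative assumptions, not from the paper):

from math import gamma

def T_norm(A, deltas, b):
    # ||T|| = sum_i A_i * b**delta_i / Gamma(1 + delta_i), as computed in Remark 4.
    return sum(a * b**d / gamma(1 + d) for a, d in zip(A, deltas))

# First hypothesis of Theorem 4 for some illustrative coefficients:
print(T_norm([0.4, 0.3], [1.5, 0.75], 1.0))      # about 0.63, so ||T|| < 1 holds here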
Remark 5
(About the second condition in Theorem 4). Due to Hölder's inequality, the second hypothesis is an immediate consequence of
$$ \sigma_0(t) \ge \sum_{i=1}^n \frac{A_i\, t^{\delta_i}}{\Gamma(1+\delta_i)} \cdot \operatorname{ess\,sup}_{s \in [0,t]} \sigma_0(s), $$
which is easier to check. Moreover, if $\sigma_0$ is a nondecreasing function, this new condition just means that
$$ \sum_{i=1}^n \frac{A_i\, t^{\delta_i}}{\Gamma(1+\delta_i)} \le 1, $$
which was already included in the first hypothesis. We observe that, for this simplification, we have also used that $\sigma_0$ is nonnegative. Hence, if $\sigma_0$ is a nondecreasing function, the second hypothesis in Theorem 4 can be removed.

3.3. The General Case Where $T^+, T^- \ne 0$

We will provide the proof of the following result, which allows T to have positive and negative coefficients simultaneously. The theorem obtained in this section will not be used in the rest of the document, although it is interesting in itself.
Theorem 5.
If $\sigma_0(t)$ is an essentially nonnegative integrable function on $[0,b]$, then the unique solution to (2) is nonnegative, provided that the following conditions hold:
  • $\|T^+\| < 1$ (first condition in Theorem 4).
  • $\sigma_0(t) \ge T^+ \sigma_0(t)$ (second condition in Theorem 4).
  • $\|T^-\| < 1$ and the least integral order $\delta_i$ in $T^-$ is greater than or equal to 1.
We now provide the proof of Theorem 5. The idea is similar to the one used in the proof of Theorem 4, but with some additional considerations.
Proof of Theorem 5.
From Equation (2), it is straightforward to deduce
$$ (T^+ + T^- + \mathrm{Id})\,(y(t) - r_0(t)) = \mu_1(t) := -T^-(r_0(t)), $$
where $r_0(t)$ is the unique solution to the equation $(T^+ + \mathrm{Id})\, y(t) = \sigma_0(t)$, and where we have renamed $\mu_0(t) := \sigma_0(t)$. We can ensure that $r_0(t)$ is nonnegative because of Theorem 4 and the first two hypotheses, so $\mu_1(t)$ will also be nonnegative and, furthermore, nondecreasing, since the least order in $T^-$ is greater than or equal to 1.
One can proceed inductively and, in fact, the following identity will hold for any $m \in \mathbb{N}$:
$$ (T^+ + T^- + \mathrm{Id})\,(y(t) - r_0(t) - \dots - r_m(t)) = \mu_{m+1}(t), $$
where $r_j(t)$ is the unique solution to the problem
$$ (T^+ + \mathrm{Id})\, r_j(t) = \mu_j(t), $$
which means $r_j(t) = (T^+ + \mathrm{Id})^{-1} \mu_j(t)$, and where $\mu_{j+1}(t) = -T^-(r_j(t))$. This inductive construction also shows that every $\mu_j(t)$ and $r_j(t)$ is nonnegative. This claim can be immediately proven by induction, taking into account Remark 5 and that the least integral order in $T^-$ is greater than or equal to one.
Now, we should ensure that $\mu_m(t)$ tends to 0 in $L^1[0,b]$ when m goes to infinity, as well as the convergence of
$$ R(t) := \sum_{j=0}^{\infty} r_j(t). $$
As $r_m$ is nonnegative for any $m \in \mathbb{N}$, it is trivial to obtain the following bounds,
$$ \|r_m\| \le \|(T^+ + \mathrm{Id})\, r_m\| = \|\mu_m\| \le \|T^-\|^m \cdot \|r_0\|, \qquad \|\mu_{m+1}\| \le \|T^-\| \cdot \|r_m\| \le \|T^-\|^{m+1} \cdot \|r_0\|. $$
From these bounds, since $\|T^-\| < 1$, one can emulate the proof of Theorem 4 and conclude that $R(t)$ converges, that it is nonnegative (it is a sum of nonnegative addends), and that it is the unique solution to (2). □

4. Nonlinear Problem

In this section, we consider the nonlinear fractional integral equation
$$ x(t) = f(t, I_{0^+}^{\delta_1} x(t), \dots, I_{0^+}^{\delta_n} x(t)). \qquad (4) $$
Next, we develop a suitable version of the method of upper and lower solutions for problem (4), via the results obtained in Section 3.2. It is obvious that the solutions to (4) coincide with those of the problem
$$ (A_1 I_{0^+}^{\delta_1} + \dots + A_n I_{0^+}^{\delta_n} + \mathrm{Id})\, x(t) = (A_1 I_{0^+}^{\delta_1} + \dots + A_n I_{0^+}^{\delta_n})\, x(t) + f(t, I_{0^+}^{\delta_1} x(t), \dots, I_{0^+}^{\delta_n} x(t)), $$
for any fractional operator $A_1 I_{0^+}^{\delta_1} + \dots + A_n I_{0^+}^{\delta_n}$.
Definition 2.
A function $\alpha \in L^1[0,b]$ is said to be a lower solution for problem (4) if
$$ f(t, I_{0^+}^{\delta_1} \alpha(t), \dots, I_{0^+}^{\delta_n} \alpha(t)) - \alpha(t) $$
admits a nonnegative and nondecreasing representative on $[0,b]$.
Similarly, we say that a function $\beta \in L^1[0,b]$ is an upper solution for problem (4) if
$$ \beta(t) - f(t, I_{0^+}^{\delta_1} \beta(t), \dots, I_{0^+}^{\delta_n} \beta(t)) $$
admits a nonnegative and nondecreasing representative on $[0,b]$.
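On a grid, Definition 2 can be verified mechanically: approximate the fractional integrals of the candidate function, evaluate the corresponding expression, and check that it is nonnegative and nondecreasing. The sketch below is our own illustration (the helpers rl_on_grid and is_lower_solution, as well as the sample nonlinearity, are assumptions, not part of the paper):

import numpy as np
from math import gamma

def rl_on_grid(values, delta, b):
    # Approximate I_{0^+}^delta of a grid function (values at t_j = j*h, j = 1..n).
    n = len(values)
    h = b / n
    t = h * np.arange(1, n + 1)
    out = np.empty(n)
    for i in range(n):
        w = ((t[i] - t[:i + 1] + h) ** delta - (t[i] - t[:i + 1]) ** delta) / gamma(delta + 1)
        out[i] = w @ values[:i + 1]
    return out

def is_lower_solution(f, alpha, deltas, b, tol=1e-9):
    # Check, on the grid, that f(t, I^{delta_1} alpha, ..., I^{delta_n} alpha) - alpha
    # is nonnegative and nondecreasing, as Definition 2 requires for a lower solution.
    n = len(alpha)
    t = (b / n) * np.arange(1, n + 1)
    residual = f(t, *[rl_on_grid(alpha, d, b) for d in deltas]) - alpha
    return bool(residual.min() >= -tol and np.all(np.diff(residual) >= -tol))

# Example: the constant alpha = 0 against an illustrative nonlinearity.
f = lambda t, u1, u2: (1 - u1) * (1 - u2)
print(is_lower_solution(f, np.zeros(200), [1.5, 1.25], 1.0))   # True: the residual is 1

An analogous check, with the roles of the two terms reversed, tests upper solutions.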
Theorem 6.
Suppose that the following conditions are satisfied:
(I) There exist functions $\alpha, \beta \in L^1[0,b]$ which are, respectively, lower and upper solutions for problem (4), with $\alpha \le \beta$ on $[0,b]$.
(II) There exist coefficients $A_1, \dots, A_n > 0$ and orders $\delta_1 > \dots > \delta_n > 0$ such that the function $g_{\alpha,\beta}$ given by
$$ g_{\alpha,\beta}(t) = f(t, I_{0^+}^{\delta_1}\beta(t), \dots, I_{0^+}^{\delta_n}\beta(t)) - f(t, I_{0^+}^{\delta_1}\alpha(t), \dots, I_{0^+}^{\delta_n}\alpha(t)) + (A_1 I_{0^+}^{\delta_1} + \dots + A_n I_{0^+}^{\delta_n})(\beta - \alpha)(t) $$
is in $L^1[0,b]$, nonnegative, and nondecreasing on $[0,b]$.
(III) The operator $T := A_1 I_{0^+}^{\delta_1} + \dots + A_n I_{0^+}^{\delta_n}$, associated with the constants given in (II), fulfils $\|T\| < 1$, which is the first hypothesis in Theorem 4 (in this sense, recall Remark 5).
If we denote by $S(\alpha)$ and $S(\beta)$, respectively, the solutions to the problems
$$ (A_1 I_{0^+}^{\delta_1} + \dots + A_n I_{0^+}^{\delta_n} + \mathrm{Id})\, x(t) = (A_1 I_{0^+}^{\delta_1} + \dots + A_n I_{0^+}^{\delta_n})\, \alpha(t) + f(t, I_{0^+}^{\delta_1}\alpha(t), \dots, I_{0^+}^{\delta_n}\alpha(t)) $$
and
$$ (A_1 I_{0^+}^{\delta_1} + \dots + A_n I_{0^+}^{\delta_n} + \mathrm{Id})\, x(t) = (A_1 I_{0^+}^{\delta_1} + \dots + A_n I_{0^+}^{\delta_n})\, \beta(t) + f(t, I_{0^+}^{\delta_1}\beta(t), \dots, I_{0^+}^{\delta_n}\beta(t)), $$
then $S(\alpha) \le S(\beta)$ almost everywhere on $[0,b]$.
Proof. 
Let $w = S(\beta) - S(\alpha) \in L^1[0,b]$. Then, by (II), we have, for $t \in [0,b]$,
$$ (A_1 I_{0^+}^{\delta_1} + \dots + A_n I_{0^+}^{\delta_n} + \mathrm{Id})\, w(t) = f(t, I_{0^+}^{\delta_1}\beta(t), \dots, I_{0^+}^{\delta_n}\beta(t)) - f(t, I_{0^+}^{\delta_1}\alpha(t), \dots, I_{0^+}^{\delta_n}\alpha(t)) + (A_1 I_{0^+}^{\delta_1} + \dots + A_n I_{0^+}^{\delta_n})(\beta - \alpha)(t) = g_{\alpha,\beta}(t), $$
which lies in $L^1[0,b]$. Moreover, due to (II), we have that $g_{\alpha,\beta}$ is nonnegative and nondecreasing. Finally, by hypothesis (III), we get that $w \ge 0$ on $[0,b]$, that is, $S(\alpha) \le S(\beta)$ on $[0,b]$. □
Now, we see how to construct two monotonic sequences from the previous upper and lower solutions, in such a way that each of these sequences converges to a solution to (4).
Theorem 7.
Suppose that $f : [0,b] \times \mathbb{R}^n \to \mathbb{R}$ is continuous and that Conditions (I) and (III) in Theorem 6 hold. Suppose, also, that
(II*) There exist coefficients $A_1, \dots, A_n > 0$, and orders $\delta_1 > \dots > \delta_n > 0$, such that the function $g_{\eta,\xi}$ given by
$$ g_{\eta,\xi}(t) = f(t, I_{0^+}^{\delta_1}\xi(t), \dots, I_{0^+}^{\delta_n}\xi(t)) - f(t, I_{0^+}^{\delta_1}\eta(t), \dots, I_{0^+}^{\delta_n}\eta(t)) + (A_1 I_{0^+}^{\delta_1} + \dots + A_n I_{0^+}^{\delta_n})(\xi - \eta)(t) $$
is in $L^1[0,b]$ and admits a nonnegative and nondecreasing representative, for any choice of functions η, ξ with $\alpha \le \eta \le \xi \le \beta$.
(IV) The map $M : L^1[0,b] \to L^1[0,b]$ induced by f, and given by $M(\eta)(t) = f(t, I_{0^+}^{\delta_1}\eta(t), \dots, I_{0^+}^{\delta_n}\eta(t))$, is well defined and continuous.
Then there exist monotone sequences $(\alpha_n)_{n\in\mathbb{N}}$ and $(\beta_n)_{n\in\mathbb{N}}$ in $L^1[0,b]$ such that $\alpha_0 = \alpha$, $\beta_0 = \beta$. These sequences converge, respectively, to ρ and γ, which are the extremal solutions to (4) in the functional interval $[\alpha, \beta]$.
Proof. 
We consider the functional interval
$$ [\alpha, \beta] := \{ x \in L^1[0,b] : \alpha \le x \le \beta \text{ on } [0,b] \}. $$
For each fixed source term, depending on $\eta \in [\alpha, \beta]$, we consider the following linear problem
$$ (A_1 I_{0^+}^{\delta_1} + \dots + A_n I_{0^+}^{\delta_n} + \mathrm{Id})\, x(t) = (A_1 I_{0^+}^{\delta_1} + \dots + A_n I_{0^+}^{\delta_n})\, \eta(t) + f(t, I_{0^+}^{\delta_1}\eta(t), \dots, I_{0^+}^{\delta_n}\eta(t)). \qquad (5) $$
If we denote the RHS in (5) as $\sigma_\eta$, we can define the operator S as the map taking each $\eta \in [\alpha, \beta]$ into the unique solution to (5) with source term $\sigma_\eta$. It is clear that a function in $L^1[0,b]$ is a fixed point of S if and only if it is a solution to (4).
For the construction of the sequences described in the statement of the theorem, we choose $\alpha_0 = \alpha$, and $\alpha_1$ is the unique solution to (5) associated with $\sigma_\alpha$. The sequences $(\alpha_n)_{n\in\mathbb{N}}$ and $(\beta_n)_{n\in\mathbb{N}}$ are then defined via the recurrence relation
$$ \alpha_n = S(\alpha_{n-1}), \qquad \beta_n = S(\beta_{n-1}), \qquad n \ge 1. $$
We prove the following properties:
i) S is nondecreasing on the functional interval $[\alpha, \beta]$.
ii) The operator S maps the interval $[\alpha, \beta]$ into itself.
iii) $(\alpha_n)_{n\in\mathbb{N}}$ is convergent towards ρ and $(\beta_n)_{n\in\mathbb{N}}$ is convergent towards γ.
iv) ρ and γ are the extremal solutions to (4) in the functional interval $[\alpha, \beta]$.
To check i), consider $\eta, \xi \in [\alpha, \beta]$ such that $\alpha \le \eta \le \xi \le \beta$ on $[0,b]$; we prove that $S(\eta) \le S(\xi)$. Indeed, using (II*), we find that, for $t \in [0,b]$,
$$ \sigma_\xi(t) - \sigma_\eta(t) = (A_1 I_{0^+}^{\delta_1} + \dots + A_n I_{0^+}^{\delta_n})\, \xi(t) + f(t, I_{0^+}^{\delta_1}\xi(t), \dots, I_{0^+}^{\delta_n}\xi(t)) - (A_1 I_{0^+}^{\delta_1} + \dots + A_n I_{0^+}^{\delta_n})\, \eta(t) - f(t, I_{0^+}^{\delta_1}\eta(t), \dots, I_{0^+}^{\delta_n}\eta(t)) $$
is a nonnegative and nondecreasing integrable function. Hence, if we consider
$$ (A_1 I_{0^+}^{\delta_1} + \dots + A_n I_{0^+}^{\delta_n} + \mathrm{Id})\,(S(\xi) - S(\eta))(t) = \sigma_\xi(t) - \sigma_\eta(t), $$
we can conclude that $S(\xi) - S(\eta) \ge 0$ on $[0,b]$ because of (III). Thus, we have proved that S is nondecreasing.
Now, we prove claim ii). Due to i), it is enough to show that $S(\alpha) \in [\alpha, \beta]$ and $S(\beta) \in [\alpha, \beta]$. We just observe that
$$ (A_1 I_{0^+}^{\delta_1} + \dots + A_n I_{0^+}^{\delta_n} + \mathrm{Id})\,(S(\alpha))(t) = (A_1 I_{0^+}^{\delta_1} + \dots + A_n I_{0^+}^{\delta_n})\, \alpha(t) + f(t, I_{0^+}^{\delta_1}\alpha(t), \dots, I_{0^+}^{\delta_n}\alpha(t)), $$
$$ (A_1 I_{0^+}^{\delta_1} + \dots + A_n I_{0^+}^{\delta_n} + \mathrm{Id})\,(\alpha)(t) = (A_1 I_{0^+}^{\delta_1} + \dots + A_n I_{0^+}^{\delta_n})\, \alpha(t) + \alpha(t). $$
Then, it follows that
$$ (A_1 I_{0^+}^{\delta_1} + \dots + A_n I_{0^+}^{\delta_n} + \mathrm{Id})\,(S(\alpha) - \alpha)(t) = f(t, I_{0^+}^{\delta_1}\alpha(t), \dots, I_{0^+}^{\delta_n}\alpha(t)) - \alpha(t), $$
which is nonnegative and nondecreasing, as α is a lower solution. Due to Condition (III), we conclude that $S(\alpha) - \alpha \ge 0$ on $[0,b]$. A similar argument shows that $\beta - S(\beta) \ge 0$. Since S is nondecreasing, we have the chain of inequalities $\alpha \le S(\alpha) \le S(\beta) \le \beta$.
Now, to prove iii), note that $(\alpha_n)_{n\in\mathbb{N}}$ is nondecreasing and $(\beta_n)_{n\in\mathbb{N}}$ is nonincreasing. Indeed, we have seen that $\alpha \le S(\alpha) \le S(\beta) \le \beta$ and that S is nondecreasing. Thus, we have that $\alpha \le S(\alpha) \le S^2(\alpha) \le S^2(\beta) \le S(\beta) \le \beta$. If we apply this argument inductively, we derive the monotonicity of the sequences $(\alpha_n)_{n\in\mathbb{N}}$ and $(\beta_n)_{n\in\mathbb{N}}$. We need to prove that both sequences are convergent. Without loss of generality, we develop our argument for the sequence $(\alpha_n)_{n\in\mathbb{N}}$.
We see that $|\alpha_n| \le |\alpha| + |\beta|$ for any $n \in \mathbb{N}$. Thus, the sequence $(\alpha_n)_{n\in\mathbb{N}}$ is uniformly bounded by an integrable function.
Moreover, for almost every $t \in [0,b]$, the sequence $(\alpha_n(t))_{n\in\mathbb{N}}$ is well defined, nondecreasing, and bounded from above by $\beta(t)$. Hence, for almost every $t \in [0,b]$, the sequence $(\alpha_n(t))_{n\in\mathbb{N}}$ is convergent; that is, $(\alpha_n)_{n\in\mathbb{N}}$ converges pointwise almost everywhere on $[0,b]$.
Due to Lebesgue's Dominated Convergence Theorem, we conclude that $(\alpha_n)_{n\in\mathbb{N}}$ converges in the $L^1$ norm to some $\rho \in L^1[0,b]$. Analogously, we deduce that $(\beta_n)_{n\in\mathbb{N}}$ converges to some $\gamma \in L^1[0,b]$.
To prove iv), we use that
$$ (A_1 I_{0^+}^{\delta_1} + \dots + A_n I_{0^+}^{\delta_n} + \mathrm{Id})\, \alpha_{n+1}(t) = (A_1 I_{0^+}^{\delta_1} + \dots + A_n I_{0^+}^{\delta_n})\, \alpha_n(t) + f(t, I_{0^+}^{\delta_1}\alpha_n(t), \dots, I_{0^+}^{\delta_n}\alpha_n(t)). \qquad (6) $$
Recall that the sequence $(\alpha_n)_{n\in\mathbb{N}}$ converges in the $L^1$ norm towards ρ. As the fractional integral operators $I_{0^+}^{\delta_j}$, and the function f, are continuous, we can take limits at both sides of (6) and conclude
$$ (A_1 I_{0^+}^{\delta_1} + \dots + A_n I_{0^+}^{\delta_n} + \mathrm{Id})\, \rho(t) = (A_1 I_{0^+}^{\delta_1} + \dots + A_n I_{0^+}^{\delta_n})\, \rho(t) + f(t, I_{0^+}^{\delta_1}\rho(t), \dots, I_{0^+}^{\delta_n}\rho(t)). $$
Therefore, ρ is a solution to problem (4). Similarly, one shows that γ is also a solution to (4).
Finally, if $x \in L^1[0,b]$ is a solution to (4) such that $\alpha \le x \le \beta$, we use that S is nondecreasing, together with the fact that x is a fixed point of S, to conclude that $\alpha_n = S^n(\alpha) \le S^n(x) = x \le S^n(\beta) = \beta_n$. Thus, we have that $\alpha_n \le x \le \beta_n$ for every $n \in \mathbb{N}$. This implies, after taking limits as $n \to \infty$, that $\rho \le x \le \gamma$, showing that ρ and γ are the extremal solutions in $[\alpha, \beta]$. □
Remark 6. 
Condition (IV) can be removed if all the orders $\delta_i \ge 1$, the lower and upper solutions are bounded, and f is continuously differentiable.
It is clear that the intervals $[I_{0^+}^{\delta_i}\alpha, I_{0^+}^{\delta_i}\beta]$ are bounded. Thus, f is Lipschitz on the compact set $[0,b] \times [I_{0^+}^{\delta_n}\alpha, I_{0^+}^{\delta_n}\beta] \times \dots \times [I_{0^+}^{\delta_1}\alpha, I_{0^+}^{\delta_1}\beta]$. It is obvious that $M(\eta)(t)$ is measurable, once η is fixed. Moreover, due to the Lipschitz condition, it is straightforward to see that it is also absolutely integrable.
Now we need to prove that M is continuous. We consider a sequence $\eta_n$ converging to η in the $L^1[0,b]$ norm, and we need to show that $M(\eta_n)$ converges to $M(\eta)$ in the $L^1[0,b]$ norm.
First, we observe that $I_{0^+}^{\delta_i}\eta_n$ converges uniformly to $I_{0^+}^{\delta_i}\eta$, since
$$ I_{0^+}^{\delta_i}|\eta_n - \eta|(t) \le I_{0^+}^{\delta_i}|\eta_n - \eta|(b) = I_{0^+}^{\delta_i - 1} I_{0^+}^{1}|\eta_n - \eta|(b) \le \frac{b^{\delta_i - 1}}{\Gamma(\delta_i)}\, \|\eta_n - \eta\|_{L^1[0,b]} $$
for every $t \in [0,b]$. Hence, given any $\delta > 0$, we can consider $m \in \mathbb{N}$ such that $I_{0^+}^{\delta_i}|\eta_{m_1} - \eta_{m_2}|(t) < \delta$ whenever $m_1, m_2 > m$, for any valid subindex $i \in \{1, \dots, n\}$ and any $t \in [0,b]$.
Since f is continuous, and the intervals $[I_{0^+}^{\delta_i}\alpha, I_{0^+}^{\delta_i}\beta]$ are bounded, f is uniformly continuous on the compact set $[0,b] \times [I_{0^+}^{\delta_n}\alpha, I_{0^+}^{\delta_n}\beta] \times \dots \times [I_{0^+}^{\delta_1}\alpha, I_{0^+}^{\delta_1}\beta]$. Thus, due to the last paragraph, and given any $\varepsilon > 0$, we can consider $m \in \mathbb{N}$ such that
$$ |M(\eta_{m_1})(t) - M(\eta_{m_2})(t)| = |f(t, I_{0^+}^{\delta_1}\eta_{m_1}(t), \dots, I_{0^+}^{\delta_n}\eta_{m_1}(t)) - f(t, I_{0^+}^{\delta_1}\eta_{m_2}(t), \dots, I_{0^+}^{\delta_n}\eta_{m_2}(t))| < \varepsilon, $$
whenever $m_1, m_2 > m$, for any $t \in [0,b]$.
Thus, if we make a direct estimate with the $L^1$ norm, we obtain that we can choose $m \in \mathbb{N}$ such that
$$ \|M(\eta_{m_1}) - M(\eta_{m_2})\| < \varepsilon \cdot b, $$
whenever $m_1, m_2 > m$. Since $L^1[0,b]$ is complete, $M(\eta_m)$ is convergent to an $L^1[0,b]$ function. The previous arguments, replacing $(\eta_{m_1}, \eta_{m_2})$ by $(\eta, \eta_m)$, show that $\|M(\eta_m) - M(\eta)\|$ tends to zero as $m \to \infty$, implying that $M(\eta_m)$ converges to $M(\eta)$, and that $M(\eta)$ lies in $L^1[0,b]$.

5. An Example

In this final section, we provide an example of application of the previous results. We will obtain a specific value of b ensuring the existence of solutions in $L^1[0,b]$, lying between a lower and an upper solution, to the following problem:
$$ x(t) = f\!\left(t, I_{0^+}^{3/2} x(t), I_{0^+}^{5/4} x(t)\right) := \left(1 - \Gamma\!\left(\tfrac{5}{2}\right) I_{0^+}^{3/2} x(t)\right) \cdot \left(1 - \Gamma\!\left(\tfrac{9}{4}\right) I_{0^+}^{5/4} x(t)\right). $$
To check (I), observe that the constant functions $\alpha = 0$ and $\beta = 1$ are, respectively, a lower and an upper solution to our equation for $b = 1$.
On the one hand, we have that
$$ f\!\left(t, I_{0^+}^{3/2}\alpha(t), I_{0^+}^{5/4}\alpha(t)\right) - \alpha(t) = 1. $$
Of course, this function is nonnegative and nondecreasing.
On the other hand, we have that
$$ \beta(t) - f\!\left(t, I_{0^+}^{3/2}\beta(t), I_{0^+}^{5/4}\beta(t)\right) = t^{3/2} + t^{5/4} - t^{11/4}, $$
which is nonnegative and nondecreasing on $[0,1]$.
To check (II*), consider the perturbation operator $T = A_1 \cdot I_{0^+}^{\delta_1} + A_2 \cdot I_{0^+}^{\delta_2} = \Gamma\!\left(\tfrac{5}{2}\right) I_{0^+}^{3/2} + \Gamma\!\left(\tfrac{9}{4}\right) I_{0^+}^{5/4}$. We have that
$$ g_{\eta,\xi}(t) = \Gamma\!\left(\tfrac{9}{4}\right) I_{0^+}^{5/4}(\xi - \eta)(t) \cdot \Gamma\!\left(\tfrac{5}{2}\right) I_{0^+}^{3/2}(\xi - \eta)(t), $$
and (II*) is fulfilled, since $\xi \ge \eta$ implies that each factor of the previous product is nonnegative and nondecreasing. Of course, (II*) implies (II).
To check (III), we compute $\|T\|$ via Remark 4. We get that
$$ \|T\| = b^{3/2} + b^{5/4}. $$
It is easy to see numerically that $b = \tfrac{3}{5}$ implies $\|T\| < 1$.
Finally, (IV) holds trivially, by virtue of Remark 6: the integral orders associated with the fractional operator are greater than or equal to one, the upper and lower solutions are bounded, and the function $f(t, a_1, a_2) = \left(1 - \Gamma\!\left(\tfrac{5}{2}\right) a_1\right) \cdot \left(1 - \Gamma\!\left(\tfrac{9}{4}\right) a_2\right)$ is continuously differentiable.
Thus, the previous problem satisfies the hypotheses of Theorem 7 when $b = \tfrac{3}{5}$. In particular, we know that the problem has at least one solution defined on $[0, \tfrac{3}{5}]$, whose image lies in the interval $[0,1]$.
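To close the example, the following sketch (our own illustration; the discretization, the grid size, and the helper names are assumptions, not taken from the paper) checks numerically that $b = \tfrac{3}{5}$ gives $\|T\| = b^{3/2} + b^{5/4} < 1$ and runs a few steps of the monotone iteration $\alpha_{k+1} = S(\alpha_k)$, $\beta_{k+1} = S(\beta_k)$ starting from the lower and upper solutions $\alpha = 0$, $\beta = 1$.

import numpy as np
from math import gamma

def rl_matrix(delta, b, n):
    # Lower-triangular product-rule discretization of I_{0^+}^delta; illustrative only.
    h = b / n
    t = h * np.arange(1, n + 1)
    M = np.zeros((n, n))
    for i in range(n):
        M[i, :i + 1] = ((t[i] - t[:i + 1] + h) ** delta
                        - (t[i] - t[:i + 1]) ** delta) / gamma(delta + 1)
    return M

b, n = 0.6, 400                                  # b = 3/5
print(b**1.5 + b**1.25)                          # ||T|| is about 0.99 < 1, as claimed above

A1, A2 = gamma(2.5), gamma(2.25)                 # A_1 = Gamma(5/2), A_2 = Gamma(9/4)
M1, M2 = rl_matrix(1.5, b, n), rl_matrix(1.25, b, n)
T = A1 * M1 + A2 * M2
f = lambda u1, u2: (1 - gamma(2.5) * u1) * (1 - gamma(2.25) * u2)

def S(eta):
    # One step of the monotone method: solve (T + Id) x = T eta + f(I^{3/2} eta, I^{5/4} eta).
    rhs = T @ eta + f(M1 @ eta, M2 @ eta)
    return np.linalg.solve(T + np.eye(n), rhs)

alpha, beta = np.zeros(n), np.ones(n)
for _ in range(8):
    alpha, beta = S(alpha), S(beta)
print(float(alpha.min()), float(beta.max()), float(np.max(beta - alpha)))
# alpha stays nonnegative, beta stays below 1 (up to discretization error),
# and the gap beta - alpha shrinks, squeezing the iterates towards a solution on [0, 3/5].

This is only a rough numerical companion to the theorem: the theoretical statement concerns the extremal $L^1$ solutions, while the sketch tracks grid approximations of the monotone iterates.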

Author Contributions

All authors contributed equally to every part of this work. All authors have read and agreed to the published version of the manuscript.

Funding

This research was partially supported by grant numbers MTM2016-75140-P (AEI/FEDER, UE) and ED431C 2019/02 (GRC Xunta de Galicia).

Acknowledgments

The authors are grateful to the Editor Yuriy Rogovchenko and the Reviewers for their comments on the paper.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.

References

  1. Samko, S.G.; Kilbas, A.A.; Marichev, O.I. Fractional Integrals and Derivatives; Gordon and Breach Science Publishers: Yverdon, Switzerland, 1993.
  2. Podlubny, I. Fractional Differential Equations: An Introduction to Fractional Derivatives, Fractional Differential Equations, to Methods of Their Solution and Some of Their Applications; Elsevier: Amsterdam, The Netherlands, 1998.
  3. Cao Labora, D.; Rodríguez-López, R. From fractional order equations to integer order equations. Fract. Calc. Appl. Anal. 2017, 20, 1405–1423.
  4. Cao Labora, D.; Rodríguez-López, R. Improvements in a method for solving fractional integral equations with some links with fractional differential equations. Fract. Calc. Appl. Anal. 2018, 21, 174–189.
  5. Ladde, G.S.; Lakshmikantham, V.; Vatsala, A.S. Monotone Iterative Techniques for Nonlinear Differential Equations; Pitman Publishing: Marshfield, MA, USA, 1985.
  6. Lakshmikantham, V.; Leela, S. Differential and Integral Inequalities: Theory and Applications: Volume I: Ordinary Differential Equations; Academic Press: Cambridge, MA, USA, 1969.
  7. Muscat, J. Functional Analysis: An Introduction to Metric Spaces, Hilbert Spaces, and Banach Algebras; Springer: Berlin/Heidelberg, Germany, 2014.
  8. Bollobás, B. Linear Analysis; Cambridge University Press: Cambridge, UK, 1990.
  9. Rudin, W. Principles of Mathematical Analysis; McGraw-Hill: New York, NY, USA, 1964.
