Article

Parameter Estimation for a Fractional Black–Scholes Model with Jumps from Discrete Time Observations

1 Laboratoire de Mathématiques et Informatique et Applications (LAMIA), Université des Antilles, 97157 Pointe-à-Pitre, France
2 Ecole Normale Supérieure, Université d’Etat d’Haiti, Port-au-Prince HT6110, Haiti
* Author to whom correspondence should be addressed.
Mathematics 2022, 10(22), 4190; https://doi.org/10.3390/math10224190
Submission received: 5 September 2022 / Revised: 13 October 2022 / Accepted: 13 October 2022 / Published: 9 November 2022
(This article belongs to the Special Issue Advances in Applied Probability and Statistical Inference)

Abstract: We consider a stochastic differential equation (SDE) governed by a fractional Brownian motion $(B_t^H)$ and a Poisson process $(N_t)$ associated with a stochastic process $(A_t)$ such that $dX_t = \mu X_t\,dt + \sigma X_t\,dB_t^H + A_t X_{t^-}\,dN_t$, $X_0 = x_0 > 0$. The solution of this SDE is analyzed and properties of its trajectories are presented. Estimators of the model parameters are proposed when the observations are carried out in discrete time. Some convergence properties of these estimators are provided according to conditions concerning the value of the Hurst index and the nonequidistance of the observation dates.

1. Introduction

Fractional Brownian motion (fBm) is an attractive modeling tool in many domains where self-similarity and short- or long-range dependence are evident. In this paper, we consider a stochastic differential equation (SDE) whose solutions are continuous-time stochastic processes that can be used in different applications such as finance or hydrology. It extends the Black–Scholes model [1] to the case where there is short- or long-term dependence with jumps of random amplitude. The main purpose of this paper is to provide estimation procedures for the extended model parameters.
Let $(\Omega, \mathcal{F}, (\mathcal{F}_t)_{t \ge 0}, P)$ be a filtered complete probability space. The SDE considered is:
$$dX_t = \mu X_t\,dt + \sigma X_t\,dB_t^H + A_t X_{t^-}\,dN_t, \qquad (1)$$
where $\mu \in \mathbb{R}$ is the drift coefficient and $\sigma \in \mathbb{R}_+^*$ is the diffusion coefficient. $(B_t^H)_{t \ge 0}$ is the standard fBm with Hurst index $H \in (0,1)$ [2], $(N_t)_{t \ge 0}$ is a homogeneous Poisson process with intensity $\lambda \in \mathbb{R}_+^*$, and $(A_t)_{t \ge 0}$ is a stochastic process taking values in $(-1, +\infty)$. We assume that the three processes $(N_t)_{t \ge 0}$, $(A_t)_{t \ge 0}$ and $(B_t^H)_{t \ge 0}$ are independent and adapted to $(\mathcal{F}_t)_{t \ge 0}$.
Our motivation is to propose parameter estimators for the solution $X = (X_t)_{t \ge 0}$ of the SDE (1). Note that without the jump term $A_t X_{t^-}\,dN_t$, this SDE defines the Black–Scholes model governed by an fBm [3], which is one of the extensions of the mathematical model in finance introduced by [1]; another is the mixed fractional SDE proposed by [4].
In the following, we denote by $(\mu, \sigma^2, \lambda, \theta, x_0, H)$ the model followed by the solution of Equation (1), where $\theta \in \mathbb{R}^k$ stands for the distribution parameter of $(A_t)_{t \ge 0}$ and $x_0 \in \mathbb{R}_+^*$ is the known initial value of $(X_t)_{t \ge 0}$. For an observation interval of length $T > 0$, we focus on the estimation of $\mu$, $\sigma^2$, $\lambda$ and $\theta$ from data consisting of observations of $X$ on $(0,T]$ at $n$ dates $0 < t_1 < \dots < t_n \le T$, plus the amplitude $A_{D_i}$ and the date $D_i$ of each jump for $i \in [1, N_T] \cap \mathbb{N}$. As in [5], we assume that the self-similarity parameter $H$ is known. Nevertheless, procedures for estimating $H$ exist, such as quadratic variation methods [6] and regression methods [7].
It is worth pointing out that statistical inference for fractional diffusion processes (without jumps) has been studied by many authors under discrete observations [3,8,9,10] and also under a continuous-observation assumption [9]. These works propose estimation methods concerning mainly the drift and diffusion coefficients, but sometimes also the Hurst index $H$ [3]. On the other hand, the Black–Scholes model with jumps governed by the standard Brownian motion was applied to hydrology data [11]. In this paper, we consider methods for estimating the drift and diffusion coefficients as well as the parameters of the jump distribution for a fractional Black–Scholes model with jumps. In Section 2, we first recall some properties of fBm and then present distributional properties of the solution of Equation (1). In Section 3, we propose estimators for the parameters of this fractional Black–Scholes process with jumps and study their asymptotic properties. Procedures for simulating $(\mu, \sigma^2, \lambda, \theta, x_0, H)$ are provided in Section 4, with numerical code given in Appendix A. Section 5 proposes some perspectives for future work.

2. Preliminaries

The standard fractional Brownian motion $(B_t^H)_{t \in \mathbb{R}}$ with Hurst index $H \in (0,1)$ is a continuous and centered Gaussian process with covariance function [12]:
$$E(B_s^H B_t^H) = \frac{1}{2}\left(|s|^{2H} + |t|^{2H} - |s-t|^{2H}\right), \quad (s,t) \in \mathbb{R}^2. \qquad (2)$$
Thus, $B_t^H$ follows a Gaussian distribution with parameters $(0, |t|^{2H})$. In the following, we consider the restriction of the fBm to $\mathbb{R}_+$. The process $(B_t^H)_{t \ge 0}$ has stationary increments, but these increments are dependent unless $H = \frac{1}{2}$, in which case we recover the standard Brownian motion. When $H \in (\frac{1}{2}, 1)$, we have long-range dependence and a positive correlation between increments; conversely, the increments are negatively correlated when $H \in (0, \frac{1}{2})$. Long-range dependence with a positive correlation is usually considered in practice for more realistic modeling. More detailed properties of fBm can be found in [2,12,13]. Note the pioneering work of [14] and their discussion of potential applications of fBm for modeling long-term dependence in economics or hydrology.
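These correlation properties follow directly from Equation (2): the autocovariance of the unit-step increments of $B^H$ at lag $k$ is $\rho_H(k) = \frac{1}{2}\big(|k+1|^{2H} - 2|k|^{2H} + |k-1|^{2H}\big)$. The following R lines are a one-line check we add here for illustration (the function name rhoH is ours):

# Autocovariance of unit-step fBm increments at lag k, derived from Equation (2)
rhoH = function(k, H) 0.5 * (abs(k + 1)^(2 * H) - 2 * abs(k)^(2 * H) + abs(k - 1)^(2 * H))
rhoH(1, H = 0.7)  # positive: long-range dependence
rhoH(1, H = 0.3)  # negative: anti-persistent increments
rhoH(1, H = 0.5)  # zero: independent increments of standard Brownian motion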
Lemma 1.
Let $T \in \mathbb{R}_+^*$ and $(X_t)_{t \in \mathbb{R}_+}$ be a solution of SDE (1); then
$$\forall t \in (0,T], \quad X_t = \begin{cases} (1+A_t)\,X_{t^-} & \text{if } t \in \{D_1, \dots, D_{N_T}\} \\ X_{t^-} & \text{otherwise} \end{cases} \qquad (3)$$
where $X_{t^-} = \lim_{s < t,\, s \to t} X_s$.
Proof. 
Note that the fractional Black–Scholes process without jumps has continuous trajectories since the fBm is continuous. When jumps are included in the model, the process trajectories are continuous on any open interval $(D_i, D_{i+1})$ between two consecutive jump dates. Furthermore, from Equation (1), we can write, for all $(t, \epsilon) \in (0,T] \times \mathbb{R}_+^*$,
$$X_t - X_{t-\epsilon} = \mu \int_{t-\epsilon}^{t} X_s\,ds + \sigma \int_{t-\epsilon}^{t} X_s\,dB_s^H + \int_{t-\epsilon}^{t} A_s X_{s^-}\,dN_s \qquad (4)$$
where $\int_{t-\epsilon}^{t} A_s X_{s^-}\,dN_s = \sum_{i \in (N_{t-\epsilon}, N_t]} A_{D_i} X_{D_i^-}$. When $\epsilon$ tends to zero, we get
$$X_t - X_{t^-} = \sum_{i \in (N_{t^-}, N_t]} A_{D_i} X_{D_i^-} = \begin{cases} A_t X_{t^-} & \text{if } t \in \{D_1, \dots, D_{N_T}\} \\ 0 & \text{otherwise.} \end{cases}$$
This leads us to the final result.    □
This lemma emphasizes that $A_t$ is the relative jump at date $t$, while the raw jump at $t$, namely $X_t - X_{t^-}$, is equal to $A_t X_{t^-}$.
Our model does not rule out jumps with negative values. For example, suppose that $1 + A_t$ follows a log-Gaussian distribution denoted by $\mathcal{LN}(\theta_{1,t}, \theta_{2,t})$. This means that $\theta_{1,t}$ and $\theta_{2,t}$ are the expectation and variance of $\log(1+A_t)$, respectively. In such a case, $A_t$ may take negative values since
$$P\big(A_t \in (-1, 0]\big) = P\big(\log(1+A_t) \in (-\infty, 0]\big) = \Phi\!\left(\frac{-\theta_{1,t}}{\sqrt{\theta_{2,t}}}\right) > 0 \qquad (5)$$
where $\Phi$ is the standard normal cumulative distribution function.
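For instance, with $\theta = (\theta_1, \theta_2) = (0.1, 0.01)$, the values used in the simulations of Section 4, this probability can be checked in R:

pnorm(-0.1 / sqrt(0.01))  # Phi(-1) = 0.1587, the negative-amplitude probability of Equation (5)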
Theorem 1.
The solution of Equation (1) is given by:
$$\forall t \in \mathbb{R}_+, \quad X_t = X_0 \exp\!\left(\mu t - \frac{1}{2}\sigma^2 t^{2H} + \sigma B_t^H + \int_0^t \log(1+A_s)\,dN_s\right). \qquad (6)$$
Proof. 
To solve SDE (1), we proceed in two steps. Firstly, we consider the SDE without jumps associated with SDE (1):
$$dX_t = \mu X_t\,dt + \sigma X_t\,dB_t^H, \quad X_0 = x_0 > 0. \qquad (7)$$
By applying to SDE (7) the stochastic integration method proposed by [15] for SDEs driven by a fractional Brownian motion, we get:
$$\forall t \in \mathbb{R}_+, \quad X_t = X_0 \exp\!\left(\mu t + \sigma B_t^H - \frac{1}{2}\sigma^2 t^{2H}\right). \qquad (8)$$
Secondly, we take into consideration the fact that there is no jump in any open interval between two consecutive jump dates. For any given $t$ in $\mathbb{R}_+$, we can write
$$[0,t] = \bigcup_{i=1}^{N_t} [D_{i-1}, D_i) \cup \{D_{N_t}\} \cup (D_{N_t}, t] \quad \text{with } D_0 = 0.$$
According to Equality (8), conditionally on $X_0 = x_0$, we have
$$\forall s \in [0, D_1), \quad X_s = x_0 \exp\!\left(\mu s + \sigma B_s^H - \frac{1}{2}\sigma^2 s^{2H}\right). \qquad (9)$$
Therefore,
$$X_{D_1^-} = \lim_{t < D_1,\, t \to D_1} X_t = x_0 \exp\!\left(\mu D_1 + \sigma B_{D_1}^H - \frac{1}{2}\sigma^2 D_1^{2H}\right)$$
and from Lemma 1, we get
$$X_{D_1} = x_0 (1 + A_{D_1}) \exp\!\left(\mu D_1 + \sigma B_{D_1}^H - \frac{1}{2}\sigma^2 D_1^{2H}\right). \qquad (10)$$
On the other hand, Lemma 1 gives us
$$\forall i \in [1, N_t] \cap \mathbb{N}, \quad X_{D_i} = (1 + A_{D_i})\,X_{D_i^-}.$$
Consequently, for $1 < i \le N_t$, we obtain in a similar way as above
$$\forall s \in [D_{i-1}, D_i), \quad X_s = x_0 \prod_{j=1}^{i-1} (1 + A_{D_j}) \exp\!\left(\mu s + \sigma B_s^H - \frac{1}{2}\sigma^2 s^{2H}\right) \qquad (11)$$
and
$$X_{D_i} = x_0 \prod_{j=1}^{i} (1 + A_{D_j}) \exp\!\left(\mu D_i + \sigma B_{D_i}^H - \frac{1}{2}\sigma^2 D_i^{2H}\right). \qquad (12)$$
Thus, for $s \in (D_{N_t}, t]$,
$$X_s = x_0 \prod_{i=1}^{N_t} (1 + A_{D_i}) \exp\!\left(\mu s + \sigma B_s^H - \frac{1}{2}\sigma^2 s^{2H}\right). \qquad (13)$$
Since
$$\prod_{i=1}^{N_t} (1 + A_{D_i}) = \exp\!\left(\int_0^t \log(1 + A_s)\,dN_s\right),$$
Equations (9)–(13) lead us to the final result.    □
Proposition 1.
Consider the process $(X_t)_{t \ge 0}$ defined by Equation (6) and suppose that the following three conditions are verified:
(C1) The processes $(N_t)_{t \ge 0}$, $(A_t)_{t \ge 0}$ and $(B_t^H)_{t \ge 0}$ are independent;
(C2) $\forall i \in [1, N_t] \cap \mathbb{N}$, $E(A_{D_i}^2) < \infty$;
(C3) For any $t > 0$, the random variables $A_{D_i}$, $i = 1, \dots, N_t$, are independent and identically distributed.
Then, for $(t, x_0) \in \mathbb{R}_+ \times \mathbb{R}$, conditionally on $X_0 = x_0$:
(i) The expectation of $X_t$ is
$$E(X_t \mid X_0 = x_0) = x_0 \exp\!\big(\big(\mu + \lambda E(A_{D_1})\big)t\big); \qquad (14)$$
(ii) The variance of $X_t$ is
$$\mathrm{Var}(X_t \mid X_0 = x_0) = E^2(X_t \mid X_0 = x_0)\left[\exp\!\big(\sigma^2 t^{2H} + \lambda t\, E(A_{D_1}^2)\big) - 1\right]. \qquad (15)$$
Proof. 
(i) Equation (6) leads to the expectation of $X_t$ conditionally on $X_0 = x_0$:
$$E(X_t \mid X_0 = x_0) = x_0 \exp\!\left(\mu t - \frac{1}{2}\sigma^2 t^{2H}\right) E\!\left[e^{\sigma B_t^H} \exp\!\left(\int_0^t \log(1 + A_s)\,dN_s\right)\right].$$
Since condition (C1) is verified, then
$$E(X_t \mid X_0 = x_0) = x_0 \exp\!\left(\mu t - \frac{1}{2}\sigma^2 t^{2H}\right) \exp\!\left(\frac{1}{2}\sigma^2 t^{2H}\right) E\!\left[\prod_{i=1}^{N_t}(1 + A_{D_i})\right] = x_0 e^{\mu t}\, E\!\left[\big(1 + E(A_{D_1} \mid N_t)\big)^{N_t}\right] \quad \text{(because of (C3))}.$$
The independence of $A_{D_1}$ and $N_t$ implies that $E(A_{D_1} \mid N_t) = E(A_{D_1})$, and then, using the probability generating function of the Poisson distribution,
$$E(X_t \mid X_0 = x_0) = x_0 e^{\mu t}\, E\!\left[\big(1 + E(A_{D_1})\big)^{N_t}\right] = x_0 e^{\mu t} \exp\!\big(\lambda t\,(1 + E(A_{D_1}) - 1)\big) = x_0 \exp\!\big(\big(\mu + \lambda E(A_{D_1})\big)t\big).$$
(ii) Similarly, we get
$$E(X_t^2 \mid X_0 = x_0) = x_0^2 \exp\!\left(2\mu t - \sigma^2 t^{2H}\right) \exp\!\left(2\sigma^2 t^{2H}\right) E\!\left[\prod_{i=1}^{N_t}(1 + A_{D_i})^2\right] = x_0^2 \exp\!\left(2\mu t + \sigma^2 t^{2H}\right) \exp\!\big(\lambda t\big(2E(A_{D_1}) + E(A_{D_1}^2)\big)\big),$$
so that $\mathrm{Var}(X_t \mid X_0 = x_0) = E^2(X_t \mid X_0 = x_0)\left[\exp\!\big(\sigma^2 t^{2H} + \lambda t E(A_{D_1}^2)\big) - 1\right]$.    □
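These moment formulas lend themselves to a simple Monte Carlo check. The sketch below is an illustration we add here, assuming log-Gaussian amplitudes and the case 1 parameter values of Section 4; it uses the fact that, for a fixed $t$, $B_t^H \sim \mathcal{N}(0, t^{2H})$ and $N_t$ is Poisson with mean $\lambda t$, so the marginal distribution of $X_t$ in Equation (6) can be sampled exactly:

# Monte Carlo check of Equation (14) for the marginal law of X_t at a fixed t
set.seed(1)
mu = -1; sigma2 = 0.2; lambda = 5; theta = c(0.1, 0.01); x0 = 6; H = 0.4
t = 2; M = 1e5
Z = rnorm(M, sd = t^H)                     # B_t^H is N(0, t^(2H))
Nt = rpois(M, lambda * t)                  # number of jumps on [0, t]
# Given N_t, the sum of the log(1 + A_{D_i}) is N(N_t*theta1, N_t*theta2):
S = rnorm(M, mean = Nt * theta[1], sd = sqrt(Nt * theta[2]))
Xt = x0 * exp(mu * t - 0.5 * sigma2 * t^(2 * H) + sqrt(sigma2) * Z + S)
mean(Xt)                                   # empirical expectation
EA = exp(theta[1] + theta[2] / 2) - 1      # E(A_{D_1}) for log-Gaussian amplitudes
x0 * exp((mu + lambda * EA) * t)           # theoretical value, Equation (14)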
Theorem 2.
Let $(X_t)_{t \in \mathbb{R}_+}$ be the solution of Equation (1). If $\mu + \lambda E(A_{D_1}) < 0$, then the expected process $(E(X_t))_{t \ge 0}$ converges to zero and we have the following results:
(i) If $2H - 1 < 0$ and $\mu < -\frac{\lambda}{2}\big(2E(A_{D_1}) + E(A_{D_1}^2)\big)$, then $(X_t)$ converges in mean square to zero.
(ii) If $2H - 1 = 0$ and $\mu < -\frac{\lambda}{2}\big(2E(A_{D_1}) + E(A_{D_1}^2)\big) - \frac{\sigma^2}{2}$, then $(X_t)$ converges in mean square to zero.
(iii) If $2H - 1 > 0$, there is no mean-square convergence.
Proof. 
From Equation (14), we have
$$\forall t \in \mathbb{R}_+, \quad E(X_t \mid X_0 = x_0) = x_0 \exp\!\big(\big(\mu + \lambda E(A_{D_1})\big)t\big).$$
Since $\mu + \lambda E(A_{D_1}) < 0$, then
$$\forall x_0 \in \mathbb{R}, \quad \lim_{t \to +\infty} E(X_t \mid X_0 = x_0) = 0.$$
On the other hand, let us write
$$R_t = 2\mu + 2\lambda E(A_{D_1}) + \lambda E(A_{D_1}^2) + \sigma^2 t^{2H-1}.$$
Then,
$$E(X_t^2 \mid X_0 = x_0) = x_0^2 \exp\!\big(\big(2\mu + 2\lambda E(A_{D_1}) + \lambda E(A_{D_1}^2) + \sigma^2 t^{2H-1}\big)t\big) = x_0^2 \exp(t R_t).$$
(i) If $2H - 1 < 0$, then $\lim_{t \to +\infty} R_t = 2\mu + 2\lambda E(A_{D_1}) + \lambda E(A_{D_1}^2) < 0$, so that $\lim_{t \to +\infty} E(X_t^2) = 0$.
(ii) If $2H - 1 = 0$, then $\lim_{t \to +\infty} R_t = 2\mu + 2\lambda E(A_{D_1}) + \lambda E(A_{D_1}^2) + \sigma^2 < 0$ and $\lim_{t \to +\infty} E(X_t^2) = 0$.
(iii) If $2H - 1 > 0$, then $t R_t = \big(2\mu + 2\lambda E(A_{D_1}) + \lambda E(A_{D_1}^2)\big)t + \sigma^2 t^{2H}$ tends to infinity as $t$ tends to infinity, since $2H > 1$. Therefore, we do not have mean-square convergence to zero.    □

3. Parameter Estimation

In the case of SDEs without jumps, several parameter estimation methods have been discussed in the literature [3,5,9,16,17]. In this paper, our main goal is to estimate the parameters of model (6), which incorporates jumps. The process $(X_t)_{t \ge 0}$ is observed discretely on the interval $[0,T]$ at dates $t_1, \dots, t_n$ with $T > 0$ and $0 < t_1 < \dots < t_n \le T$. The process $(N_t)_{t \ge 0}$ is fully observed on $[0,T]$ and each jump amplitude $A_{D_i}$ is available for $i \in [1, N_T] \cap \mathbb{N}$.
In the following, we write $Y_t = \log(X_t)$, so that Equation (6) leads to:
$$Y_t = \log(X_0) + \sum_{i=1}^{N_t} \log(1 + A_{D_i}) + \mu t - \frac{1}{2}\sigma^2 t^{2H} + \sigma B_t^H. \qquad (16)$$
In Section 3.1, the maximum likelihood estimators (MLEs) of $\mu$, $\sigma^2$ and $\lambda$ are provided assuming that $x_0$ and $H$ are known. The estimation of the jump amplitude parameter $\theta$ is presented in Section 3.2. Then, an asymptotically unbiased and consistent estimator of $\sigma^2$ is proposed in Section 3.3 by means of quadratic variations. Results on convergence properties of the MLEs are obtained in Section 3.4 with respect to the value of the Hurst index, in situations where equal interval lengths between consecutive observation dates need not be assumed.

3.1. Maximum Likelihood Estimator of $(\mu, \sigma^2, \lambda)$

As mentioned before, refs. [3,17] proposed MLEs in the case of SDEs without jumps. In the following theorem, we take into account the dates and amplitudes of the jumps to provide the MLE of $(\mu, \sigma^2, \lambda)$.
Theorem 3.
The MLEs of $\mu$, $\sigma^2$ and $\lambda$ from the observations $Y_{t_1}, \dots, Y_{t_n}, A_{D_1}, \dots, A_{D_{N_T}}$ are, respectively,
$$\hat{\mu} = \frac{\sum_{i=1}^{n} \sum_{j=1}^{n} t_i \Gamma_{ij}^{-1} \left(Y_{t_j} + \frac{1}{2}\hat{\sigma^2} t_j^{2H} - \log(X_0) - \sum_{k=1}^{N_{t_j}} \log(1 + A_{D_k})\right)}{\sum_{i=1}^{n} \sum_{j=1}^{n} t_i t_j \Gamma_{ij}^{-1}}, \qquad (17)$$
$$\hat{\sigma^2} = \frac{2\left(\left[n^2 + \left(\sum_{i=1}^{n} \sum_{j=1}^{n} C_{t_i} \Gamma_{ij}^{-1} C_{t_j}\right)\left(\sum_{i=1}^{n} \sum_{j=1}^{n} t_i^{2H} t_j^{2H} \Gamma_{ij}^{-1}\right)\right]^{1/2} - n\right)}{\sum_{i=1}^{n} \sum_{j=1}^{n} t_i^{2H} t_j^{2H} \Gamma_{ij}^{-1}}, \qquad (18)$$
$$\hat{\lambda} = \frac{N_T}{T}, \qquad (19)$$
where the $\Gamma_{ij}^{-1}$ are the elements of the inverse matrix $\Gamma^{-1}$ of $\Gamma = \big(E(B_{t_i}^H B_{t_j}^H)\big)_{1 \le i,j \le n}$, whose entries are given by Equation (2), and
$$C_{t_i} = Y_{t_i} - \hat{\mu} t_i - \log(X_0) - \sum_{k=1}^{N_{t_i}} \log(1 + A_{D_k}).$$
Proof. 
The random variable $Y_t$ as defined in expression (16) has a Gaussian distribution conditionally on $(A_{D_i})_{i=1,\dots,N_t}$ and $N_t$, with expectation
$$E\big(Y_t \mid (A_{D_i})_{1 \le i \le N_t}, N_t\big) = \log(x_0) + \sum_{i=1}^{N_t} \log(1 + A_{D_i}) + \mu t - \frac{1}{2}\sigma^2 t^{2H} \qquad (20)$$
and variance
$$\mathrm{Var}\big(Y_t \mid (A_{D_i})_{1 \le i \le N_t}, N_t\big) = \sigma^2 t^{2H}. \qquad (21)$$
Writing $\mathbf{t} = (t_1, \dots, t_n)'$, $\mathbf{t}^{2H} = (t_1^{2H}, \dots, t_n^{2H})'$, $Y = (Y_{t_1}, \dots, Y_{t_n})'$ and $Z_t = \big(\log(x_0) + \sum_{k=1}^{N_{t_1}} \log(1 + A_{D_k}), \dots, \log(x_0) + \sum_{k=1}^{N_{t_n}} \log(1 + A_{D_k})\big)'$, the log-likelihood $L(Y; (\mu, \sigma^2))$ for the observations $Y$ conditional on $(A_{D_i})_{1 \le i \le N_T}$, $N_T$ and $X_0$ is
$$-\frac{n}{2}\log(2\pi) - \frac{1}{2}\log(|\sigma^2 \Gamma|) - \frac{1}{2\sigma^2}\left(Y - \mu \mathbf{t} + \frac{1}{2}\sigma^2 \mathbf{t}^{2H} - Z_t\right)' \Gamma^{-1} \left(Y - \mu \mathbf{t} + \frac{1}{2}\sigma^2 \mathbf{t}^{2H} - Z_t\right)$$
where $\Gamma$ is the square matrix $\big(E(B_{t_i}^H B_{t_j}^H)\big)$ and the prime ($'$) denotes vector transposition.
Therefore, $L(Y; (\mu, \sigma^2))$ can be expressed as follows:
$$L(Y; (\mu, \sigma^2)) = -\frac{n}{2}\log(2\pi) - \frac{1}{2}\log(|\sigma^2 \Gamma|) - \frac{1}{2\sigma^2} \sum_{i=1}^{n} \sum_{j=1}^{n} \left(y_{t_i} - \mu t_i + \frac{1}{2}\sigma^2 t_i^{2H} - \log(x_0) - \sum_{k=1}^{N_{t_i}} \log(1 + A_{D_k})\right) \Gamma_{ij}^{-1} \left(y_{t_j} - \mu t_j + \frac{1}{2}\sigma^2 t_j^{2H} - \log(x_0) - \sum_{k=1}^{N_{t_j}} \log(1 + A_{D_k})\right). \qquad (22)$$
Setting to zero the derivatives of expression (22) with respect to $\mu$ and $\sigma^2$ leads to the MLEs $\hat{\mu}$ and $\hat{\sigma^2}$ as expressed in Equations (17) and (18).
On the other hand, the independence condition (C1) implies that the factor of the full likelihood involving $\lambda$ is $\exp\!\left(\int_0^T \log(\lambda)\,dN_s - \int_0^T \lambda\,ds\right) = \lambda^{N_T} e^{-\lambda T}$, so that the MLE of $\lambda$ is $\hat{\lambda} = \frac{N_T}{T}$.    □
It is worth noticing that $\hat{\mu}$ and $\hat{\sigma^2}$ have matrix expressions:
$$\hat{\mu} = \frac{\mathbf{t}' \Gamma^{-1} K}{\mathbf{t}' \Gamma^{-1} \mathbf{t}}, \qquad (23)$$
$$\hat{\sigma^2} = \frac{2\left(\left[n^2 + \big(\mathbf{t}^{2H}{}' \Gamma^{-1} \mathbf{t}^{2H}\big)\big(C' \Gamma^{-1} C\big)\right]^{1/2} - n\right)}{\mathbf{t}^{2H}{}' \Gamma^{-1} \mathbf{t}^{2H}}, \qquad (24)$$
where $C = (C_{t_1}, \dots, C_{t_n})'$ and $K = (K_{t_1}, \dots, K_{t_n})'$ with
$$K_{t_j} = Y_{t_j} + \frac{1}{2}\hat{\sigma^2} t_j^{2H} - \log(X_0) - \sum_{k=1}^{N_{t_j}} \log(1 + A_{D_k}) \quad \text{for } 1 \le j \le n. \qquad (25)$$
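To make these estimators concrete, the following R sketch (our illustration; the names fbm_cov and mle_mu_sigma2 are ours) implements the matrix forms (23)–(25). Since $\hat{\mu}$ and $\hat{\sigma^2}$ are coupled through $K$ and $C$, the two closed forms are simply iterated from an arbitrary starting value:

# Inputs: Y = log-observations at dates tt, D = jump dates, A = jump
# amplitudes, x0 = initial value, H = Hurst index (assumed known).
fbm_cov = function(tt, H) {
  # fBm covariance matrix Gamma, Equation (2)
  outer(tt, tt, function(s, u) 0.5 * (abs(s)^(2 * H) + abs(u)^(2 * H) - abs(s - u)^(2 * H)))
}
mle_mu_sigma2 = function(Y, tt, D, A, x0, H, iter = 50) {
  n = length(Y)
  G = solve(fbm_cov(tt, H))                            # Gamma^{-1}
  t2H = tt^(2 * H)
  J = sapply(tt, function(u) sum(log(1 + A[D <= u])))  # cumulated log-jumps at each date
  s2 = 1                                               # starting value for sigma2-hat
  mu = 0
  for (r in 1:iter) {
    K = Y + 0.5 * s2 * t2H - log(x0) - J               # Equation (25)
    mu = drop(tt %*% G %*% K) / drop(tt %*% G %*% tt)  # Equation (23)
    C = Y - mu * tt - log(x0) - J
    a = drop(t2H %*% G %*% t2H)
    s2 = 2 * (sqrt(n^2 + a * drop(C %*% G %*% C)) - n) / a  # Equation (24)
  }
  c(mu = mu, sigma2 = s2)
}
# The MLE of lambda, Equation (19), is simply the jump count over the window:
# lambda_hat = length(D) / T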

3.2. Estimation of θ

The observed jump amplitudes $A_{D_1}, \dots, A_{D_{N_T}}$ provide us with a sample of independent realizations from the same distribution, say $\mathcal{D}_\theta$. Consequently, statistical inference on $\theta$ can be performed by means of classical tools [18]. For example, the package maxLik [19] provides different optimization routines in the statistical environment R for maximum likelihood estimation.
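For instance, in the log-Gaussian case used throughout this paper, $\log(1 + A_{D_i})$ is Gaussian with mean $\theta_1$ and variance $\theta_2$, so the MLE of $\theta$ is available in closed form. The helper below is a minimal sketch we add for illustration (for other amplitude laws, a user-written log-likelihood can be passed to an optimizer such as maxLik instead):

# Closed-form MLE of theta = (theta1, theta2) for log-Gaussian jump amplitudes A
theta_hat = function(A) {
  u = log(1 + A)
  c(theta1 = mean(u), theta2 = mean((u - mean(u))^2))  # MLE uses the uncorrected variance
}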

3.3. Quadratic Variation Method for Estimating $\sigma^2$

Ref. [3] proposed a quadratic variation method for estimating $\sigma^2$ in the case where there are no jumps and the observation dates are equidistant. Our goal in this subsection is to extend this method and provide an asymptotically unbiased estimator of $\sigma^2$ in the presence of jumps and for observation dates that are not necessarily equidistant.
Theorem 4.
From the observations $Y_{t_1}, \dots, Y_{t_n}, A_{D_1}, \dots, A_{D_{N_T}}$, the quadratic variation method provides the following estimator of $\sigma^2$:
$$\tilde{\sigma^2} = \frac{1}{n} \sum_{i=0}^{n-1} \frac{\left(Y_{t_{i+1}} - Y_{t_i} - \sum_{j=N_{t_i}+1}^{N_{t_{i+1}}} \log(1 + A_{D_j})\right)^2}{(t_{i+1} - t_i)^{2H}} \qquad (26)$$
with $t_0 = 0$.
For $H > \frac{1}{2}$, if $t_i = hi + \epsilon_i$ for $i = 1, \dots, n$ with $h > 0$ and $\epsilon_i = o\big(\frac{1}{n}\big)$, then the estimator $\tilde{\sigma^2}$ is asymptotically unbiased and consistent for $\sigma^2$.
Proof. 
For any integer $i$ such that $0 \le i < n$, from Equation (16), we get
$$Y_{t_{i+1}} - Y_{t_i} = \sum_{j=N_{t_i}+1}^{N_{t_{i+1}}} \log(1 + A_{D_j}) + \mu(t_{i+1} - t_i) - \frac{1}{2}\sigma^2\big(t_{i+1}^{2H} - t_i^{2H}\big) + \sigma\big(B_{t_{i+1}}^H - B_{t_i}^H\big).$$
In the following, we write $\Delta_i = Y_{t_{i+1}} - Y_{t_i} - \sum_{j=N_{t_i}+1}^{N_{t_{i+1}}} \log(1 + A_{D_j})$. Therefore, for $H > \frac{1}{2}$, we obtain in a similar way as [3]:
$$\Delta_i^2 = \sigma^2\big(B_{t_{i+1}}^H - B_{t_i}^H\big)^2 + o\big((t_{i+1} - t_i)^{2H}\big) \qquad (27)$$
so that
$$\frac{1}{n} \sum_{i=0}^{n-1} \frac{\Delta_i^2}{(t_{i+1} - t_i)^{2H}} = \sigma^2 \sum_{i=0}^{n-1} \frac{\big(B_{t_{i+1}}^H - B_{t_i}^H\big)^2}{n\,(t_{i+1} - t_i)^{2H}} + o\!\left(\frac{1}{n}\right). \qquad (28)$$
Taking the expectation of both members of Equality (28), we get
$$E(\tilde{\sigma^2}) = \sigma^2 + o\!\left(\frac{1}{n}\right) \qquad (29)$$
so that $\tilde{\sigma^2}$ is asymptotically unbiased.
On the other hand, we can prove that $\tilde{\sigma^2}$ is consistent by first calculating its second-order moment:
$$E\big[(\tilde{\sigma^2})^2\big] = \frac{1}{n^2} E\left(\sum_{i=0}^{n-1} \frac{\Delta_i^2}{(t_{i+1} - t_i)^{2H}}\right)^2. \qquad (30)$$
From (30), we obtain
$$n^2 E\big[(\tilde{\sigma^2})^2\big] = \sum_{i=0}^{n-1} E\,\frac{\Delta_i^4}{(t_{i+1} - t_i)^{4H}} + 2 \sum_{0 \le i < k \le n-1} E\left[\frac{\Delta_i^2}{(t_{i+1} - t_i)^{2H}} \times \frac{\Delta_k^2}{(t_{k+1} - t_k)^{2H}}\right].$$
Using Equality (27) leads to
$$\sum_{i=0}^{n-1} E\,\frac{\Delta_i^4}{(t_{i+1} - t_i)^{4H}} = \sum_{i=0}^{n-1} E\,\frac{\left[\sigma^2\big(B_{t_{i+1}}^H - B_{t_i}^H\big)^2 + o\big((t_{i+1} - t_i)^{2H}\big)\right]^2}{(t_{i+1} - t_i)^{4H}} = n\big(3\sigma^4 + o(1)\big). \qquad (31)$$
Moreover,
$$\sum_{0 \le i < k \le n-1} E\left[\frac{\Delta_i^2}{(t_{i+1} - t_i)^{2H}} \times \frac{\Delta_k^2}{(t_{k+1} - t_k)^{2H}}\right] = \frac{n(n-1)}{2}\big(\sigma^4 + o(1)\big). \qquad (32)$$
Thus, the results (29), (31) and (32) lead us to
$$\mathrm{Var}(\tilde{\sigma^2}) = \frac{2\sigma^4}{n} + o\!\left(\frac{1}{n}\right)$$
so that $\mathrm{Var}(\tilde{\sigma^2})$ tends to zero as $n$ tends to $\infty$.
Furthermore, from Chebyshev's inequality, we get for any $\epsilon > 0$ (and $n$ large enough that $|E(\tilde{\sigma^2}) - \sigma^2| < \epsilon$)
$$P\big(|\tilde{\sigma^2} - \sigma^2| \ge \epsilon\big) \le P\big(|\tilde{\sigma^2} - E(\tilde{\sigma^2})| + |E(\tilde{\sigma^2}) - \sigma^2| \ge \epsilon\big) \le \frac{\mathrm{Var}(\tilde{\sigma^2})}{\big(\epsilon - |E(\tilde{\sigma^2}) - \sigma^2|\big)^2},$$
which tends to zero as $n$ tends to $\infty$ and implies the consistency of $\tilde{\sigma^2}$ for $\sigma^2$.    □
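Equation (26) translates directly into R. The sketch below is our illustration, with Y, tt, D, A denoting the log-observations, observation dates, jump dates and jump amplitudes, and the convention $t_0 = 0$, $Y_{t_0} = \log(x_0)$; it removes the log-jump mass from each increment before rescaling:

# Quadratic variation estimator of sigma^2, Equation (26)
sigma2_qv = function(Y, tt, D, A, x0, H) {
  Y = c(log(x0), Y)           # prepend Y_{t_0} = log(x0) with t_0 = 0
  tt = c(0, tt)
  n = length(tt) - 1
  dY = diff(Y)
  # log-jump mass falling in each interval (t_i, t_{i+1}]:
  dJ = sapply(1:n, function(i) sum(log(1 + A[D > tt[i] & D <= tt[i + 1]])))
  mean((dY - dJ)^2 / diff(tt)^(2 * H))
}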

3.4. Asymptotic Properties of MLEs

In this subsection, we focus on some distributional properties of the estimators of $\mu$ and $\sigma^2$.
Theorem 5.
When $\sigma^2$ is known,
(i) The estimator $\hat{\mu}$ of $\mu$ is unbiased.
(ii) If $t_i = hi + \epsilon_i$ for $i = 1, \dots, n$ with $h > 0$ and $\epsilon_i = o\big(\frac{1}{n}\big)$, then $\hat{\mu}$ converges in mean square to $\mu$ as $n \to \infty$.
(iii) $\hat{\mu}$ follows a Gaussian distribution with expectation $\mu$ and variance $\frac{\sigma^2}{\mathbf{t}' \Gamma^{-1} \mathbf{t}}$.
Proof. 
(i) Let us write $B^H = (B_{t_1}^H, \dots, B_{t_n}^H)'$. From Equations (16) and (25), we have $K = \mu \mathbf{t} + \sigma B^H$, and Equation (23) leads to
$$\hat{\mu} = \frac{\mathbf{t}' \Gamma^{-1} (\mu \mathbf{t} + \sigma B^H)}{\mathbf{t}' \Gamma^{-1} \mathbf{t}} = \mu + \sigma\,\frac{\mathbf{t}' \Gamma^{-1} B^H}{\mathbf{t}' \Gamma^{-1} \mathbf{t}}. \qquad (35)$$
$B^H$ is centered, so that $E(\hat{\mu}) = \mu$.
(ii) From Equation (35), we obtain the variance of $\hat{\mu}$:
$$\mathrm{Var}(\hat{\mu}) = E(\hat{\mu} - \mu)^2 = \sigma^2\,\frac{E\big[\mathbf{t}' \Gamma^{-1} B^H (B^H)' \Gamma^{-1} \mathbf{t}\big]}{\big(\mathbf{t}' \Gamma^{-1} \mathbf{t}\big)^2}.$$
Since $\Gamma = E\big[B^H (B^H)'\big]$, we obtain
$$\mathrm{Var}(\hat{\mu}) = \frac{\sigma^2}{\mathbf{t}' \Gamma^{-1} \mathbf{t}}. \qquad (36)$$
$\Gamma^{-1}$ is symmetric positive definite, which implies that
$$\mathbf{t}' \Gamma^{-1} \mathbf{t} \ge \frac{\mathbf{t}' \mathbf{t}}{\gamma_{\max}} \qquad (37)$$
where $\gamma_{\max}$ denotes the largest eigenvalue of $\Gamma$.
Consequently,
$$\mathrm{Var}(\hat{\mu}) \le \frac{\sigma^2 \gamma_{\max}}{\mathbf{t}' \mathbf{t}}. \qquad (38)$$
On the other hand,
$$\mathbf{t}' \mathbf{t} = \sum_{i=1}^{n} (hi + \epsilon_i)^2 = \frac{n(n+1)(2n+1)}{6} h^2 + 2h \sum_{i=1}^{n} i \epsilon_i + \sum_{i=1}^{n} \epsilon_i^2.$$
Since $\epsilon_i = o\big(\frac{1}{n}\big)$, we have $\mathbf{t}' \mathbf{t} = \frac{h^2 n^3}{3} + O(n^2)$. Moreover, according to Gerschgorin's theorem (see [20], chp. 8), we have
$$\gamma_{\max} \le \max_{i=1,\dots,n} \sum_{j=1}^{n} |\Gamma_{ij}| \le \beta n \big(nh + o(\tfrac{1}{n})\big)^{2H} \qquad (39)$$
with $\beta > 0$. From Inequalities (38) and (39), we get
$$\mathrm{Var}(\hat{\mu}) \le 3\sigma^2 \beta h^{2H-2} n^{2H-2}\big(1 + o(1)\big),$$
so that $\mathrm{Var}(\hat{\mu})$ converges to zero when $n$ tends to infinity, since $H < 1$.
(iii) The result is straightforward from expressions (35) and (36).    □
Theorem 6.
When $\mu$ is known,
(i) $\big(\hat{\sigma^2}\big)^2$ follows a noncentral chi-square distribution.
Assume that the following conditions are verified:
(C4) $H > \frac{3}{4}$;
(C5) $\exists i_0 \in \mathbb{N}^*$, $\forall i \ge i_0$, $t_i > 1$;
(C6) $t_i = hi + \epsilon_i$ for $i = 1, \dots, n$ with $h > 0$ and $\epsilon_i = o\big(\frac{1}{n}\big)$.
Then,
(ii) The estimator $\hat{\sigma^2}$ of $\sigma^2$ is asymptotically unbiased.
(iii) $\hat{\sigma^2}$ converges in mean square to $\sigma^2$ as $n \to \infty$.
Proof. 
(i) From Equality (24), we get that $\big(\hat{\sigma^2}\big)^2$ is a quadratic form of the Gaussian vector $C$, so that it follows a noncentral chi-square distribution (see Theorem 5.5 in [21]).
(ii) From Equation (24), we have
$$E\big[\big(\hat{\sigma^2}\big)^2\big] = \frac{4\,E\big(C' \Gamma^{-1} C\big)}{\mathbf{t}^{2H}{}' \Gamma^{-1} \mathbf{t}^{2H}}.$$
Since
$$E\big(C' \Gamma^{-1} C\big) = E(C)' \Gamma^{-1} E(C) + \mathrm{tr}\big(\Gamma^{-1} \sigma^2 \Gamma\big) = \frac{1}{4}\sigma^4\, \mathbf{t}^{2H}{}' \Gamma^{-1} \mathbf{t}^{2H} + n\sigma^2,$$
then
$$E\big[\big(\hat{\sigma^2}\big)^2\big] = \sigma^4 + \frac{4n\sigma^2}{\mathbf{t}^{2H}{}' \Gamma^{-1} \mathbf{t}^{2H}}.$$
Similarly to Inequality (37), we get
$$\mathbf{t}^{2H}{}' \Gamma^{-1} \mathbf{t}^{2H} \ge \frac{\mathbf{t}^{2H}{}' \mathbf{t}^{2H}}{\gamma_{\max}}. \qquad (41)$$
From condition (C6) and Inequalities (39) and (41), it follows that
$$\frac{4n\sigma^2}{\mathbf{t}^{2H}{}' \Gamma^{-1} \mathbf{t}^{2H}} \le \frac{4n\sigma^2 \gamma_{\max}}{\mathbf{t}^{2H}{}' \mathbf{t}^{2H}} \le \frac{4\beta n^2 \sigma^2 \big(nh + o(\tfrac{1}{n})\big)^{2H}}{\sum_{i=1}^{n} \big(ih + o(\tfrac{1}{n})\big)^{4H}}. \qquad (42)$$
When conditions (C4), (C5) and (C6) are verified, we obtain
$$\sum_{i=1}^{n} \big(ih + o(\tfrac{1}{n})\big)^{4H} \ge \sum_{i=i_0}^{n} (ih)^3 + o(1) = \frac{n^2(n+1)^2 h^3}{4} + O(n^2),$$
which leads to
$$\frac{4n\sigma^2}{\mathbf{t}^{2H}{}' \Gamma^{-1} \mathbf{t}^{2H}} \le 16\sigma^2 \beta h^{2H-3} n^{2H-2}\big(1 + o(1)\big), \qquad (43)$$
so that the bias of $\big(\hat{\sigma^2}\big)^2$ for $\sigma^4$ converges to zero when $n$ tends to infinity, since $H < 1$.
Due to the preservation of convergence in probability and in distribution by continuous mappings (see Lemmas 3.3 and 3.7 in [22]), we obtain the final result.
(iii) From Equation (24), we have
$$\mathrm{Var}\big[\big(\hat{\sigma^2}\big)^2\big] = \frac{16\,\mathrm{Var}\big(C' \Gamma^{-1} C\big)}{\big(\mathbf{t}^{2H}{}' \Gamma^{-1} \mathbf{t}^{2H}\big)^2}$$
and since
$$\mathrm{Var}\big(C' \Gamma^{-1} C\big) = 2\,\mathrm{tr}\big(\big(\Gamma^{-1} \sigma^2 \Gamma\big)^2\big) + 4 E(C)' \Gamma^{-1} \big(\sigma^2 \Gamma\big) \Gamma^{-1} E(C) = 2n\sigma^4 + \sigma^6\, \mathbf{t}^{2H}{}' \Gamma^{-1} \mathbf{t}^{2H},$$
we get
$$\mathrm{Var}\big[\big(\hat{\sigma^2}\big)^2\big] = \frac{16\big(2n\sigma^4 + \sigma^6\, \mathbf{t}^{2H}{}' \Gamma^{-1} \mathbf{t}^{2H}\big)}{\big(\mathbf{t}^{2H}{}' \Gamma^{-1} \mathbf{t}^{2H}\big)^2} = \frac{32n\sigma^4}{\big(\mathbf{t}^{2H}{}' \Gamma^{-1} \mathbf{t}^{2H}\big)^2} + \frac{16\sigma^6}{\mathbf{t}^{2H}{}' \Gamma^{-1} \mathbf{t}^{2H}}. \qquad (44)$$
It results from (43) that the right member of Equality (44) tends to zero as $n$ tends to infinity.    □

4. Numerical Simulations

In order to simulate $(X_t)_{t \ge 0}$ given by Equation (6), it is first necessary to simulate the fBm. Different methods have been discussed in the literature for simulating fBm trajectories [23,24]. In particular, the packages somebm and longmemo are available for R, the software environment for statistical computing and graphics [25]. With the function fbm of somebm, we can simulate a standard fBm on $[0,1]$.
To get a simulation on any interval $[a,b]$ with $a < b$, we create a subdivision of length $n$ by writing $t_i = a + \frac{i}{n}(b-a)$ for $i = 0, \dots, n$ and use the following equalities in law:
$$B_{t_i}^H - B_a^H \overset{\mathcal{L}}{=} B_{t_i - a}^H \overset{\mathcal{L}}{=} B_{\frac{i}{n}(b-a)}^H \quad \text{(stationary increment property)},$$
$$B_{\frac{i}{n}(b-a)}^H \overset{\mathcal{L}}{=} (b-a)^H B_{\frac{i}{n}}^H \quad \text{(self-similarity property)},$$
so that
$$B_{t_i}^H \overset{\mathcal{L}}{=} B_a^H + (b-a)^H B_{\frac{i}{n}}^H. \qquad (45)$$
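Equation (45) allows us to simulate an fBm trajectory on $[a,b]$ from one on $[0,1]$. As a minimal R sketch of this transformation (our illustration, assuming the somebm package; its fbm function simulates a standard fBm on $[0,1]$ over a default grid):

library(somebm)
# fBm trajectory on [a, b] built from one on [0, 1], Equality (45)
fbm_on_interval = function(a, b, H, BaH = 0) {
  # BaH is the (previously simulated) value of the fBm at time a
  BaH + (b - a)^H * fbm(hurst = H)
}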
In Appendix A, an R script simulating the model $(\mu, \sigma^2, \lambda, \theta, x_0, H)$ is provided. In what follows, this script was applied to different parameter values chosen with respect to the convergence conditions given in Theorem 2. The jump amplitudes were distributed according to a log-Gaussian law with log-expectation $\theta_1$ and log-variance $\theta_2$, so that $E(A_{D_1}) = e^{\theta_1 + \frac{\theta_2}{2}} - 1$ and $2E(A_{D_1}) + E(A_{D_1}^2) = e^{2(\theta_1 + \theta_2)} - 1$. Then, four cases were distinguished:
Case 1: $\mu < \min\left(-\lambda\big(e^{\theta_1 + \frac{\theta_2}{2}} - 1\big);\ -\frac{\lambda}{2}\big(e^{2(\theta_1 + \theta_2)} - 1\big)\right)$ and $H < \frac{1}{2}$;
Case 2: $\mu < \min\left(-\lambda\big(e^{\theta_1 + \frac{\theta_2}{2}} - 1\big);\ -\frac{\lambda}{2}\big(e^{2(\theta_1 + \theta_2)} - 1\big) - \frac{\sigma^2}{2}\right)$ and $H = \frac{1}{2}$;
Case 3: $\mu < -\lambda\big(e^{\theta_1 + \frac{\theta_2}{2}} - 1\big)$ and $H > \frac{1}{2}$;
Case 4: $\mu > -\lambda\big(e^{\theta_1 + \frac{\theta_2}{2}} - 1\big)$.
The latter case is the situation for which the expected process $(E(X_t))_{t \ge 0}$ tends to infinity.
Figure 1 and Figure 2 provide graphical representations of the simulated trajectories for each case. The expectation and variance of the jump amplitudes were 0.111 and 0.012, respectively. The probability of a negative amplitude was 0.159, in accordance with Equation (5). These figures illustrate the convergence results given by Theorem 2: in cases 1 to 3, the expected process converges, while it diverges in case 4.
Figure 3 illustrates the following theoretical result: between two consecutive jumps, the trajectories are smoother as H approaches unity and more irregular as H approaches zero.

5. Conclusions

Since the work of [14] on fractional noise, the fields of application of processes with long-term dependence have increased quite considerably. In this paper, we developed statistical inference for fractional Black–Scholes processes with jumps for which $H$ is not necessarily greater than $\frac{1}{2}$. First, the distributional properties of such processes were presented. Closed-form MLEs for the drift and diffusion coefficients were proposed and their asymptotic properties were studied, in particular consistency. An asymptotically unbiased and consistent estimator of $\sigma^2$ was also proposed by means of quadratic variations. These results were obtained with respect to the Hurst index value. Contrary to most methods proposed in the literature, we did not require an equal interval length between two consecutive observation dates. A procedure for simulating trajectories was developed within the R programming environment.
We assumed independence between the Poisson process of jump dates and the series of jump amplitudes. It would be interesting in the future to study how the correlation between jump dates and amplitudes may impact the behavior of the SDE solution and the statistical inference on this process.

Author Contributions

Conceptualization, J.-F.T. and J.V.; Methodology, J.-F.T. and J.V. All authors have read and agreed to the published version of the manuscript.

Funding

This research was partly funded by the French Embassy in Haïti (PhD grant 107991X).

Acknowledgments

We would like to thank the Ecole Normale Supérieure of the Université d’Etat d’Haïti which supported this project.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Codes with R Programming Language

The model $(\mu, \sigma^2, \lambda, \theta, x_0, H)$, whose solution is that of the SDE (1), can be simulated by means of the R script detailed below. It is required to install the R package somebm. Jump amplitudes are assumed to be log-Gaussian.
Listing A1. Script with R language for simulating the solution of the SDE (1).   
SimBSfsauts2 = function(mu, sigma2, lambda, theta, X0, H, t, Nsim, MAX) {
  # mu is the drift
  # sigma2 is the diffusion coefficient
  # lambda is the Poisson intensity
  # theta is the vector parameter of the jump amplitude law:
  # a log-Gaussian distribution with log-expectation theta[1]
  # and log-variance theta[2].
  # X0 is the initial value of the process at t = 0
  # H is the Hurst parameter
  # t is the simulation window length
  # Nsim is the number of simulated trajectories
  # MAX is the maximum value on the y-axis
  # Install the package "somebm" for simulations of fBm
  n = rpois(1, lambda * t)  # drawing of one realization of the Poisson law
  dates = sort(runif(n, max = t))
  # Drawing of n jump amplitudes
  sauts = rlnorm(n, meanlog = theta[1], sdlog = sqrt(theta[2])) - 1
  # Simulation between dates 0 and dates[1]
  temps = seq(0, dates[1], length = 102)
  # Self-similarity property
  Bf = fbm(hurst = H) * dates[1]^H
  Xt = X0 * exp(mu * temps - 0.5 * sigma2 * (temps^(2 * H)) + sqrt(sigma2) * c(0, Bf))
  Xt[102] = Xt[102] * (1 + sauts[1])
  X = Xt
  temps2 = temps
  # For i from 2 to n, simulation between dates[i-1] and dates[i]
  for (i in 2:n) {
    temps = seq(dates[i - 1], dates[i], length = 103)[-1]
    # Unique Gaussian process which is self-similar with stationary increments
    Bf = Bf[101] + fbm(hurst = H) * (dates[i] - dates[i - 1])^H
    Xt = Xt[102] * exp(mu * temps - 0.5 * sigma2 * temps^(2 * H) + sqrt(sigma2) * c(0, Bf))
    Xt[102] = Xt[102] * (1 + sauts[i])
    X = c(X, Xt)
    temps2 = c(temps2, temps)
  }
  # Simulation between dates[n] and t
  temps = seq(dates[n], t, length = 103)[-1]
  Bf = Bf[101] + fbm(hurst = H) * (t - dates[n])^H
  Xt = Xt[102] * exp(mu * temps - 0.5 * sigma2 * temps^(2 * H) + sqrt(sigma2) * c(0, Bf))
  X = c(X, Xt)
  temps2 = c(temps2, temps)
  miny = min(X)
  m = theta[1]
  delta = sqrt(theta[2])
  # MAX = exp(log(X0) + mu*t - 0.5*sigma2*(t^(2*H)) + lambda*t*m
  #           + 1.96*sqrt(sigma2*(t^(2*H)) + lambda*t*(m^2 + delta^2)))
  plot(temps2, X, type = "l", xlab = "time t", ylab = "Process value at time t",
       ylim = c(0, MAX), col = 1)
  # Simulations of the other trajectories
  for (j in 2:Nsim) {
    n = rpois(1, lambda * t)  # drawing of one realization of the Poisson law
    dates = sort(runif(n, max = t))
    # Drawing of n jump amplitudes
    sauts = rlnorm(n, meanlog = theta[1], sdlog = sqrt(theta[2])) - 1
    # Simulation between dates 0 and dates[1]
    temps = seq(0, dates[1], length = 102)
    Bf = fbm(hurst = H) * dates[1]^H  # self-similarity property
    Xt = X0 * exp(mu * temps - 0.5 * sigma2 * (temps^(2 * H)) + sqrt(sigma2) * c(0, Bf))
    Xt[102] = Xt[102] * (1 + sauts[1])
    X = Xt
    temps2 = temps
    # For i from 2 to n, simulation between dates[i-1] and dates[i]
    for (i in 2:n) {
      temps = seq(dates[i - 1], dates[i], length = 103)[-1]
      # Unique Gaussian process which is self-similar with stationary increments
      Bf = Bf[101] + fbm(hurst = H) * (dates[i] - dates[i - 1])^H
      Xt = Xt[102] * exp(mu * temps - 0.5 * sigma2 * temps^(2 * H) + sqrt(sigma2) * c(0, Bf))
      Xt[102] = Xt[102] * (1 + sauts[i])
      X = c(X, Xt)
      temps2 = c(temps2, temps)
    }
    # Simulation between dates[n] and t
    temps = seq(dates[n], t, length = 103)[-1]
    Bf = Bf[101] + fbm(hurst = H) * (t - dates[n])^H
    Xt = Xt[102] * exp(mu * temps - 0.5 * sigma2 * temps^(2 * H) + sqrt(sigma2) * c(0, Bf))
    X = c(X, Xt)
    temps2 = c(temps2, temps)
    miny = min(X)
    lines(temps2, X, type = "l", xlab = "t", ylab = "Xt", col = j)
  }
}

References

  1. Black, F.; Scholes, M.S. The Pricing of Options and Corporate Liabilities. J. Political Econ. 1973, 81, 637–654.
  2. Biagini, F.; Hu, Y.; Øksendal, B.; Zhang, T. Stochastic Calculus for Fractional Brownian Motion and Applications; Springer Science & Business Media: London, UK, 2008.
  3. Xiao, W.; Zhang, W.; Zhang, X. Parameter identification for the discretely observed geometric fractional Brownian motion. J. Stat. Comput. Simul. 2015, 85, 269–283.
  4. Shevchenko, G. Mixed fractional stochastic differential equations with jumps. Stochastics Int. J. Probab. Stoch. Process. 2014, 86, 203–217.
  5. Bertin, K.; Torres, S.; Tudor, C.A. Maximum-likelihood estimators and random walks in long memory models. Statistics 2011, 45, 361–374.
  6. Kubilius, K.; Melichov, D. Quadratic variations and estimation of the Hurst index of the solution of SDE driven by a fractional Brownian motion. Lith. Math. J. 2010, 50, 401–417.
  7. Berzin, C.; León, J.R. Estimation in models driven by fractional Brownian motion. Ann. Inst. H. Poincaré Probab. Statist. 2008, 44, 191–213.
  8. Hu, Y.; Nualart, D. Parameter estimation for fractional Ornstein–Uhlenbeck processes. Stat. Probab. Lett. 2010, 80, 1030–1038.
  9. Mishura, Y.; Ralchenko, K.; Shklyar, S. Maximum likelihood estimation for Gaussian process with nonlinear drift. Nonlinear Anal. Model. Control 2018, 23, 120–140.
  10. Haress, E.M.; Hu, Y. Estimation of all parameters in the fractional Ornstein–Uhlenbeck model under discrete observations. Stat. Inference Stoch. Process. 2021, 24, 327–351.
  11. Cesars, J.; Nuiro, S.; Vaillant, J. Statistical Inference on a Black–Scholes Model with Jumps. Application in Hydrology. J. Math. Stat. 2019, 15, 196–200.
  12. Mishura, Y. Stochastic Calculus for Fractional Brownian Motion and Related Processes; Springer: Berlin/Heidelberg, Germany, 2008.
  13. Nualart, D. The Malliavin Calculus and Related Topics, 2nd ed.; Springer: Berlin/Heidelberg, Germany, 2006.
  14. Mandelbrot, B.B.; Van Ness, J.W. Fractional Brownian motions, fractional noises and applications. SIAM Rev. 1968, 10, 422–437.
  15. Unal, G.; Dinler, A. Exact linearization of one dimensional Itô equations driven by fBm: Analytical and numerical solutions. Nonlinear Dyn. 2008, 53, 251–259.
  16. Mishura, Y.; Ralchenko, K.; Shklyar, S. Maximum likelihood drift estimation for Gaussian process with stationary increments. arXiv 2016, arXiv:1612.00160.
  17. Hu, Y.; Nualart, D.; Xiao, W.; Zhang, W. Exact maximum likelihood estimator for drift fractional Brownian motion at discrete observation. Acta Math. Sci. 2011, 31, 1851–1859.
  18. Casella, G.; Berger, R.L. Statistical Inference; Cengage Learning: Belmont, CA, USA, 2021.
  19. Henningsen, A.; Toomet, O. maxLik: A package for maximum likelihood estimation in R. Comput. Stat. 2011, 26, 443–458.
  20. Golub, G.H.; Van Loan, C.F. Matrix Computations; JHU Press: Baltimore, MD, USA, 2013.
  21. Rencher, A.C.; Schaalje, G.B. Linear Models in Statistics; John Wiley & Sons: Hoboken, NJ, USA, 2008.
  22. Kallenberg, O. Foundations of Modern Probability; Springer: Berlin/Heidelberg, Germany, 1997.
  23. Coeurjolly, J.F. Simulation and identification of the fractional Brownian motion: A bibliographical and comparative study. J. Stat. Softw. 2000, 5, 1–53.
  24. Beran, J.; Whitcher, B.; Maechler, M. longmemo: Statistics for Long-Memory Processes. 2009. Available online: http://CRAN.R-project.org/package=longmemo (accessed on 2 June 2020).
  25. R Core Team. R: A Language and Environment for Statistical Computing; R Foundation for Statistical Computing: Vienna, Austria, 2017.
Figure 1. Four simulated trajectories of model $(\mu, \sigma^2, \lambda, \theta, x_0, H)$ over $[0,2]$ for jumps distributed according to $\mathcal{LN}(0.1, 0.01)$. On the left, an example of case 1 with $(\mu, \sigma^2, \lambda, \theta, x_0, H) = (-1, 0.2, 5, (0.1, 0.01), 6, 0.4)$; on the right, an example of case 2 with $(\mu, \sigma^2, \lambda, \theta, x_0, H) = (-1, 0.2, 5, (0.1, 0.01), 6, 0.5)$.
Figure 2. Four simulated trajectories of model $(\mu, \sigma^2, \lambda, \theta, x_0, H)$ over $[0,2]$ for jumps distributed according to $\mathcal{LN}(0.1, 0.01)$. On the left, an example of case 3 with $(\mu, \sigma^2, \lambda, \theta, x_0, H) = (-1, 0.2, 5, (0.1, 0.01), 6, 0.6)$; on the right, an example of case 4 with $(\mu, \sigma^2, \lambda, \theta, x_0, H) = (0.1, 0.2, 5, (0.1, 0.01), 6, 0.4)$.
Figure 3. Four simulated trajectories of model $(-1, 0.1, 10, (0.1, 0.01), 6, H)$ over $[0,1]$ for $H$ in $\{0.1, \dots, 0.9\}$. The trajectory smoothness between two consecutive jumps increases as $H$ goes from 0 to 1.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
