Article

Analytical and Computational Problems Related to Fractional Gaussian Noise

by Yuliya Mishura 1, Kostiantyn Ralchenko 1,* and René L. Schilling 2
1 Department of Probability Theory, Statistics and Actuarial Mathematics, Taras Shevchenko National University of Kyiv, 64/13, Volodymyrska, 01601 Kyiv, Ukraine
2 TU Dresden, Fakultät Mathematik, Institut für Mathematische Stochastik, 01062 Dresden, Germany
* Author to whom correspondence should be addressed.
Fractal Fract. 2022, 6(11), 620; https://doi.org/10.3390/fractalfract6110620
Submission received: 16 September 2022 / Revised: 16 October 2022 / Accepted: 19 October 2022 / Published: 22 October 2022
(This article belongs to the Special Issue Feature Papers for Numerical and Computational Methods Section)

Abstract:
We study the projection of an element of fractional Gaussian noise onto its neighbouring elements. We prove some analytic results for the coefficients of this projection. In particular, we obtain recurrence relations for them. We also make several conjectures concerning the behaviour of these coefficients, provide numerical evidence supporting these conjectures, and study them theoretically in particular cases. As an auxiliary result of independent interest, we investigate the covariance function of fractional Gaussian noise, prove that it is completely monotone for $H > 1/2$ (in particular, monotone, convex, and log-convex), and establish further useful properties.

1. Introduction

This paper is about some (conjectured) properties of the projection of an element of fractional Gaussian noise onto its neighbouring elements. Unfortunately, not all of our conjectures are amenable to analytical proof, although numerical experiments confirm their validity. This is rather strange, as the properties of fractional Brownian motion and its increments have been studied thoroughly, attracting a lot of research effort and resulting in countless papers and several books, e.g., [1,2,3,4]. These books are mostly devoted to the stochastic analysis of fractional processes, the properties of their trajectories, distributional properties of certain functionals of the paths, and related issues. Note that this interest and the large number of theoretical studies of fractional Gaussian noise are due to the wide range of applications of such processes and to their properties: the existence of memory combined with self-similarity and stationarity. In particular, fractional Gaussian noises appear in the investigation of the behaviour of anomalous diffusion and of solutions of fractional diffusion equations, including numerical schemes [5,6,7], the information capacity of a non-linear neuron model [8], statistical inference [9,10], entropy calculation [11,12], the extraction of quantitative information from recurrence plots [13], and many others. There is, however, an area where much less is known: problems relating to the covariance matrix of fractional Brownian motion and fractional Gaussian noise in high dimensions, and to its determinant. Computational features of the covariance matrices are widely used for simulations and in various applications; see, for example, [14,15,16,17]. The problem considered in the present paper arose in the following way: In [18], the authors construct a discrete process that converges weakly to a fractional Brownian motion (fBm) $B^H = \{B_t^H,\ t \ge 0\}$ with Hurst parameter $H \in (\frac12, 1)$.
The construction of this process is based on the Cholesky decomposition of the covariance matrix of the fractional Gaussian noise (fGn). Several interesting properties of this decomposition are proved in [18], such as the positivity of all elements of the corresponding triangular matrix and the monotonicity along its main diagonal. Numerical examples also suggest the conjecture that monotonicity holds along all diagonals of this matrix. However, an analytic proof of this fact remains an open problem. Studying this problem, the authors of [18] establish a connection between the predictor's coefficients, that is, the coefficients of the projection of any value of a stationary Gaussian process onto finitely many subsequent elements, and the Cholesky decomposition of the covariance matrix of the process. It turns out that the positivity of the coefficients of the predictor implies the monotonicity along the diagonals of the triangular matrix of the Cholesky decomposition of fGn, which is sufficient for the monotonicity along the columns of the triangular matrix in the Cholesky decomposition of fBm itself; this property, in turn, ensures the convergence of a wide class of discrete-time schemes to a fractional Brownian motion. We will see in Section 2.1 below that the coefficients of the predictor can be found as the solution to a system of linear equations whose coefficient matrix coincides with the covariance matrix of fGn. This enables us to reduce the monotonicity problem for the Cholesky decomposition to proving the positivity of the solution to a linear system of equations. However, as we shall see in Section 2, even in the particular case of a $3\times 3$ matrix, an analytic proof of the positivity of all coefficients is a non-trivial problem. For the moment, we have only a partial solution. Therefore, we formulate the following conjecture:
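The Cholesky properties proved in [18] are easy to probe numerically. The following Python sketch is our illustration only (it is not the construction from [18], and the paper's own computations use R): it builds the fGn covariance matrix, computes its lower-triangular Cholesky factor, and checks the positivity of all entries and the monotonicity of the main diagonal for one choice of $n$ and $H$.

```python
import numpy as np

def fgn_cov(n: int, H: float) -> np.ndarray:
    """Covariance matrix of fractional Gaussian noise: entries rho_{|i-j|}."""
    rho = lambda k: 0.5 * (abs(k + 1) ** (2 * H) - 2 * abs(k) ** (2 * H) + abs(k - 1) ** (2 * H))
    return np.array([[rho(i - j) for j in range(n)] for i in range(n)])

n, H = 10, 0.7
L = np.linalg.cholesky(fgn_cov(n, H))   # lower-triangular factor

# Properties proved in [18] for H > 1/2: all entries of the triangular factor
# are positive, and its main diagonal is monotone (decreasing).
assert (L[np.tril_indices(n)] > 0).all()
assert (np.diff(np.diag(L)) < 0).all()
```

The conjectured monotonicity along *all* diagonals can be inspected the same way, but, as stated above, a proof is not known.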
Conjecture 1.
If $H > 1/2$, then the coefficients of the projection of any element of fractional Gaussian noise onto any finite number of its subsequent elements are strictly positive.
We shall discuss this conjecture in Section 2 in more detail. Due to stationarity, it is sufficient to establish Conjecture 1 for the projection of $\Delta_1$ onto $\Delta_2, \dots, \Delta_n$, i.e., for the conditional expectation
$$\mathbb{E}\left[\Delta_1 \mid \Delta_2, \dots, \Delta_n\right], \quad n \ge 3,$$
where $B^H$ denotes fBm and $\Delta_k = B_k^H - B_{k-1}^H$. Having computational evidence but lacking an analytical proof for Conjecture 1, we provide in this paper a wide range of associated properties of the coefficients, some with an analytic proof, and some obtained using various computational tools. It is, in particular, interesting to study the asymptotic behaviour of the coefficients as $H \to 1$. This is particularly interesting since for $H = 1$ the fractional Brownian motion $B^1$ is degenerate, i.e., $B_t^1 = t\xi$, where $\xi \sim N(0,1)$, and $N(0,1)$ denotes the standard normal distribution. Consequently, $\Delta_k \sim N(0,1)$ for all $k \ge 1$, and
$$\mathbb{E}\left[\Delta_1 \mid \Delta_2, \dots, \Delta_n\right] = \sum_{k=2}^n \alpha_k \Delta_k \sim N(0,1)$$
for any convex combination, i.e., $\alpha_k \ge 0$, $\sum_{k=2}^n \alpha_k = 1$. This shows that in the case $H = 1$ the values of the coefficients are indeterminate, and therefore they cannot determine the asymptotic behaviour of the prelimit coefficients as $H \to 1$. It would be very "elegant" if all coefficients tended to $(n-1)^{-1}$; however, in reality their asymptotic behaviour is different, see Section 2.3. Another interesting question is the relation between the coefficients. It is natural to assume that they decrease as $k$ increases, but the situation here is also more involved, essentially depending on the value of $H$. In Section 2.4, we prove some recurrence relations between the coefficients. These relations lead to a computational algorithm which is more efficient than solving the system of equations described in Section 2.1. Finally, it turns out that the positivity of the first coefficient can be proven analytically for all values of $n$; this result is established in Section 2.5.
We close the paper with a few numerical examples supporting our theoretical results and conjectures. In particular, we compute the coefficients for all $n \le 10$ and for various values of $H$, and discuss their behaviour. Additionally, we compare different calculation methods for the coefficients in terms of computing time, and we demonstrate the advantage of the approach via the recurrence formulae in most cases.
The paper is organized as follows: Section 2 contains almost all properties of the predictor’s coefficients that can be established analytically, and it introduces the system of linear equations for these coefficients and some properties of the coefficients of this system. We consider in detail two particular cases: n = 3 and n = 4 . In these cases, we prove the positivity of all coefficients, establish some relations between them, and study the asymptotic behaviour as H 1 . We also obtain recurrence relations for the coefficients, and prove that for all values of n, the first coefficient is positive. Section 3 contains some numerical illustrations of the properties and conjectures from Section 1 and Section 2. In Section 3.3, we briefly discuss some observations concerning the case H < 1 / 2 .

2. Analytical Properties of the Coefficients

Let $B^H = \{B_t^H,\ t \ge 0\}$ be a fractional Brownian motion (fBm) with Hurst index $H \in (\frac12, 1)$, that is, a centered Gaussian process with covariance function of the form
$$\mathbb{E}\,B_t^H B_s^H = \tfrac12\left(t^{2H} + s^{2H} - |t-s|^{2H}\right). \tag{1}$$
We use
$$\Delta_n = B_n^H - B_{n-1}^H, \quad n = 1, 2, 3, \dots,$$
for the $n$th increment of fBm. It is well known that the process $B^H$ has stationary increments, which implies that $\{\Delta_n,\ n \ge 1\}$ is a stationary Gaussian sequence (known as fractional Gaussian noise, fGn for short). It follows from (1) that its autocovariance function is given by
$$\rho_k = \mathbb{E}\,\Delta_1\Delta_{k+1} = \tfrac12\left(|k+1|^{2H} - 2|k|^{2H} + |k-1|^{2H}\right), \quad k \ge 1. \tag{2}$$
Obviously, $\rho_0 = 1$.
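The autocovariance (2) is straightforward to evaluate numerically. The following short Python function is our illustration (not part of the paper's own code); it implements $\rho_k$ and checks the identity $\rho_1 = 2^{2H-1}-1$ used repeatedly below, as well as positivity and monotone decrease for a sample $H > 1/2$ (cf. Corollary 1 below).

```python
def rho(k: int, H: float) -> float:
    """Autocovariance of fGn with Hurst index H, Eq. (2); rho(0, H) = 1."""
    return 0.5 * (abs(k + 1) ** (2 * H) - 2 * abs(k) ** (2 * H) + abs(k - 1) ** (2 * H))

H = 0.7
assert abs(rho(0, H) - 1.0) < 1e-12
assert abs(rho(1, H) - (2 ** (2 * H - 1) - 1)) < 1e-12
# For H > 1/2 the sequence is positive and strictly decreasing:
values = [rho(k, H) for k in range(20)]
assert all(a > b > 0 for a, b in zip(values, values[1:]))
```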
Now, let us consider the projection of $\Delta_1$ onto $\Delta_2, \dots, \Delta_n$, i.e., the conditional expectation $\mathbb{E}[\Delta_1 \mid \Delta_2, \dots, \Delta_n]$. Since the joint distribution of $(\Delta_1, \dots, \Delta_n)$ is centered and Gaussian, we obtain the following relation from the theorem on normal correlation (see, for example, Theorem 3.1 in [19]):
$$\mathbb{E}\left[\Delta_1 \mid \Delta_2, \dots, \Delta_n\right] = \sum_{k=2}^n \Gamma_{nk}\,\Delta_k, \quad n \ge 2, \tag{3}$$
where $\Gamma_{nk} \in \mathbb{R}$. Our Conjecture 1 means that all the coefficients $\Gamma_{nk}$, $n = 2, 3, \dots$, $k = 2, 3, \dots, n$, are strictly positive. (We have formulated it in a more general form, i.e., for any element $\Delta_j$, because, by stationarity, the projection $\mathbb{E}[\Delta_j \mid \Delta_{j+1}, \dots, \Delta_{j+n-1}]$ for any $j$ has the same distribution as $\mathbb{E}[\Delta_1 \mid \Delta_2, \dots, \Delta_n]$.)
Let us consider two approaches to the calculation of the coefficients $\Gamma_{nk}$. The first method is straightforward; it involves solving a system of linear equations. The second one is based on recurrence relations for the $\Gamma_{nk}$.

2.1. System of Linear Equations for Coefficients

Multiplying both sides of (3) by $\Delta_l$, $2 \le l \le n$, and taking expectations yields
$$\mathbb{E}\,\Delta_1\Delta_l = \sum_{k=2}^n \Gamma_{nk}\,\mathbb{E}\,\Delta_k\Delta_l, \quad 2 \le l \le n.$$
Due to stationarity,
$$\mathbb{E}\,\Delta_k\Delta_l = \rho_{|l-k|}.$$
This leads to the following system of linear equations for the coefficients $\Gamma_{nk}$, $k = 2, \dots, n$:
$$\rho_{l-1} = \sum_{k=2}^n \Gamma_{nk}\,\rho_{|l-k|}, \quad 2 \le l \le n. \tag{4}$$
We can solve this using Cramer's rule,
$$\Gamma_{nk} = \frac{\det A_k}{\det A},$$
where
$$A = \bigl(\rho_{|l-k|}\bigr)_{l,k=2}^{n} = \begin{pmatrix} 1 & \rho_1 & \rho_2 & \cdots & \rho_{n-2} \\ \rho_1 & 1 & \rho_1 & \cdots & \rho_{n-3} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ \rho_{n-2} & \rho_{n-3} & \rho_{n-4} & \cdots & 1 \end{pmatrix} \tag{5}$$
and $A_k$ is the matrix $A$ with its $k$th column replaced by the vector $(\rho_1, \dots, \rho_{n-1})^\top$:
$$A_k = \begin{pmatrix} 1 & \rho_1 & \cdots & \rho_1 & \cdots & \rho_{n-2} \\ \rho_1 & 1 & \cdots & \rho_2 & \cdots & \rho_{n-3} \\ \vdots & \vdots & & \vdots & & \vdots \\ \rho_{n-2} & \rho_{n-3} & \cdots & \rho_{n-1} & \cdots & 1 \end{pmatrix}, \tag{6}$$
where the replaced column stands in the $k$th position.
Remark 1.
It is known that the finite-dimensional distributions of $B^H$ have a nonsingular covariance matrix; in particular, for any $0 < t_1 < \dots < t_n$, the values $B_{t_1}^H, \dots, B_{t_n}^H$ are linearly independent; see Theorem 1.1 in [20] and its proof. Obviously, a similar statement holds for fractional Gaussian noise, since the vector $(\Delta_1, \dots, \Delta_n)$ is a nonsingular linear transform of $(B_1^H, \dots, B_n^H)$. In other words, $\det A \ne 0$; moreover, if $\sum_k \alpha_k \Delta_k = 0$ a.s., then $\alpha_k = 0$ for all $k$.
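Since $A$ is nonsingular, system (4) can in practice be solved directly, bypassing Cramer's rule. The sketch below is our Python/NumPy illustration (the paper itself uses R's solve()); it assembles $A$ and the right-hand side and returns the row of predictor coefficients.

```python
import numpy as np

def rho(k, H):
    """Autocovariance of fGn, Eq. (2)."""
    return 0.5 * (abs(k + 1) ** (2 * H) - 2 * abs(k) ** (2 * H) + abs(k - 1) ** (2 * H))

def gamma_row(n: int, H: float) -> np.ndarray:
    """Solve system (4); returns the vector [Gamma_{n2}, ..., Gamma_{nn}]."""
    A = np.array([[rho(abs(l - k), H) for k in range(2, n + 1)] for l in range(2, n + 1)])
    b = np.array([rho(l - 1, H) for l in range(2, n + 1)])
    return np.linalg.solve(A, b)

# Conjecture 1, probed numerically for one pair (n, H):
assert (gamma_row(10, 0.7) > 0).all()
```

Of course, such checks only probe finitely many pairs $(n, H)$ and do not replace a proof.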

2.2. Relations between the Values $\rho_k$

In order to establish analytic properties of the coefficients $\Gamma_{nk}$, we need several auxiliary results on the properties of the sequence $\rho_k$, $k \in \mathbb{Z}_+$. We start with a useful relation between $\rho_1$, $\rho_2$ and $\rho_3$.
Lemma 1.
The following equality holds:
$$\rho_2 - \rho_1^2 = \tfrac12\left(\rho_1 - \rho_3\right). \tag{7}$$
Proof. 
Using the self-similarity of fBm and the stationarity of its increments, we obtain
$$2^{2H}\rho_1 = 2^{2H}\,\mathbb{E}\,B_1^H\bigl(B_2^H - B_1^H\bigr) = \mathbb{E}\,B_2^H\bigl(B_4^H - B_2^H\bigr) = \mathbb{E}\bigl(B_2^H - B_1^H\bigr)\bigl(B_4^H - B_2^H\bigr) + \mathbb{E}\,B_1^H\bigl(B_4^H - B_2^H\bigr) = \mathbb{E}\,\Delta_2(\Delta_3 + \Delta_4) + \mathbb{E}\,\Delta_1(\Delta_3 + \Delta_4) = \rho_1 + 2\rho_2 + \rho_3.$$
Note that by (2), $\rho_1 = 2^{2H-1} - 1$, whence $2^{2H} = 2\rho_1 + 2$. Thus, we arrive at
$$(2\rho_1 + 2)\rho_1 = \rho_1 + 2\rho_2 + \rho_3,$$
which is equivalent to (7).    □
Remark 2.
The inequality $\rho_1^2 < \rho_2$ was proved in [18] (p. 28) by analytic methods. In this paper, we improve this result in two directions: we obtain an explicit expression for $\rho_2 - \rho_1^2$, and we prove the sharper bound $\rho_1^2 < \rho_3$; see Lemma 3 below.
Many important properties of the covariance function of a fractional Gaussian noise (such as monotonicity, convexity and log-convexity) follow from the more general property of complete monotonicity, which is stated in the next lemma. To formulate it, let us introduce the function
$$\rho(x) = \rho(H,x) = \tfrac12\left(|x+1|^{2H} - 2|x|^{2H} + |x-1|^{2H}\right), \quad x \ge 0. \tag{8}$$
Lemma 2.
1.
The function $\rho : (0,\infty) \to \mathbb{R}$ is convex if $H > 1/2$ and concave if $H < 1/2$.
2.
If $H > 1/2$, then the function $\rho$ is completely monotone (CM) on $(1,\infty)$, that is, $\rho \in C^\infty(1,\infty)$ and
$$(-1)^n \rho^{(n)}(x) \ge 0 \quad\text{for all } n \in \mathbb{N}\cup\{0\} \text{ and } x > 1.$$
3.
If $H < 1/2$, then the function $-\rho$ is completely monotone on $(1,\infty)$.
Proof. 
1. Using the elementary relation $\frac{d}{dx}|x|^{2H} = 2H\,\mathrm{sgn}(x)\,|x|^{2H-1}$, it is not hard to see that
$$\rho(x) = \tfrac12\left(|x+1|^{2H} - |x|^{2H}\right) - \tfrac12\left(|x|^{2H} - |x-1|^{2H}\right) = H(2H-1)\int_0^1\!\!\int_{-t}^{t} |x+s|^{2H-2}\,ds\,dt. \tag{9}$$
Since $x \mapsto |x+s|^{2H-2}$ is convex, and since the convex functions form a convex cone which is closed under pointwise convergence, the double integral appearing in the representation of $\rho(x)$ is again convex. Thus, $\rho(x)$ is convex or concave according to $2H-1 > 0$ or $2H-1 < 0$, respectively.
2. Let $H > \frac12$ and $x \ge 1$. Then Formula (9) remains valid if we replace $|x+s|$ with $(x+s)$. But $x \mapsto (x+s)^{2H-2}$ is CM, and so $\rho(x)$ is an integral mixture of CM functions. Since the CM functions form a convex cone which is closed under pointwise convergence, cf. Corollary 1.6 in [21], we see that $\rho$ is CM on $(1,\infty)$.
3. The above argument holds true in the case $H < \frac12$; the only difference is that in this case the factor $(2H-1)$ is negative, so that $-\rho$ is CM.    □
Remark 3.
1. Since $x \mapsto \rho(x+1)$ is a CM function on $(0,\infty)$, it admits the representation $\rho(x+1) = a + \int_0^\infty e^{-xt}\,\mu(dt)$ for some positive measure $\mu$ on $[0,\infty)$ and some $a \ge 0$; see, for example, Theorem 1.4 in [21]. Taking into account that $\rho(+\infty) = 0$, it is not hard to see that $a = 0$, i.e.,
$$\rho(x+1) = \int_0^\infty e^{-xt}\,\mu(dt). \tag{10}$$
2. The function $\rho$ can be represented in the form $\rho(x+1) = \Delta_1^2 f_H(x)$, where we write $\Delta_1 f(x) := f(x+1) - f(x)$ for the step-1 difference operator, and $f_H(x) := \tfrac12 x^{2H}$. Then the second statement of Lemma 2 follows from the more general result: if $f$ is CM on $(0,\infty)$, then $\Delta_1^2 f$ is CM. Indeed, since CM is a closed convex cone, it is enough to verify the claim for the "basic" CM function $f(x) = e^{-tx}$, where $t \ge 0$ is a parameter. Now we have
$$\Delta_1^2 f(x) = e^{-(x+2)t} - 2e^{-(x+1)t} + e^{-xt} = e^{-xt}\left(e^{-t} - 1\right)^2,$$
and this is clearly a completely monotone function.
3. The argument which we used in the proof of Lemma 2 proves a bit more: for $x \ge 1$ and $H > 1/2$, the function $x \mapsto \rho(H,x)$ is even a Stieltjes function, i.e., a double Laplace transform. To see this, we note that the kernel $x \mapsto (x+s)^{2H-2}$ is a Stieltjes function. Further details on Stieltjes functions can be found in [21].
As for the following properties, fractional Brownian motion with Hurst index $H = 1$ is degenerate, i.e., $B_t^1 = t\xi$, where $\xi \sim N(0,1)$; consequently, all $\rho_k = 1$ and the next set of inequalities become equalities. Therefore, we consider only $1/2 < H < 1$.
Corollary 1.
Let $H \in (\frac12, 1)$. The sequence $\{\rho_k,\ k \ge 0\}$ has the following properties:
1.
Monotonicity and positivity: for any $k \in \mathbb{N}$,
$$\rho_{k-1} > \rho_k > 0. \tag{11}$$
2.
Convexity: for any $k \in \mathbb{N}$,
$$\rho_{k-1} - \rho_k > \rho_k - \rho_{k+1}. \tag{12}$$
3.
Log-convexity: for any $k \in \mathbb{N}$,
$$\rho_{k-1}\,\rho_{k+1} > \rho_k^2. \tag{13}$$
Proof. 
By Lemma 2, the function $\rho(x)$ is convex on $(0,\infty)$ and completely monotone on $(1,\infty)$; by continuity, we can include the endpoints of each interval.
We begin with the observation that a completely monotone function is automatically log-convex. We show this for $\rho$ using the representation (10): for any $x \ge 0$,
$$\rho(x+1) = \int_0^\infty e^{-xt}\,\mu(dt), \quad \rho'(x+1) = -\int_0^\infty e^{-xt}\,t\,\mu(dt), \quad \rho''(x+1) = \int_0^\infty e^{-xt}\,t^2\,\mu(dt).$$
Thus, the Cauchy–Schwarz inequality yields
$$\rho'(x)^2 \le \rho(x)\cdot\rho''(x),$$
which guarantees that $x \mapsto \log\rho(x+1)$ is convex.
Therefore, all properties claimed in the statement hold for $k \ge 2$ (convexity even for $k \ge 1$), and we only have to deal with the case $k = 1$.
Monotonicity for $k = 1$: We have to show $\rho_0 > \rho_1 > 0$. This follows by direct verification, since by (2),
$$\rho_0 = 1 \quad\text{and}\quad \rho_1 = 2^{2H-1} - 1$$
(recall that $1/2 < H < 1$, so $0 < \rho_1 < 1$).
Log-convexity for $k = 1$: In this case, the inequality (13) has the form $\rho_2 > \rho_1^2$. It immediately follows from the representation (7) combined with the monotonicity property (11).    □
The previous results imply that $\rho_1^2 < \rho_2$. The following result gives a sharper bound.
Lemma 3.
If $H \in (\frac12, 1)$, then
$$\rho_1^2 < \rho_3. \tag{16}$$
Proof. 
Applying (7), we may write
$$\rho_3 - \rho_1^2 = \rho_3 - \rho_2 + \tfrac12\left(\rho_1 - \rho_3\right) = \tfrac12\left(\rho_1 - 2\rho_2 + \rho_3\right) > 0,$$
because of Statement 2 of Corollary 1.    □

2.3. Particular Cases

We will now consider in detail two particular cases: $n = 3$ and $n = 4$. In these cases, we prove the positivity of all coefficients $\Gamma_{nk}$, establish some relations between them, and study the asymptotic behavior as $H \to 1$. In the case $n = 3$, everything is established analytically, while in the case $n = 4$, the sign of the second coefficient $\Gamma_{43}$ and the relation between the second and third coefficients, $\Gamma_{43}$ and $\Gamma_{44}$, are verified numerically.

2.3.1. Case n = 3

In the case $n = 3$, the system (4) becomes
$$\Gamma_{32} + \Gamma_{33}\rho_1 = \rho_1, \qquad \Gamma_{32}\rho_1 + \Gamma_{33} = \rho_2, \tag{17}$$
whence
$$\Gamma_{32} = \frac{\rho_1(1-\rho_2)}{1-\rho_1^2}, \qquad \Gamma_{33} = \frac{\rho_2 - \rho_1^2}{1-\rho_1^2}. \tag{18}$$
Proposition 1.
For any $H \in (\frac12, 1)$,
$$\Gamma_{32} > \Gamma_{33} > 0.$$
Proof. 
Recall that, by Corollary 1 (Statement 1), $1 > \rho_1 > \rho_2 > 0$. Hence, the first inequality $\Gamma_{32} > \Gamma_{33}$ is equivalent to
$$\rho_1(1-\rho_2) > \rho_2 - \rho_1^2, \quad\text{or}\quad (1+\rho_1)(\rho_1 - \rho_2) > 0,$$
which is true due to Corollary 1.
To prove the second inequality $\Gamma_{33} > 0$, we need to show that $\rho_2 > \rho_1^2$, which was established in Corollary 1.    □
Remark 4.
It is worth pointing out that the positivity (and positive definiteness) of the coefficient matrix together with the positivity of the right-hand side of the system does not imply the positivity of the solution. Indeed, consider the following system with the same coefficient matrix as in (17), but another positive right-hand side, say $(b_1, b_2)$:
$$\begin{pmatrix} 1 & \rho_1 \\ \rho_1 & 1 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} b_1 \\ b_2 \end{pmatrix}.$$
The solution has the form
$$x_1 = \frac{b_1 - b_2\rho_1}{1-\rho_1^2}, \qquad x_2 = \frac{b_2 - b_1\rho_1}{1-\rho_1^2}.$$
If, for example, $b_2 < b_1\rho_1$, then $x_1 > 0$ and $x_2 < 0$. For the system (17), this condition reads $\rho_2 < \rho_1^2$, contradicting Corollary 1.
Proposition 2.
$$\lim_{H\to 1}\Gamma_{32} = \frac{9\log 9 - 8\log 4}{8\log 4} \approx 0.783083, \qquad \lim_{H\to 1}\Gamma_{33} = \frac{8\log 16 - 9\log 9}{8\log 4} \approx 0.216917.$$
Proof. 
If we take the limit $H \to 1$ in the relations
$$\rho_1 = 2^{2H-1} - 1, \qquad \rho_2 = \tfrac12\left(3^{2H} - 2^{2H+1} + 1\right),$$
we obtain
$$\rho_1 \to 1, \quad \rho_2 \to 1, \quad\text{as } H \to 1.$$
Therefore,
$$\lim_{H\to 1}\Gamma_{32} = \lim_{H\to 1}\frac{\rho_1(1-\rho_2)}{(1-\rho_1)(1+\rho_1)} = \lim_{H\to 1}\frac{1-\rho_2}{2(1-\rho_1)} = \lim_{H\to 1}\frac{1-\frac12\left(3^{2H} - 2^{2H+1} + 1\right)}{2\left(2 - 2^{2H-1}\right)} = \lim_{H\to 1}\frac{1 - 3^{2H} + 2^{2H+1}}{4\left(2 - 2^{2H-1}\right)} = \lim_{H\to 1}\frac{1 - 9^H + 2\cdot 4^H}{8 - 2\cdot 4^H}.$$
By l'Hôpital's rule,
$$\lim_{H\to 1}\Gamma_{32} = \lim_{H\to 1}\frac{-9^H\log 9 + 2\cdot 4^H\log 4}{-2\cdot 4^H\log 4} = \frac{9\log 9 - 8\log 4}{8\log 4}.$$
Similarly,
$$\lim_{H\to 1}\Gamma_{33} = \lim_{H\to 1}\frac{\rho_2 - \rho_1^2}{1-\rho_1^2} = \lim_{H\to 1}\frac{\rho_2 - \rho_1^2}{2(1-\rho_1)} = \lim_{H\to 1}\frac{\frac12\left(3^{2H} - 2^{2H+1} + 1\right) - \left(2^{2H-1}-1\right)^2}{2\left(2 - 2^{2H-1}\right)} = \lim_{H\to 1}\frac{9^H - 1 - \frac12\,16^H}{8 - 2\cdot 4^H} = \lim_{H\to 1}\frac{9^H\log 9 - \frac12\,16^H\log 16}{-2\cdot 4^H\log 4} = \frac{8\log 16 - 9\log 9}{8\log 4}.$$    □
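The limits in Proposition 2 are easy to check numerically from the explicit formulas for $\Gamma_{32}$ and $\Gamma_{33}$ (Section 2.3.1). The following sanity check is our own illustration: it evaluates the coefficients near $H = 1$ and compares them with the stated logarithmic expressions.

```python
from math import log

def coeffs_n3(H: float):
    """Gamma_32 and Gamma_33 from the closed-form expressions for n = 3."""
    r1 = 2 ** (2 * H - 1) - 1
    r2 = 0.5 * (3 ** (2 * H) - 2 ** (2 * H + 1) + 1)
    return r1 * (1 - r2) / (1 - r1 ** 2), (r2 - r1 ** 2) / (1 - r1 ** 2)

lim32 = (9 * log(9) - 8 * log(4)) / (8 * log(4))   # ~ 0.783083
lim33 = (8 * log(16) - 9 * log(9)) / (8 * log(4))  # ~ 0.216917

g32, g33 = coeffs_n3(1 - 1e-7)
assert abs(g32 - lim32) < 1e-5 and abs(g33 - lim33) < 1e-5
assert abs(lim32 + lim33 - 1) < 1e-12   # the limits sum to 1
```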
Figure 1 shows the dependence of the coefficients $\Gamma_{32}$ and $\Gamma_{33}$ on $H$. It illustrates the theoretical results stated in Propositions 1 and 2, in particular, the positivity and monotonicity of the coefficients, and the convergence to the theoretical limit values as $H \to 1$.

2.3.2. Case n = 4

For $n = 4$, the system (4) has the following form:
$$\rho_1 = \Gamma_{42} + \Gamma_{43}\rho_1 + \Gamma_{44}\rho_2, \qquad \rho_2 = \Gamma_{42}\rho_1 + \Gamma_{43} + \Gamma_{44}\rho_1, \qquad \rho_3 = \Gamma_{42}\rho_2 + \Gamma_{43}\rho_1 + \Gamma_{44}.$$
Therefore,
$$\Gamma_{42} = \frac{\rho_1 + \rho_1^2\rho_3 + \rho_1\rho_2^2 - \rho_2\rho_3 - \rho_1^3 - \rho_1\rho_2}{1 + 2\rho_1^2\rho_2 - \rho_2^2 - 2\rho_1^2}, \tag{19}$$
$$\Gamma_{43} = \frac{\rho_1^2\rho_2 - \rho_2^3 + \rho_1\rho_2\rho_3 - \rho_1^2 + \rho_2 - \rho_1\rho_3}{1 + 2\rho_1^2\rho_2 - \rho_2^2 - 2\rho_1^2}, \tag{20}$$
$$\Gamma_{44} = \frac{\rho_1^3 + \rho_1\rho_2^2 - 2\rho_1\rho_2 + \rho_3 - \rho_1^2\rho_3}{1 + 2\rho_1^2\rho_2 - \rho_2^2 - 2\rho_1^2}. \tag{21}$$
Proposition 3.
For any $H \in (\frac12, 1)$,
$$\Gamma_{42} > \Gamma_{43} \quad\text{and}\quad \Gamma_{44} > 0.$$
Proof. 
The positivity of the common denominator follows from the representation
$$1 + 2\rho_1^2\rho_2 - \rho_2^2 - 2\rho_1^2 = (1-\rho_2)\left(1-\rho_1^2\right) + (1-\rho_2)\left(\rho_2 - \rho_1^2\right) \tag{22}$$
together with Corollary 1. Therefore, it suffices to prove the claimed relations for the numerators of $\Gamma_{42}$, $\Gamma_{43}$, and $\Gamma_{44}$.
1. Let us prove that $\Gamma_{42} > \Gamma_{43}$. The difference between the numerators of $\Gamma_{42}$ and $\Gamma_{43}$ is equal to
$$\left(\rho_1 + \rho_1^2\rho_3 + \rho_1\rho_2^2 - \rho_2\rho_3 - \rho_1^3 - \rho_1\rho_2\right) - \left(\rho_1^2\rho_2 - \rho_2^3 + \rho_1\rho_2\rho_3 - \rho_1^2 + \rho_2 - \rho_1\rho_3\right) = (\rho_1-\rho_2)(1+\rho_3)(1-\rho_2) + \left(\rho_1^2 - \rho_2^2\right)\left(1 - \rho_1 - \rho_2 + \rho_3\right) > 0,$$
since $\rho_1 > \rho_2$ and $1 - \rho_1 > \rho_2 - \rho_3$ by Statements 1 and 2 of Corollary 1.
2. Finally, the positivity of $\Gamma_{44}$ follows from the following representation of its numerator:
$$\rho_1^3 + \rho_1\rho_2^2 - 2\rho_1\rho_2 + \rho_3 - \rho_1^2\rho_3 = (\rho_1-\rho_2)^2 + (1-\rho_1)\left(\rho_3 - \rho_1^2 + \rho_1\rho_3 - \rho_2^2\right),$$
because $\rho_3 > \rho_1^2$ and $\rho_1\rho_3 > \rho_2^2$ by (16) and (13), respectively.    □
Figure 2 confirms the above proposition. We see that $\Gamma_{42}$ is the largest coefficient. However, $\Gamma_{43} > \Gamma_{44}$ only for $H < 0.752281$; for larger $H$, the order changes.
Remark 5.
Consider numerically the relation between $\Gamma_{43}$ and $\Gamma_{44}$ and the sign of $\Gamma_{43}$. One may represent the numerator of $\Gamma_{43}$ as follows:
$$\rho_1^2\rho_2 - \rho_2^3 + \rho_1\rho_2\rho_3 - \rho_1^2 + \rho_2 - \rho_1\rho_3 = (1-\rho_2)\left(\rho_2 + \rho_2^2 - \rho_1^2 - \rho_1\rho_3\right). \tag{23}$$
Thus, we need to establish that
$$\rho_2 + \rho_2^2 - \rho_1^2 - \rho_1\rho_3 > 0. \tag{24}$$
We established this fact numerically, since we could not come up with an analytical proof. Figure 3 shows the plot of the left-hand side of (24), which confirms the positivity of $\Gamma_{43}$.
However, we can look at (24) from another point of view. Rewrite (24) in the following form:
$$\frac{1+\rho_2}{\rho_1} > \frac{\rho_1+\rho_3}{\rho_2}.$$
The left- and right-hand sides of this inequality are the values at the points $x = 0$ and $x = 1$, respectively, of the following function:
$$\psi(H,x) := \frac{\rho(H,x) + \rho(H,x+2)}{\rho(H,x+1)} = \frac{(x+3)^{2H} - 2(x+2)^{2H} + 2(x+1)^{2H} - 2x^{2H} + (1-x)^{2H}}{(x+2)^{2H} - 2(x+1)^{2H} + x^{2H}}, \quad x\in[0,1].$$
The graph of the surface $\psi(H,x)$, $x\in[0,1]$, $H\in(1/2,1)$, is shown in Figure 4. It would be natural to assume that the function $\psi(H,x)$ decreases in $x$ for any $H$, being bigger at $x=0$ than at $x=1$. However, the function is not monotone for all $H$. Figure 5 contains two-dimensional plots of $\psi(H,x)$, $x\in[0,1]$, for four different values of $H$: $0.6$, $0.7$, $0.8$ and $0.9$. We observe that $\psi(H,0) > \psi(H,1)$ for each value of $H$; however, the function $\psi(H,x)$ changes its behavior from increasing to decreasing.
Remark 6.
The unexpected behavior of $\psi(H,x)$, $x\in[0,1]$ (first increasing, then decreasing), is a consequence of the non-standard term $(1-x)^{2H}$. For $x \ge 1$, the function decreases in $x$ for any $H > 1/2$. Indeed, for $x \ge 1$ it has the form
$$\psi(H,x) = \frac{(x+3)^{2H} - 2(x+2)^{2H} + 2(x+1)^{2H} - 2x^{2H} + (x-1)^{2H}}{(x+2)^{2H} - 2(x+1)^{2H} + x^{2H}} = -2 + \frac{(x+3)^{2H} + (x-1)^{2H} - 2(x+1)^{2H}}{(x+2)^{2H} - 2(x+1)^{2H} + x^{2H}} = -2 + \frac{\left(1+\frac{2}{x+1}\right)^{2H} + \left(1-\frac{2}{x+1}\right)^{2H} - 2}{\left(1+\frac{1}{x+1}\right)^{2H} + \left(1-\frac{1}{x+1}\right)^{2H} - 2}.$$
Write $y = \frac{1}{x+1} \in \bigl(0, \frac12\bigr]$. It is sufficient to prove that the function
$$\eta(H,y) = \frac{(1+2y)^{2H} + (1-2y)^{2H} - 2}{(1+y)^{2H} + (1-y)^{2H} - 2}, \quad y\in\left(0,\tfrac12\right],$$
increases in $y$ for any $H \in (\frac12, 1)$. Now, for $y \le \frac12$,
$$(1+y)^{2H} + (1-y)^{2H} - 2 = \sum_{k=0}^\infty c_k\, y^{2k+2}$$
and
$$(1+2y)^{2H} + (1-2y)^{2H} - 2 = \sum_{k=0}^\infty c_k\,(2y)^{2k+2},$$
where
$$c_k = \frac{4H(2H-1)(2H-2)\cdots(2H-2k-1)}{(2k+2)!} = \frac{2\,(2H)_{2k+2}}{(2k+2)!}, \quad k = 0, 1, 2, \dots$$
(here $(x)_n = x(x-1)\cdots(x-n+1)$ is the Pochhammer symbol for the falling factorial). Note that for $H\in(\frac12,1)$ all the $c_k$ are positive, since each product contains an even number of negative factors. The monotonicity of $\eta(H,y)$ for $y \in (0,\frac12]$ can be proved by differentiation. Canceling the common factor $y^2$, we get
$$\eta(H,y) = \frac{\sum_{k=0}^\infty c_k\, 2^{2k+2}\, y^{2k}}{\sum_{k=0}^\infty c_k\, y^{2k}},$$
and hence the partial derivative equals
$$\frac{\partial}{\partial y}\eta(H,y) = \left(\sum_{k=0}^\infty c_k y^{2k}\right)^{-2}\left[\sum_{k=1}^\infty c_k 2^{2k+2}\, 2k\, y^{2k-1}\sum_{l=0}^\infty c_l y^{2l} - \sum_{k=1}^\infty c_k\, 2k\, y^{2k-1}\sum_{l=0}^\infty c_l 2^{2l+2} y^{2l}\right] = \left(\sum_{k=0}^\infty c_k y^{2k}\right)^{-2}\sum_{k=1}^\infty\sum_{l=0}^\infty c_k c_l\, 2k\left(2^{2k+2} - 2^{2l+2}\right) y^{2k+2l-1}.$$
By rearranging the double sum in the numerator, we obtain the expression
$$\frac{\partial}{\partial y}\eta(H,y) = \left(\sum_{k=0}^\infty c_k y^{2k}\right)^{-2}\sum_{k=1}^\infty\sum_{l=0}^{k} c_k c_l\,(2k-2l)\left(2^{2k+2} - 2^{2l+2}\right) y^{2k+2l-1},$$
which is clearly positive. Thus, for any $H \in (\frac12, 1)$, $\eta(H,y)$ is increasing as a function of $y \in (0,\frac12]$.
Let us try to establish a bit more. We can represent $\eta(H,y)$ in the following form:
$$\eta(H,y) = \frac{\sum_{k=0}^\infty c_k\, 2^{2k+2}\, y^{2k}}{\sum_{k=0}^\infty c_k\, y^{2k}} = \sum_{k=0}^\infty b_k\, y^{2k},$$
where the coefficients $b_k$ can be found successively from the following equations:
$$2^2 c_0 = c_0 b_0, \quad 2^4 c_1 = c_0 b_1 + c_1 b_0, \quad 2^6 c_2 = c_0 b_2 + c_1 b_1 + c_2 b_0, \quad 2^8 c_3 = c_0 b_3 + c_1 b_2 + c_2 b_1 + c_3 b_0, \quad \dots$$
Let us find the first few coefficients: $b_0 = 2^2 = 4$,
$$b_1 = \frac{(2^4 - 2^2)\,c_1}{c_0} = (2^4 - 2^2)\,\frac{2!\,(2H-2)(2H-3)}{4!} = (2H-2)(2H-3),$$
$$b_2 = \frac{(2^6 - 2^2)\,c_2 - c_1 b_1}{c_0} = \frac{2!\,(2H-2)(2H-3)}{4!}\Bigl(2(2H-4)(2H-5) - (2H-2)(2H-3)\Bigr) = \frac16\,(2H-2)(2H-3)\left(2H^2 - 13H + 17\right).$$
It is easy to see that $b_0$, $b_1$, and $b_2$ are positive for $H \in (\frac12, 1)$. We believe that $b_k > 0$ for all $k$; however, the proof of this fact remains an open problem.
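The successive elimination for the $b_k$ is easy to automate. The sketch below is our own illustration: it generates the $c_k$ from the falling-factorial formula, solves the triangular equations for the $b_k$, and reproduces the closed forms of $b_0$, $b_1$, $b_2$ obtained above.

```python
from math import factorial, prod

def c(k: int, H: float) -> float:
    """c_k = 2 (2H)(2H-1)...(2H-2k-1) / (2k+2)!  (falling factorial)."""
    return 2 * prod(2 * H - j for j in range(2 * k + 2)) / factorial(2 * k + 2)

def b_coeffs(H: float, K: int) -> list:
    """Solve 2^{2k+2} c_k = sum_{l=0}^{k} c_l b_{k-l} successively for b_0..b_{K-1}."""
    b = []
    for k in range(K):
        s = sum(c(l, H) * b[k - l] for l in range(1, k + 1))
        b.append((4 ** (k + 1) * c(k, H) - s) / c(0, H))
    return b

H = 0.7
b = b_coeffs(H, 10)
assert abs(b[0] - 4) < 1e-12
assert abs(b[1] - (2 * H - 2) * (2 * H - 3)) < 1e-12
assert abs(b[2] - (2 * H - 2) * (2 * H - 3) * (2 * H ** 2 - 13 * H + 17) / 6) < 1e-12
```

Probing larger $k$ for several values of $H$ supports, but of course does not prove, the conjecture that all $b_k$ are positive.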
Proposition 4.
$$\lim_{H\to 1}\Gamma_{42} = \frac{531\log^2 4 + 72\log^2 6 + 51\log^2 9 - 384\log^2 12 + 108\log^2 18}{96\log^2 12 - 640\log^2 2 - 51\log^2 9} \approx 0.742250,$$
$$\lim_{H\to 1}\Gamma_{43} = \frac{48\log 2 - 15\log 9}{16\log 2 - 3\log 9} \approx 0.069508,$$
$$\lim_{H\to 1}\Gamma_{44} = \frac{108\log^2 18 - 364\log^2 2 - 216\log 2\,\log 9 - 81\log^2 9}{96\log^2 12 - 640\log^2 2 - 51\log^2 9} \approx 0.188242.$$
Remark 7.
Obviously, the sum of the limits of the coefficients is 1, as expected.
Proof 
(Sketch of proof). The proof is straightforward. Substituting $\rho_1 = 2^{2H-1}-1$, $\rho_2 = \tfrac12\left(3^{2H} - 2^{2H+1} + 1\right)$, and $\rho_3 = \tfrac12\left(4^{2H} - 2\cdot 3^{2H} + 2^{2H}\right)$ into (19)–(21), and simplifying the resulting expressions, we obtain
$$\Gamma_{42} = \frac{2 - 7\cdot 4^H + 4^{3H+1} + 6^{2H+1} - 4\cdot 9^H - 2^{4H+3}\cdot 9^H + 2\cdot 9^{2H} + 4^{4H} + 18^{2H}}{2\left(2\cdot 9^H + 3\cdot 2^{4H} + 12^{2H} - 2^{6H+1} - 9^{2H} - 1\right)},$$
$$\Gamma_{43} = \frac{9^{2H} + 4^{2H+1} - 2^{2H+1}\cdot 9^H - 2^{2H+1} - 64^H - 1}{2\left(2^{2H+1} + 9^H - 16^H - 1\right)},$$
$$\Gamma_{44} = \frac{4^H + 4\cdot 9^H + 8^{2H+1} + 18^{2H} - 3\cdot 2^{4H+1} - 2^{2H+1}\cdot 9^H - 2^{4H+1}\cdot 9^H - 2\cdot 9^{2H} - 4^{4H} - 2}{2\left(2\cdot 9^H + 3\cdot 2^{4H} + 12^{2H} - 2^{6H+1} - 9^{2H} - 1\right)}$$
(for $\Gamma_{43}$, we first cancel out the factor $1-\rho_2$; see (22) and (23)). Then, applying l'Hôpital's rule (twice), we arrive at the claimed limits by simple algebra.    □
Remark 8.
For $n = 5$, we present only graphical results; see Figure 6. The situation here is more complicated than in the case $n = 4$. The first coefficient $\Gamma_{52}$ is still the largest; however, the order of the three other coefficients changes several times depending on $H$. In particular, for $H$ close to $1/2$, these coefficients are decreasing, but for $H$ close to 1, they are increasing.

2.4. Recurrence Relations for the Coefficients

In general, there are several ways to obtain the coefficients. For example, we can consider the $\Gamma_{nk}$ as the result of minimizing the value of the quadratic form
$$\mathbb{E}\left(\Delta_1 - \sum_{k=2}^n \alpha_k\Delta_k\right)^2.$$
Evidently, differentiation leads again to the system (4). We could also look for the coefficients with the help of the inverse matrix $A^{-1}$, where $A$ is from (5). However, calculating the entries of the inverse matrix is as difficult as calculating the determinants. It is possible to avoid determinants altogether by using the properties of fGn. More precisely, we propose a recurrence method that calculates the coefficients $\Gamma_{nk}$ successively, starting with $\Gamma_{22} = \rho_1 = 2^{2H-1} - 1$.
Proposition 5.
The following relations hold true:
$$\Gamma_{n+1,n+1} = \frac{\rho_n - \sum_{k=2}^n \Gamma_{nk}\,\rho_{n+1-k}}{1 - \sum_{k=2}^n \Gamma_{nk}\,\rho_{k-1}}, \quad n \ge 2, \tag{26}$$
$$\Gamma_{n+1,k} = \Gamma_{nk} - \Gamma_{n+1,n+1}\,\Gamma_{n,n-k+2}, \quad n \ge 2, \quad 2 \le k \le n. \tag{27}$$
Proof. 
In order to prove (26) and (27), we use the theorem on normal correlation as well as the independence of $\Delta_{n+1} - \mathbb{E}[\Delta_{n+1}\mid\Delta_2,\dots,\Delta_n]$ and any of $\Delta_k$, $2\le k\le n$. We get
$$\mathbb{E}[\Delta_1\mid\Delta_2,\dots,\Delta_n,\Delta_{n+1}] = \sum_{k=2}^n \widetilde\Gamma_{nk}\,\Delta_k + \Gamma_{n+1,n+1}\bigl(\Delta_{n+1} - \mathbb{E}[\Delta_{n+1}\mid\Delta_2,\dots,\Delta_n]\bigr), \tag{28}$$
where $\widetilde\Gamma_{nk}$, $2\le k\le n$, are some constants. Now we take the conditional expectation $\mathbb{E}[\,\cdot\mid\Delta_2,\dots,\Delta_n]$ on both sides of (28) to obtain
$$\mathbb{E}[\Delta_1\mid\Delta_2,\dots,\Delta_n] = \sum_{k=2}^n \widetilde\Gamma_{nk}\,\Delta_k.$$
Comparing this equality with (3), and taking into account that the increments $\Delta_k$, $2\le k\le n$, are linearly independent, we conclude that
$$\sum_{k=2}^n \widetilde\Gamma_{nk}\,\Delta_k = \sum_{k=2}^n \Gamma_{nk}\,\Delta_k, \quad\text{i.e.,}\quad \widetilde\Gamma_{nk} = \Gamma_{nk}.$$
Now we insert this equality into (28) and see that
$$\mathbb{E}[\Delta_1\mid\Delta_2,\dots,\Delta_n,\Delta_{n+1}] = \sum_{k=2}^n \Gamma_{nk}\,\Delta_k + \Gamma_{n+1,n+1}\bigl(\Delta_{n+1} - \mathbb{E}[\Delta_{n+1}\mid\Delta_2,\dots,\Delta_n]\bigr). \tag{29}$$
After multiplying both sides of the last equality by $\Delta_{n+1} - \mathbb{E}[\Delta_{n+1}\mid\Delta_2,\dots,\Delta_n]$ and taking expectations, we arrive at
$$\mathbb{E}\bigl[\Delta_1\bigl(\Delta_{n+1} - \mathbb{E}[\Delta_{n+1}\mid\Delta_2,\dots,\Delta_n]\bigr)\bigr] = \Gamma_{n+1,n+1}\,\mathbb{E}\bigl[\bigl(\Delta_{n+1} - \mathbb{E}[\Delta_{n+1}\mid\Delta_2,\dots,\Delta_n]\bigr)^2\bigr].$$
It follows from the stationarity of the increments that the indices $n+1$ and $1$ in the last equality play symmetric roles, i.e., it is equivalent to
$$\mathbb{E}\bigl[\Delta_{n+1}\bigl(\Delta_1 - \mathbb{E}[\Delta_1\mid\Delta_2,\dots,\Delta_n]\bigr)\bigr] = \Gamma_{n+1,n+1}\,\mathbb{E}\bigl[\bigl(\Delta_1 - \mathbb{E}[\Delta_1\mid\Delta_2,\dots,\Delta_n]\bigr)^2\bigr].$$
From this, we conclude that
$$\Gamma_{n+1,n+1} = \frac{\mathbb{E}\bigl[\Delta_{n+1}\bigl(\Delta_1 - \mathbb{E}[\Delta_1\mid\Delta_2,\dots,\Delta_n]\bigr)\bigr]}{\mathbb{E}\bigl[\bigl(\Delta_1 - \mathbb{E}[\Delta_1\mid\Delta_2,\dots,\Delta_n]\bigr)^2\bigr]} = \frac{\rho_n - \sum_{k=2}^n \Gamma_{nk}\,\rho_{n+1-k}}{1 - \sum_{k=2}^n \Gamma_{nk}\,\rho_{k-1}}.$$
Thus, the relation (26) is proved.
Using again the symmetry of the stationary increments, it is not hard to see that
$$\mathbb{E}[\Delta_{n+1}\mid\Delta_2,\dots,\Delta_n] = \sum_{k=2}^n \Gamma_{n,n-k+2}\,\Delta_k.$$
Therefore, we obtain from (29) that
$$\mathbb{E}[\Delta_1\mid\Delta_2,\dots,\Delta_n,\Delta_{n+1}] = \sum_{k=2}^n \Gamma_{nk}\,\Delta_k + \Gamma_{n+1,n+1}\,\Delta_{n+1} - \Gamma_{n+1,n+1}\sum_{k=2}^n \Gamma_{n,n-k+2}\,\Delta_k = \sum_{k=2}^n \bigl(\Gamma_{nk} - \Gamma_{n+1,n+1}\,\Gamma_{n,n-k+2}\bigr)\Delta_k + \Gamma_{n+1,n+1}\,\Delta_{n+1},$$
and (27) follows.    □
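Relations (26) and (27) translate directly into code. The following sketch is our own Python illustration (the paper's implementation is in R): it builds the whole triangle of coefficients row by row, starting from $\Gamma_{22} = \rho_1$, and can be cross-checked against the closed forms of Section 2.3.

```python
def rho(k, H):
    """Autocovariance of fGn, Eq. (2)."""
    return 0.5 * (abs(k + 1) ** (2 * H) - 2 * abs(k) ** (2 * H) + abs(k - 1) ** (2 * H))

def gamma_triangle(n: int, H: float) -> dict:
    """rows[m] = [Gamma_{m2}, ..., Gamma_{mm}] for m = 2..n, via (26)-(27)."""
    rows = {2: [rho(1, H)]}                     # Gamma_22 = rho_1 = 2^{2H-1} - 1
    for m in range(2, n):
        g = rows[m]                             # current row: Gamma_{mk} = g[k-2]
        num = rho(m, H) - sum(g[k - 2] * rho(m + 1 - k, H) for k in range(2, m + 1))
        den = 1.0 - sum(g[k - 2] * rho(k - 1, H) for k in range(2, m + 1))
        last = num / den                        # Gamma_{m+1,m+1}, Eq. (26)
        rows[m + 1] = [g[k - 2] - last * g[m - k] for k in range(2, m + 1)] + [last]  # Eq. (27)
    return rows

H = 0.7
r1, r2 = rho(1, H), rho(2, H)
g3 = gamma_triangle(3, H)[3]
assert abs(g3[0] - r1 * (1 - r2) / (1 - r1 ** 2)) < 1e-12   # closed form for n = 3
assert abs(g3[1] - (r2 - r1 ** 2) / (1 - r1 ** 2)) < 1e-12
```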

2.5. Positivity of $\Gamma_{n2}$

We conjecture that all coefficients $\Gamma_{nk}$, $2 \le k \le n$, are positive. However, analytically we can prove only the positivity of the leading coefficient, $\Gamma_{n2}$.
Proposition 6.
For all $n \ge 1$, $\Gamma_{n+1,2} > 0$.
Proof. 
From the stationarity of the increments, it follows that
$$\mathbb{E}[\Delta_2\mid\Delta_3,\dots,\Delta_{n+1}] = \sum_{k=3}^{n+1}\Gamma_{n,k-1}\,\Delta_k.$$
Similarly to (29),
$$\mathbb{E}[\Delta_1\mid\Delta_2,\dots,\Delta_n,\Delta_{n+1}] = \widetilde\Gamma_{n+1,2}\bigl(\Delta_2 - \mathbb{E}[\Delta_2\mid\Delta_3,\dots,\Delta_{n+1}]\bigr) + \sum_{k=3}^{n+1}\widetilde\Gamma_{n+1,k}\,\Delta_k,$$
and so
$$\Gamma_{n+1,2} = \widetilde\Gamma_{n+1,2} = \frac{\mathbb{E}\bigl[\bigl(\Delta_2 - \mathbb{E}[\Delta_2\mid\Delta_3,\dots,\Delta_{n+1}]\bigr)\Delta_1\bigr]}{\mathbb{E}\bigl[\bigl(\Delta_2 - \mathbb{E}[\Delta_2\mid\Delta_3,\dots,\Delta_{n+1}]\bigr)^2\bigr]}.$$
It remains to prove the positivity of the numerator
$$\mathbb{E}\bigl[\bigl(\Delta_2 - \mathbb{E}[\Delta_2\mid\Delta_3,\dots,\Delta_{n+1}]\bigr)\Delta_1\bigr] = \rho_1 - \sum_{k=3}^{n+1}\Gamma_{n,k-1}\,\rho_{k-1}.$$
However, we know from (4) (with $l = 2$) that
$$\rho_1 = \sum_{k=2}^{n}\Gamma_{nk}\,\rho_{k-2}.$$
Therefore,
$$\rho_1 - \sum_{k=3}^{n+1}\Gamma_{n,k-1}\,\rho_{k-1} = \sum_{k=2}^{n}\Gamma_{nk}\,\rho_{k-2} - \sum_{k=2}^{n}\Gamma_{nk}\,\rho_{k} = \sum_{k=2}^{n}\Gamma_{nk}\bigl(\rho_{k-2} - \rho_{k}\bigr) > 0,$$
since the sequence $\rho_k$ is decreasing; see Corollary 1.    □

3. Numerical Results

3.1. Properties of Coefficients: Positivity and (non)monotonicity

In this section, we compute numerically the coefficients $\Gamma_{nk}$ for various values of $H$. In Table 1, Table 2, Table 3, Table 4, Table 5 and Table 6, the results for $H = 0.51$, $0.6$, $0.7$, $0.8$, $0.9$, and $0.99$ are listed for $2 \le n \le 10$.
Observe the following:
1.
All coefficients are positive.
2.
The first coefficient in each row is the largest, i.e., $\Gamma_{n2} > \Gamma_{nk}$ for any $3 \le k \le n$. Moreover, it is often substantially larger than any other coefficient in the row.
3.
The conjecture concerning the monotonicity of the coefficients (decrease along each row) does not hold in general. If we take sufficiently large values of H, for example H = 0.9, we see that the coefficient $\Gamma_n^3$ is always less than $\Gamma_n^4$. Moreover, the last coefficient $\Gamma_n^n$ is bigger than $\Gamma_n^{n-1}$ for sufficiently large H.
4.
The monotonicity along each column holds, i.e., $\Gamma_n^k > \Gamma_{n+1}^k$ for fixed k. Figure 7 and Figure 8 illustrate the dependence of $\Gamma_n^k$ on n for k = 2, 3 and for various H.
5.
The limiting distribution of the coefficients as $H \to 1$ is not uniform.
6.
It immediately follows from (27) that the coefficients satisfy the following relation:
\[
\Gamma_{n+1}^{k+1}=\Gamma_n^{k+1}-\Gamma_{n+1}^{n+1}\Gamma_n^{\,n-k+1},
\]
whence
\[
\Gamma_{n+1}^{k+1}-\Gamma_{n+1}^{k}
=\Gamma_n^{k+1}-\Gamma_n^{k}
-\Gamma_{n+1}^{n+1}\bigl(\Gamma_n^{\,n-k+1}-\Gamma_n^{\,n-k+2}\bigr).
\]
The second of these relations suggests the following induction argument: knowing that the coefficients decrease in k for fixed n and that the last coefficient $\Gamma_{n+1}^{n+1}$ is positive, one could try to prove that they also decrease in k for n + 1. Unfortunately, if we take n = 3 as the base of the induction, we see that such a relation holds only if k = 2; indeed, $\Gamma_4^2 > \Gamma_4^3$, as we know from Proposition 3. However, the relation between $\Gamma_4^3$ and $\Gamma_4^4$ is not so stable and depends on H; see Figure 2. Therefore, we cannot state that $\Gamma_4^3 > \Gamma_4^4$.
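The observations above, together with the recurrence relation, can be verified numerically. A sketch using a direct solver (the unit-variance fGn covariance is assumed; helper names are ours):

```python
import numpy as np

def rho(k, H):
    """Unit-variance fGn autocovariance rho_k."""
    k = abs(k)
    return 0.5 * ((k + 1) ** (2 * H) - 2 * k ** (2 * H) + abs(k - 1) ** (2 * H))

def gamma_row(n, H):
    """Gamma_n^k, k = 2..n, as a dict, obtained from the normal equations."""
    idx = np.arange(2, n + 1)
    R = np.array([[rho(i - j, H) for j in idx] for i in idx])
    r = np.array([rho(j - 1, H) for j in idx])
    return dict(zip(idx.tolist(), np.linalg.solve(R, r)))

for H in (0.6, 0.8, 0.9):
    rows = {n: gamma_row(n, H) for n in range(2, 11)}
    for n in range(2, 10):
        g, g1 = rows[n], rows[n + 1]
        # Positivity and column monotonicity: Gamma_n^k > Gamma_{n+1}^k > 0.
        assert all(g[k] > g1[k] > 0 for k in range(2, n + 1))
        # Relation Gamma_{n+1}^{k+1} = Gamma_n^{k+1} - Gamma_{n+1}^{n+1} * Gamma_n^{n-k+1}.
        assert all(np.isclose(g1[k + 1], g[k + 1] - g1[n + 1] * g[n - k + 1])
                   for k in range(2, n))
```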

3.2. Comparison of the Methods: Computation Time

Let us compare the two methods in terms of computation time. The first method (solving the system of equations) was implemented using the R function solve(). We considered two problems:
1.
For fixed n, compute the coefficients $\Gamma_n^k$, $2 \le k \le n$, i.e., compute the nth row of the matrix.
2.
Compute the whole triangular array $\Gamma_m^k$, $2 \le m \le n$, $2 \le k \le m$. This requires solving $n-1$ systems of equations.
The second method (recurrence relations) always gives us the whole array of coefficients, which can be considered an advantage.
Let us mention that both methods give exactly the same values of the coefficients.
We also compared the computation time of each method on an Intel Core i3-8145U processor. The results are shown in Table 7. Observe that the recurrence method is always faster, especially for large n, if we need to compute the whole matrix: it takes less than 2 s for n = 2000, while solving all the systems of equations takes more than 21 min. Moreover, for large n the recurrence method is even faster than the calculation of a single row of the matrix, which requires solving only one system of equations.

3.3. Remarks on the Case H < 1/2

In this paper, we mainly focus on the case H > 1/2 (the case of long-range dependence). In this section, we give some brief comments on the other case, H < 1/2.
1.
Using the complete monotonicity of ρ (see Lemma 2), we can show that in the case H < 1/2 the inequalities for $\rho_k$ from Corollary 1, Properties 1 and 2, remain valid with opposite signs (the sign "<" instead of the sign ">"). In other words, the sequence $\rho_k$, $k \ge 1$, is negative, increasing and concave. However, it remains log-convex, i.e., Property 3 of Corollary 1 holds for all $H \in (0, 1/2) \cup (1/2, 1)$.
2.
The behaviour of the coefficients for n = 3, 4, 5 is shown in Figure 9, Figure 10 and Figure 11. We see that for H < 1/2 the coefficients are negative and increasing with respect to H. Moreover, for all H < 1/2 we also observe monotonicity with respect to k, i.e., $\Gamma_n^k < \Gamma_n^{k+1}$ (unlike the case H > 1/2).
3.
Let H = 0. In this case $B_n^H = B_n^0 = (\xi_n - \xi_0)/\sqrt{2}$, where $\{\xi_i, i \ge 0\}$ is a sequence of independent and identically distributed $N(0,1)$ random variables. So $\Delta_1 = (\xi_1 - \xi_0)/\sqrt{2}$ and, in general,
\[
\Delta_k=\frac{\xi_k-\xi_{k-1}}{\sqrt{2}},\qquad k\ge 1.
\]
Consider the equality
\[
\mathbb{E}(\Delta_1\mid\Delta_2,\dots,\Delta_n)=\sum_{k=2}^{n}\Gamma_n^k\Delta_k,\qquad n\ge 2.
\]
Then
\[
\mathbb{E}\Delta_1\Delta_2=-\tfrac12,\qquad
\mathbb{E}\Delta_1\Delta_k=0,\ k>2,\qquad
\mathbb{E}\Delta_k\Delta_{k+1}=-\tfrac12,\ k\ge 2,\qquad
\mathbb{E}\Delta_k\Delta_l=0,\ |l-k|>1.
\]
Therefore, the system of linear equations has the form
\[
-\tfrac12=\Gamma_n^2-\tfrac12\Gamma_n^3,\qquad
0=-\tfrac12\Gamma_n^k+\Gamma_n^{k+1}-\tfrac12\Gamma_n^{k+2},\ 2\le k\le n-2,\qquad
0=-\tfrac12\Gamma_n^{n-1}+\Gamma_n^n,
\]
and we obtain
\[
\Gamma_n^n=\tfrac12\Gamma_n^{n-1},\quad
\Gamma_n^2=\tfrac12\Gamma_n^3-\tfrac12,\quad
\Gamma_n^3=\tfrac23\Gamma_n^4-\tfrac13,\ \dots,\
\Gamma_n^k=\tfrac{k-1}{k}\Gamma_n^{k+1}-\tfrac1k,\ \dots,\
\Gamma_n^{n-1}=\tfrac{n-2}{n-1}\Gamma_n^n-\tfrac1{n-1}.
\]
Finding $\Gamma_n^n$ and $\Gamma_n^{n-1}$ from the first and last equations, and then calculating successively $\Gamma_n^{n-2},\dots,\Gamma_n^2$, we obtain the following solution:
\[
\Gamma_n^k=-\frac{n-k+1}{n},\qquad k=2,\dots,n.
\]

Author Contributions

Investigation, Y.M., K.R. and R.L.S.; writing—original draft preparation, Y.M., K.R. and R.L.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Japan Science and Technology Agency, project CREST JPMJCR2115 and project STORM 274410 and supported through the joint Polish–German NCN–DFG “Beethoven 3” grant NCN 2018/31/G/ST1/02252 and DFG SCHI 419/11-1.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Biagini, F.; Hu, Y.; Øksendal, B.; Zhang, T. Stochastic Calculus for Fractional Brownian Motion and Applications; Probability and Its Applications; Springer: London, UK, 2008.
  2. Mishura, Y.S. Stochastic Calculus for Fractional Brownian Motion and Related Processes; Lecture Notes in Mathematics, Vol. 1929; Springer: Berlin, Germany, 2008.
  3. Nourdin, I. Selected Aspects of Fractional Brownian Motion; Bocconi & Springer Series, Vol. 4; Springer: Milan, Italy; Bocconi University Press: Milan, Italy, 2012.
  4. Samorodnitsky, G. Long range dependence. Found. Trends Stoch. Syst. 2006, 1, 163–257.
  5. Wang, W.; Cherstvy, A.G.; Liu, X.; Metzler, R. Anomalous diffusion and nonergodicity for heterogeneous diffusion processes with fractional Gaussian noise. Phys. Rev. E 2020, 102, 012146.
  6. Nie, D.; Deng, W. A unified convergence analysis for the fractional diffusion equation driven by fractional Gaussian noise with Hurst index H ∈ (0,1). SIAM J. Numer. Anal. 2022, 60, 1548–1573.
  7. Nie, D.; Sun, J.; Deng, W. Strong convergence order for the scheme of fractional diffusion equation driven by fractional Gaussian noise. SIAM J. Numer. Anal. 2022, 60, 1879–1904.
  8. Gao, F.Y.; Kang, Y.M.; Chen, X.; Chen, G. Fractional Gaussian noise-enhanced information capacity of a nonlinear neuron model with binary signal input. Phys. Rev. E 2018, 97, 052142.
  9. Brouste, A.; Fukasawa, M. Local asymptotic normality property for fractional Gaussian noise under high-frequency observations. Ann. Statist. 2018, 46, 2045–2061.
  10. Sørbye, S.H.; Rue, H.V. Fractional Gaussian noise: Prior specification and model comparison. Environmetrics 2018, 29, e2457.
  11. Stratonovich, R.L. Theory of Information and Its Value; Springer: Cham, Switzerland, 2020.
  12. Dávalos, A.; Jabloun, M.; Ravier, P.; Buttelli, O. On the statistical properties of multiscale permutation entropy: Characterization of the estimator's variance. Entropy 2019, 21, 450.
  13. Ramdani, S.; Bouchara, F.; Lesne, A. Probabilistic analysis of recurrence plots generated by fractional Gaussian noise. Chaos 2018, 28, 085721.
  14. Dieker, A.B.; Mandjes, M. On spectral simulation of fractional Brownian motion. Probab. Engrg. Inform. Sci. 2003, 17, 417–434.
  15. Gupta, A.; Joshi, S. Some studies on the structure of covariance matrix of discrete-time fBm. IEEE Trans. Signal Process. 2008, 56, 4635–4650.
  16. Kijima, M.; Tam, C.M. Fractional Brownian motions in financial models and their Monte Carlo simulation. Theory Appl. Monte Carlo Simulations 2013, 53–85.
  17. Montillet, J.P.; Yu, K. Covariance matrix analysis for higher order fractional Brownian motion time series. In Proceedings of the 2015 IEEE 28th Canadian Conference on Electrical and Computer Engineering (CCECE), Halifax, NS, Canada, 3–6 May 2015; pp. 1420–1424.
  18. Mishura, Y.; Ralchenko, K.; Shklyar, S. General conditions of weak convergence of discrete-time multiplicative scheme to asset price with memory. Risks 2020, 8, 11.
  19. Mishura, Y.; Shevchenko, G. Theory and Statistical Applications of Stochastic Processes; John Wiley & Sons: Hoboken, NJ, USA, 2017.
  20. Banna, O.; Mishura, Y.; Ralchenko, K.; Shklyar, S. Fractional Brownian Motion: Approximations and Projections; ISTE Ltd. & Wiley: London, UK, 2019.
  21. Schilling, R.L.; Song, R.; Vondraček, Z. Bernstein Functions: Theory and Applications, 2nd ed.; De Gruyter Studies in Mathematics, Vol. 37; Walter de Gruyter & Co.: Berlin, Germany, 2012.
Figure 1. Case n = 3: $\Gamma_3^2$ and $\Gamma_3^3$ as functions of H.
Figure 2. Case n = 4: $\Gamma_4^2$, $\Gamma_4^3$, and $\Gamma_4^4$ as functions of H.
Figure 3. The left-hand side of (24).
Figure 4. The function $\psi(H, x)$ as a surface.
Figure 5. The function $\psi(H, x)$ as a function of x for various H.
Figure 6. Case n = 5: $\Gamma_5^2$, $\Gamma_5^3$, $\Gamma_5^4$, and $\Gamma_5^5$ depending on H.
Figure 7. Case k = 2: $\Gamma_n^2$ as a function of n for various H.
Figure 8. Case k = 3: $\Gamma_n^3$ as a function of n for various H.
Figure 9. Case n = 3: $\Gamma_3^2$ and $\Gamma_3^3$ as functions of H.
Figure 10. Case n = 4: $\Gamma_4^2$, $\Gamma_4^3$, and $\Gamma_4^4$ as functions of H.
Figure 11. Case n = 5: $\Gamma_5^2$, $\Gamma_5^3$, $\Gamma_5^4$, and $\Gamma_5^5$ depending on H.
Table 1. Coefficients $\Gamma_n^k$ for H = 0.51.

n\k       2        3        4        5        6        7        8        9       10
 2    0.01396
 3    0.01389  0.00521
 4    0.01387  0.00516  0.00339
 5    0.01386  0.00515  0.00336  0.00253
 6    0.01386  0.00514  0.00335  0.00250  0.00201
 7    0.01385  0.00514  0.00334  0.00249  0.00199  0.00167
 8    0.01385  0.00514  0.00334  0.00248  0.00198  0.00165  0.00143
 9    0.01385  0.00513  0.00334  0.00248  0.00198  0.00165  0.00142  0.00125
10    0.01385  0.00513  0.00333  0.00248  0.00198  0.00164  0.00141  0.00124  0.00111
Table 2. Coefficients $\Gamma_n^k$ for H = 0.6.

n\k       2        3        4        5        6        7        8        9       10
 2    0.14870
 3    0.14123  0.05020
 4    0.13954  0.04542  0.03383
 5    0.13868  0.04427  0.03031  0.02522
 6    0.13817  0.04366  0.02942  0.02243  0.02013
 7    0.13784  0.04329  0.02893  0.02170  0.01781  0.01675
 8    0.13760  0.04303  0.02862  0.02129  0.01719  0.01477  0.01434
 9    0.13742  0.04285  0.02840  0.02102  0.01683  0.01423  0.01262  0.01254
10    0.13728  0.04271  0.02824  0.02083  0.01660  0.01391  0.01214  0.01101  0.01114
Table 3. Coefficients $\Gamma_n^k$ for H = 0.7.

n\k       2        3        4        5        6        7        8        9       10
 2    0.31951
 3    0.28867  0.09652
 4    0.28207  0.07677  0.06840
 5    0.27860  0.07288  0.05409  0.05074
 6    0.27654  0.07069  0.05114  0.03946  0.04048
 7    0.27518  0.06936  0.04942  0.03708  0.03117  0.03366
 8    0.27421  0.06846  0.04835  0.03566  0.02917  0.02573  0.02881
 9    0.27348  0.06782  0.04762  0.03476  0.02796  0.02401  0.02191  0.02518
10    0.27292  0.06733  0.04708  0.03413  0.02718  0.02295  0.02039  0.01907  0.02237
Table 4. Coefficients $\Gamma_n^k$ for H = 0.8.

n\k       2        3        4        5        6        7        8        9       10
 2    0.51572
 3    0.44379  0.13947
 4    0.42915  0.09287  0.10500
 5    0.42108  0.08574  0.07202  0.07684
 6    0.41637  0.08132  0.06676  0.05103  0.06130
 7    0.41325  0.07873  0.06336  0.04690  0.04010  0.05089
 8    0.41103  0.07698  0.06132  0.04414  0.03668  0.03291  0.04352
 9    0.40938  0.07573  0.05993  0.04246  0.03435  0.02999  0.02790  0.03801
10    0.40810  0.07479  0.05892  0.04130  0.03292  0.02796  0.02534  0.02420  0.03373
Table 5. Coefficients $\Gamma_n^k$ for H = 0.9.

n\k       2        3        4        5        6        7        8        9       10
 2    0.74110
 3    0.60809  0.17948
 4    0.58213  0.09152  0.14465
 5    0.56714  0.08204  0.08433  0.10362
 6    0.55857  0.07506  0.07754  0.05671  0.08272
 7    0.55290  0.07118  0.07223  0.05156  0.04445  0.06852
 8    0.54889  0.06858  0.06921  0.04734  0.04028  0.03617  0.05851
 9    0.54590  0.06673  0.06716  0.04492  0.03675  0.03266  0.03049  0.05105
10    0.54359  0.06535  0.06568  0.04326  0.03472  0.02962  0.02747  0.02633  0.04527
Table 6. Coefficients $\Gamma_n^k$ for H = 0.99.

n\k       2        3        4        5        6        7        8        9       10
 2    0.97247
 3    0.76506  0.21328
 4    0.72588  0.07275  0.18368
 5    0.70233  0.06342  0.09059  0.12824
 6    0.68917  0.05413  0.08408  0.05617  0.10262
 7    0.68047  0.04937  0.07696  0.05159  0.04422  0.08474
 8    0.67435  0.04617  0.07323  0.04602  0.04065  0.03555  0.07228
 9    0.66979  0.04393  0.07067  0.04312  0.03604  0.03265  0.02980  0.06299
10    0.66628  0.04227  0.06885  0.04111  0.03363  0.02870  0.02735  0.02560  0.05582
Table 7. Computation time.

n                               100       500       1000      2000
System method (last row)        0.02 s    0.20 s    0.51 s    3.10 s
System method (whole matrix)    0.17 s    16.46 s   2.27 min  21.78 min
Recurrence method               0.04 s    0.19 s    0.48 s    1.83 s