Article

A Note on Wavelet-Based Estimator of the Hurst Parameter

Center of Statistical Research, School of Statistics, Southwestern University of Finance and Economics, Chengdu 611130, China
Entropy 2020, 22(3), 349; https://doi.org/10.3390/e22030349
Submission received: 7 February 2020 / Revised: 15 March 2020 / Accepted: 15 March 2020 / Published: 18 March 2020

Abstract

The signals in numerous fields usually exhibit scaling behavior (long-range dependence and self-similarity), which is characterized by the Hurst parameter $H$. Fractional Brownian motion (FBM) plays an important role in modeling signals with self-similarity and long-range dependence. Wavelet analysis is a common method for signal processing and has been used to estimate the Hurst parameter. This paper conducts a detailed numerical simulation study, in the case of FBM, of the selection of parameters and of the empirical bias of the wavelet-based estimator, neither of which has been studied comprehensively in previous work, especially the empirical bias. The results show that the empirical bias is due to the initialization errors caused by discrete sampling and is not related to the simulation method. When an appropriate orthogonal, compactly supported wavelet is chosen, the empirical bias is almost unrelated to the inaccurate bias correction caused by correlations of the wavelet coefficients. The latter two causes are studied via a comparison of estimators and a comparison of simulation methods. These results can serve as a reference for future studies and applications concerning the scaling behavior of signals. Some preliminary results of this study have already served as a reference for my previous work.

1. Introduction

The signals in numerous fields usually exhibit scaling behavior (long-range dependence and self-similarity), which has been recognized as a key property for data characterization and decision making (see e.g., [1,2,3,4,5]). It is usually characterized by the Hurst parameter $H$ [6], so the key point in detecting scaling behavior is the estimation of $H$. The Hurst parameter was first computed via the R/S statistic by Hurst [7] in a study of the hydrological properties of the Nile river. Hurst found that the R/S statistic on the Nile data grew approximately as $n^H$ with $H = 0.74$, where $n$ is the number of observations. This phenomenon is called the Hurst phenomenon. To explain it, Mandelbrot introduced the concept of self-similarity and accounted for the Hurst phenomenon using self-similar fractional Brownian motion (FBM) [8]. A continuous process $X(t)$ is said to be self-similar if, for $a > 0$, $X(at) \overset{d}{=} a^H X(t)$, where $H$ is the self-similarity parameter. When $H > 0.5$, the increments of FBM are long-range dependent, i.e., the sum of their auto-covariances diverges. Thus, fractional Brownian motion and its increment process (fractional Gaussian noise (FGN)) play important roles in modeling signals with self-similarity and long-range dependence. Most studies on this issue are based on FBM.
Wavelet analysis is a common method for signal processing (see e.g., [9,10]) and has been widely used for the fractal analysis of signals due to its multiresolution structure. Nicolis et al. [11] defined three kinds of wavelet-based entropy for studying the two-dimensional fractional Brownian field. Li et al. [12] used wavelet fractals and twin support vector machines to classify heart sound signals. Ramírez-Pacheco et al. [13] studied fractal signal classification using non-extensive wavelet-based entropy.
The wavelet-based estimator of the Hurst parameter was well established by Abry et al. (see [14,15,16,17,18,19,20,21]). Compared with other estimators, such as the R/S method, the periodogram, the variogram (semi-parametric or nonparametric estimators) and parametric methods, the wavelet-based estimator performs well in both the statistical and computational sense, and is superior in robustness (see [18,19] and the references therein). Besides, the wavelet-based method can also eliminate some trends (linear trends, polynomial trends, or more) through its vanishing moments [17], which makes the estimator robust to some nonstationarities. More simulation studies for the estimation of the Hurst parameter can be found in [22]. Based on the standard wavelet-based estimator, several robust estimators have been proposed. Soltani et al. [23] proposed an improved wavelet-based estimator via the average of two wavelet coefficients half a length apart, taking the logarithm first. Shen et al. [24] proposed a robust estimator of the self-similarity parameter using the wavelet transform, which was less sensitive to some non-stationary traffic conditions. Park & Park [25] introduced a robust wavelet-based estimator which took the logarithm of the wavelet coefficients first and averaged them later. Feng & Vidakovic [26] estimated the Hurst parameter via a general trimean estimator on nondecimated wavelet coefficients. Kang & Vidakovic [27] proposed a robust estimator of the Hurst parameter via medians of log-squared nondecimated wavelet coefficients.
Despite extensive studies of the standard wavelet-based estimator proposed by Abry et al., there is still no comprehensive and detailed numerical simulation study on fractional Brownian motion, especially for the selection of parameters and the empirical bias. I have not seen studies on how the bias and variance change over the full range of $H$ and over different data lengths, which I believe is important for the selection of the lower octave bound $j_1$, especially at small values of $H$; $j_1$ is selected via the minimum mean square error. Thus, this paper conducts a detailed numerical simulation study on the selection of parameters, including the following:
  • The changes of bias and variance with all different values of $H$, different data lengths, different values of $j_1$ and different wavelets;
  • The relations of the selected $j_1$ with the data length and $H$.
For the empirical bias that exists in the standard wavelet-based estimator, the following three possible causes in the case of FBM are identified.
  • The initialization of the initial approximation wavelet coefficients, which introduces errors into the detail wavelet coefficients used.
  • The inaccurate bias correction caused by correlations of the wavelet coefficients.
  • The simulation method for FBM is not exact enough, which produces the empirical bias.
There exist many studies on reducing the empirical bias caused by the first two reasons, but there is a lack of study determining which is the main cause of the empirical bias; identifying it is important for reducing the empirical bias via appropriate techniques. Combining the results of parameter selection, this paper analyzes the above three causes of empirical bias in the case of FBM via a comparison of estimators and a comparison of simulation methods. The results obtained from the above numerical simulations can serve as a reference for future studies and applications concerning the scaling behavior of signals. Some preliminary results of this study have provided a reference for my previous studies on wavelet-based estimation of the Hurst parameter [28,29,30].
This paper is organized as follows. Section 2 introduces two available estimators of the Hurst parameter and the initialization methods for the initial approximation wavelet coefficients. The simulation methods for FBM are described in Section 3. The main results are reported and discussed in Section 4, and the work is concluded in Section 5.

2. Wavelet-Based Estimator

2.1. Definitions and Properties

Fractional Brownian motion $\{X(t), t \in \mathbb{R}\}$ with Hurst parameter $H \in (0,1)$ is a real-valued mean-zero Gaussian process with the following covariance structure:
$$\mathbb{E}\,X(t)X(s) = \tfrac{1}{2}\left\{|t|^{2H} + |s|^{2H} - |t-s|^{2H}\right\}.$$
It is a self-similar process with stationary increments. Its wavelet coefficients are defined by
$$d_X(j,k) = \int_{\mathbb{R}} \psi_{j,k}(t)\, X(t)\, dt,$$
where $\psi(t)$ is the mother wavelet, defined through a scaling function $\phi(t)$. Taking $\psi(t)$ as a base function, its dilates and translates are $\psi_{j,k}(t) = 2^{-j/2}\psi(2^{-j}t - k)$, $j,k \in \mathbb{Z}$. The factor $2^j$ is called the scale and $j$ the octave.
Please note that FBM is usually denoted by $B_H$. This paper uses the symbol $X$ instead of $B_H$ since the methods in this section for FBM are applicable to a more general class, namely self-similar processes with stationary increments and finite variance [20].
Some key properties of the wavelet coefficient of X are given in the following lemma. The proof of this lemma can be found in [19,20,31,32,33].
Lemma 1.
Let $\{X(t), t \in \mathbb{R}\}$ be a fractional Brownian motion, and let $\psi(t) \in L^2(\mathbb{R})$ be an orthonormal wavelet with compact support and $N \geq 1$ vanishing moments. The wavelet coefficients of $X(t)$ given in (2) have the following properties:
(1) $\mathbb{E}\,d_X(j,k) = 0$ and $d_X(j,k)$ is Gaussian, for any $j,k \in \mathbb{Z}$.
(2) For fixed $j \in \mathbb{Z}$,
$$d_X(j,k) \overset{d}{=} 2^{j(H+1/2)}\, d_X(0,k), \quad k \in \mathbb{Z}.$$
(3) For fixed $j \in \mathbb{Z}$,
$$d_X(j,k+h) \overset{d}{=} d_X(j,h), \quad k,h \in \mathbb{Z}.$$
(4) For $j, j', k, k' \in \mathbb{Z}$,
$$\mathbb{E}\,d_X(j,k)\, d_X(j',k') \sim |2^j k - 2^{j'} k'|^{2H-2N}, \quad |2^j k - 2^{j'} k'| \to +\infty.$$
In the above, $\overset{d}{=}$ denotes equality in distribution.
Remark 1.
In view of Equation (5), to avoid long-range dependence of $d_X(j,k)$, i.e., to ensure that $\sum_{j,k \in \mathbb{Z}} |\mathbb{E}\,d_X(j,k)\,d_X(0,0)| < \infty$, one needs to choose
$$N > H + 1/2, \quad \text{i.e.,} \quad 2H - 2N < -1,$$
that is, at least $N = 2$. Under this condition, the correlation of the $d_X(j,k)$ tends rapidly to 0 at large lags.
According to Remark 1 and Equation (5), let the number of vanishing moments satisfy $N \geq 2$. It is then reasonable to impose the following assumptions.
  • For fixed $j$, the $d_X(j,\cdot)$ are independent and identically distributed;
  • The processes $d_X(j,\cdot)$ and $d_X(j',\cdot)$, $j \neq j'$, are independent.
Under these two assumptions and according to Lemma 1, there exist two available least squares estimators of the Hurst parameter [19,20,34], one of which is applied here for the first time to FBM for the bias study of the common estimator.

2.2. Two Wavelet-Based Estimators

The First Estimator 

The first estimator is the standard wavelet-based estimator of the Hurst parameter, proposed by Abry et al. and commonly used in applications in various fields. In view of Equations (3) and (4), one can check the following formula:
$$\mathbb{E}\,d_X^2(j,k) = C_1\, 2^{j(2H+1)}, \quad C_1 = \mathbb{E}\,d_X^2(0,0).$$
Taking the logarithm,
$$\log_2 \mathbb{E}\,d_X^2(j,k) = j(2H+1) + \log_2 C_1.$$
So $H$ can be estimated by a linear regression in the diagram of the left-hand side versus $j$. The quantity $\mathbb{E}\,d_X^2(j,k)$ is estimated by
$$S(j) := \frac{1}{n_j}\sum_k d_X^2(j,k),$$
where $n_j$ stands for the number of $d_X^2(j,k)$ actually available at octave $j$.
Due to different variances of log 2 S ( j ) at different js, the weighted least squares for this regression model is needed. The weight is the reciprocal of the variance of log 2 S ( j ) .
Please note that
$$\mathbb{E}\log_2 S(j) \neq \log_2 \mathbb{E}\,S(j) = \log_2 \mathbb{E}\,d_X^2(j,k).$$
This can lead to bias in the estimator.
Define the variables $y_1(j)$ as
$$y_1(j) := \log_2 S(j) - g(j),$$
where $g(j)$ is calculated such that $\mathbb{E}\,y_1(j) = \log_2 \mathbb{E}\,S(j)$. To ensure the unbiasedness of the estimator, I use $y_1(j)$ as the response variable instead of $\log_2 S(j)$. Moreover, $\mathrm{Var}\,y_1(j) = \mathrm{Var}\log_2 S(j)$.
The calculation of $g(j)$ and $\mathrm{Var}\,y_1(j)$ is shown in [18,19,20]:
$$g(j) = \frac{\Gamma'(n_j/2)}{\Gamma(n_j/2)\ln 2} - \log_2(n_j/2),$$
$$\mathrm{Var}\,y_1(j) = \zeta(2, n_j/2)/\ln^2 2,$$
where $\zeta(2,z) := \sum_{x=0}^{\infty} 1/(z+x)^2$ is a generalized Riemann zeta function, and $\Gamma$ and $\Gamma'$ are the gamma function and its derivative, respectively.
The $g(j)$ and $\mathrm{Var}\,y_1(j)$ can also be calculated via sample moment estimators [18,19,20]:
$$g(j) \approx -\frac{\log_2 e}{2}\, C(j),$$
$$\mathrm{Var}\,y_1(j) \approx (\log_2 e)^2\, C(j),$$
where $C(j) = \mathrm{Var}\,d_X^2(j,\cdot)/\left[n_j\,\big(\mathbb{E}\,d_X^2(j,\cdot)\big)^2\right]$. The $C(j)$ term is estimated using the sample moment estimators of the fourth and second moments of $d_X(j,\cdot)$ at each octave.
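The exact corrections above are straightforward to evaluate numerically. The following is a minimal sketch (assuming NumPy and SciPy are available), using `scipy.special.digamma` for $\Gamma'/\Gamma$ and `scipy.special.zeta` for the generalized zeta function; the sample sizes in the loop are illustrative:

```python
import numpy as np
from scipy.special import digamma, zeta

def g_correction(n_j):
    # g(j) = Gamma'(n_j/2) / (Gamma(n_j/2) ln 2) - log2(n_j/2)
    #      = digamma(n_j/2) / ln 2 - log2(n_j/2)
    return digamma(n_j / 2) / np.log(2) - np.log2(n_j / 2)

def var_y1(n_j):
    # Var y1(j) = zeta(2, n_j/2) / ln^2 2  (Hurwitz zeta)
    return zeta(2, n_j / 2) / np.log(2) ** 2

# g(j) -> 0 and Var y1(j) ~ 2 / (n_j ln^2 2) as n_j grows
for n_j in (8, 64, 512):
    print(n_j, g_correction(n_j), var_y1(n_j))
```

Both quantities shrink as $n_j$ grows, which is why the correction matters mainly at coarse octaves, where few coefficients are available.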

The Second Estimator 

As mentioned above, since $\mathbb{E}\log_2 S(j) \neq \log_2 \mathbb{E}\,S(j)$, the first estimator needs a bias correction. To avoid this, a second least squares estimator of the Hurst parameter is used. This estimator was also originally proposed by Abry et al. [34] and later studied by Park & Park [25] for the purpose of robustness.
Based on Equations (3) and (4),
$$d_X^2(j,k) \overset{d}{=} 2^{j(2H+1)}\, d_X^2(0,0).$$
Now take the logarithm first and then the expectation to obtain the following new equation:
$$\mathbb{E}\log_2 d_X^2(j,k) = j(2H+1) + C_2, \quad C_2 = \mathbb{E}\log_2 d_X^2(0,0).$$
So $H$ can be estimated by a weighted linear regression in the diagram of the left-hand side versus $j$. The quantity $\mathbb{E}\log_2 d_X^2(j,k)$ is estimated by
$$LS(j) := \frac{1}{n_j}\sum_k \log_2 d_X^2(j,k),$$
where $n_j$ stands for the number of $d_X^2(j,k)$ actually available at octave $j$.
Compared with the first estimator, the second estimator swaps the order of the expectation and the logarithm. The idea of this estimator was first proposed for analyzing $\alpha$-stable self-similar processes with infinite second-order statistics and long-range dependence [34].
Define the variables $y_2(j)$ as
$$y_2(j) := LS(j).$$
We can check that $\mathbb{E}\,y_2(j) = \mathbb{E}\,LS(j) = \mathbb{E}\log_2 d_X^2(j,k)$. Let $y_2(j)$ be the response variable of the weighted linear regression. The unbiasedness of the estimator follows from the unbiasedness of $y_2(j)$.
Similar to the calculations shown in [18,19,20], the variance of $y_2(j)$ can be calculated for the regression weights:
$$\mathrm{Var}\,y_2(j) = \zeta(2, 1/2)/(n_j \ln^2 2),$$
and, by sample moment estimators [18,19,20],
$$\mathrm{Var}\,y_2(j) \approx \mathrm{Var}\log_2 d_X^2(j,\cdot)/n_j.$$
The $\mathrm{Var}\log_2 d_X^2(j,\cdot)$ is estimated using its sample variance at each octave.

Explicit Formula of the Two Estimators 

Let $j_1$ denote the lower bound of $j$ and $j_2$ the upper bound, i.e., the values of $j$ are chosen with $j_1 \leq j \leq j_2$. By weighted least squares, the explicit formula of the estimators is
$$\hat{H} = \frac{\sum_{j=j_1}^{j_2} \omega(j)\, y(j) - 1}{2},$$
where $\omega(j) = \dfrac{T_0\, j - T_1}{\sigma^2(j)\,(T_0 T_2 - T_1^2)}$, $T_0 = \sum_{j=j_1}^{j_2} 1/\sigma^2(j)$, $T_1 = \sum_{j=j_1}^{j_2} j/\sigma^2(j)$, $T_2 = \sum_{j=j_1}^{j_2} j^2/\sigma^2(j)$.
When using the first method, $y(j) = y_1(j)$ and $\sigma^2(j) = \mathrm{Var}\,y_1(j)$; let $\hat{H}_1$ denote the first estimator. When using the second method, $y(j) = y_2(j)$ and $\sigma^2(j) = \mathrm{Var}\,y_2(j)$; let $\hat{H}_2$ denote the second estimator.
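The whole pipeline above can be sketched in a few lines using the PyWavelets package (`pywt`, assumed installed). This is an illustrative implementation, not the authors' code: the mapping of `wavedec` detail levels to octaves, the default $j_2$, and the boundary handling are my choices.

```python
import numpy as np
import pywt  # PyWavelets, assumed installed
from scipy.special import digamma, zeta

def hurst_wavelet(x, j1=3, j2=None, wavelet="db3", method=1):
    """Weighted least-squares Hurst estimate from DWT detail coefficients.

    method=1: y1(j) = log2 S(j) - g(j)   (bias-corrected, log-after-average)
    method=2: y2(j) = mean of log2 d^2   (log-before-average)
    """
    max_level = pywt.dwt_max_level(len(x), pywt.Wavelet(wavelet).dec_len)
    j2 = max_level if j2 is None else min(j2, max_level)
    coeffs = pywt.wavedec(x, wavelet, level=j2)
    # coeffs = [a_{j2}, d_{j2}, d_{j2-1}, ..., d_1]; d_j sits at index j2-j+1
    js = np.arange(j1, j2 + 1)
    y, var = [], []
    for j in js:
        d2 = coeffs[j2 - j + 1] ** 2
        n_j = len(d2)
        if method == 1:
            g = digamma(n_j / 2) / np.log(2) - np.log2(n_j / 2)
            y.append(np.log2(d2.mean()) - g)
            var.append(zeta(2, n_j / 2) / np.log(2) ** 2)
        else:
            y.append(np.mean(np.log2(d2)))
            var.append(zeta(2, 0.5) / (n_j * np.log(2) ** 2))
    y, var = np.array(y), np.array(var)
    # explicit weighted least-squares slope, as in the formula above
    T0 = np.sum(1 / var); T1 = np.sum(js / var); T2 = np.sum(js ** 2 / var)
    w = (T0 * js - T1) / (var * (T0 * T2 - T1 ** 2))
    return (np.sum(w * y) - 1) / 2
```

As a sanity check, for ordinary Brownian motion (the cumulative sum of white noise, $H = 0.5$) both methods should return an estimate near 0.5.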

Variance Comparison 

The variances of $\hat{H}_1$ and $\hat{H}_2$ can be compared via a simple theoretical analysis. In view of Equation (19), the variance of $\hat{H}$ is
$$\mathrm{Var}\,\hat{H} = \frac{1}{4}\sum_{j=j_1}^{j_2} \omega^2(j)\,\sigma^2(j).$$
When $n_j$ is large, recall the asymptotic form of $\mathrm{Var}\,y_1(j)$ (see [19]),
$$\mathrm{Var}\,y_1(j) \approx 2/(n_j \ln^2 2),$$
and recall Equation (17),
$$\mathrm{Var}\,y_2(j) = \zeta(2, 1/2)/(n_j \ln^2 2).$$
So when $n_j$ is large, the asymptotic form of the ratio is
$$\frac{\mathrm{Var}\,\hat{H}_2}{\mathrm{Var}\,\hat{H}_1} \approx \frac{\zeta(2, 1/2)}{2} \approx 2.47.$$
The variance of $\hat{H}_1$ is smaller than that of $\hat{H}_2$.
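The constant $\zeta(2, 1/2)$ equals $\pi^2/2$, so the asymptotic ratio is $\pi^2/4 \approx 2.47$. A quick numerical check (my own, using SciPy's Hurwitz zeta):

```python
from math import pi
from scipy.special import zeta

# asymptotic Var(H2_hat) / Var(H1_hat) = zeta(2, 1/2) / 2 = pi^2 / 4
ratio = zeta(2, 0.5) / 2
print(round(ratio, 2))  # ~2.47
```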
Please note that nondecimated wavelet coefficients have been used in wavelet-based estimators, since they can decrease the variance due to their redundancy [26,27]. However, they also increase the correlations among the wavelet coefficients, so when using nondecimated wavelet coefficients, we should take the logarithm first. It is therefore suitable to reduce the variance of the second estimator via nondecimated wavelet coefficients. To further account for the possible outliers caused by the logarithmic transform, Kang & Vidakovic [27] suggest using medians for the estimation of the Hurst parameter in this case. This method is denoted by MEDL.

2.3. Calculation of Wavelet Coefficients

According to multiresolution analysis (MRA), the wavelet coefficients can be calculated by the fast pyramidal algorithm. The scaling function $\phi$ and the wavelet $\psi$ satisfy the so-called two-scale equations:
$$\phi(t/2) = \sqrt{2}\sum_n u_n\, \phi(t-n),$$
$$\psi(t/2) = \sqrt{2}\sum_n v_n\, \phi(t-n),$$
where $\{u_n\}$ and $\{v_n\}$ are two sequences belonging to $l^2$.
Define the approximation coefficients $a_X(j,k)$:
$$a_X(j,k) := \int_{\mathbb{R}} \phi_{j,k}(t)\, X(t)\, dt,$$
where $\phi_{j,k}(t) = 2^{-j/2}\phi(2^{-j}t - k)$.
So $d_X(j,k)$ can be calculated by the fast pyramidal algorithm:
$$a_X(j,k) = \sum_n u_n\, a_X(j-1, 2k+n),$$
$$d_X(j,k) = \sum_n v_n\, a_X(j-1, 2k+n).$$
In view of the above formulas, the $a_X(0,\cdot)$ are obtained via an integral. In practice, however, the data we obtain are always discrete and finite, so the $a_X(0,\cdot)$ cannot be obtained by integration in continuous time. When the sampling frequency is high and the scale of the wavelet transform is small, the typical approach is to set [35,36,37]
$$a_X(0,k) = X(k),$$
where $\{X(k), k \in \mathbb{Z}, 1 \leq k \leq n\}$ is the discrete and finite FBM.
In view of Equations (25) and (26), the number of available wavelet coefficients is halved at each octave, so $n_j \approx n\, 2^{-j}$.
Remark 2.
For a wavelet whose time support is finite (or decays very fast as $|t| \to \infty$), an increase in the number of vanishing moments $N$ comes with an enlargement of the time support [20]. In the case of finite data, because of the boundary effects of the wavelet transform, this leads to a decrease in the number of available wavelet coefficients $n_j$ at each octave.

2.4. The Initialization Method

The discrete sampling of a continuous process $X(t)$ usually implies an irrevocable loss of information about $X(t)$ [35]. So the approach shown in Equation (27) introduces errors into the $d_X(j,k)$. It is known [14,18] that these initialization errors are significant at small octaves but quickly decrease with increasing $j$; for large $j$, the initialization issue can be ignored. Veitch et al. [35] introduced an initialization method for discrete time series which has proved effective for correcting the initialization errors in the case of long-range dependent processes.
This initialization method is based on the stochastic version of the Shannon sampling theorem [35,38]. Consider a bandlimited stationary stochastic process $\{X(t), t \in \mathbb{R}\}$ and construct $\tilde{X}(t)$ by
$$\tilde{X}(t) = \sum_{n=-\infty}^{\infty} X(n)\,\mathrm{sinc}(t-n), \quad \text{where } \mathrm{sinc}(t) = \frac{\sin \pi t}{\pi t}.$$
The $\tilde{X}(t)$ is bandlimited and has the same spectral density as $X(t)$ in the frequency band $[-1/2, 1/2]$ (and zero outside it). It is easy to check that
$$\{\tilde{X}(k), k \in \mathbb{Z}\} = \{X(k), k \in \mathbb{Z}\}.$$
Furthermore,
$$a_X(0,k) = \int_{\mathbb{R}} \phi(t-k)\,\tilde{X}(t)\,dt = \sum_{n=-\infty}^{\infty} X(n) \int_{\mathbb{R}} \phi(t-k)\,\mathrm{sinc}(t-n)\,dt = \sum_{n=-\infty}^{\infty} X(n)\, I(k-n),$$
where $I(m) = \int_{\mathbb{R}} \phi(t)\,\mathrm{sinc}(t+m)\,dt$. The sequence $\{I(m)\}$ is calculated in [35].
Please note that because of the boundary effects, the initialization will lead to the decrease of the number of available wavelet coefficients n j .
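The sequence $I(m)$ can be approximated numerically from the scaling function produced by the cascade algorithm. The following sketch (assuming PyWavelets; the truncation range `m_range` and the centering convention of `mode="same"` are my choices, not from [35]) approximates $I(m)$ by a Riemann sum and applies the initialization as a discrete convolution:

```python
import numpy as np
import pywt  # PyWavelets, assumed installed

def init_sequence(wavelet="db3", m_range=10, level=10):
    # phi sampled on a fine grid via the cascade algorithm
    phi, _psi, t = pywt.Wavelet(wavelet).wavefun(level=level)
    dt = t[1] - t[0]
    ms = np.arange(-m_range, m_range + 1)
    # I(m) = int phi(t) sinc(t + m) dt, approximated by a Riemann sum;
    # np.sinc is the normalized sinc sin(pi x)/(pi x), as in the text
    I = np.array([np.sum(phi * np.sinc(t + m)) * dt for m in ms])
    return ms, I

def init_approx_coeffs(x, wavelet="db3", m_range=10):
    # a_X(0, k) = sum_n X(n) I(k - n): a discrete convolution with {I(m)},
    # truncated to |m| <= m_range and centered by mode="same"
    _ms, I = init_sequence(wavelet, m_range)
    return np.convolve(x, I, mode="same")
```

Since $\sum_{m \in \mathbb{Z}} \mathrm{sinc}(t+m) = 1$, the truncated filter should sum to approximately $\int \phi(t)\,dt$, which gives a cheap consistency check on the numerical approximation.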

3. Simulation of FBM

To study the statistical performance of the two estimators in the case of FBM, numerical simulation of FBM is conducted. This section briefly introduces two simulation methods for FBM [39,40,41,42]. Let $\{X(t), t \in [0,1]\}$ be a mean-zero fractional Brownian motion with Hurst parameter $H \in (0,1)$.

The Cholesky Method 

The Cholesky method uses the Cholesky decomposition of the covariance matrix. The FBM generated by this method is exact in the sense of the covariance structure, but the method is slow.
Let $\Sigma = (\Sigma_{i,j})$ be the covariance matrix of the FBM, where $\Sigma_{i,j} = \mathrm{Cov}(X(t_i), X(t_j))$, $t_i = i/n$, $t_j = j/n$, $i,j = 1, \dots, n$. Conduct the Cholesky decomposition $\Sigma = A A^{\top}$.
Finally, $X = (X(t_1), \dots, X(t_n))^{\top} = AZ$ is the generated FBM, where $Z = (Z_1, \dots, Z_n)^{\top}$ and $Z_1, \dots, Z_n$ are independent and identically distributed $N(0,1)$.
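The procedure above fits in a few lines of NumPy (a minimal sketch; the grid and seeding are illustrative, and the $O(n^3)$ factorization is what makes the method slow):

```python
import numpy as np

def fbm_cholesky(n, H, seed=None):
    """Exact FBM sample on t_i = i/n via Cholesky factorization."""
    rng = np.random.default_rng(seed)
    t = np.arange(1, n + 1) / n
    ti, tj = np.meshgrid(t, t, indexing="ij")
    # Cov(X(t_i), X(t_j)) = (t_i^{2H} + t_j^{2H} - |t_i - t_j|^{2H}) / 2
    cov = 0.5 * (ti ** (2 * H) + tj ** (2 * H) - np.abs(ti - tj) ** (2 * H))
    A = np.linalg.cholesky(cov)        # cov = A @ A.T
    return A @ rng.standard_normal(n)  # X = A Z

x = fbm_cholesky(256, H=0.7, seed=42)
```

For any $H$, $\mathrm{Var}\,X(1) = 1$ under this covariance, which gives a simple Monte Carlo sanity check.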

The Circulant Embedding Method 

The simulation procedure is based on the method of circulant embedding. The algorithm of circulant embedding was originally proposed by Davies and Harte [39] and later generalized by Dietrich and Newsam, among others (see [40,41,42] and the references therein). It has been regarded as a fast and exact method of simulating stationary Gaussian processes [42]. I use this method to generate fractional Gaussian noise and construct fractional Brownian motion via the cumulative sum of the generated noise [41].
First consider the fractional Gaussian noise, a zero-mean stationary Gaussian process $\{Z_k, k = 1, \dots, n\}$ with covariance
$$\mathrm{Cov}(Z_k, Z_{k+\Delta k}) = \frac{|\Delta k + 1|^{2H} + |\Delta k - 1|^{2H} - 2|\Delta k|^{2H}}{2}, \quad \Delta k = 0, \dots, n.$$
Such stationary Gaussian noise can be generated efficiently and exactly by the method of circulant embedding and the fast Fourier transform [41,42]. The fractional Brownian motion $\{X(t), t \in [0,1]\}$ is then constructed on a uniformly spaced grid via the cumulative sum [41]
$$X(t_k) = n^{-H}\sum_{i=1}^{k} Z_i, \quad k = 1, \dots, n.$$
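A compact sketch of the Davies–Harte construction follows (my own implementation, not the authors' code; the eigenvalue clipping tolerance is a numerical safeguard):

```python
import numpy as np

def fgn_circulant(n, H, seed=None):
    """Exact FGN of length n via circulant embedding and the FFT."""
    rng = np.random.default_rng(seed)
    k = np.arange(n + 1)
    # autocovariance r(k) = (|k+1|^{2H} + |k-1|^{2H} - 2|k|^{2H}) / 2
    r = 0.5 * (np.abs(k + 1) ** (2 * H) + np.abs(k - 1) ** (2 * H)
               - 2.0 * k ** (2 * H))
    row = np.concatenate([r, r[-2:0:-1]])  # first row of the 2n circulant
    lam = np.fft.fft(row).real             # its eigenvalues
    if lam.min() < -1e-8:
        raise ValueError("circulant embedding is not nonnegative definite")
    lam = np.clip(lam, 0.0, None)
    m = 2 * n
    xi = rng.standard_normal(m) + 1j * rng.standard_normal(m)
    # real part of F (sqrt(lam/m) xi) has the target Toeplitz covariance
    return np.fft.fft(np.sqrt(lam / m) * xi)[:n].real

def fbm_circulant(n, H, seed=None):
    """FBM on t_k = k/n as the scaled cumulative sum of FGN."""
    return np.cumsum(fgn_circulant(n, H, seed)) / n ** H
```

For $H = 0.5$ the autocovariance reduces to white noise, so the generated FGN should have unit variance, which is an easy check on the scaling.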

4. Simulation Results and Discussions

This section focuses on a numerical study of the commonly used wavelet-based estimator (the first estimator), for which a comprehensive and detailed numerical study of the estimation of fractional Brownian motion is still lacking, especially regarding its empirical bias and the selection of parameters. The second estimator is also compared with the commonly used estimator in this section for the purpose of empirical bias analysis. Unless otherwise specified, the sample trajectories of FBM used in this section are generated by the circulant embedding method.

4.1. Selection of Parameters

It is a key step to select the octaves $j$ and the number of vanishing moments $N$ (or the wavelet) before estimation. This subsection first studies the selection of these parameters for the later estimations. For the octaves, the lower bound $j_1$ and the upper bound $j_2$ need to be determined. The $j_2$ is chosen as large as possible; in practice, it is set equal to
$$j_2 = \log_2 n - C,$$
where $n$ denotes the data length and $C$ is a constant (with value $\log_2(2N+1)$, corresponding to the length of the support of the wavelet [20]). As discussed in Section 2, the initialization of $a_X(0,k)$ given in (27) introduces errors into the $d_X(j,k)$. It is known [14,18] that the initialization errors are significant at small octaves but decrease with increasing $j$, so a small octave cannot be chosen as $j_1$. Based on prior studies [18,20], this paper selects $j_1$ by the minimum mean square error (MSE), where the MSE is defined as
$$\mathrm{MSE}(\hat{H}) := \mathbb{E}(\hat{H} - H)^2 = (\mathbb{E}\hat{H} - H)^2 + \mathrm{var}(\hat{H}).$$
It allows the tradeoff between variance and bias. The results for the selection of j 1 are shown in Table 1 and Figure 1.
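In Monte Carlo form, this criterion is simply squared bias plus variance over replicates. A small NumPy helper (the replicate values below are hypothetical, for illustration only) makes the tradeoff explicit over candidate values of $j_1$:

```python
import numpy as np

def empirical_mse(estimates, H_true):
    """MSE = bias^2 + variance, computed from Monte Carlo replicates."""
    est = np.asarray(estimates, dtype=float)
    bias = est.mean() - H_true
    return bias ** 2 + est.var()

# hypothetical replicates of H_hat for two candidate j1 values
replicates = {3: [0.46, 0.48, 0.47], 4: [0.49, 0.52, 0.45]}
best_j1 = min(replicates, key=lambda j1: empirical_mse(replicates[j1], 0.5))
```

With population variance (`ddof=0`), the decomposition is exact: `empirical_mse` equals the mean squared deviation of the replicates from the true $H$.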
Figure 1 shows that increasing $j_1$ decreases the bias and increases the standard deviation for all values of $H$, so it is suitable to choose $j_1$ by the minimum MSE. From Table 1, when $H > 0.5$, $j_1 = 3$ is chosen by minimum MSE. When $0.4 \leq H \leq 0.5$, the RMSE for $j_1 = 3$ is close to that of the selected $j_1 = 4$. So, considering most values of $H$, $j_1 = 3$ should be chosen in the case of FBM.
Please note that the results of Figure 1 and Table 1 are based on long series, in which case the variances for all values of $H$ are small. For small values of $H$, the bias is large and the MSE is mainly determined by the bias, so the estimator at small $H$ tends to select a large $j_1$, which leads to a small bias. I now study the effects of the data length and the selection of $j_1$ at different data lengths. The results are shown in Figure 2 and Table 2.
Figure 2 shows that the data length has little effect on the bias, but decreasing it increases the standard deviation for all values of $H$. The increase of the standard deviation may affect the selection of $j_1$. Thus, I continue to use the minimum MSE to select $j_1$ at different data lengths for the tradeoff between variance and bias; the results of the selection are shown in Table 2. From Table 2, it can be seen that the selected $j_1$ increases with the data length, and the smaller the value of $H$, the faster the increase. Based on the simulation results, the following formula is given as an explanation:
$$\min_{j_1} \mathrm{MSE}(\hat{H}) = \min_{j_1}\left\{\mathrm{Bias}^2(H, j_1) + \mathrm{var}(j_1, n)\right\},$$
where $\mathrm{Bias}(H, j_1)$ denotes the bias of the estimator, which decreases as $H$ increases and as $j_1$ increases, and $\mathrm{var}(j_1, n)$ denotes the variance, which decreases as $j_1$ decreases and as $n$ increases. When $n$ increases, the variance becomes smaller and the selected $j_1$ tends to increase for the tradeoff between variance and bias. The smaller the value of $H$, the larger the bias, and the more the selected $j_1$ increases.
For the wavelet, this paper chooses the classical Daubechies wavelets, which are orthonormal and have compact support. According to Remark 1, the number of vanishing moments must satisfy $N \geq 2$. To analyze the effect of $N$, I use $N = 1, \dots, 8$ to estimate the Hurst parameter of FBM. The results are shown in Figure 3.
From Figure 3, when $N \geq 2$, increasing $N$ brings no improvement to the performance of the estimator. Besides, according to Remark 2, a large $N$ causes a loss of available wavelet coefficients. So $N = 3$ is an appropriate choice.
Finally, this subsection studies the performance of the estimator using various wavelets for the further choice of wavelet. The results are shown in Figure 4. Here db3 stands for the Daubechies wavelet with three vanishing moments, sym4 for the Symlet wavelet with four vanishing moments, dmey for the discrete Meyer wavelet, and bior3.1 for the biorthogonal spline wavelet with orders $N_r = 3$ (vanishing moments) and $N_d = 1$. Since the Symlet wavelet with three vanishing moments has the same filters as db3, this part uses the Symlet with four vanishing moments. The first three wavelets are orthogonal and have compact support; the last is biorthogonal. It can be seen in Figure 4 that the performance using the first three wavelets is almost the same, except for the standard deviation of dmey at $H = 0.95$. The biorthogonal spline wavelet performs worse than the orthogonal wavelets except at $H = 0.1$. This is due to the large bias caused by the strong correlations of biorthogonal wavelet coefficients, and it is consistent with the conclusion of Lemma 1: we need an orthogonal, compactly supported wavelet to control these correlations via the vanishing moments.

4.2. Results and Discussions on Empirical Bias

This subsection conducts a detailed numerical analysis of the empirical bias that exists in the commonly used wavelet-based estimator (the first estimator). Based on the previous analysis, the following three possible causes of the empirical bias are identified.
  • The initialization for a X ( 0 , k ) given in (27) introduces errors in d X ( j , k ) , and the initialization errors are significant on small octaves but decrease with increasing j.
  • The inaccurate bias correction for E log 2 S ( j ) log 2 E S ( j ) (under independent assumptions) caused by correlations of wavelet coefficients.
  • The simulation method for FBM is not exact enough, which produces the empirical bias.
From the results of Section 4.1, I have the following information on the empirical bias:
  • Increasing $N$ or changing the orthogonal wavelet brings no improvement to the empirical bias; choosing a biorthogonal wavelet makes the empirical bias worse.
  • Increasing $j_1$ decreases the empirical bias.
  • The empirical bias increases as $H$ decreases. When choosing $j_1 = 3$ and $N = 3$, the empirical bias of the estimator $\hat{H}_1$ can be ignored for $H \geq 0.4$. So the estimator $\hat{H}_1$ is suitable for detecting long-range dependence (described by $H > 0.5$).
The fact that increasing $j_1$ decreases the empirical bias is consistent with the first cause. As we know, the larger the value of $H$, the smoother the sample path of FBM, and the more exact the initialization given in (27); this is consistent with the fact that the empirical bias increases as $H$ decreases. So I conclude that the initialization errors caused by (27) contribute to the empirical bias.
The first item indicates that the empirical bias is related to the correlations of the wavelet coefficients. However, this effect can be mitigated (and perhaps eliminated) via the selection of an orthogonal, compactly supported wavelet.
Next, after choosing the orthogonal compact supported wavelet (db3) and fixing j 1 = 3 , this study analyzes the latter two causes via comparing with the second estimator and comparison of simulation methods respectively.

Comparison of Estimators 

Since the unbiasedness of the second estimator $\hat{H}_2$ holds naturally, without independence assumptions on the wavelet coefficients, this part compares $\hat{H}_2$ with $\hat{H}_1$ to study its empirical bias. The length of the simulated data is $n = 2^{18}$, $(j_1, j_2)$ are chosen as $(3, 15)$, and the wavelet coefficients are computed using the classical Daubechies wavelet with $N = 3$ vanishing moments. The results are shown in Figure 5 and Table 3.
Table 3 shows the results for the estimate of the ratio given in Equation (22). It indicates that the variance of $\hat{H}_2$ is about twice that of $\hat{H}_1$, which roughly satisfies the theoretical result given in (22). From Figure 5, it can be seen that when $H < 0.4$, both $\hat{H}_1$ and $\hat{H}_2$ have the same obvious bias, despite the theoretical unbiasedness of the two estimators under the independence assumptions on the wavelet coefficients. Besides, as in Table 3, the standard deviation (Std) of $\hat{H}_2$ is larger than that of $\hat{H}_1$.
Because the empirical bias also exists in $\hat{H}_2$, whose unbiasedness holds naturally, and is the same as that of $\hat{H}_1$, I conclude that the empirical bias of $\hat{H}_1$ is not due to the inaccurate bias correction for $\mathbb{E}\log_2 S(j) \neq \log_2 \mathbb{E}\,S(j)$ caused by the correlations of the wavelet coefficients.
Besides, considering the variances of the two estimators, we should choose the first estimator $\hat{H}_1$ for the estimation of the Hurst parameter.

Comparison of Simulation Methods 

For the third cause, this part applies $\hat{H}_1$ to FBM generated exactly by the Cholesky method for comparison. The results are shown in Figure 6.
From Figure 6, it can be seen that the estimates for the FBM generated by the Cholesky method and by the circulant embedding method have almost the same empirical bias. I conclude that the simulation method is not the cause of the empirical bias.

4.3. Analysis of the Initialization Method

It has been shown above that the empirical bias of $\hat{H}_1$ is due to the initialization errors caused by (27). The initialization method given in (29) has proved effective against such errors in the case of long-range dependent processes [35]. Although FBM is not a bandlimited stationary stochastic process, I check whether this method is suitable for FBM.
This subsection applies the estimator $\hat{H}_1$ with this initialization to FBM and compares it with the initialization by the data itself (i.e., by Equation (27)). The length of the simulated data is $n = 2^{18}$, $(j_1, j_2)$ are chosen as $(3, 15)$, and the wavelet coefficients are computed using the classical Daubechies wavelet with $N = 3$ vanishing moments. The results are shown in Figure 7.
Figure 7 shows that both the biases and the standard deviations for the two initializations are almost the same, which indicates that the initialization method given in (29) is ineffective in the case of FBM. Besides, it is known that this method (Init2 in Figure 7) decreases the number of available wavelet coefficients $n_j$ due to boundary effects, which may increase the variance of the estimator. So I suggest choosing the initialization of $a_X(0,k)$ given in (27) in future work.

4.4. Analysis of Noise Effects

Finally, this subsection analyzes the effects of noise on the first estimator, as may occur in real data. Various independent and identically distributed noises are added to the generated FBM. The signal-to-noise ratio (SNR) is defined as follows:
SNR = var(X(1)) / var(ε),
where ε denotes the noise. SNR = 2 is set in this subsection.
The results are shown in Figure 8. They show that noise affects the performance and can increase the bias. The effects of Gaussian and uniform noises are almost the same, while Cauchy noise increases the bias more than the other two.
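The noise setup can be sketched as follows. This is a minimal illustration under two stated assumptions: for standard FBM on an integer grid var(X(1)) = 1, and since Cauchy noise has no finite variance, using sqrt(var(X(1))/SNR) as its scale parameter is a convention of this sketch rather than the paper's definition.

```python
import numpy as np

def add_noise(x, snr=2.0, var_x1=1.0, kind="gaussian", rng=None):
    """Add i.i.d. noise with var(eps) = var(X(1)) / SNR."""
    rng = np.random.default_rng() if rng is None else rng
    var_eps = var_x1 / snr
    n = len(x)
    if kind == "gaussian":
        eps = rng.normal(0.0, np.sqrt(var_eps), n)
    elif kind == "uniform":
        a = np.sqrt(3.0 * var_eps)       # var of U(-a, a) is a^2 / 3
        eps = rng.uniform(-a, a, n)
    elif kind == "cauchy":
        # Cauchy noise has infinite variance; sqrt(var_eps) is used as its
        # scale parameter by convention in this sketch.
        eps = np.sqrt(var_eps) * rng.standard_cauchy(n)
    else:
        raise ValueError(f"unknown noise kind: {kind}")
    return x + eps
```

With SNR = 2 as in this subsection, the Gaussian and uniform noises both have variance 0.5, while the Cauchy noise produces occasional very large outliers, which is consistent with its stronger effect on the bias in Figure 8.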

5. Conclusions

This paper presented a numerical simulation study of wavelet-based estimators in the case of FBM, covering the selection of parameters and the analysis of the empirical bias, neither of which had been studied comprehensively before. It thereby complements previous numerical simulation studies of wavelet-based estimators, which are not comprehensive in the case of FBM.
Results of the parameter selection showed that increasing the lower bound j1 decreases the bias and increases the standard deviation for all H, and suggested j1 = 3 via the minimum mean square error at a long data length n = 2^18. In addition, it was found that the empirical bias increased as H decreased and could be ignored for H ≥ 0.4 when j1 = 3 and N = 3. The effects of n on performance and the relation of the selected j1 to n were also examined via simulation: the data length had little effect on the bias, but shorter data increased the standard deviation for all H. The selected j1 increased with the data length, and the smaller the value of H, the faster the increase. For the vanishing moments N, when N ≥ 2, increasing N made no improvement to the performance of the estimator. Changing the orthogonal wavelet made no improvement to the empirical bias, and choosing a biorthogonal wavelet made it worse.
The analysis of the empirical bias was conducted via a comparison of two available estimators and a comparison of simulation methods. The results showed that the empirical bias was due to the initialization errors caused by discrete sampling and was not related to the simulation method. When an appropriate orthogonal compactly supported wavelet is chosen, the empirical bias is almost unrelated to the inaccurate bias correction caused by correlations of wavelet coefficients. Unfortunately, the initialization method given in (29), which has proved effective in the case of long-range dependent processes, made no improvement to the empirical bias. All these results will guide my future studies.

Funding

This research was funded by the National Natural Science Foundation of China grant number 61903309, the Major Project for New Generation of AI grant number 2018AAA0100400, the National Natural Science Foundation of Hunan grant number 2018JJ2098 and the Fundamental Research Funds for the Central Universities grant number JBK1806002.

Acknowledgments

The author is grateful to the anonymous reviewers for their time reviewing my paper and their valuable comments and suggestions, which helped improve this paper.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Li, Q.; Liang, S.Y.; Yang, J.; Li, B. Long range dependence prognostics for bearing vibration intensity chaotic time series. Entropy 2016, 18, 23. [Google Scholar] [CrossRef] [Green Version]
  2. Liu, C.; Yang, Z.; Shi, Z.; Ma, J.; Cao, J. A gyroscope signal denoising method based on empirical mode decomposition and signal reconstruction. Sensors 2019, 19, 5064. [Google Scholar] [CrossRef] [Green Version]
  3. Li, X.; Chen, W.; Chan, C.; Li, B.; Song, X. Multi-sensor fusion methodology for enhanced land vehicle positioning. Inform. Fusion 2019, 46, 51–62. [Google Scholar] [CrossRef]
  4. Dou, C.; Wei, X.; Lin, J. Fault diagnosis of gearboxes using nonlinearity and determinism by generalized Hurst exponents of shuffle and surrogate data. Entropy 2018, 20, 364. [Google Scholar] [CrossRef] [Green Version]
  5. Wu, L.; Chen, L.; Ding, Y.; Zhao, T. Testing for the source of multifractality in water level records. Physica A 2018, 508, 824–839. [Google Scholar] [CrossRef]
  6. Graves, T.; Gramacy, R.; Watkins, N.; Franzke, C. A brief history of long memory: Hurst, Mandelbrot and the road to ARFIMA, 1951–1980. Entropy 2017, 19, 437. [Google Scholar] [CrossRef] [Green Version]
  7. Hurst, H.E. Long-term storage capacity of reservoirs. Trans. Am. Soc. Civ. Eng. 1951, 116, 770–799. [Google Scholar]
  8. Mandelbrot, B.; Van Ness, J. Fractional Brownian Motions, Fractional Noises and Applications. SIAM Rev. 1968, 10, 422–437. [Google Scholar] [CrossRef]
  9. Deng, Z.; Wang, J.; Liang, X.; Liu, N. Function extension based real-time wavelet de-noising method for projectile attitude measurement. Sensors 2020, 20, 200. [Google Scholar] [CrossRef] [Green Version]
  10. He, K.; Xia, Z.; Si, Y.; Lu, Q.; Peng, Y. Noise reduction of welding crack AE signal based on EMD and wavelet packet. Sensors 2020, 20, 761. [Google Scholar] [CrossRef] [Green Version]
  11. Nicolis, O.; Mateu, J.; Contreras-Reyes, J.E. Wavelet-based entropy measures to characterize two-dimensional fractional Brownian fields. Entropy 2020, 22, 196. [Google Scholar] [CrossRef] [Green Version]
  12. Li, J.; Ke, L.; Du, Q. Classification of heart sounds based on the wavelet fractal and twin support vector machine. Entropy 2019, 21, 472. [Google Scholar] [CrossRef] [Green Version]
  13. Ramírez-Pacheco, J.C.; Trejo-Sánchez, J.A.; Cortez-González, J.; Palacio, R.R. Classification of fractal signals using two-parameter non-extensive wavelet entropy. Entropy 2017, 19, 224. [Google Scholar] [CrossRef] [Green Version]
  14. Flandrin, P. Wavelet analysis and synthesis of fractional Brownian motion. IEEE Trans. Inf. Theory 1992, 38, 910–917. [Google Scholar] [CrossRef]
  15. Abry, P.; Gonçalvès, P.; Flandrin, P. Wavelets, spectrum analysis and 1/f processes. In Wavelets and Statistics; Antoniadis, A., Oppenheim, G., Eds.; Springer: New York, NY, USA, 1995; Section 2; Volume 103, pp. 15–29. [Google Scholar] [CrossRef]
  16. Delbeke, L.; Van Assche, W. A wavelet based estimator for the parameter of self-similarity of fractional Brownian motion. In Proceedings of the 3rd International Conference on Approximation and Optimization in the Caribbean (Puebla, 1995), Puebla, Mexico, 8–13 October 1995; Volume 24, pp. 65–76. [Google Scholar]
  17. Abry, P.; Veitch, D. Wavelet analysis of long-range-dependent traffic. IEEE Trans. Inf. Theory 1998, 44, 2–15. [Google Scholar] [CrossRef]
  18. Veitch, D.; Abry, P. A wavelet-based joint estimator of the parameters of long-range dependence. IEEE Trans. Inf. Theory 1999, 45, 878–897. [Google Scholar] [CrossRef]
  19. Abry, P.; Flandrin, P.; Taqqu, M.; Veitch, D. Wavelets for the analysis, estimation and synthesis of scaling data. In Self-Similar Network Traffic and Performance Evaluation; Park, K., Willinger, W., Eds.; John Wiley & Sons, Inc.: Hoboken, NJ, USA, 2000; pp. 39–88. [Google Scholar]
  20. Abry, P.; Flandrin, P.; Taqqu, M.S.; Veitch, D. Self-similarity and long-range dependence through the wavelet lens. In Theory and Applications of Long-Range Dependence; Doukhan, P., Oppenheim, G., Taqqu, M., Eds.; Springer: Berlin/Heidelberg, Germany, 2003; pp. 527–556. [Google Scholar]
  21. Abry, P.; Helgason, H.; Pipiras, V. Wavelet-based analysis of non-Gaussian long-range dependent processes and estimation of the Hurst parameter. Lith. Math. J. 2011, 51, 287–302. [Google Scholar] [CrossRef]
  22. Rea, W.; Oxley, L.; Reale, M.; Brown, J. Estimators for long range dependence: An empirical study. arXiv 2009, arXiv:0901.0762. [Google Scholar]
  23. Soltani, S.; Simard, P.; Boichu, D. Estimation of the self-similarity parameter using the wavelet transform. Signal Process. 2004, 84, 117–123. [Google Scholar] [CrossRef]
  24. Shen, H.; Zhu, Z.; Lee, T.C. Robust estimation of the self-similarity parameter in network traffic using wavelet transform. Signal Process. 2007, 87, 2111–2124. [Google Scholar] [CrossRef]
  25. Park, J.; Park, C. Robust estimation of the Hurst parameter and selection of an onset scaling. Stat. Sin. 2009, 19, 1531–1555. [Google Scholar]
  26. Feng, C.; Vidakovic, B. Estimation of the Hurst exponent using trimean estimators on nondecimated wavelet coefficients. arXiv 2017, arXiv:1709.08775. [Google Scholar]
  27. Kang, M.; Vidakovic, B. MEDL and MEDLA: Methods for assessment of scaling by medians of log-squared nondecimated wavelet coefficients. arXiv 2017, arXiv:1703.04180. [Google Scholar]
  28. Wu, L.; Ding, Y. Estimation of self-similar Gaussian fields using wavelet transform. Int. J. Wavelets Multiresolut. Inf. Process. 2015, 13, 1550044. [Google Scholar] [CrossRef]
  29. Wu, L.; Ding, Y. Wavelet-based estimator for the Hurst parameters of fractional Brownian sheet. Acta Math. Sci. 2017, 37B, 205–222. [Google Scholar] [CrossRef] [Green Version]
  30. Wu, L.; Ding, Y. Wavelet-based estimations of fractional Brownian sheet: Least squares versus maximum likelihood. J. Comput. Appl. Math. 2020, 371, 112609. [Google Scholar] [CrossRef]
  31. Bardet, J.M.; Lang, G.; Oppenheim, G.; Philippe, A.; Stoev, S.; Taqqu, M.S. Semi-parametric estimation of the long-range dependence parameter: A survey. In Theory and Applications of Long-Range Dependence; Doukhan, P., Oppenheim, G., Taqqu, M., Eds.; Springer: Berlin/Heidelberg, Germany, 2003; pp. 557–577. [Google Scholar]
  32. Tewfik, A.H.; Kim, M. Correlation structure of the discrete wavelet coefficients of fractional Brownian motion. IEEE Trans. Inf. Theory 1992, 38, 904–909. [Google Scholar] [CrossRef]
  33. Dijkerman, R.W.; Mazumdar, R.R. On the correlation structure of the wavelet coefficients of fractional Brownian motion. IEEE Trans. Inf. Theory 1994, 40, 1609–1612. [Google Scholar] [CrossRef]
  34. Abry, P.; Delbeke, L.; Flandrin, P. Wavelet based estimator for the self-similarity parameter of α-stable processes. In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing, Phoenix, AZ, USA, 15–19 March 1999; Volume 3, pp. 1729–1732. [Google Scholar] [CrossRef]
  35. Veitch, D.; Taqqu, M.S.; Abry, P. Meaningful MRA initialization for discrete time series. Signal Process. 2000, 80, 1971–1983. [Google Scholar] [CrossRef]
  36. Abry, P.; Flandrin, P. On the initialization of the discrete wavelet transform algorithm. IEEE Signal Process. Lett. 1994, 1, 32–34. [Google Scholar] [CrossRef] [Green Version]
  37. Veitch, D.; Abry, P.; Taqqu, M.S. On the automatic selection of the onset of scaling. Fractals 2003, 11, 377–390. [Google Scholar] [CrossRef]
  38. Papoulis, A.; Pillai, S.U. Probability, Random Variables and Stochastic Processes; Tata McGraw-Hill Education: New York, NY, USA, 2002. [Google Scholar]
  39. Davies, R.B.; Harte, D.S. Tests for Hurst effect. Biometrika 1987, 74, 95–101. [Google Scholar] [CrossRef]
  40. Dieker, T. Simulation of Fractional Brownian Motion. Master’s Thesis, University of Twente, Amsterdam, The Netherlands, 2004. [Google Scholar]
  41. Kroese, D.P.; Botev, Z.I. Spatial process simulation. In Stochastic Geometry, Spatial Statistics and Random Fields; Spodarev, E., Ed.; Springer: Berlin/Heidelberg, Germany, 2015; pp. 369–404. [Google Scholar] [CrossRef]
  42. Dietrich, C.; Newsam, G.N. Fast and exact simulation of stationary Gaussian processes through circulant embedding of the covariance matrix. SIAM J. Sci. Comput. 1997, 18, 1088–1107. [Google Scholar] [CrossRef]
Figure 1. The Bias, Std, and RMSE for the estimators: j1 is the lower bound of the octaves j. Std is the standard deviation, Bias = E(Ĥ) − H, and RMSE is the square root of the MSE. The values of Std, Bias, and RMSE are estimated from 1000 independent copies of FBM with length n = 2^18. The wavelet used is the Daubechies wavelet with N = 3 vanishing moments.
Figure 2. The Bias, Std, and RMSE for the estimators: n is the data length. Std is the standard deviation, Bias = E(Ĥ) − H, and RMSE is the square root of the MSE. The values of Std, Bias, and RMSE are estimated from 1000 independent copies of FBM with length n. The lower bound of the octaves j is chosen as j1 = 3. The wavelet used is the Daubechies wavelet with N = 3 vanishing moments.
Figure 3. The Bias, Std, and RMSE for the estimators: N is the number of vanishing moments of the Daubechies wavelet. Std is the standard deviation, Bias = E(Ĥ) − H, and RMSE is the square root of the MSE. The values of Std, Bias, and RMSE are estimated from 1000 independent copies of FBM with length n = 2^18. The lower bound of the octaves j is chosen as j1 = 3.
Figure 4. The Bias, Std, and RMSE for the estimators: db3 stands for the Daubechies wavelet with three vanishing moments, sym4 for the Symlet wavelet with four vanishing moments, dmey for the discrete Meyer wavelet, and bior3.1 for the biorthogonal spline wavelet with orders Nr = 3 (vanishing moments) and Nd = 1. The values of Std, Bias, and RMSE are estimated from 1000 independent copies of FBM with length n = 2^18. The lower bound of the octaves j is chosen as j1 = 3.
Figure 5. The Bias and Std for the estimators: M1 denotes the first estimator and M2 the second estimator. Std is the standard deviation, Bias = E(Ĥ) − H. The values of Std and Bias are estimated from 1000 independent copies of FBM with length n = 2^18.
Figure 6. The Bias for the estimators: circ denotes the circulant embedding method for simulation of FBM, and chol denotes the Cholesky method. Bias = E(Ĥ) − H. The values of Bias are estimated from 1000 independent copies of FBM with length n = 2^12. The lower bound of the octaves j is chosen as j1 = 3. The wavelet used is the Daubechies wavelet with N = 3 vanishing moments.
Figure 7. The Bias and Std for the estimators: Init1 denotes the initialization by the series itself (Equation (27)), and Init2 the initialization given by (29). Std is the standard deviation, Bias = E(Ĥ) − H. The values of Std and Bias are estimated from 1000 independent copies of FBM with length n = 2^18.
Figure 8. The Bias, Std, and RMSE for the estimators: orig stands for the original series without noise, gau for Gaussian noise, unm for uniform noise, and cau for Cauchy noise. The values of Std, Bias, and RMSE are estimated from 1000 independent copies of FBM with noise. The data length is n = 2^18. The lower bound of the octaves j is chosen as j1 = 3. The wavelet used is the Daubechies wavelet with N = 3 vanishing moments.
Table 1. Estimation quality for FBM series. On the left, the j1 minimizing the MSE and its Bias, Std, and RMSE are given. On the right, the same quantities with j1 = 3 are given for comparison. RMSE is the square root of the MSE. All results are estimated from 1000 independent copies of FBM with length n = 2^18. The wavelet used is the Daubechies wavelet with N = 3 vanishing moments.

H     j1(min MSE)  Bias     Std     RMSE   | j1  Bias     Std     RMSE
0.05  7            −0.0122  0.0133  0.0180 | 3   −0.1450  0.0030  0.1451
0.10  6            −0.0087  0.0091  0.0126 | 3   −0.0801  0.0031  0.0801
0.15  6            −0.0046  0.0090  0.0101 | 3   −0.0499  0.0030  0.0500
0.20  5            −0.0056  0.0064  0.0085 | 3   −0.0330  0.0031  0.0331
0.25  5            −0.0032  0.0063  0.0071 | 3   −0.0226  0.0032  0.0228
0.30  5            −0.0019  0.0063  0.0066 | 3   −0.0160  0.0032  0.0163
0.35  4            −0.0038  0.0045  0.0059 | 3   −0.0115  0.0032  0.0119
0.40  4            −0.0023  0.0047  0.0052 | 3   −0.0081  0.0033  0.0087
0.45  4            −0.0019  0.0048  0.0052 | 3   −0.0060  0.0033  0.0068
0.50  4            −0.0012  0.0048  0.0049 | 3   −0.0044  0.0033  0.0056
0.55  3            −0.0030  0.0034  0.0046 | 3   −0.0030  0.0034  0.0046
0.60  3            −0.0025  0.0036  0.0044 | 3   −0.0025  0.0036  0.0044
0.65  3            −0.0018  0.0034  0.0038 | 3   −0.0018  0.0034  0.0038
0.70  3            −0.0014  0.0035  0.0038 | 3   −0.0014  0.0035  0.0038
0.75  3            −0.0013  0.0037  0.0039 | 3   −0.0013  0.0037  0.0039
0.80  3            −0.0008  0.0036  0.0037 | 3   −0.0008  0.0036  0.0037
0.85  3            −0.0006  0.0037  0.0038 | 3   −0.0006  0.0037  0.0038
0.90  3            −0.0006  0.0037  0.0038 | 3   −0.0006  0.0037  0.0038
0.95  3            −0.0006  0.0038  0.0038 | 3   −0.0006  0.0038  0.0038
Table 2. Estimation quality for FBM series. On the left, the j1 minimizing the MSE and its Bias, Std, and RMSE are given. On the right, the same quantities with j1 = 3 are given for comparison. RMSE is the square root of the MSE. All results are estimated from 1000 independent copies of FBM. The wavelet used is the Daubechies wavelet with N = 3 vanishing moments.

H    n     j1(min MSE)  Bias     Std     RMSE   | j1  Bias     Std     RMSE
0.3  2^10  2            −0.0632  0.0473  0.0789 | 3   −0.0239  0.0776  0.0811
0.3  2^12  3            −0.0220  0.0305  0.0376 | 3   −0.0220  0.0305  0.0376
0.3  2^14  4            −0.0080  0.0202  0.0217 | 3   −0.0186  0.0136  0.0231
0.3  2^16  4            −0.0063  0.0096  0.0115 | 3   −0.0167  0.0065  0.0179
0.3  2^18  5            −0.0019  0.0063  0.0066 | 3   −0.0160  0.0032  0.0163
0.5  2^10  2            −0.0276  0.0479  0.0553 | 3   −0.0078  0.0779  0.0783
0.5  2^12  2            −0.0231  0.0202  0.0307 | 3   −0.0073  0.0312  0.0320
0.5  2^14  3            −0.0048  0.0142  0.0149 | 3   −0.0048  0.0142  0.0149
0.5  2^16  3            −0.0050  0.0068  0.0085 | 3   −0.0050  0.0068  0.0085
0.5  2^18  4            −0.0012  0.0048  0.0049 | 3   −0.0044  0.0033  0.0056
0.8  2^10  2            −0.0109  0.0526  0.0537 | 3   −0.0056  0.0878  0.0879
0.8  2^12  2            −0.0061  0.0233  0.0241 | 3   −0.0010  0.0359  0.0359
0.8  2^14  2            −0.0062  0.0106  0.0123 | 3   −0.0015  0.0159  0.0160
0.8  2^16  3            −0.0010  0.0074  0.0075 | 3   −0.0010  0.0074  0.0075
0.8  2^18  3            −0.0008  0.0036  0.0037 | 3   −0.0008  0.0036  0.0037
Table 3. Estimates of the variance ratio Var(Ĥ2)/Var(Ĥ1).

H                0.05  0.10  0.15  0.20  0.25  0.30  0.35  0.40  0.45  0.50
Var(Ĥ2)/Var(Ĥ1)  2.41  2.18  2.32  2.27  2.16  2.19  2.27  2.22  2.18  2.13

H                0.55  0.60  0.65  0.70  0.75  0.80  0.85  0.90  0.95
Var(Ĥ2)/Var(Ĥ1)  1.92  2.08  2.00  1.94  1.76  1.95  1.71  1.89  1.88
