Next Article in Journal
From Continuous-Time Chaotic Systems to Pseudo Random Number Generators: Analysis and Generalized Methodology
Next Article in Special Issue
A New Overdispersed Integer-Valued Moving Average Model with Dependent Counting Series
Previous Article in Journal
A Maximum Entropy Model of Bounded Rational Decision-Making with Prior Beliefs and Market Feedback
Previous Article in Special Issue
Count Data Time Series Modelling in Julia—The CountTimeSeries.jl Package and Applications
 
 
Font Type:
Arial Georgia Verdana
Font Size:
Aa Aa Aa
Line Spacing:
Column Width:
Background:
Article

Ordinal Pattern Dependence in the Context of Long-Range Dependence

Department of Mathematics, Siegen University, Walter-Flex-Straße 3, 57072 Siegen, Germany
*
Author to whom correspondence should be addressed.
Entropy 2021, 23(6), 670; https://doi.org/10.3390/e23060670
Submission received: 28 April 2021 / Revised: 18 May 2021 / Accepted: 19 May 2021 / Published: 26 May 2021
(This article belongs to the Special Issue Time Series Modelling)

Abstract

:
Ordinal pattern dependence is a multivariate dependence measure based on the co-movement of two time series. In strong connection to ordinal time series analysis, the ordinal information is taken into account to derive robust results on the dependence between the two processes. This article deals with ordinal pattern dependence for a long-range dependent time series including mixed cases of short- and long-range dependence. We investigate the limit distributions for estimators of ordinal pattern dependence. In doing so, we point out the differences that arise for the underlying time series having different dependence structures. Depending on these assumptions, central and non-central limit theorems are proven. The limit distributions for the latter ones can be included in the class of multivariate Rosenblatt processes. Finally, a simulation study is provided to illustrate our theoretical findings.

1. Introduction

The origin of the concept of ordinal patterns is in the theory of dynamical systems. The idea is to consider the order of the values within a data vector instead of the full metrical information. The ordinal information is encoded as a permutation (cf. Section 3). Already in the first papers on the subject, the authors considered entropy concepts related to this ordinal structure (cf. [1]). There is an interesting relationship between these concepts and the well-known Komogorov–Sinai entropy (cf. [2,3]). Additionally, an ordinal version of the Feigenbaum diagram has been dealt with e.g., in [4]. In [5], ordinal patterns were used in order to estimate the Hurst parameter in long-range dependent time series. Furthermore, Ref. [6] have proposed a test for independence between time series (cf. also [7]). Hence, the concept made its way into the area of statistics. Instead of long patterns (or even letting the pattern length tend to infinity), rather short patterns have been considered in this new framework. Furthermore, ordinal patterns have been used in the context of ARMA processes [8] and change-point detection within one time series [9]. In [10], ordinal patterns were used for the first time in order to analyze the dependence between two time series. Limit theorems for this new concept were proven in a short-range dependent framework in [11]. Ordinal pattern dependence is a promising tool, which has already been used in financial, biological and hydrological data sets, see in this context, also [12] for an analysis of the co-movement of time series focusing on symbols. In particular, in the context of hydrology, the data sets are known to be long-range dependent. Therefore, it is important to also have limit theorems available in this framework. We close this gap in the present article.
All of the results presented in this article have been established in the Ph.D. thesis of I. Nüßgen written under the supervision of A. Schnurr.
The article is structured as follows: in the subsequent section, we provide the reader with the mathematical framework. The focus is on (multivariate) long-range dependence. In Section 3, we recall the concept of ordinal pattern dependence and prove our main results. We present a simulation study in Section 4 and close the paper by a short outlook in Section 5.

2. Mathematical Framework

We consider a stationary d-dimensional Gaussian time series Y j j Z (for d N ), with:
Y j : = Y j ( 1 ) , , Y j ( d ) t
such that E Y j ( p ) = 0 and E Y j ( p ) 2 = 1 for all j Z and p = 1 , , d . Furthermore, we require the cross-correlation function to fulfil r ( p , q ) ( k ) < 1 for p , q = 1 , , d and k 1 , where the component-wise cross-correlation functions r ( p , q ) ( k ) are given by r ( p , q ) ( k ) = E Y j ( p ) Y j + k ( q ) for each p , q = 1 , , d and k Z . For each random vector Y j , we denote the covariance matrix by Σ d , since it is independent of j due to stationarity. Therefore, we have Σ d = r ( p , q ) ( 0 ) p , q = 1 , , d .
We specify the dependence structure of Y j j Z and turn to long-range dependence: we assume that for the cross-correlation function r ( p , q ) ( k ) for each p , q = 1 , , d , it holds that:
r ( p , q ) ( k ) = L p , q ( k ) k d p + d q 1 ,
with L p , q ( k ) L p , q ( k ) for finite constants L p , q [ 0 , ) with L p , p 0 , where the matrix L = L p , q p , q = 1 , , d has full rank, is symmetric and positive definite. Furthermore, the parameters d p , d q 0 , 1 2 are called long-range dependence parameters. Therefore, Y j j Z is multivariate long-range dependent in the sense of [13], Definition 2.1.
The processes we want to consider have a particular structure, namely for h N , we obtain for fixed j Z :
Y j , h : = Y j ( 1 ) , , Y j + h 1 ( 1 ) , Y j ( 2 ) , , Y j + h 1 ( 2 ) , , Y j ( d ) , , Y j + h 1 ( d ) t R d h .
The following relation holds between the extendend process  Y j , h j Z and the primarily regarded process Y j j Z . For all k = 1 , , d h , j Z we have:
Y j , h ( k ) = Y j + ( k mod h ) 1 k 1 h + 1 ,
where x = max { k Z : k x } . Note that the process Y j , h j Z is still a centered Gaussian process since all finite-dimensional marginals of Y j j Z follow a normal distribution. Stationarity is also preserved since for all p , q = 1 , , d h , p q and k Z , the cross-correlation function r ( p , q , h ) ( k ) of the process Y j , h j Z is given by
r ( p , q , h ) ( k ) = E Y j , h ( p ) Y j + k , h ( q ) = E Y j + ( p mod h ) 1 p 1 h + 1 Y j + k + ( q mod h ) 1 q 1 h + 1 = r ( p 1 h + 1 , q 1 h + 1 ) ( k + ( ( q p ) mod h ) )
and the last line does not depend on j. The covariance matrix Σ d , h of Y j , h has the following structure:
Σ d , h p , q = 1 , , d , p q = r ( p , q , h ) ( 0 ) p , q = 1 , , d h , p q , , Σ d , h p , q = 1 , , d , p > q = r ( q , p , h ) ( 0 ) p , q = 1 , , d h , q < p .
Hence, we arrive at:
Σ d , h = Σ h ( p , q ) 1 p , q d ,
where Σ h ( p , q ) = E Y 1 ( p ) , , Y h ( p ) t Y 1 ( q ) , , Y h ( q ) = r ( p , q ) ( i k ) 1 i , k h , p , q = 1 , , d . Note that Σ h ( p , q ) R h × h and r ( p , q ) ( k ) = r ( q , p ) ( k ) , k Z since we are studying cross-correlation functions.
Therefore, we finally have to show that based on the assumptions on Y j j Z , the extended process is still long-range dependent.
Hence, we have to consider the cross-correlations again:
r ( p , q , h ) ( k ) = r ( p 1 h + 1 , q 1 h + 1 ) ( k + ( ( q p ) mod h ) ) = r ( p * , q * ) ( k + m * ) r ( p * , q * ) ( k ) ( k ) ,
since p * , q * { 1 , , d } and m * { 0 , , h 1 } , with p * : = p 1 h + 1 , q * : = q 1 h + 1 and m * = ( q p ) mod h .
Let us remark that a k b k lim k a k b k = 1 .
Therefore, we are still dealing with a multivariate long-range dependent Gaussian process. We see in the proofs of the following limit theorems that the crucial parameters that determine the asymptotic distribution are the long-range dependence parameters d p , p = 1 , , d of the original process Y j j Z and therefore, we omit the detailed description of the parameters d p * herein.
It is important to remark that the extended process Y j , h j Z is also long-range dependent in the sense of [14], p. 2259, since:
lim k k D r ( p , q , h ) ( k ) L ( k ) = lim k k D r ( p * , q * ) ( k ) L ( k ) = lim k k D L p * , q * k d p * + d q * 1 L ( k ) = : b p * , q * ,
with:
D : = min p * { 1 , , d } { 1 2 d p * } ( 0 , 1 )
and L ( k ) can be chosen as any constant L p , q that is not equal to zero, so for simplicity, we assume without a loss of generality L 1 , 1 0 , and therefore, L ( k ) = L 1 , 1 , since the condition in [14] only requires convergence to a finite constant b p * , q * . Hence, we may apply the results in [14] in the subsequent results.
We define the following set, which is needed in the proofs of the theorems of this section.
P * : = { p { 1 , , d } : d p d q , for all q { 1 , , d } }
and denote the corresponding long-range dependence parameter to each p P * by
d * : = d p , p P * .
We briefly recall the concept of Hermite polynomials as they play a crucial role in determining the limit distribution of functionals of multivariate Gaussian processes.
Definition 1.
(Hermite polynomial, [15], Definition 3.1)
The j-th Hermite polynomial H j ( x ) , j = 0 , 1 , , is defined as
H j ( x ) : = ( 1 ) j exp x 2 2 d j d x j exp x 2 2 .
Their multivariate extension is given by the subsequent definition.
Definition 2.
(Multivariate Hermite polynomial, [15], p. 122)
Let d N . We define as d-dimensional Hermite polynomial:
H k ( x ) : = H k 1 , , k d ( x ) : = H k 1 , , k d x 1 , , x d = j = 1 d H k j x j ,
with k = k 1 , , k d N 0 d { ( 0 , , 0 ) } .
Let us remark that the case k = ( 0 , , 0 ) is excluded here due to the assumption E f ( X ) = 0 .
Analogously to the univariate case, the family of multivariate Hermite polynomials
H k 1 , , k d , k 1 , , k d N forms an orthogonal basis of L 2 R d , φ I d , which is defined as
L 2 R d , φ I d : = f : R d R , R d f 2 x 1 , , x d φ x 1 φ x d d x d d x 1 < .
The parameter φ I d denotes the density of the d-dimensional standard normal distribution, which is already divided into the product of the univariate densities φ in the formula above.
We denote the Hermite coefficients by
C ( f , X , k ) : = C f , I d , k : = f , H k = E f ( X ) H k ( X ) .
The Hermite rank m f , I d of f with respect to the distribution N 0 , I d is defined as the largest integer m, such that:
E f ( X ) j = 1 d H k j X ( j ) = 0 for all 0 < k 1 + k d < m .
Having these preparatory results in mind, we derive the multivariate Hermite expansion given by
f ( X ) E f ( X ) = k 1 + + k d m f , I d C ( f , X , k ) k 1 ! k d ! j = 1 d H k j X ( j ) .
We focus on the limit theorems for functionals with Hermite rank 2. First, we introduce the matrix-valued Rosenblatt process. This plays a crucial role in the asymptotics of functionals with Hermite rank 2 applied to multivariate long-range dependent Gaussian processes. We begin with the definition of a multivariate Hermitian–Gaussian random measure B ˜ ( d λ ) with independent entries given by
B ˜ ( d λ ) = B ˜ ( 1 ) ( d λ ) , , B ˜ ( d ) ( d λ ) t ,
where B ˜ ( p ) ( d λ ) is a univariate Hermitian–Gaussian random measure as defined in [16], Definition B.1.3. The multivariate Hermitian–Gaussian random measure B ˜ ( d λ ) satisfies:
E B ˜ ( d λ ) = 0 , E B ˜ ( d λ ) B ˜ ( d λ ) * = I d d λ
and:
E B ˜ ( p ) ( d λ 1 ) B ˜ ( q ) ( d λ 2 ) ¯ = 0 , λ 1 λ 2 , p , q = 1 , , d ,
where B ˜ ( d λ ) * = B ( 1 ) d λ ¯ , , B ( d ) ( d λ ) ¯ denotes the Hermitian transpose of B ˜ ( d λ ) . Thus, following [14], Theorem 6, we can state the spectral representation of the matrix-valued Rosenblatt process Z 2 , H ( t ) , t [ 0 , 1 ] as
Z 2 , H ( t ) = Z 2 , H ( p , q ) ( t ) p , q = 1 , , d
where each entry of the matrix is given by
Z 2 , H ( p , q ) ( t ) = R 2 exp i t λ 1 + λ 2 1 i λ 1 + λ 2 B ˜ ( p ) d λ 1 B ˜ ( q ) d λ 2 .
The double prime in R 2 excludes the diagonals λ i = λ j , i j in the integration. For details on multiple Wiener-Itô integrals, as can be seen in [17].
The following results were taken from [18], Section 3.2. The corresponding proofs were outsourced to the Appendix A.
Theorem 1.
Let Y j j Z be a stationary Gaussian process as defined in (1) that fulfils (2) for d p 1 4 , 1 2 , p = 1 , d . For h N we fix:
Y j , h : = Y j ( 1 ) , , Y j + h 1 ( 1 ) , , Y j ( d ) , , Y j + h 1 ( d ) t R d h
with Y j , h N 0 , Σ d , h and Σ d , h as described in (6). Let f : R d h R be a function with Hermite rank 2 such that the set of discontinuity points D f is a Null set with respect to the d h -dimensional Lebesgue measure. Furthermore, we assume f fulfills E f 2 Y j , h < . Then:
n 2 d * ( C 2 ) 1 2 j = 1 n f Y j ( 1 ) , , Y j + h 1 ( d ) E f Y j ( 1 ) , , Y j + h 1 ( d )
D p , q P * α ˜ ( p , q ) Z 2 , d * + 1 / 2 ( p , q ) ( 1 ) ,
where:
Z 2 , d * + 1 / 2 ( p , q ) ( 1 ) = K p , q d * R 2 exp i λ 1 + λ 2 1 i λ 1 + λ 2 λ 1 λ 2 d * B ˜ L ( p ) d λ 1 B ˜ L ( q ) d λ 2 .
The matrix K d * is a normalizing constant, as can be seen in [18], Corollary 3.6. Moreover, B ˜ L ( d λ ) is a multivariate Hermitian–Gaussian random measure with E B L ( d λ ) B L ( d λ ) * = L d λ and L as defined in (2). Furthermore, C 2 : = 1 2 d * 4 d * 1 is a normalizing constant and:
α ˜ ( p , q ) : = i , k = 1 h α i , k ( p , q )
where α i , k ( p , q ) = α i + ( p 1 ) h , k + ( q 1 ) h for each p , q P * and i , k = 1 , , h and:
α i , k 1 i , k d h = Σ d , h 1 C Σ d , h 1
where C denotes the matrix of second order Hermite coefficients, given by
C = c i , k 1 i , k d h = E Y 1 , h f Y 1 , h E f Y 1 , h Y 1 , h t .
It is possible to soften the assumptions in Theorem 1 to allow for mixed cases of short- and long-range dependence.
Corollary 1.
Instead of demanding in the assumptions of Theorem 1 that (2) holds for Y j j Z with the addition that for all p = 1 , , d we have d p 1 4 , 1 2 , we may use the following condition.
We assume that:
r ( p , q ) ( k ) = k d p + d q 1 L p , q ( k ) ( k )
with L p , q ( k ) as given in (2), but we do no longer assume d p 1 4 , 1 2 for all p = 1 , , d but soften the assumption to d * 1 4 , 1 2 and for d p d * , p = 1 , , d we allow for d p , 0 0 , 1 4 . Then, the statement of Theorem 1 remains valid.
However, with a mild technical assumption on the covariances of the one-dimensional marginal Gaussian processes that is often fulfilled in applications, there is another way of normalizing the partial sum on the right-hand side in Theorem 1, this time explicitly for the case # P * = 2 and h N , such that the limit can be expressed in terms of two standard Rosenblatt random variables. This yields the possibility of further studying the dependence structure between these two random variables. In the following theorem, we assume # P * = d = 2 for the reader’s convenience.
Theorem 2.
Under the same assumptions as in Theorem 1 with # P * = d = 2 and d * 1 4 , 1 2 and the additional condition that r ( 1 , 1 ) ( l ) = r ( 2 , 2 ) ( l ) , for l = 0 , , h 1 , and L 1 , 1 + L 2 , 2 L 1 , 2 + L 2 , 1 , it holds that:
n 2 d * ( C 2 ) 1 2 j = 1 n f Y j ( 1 ) , , Y j + h 1 ( d ) E f Y j ( 1 ) , , Y j + h 1 ( d ) D α ˜ ( 1 , 1 ) α ˜ ( 1 , 2 ) L 2 , 2 L 2 , 1 L 1 , 2 + L 1 , 1 2 Z 2 , d * + 1 / 2 * ( 1 ) + α ˜ ( 1 , 1 ) + α ˜ ( 1 , 2 ) L 2 , 2 + L 2 , 1 + L 1 , 2 + L 1 , 1 2 Z 2 , d * + 1 / 2 * * ( 1 )
with C 2 : = 1 2 d * 4 d * 1 being the same normalizing factor as in Theorem 1, α i , k 1 i , k d h = Σ d , h 1 C Σ d , h 1 and C = c i , k 1 i , k d h = E Y 1 , h f Y 1 , h E f Y 1 , h Y 1 , h t . Note that Z 2 , d * + 1 / 2 * ( 1 ) and Z 2 , d * + 1 / 2 * * ( 1 ) are both standard Rosenblatt random variables whose covariance is given by
C o v Z 2 , d * + 1 / 2 * ( 1 ) , Z 2 , d * + 1 / 2 * * ( 1 ) = L 2 , 2 L 1 , 1 2 L 1 , 1 + L 2 , 2 2 L 1 , 2 + L 2 , 1 2 .

3. Ordinal Pattern Dependence

Ordinal pattern dependence is a multivariate dependence measure that compares the co-movement of two time series based on the ordinal information. First introduced in [10] to analyze financial time series, a mathematical framework including structural breaks and limit theorems for functionals of absolutely regular processes has been built in [11]. In [19], the authors have used the so-called symbolic correlation integral in order to detect the dependence between the components of a multivariate time series. Their considerations focusing on testing independence between two time series are also based on ordinal patterns. They provide limit theorems in the i.i.d.-case and otherwise use bootstrap methods. In contrast, in the mathematical model in the present article, we focus on asymptotic distributions of an estimator of ordinal pattern dependence having a bivariate Gaussian time series in the background but allowing for several dependence structures to arise. As it will turn out in the following, this yields central but also non-central limit theorems.
We start with the definition of an ordinal pattern and the basic mathematical framework that we need to build up the ordinal model.
Let S h denote the set of permutations in { 0 , , h } , h N 0 that we express as ( h + 1 ) -dimensional tuples, assuring that each tuple contains each of the numbers above exactly once. In mathematical terms, this yields:
S h = π N 0 h + 1 : 0 π i h , and π i π k , whenever i k , i , k = 0 , , h ,
as can be seen in [11], Section 2.1.
The number of permutations in S h is given by # S h = ( h + 1 ) ! . In order to get a better intuitive understanding of the concept of ordinal patterns, we have a closer look at the following example, before turning to the formal definition.
Example 1.
Figure 1 provides an illustrative understanding of the extraction of an ordinal pattern from a data set. The data points of interest are colored in red and we consider a pattern of length h = 3 , which means that we have to take n = 4 data points into consideration. We fix the points in time t 0 , t 1 , t 2 and t 3 and extract the data points from the time series. Then, we search for the point in time which exhibits the largest value in the resulting data and write down the corresponding time index. In this example, it was given by t = t 1 . We order the data points by writing the time position of the largest value as the first entry, the time position of the second largest as the second entry, etc. Hence, the absolute values are ordered from largest to smallest and the ordinal pattern ( 1 , 0 , 3 , 2 ) S 3 is obtained for the considered data points.
Formally, the aforementioned procedure can be defined as follows, as can be seen in [11], Section 2.1.
Definition 3.
As the ordinal pattern of a vector x = x 0 , , x h R h + 1 , we define the unique permutation π = π 0 , , π h S h :
Π ( x ) = Π x 0 , , x h = π 0 , , π h ,
such that:
x π 0 x π h ,
with π i 1 < π i if x π i 1 = x π i , i = 1 , , h .
The last condition assures the uniqueness of π if there are ties in the data sets. In particular, this condition is necessary if real-world data are to be considered.
In Figure 2, all ordinal patterns of length h = 2 are shown. As already mentioned in the introduction, from the practical point of view, a highly desirable property of ordinal patterns is that they are not affected by monotone transformations, as can be seen in [5], p. 1783.
Mathematically, this means that if f : R R is strictly monotone, then:
Π x 0 , , x h = Π f x 0 , , f x h .
In particular, this includes linear transformations f ( x ) = a x + b , with a R + and b R .
Following [11], Section 1, the minimal requirement of the data sets we use for ordinal analysis in the time series context, i.e., for ordinal pattern probabilities as well as for ordinal pattern dependence later on, is ordinal pattern stationarity (of order h). This property implies that the probability of observing a certain ordinal pattern of length h remains the same when shifting the moving window of length h through the entire time series and is not depending on the specific points in time. In the course of this work, the time series, in which the ordinal patterns occur, always have either stationary increments or are even stationary themselves. Note that both properties imply ordinal pattern stationarity. The reason why requiring stationary increments is a sufficient condition is given in the following explanation.
One fundamental property of ordinal patterns is that they are uniquely determined by the increments of the considered time series. As one can imagine in Example 1, the knowledge of the increments between the data points is sufficient to obtain the corresponding ordinal pattern. In mathematical terms, we can define another mapping Π ˜ , which assigns the corresponding ordinal pattern to each vector of increments, as can be seen in [5], p. 1783.
Definition 4.
We define for y = y 1 , , y h R h the mapping Π ˜ : R h S h :
Π ˜ y 1 , , y h : = Π 0 , y 1 , y 1 + y 2 , , y 1 + + y h ,
such that for y i = x i x i 1 , i = 1 , , h , we obtain:
Π ˜ y 1 , , y h = Π 0 , y 1 , y 1 + y 2 , , y 1 + + y h = Π 0 , x 1 x 0 , x 2 x 0 , , x h x 0 = Π x 0 , x 1 , x 2 , , x h .
We define the two mappings, following [5], p. 1784:
S : S h S h , π 0 , , π h π h , , π 0 , T : S h S h , π 0 , , π h h π 0 , , h π h .
An illustrative understanding of these mappings is given as follows. The mapping S ( π ) , which is the spatial reversion of the pattern π , is the reflection of π on a horizontal line, while T ( π ) , the time reversal of π , is its reflection on a vertical line, as one can observe in Figure 3.
Based on the spatial reversion, we define a possibility to divide S h into two disjoint sets.
Definition 5.
We define S h * as a subset of S h with the property that for each π S h , either π or S ( π ) are contained in the set, but not both of them.
Note that this definition does not yield the uniqueness of S h * .
Example 2.
We consider the case h = 2 again and we want to divide S 2 into a possible choice of S 2 * and the corresponding spatial reversal. We choose S 2 * = { ( 2 , 1 , 0 ) , ( 2 , 0 , 1 ) , ( 1 , 2 , 0 ) } , and therefore, S 2 S 2 * = { ( 0 , 1 , 2 ) , ( 1 , 0 , 2 ) , ( 0 , 2 , 1 ) } . Remark that S 2 * = { ( 0 , 1 , 2 ) , ( 2 , 0 , 1 ) , ( 1 , 2 , 0 ) } is also a possible choice. The only condition that has to be satisfied is that if one permutation is chosen for S 2 * , then its spatial reverse must not be an element of this set.
We stick to the formal definition of ordinal pattern dependence, as it is proposed in [11], Section 2.1. The considered moving window consists of h + 1 data points, and hence, h increments. We define:
p : = p X ( 1 ) , X ( 2 ) : = P Π X 0 ( 1 ) , , X h ( 1 ) = Π X 0 ( 2 ) , , X h ( 2 )
and:
q : = q X ( 1 ) , X ( 2 ) : = π S h P Π X 0 ( 1 ) , , X h ( 1 ) = π P Π X 0 ( 2 ) , , X h ( 2 ) = π .
Then, we define ordinal pattern dependence O P D as
O P D : = O P D X ( 1 ) , X ( 2 ) : = p q 1 q .
The parameter q represents the hypothetical case of independence between the two time series. In this case, p and q would obtain equal values and therefore, O P D would equal zero. Regarding the other extreme, the case in which both processes coincide or one is a strictly monotone increasing transform of the other one, we obtain the value 1. However, in the following, we assume p ( 0 , 1 ) and q ( 0 , 1 ) .
Note that the definition of ordinal pattern dependence in (17) only measures positive dependence. This is no restriction in practice, because negative dependence can be investigated in an analogous way, by considering O P D X ( 1 ) , X ( 2 ) . If one is interested in both types of dependence simultaneously, in [11], the authors propose to use O P D X ( 1 ) , X ( 2 ) + O P D X ( 1 ) , X ( 2 ) + . To keep the notation simple, we focus on O P D as it is defined in (17).
We compare whether the ordinal patterns in X j ( 1 ) j Z coincide with the ones in X j ( 2 ) j Z . Recall that it is an essential property of ordinal patterns that they are uniquely determined by the increment process. Therefore, we have to consider the increment processes Y j j Z = Y j ( 1 ) , Y j ( 2 ) j Z as defined in (1) for d = 2 , where Y j ( p ) = X j ( p ) X j 1 ( p ) , p = 1 , 2 . Hence, we can also express p and q (and consequently O P D ) as a probability that only depends on the increments of the considered vectors of the time series. Recall the definition of Y j , h j Z for d = 2 , given by
Y j , h = Y j ( 1 ) , , Y j + h 1 ( 1 ) , Y j ( 2 ) , , Y j + h 1 ( 2 ) t ,
such that Y j , h N 0 , Σ 2 , h with Σ 2 , h as given in (6).
In the course of this article, we focus on the estimation of p. For a detailed investigation of the limit theorems for estimators of O P D , we refer to [18]. We define the estimator of p, the probability of coincident patterns in both time series in a moving window of fixed length, by
p ^ n = 1 n h j = 0 n h 1 1 Π X j ( 1 ) , , X j + h ( 1 ) = Π X j ( 2 ) , , X j + h ( 2 ) = 1 n h j = 1 n h 1 Π ˜ Y j ( 1 ) , , Y j + h 1 ( 1 ) = Π ˜ Y j ( 2 ) , , Y j + h 1 ( 2 ) ,
where:
Π ˜ Y 1 , , Y h : = Π 0 , Y 1 , Y 1 + Y 2 , , Y 1 + + Y h = Π 0 , X 1 X 0 , , X h X 0 = Π X 0 , X 1 , , X h .
Figure 4 illustrates the way ordinal pattern dependence is estimated by p ^ n . The patterns of interest that are compared in each moving window are colored in red.
Having emphasized the crucial importance of the increments, we define the following conditions on the increment process Y j j Z : let Y j j Z be a bivariate, stationary Gaussian process with Y j ( p ) N ( 0 , 1 ) , p = 1 , 2 :
(L) 
We assume that Y j j Z fulfills (2) with d * in 1 4 , 1 2 . We allow for min d 1 , d 2 to be in the range , 0 0 , 1 4 .
(S) 
We assume d 1 , d 2 , 0 0 , 1 4 such that the cross-correlation function of Y j j Z fulfills for p , q = 1 , 2 :
r ( p , q ) ( k ) = k d p + d q 1 L p , q ( k ) ( k )
with L p , q ( k ) L p , q and L p , q R holds.
Furthermore, in both cases, it holds that r ( p , q ) ( k ) < 1 for p , q = 1 , 2 and k 1 to exclude ties.
We begin with the investigation of the asymptotics of p ^ n . First, we calculate the Hermite rank of p ^ n , since the Hermite rank determines for which ranges of d * the estimator p ^ n is still long-range dependent. Depending on this range, different limit theorems may hold.
Lemma 1.
The Hermite rank of f ( Y j , h ) = 1 Π ˜ Y j + 1 ( 1 ) , , Y j + h ( 1 ) = Π ˜ Y j + 1 ( 2 ) , , Y j + h ( 2 ) with respect to Σ 2 , h is equal to 2.
Proof. 
Following [20], Lemma 5.4 it is sufficient to show the following two properties:
(i)
m ( f , Σ 2 , h ) 2 ,
(ii)
m ( f , I 2 , h ) 2 .
Note that the conclusion is not trivial, because m ( f , Σ 2 , h ) m ( f , I 2 , h ) in general, as can be seen in [15], Lemma 3.7. Lemma 5.4 in [20] can be applied due to the following reasoning. Ordinal patterns are not affected by scaling, therefore, the technical condition that Σ 2 , h 1 I 2 , h is positive semidefinite is fulfilled in our case. We can scale the standard deviation of the random vector Y j , h by any positive real number σ > 0 since for all j Z we have:
Π ˜ Y j ( 1 ) , , Y j + h 1 ( 1 ) = Π ˜ Y j ( 2 ) , , Y j + h 1 ( 2 ) = Π ˜ σ Y j ( 1 ) , , σ Y j + h 1 ( 1 ) = Π ˜ σ Y j ( 2 ) , , σ Y j + h 1 ( 2 ) .
To show property ( i ) , we need to consider a multivariate random vector:
Y 1 , h : = Y 1 ( 1 ) , , Y h ( 1 ) , Y 1 ( 2 ) , , Y h ( 2 ) t
with covariance matrix Σ 2 , h . We fix i = 1 , , 2 h . We divide the set S h into disjoint sets, namely into S h * , as defined in Definition 5 and the complimentary set S h S h * . Note that:
Y j , h = D Y j , h
holds. This implies:
E Y j , h ( i ) 1 Π ˜ Y 1 ( 1 ) , , Y h ( 1 ) = Π ˜ Y 1 ( 2 ) , , Y h ( 2 ) = π = E Y j , h ( i ) 1 Π ˜ Y 1 ( 1 ) , , Y h ( 1 ) = Π ˜ Y 1 ( 2 ) , , Y h ( 2 ) = S ( π )
for π S h . Hence, we arrive at:
E Y j , h ( i ) f ( Y j , h ) = E Y j , h ( i ) 1 Π ˜ Y 1 ( 1 ) , , Y h ( 1 ) = Π ˜ Y 1 ( 2 ) , , Y h ( 2 ) = π S h E Y j , h ( i ) 1 Π ˜ Y 1 ( 1 ) , , Y h ( 1 ) = Π ˜ Y 1 ( 2 ) , , Y h ( 2 ) = π = π S h * E Y j , h ( i ) 1 Π ˜ Y 1 ( 1 ) , , Y h ( 1 ) = Π ˜ Y 1 ( 2 ) , , Y h ( 2 ) = π π S h S h * E Y j , h ( i ) 1 Π ˜ Y 1 ( 1 ) , , Y h ( 1 ) = Π ˜ Y 1 ( 2 ) , , Y h ( 2 ) = S ( π ) = 0
for i = 1 , , 2 h .
Consequently, m f , Σ 2 , h 2 .
In order to prove ( i i ) , we consider:
U 1 , h : = U 1 ( 1 ) , , U h ( 1 ) , U 1 ( 2 ) , , U h ( 2 ) t
to be a random vector with independent N ( 0 , 1 ) distributed entries. For i = 1 , , h and k = h + 1 , , 2 h such that k h = i , we obtain:
E U 1 , h ( i ) U 1 , h ( k ) f U 1 , h = E U i ( 1 ) U k h ( 2 ) 1 Π ˜ U 1 ( 1 ) , , U h ( 1 ) = Π ˜ U 1 ( 2 ) , , U h ( 2 ) = π S h E U i ( 1 ) U i ( 2 ) 1 Π ˜ U 1 ( 1 ) , , U h ( 1 ) = Π ˜ U 1 ( 2 ) , , U h ( 2 ) = π = π S h E U i ( 1 ) 1 Π ˜ U 1 ( 1 ) , , U h ( 1 ) = π 2 0 ,
since E U i ( 1 ) 1 Π ˜ U 1 ( 1 ) , , U h ( 1 ) = π 0 for all π S h . This was shown in the proof of Lemma 3.4 in [20].
All in all, we derive m ( f , Σ 2 , h ) = 2 and hence, have proven the lemma. □
The case m ( f , Σ 2 , h ) = 2 exhibits the property that the standard range of the long-range dependence parameter d * 0 , 1 2 has to be divided into two different sets. If d * 1 4 , 1 2 , the transformed process f Y j , h j Z is still long-range dependent, as can be seen in [16], Table 5.1. If d * 0 , 1 4 , the transformed process is short-range dependent, which means by definition that the autocorrelations of the transformed process are summable, as can be seen in [13], Remark 2.3. Therefore, we have two different asymptotic distributions that have to be considered for the estimator p ^ n of coincident patterns.

3.1. Limit Theorem for the Estimator of p in Case of Long-Range Dependence

First, we restrict ourselves to the case that at least one of the two parameters d 1 and d 2 is in 1 4 , 1 2 . This assures d * 1 4 , 1 2 . We explicitly include mixing cases where the process corresponding to min d 1 , d 2 is allowed to be long-range as well as short-range dependent.
Note that this setting includes the pure long-range dependence case, which means that for p = 1 , 2 , we have d p 1 4 , 1 2 , or even d 1 = d 2 = d * . However, in general, the assumptions are lower, such that we only require d p 1 4 , 1 2 for either p = 1 or p = 2 and the other parameter is also allowed to be in , 0 or 0 , 1 4 .
We can, therefore, apply the results of Corollary 1 and obtain the following asymptotic distribution for p ^ n :
Theorem 3.
Under the assumption in (L), we obtain:
n 1 2 d * ( C 2 ) 1 2 p ^ n p D p , q P * α ˜ ( p , q ) Z 2 , d * + 1 / 2 ( p , q ) ( 1 )
with Z 2 , d * + 1 / 2 ( p , q ) ( 1 ) as given in Theorem 1 for p , q P * and C 2 : = 1 2 d * 4 d * 1 being a normalizing constant. We have:
α ˜ ( p , q ) : = i , k = 1 h α i , k ( p , q ) , w h e r e α i , k ( p , q ) = α i + ( p 1 ) h , k + ( q 1 ) h ,
for each p , q P * and i , k = 1 , , h and α i , k 1 i , k d h = Σ 2 , h 1 C Σ 2 , h 1 , where the variable:
C = c i , k 1 i , k 2 h = E Y 1 , h 1 Π ˜ Y 1 ( 1 ) , , Y h ( 1 ) = Π ˜ Y 1 ( 2 ) , , Y h ( 2 ) p Y 1 , h t
denotes the matrix of second order Hermite coefficients.
Proof. 
The proof of this theorem is an immediate application of the Corollary 1 and Lemma 1. Note that for p ^ n it holds that it is square integrable with respect to Y j , h and that the set of discontinuity points is a Null set with respect to the 2 h -dimensional Lebesgue measure. This is shown in [18], Equation (4.5). □
Following Theorem 2, we are also able to express the limit distribution above in terms of two standard Rosenblatt random variables by modifying the weighting factors in the limit distribution. Note that this requires slightly stronger assumptions as in Theorem 1.
Theorem 4.
Let (L) hold with d 1 = d 2 . Additionally, we assume that r ( 1 , 1 ) ( l ) = r ( 2 , 2 ) ( l ) , for l = 0 , , h 1 , and L 1 , 1 + L 2 , 2 L 1 , 2 + L 2 , 1 . Then, we obtain:
n 1 2 d * ( C 2 ) 1 2 p ^ n p D α ˜ ( 1 , 1 ) α ˜ ( 1 , 2 ) L 2 , 2 L 2 , 1 L 1 , 2 + L 1 , 1 2 Z 2 , d * + 1 / 2 * ( 1 ) + α ˜ ( 1 , 1 ) + α ˜ ( 1 , 2 ) L 2 , 2 + L 2 , 1 + L 1 , 2 + L 1 , 1 2 Z 2 , d * + 1 / 2 * * ( 1 ) ,
with C 2 and α ˜ ( p , q ) as given in Theorem 3. Note that Z 2 , d * + 1 / 2 * ( 1 ) and Z 2 , d * + 1 / 2 * * ( 1 ) are both standard Rosenblatt random variables, whose covariance is given by
C o v Z 2 , d * + 1 / 2 * ( 1 ) , Z 2 , d * + 1 / 2 * * ( 1 ) = L 2 , 2 L 1 , 1 2 L 1 , 1 + L 2 , 2 2 L 1 , 2 + L 2 , 1 2 .
Remark 1.
Following [18], Corollary 3.14, if additionally r ( 1 , 1 ) ( k ) = r ( 2 , 2 ) ( k ) and r ( 1 , 2 ) ( k ) = r ( 2 , 1 ) ( k ) is fulfilled for all k Z , then the two limit random variables following a standard Rosenblatt distribution in Theorem 4 are independent. Note that due to the considerations in [21], Equation (10), we know that the distribution of the sum of two independent standard Rosenblatt random variables is not standard Rosenblatt. However, this yields a computational benefit, as it is possible to efficiently simulate the standard Rosenblatt distribution, for details, as can be seen in [21].
We turn to an example that deals with the asymptotic variance of the estimator of p in Theorem 3 in the case h = 1 .
Example 3.
We focus on the case h = 1 and consider the underlying process Y j , 1 j Z = Y j ( 1 ) , Y j ( 2 ) j Z . It is possible to determine the asymptotic variance depending on the correlation r ( 1 , 2 ) ( 0 ) between these two increment variables.
We start with the calculation of the second order Hermite coefficients in the case π = ( 1 , 0 ) . This corresponds to the event Y j ( 1 ) 0 , Y j ( 2 ) 0 , which yields:
c 1 , 1 π , 2 = E Y j ( 1 ) 2 1 1 Y j ( 1 ) 0 , Y j ( 2 ) 0
and:
c 1 , 2 π , 2 = E Y j ( 1 ) Y j ( 2 ) 1 Y j ( 1 ) 0 , Y j ( 2 ) 0 .
Due to r ( 1 , 2 ) ( 0 ) = r ( 2 , 1 ) ( 0 ) , we have Y j ( 1 ) , Y j ( 2 ) = D Y j ( 2 ) , Y j ( 1 ) and therefore, c 1 , 1 π , 2 = c 2 , 2 π , 2 . We identify the second order Hermite coefficients as the ones already calculated in [20], Example 3.13, although we are considering two consecutive increments of a univariate Gaussian process there. However, since the corresponding values are only determined by the correlation between the Gaussian variables, we can simply replace the autocorrelation at lag 1 by the cross-correlation at lag 0. Hence, we obtain:
c 1 , 1 π , 2 = φ 2 ( 0 ) r ( 1 , 2 ) ( 0 ) 1 r ( 1 , 2 ) ( 0 ) 2 , c 1 , 2 π , 2 = φ 2 ( 0 ) 1 r ( 1 , 2 ) ( 0 ) 2 .
Recall that the inverse Σ 2 , 1 1 = g i , j i , j = 1 , 2 of the correlation matrix of Y j ( 1 ) , Y j ( 2 ) is given by
Σ 2 , 1 1 = 1 1 r ( 1 , 2 ) ( 0 ) 2 1 r ( 1 , 2 ) ( 0 ) r ( 1 , 2 ) ( 0 ) 1 .
By using the formula for α ˜ ( p , q ) obtained in [18], Equation (4.23), we derive:
α ˜ π , 2 ( 1 , 1 ) = α 1 , 1 π , 2 = g 1 , 1 2 + g 1 , 2 2 c 1 , 1 π , 2 + 2 g 1 , 1 g 1 , 2 c 1 , 2 π , 2 , α ˜ π , 2 ( 1 , 2 ) = α 1 , 2 π , 2 = g 1 , 1 2 + g 1 , 2 2 c 1 , 2 π , 2 + 2 g 1 , 1 g 1 , 2 c 1 , 1 π , 2 .
Plugging the second order Hermite coefficients and the entries of the inverse of the covariance matrix depending on r ( 1 , 2 ) ( 0 ) into the formulas, we arrive at:
α ˜ π , 2 ( 1 , 1 ) = φ 2 ( 0 ) r ( 1 , 2 ) ( 0 ) 1 r ( 1 , 2 ) ( 0 ) 2 1 / 2
and:
α ˜ π , 2 ( 1 , 2 ) = φ 2 ( 0 ) 1 r ( 1 , 2 ) ( 0 ) 2 1 / 2 .
Therefore, in the case h = 1 , we obtain the following factors in the limit variance in Theorem 3:
α ˜ ( 1 , 1 ) = α ˜ ( 2 , 2 ) = 2 φ 2 ( 0 ) r ( 1 , 2 ) ( 0 ) 1 r ( 1 , 2 ) ( 0 ) 2 1 / 2 α ˜ ( 1 , 2 ) = α ˜ ( 2 , 1 ) = 2 φ 2 ( 0 ) 1 r ( 1 , 2 ) ( 0 ) 2 1 / 2 .
Remark 2.
It is not possible to analytically determine the limit variance for h = 2 , as this includes orthant probabilities of a four-dimensional Gaussian distribution. Following [22], no closed formulas are available for these probabilities. However, there are fast algorithms at hand that calculate the limit variance efficiently. It is possible to take advantage of the symmetry properties of the multivariate Gaussian distribution to keep the computational cost of these algorithms low. For detail, as can be seen in [18], Section 4.3.1.

3.2. Limit Theorem for the Estimator of p in Case of Short-Range Dependence

In this section, we focus on the case of d * , 0 0 , 1 4 . If d * 0 , 1 4 , we are still dealing with a long-range dependent multivariate Gaussian process Y j , h j Z . However, the transformed process p ^ n p is no longer long-range dependent, since we are considering a function with Hermite rank 2, see also [16], Table 5.1. Otherwise, if d * , 0 , the process Y j , h j Z itself is already short-range dependent, since the cross-correlations are summable. Therefore, we obtain the following central limit theorem by applying Theorem 4 in [14].
Theorem 5.
Under the assumptions in (S), we obtain:
n 1 2 p ^ n p D N 0 , σ 2
with:
σ 2 = k = E [ 1 Π ˜ Y 1 ( 1 ) , , Y h ( 1 ) = Π ˜ Y 1 ( 2 ) , , Y h ( 2 ) p × 1 Π ˜ Y 1 + k ( 1 ) , , Y h + k ( 1 ) = Π ˜ Y 1 + k ( 2 ) , , Y h + k ( 2 ) p ] .
We close this section with a brief retrospect of the results obtained. We established limit theorems for the estimator of p as probability of coincident pattern in both time series and hence, on the most important parameter in the context of ordinal pattern dependence. The long-range dependent case as well as the mixed case of short- and long-range dependence was considered. Finally, we provided a central limit theorem for a multivariate Gaussian time series that is short-range dependent if transformed by p ^ n . In the subsequent section, we provide a simulation study that illustrates our theoretical findings. In doing so, we shed light on the Rosenblatt distribution and the distribution of the sum of Rosenblatt distributed random variables.

4. Simulation Study

We begin with the generation of a bivariate long-range dependent fractional Gaussian noise series Y j ( 1 ) , Y j ( 2 ) j = 1 , , n .
First, we simulate two independent fractional Gaussian noise processes U j ( 1 ) j = 1 , , n and U j ( 2 ) j = 1 , , n derived by the R-package “longmemo”, for a fixed parameter H 1 2 , 1 in both time series. For the reader’s convenience, we denote the long-range dependence parameter d by H = d + 1 2 as it is common, when dealing with fractional Gaussian noise and fractional Brownian motion. We refer to H as Hurst parameter, tracing back to the work of [23]. For H = 0.7 and H = 0.8 we generate n = 10 6 samples, for H = 0.9 , we choose n = 2 · 10 6 . We denote the correlation function of univariate fractional Gaussian noise by r H ( 1 , 1 ) ( k ) , k 0 . Then, we obtain Y j ( 1 ) , Y j ( 2 ) j for j = 1 , , n :
Y j ( 1 ) = U j ( 1 ) , Y j ( 2 ) = ψ U j ( 1 ) + ϕ U j ( 2 ) ,
for ψ , ϕ R .
Note that this yields the following properties for the cross-correlations of the two processes for k 0 :
r H ( 1 , 2 ) ( k ) = E Y j ( 1 ) Y j + k ( 2 ) = ψ r H ( 1 , 1 ) ( k ) r H ( 2 , 1 ) ( k ) = r ( 1 , 2 ) ( k ) = ψ r H ( 1 , 1 ) ( k ) r H ( 2 , 2 ) ( k ) = E Y j ( 2 ) Y j + k ( 2 ) = ψ 2 + ϕ 2 r H ( 1 , 1 ) ( k ) .
We use ψ = 0.6 and ϕ = 0.8 to obtain unit variance in the second process.
Note that we chose the same Hurst parameter in both processes to get a better simulation result. The simulations of the processes Y j ( 1 ) j Z and Y j ( 2 ) j Z are visualized in Figure 5. On the left-hand side, the different fractional Gaussian noises depending on the Hurst parameter H are displayed. They represent the stationary long-range dependent Gaussian increment processes we need in the view of the limit theorems we derived in Section 3. The processes in which we are comparing the coincident ordinal patterns, namely X j ( 1 ) j Z and X j ( 2 ) j Z , are shown on the right-hand side in Figure 5. The long-range dependent behavior of the increment processes is very illustrative in these processes: roughly speaking, they become smoother the larger the Hurst parameter gets.
We turn to the simulation results for the asymptotic distribution of the estimator p ^ n . The first limit theorem is given in Theorem 3 for H = 0.8 and H = 0.9 . In the case of H = 0.7 , a different limit theorem holds, see Theorem 5. Therefore, we turn to the simulation results of the asymptotic distribution of the estimator p ^ n of p, as shown in Figure 6 for pattern length h = 2 . The asymptotic normality in case H = 0.7 can be clearly observed. We turn to the interpretation of the simulation results of the distribution of p ^ n p for H = 0.8 and H = 0.9 as the weighted sum of the sample (cross-)correlations: we observe in the Q–Q plot for H = 0.8 that the samples in the upper and lower tail deviate from the reference line. For H = 0.9 , a similar behavior in the Q–Q plot is observed.
We want to verify the result in Theorem 4 that it is possible, by a different weighting, to express the limit distribution of p ^ n p as the distribution of the sum of two independent standard Rosenblatt random variables. The simulated convergence result is provided in Figure 7. We observed the standard Rosenblatt distribution.

5. Conclusions and Outlook

We considered limit theorems in the context of the estimation of ordinal pattern dependence in the long-range dependence setting. Pure long-range dependence, as well as mixed cases of short- and long-range dependence, were considered alongside the transformed short-range dependent case. Therefore, we complemented the asymptotic results in [11]. Hence, we made ordinal pattern dependence applicable for long-range dependent data sets as they arise in the context of neurology, as can be seen in [24] or artificial intelligence, as can be seen in [25]. As these kinds of data were already investigated using ordinal patterns, as can be seen, for example, in [26], this emphasizes the large practical impact of the ordinal approach in analyzing the dependence structure multivariate time series. This yields various research opportunities in these fields in the future.
Our results rely on the assumption of Gaussianity of the considered multivariate time series. If we focus on comparing the coincident ordinal patterns in a stationary long-range dependent bivariate time series, we highly benefit from the property of ordinal patterns not being affected by monotone transformations. It is possible to transform the data set to the Gaussian framework without losing the necessary ordinal information. In applications, this property is highly desirable. If we consider the more general setting, that is, stationary increments, the mathematical theory in the background gets a lot more complex leading to the limitations of our results. A crucial argument used in the proofs of the results in Section 2 is given in the Reduction Theorem, originally proven in Theorem 4.1 in [27] in the univariate case and extended to the multivariate setting in Theorem 6 in [14]. For further details, we refer the reader to the Appendix A. However, this result only holds in the Gaussian case. Limit theorems for the sample cross-correlation process of multivariate linear long-range dependent processes with Hermite rank 2 have recently been proven in Theorem 4 in [28]. This is possibly an interesting starting point to adapt the proofs in the Appendix A to this larger class of processes without requiring Gaussianity. Considering the property of having a discrete bivariate time series in the background, an interesting extension is given in time continuous processes and the associated techniques of discretization to still regard the ordinal perspective. To think even further beyond our scope, a generalization to categorical data is conceivable and yields an interesting open research opportunity.

Author Contributions

Conceptualization, I.N. and A.S.; methodology and mathematical theory, I.N.; simulations, I.N.; validation, I.N. and A.S.; writing—original draft preparation, I.N.; writing—review and editing, A.S.; funding acquisition, A.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the German Research Foundation (DFG) grant number SCHN 1231/3-2.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data that support the findings of this study are available from the corresponding author, Ines Nüßgen, upon reasonable request.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Technical Appendix

All proofs in this appendix were taken from [18], Chapter 3.

Appendix A.1. Preliminary Results

Before turning to limit theorems, we introduce a possibility to decompose the d-dimensional Gaussian process Y j j Z using the Cholesky decomposition, as can be seen in [29]. Based on the definition of the multivariate normal distribution, as can be seen in [30], Definition 1.6.1, we find an upper triangular matrix A ˜ , such that A ˜ A ˜ t = Σ d . Then, it holds that:
Y j = D A ˜ U j * ,
where U j * is a d-dimensional Gaussian process where each U j * has independent and identically N ( 0 , 1 ) distributed entries. We want to assure that U j * j Z preserves the long-range dependent structure of Y j j Z . Since we know from (2) that:
E Y j Y j + k = Γ Y ( k ) k D 1 2 I d L k D 1 2 I d ( k ) ,
the process U j * has to fulfill:
E U j * U j + k * = Γ U * ( k ) k D 1 2 I d L U k D 1 2 I d ( k ) ,
with L = A ˜ L U * A ˜ t .
Then, it holds for all n N that:
Y j , j = 1 , , n = D A ˜ U j * , j = 1 , , n .
Note that the assumption in (A2) is only well-defined because we assumed r ( p , q ) ( k ) < 1 for k 1 and p , q = 1 , , d in (1). This becomes clear in the following considerations. In the proofs of the theorems in this chapter, we do not only need a decomposition of Y j , but also of Y j , h . As Y j , h is still a multivariate Gaussian process, the covariance matrix of Y j , h given by Σ d , h is positive definite. Hence, it is possible to find an upper triangular matrix A, such that A A t = Σ d , h . It holds that:
Y j , h = D A U j , h
for:
U j , h = U ( j 1 ) h + 1 ( 1 ) , , U j h ( 1 ) , , U ( j 1 ) h + 1 ( d ) , , U j h ( d ) t .
The random vector U j , h consists of ( d · h ) independent and standard normally distributed random variables. We notice the different structure of U j , h compared to Y j , h . We assure that for consecutive j, the entries in U j , h are all different while there are identical entries, for example in Y 1 , h = Y 1 ( 1 ) , Y 2 ( 1 ) , , Y h ( d ) t and Y 2 , h = Y 2 ( 1 ) , , Y h ( d ) , Y h + 1 ( d ) t . This complicates our aim that:
Y j , h , j = 1 , , n t = D A U j , h , j = 1 , , n t
holds.
The special structure of Y j , h j Z , namely that consisting of h consecutive entries of each marginal process Y j ( p ) , p = 1 , , d , alongside the dependence between two random vectors in the process Y j , h , has to be reflected in the covariance matrix of U j , h , j = 1 , , n . Hence, we need to check whether such a vector U j , h , j = 1 , , n exists, i.e., if there is a positive semi-definite matrix that fulfills these conditions. We define A as a block diagonal matrix with A as main-diagonal blocks and all off-diagonal blocks as d h × d h -zero matrix.
We denote the covariance matrix of Y j , h , j = 1 , , n t by Σ Y , n and define the following matrix:
Σ U , n : = inv A Σ Y , n inv A t .
We know that Σ Y , n is a positive semi-definite for all n N because Y j is a Gaussian process. Mathematically described, this means that:
x t Σ Y , n x 0 ,
for all x = x 1 , , x n h d t R n h d . We conclude:
x t Σ U , n x = x t inv A Σ Y , n inv A t x = inv A t x t Σ Y , n x t inv A ( A 6 ) 0 .
Therefore, Σ U , n is a positive semi-definite matrix for all n N and the random vector:
U j , h , j = 1 , , n t N 0 , Σ U , n
exists and (A4) holds. Note that we do not have any further information on the dependence structure within the process U j , in general, this process neither exhibits long-range dependence, nor independence, nor stationarity.
We continue with two preparatory results that are also necessary for proving Theorem 2.1.
Lemma A1.
Let Y j j Z be a d-dimensional Gaussian process as defined in (1) that fulfills (2) with d 1 = = d d = d * , such that:
Γ Y ( k ) = E Y j Y j + k t L k 2 d * 1 , ( k ) .
Let C 2 be a normalization constant:
C 2 = 1 2 d * 4 d * 1
and let B Y be an upper triangular matrix, such that:
B Y B Y t = L .
Furthermore, for l N we have:
Γ ^ Y , n ( l ) = 1 n l j = 1 n l Y j Y j + l t .
Then, for h N it holds that:
n 1 2 d * C 2 1 / 2 B Y B Y 1 vec Γ ^ n ( l ) Γ ( l ) , l = 0 , , h 1 D vec Z 2 , d * + 1 / 2 ( p , q ) ( 1 ) p , q = 1 , , d , l = 0 , , h 1 ,
where Z 2 , d * + 1 / 2 ( p , q ) ( 1 ) has the spectral domain representation:
Z 2 , d * + 1 / 2 ( p , q ) ( 1 ) = K p , q ( d * ) R 2 exp i λ 1 + λ 2 1 i λ 1 + λ 2 λ 1 λ 2 d * B ˜ ( p ) d λ 1 B ˜ ( q ) d λ 2
where:
K p , q 2 ( d * ) = 1 2 C 2 2 Γ ( 1 2 d * ) sin π d * 2 , p = q 1 C 2 2 Γ ( 1 2 d * ) sin π d * 2 , p q .
and B ˜ ( d λ ) = B ˜ ( 1 ) ( d λ ) , , B ˜ ( d ) ( d λ ) is a multivariate Hermitian–Gaussian random measure as defined in (12).
Proof. 
First, we can use (A1):
Y j = D A ˜ U j * ,
such that U j * is a multivariate Gaussian process with U j * N 0 , I d and U j * is still long-range dependent, as can be seen in (A2). It is possible to decompose the sample cross-covariance matrix Γ ^ Y , n ( l ) Γ Y ( l ) with respect to Y j at lag l given by
Γ ^ Y , n ( l ) Γ Y ( l ) = 1 n l j = 1 n l Y j Y j + l t E Y j Y j + l t
to:
Γ ^ Y , n ( l ) Γ Y ( l ) = D A ˜ Γ ^ U * , n ( l ) Γ U * ( l ) A ˜ t ,
where we define the sample cross-covariance matrix Γ ^ U * , n ( l ) Γ U * ( l ) with respect to U j * at lag l by
Γ ^ U * , n ( l ) Γ U * ( l ) = 1 n l j = 1 n l U j * U j + l * E U j * U j + l * .
Each entry of:
Γ ^ U * , n ( l ) Γ U * ( l ) = r ^ n , U * ( p , q ) ( l ) r U * ( p , q ) ( l ) p , q = 1 , , d
is given by
r ^ n , U * ( p , q ) ( l ) r U * ( p , q ) ( l ) : = j = 1 n U j * ( p ) U j + l * ( q ) E U j * ( p ) U j + l * ( q ) .
Following [31], proof of Lemma 7.4, the limit distribution of:
Γ ^ U * , n ( l ) Γ U * ( l ) , l = 0 , , h 1
is equal to the limit distribution of:
Γ ^ U * , n ( 0 ) Γ U * ( 0 ) , l = 0 , , h 1 .
We recall the assumption that d * = d p for all p = 1 , , d . We follow [14], Theorem 6 and use the Cramer–Wold device: Let a 1 , 1 , a 1 , 2 , , a d , d R . We are interested in the asymptotic behavior of:
n 1 2 d * p , q = 1 d a p , q r ^ n , U ( p , q ) ( 0 ) r U ( p , q ) ( 0 ) = n 2 d * j = 1 n p , q = 1 d a p , q U j * ( p ) U j * ( q ) E U j * ( p ) U j * ( q ) .
We consider the function:
f U j * = p , q = 1 d a p , q U j * ( p ) U j * ( q ) E U j * ( p ) U j * ( q )
and may apply Theorem 6 in [14]. Using the Hermite decomposition of f as given in (11), we observe that f and therefore, a p , q , p , q = 1 , , d , only affects the Hermite coefficients. Indeed, using [15], Lemma 3.5, the Hermite coefficients reduce to a p , q for each summand on the right-hand side in (A7). Hence, we can state:
n 2 d * j = 1 n p , q = 1 d a p , q U j * ( p ) U j * ( q ) E U j * ( p ) U j * ( q )
D p , q = 1 d a p , q Z 2 , d * + 1 / 2 ( p , q ) ( 1 ) ,
where Z 2 , d * + 1 / 2 ( p , q ) ( 1 ) has the spectral domain representation:
Z 2 , d * + 1 / 2 ( p , q ) ( 1 ) = K p , q ( d * ) R 2 exp i λ 1 + λ 2 1 i λ 1 + λ 2 λ 1 λ 2 d * B ˜ ( p ) d λ 1 B ˜ ( q ) d λ 2
where:
K p , q 2 ( d * ) = 1 2 C 2 2 Γ ( 1 2 d * ) sin π d * 2 , p = q 1 C 2 2 Γ ( 1 2 d * ) sin π d * 2 , p q .
and B ˜ ( d λ ) = B ˜ ( 1 ) ( d λ ) , , B ˜ ( d ) ( d λ ) is an appropriate multivariate Hermitian–Gaussian random measure. Thus, we proved convergence in the distribution of the sample-cross correlation matrix:
n 1 2 d * Γ ^ U * , n ( 0 ) Γ U * ( 0 ) D Z 2 , d * + 1 / 2 ( p , q ) ( 1 ) p , q = 1 , , d .
We take a closer look at the covariance matrix of vec Γ ^ U * , n ( 0 ) Γ U * ( 0 ) . Following [28], Lemma 5.7, we observe:
n 1 2 d * 4 d * 4 d * 1 1 / 2 Cov vec Γ ^ U * , n ( 0 ) Γ U * ( 0 ) , vec Γ ^ U * , n ( 0 ) Γ U * ( 0 ) = I d 2 + K d 2 L U * L U * ,
with L U * as defined in (A3) and ⊗ denotes the Kronecker product. Furthermore, K d denotes the commutation matrix that that transforms vec ( A ) into vec A t for A R d × d . Further details can be found in [32].
Hence, the covariance matrix of the vector of the sample cross-covariances is fully specified by the knowledge of L U * as it arises in the context of long-range dependence in (A3).
We obtain a relation between L and L U * , since:
Γ Y ( · ) = A ˜ Γ U ( · ) A ˜ t .
Both:
Γ Y ( k ) L k 2 d * 1 ( k )
and:
Γ U * ( k ) L U * k 2 d * 1 ( k )
hold and we obtain:
L = A ˜ L U * A ˜ t .
We study the covariance matrix of: vec Γ ^ Y , n ( 0 ) Γ Y ( 0 ) :
n 1 2 d * 4 d * 4 d * 1 1 / 2 Cov vec Γ ^ Y , n ( 0 ) Γ Y ( 0 ) , vec Γ ^ Y , n ( 0 ) Γ Y ( 0 ) t I d 2 + K d 2 L L = I d 2 + K d 2 A ˜ L U * A ˜ t A ˜ L U * A ˜ t = I d 2 + K d 2 A ˜ A ˜ · L U * L U * · A ˜ t A ˜ t .
Let B U * be an upper triangular matrix, such that:
B U * B U * t : = L U * .
We know that such a matrix exists because L U * is positive definite. Analogously, we define B Y :
B Y : = A ˜ B U * .
Then, it holds that:
B Y B Y t = L .
We arrive at:
n 1 2 d * C 2 1 / 2 B Y B Y 1 vec Γ ^ Y , n ( 0 ) Γ Y ( 0 ) = D n 1 2 d * C 2 1 / 2 B U * B U * 1 A A 1 vec A ˜ Γ ^ U * , n ( 0 ) Γ U * ( 0 ) A ˜ t = n 1 2 d * C 2 1 / 2 B U * B U * 1 vec Γ ^ U * , n ( 0 ) Γ U * ( 0 ) D vec Z 2 , d * + 1 / 2 ( p , q ) ( 1 ) p , q = 1 , , d ,
where Z 2 , d * + 1 / 2 ( p , q ) ( 1 ) has the spectral domain representation:
Z 2 , d * + 1 / 2 ( p , q ) ( 1 ) = K p , q ( d * ) R 2 exp i λ 1 + λ 2 1 i λ 1 + λ 2 λ 1 λ 2 d * B ˜ ( p ) d λ 1 B ˜ ( q ) d λ 2
where:
K p , q 2 ( d * ) = 1 2 C 2 2 Γ ( 1 2 d * ) sin π d * 2 , p = q 1 C 2 2 Γ ( 1 2 d * ) sin π d * 2 , p q .
and B ˜ ( d λ ) = B ˜ ( 1 ) ( d λ ) , , B ˜ ( d ) ( d λ ) is a multivariate Hermitian–Gaussian random measure as defined in (12). Note that the standardization on the left-hand side is appropriate since the covariance matrix of vec Z 2 , d * + 1 / 2 ( 1 ) is given by
E ( K 2 ( d * ) R 2 R 2 E λ 1 , λ 2 E λ 3 , λ 4 ¯ vec B ˜ d λ 1 B ˜ d λ 2 t vec B ˜ d λ 3 B ˜ d λ 4 t ¯ t ) .
by denoting:
E λ 1 , λ 2 : = exp i λ 1 + λ 2 1 i λ 1 + λ 2 λ 1 λ 2 d * .
We observe:
E vec B ˜ d λ 1 B ˜ d λ 2 t vec B ˜ d λ 3 B ˜ d λ 4 t ¯ t = I d 2 d λ 1 d λ 2 , λ 1 = λ 3 λ 2 = λ 4 , K d 2 d λ 1 d λ 2 , λ 1 = λ 4 λ 2 = λ 3 ,
following [28], (27). Neither the case λ 1 = λ 2 nor λ 3 = λ 4 has to be incorporated as the diagonals are excluded in the integration in (A12). □
Corollary A1.
Under the assumptions of Lemma A1, there is a different representation of the limit random vector. For h N , we obtain:
n 1 2 d * C 2 1 / 2 vec Γ ^ n ( l ) Γ ( l ) , l = 0 , , h 1 D vec Z 2 , d * + 1 / 2 ( 1 ) l = 0 , , h 1 ,
where vec Z 2 , d * + 1 / 2 ( 1 ) has the spectral domain representation:
vec Z 2 , d * + 1 / 2 ( 1 ) = D K d * R 2 exp i λ 1 + λ 2 1 i λ 1 + λ 2 λ 1 λ 2 d * vec B ˜ L d λ 1 B ˜ L d λ 2 t .
The matrix D K d * is a diagonal matrix:
D K d * = diag vec K d * ,
and K ( d * ) = K p , q ( d * ) p , q = 1 , , d is such that:
K 2 ( d * ) p , q = 1 2 C 2 2 Γ ( 1 2 d * ) sin π d * 2 , p = q 1 C 2 2 Γ ( 1 2 d * ) sin π d * 2 , p q .
Furthermore, B ˜ L ( d λ ) is a multivariate Hermitian–Gaussian random measure that fulfills:
E B ˜ L ( d λ ) B ˜ L ( d λ ) * = L d λ .
Proof. 
The proof is an immediate consequence of Lemma A1 using B ˜ L ( d λ ) = B Y B ˜ ( d λ ) with B Y B Y t = L and B ˜ ( d λ ) as defined in (12). □

Appendix A.2. Proof of Theorem 2.1

Proof. 
Without loss of generality, we assume E f Y j , h = 0 . Following the argumentation in [20], Theorem 5.9, we first remark that Y j , h = D A U j , h with U j , h and A as described in (A4) and (A5). We now want to study the asymptotic behavior of the partial sum j = 1 n f * U j where f * U j , h : = f A U j , h = D f Y j , h . Since m f * , I d h = m f A , I d h = m f , Σ d , h = 2 , as can be seen in [15], Lemma 3.7, hence, we know by [14], Theorem 6, that these partial sums are dominated by the second order terms in the Hermite expansion of f * :
j = 1 n f * U j , h j = 1 n l 1 + + l d h = 2 E f * U j , h H l 1 , , l d h U j , h H l 1 , , l d h U j , h + o P n 2 d * .
This follows from the multivariate extension of the Reduction Theorem as proven in [14]. We obtain:

$$\begin{aligned}&\sum_{l_1+\cdots+l_{dh}=2}\mathbb{E}\left(f^*(U_{j,h})H_{l_1,\ldots,l_{dh}}(U_{j,h})\right)H_{l_1,\ldots,l_{dh}}(U_{j,h})\\&\quad=\sum_{i=1}^{dh}\mathbb{E}\left(f^*(U_{j,h})\left(\left(U_{j,h}^{(i)}\right)^2-1\right)\right)\left(\left(U_{j,h}^{(i)}\right)^2-1\right)+\sum_{\substack{1\le i,k\le dh\\ i\neq k}}\mathbb{E}\left(f^*(U_{j,h})U_{j,h}^{(i)}U_{j,h}^{(k)}\right)U_{j,h}^{(i)}U_{j,h}^{(k)}\\&\quad=\sum_{i=1}^{dh}\mathbb{E}\left(f^*(U_{j,h})\left(U_{j,h}^{(i)}\right)^2\right)\left(\left(U_{j,h}^{(i)}\right)^2-1\right)+\sum_{\substack{1\le i,k\le dh\\ i\neq k}}\mathbb{E}\left(f^*(U_{j,h})U_{j,h}^{(i)}U_{j,h}^{(k)}\right)U_{j,h}^{(i)}U_{j,h}^{(k)},\end{aligned}$$

since $\mathbb{E}f^*(U_{j,h}) = \mathbb{E}f(Y_{j,h}) = 0$. This results in:

$$\begin{aligned}&\sum_{i=1}^{dh}\mathbb{E}\left(f^*(U_{j,h})\left(U_{j,h}^{(i)}\right)^2\right)\left(\left(U_{j,h}^{(i)}\right)^2-1\right)+\sum_{\substack{1\le i,k\le dh\\ i\neq k}}\mathbb{E}\left(f^*(U_{j,h})U_{j,h}^{(i)}U_{j,h}^{(k)}\right)U_{j,h}^{(i)}U_{j,h}^{(k)}\\&\quad=\sum_{1\le i,k\le dh}\mathbb{E}\left(f^*(U_{j,h})U_{j,h}^{(i)}U_{j,h}^{(k)}\right)U_{j,h}^{(i)}U_{j,h}^{(k)}-\sum_{i=1}^{dh}\mathbb{E}\left(f^*(U_{j,h})\left(U_{j,h}^{(i)}\right)^2\right).\end{aligned}$$

Note that:

$$\sum_{1\le i,k\le dh}\mathbb{E}\left(f^*(U_{j,h})U_{j,h}^{(i)}U_{j,h}^{(k)}\right)\mathbb{E}\left(U_{j,h}^{(i)}U_{j,h}^{(k)}\right)=\sum_{i=1}^{dh}\mathbb{E}\left(f^*(U_{j,h})\left(U_{j,h}^{(i)}\right)^2\right),$$

since the entries of $U_{j,h}$ are independent for fixed $j$ and identically $N(0,1)$ distributed. Thus, the subtrahend in (A14) equals the expected value of the minuend.
Define $B := (b_{i,k})_{1\le i,k\le dh}\in\mathbb{R}^{dh\times dh}$ with:

$$b_{i,k} := \mathbb{E}\left(f^*(U_{j,h})U_{j,h}^{(i)}U_{j,h}^{(k)}\right)=\mathbb{E}\left(f^*(U_{1,h})U_{1,h}^{(i)}U_{1,h}^{(k)}\right),$$

since we are considering a stationary process. We obtain:

$$B = \mathbb{E}\left(U_{j,h}f^*(U_{j,h})U_{j,h}^t\right)=\mathbb{E}\left(A^{-1}Y_{j,h}f(Y_{j,h})Y_{j,h}^t\left(A^{-1}\right)^t\right).$$
Hence, we can state the following:

$$\begin{aligned}\sum_{1\le i,k\le dh}\mathbb{E}\left(f^*(U_{j,h})U_{j,h}^{(i)}U_{j,h}^{(k)}\right)U_{j,h}^{(i)}U_{j,h}^{(k)}&=U_{j,h}^tBU_{j,h}\\&\overset{\mathcal D}{=}Y_{j,h}^t\left(A^{-1}\right)^tBA^{-1}Y_{j,h}\\&=Y_{j,h}^t\left(A^{-1}\right)^tA^{-1}\mathbb{E}\left(Y_{j,h}f(Y_{j,h})Y_{j,h}^t\right)\left(A^{-1}\right)^tA^{-1}Y_{j,h}\\&=Y_{j,h}^t\Sigma_{d,h}^{-1}\mathbb{E}\left(Y_{j,h}f(Y_{j,h})Y_{j,h}^t\right)\Sigma_{d,h}^{-1}Y_{j,h}\\&=Y_{j,h}^t\mathcal{A}Y_{j,h}=\sum_{1\le i,k\le dh}Y_{j,h}^{(i)}Y_{j,h}^{(k)}\alpha_{ik},\end{aligned}$$

where we define $\mathcal{A} := (\alpha_{ik})_{1\le i,k\le dh} := \Sigma_{d,h}^{-1}C\Sigma_{d,h}^{-1}$ (not to be confused with the mixing matrix $A$), with $C := \mathbb{E}\left(Y_{j,h}f(Y_{j,h})Y_{j,h}^t\right)$ as the matrix of second-order Hermite coefficients, in contrast to $B$ now with respect to the originally considered process $(Y_{j,h})_{j\in\mathbb{Z}}$.
Remembering the special structure of $Y_{j,h} = \left(Y_j^{(1)},\ldots,Y_{j+h-1}^{(1)},\ldots,Y_j^{(d)},\ldots,Y_{j+h-1}^{(d)}\right)^t$, namely that $Y_{j,h}^{(k)} = Y_{j+\left((k-1)\bmod h\right)}^{(\lceil k/h\rceil)}$, $k=1,\ldots,dh$, we can see that:

$$\sum_{j=1}^n\sum_{1\le i,k\le dh}Y_{j,h}^{(i)}Y_{j,h}^{(k)}\alpha_{ik}=\sum_{j=1}^n\sum_{1\le i,k\le dh}Y_{j+\left((i-1)\bmod h\right)}^{(\lceil i/h\rceil)}Y_{j+\left((k-1)\bmod h\right)}^{(\lceil k/h\rceil)}\alpha_{ik}=\sum_{j=1}^n\sum_{p,q=1}^d\sum_{i,k=1}^hY_{j+i-1}^{(p)}Y_{j+k-1}^{(q)}\alpha_{ik}^{(p,q)},$$

where we divide:

$$\mathcal{A}=\begin{pmatrix}\mathcal{A}^{(1,1)}&\mathcal{A}^{(1,2)}&\cdots&\mathcal{A}^{(1,d)}\\\mathcal{A}^{(2,1)}&\mathcal{A}^{(2,2)}&\cdots&\mathcal{A}^{(2,d)}\\\vdots&\vdots&\ddots&\vdots\\\mathcal{A}^{(d,1)}&\mathcal{A}^{(d,2)}&\cdots&\mathcal{A}^{(d,d)}\end{pmatrix},$$

with $\mathcal{A}^{(p,q)}=\left(\alpha_{i,k}^{(p,q)}\right)_{1\le i,k\le h}\in\mathbb{R}^{h\times h}$ such that $\alpha_{i,k}^{(p,q)}=\alpha_{i+(p-1)h,\,k+(q-1)h}$ for each $p,q=1,\ldots,d$ and $i,k=1,\ldots,h$.
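The block partition is plain index bookkeeping; a small sketch (with a random stand-in for the coefficient matrix) illustrates how block $(p,q)$ is recovered by slicing:

```python
# Extract block (p, q) of a (dh x dh) matrix: entries alpha_{i+(p-1)h, k+(q-1)h}.
import numpy as np

d, h = 3, 4
rng = np.random.default_rng(0)
alpha = rng.standard_normal((d * h, d * h))   # stands in for Sigma^{-1} C Sigma^{-1}

def block(p, q):
    """Return the h x h block A^{(p,q)} for 1-based p, q."""
    return alpha[(p - 1) * h:p * h, (q - 1) * h:q * h]

assert block(2, 3)[0, 1] == alpha[(2 - 1) * h + 0, (3 - 1) * h + 1]
```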
We can now split the considered sum in (A17) in a way such that we are afterwards able to express it in terms of sample cross-covariances. In order to do so, we define the sample cross-covariance at lag $l$ by

$$\hat r_n^{(p,q)}(l) := \frac{1}{n}\sum_{j=1}^{n-l}X_j^{(p)}X_{j+l}^{(q)}$$

for $p,q=1,\ldots,d$.
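A direct implementation of this estimator (note the normalization $1/n$ rather than $1/(n-l)$) might look as follows; the function name is ours:

```python
# Sample cross-covariance at lag l for a multivariate series X of shape (n, d).
import numpy as np

def r_hat(X, p, q, l):
    """hat r_n^{(p,q)}(l) = (1/n) sum_{j=1}^{n-l} X_j^{(p)} X_{j+l}^{(q)}; p, q 1-based."""
    n = X.shape[0]
    return np.sum(X[:n - l, p - 1] * X[l:, q - 1]) / n
```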
Note that in the case $h=1$, it follows directly that:

$$\sum_{j=1}^n\sum_{p,q=1}^d\sum_{i,k=1}^hY_{j+i-1}^{(p)}Y_{j+k-1}^{(q)}\alpha_{ik}^{(p,q)}=\sum_{p,q=1}^d\alpha_{1,1}^{(p,q)}\sum_{j=1}^nY_j^{(p)}Y_j^{(q)}=n\sum_{p,q=1}^d\alpha_{1,1}^{(p,q)}\,\hat r_n^{(p,q)}(0).$$
The case $h=2$ has to be regarded separately, too, and we obtain:

$$\begin{aligned}&\sum_{j=1}^n\sum_{p,q=1}^d\sum_{i,k=1}^2Y_{j+i-1}^{(p)}Y_{j+k-1}^{(q)}\alpha_{ik}^{(p,q)}\\&\quad=\sum_{p,q=1}^d\left(\alpha_{1,1}^{(p,q)}\sum_{j=1}^nY_j^{(p)}Y_j^{(q)}+\alpha_{1,2}^{(p,q)}\sum_{j=1}^nY_j^{(p)}Y_{j+1}^{(q)}+\alpha_{2,1}^{(p,q)}\sum_{j=1}^nY_{j+1}^{(p)}Y_j^{(q)}+\alpha_{2,2}^{(p,q)}\sum_{j=1}^nY_{j+1}^{(p)}Y_{j+1}^{(q)}\right)\\&\quad=\sum_{p,q=1}^d\Bigg(\alpha_{1,1}^{(p,q)}\,n\,\hat r_n^{(p,q)}(0)+\alpha_{1,2}^{(p,q)}\Big(n\,\hat r_n^{(p,q)}(1)+\underbrace{Y_n^{(p)}Y_{n+1}^{(q)}}_{\bigstar}\Big)+\alpha_{2,1}^{(p,q)}\Big(n\,\hat r_n^{(q,p)}(1)+\underbrace{Y_{n+1}^{(p)}Y_n^{(q)}}_{\bigstar}\Big)\\&\qquad\qquad+\alpha_{2,2}^{(p,q)}\Big(n\,\hat r_n^{(p,q)}(0)+\underbrace{Y_{n+1}^{(p)}Y_{n+1}^{(q)}-Y_1^{(p)}Y_1^{(q)}}_{\bigstar}\Big)\Bigg).\end{aligned}$$
Note that for each of the terms labeled by ★, the following holds for $d^*\in\left(\frac14,\frac12\right)$:

$$n^{-2d^*}\,\bigstar\ \xrightarrow{P}\ 0\qquad(n\to\infty).$$

We use this property later on when dealing with the asymptotics of the term in (A17). Finally, we consider the term in (A17) for $h\geq 3$ and arrive at:
$$\begin{aligned}&\sum_{j=1}^n\sum_{p,q=1}^d\sum_{i,k=1}^hY_{j+i-1}^{(p)}Y_{j+k-1}^{(q)}\alpha_{ik}^{(p,q)}=\sum_{p,q=1}^d\sum_{i,k=1}^h\alpha_{ik}^{(p,q)}\sum_{j=i}^{n+i-1}Y_j^{(p)}Y_{j+k-i}^{(q)}\\&\quad=\sum_{p,q=1}^d\sum_{l=0}^{h-1}\sum_{i=1}^{h-l}\alpha_{i,i+l}^{(p,q)}\sum_{j=i}^{n+i-1}Y_j^{(p)}Y_{j+l}^{(q)}+\sum_{p,q=1}^d\sum_{l=-(h-1)}^{-1}\sum_{i=1-l}^{h}\alpha_{i,i+l}^{(p,q)}\sum_{j=i}^{n+i-1}Y_j^{(p)}Y_{j+l}^{(q)}\\&\quad=\sum_{p,q=1}^d\sum_{l=0}^{h-1}\sum_{i=1}^{h-l}\alpha_{i,i+l}^{(p,q)}\sum_{j=i}^{n+i-1}Y_j^{(p)}Y_{j+l}^{(q)}+\sum_{p,q=1}^d\sum_{l=1}^{h-1}\sum_{i=1}^{h-l}\alpha_{i+l,i}^{(p,q)}\sum_{j=i}^{n+i-1}Y_{j+l}^{(p)}Y_j^{(q)}\\&\quad=\sum_{p,q=1}^d\sum_{i=1}^h\alpha_{i,i}^{(p,q)}\sum_{j=i}^{n+i-1}Y_j^{(p)}Y_j^{(q)}+\sum_{p,q=1}^d\sum_{l=1}^{h-1}\sum_{i=1}^{h-l}\left(\alpha_{i,i+l}^{(p,q)}\sum_{j=i}^{n+i-1}Y_j^{(p)}Y_{j+l}^{(q)}+\alpha_{i+l,i}^{(p,q)}\sum_{j=i}^{n+i-1}Y_{j+l}^{(p)}Y_j^{(q)}\right)\\&\quad=\sum_{p,q=1}^d\left(\alpha_{1,1}^{(p,q)}\sum_{j=1}^nY_j^{(p)}Y_j^{(q)}+\sum_{i=2}^h\alpha_{i,i}^{(p,q)}\sum_{j=i}^{n+i-1}Y_j^{(p)}Y_j^{(q)}\right)\\&\qquad+\sum_{p,q=1}^d\sum_{l=1}^{h-2}\left(\alpha_{1,1+l}^{(p,q)}\sum_{j=1}^nY_j^{(p)}Y_{j+l}^{(q)}+\alpha_{1+l,1}^{(p,q)}\sum_{j=1}^nY_{j+l}^{(p)}Y_j^{(q)}+\sum_{i=2}^{h-l}\left(\alpha_{i,i+l}^{(p,q)}\sum_{j=i}^{n+i-1}Y_j^{(p)}Y_{j+l}^{(q)}+\alpha_{i+l,i}^{(p,q)}\sum_{j=i}^{n+i-1}Y_{j+l}^{(p)}Y_j^{(q)}\right)\right)\\&\qquad+\sum_{p,q=1}^d\left(\alpha_{1,h}^{(p,q)}\sum_{j=1}^nY_j^{(p)}Y_{j+h-1}^{(q)}+\alpha_{h,1}^{(p,q)}\sum_{j=1}^nY_{j+h-1}^{(p)}Y_j^{(q)}\right)\\&\quad=\sum_{p,q=1}^d\left(\alpha_{1,1}^{(p,q)}\,n\,\hat r_n^{(p,q)}(0)+\sum_{i=2}^h\alpha_{i,i}^{(p,q)}\Big(\underbrace{\sum_{j=n+1}^{n+i-1}Y_j^{(p)}Y_j^{(q)}}_{\bigstar}+n\,\hat r_n^{(p,q)}(0)-\underbrace{\sum_{j=1}^{i-1}Y_j^{(p)}Y_j^{(q)}}_{\bigstar}\Big)\right)\\&\qquad+\sum_{p,q=1}^d\sum_{l=1}^{h-2}\Bigg(\alpha_{1,1+l}^{(p,q)}\Big(n\,\hat r_n^{(p,q)}(l)+\underbrace{\sum_{j=n-l+1}^nY_j^{(p)}Y_{j+l}^{(q)}}_{\bigstar}\Big)+\alpha_{1+l,1}^{(p,q)}\Big(n\,\hat r_n^{(q,p)}(l)+\underbrace{\sum_{j=n-l+1}^nY_{j+l}^{(p)}Y_j^{(q)}}_{\bigstar}\Big)\\&\qquad\qquad+\sum_{i=2}^{h-l}\bigg(\alpha_{i,i+l}^{(p,q)}\Big(\underbrace{\sum_{j=n-l+1}^{n+i-1}Y_j^{(p)}Y_{j+l}^{(q)}}_{\bigstar}+n\,\hat r_n^{(p,q)}(l)-\underbrace{\sum_{j=1}^{i-1}Y_j^{(p)}Y_{j+l}^{(q)}}_{\bigstar}\Big)\\&\qquad\qquad\qquad+\alpha_{i+l,i}^{(p,q)}\Big(\underbrace{\sum_{j=n-l+1}^{n+i-1}Y_{j+l}^{(p)}Y_j^{(q)}}_{\bigstar}+n\,\hat r_n^{(q,p)}(l)-\underbrace{\sum_{j=1}^{i-1}Y_{j+l}^{(p)}Y_j^{(q)}}_{\bigstar}\Big)\bigg)\Bigg)\\&\qquad+\sum_{p,q=1}^d\left(\alpha_{1,h}^{(p,q)}\Big(\underbrace{\sum_{j=n-h+2}^nY_j^{(p)}Y_{j+h-1}^{(q)}}_{\bigstar}+n\,\hat r_n^{(p,q)}(h-1)\Big)+\alpha_{h,1}^{(p,q)}\Big(\underbrace{\sum_{j=n-h+2}^nY_{j+h-1}^{(p)}Y_j^{(q)}}_{\bigstar}+n\,\hat r_n^{(q,p)}(h-1)\Big)\right).\end{aligned}$$
Again, for each of the terms labeled by ★, it holds for $d^*\in\left(\frac14,\frac12\right)$ that

$$n^{-2d^*}\,\bigstar\ \xrightarrow{P}\ 0\qquad(n\to\infty),$$

since each ★ describes a sum with a finite number (independent of $n$) of summands. Therefore, we subsume the terms denoted by ★ into $o_P\left(n^{2d^*}\right)$ in the following.
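The bookkeeping behind the ★ terms can be checked numerically: a shifted window sum equals $n\,\hat r_n(l)$ plus two edge sums with finitely many summands. A small sketch with toy index values (i.i.d. noise suffices, since only the index algebra is tested; 1-based index $j$ is stored at array position $j-1$):

```python
# Verify: sum_{j=i}^{n+i-1} Y_j Z_{j+l} = n*r_hat(l) + edge sums (the ★ terms).
import numpy as np

rng = np.random.default_rng(1)
n, h, l, i = 200, 4, 1, 3                    # toy values with 1 <= i <= h - l
Y = rng.standard_normal(n + h)               # plays Y^(p)
Z = rng.standard_normal(n + h)               # plays Y^(q)

S = lambda a, b: sum(Y[j - 1] * Z[j + l - 1] for j in range(a, b + 1))

window = S(i, n + i - 1)                     # shifted window sum
n_r_hat = S(1, n - l)                        # n * hat r_n(l)
edges = S(n - l + 1, n + i - 1) - S(1, i - 1)  # the two ★ sums
print(np.isclose(window, n_r_hat + edges))   # True
```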
With these calculations, we are able to re-express the partial sum whose asymptotics we are interested in, in terms of the sample cross-covariances of the original long-range dependent process $(Y_j)_{j\in\mathbb{Z}}$.
Finally, the previous calculations lead to:

$$\begin{aligned}\sum_{j=1}^nf(Y_{j,h})&\overset{\mathcal D}{=}\sum_{j=1}^nf^*(U_{j,h})\\&\overset{(A14)}{=}\sum_{j=1}^n\left(\sum_{1\le i,k\le dh}\mathbb{E}\left(f^*(U_{j,h})U_{j,h}^{(i)}U_{j,h}^{(k)}\right)U_{j,h}^{(i)}U_{j,h}^{(k)}-\sum_{i=1}^{dh}\mathbb{E}\left(f^*(U_{j,h})\left(U_{j,h}^{(i)}\right)^2\right)\right)+o_P\left(n^{2d^*}\right)\\&\overset{(A17)}{\underset{\mathcal D}{=}}\sum_{j=1}^n\sum_{p,q=1}^d\sum_{i,k=1}^h\alpha_{ik}^{(p,q)}\left(Y_{j+i-1}^{(p)}Y_{j+k-1}^{(q)}-\mathbb{E}\left(Y_{j+i-1}^{(p)}Y_{j+k-1}^{(q)}\right)\right)+o_P\left(n^{2d^*}\right),\end{aligned}$$

where (A23) follows, since (A15) yields:

$$\sum_{i=1}^{dh}\mathbb{E}\left(f^*(U_{j,h})\left(U_{j,h}^{(i)}\right)^2\right)=\sum_{1\le i,k\le dh}\mathbb{E}\left(f^*(U_{j,h})U_{j,h}^{(i)}U_{j,h}^{(k)}\right)\mathbb{E}\left(U_{j,h}^{(i)}U_{j,h}^{(k)}\right)\overset{(A17)}{=}\sum_{p,q=1}^d\sum_{i,k=1}^h\alpha_{ik}^{(p,q)}\mathbb{E}\left(Y_{j+i-1}^{(p)}Y_{j+k-1}^{(q)}\right).$$
Taking the parts containing the sample cross-covariances into account, we derive:

$$\begin{aligned}&\sum_{j=1}^n\sum_{p,q=1}^d\sum_{i,k=1}^h\alpha_{ik}^{(p,q)}\left(Y_{j+i-1}^{(p)}Y_{j+k-1}^{(q)}-\mathbb{E}\left(Y_{j+i-1}^{(p)}Y_{j+k-1}^{(q)}\right)\right)+o_P\left(n^{2d^*}\right)\\&\quad\overset{(A22)}{=}\sum_{p,q=1}^d\left(\alpha_{1,1}^{(p,q)}\,n\left(\hat r_n^{(p,q)}(0)-r^{(p,q)}(0)\right)+\sum_{i=2}^h\alpha_{i,i}^{(p,q)}\,n\left(\hat r_n^{(p,q)}(0)-r^{(p,q)}(0)\right)\right)\\&\qquad+\sum_{p,q=1}^d\sum_{l=1}^{h-2}\left(\alpha_{1,1+l}^{(p,q)}\,n\left(\hat r_n^{(p,q)}(l)-r^{(p,q)}(l)\right)+\alpha_{1+l,1}^{(p,q)}\,n\left(\hat r_n^{(q,p)}(l)-r^{(q,p)}(l)\right)\right.\\&\qquad\qquad\left.+\sum_{i=2}^{h-l}\left(\alpha_{i,i+l}^{(p,q)}\,n\left(\hat r_n^{(p,q)}(l)-r^{(p,q)}(l)\right)+\alpha_{i+l,i}^{(p,q)}\,n\left(\hat r_n^{(q,p)}(l)-r^{(q,p)}(l)\right)\right)\right)\\&\qquad+\sum_{p,q=1}^d\left(\alpha_{1,h}^{(p,q)}\,n\left(\hat r_n^{(p,q)}(h-1)-r^{(p,q)}(h-1)\right)+\alpha_{h,1}^{(p,q)}\,n\left(\hat r_n^{(q,p)}(h-1)-r^{(q,p)}(h-1)\right)\right)+o_P\left(n^{2d^*}\right)\\&\quad=n\sum_{p,q=1}^d\left(\sum_{l=0}^{h-1}\sum_{i=1}^{h-l}\alpha_{i,i+l}^{(p,q)}\left(\hat r_n^{(p,q)}(l)-r^{(p,q)}(l)\right)+\sum_{l=1}^{h-1}\sum_{i=1}^{h-l}\alpha_{i+l,i}^{(p,q)}\left(\hat r_n^{(q,p)}(l)-r^{(q,p)}(l)\right)\right)+o_P\left(n^{2d^*}\right).\end{aligned}$$
We take a closer look at the impact of each long-range dependence parameter $d_p$, $p=1,\ldots,d$, on the convergence of this sum. The setting we are considering does not allow for a normalization depending on $p$ and $q$ for each cross-covariance $\hat r_n^{(p,q)}(l)-r^{(p,q)}(l)$, $l=0,\ldots,h-1$; instead, we need one normalization valid for all $p,q=1,\ldots,d$. Hence, we recall the set $P^* := \left\{p\in\{1,\ldots,d\}:\ d_p\geq d_q\ \text{for all}\ q\in\{1,\ldots,d\}\right\}$ and the parameter $d^* = \max_{p=1,\ldots,d}d_p$, such that for each $p\in P^*$ we have $d_p=d^*$. For each $p,q\in\{1,\ldots,d\}$ with $(p,q)\notin P^*\times P^*$ and $l=0,\ldots,h-1$, we conclude that:

$$\mathbb{E}\left(n^{1-2d^*}\left(\hat r_n^{(p,q)}(l)-r^{(p,q)}(l)\right)\right)^2=n^{2\left(d_p+d_q-2d^*\right)}\,\mathbb{E}\left(n^{1-d_p-d_q}\left(\hat r_n^{(p,q)}(l)-r^{(p,q)}(l)\right)\right)^2\sim n^{2d_p+2d_q-4d^*}\,C_2\left(L_{p,p}L_{q,q}+L_{p,q}L_{q,p}\right)\longrightarrow0\quad(n\to\infty),$$

since $d_p+d_q-2d^*<0$. This implies that:

$$n^{1-2d^*}\left(\hat r_n^{(p,q)}(l)-r^{(p,q)}(l)\right)\ \xrightarrow{P}\ 0,\qquad l=0,\ldots,h-1.$$

Hence, using Slutsky's theorem, the crucial parameters that determine the normalization and, therefore, the limit distribution of (A27) are those in $P^*$. We have the same long-range dependence parameter $d^*$ for all $p\in P^*$. Applying Lemma A1 and using the symmetry of the cross-covariance function in $l=0$, i.e., $r^{(p,q)}(0)=r^{(q,p)}(0)$ for $p,q\in P^*$, we obtain:
$$\begin{aligned}&\sum_{p,q=1}^d\left(\sum_{l=0}^{h-1}\sum_{i=1}^{h-l}\alpha_{i,i+l}^{(p,q)}\left(\hat r_n^{(p,q)}(l)-r^{(p,q)}(l)\right)+\sum_{l=1}^{h-1}\sum_{i=1}^{h-l}\alpha_{i+l,i}^{(p,q)}\left(\hat r_n^{(q,p)}(l)-r^{(q,p)}(l)\right)\right)\\&\quad=\sum_{p,q\in P^*}\left(\sum_{l=0}^{h-1}\sum_{i=1}^{h-l}\alpha_{i,i+l}^{(p,q)}\left(\hat r_n^{(p,q)}(0)-r^{(p,q)}(0)\right)+\sum_{l=1}^{h-1}\sum_{i=1}^{h-l}\alpha_{i+l,i}^{(p,q)}\left(\hat r_n^{(q,p)}(0)-r^{(q,p)}(0)\right)\right)+o_P\left(n^{2d^*-1}\right)\\&\quad=\sum_{p,q\in P^*}\left(\hat r_n^{(p,q)}(0)-r^{(p,q)}(0)\right)\left(\sum_{l=0}^{h-1}\sum_{i=1}^{h-l}\alpha_{i,i+l}^{(p,q)}+\sum_{l=1}^{h-1}\sum_{i=1}^{h-l}\alpha_{i+l,i}^{(p,q)}\right)+o_P\left(n^{2d^*-1}\right)\\&\quad=\sum_{p,q\in P^*}\left(\hat r_n^{(p,q)}(0)-r^{(p,q)}(0)\right)\sum_{i,k=1}^h\alpha_{i,k}^{(p,q)}+o_P\left(n^{2d^*-1}\right)=\sum_{p,q\in P^*}\tilde\alpha^{(p,q)}\left(\hat r_n^{(p,q)}(0)-r^{(p,q)}(0)\right)+o_P\left(n^{2d^*-1}\right),\end{aligned}$$

by defining $\tilde\alpha^{(p,q)} := \sum_{i,k=1}^h\alpha_{i,k}^{(p,q)}$. Applying the continuous mapping theorem given in [33], Theorem 2.3, to the result in Corollary A1, we arrive at:
$$\begin{aligned}n^{-2d^*}C_2^{-1/2}\sum_{j=1}^nf(Y_{j,h})&=n^{-2d^*}C_2^{-1/2}\left(n\sum_{p,q=1}^d\tilde\alpha^{(p,q)}\left(\hat r_n^{(p,q)}(0)-r^{(p,q)}(0)\right)+o_P\left(n^{2d^*}\right)\right)\\&=n^{1-2d^*}C_2^{-1/2}\sum_{p,q=1}^d\tilde\alpha^{(p,q)}\left(\hat r_n^{(p,q)}(0)-r^{(p,q)}(0)\right)+o_P(1)\\&\xrightarrow{\mathcal D}\ \sum_{p,q\in P^*}\tilde\alpha^{(p,q)}Z_{2,d^*+1/2}^{(p,q)}(1),\end{aligned}$$

where:

$$Z_{2,d^*+1/2}^{(p,q)}(1)=K_{p,q}(d^*)\int_{\mathbb{R}^2}\frac{\exp\left(i(\lambda_1+\lambda_2)\right)-1}{i(\lambda_1+\lambda_2)}\,|\lambda_1\lambda_2|^{-d^*}\,\tilde B_L^{(p)}(d\lambda_1)\,\tilde B_L^{(q)}(d\lambda_2).$$

The matrix $K(d^*)$ is given in Corollary A1. Moreover, $\tilde B_L(d\lambda)$ is a multivariate Hermitian–Gaussian random measure with $\mathbb{E}\left(\tilde B_L(d\lambda)\tilde B_L(d\lambda)^*\right)=L\,d\lambda$ and $L$ as defined in (2). □
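To illustrate the statement, one can simulate the normalized lag-0 statistic for two cross-correlated fractional Gaussian noise paths with common Hurst parameter $H=d^*+1/2$. The following hedged sketch uses a standard circulant-embedding generator (not taken from the paper); all parameter choices and names are ours. The empirical distribution of the draws is markedly non-Gaussian, in line with the Rosenblatt-type limit:

```python
# Simulate n^{1-2d*} (r_hat(0) - r(0)) for a bivariate fGn-type pair.
import numpy as np

def fgn(n, H, rng):
    """One exact path of fractional Gaussian noise via circulant embedding."""
    k = np.arange(n + 1)
    g = 0.5 * ((k + 1) ** (2 * H) - 2 * k ** (2 * H) + np.abs(k - 1) ** (2 * H))
    c = np.concatenate([g, g[-2:0:-1]])            # first row of the embedding
    lam = np.clip(np.fft.fft(c).real, 0.0, None)   # eigenvalues, clip round-off
    m = len(c)
    w = rng.standard_normal(m) + 1j * rng.standard_normal(m)
    return (np.fft.fft(np.sqrt(lam) * w) / np.sqrt(m)).real[:n]

rng = np.random.default_rng(42)
n, H, rho = 2000, 0.9, 0.5                         # d* = H - 1/2 = 0.4
d_star = H - 0.5
stats = []
for _ in range(500):
    Y1 = fgn(n, H, rng)
    Y2 = rho * Y1 + np.sqrt(1 - rho ** 2) * fgn(n, H, rng)  # r^{(1,2)}(0) = rho
    stats.append(n ** (1 - 2 * d_star) * (np.mean(Y1 * Y2) - rho))
print(np.mean(stats), np.std(stats))
```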

Appendix A.3. Proof of Corollary 2.2

Proof. 
We assumed $d^*\in\left(\frac14,\frac12\right)$, because otherwise we would leave the long-range dependent setting: since we are studying functionals with Hermite rank 2, the transformed process would no longer be long-range dependent, and limit theorems for functionals of short-range dependent processes would apply, as can be seen in Theorem 4 in [14]. This choice of $d^*$ ensures that the multivariate generalization of the Reduction Theorem, as used in the proof of Theorem 2.1, still holds under these softened assumptions, as explained in (8).
We turn to the asymptotics of $g^{(p,q)}(Y_j)$. For all $p,q\in\{1,\ldots,d\}$ with $(p,q)\notin P^*\times P^*$, i.e., excluding the case $d_p=d_q=d^*$, and for all $l=0,\ldots,h-1$, we obtain as in (A26) that:

$$\mathbb{E}\left(n^{1-2d^*}\left(\hat r_n^{(p,q)}(l)-r^{(p,q)}(l)\right)\right)^2=n^{2\left(d_p+d_q-2d^*\right)}\,\mathbb{E}\left(n^{1-d_p-d_q}\left(\hat r_n^{(p,q)}(l)-r^{(p,q)}(l)\right)\right)^2\sim n^{2d_p+2d_q-4d^*}\,C_2\left(L_{p,p}L_{q,q}+L_{p,q}L_{q,p}\right)\longrightarrow0\quad(n\to\infty),$$

since $d_p+d_q-2d^*<0$. This implies that:

$$n^{1-2d^*}\left(\hat r_n^{(p,q)}(l)-r^{(p,q)}(l)\right)\ \xrightarrow{P}\ 0.$$

Applying Slutsky's theorem, we observe that only $p,q\in P^*$ have an impact on the convergence behavior as given in (A27) and hence, the result of Theorem 2.1 holds. □
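The rate argument can be made concrete with toy parameter values: the exponent $2d_p+2d_q-4d^*$ is strictly negative unless $d_p=d_q=d^*$.

```python
# Rate exponent 2 d_p + 2 d_q - 4 d* for some illustrative parameter pairs.
d_star = 0.45
for dp, dq in [(0.45, 0.45), (0.45, 0.30), (0.30, 0.30)]:
    print((dp, dq), 2 * dp + 2 * dq - 4 * d_star)   # 0.0, -0.3, -0.6
```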

Appendix A.4. Proof of Theorem 2.3

Proof. 
We follow the proof of Theorem 2.1 up to (A27), in order to obtain a limit distribution that can be expressed as the sum of two standard Rosenblatt random variables:

$$\begin{aligned}\sum_{p,q=1}^2\tilde\alpha^{(p,q)}\left(\hat r_n^{(p,q)}(0)-r^{(p,q)}(0)\right)&=\frac1n\sum_{j=1}^n\sum_{p,q=1}^2\tilde\alpha^{(p,q)}\left(Y_j^{(p)}Y_j^{(q)}-r^{(p,q)}(0)\right)\\&=\frac1n\sum_{j=1}^n\left(\left(Y_j^{(1)},Y_j^{(2)}\right)\begin{pmatrix}\tilde\alpha^{(1,1)}&\tilde\alpha^{(1,2)}\\\tilde\alpha^{(2,1)}&\tilde\alpha^{(2,2)}\end{pmatrix}\left(Y_j^{(1)},Y_j^{(2)}\right)^t-\mathbb{E}\left(\left(Y_j^{(1)},Y_j^{(2)}\right)\begin{pmatrix}\tilde\alpha^{(1,1)}&\tilde\alpha^{(1,2)}\\\tilde\alpha^{(2,1)}&\tilde\alpha^{(2,2)}\end{pmatrix}\left(Y_j^{(1)},Y_j^{(2)}\right)^t\right)\right).\end{aligned}$$
We recall that $\tilde\alpha^{(p,q)}=\sum_{i,k=1}^h\alpha_{i,k}^{(p,q)}=\sum_{i,k=1}^h\alpha_{i+(p-1)h,\,k+(q-1)h}$ for $p,q=1,2$ and $\mathcal{A}=(\alpha_{i,k})_{1\le i,k\le 2h}=\Sigma_{2,h}^{-1}C\Sigma_{2,h}^{-1}$. Since $\Sigma_{2,h}^{-1}$ is the inverse of the covariance matrix $\Sigma_{2,h}$ of $Y_{1,h}$, it is a symmetric matrix. The matrix of second-order Hermite coefficients $C$ has the representation $C=\mathbb{E}\left(Y_{j,h}f(Y_{j,h})Y_{j,h}^t\right)$ and therefore, $c_{i,k}=\mathbb{E}\left(Y_{j,h}^{(i)}Y_{j,h}^{(k)}f(Y_{j,h})\right)=c_{k,i}$ for each $i,k=1,\ldots,2h$. Then, $\mathcal{A}$ is also a symmetric matrix, since $\mathcal{A}^t=\left(\Sigma_{2,h}^{-1}C\Sigma_{2,h}^{-1}\right)^t=\left(\Sigma_{2,h}^{-1}\right)^tC^t\left(\Sigma_{2,h}^{-1}\right)^t=\mathcal{A}$. We can now show that $\begin{pmatrix}\tilde\alpha^{(1,1)}&\tilde\alpha^{(1,2)}\\\tilde\alpha^{(2,1)}&\tilde\alpha^{(2,2)}\end{pmatrix}$ is a symmetric matrix, i.e., $\tilde\alpha^{(1,2)}=\tilde\alpha^{(2,1)}$. To this end, we define $I_p=(0,\ldots,0,1,\ldots,1,0,\ldots,0)^t\in\mathbb{R}^{2h}$ such that $I_p^{(i)}=1$ if and only if $i=(p-1)h+1,\ldots,ph$, $p=1,2$. Then, we obtain:

$$\tilde\alpha^{(1,2)}=\sum_{i,k=1}^h\alpha_{i,k}^{(1,2)}=I_1^t\mathcal{A}I_2=\left(I_1^t\mathcal{A}I_2\right)^t=I_2^t\mathcal{A}I_1=\tilde\alpha^{(2,1)}.$$
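This symmetry argument is easy to check numerically for any symmetric matrix and the selection vectors $I_p$ (toy dimension, random data):

```python
# For symmetric A and 0/1 selection vectors, the block sums are symmetric.
import numpy as np

h = 3
rng = np.random.default_rng(2)
M = rng.standard_normal((2 * h, 2 * h))
A = (M + M.T) / 2                         # symmetric, like Sigma^{-1} C Sigma^{-1}

I1 = np.concatenate([np.ones(h), np.zeros(h)])
I2 = np.concatenate([np.zeros(h), np.ones(h)])
print(np.isclose(I1 @ A @ I2, I2 @ A @ I1))   # True: alpha~(1,2) = alpha~(2,1)
```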
We now apply the new assumption that $r^{(1,1)}(l)=r^{(2,2)}(l)$ for $l=0,\ldots,h-1$, and show $\tilde\alpha^{(1,1)}=\tilde\alpha^{(2,2)}$, using the symmetry features of the multivariate normal distribution discussed in (2.2) and (2.3) in [18], which yield $c_{i,j}=c_{2h-i+1,\,2h-j+1}$ for $i,j=1,\ldots,2h$.
We have to study:

$$\tilde\alpha^{(2,2)}=I_2^t\mathcal{A}I_2=I_2^t\Sigma_{2,h}^{-1}C\Sigma_{2,h}^{-1}I_2.$$

Since $\Sigma_{2,h}^{-1}=(g_{i,k})_{1\le i,k\le 2h}$ is a symmetric and persymmetric matrix, we have $g_{i,k}=g_{k,i}$ and $g_{i,k}=g_{2h-i+1,\,2h-k+1}$ for $i,k=1,\ldots,2h$. Then, we obtain:

$$I_2^t\Sigma_{2,h}^{-1}=\left(\sum_{i=h+1}^{2h}g_{i,1},\ldots,\sum_{i=h+1}^{2h}g_{i,2h}\right)=\left(\sum_{i=1}^hg_{i+h,1},\ldots,\sum_{i=1}^hg_{i+h,2h}\right)=\left(\sum_{i=1}^hg_{h-i+1,2h},\ldots,\sum_{i=1}^hg_{h-i+1,1}\right)=\left(\sum_{i=1}^hg_{i,2h},\ldots,\sum_{i=1}^hg_{i,1}\right)=\left(\sum_{i=1}^hg_{2h,i},\ldots,\sum_{i=1}^hg_{1,i}\right)=:\left(\tilde g_{2h},\ldots,\tilde g_1\right).$$
Note that:

$$\Sigma_{2,h}^{-1}I_1=\left(\sum_{i=1}^hg_{1,i},\ldots,\sum_{i=1}^hg_{2h,i}\right)^t=\left(\tilde g_1,\ldots,\tilde g_{2h}\right)^t.$$

Then, we arrive at:

$$\tilde\alpha^{(2,2)}=I_2^t\mathcal{A}I_2=I_2^t\Sigma_{2,h}^{-1}C\Sigma_{2,h}^{-1}I_2=\sum_{i,k=1}^{2h}\tilde g_{2h-i+1}\tilde g_{2h-k+1}c_{i,k}=\sum_{i,k=1}^{2h}\tilde g_{2h-i+1}\tilde g_{2h-k+1}c_{2h-i+1,\,2h-k+1}=\sum_{i,k=1}^{2h}\tilde g_i\tilde g_kc_{i,k}=I_1^t\Sigma_{2,h}^{-1}C\Sigma_{2,h}^{-1}I_1=\tilde\alpha^{(1,1)}.$$
Therefore, we have to deal with a special type of $2\times2$ matrix, since the matrix in formula (A27), namely $\begin{pmatrix}\tilde\alpha^{(1,1)}&\tilde\alpha^{(1,2)}\\\tilde\alpha^{(2,1)}&\tilde\alpha^{(2,2)}\end{pmatrix}$, has now reduced to $\begin{pmatrix}\tilde\alpha^{(1,1)}&\tilde\alpha^{(1,2)}\\\tilde\alpha^{(1,2)}&\tilde\alpha^{(1,1)}\end{pmatrix}$. Finally, we know that any real-valued symmetric matrix can be diagonalized, i.e., decomposed into an orthogonal matrix $V$ and a diagonal matrix $D$ whose entries are the eigenvalues; for details, see [34], p. 327.
We can explicitly give formulas for the entries of these matrices here:

$$V=\begin{pmatrix}-2^{-1/2}&2^{-1/2}\\2^{-1/2}&2^{-1/2}\end{pmatrix},\qquad D=\begin{pmatrix}\lambda_1&0\\0&\lambda_2\end{pmatrix}=\begin{pmatrix}\tilde\alpha^{(1,1)}-\tilde\alpha^{(1,2)}&0\\0&\tilde\alpha^{(1,1)}+\tilde\alpha^{(1,2)}\end{pmatrix},$$

such that:

$$VDV=\begin{pmatrix}\frac{\lambda_1+\lambda_2}{2}&\frac{\lambda_2-\lambda_1}{2}\\\frac{\lambda_2-\lambda_1}{2}&\frac{\lambda_1+\lambda_2}{2}\end{pmatrix}=\begin{pmatrix}\tilde\alpha^{(1,1)}&\tilde\alpha^{(1,2)}\\\tilde\alpha^{(1,2)}&\tilde\alpha^{(1,1)}\end{pmatrix}.$$
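A two-line numerical check of this explicit diagonalization (toy values for $\tilde\alpha^{(1,1)}$ and $\tilde\alpha^{(1,2)}$):

```python
# Verify that V D V reproduces the symmetric coefficient matrix.
import numpy as np

a11, a12 = 1.7, -0.4                                     # alpha~(1,1), alpha~(1,2)
s = 2 ** (-0.5)
V = np.array([[-s, s], [s, s]])                          # orthogonal and symmetric
D = np.diag([a11 - a12, a11 + a12])                      # lambda_1, lambda_2
print(np.allclose(V @ D @ V, [[a11, a12], [a12, a11]]))  # True
```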
Therefore, continuing with (A29), we now have the representation:

$$\begin{aligned}&\frac1n\sum_{j=1}^n\left(\left(Y_j^{(1)},Y_j^{(2)}\right)\begin{pmatrix}\tilde\alpha^{(1,1)}&\tilde\alpha^{(1,2)}\\\tilde\alpha^{(2,1)}&\tilde\alpha^{(2,2)}\end{pmatrix}\left(Y_j^{(1)},Y_j^{(2)}\right)^t-\mathbb{E}\left(\left(Y_j^{(1)},Y_j^{(2)}\right)\begin{pmatrix}\tilde\alpha^{(1,1)}&\tilde\alpha^{(1,2)}\\\tilde\alpha^{(2,1)}&\tilde\alpha^{(2,2)}\end{pmatrix}\left(Y_j^{(1)},Y_j^{(2)}\right)^t\right)\right)\\&\quad=\frac1n\sum_{j=1}^n\left(\left(Y_j^{(1)},Y_j^{(2)}\right)VDV\left(Y_j^{(1)},Y_j^{(2)}\right)^t-\mathbb{E}\left(\left(Y_j^{(1)},Y_j^{(2)}\right)VDV\left(Y_j^{(1)},Y_j^{(2)}\right)^t\right)\right)\\&\quad=\frac1n\sum_{j=1}^n\frac{\tilde\alpha^{(1,1)}-\tilde\alpha^{(1,2)}}{2}\left(\left(Y_j^{(2)}-Y_j^{(1)}\right)^2-\mathbb{E}\left(Y_j^{(2)}-Y_j^{(1)}\right)^2\right)+\frac1n\sum_{j=1}^n\frac{\tilde\alpha^{(1,1)}+\tilde\alpha^{(1,2)}}{2}\left(\left(Y_j^{(1)}+Y_j^{(2)}\right)^2-\mathbb{E}\left(Y_j^{(1)}+Y_j^{(2)}\right)^2\right)\\&\quad=\frac1n\sum_{j=1}^n\left(\tilde\alpha^{(1,1)}-\tilde\alpha^{(1,2)}\right)\left(1-r^{(1,2)}(0)\right)\left(\frac{\left(Y_j^{(2)}-Y_j^{(1)}\right)^2}{2-2r^{(1,2)}(0)}-1\right)+\frac1n\sum_{j=1}^n\left(\tilde\alpha^{(1,1)}+\tilde\alpha^{(1,2)}\right)\left(1+r^{(1,2)}(0)\right)\left(\frac{\left(Y_j^{(1)}+Y_j^{(2)}\right)^2}{2+2r^{(1,2)}(0)}-1\right)\\&\quad=\frac1n\left(\tilde\alpha^{(1,1)}-\tilde\alpha^{(1,2)}\right)\left(1-r^{(1,2)}(0)\right)\sum_{j=1}^nH_2\left(Y_j^*\right)+\frac1n\left(\tilde\alpha^{(1,1)}+\tilde\alpha^{(1,2)}\right)\left(1+r^{(1,2)}(0)\right)\sum_{j=1}^nH_2\left(Y_j^{**}\right),\end{aligned}$$

with $Y_j^*:=\dfrac{Y_j^{(2)}-Y_j^{(1)}}{\sqrt{2-2r^{(1,2)}(0)}}$ and $Y_j^{**}:=\dfrac{Y_j^{(1)}+Y_j^{(2)}}{\sqrt{2+2r^{(1,2)}(0)}}$.
Now, note that:

$$\mathbb{E}\left(Y_j^*Y_j^{**}\right)=\mathbb{E}\left(\frac{Y_j^{(2)}-Y_j^{(1)}}{\sqrt{2-2r^{(1,2)}(0)}}\cdot\frac{Y_j^{(1)}+Y_j^{(2)}}{\sqrt{2+2r^{(1,2)}(0)}}\right)=0.$$
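A short Monte Carlo sketch confirming that the rotated pair is standardized and uncorrelated (toy correlation value; names ours):

```python
# Check: Y* and Y** have unit variance and are (empirically) uncorrelated.
import numpy as np

rng = np.random.default_rng(3)
r = 0.6                                            # r^{(1,2)}(0)
n = 100_000
Y1 = rng.standard_normal(n)
Y2 = r * Y1 + np.sqrt(1 - r ** 2) * rng.standard_normal(n)

Ys = (Y2 - Y1) / np.sqrt(2 - 2 * r)                # Y*
Yss = (Y1 + Y2) / np.sqrt(2 + 2 * r)               # Y**
print(np.var(Ys), np.var(Yss), np.mean(Ys * Yss))  # approx 1, 1, 0
```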
Therefore, we have created a bivariate long-range dependent Gaussian process, since:

$$\begin{pmatrix}-\left(2-2r^{(1,2)}(0)\right)^{-1/2}&\left(2-2r^{(1,2)}(0)\right)^{-1/2}\\\left(2+2r^{(1,2)}(0)\right)^{-1/2}&\left(2+2r^{(1,2)}(0)\right)^{-1/2}\end{pmatrix}\left(Y_j^{(1)},Y_j^{(2)}\right)^t=\left(Y_j^*,Y_j^{**}\right)^t\sim N\left(0,I_2\right)$$
with cross-covariance function:

$$r_*^{(1,2)}(k):=\mathbb{E}\left(Y_j^*Y_{j+k}^{**}\right)=\mathbb{E}\left(\frac{Y_j^{(2)}-Y_j^{(1)}}{\sqrt{2-2r^{(1,2)}(0)}}\cdot\frac{Y_{j+k}^{(1)}+Y_{j+k}^{(2)}}{\sqrt{2+2r^{(1,2)}(0)}}\right)=\frac{r^{(2,1)}(k)+r^{(2,2)}(k)-r^{(1,1)}(k)-r^{(1,2)}(k)}{2\sqrt{\left(1-r^{(1,2)}(0)\right)\left(1+r^{(1,2)}(0)\right)}}\sim\frac{L_{2,2}+L_{2,1}-L_{1,2}-L_{1,1}}{2\sqrt{\left(1-r^{(1,2)}(0)\right)\left(1+r^{(1,2)}(0)\right)}}\,k^{2d^*-1}.$$
Note that the covariance functions have the following asymptotic behavior:

$$r_*^{(1,1)}(k):=\mathbb{E}\left(Y_j^*Y_{j+k}^*\right)=\mathbb{E}\left(\frac{Y_j^{(2)}-Y_j^{(1)}}{\sqrt{2-2r^{(1,2)}(0)}}\cdot\frac{Y_{j+k}^{(2)}-Y_{j+k}^{(1)}}{\sqrt{2-2r^{(1,2)}(0)}}\right)=\frac{r^{(2,2)}(k)-r^{(2,1)}(k)-r^{(1,2)}(k)+r^{(1,1)}(k)}{2-2r^{(1,2)}(0)}\sim\frac{L_{2,2}-L_{2,1}-L_{1,2}+L_{1,1}}{2-2r^{(1,2)}(0)}\,k^{2d^*-1}=:L_{1,1}^*\,k^{2d^*-1}$$

and analogously:

$$r_*^{(2,2)}(k):=\mathbb{E}\left(Y_j^{**}Y_{j+k}^{**}\right)\sim\frac{L_{2,2}+L_{2,1}+L_{1,2}+L_{1,1}}{2+2r^{(1,2)}(0)}\,k^{2d^*-1}=:L_{2,2}^*\,k^{2d^*-1}.$$
We can now apply the result of [14], Theorem 6, since we have created a bivariate Gaussian process with independent entries for fixed $j$. Note that for the function we apply here, namely $\tilde f\left(Y_j^*,Y_j^{**}\right)=H_2\left(Y_j^*\right)+H_2\left(Y_j^{**}\right)$, the weighting factors in [14], Theorem 6, reduce to $e_{1,1}=e_{2,2}=1$ and $e_{1,2}=e_{2,1}=0$. These weighting factors fit into the result in [14], (3.6) and (3.7), which even yields the joint convergence of the vector of both univariate summands, $\left(H_2\left(Y_j^*\right),H_2\left(Y_j^{**}\right)\right)$, suitably normalized, to a vector of two (dependent) Rosenblatt random variables. Since the long-range dependence property in [13], Definition 2.1, is more specific than in [14], p. 2259, (3.1) (see the considerations in (8)), we are able to scale the variances of each Rosenblatt random variable to 1 and to give the covariance between them, by using the normalization given in [15], Theorem 4.3. We obtain:
$$\begin{aligned}&n^{-2d^*}C_2^{-1/2}\left(\tilde\alpha^{(1,1)}-\tilde\alpha^{(1,2)}\right)\left(1-r^{(1,2)}(0)\right)\sum_{j=1}^nH_2\left(Y_j^*\right)+n^{-2d^*}C_2^{-1/2}\left(\tilde\alpha^{(1,1)}+\tilde\alpha^{(1,2)}\right)\left(1+r^{(1,2)}(0)\right)\sum_{j=1}^nH_2\left(Y_j^{**}\right)\\&\quad\xrightarrow{\mathcal D}\ \left(\tilde\alpha^{(1,1)}-\tilde\alpha^{(1,2)}\right)\left(1-r^{(1,2)}(0)\right)L_{1,1}^*\,Z_{2,d^*+1/2}^*(1)+\left(\tilde\alpha^{(1,1)}+\tilde\alpha^{(1,2)}\right)\left(1+r^{(1,2)}(0)\right)L_{2,2}^*\,Z_{2,d^*+1/2}^{**}(1)\\&\quad=\left(\tilde\alpha^{(1,1)}-\tilde\alpha^{(1,2)}\right)\frac{L_{2,2}-L_{2,1}-L_{1,2}+L_{1,1}}{2}\,Z_{2,d^*+1/2}^*(1)+\left(\tilde\alpha^{(1,1)}+\tilde\alpha^{(1,2)}\right)\frac{L_{2,2}+L_{2,1}+L_{1,2}+L_{1,1}}{2}\,Z_{2,d^*+1/2}^{**}(1),\end{aligned}$$

with $C_2:=\frac{1}{2d^*\left(4d^*-1\right)}$ being the same normalizing factor as in Theorem 2.1.
We observe that $Z_{2,d^*+1/2}^*(1)$ and $Z_{2,d^*+1/2}^{**}(1)$ are both standard Rosenblatt random variables. Following Corollary A1, their covariance is given by

$$\operatorname{Cov}\left(Z_{2,d^*+1/2}^*(1),Z_{2,d^*+1/2}^{**}(1)\right)=\frac{\left(L_{1,2}^*\right)^2+\left(L_{2,1}^*\right)^2}{2L_{1,1}^*L_{2,2}^*}=\frac{2\left(L_{2,2}-L_{1,1}\right)^2}{4\left(1-r^{(1,2)}(0)\right)\left(1+r^{(1,2)}(0)\right)}\left(2L_{1,1}^*L_{2,2}^*\right)^{-1}=\frac{\left(L_{2,2}-L_{1,1}\right)^2}{\left(L_{1,1}+L_{2,2}\right)^2-\left(L_{1,2}+L_{2,1}\right)^2}.$$

Note that $\left(L_{1,1}+L_{2,2}\right)^2-\left(L_{1,2}+L_{2,1}\right)^2\geq0$ is fulfilled, since $L_{1,1}+L_{2,2}\geq L_{1,2}+L_{2,1}$. □
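The closing inequality is the positive semidefiniteness of the matrix of asymptotic constants evaluated at $e_1-e_2$, since $(e_1-e_2)^tL(e_1-e_2)=L_{1,1}+L_{2,2}-L_{1,2}-L_{2,1}\geq0$; a one-line numerical illustration with a generic positive semidefinite matrix:

```python
# For any psd L: (L11 + L22)^2 - (L12 + L21)^2 >= 0.
import numpy as np

rng = np.random.default_rng(4)
M = rng.standard_normal((2, 2))
L = M @ M.T                                    # generic psd matrix of constants
print((L[0, 0] + L[1, 1]) ** 2 - (L[0, 1] + L[1, 0]) ** 2 >= 0)   # True
```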

References

1. Bandt, C.; Pompe, B. Permutation entropy: A natural complexity measure for time series. Phys. Rev. Lett. 2002, 88, 174102.
2. Keller, K.; Sinn, M. Kolmogorov–Sinai entropy from the ordinal viewpoint. Phys. D Nonlinear Phenom. 2010, 239, 997–1000.
3. Gutjahr, T.; Keller, K. Ordinal Pattern Based Entropies and the Kolmogorov–Sinai Entropy: An Update. Entropy 2020, 22, 63.
4. Keller, K.; Sinn, M. Ordinal analysis of time series. Phys. A Stat. Mech. Its Appl. 2005, 356, 114–120.
5. Sinn, M.; Keller, K. Estimation of ordinal pattern probabilities in Gaussian processes with stationary increments. Comput. Stat. Data Anal. 2011, 55, 1781–1790.
6. Matilla-García, M.; Ruiz Marín, M. A non-parametric independence test using permutation entropy. J. Econom. 2008, 144, 139–155.
7. Caballero-Pintado, M.V.; Matilla-García, M.; Marín, M.R. Symbolic correlation integral. Econom. Rev. 2019, 38, 533–556.
8. Bandt, C.; Shiha, F. Order patterns in time series. J. Time Ser. Anal. 2007, 28, 646–665.
9. Unakafov, A.M.; Keller, K. Change-Point Detection Using the Conditional Entropy of Ordinal Patterns. Entropy 2018, 20, 709.
10. Schnurr, A. An ordinal pattern approach to detect and to model leverage effects and dependence structures between financial time series. Stat. Pap. 2014, 55, 919–931.
11. Schnurr, A.; Dehling, H. Testing for Structural Breaks via Ordinal Pattern Dependence. J. Am. Stat. Assoc. 2017, 112, 706–720.
12. López-García, M.N.; Sánchez-Granero, M.A.; Trinidad-Segovia, J.E.; Puertas, A.M.; Nieves, F.J.D.l. Volatility Co-Movement in Stock Markets. Mathematics 2021, 9, 598.
13. Kechagias, S.; Pipiras, V. Definitions and representations of multivariate long-range dependent time series. J. Time Ser. Anal. 2015, 36, 1–25.
14. Arcones, M.A. Limit theorems for nonlinear functionals of a stationary Gaussian sequence of vectors. Ann. Probab. 1994, 22, 2242–2274.
15. Beran, J.; Feng, Y.; Ghosh, S.; Kulik, R. Long-Memory Processes; Springer: Berlin/Heidelberg, Germany, 2013.
16. Pipiras, V.; Taqqu, M.S. Long-Range Dependence and Self-Similarity; Cambridge University Press: Cambridge, UK, 2017; Volume 45.
17. Major, P. Non-central limit theorem for non-linear functionals of vector valued Gaussian stationary random fields. arXiv 2019, arXiv:1901.04086.
18. Nüßgen, I. Ordinal Pattern Analysis: Limit Theorems for Multivariate Long-Range Dependent Gaussian Time Series and a Comparison to Multivariate Dependence Measures. Ph.D. Thesis, University of Siegen, Siegen, Germany, 2021.
19. Caballero-Pintado, M.V.; Matilla-García, M.; Rodríguez, J.M.; Ruiz Marín, M. Two Tests for Dependence (of Unknown Form) between Time Series. Entropy 2019, 21, 878.
20. Betken, A.; Buchsteiner, J.; Dehling, H.; Münker, I.; Schnurr, A.; Woerner, J.H. Ordinal Patterns in Long-Range Dependent Time Series. arXiv 2019, arXiv:1905.11033.
21. Veillette, M.S.; Taqqu, M.S. Properties and numerical evaluation of the Rosenblatt distribution. Bernoulli 2013, 19, 982–1005.
22. Abrahamson, I. Orthant probabilities for the quadrivariate normal distribution. Ann. Math. Stat. 1964, 35, 1685–1703.
23. Hurst, H.E. Long-term storage capacity of reservoirs. Trans. Amer. Soc. Civil Eng. 1951, 116, 770–808.
24. Karlekar, M.; Gupta, A. Stochastic modeling of EEG rhythms with fractional Gaussian noise. In Proceedings of the 2014 22nd European Signal Processing Conference (EUSIPCO), Lisbon, Portugal, 1–5 September 2014; pp. 2520–2524.
25. Shu, Y.; Jin, Z.; Zhang, L.; Wang, L.; Yang, O.W. Traffic prediction using FARIMA models. In Proceedings of the 1999 IEEE International Conference on Communications (Cat. No.99CH36311), Vancouver, BC, Canada, 6–10 June 1999; Volume 2, pp. 891–895.
26. Keller, K.; Lauffer, H.; Sinn, M. Ordinal analysis of EEG time series. Chaos Complex. Lett. 2007, 2, 247–258.
27. Taqqu, M.S. Weak convergence to fractional Brownian motion and to the Rosenblatt process. Probab. Theory Relat. Fields 1975, 31, 287–302.
28. Düker, M.C. Limit theorems in the context of multivariate long-range dependence. Stoch. Process. Appl. 2020, 130, 5394–5425.
29. Golub, G.H.; Van Loan, C.F. Matrix Computations; JHU Press: Baltimore, MD, USA, 2013; Volume 3.
30. Brockwell, P.J.; Davis, R.A. Time Series: Theory and Methods; Springer-Verlag: New York, NY, USA, 1991.
31. Düker, M.C. Limit Theorems for Multivariate Long-Range Dependent Processes. arXiv 2017, arXiv:1704.08609v3.
32. Magnus, J.R.; Neudecker, H. The commutation matrix: Some properties and applications. Ann. Stat. 1979, 7, 381–394.
33. Van der Vaart, A.W. Asymptotic Statistics; Cambridge University Press: Cambridge, UK, 2000; Volume 3.
34. Beutelspacher, A. Lineare Algebra; Springer: Berlin/Heidelberg, Germany, 2003.
Figure 1. Example of the extraction of an ordinal pattern of a given data set.
Figure 2. Ordinal patterns for $h=2$.
Figure 3. Space and time reversion of the pattern $\pi=(1,3,2,0)$.
Figure 4. Illustration of estimation of ordinal pattern dependence.
Figure 5. Plots of 500 data points of one path of two dependent fractional Gaussian noise processes (left) and the paths of the corresponding fractional Brownian motions (right) for different Hurst parameters: $H=0.7$ (top), $H=0.8$ (middle), $H=0.9$ (bottom).
Figure 6. Histogram, kernel density estimation and Q–Q plot with respect to the normal distribution ($H=0.7$) or to the Rosenblatt distribution of $\hat p_n-p$ with $h=2$ for different Hurst parameters: $H=0.7$ (top); $H=0.8$ (middle); $H=0.9$ (bottom).
Figure 7. Histogram, kernel density estimation and Q–Q plot with respect to the Rosenblatt distribution of $\frac1n\sum_{j=1}^nH_2\left(Y_j^*\right)$ for different Hurst parameters: $H=0.8$ (top); $H=0.9$ (bottom).