
# Factorization of a Spectral Density with Smooth Eigenvalues of a Multidimensional Stationary Time Series

by Tamás Szabados
Department of Mathematics, Budapest University of Technology and Economics, 1111 Budapest, Hungary
Econometrics 2023, 11(2), 14; https://doi.org/10.3390/econometrics11020014
Received: 7 March 2023 / Revised: 27 May 2023 / Accepted: 29 May 2023 / Published: 31 May 2023
(This article belongs to the Special Issue High-Dimensional Time Series in Macroeconomics and Finance)

## Abstract

The aim of this paper is to give a multidimensional version of the classical one-dimensional case of smooth spectral density. A spectral density with smooth eigenvalues and $H^\infty$ eigenvectors gives an explicit method to factorize the spectral density and compute the Wold representation of a weakly stationary time series. A formula, similar to the Kolmogorov–Szegő formula, is given for the covariance matrix of the innovations. These results are important for obtaining the best linear predictions of the time series. The results are applicable when the rank of the process is smaller than the dimension of the process, which occurs frequently in many current applications, including econometrics.
MSC:
62M10; 60G10; 60G12

## 1. Introduction

Let $X_t = (X_t^1, \ldots, X_t^d)$, $t \in \mathbb{Z}$, be a $d$-dimensional weakly stationary time series, where each $X_t^j$ is a complex-valued random variable on the same probability space $(\Omega, \mathcal{F}, P)$. It is a second-order, and in this sense, translation-invariant process:
$$\mathbb{E} X_t = \mu \in \mathbb{C}^d, \qquad \mathbb{E}\big((X_{t+h} - \mu)(X_t - \mu)^*\big) = C(h) \in \mathbb{C}^{d \times d}, \quad \forall\, t, h \in \mathbb{Z},$$
where $C(h)$, $h \in \mathbb{Z}$, is the non-negative definite covariance matrix function of the process. (Here and below, if $A$ is a matrix, then $A^*$ denotes its adjoint, i.e., its complex conjugate transpose. Vectors like $X_t$ are written as column matrices, so $X_t^*$ is a row matrix. All the results remain valid for real-valued time series with no change; then $A^*$ denotes the matrix transpose.) Without loss of generality, from now on it is assumed that $\mu = 0$.
Thus, the random variables considered will be $d$-dimensional, square integrable, zero expectation random complex vectors whose components belong to the Hilbert space $L^2(\Omega, \mathcal{F}, P)$. The orthogonality of the random vectors $X$ and $Y$ is defined by the relationship $X \perp Y \Leftrightarrow \operatorname{Cov}(X, Y) = \mathbb{E}(X Y^*) = O$.
The past of $\{X_t\}$ until time $n \in \mathbb{Z}$ is the closed linear span in $L^2(\Omega, \mathcal{F}, P)$ of the past and present values of the components of the process:
$$H_n^- = H_n^-(X) := \overline{\operatorname{span}}\{X_t^\ell : \ell = 1, \ldots, d;\ t \le n\}.$$
The remote past of $\{X_t\}$ is $H_{-\infty} := \bigcap_{n \in \mathbb{Z}} H_n^-$. The process $\{X_t\}$ is called regular if $H_{-\infty} = \{0\}$, and it is called singular if $H_{-\infty} = H(X) := \overline{\operatorname{span}}\{X_t : t \in \mathbb{Z}\}$. Of course, there is a range of cases between these two extremes.
Singular processes are also called deterministic (see, e.g., Brockwell et al. 1991), because based on the past $H_0^-$, the future values $X_1, X_2, \ldots$ can be predicted with zero mean square error. On the other hand, regular processes are also called purely non-deterministic, since their behavior is completely driven by random innovations. Consequently, knowing $H_0^-$, the future values $X_1, X_2, \ldots$ can be predicted only with positive mean square errors $\sigma_1^2, \sigma_2^2, \ldots$, where $\lim_{t \to \infty} \sigma_t^2 = \|X_0\|^2 := \mathbb{E}(X_0 X_0^*)$. This shows why studying regular time series is a primary target in both theory and applications. The Wold decomposition proves that any non-singular process can be decomposed into an orthogonal sum of a nonzero regular process and a singular process. This also supports why it is important to study regular processes.
The Wold decomposition implies (see, e.g., the classical references Rozanov 1967 and Wiener and Masani 1957) that $\{X_t\}$ is regular if and only if it can be written as a causal infinite moving average (a Wold representation)
$$X_t = \sum_{j=0}^{\infty} b(j)\, \xi_{t-j}, \quad t \in \mathbb{Z}, \quad b(j) \in \mathbb{C}^{d \times r}, \tag{1}$$
where $\{\xi_t\}_{t \in \mathbb{Z}}$ is an $r$-dimensional $(r \le d)$ orthonormal white noise sequence: $\mathbb{E}\xi_t = 0$ and $\mathbb{E}(\xi_s \xi_t^*) = \delta_{st} I_r$, where $I_r$ is the $r \times r$ identity matrix. If the white noise process $\{\xi_t\}$ in (1) is given by the Wold representation, then it is unique up to multiplication by a constant unitary matrix; therefore, it is called a fundamental process of the regular time series. In this case the pasts of $\{X_t\}$ and $\{\xi_t\}$ are identical: $H_n^-(X) = H_n^-(\xi)$ for any $n \in \mathbb{Z}$.
An important use of the Wold representation is that the best linear $h$-step ahead prediction $\hat{X}_h$ can be given in terms of it. If the present time is $0$, then the orthogonal projection of a future random vector $X_h$ ($h > 0$) onto the Hilbert subspace $H_0^-(X)$ representing past and present is
$$\hat{X}_h = \sum_{j=h}^{\infty} b(j)\, \xi_{h-j} = \sum_{k=-\infty}^{0} b(h-k)\, \xi_k. \tag{2}$$
Then $\hat{X}_h$ gives the best linear prediction of $X_h$, with minimal least square error.
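For intuition, the projection formula can be checked numerically in the simplest scalar case. The following sketch uses an AR(1)-type Wold sequence $b(j) = \sigma a^j$ (an illustrative choice, not taken from the paper) and verifies that dropping the first $h$ Wold coefficients reduces the $h$-step prediction to $a^h X_0$:

```python
import numpy as np

rng = np.random.default_rng(2)
a, sigma, h, N = 0.6, 1.0, 3, 5000
xi = rng.standard_normal(N)        # truncated past ξ_0, ξ_{-1}, ..., ξ_{-(N-1)}
b = sigma * a ** np.arange(N)      # Wold coefficients b(j) = σ a^j of an AR(1)

X0 = b @ xi                        # X_0 = Σ_j b(j) ξ_{-j}
Xhat = b[h:] @ xi[: N - h]         # X̂_h = Σ_{j≥h} b(j) ξ_{h-j}
assert np.isclose(Xhat, a ** h * X0)   # for AR(1), the h-step prediction is a^h X_0
```

The truncation at $N$ terms is harmless here because $a^N$ is negligible.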
An alternative way to write the Wold representation is
$$X_t = \sum_{j=0}^{\infty} a(j)\, \eta_{t-j}, \quad t \in \mathbb{Z}, \quad a(j) \in \mathbb{C}^{d \times d}, \quad a(0) = I_d, \tag{3}$$
where $\{\eta_t\}_{t \in \mathbb{Z}}$ is the $d$-dimensional white noise process of innovations:
$$\eta_t := X_t - \operatorname{Proj}_{H_{t-1}^-} X_t, \quad t \in \mathbb{Z}, \tag{4}$$
where $\operatorname{Proj}_{H_{t-1}^-} X_t$ denotes the orthogonal projection of the random vector $X_t$ onto the Hilbert subspace $H_{t-1}^-$. Furthermore, $\mathbb{E}\eta_t = 0$ and $\mathbb{E}(\eta_s \eta_t^*) = \delta_{st} \Sigma$, where $\Sigma$ is a $d \times d$ non-negative definite matrix of rank $r$, $1 \le r \le d$: the covariance matrix of the best linear one-step ahead prediction error.
It is also classical that any weakly stationary process has a non-negative definite spectral measure matrix $dF$ on $[-\pi, \pi]$ such that
$$C(h) = \int_{-\pi}^{\pi} e^{ih\omega}\, dF(\omega), \quad h \in \mathbb{Z}.$$
Then $\{X_t\}$ is regular (see again, e.g., Rozanov 1967 and Wiener and Masani 1957) if and only if $dF = f\, d\omega$ is absolutely continuous, the spectral density $f$ has a.e. constant rank $r$, and it can be factored in the form
$$f(\omega) = \frac{1}{2\pi}\, \phi(\omega)\, \phi^*(\omega), \quad \phi(\omega) = [\phi_{k\ell}(\omega)]_{d \times r}, \quad \text{for a.e. } \omega \in [-\pi, \pi], \tag{5}$$
where
$$\phi(\omega) = \sum_{j=0}^{\infty} \tilde{b}(j)\, e^{-ij\omega}, \qquad \|\phi\|^2 = \sum_{j=0}^{\infty} \|\tilde{b}(j)\|^2 < \infty, \tag{6}$$
and $\|\cdot\|$ denotes the spectral norm. Here the sequence of coefficients $\{\tilde{b}(j)\}$ is not necessarily the same as in the Wold decomposition. Furthermore,
$$\phi(\omega) = \Phi(e^{-i\omega}), \quad \omega \in (-\pi, \pi], \qquad \Phi(z) = \sum_{j=0}^{\infty} \tilde{b}(j)\, z^j, \quad z \in D, \tag{7}$$
so the entries of the matrix function $\Phi(z) = [\Phi_{jk}(z)]_{d \times r}$ are analytic functions in the open unit disc $D$ and belong to the class $L^2(T)$ on the unit circle $T$; consequently, they belong to the Hardy space $H^2$. This is written as $\Phi \in H^2_{d \times r}$ or briefly $\Phi \in H^2$.
Recall that the Hardy space $H^p$, $0 < p \le \infty$, denotes the space of all functions $g$ analytic in $D$ whose $L^p$ norms over all circles $\{z \in \mathbb{C} : |z| = r\}$, $0 < r < 1$, are bounded; see, e.g., (Rudin 2006, Definition 17.7). If $p \ge 1$, then equivalently, $H^p$ is the Banach space of all functions $g \in L^p(T)$ such that
$$g(e^{i\omega}) = \sum_{n=0}^{\infty} a_n\, e^{in\omega}, \quad \omega \in [-\pi, \pi],$$
so the Fourier series of $g(e^{i\omega})$ is one-sided: $a_n = 0$ for $n < 0$; see, e.g., (Fuhrmann 2014, sct. II.12). Notice that in formulas (6) and (7) there is a negative sign in the exponent; this is a matter of convention that I am going to use in the sequel too.
An especially important special case of Hardy spaces is $H^2$, which is a Hilbert space and which, by the Fourier transform, is isometrically isomorphic to the $\ell^2$ space of sequences $\{a_0, a_1, a_2, \ldots\}$, with norm square
$$\frac{1}{2\pi} \int_{-\pi}^{\pi} |g(e^{i\omega})|^2\, d\omega = \sum_{n=0}^{\infty} |a_n|^2.$$
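This Parseval-type identity is easy to test numerically: sample a one-sided trigonometric polynomial on a uniform grid over one period and compare the mean of $|g|^2$ with $\sum_n |a_n|^2$ (a toy sketch; the coefficients below are arbitrary):

```python
import numpy as np

a = np.array([1.0, 0.5, 0.25, 0.125])       # one-sided Fourier coefficients a_0, ..., a_3
N = 1024
omega = 2 * np.pi * np.arange(N) / N        # uniform grid over one period
g = sum(an * np.exp(1j * n * omega) for n, an in enumerate(a))

# (1/2π) ∫ |g(e^{iω})|² dω  ≈  grid mean of |g|²  =  Σ_n |a_n|²
assert np.isclose(np.mean(np.abs(g) ** 2), np.sum(a ** 2))
```

On an equispaced full-period grid the cross terms average out exactly, so the identity holds to machine precision.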
For a one-dimensional time series $\{X_t\}$ ($d = 1$) there exists a rather simple sufficient and necessary condition of regularity, given by Kolmogorov (1941):
(1)
$\{X_t\}$ has an absolutely continuous spectral measure with density $f$;
(2)
$\log f \in L^1$, that is, $\int_{-\pi}^{\pi} \log f(\omega)\, d\omega > -\infty$.
Then the Kolmogorov–Szegő formula also holds:
$$\sigma^2 = 2\pi \exp \int_{-\pi}^{\pi} \log f(\omega)\, \frac{d\omega}{2\pi},$$
where $\sigma^2$ is the variance of the innovations $\eta_t := X_t - \operatorname{Proj}_{H_{t-1}^-} X_t$, that is, the variance of the one-step ahead prediction error.
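As a numerical illustration (not from the paper), take the AR(1) spectral density $f(\omega) = \frac{\sigma^2}{2\pi}\,|1 - a e^{-i\omega}|^{-2}$, whose one-step innovation variance is $\sigma^2$ by construction; the Kolmogorov–Szegő formula recovers it:

```python
import numpy as np

a, sigma2 = 0.6, 2.0                           # AR(1) coefficient and innovation variance (toy values)
N = 4096
omega = -np.pi + 2 * np.pi * np.arange(N) / N  # periodic grid on [-π, π)
f = sigma2 / (2 * np.pi) / np.abs(1 - a * np.exp(-1j * omega)) ** 2

# σ² = 2π exp( (1/2π) ∫ log f dω ); the grid mean approximates the normalized integral
pred_var = 2 * np.pi * np.exp(np.mean(np.log(f)))
assert np.isclose(pred_var, sigma2)
```

The integral of $\log|1 - a e^{-i\omega}|^2$ vanishes for $|a| < 1$, which is why the formula returns exactly $\sigma^2$.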
For a multidimensional time series $\{X_t\}$ of full rank, that is, when $f$ has a.e. full rank $r = d$, and so the innovations $\eta_t$ defined by (4) have full rank $d$, there exists a similar simple sufficient and necessary condition of regularity; see Rozanov (1967) and Wiener and Masani (1957):
(1)
$\{X_t\}$ has an absolutely continuous spectral measure matrix $dF$ with density matrix $f$;
(2)
$\log \det f \in L^1$, that is, $\int_{-\pi}^{\pi} \log \det f(\omega)\, d\omega > -\infty$.
Then the $d$-dimensional Kolmogorov–Szegő formula also holds:
$$\det \Sigma = (2\pi)^d \exp \int_{-\pi}^{\pi} \log \det f(\omega)\, \frac{d\omega}{2\pi}, \tag{8}$$
where $\Sigma$ is the covariance matrix of the innovations $\eta_t$ defined by (4).
Many current research works investigate high-dimensional time series with low rank; see, e.g., (Anderson et al. 2022; Basu et al. 2019; Cao et al. 2023; Lippi et al. 2023; Wang et al. 2022). Applications can be found, e.g., in macroeconomic models, finance, and biological and social networks.
Unfortunately, the generic case of regular time series, when the rank of the process can be smaller than the dimension, is rather complicated. To the best of my knowledge, in that case (Rozanov 1967, Theorem 8.1) gives a necessary and sufficient condition of regularity. Namely, a $d$-dimensional stationary time series $\{X_t\}$ is regular and of rank $r$, $1 \le r \le d$, if and only if each of the following conditions holds:
(1)
It has an absolutely continuous spectral measure matrix $dF$ with density matrix $f(\omega)$ which has rank $r$ for a.e. $\omega \in [-\pi, \pi]$.
(2)
The density matrix $f(\omega)$ has a principal minor $M(\omega) = \det [f_{i_p j_q}(\omega)]_{p,q=1}^r$ which is nonzero a.e. and
$$\int_{-\pi}^{\pi} \log M(\omega)\, d\omega > -\infty.$$
(3)
Let $M_{k\ell}(\omega)$ denote the determinant obtained from $M(\omega)$ by replacing its $\ell$th row by the row $[f_{k i_p}(\omega)]_{p=1}^r$. Then the functions $\gamma_{k\ell}(e^{-i\omega}) = M_{k\ell}(\omega)/M(\omega)$ are boundary values of functions of the Nevanlinna class $N$.
It is immediately remarked in the cited reference that “unfortunately, there is no general method determining, from the boundary value $\gamma(e^{-i\omega})$ of a function $\gamma(z)$, whether it belongs to the class $N$”.
Recall that the Nevanlinna class $N$ consists of all functions $g$ analytic in the open unit disc $D$ that can be written as a ratio $g = g_1/g_2$, where $g_1 \in H^p$ and $g_2 \in H^q$ for some $p, q > 0$, with $H^p$ and $H^q$ denoting Hardy spaces; see, e.g., (Nikolski 2019, Definition 3.3.1).
The aim of this paper is to extend, from the one-dimensional case to the multidimensional one, a well-known sufficient condition for regularity, together with a method for finding an $H^2$ spectral factor and the covariance matrix $\Sigma$ in the case of a smooth spectral density.

## 2. Generic Regular Processes

To find an $H^2$ spectral factor, if possible, a simple idea is to use a spectral decomposition of the spectral density matrix. (Take care that here we use the word ‘spectral’ in two different meanings: on the one hand, the spectral density of a time series in terms of a Fourier spectrum; on the other hand, the spectral decomposition of a matrix in terms of eigenvalues and eigenvectors.)
So let $\{X_t\}$ be a $d$-dimensional stationary time series and assume that its spectral measure matrix $dF$ is absolutely continuous with density matrix $f(\omega)$ which has rank $r$, $1 \le r \le d$, for a.e. $\omega \in [-\pi, \pi]$. Take the parsimonious spectral decomposition of the self-adjoint, non-negative definite matrix $f(\omega)$:
$$f(\omega) = \sum_{j=1}^{r} \lambda_j(\omega)\, u_j(\omega)\, u_j^*(\omega) = \tilde{U}(\omega)\, \Lambda_r(\omega)\, \tilde{U}^*(\omega), \tag{9}$$
where
$$\Lambda_r(\omega) = \operatorname{diag}[\lambda_1(\omega), \ldots, \lambda_r(\omega)], \qquad \lambda_1(\omega) \ge \cdots \ge \lambda_r(\omega) > 0, \tag{10}$$
for a.e. $\omega \in [-\pi, \pi]$, is the diagonal matrix of nonzero eigenvalues of $f(\omega)$ and
$$\tilde{U}(\omega) = [u_1(\omega), \ldots, u_r(\omega)] \in \mathbb{C}^{d \times r}$$
is a sub-unitary matrix of corresponding right eigenvectors, which is not unique even if all eigenvalues are distinct. Then, still, we have
$$\Lambda_r(\omega) = \tilde{U}^*(\omega)\, f(\omega)\, \tilde{U}(\omega).$$
The matrix function $\Lambda_r(\omega)$ is self-adjoint and positive definite, and
$$\operatorname{tr}(\Lambda_r(\omega)) = \operatorname{tr}(f(\omega)),$$
where $f(\omega)$ is the density function of a finite spectral measure. This shows that the integral of $\operatorname{tr}(\Lambda_r(\omega))$ over $[-\pi, \pi]$ is finite. Thus $\Lambda_r(\omega)$ can be considered the spectral density function of a full rank regular process. So it can be factored; in fact, we may take a miniphase $H^2$ spectral factor $D_r(\omega)$ of it:
$$\Lambda_r(\omega) = \frac{1}{2\pi}\, D_r(\omega)\, D_r^*(\omega),$$
where $D_r(\omega)$ is a diagonal matrix.
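At a fixed frequency, the parsimonious decomposition and the identity $\Lambda_r = \tilde{U}^* f \tilde{U}$ can be reproduced with a standard eigensolver. The rank-$r$ matrix below is a synthetic stand-in for $f(\omega)$ at one $\omega$ (an illustrative construction, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 4, 2
# A self-adjoint, non-negative definite matrix of rank r, playing the role of f(ω)
B = rng.standard_normal((d, r)) + 1j * rng.standard_normal((d, r))
f = B @ B.conj().T

lam, U = np.linalg.eigh(f)               # ascending eigenvalues
lam, U = lam[::-1], U[:, ::-1]           # reorder: λ_1 ≥ ... ≥ λ_d
Lam_r, U_r = np.diag(lam[:r]), U[:, :r]  # parsimonious (rank-r) decomposition

assert np.allclose(U_r @ Lam_r @ U_r.conj().T, f)   # f = Ũ Λ_r Ũ*
assert np.allclose(U_r.conj().T @ f @ U_r, Lam_r)   # Λ_r = Ũ* f Ũ
assert np.isclose(np.trace(Lam_r), np.trace(f).real)  # tr Λ_r = tr f
```

The $d - r$ discarded eigenvalues are zero up to rounding, so the rank-$r$ truncation loses nothing.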
Then a simple way to factorize $f$ is to choose
$$\phi(\omega) = \tilde{U}(\omega)\, D_r(\omega)\, A(\omega) = \tilde{U}(\omega)\, A(\omega)\, D_r(\omega) = \tilde{U}_A(\omega)\, D_r(\omega), \tag{13}$$
where $A(\omega) = \operatorname{diag}[a_1(\omega), \ldots, a_r(\omega)]$, each $a_k(\omega)$ being a measurable function on $[-\pi, \pi]$ such that $|a_k(\omega)| = 1$ for every $\omega$, but otherwise arbitrary, and $\tilde{U}_A(\omega)$ still denotes a sub-unitary matrix of eigenvectors of $f$, in the same order as that of the eigenvalues.
To the best of my knowledge, it is not known whether for every regular time series $\{X_t\}$ there exists a matrix-valued function $A(\omega)$ such that $\phi(\omega)$ defined by (13) has a Fourier series with only non-negative powers of $e^{-i\omega}$. Equivalently, does there exist an $H^2$ analytic matrix function $\Phi(z)$ whose boundary value is the above spectral factor $\phi(\omega)$ with some $A(\omega)$, according to Formulas (5)–(7)?
Thus, the role of each $a_k(\omega)$, $k = 1, \ldots, r$, is to modify the corresponding eigenvector $u_k(\omega)$ so that $a_k(\omega)\, u_k(\omega)$ has a Fourier series with only non-negative powers of $e^{-i\omega}$, this way achieving that $a_k(\omega)\, u_k(\omega) \in H^\infty \subset H^2$. Equivalently, if $u_k(\omega)$ is the boundary value of a complex function $w_k(z)$ defined in the unit disc, $u_k(\omega) = w_k(e^{-i\omega})$, and $w_k(z)$ has singularities for $|z| \le 1$, then we would like to find a complex function $\alpha_k(z)$ in the unit disc so that $\alpha_k(e^{-i\omega}) = a_k(\omega)$ and $\alpha_k(z)\, w_k(z)$ is analytic in the open unit disc $D$ and continuous in the closed unit disc $|z| \le 1$:
$$\alpha_k(z)\, w_k(z) = \sum_{j=0}^{\infty} \psi_k(j)\, z^j, \quad z \in D. \tag{14}$$
Carrying out this procedure for $k = 1, \ldots, r$, one would obtain an $H^\infty$ sub-unitary matrix function.
Example 2.2.4 in (Szabados 2022) shows that, at least theoretically, this task can be carried out in certain cases. Furthermore, as a very similar problem, in the case of a rational spectral density one can always find an inner matrix multiplier that gives a rational analytic matrix function $\Phi(z)$ whose boundary value is an $H^2$ spectral factor $\phi(\omega)$; see, e.g., (Rozanov 1967, chp. I, Theorem 10.1).
Theorem 1.
(a)
Assume that a $d$-dimensional stationary time series $\{X_t\}$ is regular of rank $r$, $1 \le r \le d$. Then for $\Lambda_r(\omega)$ defined by (10) one has $\log \det \Lambda_r \in L^1 = L^1([-\pi, \pi], \mathcal{B}, d\omega)$; equivalently,
$$\int_{-\pi}^{\pi} \log \lambda_r(\omega)\, d\omega > -\infty. \tag{15}$$
(b)
If, moreover, one assumes that the regular time series $\{X_t\}$ has an $H^2$ spectral factor of the form (13), then the following statement holds as well:
The sub-unitary matrix function $\tilde{U}(\omega)$ appearing in the spectral decomposition of $f(\omega)$ in (9) can be chosen so that it belongs to the Hardy space $H^\infty \subset H^2$, and thus
$$\tilde{U}(\omega) = \sum_{j=0}^{\infty} \psi(j)\, e^{-ij\omega}, \quad \psi(j) \in \mathbb{C}^{d \times r}, \quad \sum_{j=0}^{\infty} \|\psi(j)\|^2 < \infty. \tag{16}$$
In this case one may call $\tilde{U}(\omega)$ an inner matrix function.
The next theorem gives a sufficient condition for the regularity of a generic weakly stationary time series; compare it with the statements of Theorem 1 above. Observe that assumptions (1) and (2) of the next theorem are also necessary conditions of regularity; only assumption (3) is not known to be necessary. We think that these assumptions are simpler to check in practice than those of Rozanov’s theorem cited above. By formula (13), checking assumption (3) means that for each eigenvector $u_k(\omega)$ of $f(\omega)$ we search for a complex multiplier $a_k(\omega)$ of unit absolute value that yields an $H^\infty$ function.
Theorem 2.
Let $\{X_t\}$ be a $d$-dimensional time series. It is regular of rank $r \le d$ if the following three conditions hold.
(1)
It has an absolutely continuous spectral measure matrix $dF$ with density matrix $f(\omega)$ which has rank $r$ for a.e. $\omega \in [-\pi, \pi]$.
(2)
For $\Lambda_r(\omega)$ defined by (10), one has $\log \det \Lambda_r \in L^1 = L^1([-\pi, \pi], \mathcal{B}, d\omega)$; equivalently, (15) holds.
(3)
The sub-unitary matrix function $\tilde{U}(\omega)$ appearing in the spectral decomposition of $f(\omega)$ in (9) can be chosen so that it belongs to the Hardy space $H^\infty \subset H^2$; thus, (16) holds.
Next we discuss a multivariable version of a well-known one-dimensional theorem; see, e.g., (Lamperti 1977, sct. 4.4). This theorem gives the Wold representation of a regular time series $\{X_t\}$. First let us recall some facts we are going to use. A sufficient and necessary condition of regularity is given by the factorization (5) and (6) of the spectral density $f$, where the $d \times r$ spectral factor $\phi$ is in $H^2$. Using the Singular Value Decomposition (SVD), we can write
$$\phi(\omega) = V(\omega)\, S(\omega)\, U^*(\omega), \tag{17}$$
where $V(\omega)$ is a $d \times r$ sub-unitary matrix, $U(\omega)$ is an $r \times r$ unitary matrix, and $S(\omega) = \operatorname{diag}[s_1, s_2, \ldots, s_r]$ is an $r \times r$ diagonal matrix of positive singular values $s_1 \ge s_2 \ge \cdots \ge s_r$, for a.e. $\omega \in [-\pi, \pi]$. Clearly, $s_j = \sqrt{2\pi \lambda_j}$ for $j = 1, \ldots, r$.
The (generalized) inverse of $\phi(\omega)$ is not unique when $d > r$. Let $\psi(\omega)$ be the Moore–Penrose inverse of $\phi(\omega)$:
$$\psi(\omega) := U(\omega)\, S^{-1}(\omega)\, V^*(\omega), \qquad \psi(\omega)\, \phi(\omega) = I_r, \quad \text{a.e. } \omega \in [-\pi, \pi].$$
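In code, the Moore–Penrose inverse is obtained directly from the SVD factors. The sketch below uses a generic full-column-rank matrix standing in for $\phi(\omega)$ at one frequency (illustrative only):

```python
import numpy as np

rng = np.random.default_rng(1)
d, r = 5, 3
phi = rng.standard_normal((d, r)) + 1j * rng.standard_normal((d, r))  # full column rank a.s.

V, s, Uh = np.linalg.svd(phi, full_matrices=False)  # φ = V S U*, with Uh = U*
psi = Uh.conj().T @ np.diag(1.0 / s) @ V.conj().T   # ψ = U S⁻¹ V*

assert np.allclose(psi @ phi, np.eye(r))            # left inverse: ψ φ = I_r
assert np.allclose(psi, np.linalg.pinv(phi))        # agrees with the built-in pseudoinverse
```

Note that $\psi \phi = I_r$ exactly, while $\phi \psi$ is only the orthogonal projection onto the column space of $\phi$ when $d > r$.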
We also need the spectral (Cramér) representation of the stationary time series:
$$X_t = \int_{-\pi}^{\pi} e^{it\omega}\, dZ_\omega, \quad t \in \mathbb{Z},$$
where $\{Z_\omega\}$ is a stochastic process with orthogonal increments, obtained by the isometry between the Hilbert spaces $H(X) \subset L^2(\Omega, \mathcal{F}, P)$ and $L^2(dF) := L^2([-\pi, \pi], \mathcal{B}, \operatorname{tr}(dF))$; see, e.g., (Bolla and Szabados 2021, sct. 1.3). Namely, if $Y_j = \int_{-\pi}^{\pi} \psi_j(\omega)\, dZ_\omega$ ($j = 1, 2$), then
$$\mathbb{E}(Y_1 Y_2^*) = \Big[\langle Y_1^k, Y_2^\ell \rangle_{H(X)}\Big]_{d \times d} = \Big[\langle \psi_1^k, \psi_2^\ell \rangle_{L^2(dF^{k\ell})}\Big]_{d \times d} = \int_{-\pi}^{\pi} \psi_1(\omega)\, dF(\omega)\, \psi_2^*(\omega). \tag{18}$$
Theorem 3.
Assume that the spectral measure of a $d$-dimensional weakly stationary time series $\{X_t\}$ is absolutely continuous with density $f$ which has constant rank $r$, $1 \le r \le d$. Moreover, assume that there is a finite constant $M$ such that $\|f(\omega)\| \le M$ for all $\omega \in [-\pi, \pi]$, and $f$ has a factorization $f = \frac{1}{2\pi} \phi \phi^*$, where $\phi \in H^2$ and its Moore–Penrose inverse $\psi \in H^2$ as well.
Then the time series is regular and its fundamental white noise process can be obtained as
$$\xi_t = \int_{-\pi}^{\pi} e^{it\omega}\, \psi(\omega)\, dZ_\omega \tag{19}$$
$$= \sum_{k=0}^{\infty} \gamma(k)\, X_{t-k} \quad (t \in \mathbb{Z}), \tag{20}$$
where
$$\psi(\omega) = \sum_{k=0}^{\infty} \gamma(k)\, e^{-ik\omega}, \qquad \gamma(k) = \frac{1}{2\pi} \int_{-\pi}^{\pi} e^{ik\omega}\, \psi(\omega)\, d\omega \tag{21}$$
is the Fourier series of $\psi$, convergent in the $L^2$ sense.
The sequence of coefficients $\{b(j)\}$ of the Wold representation is given by the $L^2$ convergent Fourier series
$$\phi(\omega) = \sum_{j=0}^{\infty} b(j)\, e^{-ij\omega}, \qquad b(j) = \frac{1}{2\pi} \int_{-\pi}^{\pi} e^{ij\omega}\, \phi(\omega)\, d\omega. \tag{22}$$
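Theorem 3 can be tried out on the scalar AR(1) factor $\phi(\omega) = \sigma/(1 - a e^{-i\omega})$ (a toy example, not from the paper): the Fourier coefficients of $\phi$ give the Wold coefficients $b(j) = \sigma a^j$, while those of $\psi = 1/\phi$ recover the innovation $\xi_t = (X_t - a X_{t-1})/\sigma$:

```python
import numpy as np

a, sigma, M = 0.6, 1.0, 512
omega = 2 * np.pi * np.arange(M) / M
phi = sigma / (1 - a * np.exp(-1j * omega))   # scalar H² spectral factor of an AR(1)
psi = 1.0 / phi                               # Moore–Penrose inverse: ψ = (1 - a e^{-iω})/σ

b = np.fft.ifft(phi).real       # b(j) = (1/2π)∫ e^{ijω} φ(ω) dω  →  σ a^j
gamma = np.fft.ifft(psi).real   # γ(k): here γ(0) = 1/σ, γ(1) = -a/σ, the rest vanish

assert np.allclose(b[:4], sigma * a ** np.arange(4), atol=1e-6)
assert np.allclose(gamma[:3], [1 / sigma, -a / sigma, 0.0], atol=1e-10)
# So ξ_t = Σ_k γ(k) X_{t-k} = (X_t - a X_{t-1})/σ, the familiar AR(1) innovation.
```

The `ifft` on a full-period grid approximates the coefficient integrals in (21) and (22), up to a negligible aliasing term of order $a^M$.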
Proof.
The regularity of $\{X_t\}$ follows directly from the assumptions by (5) and (6).
First, let us verify that the stochastic integral (19) is well defined, that is, the components of $\psi$ belong to $L^2([-\pi, \pi], \mathcal{B}, \operatorname{tr}(dF))$:
$$\int_{-\pi}^{\pi} \psi(\omega)\, f(\omega)\, \psi^*(\omega)\, d\omega = \frac{1}{2\pi} \int_{-\pi}^{\pi} \psi(\omega)\, \phi(\omega)\, \phi^*(\omega)\, \psi^*(\omega)\, d\omega = I_r.$$
This also justifies that
$$\xi_t = \int_{-\pi}^{\pi} e^{it\omega}\, \psi(\omega)\, dZ_\omega = \int_{-\pi}^{\pi} e^{it\omega} \sum_{k=0}^{\infty} \gamma(k)\, e^{-ik\omega}\, dZ_\omega = \sum_{k=0}^{\infty} \gamma(k)\, X_{t-k}.$$
Second, let us check that the sequence defined by (19) is orthonormal, using the isometry (18):
$$\mathbb{E}(\xi_n \xi_m^*) = \int_{-\pi}^{\pi} e^{in\omega}\, \psi(\omega)\, f(\omega)\, e^{-im\omega}\, \psi^*(\omega)\, d\omega = \frac{1}{2\pi} \int_{-\pi}^{\pi} e^{i(n-m)\omega}\, \psi(\omega)\, \phi(\omega)\, \phi^*(\omega)\, \psi^*(\omega)\, d\omega = \delta_{n,m}\, I_r.$$
Third, let us show that $\xi_n$ is orthogonal to the past $H_{n-k}^-(X)$ for any $k > 0$:
$$\mathbb{E}(X_{n-k}\, \xi_n^*) = \int_{-\pi}^{\pi} e^{i(n-k)\omega}\, f(\omega)\, e^{-in\omega}\, \psi^*(\omega)\, d\omega = \frac{1}{2\pi} \int_{-\pi}^{\pi} e^{-ik\omega}\, \phi(\omega)\, \phi^*(\omega)\, \psi^*(\omega)\, d\omega = \frac{1}{2\pi} \int_{-\pi}^{\pi} e^{-ik\omega}\, \phi(\omega)\, d\omega = 0_{d \times r}$$
for any $k > 0$, since $\phi \in H^2$, so its Fourier coefficients with negative indices are zero.
Fourth, let us see that $\xi_n \in H_n^-(X)$ for $n \in \mathbb{Z}$. Because of stationarity, it is enough to show that $\xi_0 \in H_0^-(X)$. Since $H_0^-(X)$ is the closure in $L^2(\Omega, \mathcal{F}, P)$ of the components of all finite linear combinations of the form $\sum_{k=0}^{N} \gamma_k X_{-k}$, by the isometry this is equivalent to the fact that $\psi$ belongs to the closure in $L^2([-\pi, \pi], \mathcal{B}, \operatorname{tr}(dF))$ of all finite linear combinations of the form $\sum_{k=0}^{N} \gamma_k e^{-ik\omega}$.
Using the assumed boundedness of $\|f\|$, we obtain that
$$\Big\| \int_{-\pi}^{\pi} \Big(\sum_{k=0}^{N} \gamma_k e^{-ik\omega} - \psi(\omega)\Big)\, f(\omega)\, \Big(\sum_{k=0}^{N} \gamma_k^* e^{ik\omega} - \psi^*(\omega)\Big)\, d\omega \Big\| \le \int_{-\pi}^{\pi} \Big\|\sum_{k=0}^{N} \gamma_k e^{-ik\omega} - \psi(\omega)\Big\|\; \|f(\omega)\|\; \Big\|\sum_{k=0}^{N} \gamma_k^* e^{ik\omega} - \psi^*(\omega)\Big\|\, d\omega \le M \int_{-\pi}^{\pi} \Big\|\sum_{k=0}^{N} \gamma_k e^{-ik\omega} - \psi(\omega)\Big\|^2 d\omega. \tag{23}$$
We assumed that $\psi \in H^2$, which means that $\psi$ has a one-sided Fourier series (21) which converges in $L^2([-\pi, \pi], \mathcal{B}, d\omega)$, where $d\omega$ denotes Lebesgue measure:
$$\lim_{N \to \infty} \frac{1}{2\pi} \int_{-\pi}^{\pi} \Big|\sum_{k=0}^{N} \gamma_k^{j\ell}\, e^{-ik\omega} - \psi^{j\ell}(\omega)\Big|^2 d\omega = 0 \quad (j, \ell = 1, \ldots, d).$$
Since the spectral norm squared of a matrix is bounded by the sum of the squared absolute values of its entries, this implies that the last term in (23) tends to $0$ as $N \to \infty$. This shows that $\xi_0 \in H_0^-(X)$.
Fifth, by (17), we see that
$$(\phi \psi - I_d)\, f\, (\psi^* \phi^* - I_d) = (\phi \psi - I_d)\, \frac{1}{2\pi}\, \phi\, \phi^*\, (\psi^* \phi^* - I_d) = 0_{d \times d} \tag{24}$$
a.e. in $[-\pi, \pi]$. Consequently, the difference
$$\Delta_t := \int_{-\pi}^{\pi} e^{it\omega}\, \phi(\omega)\, \psi(\omega)\, dZ_\omega - \int_{-\pi}^{\pi} e^{it\omega}\, dZ_\omega$$
is orthogonal to itself in $H(X)$, so it is a zero vector. Then by (19) and (22),
$$X_t = \int_{-\pi}^{\pi} e^{it\omega}\, dZ_\omega = \int_{-\pi}^{\pi} e^{it\omega}\, \phi(\omega)\, \psi(\omega)\, dZ_\omega = \int_{-\pi}^{\pi} e^{it\omega} \sum_{j=0}^{\infty} b(j)\, e^{-ij\omega}\, \psi(\omega)\, dZ_\omega = \sum_{j=0}^{\infty} b(j)\, \xi_{t-j}. \tag{25}$$
Equation (24) shows that each entry of $\phi \psi$ belongs to $L^2([-\pi, \pi], \mathcal{B}, \operatorname{tr}(dF))$, so the isometry between this space and $H(X)$ justifies (25).
Finally, the previous steps show that the innovation spaces of the sequences $\{X_t\}$ and $\{\xi_t\}$ are the same for any time $n \in \mathbb{Z}$, so the pasts $H_n^-(X)$ and $H_n^-(\xi)$ agree as well for any $n \in \mathbb{Z}$. Thus (25) gives the Wold representation of $\{X_t\}$. □

## 3. Smooth Eigenvalues of the Spectral Density

In the one-dimensional case there is a well-known sufficient condition of regularity, which at the same time gives a formula for an $H^2$ spectral factor and also for the white noise sequence and the coefficients in the Wold decomposition (1). This is the assumption that the process has a continuously differentiable spectral density $f(\omega) > 0$ for every $\omega \in [-\pi, \pi]$; see, e.g., (Lamperti 1977, p. 76) or (Bolla and Szabados 2021, sct. 2.8.2).
This sufficient condition can be partially generalized to the multidimensional case. Suppose that a regular $d$-dimensional time series $\{X_t\}$ has an $H^2$ spectral factor of the form (13); equivalently, the sub-unitary matrix function $\tilde{U}(\omega)$ appearing in the spectral decomposition of $f(\omega)$ in (9) can be chosen so that it belongs to the Hardy space $H^\infty \subset H^2$. Then the smoothness of the nonzero eigenvalues of the spectral density $f$ gives a formula for an $H^2$ spectral factor. The argument in the paragraph around Equation (14) above shows that in certain cases one can find such a sub-unitary matrix function $\tilde{U}(\omega)$.
Theorem 4.
Let $\{X_t\}$ be a $d$-dimensional time series. It is regular of rank $r \le d$ if the following three conditions hold.
(1)
It has an absolutely continuous spectral measure matrix $dF$ with density matrix $f(\omega)$ which has rank $r$ for a.e. $\omega \in [-\pi, \pi]$.
(2)
Each nonzero eigenvalue $\lambda_j(\omega)$, $j = 1, \ldots, r$, of $f(\omega)$ is a continuously differentiable positive function on $[-\pi, \pi]$.
(3)
The sub-unitary matrix function $\tilde{U}(\omega)$ appearing in the spectral decomposition of $f(\omega)$ in (9) can be chosen so that it belongs to the Hardy space $H^\infty \subset H^2$, and thus (16) holds.
Moreover, $\{X_t\}$ then satisfies the conditions of Theorem 3 too, so formulas (19)–(22) give the Wold representation of $\{X_t\}$.
Proof.
Condition (2) implies that each $\log \lambda_j(\omega)$, $j = 1, \ldots, r$, is also a continuously differentiable function on $[-\pi, \pi]$, so it can be expanded into a uniformly convergent Fourier series
$$\log \lambda_j(\omega) = \sum_{n=-\infty}^{\infty} \beta_{j,n}\, e^{in\omega}, \qquad \beta_{j,-n} = \bar{\beta}_{j,n}. \tag{26}$$
Write it as
$$\log \lambda_j(\omega) = Q_j(\omega) + \overline{Q_j(\omega)} := \Big(\tfrac{1}{2}\beta_{j,0} + \sum_{n=1}^{\infty} \bar{\beta}_{j,n}\, e^{-in\omega}\Big) + \Big(\tfrac{1}{2}\beta_{j,0} + \sum_{n=1}^{\infty} \beta_{j,n}\, e^{in\omega}\Big).$$
Define
$$\gamma_j(\omega) := \exp(Q_j(\omega)) = \sum_{k=0}^{\infty} \frac{1}{k!} \Big(\tfrac{1}{2}\beta_{j,0} + \sum_{n=1}^{\infty} \bar{\beta}_{j,n}\, e^{-in\omega}\Big)^k. \tag{27}$$
Then
$$\lambda_j(\omega) = \gamma_j(\omega)\, \bar{\gamma}_j(\omega) = \exp(Q_j(\omega))\, \exp(\overline{Q_j(\omega)}), \quad j = 1, \ldots, r. \tag{28}$$
Observe that each $Q_j(\omega)$, and consequently each $\gamma_j(\omega)$, $j = 1, \ldots, r$, is a continuous function on $[-\pi, \pi]$, so it is in $L^2(T)$. Moreover, each $Q_j(\omega)$, and consequently each $\gamma_j(\omega)$, has only non-negative powers of $e^{-i\omega}$ in its Fourier series. So each $\gamma_j$ belongs to the Hardy space $H^2$.
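This construction is easy to carry out numerically with the FFT. The sketch below uses the toy eigenvalue $\lambda(\omega) = e^{\cos\omega}$ (an illustrative choice, not from the paper), computes the coefficients $\beta_n$ of $\log\lambda$, builds $\gamma = \exp(Q)$ from the one-sided half of the series, and checks that $\lambda = \gamma\bar{\gamma}$:

```python
import numpy as np

M = 1024
omega = 2 * np.pi * np.arange(M) / M                  # uniform grid over one period
lam = np.exp(np.cos(omega))                           # smooth positive eigenvalue (toy choice)

beta = np.fft.fft(np.log(lam)) / M                    # β_n ≈ (1/2π) ∫ log λ(ω) e^{-inω} dω
# One-sided part: Q(ω) = β_0/2 + Σ_{n≥1} β̄_n e^{-inω}
Q = beta[0] / 2 + sum(np.conj(beta[n]) * np.exp(-1j * n * omega) for n in range(1, M // 2))
gamma = np.exp(Q)                                     # γ(ω) = exp(Q(ω)), an H² function

assert np.allclose(np.abs(gamma) ** 2, lam)           # λ = γ γ̄ recovered
```

For this $\lambda$, the only nonzero coefficients are $\beta_0 = 0$ and $\beta_{\pm 1} = 1/2$, so $\gamma(\omega) = \exp(\tfrac{1}{2} e^{-i\omega})$.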
Substitute (28) into (9):
$$f(\omega) = \tilde{U}(\omega)\, \Gamma(\omega)\, \Gamma^*(\omega)\, \tilde{U}^*(\omega), \qquad \Gamma(\omega) := \operatorname{diag}[\gamma_1(\omega), \ldots, \gamma_r(\omega)].$$
Thus we can take the spectral factor
$$\phi(\omega) := \sqrt{2\pi}\; \tilde{U}(\omega)\, \Gamma(\omega). \tag{29}$$
Since each $\gamma_j(\omega) \in H^2$ and, by condition (3), each entry of $\tilde{U}(\omega)$ is in $H^\infty$, each entry of $\phi(\omega)$ is in $H^2$. This means that $\phi$ is an $H^2$ spectral factor as in (5) and (6), and consequently $\{X_t\}$ is regular.
Take the Moore–Penrose inverse of $\phi$:
$$\psi(\omega) := \phi^+(\omega) = (2\pi)^{-1/2}\, \Gamma^{-1}(\omega)\, \tilde{U}^*(\omega), \qquad \Gamma^{-1}(\omega) := \operatorname{diag}[\gamma_1^{-1}(\omega), \ldots, \gamma_r^{-1}(\omega)],$$
where by (27) each
$$\gamma_j^{-1}(\omega) = \exp(-Q_j(\omega)) = \sum_{k=0}^{\infty} \frac{1}{k!} \Big(-\tfrac{1}{2}\beta_{j,0} - \sum_{n=1}^{\infty} \bar{\beta}_{j,n}\, e^{-in\omega}\Big)^k,$$
so it is also continuous and its Fourier series has only non-negative powers of $e^{-i\omega}$ too. This implies that $\psi \in H^2$.
Finally, since each $\lambda_j(\omega)$ is a continuous function on $[-\pi, \pi]$, and thus bounded, and the components of $\tilde{U}(\omega)$ are bounded functions because $\tilde{U}(\omega)$ is sub-unitary, it follows that $\|f\|$ is bounded. □
The $d$-dimensional Kolmogorov–Szegő Formula (8) gives only the determinant of the covariance matrix $\Sigma$ of the innovations of a full rank regular time series. The case when the rank $r$ of the process is less than $d$ is similar:
$$\det \Sigma_r = (2\pi)^d \exp \int_{-\pi}^{\pi} \log \det \Lambda_r(\omega)\, \frac{d\omega}{2\pi},$$
where $\Lambda_r$ is the diagonal matrix of the $r$ nonzero eigenvalues of $f$ and $\Sigma_r$ is the covariance matrix of the innovation of an $r$-dimensional subprocess of rank $r$ of the original time series; see (Bolla and Szabados 2021, Corollary 4.5) or (Szabados 2022, Corollary 2.5).
Fortunately, under the conditions of Theorem 4, one can obtain the covariance matrix $Σ$ itself by a similar formula, as the next theorem shows.
Theorem 5.
Assume that a weakly stationary $d$-dimensional time series satisfies the conditions of Theorem 4. Then the covariance matrix $\Sigma$ of the innovations of the process can be obtained as
$$\Sigma = 2\pi\, \psi(0)\, \operatorname{diag}\Big[\exp \int_{-\pi}^{\pi} \log \lambda_j(\omega)\, \frac{d\omega}{2\pi}\Big]_{j=1,\ldots,r}\, \psi^*(0),$$
where $\lambda_j$, $j = 1, \ldots, r$, are the nonzero eigenvalues of the spectral density matrix $f$ of the process, $\tilde{U}(\omega)$ is the $d \times r$ matrix of corresponding orthonormal eigenvectors, and
$$\psi(0) = \frac{1}{2\pi} \int_{-\pi}^{\pi} \tilde{U}(\omega)\, d\omega. \tag{30}$$
Proof.
By (2), the error of the best linear one-step prediction, which is at the same time the innovation, is
$$X_1 - \hat{X}_1 = b(0)\, \xi_1,$$
using the Wold decomposition of $\{X_t\}$. Thus the covariance matrix of the innovation is
$$\Sigma = \mathbb{E}\big((X_1 - \hat{X}_1)(X_1 - \hat{X}_1)^*\big) = b(0)\, b^*(0).$$
With the analytic function $\Phi(z)$ corresponding to the Wold decomposition by (7), $b(0) = \Phi(0)$. Taking the Fourier series (16), let
$$\hat{U}(z) := \sum_{j=0}^{\infty} \psi(j)\, z^j, \quad |z| \le 1,$$
$$\hat{\Gamma}(z) := \operatorname{diag}\Big[\sum_{k=0}^{\infty} \frac{1}{k!} \Big(\tfrac{1}{2}\beta_{j,0} + \sum_{n=1}^{\infty} \bar{\beta}_{j,n}\, z^n\Big)^k\Big]_{j=1,\ldots,r}, \quad |z| \le 1.$$
Now using (29), it follows that
$$\Phi(z) = \sqrt{2\pi}\; \hat{U}(z)\, \hat{\Gamma}(z), \quad |z| \le 1,$$
and
$$\Phi(0) = \sqrt{2\pi}\; \hat{U}(0)\, \hat{\Gamma}(0) = \sqrt{2\pi}\; \psi(0)\, \operatorname{diag}\big[\exp(\beta_{j,0}/2)\big]_{j=1,\ldots,r}.$$
Combining the previous results,
$$\Sigma = 2\pi\, \psi(0)\, \operatorname{diag}\big[\exp(\beta_{j,0})\big]_{j=1,\ldots,r}\, \psi^*(0),$$
where $\psi(0)$ is given by (30) and, by (26),
$$\beta_{j,0} = \frac{1}{2\pi} \int_{-\pi}^{\pi} \log \lambda_j(\omega)\, d\omega, \quad j = 1, \ldots, r.$$
This completes the proof of the theorem. □
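Theorem 5 can be sanity-checked numerically in a toy setting where $\tilde{U}$ is constant (hence trivially $H^\infty$) and the eigenvalues are smooth; the eigenvalue choices below are illustrative, not from the paper:

```python
import numpy as np

# Constant sub-unitary Ũ (d×r, Ũ*Ũ = I_r) and smooth, ordered eigenvalues λ1 ≥ λ2 > 0
U = np.linalg.qr(np.array([[1.0, 0.0], [1.0, 1.0], [0.0, 1.0]]))[0]
N = 4096
omega = -np.pi + 2 * np.pi * np.arange(N) / N          # periodic grid on [-π, π)
lam = [2.0 * np.exp(np.cos(omega)), 0.5 * np.ones(N)]  # min λ1 = 2/e > 0.5 = λ2 everywhere

# Theorem 5: Σ = 2π ψ(0) diag[exp((1/2π)∫ log λ_j dω)] ψ*(0); here ψ(0) = (1/2π)∫ Ũ dω = Ũ
g = [np.exp(np.mean(np.log(l))) for l in lam]          # geometric means of the eigenvalues
Sigma = 2 * np.pi * U @ np.diag(g) @ U.T

# (1/2π)∫ (log 2 + cos ω) dω = log 2, so g ≈ [2, 0.5]
assert np.allclose(g, [2.0, 0.5])
assert np.allclose(Sigma, Sigma.T)                     # Σ is a symmetric covariance matrix
```

With a constant $\tilde{U}$ the process is just a unitary mixing of two independent scalar processes, so the diagonalized entries of $\Sigma$ are the scalar Kolmogorov–Szegő variances $2\pi \cdot 2$ and $2\pi \cdot 0.5$.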

## Funding

This research received no external funding.


## Conflicts of Interest

The author declares no conflict of interest.

## References

1. Anderson, Brian, Manfred Deistler, and Marco Lippi. 2022. Linear system challenges of dynamic factor models. Econometrics 10: 35. [Google Scholar] [CrossRef]
2. Basu, Sumanta, Xianqi Li, and George Michailidis. 2019. Low rank and structured modeling of high-dimensional vector autoregressions. IEEE Transactions on Signal Processing 67: 1207–22. [Google Scholar] [CrossRef]
3. Bolla, Marianna, and Tamás Szabados. 2021. Multidimensional Stationary Time Series: Dimension Reduction and Prediction. Boca Raton: CRC Press. [Google Scholar]
4. Brockwell, Peter J., Richard A. Davis, and Stephen E. Fienberg. 1991. Time Series: Theory and Methods. New York: Springer. [Google Scholar]
5. Cao, Wenqi, Giorgio Picci, and Anders Lindquist. 2023. Identification of low rank vector processes. Automatica 151: 110938. [Google Scholar] [CrossRef]
6. Fuhrmann, Paul A. 2014. Linear Systems and Operators in Hilbert Space. Chelmsford: Courier Corporation. [Google Scholar]
7. Kolmogorov, Andreĭ Nikolaevich. 1941. Stationary sequences in Hilbert space. Moscow University Mathematics Bulletin 2: 1–40. [Google Scholar]
8. Lamperti, John. 1977. Stochastic Processes: A Survey of the Mathematical Theory. New York: Springer. [Google Scholar]
9. Lippi, Marco, Manfred Deistler, and Brian Anderson. 2023. High-dimensional dynamic factor models: A selective survey and lines of future research. Econometrics and Statistics 26: 3–16. [Google Scholar] [CrossRef]
10. Nikolski, Nikolaï. 2019. Hardy Spaces. Cambridge: Cambridge University Press, vol. 179. [Google Scholar]
11. Rozanov, Yu A. 1967. Stationary Random Processes. San Francisco: Holden–Day. [Google Scholar]
12. Rudin, Walter. 2006. Real and Complex Analysis. Singapore: Tata McGraw-Hill Education. [Google Scholar]
13. Szabados, Tamás. 2022. Regular multidimensional stationary processes. Journal of Time Series Analysis 43: 263–84, Erratum in Journal of Time Series Analysis 44: 331–32. [Google Scholar] [CrossRef]
14. Wang, Di, Yao Zheng, Heng Lian, and Guodong Li. 2022. High-dimensional vector autoregressive time series modeling via tensor decomposition. Journal of the American Statistical Association 117: 1338–56. [Google Scholar] [CrossRef]
15. Wiener, Norbert, and Pesi Masani. 1957. The prediction theory of multivariate stochastic processes, I. The regularity condition. Acta Mathematica 98: 111–50. [Google Scholar] [CrossRef]
