Article

On the Asymptotic Optimality of a Low-Complexity Coding Strategy for WSS, MA, and AR Vector Sources

by Jesús Gutiérrez-Gutiérrez *, Marta Zárraga-Rodríguez and Xabier Insausti
Tecnun, University of Navarra, Paseo Manuel Lardizábal 13, 20018 San Sebastián, Spain
*
Author to whom correspondence should be addressed.
Entropy 2020, 22(12), 1378; https://doi.org/10.3390/e22121378
Submission received: 7 October 2020 / Revised: 17 November 2020 / Accepted: 30 November 2020 / Published: 5 December 2020
(This article belongs to the Section Information Theory, Probability and Statistics)

Abstract: In this paper, we study the asymptotic optimality of a low-complexity coding strategy for Gaussian vector sources. Specifically, we study the convergence speed of the rate of such a coding strategy when it is used to encode the most relevant vector sources, namely wide sense stationary (WSS), moving average (MA), and autoregressive (AR) vector sources. We also study how the coding strategy considered performs when it is used to encode perturbed versions of those relevant sources. More precisely, we give a sufficient condition for such perturbed versions so that the convergence speed of the rate remains unaltered.

1. Introduction

In [1], Kolmogorov gave a formula for the rate distortion function (RDF) of Gaussian vectors and for the RDF of Gaussian wide sense stationary (WSS) sources. In [2], Pearl presented an upper bound for the RDF of finite-length data blocks of any Gaussian WSS source and proved that such a bound tends to the RDF of the source when the length of the data block grows. However, he did not propose a coding strategy to achieve his bound for a given block length. In [3], we presented a tighter upper bound for the RDF of finite-length data blocks of any Gaussian WSS source, and we proposed a low-complexity coding strategy to achieve our bound. Obviously, since such a bound is tighter than the one given by Pearl, it also tends to the RDF of the source when the length of the data block grows. In [4], we generalized our low-complexity coding strategy to encode (compress) finite-length data blocks of any Gaussian vector source. Moreover, in [4], we also gave a sufficient condition for the vector source in order to make such a coding strategy asymptotically optimal. We recall that a coding strategy is asymptotically optimal if its rate tends to the RDF of the source as the length of the data block grows. Such a sufficient condition requires the Gaussian vector source to be asymptotically WSS (AWSS). The definition of the AWSS process was first introduced in [5], Section 6, and extended to vector processes in [6], Definition 7.1. However, the convergence speed of the rate of the coding strategy considered (i.e., how fast the rate of the coding strategy tends to the RDF of the AWSS vector source) was not studied in [4].
In this paper, we present a less restrictive sufficient condition for the vector source to make the coding strategy considered asymptotically optimal. Moreover, we study the convergence speed of the rate of such a coding strategy when it is used to encode the most relevant vector sources, namely, WSS, moving average (MA), and autoregressive (AR) vector sources. In this paper, we also study how the coding strategy considered performs when it is used to encode perturbed versions of those relevant sources. Specifically, we give a sufficient condition for such perturbed versions so that the convergence speed of the rate remains unaltered.
The study of the convergence speed in any information-theoretic problem is not an easy task. To study the aforementioned convergence speed, we first need to derive new mathematical results on block Toeplitz matrices and new mathematical results on the correlation matrices of the WSS, MA, and AR vector processes. These new mathematical results are useful not only to study the convergence speed in the information-theoretic problem considered, but also in other problems. In fact, as an example, in Appendix H, we use such mathematical results to study the convergence speed in a statistical signal processing problem on filtering WSS vector processes.
The paper is organized as follows. In Section 2, we give several new mathematical results on block Toeplitz matrices. In Section 3, using the results obtained in Section 2, we give several new mathematical results on the correlation matrices of WSS, MA, and AR vector processes. In Section 4, we recall the low-complexity coding strategy presented in [4], and using the results obtained in Section 3, we study the asymptotic optimality of such a coding strategy when it is used to encode WSS, MA, and AR vector sources. In Section 4, we also study how the coding strategy considered performs when it is used to encode perturbed versions of those sources. Finally, in Section 5, some conclusions are presented.

2. Several New Results on Block Toeplitz Matrices

In this section, we present new results on the product of block Toeplitz matrices, on the inverse of a block Toeplitz matrix, and on block circulant matrices. These results will be used in Section 3. We begin by introducing some notation.

2.1. Notation

In this paper, $\mathbb{N}$, $\mathbb{Z}$, $\mathbb{R}$, and $\mathbb{C}$ denote the set of natural numbers (that is, the set of positive integers), the set of integer numbers, the set of real numbers, and the set of complex numbers, respectively. $\mathbb{C}^{M\times N}$ is the set of all $M\times N$ complex matrices. $I_N$ stands for the $N\times N$ identity matrix. $0_{M\times N}$ denotes the $M\times N$ zero matrix. $V_n$ is the $n\times n$ Fourier unitary matrix, i.e.,
$$[V_n]_{j,k} = \frac{1}{\sqrt{n}}\, e^{-\frac{2\pi(j-1)(k-1)}{n}i}, \qquad j,k\in\{1,\ldots,n\},$$
with $i$ being the imaginary unit. We denote by $\lambda_1(A),\ldots,\lambda_n(A)$ the eigenvalues of an $n\times n$ Hermitian matrix $A$ arranged in decreasing order. $*$ denotes the conjugate transpose. $\otimes$ is the Kronecker product. $\|\cdot\|_2$ and $\|\cdot\|_F$ are the spectral norm and the Frobenius norm, respectively.
If $n\in\mathbb{N}$ and $A_j\in\mathbb{C}^{M\times N}$ for all $j\in\{1,\ldots,n\}$, then $\operatorname{diag}(A_1,\ldots,A_n)$ is the $n\times n$ block diagonal matrix whose $M\times N$ blocks are given by
$$[\operatorname{diag}(A_1,\ldots,A_n)]_{j,k} = \delta_{j,k}\,A_j, \qquad j,k\in\{1,\ldots,n\},$$
where $\delta$ is the Kronecker delta. We also denote by $\operatorname{diag}_{1\le k\le n}(A_k)$ the matrix $\operatorname{diag}(A_1,\ldots,A_n)$.
If $n\in\mathbb{N}$ and $F:\mathbb{R}\to\mathbb{C}^{M\times N}$ is a continuous $2\pi$-periodic function, $T_n(F)$ stands for the $n\times n$ block Toeplitz matrix generated by $F$, whose $M\times N$ blocks are given by
$$[T_n(F)]_{j,k} = F_{j-k}, \qquad j,k\in\{1,\ldots,n\},$$
where $\{F_k\}_{k\in\mathbb{Z}}$ is the sequence of Fourier coefficients of $F$, that is,
$$F_k = \frac{1}{2\pi}\int_0^{2\pi} e^{-k\omega i}\,F(\omega)\,d\omega, \qquad k\in\mathbb{Z}.$$
We denote by $C_n(F)$ the $n\times n$ block circulant matrix with $M\times N$ blocks defined as
$$C_n(F) = (V_n\otimes I_M)\operatorname{diag}_{1\le k\le n}\left(F\!\left(\frac{2\pi(k-1)}{n}\right)\right)(V_n\otimes I_N)^*.$$
If $A_n\in\mathbb{C}^{nM\times nN}$, then $C_{A_n}$ is the $n\times n$ block circulant matrix with $M\times N$ blocks given by
$$C_{A_n} = (V_n\otimes I_M)\operatorname{diag}_{1\le k\le n}\left(\left[(V_n\otimes I_M)^*\,A_n\,(V_n\otimes I_N)\right]_{k,k}\right)(V_n\otimes I_N)^*.$$
We denote by $\hat{C}_n(F)$ the $n\times n$ block circulant matrix with $M\times N$ blocks defined as $\hat{C}_n(F) = C_{T_n(F)}$.
If $F(\omega)$ is Hermitian for all $\omega\in\mathbb{R}$ (or equivalently, $T_n(F)$ is Hermitian for all $n\in\mathbb{N}$; see, e.g., [6], Theorem 4.4), then $\inf F$ denotes $\inf_{\omega\in[0,2\pi]}\lambda_N(F(\omega))$. We recall that (see [7], Proposition 3):
$$\inf_{n\in\mathbb{N}}\lambda_{nN}(T_n(F)) = \inf F = \min_{\omega\in[0,2\pi]}\lambda_N(F(\omega)). \tag{1}$$
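To make the notation concrete, the following minimal numerical sketch (ours, not part of the paper) builds $T_n(F)$ and $C_n(F)$ for a small Hermitian trigonometric polynomial and checks the eigenvalue relation in Equation (1). All function names and the choice of Fourier coefficients are illustrative assumptions, as is the sign convention used for $V_n$.

```python
# Sketch: build T_n(F) and C_n(F) from the definitions and check Equation (1).
import numpy as np

N = 2
# Fourier coefficients of F(w) = F_{-1} e^{-wi} + F_0 + F_1 e^{wi}, with F_{-1} = F_1^*
F0 = np.array([[2.0, 0.5], [0.5, 2.0]])
F1 = np.array([[0.3, 0.1], [0.2, 0.25]])
Fk = {0: F0, 1: F1, -1: F1.conj().T}            # F_k = 0 for |k| > 1

def F(w):
    return sum(np.exp(1j * k * w) * Fk[k] for k in (-1, 0, 1))

def build_T(n):
    """n x n block Toeplitz matrix with blocks [T_n(F)]_{j,k} = F_{j-k}."""
    T = np.zeros((n * N, n * N), dtype=complex)
    for j in range(n):
        for k in range(n):
            if abs(j - k) <= 1:
                T[j*N:(j+1)*N, k*N:(k+1)*N] = Fk[j - k]
    return T

def build_C(n):
    """C_n(F) = (V_n (x) I_N) diag_k F(2*pi*(k-1)/n) (V_n (x) I_N)^* (sign convention assumed)."""
    Vn = np.exp(-2j * np.pi * np.outer(np.arange(n), np.arange(n)) / n) / np.sqrt(n)
    D = np.zeros((n * N, n * N), dtype=complex)
    for k in range(n):
        D[k*N:(k+1)*N, k*N:(k+1)*N] = F(2 * np.pi * k / n)
    U = np.kron(Vn, np.eye(N))
    return U @ D @ U.conj().T

n = 32
T, C = build_T(n), build_C(n)
inf_F = min(np.linalg.eigvalsh(F(w))[0] for w in np.linspace(0, 2 * np.pi, 400))
print(np.linalg.eigvalsh(T)[0] >= inf_F - 1e-9)   # Equation (1): lambda_{nN}(T_n(F)) >= inf F
print(np.linalg.eigvalsh(C)[0] >= inf_F - 1e-9)   # eigenvalues of C_n(F) are samples of F
```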

2.2. Product of Block Toeplitz Matrices

We begin this subsection with a result on the entries of the block Toeplitz matrices generated by the product of two functions, which is a direct consequence of the Parseval theorem.
Lemma 1.
Consider two continuous $2\pi$-periodic functions $F:\mathbb{R}\to\mathbb{C}^{M\times N}$ and $G:\mathbb{R}\to\mathbb{C}^{N\times K}$. Let $\{F_k\}_{k\in\mathbb{Z}}$ and $\{G_k\}_{k\in\mathbb{Z}}$ be the sequences of Fourier coefficients of $F$ and $G$, respectively. Then,
$$[T_n(FG)]_{j,k} = \sum_{h=-\infty}^{\infty} F_{j-h}\,G_{h-k}$$
for all $n\in\mathbb{N}$ and $j,k\in\{1,\ldots,n\}$.
Proof. 
See Appendix A. □
We can now give a result on the product of two block Toeplitz matrices when one of them is generated by a trigonometric polynomial. We recall that an $M\times N$ trigonometric polynomial of degree $p\in\mathbb{N}\cup\{0\}$ is a function $F:\mathbb{R}\to\mathbb{C}^{M\times N}$ of the form
$$F(\omega) = \sum_{k=-p}^{p} e^{k\omega i}\,A_k, \qquad \omega\in\mathbb{R}, \tag{2}$$
where $A_k\in\mathbb{C}^{M\times N}$ with $|k|\le p$. It can be easily proven (see, e.g., [6], Example 4.3) that the sequence of Fourier coefficients $\{F_k\}_{k\in\mathbb{Z}}$ of the continuous $2\pi$-periodic function $F$ in Equation (2) is given by
$$F_k = \begin{cases} A_k & \text{if } |k|\le p,\\ 0_{M\times N} & \text{if } |k| > p. \end{cases}$$
Lemma 2. 
Let $F$, $G$, $\{F_k\}_{k\in\mathbb{Z}}$, and $\{G_k\}_{k\in\mathbb{Z}}$ be as in Lemma 1.
1. If $F$ is a trigonometric polynomial of degree $p$, then
$$[T_n(F)T_n(G) - T_n(FG)]_{j,k} = \begin{cases} -\sum_{h=j-p}^{0} F_{j-h}G_{h-k} & \text{if } j\le p,\\[1ex] 0_{M\times K} & \text{if } p+1\le j\le n-p,\\[1ex] -\sum_{h=n+1}^{j+p} F_{j-h}G_{h-k} & \text{if } j\ge n-p+1, \end{cases} \tag{3}$$
and
$$\left\|T_n(F)T_n(G) - T_n(FG)\right\|_F \le \sqrt{p(p+1)}\sqrt{\frac{1}{2\pi}\int_0^{2\pi}\left\|F(\omega)\right\|_F^2\,d\omega}\sqrt{\frac{1}{2\pi}\int_0^{2\pi}\left\|G(\omega)\right\|_F^2\,d\omega} \tag{4}$$
for all $n\in\mathbb{N}$ and $j,k\in\{1,\ldots,n\}$.
2. If $G$ is a trigonometric polynomial of degree $q$, then
$$[T_n(F)T_n(G) - T_n(FG)]_{j,k} = \begin{cases} -\sum_{h=k-q}^{0} F_{j-h}G_{h-k} & \text{if } k\le q,\\[1ex] 0_{M\times K} & \text{if } q+1\le k\le n-q,\\[1ex] -\sum_{h=n+1}^{k+q} F_{j-h}G_{h-k} & \text{if } k\ge n-q+1, \end{cases} \tag{5}$$
and
$$\left\|T_n(F)T_n(G) - T_n(FG)\right\|_F \le \sqrt{q(q+1)}\sqrt{\frac{1}{2\pi}\int_0^{2\pi}\left\|F(\omega)\right\|_F^2\,d\omega}\sqrt{\frac{1}{2\pi}\int_0^{2\pi}\left\|G(\omega)\right\|_F^2\,d\omega} \tag{6}$$
for all $n\in\mathbb{N}$ and $j,k\in\{1,\ldots,n\}$.
3. If $F$ is a trigonometric polynomial of degree $p$ and $G$ is a trigonometric polynomial of degree $q$, then
$$T_n(F)T_n(G) - T_n(FG) = \begin{pmatrix} \xi_1(F,G) & 0_{pM\times(n-2q)K} & 0_{pM\times qK}\\ 0_{(n-2p)M\times qK} & 0_{(n-2p)M\times(n-2q)K} & 0_{(n-2p)M\times qK}\\ 0_{pM\times qK} & 0_{pM\times(n-2q)K} & \xi_2(F,G) \end{pmatrix}$$
and
$$\left\|T_n(F)T_n(G) - T_n(FG)\right\|_F = \sqrt{\left\|\xi_1(F,G)\right\|_F^2 + \left\|\xi_2(F,G)\right\|_F^2} \tag{7}$$
for all $n\ge\max\{2p,2q\}$, where $\xi_1(F,G),\,\xi_2(F,G)\in\mathbb{C}^{pM\times qK}$ are given by
$$[\xi_1(F,G)]_{j,k} = -\sum_{h=\max\{j-p,\,k-q\}}^{0} F_{j-h}\,G_{h-k}$$
and
$$[\xi_2(F,G)]_{j,k} = -\sum_{h=1}^{\min\{j,k\}} F_{j-p-h}\,G_{h+q-k}$$
for all $j\in\{1,\ldots,p\}$ and $k\in\{1,\ldots,q\}$.
Proof. 
See Appendix B. □
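The following numerical sanity check (a sketch of ours, not taken from the paper) illustrates Lemma 2: for $F$ a trigonometric polynomial of degree $p$, the Frobenius norm of $T_n(F)T_n(G)-T_n(FG)$ stays below the $n$-independent bound of Equation (4). The coefficient choices and helper names are assumptions made for the demo.

```python
# Sketch: verify the bound (4) of Lemma 2 for random coefficients.
import numpy as np

rng = np.random.default_rng(0)
M = N = K = 2
p = 1
A = {k: rng.standard_normal((M, N)) for k in range(-p, p + 1)}             # coeffs of F
B = {k: rng.standard_normal((N, K)) * 0.5 ** abs(k) for k in range(-6, 7)} # coeffs of G

def toeplitz(coeffs, n, rows, cols):
    T = np.zeros((n * rows, n * cols))
    for j in range(n):
        for k in range(n):
            if j - k in coeffs:
                T[j*rows:(j+1)*rows, k*cols:(k+1)*cols] = coeffs[j - k]
    return T

# Fourier coefficients of FG via the convolution implied by Lemma 1
FG = {m: sum(A[m - h] @ B[h] for h in B if m - h in A) for m in range(-7, 8)}

n = 30
gap = toeplitz(A, n, M, N) @ toeplitz(B, n, N, K) - toeplitz(FG, n, M, K)
mean_F2 = sum(np.sum(A[k] ** 2) for k in A)        # Parseval: (1/2pi) int ||F||_F^2
mean_G2 = sum(np.sum(B[k] ** 2) for k in B)
bound = np.sqrt(p * (p + 1) * mean_F2 * mean_G2)
print(np.linalg.norm(gap, 'fro'), '<=', bound)
```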

2.3. Inverse of a Block Toeplitz Matrix

Lemma 3. 
Let $F:\mathbb{R}\to\mathbb{C}^{N\times N}$ be a trigonometric polynomial of degree $p$.
1. If $F(\omega)$ is invertible for all $\omega\in\mathbb{R}$ and $\{T_n(F)\}$ is stable (i.e., $T_n(F)$ is invertible for all $n\in\mathbb{N}$ and $\{\|(T_n(F))^{-1}\|_2\}$ is bounded), then
$$\left\|(T_n(F))^{-1} - T_n(F^{-1})\right\|_F \le \sup_{m\in\mathbb{N}}\left\|(T_m(F))^{-1}\right\|_2\,\sqrt{p(p+1)}\sqrt{\frac{1}{2\pi}\int_0^{2\pi}\left\|F(\omega)\right\|_F^2\,d\omega}\sqrt{\frac{1}{2\pi}\int_0^{2\pi}\left\|(F(\omega))^{-1}\right\|_F^2\,d\omega}$$
for all $n\in\mathbb{N}$.
2. If $F(\omega)$ is positive definite for all $\omega\in\mathbb{R}$, then
$$\left\|(T_n(F))^{-1} - T_n(F^{-1})\right\|_F \le \frac{1}{\inf F}\,\sqrt{p(p+1)}\sqrt{\frac{1}{2\pi}\int_0^{2\pi}\left\|F(\omega)\right\|_F^2\,d\omega}\sqrt{\frac{1}{2\pi}\int_0^{2\pi}\left\|(F(\omega))^{-1}\right\|_F^2\,d\omega} \tag{8}$$
for all $n\in\mathbb{N}$.
Proof. 
See Appendix C. □
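As a quick numerical check (ours, under assumed coefficients), Lemma 3(2) predicts that $(T_n(F))^{-1}$ stays within a bounded Frobenius distance of $T_n(F^{-1})$ for a positive definite trigonometric polynomial. The sketch below uses the scalar case $N=1$ and obtains the Fourier coefficients of $F^{-1}$ numerically.

```python
# Sketch: ||(T_n(F))^{-1} - T_n(F^{-1})||_F stays bounded (Lemma 3(2)), scalar case.
import numpy as np

n = 40
Fk = {0: 2.0, 1: 0.5, -1: 0.5}                   # F(w) = 2 + cos(w) > 0 for all w

def toeplitz(coeffs, n):
    A = np.zeros((n, n))
    for j in range(n):
        for k in range(n):
            if j - k in coeffs:
                A[j, k] = coeffs[j - k]
    return A

# Fourier coefficients of F^{-1} via numerical integration on a fine grid
ws = np.linspace(0, 2 * np.pi, 4096, endpoint=False)
Finv = 1.0 / (2.0 + np.cos(ws))
coeffs_inv = {k: float(np.real(np.mean(Finv * np.exp(-1j * k * ws))))
              for k in range(-n, n)}

gap = np.linalg.inv(toeplitz(Fk, n)) - toeplitz(coeffs_inv, n)
print(np.linalg.norm(gap, 'fro'))                # stays bounded as n grows
```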

2.4. Block Circulant Matrices

Lemma 4.
Consider $A_n, B_n\in\mathbb{C}^{nM\times nN}$. Then,
$$\left\|C_{A_n} - C_{B_n}\right\|_F \le \left\|A_n - B_n\right\|_F$$
and
$$\left\|A_n - C_{A_n}\right\|_F \le 2\left\|A_n - B_n\right\|_F + \left\|B_n - C_{B_n}\right\|_F. \tag{9}$$
Moreover, if $B_n$ is an $n\times n$ block circulant matrix with $M\times N$ blocks, then
$$C_{B_n} = B_n \tag{10}$$
and
$$\left\|A_n - C_{A_n}\right\|_F \le 2\left\|A_n - B_n\right\|_F. \tag{11}$$
Proof. 
See Appendix D. □
Lemma 5.
Let $F:\mathbb{R}\to\mathbb{C}^{M\times N}$ be a trigonometric polynomial of degree $p$. Then,
$$\left\|T_n(F) - \hat{C}_n(F)\right\|_F \le \left\|T_n(F) - C_n(F)\right\|_F = \sqrt{\sum_{k=1}^{p} k\left(\left\|F_k\right\|_F^2 + \left\|F_{-k}\right\|_F^2\right)}$$
for all $n > 2p$. Furthermore,
$$\lim_{n\to\infty}\left\|T_n(F) - \hat{C}_n(F)\right\|_F = \lim_{n\to\infty}\left\|T_n(F) - C_n(F)\right\|_F.$$
Proof. 
See Appendix E. □
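The following sketch (ours, with illustrative coefficients and $p=1$) checks the equality in Lemma 5 numerically: the Frobenius distance between $T_n(F)$ and $C_n(F)$ does not grow with $n$.

```python
# Sketch: ||T_n(F) - C_n(F)||_F is constant in n for n > 2p (Lemma 5), p = 1.
import numpy as np

N, p = 2, 1
F1 = np.array([[0.3, 0.1], [0.2, 0.25]])
Fk = {0: np.eye(N), 1: F1, -1: F1.T}

def T(n):
    out = np.zeros((n * N, n * N))
    for j in range(n):
        for k in range(n):
            if abs(j - k) <= p:
                out[j*N:(j+1)*N, k*N:(k+1)*N] = Fk[j - k]
    return out

def C(n):   # circulant with wrapped coefficients (valid picture for p = 1, n > 2)
    out = np.zeros((n * N, n * N))
    for j in range(n):
        for k in range(n):
            d = (j - k) % n
            if d == 0:
                out[j*N:(j+1)*N, k*N:(k+1)*N] = Fk[0]
            elif d == 1:
                out[j*N:(j+1)*N, k*N:(k+1)*N] = Fk[1]
            elif d == n - 1:
                out[j*N:(j+1)*N, k*N:(k+1)*N] = Fk[-1]
    return out

const = np.sqrt(sum(k * (np.sum(Fk[k]**2) + np.sum(Fk[-k]**2)) for k in range(1, p + 1)))
for n in (8, 16, 64):
    print(n, np.linalg.norm(T(n) - C(n), 'fro'), const)   # all three norms coincide
```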

3. Several New Results on the Correlation Matrices of Certain Random Vector Processes

Let $\{x_n\}$ be a (complex) random $N$-dimensional vector process, that is, $x_n$ is a (complex) random $N$-dimensional (column) vector for all $n\in\mathbb{N}$. In this section, we study the boundedness of the sequence $\left\{\left\|E\left(x_{n:1}x_{n:1}^*\right) - C_{E\left(x_{n:1}x_{n:1}^*\right)}\right\|_F\right\}$ when $\{x_n\}$ is a WSS, MA, or AR vector process, where
$$x_{n:1} = \begin{pmatrix} x_n\\ x_{n-1}\\ \vdots\\ x_1 \end{pmatrix}, \qquad n\in\mathbb{N},$$
and $E$ denotes expectation.

3.1. WSS Vector Processes

In this subsection, we review the concept of the WSS vector process, and we prove that the sequence $\left\{\left\|E\left(x_{n:1}x_{n:1}^*\right) - C_{E\left(x_{n:1}x_{n:1}^*\right)}\right\|_F\right\}$ is bounded when $\{x_n\}$ is a WSS vector process whose power spectral density (PSD) is a trigonometric polynomial.
Definition 1.
Let $X:\mathbb{R}\to\mathbb{C}^{N\times N}$ be continuous and $2\pi$-periodic. A random $N$-dimensional vector process $\{x_n\}$ is said to be WSS with PSD $X$ if it has constant mean (i.e., $E(x_{n_1}) = E(x_{n_2})$ for all $n_1,n_2\in\mathbb{N}$) and $\left\{E\left(x_{n:1}x_{n:1}^*\right)\right\} = \{T_n(X)\}$.
Lemma 6.
If $\{x_n\}$ is a WSS vector process whose PSD is a trigonometric polynomial, then $\left\{\left\|E\left(x_{n:1}x_{n:1}^*\right) - C_{E\left(x_{n:1}x_{n:1}^*\right)}\right\|_F\right\}$ is bounded.
Proof. 
This is a direct consequence of Lemma 5. □

3.2. VMA Processes

In this subsection, we review the concept of the MA vector (VMA) process, and we prove that the sequence $\left\{\left\|E\left(x_{n:1}x_{n:1}^*\right) - C_{E\left(x_{n:1}x_{n:1}^*\right)}\right\|_F\right\}$ is bounded when $\{x_n\}$ is a VMA process of finite order.
Definition 2.
A zero-mean random $N$-dimensional vector process $\{x_n\}$ is said to be a VMA process if
$$x_n = w_n + \sum_{k=1}^{n-1} G_k\,w_{n-k}, \qquad n\in\mathbb{N}, \tag{12}$$
where $G_k\in\mathbb{C}^{N\times N}$ for all $k\in\mathbb{N}$ and $\{w_n\}$ is a zero-mean WSS $N$-dimensional vector process whose PSD is an $N\times N$ positive semidefinite matrix $\Lambda$. If there exists $q\in\mathbb{N}$ such that $G_k = 0_{N\times N}$ for all $k > q$, then $\{x_n\}$ is called a VMA process of (finite) order $q$ or a VMA($q$) process.
Lemma 7.
If $\{x_n\}$ is a VMA($q$) process as in Definition 2, then $\left\{\left\|E\left(x_{n:1}x_{n:1}^*\right) - C_{E\left(x_{n:1}x_{n:1}^*\right)}\right\|_F\right\}$ is bounded.
Proof. 
See Appendix F. □

3.3. VAR Processes

In this subsection, we review the concept of the AR vector (VAR) process, and we study the boundedness of the sequence $\left\{\left\|E\left(x_{n:1}x_{n:1}^*\right) - C_{E\left(x_{n:1}x_{n:1}^*\right)}\right\|_F\right\}$ when $\{x_n\}$ is a VAR process of finite order.
Definition 3.
A zero-mean random $N$-dimensional vector process $\{x_n\}$ is said to be a VAR process if
$$x_n = w_n - \sum_{k=1}^{n-1} F_k\,x_{n-k}, \qquad n\in\mathbb{N}, \tag{13}$$
where $F_k\in\mathbb{C}^{N\times N}$ for all $k\in\mathbb{N}$ and $\{w_n\}$ is a zero-mean WSS $N$-dimensional vector process whose PSD is an $N\times N$ positive definite matrix $\Lambda$. If there exists $p\in\mathbb{N}$ such that $F_k = 0_{N\times N}$ for all $k > p$, then $\{x_n\}$ is called a VAR process of (finite) order $p$ or a VAR($p$) process.
Lemma 8.
Let $\{x_n\}$ be a VAR($p$) process as in Definition 3. Suppose that $F(\omega) = I_N + \sum_{k=1}^{p} e^{-k\omega i}\,F_k$ is invertible for all $\omega\in\mathbb{R}$ and $\{\|(T_n(F))^{-1}\|_2\}$ is bounded. Then, $\left\{\left\|E\left(x_{n:1}x_{n:1}^*\right) - C_{E\left(x_{n:1}x_{n:1}^*\right)}\right\|_F\right\}$ is bounded.
Proof. 
See Appendix G. □
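The following sketch (ours, with a hypothetical stable VAR(1) matrix) illustrates Section 3: it builds $E(x_{n:1}x_{n:1}^*) = (T_n(F))^{-1}T_n(\Lambda)((T_n(F))^*)^{-1}$ exactly from the relation $w_{n:1} = T_n(F)x_{n:1}$ and observes the boundedness predicted by Lemma 8. The helper names are ours.

```python
# Sketch: ||E[x x*] - C_{E[x x*]}||_F stays bounded for a VAR(1) source (Lemma 8).
import numpy as np

N = 2
F1 = np.array([[0.5, 0.1], [0.0, 0.4]])          # hypothetical stable VAR(1) matrix
Lam = np.array([[4.0, 1.0], [1.0, 2.0]])

def block_toeplitz(coeffs, n):
    out = np.zeros((n * N, n * N))
    for j in range(n):
        for k in range(n):
            if j - k in coeffs:
                out[j*N:(j+1)*N, k*N:(k+1)*N] = coeffs[j - k]
    return out

def circulant_projection(A, n):
    """C_{A_n}: keep only the diagonal blocks in the block Fourier basis."""
    Vn = np.exp(-2j * np.pi * np.outer(np.arange(n), np.arange(n)) / n) / np.sqrt(n)
    U = np.kron(Vn, np.eye(N))
    B = U.conj().T @ A @ U
    D = np.zeros_like(B)
    for k in range(n):
        D[k*N:(k+1)*N, k*N:(k+1)*N] = B[k*N:(k+1)*N, k*N:(k+1)*N]
    return U @ D @ U.conj().T

for n in (8, 16, 32, 64):
    # T_n(F) is block upper triangular: I_N on the diagonal, F_1 on the superdiagonal
    Tn = block_toeplitz({0: np.eye(N), -1: F1}, n)
    Rx = np.linalg.inv(Tn) @ np.kron(np.eye(n), Lam) @ np.linalg.inv(Tn).conj().T
    print(n, np.linalg.norm(Rx - circulant_projection(Rx, n), 'fro'))
```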

4. On the Asymptotic Optimality of a Low-Complexity Coding Strategy for Gaussian Vector Sources

4.1. Low-Complexity Coding Strategy Considered

In [1], Kolmogorov gave a formula for the RDF of a real zero-mean Gaussian $N$-dimensional vector $x$ with a positive definite correlation matrix $E\left(xx^\top\right)$, namely,
$$R_x(D) = \frac{1}{N}\sum_{k=1}^{N}\max\left\{0,\,\frac{1}{2}\ln\frac{\lambda_k\left(E\left(xx^\top\right)\right)}{\theta}\right\}, \qquad D\in\left(0,\,\frac{\operatorname{tr}\left(E\left(xx^\top\right)\right)}{N}\right],$$
where $\top$ stands for the transpose, $\operatorname{tr}$ denotes the trace, and $\theta$ is a real number satisfying
$$D = \frac{1}{N}\sum_{k=1}^{N}\min\left\{\theta,\,\lambda_k\left(E\left(xx^\top\right)\right)\right\}.$$
We recall that $R_x(D)$ can be thought of as the minimum rate (measured in nats) at which $x$ can be encoded (compressed) in order to be able to recover it with a mean squared error (MSE) per dimension no larger than a given distortion $D$, that is,
$$\frac{E\left(\left\|x - \widetilde{x}\right\|_2^2\right)}{N} \le D,$$
where $\widetilde{x}$ denotes the estimation of $x$.
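The reverse water-filling prescription above is easy to evaluate numerically. The sketch below (ours, not from the paper; the function name rdf and the bisection approach are assumptions) finds the water level $\theta$ and returns $R_x(D)$ in nats from the eigenvalues of $E(xx^\top)$.

```python
# Sketch: Kolmogorov's RDF via bisection on the water level theta.
import numpy as np

def rdf(eigs, D, iters=100):
    """eigs: positive eigenvalues of E[x x^T]; D in (0, mean(eigs)]. Rate in nats."""
    lo, hi = 0.0, max(eigs)
    for _ in range(iters):                        # D = mean(min(theta, eigs)) is
        theta = (lo + hi) / 2                     # increasing in theta, so bisect
        if np.mean(np.minimum(theta, eigs)) < D:
            lo = theta
        else:
            hi = theta
    return np.mean(np.maximum(0.0, 0.5 * np.log(eigs / theta)))

eigs = np.array([4.0, 2.0, 1.0, 0.5])             # example spectrum of E[x x^T]
print(rdf(eigs, D=0.25))
```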
If $D\in\left(0,\,\lambda_N\left(E\left(xx^\top\right)\right)\right]$, an optimal coding strategy to achieve $R_x(D)$ is to encode $[z]_{1,1},\ldots,[z]_{N,1}$ separately with $E\left(\left([z]_{k,1} - \widetilde{[z]_{k,1}}\right)^2\right)\le D$ for all $k\in\{1,\ldots,N\}$, where $z = U^\top x$ with $U$ being a real orthogonal eigenvector matrix of $E\left(xx^\top\right)$ (see [8], Corollary 1). Observe that in order to obtain $U$, we need to know the correlation matrix $E\left(xx^\top\right)$. This coding strategy also requires an optimal coding method for real Gaussian random variables.
In [4], Theorem 3, we gave a low-complexity coding strategy for any Gaussian $N$-dimensional vector source $\{x_n\}$. According to that strategy, to encode a finite-length data block $x_{n:1}$ of such a source, we first compute the block discrete Fourier transform (DFT) of $x_{n:1}$:
$$y_{n:1} = \left(V_n^*\otimes I_N\right)x_{n:1}, \tag{14}$$
and then we encode $y_{\lceil n/2\rceil},\ldots,y_n$ separately (i.e., if $n$ is even, we encode $y_{n/2}, \widehat{y_{n/2+1}},\ldots,\widehat{y_{n-1}}, y_n$ separately, and if $n$ is odd, we encode $\widehat{y_{(n+1)/2}},\ldots,\widehat{y_{n-1}}, y_n$ separately) with
$$\frac{E\left(\left\|\widehat{y_k} - \widetilde{\widehat{y_k}}\right\|_2^2\right)}{2N} \le \frac{D}{2}, \qquad k\in\left\{\left\lceil\frac{n}{2}\right\rceil,\ldots,n-1\right\}\setminus\left\{\frac{n}{2}\right\},$$
and
$$\frac{E\left(\left\|y_k - \widetilde{y_k}\right\|_2^2\right)}{N} \le D, \qquad k\in\left\{\frac{n}{2},\,n\right\}\cap\mathbb{N},$$
where $\lceil x\rceil$ denotes the smallest integer higher than or equal to $x\in\mathbb{R}$ and
$$\widehat{z} = \begin{pmatrix}\operatorname{Re}(z)\\ \operatorname{Im}(z)\end{pmatrix} = \left(\operatorname{Re}([z]_{1,1}),\ldots,\operatorname{Re}([z]_{N,1}),\operatorname{Im}([z]_{1,1}),\ldots,\operatorname{Im}([z]_{N,1})\right)^\top, \qquad z\in\mathbb{C}^{N\times 1},$$
with $\operatorname{Re}$ and $\operatorname{Im}$ being the real part and the imaginary part of a complex number, respectively.
As our coding strategy requires the computation of the block DFT, its computational complexity is $O(nN\log n)$ whenever the fast Fourier transform (FFT) algorithm is used. We recall that the computational complexity of the optimal coding strategy for $x_{n:1}$ is $O(n^2N^2)$, since it requires the computation of $U_n^\top x_{n:1}$, where $U_n$ is a real orthogonal eigenvector matrix of $E\left(x_{n:1}x_{n:1}^\top\right)$. Observe that such an eigenvector matrix $U_n$ also needs to be computed, which further increases the complexity. Hence, the main advantage of our coding strategy is that it notably reduces the computational complexity of coding $x_{n:1}$. Moreover, our coding strategy does not require knowledge of $E\left(x_{n:1}x_{n:1}^\top\right)$; it only requires knowledge of $E\left(\widehat{y_k}\,\widehat{y_k}^\top\right)$ with $k\in\left\{\left\lceil\frac{n}{2}\right\rceil,\ldots,n\right\}$.
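The transform in Equation (14) is a length-$n$ DFT applied componentwise across the blocks, which is what yields the $O(nN\log n)$ cost. The sketch below (ours; the sign and unitary scaling match the $V_n$ convention assumed earlier and are not taken from the paper) shows the step with NumPy's FFT.

```python
# Sketch: y_{n:1} = (V_n^* (x) I_N) x_{n:1} via the FFT, O(n N log n).
import numpy as np

def block_dft(x, N):
    """x: real array of shape (n*N,), stacked as in x_{n:1}. Returns y_{n:1}."""
    n = x.size // N
    blocks = x.reshape(n, N)                        # row j is the j-th N-vector
    # V_n^* acts as an inverse-type DFT across the block index (unitary scaling)
    y = np.fft.ifft(blocks, axis=0) * np.sqrt(n)
    return y.reshape(n * N)

x = np.random.default_rng(1).standard_normal(8 * 2)
y = block_dft(x, N=2)
print(np.allclose(np.linalg.norm(y), np.linalg.norm(x)))  # unitary: norms match
```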
We finish this subsection by reviewing a result that provides an upper bound for the distance between $R_{x_{n:1}}(D)$ and the rate $\widetilde{R}_{x_{n:1}}(D)$ of our coding strategy (see [4], Theorem 3).
Theorem 1.
Consider $n, N\in\mathbb{N}$. Let $x_k$ be a random $N$-dimensional vector for all $k\in\{1,\ldots,n\}$. Suppose that $x_{n:1}$ is a real zero-mean Gaussian vector with a positive definite correlation matrix (or equivalently, $\lambda_{nN}\left(E\left(x_{n:1}x_{n:1}^\top\right)\right) > 0$). Let $y_{n:1}$ be the random vector given by Equation (14). If $D\in\left(0,\,\lambda_{nN}\left(E\left(x_{n:1}x_{n:1}^\top\right)\right)\right]$, then
$$0 \le \widetilde{R}_{x_{n:1}}(D) - R_{x_{n:1}}(D) \le \frac{1}{2}\ln\left(1 + \frac{\left\|E\left(x_{n:1}x_{n:1}^\top\right) - C_{E\left(x_{n:1}x_{n:1}^\top\right)}\right\|_F}{\sqrt{n}\,N\,\lambda_{nN}\left(E\left(x_{n:1}x_{n:1}^\top\right)\right)}\right), \tag{15}$$
where
$$\widetilde{R}_{x_{n:1}}(D) = \begin{cases} \dfrac{R_{y_{n/2}}(D) + 2\sum_{k=\frac{n}{2}+1}^{n-1} R_{\widehat{y_k}}\!\left(\frac{D}{2}\right) + R_{y_n}(D)}{n} & \text{if } n \text{ is even},\\[2ex] \dfrac{2\sum_{k=\frac{n+1}{2}}^{n-1} R_{\widehat{y_k}}\!\left(\frac{D}{2}\right) + R_{y_n}(D)}{n} & \text{if } n \text{ is odd}. \end{cases}$$
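As a sketch of how $\widetilde{R}_{x_{n:1}}(D)$ can be evaluated, the code below (ours) implements the case split of Theorem 1; it reuses the rdf() bisection sketch given earlier and assumes a hypothetical helper cov_hat(k) that returns the correlation matrix of $\widehat{y_k}$ (for complex $y_k$) or of $y_k$ (for the real ones).

```python
# Sketch: rate of the low-complexity strategy, per the formula in Theorem 1.
import numpy as np

def rate_tilde(cov_hat, n, D, rdf):
    total = 0.0
    if n % 2 == 0:                                # y_{n/2} is real: distortion D
        total += rdf(np.linalg.eigvalsh(cov_hat(n // 2)), D)
        start = n // 2 + 1
    else:
        start = (n + 1) // 2
    for k in range(start, n):                     # complex y_k: encode y_k^ at D/2
        total += 2 * rdf(np.linalg.eigvalsh(cov_hat(k)), D / 2)
    total += rdf(np.linalg.eigvalsh(cov_hat(n)), D)   # y_n is real
    return total / n
```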

4.2. On the Asymptotic Optimality of the Low-Complexity Coding Strategy Considered

In this subsection, we study the asymptotic optimality of our coding strategy for Gaussian vector sources. We begin by presenting a new result that provides a sufficient condition for the source to make such a coding strategy asymptotically optimal.
Theorem 2.
Let $\{x_n\}$ be a real zero-mean Gaussian $N$-dimensional vector process. Suppose that $\inf_{n\in\mathbb{N}}\lambda_{nN}\left(E\left(x_{n:1}x_{n:1}^\top\right)\right) > 0$ and $\lim_{n\to\infty}\frac{\left\|E\left(x_{n:1}x_{n:1}^\top\right) - C_{E\left(x_{n:1}x_{n:1}^\top\right)}\right\|_F}{\sqrt{n}} = 0$. If $D\in\left(0,\,\inf_{n\in\mathbb{N}}\lambda_{nN}\left(E\left(x_{n:1}x_{n:1}^\top\right)\right)\right]$, then
$$\lim_{n\to\infty}\left(\widetilde{R}_{x_{n:1}}(D) - R_{x_{n:1}}(D)\right) = 0. \tag{16}$$
Hence, if $\{R_{x_{n:1}}(D)\}$ is convergent, then
$$\lim_{n\to\infty}\widetilde{R}_{x_{n:1}}(D) = \lim_{n\to\infty}R_{x_{n:1}}(D).$$
Proof. 
From Equation (15), we have
$$0 \le \widetilde{R}_{x_{n:1}}(D) - R_{x_{n:1}}(D) \le \frac{1}{2}\ln\left(1 + \frac{\left\|E\left(x_{n:1}x_{n:1}^\top\right) - C_{E\left(x_{n:1}x_{n:1}^\top\right)}\right\|_F}{\sqrt{n}\,N\,\inf_{m\in\mathbb{N}}\lambda_{mN}\left(E\left(x_{m:1}x_{m:1}^\top\right)\right)}\right), \qquad n\in\mathbb{N},$$
and therefore, Theorem 2 is proven. □
We recall that $\lim_{n\to\infty} R_{x_{n:1}}(D)$ is the RDF of the source $\{x_n\}$.
In [4], Theorem 4, we gave a more restrictive sufficient condition for the source to make the coding strategy considered asymptotically optimal. Specifically, in [4], Theorem 4, we proved that Equation (16) holds if $\{x_n\}$ is AWSS. However, the convergence speed of the rate of the coding strategy considered (i.e., how fast the rate of the coding strategy tends to the RDF of the AWSS vector source) was not studied in [4]. We now study the convergence speed of the rate of such a coding strategy when it is used to encode the most relevant vector sources, namely WSS vector sources, VMA sources, and VAR sources. It should be mentioned that this convergence speed depends on the sequence $\left\{\left\|E\left(x_{n:1}x_{n:1}^\top\right) - C_{E\left(x_{n:1}x_{n:1}^\top\right)}\right\|_F\right\}$, whose boundedness is studied in Section 3 for these three types of vector sources.
Theorem 3.
Let $\{x_n\}$ be a real zero-mean Gaussian WSS $N$-dimensional vector process whose PSD $X$ is a trigonometric polynomial. Suppose that $\inf X > 0$ (or equivalently, $\det(X(\omega))\ne 0$ for all $\omega\in\mathbb{R}$). If $D\in\left(0,\,\inf X\right]$, there exists $K\in[0,\infty)$ such that
$$0 \le \widetilde{R}_{x_{n:1}}(D) - R_{x_{n:1}}(D) \le \frac{1}{2}\ln\left(1 + \frac{K}{\sqrt{n}}\right), \qquad n\in\mathbb{N}. \tag{17}$$
Proof. 
As $\{T_n(X)\} = \left\{E\left(x_{n:1}x_{n:1}^*\right)\right\}$, $T_n(X)$ is positive semidefinite for all $n\in\mathbb{N}$. Consequently, from [7], Proposition 3, $X(\omega)$ is positive semidefinite for all $\omega\in\mathbb{R}$. Therefore, applying Equation (1), $\inf X > 0$ if and only if $\det(X(\omega))\ne 0$ for all $\omega\in\mathbb{R}$. Equation (17) is a direct consequence of Equation (1), Theorem 1, and Lemma 6. □
Theorem 4.
Let $\{x_n\}$ be a VMA($q$) process as in Definition 2. Suppose that $\det(\Lambda)\ne 0$ and $\{\|(T_n(G))^{-1}\|_2\}$ is bounded, where $G(\omega) = I_N + \sum_{k=1}^{q} e^{-k\omega i}\,G_k$ for all $\omega\in\mathbb{R}$. If $\{x_n\}$ is real and Gaussian and $D\in\left(0,\,\inf_{n\in\mathbb{N}}\lambda_{nN}\left(E\left(x_{n:1}x_{n:1}^\top\right)\right)\right]$, there exists $K\in[0,\infty)$ such that
$$0 \le \widetilde{R}_{x_{n:1}}(D) - R_{x_{n:1}}(D) \le \frac{1}{2}\ln\left(1 + \frac{K}{\sqrt{n}}\right), \qquad n\in\mathbb{N}.$$
Proof. 
Since $\det(T_n(G)) = 1$ for all $n\in\mathbb{N}$, from Equation (A3), we have
$$\begin{aligned}
\lambda_{nN}\left(E\left(x_{n:1}x_{n:1}^\top\right)\right) &= \frac{1}{\left\|\left(E\left(x_{n:1}x_{n:1}^\top\right)\right)^{-1}\right\|_2} = \frac{1}{\left\|\left(T_n(G)T_n(\Lambda)(T_n(G))^*\right)^{-1}\right\|_2} = \frac{1}{\left\|\left((T_n(G))^{-1}\right)^* T_n\left(\Lambda^{-1}\right)(T_n(G))^{-1}\right\|_2}\\
&\ge \frac{1}{\left\|\left((T_n(G))^{-1}\right)^*\right\|_2\left\|T_n\left(\Lambda^{-1}\right)\right\|_2\left\|(T_n(G))^{-1}\right\|_2} = \frac{\lambda_N(\Lambda)}{\left\|(T_n(G))^{-1}\right\|_2^2} \ge \frac{\lambda_N(\Lambda)}{\sup_{m\in\mathbb{N}}\left\|(T_m(G))^{-1}\right\|_2^2} > 0, \qquad n\in\mathbb{N}.
\end{aligned}$$
Hence, $\inf_{n\in\mathbb{N}}\lambda_{nN}\left(E\left(x_{n:1}x_{n:1}^\top\right)\right) > 0$. Theorem 1 and Lemma 7 prove Theorem 4. □
Theorem 5.
Let $\{x_n\}$ be a VAR($p$) process as in Definition 3. Suppose that $F(\omega) = I_N + \sum_{k=1}^{p} e^{-k\omega i}\,F_k$ is invertible for all $\omega\in\mathbb{R}$ and $\{\|(T_n(F))^{-1}\|_2\}$ is bounded. If $\{x_n\}$ is real and Gaussian and $D\in\left(0,\,\inf_{n\in\mathbb{N}}\lambda_{nN}\left(E\left(x_{n:1}x_{n:1}^\top\right)\right)\right]$, there exists $K\in[0,\infty)$ such that
$$0 \le \widetilde{R}_{x_{n:1}}(D) - R_{x_{n:1}}(D) \le \frac{1}{2}\ln\left(1 + \frac{K}{\sqrt{n}}\right), \qquad n\in\mathbb{N}.$$
Proof. 
As $\det(T_n(F)) = 1$ for all $n\in\mathbb{N}$, applying Equation (A4) and [6], Theorem 4.3, yields
$$\begin{aligned}
\lambda_{nN}\left(E\left(x_{n:1}x_{n:1}^\top\right)\right) &= \frac{1}{\left\|\left(E\left(x_{n:1}x_{n:1}^\top\right)\right)^{-1}\right\|_2} = \frac{1}{\left\|\left((T_n(F))^{-1}T_n(\Lambda)\left((T_n(F))^*\right)^{-1}\right)^{-1}\right\|_2} = \frac{1}{\left\|(T_n(F))^*\,T_n\left(\Lambda^{-1}\right)T_n(F)\right\|_2}\\
&\ge \frac{1}{\left\|(T_n(F))^*\right\|_2\left\|T_n\left(\Lambda^{-1}\right)\right\|_2\left\|T_n(F)\right\|_2} = \frac{\lambda_N(\Lambda)}{\left\|T_n(F)\right\|_2^2} \ge \frac{\lambda_N(\Lambda)}{\sup_{m\in\mathbb{N}}\left\|T_m(F)\right\|_2^2} > 0, \qquad n\in\mathbb{N}.
\end{aligned}$$
Thus, $\inf_{n\in\mathbb{N}}\lambda_{nN}\left(E\left(x_{n:1}x_{n:1}^\top\right)\right) > 0$. Theorem 1 and Lemma 8 prove Theorem 5. □

4.3. On How the Low-Complexity Coding Strategy Considered Performs under Perturbations

In this subsection, we study how the low-complexity coding strategy considered performs when it is used to encode a perturbed version, $\{z_n\}$, of a WSS, MA, or AR vector source $\{x_n\}$. Observe that if $\left\{\left\|E\left(z_{n:1}z_{n:1}^\top\right) - E\left(x_{n:1}x_{n:1}^\top\right)\right\|_F\right\}$ is bounded, from Equation (9), we conclude that our coding strategy can also be used to optimally encode $\{z_n\}$, and the convergence speed of the rate remains unaltered.
We now present three numerical examples that show how the coding strategy considered performs in the presence of a perturbation. In all of them, $N = 2$ and
$$E\left(z_{n:1}z_{n:1}^\top\right) = E\left(x_{n:1}x_{n:1}^\top\right) + \begin{pmatrix} 0_{(2n-2)\times(2n-2)} & 0_{(2n-2)\times 2}\\ 0_{2\times(2n-2)} & I_2 \end{pmatrix}, \qquad n\in\mathbb{N}.$$
Obviously, $\left\{\left\|E\left(z_{n:1}z_{n:1}^\top\right) - E\left(x_{n:1}x_{n:1}^\top\right)\right\|_F\right\}$ is bounded, since $\left\|E\left(z_{n:1}z_{n:1}^\top\right) - E\left(x_{n:1}x_{n:1}^\top\right)\right\|_F = \sqrt{2}$ for all $n\in\mathbb{N}$. The three vector sources $\{x_n\}$ considered in our numerical examples are the zero-mean WSS vector source in [9], Section 4, the VMA(1) source in [10], Example 2.1, and the VAR(1) source in [10], Example 2.3. In [9], Section 4, the Fourier coefficients of the PSD $X$ satisfy $X_{-k} = X_k^*$ with
$$X_0 = \begin{pmatrix} 2.0002 & 0.7058\\ 0.7058 & 2.0000 \end{pmatrix}, \quad X_1 = \begin{pmatrix} 0.3542 & 0.1016\\ 0.1839 & 0.2524 \end{pmatrix}, \quad X_2 = \begin{pmatrix} 0.0923 & 0.0153\\ 0.1490 & 0.0696 \end{pmatrix},$$
$$X_3 = \begin{pmatrix} 0.1443 & 0.0904\\ 0.0602 & 0.0704 \end{pmatrix}, \quad X_4 = \begin{pmatrix} 0.0516 & 0.0603\\ 0 & 0 \end{pmatrix},$$
and $X_k = 0_{2\times 2}$ for $|k| > 4$. In [10], Example 2.1, $G_1$ and $\Lambda$ are given by
$$G_1 = \begin{pmatrix} 0.8 & 0.7\\ 0.4 & 0.6 \end{pmatrix} \tag{18}$$
and
$$\Lambda = \begin{pmatrix} 4 & 1\\ 1 & 2 \end{pmatrix}, \tag{19}$$
respectively. In [10], Example 2.3, $F_1$ and $\Lambda$ are given by Equations (18) and (19), respectively.
Figure 1a, Figure 2a and Figure 3a show $R_{x_{n:1}}(D)$ and $\widetilde{R}_{x_{n:1}}(D)$ for the three vector sources $\{x_n\}$ considered, assuming that they are Gaussian. Figure 1b, Figure 2b and Figure 3b show $R_{z_{n:1}}(D)$ and $\widetilde{R}_{z_{n:1}}(D)$ for these three vector sources. In Figure 1, Figure 2 and Figure 3, $n\le 100$ and $D = 0.001$. The figures show that the rate of the low-complexity coding strategy considered tends to the RDF of the source even in the presence of a perturbation.

5. Conclusions

In [4], we proposed a low-complexity coding strategy to encode finite-length data blocks of any Gaussian vector source. In this paper, we proved that the convergence speed of the rate of our coding strategy is $O\!\left(\frac{1}{\sqrt{n}}\right)$ when it is used to encode the most relevant vector sources, namely WSS, MA, and AR vector sources. This means that the rate of our coding strategy is close enough to the RDF of the source even if the length $n$ of the data blocks is relatively small. Therefore, we conclude that our coding strategy is not only low-complexity and asymptotically optimal, but also low-latency. These three features make our coding strategy very useful in practical coding applications.

Author Contributions

Authors are listed in order of their degree of involvement in the work, with the most active contributors listed first. J.G.-G. conceived the research question. All authors were involved in the research and wrote the paper. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the Spanish Ministry of Science and Innovation through the ADELE project (PID2019-104958RB-C44).

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Proof of Lemma 1

Proof. 
Fix $n\in\mathbb{N}$ and $j,k\in\{1,\ldots,n\}$. As $F:\mathbb{R}\to\mathbb{C}^{M\times N}$ and $G:\mathbb{R}\to\mathbb{C}^{N\times K}$ are continuous and $2\pi$-periodic, $FG:\mathbb{R}\to\mathbb{C}^{M\times K}$ is also continuous and $2\pi$-periodic. Applying the Parseval theorem (see, e.g., [11], p. 191) yields
$$\begin{aligned}
\left[[T_n(FG)]_{j,k}\right]_{r,s} &= \left[\frac{1}{2\pi}\int_0^{2\pi} e^{-(j-k)\omega i}\,F(\omega)G(\omega)\,d\omega\right]_{r,s} = \frac{1}{2\pi}\int_0^{2\pi} e^{-(j-k)\omega i}\sum_{t=1}^{N}[F(\omega)]_{r,t}[G(\omega)]_{t,s}\,d\omega\\
&= \sum_{t=1}^{N}\frac{1}{2\pi}\int_0^{2\pi}\left([G(\omega)]_{t,s}\,e^{k\omega i}\right)\overline{\left(\overline{[F(\omega)]_{r,t}}\,e^{j\omega i}\right)}\,d\omega\\
&= \sum_{t=1}^{N}\sum_{h=-\infty}^{\infty}\left(\frac{1}{2\pi}\int_0^{2\pi} e^{-h\omega i}\,[G(\omega)]_{t,s}\,e^{k\omega i}\,d\omega\right)\overline{\left(\frac{1}{2\pi}\int_0^{2\pi} e^{-h\omega i}\,\overline{[F(\omega)]_{r,t}}\,e^{j\omega i}\,d\omega\right)}\\
&= \sum_{t=1}^{N}\sum_{h=-\infty}^{\infty}\left[F_{j-h}\right]_{r,t}\left[G_{h-k}\right]_{t,s} = \sum_{h=-\infty}^{\infty}\left[F_{j-h}\,G_{h-k}\right]_{r,s} = \left[\sum_{h=-\infty}^{\infty}F_{j-h}\,G_{h-k}\right]_{r,s}
\end{aligned}$$
for all $r\in\{1,\ldots,M\}$ and $s\in\{1,\ldots,K\}$. □

Appendix B. Proof of Lemma 2

Proof. 
(1) Fix $n\in\mathbb{N}$. As $F_r = 0_{M\times N}$ for $|r| > p$, from Lemma 1, we have
$$\begin{aligned}
\left[T_n(F)T_n(G) - T_n(FG)\right]_{j,k} &= \sum_{h=1}^{n}[T_n(F)]_{j,h}[T_n(G)]_{h,k} - \sum_{h=-\infty}^{\infty}F_{j-h}G_{h-k} = \sum_{h=1}^{n}F_{j-h}G_{h-k} - \lim_{H\to\infty}\sum_{h=-H}^{H}F_{j-h}G_{h-k}\\
&= -\lim_{H\to\infty}\left(\sum_{h=-H}^{0}F_{j-h}G_{h-k} + \sum_{h=n+1}^{H}F_{j-h}G_{h-k}\right) = -\sum_{h=j-p}^{0}F_{j-h}G_{h-k} - \sum_{h=n+1}^{j+p}F_{j-h}G_{h-k},
\end{aligned}$$
and consequently, Equation (3) holds for all $j,k\in\{1,\ldots,n\}$. Applying Equation (3), the Schwarz inequality (see, e.g., [11], p. 15), the Parseval theorem for continuous matrix-valued functions (see, e.g., [6], p. 208), and the well-known formula for the partial sums of the arithmetic series yields
$$\begin{aligned}
\left\|T_n(F)T_n(G) - T_n(FG)\right\|_F^2 &= \sum_{j=1}^{p}\sum_{k=1}^{n}\left\|\sum_{h=j-p}^{0}F_{j-h}G_{h-k}\right\|_F^2 + \sum_{j=n-p+1}^{n}\sum_{k=1}^{n}\left\|\sum_{h=n+1}^{j+p}F_{j-h}G_{h-k}\right\|_F^2\\
&\le \sum_{j=1}^{p}\sum_{k=1}^{n}\left(\sum_{h=j-p}^{0}\|F_{j-h}\|_F\|G_{h-k}\|_F\right)^2 + \sum_{j=n-p+1}^{n}\sum_{k=1}^{n}\left(\sum_{h=n+1}^{j+p}\|F_{j-h}\|_F\|G_{h-k}\|_F\right)^2\\
&\le \sum_{j=1}^{p}\sum_{h=j-p}^{0}\|F_{j-h}\|_F^2\sum_{l=j-p}^{0}\sum_{k=1}^{n}\|G_{l-k}\|_F^2 + \sum_{j=n-p+1}^{n}\sum_{h=n+1}^{j+p}\|F_{j-h}\|_F^2\sum_{l=n+1}^{j+p}\sum_{k=1}^{n}\|G_{l-k}\|_F^2\\
&\le \frac{1}{2\pi}\int_0^{2\pi}\|G(\omega)\|_F^2\,d\omega\left(\sum_{j=1}^{p}(p-j+1)\sum_{h=j-p}^{0}\|F_{j-h}\|_F^2 + \sum_{j=n-p+1}^{n}(j+p-n)\sum_{h=n+1}^{j+p}\|F_{j-h}\|_F^2\right)\\
&\le \frac{1}{2\pi}\int_0^{2\pi}\|F(\omega)\|_F^2\,d\omega\,\frac{1}{2\pi}\int_0^{2\pi}\|G(\omega)\|_F^2\,d\omega\left(\sum_{j=1}^{p}(p-j+1) + \sum_{j=n-p+1}^{n}(j+p-n)\right)\\
&= \frac{1}{2\pi}\int_0^{2\pi}\|F(\omega)\|_F^2\,d\omega\,\frac{1}{2\pi}\int_0^{2\pi}\|G(\omega)\|_F^2\,d\omega\left(\frac{p(p+1)}{2} + \frac{p(p+1)}{2}\right),
\end{aligned}$$
and therefore, Equation (4) is proven.
(2) Fix $n\in\mathbb{N}$. As $G_r = 0_{N\times K}$ for $|r| > q$, from Lemma 1, we obtain
$$\left[T_n(F)T_n(G) - T_n(FG)\right]_{j,k} = -\lim_{H\to\infty}\left(\sum_{h=-H}^{0}F_{j-h}G_{h-k} + \sum_{h=n+1}^{H}F_{j-h}G_{h-k}\right) = -\sum_{h=k-q}^{0}F_{j-h}G_{h-k} - \sum_{h=n+1}^{k+q}F_{j-h}G_{h-k},$$
and hence, Equation (5) holds for all $j,k\in\{1,\ldots,n\}$. Since $G$ is a trigonometric polynomial of degree $q$, $G^*$ is also a trigonometric polynomial of degree $q$, where $G^*(\omega) := (G(\omega))^*$ for all $\omega\in\mathbb{R}$. Applying [6], Lemma 4.2, and Equation (4) yields
$$\begin{aligned}
\left\|T_n(F)T_n(G) - T_n(FG)\right\|_F &= \left\|\left(T_n(F)T_n(G) - T_n(FG)\right)^*\right\|_F = \left\|(T_n(G))^*(T_n(F))^* - (T_n(FG))^*\right\|_F = \left\|T_n(G^*)T_n(F^*) - T_n(G^*F^*)\right\|_F\\
&\le \sqrt{q(q+1)}\sqrt{\frac{1}{2\pi}\int_0^{2\pi}\left\|(G(\omega))^*\right\|_F^2\,d\omega}\sqrt{\frac{1}{2\pi}\int_0^{2\pi}\left\|(F(\omega))^*\right\|_F^2\,d\omega},
\end{aligned}$$
and thus, Equation (6) is proven.
(3) Fix $n\ge\max\{2p,2q\}$. As $F_r = 0_{M\times N}$ for $|r| > p$ and $G_s = 0_{N\times K}$ for $|s| > q$, from Lemma 1, we obtain
$$\begin{aligned}
\left[T_n(F)T_n(G) - T_n(FG)\right]_{j,k} &= -\lim_{H\to\infty}\left(\sum_{h=-H}^{0}F_{j-h}G_{h-k} + \sum_{h=n+1}^{H}F_{j-h}G_{h-k}\right) = -\sum_{h=\max\{j-p,\,k-q\}}^{0}F_{j-h}G_{h-k} - \sum_{h=n+1}^{\min\{j+p,\,k+q\}}F_{j-h}G_{h-k}\\
&= \begin{cases} -\sum_{h=\max\{j-p,\,k-q\}}^{0}F_{j-h}G_{h-k} & \text{if } j\le p \text{ and } k\le q,\\ 0_{M\times K} & \text{if } j\le p \text{ and } k\ge q+1,\\ 0_{M\times K} & \text{if } p+1\le j\le n-p,\\ 0_{M\times K} & \text{if } j\ge n-p+1 \text{ and } k\le n-q,\\ -\sum_{h=n+1}^{\min\{j+p,\,k+q\}}F_{j-h}G_{h-k} & \text{if } j\ge n-p+1 \text{ and } k\ge n-q+1, \end{cases}
\end{aligned}$$
for all $j,k\in\{1,\ldots,n\}$. Observe that
$$\left[T_n(F)T_n(G) - T_n(FG)\right]_{n-p+j,\,n-q+k} = -\sum_{h=n+1}^{n+\min\{j,k\}}F_{n-p+j-h}\,G_{h-n+q-k} = -\sum_{l=1}^{\min\{j,k\}}F_{j-p-l}\,G_{l+q-k}$$
for all $j\in\{1,\ldots,p\}$ and $k\in\{1,\ldots,q\}$. □

Appendix C. Proof of Lemma 3

Proof. 
(1) Since $F:\mathbb{R}\to\mathbb{C}^{N\times N}$ is continuous and $2\pi$-periodic, $F^{-1}:\mathbb{R}\to\mathbb{C}^{N\times N}$ is also continuous and $2\pi$-periodic, where $F^{-1}(\omega) := (F(\omega))^{-1}$ for all $\omega\in\mathbb{R}$. As
$$\begin{aligned}
\left\|(T_n(F))^{-1} - T_n(F^{-1})\right\|_F &= \left\|(T_n(F))^{-1}\left(I_{nN} - T_n(F)T_n(F^{-1})\right)\right\|_F \le \left\|(T_n(F))^{-1}\right\|_2\left\|I_{nN} - T_n(F)T_n(F^{-1})\right\|_F\\
&= \left\|(T_n(F))^{-1}\right\|_2\left\|T_n(F)T_n(F^{-1}) - T_n(FF^{-1})\right\|_F
\end{aligned}$$
for all $n\in\mathbb{N}$, Equation (4) proves Assertion 1 of Lemma 3.
(2) Since $F(\omega)$ is positive definite for all $\omega\in\mathbb{R}$ (or equivalently, $F(\omega)$ is Hermitian and $\lambda_N(F(\omega)) > 0$ for all $\omega\in\mathbb{R}$), $F(\omega)$ is invertible for all $\omega\in\mathbb{R}$ (or equivalently, $\det(F(\omega)) = \prod_{k=1}^{N}\lambda_k(F(\omega))$ is non-zero for all $\omega\in\mathbb{R}$), $T_n(F)$ is Hermitian, and $\lambda_{nN}(T_n(F))\ge\inf F > 0$ for all $n\in\mathbb{N}$ (see Equation (1)). As $T_n(F)$ is positive definite for all $n\in\mathbb{N}$, $(T_n(F))^{-1}$ is also positive definite for all $n\in\mathbb{N}$. Therefore,
$$\left\|(T_n(F))^{-1}\right\|_2 = \lambda_1\left((T_n(F))^{-1}\right) = \frac{1}{\lambda_{nN}(T_n(F))} \le \frac{1}{\inf F}$$
for all $n\in\mathbb{N}$. Assertion 2 of Lemma 3 can now be obtained from Assertion 1 of Lemma 3. □

Appendix D. Proof of Lemma 4

Proof. 
Consider $A_n, B_n\in\mathbb{C}^{nM\times nN}$. As $V_n\otimes I_m$ is unitary, $(V_n\otimes I_m)^*$ is also unitary for all $m\in\mathbb{N}$. Consequently, since the Frobenius norm is unitarily invariant, we have
$$\begin{aligned}
\left\|C_{A_n} - C_{B_n}\right\|_F &= \left\|(V_n\otimes I_M)\operatorname{diag}_{1\le k\le n}\left(\left[(V_n\otimes I_M)^*(A_n - B_n)(V_n\otimes I_N)\right]_{k,k}\right)(V_n\otimes I_N)^*\right\|_F\\
&= \left\|\operatorname{diag}_{1\le k\le n}\left(\left[(V_n\otimes I_M)^*(A_n - B_n)(V_n\otimes I_N)\right]_{k,k}\right)\right\|_F \le \left\|(V_n\otimes I_M)^*(A_n - B_n)(V_n\otimes I_N)\right\|_F = \left\|A_n - B_n\right\|_F
\end{aligned}$$
and
$$\left\|A_n - C_{A_n}\right\|_F \le \left\|A_n - C_{B_n}\right\|_F + \left\|C_{B_n} - C_{A_n}\right\|_F \le \left\|A_n - B_n\right\|_F + \left\|B_n - C_{B_n}\right\|_F + \left\|C_{A_n} - C_{B_n}\right\|_F \le 2\left\|A_n - B_n\right\|_F + \left\|B_n - C_{B_n}\right\|_F.$$
If $B_n$ is an $n\times n$ block circulant matrix with $M\times N$ blocks, then (see, e.g., [6], Lemma 5.1, or [12], Lemma 3) there exist $\Lambda_1,\ldots,\Lambda_n\in\mathbb{C}^{M\times N}$ such that
$$B_n = (V_n\otimes I_M)\operatorname{diag}(\Lambda_1,\ldots,\Lambda_n)(V_n\otimes I_N)^*.$$
Therefore,
$$C_{B_n} = (V_n\otimes I_M)\operatorname{diag}_{1\le k\le n}\left(\left[(V_n\otimes I_M)^* B_n(V_n\otimes I_N)\right]_{k,k}\right)(V_n\otimes I_N)^* = (V_n\otimes I_M)\operatorname{diag}_{1\le k\le n}(\Lambda_k)(V_n\otimes I_N)^* = B_n,$$
and combining Equations (9) and (10), we obtain Equation (11). □

Appendix E. Proof of Lemma 5

Proof. 
Fix $n > 2p$. From [13], p. 5, we obtain
$$[\hat{C}_n(F)]_{j,1} = \begin{cases} F_0 & \text{if } j = 1,\\ \left(1 - \frac{j-1}{n}\right)F_{j-1} + \frac{j-1}{n}F_{j-1-n} & \text{if } j\in\{2,\ldots,n\} \end{cases} = \begin{cases} F_0 & \text{if } j = 1,\\ \left(1 - \frac{j-1}{n}\right)F_{j-1} & \text{if } j\in\{2,\ldots,p+1\},\\ 0_{M\times N} & \text{if } j\in\{p+2,\ldots,n-p\},\\ \frac{j-1}{n}F_{j-1-n} & \text{if } j\in\{n-p+1,\ldots,n\}. \end{cases}$$
Since $T_n(F) - \hat{C}_n(F)$ is an $n\times n$ block Toeplitz matrix with $M\times N$ blocks, and $[\hat{C}_n(F)]_{1,k} = [\hat{C}_n(F)]_{n+2-k,1}$ and $[T_n(F)]_{1,k} = F_{1-k}$ for $k\in\{2,\ldots,n\}$, its squared Frobenius norm is
$$\begin{aligned}
\left\|T_n(F) - \hat{C}_n(F)\right\|_F^2 &= \sum_{j=1}^{n}(n-j+1)\left\|[T_n(F) - \hat{C}_n(F)]_{j,1}\right\|_F^2 + \sum_{k=2}^{n}(n-k+1)\left\|[T_n(F) - \hat{C}_n(F)]_{1,k}\right\|_F^2\\
&= \sum_{j=2}^{n}\left((n-j+1)\left\|F_{j-1} - [\hat{C}_n(F)]_{j,1}\right\|_F^2 + (j-1)\left\|F_{j-1-n} - [\hat{C}_n(F)]_{j,1}\right\|_F^2\right)\\
&= \sum_{j=2}^{p+1}\left((n-j+1)\left(\frac{j-1}{n}\right)^2 + (j-1)\left(\frac{n-j+1}{n}\right)^2\right)\left\|F_{j-1}\right\|_F^2\\
&\quad + \sum_{j=n-p+1}^{n}\left((n-j+1)\left(\frac{j-1}{n}\right)^2 + (j-1)\left(\frac{n-j+1}{n}\right)^2\right)\left\|F_{j-1-n}\right\|_F^2\\
&= \sum_{j=2}^{p+1}\frac{(j-1)(n-j+1)}{n}\left\|F_{j-1}\right\|_F^2 + \sum_{j=n-p+1}^{n}\frac{(j-1)(n-j+1)}{n}\left\|F_{j-1-n}\right\|_F^2 = \sum_{k=1}^{p}k\,\frac{n-k}{n}\left(\left\|F_k\right\|_F^2 + \left\|F_{-k}\right\|_F^2\right). \tag{A1}
\end{aligned}$$
Applying [6], Lemma 5.4, we have
$$[C_n(F)]_{j,1} = \begin{cases} F_0 & \text{if } j = 1,\\ F_{j-1} & \text{if } j\in\{2,\ldots,p+1\},\\ 0_{M\times N} & \text{if } j\in\{p+2,\ldots,n-p\},\\ F_{j-1-n} & \text{if } j\in\{n-p+1,\ldots,n\}. \end{cases}$$
Consequently, the Frobenius norm of the $n\times n$ block Toeplitz matrix with $M\times N$ blocks $T_n(F) - C_n(F)$ is given by
$$\left\|T_n(F) - C_n(F)\right\|_F^2 = \sum_{j=2}^{p+1}(j-1)\left\|F_{j-1}\right\|_F^2 + \sum_{j=n-p+1}^{n}(n-j+1)\left\|F_{j-1-n}\right\|_F^2 = \sum_{k=1}^{p}k\left(\left\|F_k\right\|_F^2 + \left\|F_{-k}\right\|_F^2\right). \tag{A2}$$
Equations (A1) and (A2) prove Lemma 5. □

Appendix F. Proof of Lemma 7

Proof. 
From Equation (12), we obtain
$$x_{n:1} = T_n(G)\,w_{n:1}, \qquad n\in\mathbb{N},$$
with $G(\omega) = I_N + \sum_{k=1}^{q} e^{-k\omega i}\,G_k$ for all $\omega\in\mathbb{R}$. Therefore, applying [6], Lemma 4.2, yields
$$\left\{E\left(x_{n:1}x_{n:1}^*\right)\right\} = \left\{T_n(G)\,E\left(w_{n:1}w_{n:1}^*\right)(T_n(G))^*\right\} = \left\{T_n(G)T_n(\Lambda)T_n(G^*)\right\} = \left\{T_n(G\Lambda)T_n(G^*)\right\}. \tag{A3}$$
Hence, using Equation (9), we have
$$\left\|E\left(x_{n:1}x_{n:1}^*\right) - C_{E\left(x_{n:1}x_{n:1}^*\right)}\right\|_F \le 2\left\|E\left(x_{n:1}x_{n:1}^*\right) - T_n(G\Lambda G^*)\right\|_F + \left\|T_n(G\Lambda G^*) - C_{T_n(G\Lambda G^*)}\right\|_F = 2\left\|T_n(G\Lambda)T_n(G^*) - T_n(G\Lambda G^*)\right\|_F + \left\|T_n(G\Lambda G^*) - \hat{C}_n(G\Lambda G^*)\right\|_F$$
for all $n\in\mathbb{N}$. Thus, to finish the proof, we only need to show that $\left\{\left\|T_n(G\Lambda)T_n(G^*) - T_n(G\Lambda G^*)\right\|_F\right\}$ and $\left\{\left\|T_n(G\Lambda G^*) - \hat{C}_n(G\Lambda G^*)\right\|_F\right\}$ are bounded. As $G\Lambda$ and $G^*$ are trigonometric polynomials, from Equation (7), we obtain that $\left\{\left\|T_n(G\Lambda)T_n(G^*) - T_n(G\Lambda G^*)\right\|_F\right\}$ is bounded. Since $G\Lambda G^*$ is a trigonometric polynomial, applying Lemma 5, we conclude that $\left\{\left\|T_n(G\Lambda G^*) - \hat{C}_n(G\Lambda G^*)\right\|_F\right\}$ is bounded. □

Appendix G. Proof of Lemma 8

Proof. 
As $\Lambda$ is positive definite, if $\omega\in\mathbb{R}$ and $v\in\mathbb{C}^{N\times 1}$, then
$$v^*(F(\omega))^{-1}\Lambda\left((F(\omega))^{-1}\right)^* v = \left(\left((F(\omega))^*\right)^{-1}v\right)^*\Lambda\left((F(\omega))^*\right)^{-1}v > 0$$
whenever $((F(\omega))^*)^{-1}v\ne 0_{N\times 1}$, or equivalently, $v\ne 0_{N\times 1}$. Since $(F(\omega))^{-1}\Lambda((F(\omega))^{-1})^*$ is positive definite for all $\omega\in\mathbb{R}$, we have
$$\left\|\left(T_n\left(F^{-1}\Lambda(F^{-1})^*\right)\right)^{-1}\right\|_2 \le \frac{1}{\inf\left(F^{-1}\Lambda(F^{-1})^*\right)}, \qquad n\in\mathbb{N}.$$
From Equation (13), we obtain
$$w_{n:1} = T_n(F)\,x_{n:1}, \qquad n\in\mathbb{N}.$$
Consequently,
$$\left\{T_n(\Lambda)\right\} = \left\{E\left(w_{n:1}w_{n:1}^*\right)\right\} = \left\{T_n(F)\,E\left(x_{n:1}x_{n:1}^*\right)(T_n(F))^*\right\}.$$
Therefore, as $\det(T_n(F)) = 1$ for all $n\in\mathbb{N}$, we have
$$\left\{E\left(x_{n:1}x_{n:1}^*\right)\right\} = \left\{(T_n(F))^{-1}\,T_n(\Lambda)\left((T_n(F))^*\right)^{-1}\right\}. \tag{A4}$$
To shorten the notation, write $H := F^{-1}\Lambda(F^{-1})^*$, and observe that $H^{-1} = F^*\Lambda^{-1}F$ is a trigonometric polynomial. Applying Equation (11) with $B_n = C_n(H)$ and [6], Lemma 4.2, and using the identities $A - T_n(H) = A\left((T_n(H))^{-1} - A^{-1}\right)T_n(H)$ with $A = (T_n(F))^{-1}T_n(\Lambda)((T_n(F))^*)^{-1}$ and $A^{-1} = T_n(F^*)T_n(\Lambda^{-1}F)$, $T_n(H) - C_n(H) = T_n(H)\left((C_n(H))^{-1} - (T_n(H))^{-1}\right)C_n(H)$, and $(C_n(H))^{-1} = C_n(H^{-1})$, yields
$$\begin{aligned}
\left\|E\left(x_{n:1}x_{n:1}^*\right) - C_{E\left(x_{n:1}x_{n:1}^*\right)}\right\|_F &\le 2\left\|E\left(x_{n:1}x_{n:1}^*\right) - C_n(H)\right\|_F \le 2\left(\left\|A - T_n(H)\right\|_F + \left\|T_n(H) - C_n(H)\right\|_F\right)\\
&\le 2\left\|(T_n(F))^{-1}\right\|_2^2\,\lambda_1(\Lambda)\left\|T_n(H)\right\|_2\left\|(T_n(H))^{-1} - T_n(F^*)T_n(\Lambda^{-1}F)\right\|_F\\
&\quad + 2\left\|T_n(H)\right\|_2\left\|C_n(H)\right\|_2\left\|C_n\left(H^{-1}\right) - (T_n(H))^{-1}\right\|_F
\end{aligned}$$
for all $n\in\mathbb{N}$. Moreover,
$$\left\|(T_n(H))^{-1} - T_n(F^*)T_n(\Lambda^{-1}F)\right\|_F \le \left\|(T_n(H))^{-1} - T_n\left(H^{-1}\right)\right\|_F + \left\|T_n(F^*)T_n(\Lambda^{-1}F) - T_n\left(F^*\Lambda^{-1}F\right)\right\|_F,$$
$$\left\|C_n\left(H^{-1}\right) - (T_n(H))^{-1}\right\|_F \le \left\|C_n\left(F^*\Lambda^{-1}F\right) - T_n\left(F^*\Lambda^{-1}F\right)\right\|_F + \left\|T_n\left(H^{-1}\right) - (T_n(H))^{-1}\right\|_F,$$
and
$$\left\|(T_n(H))^{-1} - T_n\left(H^{-1}\right)\right\|_F \le \left\|(T_n(H))^{-1}\right\|_2\left\|T_n\left(H^{-1}\right)\right\|_2\left\|\left(T_n\left(F^*\Lambda^{-1}F\right)\right)^{-1} - T_n\left(\left(F^*\Lambda^{-1}F\right)^{-1}\right)\right\|_F.$$
Since $\left\{\left\|(T_n(F))^{-1}\right\|_2\right\}$ is bounded by assumption and $\left\{\left\|(T_n(H))^{-1}\right\|_2\right\}$ is bounded by the first display of this proof, to finish the proof, we only need to show that $\left\{\left\|T_n\left(F^{-1}\Lambda(F^{-1})^*\right)\right\|_2\right\}$, $\left\{\left\|T_n(F^*)T_n(\Lambda^{-1}F) - T_n\left(F^*\Lambda^{-1}F\right)\right\|_F\right\}$, $\left\{\left\|C_n\left(F^{-1}\Lambda(F^{-1})^*\right)\right\|_2\right\}$, $\left\{\left\|T_n\left(F^*\Lambda^{-1}F\right)\right\|_2\right\}$, $\left\{\left\|\left(T_n\left(F^*\Lambda^{-1}F\right)\right)^{-1} - T_n\left(\left(F^*\Lambda^{-1}F\right)^{-1}\right)\right\|_F\right\}$, and $\left\{\left\|T_n\left(F^*\Lambda^{-1}F\right) - C_n\left(F^*\Lambda^{-1}F\right)\right\|_F\right\}$ are bounded. From [6], Theorem 4.3, we obtain that $\left\{\left\|T_n\left(F^{-1}\Lambda(F^{-1})^*\right)\right\|_2\right\}$ and $\left\{\left\|T_n\left(F^*\Lambda^{-1}F\right)\right\|_2\right\}$ are bounded. Applying [6], Lemma 5.2, we have that $\left\{\left\|C_n\left(F^{-1}\Lambda(F^{-1})^*\right)\right\|_2\right\}$ is bounded. Since $F^*$ and $\Lambda^{-1}F$ are trigonometric polynomials, from Equation (7), we obtain that $\left\{\left\|T_n(F^*)T_n(\Lambda^{-1}F) - T_n\left(F^*\Lambda^{-1}F\right)\right\|_F\right\}$ is bounded. As $F^*\Lambda^{-1}F$ is a trigonometric polynomial, applying Lemma 5, we have that $\left\{\left\|T_n\left(F^*\Lambda^{-1}F\right) - C_n\left(F^*\Lambda^{-1}F\right)\right\|_F\right\}$ is bounded. Since $(F(\omega))^{-1}\Lambda\left((F(\omega))^{-1}\right)^*$ is positive definite for all $\omega\in\mathbb{R}$, its inverse $(F(\omega))^*\Lambda^{-1}F(\omega)$ is also positive definite for all $\omega\in\mathbb{R}$, and consequently, from Equation (8), we conclude that $\left\{\left\|\left(T_n\left(F^*\Lambda^{-1}F\right)\right)^{-1} - T_n\left(\left(F^*\Lambda^{-1}F\right)^{-1}\right)\right\|_F\right\}$ is bounded. □

Appendix H. A Statistical Signal Processing Application on Filtering WSS Vector Processes

Consider a zero-mean WSS $M$-dimensional vector process $\{x_n\}$. Let $Y$ be the PSD of a zero-mean WSS $N$-dimensional vector process $\{y_n\}$ with $\inf Y > 0$. Assume that those two processes are jointly WSS with joint PSD $Z$, that is, $Z:\mathbb{R}\to\mathbb{C}^{M\times N}$ is a continuous $2\pi$-periodic function satisfying $\left\{E\left(x_{n:1}y_{n:1}^*\right)\right\} = \{T_n(Z)\}$.
For every $n\in\mathbb{N}$, if $\widetilde{x}_{n:1}$ is an estimation of $x_{n:1}$ from $y_{n:1}$ of the form
$$\widetilde{x}_{n:1} = W\,y_{n:1} \tag{A5}$$
with $W\in\mathbb{C}^{nM\times nN}$, the MSE per sample is
$$\frac{\operatorname{MSE}(W)}{n} = \frac{E\left(\left\|x_{n:1} - \widetilde{x}_{n:1}\right\|_2^2\right)}{n},$$
and the minimum MSE (MMSE) is given by $\operatorname{MMSE} = \operatorname{MSE}(W_0)$, where $W_0$ is the Wiener filter, i.e.,
$$W_0 = E\left(x_{n:1}y_{n:1}^*\right)\left(E\left(y_{n:1}y_{n:1}^*\right)\right)^{-1} = T_n(Z)\,(T_n(Y))^{-1}.$$
In [13], Equation (6), it was shown that there is no difference in the MSE per sample for large enough $n$ if we substitute the optimal filter $W_0$ by $W_C = \hat{C}_n(Z)\left(\hat{C}_n(Y)\right)^{-1}$, that is,
$$\lim_{n\to\infty}\left(\frac{\operatorname{MSE}(W_C)}{n} - \frac{\operatorname{MMSE}}{n}\right) = 0.$$
Obviously, the computational complexity of the operation in Equation (A5) is notably reduced when this substitution is applied and the FFT algorithm is used. Specifically, the computational complexity is reduced from $O(n^2)$ to $O(n\log n)$.
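The reduction comes from the fact that a block circulant matrix acts diagonally in the block Fourier basis. The sketch below (ours, not from [13]; the function name and the $V_n$ sign convention are assumptions) applies a block circulant filter via block DFT, blockwise multiplies, and inverse block DFT.

```python
# Sketch: apply a block circulant W_C in O(n log n) for fixed block sizes M, N.
import numpy as np

def apply_block_circulant(eig_blocks, y, M, N):
    """eig_blocks: array (n, M, N) of frequency-domain blocks of W_C.
    y: stacked vector of shape (n*N,). Returns W_C @ y, of shape (n*M,)."""
    n = eig_blocks.shape[0]
    yf = np.fft.ifft(y.reshape(n, N), axis=0) * np.sqrt(n)       # (V_n^* (x) I_N) y
    xf = np.einsum('kmn,kn->km', eig_blocks, yf)                 # blockwise multiply
    return (np.fft.fft(xf, axis=0) / np.sqrt(n)).reshape(n * M)  # (V_n (x) I_M) xf

n, M, N = 8, 1, 1
eig = np.ones((n, M, N), dtype=complex)          # identity filter in frequency domain
y = np.arange(n * N, dtype=float)
print(np.allclose(apply_block_circulant(eig, y, M, N), y))       # passes y through
```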
We here study the convergence speed of the sequence $\left\{\frac{\operatorname{MSE}(W_C)}{n} - \frac{\operatorname{MMSE}}{n}\right\}$ (i.e., how fast this sequence tends to zero) by assuming that $Y$ and $Z$ are trigonometric polynomials. Applying [13], p. 11, and Lemma 5, we conclude that there exists $K\in[0,\infty)$ such that
$$0 \le \frac{\operatorname{MSE}(W_C)}{n} - \frac{\operatorname{MMSE}}{n} \le \frac{M\,\sigma_1(Z)}{\inf Y}\left(1 + \frac{\sup Y}{\inf Y}\right)\frac{\left\|\hat{C}_n(Z) - T_n(Z)\right\|_F}{\sqrt{n}} + \frac{\sigma_1(Z)}{\inf Y}\,\frac{\left\|\hat{C}_n(Y) - T_n(Y)\right\|_F}{\sqrt{n}} \le \frac{K}{\sqrt{n}}, \qquad n\in\mathbb{N},$$
where $\sigma_1(Z) = \sup_{\omega\in[0,2\pi]}\left\|Z(\omega)\right\|_2$ and $\sup Y = \max_{\omega\in[0,2\pi]}\lambda_1(Y(\omega))$. Therefore,
$$\frac{\operatorname{MSE}(W_C)}{n} - \frac{\operatorname{MMSE}}{n} = O\!\left(\frac{1}{\sqrt{n}}\right). \tag{A6}$$
Equation (A6) was proven in [2] for the case M = N = 1 .

References

  1. Kolmogorov, A.N. On the Shannon theory of information transmission in the case of continuous signals. IRE Trans. Inf. Theory 1956, IT-2, 102–108. [Google Scholar] [CrossRef]
  2. Pearl, J. On coding and filtering stationary signals by discrete Fourier transforms. IEEE Trans. Inf. Theory 1973, 19, 229–232. [Google Scholar] [CrossRef]
  3. Gutiérrez-Gutiérrez, J.; Zárraga-Rodríguez, M.; Insausti, X. Upper bounds for the rate distortion function of finite-length data blocks of Gaussian WSS sources. Entropy 2017, 19, 554. [Google Scholar] [CrossRef] [Green Version]
  4. Zárraga-Rodríguez, M.; Gutiérrez-Gutiérrez, J.; Insausti, X. A low-complexity and asymptotically optimal coding strategy for Gaussian vector sources. Entropy 2019, 21, 965. [Google Scholar] [CrossRef] [Green Version]
  5. Gray, R.M. Toeplitz and circulant matrices: A review. Found. Trends Commun. Inf. Theory 2006, 2, 155–239. [Google Scholar] [CrossRef]
  6. Gutiérrez-Gutiérrez, J.; Crespo, P.M. Block Toeplitz matrices: Asymptotic results and applications. Found. Trends Commun. Inf. Theory 2011, 8, 179–257. [Google Scholar] [CrossRef] [Green Version]
  7. Gutiérrez-Gutiérrez, J. A modified version of the Pisarenko method to estimate the power spectral density of any asymptotically wide sense stationary vector process. Appl. Math. Comput. 2019, 362, 124526. [Google Scholar] [CrossRef]
  8. Gutiérrez-Gutiérrez, J.; Zárraga-Rodríguez, M.; Villar-Rosety, F.M.; Insausti, X. Rate-distortion function upper bounds for Gaussian vectors and their applications in coding AR sources. Entropy 2018, 20, 399. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  9. Gutiérrez-Gutiérrez, J.; Iglesias, I.; Podhorski, A. Geometric MMSE for one-sided and two-sided vector linear predictors: From the finite-length case to the infinite-length case. Signal Process. 2011, 91, 2237–2245. [Google Scholar] [CrossRef]
  10. Reinsel, G.C. Elements of Multivariate Time Series Analysis; Springer: Berlin/Heidelberg, Germany, 1993. [Google Scholar]
  11. Rudin, W. Principles of Mathematical Analysis; McGraw-Hill: New York, NY, USA, 1976. [Google Scholar]
  12. Gutiérrez-Gutiérrez, J.; Crespo, P.M. Asymptotically equivalent sequences of matrices and multivariate ARMA processes. IEEE Trans. Inf. Theory 2011, 57, 5444–5454. [Google Scholar] [CrossRef]
  13. Gutiérrez-Gutiérrez, J.; Zárraga-Rodríguez, M.; Insausti, X.; Hogstad, B.O. On the complexity reduction of coding WSS vector processes by using a sequence of block circulant matrices. Entropy 2017, 19, 95. [Google Scholar] [CrossRef] [Green Version]
Figure 1. Rates for the considered wide sense stationary (WSS) vector source: (a) without perturbation and (b) with perturbation.
Figure 2. Rates for the considered VMA(1) source: (a) without perturbation and (b) with perturbation.
Figure 3. Rates for the considered VAR(1) source: (a) without perturbation and (b) with perturbation.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
