Article

Kählerian Information Geometry for Signal Processing †

Department of Applied Mathematics and Statistics, The State University of New York (SUNY), Stony Brook, NY 11794, USA
* Author to whom correspondence should be addressed.
This paper is an extended version of our paper published in MaxEnt 2014, Amboise, France, 21–26 September 2014.
Entropy 2015, 17(4), 1581-1605; https://doi.org/10.3390/e17041581
Submission received: 16 January 2015 / Revised: 13 March 2015 / Accepted: 20 March 2015 / Published: 25 March 2015
(This article belongs to the Special Issue Information, Entropy and Their Geometric Structures)

Abstract:
We prove the correspondence between the information geometry of a signal filter and a Kähler manifold. The information geometry of a minimum-phase linear system with a finite complex cepstrum norm is a Kähler manifold. The square of the complex cepstrum norm of the signal filter corresponds to the Kähler potential. The Hermitian structure of the Kähler manifold is explicitly emergent if and only if the impulse response function of the highest degree in z is constant in model parameters. The Kählerian information geometry takes advantage of more efficient calculation steps for the metric tensor and the Ricci tensor. Moreover, α-generalization on the geometric tensors is linear in α. It is also robust to find Bayesian predictive priors, such as superharmonic priors, because Laplace–Beltrami operators on Kähler manifolds are in much simpler forms than those of the non-Kähler manifolds. Several time series models are studied in the Kählerian information geometry.

1. Introduction

Since the introduction of Riemannian geometry to statistics [1,2], information geometry has been developed along various directions. The statistical curvature as the differential-geometric analogue of information loss and sufficiency was proposed by Efron [3]. The α-duality of information geometry was found by Amari [4]. Not being limited to statistical inference, information geometry has become popular in many different fields, such as information-theoretic generalization of the expectation-maximization algorithm [5], hidden Markov models [6], interest rate modeling [7], phase transition [8,9] and string theory [10]. More applications can be found in the literature [11] and the references therein.
In particular, time series analysis and signal processing are well-known applications of information geometry. Ravishanker et al. [12] found the information geometry of autoregressive moving average (ARMA) models in the coordinate system of poles and zeros. It was also extended to fractionally-integrated ARMA (ARFIMA) models [13]. The information geometry of autoregressive (AR) models in the reflection coefficient coordinates was also reported by Barbaresco [14]. In the information-theoretic framework, Bayesian predictive priors outperforming the Jeffreys prior were derived for the AR models by Komaki [15].
Kähler manifolds are interesting topics in differential geometry. On a Kähler manifold, the metric tensor and the Levi–Civita connection are straightforwardly calculated from the Kähler potential, and the Ricci tensor is obtained from the determinant of the metric tensor. Moreover, its holonomy group is related to the unitary group. Because of these properties, many implications of Kähler manifolds are found in mathematics and theoretical physics. In addition to these fields, information geometry is one of those fields where the Kähler manifolds are intriguing. After the symplectic structure in information geometry and its connection to statistics were discovered [16], Barbaresco [14] notably introduced Kähler manifolds to information geometry for time series models and also generalized the differential-geometric approach with mathematical structures, such as Koszul geometry [17,18]. Additionally, Zhang and Li [19] found symplectic and Kähler structures in divergence functions.
In this paper, we prove that the information geometry of a signal filter with a finite complex cepstrum norm is a Kähler manifold. The Kähler potential of the geometry is the square of the Hardy norm of the logarithmic transfer function of a linear system. The Hermitian structure of the manifold is explicitly seen in the metric tensor under certain conditions on the transfer functions of linear models and filters. The calculation of geometric objects and the search for Bayesian predictive priors are simplified by exploiting the properties of Kähler geometry. Additionally, α-correction terms on the geometric objects exhibit α-linearity. This paper is structured as follows. In the next section, we briefly review information geometry for signal processing and derive basic lemmas in terms of the spectral density function and transfer function. In Section 3, the main theorems for Kählerian information manifolds are proven and the consequences of the theorems are provided. The implications of Kähler geometry for time series models are reported in Section 4. We conclude the paper in Section 5.

2. Information Geometry for Signal Processing

2.1. Spectral Density Representation in the Frequency Domain

We model an output signal y(w) as a linear system with a transfer function h(w; ξ) of model parameters $\xi = (\xi^1, \xi^2, \cdots, \xi^n)$:

$$y(w) = h(w; \xi)\, x(w)$$

where x(w) is an input signal in the frequency domain w. Complex inputs, outputs and model parameters are considered in this paper. The properties of a given signal filter are characterized by the transfer function h(w; ξ) and the model parameters ξ.
In signal processing, one of the most important quantities is the spectral density function. The spectral density function S(w; ξ) is defined as the absolute square of the transfer function:
$$S(w; \xi) = |h(w; \xi)|^2.$$

The spectral density function describes how a given signal filter distributes energy across the frequency domain. In terms of signal amplitude, it encodes the amplitude response to a monochromatic input $e^{iw}$. For example, the spectral density function of the all-pass filter is constant in the frequency domain, because the filter passes all inputs to outputs up to a phase difference, regardless of frequency. High-pass filters only allow signals in the high-frequency domain, while low-pass filters only permit low-frequency inputs. The properties of other well-known filters are likewise described by their specific spectral density functions.
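As a quick numerical illustration (a minimal sketch assuming NumPy; the one-pole filter and the value lam = 0.8 are hypothetical choices, not taken from the paper), the spectral density of a low-pass filter peaks at w = 0, while an all-pass filter is flat:

```python
import numpy as np

# Spectral density S(w) = |h(e^{iw})|^2 sampled on a frequency grid, for a
# hypothetical one-pole low-pass filter h(z) = 1/(1 - lam z^{-1}) with lam = 0.8.
lam = 0.8
w = np.linspace(-np.pi, np.pi, 1001)
z = np.exp(1j * w)

S = np.abs(1.0 / (1.0 - lam / z)) ** 2               # low-pass: largest near w = 0
S_allpass = np.abs((lam - z) / (1 - lam * z)) ** 2   # all-pass: constant in w

assert np.argmax(S) == np.argmin(np.abs(w))          # energy concentrated at low frequency
assert np.allclose(S_allpass, 1.0)                   # all-pass response is flat
```

The all-pass check makes the statement above concrete: its spectral density is identically one, so it carries no amplitude information at any frequency.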
The spectral density function is also important in information geometry, because the information-geometric objects of the signal processing geometry are derived from the spectral density function [20,21]. Among the geometric objects, the length and distance concepts are most fundamental in geometry. One of the most important distance measures in information geometry is the α-divergence, also known as Chernoff’s α-divergence, that is the only divergence which is both an f-divergence and a Bregman divergence [22]. The α-divergence between two spectral density functions S1 and S2 is defined as
$$D^{(\alpha)}(S_1 \,\|\, S_2) = \begin{cases} \dfrac{1}{2\pi\alpha^2}\displaystyle\int_{-\pi}^{\pi}\left\{\left(\dfrac{S_2}{S_1}\right)^{\alpha} - 1 - \alpha\log\dfrac{S_2}{S_1}\right\} dw & (\alpha \neq 0)\\[2ex] \dfrac{1}{4\pi}\displaystyle\int_{-\pi}^{\pi}\left(\log S_2 - \log S_1\right)^2 dw & (\alpha = 0)\end{cases}$$
and the divergence conventionally measures the distance from $S_1$ to $S_2$. The α-divergence, except for α = 0, is a pseudo-distance, because it is not symmetric under the exchange of $S_1$ and $S_2$. In spite of the asymmetry, the α-divergence is frequently used for measuring differences between two linear models or two filters. Some α-divergences are more popular than others, because they are already well known in information theory and statistics. For example, the (−1)-divergence is the Kullback–Leibler divergence, and the 0-divergence is the square of the Hellinger distance in statistics. The Hellinger distance is locally asymptotically equivalent to the information distance and globally tightly bounded by the information distance [23].
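The definition above can be checked numerically. The following sketch (assuming NumPy; the two AR(1)-type densities are hypothetical examples) evaluates the α-divergence by a Riemann sum and confirms that it is asymmetric for α ≠ 0 but symmetric at α = 0:

```python
import numpy as np

def alpha_divergence(S1, S2, w, alpha):
    """Numerical alpha-divergence between spectral densities sampled on grid w."""
    dw = w[1] - w[0]
    if alpha == 0:
        integrand = (np.log(S2) - np.log(S1)) ** 2 / (4 * np.pi)
    else:
        r = S2 / S1
        integrand = (r ** alpha - 1 - alpha * np.log(r)) / (2 * np.pi * alpha ** 2)
    return np.sum(integrand) * dw

w = np.linspace(-np.pi, np.pi, 20001)
S1 = np.abs(1 / (1 - 0.5 * np.exp(-1j * w))) ** 2   # hypothetical densities
S2 = np.abs(1 / (1 - 0.8 * np.exp(-1j * w))) ** 2

# alpha != 0 gives a pseudo-distance (asymmetric); alpha = 0 is symmetric.
assert not np.isclose(alpha_divergence(S1, S2, w, 1.0),
                      alpha_divergence(S2, S1, w, 1.0))
assert np.isclose(alpha_divergence(S1, S2, w, 0.0),
                  alpha_divergence(S2, S1, w, 0.0))
```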
The metric tensor of a statistical manifold, also known as the Fisher information matrix, is derived from the α-divergence. In order to define the information geometry of a linear system, the conditions on a signal filter are found in Amari and Nagaoka [21]: stability, minimum phase and
$$\frac{1}{2\pi}\int_{-\pi}^{\pi} \left|\log S(w;\xi)\right|^2 dw < \infty$$
which imposes that the unweighted power cepstrum norm [24,25] is finite. According to the literature [20,21], the metric tensor of the linear system geometry is given by
$$g_{\mu\nu}(\xi) = \frac{1}{2\pi}\int_{-\pi}^{\pi} \left(\partial_\mu \log S\right)\left(\partial_\nu \log S\right) dw$$

where the partial derivatives are taken with respect to the model parameters ξ, i.e., $\partial_\mu = \partial/\partial\xi^\mu$. Since the dimension of the manifold is n, the metric tensor is an n × n matrix.
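For a concrete check of this formula, consider the one-parameter family $S(w;\lambda) = |1 - \lambda e^{-iw}|^{-2}$ with real λ, for which the integral evaluates in closed form to $2/(1-\lambda^2)$ (a value derived here for illustration, not quoted from the paper). A numerical sketch assuming NumPy:

```python
import numpy as np

# Fisher metric g = (1/2pi) * integral of (d log S / d lam)^2 for the
# hypothetical family S(w; lam) = |1 - lam e^{-iw}|^{-2}, real lam.
lam = 0.6
w = np.linspace(-np.pi, np.pi, 100001)
dw = w[1] - w[0]

def logS(l):
    return -2 * np.log(np.abs(1 - l * np.exp(-1j * w)))

eps = 1e-6
dlogS = (logS(lam + eps) - logS(lam - eps)) / (2 * eps)   # central difference
g_numeric = np.sum(dlogS ** 2) * dw / (2 * np.pi)

g_closed = 2 / (1 - lam ** 2)   # closed form for this one-parameter family
assert abs(g_numeric - g_closed) < 1e-3
```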
Other information-geometric objects are also determined by the spectral density function. The α-connection, which encodes the change of a vector being parallel-transported along a curve, is expressed with
$$\Gamma^{(\alpha)}_{\mu\nu,\rho}(\xi) = \frac{1}{2\pi}\int_{-\pi}^{\pi}\left(\partial_\mu\partial_\nu \log S - \alpha\left(\partial_\mu\log S\right)\left(\partial_\nu\log S\right)\right)\left(\partial_\rho\log S\right) dw$$
where α is a real number. Notice that the α-connection is not a tensor. The α-connection is related to the Levi–Civita connection $\Gamma_{\mu\nu,\rho}(\xi)$, also known as the metric connection. The relation is given by the following equations:

$$\Gamma^{(\alpha)}_{\mu\nu,\rho}(\xi) = \Gamma_{\mu\nu,\rho}(\xi) - \frac{\alpha}{2}\, T_{\mu\nu,\rho}(\xi)$$

$$T_{\mu\nu,\rho}(\xi) = \frac{1}{\pi}\int_{-\pi}^{\pi}\left(\partial_\mu\log S\right)\left(\partial_\nu\log S\right)\left(\partial_\rho\log S\right) dw$$
where the tensor T is symmetric under the exchange of the indices. The Levi–Civita connection corresponds to the α = 0 case.
These information-geometric objects have interesting properties under the reciprocality of spectral density functions. The spectral density function of an inverse system is the reciprocal of the spectral density function of the original system. The geometric properties of the inverse system are described by the α-dual description. The following lemma shows the correspondence between the reciprocality of the spectral density function and the α-duality.
Lemma 1. The information geometry of an inverse system is the α-dual geometry to the information geometry of the original system.
Proof. The metric tensor is invariant under the reciprocality of spectral density functions, i.e., plugging $S^{-1}$ into Equation (2) provides the identical metric tensor.
Meanwhile, the α-connection is not invariant under the reciprocality and exhibits a more interesting property. The α-connection from the reciprocal spectral density function is given by
$$\Gamma^{(\alpha)}_{\mu\nu,\rho}(S^{-1};\xi) = \frac{1}{2\pi}\int_{-\pi}^{\pi}\left(\partial_\mu\partial_\nu\log S + \alpha\left(\partial_\mu\log S\right)\left(\partial_\nu\log S\right)\right)\left(\partial_\rho\log S\right) dw = \Gamma^{(-\alpha)}_{\mu\nu,\rho}(S;\xi)$$
and the above equation shows that the α-connection induced by the reciprocal spectral density function corresponds to the (−α)-connection of the original geometry.
Similar to the α-connection, the α-divergence is equipped with the same property. The α-divergence between two reciprocal spectral density functions is straightforwardly found from the definition of the α-divergence, and it is represented by the (−α)-divergence between the two spectral density functions:
$$D^{(\alpha)}\left(S_1^{-1} \,\|\, S_2^{-1}\right) = D^{(-\alpha)}\left(S_1 \,\|\, S_2\right).$$
Using the inverse systems, we can construct the α-dual description of signal processing models in information geometry. The multiplicative inverse of a spectral density function corresponds to the α-duality of the geometry.
Lemma 1 indicates that, given a linear system geometry, there is no way to discern whether the metric tensor is derived from the filters with $S$ or $S^{-1}$. Additionally, the model $S^{-1}$ is (−α)-flat if and only if $S$ is α-flat. The 0-connection is self-dual under the reciprocality. A consequence of Lemma 1 is the following multiplication rule:
$$D^{(\alpha)}\left(S_1 \,\|\, S_2^{-1}\right) = \frac{1}{2\pi\alpha^2}\int_{-\pi}^{\pi}\left\{\left(S_1 S_2\right)^{-\alpha} - 1 + \alpha\log\left(S_1 S_2\right)\right\} dw = D^{(-\alpha)}\left(S_0 \,\|\, S_1 S_2\right) = D^{(\alpha)}\left(S_1 S_2 \,\|\, S_0\right)$$
where $S_0$ is the unit spectral density function of the all-pass filter. Plugging $S_1 = S_0$ and $S_2 = S$, we have $D^{(0)}(S_0\|S^{-1}) = D^{(0)}(S_0\|S) = D^{(0)}(S\|S_0)$.
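The sign flip of α under reciprocality in Lemma 1 can be verified numerically. A minimal sketch (assuming NumPy; the spectral densities are hypothetical examples):

```python
import numpy as np

def alpha_div(S1, S2, w, alpha):
    # alpha-divergence for alpha != 0, computed as a Riemann sum on grid w
    r = S2 / S1
    return np.sum(r ** alpha - 1 - alpha * np.log(r)) * (w[1] - w[0]) / (2 * np.pi * alpha ** 2)

w = np.linspace(-np.pi, np.pi, 20001)
S1 = np.abs(1 - 0.3 * np.exp(-1j * w)) ** 2
S2 = np.abs(1 - 0.7 * np.exp(-1j * w)) ** 2

# Reciprocal spectral densities flip the sign of alpha (Lemma 1).
lhs = alpha_div(1 / S1, 1 / S2, w, 0.5)
rhs = alpha_div(S1, S2, w, -0.5)
assert np.isclose(lhs, rhs)
```

The identity holds pointwise in the integrand, so the two sums agree up to floating-point rounding.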
We observe that the bilateral transfer functions with $\log|h(e^{iw};\xi)|^2 \in L^2(\mathbb{T})$ are isomorphically embedded in the space $\mathbb{R} \oplus zH^2(\mathbb{D})$.
Lemma 2. Let $\log|h(e^{iw};\xi)|^2 \in L^2(\mathbb{T})$. Then, there is an analytic function $f \in \exp(H^2(\mathbb{D}))$, such that

$$|h(e^{iw};\xi)|^2 = |f(e^{iw};\xi)|^2$$

and

$$\left\|\log|h(e^{iw};\xi)|^2 - \log|h(1;\xi)|^2\right\|_{L^2(\mathbb{T})} = \left\|\log|f(e^{iw};\xi)|^2 - \log|f(1;\xi)|^2\right\|_{H^2(\mathbb{D})}.$$
This has the interpretation that the information manifold of $\log|h(e^{iw};\xi)|^2 \in L^2$ is isometric to the Hardy–Hilbert space.
Proof. $\log|h(e^{iw};\xi)|^2$ is represented by the Fourier series:

$$\log|h(e^{iw};\xi)|^2 = \sum_{r=-\infty}^{\infty} a_r\, e^{irw}$$
and since $\log|h(e^{iw};\xi)|^2$ is real, we have $a_{-r} = \bar{a}_r$, and in particular, $a_0$ is real. We define the conjugate series by the coefficients $\tilde{a}_r$, so that $a_r + i\tilde{a}_r = 0$ for $r < 0$ and $a_r - i\tilde{a}_r = 0$ for $r > 0$; so that $\tilde{a}(e^{iw})$ is real valued, we choose $\tilde{a}_0 = 0$. This implies

$$\tilde{a}_r = \begin{cases} -\dfrac{1}{i}\, a_r & (r < 0) \\[1.5ex] \dfrac{1}{i}\, a_r & (r > 0) \end{cases}$$

and if $\{a_r\} \in \ell^p$ for $1 \le p \le \infty$, then $\{\tilde{a}_r\} \in \ell^p$; in particular,

$$\sum_{r \neq 0} |a_r|^2 = \sum_{r \neq 0} |\tilde{a}_r|^2.$$
The analytic function $f(z) = \exp\left(a_0 + a(z) + i\tilde{a}(z)\right)$ has

$$\log|h(e^{iw};\xi)|^2 = \log|f(e^{iw};\xi)|^2$$

and

$$\left\|\log f(z;\xi) - \log f(1;\xi)\right\|^2_{H^2} = \left\|\log|h(e^{iw};\xi)|^2 - \log|h(1;\xi)|^2\right\|^2_{L^2(\mathbb{T})} < \infty$$
and because $f \in \exp(H^2(\mathbb{D}))$, $f$ (and $f^{-1}$) is outer, we may write

$$h(e^{iw};\xi) = u(e^{iw};\xi)\, f(e^{iw};\xi)$$

where $\log u(e^{iw};\xi) \in L^2$ is pure imaginary, that is, $|u(e^{iw};\xi)| = 1$.
This has the interpretation that $h$ has a well-defined outer factor, and the information geometry of $h$ depends only on this outer factor $f$. In the case that the power series coefficients $a_r(\xi)$ are continuous, smooth, analytic, etc., the embedding is likewise smooth. □

2.2. Transfer Function Representation in the z Domain

By using transfer functions, it is also possible to reproduce all of the previous results obtained with the spectral density function. With Fourier transformation and Z-transformation, $z = e^{iw}$, a transfer function h(z; ξ) is expressed as a series expansion in z,

$$h(z;\xi) = \sum_{r=-\infty}^{\infty} h_r(\xi)\, z^{-r}$$

where $h_r(\xi)$ is an impulse response function. This is a bilateral (or two-sided) transfer function expression, which has both positive and negative degrees in z, including the zeroth degree. In the causal case, where $h_r(\xi) = 0$ for all negative r, the transfer function is unilateral. In many applications, the main concern is the causality of linear filters, which is represented by unilateral transfer functions. In this paper, we start with bilateral transfer functions as a generalization and then focus on causal filters.
In the complex z-domain, all formulae for the information-geometric objects are identical to the expressions in the frequency domain, except for the change of the integral measure:
$$\frac{1}{2\pi}\int_{-\pi}^{\pi} G(e^{iw};\xi)\, dw \;\longrightarrow\; \frac{1}{2\pi i}\oint_{|z|=1} G(z;\xi)\, \frac{dz}{z}$$
for an arbitrary integrand G. Since the integral is evaluated as a line integral along the unit circle in the complex plane, it is easy to calculate with the aid of the residue theorem. According to the residue theorem, only the poles inside the unit circle contribute to the value of the integral. If G(z; ξ) is analytic on the unit disk, the value of the integral is the constant term in z of G(z; ξ). For more details, see Cima et al. [26] and the references therein.
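A short numerical sketch of this contour-integral recipe (assuming NumPy): sampling the unit circle uniformly and averaging is exactly the discretized form of $(1/2\pi i)\oint G\, dz/z$, and it extracts the constant coefficient, or the residue contribution when a pole sits inside the circle. The integrands below are hypothetical examples:

```python
import numpy as np

N = 1024
z = np.exp(2j * np.pi * np.arange(N) / N)   # uniform samples of the unit circle

# G analytic on the disk: the integral picks out the constant (z^0) term.
G = 3.0 + 2.0 * z + 5.0 * z ** 2
integral = np.mean(G)                        # discretized (1/2πi) ∮ G dz/z
assert np.isclose(integral, 3.0)

# G2 = z/(z - 0.5): G2/z = 1/(z - 0.5) has one pole inside |z| = 1,
# with residue 1, so the integral evaluates to 1.
G2 = 1.0 / (1 - 0.5 / z)
assert np.isclose(np.mean(G2), 1.0)
```

The substitution $z = e^{iw}$ turns $dz/z$ into $i\,dw$, which is why the plain average over the circle computes the contour integral.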
One advantage of using Z-transformation is that a transfer function can be understood in the framework of functional analysis. A transfer function defined on the complex plane is expanded by the orthonormal basis z−r for integers r with impulse response functions as the coefficients. In functional analysis, it is possible to define the inner product between two complex functions F and G in the Hilbert space:
$$\langle F, G \rangle = \frac{1}{2\pi i}\oint_{|z|=1} F(z)\, \overline{G(z)}\, \frac{dz}{z}.$$
By using this inner product, the condition for stationarity, $\sum_{r=0}^{\infty}|h_r|^2 < \infty$, is written as the Hardy norm ($H^2$-norm) in complex functional analysis,

$$\left\|h(z;\xi)\right\|^2_{H^2} = \left\langle h(z;\xi), h(z;\xi)\right\rangle = \sum_{r=0}^{\infty} |h_r|^2 < \infty.$$

Since the functional space with a finite Hardy norm is called the Hardy–Hilbert space $H^2$, the unilateral transfer functions satisfying the stationarity condition live in the $H^2$-space. A transfer function of a stationary system in bilateral form is a function in the $L^2$-space.
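The identity behind this norm, between the coefficient sum and the average of $|h|^2$ over the circle, is Parseval's theorem; it can be sketched as follows (assuming NumPy; the impulse response values are hypothetical):

```python
import numpy as np

# H^2 norm of h(z) = sum_r h_r z^{-r}: the sum of |h_r|^2 equals the mean
# of |h(z)|^2 over the unit circle (Parseval's theorem).
h_r = np.array([1.0, 0.5, 0.25, 0.125])        # hypothetical impulse response

N = 4096
z = np.exp(2j * np.pi * np.arange(N) / N)
h_on_circle = sum(c * z ** (-r) for r, c in enumerate(h_r))

norm_coeffs = np.sum(np.abs(h_r) ** 2)
norm_circle = np.mean(np.abs(h_on_circle) ** 2)
assert np.isclose(norm_coeffs, norm_circle)
```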
The conditions on the transfer function of a signal filter are also necessary for defining the information geometry of a linear system in terms of the transfer function. Similar to the spectral density representation, the conditions on the linear filters are stability and minimum phase. In addition to these conditions, we also need the following condition on the finite $H^2$-norm of the logarithmic transfer function,

$$\frac{1}{2\pi i}\oint_{|z|=1} \left|\log h(z;\xi)\right|^2 \frac{dz}{z} < \infty$$

which states that the unweighted complex cepstrum norm [25,27] is finite. From now on, the signal filters in this paper are the linear systems satisfying the above norm condition. This is a necessary condition for a finite power cepstrum norm.
It is natural to complexify the coordinate system, as is done in complex differential geometry. In holomorphic and anti-holomorphic coordinates, the metric tensor of a linear system geometry is represented by

$$g_{\mu\nu} = \frac{1}{2\pi i}\oint_{|z|=1} \partial_\mu\left(\log h(z;\xi) + \log\bar{h}(\bar{z};\bar{\xi})\right)\, \partial_\nu\left(\log h(z;\xi) + \log\bar{h}(\bar{z};\bar{\xi})\right)\, \frac{dz}{z}$$

where both μ and ν run over all holomorphic and anti-holomorphic coordinates, i.e., $\mu, \nu = 1, 2, \cdots, n, \bar{1}, \bar{2}, \cdots, \bar{n}$.
The components of the metric tensor fall into two classes: those with pure indices, either all holomorphic or all anti-holomorphic, and those with mixed indices. The metric tensor components in these categories are given by

$$g_{ij}(\xi) = \frac{1}{2\pi i}\oint_{|z|=1} \partial_i\log h(z;\xi)\; \partial_j\log h(z;\xi)\; \frac{dz}{z}$$

$$g_{i\bar{j}}(\xi) = \frac{1}{2\pi i}\oint_{|z|=1} \partial_i\log h(z;\xi)\; \partial_{\bar{j}}\log\bar{h}(\bar{z};\bar{\xi})\; \frac{dz}{z}$$

where $g_{\bar{i}\bar{j}} = (g_{ij})^*$ and $g_{\bar{i}j} = (g_{i\bar{j}})^*$, and the indices i and j run from one to n. It is also possible to express the α-connection and the α-divergence in terms of the transfer function by using Equation (1), the relation between the transfer function and the spectral density function.
It is noteworthy that the information geometry of a linear system is invariant under the multiplicative factor of z in the transfer function, because the metric tensor is not changed by the factorization. The invariance is also true for the geometry induced by the spectral density function.
Lemma 3. The information geometry of a signal filter is invariant under the multiplicative factor of z.
Proof. Any transfer function can be factored by $z^R$ in the form of

$$h(z;\xi) = z^R\, \tilde{h}(z;\xi)$$

where R is an integer and $\tilde{h}$ is the factored-out transfer function. In the spectral density representation, the contribution of the factorization is $|z|^{2R}$, which is unity in the line integration. It follows that the metric tensor, the α-connection and the α-divergence are independent of the factorization.

When a transfer function is considered, the same conclusion is obtained. Since the contribution from the factorization part, $\log z^R$, is canceled by the partial derivatives in the metric tensor and the α-connection expressions, the geometry is invariant under the factorization. It is also easy to show that the α-divergence is not changed by the factorization. Another explanation is that the terms $\partial_i h / h$ in the metric tensor and the α-connection are invariant under $z^R$-scaling. □
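The invariance in Lemma 3 is immediate numerically, since $|z^R| = 1$ on the unit circle. A minimal sketch (assuming NumPy; the filter and the exponents are hypothetical choices):

```python
import numpy as np

# Multiplying the transfer function by z^R leaves the spectral density, and
# hence the whole geometry, unchanged: |z^R| = 1 on the unit circle.
w = np.linspace(-np.pi, np.pi, 1001)
z = np.exp(1j * w)
lam = 0.4
h = 1.0 / (1.0 - lam / z)

for R in (-2, 3):
    assert np.allclose(np.abs(z ** R * h) ** 2, np.abs(h) ** 2)
```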
Based on Lemma 3, it is possible to obtain a unilateral transfer function from a transfer function whose degrees in z are bounded above. In particular, this factorization invariance of the geometry is useful when the transfer function has a finite number of terms in the non-causal direction of the bilateral expansion. If the highest degree in z of the transfer function is finite, the transfer function is factored as

$$h(z;\xi) = z^R\left(h_{-R} + h_{-(R-1)}\, z^{-1} + \cdots\right) = z^R\, \tilde{h}(z;\xi)$$

where R is the maximum degree in z of the transfer function and $\tilde{h}$ is a unilateral transfer function.
A bilateral transfer function can be expressed as the product of a unilateral transfer function f(z; ξ) and an analytic function a(z; ξ) on the disk:

$$h(z;\xi) = f(z;\xi)\, a(z;\xi) = \left(f_0 + f_1 z^{-1} + f_2 z^{-2} + \cdots\right)\left(a_0 + a_1 z + a_2 z^2 + \cdots\right)$$
where $f_r$ and $a_r$ are functions of ξ. For a causal filter, all $a_i$'s are zero, except for $a_0$. This decomposition also includes the case of Lemma 3 by setting $a_i = 0$ for $i \neq R$ and $a_R = 1$. However, it is natural to take $f_0$ and $a_0$ as non-zero functions of ξ. This is because powers of z can be factored out of the non-zero coefficient terms with the maximum degree in f(z; ξ) and the minimum degree in a(z; ξ), and the transfer function is reducible to

$$h(z;\xi) = z^R\, \tilde{h}(z;\xi)$$

where $\tilde{h}(z;\xi)$ has non-zero $\tilde{f}_0$ and $\tilde{a}_0$, and R is an integer, the sum of the degrees in z of the first non-zero coefficient terms from f(z; ξ) and a(z; ξ), respectively. By Lemma 3, the information geometry of the linear system with the transfer function h(z; ξ) is the same as the geometry induced by the factored-out transfer function $\tilde{h}(z;\xi)$.
The relation between f(z; ξ), a(z; ξ) and h(z; ξ) is described by the following Toeplitz system:

$$\begin{pmatrix} h_0 & h_1 & h_2 & \cdots \\ h_{-1} & h_0 & h_1 & \cdots \\ h_{-2} & h_{-1} & h_0 & \cdots \\ \vdots & \vdots & \vdots & \ddots \end{pmatrix} = \begin{pmatrix} f_0 & f_1 & f_2 & \cdots \\ 0 & f_0 & f_1 & \cdots \\ 0 & 0 & f_0 & \cdots \\ \vdots & \vdots & \vdots & \ddots \end{pmatrix} \begin{pmatrix} a_0 & 0 & 0 & \cdots \\ a_1 & a_0 & 0 & \cdots \\ a_2 & a_1 & a_0 & \cdots \\ \vdots & \vdots & \vdots & \ddots \end{pmatrix}.$$
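The Toeplitz relation amounts to the coefficient identity $h_r = \sum_{s \ge 0} f_{r+s}\, a_s$. A small sketch (assuming NumPy; the truncated coefficient sequences are hypothetical) compares this identity against direct multiplication on the unit circle:

```python
import numpy as np

f = np.array([1.0, 0.5, 0.25, 0.125])   # f(z) = f0 + f1 z^{-1} + ..., hypothetical
a = np.array([2.0, -0.3, 0.1])          # a(z) = a0 + a1 z + a2 z^2, hypothetical

def h_coeff(r):
    # h_r = sum_{s >= 0} f_{r+s} a_s, zero when an index leaves the truncation
    return sum(f[r + s] * a[s] for s in range(len(a)) if 0 <= r + s < len(f))

# Compare against direct multiplication h(z) = f(z) a(z) on the unit circle.
N = 512
z = np.exp(2j * np.pi * np.arange(N) / N)
h_vals = (sum(fr * z ** (-r) for r, fr in enumerate(f))
          * sum(ac * z ** s for s, ac in enumerate(a)))

for r in range(-2, 4):
    direct = np.mean(h_vals * z ** r)   # extracts the z^{-r} coefficient of h
    assert np.isclose(direct, h_coeff(r))
```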
For a given h(z; ξ), fr is determined by the coefficients of a(z; ξ), i.e., if we choose a(z; ξ), f(z; ξ) is conformable to the choice under the above Toeplitz system. The following lemma is noteworthy for further discussions. It is the generalization of Lemma 3.
Lemma 4. The information geometry of a signal filter is invariant under the choice of a(z; ξ).
Proof. It is obvious that the information geometry of a linear system is determined only by the transfer function h(z; ξ). Whichever a(z; ξ) is chosen, the transfer function is the same, because f(z; ξ) conforms to the Toeplitz system. □
For further generalization, the transfer function is extended by the Blaschke product b(z), which corresponds to the all-pass filter in signal processing. The transfer function can be decomposed into the following form:

$$h(z;\xi) = f(z;\xi)\, a(z;\xi)\, b(z)$$

where the Blaschke product b(z) is given by

$$b(z) = \prod_s b(z, z_s) = \prod_s \frac{|z_s|}{z_s}\, \frac{z_s - z}{1 - \bar{z}_s z}$$

and every $z_s$ is in the unit disk. Although the Blaschke product can be written in $z^{-1}$ instead of z, our conclusion is not changed, and we choose z as our convention. When $z_s = 0$, the Blaschke factor is given by $b(z, z_s) = z$. Regardless of $z_s$, the Blaschke product is analytic on the unit disk. Since the Taylor expansion of the Blaschke product provides positive-order terms in z, it is also possible to incorporate the Blaschke product into a(z; ξ). However, the Blaschke product is considered separately in this paper.
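The unit modulus of the Blaschke product on the circle, which is what makes it drop out of the spectral density, can be sketched numerically (assuming NumPy; the zeros z_s are hypothetical points in the unit disk):

```python
import numpy as np

def blaschke(z, zs):
    # One Blaschke factor b(z, z_s) = (|z_s|/z_s) (z_s - z)/(1 - conj(z_s) z)
    return (np.abs(zs) / zs) * (zs - z) / (1 - np.conj(zs) * z)

w = np.linspace(-np.pi, np.pi, 1001)
z = np.exp(1j * w)

# Product of two factors with hypothetical zeros inside the unit disk.
b = blaschke(z, 0.5 + 0.3j) * blaschke(z, -0.2 + 0.6j)
assert np.allclose(np.abs(b), 1.0)   # unit modulus everywhere on |z| = 1
```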
The logarithmic transfer function of a linear system is represented in terms of f, a and b:

$$\log h(z;\xi) = \log(f_0 a_0) + \log\left(1 + \sum_{r=1}^{\infty}\frac{f_r}{f_0}\, z^{-r}\right) + \log\left(1 + \sum_{r=1}^{\infty}\frac{a_r}{a_0}\, z^{r}\right) + \log b(z) = \phi_0 + \sum_s \log|z_s| + \sum_{r=1}^{\infty}\phi_r(\xi)\, z^{-r} + \sum_{r=1}^{\infty}\alpha_r(\xi)\, z^{r} + \sum_{r=1}^{\infty}\beta_r\, z^{r}$$

where $\phi_0 = \log(f_0 a_0)$, and $\phi_r$, $\alpha_r$ are the r-th coefficients of the logarithmic expansions. $\phi_r$ and $\alpha_r$ are functions of ξ unless all $f_r/f_0$ and $a_r/a_0$ are constant. Meanwhile, $\beta_r = \frac{1}{r}\sum_s \frac{|z_s|^{2r} - 1}{z_s^r}$ is constant in ξ.
It is also straightforward to show that the information geometry is independent of the Blaschke product.
Lemma 5. The information geometry of a signal filter is independent of the Blaschke product.
Proof. It is obvious that the Blaschke product is independent of the coordinate system ξ. Plugging the above series into the expressions for the metric tensor in complex coordinates, Equations (8) and (9), the metric tensor components are expressed in terms of $\phi_r$ and $\alpha_r$:

$$g_{ij} = \partial_i\phi_0\, \partial_j\phi_0 + \sum_{r=1}^{\infty}\partial_i\phi_r\, \partial_j\alpha_r + \sum_{r=1}^{\infty}\partial_i\alpha_r\, \partial_j\phi_r$$

$$g_{i\bar{j}} = \sum_{r=0}^{\infty}\partial_i\phi_r\, \partial_{\bar{j}}\bar{\phi}_r + \sum_{r=1}^{\infty}\partial_i\alpha_r\, \partial_{\bar{j}}\bar{\alpha}_r$$

and it is noteworthy that the metric tensor components are independent of the $\beta_r$ terms, which are related to the Blaschke product, because those are not functions of ξ. This is why the z-convention for the Blaschke product is not important. It is straightforward to repeat the same calculation for the α-connection. Based on these, the information geometry of a linear system is independent of the Blaschke product. □
According to Lemma 4, the geometry is invariant under the degree of freedom in choosing a(z; ξ). By using this invariance, it is possible to fix the degree of freedom such that $a_r/a_0$ is constant in ξ. With this choice, the metric tensor components of the information manifold are given by

$$g_{ij} = \partial_i\phi_0\, \partial_j\phi_0$$

$$g_{i\bar{j}} = \sum_{r=0}^{\infty}\partial_i\phi_r\, \partial_{\bar{j}}\bar{\phi}_r$$

and it is easy to verify that the metric tensor components depend only on $\phi_r$ and $\bar{\phi}_r$. In other words, the metric tensor depends only on the unilateral part of the transfer function and on the constant term in z of the analytic part.
By Lemma 3, any transfer function with an upper-bounded degree in z is reducible to a unilateral transfer function with a constant term. For this class of transfer functions, a similar expression for the metric tensor can be obtained. First of all, the logarithmic transfer function is given by the series expansion:

$$\log h(z;\xi) = \log z^R + \log h_{-R} + \log\left(1 + \sum_{r=1}^{\infty}\frac{h_{-R+r}}{h_{-R}}\, z^{-r}\right) = \log z^R + \sum_{r=0}^{\infty}\eta_r\, z^{-r}$$
where R is the highest degree in z. The coefficients $\eta_r$ are also known as the complex cepstrum [27], and $\eta_0 = \log h_{-R}$. After the series expansion of this logarithmic transfer function is plugged into the formulae for the metric tensor components, Equations (8) and (9), the metric tensor components are obtained as
$$g_{ij} = \partial_i\eta_0\, \partial_j\eta_0$$

$$g_{i\bar{j}} = \sum_{r=0}^{\infty}\partial_i\eta_r\, \partial_{\bar{j}}\bar{\eta}_r$$

and these expressions for the metric tensor components are similar to Equations (10) and (11) under the exchange $\phi_r \leftrightarrow \eta_r$.
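As a worked instance of these formulas (an illustration constructed here, not an example from the paper): for the AR(1)-type filter $h(z) = 1/(1 - \lambda z^{-1})$, the complex cepstrum is $\eta_r = \lambda^r/r$ with $\eta_0 = 0$, so $\partial_\lambda \eta_r = \lambda^{r-1}$ and the metric sums to $g_{\lambda\bar{\lambda}} = \sum_{r\ge 1}|\lambda|^{2(r-1)} = 1/(1-|\lambda|^2)$. A numerical sketch assuming NumPy:

```python
import numpy as np

# Metric of the AR(1)-type filter h(z) = 1/(1 - lam z^{-1}) from its complex
# cepstrum eta_r = lam^r / r: g = sum_r |d eta_r / d lam|^2.
lam = 0.3 + 0.4j            # hypothetical model point with |lam| < 1
R_MAX = 200                 # truncation of the cepstrum series

g_series = sum(abs(lam ** (r - 1)) ** 2 for r in range(1, R_MAX))
g_closed = 1.0 / (1.0 - abs(lam) ** 2)
assert np.isclose(g_series, g_closed)
```

The truncation converges geometrically because $|\lambda| < 1$, which is exactly the stability condition on the filter.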
As an extension of Lemma 5, it can be generalized to the inner-outer factorization of $H^2$-functions. A function in the $H^2$-space can be expressed as the product of an outer and an inner function by the Beurling factorization [28]. The generalization with the Beurling factorization is given by the following lemma.
Lemma 6. The information geometry of a signal filter is independent of the inner function.
Proof. A transfer function h(z; ξ) in the $H^2$-space can be decomposed by the inner-outer factorization:

$$h(z;\xi) = \mathcal{O}(z;\xi)\, \mathcal{I}(z;\xi)$$

where $\mathcal{O}(z;\xi)$ is an outer function and $\mathcal{I}(z;\xi)$ is an inner function. The α-divergence is expressed with $S(z;\xi) = |h(z;\xi)|^2 = |\mathcal{O}(z;\xi)\,\mathcal{I}(z;\xi)|^2 = |\mathcal{O}(z;\xi)|^2$ on the unit circle, because the inner function has unit modulus on the unit circle. Since the α-divergence is represented only with the outer function, other geometric objects, such as the metric tensor and the α-connection, are also independent of the inner function. □

3. Kähler Manifold for Signal Processing

An advantage of the transfer function representation in the complex z-domain is that it is easy to test whether or not the information geometry of a given signal filter is a Kähler manifold. As mentioned before, choosing the coefficients of a(z; ξ) amounts to fixing degrees of freedom in the calculation without changing the geometry. By setting $a(z;\xi)/a_0(\xi)$ to a function constant in ξ, the description of a statistical model becomes much simpler, and the emergence of Kähler manifolds can be easily verified. Since causal filters are our main concern in practice, we concentrate on unilateral transfer functions. Although we work with causal filters, the results in this section are also valid for bilateral transfer functions.
Theorem 1. For a signal filter with a finite complex cepstrum norm, the information geometry of the signal filter is a Kähler manifold.
Proof. The information manifold of a signal filter is described by the metric tensor g with the components given in Equations (10) and (11). Any complex manifold admits a Hermitian metric by introducing a new metric tensor ĝ [29]:

$$\hat{g}_p(X, Y) = \frac{1}{2}\left(g_p(X, Y) + g_p(J_p X, J_p Y)\right)$$

where X, Y are tangent vectors at a point p on the manifold and J is the almost complex structure, such that

$$J_p \frac{\partial}{\partial \xi^i} = i\, \frac{\partial}{\partial \xi^i}, \qquad J_p \frac{\partial}{\partial \bar{\xi}^i} = -i\, \frac{\partial}{\partial \bar{\xi}^i}.$$
With the new metric tensor ĝ, it is straightforward to verify that the information manifold is equipped with the Hermitian structure:

$$\hat{g}_{ij} = \hat{g}(\partial_i, \partial_j) = 0$$

$$\hat{g}_{i\bar{j}} = \hat{g}(\partial_i, \partial_{\bar{j}}) = g_{i\bar{j}}.$$
Based on the above metric tensor expressions, it is obvious that the information geometry of a linear system is a Hermitian manifold.
The Kähler two-form Ω of the manifold is given by

$$\Omega = i\, \hat{g}_{i\bar{j}}\, d\xi^i \wedge d\bar{\xi}^j$$

where ∧ is the wedge product. By plugging Equation (11) into Ω, it is easy to check that the Kähler two-form is closed, since $\partial_k \hat{g}_{i\bar{j}} = \partial_i \hat{g}_{k\bar{j}}$ and $\partial_{\bar{k}} \hat{g}_{i\bar{j}} = \partial_{\bar{j}} \hat{g}_{i\bar{k}}$.
Since Kähler manifolds are defined as the Hermitian manifolds with the closed Kähler two-forms, the information geometry of a signal filter is a Kähler manifold.
An information manifold for a linear system with purely real parameters is a submanifold of a Kählerian information manifold where the metric tensor has the isometry of exchanging holomorphic- and anti-holomorphic coordinates. In addition to that, a given linear system can be described by two manifolds: one is Kähler, and another is non-Kähler. Although the dimension is doubled, working with Kähler manifolds has many advantages, which will be reiterated later.
In Theorem 1, the Hermitian condition is clearly seen after introducing the new metric tensor ĝ. It is also possible to find a condition under which the metric tensor g itself exhibits the explicit Hermitian structure. To impose the explicit Hermitian condition, the following theorem is worth mentioning.
Theorem 2. In the Kählerian information geometry of a signal filter, the Hermitian structure is explicit in the metric tensor if and only if ϕ0 (or f0a0) is a constant in ξ. Similarly, for the transfer function of which the highest degree in z is finite, the Hermitian condition is directly found if and only if the coefficient of the highest degree in z of the logarithmic transfer function is a constant in ξ.
Proof. Let us prove the first statement.
(⇒) If the geometry is Kähler, it should be the Hermitian manifold satisfying
$$g_{ij} = \partial_i\phi_0\, \partial_j\phi_0 = 0$$

for all i and j. This equation exhibits that $f_0 a_0$ is constant in ξ, because $\phi_0 = \log(f_0 a_0)$.
(⇐) If $\phi_0$ (or $f_0 a_0$) is constant in ξ, the metric tensor is found from Equations (10) and (11),

$$g_{ij} = 0, \qquad g_{i\bar{j}} = \sum_{r=0}^{\infty}\partial_i\phi_r\, \partial_{\bar{j}}\bar{\phi}_r$$

and these metric tensor components impose that the geometry is a Hermitian manifold. It is noteworthy that the non-vanishing metric tensor components are expressed only with $\phi_r$ and $\bar{\phi}_r$, which are functions of the impulse response functions $f_r$ in f(z; ξ), the unilateral part of the transfer function. For the manifold to be a Kähler manifold, the Kähler two-form Ω needs to be closed. The condition for the closed Kähler two-form Ω is that $\partial_k g_{i\bar{j}} = \partial_i g_{k\bar{j}}$ and $\partial_{\bar{k}} g_{i\bar{j}} = \partial_{\bar{j}} g_{i\bar{k}}$. It is easy to verify that the metric tensor components, Equation (14), satisfy the conditions for the closed Kähler two-form. A Hermitian manifold with a closed Kähler two-form is a Kähler manifold.
The proof for the second statement is straightforward, because it is similar to the proof of the first one under the exchange $\phi_r \leftrightarrow \eta_r$. Let us assume that the highest degree in z is R. According to Lemma 3, it is possible to reduce a bilateral transfer function with finitely many terms along the non-causal direction to a unilateral transfer function by using the factorization of $z^R$. After that, we need to replace $\phi_0$ with $\eta_0$ in the proof. The two statements are equivalent. □
Theorem 2 can be applied to submanifolds of the information manifolds. For example, a submanifold of a linear system is a Kähler manifold if and only if $\phi_0$ (or $f_0a_0$) is constant on the submanifold, i.e., $\phi_0$ is a function only of the coordinates orthogonal to the submanifold.
On a Kähler manifold, the metric tensor is derived from the following equation:
$$g_{i\bar j} = \partial_i\partial_{\bar j}K$$
where K is the Kähler potential. The Kähler potential is defined up to a holomorphic function and an anti-holomorphic function: $K(\xi,\bar\xi)\to K(\xi,\bar\xi)+\phi(\xi)+\bar\psi(\bar\xi)$. However, the geometry is derived from the same relation: $g_{i\bar j} = \partial_i\partial_{\bar j}K$. By using Equation (15), the information on the geometry can be extracted from the Kähler potential. It is necessary to find the Kähler potential for the signal processing geometry. The following corollary shows how to obtain the Kähler potential for the Kählerian information manifold.
Corollary 1. For a given Kählerian information geometry, the Kähler potential of the geometry is the square of the Hardy norm of the logarithmic transfer function. In other words, the Kähler potential is the square of the complex cepstrum norm of a signal filter.
Proof. Given a transfer function h(z; ξ), the non-trivial components of the metric tensor for a signal processing model are given by Equation (9). By using integration by parts, the metric tensor component is represented by
$$g_{i\bar j} = \frac{1}{2\pi i}\oint_{|z|=1}\left\{\partial_i\left(\log h(z;\xi)\,\partial_{\bar j}\log\bar h(\bar z;\bar\xi)\right) - \log h(z;\xi)\,\partial_i\partial_{\bar j}\log\bar h(\bar z;\bar\xi)\right\}\frac{dz}{z}$$
where the latter term goes to zero by holomorphicity. When we integrate by parts with respect to the anti-holomorphic derivative once again, the metric tensor is expressed with
$$g_{i\bar j} = \frac{1}{2\pi i}\oint_{|z|=1}\left\{\partial_i\partial_{\bar j}\left(\log h(z;\xi)\,\log\bar h(\bar z;\bar\xi)\right) - \partial_i\left(\partial_{\bar j}\log h(z;\xi)\,\log\bar h(\bar z;\bar\xi)\right)\right\}\frac{dz}{z}$$
and the latter term is also zero, because h(z; ξ) is a holomorphic function.
Finally, the metric tensor is obtained as
$$g_{i\bar j} = \partial_i\partial_{\bar j}\left(\frac{1}{2\pi i}\oint_{|z|=1}\log h(z;\xi)\left(\log h(z;\xi)\right)^*\frac{dz}{z}\right)$$
and by the definition of the Kähler potential, Equation (15), the Kähler potential of the linear system geometry is given by
$$K = \frac{1}{2\pi i}\oint_{|z|=1}\log h(z;\xi)\left(\log h(z;\xi)\right)^*\frac{dz}{z}$$
up to a holomorphic function and an anti-holomorphic function. The right-hand side of the above equation is known as the square of the Hardy norm of the logarithmic transfer function. It is straightforward to state the relation between the Kähler potential and the square of the Hardy norm of the logarithmic transfer function:
$$K = \frac{1}{2\pi i}\oint_{|z|=1}\log h(z;\xi)\left(\log h(z;\xi)\right)^*\frac{dz}{z} = \left\|\log h(z;\xi)\right\|_{H^2}^2.$$
Additionally, the Hardy norm of the logarithmic transfer function is also known as the complex cepstrum norm of a linear system [25,27].
For a given linear system, the Kähler potential of the geometry is given by ϕr, αr and the complex conjugates of ϕr, αr:
$$K = \sum_{r=0}^{\infty}\left(\phi_r\bar\phi_r + \alpha_r\bar\alpha_r\right).$$
However, the geometry does not depend on $\alpha_r$ and $\bar\alpha_r$, because they are not functions of the model parameters ξ once the degree of freedom is fixed. By using Equation (14), the Kähler potential is expressed with
$$K = \sum_{r=0}^{\infty}\phi_r\bar\phi_r$$
and it is noticeable that the Kähler potential only depends on ϕr and ϕ ¯ r, which come from the unilateral part of the transfer function decomposition. It is possible to obtain a similar expression for the finite highest upper-degree case by changing ϕr to ηr.
Since we assume that the complex cepstrum norm is finite, the logarithmic transfer function of the filter lives in the Hardy space $H^2$:
$$K = \left\|\log h(z;\xi)\right\|_{H^2}^2 < \infty.$$
This implies that the transfer function lives not only in $H^2$, but also in $\exp(H^2)$; equivalently, $\log h$ is in the $H^2$-space.
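The identity between the Kähler potential and the squared complex cepstrum norm can be illustrated numerically. The following sketch (our illustration, not from the paper; it assumes NumPy and a unit-gain AR(1) filter $h(z) = 1/(1-\xi z^{-1})$ as a concrete example) computes the cepstrum coefficients by an inverse FFT on the unit circle:

```python
import numpy as np

# Sketch (our illustration): for a unit-gain AR(1) filter h(z) = 1/(1 - xi z^{-1}),
# the Kaehler potential K = sum_r |phi_r|^2 equals the squared complex cepstrum
# norm of the filter. The cepstrum is computed by an inverse FFT on |z| = 1.
xi = 0.6 + 0.2j                          # pole inside the unit disk (minimum phase)
N = 4096
theta = 2 * np.pi * np.arange(N) / N
log_h = -np.log(1 - xi * np.exp(-1j * theta))   # log h on the unit circle

phi = np.fft.ifft(log_h)                 # phi[r] = coefficient of z^{-r}
r = np.arange(1, 50)
assert np.allclose(phi[1:50], xi ** r / r, atol=1e-10)   # phi_r = xi^r / r

K_numeric = np.sum(np.abs(phi) ** 2)                     # squared cepstrum norm
K_closed = np.sum(np.abs(xi) ** (2 * r) / r ** 2)        # K = sum_r |xi|^{2r}/r^2
print(K_numeric, K_closed)
assert abs(K_numeric - K_closed) < 1e-10
```

The check also confirms $\phi_0 = 0$ for a unit-gain filter, consistent with the gauge discussion above.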
From Equation (15), the metric tensor is derived from the Kähler potential. Additionally, the metric tensor is also calculated from the α-divergence. These facts indicate that there exists a connection between the Kähler potential and the α-divergence.
Corollary 2. The Kähler potential is a constant term in α, up to purely holomorphic or purely anti-holomorphic functions, of the α-divergence between a signal processing filter and the all-pass filter of a unit transfer function.
Proof. After replacing the spectral density function with the transfer function, the 0-divergence between a signal filter and the all-pass filter with a unit transfer function is given by
$$D^{(0)}(1\|h) = \frac{1}{2\pi i}\oint_{|z|=1}\frac{1}{2}\left(\log h + \log\bar h\right)^2\frac{dz}{z} = K + \frac{1}{2\pi i}\oint_{|z|=1}\frac{1}{2}\left(\left(\log h\right)^2 + \left(\log\bar h\right)^2\right)\frac{dz}{z} = K + F(\xi) + \bar F(\bar\xi)$$
where $F(\xi) = \frac{1}{2}\phi_0^2 = \frac{1}{2}\left(\log\left(f_0a_0\right)\right)^2$. For a bilateral transfer function, $F(\xi) = \frac{1}{2}\left(\phi_0 + \log|z_s|\right)^2 + \sum_{r=1}^{\infty}\phi_r\left(\alpha_r + \beta_r\right)$.
For non-zero α, the α-divergence between a signal filter and the white noise is also obtained as
$$D^{(\alpha)}(1\|h) = \frac{1}{2\pi i\,\alpha^2}\oint_{|z|=1}\left\{\left(h\bar h\right)^{\alpha} - 1 - \alpha\left(\log h + \log\bar h\right)\right\}\frac{dz}{z} = \frac{1}{2\pi i}\oint_{|z|=1}\left(\frac{1}{2}\left(\log h + \log\bar h\right)^2 + \sum_{n=1}^{\infty}\frac{\alpha^n}{(n+2)!}\left(\log h + \log\bar h\right)^{n+2}\right)\frac{dz}{z} = D^{(0)}(1\|h) + O(\alpha) = K + F(\xi) + \bar F(\bar\xi) + O(\alpha).$$
When $f_0a_0$ is unity, the constant term in α of the α-divergence is the Kähler potential. This shows the relation between the α-divergence and the Kähler potential.
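This relation can be spot-checked numerically. For a unit-gain filter, $f_0a_0 = 1$ gives $F = 0$, so the 0-divergence from the all-pass filter should reduce to the Kähler potential; the sketch below (our illustration, assuming NumPy and an AR(1) example) compares the two:

```python
import numpy as np

# Sketch: for a unit-gain AR(1) filter, f0 a0 = 1 gives F = 0, so the 0-divergence
# from the all-pass filter reduces to the Kaehler potential: D^(0)(1||h) = K.
xi = 0.5 - 0.3j
N = 8192
theta = 2 * np.pi * np.arange(N) / N
log_h = -np.log(1 - xi * np.exp(-1j * theta))
L = 2 * np.real(log_h)                   # log h + conj(log h) on |z| = 1

D0 = np.mean(0.5 * L ** 2)               # (1/2 pi) \int (1/2) L^2 dtheta
r = np.arange(1, 200)
K = np.sum(np.abs(xi) ** (2 * r) / r ** 2)
print(D0, K)
assert abs(D0 - K) < 1e-10
```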
The α-connection on a Kähler manifold is expressed with the transfer function by using Equations (1) and (3). It can also be cross-checked from the α-divergence in the transfer function representation.
Corollary 3. The α-connection components of the Kählerian information geometry are found as
$$\begin{aligned}
\Gamma^{(\alpha)}_{ij,\bar k} &= \frac{1}{2\pi i}\oint_{|z|=1}\left(\partial_i\partial_j\log h - \alpha\,\partial_i\log h\,\partial_j\log h\right)\left(\partial_k\log h\right)^*\frac{dz}{z}\\
\Gamma^{(\alpha)}_{ij,k} &= \frac{1}{2\pi i}\oint_{|z|=1}\left(\partial_i\partial_j\log h - \alpha\,\partial_i\log h\,\partial_j\log h\right)\partial_k\log h\,\frac{dz}{z}\\
\Gamma^{(\alpha)}_{i\bar j,k} &= -\frac{\alpha}{2\pi i}\oint_{|z|=1}\partial_i\log h\left(\partial_j\log h\right)^*\partial_k\log h\,\frac{dz}{z}\\
\Gamma^{(\alpha)}_{i\bar j,\bar k} &= -\frac{\alpha}{2\pi i}\oint_{|z|=1}\partial_i\log h\left(\partial_j\log h\right)^*\left(\partial_k\log h\right)^*\frac{dz}{z}
\end{aligned}$$
where $h \equiv h(z;\xi)$,
and the non-trivial components of the symmetric tensor T are given by
$$T_{ij,\bar k} = \frac{1}{\pi i}\oint_{|z|=1}\partial_i\log h\,\partial_j\log h\left(\partial_k\log h\right)^*\frac{dz}{z},\qquad T_{ij,k} = \frac{1}{\pi i}\oint_{|z|=1}\partial_i\log h\,\partial_j\log h\,\partial_k\log h\,\frac{dz}{z}.$$
In particular, the non-vanishing 0-connection components are expressed with
$$\Gamma^{(0)}_{ij,\bar k} = \left(\Gamma^{(0)}_{\bar i\bar j,k}\right)^* = \frac{1}{2\pi i}\oint_{|z|=1}\partial_i\partial_j\log h\left(\partial_k\log h\right)^*\frac{dz}{z}$$
and the 0-connection is directly derived from the Kähler potential:
$$\Gamma^{(0)}_{ij,\bar k} = \partial_i\partial_j\partial_{\bar k}K.$$
Additionally, the α-connection and the (−α)-connection are dual to each other.
Proof. After plugging Equation (1) into Equation (3), the derivation of the α-connection is straightforward by considering holomorphic and anti-holomorphic derivatives in the expression. The same procedure is applied to the derivation of the symmetric tensor T.
The 0-connection is also directly derived from the Kähler potential. The proof is as follows:
$$\Gamma^{(0)}_{ij,\bar k} = \frac{1}{2\pi i}\oint_{|z|=1}\partial_i\partial_j\log h\left(\partial_k\log h\right)^*\frac{dz}{z} = \partial_i\partial_j\partial_{\bar k}\left(\frac{1}{2\pi i}\oint_{|z|=1}\log h\left(\log h\right)^*\frac{dz}{z}\right) = \partial_i\partial_j\partial_{\bar k}\left\|\log h\right\|_{H^2}^2 = \partial_i\partial_j\partial_{\bar k}K.$$
To prove the α-duality, we need to test the following relation:
$$\partial_\mu g_{\nu\rho} = \Gamma^{(\alpha)}_{\mu\nu,\rho} + \Gamma^{(-\alpha)}_{\mu\rho,\nu}$$
where the Greek indices run over $1,\cdots,n,\bar 1,\cdots,\bar n$. After a tedious but straightforward calculation, it can be seen that the above equation is satisfied for every combination of the indices. Therefore, the α-duality also exists on the Kählerian information manifolds.
The 0-connection and the symmetric tensor T are expressed in terms of ϕr and ϕ ¯ r,
$$\begin{aligned}
\Gamma^{(0)}_{ij,\bar k} &= \sum_{r=0}^{\infty}\partial_i\partial_j\phi_r\,\partial_{\bar k}\bar\phi_r, &\qquad \Gamma^{(0)}_{ij,k} &= \partial_i\partial_j\phi_0\,\partial_k\phi_0,\\
T_{ij,\bar k} &= 2\sum_{r,s=0}^{\infty}\partial_i\phi_r\,\partial_j\phi_s\,\partial_{\bar k}\bar\phi_{r+s}, &\qquad T_{ij,k} &= 2\,\partial_i\phi_0\,\partial_j\phi_0\,\partial_k\phi_0.
\end{aligned}$$
With the degree of freedom fixed such that $\phi_0$ is a constant in the model parameters ξ, the non-trivial components of the 0-connection and the symmetric tensor T are $\Gamma^{(0)}_{ij,\bar k}$ and $T_{ij,\bar k}$, respectively. In this gauge, the Hermitian condition on the metric tensor is obviously emergent, and it is also useful to check the α-duality condition for the non-vanishing components:
$$\partial_k g_{i\bar j} = \Gamma^{(\alpha)}_{ki,\bar j} + \Gamma^{(-\alpha)}_{k\bar j,i},\qquad \partial_{\bar k}g_{i\bar j} = \Gamma^{(\alpha)}_{\bar ki,\bar j} + \Gamma^{(-\alpha)}_{\bar k\bar j,i}.$$
We can cross-check these formulae for the geometric objects of the linear system geometry with the well-known results on a Kähler manifold. First of all, the fact that the 0-connection is the Levi–Civita connection can be verified as follows:
$$\Gamma^{(0)k}_{\ \ ij} = g^{k\bar m}\Gamma^{(0)}_{ij,\bar m} = g^{k\bar m}\partial_i\partial_j\partial_{\bar m}K = g^{k\bar m}\partial_i g_{j\bar m} = \left(\partial_i\log g_{m\bar n}\right)^{k}_{\ j} = \Gamma^{k}_{ij}$$
where the last equality, written in matrix notation for the metric tensor, is the well-known expression for the Levi–Civita connection on a Kähler manifold.
In Riemannian geometry, the Riemann curvature tensor, corresponding to the 0-curvature tensor, is given by
$$R^{\rho}_{\ \sigma\mu\nu} = \partial_\mu\Gamma^{\rho}_{\nu\sigma} - \partial_\nu\Gamma^{\rho}_{\mu\sigma} + \Gamma^{\rho}_{\mu\lambda}\Gamma^{\lambda}_{\nu\sigma} - \Gamma^{\rho}_{\nu\lambda}\Gamma^{\lambda}_{\mu\sigma}$$
where the Greek letters can be any holomorphic or anti-holomorphic indices. Similar to a Hermitian manifold, the non-vanishing components of the 0-curvature tensor on a Kähler manifold are $R^{l}_{\ k\bar ij}$ and its complex conjugate, i.e., the components with three holomorphic indices and one anti-holomorphic index (and the complex conjugate components). The non-trivial components of the Riemann curvature tensor are represented by
$$R^{(0)l}_{\ \ k\bar ij} = \partial_{\bar i}\Gamma^{l}_{jk} - \partial_j\Gamma^{l}_{\bar ik} + \Gamma^{l}_{\bar im}\Gamma^{m}_{jk} - \Gamma^{l}_{jm}\Gamma^{m}_{\bar ik} = \partial_{\bar i}\Gamma^{l}_{jk} = \partial_{\bar i}\left(g^{l\bar m}\partial_j\partial_k\partial_{\bar m}K\right) = \left(R^{(0)\bar l}_{\ \ \bar ki\bar j}\right)^*$$
because the 0-connection components with the mixed indices are vanishing.
Taking index contraction on holomorphic upper and lower indices in the Riemann curvature tensor, the 0-Ricci tensor is found as
$$R^{(0)}_{i\bar j} = R^{(0)k}_{\ \ i\bar jk} = -\partial_{\bar j}\partial_i\left(\log g_{m\bar n}\right)^{k}_{\ k} = -\partial_{\bar j}\partial_i\,\mathrm{tr}\left(\log g_{m\bar n}\right) = -\partial_{\bar j}\partial_i\log G$$
where G is the determinant of the metric tensor. This result is consistent with the expression of the Ricci tensor on a Kähler manifold. It is also straightforward to obtain the 0-scalar curvature by contracting the indices in the 0-Ricci tensor:
$$R^{(0)} = g^{i\bar j}R^{(0)}_{i\bar j} = -\frac{1}{2}\Delta\log G$$
where Δ is the Laplace–Beltrami operator on the Kähler manifold.
The α-generalization of the curvature tensor, the Ricci tensor and the scalar curvature is based on the α-connection, Equation (4). The α-curvature tensor is given by
$$R^{(\alpha)l}_{\ \ k\bar ij} = \partial_{\bar i}\Gamma^{(\alpha)l}_{\ \ jk} = \partial_{\bar i}\left(\Gamma^{(0)l}_{\ \ jk} - \frac{\alpha}{2}g^{l\bar m}T_{jk,\bar m}\right) = R^{(0)l}_{\ \ k\bar ij} - \frac{\alpha}{2}\partial_{\bar i}\left(g^{l\bar m}T_{jk,\bar m}\right).$$
The α-Ricci tensor and the α-scalar curvature are obtained as
$$R^{(\alpha)}_{i\bar j} = -\partial_{\bar j}\left(\Gamma^{(0)k}_{\ \ ik} - \frac{\alpha}{2}g^{k\bar l}T_{ik,\bar l}\right) = R^{(0)}_{i\bar j} + \frac{\alpha}{2}\partial_{\bar j}T_{i\ k}^{\ \,k},\qquad R^{(\alpha)} = R^{(0)} + \frac{\alpha}{2}g^{i\bar j}\partial_{\bar j}T_{i\ \rho}^{\ \,\rho}.$$
It is noteworthy that the α-curvature tensor, the α-Ricci tensor and the α-scalar curvature on a Kähler manifold have linear corrections in α, compared with the quadratic corrections in α on non-Kähler manifolds. A submanifold of a Kähler manifold is also a Kähler manifold. When a submanifold of dimension m exists, the transfer function of a linear system can be decomposed into two parts:
$$h(z;\xi) = h_{\|}(z;\xi_1,\cdots,\xi_m)\,h_{\perp}(z;\xi_{m+1},\cdots,\xi_n)$$
where h|| is the transfer function on the submanifold and h is the transfer function orthogonal to the submanifold. When it is plugged into Equation (16), the Kähler potential of the geometry is decomposed into three terms as follows:
$$K = \frac{1}{2\pi i}\oint_{|z|=1}\left(\log h_{\|} + \log h_{\perp}\right)\left(\log h_{\|} + \log h_{\perp}\right)^*\frac{dz}{z} = \frac{1}{2\pi i}\oint_{|z|=1}\log h_{\|}\,\overline{\log h_{\|}}\,\frac{dz}{z} + \frac{1}{2\pi i}\oint_{|z|=1}\log h_{\perp}\,\overline{\log h_{\perp}}\,\frac{dz}{z} + \left(\frac{1}{2\pi i}\oint_{|z|=1}\log h_{\|}\,\overline{\log h_{\perp}}\,\frac{dz}{z} + \mathrm{c.c.}\right) = K_{\|} + K_{\perp} + K_{\times}$$
where $K_{\|}$ contains the coordinates on the submanifold, $K_{\times}$ collects the cross-terms and $K_{\perp}$ depends only on the coordinates orthogonal to the submanifold.
It is obvious that each part in the decomposition of the Kähler potential provides the metric tensors for submanifolds,
$$g_{M\bar N} = \partial_M\partial_{\bar N}K_{\|},\qquad g_{M\bar n} = \partial_M\partial_{\bar n}K_{\times},\qquad g_{m\bar n} = \partial_m\partial_{\bar n}K_{\perp}$$
where an uppercase index is for the coordinates on the submanifold and a lowercase index is for the coordinates orthogonal to the submanifold. As we already know, the induced metric tensor for the submanifold is derived from $K_{\|}$, the Kähler potential of the submanifold. Based on this decomposition, it is also possible to use $K_{\|}$ as the Kähler potential of the submanifold, because it endows the submanifold with the same metric as K does. However, the Riemann curvature tensor and the Ricci tensor include mixing terms from the embedding into the ambient manifold, because the inverse metric tensor contains the orthogonal coordinates through the Schur complement. In statistical inference, connections, tensors and the scalar curvature play important roles. If those corrections are negligible, dimensional reduction to the submanifold is meaningful from the viewpoint not only of Kähler geometry, but also of statistical inference.
The benefits of introducing a Kähler manifold as an information manifold are as follows. First of all, on a Kähler manifold, the calculation of geometric objects, such as the metric tensor, the α-connection and the Ricci tensor, is simplified by using the Kähler potential. For example, the 0-connection on a non-Kähler manifold is given by
$$\Gamma^{(0)}_{ij,k} = \frac{1}{2}\left(\partial_i g_{kj} + \partial_j g_{ik} - \partial_k g_{ij}\right)$$
demanding three times more calculation steps than the Kähler case, Equation (18). Additionally, the Ricci tensor on a Kähler manifold is directly derived from the determinant of the metric tensor, whereas the Ricci tensor on a non-Kähler manifold needs more procedures. First, the connection should be calculated from the metric tensor. Then, the Riemann curvature tensor is obtained by differentiating the connection and collecting the quadratic terms in the connection. Finally, the Ricci tensor on the non-Kähler manifold is found by index contraction on the curvature tensor indices.
Secondly, the α-corrections on the Riemann curvature tensor, the Ricci tensor and the scalar curvature on a Kähler manifold are linear in α, whereas quadratic α-corrections arise in the non-Kähler cases. The α-linearity makes it much easier to understand the properties of the α-family.
Moreover, submanifolds in Kähler geometry are also Kähler manifolds. When a statistical model is reducible to its lower-dimensional models, the information geometry of the reduced statistical model is a submanifold of the geometry. If the ambient manifold is Kähler, the dimensional reduction also provides a Kähler manifold as the information geometry of the reduced model, and the submanifold is equipped with all of the properties of the Kähler manifold.
Lastly, finding the superharmonic priors suggested by Komaki [15] is more straightforward in the Kähler setup, because the Laplace–Beltrami operator on a Kähler manifold is of the more simplified form compared to that in non-Kähler cases. For a differentiable function ψ, the Laplace–Beltrami operator on a Kähler manifold is given by
$$\Delta\psi = 2g^{i\bar j}\partial_i\partial_{\bar j}\psi$$
compared with the Laplace–Beltrami operator on a non-Kähler manifold:
$$\Delta\psi = \frac{1}{\sqrt{G}}\,\partial_i\left(\sqrt{G}\,g^{ij}\partial_j\psi\right)$$
where G is the determinant of the metric tensor. On a Kähler manifold, the partial derivatives act only on the superharmonic prior functions. Meanwhile, the contributions from the derivatives acting on $\sqrt{G}$ and $g^{ij}$ must be considered in the non-Kähler cases. This computational overhead is absent on a Kähler manifold.
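The simplified operator can be made concrete with a small numerical sketch of $\Delta\psi = 2g^{i\bar j}\partial_i\partial_{\bar j}\psi$, evaluating Wirtinger derivatives by central finite differences (our illustration, not from the paper; the one-dimensional AR(1) metric $g = 1/(1-|\xi|^2)$, the test function and the step size are assumed for the demonstration):

```python
import numpy as np

# Sketch: the Kaehler Laplace-Beltrami operator Delta psi = 2 g^{i jbar} d_i dbar_j psi,
# with Wirtinger derivatives evaluated by central finite differences.
def wirtinger_hessian(f, xi, h=1e-4):
    """Mixed derivatives d_i dbar_j of a real-valued f of a complex vector xi."""
    n = len(xi)
    H = np.zeros((n, n), dtype=complex)
    for i in range(n):
        def d_i(x, i=i):                 # d_i f = (d/dx_i - i d/dy_i) f / 2
            ex, ey = np.zeros(n, complex), np.zeros(n, complex)
            ex[i], ey[i] = h, 1j * h
            return ((f(x + ex) - f(x - ex)) - 1j * (f(x + ey) - f(x - ey))) / (4 * h)
        for j in range(n):
            ex, ey = np.zeros(n, complex), np.zeros(n, complex)
            ex[j], ey[j] = h, 1j * h     # dbar_j g = (d/dx_j + i d/dy_j) g / 2
            H[i, j] = ((d_i(xi + ex) - d_i(xi - ex)) + 1j * (d_i(xi + ey) - d_i(xi - ey))) / (4 * h)
    return H

# One-dimensional AR(1) example: metric g = 1/(1 - |xi|^2), psi = 1 - |xi|^2,
# so Delta psi = 2 (1 - |xi|^2) * (-1) = -2 (1 - |xi|^2) < 0 (superharmonic).
xi0 = np.array([0.4 + 0.3j])
psi = lambda x: 1.0 - abs(x[0]) ** 2
g_inv = np.array([[1.0 - abs(xi0[0]) ** 2]])
lap = 2 * np.real(np.sum(g_inv * wirtinger_hessian(psi, xi0)))
print(lap)
assert abs(lap - (-2 * (1 - abs(xi0[0]) ** 2))) < 1e-5
```

Only the second derivatives of ψ are needed; no derivatives of the metric determinant enter, in contrast to the non-Kähler expression above.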

4. Example: AR, MA and ARMA Models

In the previous section, we showed that the information geometry of a signal filter is a Kähler manifold. From the viewpoint of signal processing, a time series model can be interpreted as a signal filter that transforms a randomized input x(z) into an output y(z). The geometry of a time series model can therefore be found by using the results in the previous section. In particular, we cover the AR, the MA and the ARMA models as examples.
First of all, the transfer functions of these time series models need to be identified. The transfer functions of the AR, the MA and the ARMA models with model parameters ξ = (σ, ξ1, ⋯, ξn) are represented by
$$h(z;\xi) = \frac{\sigma^2}{2\pi}\prod_{i=1}^{n}\left(1-\xi_iz^{-1}\right)^{c_i}$$
where $c_i = -1$ if $\xi_i$ is an AR pole and $c_i = 1$ if $\xi_i$ is an MA root.
The ARMA models can be considered as the fraction of two AR models or two MA models. By Lemma 1, the correspondence between the α-duality and the reciprocality of transfer functions is also valid for the ARMA(p, q) models. For example, the ARMA(p, q) model with α-connection is α-dual to the ARMA(q, p) model with the (−α)-connection under the reciprocality of the transfer function. Simply speaking, the AR model and the MA model are exchangeable by Lemma 1. The correspondence is given as follows:
$$\begin{array}{ccc}
\mathrm{ARMA}(p,q) & \leftrightarrow & \mathrm{ARMA}(q,p)\\
\text{poles} & \leftrightarrow & \text{zeros}\\
\text{zeros} & \leftrightarrow & \text{poles}\\
\sigma^2/2\pi & \leftrightarrow & 2\pi/\sigma^2\\
\alpha & \leftrightarrow & -\alpha\\
\Gamma^{(\alpha)} & \leftrightarrow & \Gamma^{(-\alpha)}\\
D^{(\alpha)}(h^{(0)}\|h) & \leftrightarrow & D^{(-\alpha)}(h^{(0)}\|h)
\end{array}$$
where h(0) is the unit transfer function of an all-pass filter.

4.1. Kählerian Information Geometry of ARMA(p, q) Models

The ARMA(p, q) model is the (p+q+1)-dimensional model with ξ = (σ, ξ1, ⋯, ξp+q), and the time series model is characterized by its transfer function:
$$h(z;\xi) = \frac{\sigma^2}{2\pi}\,\frac{\left(1-\xi_{p+1}z^{-1}\right)\left(1-\xi_{p+2}z^{-1}\right)\cdots\left(1-\xi_{p+q}z^{-1}\right)}{\left(1-\xi_1z^{-1}\right)\left(1-\xi_2z^{-1}\right)\cdots\left(1-\xi_pz^{-1}\right)}$$
where σ is the gain, $\xi_1,\cdots,\xi_p$ are the poles and $\xi_{p+1},\cdots,\xi_{p+q}$ are the MA roots, each with $|\xi_i| < 1$. The logarithmic transfer function of the ARMA(p, q) model is given by
$$\log h(z;\xi) = \log\frac{\sigma^2}{2\pi} + \sum_{i=1}^{p+q}c_i\log\left(1-\xi_iz^{-1}\right)$$
and it is easy to verify that $f_0a_0 = \sigma^2/2\pi$.
According to Theorem 1, the information geometry of the ARMA model is a Kähler manifold because of stability, minimum phase and the finite complex cepstrum norm of the ARMA filter. By using Theorem 2, the Hermitian condition on the metric tensor is explicitly checked on the submanifold of the ARMA model, where σ is a constant. In addition to that, this submanifold is also a Kähler manifold, because a submanifold of a Kähler manifold is also Kähler. Since it is possible to gauge σ by normalizing the amplitude of an input signal, the σ-coordinate can be considered as the denormalization coordinate [21]. Similar to the non-complexified ARMA models [12], g0i for all non-zero i vanish by direct calculation using Equation (2). Considering these facts, we work only with the submanifolds of a constant gain.
As mentioned, the Kähler potential is crucial for Kähler manifolds; it is defined as the square of the Hardy norm of the logarithmic transfer function, equivalently the square of the complex cepstrum norm, Equation (16). For the ARMA(p, q) model, the Kähler potential is given by
$$K = \sum_{r=1}^{\infty}\frac{1}{r^2}\left|\sum_{i=1}^{p+q}c_i\,\xi_i^r\right|^2.$$
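This closed form can be cross-checked against a numerically computed complex cepstrum. The sketch below (our illustration; it assumes NumPy and an ARMA(1,1) example with the gain set to one) evaluates both sides:

```python
import numpy as np

# Sketch: the ARMA(1,1) Kaehler potential K = sum_r (1/r^2)|sum_i c_i xi_i^r|^2,
# checked against the numerically computed complex cepstrum (gain set to one).
p, q = 0.7 + 0.1j, -0.3 + 0.4j           # AR pole (c = -1) and MA root (c = +1)
N = 4096
z = np.exp(2j * np.pi * np.arange(N) / N)
log_h = np.log(1 - q / z) - np.log(1 - p / z)   # sum_i c_i log(1 - xi_i z^{-1})

phi = np.fft.ifft(log_h)                 # cepstrum coefficients phi_r
r = np.arange(1, 100)
K_closed = np.sum(np.abs(p ** r - q ** r) ** 2 / r ** 2)
K_numeric = np.sum(np.abs(phi) ** 2)
print(K_numeric, K_closed)
assert abs(K_numeric - K_closed) < 1e-10
```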
Since the metric tensor is simply derived from taking the partial derivatives on the Kähler potential, Equation (15), the metric tensor of the ARMA(p, q) model is represented as
$$g_{i\bar j} = \frac{c_ic_j}{1-\xi_i\bar\xi_j}$$
where all other purely holomorphic- and purely anti-holomorphic-indexed components are zero. It is easily verified that if ci and cj are both from the AR part or both from the MA part, ci and cj have the same signature, which imposes that the AR(p)- and MA(q)-submanifolds of the ARMA(p, q) model have the same metric tensors as the AR(p) and MA(q) models, respectively. If the two indices are from different parts, there is only a sign difference in the metric tensor. The metric tensor of the geometry is of a similar form to the metric tensor in Ravishanker et al.'s work on the ARMA geometry [12].
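The closed-form metric can be cross-checked against its contour-integral definition by sampling the unit circle. A sketch (our illustration; the specific pole and root values are arbitrary choices):

```python
import numpy as np

# Sketch: the closed-form ARMA metric g_{i jbar} = c_i c_j / (1 - xi_i conj(xi_j)),
# cross-checked against the integral (1/2pi) \int d_i log h (d_j log h)* dtheta.
xi = np.array([0.7 + 0.1j, -0.3 + 0.4j])   # xi_1: AR pole (c = -1), xi_2: MA root (c = +1)
c = np.array([-1.0, 1.0])
N = 8192
z = np.exp(2j * np.pi * np.arange(N) / N)

d_log_h = np.array([-c[i] / (z - xi[i]) for i in range(2)])  # d_i log h on |z| = 1
g_numeric = d_log_h @ np.conj(d_log_h.T) / N                 # angular average
g_closed = np.outer(c, c) / (1 - np.outer(xi, np.conj(xi)))
print(g_numeric.round(6))
assert np.allclose(g_numeric, g_closed, atol=1e-10)
```

The off-diagonal entries come out negative here because the two indices mix the AR and MA parts ($c_1c_2 = -1$).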
By considering the Schur complement, the inverse metric tensor can be deduced from the inverse metric tensor of the AR(p+q) model. The inverse metric tensor of the geometry is represented by
$$g^{i\bar j} = c_ic_j\,\frac{\left(1-\xi_i\bar\xi_j\right)\prod_{k\neq i}\left(1-\xi_k\bar\xi_j\right)\prod_{k\neq j}\left(1-\xi_i\bar\xi_k\right)}{\prod_{k\neq i}\left(\xi_k-\xi_i\right)\prod_{k\neq j}\left(\bar\xi_k-\bar\xi_j\right)}$$
and the only difference with the AR case is the signature cicj in the AR-MA mixed components. With the sign difference in the metric tensor components with the AR-MA mixed indices, the determinant of the metric tensor can be calculated with the aid of the Schur complement. The determinant of the metric tensor is found as
$$G = \det g_{i\bar j} = \frac{\prod_{1\le j<k\le n}\left|\xi_k-\xi_j\right|^2}{\prod_{j,k}\left(1-\xi_j\bar\xi_k\right)}.$$
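This determinant formula (a Cauchy-type determinant) is easy to verify numerically against `numpy.linalg.det`; the AR-type parameter values below are illustrative assumptions:

```python
import numpy as np

# Sketch: determinant formula G = prod_{j<k} |xi_k - xi_j|^2 / prod_{j,k} (1 - xi_j conj(xi_k))
# for the metric g_{i jbar} = 1/(1 - xi_i conj(xi_j)) (AR-type, c_i c_j = 1).
xi = np.array([0.5 + 0.2j, -0.4 + 0.1j, 0.1 - 0.6j])
n = len(xi)
g = 1.0 / (1 - np.outer(xi, np.conj(xi)))

num = np.prod([abs(xi[k] - xi[j]) ** 2 for j in range(n) for k in range(j + 1, n)])
den = np.real(np.prod(1 - np.outer(xi, np.conj(xi))))
G = np.real(np.linalg.det(g))
print(G, num / den)
assert abs(G - num / den) < 1e-10
```

The mixed ARMA signs cancel in the determinant since they enter as $(\prod_i c_i)^2 = 1$.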
The 0-connection and the symmetric tensor T for the Kähler-ARMA model can be found from the results in the previous section. The non-trivial 0-connection components are calculated from Equation (18):
$$\Gamma^{(0)}_{ij,\bar k} = \frac{c_jc_k\,\delta_{ij}\,\bar\xi_k}{\left(1-\xi_j\bar\xi_k\right)^2}$$
and the non-zero components of the symmetric tensor T are given by Equation (17):
$$T_{ij,\bar k} = -\frac{2\,c_ic_jc_k\,\bar\xi_k}{\left(1-\xi_i\bar\xi_k\right)\left(1-\xi_j\bar\xi_k\right)}.$$
Based on the above expressions, the α-connection is easily obtained from Equation (4).
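Both closed forms can be validated by sampling their defining contour integrals on the unit circle. In the sketch below (our illustration; parameter values are arbitrary), the overall sign of T follows from the series expansion of $\partial_i\log h = -c_i/(z-\xi_i)$:

```python
import numpy as np

# Sketch: contour-integral check of the ARMA(1,1) closed forms for the
# 0-connection and the symmetric tensor T (xi_1: AR pole, c_1 = -1;
# xi_2: MA root, c_2 = +1). Integrals are angular averages on |z| = 1.
xi = np.array([0.6 + 0.2j, -0.2 + 0.3j])
c = np.array([-1.0, 1.0])
N = 1 << 14
z = np.exp(2j * np.pi * np.arange(N) / N)

d = np.array([-c[i] / (z - xi[i]) for i in range(2)])        # d_i log h
dd = np.array([-c[i] / (z - xi[i]) ** 2 for i in range(2)])  # d_i d_i log h

# Gamma^(0)_{jj,kbar} = c_j c_k conj(xi_k) / (1 - xi_j conj(xi_k))^2
for j in range(2):
    for k in range(2):
        gamma = np.mean(dd[j] * np.conj(d[k]))
        closed = c[j] * c[k] * np.conj(xi[k]) / (1 - xi[j] * np.conj(xi[k])) ** 2
        assert abs(gamma - closed) < 1e-10

# T_{11,2bar} = -2 c_1 c_1 c_2 conj(xi_2) / (1 - xi_1 conj(xi_2))^2
T = 2 * np.mean(d[0] * d[0] * np.conj(d[1]))
T_closed = -2 * c[0] ** 2 * c[1] * np.conj(xi[1]) / (1 - xi[0] * np.conj(xi[1])) ** 2
print(abs(T - T_closed))
assert abs(T - T_closed) < 1e-10
```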
The 0-Ricci tensor of the ARMA geometry is represented by Equation (19):
$$R^{(0)}_{i\bar j} = -\frac{1}{\left(1-\xi_i\bar\xi_j\right)^2}$$
and it is noteworthy that the Ricci tensor is not dependent on ci. The 0-scalar curvature is calculated from the 0-Ricci tensor by index contraction:
$$R^{(0)} = -\sum_{i,j}c_ic_j\,\frac{\prod_{k\neq i}\left(1-\xi_k\bar\xi_j\right)\prod_{k\neq j}\left(1-\xi_i\bar\xi_k\right)}{\left(1-\xi_i\bar\xi_j\right)\prod_{k\neq i}\left(\xi_k-\xi_i\right)\prod_{k\neq j}\left(\bar\xi_k-\bar\xi_j\right)}$$
where ci, cj are from the inverse metric tensor of the ARMA model.
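The Ricci tensor formula $R^{(0)}_{i\bar j} = -\partial_i\partial_{\bar j}\log G$ can be checked numerically by finite differences in the real and imaginary parts of the coordinates. A sketch for the AR(2) case (our illustration; the step size and test point are arbitrary choices):

```python
import numpy as np

# Sketch: check R^(0)_{i jbar} = -d_i dbar_j log G = -1/(1 - xi_i conj(xi_j))^2
# for the AR(2) metric, with Wirtinger derivatives taken by central finite
# differences over the real coordinates (x1, y1, x2, y2).
def logG(x):
    xi = np.array([x[0] + 1j * x[1], x[2] + 1j * x[3]])
    g = 1.0 / (1 - np.outer(xi, np.conj(xi)))
    return float(np.log(np.real(np.linalg.det(g))))

def second(f, x, a, b, h=1e-4):          # d^2 f / dx_a dx_b
    ea, eb = np.eye(4)[a] * h, np.eye(4)[b] * h
    return (f(x + ea + eb) - f(x + ea - eb) - f(x - ea + eb) + f(x - ea - eb)) / (4 * h * h)

x = np.array([0.5, 0.2, -0.3, 0.4])      # xi_1 = 0.5 + 0.2i, xi_2 = -0.3 + 0.4i
xi = np.array([x[0] + 1j * x[1], x[2] + 1j * x[3]])
for i in range(2):
    for j in range(2):
        # d_i dbar_j = ((dx_i dx_j + dy_i dy_j) + i (dx_i dy_j - dy_i dx_j)) / 4
        mixed = (second(logG, x, 2*i, 2*j) + second(logG, x, 2*i+1, 2*j+1)
                 + 1j * (second(logG, x, 2*i, 2*j+1) - second(logG, x, 2*i+1, 2*j))) / 4
        ricci = -mixed
        closed = -1.0 / (1 - xi[i] * np.conj(xi[j])) ** 2
        assert abs(ricci - closed) < 1e-5
print("Ricci tensor check passed")
```

The Vandermonde factors in G are holomorphic times anti-holomorphic, so only the $\prod(1-\xi_j\bar\xi_k)$ part contributes to the mixed derivative.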
It is straightforward to derive the α-generalization of the Riemann curvature tensor, the Ricci tensor and the scalar curvature by using the results in Section 3.

4.2. Superharmonic Priors for Kähler-ARMA(p, q) Models

As mentioned before, the Laplace–Beltrami operator on a Kähler manifold is of a much simpler form than that on a non-Kähler manifold. The simplified Laplace–Beltrami operator makes finding superharmonic priors easier. Although the approach is valid in arbitrary dimensions, let us confine ourselves to the ARMA(1, 1) model for simplicity. For the ARMA(1, 1) model, the metric tensor is expressed with
$$g_{i\bar j} = \begin{pmatrix}\dfrac{1}{1-|\xi_1|^2} & -\dfrac{1}{1-\xi_1\bar\xi_2}\\[2mm] -\dfrac{1}{1-\xi_2\bar\xi_1} & \dfrac{1}{1-|\xi_2|^2}\end{pmatrix}.$$
It is trivial to show that ψ1 = (1 − |ξ1|2) + (1 − |ξ2|2) and ψ2 = (1 − |ξ1|2)(1 − |ξ2|2) are superharmonic prior functions.
In order to compare with the literature on superharmonic priors for the non-Kählerian AR models [30,31], let us consider the Kähler-AR(p) models. For p = 2, the metric tensor is given by
$$g_{i\bar j} = \begin{pmatrix}\dfrac{1}{1-|\xi_1|^2} & \dfrac{1}{1-\xi_1\bar\xi_2}\\[2mm] \dfrac{1}{1-\xi_2\bar\xi_1} & \dfrac{1}{1-|\xi_2|^2}\end{pmatrix}.$$
With the Laplace–Beltrami operator on a Kähler manifold, it is obvious that $(1-|\xi_k|^2)$ for $k = 1,\cdots,p$ is a superharmonic function on the p-dimensional AR geometry. The proof of superharmonicity is as follows:
$$\Delta\left(1-|\xi_k|^2\right) = 2g^{i\bar j}\partial_i\partial_{\bar j}\left(1-|\xi_k|^2\right) = -2g^{i\bar j}\delta_{ik}\delta_{jk} = -2g^{k\bar k} < 0$$
because the diagonal components of the inverse metric tensor are all positive. By additivity, the sum of these prior functions, $\sum_{k=1}^{p}\left(1-|\xi_k|^2\right)$, is also superharmonic. Obviously, $\psi_1 = (1-|\xi_1|^2)+(1-|\xi_2|^2)$ is a superharmonic prior function in the two-dimensional case.
Another superharmonic prior function for the AR(2) model is $\psi_2 = (1-|\xi_1|^2)(1-|\xi_2|^2)$. The Laplace–Beltrami operator acting on $\psi_2$ yields
$$\frac{\Delta\psi_2}{\psi_2} = -\frac{2\left(2-\xi_1\bar\xi_2-\xi_2\bar\xi_1\right)}{\left|\xi_1-\xi_2\right|^2}$$
and it is simply verified that $\Delta\psi_2/\psi_2 < 0$, because $2-\xi_1\bar\xi_2-\xi_2\bar\xi_1 > 0$. In addition to that, since $\psi_2$ is positive, $\psi_2 = (1-|\xi_1|^2)(1-|\xi_2|^2)$ is a superharmonic prior function.
Additionally, it is found that $\psi_3 = \left(1-\xi_1\bar\xi_2\right)\left(1-\xi_2\bar\xi_1\right)\left(1-|\xi_1|^2\right)\left(1-|\xi_2|^2\right)$ is also a superharmonic prior function. The Laplace–Beltrami operator acting on this prior function gives
$$\frac{\Delta\psi_3}{\psi_3} = -\frac{6}{G}\,\frac{\left|\xi_1-\xi_2\right|^2}{\left(1-\xi_1\bar\xi_2\right)\left(1-\xi_2\bar\xi_1\right)\left(1-|\xi_1|^2\right)\left(1-|\xi_2|^2\right)} = -6$$
and it is straightforward that $\psi_3$ is superharmonic, because $\psi_3$ is positive. This prior function is similar to the prior function found in the literature [30,31]. If the prior function is represented in the complexified coordinates, the prior function is $(1-|\xi_1|^2)$, because the two coordinates in that work are complex conjugate to each other.
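The constant eigenvalue property of ψ₃ can be checked numerically. The sketch below (our illustration; it reads ψ₃ as the product of the four factors given above and assumes NumPy plus a finite-difference step for the Wirtinger derivatives):

```python
import numpy as np

# Sketch: for the Kaehler-AR(2) metric, verify numerically that
# psi_3 = (1 - xi1 conj(xi2))(1 - xi2 conj(xi1))(1 - |xi1|^2)(1 - |xi2|^2)
# satisfies Delta psi_3 = -6 psi_3 (Wirtinger derivatives by finite differences).
def psi3(x):
    xi1, xi2 = x[0] + 1j * x[1], x[2] + 1j * x[3]
    val = (1 - xi1 * np.conj(xi2)) * (1 - xi2 * np.conj(xi1)) \
          * (1 - abs(xi1) ** 2) * (1 - abs(xi2) ** 2)
    return float(np.real(val))

def second(f, x, a, b, h=1e-4):          # d^2 f / dx_a dx_b, central differences
    ea, eb = np.eye(4)[a] * h, np.eye(4)[b] * h
    return (f(x + ea + eb) - f(x + ea - eb) - f(x - ea + eb) + f(x - ea - eb)) / (4 * h * h)

x = np.array([0.5, 0.1, -0.2, 0.3])
xi = np.array([x[0] + 1j * x[1], x[2] + 1j * x[3]])
g_inv = np.linalg.inv(1.0 / (1 - np.outer(xi, np.conj(xi))))  # inverse AR(2) metric

lap = 0.0
for i in range(2):
    for j in range(2):
        mixed = (second(psi3, x, 2*i, 2*j) + second(psi3, x, 2*i+1, 2*j+1)
                 + 1j * (second(psi3, x, 2*i, 2*j+1) - second(psi3, x, 2*i+1, 2*j))) / 4
        lap += 2 * g_inv[j, i] * mixed      # Delta psi = 2 g^{i jbar} d_i dbar_j psi
ratio = np.real(lap) / psi3(x)
print(ratio)
assert abs(ratio + 6) < 1e-4
```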
To obtain superharmonic priors, the superharmonic prior functions found above are multiplied by the Jeffreys prior, which is the volume form of the information manifold. The resulting superharmonic priors outperform the Jeffreys prior [15].

5. Conclusion

In this paper, we prove that the information geometry of a signal filter with a finite complex cepstrum norm is a Kähler manifold. Conditions on the transfer function of the filter make the Hermitian structure explicit. The first condition for the Kählerian information manifold is whether the product of the zeroth-degree terms in z of the unilateral part and the analytic part of the transfer function decomposition is a constant in the model parameters. The second condition is whether the coefficient of the highest degree in z is a constant in the model parameters. These two conditions are equivalent to each other for some transfer functions.
It is also found that the square of the Hardy norm of a logarithmic transfer function is the Kähler potential of the information geometry. It is also known as the unweighted complex cepstrum norm of a linear system. Using the Kähler potential, it is easy to derive the geometric objects, such as the metric tensor, the α-connection and the Ricci tensor. Additionally, the Kähler potential is a constant term in α of the α-divergence, i.e., it is related to the 0-divergence.
The Kählerian information geometry for signal processing is not only mathematically interesting, but also computationally practical. Contrary to non-Kähler manifolds, where tedious and lengthy calculations are needed to obtain the tensors, it is relatively easy to calculate the metric tensor, the connection and the Ricci tensor on a Kähler manifold. Taking derivatives of the Kähler potential provides the metric tensor and the connection on a Kähler manifold. The Ricci tensor is obtained from the determinant of the metric tensor. Moreover, the α-generalization of the curvature tensor, the Ricci tensor and the scalar curvature is linear in α, whereas non-linear corrections exist in the non-Kähler cases. Additionally, since the Laplace–Beltrami operator in Kähler geometry is of a simpler form, it is more straightforward to find superharmonic priors.
The information geometries of the AR, the MA and the ARMA models, the most well-known time series models, are Kähler manifolds. The metric tensors, the connections and the divergences of the linear system geometries are derived from the Kähler potentials with simplified calculations. In addition to that, the superharmonic priors for those models are found with much less computational effort.

Acknowledgments

We are thankful to Robert J. Frey and Michael Tiano for useful discussions.

Author Contributions

Both authors contributed equally to the main idea. The research was conducted by both authors. Both authors wrote the paper. Both authors have read and approved the final manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Rao, C.R. Information and accuracy attainable in the estimation of statistical parameters. Bull. Calcutta Math. Soc. 1945, 37, 81–89. [Google Scholar]
  2. Jeffreys, H. An invariant form for the prior probability in estimation problems. Proc. R. Soc. London Ser. A 1946, 196, 453–461. [Google Scholar]
  3. Efron, B. Defining the curvature of a statistical problem (with applications to second order efficiency). Ann. Stat. 1975, 3, 1189–1217. [Google Scholar]
  4. Amari, S. Differential-Geometrical Methods in Statistics; Springer: Berlin and Heidelberg, Germany, 1990. [Google Scholar]
  5. Matsuyama, Y. The α-EM algorithm: Surrogate likelihood maximization using α-Logarithmic information measures. IEEE Transactions on Information Theory 2003, 49, 692–706. [Google Scholar]
  6. Matsuyama, Y. Hidden Markov model estimation based on alpha-EM algorithm: Discrete and continuous alpha-HMMs, Proceedings of International Conference on Neural Networks, San Jose, CA, USA, 31 July–5 August 2011; pp. 808–816.
  7. Brody, D.C.; Hughston, L.P. Interest rates and information geometry. Proc. R. Soc. Lond. A 2001, 457, 1343–1363. [Google Scholar]
  8. Janke, W.; Johnston, D.A.; Kenna, R. Information geometry and phase transitions. Physica A 2004, 336, 181–186. [Google Scholar]
  9. Zanardi, P.; Giorda, P.; Cozzini, M. Information-theoretic differential geometry of quantum phase transitions. Phys. Rev. Lett. 2007, 99, 100603. [Google Scholar]
  10. Heckman, J.J. Statistical inference and string theory. arXiv 2013, arXiv:1305.3621.
  11. Arwini, K.; Dodson, C.T.J. Information Geometry: Near Randomness and Near Independence; Springer: Berlin and Heidelberg, Germany, 2008. [Google Scholar]
  12. Ravishanker, N.; Melnick, E.L.; Tsai, C. Differential geometry of ARMA models. J. Time Ser. Anal. 1990, 11, 259–274. [Google Scholar]
  13. Ravishanker, N. Differential geometry of ARFIMA processes. Commun. Stat. Theory Methods 2001, 30, 1889–1902. [Google Scholar]
  14. Barbaresco, F. Information intrinsic geometric flows. AIP Conf. Proc. 2006, 872, 211–218. [Google Scholar]
  15. Komaki, F. Shrinkage priors for Bayesian prediction. Ann. Stat. 2006, 34, 808–819. [Google Scholar]
  16. Barndorff-Nielsen, O.E.; Jupp, P.E. Statistics, yokes and symplectic geometry. Annales de la faculté des sciences de Toulouse 6 série 1997, 6, 389–427. [Google Scholar]
  17. Barbaresco, F. Information geometry of covariance matrix: Cartan-Siegel homogeneous bounded domains, Mostow/Berger fibration and Fréchet median. In Matrix Information Geometry; Bhatia, R., Nielsen, F., Eds.; Springer: Berlin and Heidelberg, Germany, 2012; pp. 199–256. [Google Scholar]
  18. Barbaresco, F. Koszul information geometry and Souriau geometric temperature/capacity of Lie group thermodynamics. Entropy. 2014, 16, 4521–4565. [Google Scholar]
  19. Zhang, J.; Li, F. Symplectic and Kähler structures on statistical manifolds induced from divergence functions. Geom. Sci. Inf. 2013, 8085, 595–603. [Google Scholar]
  20. Amari, S. Differential geometry of a parametric family of invertible linear systems—Riemannian metric, dual affine connections and divergence. Math. Syst. Theory 1987, 20, 53–82. [Google Scholar]
  21. Amari, S.; Nagaoka, H. Methods of information geometry; Oxford University Press: Oxford, UK, 2000. [Google Scholar]
  22. Amari, S. α-divergence is unique, belonging to both f-divergence and Bregman divergence classes. IEEE Trans. Inf. Theory 2009, 55, 4925–4931. [Google Scholar]
  23. Zhang, K.; Mullhaupt, A.P. Hellinger distance and information distance, 2015; in preparation.
  24. Bogert, B.; Healy, M.; Tukey, J. The quefrency alanysis of time series for echoes: cepstrum, pseudo-autocovariance, cross-cepstrum and saphe cracking, Proceedings of the Symposium on Time Series Analysis, Brown University, Providence, RI, USA, 11–14 June 1963; pp. 209–243.
  25. Martin, R. J. A metric for ARMA processes. IEEE Trans. Signal Process. 2000, 48, 1164–1170. [Google Scholar]
  26. Cima, J.A.; Matheson, A.L.; Ross, W.T. The Cauchy Transform; American Mathematical Society: Providence, RI, USA, 2006. [Google Scholar]
  27. Oppenheim, A.V. Superposition in a class of nonlinear systems. Ph.D. Thesis, Massachusetts Institute of Technology, Cambridge, MA, USA, 1965. [Google Scholar]
  28. Beurling, A. On two problems concerning linear transformations in Hilbert space. Acta Math. 1949, 81, 239–255. [Google Scholar]
  29. Nakahara, M. Geometry, Topology and Physics; Institute of Physics Publishing: Bristol, UK and Philadelphia, PA, USA, 2003. [Google Scholar]
  30. Tanaka, F.; Komaki, F. A superharmonic prior for the autoregressive process of the second order. J. Time Ser. Anal. 2008, 29, 444–452. [Google Scholar]
  31. Tanaka, F. Superharmonic Priors for Autoregressive Models; Mathematical Engineering Technical Reports; University of Tokyo: Tokyo, Japan, 2009. [Google Scholar]
