Article

Transversality Conditions for Geodesics on the Statistical Manifold of Multivariate Gaussian Distributions

Department of Computer Engineering and Sciences, Florida Institute of Technology, 150 W. University Blvd., Melbourne, FL 32940, USA
*
Author to whom correspondence should be addressed.
Entropy 2022, 24(11), 1698; https://doi.org/10.3390/e24111698
Submission received: 30 September 2022 / Revised: 16 November 2022 / Accepted: 18 November 2022 / Published: 21 November 2022
(This article belongs to the Special Issue Information and Divergence Measures)

Abstract

We consider the problem of finding the closest multivariate Gaussian distribution on a constraint surface of all Gaussian distributions to a given distribution. Previous research regarding geodesics on the multivariate Gaussian manifold has focused on finding closed-form, shortest-path distances between two fixed distributions on the manifold, often restricting the parameters to obtain the desired solution. We demonstrate how to employ the techniques of the calculus of variations with a variable endpoint to search for the closest distribution from a family of distributions generated via a constraint set on the parameter manifold. Furthermore, we examine the intermediate distributions along the learned geodesics which provide insight into uncertainty evolution along the paths. Empirical results elucidate our formulations, with visual illustrations concretely exhibiting dynamics of 1D and 2D Gaussian distributions.

1. Introduction

The importance of a metric to measure the similarity between two distributions has weaved itself into a plethora of applications. Fields concerning statistical inference [1,2,3], model selection [4,5,6] and machine learning have found it necessary to quantify the likeness of two distributions. A common approach to measure this similarity is to define a distance or divergence between distributions using the tenets of information geometry, e.g., the Fisher–Rao distance or the f-divergence [7], respectively. To the best of our knowledge, research and results in information geometry have predominantly focused on establishing similarities between two given distributions. Here, we consider an important class of problems where one or both endpoint distributions are not fixed, but instead constrained to live on a subset of the parameter manifold.
When one relaxes the fixed endpoint requirements, the development of finding the shortest path between a given distribution and a constraint surface (rather than a single distribution) must be reconsidered using transversality conditions [8,9] for the standard length-minimizing functional. This is precisely the focus of the present work, where we derive the transversality conditions for working in the Riemannian space of multivariate Gaussian distributions. This approach opens new avenues for research and application areas where one no longer needs to provide the ending distribution but rather a description of the constraint set where the most similar model must be discovered. For example, applications such as domain adaptation [10,11,12] can be reformulated such that the optimal target domain distribution is discovered among a constraint family starting from a known source distribution estimated from training data, or even model selection [4,5,6,13,14], where a search in a constrained family of distributions may be better aligned with the desired objective versus evaluating the MLE fit penalized by parameter cardinality.
In this work, we have purposefully chosen to work in the natural parameter space of multivariate Gaussians (mean vector, μ , and covariance matrix, Σ) and address the problem formulation in a completely Riemannian context. We are aware of the usual dually flat constructions [2,15,16] afforded by information geometry using divergence measures and the Legendre transformation. Though elegant in their algebraic constructions, these alternate parameterizations have yet to be employed in most statistical and machine-learning applications. Hence, we develop all geometric motivations and consequential mathematical derivations using Riemannian geometry and the natural parameter space, i.e., we employ the Fisher information matrix metric tensor and find the length-minimizing curve to the constraint set.
The remainder of this paper is organized as follows. In Section 2, we give a summary of important results concerning geodesics on manifolds. In Section 3, we provide a brief introduction to the techniques of calculus of variations with the goal of developing the Euler-Lagrange equations necessary to find the shortest path between two multivariate Gaussian distributions. In this section, we restrict ourselves to problems in which both the initial and final distributions are known exactly. Following this, in Section 3.3, we develop the conditions required to satisfy transversality constraints, or constraints where either the initial and/or final distribution are determined from those residing on a defined subsurface of the manifold rather than being exactly known. The results from these sections are employed in Section 4, where we explore various constraint surfaces and numerical experiments to demonstrate the utility of our variable-endpoint framework. Finally, some closing perspectives and remarks are given in Section 5.

2. Related Works

Most efforts towards measuring distances on statistical manifolds build on the foundation started by Fisher in [17], in which he introduces the idea of the information matrix. In [18], Kullback and Leibler published a pioneering effort to describe this distance. Works such as [19,20] endowed statistical distributions with geometrical properties. However, it was Rao [21] who expanded on the ideas of Fisher and defined a metric for statistical models based on the information matrix. The information matrix satisfies the conditions of a metric on a Riemannian statistical manifold, and is widely used because of its invariance [22]. This connection between distance and distributions encouraged others to explore the distance between specific families of distributions [23]. These families include special cases of the multivariate normal model [24], the negative binomial distribution [25], the gamma distribution [26,27], and the Poisson distribution [28], among others.
In [29], the authors offer a detailed exploration of geodesics on a multivariate Gaussian manifold. They show that there exists a geodesic connecting any two distributions on a Gaussian manifold. However, a closed-form solution for the most general case remains an open problem.
In [30] and expanded on in [31], the authors offer a very detailed discussion, focusing primarily on the univariate normal distribution for which a closed-form solution for the Fisher–Rao distance is known. Here, the authors focus on a geometrical approach, abandoning the “proposition-proof” format offered in previous research. With this geometric approach, closed-form solutions to various special cases are derived: univariate Gaussian distributions, isotropic Gaussian distributions, and Gaussian distributions with diagonal covariance matrix.
Another novel application of geodesics on a Gaussian statistical manifold is explored in [32], where the authors use information geometry for shape analysis. Shapes were represented using a K-component Gaussian Mixture Model, with the number of components being the same for each shape. With this, each shape occupied a unique point on a common statistical manifold. Upon mapping two shapes to their points on this manifold, the authors use an iterative approach to calculate the geodesic between these two points, with the length of the geodesic offering a measure of similarity of the shapes. Furthermore, because of the iterative approach to solving for the geodesic, all intermediate points along the path are revealed. These points can be mapped to their own unique shapes, essentially showing the evolution from one shape to another. This shape deformation exhibits the benefit of analyzing more than just the distance between points on a manifold, showing that the "walk" along the path has real substance.
In [33], the authors explore the complexity of Gaussian geodesic paths, with the ultimate goal of relating the complexity of a geodesic path on a manifold to the correlation of the variables labeling its macroscopic state. Specifically, the authors show that, if there is a dependence between the variables, the complexity of the geodesic path decreases. Complexity, for these purposes, is defined as the volume of the manifold traversed by the geodesic connecting a known initial state to a well-defined future state. It is shown that this volume decays by a power law at a rate that is determined by the correlation between the variables on the Gaussian manifold.
In [34], the authors use the geometry of statistical manifolds to study how the quantum characteristics of a system are affected by its statistical properties. Similar to our work, the authors prescribe an initial distribution on the manifold of Gaussians and examine the geodesics emanating from it, without dictating a specific terminating distribution. The authors show that these paths tend to terminate at distributions that minimize Shannon entropy. However, unlike our work, these paths are free to roam on the manifold and are not required to terminate on a specific surface on the manifold. Furthermore, the most relevant part of the authors' work considers only univariate Gaussians with a two-dimensional parameter manifold, without ever considering higher dimensions.
Though we have chosen to work with Riemannian geometry, it is worth mentioning that information geometry often employs dualistic geometries that can be established using divergence measures. In [35], the authors detail the use of divergence measures to obtain the dual coordinates for the space of multivariate Gaussians. However, they point out that the choice of divergence measure is not unique and the resulting geometries lack the same interpretative power as the natural parameterization.
Though these previous works operate in the space of multivariate Gaussians and derive geodesics therein, they all require defining the initial and terminal distributions on the manifold. In this work, we address the novel problem of finding geodesics when the terminal conditions are hypersurface constraints rather than a single point. Technically, these transversality conditions are variable boundary conditions placed on the initial and final distributions, requiring them to reside on a parametric surface typically defined by constraining the coordinates. The usefulness of these variable boundary conditions has emerged in many areas, including physics [36], in which the author studied wetting phenomena on rough surfaces, and in [37], where the authors studied the elasticity of materials. Additionally, in [8,38,39], transversality conditions were employed in economic optimal control problems with a free-time terminal condition. However, as practical as transversality conditions have been in the above fields, their application in the information geometry literature is lacking.

3. Geodesics for Fixed-Endpoint Problems

We begin by briefly developing standard calculus of variation results for discovering the shortest path between fixed points on a differentiable manifold. Then, the result is applied to the space of multivariate Gaussians, with detailed derivations for the case of bivariate Gaussians. We finally extend the formulation to show how to incorporate variable-endpoint boundary conditions.

3.1. General Euler-Lagrange Equations

Among differential calculus' many applications are problems regarding finding the maxima and minima of functions. Analogously, techniques of the calculus of variations operate on functionals, which are mappings from a space of functions to its underlying field of scalars. Considering the functional $L[y]$, the typical formulation of a calculus of variations problem is
$$\min\; L[y] = \int_{x_0}^{x_1} F(x, y, \dot{y})\,dx, \qquad y(x_0) = y_0, \quad y(x_1) = y_1,$$
where the initial and terminal values are defined as $(x_0, y_0)$ and $(x_1, y_1)$, respectively, and $\dot{y}$ denotes the derivative of $y$ with respect to $x$. In general, $y$ can be a vector of functions dependent on $x$, a vector of independent variables. The theory behind finding the extrema of problems such as these is analogous to single-variable calculus, in which we use a vanishing first derivative to locate critical points. Here, we locate the extremal functions using functional derivatives, leading to solving the Euler-Lagrange equations outlined in Section 3.2.
Though the formulation of finding the shortest path as a calculus of variations problem is rather elementary, its solution is involved. Obviously, in Euclidean geometry, this path is a straight line, and its distance is easily found. However, moving these ideas onto statistical manifolds complicates both the geometry and the calculus of this seemingly elementary problem. Analogous to the Euclidean setting, in a Riemannian manifold such as our space of Gaussians, solving for the shortest path $L$ involves the summation of many infinitesimal arc lengths, $ds$,
$$ds^2 = \dot{\theta}^T g(\theta)\,\dot{\theta},$$
where $\theta$ is a parameter vector, $g(\theta)$ is a metric tensor dependent on the parameter vector, and $(\cdot)^T$ represents the transpose of a vector. The metric tensor for Euclidean space is the identity matrix, but on the multivariate Gaussian manifold, this metric tensor is the Fisher information matrix, discussed later in Section 3.2.
This makes the functional we wish to minimize
$$P = \int_{x_0}^{x_1} \sqrt{\dot{\theta}^T g(\theta)\,\dot{\theta}}\;dx$$
or, because the square root is a monotonically increasing function, we can conveniently use
$$F = \int_{x_0}^{x_1} \dot{\theta}^T g(\theta)\,\dot{\theta}\;dx.$$
With this, the calculus of variation problem that solves for the minimum distance on a manifold is
$$\min\; F[\theta] = \int_{x_0}^{x_1} \dot{\theta}^T g(\theta)\,\dot{\theta}\,dx, \qquad \theta(x_0) = [\theta_{01}, \theta_{02}, \ldots, \theta_{0n}], \quad \theta(x_1) = [\theta_{11}, \theta_{12}, \ldots, \theta_{1n}]$$
In the present context, the Fisher information metric tensor is $g(\theta) = g(\mu, \Sigma)$, expressed in the natural parameterization of multivariate Gaussians. Moreover, $\theta_0$ and $\theta_1$ are the parameters of the initial and final distributions.
The minimizer of Equation (5) satisfies the well-known Euler-Lagrange equations, a system of second-order differential equations. These equations operate on the integrand of Equation (4). Accordingly, we define
$$K = \dot{\theta}^T g(\theta)\,\dot{\theta}.$$
With this, the Euler-Lagrange equations are
$$\frac{\partial K}{\partial \theta} - \frac{d}{dx}\frac{\partial K}{\partial \dot{\theta}} = 0,$$
where $\partial K/\partial\theta$ and $\partial K/\partial\dot{\theta}$ are the functional derivatives with respect to the curve $\theta(x)$.
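To make Equations (2)-(4) concrete, the short Python sketch below (our own illustration, not code from the paper) accumulates the discretized arc length $\sum\sqrt{\Delta\theta^T g(\theta)\,\Delta\theta}$ of a candidate parameter path on the univariate Gaussian manifold, whose Fisher metric in $(\mu, \sigma^2)$ coordinates is $\mathrm{diag}(1/\sigma^2,\, 1/(2\sigma^4))$; the particular path, step count, and function names are illustrative assumptions.

```python
import numpy as np

def fisher_metric_1d(v):
    """Fisher metric of N(mu, v) in (mu, v) coordinates, with v = sigma^2."""
    return np.diag([1.0 / v, 1.0 / (2.0 * v**2)])

def path_length(thetas):
    """Accumulate sqrt(dtheta^T g(theta) dtheta) along a discretized path.

    thetas: (m, 2) array of (mu, sigma^2) samples along a candidate curve.
    """
    length = 0.0
    for a, b in zip(thetas[:-1], thetas[1:]):
        mid = 0.5 * (a + b)          # evaluate the metric at the segment midpoint
        d = b - a
        length += np.sqrt(d @ fisher_metric_1d(mid[1]) @ d)
    return length

# Straight line in parameter space from N(0, 1) to N(3, 4); the true
# Fisher-Rao geodesic would bow towards larger variance and be shorter.
ts = np.linspace(0.0, 1.0, 200)[:, None]
straight = (1 - ts) * np.array([0.0, 1.0]) + ts * np.array([3.0, 4.0])
print(path_length(straight))
```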

3.2. Euler-Lagrange Equation for Gaussian Distributions

The Fisher information matrix is a measure of how much information about the parameters of a multivariate distribution is revealed by data drawn from it. Intuitively, it can be considered an indication of how "peaked" a distribution is around a parameter value. If the distribution is sharply peaked, very few data points are required to locate it. As such, each data point carries a lot of information.
For a multivariate probability distribution, the Fisher information matrix is given by
$$g_{i,j}(\theta) = \int f(x;\theta)\,\frac{\partial}{\partial\theta_i}\log f(x;\theta)\,\frac{\partial}{\partial\theta_j}\log f(x;\theta)\,dx,$$
where the index ( i , j ) represents the appropriate parameter pair of the multivariate parameter vector θ .
Alternatively, there are additional useful forms of the Fisher information, provided that certain regularity conditions are satisfied. First, the Fisher information matrix is the negative expectation of the Hessian of the log-likelihood of the density function. Specifically,
$$g_{i,j}(\theta) = -E\!\left[\frac{\partial^2}{\partial\theta_i\,\partial\theta_j}\log f(x;\theta)\right] = -E\!\left[H\right],$$
where H is the Hessian matrix of the log-likelihood.
Second, the Fisher information can be calculated from the variance of the score function
$$g(\theta) = \mathrm{Var}\!\left(S_f(x;\theta)\right),$$
where
$$S_f(x;\theta) = \nabla_{\theta}\log f(x;\theta).$$
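As a quick numerical check of the score-variance form in Equations (10) and (11), the sketch below estimates the Fisher information of a univariate Gaussian by Monte Carlo, comparing the empirical covariance of the score in $(\mu, \sigma^2)$ coordinates against the analytic values $1/\sigma^2$ and $1/(2\sigma^4)$; the sample size and parameter values are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
mu, v = 1.5, 2.0                         # true parameters, v = sigma^2
x = rng.normal(mu, np.sqrt(v), size=200_000)

# Score of N(mu, v) with respect to (mu, v):
#   d/dmu log f = (x - mu) / v
#   d/dv  log f = -1/(2v) + (x - mu)^2 / (2 v^2)
score = np.stack([(x - mu) / v,
                  -1.0 / (2.0 * v) + (x - mu) ** 2 / (2.0 * v**2)])

g_mc = np.cov(score)                     # empirical covariance of the score
g_exact = np.diag([1.0 / v, 1.0 / (2.0 * v**2)])
print(np.round(g_mc, 4))                 # approximately equal to g_exact
print(g_exact)
```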
Most importantly, and for our purposes, the Fisher information matrix is the metric tensor that defines distances on the Riemannian Gaussian manifold. Given a distribution on the manifold, by use of this metric tensor, we can minimize an appropriate functional to find the closest second distribution residing on a constrained subset of the manifold. This is a class of problems covered by variable-endpoint conditions in the calculus of variations.
Consider the n-dimensional multivariate Gaussian with density given by
$$f(x;\mu,\Sigma) = (2\pi)^{-\frac{n}{2}}\,(\det\Sigma)^{-\frac{1}{2}}\exp\!\left(-\frac{(X-\mu)^T\Sigma^{-1}(X-\mu)}{2}\right)$$
where $X$ is the random variable vector, $\mu = [\mu_1, \mu_2, \ldots, \mu_n]$ is the $n$-dimensional mean vector of the distribution and $\Sigma$ is the $n\times n$ covariance matrix.
Since the covariance matrix is symmetric, it contains $\frac{n(n+1)}{2}$ unique parameters, i.e., the number of diagonal and upper (or lower) triangular elements. With the $n$-dimensional mean vector, the total number of scalar parameters in an $n$-dimensional multivariate Gaussian is $\frac{n(n+3)}{2}$, which will be the size of the Fisher information matrix. For all further developments in the parameter space, these parameters are collected in a single vector, $\theta$, such that
$$\theta = \{\underbrace{\mu_1, \mu_2, \ldots, \mu_n}_{\theta_1,\,\theta_2,\,\ldots,\,\theta_n},\; \underbrace{\sigma_{1,1}^2, \sigma_{1,2}^2, \ldots, \sigma_{n,n}^2}_{\theta_{n+1},\,\ldots,\,\theta_{n(n+3)/2}}\}$$
To clarify, this new parameter vector $\theta$ has the mean vector $\mu$ as its first $n$ components, and the remaining components are made up of the unique elements of the covariance matrix, starting with the first row, followed by the second row but without its first entry, since $\Sigma_{1,2} = \Sigma_{2,1}$ and $\Sigma_{1,2}$ is already included in $\theta$. We capture all the parameters of the multivariate Gaussian distribution in this non-traditional vector form because it is conceptually more in line with the calculation of the Fisher information matrix defined in Equation (8).
Therefore, using Equations (8) and (12), the Fisher information for the general multivariate Gaussian distribution is
$$g_{ij}(\mu,\Sigma) = \frac{1}{2}\,\mathrm{tr}\!\left(\Sigma^{-1}\frac{\partial\Sigma}{\partial\theta_i}\,\Sigma^{-1}\frac{\partial\Sigma}{\partial\theta_j}\right) + \frac{\partial\mu}{\partial\theta_i}^{T}\Sigma^{-1}\frac{\partial\mu}{\partial\theta_j}$$
for which a very detailed proof can be found in Appendix A of this paper. In the case of the bivariate Gaussian distribution, this $5\times 5$ matrix has only 15 unique elements because of its symmetry. Once again, the detailed derivation of each of the elements is provided in Appendix B. The resulting metric tensor elements are
$$g_{11} = \frac{\sigma_2^2}{\sigma_1^2\sigma_2^2 - \sigma_{12}^2} \qquad g_{22} = \frac{\sigma_1^2}{\sigma_1^2\sigma_2^2 - \sigma_{12}^2} \qquad g_{33} = \frac{1}{2}\left(\frac{\sigma_2^2}{\sigma_1^2\sigma_2^2 - \sigma_{12}^2}\right)^{2}$$

$$g_{44} = \frac{1}{2}\left(\frac{\sigma_1^2}{\sigma_1^2\sigma_2^2 - \sigma_{12}^2}\right)^{2} \qquad g_{55} = \frac{\sigma_1^2\sigma_2^2 + \sigma_{12}^2}{(\sigma_1^2\sigma_2^2 - \sigma_{12}^2)^{2}} \qquad g_{12} = g_{21} = \frac{-\sigma_{12}}{\sigma_1^2\sigma_2^2 - \sigma_{12}^2}$$

$$g_{34} = g_{43} = \frac{1}{2}\left(\frac{\sigma_{12}}{\sigma_1^2\sigma_2^2 - \sigma_{12}^2}\right)^{2} \qquad g_{35} = g_{53} = \frac{-\sigma_{12}\sigma_2^2}{(\sigma_1^2\sigma_2^2 - \sigma_{12}^2)^{2}} \qquad g_{45} = g_{54} = \frac{-\sigma_{12}\sigma_1^2}{(\sigma_1^2\sigma_2^2 - \sigma_{12}^2)^{2}}$$
All elements dealing with the information between a component of μ and an element of Σ vanish, which is a property extended to every multivariate Gaussian distribution of higher dimensions as well.
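For reference, the elements in Equation (15) can be assembled directly into the $5\times 5$ metric tensor used in the remainder of the paper. The sketch below is a straightforward transcription for the parameter ordering $\theta = (\mu_1, \mu_2, \sigma_1^2, \sigma_2^2, \sigma_{12})$, with the signs as derived in Appendix B; the function name and example evaluation are our own.

```python
import numpy as np

def fisher_metric_bivariate(s1, s2, s12):
    """5x5 Fisher metric of a bivariate Gaussian in the coordinates
    theta = (mu1, mu2, sigma1^2, sigma2^2, sigma12).

    s1, s2 : variances sigma1^2, sigma2^2;  s12 : covariance sigma12.
    """
    k = s1 * s2 - s12**2                  # det(Sigma), must be positive
    g = np.zeros((5, 5))
    g[0, 0] = s2 / k
    g[1, 1] = s1 / k
    g[2, 2] = 0.5 * (s2 / k) ** 2
    g[3, 3] = 0.5 * (s1 / k) ** 2
    g[4, 4] = (s1 * s2 + s12**2) / k**2
    g[0, 1] = g[1, 0] = -s12 / k
    g[2, 3] = g[3, 2] = 0.5 * (s12 / k) ** 2
    g[2, 4] = g[4, 2] = -s12 * s2 / k**2
    g[3, 4] = g[4, 3] = -s12 * s1 / k**2
    return g

# Example: the metric evaluated at the full-covariance distribution of Section 4.1.2.
print(fisher_metric_bivariate(7.0, 2.0, 3.0))
```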

3.3. Variable-Endpoint Formulation: Transversality Boundary Conditions

Our development so far has focused on summarizing the usual situation of finding the length-minimizing path between two fixed points on a Riemannian manifold. We now shift to the situation of allowing one or both of the initial or final endpoints (distributions in this context) to be variable. This changes both the scope and the mathematics of the problem and requires the use of transversality boundary conditions.
Transversality conditions have been useful in other applications. In economics, the conditions are needed to solve infinite horizon problems [40,41]. Other applications are found in biology [42] and physics [43]. However, to the best of our knowledge, there is little to no prior work investigating variable-endpoint formulations in the domain of information geometry. As we will demonstrate, the crux of employing these methods revolves around appropriately defining the parameter constraint surface. Though our results here provide concrete examples of interesting constraint surfaces, guidance on prescribing these subsets or developing techniques for automatically learning them will be important areas for future research.
The transversality conditions take into account the constraint hypersurface, $\phi(\theta)$, defined on the coordinates parameterizing the distributions of the statistical manifold. If, for example, we are given an initial distribution and are asked to find which distribution on $\phi(\theta)$ is closest, the usual geodesic problem formulated in Equation (5) now becomes
$$\min\; F[\theta] = \frac{1}{2}\int_{x_0}^{x_1} \dot{\theta}^T g(\theta)\,\dot{\theta}\,dx, \qquad \theta(x_0) = [\theta_{01}, \theta_{02}, \ldots, \theta_{0n}], \quad \phi\big(\theta(x_1)\big) = 0$$
Therefore, in addition to satisfying the Euler-Lagrange equation, now with transversality conditions, the optimal solution must also satisfy
$$\left[\frac{\partial K}{\partial\dot{\theta}_1},\; \frac{\partial K}{\partial\dot{\theta}_2},\; \ldots,\; \frac{\partial K}{\partial\dot{\theta}_n}\right]^{T} = \alpha\left[\frac{\partial\phi}{\partial\theta_1},\; \frac{\partial\phi}{\partial\theta_2},\; \ldots,\; \frac{\partial\phi}{\partial\theta_n}\right]^{T}$$
In Equation (17), the left-hand side is a vector tangent to the optimal path, and the vector on the right-hand side is the gradient of the terminal surface, which is orthogonal to the surface. Considering that the optimal path and the constraint surface intersect at the terminal distribution, this view of the transversality requirement implies that the tangent vector to the optimal path and the gradient of the constraint surface be collinear at the intersecting distribution. The scalar multiple $\alpha$ affects the magnitude of the vector and, from a geometric perspective, there is no loss of generality by setting $\alpha = 1$.
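Because $K = \dot{\theta}^T g(\theta)\,\dot{\theta}$ is quadratic in the velocities, the left-hand side of Equation (17) equals $2g(\theta)\dot{\theta}$, so the condition can be checked numerically by measuring how far $g(\theta)\dot{\theta}$ is from being parallel to $\nabla\phi$ at the intersection point. Below is a minimal sketch of such a residual, of the kind a shooting method might drive to zero; the function and the toy inputs are hypothetical.

```python
import numpy as np

def transversality_residual(g, theta_dot, grad_phi):
    """Component of dK/d(theta_dot) = 2 g theta_dot orthogonal to grad(phi).

    A geodesic meeting the surface phi(theta) = 0 transversally drives this
    residual (together with phi itself) to zero at the terminal point.
    """
    v = 2.0 * g @ theta_dot                    # tangent covector of the path
    n = grad_phi / np.linalg.norm(grad_phi)    # unit normal of the surface
    return v - (v @ n) * n                     # remove the collinear part

# Toy terminal data: a placeholder metric, a velocity, and the gradient of the
# isotropic surface phi = sigma1^2 - sigma2^2 in the parameter order
# (mu1, mu2, sigma1^2, sigma2^2, sigma12).
g = np.eye(5)
theta_dot = np.array([0.0, 0.0, 1.0, -1.0, 0.0])
grad_phi = np.array([0.0, 0.0, 1.0, -1.0, 0.0])
print(transversality_residual(g, theta_dot, grad_phi))   # zero: condition met
```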

4. Results: Transversal Euler-Lagrange Equations for Bivariate Gaussian Distributions

We now turn our attention to various use cases of the transversal boundary conditions on the Gaussian manifold. We limit our derivations to bivariate Gaussians to make the calculations tractable and for visualization purposes. However, the same development is applicable to higher-dimensional Gaussians. It is worth mentioning that, even in the fixed-endpoint scenario, there are no closed-form solutions for the geodesic when the manifold coordinates include $\mu$ and $\Sigma$, with analytical solutions existing only for special cases such as zero-mean distributions.
Using the Fisher information matrix defined in Equation (14), more specifically for the bivariate Gaussian distribution discussed in the Appendix, and employing the general form of the geodesic functional in Equation (4), we can define the integrand of the arc-length-minimizing function on the space of bivariate Gaussian distribution as
$$\begin{aligned}
K(\theta) ={}& \frac{\sigma_2^2\,\dot{\mu}_1^2}{k} + \frac{\sigma_1^2\,\dot{\mu}_2^2}{k} + \frac{(\sigma_2^2)^2(\dot{\sigma}_1^2)^2}{2k^2} + \frac{(\sigma_1^2)^2(\dot{\sigma}_2^2)^2}{2k^2} + \frac{\sigma_1^2\sigma_2^2\,\dot{\sigma}_{12}^2}{k^2} + \frac{\sigma_{12}^2\,\dot{\sigma}_{12}^2}{k^2} \\
&- \frac{2\sigma_{12}\,\dot{\mu}_1\dot{\mu}_2}{k} + \frac{\sigma_{12}^2\,\dot{\sigma}_1^2\,\dot{\sigma}_2^2}{k^2} - \frac{2\sigma_{12}\sigma_2^2\,\dot{\sigma}_1^2\,\dot{\sigma}_{12}}{k^2} - \frac{2\sigma_{12}\sigma_1^2\,\dot{\sigma}_2^2\,\dot{\sigma}_{12}}{k^2}
\end{aligned}$$
where $k = \sigma_1^2\sigma_2^2 - \sigma_{12}^2$.
We can use Equation (18) to derive the system of second-order differential equations, solutions to which yield the shortest path between two distributions.
$$\ddot{\mu}_1 = \frac{\dot{\mu}_1\dot{\sigma}_1^2\sigma_2^2 + \dot{\mu}_2\sigma_1^2\dot{\sigma}_{12} - \dot{\mu}_2\dot{\sigma}_1^2\sigma_{12} - \dot{\mu}_1\sigma_{12}\dot{\sigma}_{12}}{\sigma_1^2\sigma_2^2 - \sigma_{12}^2}$$

$$\ddot{\mu}_2 = \frac{\dot{\mu}_2\sigma_1^2\dot{\sigma}_2^2 + \dot{\mu}_1\sigma_2^2\dot{\sigma}_{12} - \dot{\mu}_1\dot{\sigma}_2^2\sigma_{12} - \dot{\mu}_2\sigma_{12}\dot{\sigma}_{12}}{\sigma_1^2\sigma_2^2 - \sigma_{12}^2}$$

$$\ddot{\sigma}_1^2 = \frac{\dot{\mu}_1^2\sigma_{12}^2 + (\dot{\sigma}_1^2)^2\sigma_2^2 + \sigma_1^2\dot{\sigma}_{12}^2 - \dot{\mu}_1^2\sigma_1^2\sigma_2^2 - 2\dot{\sigma}_1^2\sigma_{12}\dot{\sigma}_{12}}{\sigma_1^2\sigma_2^2 - \sigma_{12}^2}$$

$$\ddot{\sigma}_2^2 = \frac{\dot{\mu}_2^2\sigma_{12}^2 + (\dot{\sigma}_2^2)^2\sigma_1^2 + \sigma_2^2\dot{\sigma}_{12}^2 - \dot{\mu}_2^2\sigma_1^2\sigma_2^2 - 2\dot{\sigma}_2^2\sigma_{12}\dot{\sigma}_{12}}{\sigma_1^2\sigma_2^2 - \sigma_{12}^2}$$

$$\ddot{\sigma}_{12} = \frac{\dot{\sigma}_1^2\sigma_2^2\dot{\sigma}_{12} + \sigma_1^2\dot{\sigma}_2^2\dot{\sigma}_{12} - \sigma_{12}\dot{\sigma}_{12}^2 - \sigma_{12}\dot{\sigma}_1^2\dot{\sigma}_2^2 + \dot{\mu}_1\dot{\mu}_2\sigma_{12}^2 - \dot{\mu}_1\dot{\mu}_2\sigma_1^2\sigma_2^2}{\sigma_1^2\sigma_2^2 - \sigma_{12}^2}$$
Along with satisfying this system of equations, the solutions presented here must satisfy transversality conditions at one or both of the initial and terminal boundaries. In what follows, those conditions will be prescribed accordingly, considering various applications of interest.
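A minimal sketch of integrating this system forward is given below. It uses the compact matrix form $\ddot{\mu} = \dot{\Sigma}\Sigma^{-1}\dot{\mu}$, $\ddot{\Sigma} = \dot{\Sigma}\Sigma^{-1}\dot{\Sigma} - \dot{\mu}\dot{\mu}^T$, which for the bivariate case expands to Equations (19)-(23). In practice, the transversality conditions turn this into a boundary-value (shooting) problem; here, the initial velocity is simply an assumed value chosen for illustration.

```python
import numpy as np
from scipy.integrate import solve_ivp

def unpack(y):
    """State: (mu1, mu2, s1, s2, s12) followed by their derivatives."""
    mu = y[0:2]
    Sigma = np.array([[y[2], y[4]], [y[4], y[3]]])
    dmu = y[5:7]
    dSigma = np.array([[y[7], y[9]], [y[9], y[8]]])
    return mu, Sigma, dmu, dSigma

def geodesic_rhs(t, y):
    mu, Sigma, dmu, dSigma = unpack(y)
    Sinv = np.linalg.inv(Sigma)
    ddmu = dSigma @ Sinv @ dmu                              # Equations (19)-(20)
    ddSigma = dSigma @ Sinv @ dSigma - np.outer(dmu, dmu)   # Equations (21)-(23)
    return np.concatenate([dmu,
                           [dSigma[0, 0], dSigma[1, 1], dSigma[0, 1]],
                           ddmu,
                           [ddSigma[0, 0], ddSigma[1, 1], ddSigma[0, 1]]])

# Initial distribution of Section 4.1.2 and an assumed (not optimized) velocity.
y0 = np.array([0.0, 0.0, 7.0, 2.0, 3.0,      # mu1, mu2, s1, s2, s12
               0.0, 0.0, -1.0, 0.5, -0.5])   # their initial derivatives
sol = solve_ivp(geodesic_rhs, (0.0, 1.0), y0, rtol=1e-8)
print(sol.y[2:5, -1])                         # terminal (s1, s2, s12)
```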

4.1. Isotropic Terminal Distribution

Working with the full parameterization of a multivariate Gaussian distribution can be computationally expensive, especially when calculating the Fisher information matrix; for example, the number of parameters grows quadratically with dimension. However, for isotropic Gaussian distributions, defined below in Equation (24), the parameter count grows only linearly with the mean vector's size, which is considerably more favorable. In addition to being computationally efficient, isotropic Gaussian distributions provide a submanifold prescribable by a tractable transversality condition. Accordingly, we formulate the following variable-endpoint problem: Given a general multivariate Gaussian distribution, what is the closest (in a geodesic sense) isotropic distribution?
Let Σ i be the covariance matrix of a multivariate Gaussian distribution with n mean components. This distribution is isotropic if
$$\Sigma_i = \sigma^2 I_n$$
where I n is the n-dimensional identity matrix.
Formally, let $\theta$ capture all the parameters of a multivariate Gaussian distribution according to Equation (13). The functional to minimize is given by
$$\min\; F[\theta] = \frac{1}{2}\int_{x_0}^{x_1} \dot{\theta}^T g(\theta)\,\dot{\theta}\,dx, \qquad \theta_0 = [\mu_0, \Sigma_0], \quad \phi(\mu_1, \Sigma_1) = 0$$
where $\mu_0$ and $\Sigma_0$ are the known parameters of the starting distribution, but $\mu_1$ and $\Sigma_1$ are identified by solving the Euler-Lagrange equations while terminating on the constraint surface defined by Equation (24).
For the bivariate Gaussian distribution, the terminal surface described in Equation (24) can be defined by
$$\Phi(\sigma_1^2, \sigma_2^2) = \sigma_1^2 - \sigma_2^2 = 0,$$
with $\mu_1$ free and $\sigma_{12} = 0$. Applying Equation (17) to this surface, we obtain the condition
$$(\sigma_2^2)^2\dot{\sigma}_1^2 + (\sigma_1^2)^2\dot{\sigma}_2^2 + \sigma_{12}^2\dot{\sigma}_2^2 + \sigma_{12}^2\dot{\sigma}_1^2 - 2\sigma_{12}\sigma_1^2\dot{\sigma}_{12} - 2\sigma_{12}\sigma_2^2\dot{\sigma}_{12} = 0.$$
Therefore, in addition to the Euler-Lagrange equations in Equations (19) through (23), requiring the final distribution to be isotropic implies the geodesic must also satisfy Equation (27). Additionally, the terminal distribution must satisfy the conditions of the constraint surface in Equation (26).
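Numerically, Equations (26) and (27) become the residuals that a shooting method drives to zero by adjusting the unknown components of the initial velocity: the terminal state must lie on the isotropic surface, and the arriving velocity must satisfy the transversality relation. A small sketch of these residuals is given below; the function name and the example terminal values (chosen to be consistent with the result of Section 4.1.1) are our own.

```python
import numpy as np

def isotropic_residuals(theta_T, theta_dot_T):
    """Residuals enforced at the terminal point of the geodesic.

    theta_T     : terminal parameters (mu1, mu2, sigma1^2, sigma2^2, sigma12)
    theta_dot_T : their derivatives at the terminal point
    """
    _, _, s1, s2, s12 = theta_T
    _, _, ds1, ds2, ds12 = theta_dot_T
    # Equation (26): the terminal distribution lies on the isotropic surface.
    surface = s1 - s2
    # Equation (27): transversality of the arriving velocity.
    transversal = (s2**2 * ds1 + s1**2 * ds2 + s12**2 * ds2 + s12**2 * ds1
                   - 2.0 * s12 * s1 * ds12 - 2.0 * s12 * s2 * ds12)
    return np.array([surface, transversal])

# A shooting method passes these residuals (with any remaining endpoint
# conditions) to a root finder; both vanish for this hypothetical terminal state.
print(isotropic_residuals((0.0, 0.0, 3.74, 3.74, 0.0),
                          (0.0, 0.0, -1.0, 1.0, 0.0)))
```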

4.1.1. Constant Mean Vector with Initial Isotropic Covariance

In this use case, we introduce a slight modification of the previous scenario, where now we set the mean vector of the initial and final distributions to be $\mu_0 = \mu_1 = [0, 0]$. We are still interested in finding the closest isotropic Gaussian, but now without allowing the curve evolution to move the distribution's mode.
For example, let us assume an initial distribution with
$$\mu_0 = [0, 0], \qquad \Sigma_0 = \begin{bmatrix} 7 & 0 \\ 0 & 2 \end{bmatrix}$$
and the final distribution lies on the surface defined in Equation (26).
Applying the Euler-Lagrange equation with the transversality conditions to the problem above results in a final isotropic distribution with $\sigma^2 = 3.74$. Figure 1a shows the information path (dashed and curved) from the initial distribution to the chosen distribution on the isotropic constraint. A Euclidean path would end with a distribution whose isotropic variance equals the average of the original variances. The geodesic path calculated under the Fisher information matrix is an indication of the curvature of the manifold in this region of the parameter space. It is possible in this special zero-mean case to analytically calculate the ending variance on the isotropic constraint surface. Given initial variances of $\sigma_1^2$ and $\sigma_2^2$, it can be shown that the final variance, $\sigma_f^2$, is given by
$$\sigma_f^2 = \sqrt{\sigma_1^2\,\sigma_2^2}.$$
For the example above, this gives $\sigma_f^2 = \sqrt{7\cdot 2} = \sqrt{14} \approx 3.74$, in agreement with the numerically obtained terminal variance. In Figure 1b, we can see the evolution of all the parameters, starting from the initial distribution to the end. Noteworthy is that, even though $\sigma_{12}$ is not required to stay at 0, there is no benefit to it deviating from 0, as seen in Figure 1b. The Fisher information matrix is independent of the mean vector and, since the values of the mean vector are also not part of our isotropic constraint on the final distribution, the mean vector is not compelled to change from the original distribution, justifying the exclusion of the mean vector's path in Figure 1.

4.1.2. Constant Mean Vector with Initial Full Covariance

In the example above, the distributions move along a 2-dimensional manifold parameterized just by the individual variances of the variables. Neither the mean vector nor the off-diagonal values of the covariance matrices are considered by the Euler-Lagrange equation, and therefore they remain static. However, starting with a full covariance matrix changes the problem appreciably. For comparison, we will start with the same diagonal elements of the covariance matrix but now incorporate the off-diagonal element.
The problem formulation is analogous to Equation (25) subject to the isotropic constraint in Equation (26). Furthermore, we define the initial distribution as
$$\mu_0 = [0, 0], \qquad \Sigma_0 = \begin{bmatrix} 7 & 3 \\ 3 & 2 \end{bmatrix}$$
As seen in Figure 2, including $\sigma_{12} = 3$ alters where the geodesic terminates on the transversality constraint, with the final covariance matrix having diagonal elements of $\sigma_1^2 = \sigma_2^2 = 2.24$. In Figure 2a, the effects of including the off-diagonal value $\sigma_{12}$ are clearly visible on the path of the geodesic, causing it to deviate significantly. Figure 2b shows all values of the parameters along the geodesic. As mentioned before, the mean vector remains constant at all intermediate values of the distribution. Unlike before, the value of $\sigma_{12}$ must evolve to satisfy the terminal constraint surface, as illustrated in Figure 2b and emphasized in Figure 2c.

4.1.3. Starting on the Constraint Surface

When searching for the isotropic boundary condition, interesting paths occur if we start with an initial isotropic distribution, but require the mean vector to change. That is, if the initial distribution already resides on the terminal constraint surface, it would seem counter-intuitive if the geodesic is compelled to leave this constraint. However, as seen in Figure 3, this is exactly what happens.
Employing the same modeling equations as the previous example, we define the initial distribution as
$$\mu_0 = [-3, 3], \qquad \Sigma_0 = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}$$
which is already isotropic. We search for the closest final distribution that is isotropic but with a mean vector $\mu_1 = [3, -4]$. If we track the evolution of just the mean vector, it starts in Quadrant II and moves to a distribution in Quadrant IV. Qualitatively, the initial and final distributions are geometrically symmetric. However, the isotropic uncertainty is considerably different.
The insights gleaned from this example shed new light on information evolution. Naively, one would think the shortest path would be one that maintains its current shape and just moves along the constraint surface to reach the desired $\mu$. However, as shown in Figure 3, this is not the case. In fact, intermediate distributions obtain covariance matrices with $\sigma_{12} < 0$, as demonstrated in Figure 3c. Instead of just staying on the constraint surface, the distributions stretch in the direction of the desired mean, which explains the negative values of the covariance between the variables. This elongation of the covariance in the direction of the final mean vector suggests that the information metric prefers uncertainty reduction in regions with few plausible solutions. Instead, the distributions along the geodesic "reach" for their destination, i.e., the initial mean vector in Quadrant II and the final one in Quadrant IV. This behavior is illustrated by the intermediate (red) ellipse in Figure 4.
To reiterate this insightful behavior of information flow, a second example was conducted with a mean vector that starts in Quadrant III, $\mu_0 = [-3, -3]$, and seeks out a final mean vector in Quadrant I, $\mu_1 = [3, 4]$, as shown in Figure 5. The initial distribution is still isotropic and the requirement to end isotropic remains. However, as seen in Figure 6, $\sigma_{12}$ acquires positive values along the geodesic. The path of the main diagonal variances remains unchanged.

4.2. Initial and Terminal Variable-Endpoint Conditions

It is possible to place transversality conditions on both the initial and final boundaries, with each being entirely independent of the other. Essentially, we are searching for a geodesic between two almost unknown distributions, with only minimal knowledge about the constraint set characterizing the initial and final hypersurfaces.
We consider the example where the final distribution is prescribed to be isotropic as before, but now the initial distribution must have a mean vector with equal components. This enforcement has the practical benefit of reducing the parameter dimensionality of allowable distributions. The problem is formulated as
$$\min\; F[\theta] = \frac{1}{2}\int_{x_0}^{x_1} \dot{\theta}^T g(\theta)\,\dot{\theta}\,dx, \qquad \phi_0(\mu_0, \Sigma_0) = 0, \quad \phi_1(\mu_1, \Sigma_1) = 0$$
where ϕ 0 and ϕ 1 represent the initial and final transversality constraint surfaces, such that
$$\phi_0(\mu_{01}, \mu_{02}) = \mu_{01} - \mu_{02} = 0 \qquad\text{and}\qquad \phi_1(\sigma_1^2, \sigma_2^2) = \sigma_1^2 - \sigma_2^2 = 0.$$
To demonstrate this concretely, the initial distribution with unknown mean vector is prescribed with the following covariance matrix
$$\Sigma_0 = \begin{bmatrix} 10 & 0 \\ 0 & 2 \end{bmatrix}.$$
Similarly, the unknown isotropic terminal distribution is given the mean vector μ 1 = [ 3 , 13 ] .
Using Equation (17), the constraint requiring the initial distribution to reside on ϕ 0 imposes that the geodesic satisfy
$$(\sigma_2^2 - \sigma_{12})\dot{\mu}_1 + (\sigma_1^2 - \sigma_{12})\dot{\mu}_2 = 0.$$
As shown in Figure 7, the unknown initial mean vector satisfying $\phi_0$ is $\mu_0 = [7.4, 7.4]$ and the final isotropic distribution has $\sigma_1^2 = \sigma_2^2 = 26.4$. The behavior of the geodesic under these variable-endpoint conditions is shown in Figure 8.

5. Conclusions

In this work, we have explored new formulations for working on information geometric manifolds. Previous and contemporary work using the Riemannian geometry of statistical manifolds has focused on establishing geodesics between two fixed-endpoint distributions. Here, by employing techniques from the calculus of variations, we have developed constructions that allow variable endpoints that are prescribed by a constraint set rather than fixed points.
These transversality conditions on the initial and final distributions enable new insights into how information evolves under different constraint use cases. Though this present effort focused on just a small variety of constraints on Gaussian manifolds, the approach can be readily extended to other statistical families. This novel approach of relaxing fixed endpoints and moving to constraint sets has the potential to impact several application domains that employ information geometric models. In future research, we plan to recast the problem of optimal distribution discovery, in areas such as model selection and domain adaptation, using the presented framework, which allows for greater expressive power. We also plan to investigate observational data scenarios, where parameters must be estimated from data and the resulting estimation uncertainty impacts the modeling of the manifold parameters.

Author Contributions

Conceptualization: T.H. and A.M.P.; Investigation: T.H. and A.M.P.; Formal Analysis: T.H. and A.M.P.; Methodology: T.H. and A.M.P.; Validation and Software: T.H. and A.M.P.; Original Draft Preparation: T.H.; Writing (Review and Editing): T.H. and A.M.P. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Fisher Information for Multivariate Gaussian

Here, we consider the structure of the Fisher information matrix for n-dimensional multivariate Gaussian with density given by
$$f(x;\mu,\Sigma) = (2\pi)^{-\frac{n}{2}}\,(\det\Sigma)^{-\frac{1}{2}}\exp\!\left(-\frac{(X-\mu)^T\Sigma^{-1}(X-\mu)}{2}\right)$$
where X is a data vector, μ = [ μ 1 , μ 2 , . . . , μ n ] is the n-dimensional mean vector of the distribution and Σ is the n × n covariance matrix.
These define an $n$-dimensional multivariate Gaussian. This distribution has $\frac{n(n+3)}{2}$ unique parameters, which we will capture as a single vector $\theta$ such that
$$\theta = \{\underbrace{\mu_1, \mu_2, \ldots, \mu_n}_{\theta_1,\,\theta_2,\,\ldots,\,\theta_n},\; \underbrace{\sigma_{1,1}^2, \sigma_{1,2}^2, \ldots, \sigma_{n,n}^2}_{\theta_{n+1},\,\ldots,\,\theta_{n(n+3)/2}}\}.$$
The log-likelihood associated with Equation (A1) is
$$\log f = L(\theta) = -\frac{n}{2}\ln 2\pi - \frac{1}{2}\ln\det\Sigma - \frac{1}{2}(x-\mu)^T\Sigma^{-1}(x-\mu).$$
As stated, we will find an equation that will yield each element of the Fisher information matrix using the following definition
$$g_{ij}(\theta) = E\!\left[\frac{\partial}{\partial\theta_i}\log f(x;\theta)\,\frac{\partial}{\partial\theta_j}\log f(x;\theta)\right].$$
Equation (A4) requires the partial derivative of each parameter in the log-likelihood defined in Equation (A3).
$$\frac{\partial L}{\partial\theta_i} = \underbrace{-\frac{1}{2}\mathrm{tr}\!\left(\Sigma^{-1}\frac{\partial\Sigma}{\partial\theta_i}\right)}_{A} + \underbrace{\frac{1}{2}(x-\mu)^T\Sigma^{-1}\frac{\partial\Sigma}{\partial\theta_i}\Sigma^{-1}(x-\mu)}_{B} + \underbrace{\frac{\partial\mu}{\partial\theta_i}^{T}\Sigma^{-1}(x-\mu)}_{C}$$
Similarly, we can take the partial derivative with respect to a different parameter, $\theta_j$, and obtain the same result, indexed with $j$ instead of $i$. The labels $A$, $B$, $C$ above will provide clarity in the proof.
To find each $g_{ij}$ in the Fisher information matrix, we need to find the expectation of the product of every combination of two partial derivatives, which will result in nine terms. However, upon taking the expectation, some of these terms vanish, because the expectation of the data vector $x$ equals the mean vector $\mu$. Specifically, let us denote $A_i, B_i, C_i$ to be the terms of $\partial L/\partial\theta_i$ and $A_j, B_j, C_j$ to be the terms of $\partial L/\partial\theta_j$. Upon taking the expectation of the product, $A_iC_j = C_iA_j = B_iC_j = B_jC_i = 0$. Ignoring these, we will look individually at each of the remaining terms. Starting with $A_iA_j$,
$$A_iA_j = \left(-\frac{1}{2}\mathrm{tr}\!\left(\Sigma^{-1}\frac{\partial\Sigma}{\partial\theta_i}\right)\right)\left(-\frac{1}{2}\mathrm{tr}\!\left(\Sigma^{-1}\frac{\partial\Sigma}{\partial\theta_j}\right)\right) = \frac{1}{4}\,\mathrm{tr}\!\left(\Sigma^{-1}\frac{\partial\Sigma}{\partial\theta_i}\right)\mathrm{tr}\!\left(\Sigma^{-1}\frac{\partial\Sigma}{\partial\theta_j}\right)$$
Next, calculating B i B j ,
$$B_iB_j = \frac{1}{2}(x-\mu)^T\Sigma^{-1}\frac{\partial\Sigma}{\partial\theta_i}\Sigma^{-1}(x-\mu)\cdot\frac{1}{2}(x-\mu)^T\Sigma^{-1}\frac{\partial\Sigma}{\partial\theta_j}\Sigma^{-1}(x-\mu) = \frac{1}{4}(x-\mu)^T\Sigma^{-1}\frac{\partial\Sigma}{\partial\theta_i}\Sigma^{-1}(x-\mu)\,(x-\mu)^T\Sigma^{-1}\frac{\partial\Sigma}{\partial\theta_j}\Sigma^{-1}(x-\mu)$$
We are required to take the expectation of this, which is a fourth moment of the multivariable normal distribution. The result of this is
$$B_iB_j = \frac{1}{4}\left[\mathrm{tr}\!\left(\Sigma^{-1}\frac{\partial\Sigma}{\partial\theta_i}\right)\mathrm{tr}\!\left(\Sigma^{-1}\frac{\partial\Sigma}{\partial\theta_j}\right) + 2\,\mathrm{tr}\!\left(\Sigma^{-1}\frac{\partial\Sigma}{\partial\theta_i}\Sigma^{-1}\frac{\partial\Sigma}{\partial\theta_j}\right)\right].$$
Turning attention to C i C j ,
$$C_iC_j = \frac{\partial\mu}{\partial\theta_i}^{T}\Sigma^{-1}(x-\mu)(x-\mu)^T\Sigma^{-1}\frac{\partial\mu}{\partial\theta_j} = \frac{\partial\mu}{\partial\theta_i}^{T}\Sigma^{-1}\Sigma\,\Sigma^{-1}\frac{\partial\mu}{\partial\theta_j} = \frac{\partial\mu}{\partial\theta_i}^{T}\Sigma^{-1}\frac{\partial\mu}{\partial\theta_j}.$$
The final set of non-vanishing terms, $A_iB_j + B_iA_j$, is considered simultaneously.
$$\begin{aligned}
A_iB_j + B_iA_j &= -\frac{1}{2}\mathrm{tr}\!\left(\Sigma^{-1}\frac{\partial\Sigma}{\partial\theta_i}\right)\cdot\frac{1}{2}(x-\mu)^T\Sigma^{-1}\frac{\partial\Sigma}{\partial\theta_j}\Sigma^{-1}(x-\mu) + \frac{1}{2}(x-\mu)^T\Sigma^{-1}\frac{\partial\Sigma}{\partial\theta_i}\Sigma^{-1}(x-\mu)\cdot\left(-\frac{1}{2}\mathrm{tr}\!\left(\Sigma^{-1}\frac{\partial\Sigma}{\partial\theta_j}\right)\right) \\
&= -\frac{1}{4}\left\{\mathrm{tr}\!\left(\Sigma^{-1}\frac{\partial\Sigma}{\partial\theta_i}\right)(x-\mu)^T\Sigma^{-1}\frac{\partial\Sigma}{\partial\theta_j}\Sigma^{-1}(x-\mu) + (x-\mu)^T\Sigma^{-1}\frac{\partial\Sigma}{\partial\theta_i}\Sigma^{-1}(x-\mu)\,\mathrm{tr}\!\left(\Sigma^{-1}\frac{\partial\Sigma}{\partial\theta_j}\right)\right\}
\end{aligned}$$
Now, we use the identity
$$b^T A b = \mathrm{tr}\!\left(b\,b^T A\right)$$
on all terms of Equation (A10) that do not yet involve the trace of a matrix. Doing so, Equation (A10) becomes
$$\begin{aligned}
A_iB_j + B_iA_j &= -\frac{1}{4}\left\{\mathrm{tr}\!\left(\Sigma^{-1}\frac{\partial\Sigma}{\partial\theta_i}\right)\mathrm{tr}\!\left((x-\mu)(x-\mu)^T\Sigma^{-1}\frac{\partial\Sigma}{\partial\theta_j}\Sigma^{-1}\right) + \mathrm{tr}\!\left((x-\mu)(x-\mu)^T\Sigma^{-1}\frac{\partial\Sigma}{\partial\theta_i}\Sigma^{-1}\right)\mathrm{tr}\!\left(\Sigma^{-1}\frac{\partial\Sigma}{\partial\theta_j}\right)\right\} \\
&= -\frac{1}{4}\left\{\mathrm{tr}\!\left(\Sigma^{-1}\frac{\partial\Sigma}{\partial\theta_i}\right)\mathrm{tr}\!\left(\Sigma\,\Sigma^{-1}\frac{\partial\Sigma}{\partial\theta_j}\Sigma^{-1}\right) + \mathrm{tr}\!\left(\Sigma\,\Sigma^{-1}\frac{\partial\Sigma}{\partial\theta_i}\Sigma^{-1}\right)\mathrm{tr}\!\left(\Sigma^{-1}\frac{\partial\Sigma}{\partial\theta_j}\right)\right\} \\
&= -\frac{1}{4}\left\{\mathrm{tr}\!\left(\Sigma^{-1}\frac{\partial\Sigma}{\partial\theta_i}\right)\mathrm{tr}\!\left(\frac{\partial\Sigma}{\partial\theta_j}\Sigma^{-1}\right) + \mathrm{tr}\!\left(\frac{\partial\Sigma}{\partial\theta_i}\Sigma^{-1}\right)\mathrm{tr}\!\left(\Sigma^{-1}\frac{\partial\Sigma}{\partial\theta_j}\right)\right\}.
\end{aligned}$$
Once again, we took the expectation as required to find the Fisher information. Finally, we use the cyclic property of the trace to clean up the expression in Equation (A12):
$$A_iB_j + B_iA_j = -\frac{1}{4}\left[\mathrm{tr}\!\left(\Sigma^{-1}\frac{\partial\Sigma}{\partial\theta_i}\right)\mathrm{tr}\!\left(\Sigma^{-1}\frac{\partial\Sigma}{\partial\theta_j}\right) + \mathrm{tr}\!\left(\Sigma^{-1}\frac{\partial\Sigma}{\partial\theta_i}\right)\mathrm{tr}\!\left(\Sigma^{-1}\frac{\partial\Sigma}{\partial\theta_j}\right)\right] = -\frac{1}{2}\,\mathrm{tr}\!\left(\Sigma^{-1}\frac{\partial\Sigma}{\partial\theta_i}\right)\mathrm{tr}\!\left(\Sigma^{-1}\frac{\partial\Sigma}{\partial\theta_j}\right).$$
Combining Equations (A6), (A8), (A9), and (A13) into Equation (A4), we obtain
$$\begin{aligned}
g_{i,j}(\theta) &= \frac{1}{4}\,\mathrm{tr}\!\left(\Sigma^{-1}\frac{\partial\Sigma}{\partial\theta_i}\right)\mathrm{tr}\!\left(\Sigma^{-1}\frac{\partial\Sigma}{\partial\theta_j}\right) + \frac{1}{4}\left[\mathrm{tr}\!\left(\Sigma^{-1}\frac{\partial\Sigma}{\partial\theta_i}\right)\mathrm{tr}\!\left(\Sigma^{-1}\frac{\partial\Sigma}{\partial\theta_j}\right) + 2\,\mathrm{tr}\!\left(\Sigma^{-1}\frac{\partial\Sigma}{\partial\theta_i}\Sigma^{-1}\frac{\partial\Sigma}{\partial\theta_j}\right)\right] \\
&\quad + \frac{\partial\mu}{\partial\theta_i}^{T}\Sigma^{-1}\frac{\partial\mu}{\partial\theta_j} - \frac{1}{2}\,\mathrm{tr}\!\left(\Sigma^{-1}\frac{\partial\Sigma}{\partial\theta_i}\right)\mathrm{tr}\!\left(\Sigma^{-1}\frac{\partial\Sigma}{\partial\theta_j}\right) \\
&= \frac{1}{2}\,\mathrm{tr}\!\left(\Sigma^{-1}\frac{\partial\Sigma}{\partial\theta_i}\Sigma^{-1}\frac{\partial\Sigma}{\partial\theta_j}\right) + \frac{\partial\mu}{\partial\theta_i}^{T}\Sigma^{-1}\frac{\partial\mu}{\partial\theta_j}.
\end{aligned}$$

Appendix B. Fisher Information of a 2-Dimensional Gaussian

The usefulness of Equation (A14) lies in the ability to calculate the individual terms in the equation. Here, the inverse of the covariance matrix is extremely elusive for a high-dimensional Gaussian distribution, even after leveraging its symmetric properties.
Considering just a 2 × 2 covariance matrix, Equation (A14) is tractable, since its inverse is known exactly and is reasonably manageable. Furthermore, we will collect all the parameters of a general bivariate Gaussian into a single vector, to facilitate the calculation of each element of the Fisher information matrix.
$$\theta = [\theta_1, \theta_2, \theta_3, \theta_4, \theta_5] = [\mu_1, \mu_2, \sigma_1^2, \sigma_2^2, \sigma_{12}]$$
Starting with the diagonal elements, $g_{11}$ and $g_{22}$ share similar structures. Focusing just on $g_{11}$, consider a 2-dimensional Gaussian with mean vector $\mu^T = [\mu_1, \mu_2]$ and covariance matrix
$$\Sigma = \begin{bmatrix}\sigma_1^2 & \sigma_{12} \\ \sigma_{12} & \sigma_2^2\end{bmatrix}$$
which will be indexed according to Equation (A15). We now have, using the standard definition of the inverse
$$\Sigma^{-1} = \frac{1}{\sigma_1^2\sigma_2^2 - \sigma_{12}^2}\begin{bmatrix}\sigma_2^2 & -\sigma_{12} \\ -\sigma_{12} & \sigma_1^2\end{bmatrix}.$$
For succinctness, we will again let $k = \sigma_1^2\sigma_2^2 - \sigma_{12}^2$.
Conveniently, the means are not involved in the covariance matrix, so the first term of Equation (A14) vanishes. To find g 11 we need
$$g_{11} = \frac{1}{k}\frac{\partial\mu}{\partial\mu_1}^{T}\begin{bmatrix}\sigma_2^2 & -\sigma_{12} \\ -\sigma_{12} & \sigma_1^2\end{bmatrix}\frac{\partial\mu}{\partial\mu_1} = \frac{1}{k}\begin{bmatrix}1 & 0\end{bmatrix}\begin{bmatrix}\sigma_2^2 & -\sigma_{12} \\ -\sigma_{12} & \sigma_1^2\end{bmatrix}\begin{bmatrix}1 \\ 0\end{bmatrix} = \frac{\sigma_2^2}{k} = \frac{\sigma_2^2}{\sigma_1^2\sigma_2^2 - \sigma_{12}^2}.$$
Finding g 22 easily follows the above, resulting in
$$g_{22} = \frac{\sigma_1^2}{\sigma_1^2\sigma_2^2 - \sigma_{12}^2}.$$
Finding the remaining diagonal elements involves just the first term of Equation (A14). For the variance of the first variable, we will need to find
$$g_{33} = \frac{1}{2}\mathrm{tr}\!\left[\left(\Sigma^{-1}\frac{\partial\Sigma}{\partial\sigma_1^2}\right)^{2}\right] = \frac{1}{2}\mathrm{tr}\!\left[\left(\frac{1}{k}\begin{bmatrix}\sigma_2^2 & -\sigma_{12}\\ -\sigma_{12} & \sigma_1^2\end{bmatrix}\begin{bmatrix}1 & 0\\ 0 & 0\end{bmatrix}\right)^{2}\right] = \frac{1}{2k^2}\mathrm{tr}\!\left[\begin{bmatrix}\sigma_2^2 & 0\\ -\sigma_{12} & 0\end{bmatrix}^{2}\right] = \frac{1}{2k^2}\mathrm{tr}\begin{bmatrix}(\sigma_2^2)^2 & 0\\ -\sigma_{12}\sigma_2^2 & 0\end{bmatrix} = \frac{(\sigma_2^2)^2}{2k^2} = \frac{1}{2}\left(\frac{\sigma_2^2}{\sigma_1^2\sigma_2^2 - \sigma_{12}^2}\right)^{2}.$$
Once again, the element of the Fisher information matrix for the variance of the second variable mirrors the above exactly.
$$g_{44} = \frac{1}{2}\left(\frac{\sigma_1^2}{\sigma_1^2\sigma_2^2 - \sigma_{12}^2}\right)^{2}.$$
The covariance component will complete the diagonal elements of the Fisher information matrix.
$$g_{55} = \frac{1}{2}\mathrm{tr}\!\left[\left(\Sigma^{-1}\frac{\partial\Sigma}{\partial\sigma_{12}}\right)^{2}\right] = \frac{1}{2}\mathrm{tr}\!\left[\left(\frac{1}{k}\begin{bmatrix}\sigma_2^2 & -\sigma_{12}\\ -\sigma_{12} & \sigma_1^2\end{bmatrix}\begin{bmatrix}0 & 1\\ 1 & 0\end{bmatrix}\right)^{2}\right] = \frac{1}{2k^2}\mathrm{tr}\!\left[\begin{bmatrix}-\sigma_{12} & \sigma_2^2\\ \sigma_1^2 & -\sigma_{12}\end{bmatrix}^{2}\right] = \frac{1}{2k^2}\mathrm{tr}\begin{bmatrix}\sigma_1^2\sigma_2^2 + \sigma_{12}^2 & -2\sigma_2^2\sigma_{12}\\ -2\sigma_1^2\sigma_{12} & \sigma_1^2\sigma_2^2 + \sigma_{12}^2\end{bmatrix} = \frac{1}{2k^2}\cdot 2\left(\sigma_1^2\sigma_2^2 + \sigma_{12}^2\right) = \frac{\sigma_1^2\sigma_2^2 + \sigma_{12}^2}{\left(\sigma_1^2\sigma_2^2 - \sigma_{12}^2\right)^{2}}.$$
The off-diagonal elements are only slightly more involved. However, because the terms in Equation (A14) involve the partial derivatives, and because the mean vector and the covariance matrix have no overlapping terms, many of the off-diagonal elements vanish, specifically the ones that involve both a mean component and a variance component, i.e., $g_{ij} = 0$ for $i \in \{1, 2\}$ and $j \in \{3, 4, 5\}$. For the other off-diagonal components, we will employ all the conveniences of symmetry to complete the Fisher information matrix.
Turning our attention to the g 12 , the element concerning the two means:
$$g_{12} = \frac{\partial\mu}{\partial\theta_1}^{T}\Sigma^{-1}\frac{\partial\mu}{\partial\theta_2} = \frac{1}{k}\begin{bmatrix}1 & 0\end{bmatrix}\begin{bmatrix}\sigma_2^2 & -\sigma_{12}\\ -\sigma_{12} & \sigma_1^2\end{bmatrix}\begin{bmatrix}0 \\ 1\end{bmatrix} = \frac{-\sigma_{12}}{k} = \frac{-\sigma_{12}}{\sigma_1^2\sigma_2^2 - \sigma_{12}^2} = g_{21}.$$
Next, we consider the elements of the Fisher information matrix involving both variances, g 34
$$g_{34} = \frac{1}{2}\mathrm{tr}\!\left(\Sigma^{-1}\frac{\partial\Sigma}{\partial\sigma_1^2}\,\Sigma^{-1}\frac{\partial\Sigma}{\partial\sigma_2^2}\right) = \frac{1}{2k^2}\mathrm{tr}\!\left(\begin{bmatrix}\sigma_2^2 & -\sigma_{12}\\ -\sigma_{12} & \sigma_1^2\end{bmatrix}\begin{bmatrix}1 & 0\\ 0 & 0\end{bmatrix}\begin{bmatrix}\sigma_2^2 & -\sigma_{12}\\ -\sigma_{12} & \sigma_1^2\end{bmatrix}\begin{bmatrix}0 & 0\\ 0 & 1\end{bmatrix}\right) = \frac{1}{2k^2}\mathrm{tr}\!\left(\begin{bmatrix}\sigma_2^2 & 0\\ -\sigma_{12} & 0\end{bmatrix}\begin{bmatrix}0 & -\sigma_{12}\\ 0 & \sigma_1^2\end{bmatrix}\right) = \frac{1}{2k^2}\mathrm{tr}\begin{bmatrix}0 & -\sigma_2^2\sigma_{12}\\ 0 & \sigma_{12}^2\end{bmatrix} = \frac{1}{2}\left(\frac{\sigma_{12}}{\sigma_1^2\sigma_2^2 - \sigma_{12}^2}\right)^{2} = g_{43}.$$
The variance/covariance elements of the Fisher information matrix will all have similar structures. We calculate one of them below
$$g_{35} = \frac{1}{2}\mathrm{tr}\!\left(\Sigma^{-1}\frac{\partial\Sigma}{\partial\sigma_1^2}\,\Sigma^{-1}\frac{\partial\Sigma}{\partial\sigma_{12}}\right) = \frac{1}{2k^2}\mathrm{tr}\!\left(\begin{bmatrix}\sigma_2^2 & -\sigma_{12}\\ -\sigma_{12} & \sigma_1^2\end{bmatrix}\begin{bmatrix}1 & 0\\ 0 & 0\end{bmatrix}\begin{bmatrix}\sigma_2^2 & -\sigma_{12}\\ -\sigma_{12} & \sigma_1^2\end{bmatrix}\begin{bmatrix}0 & 1\\ 1 & 0\end{bmatrix}\right) = \frac{1}{2k^2}\mathrm{tr}\!\left(\begin{bmatrix}\sigma_2^2 & 0\\ -\sigma_{12} & 0\end{bmatrix}\begin{bmatrix}-\sigma_{12} & \sigma_2^2\\ \sigma_1^2 & -\sigma_{12}\end{bmatrix}\right) = \frac{1}{2k^2}\mathrm{tr}\begin{bmatrix}-\sigma_{12}\sigma_2^2 & (\sigma_2^2)^2\\ \sigma_{12}^2 & -\sigma_{12}\sigma_2^2\end{bmatrix} = \frac{-\sigma_{12}\sigma_2^2}{\left(\sigma_1^2\sigma_2^2 - \sigma_{12}^2\right)^{2}} = g_{53}.$$
Similarly, the element involving the second variance with the covariance is
$$g_{45} = g_{54} = \frac{-\sigma_{12}\sigma_1^2}{\left(\sigma_1^2\sigma_2^2 - \sigma_{12}^2\right)^{2}}.$$
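The algebra in this appendix can also be verified mechanically. The sketch below uses SymPy to evaluate Equation (A14) for the bivariate case and reproduce the elements $g_{11}$ through $g_{55}$ derived above; it is offered only as a verification aid and is not part of the original derivation.

```python
import sympy as sp

m1, m2, s1, s2, s12 = sp.symbols('mu1 mu2 sigma1sq sigma2sq sigma12', real=True)
theta = [m1, m2, s1, s2, s12]

mu = sp.Matrix([m1, m2])
Sigma = sp.Matrix([[s1, s12], [s12, s2]])
Sinv = Sigma.inv()

def g_elem(i, j):
    """Equation (A14) evaluated for the parameter pair (theta_i, theta_j)."""
    dSi, dSj = Sigma.diff(theta[i]), Sigma.diff(theta[j])
    dmi, dmj = mu.diff(theta[i]), mu.diff(theta[j])
    term = sp.Rational(1, 2) * (Sinv * dSi * Sinv * dSj).trace()
    term += (dmi.T * Sinv * dmj)[0, 0]
    return sp.simplify(term)

# Assemble the full 5x5 Fisher information matrix and compare with the
# elements g_11, ..., g_55 computed by hand in this appendix.
G = sp.Matrix(5, 5, lambda i, j: g_elem(i, j))
sp.pprint(G)
```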

References

  1. Kass, R.E. The geometry of asymptotic inference. Stat. Sci. 1989, 4, 188–234. [Google Scholar] [CrossRef]
  2. Amari, S.I. Differential Geometric Methods in Statistics; Springer: Berlin/Heidelberg, Germany, 1995. [Google Scholar]
  3. Critchley, F.; Marriott, P. Information Geometry and Its Applications: An Overview. In Computational Information Geometry: For Image and Signal Processing; Nielsen, F., Critchley, F., Dodson, C.T.J., Eds.; Springer International Publishing: Cham, Switzerland, 2017; pp. 1–31. [Google Scholar]
  4. Akaike, H. A new look at the statistical model identification. IEEE Trans. Autom. Control. 1974, 19, 716–723. [Google Scholar] [CrossRef]
  5. Rissanen, J. Modeling by shortest data description. Automatica 1978, 14, 465–471. [Google Scholar] [CrossRef]
  6. Herntier, T.; Ihou, K.E.; Smith, A.; Rangarajan, A.; Peter, A.M. Spherical minimum description length. Entropy 2018, 20, 575. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  7. Rényi, A. On measures of entropy and information. In Proceedings of the Fourth Berkeley Symposium on Mathematical Statistics and Probability, Berkeley, CA, USA, 1961 June 20; Volume 1. [Google Scholar]
  8. Miller, R.E. Dynamic Optimization and Economic Applications; McGraw-Hill Inc.: New York, NY, USA, 1979. [Google Scholar]
  9. Gel’fand, I.; Fomin, S. Calculus of Variations by I.M. Gelfand and S.V. Fomin; Selected Russian Publications in the Mathematical Sciences; Prentice-Hall: Upper Saddle River, NJ, USA, 1964. [Google Scholar]
  10. Ben-David, S.; Blitzer, J.; Crammer, K.; Kulesza, A.; Pereira, F.; Vaughan, J. A theory of learning from different domains. Mach. Learn. 2010, 79, 151–175. [Google Scholar] [CrossRef] [Green Version]
  11. Ganin, Y.; Ustinova, E.; Ajakan, H.; Germain, P.; Larochelle, H.; Laviolette, F.; Marchand, M.; Lempitsky, V. Domain-adversarial training of neural networks. J. Mach. Learn. Res. 2016, 17, 2096–2130. [Google Scholar]
  12. Sun, B.; Feng, J.; Saenko, K. Return of frustratingly easy domain adaptation. In Proceedings of the AAAI Conference on Artificial Intelligence, Phoenix, AZ, USA, 12–17 February 2016; Volume 30. [Google Scholar]
  13. Rissanen, J. Stochastic complexity. J. R. Stat. Soc. 1987, 49, 223–239. [Google Scholar] [CrossRef]
  14. Schwarz, G. Estimating the dimension of a model. Ann. Stat. 1978, 6, 461–464. [Google Scholar] [CrossRef]
  15. Efron, B. Defining the curvature of a statistical problem (with applications to second order efficiency). Ann. Stat. 1975, 3, 1189–1242. [Google Scholar] [CrossRef]
  16. Felice, D.; Ay, N. Towards a canonical divergence within information geometry. Inf. Geom. 2021, 4, 65–130. [Google Scholar] [CrossRef]
  17. Fisher, R.A. On the mathematical foundations of theoretical statistics. Philos. Trans. R. Soc. Lond. A 1922, 222, 309–368. [Google Scholar]
  18. Kullback, S.; Leibler, R.A. On information and sufficiency. Ann. Math. Stat. 1951, 22, 79–86. [Google Scholar] [CrossRef]
  19. Bhattacharyya, A. On a measure of divergence between two multinomial populations. Sankhyā: Indian J. Stat. (1933–1960) 1946, 7, 401–406. [Google Scholar]
  20. Hotelling, H. Spaces of statistical parameters. Bull. Am. Math. Soc. 1930, 36, 191. [Google Scholar]
  21. Rao, C.R. Information and accuracy attainable in estimation of statistical parameters. Bull. Calcutta Math. Soc. 1945, 37, 81–91. [Google Scholar]
  22. Čencov, N. Algebraic foundation of mathematical statistics. Ser. Stat. 1978, 9, 267–276. [Google Scholar] [CrossRef]
  23. Isham, V.; Murray, M. Differential Geometry and Statistics; Routledge: London, UK, 2017. [Google Scholar]
  24. Skovgaard, L.T. A Riemannian geometry of the multivariate normal model. Scand. J. Stat. 1984, 11, 211–223. [Google Scholar]
  25. Manté, C. The Rao’s distance between negative binomial distributions for exploratory analyses and Goodness-of-Fit testing. In Proceedings of the 61st World Statistics Congress—ISI2017, Marrakech, Maroc, 16–21 July 2017. [Google Scholar]
  26. Reverter, F.; Oller, J. Computing the Rao distance for Gamma distributions. J. Comput. Appl. Math. 2003, 157, 155–167. [Google Scholar] [CrossRef] [Green Version]
  27. Arwini, K.A.; Dodson, C.T. Information Geometry; Springer: Berlin/Heidelberg, Germany, 2008. [Google Scholar]
  28. Nielsen, F. The many faces of information geometry. Not. Am. Math. Soc. 2022, 69, 36–45. [Google Scholar] [CrossRef]
  29. Calvo, M.; Oller, J.M. An explicit solution of information geodesic equation for the multivatriate normal model. Stat. Risk Model. 1991, 9, 119–138. [Google Scholar]
  30. Burbea, J. Informative geometry of probability spaces. Expo. Math. 1986, 4, 347–378. [Google Scholar]
  31. Costa, S.I.R.; Santos, S.A.; Strapasson, J.E. Fisher information distance: A geometrical reading. Discret. Appl. Math. 2015, 197, 59–69. [Google Scholar] [CrossRef]
  32. Peter, A.M.; Rangarajan, A. Information geometry for landmark shape analysis: Unifying shape representation and deformation. IEEE Trans. PAMI 2009, 31, 337–350. [Google Scholar] [CrossRef] [Green Version]
  33. Cafaro, C.; Mancini, S. Quantifying the complexity of geodesic paths on curved statistical manifolds through information geometric entropies and Jacobi fields. Phys. Nonlinear Phenom. 2011, 240, 607–618. [Google Scholar] [CrossRef] [Green Version]
  34. Ciaglia, F.M.; Cosmo, F.D.; Felice, D.; Mancini, S.; Marmo, G.; Pérez-Pardo, J.M. Aspects of geodesical motion with Fisher-Rao metric: Classical and quantum. Open Syst. Inf. Dyn. 2018, 25, 1850005. [Google Scholar] [CrossRef] [Green Version]
  35. Yoshizawa, S.; Tanabe, K. Dual differential geometry associated with the Kullback-Leibler information on the Gaussian distributions and its 2-parameter deformations. SUT J. Math. 1999, 35, 113–137. [Google Scholar] [CrossRef]
  36. Bormashenko, E. Contact angles of sessile droplets deposited on rough and flat surfaces in the presence of external fields. Math. Model. Nat. Phenom. 2012, 7, 1–5. [Google Scholar] [CrossRef] [Green Version]
  37. Gladkov, S.O. A contribution to the computation of the Young Modulus. J. Eng. Phys. Thermophys. 2003, 76, 1126–1130. [Google Scholar] [CrossRef]
  38. Halkin, H. Necessary conditions for optimal control problems with infinite horizons. Econometrica 1974, 42, 267–272. [Google Scholar] [CrossRef]
  39. Kamihigashi, T. Transversality Conditions and Dynamic Economic Behaviour. In The New Palgrave Dictionary of Economics; Palgrave Macmillan: London, UK, 2018; pp. 13858–13862. [Google Scholar]
  40. Michel, P. On the transversality condition in infinite horizon optimal problems. Econometrica 1982, 50, 975–985. [Google Scholar] [CrossRef] [Green Version]
  41. Leung, S.F. Transversality condition and optimality in a class of infinite horizon continuous time economic models. J. Econ. Theory 1991, 54, 224–233. [Google Scholar] [CrossRef]
  42. Moroz, A. 2-Some general optimal control problems useful for biokinetics. In The Common Extremalities in Biology and Physics, 2nd ed.; Moroz, A., Ed.; Elsevier: London, UK, 2012; pp. 43–110. [Google Scholar]
  43. Li, C.F. A key role of transversality condition in quantization of photon orbital angular momentum. arXiv 2016, arXiv:1609.07717. [Google Scholar]
Figure 1. Shown above in (a) is the shortest path (dashed line) from a prescribed initial distribution with diagonal covariance matrix, $\sigma_1^2 = 7$, $\sigma_2^2 = 2$, to the closest isotropic distribution. The final distribution has $\sigma_1^2 = \sigma_2^2 = 3.74$. The red solid line is the transversality constraint $\sigma_1^2 = \sigma_2^2$ and represents the isotropic submanifold. Additionally illustrated, in blue, is the Euclidean path, which is clearly the straight-line path resulting from an identity metric tensor. In (b) are the paths showing the value of each element of $\Sigma$ as the distributions move towards the transversality constraint.
Figure 2. In (a) is the shortest path (blue line) from a prescribed initial distribution with an off-diagonal covariance element of σ 12 = 3 . Additionally, the path from (a) (dashed black) is shown to illustrate differences in the variances of the terminal distribution. The final distribution, when starting with a full covariance, has σ 1 2 = σ 2 2 = 2.24 . The red line above is the transversality constraint σ 1 2 = σ 2 2 , and represents the isotropic submanifold. Figure (b) captures the path of all parameters from the initial to the final distribution. Figure (c) highlights the values of σ 12 for each distribution in the geodesic on the manifold.
Figure 3. Above are evolutions of the geodesic from an initial isotropic distribution to a final isotropic distribution with a different mean vector. In (a), the values of all five parameters are shown at each iteration. Figure (b) highlights the values of σ 12 showing that it leaves the constraint surface and acquires negative values. The individual variances of the variables also temporarily abandon their required isotropicity as seen in (c). In (c), the dotted line shows the path of the σ 1 2 and σ 2 2 and the solid line shows the isotropic constraint surface.
Figure 4. The ellipses above illustrate uncertainty contours for three different density functions along the geodesic. The initial distribution, shown in the top left, and the final distribution, in the bottom right, are isotropic. The uncertainty evolution is clearly visible in the intermediate distribution, which acquires negative covariance values as it "reaches" towards the final distribution.
Figure 5. Similar to Figure 4, uncertainty level curves of three densities along the geodesic are shown. This time, the intermediate distribution acquires σ 12 > 0 along the geodesic as the distributions move from Quadrant III to Quadrant I along the dotted path.
Figure 6. Illustration of geodesic paths from an initial isotropic distribution with a mean vector in Quadrant III to a final isotropic distribution with a mean vector in Quadrant I. In (a), the evolutions of all five parameters are shown. Figure (b) highlights the values of $\sigma_{12}$, showing that it leaves the constraint surface and acquires positive values, in contrast to the previous example. The individual variances of the variables also temporarily abandon their required isotropicity, as seen in Figure 3c. In (c), the dotted line shows the path of $\sigma_1^2$ and $\sigma_2^2$, with the solid red line representing the isotropic constraint surface.
Figure 7. Alternate visualization of the resulting geodesic when both endpoints are allowed to vary. The uncertainty contour ellipses of the initial, intermediate and final distributions are also shown. The initial distribution, represented by the bottom right ellipse, has components of the mean vector that are equal. The final distribution, at the top left of the path, is isotropic.
Figure 8. Illustration of geodesics resulting from initial and terminal variable-endpoint boundary conditions. Figure (a) shows the behavior of all parameters along the geodesic. Figure (b) shows how the geodesic (dashed) evolves the initial distribution to reach the constraint (solid) surface defined by the means. Similarly, Figure (c) shows the evolution of the final distribution to the isotropic covariance constraint.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

