Article

α-Geodesical Skew Divergence

Masanari Kimura and Hideitsu Hino
1 Department of Statistical Science, School of Multidisciplinary Sciences, The Graduate University for Advanced Studies (SOKENDAI), Kanagawa 240-0193, Japan
2 The Institute of Statistical Mathematics, Tokyo 190-0014, Japan
* Author to whom correspondence should be addressed.
Entropy 2021, 23(5), 528; https://doi.org/10.3390/e23050528
Submission received: 1 April 2021 / Revised: 22 April 2021 / Accepted: 24 April 2021 / Published: 25 April 2021

Abstract

The asymmetric skew divergence smooths one of the distributions by mixing it, to a degree determined by the parameter λ, with the other distribution. Such a divergence is an approximation of the KL divergence that does not require the target distribution to be absolutely continuous with respect to the source distribution. In this paper, an information geometric generalization of the skew divergence called the α-geodesical skew divergence is proposed, and its properties are studied.

1. Introduction

Let $(\mathcal{X}, \mathcal{F}, \mu)$ be a measure space, where $\mathcal{X}$ denotes the sample space, $\mathcal{F}$ the σ-algebra of measurable events, and $\mu$ a positive measure. The set of strictly positive probability measures $\mathcal{P}$ is defined as
$$\mathcal{P} \equiv \Big\{ f \;\Big|\; f(x) > 0 \ (\forall x \in \mathcal{X}), \ \int_{\mathcal{X}} f(x)\, d\mu(x) = 1 \Big\},$$
and the set of nonnegative probability measures $\mathcal{P}_+$ is defined as
$$\mathcal{P}_+ \equiv \Big\{ f \;\Big|\; f(x) \geq 0 \ (\forall x \in \mathcal{X}), \ \int_{\mathcal{X}} f(x)\, d\mu(x) = 1 \Big\}.$$
Then a number of divergences that appear in statistics and information theory [1,2] are introduced.
Definition 1
(Kullback–Leibler divergence [3]). The Kullback–Leibler divergence or KL-divergence $D_{KL} : \mathcal{P}_+ \times \mathcal{P} \to [0, \infty]$ is defined between two Radon–Nikodym densities p and q of μ-absolutely continuous probability measures by
$$D_{KL}[p \,\|\, q] := \int_{\mathcal{X}} p \ln \frac{p}{q}\, d\mu.$$
The KL-divergence is a measure of the difference between two probability distributions in statistics and information theory [4,5,6,7]. It is also called the relative entropy and is known not to satisfy the axioms of a distance. Because the KL-divergence is asymmetric, several symmetrizations have been proposed in the literature [8,9,10].
Definition 2
(Jensen–Shannon divergence [8]). The Jensen–Shannon divergence or JS-divergence $D_{JS} : \mathcal{P} \times \mathcal{P} \to [0, \infty)$ is defined between two Radon–Nikodym densities p and q of μ-absolutely continuous probability measures by
$$D_{JS}[p \,\|\, q] := \frac{1}{2}\Big( D_{KL}\Big[ p \,\Big\|\, \frac{p+q}{2} \Big] + D_{KL}\Big[ q \,\Big\|\, \frac{p+q}{2} \Big] \Big) = \frac{1}{2} \int_{\mathcal{X}} \Big( p \ln \frac{2p}{p+q} + q \ln \frac{2q}{p+q} \Big) d\mu = D_{JS}[q \,\|\, p].$$
The JS-divergence is a symmetrized and smoothed version of the KL-divergence, and it is bounded as
$$0 \leq D_{JS}[p \,\|\, q] \leq \ln 2.$$
This property contrasts with the fact that KL-divergence is unbounded.
Definition 3
(Jeffreys divergence [11]). The Jeffreys divergence $D_J : \mathcal{P} \times \mathcal{P} \to [0, \infty]$ is defined between two Radon–Nikodym densities p and q of μ-absolutely continuous probability measures by
$$D_J[p \,\|\, q] := D_{KL}[p \,\|\, q] + D_{KL}[q \,\|\, p].$$
Such symmetrized KL-divergences have appeared in various pieces of literature [12,13,14,15,16,17,18].
For continuous distributions, the KL-divergence is known to suffer from a computational difficulty. More specifically, if q takes values that are small relative to p, the value of $D_{KL}[p\|q]$ may diverge to infinity. The simplest idea to avoid this is to take a very small $\epsilon > 0$ and modify $D_{KL}[p\|q]$ as follows:
$$D^{+}_{KL}[p \,\|\, q] := \int_{\mathcal{X}} p \ln \frac{p}{q + \epsilon}\, d\mu.$$
However, such an extension is unnatural in the sense that $q + \epsilon$ no longer satisfies the normalization condition of a probability measure: $\int_{\mathcal{X}} (q(x) + \epsilon)\, d\mu(x) \neq 1$. As a more natural way to stabilize the KL-divergence, the following skew divergence has been proposed:
Definition 4
(Skew divergence [8,19]). The skew divergence $D_S^{(\lambda)} : \mathcal{P} \times \mathcal{P} \to [0, \infty]$ is defined between two Radon–Nikodym densities p and q of μ-absolutely continuous probability measures by
$$D_S^{(\lambda)}[p \,\|\, q] := D_{KL}[p \,\|\, (1-\lambda)p + \lambda q] = \int_{\mathcal{X}} p \ln \frac{p}{(1-\lambda)p + \lambda q}\, d\mu,$$
where $\lambda \in [0,1]$.
Skew divergences have been experimentally shown to perform better in applications such as natural language processing [20,21], image recognition [22,23] and graph analysis [24,25]. In addition, there is research on quantum generalization of skew divergence [26].
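To make this motivation concrete, the following minimal NumPy sketch (an illustration, not the implementation released with this paper) shows how the KL-divergence blows up when q vanishes on part of the support of p, how the ad hoc ϵ-correction breaks normalization, and how the skew divergence of Definition 4 stays finite without any ad hoc constant; the specific distributions are arbitrary choices for the example.

```python
import numpy as np

def kl(p, q):
    # KL-divergence between discrete distributions; infinite if q_i = 0 while p_i > 0.
    mask = p > 0
    with np.errstate(divide="ignore"):
        return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

p = np.array([0.5, 0.5, 0.0])
q = np.array([0.5, 0.0, 0.5])          # q assigns zero mass where p does not

print(kl(p, q))                        # inf: p is not absolutely continuous w.r.t. q

eps = 1e-8
q_eps = q + eps
print(q_eps.sum())                     # 1 + 3e-8: no longer a probability vector
print(kl(p, q_eps))                    # finite, but dominated by the arbitrary choice of eps

lam = 0.5
print(kl(p, (1 - lam) * p + lam * q))  # skew divergence D_S^{(lam)}[p || q]: finite by construction
```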
The main contributions of this paper are summarized as follows:
  • Several symmetrized divergences or skew divergences are generalized from an information geometry perspective.
  • It is proved that the natural skew divergence for the exponential family is equivalent to the scaled KL-divergence.
  • Several properties of geometrically generalized skew divergence are proved. Specifically, the functional space associated with the proposed divergence is shown to be a Banach space.
Implementation of the proposed divergence is available on GitHub (https://github.com/nocotan/geodesical_skew_divergence (accessed on 3 April 2021)).

2. α-Geodesical Skew Divergence

The skew divergence is generalized based on the following function.
Definition 5
(f-interpolation). For any $a, b \in \mathbb{R}$, $\lambda \in [0,1]$ and $\alpha \in \mathbb{R}$, the f-interpolation is defined as
$$m_f^{(\lambda,\alpha)}(a,b) = f_\alpha^{-1}\big( (1-\lambda) f_\alpha(a) + \lambda f_\alpha(b) \big),$$
where
$$f_\alpha(x) = \begin{cases} x^{\frac{1-\alpha}{2}} & (\alpha \neq 1), \\ \ln x & (\alpha = 1) \end{cases}$$
is the function that defines the f-mean [27].
The f-mean function satisfies
$$\lim_{\alpha \to \infty} f_\alpha(x) = \begin{cases} \infty & (|x| < 1), \\ 1 & (|x| = 1), \\ 0 & (|x| > 1), \end{cases} \qquad \lim_{\alpha \to -\infty} f_\alpha(x) = \begin{cases} 0 & (|x| < 1), \\ 1 & (|x| = 1), \\ \infty & (|x| > 1). \end{cases}$$
It is easy to see that this family includes various known weighted means, including the e-mixture and m-mixture for $\alpha = \pm 1$ from the information geometry literature [28]:
$$\begin{aligned}
(\alpha = 1) \quad & m_f^{(\lambda,1)}(a,b) = \exp\{ (1-\lambda)\ln a + \lambda \ln b \}, \\
(\alpha = -1) \quad & m_f^{(\lambda,-1)}(a,b) = (1-\lambda)a + \lambda b, \\
(\alpha = 0) \quad & m_f^{(\lambda,0)}(a,b) = \big( (1-\lambda)\sqrt{a} + \lambda\sqrt{b} \big)^2, \\
(\alpha = 3) \quad & m_f^{(\lambda,3)}(a,b) = \frac{1}{(1-\lambda)\frac{1}{a} + \lambda\frac{1}{b}}, \\
(\alpha = \infty) \quad & m_f^{(\lambda,\infty)}(a,b) = \min\{a, b\}, \\
(\alpha = -\infty) \quad & m_f^{(\lambda,-\infty)}(a,b) = \max\{a, b\}.
\end{aligned}$$
The inverse function $f_\alpha^{-1}$ is convex when $\alpha \in [-1, 1]$ and concave when $\alpha \in (-\infty, -1] \cup (1, \infty)$. It is worth noting that the f-interpolation is a special case of the Kolmogorov–Nagumo average [29,30,31] when α is restricted to the interval $[-1, 1]$.
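The following short NumPy sketch (illustrative code, not the authors' released implementation) implements $f_\alpha$, its inverse, and the f-interpolation, and numerically checks the special cases listed above; the inputs a, b and the weight λ are arbitrary.

```python
import numpy as np

def f_alpha(x, alpha):
    # f_alpha(x) = x^{(1 - alpha)/2} for alpha != 1, and ln x for alpha = 1 (Definition 5).
    return np.log(x) if alpha == 1 else x ** ((1.0 - alpha) / 2.0)

def f_alpha_inv(y, alpha):
    return np.exp(y) if alpha == 1 else y ** (2.0 / (1.0 - alpha))

def m_f(lam, alpha, a, b):
    # f-interpolation m_f^{(lambda, alpha)}(a, b).
    return f_alpha_inv((1 - lam) * f_alpha(a, alpha) + lam * f_alpha(b, alpha), alpha)

a, b, lam = 2.0, 8.0, 0.25
print(m_f(lam, -1, a, b), (1 - lam) * a + lam * b)                         # arithmetic mean (m-mixture)
print(m_f(lam, 1, a, b), np.exp((1 - lam) * np.log(a) + lam * np.log(b)))  # geometric mean (e-mixture)
print(m_f(lam, 0, a, b), ((1 - lam) * np.sqrt(a) + lam * np.sqrt(b)) ** 2)
print(m_f(lam, 3, a, b), 1.0 / ((1 - lam) / a + lam / b))                  # harmonic mean
print(m_f(lam, 200, a, b), min(a, b))                                      # large alpha approaches min{a, b}
print(m_f(lam, -200, a, b), max(a, b))                                     # large negative alpha approaches max{a, b}
```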
In order to consider the geometric meaning of this function, the notion of the statistical manifold is introduced.

2.1. Statistical Manifold

Let
$$S = \{ p_\xi = p(x; \xi) \in \mathcal{P} \mid \xi = (\xi^1, \ldots, \xi^n) \in \Xi \}$$
be a family of probability distributions on $\mathcal{X}$, where each element $p_\xi$ is parameterized by n real-valued variables $\xi = (\xi^1, \ldots, \xi^n) \in \Xi \subset \mathbb{R}^n$. The set S is called a statistical model and is a subset of $\mathcal{P}$. We also write $(S, g_{ij})$ for a statistical model equipped with the Riemannian metric $g_{ij}$. In particular, let $g_{ij}$ be the Fisher–Rao metric, the Riemannian metric induced from the Fisher information matrix [32].
In the rest of this paper, the abbreviations
$$\partial_i = \partial_{\xi^i} = \frac{\partial}{\partial \xi^i}, \qquad \ell = \ell_x(\xi) = \ln p_\xi(x)$$
are used.
Definition 6
(Christoffel symbols). Let $g_{ij}$ be a Riemannian metric, in particular the Fisher information matrix; then the Christoffel symbols are given by
$$\Gamma_{ij,k} = \frac{1}{2}\big( \partial_i g_{jk} + \partial_j g_{ik} - \partial_k g_{ij} \big), \qquad i, j, k = 1, \ldots, n.$$
Definition 7
(Levi-Civita connection). Let g be the Fisher–Riemannian metric on S, which is a 2-covariant tensor defined locally by
$$g(X_\xi, Y_\xi) = \sum_{i,j=1}^n g_{ij}(\xi)\, a^i(\xi)\, b^j(\xi),$$
where $X_\xi = \sum_{i=1}^n a^i(\xi)\, \partial_i p_\xi$ and $Y_\xi = \sum_{i=1}^n b^i(\xi)\, \partial_i p_\xi$ are vector fields in the 0-representation on S. Then, its associated Levi-Civita connection $\nabla^{(0)}$ is defined by
$$g\big( \nabla^{(0)}_{\partial_i} \partial_j, \partial_k \big) = \Gamma_{ij,k}.$$
The fact that $\nabla^{(0)}$ is a metrical connection can be written locally as
$$\partial_k g_{ij} = \Gamma_{ki,j} + \Gamma_{kj,i}.$$
It is worth noting that the superscript α of $\nabla^{(\alpha)}$ corresponds to a parameter of the connection. Based on the above definitions, several connections parameterized by α are introduced. The case α = 0 corresponds to the Levi-Civita connection induced by the Fisher metric.
Definition 8
($\nabla^{(1)}$-connection). Let g be the Fisher–Riemannian metric on S, which is a 2-covariant tensor. Then, the $\nabla^{(1)}$-connection is defined by
$$g\big( \nabla^{(1)}_{\partial_i} \partial_j, \partial_k \big) = \mathbb{E}_\xi[\, \partial_i \partial_j \ell \; \partial_k \ell \,].$$
It can also be expressed equivalently by writing the Christoffel coefficients explicitly:
$$\Gamma^{(1)}_{ij,k}(\xi) = \mathbb{E}_\xi[\, \partial_i \partial_j \ell \; \partial_k \ell \,].$$
Definition 9
($\nabla^{(-1)}$-connection). Let g be the Fisher–Riemannian metric on S, which is a 2-covariant tensor. Then, the $\nabla^{(-1)}$-connection is defined by
$$g\big( \nabla^{(-1)}_{\partial_i} \partial_j, \partial_k \big) = \Gamma^{(-1)}_{ij,k}(\xi) = \mathbb{E}_\xi[\, (\partial_i \partial_j \ell + \partial_i \ell\, \partial_j \ell)\, \partial_k \ell \,].$$
In the following, the ∇-flatness is considered with respect to the corresponding coordinates system. More details can be found in [28].
Proposition 1.
The exponential family is $\nabla^{(1)}$-flat.
Proposition 2.
The exponential family is $\nabla^{(-1)}$-flat if and only if it is $\nabla^{(0)}$-flat.
Proposition 3.
The mixture family is $\nabla^{(-1)}$-flat.
Proposition 4.
The mixture family is $\nabla^{(1)}$-flat if and only if it is $\nabla^{(0)}$-flat.
Proposition 5.
The relation between the foregoing three connections is given by
$$\nabla^{(0)} = \frac{1}{2}\big( \nabla^{(-1)} + \nabla^{(1)} \big).$$
Proof. 
It suffices to show
$$\Gamma^{(0)}_{ij,k} = \frac{1}{2}\big( \Gamma^{(-1)}_{ij,k} + \Gamma^{(1)}_{ij,k} \big).$$
From the definitions of $\Gamma^{(-1)}$ and $\Gamma^{(1)}$,
$$\Gamma^{(-1)}_{ij,k} + \Gamma^{(1)}_{ij,k} = \mathbb{E}_\xi[(\partial_i\partial_j\ell + \partial_i\ell\,\partial_j\ell)\,\partial_k\ell] + \mathbb{E}_\xi[\partial_i\partial_j\ell\,\partial_k\ell] = \mathbb{E}_\xi[(2\partial_i\partial_j\ell + \partial_i\ell\,\partial_j\ell)\,\partial_k\ell] = 2\,\mathbb{E}_\xi\Big[\Big(\partial_i\partial_j\ell + \frac{1}{2}\partial_i\ell\,\partial_j\ell\Big)\partial_k\ell\Big] = 2\,\Gamma^{(0)}_{ij,k},$$
which proves the proposition. □
The connections $\nabla^{(-1)}$ and $\nabla^{(1)}$ are two special connections on S associated with the mixture family and the exponential family, respectively. Moreover, they are related by the duality condition, and the following one-parameter family of connections is defined.
Definition 10
($\nabla^{(\alpha)}$-connection). For $\alpha \in \mathbb{R}$, the $\nabla^{(\alpha)}$-connection on the statistical model S is defined as
$$\nabla^{(\alpha)} = \frac{1+\alpha}{2}\,\nabla^{(1)} + \frac{1-\alpha}{2}\,\nabla^{(-1)}.$$
Proposition 6.
The components $\Gamma^{(\alpha)}_{ij,k}$ can be written as
$$\Gamma^{(\alpha)}_{ij,k} = \mathbb{E}_\xi\Big[ \Big( \partial_i\partial_j\ell + \frac{1-\alpha}{2}\,\partial_i\ell\,\partial_j\ell \Big)\partial_k\ell \Big].$$
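As an illustration of Propositions 1 and 6, the following sketch (an example under the stated Bernoulli assumption, not taken from the paper) estimates $\Gamma^{(\alpha)}_{11,1}$ by Monte Carlo for a one-parameter exponential family and confirms that it vanishes at α = 1, i.e., the family is flat under the $\nabla^{(1)}$-connection, while the α = −1 component is nonzero in general.

```python
import numpy as np

# Bernoulli model in natural-parameter form: p(x; theta) = exp(theta * x - psi(theta)),
# psi(theta) = log(1 + e^theta), x in {0, 1}. Here i = j = k = 1 and l = theta * x - psi(theta).
rng = np.random.default_rng(0)
theta = 0.3
mu = 1.0 / (1.0 + np.exp(-theta))      # psi'(theta) = E[x]
x = rng.binomial(1, mu, size=2_000_000).astype(float)

dl = x - mu                            # first derivative of l with respect to theta
d2l = -mu * (1.0 - mu)                 # second derivative of l (constant in x)

def gamma_alpha(alpha):
    # Gamma^{(alpha)}_{11,1} = E[(d2l + (1 - alpha)/2 * dl^2) * dl], estimated by Monte Carlo.
    return float(np.mean((d2l + (1 - alpha) / 2 * dl ** 2) * dl))

print(gamma_alpha(1.0))                # ~0: the exponential family is nabla^(1)-flat
print(gamma_alpha(-1.0))               # ~E[dl^3], nonzero in general
print(mu * (1 - mu) * (1 - 2 * mu))    # exact third central moment of the Bernoulli distribution
```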
The α-coordinate system associated with the $\nabla^{(\alpha)}$-connection is endowed with the α-geodesic, which is a straight line in the corresponding coordinate system. We then introduce some relevant notions.
Definition 11
(α-divergence [33]). Let α be a real parameter. The α-divergence between two probability vectors p and q is defined as
$$D_\alpha[p \,\|\, q] = \frac{4}{1-\alpha^2}\Big( 1 - \sum_i p_i^{\frac{1-\alpha}{2}} q_i^{\frac{1+\alpha}{2}} \Big).$$
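As a quick numerical check of this definition (an illustrative sketch with arbitrarily chosen probability vectors), the limits α → −1 and α → 1 recover the two orderings of the KL-divergence.

```python
import numpy as np

def kl(p, q):
    return float(np.sum(p * np.log(p / q)))

def alpha_div(p, q, alpha):
    # D_alpha[p || q] = 4 / (1 - alpha^2) * (1 - sum_i p_i^{(1-alpha)/2} q_i^{(1+alpha)/2})
    return 4.0 / (1.0 - alpha ** 2) * (1.0 - float(np.sum(p ** ((1 - alpha) / 2) * q ** ((1 + alpha) / 2))))

p = np.array([0.2, 0.5, 0.3])
q = np.array([0.4, 0.4, 0.2])

print(alpha_div(p, q, -0.999), kl(p, q))   # alpha -> -1 approaches KL[p || q]
print(alpha_div(p, q, 0.999), kl(q, p))    # alpha -> +1 approaches KL[q || p]
```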
The KL-divergence, which corresponds to the special case α = −1, induces the linear connection $\nabla^{(1)}$ as follows.
Proposition 7.
The diagonal part of the third mixed derivative of the KL-divergence is the negative of a Christoffel symbol:
$$\partial_{\xi^i}\partial_{\xi^j}\partial_{\xi_0^k} D_{KL}[p_{\xi_0} \,\|\, p_\xi]\Big|_{\xi=\xi_0} = -\Gamma^{(1)}_{ij,k}(\xi_0).$$
Proof. 
The second derivative with respect to the argument ξ is given by
$$\partial_{\xi^i}\partial_{\xi^j} D_{KL}[p_{\xi_0} \,\|\, p_\xi] = -\int_{\mathcal{X}} p_{\xi_0}(x)\, \partial_{\xi^i}\partial_{\xi^j}\ell_x(\xi)\, dx,$$
and differentiating it with respect to $\xi_0^k$ yields
$$\partial_{\xi^i}\partial_{\xi^j}\partial_{\xi_0^k} D_{KL}[p_{\xi_0} \,\|\, p_\xi] = \partial_{\xi_0^k}\Big( -\int_{\mathcal{X}} p_{\xi_0}(x)\, \partial_{\xi^i}\partial_{\xi^j}\ell_x(\xi)\, dx \Big) = -\int_{\mathcal{X}} p_{\xi_0}(x)\, \partial_{\xi^i}\partial_{\xi^j}\ell_x(\xi)\, \partial_{\xi_0^k}\ell_x(\xi_0)\, dx.$$
Then, taking the diagonal part $\xi = \xi_0$, one obtains
$$\partial_{\xi^i}\partial_{\xi^j}\partial_{\xi_0^k} D_{KL}[p_{\xi_0} \,\|\, p_\xi]\Big|_{\xi=\xi_0} = -\mathbb{E}_{\xi_0}[\, \partial_i\partial_j\ell(\xi_0)\, \partial_k\ell(\xi_0) \,] = -\Gamma^{(1)}_{ij,k}(\xi_0). \qquad \square$$
More generally, the α-divergence with $\alpha \in \mathbb{R}$ induces the $\nabla^{(\alpha)}$-connection.
Definition 12
(α-representation [34]). For a positive measure $m = (m_i)$, the coordinate system $\theta = (\theta^i)$ derived from the α-divergence is
$$\theta^i = m_i^{\frac{1-\alpha}{2}} = f_\alpha(m_i),$$
and $\theta^i$ is called the α-representation of the positive measure $m_i$.
Definition 13
(α-geodesic [28]). The α-geodesic connecting two probability vectors p(x) and q(x) is defined as
$$r_i(t) = c(t)\, f_\alpha^{-1}\big( (1-t) f_\alpha(p(x_i)) + t f_\alpha(q(x_i)) \big), \qquad t \in [0,1],$$
where $c(t)$ is the normalization factor determined so that
$$\sum_{i=1}^n r_i(t) = 1.$$
It is known that appropriate reparameterizations of the parameter t are necessary for a rigorous discussion in the space of probability measures [35,36]. However, as mentioned in the literature [35], an explicit expression for the reparameterizations $\tau_{p,a}$ and $\tau_{p,q}$ is unknown. A similar discussion has been made in the derivation of the $\phi_\beta$-path [37], where it is mentioned that the normalizing factor is unknown in general. Furthermore, the f-mean is not necessarily convex, depending on α. For these reasons, it is generally difficult to discuss α-geodesics on probability measures via normalization or reparameterization, and to avoid unnecessary complexity, the parameter t is assumed to be appropriately reparameterized.
Let $\psi_\alpha(\theta) = \frac{1-\alpha}{2}\sum_{i=1}^n m_i$. Then, the dual coordinate system η is given by $\eta = \nabla\psi_\alpha(\theta)$ as
$$\eta_i = (\theta^i)^{\frac{1+\alpha}{1-\alpha}} = f_{-\alpha}(m_i).$$
Hence, it is the (−α)-representation of $m_i$.

2.2. Generalization of Skew Divergences

From Definition 13, the f-interpolation can be considered as an unnormalized version of the α-geodesic. Using the notion of geodesics, the skew divergence is generalized in terms of information geometry as follows.
Definition 14
(α-Geodesical Skew Divergence). The α-geodesical skew divergence $D_{GS}^{(\alpha,\lambda)} : \mathcal{P} \times \mathcal{P} \to [0, \infty]$ is defined between two Radon–Nikodym densities p and q of μ-absolutely continuous probability measures by
$$D_{GS}^{(\alpha,\lambda)}[p \,\|\, q] := D_{KL}\big[ p \,\big\|\, m_f^{(\lambda,\alpha)}(p,q) \big] = \int_{\mathcal{X}} p \ln \frac{p}{m_f^{(\lambda,\alpha)}(p,q)}\, d\mu,$$
where $\alpha \in \mathbb{R}$ and $\lambda \in [0,1]$.
Some special cases of the α-geodesical skew divergence are listed below:
$$\begin{aligned}
(\alpha \in \mathbb{R},\ \lambda = 1) \quad & D_{GS}^{(\alpha,1)}[p\|q] = D_{KL}[p\|q], \\
(\alpha \in \mathbb{R},\ \lambda = 0) \quad & D_{GS}^{(\alpha,0)}[p\|q] = D_{KL}[p\|p] = 0, \\
(\alpha = 1,\ \lambda \in [0,1]) \quad & D_{GS}^{(1,\lambda)}[p\|q] = \lambda D_{KL}[p\|q] \quad (\text{scaled KL-divergence}), \\
(\alpha = -1,\ \lambda \in [0,1]) \quad & D_{GS}^{(-1,\lambda)}[p\|q] = D_S^{(\lambda)}[p\|q] \quad (\text{skew divergence}), \\
(\alpha = 0,\ \lambda \in [0,1]) \quad & D_{GS}^{(0,\lambda)}[p\|q] = \int_{\mathcal{X}} p \ln \frac{p}{\{(1-\lambda)\sqrt{p} + \lambda\sqrt{q}\}^2}\, d\mu, \\
(\alpha = 3,\ \lambda \in [0,1]) \quad & D_{GS}^{(3,\lambda)}[p\|q] = \int_{\mathcal{X}} p \ln \Big( (1-\lambda) + \lambda\frac{p}{q} \Big)\, d\mu, \\
(\alpha = \infty,\ \lambda \in [0,1]) \quad & D_{GS}^{(\infty,\lambda)}[p\|q] = \int_{\mathcal{X}} p \ln \frac{p}{\min\{p,q\}}\, d\mu, \\
(\alpha = -\infty,\ \lambda \in [0,1]) \quad & D_{GS}^{(-\infty,\lambda)}[p\|q] = \int_{\mathcal{X}} p \ln \frac{p}{\max\{p,q\}}\, d\mu.
\end{aligned}$$
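A minimal NumPy sketch of Definition 14 for discrete distributions (an illustration, not the authors' released implementation; the test vectors are arbitrary), verifying a few of the special cases above:

```python
import numpy as np

def f_alpha(x, alpha):
    return np.log(x) if alpha == 1 else x ** ((1.0 - alpha) / 2.0)

def f_alpha_inv(y, alpha):
    return np.exp(y) if alpha == 1 else y ** (2.0 / (1.0 - alpha))

def kl(p, q):
    return float(np.sum(p * np.log(p / q)))

def geodesical_skew_divergence(p, q, alpha, lam):
    # D_GS^{(alpha, lambda)}[p || q] = KL[p || m_f^{(lambda, alpha)}(p, q)] (Definition 14).
    m = f_alpha_inv((1 - lam) * f_alpha(p, alpha) + lam * f_alpha(q, alpha), alpha)
    return float(np.sum(p * np.log(p / m)))

p = np.array([0.2, 0.5, 0.3])
q = np.array([0.4, 0.4, 0.2])
lam = 0.3

print(geodesical_skew_divergence(p, q, alpha=2.0, lam=1.0), kl(p, q))        # lambda = 1: KL-divergence
print(geodesical_skew_divergence(p, q, alpha=1.0, lam=lam), lam * kl(p, q))  # alpha = 1: scaled KL-divergence
print(geodesical_skew_divergence(p, q, alpha=-1.0, lam=lam),
      kl(p, (1 - lam) * p + lam * q))                                        # alpha = -1: skew divergence
```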
Furthermore, the α-geodesical skew divergence is a special form of the generalized skew K-divergence [10,38], a family of divergences based on abstract means. In this paper, the skew K-divergence touched upon in [10] is characterized in terms of α-geodesics on positive measures, and its geometric and functional analytic properties are investigated. When the interpolation is a Kolmogorov–Nagumo average (i.e., when the function $f_\alpha$ in Equation (8) is a strictly monotone convex function), the geodesic has been shown to be well-defined [37].

2.3. Symmetrization of α -Geodesical Skew Divergence

It is easy to symmetrize the α -geodesical skew divergence as follows.
Definition 15
(Symmetrized α-Geodesical Skew Divergence). The symmetrized α-geodesical skew divergence $\bar{D}_{GS}^{(\alpha,\lambda)} : \mathcal{P} \times \mathcal{P} \to [0, \infty]$ is defined between two Radon–Nikodym densities p and q of μ-absolutely continuous probability measures by
$$\bar{D}_{GS}^{(\alpha,\lambda)}[p \,\|\, q] := \frac{1}{2}\Big( D_{GS}^{(\alpha,\lambda)}[p \,\|\, q] + D_{GS}^{(\alpha,\lambda)}[q \,\|\, p] \Big),$$
where $\alpha \in \mathbb{R}$ and $\lambda \in [0,1]$.
It is seen that $\bar{D}_{GS}^{(\alpha,\lambda)}[p\|q]$ includes several symmetrized divergences:
$$\begin{aligned}
\bar{D}_{GS}^{(\alpha,1)}[p\|q] &= \frac{1}{2}\big( D_{KL}[p\|q] + D_{KL}[q\|p] \big) \quad (\text{half of the Jeffreys divergence}), \\
\bar{D}_{GS}^{(-1,\frac{1}{2})}[p\|q] &= \frac{1}{2}\Big( D_{KL}\Big[p\,\Big\|\,\frac{p+q}{2}\Big] + D_{KL}\Big[q\,\Big\|\,\frac{p+q}{2}\Big] \Big) \quad (\text{JS-divergence}), \\
\bar{D}_{GS}^{(-1,\lambda)}[p\|q] &= \frac{1}{2}\big( D_{KL}[p\,\|\,(1-\lambda)p + \lambda q] + D_{KL}[q\,\|\,(1-\lambda)q + \lambda p] \big).
\end{aligned}$$
The last one is the λ -JS-divergence [39], which is a generalization of the JS-divergence.
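The following sketch (illustrative, with arbitrary test vectors) checks two of these identities numerically: the symmetrized divergence with (α, λ) = (−1, 1/2) reproduces the JS-divergence, and λ = 1 gives half of the Jeffreys divergence.

```python
import numpy as np

def f_alpha(x, alpha):
    return np.log(x) if alpha == 1 else x ** ((1.0 - alpha) / 2.0)

def f_alpha_inv(y, alpha):
    return np.exp(y) if alpha == 1 else y ** (2.0 / (1.0 - alpha))

def kl(p, q):
    return float(np.sum(p * np.log(p / q)))

def d_gs(p, q, alpha, lam):
    m = f_alpha_inv((1 - lam) * f_alpha(p, alpha) + lam * f_alpha(q, alpha), alpha)
    return float(np.sum(p * np.log(p / m)))

def d_gs_sym(p, q, alpha, lam):
    # Symmetrized alpha-geodesical skew divergence (Definition 15).
    return 0.5 * (d_gs(p, q, alpha, lam) + d_gs(q, p, alpha, lam))

p = np.array([0.2, 0.5, 0.3])
q = np.array([0.4, 0.4, 0.2])

js = 0.5 * (kl(p, (p + q) / 2) + kl(q, (p + q) / 2))
print(d_gs_sym(p, q, alpha=-1.0, lam=0.5), js)                         # (alpha, lambda) = (-1, 1/2): JS-divergence
print(d_gs_sym(p, q, alpha=-1.0, lam=1.0), (kl(p, q) + kl(q, p)) / 2)  # lambda = 1: half of Jeffreys divergence
```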

3. Properties of α-Geodesical Skew Divergence

In this section, the properties of the α -geodesical skew divergence are studied.
Proposition 8
(Non-negativity of the α-geodesical skew divergence). For $\alpha \geq -1$ and $\lambda \in [0,1]$, the α-geodesical skew divergence $D_{GS}^{(\alpha,\lambda)}[p\|q]$ satisfies the following inequality:
$$D_{GS}^{(\alpha,\lambda)}[p \,\|\, q] \geq 0.$$
Proof. 
When λ is fixed, the f-interpolation has the following inverse monotonicity with respect to α:
$$m_f^{(\lambda,\alpha)}(p,q) \geq m_f^{(\lambda,\alpha')}(p,q) \qquad (\alpha \leq \alpha').$$
From Gibbs' inequality [40] and Equation (29), one obtains
$$D_{GS}^{(\alpha,\lambda)}[p \,\|\, q] = \int_{\mathcal{X}} p \ln \frac{p}{m_f^{(\lambda,\alpha)}(p,q)}\, d\mu \geq \Big( \int_{\mathcal{X}} p\, d\mu \Big) \ln \frac{\int_{\mathcal{X}} p\, d\mu}{\int_{\mathcal{X}} m_f^{(\lambda,\alpha)}(p,q)\, d\mu} \geq 1 \cdot \ln 1 = 0. \qquad \square$$
Proposition 9
(Asymmetry of the α-geodesical skew divergence). The α-geodesical skew divergence is not symmetric in general:
$$D_{GS}^{(\alpha,\lambda)}[p \,\|\, q] \neq D_{GS}^{(\alpha,\lambda)}[q \,\|\, p].$$
Proof. 
For example, if λ = 1, then for all $\alpha \in \mathbb{R}$ it holds that
$$D_{GS}^{(\alpha,1)}[p \,\|\, q] - D_{GS}^{(\alpha,1)}[q \,\|\, p] = D_{KL}[p \,\|\, q] - D_{KL}[q \,\|\, p],$$
and the asymmetry of the KL-divergence results in the asymmetry of the α-geodesical skew divergence. □
When a function f ( x ) of x [ 0 , 1 ] satisfies f ( x ) = f ( 1 x ) , it is referred to as centrosymmetric.
Proposition 10
(Non-centrosymmetricity of the α-geodesical skew divergence with respect to λ). The α-geodesical skew divergence is in general not centrosymmetric with respect to the parameter $\lambda \in [0,1]$:
$$D_{GS}^{(\alpha,\lambda)}[p \,\|\, q] \neq D_{GS}^{(\alpha,1-\lambda)}[p \,\|\, q].$$
Proof. 
For example, if λ = 1, then for all $\alpha \in \mathbb{R}$ we have
$$D_{GS}^{(\alpha,\lambda)}[p\|q] - D_{GS}^{(\alpha,1-\lambda)}[p\|q] = D_{GS}^{(\alpha,1)}[p\|q] - D_{GS}^{(\alpha,0)}[p\|q] = \int_{\mathcal{X}} p \ln\frac{p}{q}\, d\mu - \int_{\mathcal{X}} p \ln\frac{p}{p}\, d\mu = \int_{\mathcal{X}} p \ln\frac{p}{q}\, d\mu \neq 0$$
whenever $p \neq q$. □
Proposition 11
(Monotonicity of the α-geodesical skew divergence with respect to α). The α-geodesical skew divergence satisfies the following inequality for all $\alpha, \alpha' \in \mathbb{R}$ and $\lambda \in [0,1]$:
$$D_{GS}^{(\alpha,\lambda)}[p \,\|\, q] \leq D_{GS}^{(\alpha',\lambda)}[p \,\|\, q] \qquad (\alpha \leq \alpha').$$
Proof. 
Obvious from the inverse monotonicity of the f-interpolation (29) and the monotonicity of the logarithmic function. □
Figure 1 shows the monotonicity of the α-geodesical skew divergence with respect to α. In this figure, the divergence is calculated between two binomial distributions.
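This behavior can be checked numerically with the same binomial pair as in Figure 1. The sketch below (illustrative; it assumes SciPy for the binomial pmf and fixes λ = 0.5) evaluates the divergence on a grid of α values and verifies that the sequence is non-decreasing.

```python
import numpy as np
from scipy.stats import binom

def f_alpha(x, alpha):
    return np.log(x) if alpha == 1 else x ** ((1.0 - alpha) / 2.0)

def f_alpha_inv(y, alpha):
    return np.exp(y) if alpha == 1 else y ** (2.0 / (1.0 - alpha))

def d_gs(p, q, alpha, lam):
    m = f_alpha_inv((1 - lam) * f_alpha(p, alpha) + lam * f_alpha(q, alpha), alpha)
    return float(np.sum(p * np.log(p / m)))

k = np.arange(11)
p = binom.pmf(k, 10, 0.3)   # B(10, 0.3)
q = binom.pmf(k, 10, 0.7)   # B(10, 0.7)

values = [d_gs(p, q, alpha, lam=0.5) for alpha in np.linspace(-3, 3, 13)]
print(np.round(values, 4))
print(np.all(np.diff(values) >= 0))   # True: non-decreasing in alpha (Proposition 11)
```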
Proposition 12
(Subadditivity of the α-geodesical skew divergence with respect to α). The α-geodesical skew divergence satisfies the following inequality for all $\alpha, \beta \in \mathbb{R}$ and $\lambda \in [0,1]$:
$$D_{GS}^{(\alpha+\beta,\lambda)}[p \,\|\, q] \leq D_{GS}^{(\alpha,\lambda)}[p \,\|\, q] + D_{GS}^{(\beta,\lambda)}[p \,\|\, q].$$
Proof. 
For some α and λ, $m_f^{(\lambda,\alpha)}$ takes the form of the Kolmogorov mean [29], which is obvious from its continuity, monotonicity and self-distributivity. □
Proposition 13
(Continuity of the α-geodesical skew divergence with respect to α and λ). The α-geodesical skew divergence is continuous with respect to both α and λ.
Proof. 
This follows from the continuity of the KL-divergence and of the Kolmogorov mean. □
Figure 2 shows the continuity of the α-geodesical skew divergence with respect to α and λ. Both the source and target distributions are binomial. From this figure, it can be seen that the divergence changes smoothly as the parameters change.
Lemma 1.
Suppose $\alpha \to \infty$. Then,
$$\lim_{\alpha\to\infty} D_{GS}^{(\alpha,\lambda)}[p \,\|\, q] = \int_{\mathcal{X}} p \ln \frac{p}{\min\{p,q\}}\, d\mu$$
holds for all $\lambda \in [0,1]$.
Proof. 
Let $u = \frac{1-\alpha}{2}$. Then $\lim_{\alpha\to\infty} u = -\infty$. Assuming $p_0 \leq p_1$, it holds that
$$\lim_{\alpha\to\infty} m_f^{(\lambda,\alpha)}(p_0,p_1) = \lim_{u\to-\infty}\big( (1-\lambda)p_0^u + \lambda p_1^u \big)^{\frac{1}{u}} = p_0 \lim_{u\to-\infty}\Big( (1-\lambda) + \lambda\Big(\frac{p_1}{p_0}\Big)^u \Big)^{\frac{1}{u}} = p_0 = \min\{p_0, p_1\}.$$
Then, the equality
$$\lim_{\alpha\to\infty} D_{GS}^{(\alpha,\lambda)}[p \,\|\, q] = \int_{\mathcal{X}} p \ln \frac{p}{\lim_{\alpha\to\infty} m_f^{(\lambda,\alpha)}(p,q)}\, d\mu = \int_{\mathcal{X}} p \ln \frac{p}{\min\{p,q\}}\, d\mu$$
holds. □
Lemma 2.
Suppose $\alpha \to -\infty$. Then,
$$\lim_{\alpha\to-\infty} D_{GS}^{(\alpha,\lambda)}[p \,\|\, q] = \int_{\mathcal{X}} p \ln \frac{p}{\max\{p,q\}}\, d\mu$$
holds for all $\lambda \in [0,1]$.
Proof. 
Let $u = \frac{1-\alpha}{2}$. Then $\lim_{\alpha\to-\infty} u = \infty$. Assuming $p_0 \leq p_1$, it holds that
$$\lim_{\alpha\to-\infty} m_f^{(\lambda,\alpha)}(p_0,p_1) = \lim_{u\to\infty}\big( (1-\lambda)p_0^u + \lambda p_1^u \big)^{\frac{1}{u}} = p_1 \lim_{u\to\infty}\Big( (1-\lambda)\Big(\frac{p_0}{p_1}\Big)^u + \lambda \Big)^{\frac{1}{u}} = p_1 = \max\{p_0, p_1\}.$$
Then, the equality
$$\lim_{\alpha\to-\infty} D_{GS}^{(\alpha,\lambda)}[p \,\|\, q] = \int_{\mathcal{X}} p \ln \frac{p}{\lim_{\alpha\to-\infty} m_f^{(\lambda,\alpha)}(p,q)}\, d\mu = \int_{\mathcal{X}} p \ln \frac{p}{\max\{p,q\}}\, d\mu$$
holds. □
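Both limits can be observed numerically. In the sketch below (illustrative, with arbitrary discrete distributions and λ = 0.5), a large positive α approaches the min-based integral of Lemma 1 and a large negative α approaches the max-based integral of Lemma 2.

```python
import numpy as np

def f_alpha(x, alpha):
    return np.log(x) if alpha == 1 else x ** ((1.0 - alpha) / 2.0)

def f_alpha_inv(y, alpha):
    return np.exp(y) if alpha == 1 else y ** (2.0 / (1.0 - alpha))

def d_gs(p, q, alpha, lam):
    m = f_alpha_inv((1 - lam) * f_alpha(p, alpha) + lam * f_alpha(q, alpha), alpha)
    return float(np.sum(p * np.log(p / m)))

p = np.array([0.2, 0.5, 0.3])
q = np.array([0.4, 0.4, 0.2])
lam = 0.5

upper = float(np.sum(p * np.log(p / np.minimum(p, q))))   # Lemma 1 / Proposition 15
lower = float(np.sum(p * np.log(p / np.maximum(p, q))))   # Lemma 2 / Proposition 14

print(d_gs(p, q, alpha=400, lam=lam), upper)    # large positive alpha: close to the upper bound
print(d_gs(p, q, alpha=-400, lam=lam), lower)   # large negative alpha: close to the lower bound
```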
Proposition 14
(Lower bound of the α-geodesical skew divergence). The α-geodesical skew divergence satisfies the following inequality for all $\alpha \in \mathbb{R}$ and $\lambda \in [0,1]$:
$$D_{GS}^{(\alpha,\lambda)}[p \,\|\, q] \geq \int_{\mathcal{X}} p \ln \frac{p}{\max\{p,q\}}\, d\mu.$$
Proof. 
It follows from the definition of the inverse monotonicity of f-interpolation (29) and Lemma 2. □
Proposition 15
(Upper bound of the α-geodesical skew divergence). The α-geodesical skew divergence satisfies the following inequality for all $\alpha \in \mathbb{R}$ and $\lambda \in [0,1]$:
$$D_{GS}^{(\alpha,\lambda)}[p \,\|\, q] \leq \int_{\mathcal{X}} p \ln \frac{p}{\min\{p,q\}}\, d\mu.$$
Proof. 
It follows from the definition of the f-interpolation (29) and Lemma 1. □
Theorem 1
(Strong convexity of the α-geodesical skew divergence). The α-geodesical skew divergence $D_{GS}^{(\alpha,\lambda)}[p\|q]$ is strongly convex in p with respect to the total variation norm.
Proof. 
Let $r := m_f^{(\lambda,\alpha)}(p,q)$ and $f_j := p_j/r$ ($j = 0, 1$), so that $f_t = p_t/r$ for $p_t := (1-t)p_0 + t p_1$, $t \in (0,1)$. From Taylor's theorem, for $g(x) := x \ln x$ and $j = 0, 1$, it holds that
$$g(f_j) = g(f_t) + g'(f_t)(f_j - f_t) + (f_j - f_t)^2 \int_0^1 g''\big( (1-s)f_t + s f_j \big)(1-s)\, ds.$$
Let
$$\delta := (1-t)g(f_0) + t g(f_1) - g(f_t) = (1-t)\,t\, (f_1 - f_0)^2 \int_0^1 \Big( \frac{t}{(1-s)f_t + s f_0} + \frac{1-t}{(1-s)f_t + s f_1} \Big)(1-s)\, ds = (1-t)\,t\, (f_1 - f_0)^2 \int_0^1 \Big( \frac{t}{f_{u_0(t,s)}} + \frac{1-t}{f_{u_1(t,s)}} \Big)(1-s)\, ds,$$
where
$$u_j(t,s) := (1-s)t + js, \qquad f_{u_j(t,s)} := (1-s)f_t + s f_j.$$
Then,
$$\Delta := (1-t)H(p_0) + t H(p_1) - H(p_t) = \int \delta\, dr = (1-t)\,t \int_0^1 (1-s)\big( t\, I(u_0(t,s)) + (1-t)\, I(u_1(t,s)) \big)\, ds,$$
where
$$\|p_1 - p_0\| := \int |dp_1 - dp_0|, \qquad H(p) := D_{GS}^{(\alpha,\lambda)}[p\|q] = \int p \ln\frac{p}{r}\, d\mu, \qquad I(u) := \int \frac{(f_1 - f_0)^2}{f_u}\, dr.$$
Now, it suffices to prove that $\Delta \geq \frac{t(1-t)}{2}\|p_1 - p_0\|^2$. For all $u \in (0,1)$, $p_1$ is absolutely continuous with respect to $p_u$. Let $g_u := \frac{dp_1}{dp_u} = \frac{f_1}{f_u}$. One obtains
$$I(u) = \frac{1}{(1-u)^2}\int \frac{(f_1 - f_u)^2}{f_u}\, dr = \frac{1}{(1-u)^2}\int (g_u - 1)^2\, dp_u \geq \frac{1}{(1-u)^2}\Big( \int |g_u - 1|\, dp_u \Big)^2 = \frac{1}{(1-u)^2}\|p_1 - p_u\|^2 = \|p_1 - p_0\|^2,$$
and hence
$$\Delta \geq \frac{t(1-t)}{2}\|p_1 - p_0\|^2. \qquad \square$$

4. Natural α-Geodesical Skew Divergence for Exponential Family

In this section, the exponential family is considered in which the probability density function is given by
$$p(x; \theta) = \exp\big\{ \theta \cdot x + k(x) - \psi(\theta) \big\},$$
where x is a random variable. In the above equation, $\theta = (\theta_1, \ldots, \theta_n)$ is an n-dimensional vector parameter that specifies the distribution, k(x) is a function of x, and ψ corresponds to the normalization factor.
In the skew divergence, the probability distribution of the target is a weighted average of the two distributions. This implicitly assumes that an interpolation of the two probability distributions is properly given by linear interpolation. Here, for the exponential family, interpolation between the natural parameters rather than between the probability distributions themselves is considered. Namely, the geodesic connecting two distributions p(x; θ_p) and q(x; θ_q) in the θ-coordinate system is considered:
$$\theta(\lambda) = (1-\lambda)\theta_p + \lambda\theta_q,$$
where λ [ 0 , 1 ] is the parameter. The probability distributions on the geodesic θ ( λ ) are
$$p(x; \lambda) = p(x; \theta(\lambda)) = \exp\big\{ \lambda(\theta_q - \theta_p)\cdot x + \theta_p \cdot x - \psi(\lambda) \big\}.$$
Hence, a geodesic itself is a one-dimensional exponential family, where λ is the natural parameter. A geodesic consists of a linear interpolation of the two distributions in the logarithmic scale because
$$\ln p(x; \lambda) = (1-\lambda)\ln p(x; \theta_p) + \lambda \ln p(x; \theta_q) - \psi(\lambda).$$
This corresponds to the case α = 1 of the f-interpolation with normalization factor $c(\lambda) = \exp\{-\psi(\lambda)\}$:
$$p(x; \theta(\lambda)) = c(\lambda)\, m_f^{(\lambda,1)}\big( p(x;\theta_p),\, p(x;\theta_q) \big).$$
This induces the natural geodesic skew divergence with α = 1 as
$$D_{GS}^{(1,\lambda)}[p \,\|\, q] = \int_{\mathcal{X}} p \ln\frac{p}{m_f^{(\lambda,1)}(p,q)}\, d\mu = \int_{\mathcal{X}} \big( p\ln p - p \ln m_f^{(\lambda,1)}(p,q) \big)\, d\mu = \int_{\mathcal{X}} \big( p\ln p - p\ln\exp\{(1-\lambda)\ln p + \lambda\ln q\} \big)\, d\mu = \int_{\mathcal{X}} \big( p\ln p - (1-\lambda)p\ln p - \lambda p \ln q \big)\, d\mu = \int_{\mathcal{X}} \big( \lambda p \ln p - \lambda p \ln q \big)\, d\mu = \lambda\int_{\mathcal{X}} p \ln\frac{p}{q}\, d\mu = \lambda D_{KL}[p \,\|\, q],$$
and this is equal to the scaled KL divergence.
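The following Bernoulli sketch (an illustration under the stated assumptions; the natural parameters θ_p, θ_q and the value of λ are arbitrary) checks both facts numerically: the density on the natural-parameter geodesic coincides with the normalized α = 1 f-interpolation of the endpoint densities, and the unnormalized interpolation yields $D_{GS}^{(1,\lambda)}[p\|q] = \lambda D_{KL}[p\|q]$.

```python
import numpy as np

def bernoulli_natural(theta):
    # Bernoulli density in natural-parameter form: p(x; theta) = exp(theta * x - psi(theta)), x in {0, 1}.
    p1 = 1.0 / (1.0 + np.exp(-theta))
    return np.array([1.0 - p1, p1])

def kl(a, b):
    return float(np.sum(a * np.log(a / b)))

theta_p, theta_q, lam = -0.8, 1.5, 0.4
p = bernoulli_natural(theta_p)
q = bernoulli_natural(theta_q)

# Density on the natural-parameter geodesic theta(lam) = (1 - lam) * theta_p + lam * theta_q ...
on_geodesic = bernoulli_natural((1 - lam) * theta_p + lam * theta_q)

# ... equals the normalized alpha = 1 f-interpolation (geometric mean) of the endpoint densities.
m1 = np.exp((1 - lam) * np.log(p) + lam * np.log(q))   # unnormalized m_f^{(lam, 1)}(p, q)
print(on_geodesic, m1 / m1.sum())

# The unnormalized interpolation gives D_GS^{(1, lam)}[p || q] = lam * KL[p || q].
print(float(np.sum(p * np.log(p / m1))), lam * kl(p, q))
```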
More generally, let $\theta_P^{(\alpha)}$ and $\theta_Q^{(\alpha)}$ be the parameter representations in the α-coordinate system of probability distributions P and Q. Then, the geodesic between them is represented as in Figure 3, and it induces the α-geodesical skew divergence.

5. Function Space Associated with the α-Geodesical Skew Divergence

To discuss the functional nature of the α-geodesical skew divergence in more depth, the function space it constitutes is considered. For the α-geodesical skew divergence $f_q^{(\alpha,\lambda)}(p) = D_{GS}^{(\alpha,\lambda)}[p\|q]$ with one side of the distribution fixed, let the entire set be
$$\mathcal{F}_q = \big\{ f_q^{(\alpha,\lambda)} \;\big|\; \alpha \in \mathbb{R},\ \lambda \in [0,1] \big\}.$$
For $f_q^{(\alpha,\lambda)} \in \mathcal{F}_q$, its semi-norm is defined by
$$\big\| f_q^{(\alpha,\lambda)} \big\|_p := \Big( \int_{\mathcal{X}} \big| f_q^{(\alpha,\lambda)} \big|^p\, d\mu \Big)^{\frac{1}{p}}.$$
By defining addition and scalar multiplication for $f_q^{(\alpha,\lambda)}, g_q^{(\alpha,\lambda)} \in \mathcal{F}_q$ and $c \in \mathbb{R}$ as follows, $\mathcal{F}_q$ becomes a semi-normed vector space:
$$\big( f_q^{(\alpha,\lambda)} + g_q^{(\alpha,\lambda)} \big)(u) := f_q^{(\alpha,\lambda)}(u) + g_q^{(\alpha,\lambda)}(u),$$
$$\big( c f_q^{(\alpha,\lambda)} \big)(u) := c \cdot f_q^{(\alpha,\lambda)}(u) = c \cdot D_{GS}^{(\alpha,\lambda)}[u \,\|\, q].$$
Theorem 2.
Let N be the kernel of $\|\cdot\|_p$:
$$N \equiv \ker(\|\cdot\|_p) = \big\{ f_q^{(\alpha,\lambda)} \;\big|\; \|f_q^{(\alpha,\lambda)}\|_p = 0 \big\}.$$
Then the quotient space $V \equiv (\mathcal{F}_q, \|\cdot\|_p)/N$ is a Banach space.
Proof. 
It is sufficient to prove that $f_q^{(\alpha,\lambda)}$ is p-th power integrable and that V is complete. From Proposition 15, the α-geodesical skew divergence is bounded from above for all $\alpha \in \mathbb{R}$ and $\lambda \in [0,1]$. Since $f_q^{(\alpha,\lambda)}$ is continuous, it is p-th power integrable.
Let $\{f_n\}$ be a Cauchy sequence in V:
$$\lim_{n,m\to\infty} \|f_n - f_m\|_p = 0.$$
Indices $n(k)$, $k = 1, 2, \ldots$, can be taken to be monotonically increasing such that
$$\|f_n - f_{n(k)}\|_p < 2^{-k}$$
for all $n > n(k)$; in particular,
$$\|f_{n(k+1)} - f_{n(k)}\|_p < 2^{-k}.$$
Let $g_n = |f_{n(1)}| + \sum_{j=1}^{n-1} |f_{n(j+1)} - f_{n(j)}| \in V$. It is non-negative and monotonically increasing at each point, and from the subadditivity of the norm, $\|g_n\|_p \leq \|f_{n(1)}\|_p + \sum_{j=1}^{n-1} 2^{-j}$. From the monotone convergence theorem, we have
$$\Big\| \lim_{n\to\infty} g_n \Big\|_p = \lim_{n\to\infty} \|g_n\|_p \leq \|f_{n(1)}\|_p + 1 < \infty.$$
That is, $\lim_{n\to\infty} g_n$ exists almost everywhere and $\lim_{n\to\infty} g_n \in V$. Since $\lim_{n\to\infty} g_n < \infty$ almost everywhere, the telescoping sum
$$f_{n(1)} + \sum_{j=1}^{k-1} \big( f_{n(j+1)} - f_{n(j)} \big) = f_{n(k)}$$
converges absolutely almost everywhere as $k \to \infty$, and its limit $f := \lim_{k\to\infty} f_{n(k)}$ satisfies $|f| \leq \lim_{n\to\infty} g_n$ almost everywhere; that is, $f \in V$. Then
$$|f - f_{n(k)}| \leq \lim_{n\to\infty} g_n,$$
and from the dominated convergence theorem, we obtain
$$\lim_{k\to\infty} \|f - f_{n(k)}\|_p = 0,$$
so the Cauchy sequence $\{f_n\}$ converges to f with respect to $\|\cdot\|_p$. This confirms the completeness of V. □
Corollary 1.
Let
$$\mathcal{F}_+ = \big\{ f_q^{(\alpha,\lambda)} \;\big|\; \alpha \in \mathbb{R},\ \lambda \in (0,1],\ q \in \mathcal{P} \big\}.$$
Then the space $V_+ := (\mathcal{F}_+, \|\cdot\|_p)$ is a Banach space.
Proof. 
If we restrict $\lambda \in (0,1]$, then $D_{GS}^{(\alpha,\lambda)}[u\|q] = 0$ if and only if $u = q$. Then $V_+$ has a unique zero element, and therefore $V_+$ is a complete normed space, i.e., a Banach space. □
Consider the case in which the second argument Q of $D_{GS}^{(\alpha,\lambda)}[P\|Q]$ is fixed; it is referred to as the reference distribution. Figure 4 shows values of the α-geodesical skew divergence for a fixed reference Q, where both P and Q are restricted to be Gaussian. In this figure, the reference distribution is N(0, 0.5) and the parameters of the input distributions are varied over μ ∈ [0, 4.5] and σ² ∈ [0.5, 2.3]. From this figure, one can see that a larger value of α emphasizes the discrepancy between the distributions P and Q. Figure 5 illustrates a coordinate system associated with the α-geodesical skew divergence for different α. As seen from the figure, for the same pair of distributions P and Q, the value of the divergence with α = 3 is larger than that with α = 1.
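A rough numerical reproduction of the Figure 4 setting is sketched below (illustrative grid-based code, not the script used to produce the figure; λ = 0.5 is an assumption, since the value of λ used in the figure is not stated here). Larger α indeed yields larger values for the same pair of distributions.

```python
import numpy as np

def f_alpha(x, alpha):
    return np.log(x) if alpha == 1 else x ** ((1.0 - alpha) / 2.0)

def f_alpha_inv(y, alpha):
    return np.exp(y) if alpha == 1 else y ** (2.0 / (1.0 - alpha))

def gaussian(x, mu, var):
    return np.exp(-(x - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

def d_gs(p, q, alpha, lam, dx):
    # Grid approximation of D_GS^{(alpha, lambda)}[p || q] for densities sampled on a grid.
    m = f_alpha_inv((1 - lam) * f_alpha(p, alpha) + lam * f_alpha(q, alpha), alpha)
    return float(np.sum(p * np.log(p / m)) * dx)

x = np.linspace(-10.0, 15.0, 12501)
dx = x[1] - x[0]
q = gaussian(x, 0.0, 0.5)                   # reference distribution Q = N(0, 0.5)

for alpha in (-1.0, 1.0, 3.0):
    vals = [d_gs(gaussian(x, 0.5 * j, 0.5 + 0.2 * j), q, alpha, lam=0.5, dx=dx)
            for j in range(10)]             # P_j with means 0.0, 0.5, ..., 4.5 and variances 0.5, 0.7, ..., 2.3
    print(alpha, np.round(vals, 3))
```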

6. Conclusions and Discussion

In this paper, a new family of divergences is proposed to address the computational difficulty of the KL-divergence. The proposed α-geodesical skew divergence is a natural derivation from the concept of α-geodesics in information geometry and generalizes many existing divergences.
Furthermore, the α-geodesical skew divergence leads to several applications. For example, the new divergence can be applied to annealed importance sampling by the same analogy as in previous studies using q-paths [41]. It could also be applied to linguistics, the field in which the skew divergence was originally used [19].

Author Contributions

Formal analysis, M.K. and H.H.; Investigation, M.K.; Methodology, M.K. and H.H.; Software, M.K.; Supervision, H.H.; Validation, H.H.; Visualization, M.K.; Writing—original draft, M.K.; Writing—review & editing, H.H. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by JSPS KAKENHI grant number JP17H01793, JST CREST grant number JPMJCR2015 and NEDO JPNP18002.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The authors express special thanks to the editor and reviewers, whose comments led to valuable improvements to the manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Deza, M.M.; Deza, E. Encyclopedia of distances. In Encyclopedia of Distances; Springer: Berlin/Heidelberg, Germany, 2009; pp. 1–583.
  2. Basseville, M. Divergence measures for statistical data processing—An annotated bibliography. Signal Process. 2013, 93, 621–633.
  3. Kullback, S.; Leibler, R.A. On information and sufficiency. Ann. Math. Stat. 1951, 22, 79–86.
  4. Sakamoto, Y.; Ishiguro, M.; Kitagawa, G. Akaike Information Criterion Statistics; D. Reidel: Dordrecht, The Netherlands, 1986; Volume 81, p. 26853.
  5. Goldberger, J.; Gordon, S.; Greenspan, H. An Efficient Image Similarity Measure Based on Approximations of KL-Divergence Between Two Gaussian Mixtures. ICCV 2003, 3, 487–493.
  6. Yu, D.; Yao, K.; Su, H.; Li, G.; Seide, F. KL-divergence regularized deep neural network adaptation for improved large vocabulary speech recognition. In Proceedings of the 2013 IEEE International Conference on Acoustics, Speech and Signal Processing, Vancouver, BC, Canada, 16–31 May 2013.
  7. Solanki, K.; Sullivan, K.; Madhow, U.; Manjunath, B.; Chandrasekaran, S. Provably secure steganography: Achieving zero KL divergence using statistical restoration. In Proceedings of the 2006 International Conference on Image Processing, Atlanta, GA, USA, 8–11 October 2006.
  8. Lin, J. Divergence measures based on the Shannon entropy. IEEE Trans. Inf. Theory 1991, 37, 145–151.
  9. Menéndez, M.; Pardo, J.; Pardo, L.; Pardo, M. The Jensen-Shannon divergence. J. Frankl. Inst. 1997, 334, 307–318.
  10. Nielsen, F. On the Jensen–Shannon symmetrization of distances relying on abstract means. Entropy 2019, 21, 485.
  11. Jeffreys, H. An Invariant Form for the Prior Probability in Estimation Problems. Available online: https://royalsocietypublishing.org/doi/10.1098/rspa.1946.0056 (accessed on 24 April 2021).
  12. Chatzisavvas, K.C.; Moustakidis, C.C.; Panos, C. Information entropy, information distances, and complexity in atoms. J. Chem. Phys. 2005, 123, 174111.
  13. Bigi, B. Using Kullback-Leibler distance for text categorization. In European Conference on Information Retrieval; Springer: Berlin/Heidelberg, Germany, 2003; pp. 305–319.
  14. Wang, F.; Vemuri, B.C.; Rangarajan, A. Groupwise point pattern registration using a novel CDF-based Jensen-Shannon divergence. In Proceedings of the 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'06), New York, NY, USA, 17–22 June 2006.
  15. Nishii, R.; Eguchi, S. Image classification based on Markov random field models with Jeffreys divergence. J. Multivar. Anal. 2006, 97, 1997–2008.
  16. Bayarri, M.; García-Donato, G. Generalization of Jeffreys divergence-based priors for Bayesian hypothesis testing. J. R. Stat. Soc. Ser. B (Stat. Methodol.) 2008, 70, 981–1003.
  17. Nielsen, F. Jeffreys centroids: A closed-form expression for positive histograms and a guaranteed tight approximation for frequency histograms. IEEE Signal Process. Lett. 2013, 20, 657–660.
  18. Nielsen, F. On a generalization of the Jensen–Shannon divergence and the Jensen–Shannon centroid. Entropy 2020, 22, 221.
  19. Lee, L. Measures of distributional similarity. In Proceedings of the 37th Annual Meeting of the Association for Computational Linguistics on Computational Linguistics, College Park, MD, USA, 20–26 June 1999.
  20. Lee, L. On the Effectiveness of the Skew Divergence for Statistical Language Analysis. In Proceedings of the Eighth International Workshop on Artificial Intelligence and Statistics, Key West, FL, USA, 4–7 January 2001; pp. 176–783.
  21. Xiao, F.; Wu, Y.; Zhao, H.; Wang, R.; Jiang, S. Dual skew divergence loss for neural machine translation. arXiv 2019, arXiv:1908.08399.
  22. Carvalho, B.M.; Garduño, E.; Santos, I.O. Skew divergence-based fuzzy segmentation of rock samples. J. Phys. Conf. Ser. 2014, 490, 012010.
  23. Revathi, P.; Hemalatha, M. Cotton leaf spot diseases detection utilizing feature selection with skew divergence method. Int. J. Sci. Eng. Technol. 2014, 3, 22–30.
  24. Ahmed, N.; Neville, J.; Kompella, R.R. Network Sampling via Edge-Based Node Selection with Graph Induction. Available online: https://docs.lib.purdue.edu/cgi/viewcontent.cgi?article=2743&context=cstech (accessed on 24 April 2021).
  25. Hughes, T.; Ramage, D. Lexical semantic relatedness with random graph walks. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL), Prague, Czech Republic, 28–30 June 2007.
  26. Audenaert, K.M. Quantum skew divergence. J. Math. Phys. 2014, 55, 112202.
  27. Hardy, G.H.; Littlewood, J.E.; Pólya, G. Inequalities; Cambridge University Press: Cambridge, UK, 1952.
  28. Amari, S.I. Information Geometry and Its Applications; Springer: Berlin/Heidelberg, Germany, 2016.
  29. Kolmogorov, A.N.; Castelnuovo, G. Sur la Notion de la Moyenne; Atti Accad. Naz. Lincei, 1930.
  30. Nagumo, M. Über eine Klasse der Mittelwerte. Jpn. J. Math. 1930, 7, 71–79.
  31. Nielsen, F. Generalized Bhattacharyya and Chernoff upper bounds on Bayes error using quasi-arithmetic means. Pattern Recognit. Lett. 2014, 42, 25–34.
  32. Amari, S.I. Differential-Geometrical Methods in Statistics; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2012; Volume 28.
  33. Amari, S. Differential-geometrical methods in statistics. Lect. Notes Stat. 1985, 28, 1.
  34. Amari, S. α-Divergence Is Unique, Belonging to Both f-Divergence and Bregman Divergence Classes. IEEE Trans. Inf. Theory 2009, 55, 4925–4931.
  35. Ay, N.; Jost, J.; Lê, H.V.; Schwachhöfer, L. Information Geometry; Springer International Publishing: Berlin/Heidelberg, Germany, 2017.
  36. Morozova, E.A.; Chentsov, N.N. Markov invariant geometry on manifolds of states. J. Sov. Math. 1991, 56, 2648–2669.
  37. Eguchi, S.; Komori, O. Path Connectedness on a Space of Probability Density Functions. In Lecture Notes in Computer Science; Springer International Publishing: Berlin/Heidelberg, Germany, 2015; pp. 615–624.
  38. Nielsen, F. On a Variational Definition for the Jensen-Shannon Symmetrization of Distances Based on the Information Radius. Entropy 2021, 23, 464.
  39. Nielsen, F. A family of statistical symmetric divergences based on Jensen's inequality. arXiv 2010, arXiv:1009.4004.
  40. Cover, T.M. Elements of Information Theory; John Wiley & Sons: Hoboken, NJ, USA, 1999.
  41. Brekelmans, R.; Masrani, V.; Bui, T.D.; Wood, F.D.; Galstyan, A.; Steeg, G.V.; Nielsen, F. Annealed Importance Sampling with q-Paths. arXiv 2020, arXiv:2012.07823.
Figure 1. Monotonicity of the α-geodesical skew divergence with respect to α. The α-geodesical skew divergence between the binomial distributions p = B(10, 0.3) and q = B(10, 0.7) has been calculated.
Figure 2. Continuity of the α-geodesical skew divergence with respect to α and λ. The α-geodesical skew divergence between the binomial distributions p = B(10, 0.3) and q = B(10, 0.7) has been calculated.
Figure 3. The geodesic between two probability distributions on the α-coordinate system.
Figure 4. α-geodesical skew divergence between two normal distributions. The reference distribution is Q = N(0, 0.5). For P_j (j = 1, 2, …, 10), let the mean and variance be μ_j and σ_j², respectively, where μ_{j+1} − μ_j = 0.5 and σ²_{j+1} − σ²_j = 0.2.
Figure 5. Coordinate system of F_q or F_+. Such a coordinate system is not Euclidean.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
