Article

On the Jensen–Shannon Symmetrization of Distances Relying on Abstract Means

Sony Computer Science Laboratories, Takanawa Muse Bldg., 3-14-13, Higashigotanda, Shinagawa-ku, Tokyo 141-0022, Japan
Entropy 2019, 21(5), 485; https://doi.org/10.3390/e21050485
Submission received: 10 April 2019 / Revised: 8 May 2019 / Accepted: 9 May 2019 / Published: 11 May 2019
(This article belongs to the Section Information Theory, Probability and Statistics)

Abstract

The Jensen–Shannon divergence is a renowned bounded symmetrization of the unbounded Kullback–Leibler divergence which measures the total Kullback–Leibler divergence to the average mixture distribution. However, the Jensen–Shannon divergence between Gaussian distributions is not available in closed form. To bypass this problem, we present a generalization of the Jensen–Shannon (JS) divergence using abstract means which yields closed-form expressions when the mean is chosen according to the parametric family of distributions. More generally, we define the JS-symmetrizations of any distance using parameter mixtures derived from abstract means. In particular, we first show that the geometric mean is well-suited for exponential families, and report two closed-form formulas for (i) the geometric Jensen–Shannon divergence between probability densities of the same exponential family; and (ii) the geometric JS-symmetrization of the reverse Kullback–Leibler divergence between probability densities of the same exponential family. As a second illustrative example, we show that the harmonic mean is well-suited for the scale Cauchy distributions, and report a closed-form formula for the harmonic Jensen–Shannon divergence between scale Cauchy distributions. Applications to clustering with respect to these novel Jensen–Shannon divergences are touched upon.

1. Introduction and Motivations

1.1. Kullback–Leibler Divergence and Its Symmetrizations

Let ( X , A ) be a measurable space [1] where X denotes the sample space and A the σ -algebra of measurable events. Consider a positive measure μ (usually the Lebesgue measure μ L with Borel σ -algebra B ( R d ) or the counting measure μ c with power set σ -algebra 2 X ). Denote by P the set of probability distributions.
The Kullback–Leibler Divergence [2] (KLD) KL : \mathcal{P} \times \mathcal{P} \to [0, \infty] is the most fundamental distance [2] between probability distributions, defined by:
KL(P:Q) := \int p \log \frac{p}{q} \, \mathrm{d}\mu,
where p and q denote the Radon–Nikodym derivatives of probability measures P and Q with respect to μ (with P , Q μ ). The KLD expression between P and Q in Equation (1) is independent of the dominating measure μ . Table A1 summarizes the various distances and their notations used in this paper.
The KLD is also called the relative entropy [2] because it can be written as the difference of the cross-entropy minus the entropy:
KL(p:q) = h^\times(p:q) - h(p),
where h × denotes the cross-entropy [2]:
h^\times(p:q) := \int p \log \frac{1}{q} \, \mathrm{d}\mu,
and
h(p) := \int p \log \frac{1}{p} \, \mathrm{d}\mu = h^\times(p:p),
denotes the Shannon entropy [2]. Although the formula of the Shannon entropy in Equation (4) unifies both the discrete case and the continuous case of probability distributions, the behavior of entropy in the discrete case and the continuous case is very different: When μ = μ_c, Equation (4) yields the discrete Shannon entropy which is always positive and upper bounded by \log |\mathcal{X}|. When μ = μ_L, Equation (4) defines the Shannon differential entropy which may be negative and unbounded [2] (e.g., the differential entropy of the Gaussian distribution N(m, σ) is \frac{1}{2}\log(2\pi e \sigma^2)). See also [3] for further important differences between the discrete case and the continuous case.
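As a small illustration of these definitions in the discrete case, the following Python sketch (our own helper name kl_divergence, assuming probability vectors with compatible supports) evaluates the KLD and exhibits its asymmetry:

import numpy as np

def kl_divergence(p, q):
    # Discrete KLD: sum_i p_i log(p_i / q_i), with the convention 0 log 0 = 0.
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

p, q = [0.5, 0.3, 0.2], [0.4, 0.4, 0.2]
print(kl_divergence(p, q), kl_divergence(q, p))  # KL(p:q) != KL(q:p) in general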
In general, the KLD is an asymmetric distance (i.e., KL(p:q) \neq KL(q:p), hence the argument separator notation using the delimiter ':'). In information theory [2], it is customary to use the double bar notation '‖' instead of the comma ',' notation to avoid confusion with joint random variables. The reverse KL divergence or dual KL divergence is:
KL^*(P:Q) := KL(Q:P) = \int q \log \frac{q}{p} \, \mathrm{d}\mu.
In general, the reverse distance or dual distance for a distance D is written as:
D * ( p : q ) : = D ( q : p ) .
One way to symmetrize the KLD is to consider the Jeffreys Divergence [4] (JD, Sir Harold Jeffreys (1891–1989) was a British statistician.):
J(p;q) := KL(p:q) + KL(q:p) = \int (p - q) \log \frac{p}{q} \, \mathrm{d}\mu = J(q;p).
However, this symmetric distance is not upper bounded, and its sensitivity can raise numerical issues in applications. Here, we used the optional argument separator notation ';' to emphasize that the distance is symmetric but not necessarily a metric distance. This notation matches the notational convention of the mutual information of two joint random variables in information theory [2].
The symmetrization of the KLD may also be obtained using the harmonic mean instead of the arithmetic mean, yielding the resistor average distance [5] R ( p ; q ) :
\frac{1}{R(p;q)} = \frac{1}{2}\left( \frac{1}{KL(p:q)} + \frac{1}{KL(q:p)} \right),
R(p;q) = \frac{2\, KL(p:q)\, KL(q:p)}{KL(p:q) + KL(q:p)} = \frac{2\, KL(p:q)\, KL(q:p)}{J(p;q)}.
Another famous symmetrization of the KLD is the Jensen–Shannon Divergence [6] (JSD) defined by:
JS(p;q) := \frac{1}{2}\left( KL\!\left(p : \frac{p+q}{2}\right) + KL\!\left(q : \frac{p+q}{2}\right) \right),
= \frac{1}{2} \int \left( p \log \frac{2p}{p+q} + q \log \frac{2q}{p+q} \right) \mathrm{d}\mu.
This distance can be interpreted as the total divergence to the average distribution (see Equation (10)). The JSD can be rewritten as a Jensen divergence (or Burbea–Rao divergence [7]) for the negentropy generator h (called Shannon information):
JS(p;q) = h\!\left(\frac{p+q}{2}\right) - \frac{h(p) + h(q)}{2}.
An important property of the Jensen–Shannon divergence compared to the Jeffreys divergence is that this distance is always bounded:
0 \leq JS(p;q) \leq \log 2.
This follows from the fact that
KL\!\left(p : \frac{p+q}{2}\right) = \int p \log \frac{2p}{p+q} \, \mathrm{d}\mu \leq \int p \log \frac{2p}{p} \, \mathrm{d}\mu = \log 2.
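The following sketch (a minimal check, not part of the original paper; the function name jensen_shannon is ours) computes the discrete JSD and illustrates that the log 2 upper bound is attained for distributions with disjoint supports:

import numpy as np

def jensen_shannon(p, q):
    # JS(p;q) = (KL(p:m) + KL(q:m)) / 2 with the mid-distribution m = (p+q)/2.
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    m = 0.5 * (p + q)
    def kl(a, b):
        mask = a > 0  # convention 0 log 0 = 0
        return float(np.sum(a[mask] * np.log(a[mask] / b[mask])))
    return 0.5 * (kl(p, m) + kl(q, m))

print(jensen_shannon([1.0, 0.0], [0.0, 1.0]), np.log(2.0))  # bound log 2 is attained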
Finally, the square root of the JSD (i.e., \sqrt{JS}) yields a metric distance satisfying the triangular inequality [8,9]. The JSD has found applications in many fields such as bioinformatics [10] and social sciences [11], just to name a few. Recently, the JSD has gained attention in the deep learning community with the Generative Adversarial Networks (GANs) [12]. In computer vision and pattern recognition, one often relies on information-theoretic techniques to perform registration and recognition tasks. For example, in [13], the authors use a mixture of Principal Axes Registrations (mPAR) whose parameters are estimated by minimizing the KLD between the considered two point distributions. In [14], the authors parameterize both shapes and deformations using Gaussian Mixture Models (GMMs) to perform non-rigid shape registration. The lack of a closed-form formula for the KLD between GMMs [15] spurred the use of other statistical distances which admit a closed-form expression for GMMs. For example, in [16], shape registration is performed by using the Jensen-Rényi divergence between GMMs. See also [17] for other information-theoretic divergences that admit closed-form formulas for some statistical mixtures extending GMMs.
In information geometry [18], the KLD, JD and JSD are invariant divergences which satisfy the property of information monotonicity [18]. The class of (separable) distances satisfying the information monotonicity is exhaustively characterized as the class of Csiszár's f-divergences [19]. An f-divergence is defined for a convex generator function f, strictly convex at 1 (with f(1) = f'(1) = 0), by:
I_f(p:q) = \int p \, f\!\left(\frac{q}{p}\right) \mathrm{d}\mu.
The Jeffreys and Jensen–Shannon f-generators are:
f_J(u) := (u - 1) \log u,
f_{JS}(u) := -(u + 1) \log \frac{1+u}{2} + u \log u.

1.2. Statistical Distances and Parameter Divergences

In information and probability theory, the term “divergence” informally means a statistical distance [2]. However in information geometry [18], a divergence has a stricter meaning of being a smooth parametric distance (called a contrast function in [20]) from which a dual geometric structure can be derived [21,22].
Consider parametric distributions p_\theta belonging to a parametric family of distributions \{ p_\theta : \theta \in \Theta \} (e.g., Gaussian family or Cauchy family), where Θ denotes the parameter space. Then a statistical distance D between distributions p_\theta and p_{\theta'} amounts to an equivalent parameter distance:
P(\theta : \theta') := D(p_\theta : p_{\theta'}).
For example, the KLD between two distributions belonging to the same exponential family (e.g., Gaussian family) amounts to a reverse Bregman divergence for the cumulant generator F of the exponential family [23]:
KL(p_\theta : p_{\theta'}) = B_F^*(\theta : \theta') = B_F(\theta' : \theta).
A Bregman divergence B F is defined for a strictly convex and differentiable generator F as:
B_F(\theta : \theta') := F(\theta) - F(\theta') - \langle \theta - \theta', \nabla F(\theta') \rangle,
where · , · is an inner product (usually the Euclidean dot product for vector parameters).
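As an illustration of the Bregman divergence definition, here is a small sketch (our own code; generator and gradient passed explicitly, with an example generator chosen by us):

import numpy as np

def bregman_divergence(F, grad_F, theta1, theta2):
    # B_F(theta1 : theta2) = F(theta1) - F(theta2) - <theta1 - theta2, grad F(theta2)>.
    theta1, theta2 = np.asarray(theta1, dtype=float), np.asarray(theta2, dtype=float)
    return float(F(theta1) - F(theta2) - np.dot(theta1 - theta2, grad_F(theta2)))

# F(x) = (1/2)||x||^2 yields half the squared Euclidean distance.
F = lambda x: 0.5 * np.dot(x, x)
grad_F = lambda x: x
print(bregman_divergence(F, grad_F, [1.0, 2.0], [0.0, 0.0]))  # 2.5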
Similar to the interpretation of the Jensen–Shannon divergence (statistical divergence) as a Jensen divergence for the negentropy generator, the Jensen–Bregman divergence [7] JB_F (parametric divergence JBD) amounts to a Jensen divergence J_F for a strictly convex generator F : \Theta \to \mathbb{R}:
JB_F(\theta ; \theta') := \frac{1}{2}\left( B_F\!\left(\theta : \frac{\theta+\theta'}{2}\right) + B_F\!\left(\theta' : \frac{\theta+\theta'}{2}\right) \right)
= \frac{F(\theta) + F(\theta')}{2} - F\!\left(\frac{\theta+\theta'}{2}\right) =: J_F(\theta ; \theta').
Let us introduce the notation (\theta_p \theta_q)_\alpha := (1-\alpha)\theta_p + \alpha\theta_q to denote the linear interpolation (LERP) of the parameters. Then we have more generally that the skew Jensen–Bregman divergence JB_F^\alpha(\theta:\theta') amounts to a skew Jensen divergence J_F^\alpha(\theta:\theta'):
JB_F^\alpha(\theta:\theta') := (1-\alpha)\, B_F\big(\theta : (\theta\theta')_\alpha\big) + \alpha\, B_F\big(\theta' : (\theta\theta')_\alpha\big)
= (F(\theta) F(\theta'))_\alpha - F\big((\theta\theta')_\alpha\big) =: J_F^\alpha(\theta:\theta').

1.3. J-Symmetrization and J S -Symmetrization of Distances

For any arbitrary distance D(p:q), we can define its skew J-symmetrization for \alpha \in [0,1] by:
J_D^\alpha(p:q) := (1-\alpha)\, D(p:q) + \alpha\, D(q:p),
and its JS-symmetrization by:
JS_D^\alpha(p:q) := (1-\alpha)\, D\big(p : (1-\alpha)p + \alpha q\big) + \alpha\, D\big(q : (1-\alpha)p + \alpha q\big)
= (1-\alpha)\, D\big(p : (pq)_\alpha\big) + \alpha\, D\big(q : (pq)_\alpha\big).
Usually, \alpha = \frac{1}{2}, and for notational brevity, we drop the superscript: JS_D(p:q) := JS_D^{1/2}(p:q). The Jeffreys divergence is twice the J-symmetrization of the KLD, and the Jensen–Shannon divergence is the JS-symmetrization of the KLD.
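The JS-symmetrization of an arbitrary distance translates directly into code. The sketch below (our own naming; D is any distance between discrete distributions) recovers the ordinary JSD when D = KL and α = 1/2:

import numpy as np

def js_symmetrization(D, p, q, alpha=0.5):
    # (1-alpha) D(p : (pq)_alpha) + alpha D(q : (pq)_alpha), with (pq)_alpha = (1-alpha) p + alpha q.
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    mix = (1.0 - alpha) * p + alpha * q
    return (1.0 - alpha) * D(p, mix) + alpha * D(q, mix)

def kl(a, b):
    mask = a > 0
    return float(np.sum(a[mask] * np.log(a[mask] / b[mask])))

print(js_symmetrization(kl, [0.5, 0.3, 0.2], [0.4, 0.4, 0.2]))  # ordinary JSD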
The J-symmetrization of an f-divergence I_f is obtained by taking the generator
f_\alpha^J(u) = (1-\alpha)\, f(u) + \alpha\, f^*(u),
where f^*(u) = u f\!\left(\frac{1}{u}\right) is the conjugate generator:
I_{f^*}(p:q) = (I_f)^*(p:q) = I_f(q:p).
The JS-symmetrization of an f-divergence
I_f^\alpha(p:q) := (1-\alpha)\, I_f(p : (pq)_\alpha) + \alpha\, I_f(q : (pq)_\alpha),
with (pq)_\alpha = (1-\alpha)p + \alpha q, is obtained by taking the generator
f_\alpha^{JS}(u) := (1-\alpha)\, f(\alpha u + 1 - \alpha) + \alpha\, u\, f\!\left(\alpha + \frac{1-\alpha}{u}\right).
We check that we have:
I_f^\alpha(p:q) = (1-\alpha)\, I_f(p:(pq)_\alpha) + \alpha\, I_f(q:(pq)_\alpha) = I_f^{1-\alpha}(q:p) = I_{f_\alpha^{JS}}(p:q).
A family of symmetric distances unifying the Jeffreys divergence with the Jensen–Shannon divergence was proposed in [24]. Finally, let us mention that once we have symmetrized a distance D, we may also metrize this symmetric distance by choosing (when it exists) the largest exponent δ > 0 such that D δ becomes a metric distance [8,25,26,27,28].

1.4. Contributions and Paper Outline

The paper is organized as follows:
Section 2 reports the special case of mixture families in information geometry [18] for which the Jensen–Shannon divergence can be expressed as a Bregman divergence (Theorem 1), and highlights the lack of closed-form formula when considering exponential families. This fact precisely motivated this work.
Section 3 introduces the generalized Jensen–Shannon divergences using statistical mixtures derived from abstract weighted means (Definitions 2 and 5), presents the JS-symmetrization of statistical distances, and reports a sufficient condition to get bounded JS-symmetrizations (Property 1).
In Section 4.1, we consider the calculation of the geometric JSD between members of the same exponential family (Theorem 2) and instantiate the formula for the multivariate Gaussian distributions (Corollary 1). We discuss applications to k-means clustering in Section 4.1.2. In Section 4.2, we illustrate the method with another example that calculates in closed form the harmonic JSD between scale Cauchy distributions (Theorem 4).
Finally, we wrap up and conclude this work in Section 5.

2. Jensen–Shannon Divergence in Mixture and Exponential Families

We are interested in calculating the JSD between densities belonging to parametric families of distributions.
A trivial example is when p = (p_0, \ldots, p_D) and q = (q_0, \ldots, q_D) are categorical distributions: The average distribution \frac{p+q}{2} is again a categorical distribution, and the JSD is expressed plainly as:
JS(p, q) = \frac{1}{2} \sum_{i=0}^D \left( p_i \log \frac{2 p_i}{p_i + q_i} + q_i \log \frac{2 q_i}{p_i + q_i} \right).
Another example is when p = m θ p and q = m θ q both belong to the same mixture family [18] M :
\mathcal{M} := \left\{ m_\theta(x) = \left(1 - \sum_{i=1}^D \theta_i\right) p_0(x) + \sum_{i=1}^D \theta_i p_i(x) \;:\; \theta_i > 0, \; \sum_i \theta_i < 1 \right\},
for linearly independent component distributions p 0 , p 1 , , p D . We have [29]:
KL ( m θ p : m θ q ) = B F ( θ p : θ q ) ,
where B F is a Bregman divergence defined in Equation (20) obtained for the convex negentropy generator [29] F ( θ ) = h ( m θ ) . The proof that F ( θ ) is a strictly convex function is not trivial [30].
The mixture families include the family of categorical distributions over a finite alphabet X = { E 0 , , E D } (the D-dimensional probability simplex) since those categorical distributions form a mixture family with p i ( x ) : = Pr ( X = E i ) = δ E i ( x ) . Beware that mixture families impose to prescribe the component distributions. Therefore, a density of a mixture family is a special case of statistical mixtures (e.g., GMMs) with prescribed component distributions.
The mathematical identity of Equation (35), however, does not yield a practical formula since F(θ) is usually not available in closed form. Worse, the Bregman generator can be non-analytic [31]. Nevertheless, this identity is useful for computing the right-sided Bregman centroid (left KL centroid of mixtures) since this centroid is equivalent to the center of mass, and is independent of the Bregman generator [29].
Since the mixture of mixtures is also a mixture, specifically
\frac{m_{\theta_p} + m_{\theta_q}}{2} = m_{\frac{\theta_p + \theta_q}{2}} \in \mathcal{M},
it follows that we get a closed-form expression for the JSD between mixtures belonging to M .
Theorem 1 (JSD between mixtures).
The Jensen–Shannon divergence between two distributions p = m θ p and q = m θ q belonging to the same mixture family M is expressed as a Jensen–Bregman divergence for the negentropy generator F:
JS(m_{\theta_p}, m_{\theta_q}) = \frac{1}{2}\left( B_F\!\left(\theta_p : \frac{\theta_p+\theta_q}{2}\right) + B_F\!\left(\theta_q : \frac{\theta_p+\theta_q}{2}\right) \right).
This amounts to calculating the Jensen divergence:
JS(m_{\theta_p}, m_{\theta_q}) = J_F(\theta_p ; \theta_q) = (F(\theta_p) F(\theta_q))_{\frac{1}{2}} - F\big((\theta_p \theta_q)_{\frac{1}{2}}\big),
where (v_1 v_2)_\alpha := (1-\alpha) v_1 + \alpha v_2.
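For the categorical case, Theorem 1 can be checked numerically. The sketch below (our own helper names, assuming strictly positive probability vectors) compares the discrete JSD with the Jensen divergence of the negentropy generator F(θ) = −h(m_θ):

import numpy as np

def neg_entropy(theta):
    # F(theta) = -h(m_theta) for categorical parameters theta = (p_1, ..., p_D), with p_0 = 1 - sum(theta).
    p = np.append(1.0 - np.sum(theta), theta)
    return float(np.sum(p * np.log(p)))

def jensen_divergence(F, t1, t2):
    # J_F(t1; t2) = (F(t1) + F(t2)) / 2 - F((t1 + t2) / 2).
    t1, t2 = np.asarray(t1, dtype=float), np.asarray(t2, dtype=float)
    return 0.5 * (F(t1) + F(t2)) - F(0.5 * (t1 + t2))

def js_categorical(p, q):
    m = 0.5 * (p + q)
    return 0.5 * (np.sum(p * np.log(p / m)) + np.sum(q * np.log(q / m)))

p, q = np.array([0.5, 0.3, 0.2]), np.array([0.2, 0.5, 0.3])
print(js_categorical(p, q), jensen_divergence(neg_entropy, p[1:], q[1:]))  # the two values coincide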
Now, consider distributions p = e θ p and q = e θ q belonging to the same exponential family [18] E :
\mathcal{E} := \left\{ e_\theta(x) = \exp\big(\theta^\top x - F(\theta)\big) \;:\; \theta \in \Theta \right\},
where
\Theta := \left\{ \theta \in \mathbb{R}^D \;:\; \int \exp(\theta^\top x) \, \mathrm{d}\mu < \infty \right\},
denotes the natural parameter space. We have [18]:
KL ( e θ p : e θ q ) = B F ( θ q : θ p ) ,
where F denotes the log-normalizer or cumulant function of the exponential family [18].
However, \frac{e_{\theta_p} + e_{\theta_q}}{2} does not belong to \mathcal{E} in general, except for the case of the categorical/multinomial family which is both an exponential family and a mixture family [18].
For example, the mixture of two Gaussian distributions with distinct components is not a Gaussian distribution. Thus, it is not obvious to get a closed-form expression for the JSD in that case. This limitation precisely motivated the introduction of generalized JSDs defined in the next section.
Notice that in [32,33], it is shown how to express or approximate the f-divergences using expansions of power χ pseudo-distances. These power chi distances can all be expressed in closed form when dealing with isotropic Gaussians. This result holds for the JSD since the JSD is a f-divergence [33].

3. Generalized Jensen–Shannon Divergences

We first define abstract means M, and then generic statistical M-mixtures from which generalized Jensen–Shannon divergences are built thereof.

Definitions

Consider an abstract mean [34] M. That is, a continuous bivariate function M(\cdot,\cdot) : I \times I \to I on an interval I \subset \mathbb{R} that satisfies the following in-betweenness property:
\inf\{x, y\} \leq M(x, y) \leq \sup\{x, y\}, \quad \forall x, y \in I.
Using the unique dyadic expansion of real numbers, we can always build a corresponding weighted mean M_\alpha(p,q) (with \alpha \in [0,1]) following the construction reported in [34] (page 3) such that M_0(p,q) = p and M_1(p,q) = q. In the remainder, we consider I = (0, \infty).
Examples of common weighted means are:
  • the arithmetic mean A_\alpha(x,y) = (1-\alpha) x + \alpha y,
  • the geometric mean G_\alpha(x,y) = x^{1-\alpha} y^{\alpha}, and
  • the harmonic mean H_\alpha(x,y) = \frac{x y}{(1-\alpha) y + \alpha x}.
These means can be unified using the concept of quasi-arithmetic means [34] (also called Kolmogorov–Nagumo means):
M_\alpha^h(x,y) := h^{-1}\big( (1-\alpha)\, h(x) + \alpha\, h(y) \big),
where h is a strictly monotonous function. For example, the geometric mean G α ( x , y ) is obtained as M α h ( x , y ) for the generator h ( u ) = log ( u ) . Rényi used the concept of quasi-arithmetic means instead of the arithmetic mean to define axiomatically the Rényi entropy [35] of order α in information theory [2].
For any abstract weighted mean, we can build a statistical mixture called a M-mixture as follows:
Definition 1 (M-mixture).
The M_\alpha-interpolation (pq)_\alpha^M (with \alpha \in [0,1]) of densities p and q with respect to a mean M is an α-weighted M-mixture defined by:
(pq)_\alpha^M(x) := \frac{M_\alpha(p(x), q(x))}{Z_\alpha^M(p:q)},
where
Z_\alpha^M(p:q) = \int_{t \in \mathcal{X}} M_\alpha(p(t), q(t)) \, \mathrm{d}\mu(t) =: \langle M_\alpha(p,q) \rangle
is the normalizer function (or scaling factor) ensuring that (pq)_\alpha^M \in \mathcal{P}. (The bracket notation \langle f \rangle denotes the integral of f over \mathcal{X}.)
The A-mixture (pq)_\alpha^A(x) = (1-\alpha) p(x) + \alpha q(x) ('A' standing for the arithmetic mean) represents the usual statistical mixture [36] (with Z_\alpha^A(p:q) = 1). The G-mixture (pq)_\alpha^G(x) = \frac{p(x)^{1-\alpha} q(x)^{\alpha}}{Z_\alpha^G(p:q)} of two distributions p(x) and q(x) ('G' standing for the geometric mean G) is an exponential family of order 1 [37]:
(pq)_\alpha^G(x) = \exp\big( (1-\alpha) \log p(x) + \alpha \log q(x) - \log Z_\alpha^G(p:q) \big).
The two-component M-mixture can be generalized to a k-component M-mixture with α Δ k 1 , the ( k 1 ) -dimensional standard simplex:
( p 1 p k ) α M : = p 1 ( x ) α 1 × × p k ( x ) α k Z α ( p 1 , , p k ) ,
where Z α ( p 1 , , p k ) : = X p 1 ( x ) α 1 × × p k ( x ) α k d μ ( x ) .
For a given pair of distributions p and q, the set { M α ( p ( x ) , q ( x ) ) : α [ 0 , 1 ] } describes a path in the space of probability density functions. This density interpolation scheme was investigated for quasi-arithmetic weighted means in [38,39,40]. In [41], the authors study the Fisher information matrix for the α -mixture models (using α -power means).
We call ( p q ) α M the α -weighted M-mixture, thus extending the notion of α -mixtures [42] obtained for power means P α . Notice that abstract means have also been used to generalize Bregman divergences using the concept of ( M , N ) -convexity [43].
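To make the M-mixture concrete, the sketch below (our own code; the two Gaussian densities are arbitrary illustrative choices) numerically evaluates the normalizer Z_α^M of Definition 1 for the arithmetic and geometric weighted means:

import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

# Weighted means on (0, infinity).
A = lambda x, y, a: (1 - a) * x + a * y              # arithmetic
G = lambda x, y, a: x ** (1 - a) * y ** a            # geometric
H = lambda x, y, a: (x * y) / ((1 - a) * y + a * x)  # harmonic

def m_mixture_normalizer(p, q, mean, alpha):
    # Z_alpha^M(p:q) = integral of M_alpha(p(x), q(x)) dx (numerical quadrature).
    return quad(lambda x: mean(p(x), q(x), alpha), -np.inf, np.inf)[0]

p, q = norm(0.0, 1.0).pdf, norm(2.0, 1.5).pdf
print(m_mixture_normalizer(p, q, A, 0.5))  # = 1 for the arithmetic mixture
print(m_mixture_normalizer(p, q, G, 0.5))  # <= 1 for the geometric mixture (by AM-GM)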
Let us state a first generalization of the Jensen–Shannon divergence:
Definition 2 (M-Jensen–Shannon divergence).
For a mean M, the skew M-Jensen–Shannon divergence (for α [ 0 , 1 ] ) is defined by
JS^{M_\alpha}(p:q) := (1-\alpha)\, KL\big(p : (pq)_\alpha^M\big) + \alpha\, KL\big(q : (pq)_\alpha^M\big).
When M α = A α , we recover the ordinary Jensen–Shannon divergence since A α ( p : q ) = ( p q ) α (and Z α A ( p : q ) = 1 ).
We can extend the definition to the JS -symmetrization of any distance:
Definition 3 (M-JS symmetrization).
For a mean M and a distance D, the skew M- JS symmetrization of D (for α [ 0 , 1 ] ) is defined by
JS_D^{M_\alpha}(p:q) := (1-\alpha)\, D\big(p : (pq)_\alpha^M\big) + \alpha\, D\big(q : (pq)_\alpha^M\big).
By notation, we have JS M α ( p : q ) = JS KL M α ( p : q ) . That is, the arithmetic JS-symmetrization of the KLD is the JSD.
Let us define the α-skew K-divergence [6,44] K_\alpha(p:q) as
K_\alpha(p:q) := KL\big(p : (1-\alpha)p + \alpha q\big) = KL\big(p : (pq)_\alpha\big),
where (pq)_\alpha(x) := (1-\alpha) p(x) + \alpha q(x). Then the Jensen–Shannon divergence and the Jeffreys divergence can be rewritten [24] as
JS(p;q) = \frac{1}{2}\left( K_{\frac{1}{2}}(p:q) + K_{\frac{1}{2}}(q:p) \right),
J(p;q) = K_1(p:q) + K_1(q:p),
since KL(p:q) = K_1(p:q). Then JS_\alpha(p:q) = (1-\alpha) K_\alpha(p:q) + \alpha K_{1-\alpha}(q:p). Similarly, we can define the generalized skew K-divergence:
K_D^{M_\alpha}(p:q) := D\big(p : (pq)_\alpha^M\big).
The success of the JSD compared to the JD in applications is partially due to the fact that the JSD is upper bounded by \log 2. Thus, one question to ask is whether these generalized JSDs are upper bounded or not.
To report a sufficient condition, let us first introduce the dominance relationship between means: We say that a mean M dominates a mean N when M(x,y) \geq N(x,y) for all x, y \geq 0, see [34]. In that case we write concisely M \geq N. For example, the Arithmetic-Geometric-Harmonic (AGH) inequality states that A \geq G \geq H.
Consider the term
KL\big(p : (pq)_\alpha^M\big) = \int p(x) \log \frac{p(x)\, Z_\alpha^M(p,q)}{M_\alpha(p(x), q(x))} \, \mathrm{d}\mu(x)
= \log Z_\alpha^M(p,q) + \int p(x) \log \frac{p(x)}{M_\alpha(p(x), q(x))} \, \mathrm{d}\mu(x).
When mean M_\alpha dominates the arithmetic mean A_\alpha, we have
\int p(x) \log \frac{p(x)}{M_\alpha(p(x), q(x))} \, \mathrm{d}\mu(x) \leq \int p(x) \log \frac{p(x)}{A_\alpha(p(x), q(x))} \, \mathrm{d}\mu(x),
and
\int p(x) \log \frac{p(x)}{A_\alpha(p(x), q(x))} \, \mathrm{d}\mu(x) \leq \int p(x) \log \frac{p(x)}{(1-\alpha) p(x)} \, \mathrm{d}\mu(x) = \log \frac{1}{1-\alpha}.
Notice that Z_\alpha^A(p:q) = 1 (when M = A is the arithmetic mean), and we recover the fact that the α-skew Jensen–Shannon divergence is upper bounded by \log \frac{1}{1-\alpha} (e.g., \log 2 when \alpha = \frac{1}{2}).
We summarize the result in the following property:
Property 1 (Upper bound on M-JSD).
The M-JSD is upper bounded by \log \frac{Z_\alpha^M(p,q)}{1-\alpha} when M \geq A.
Let us observe that dominance of means can be used to define distances: For example, the celebrated α-divergences
I_\alpha(p:q) = \int \big( \alpha\, p(x) + (1-\alpha)\, q(x) - p(x)^{\alpha}\, q(x)^{1-\alpha} \big) \, \mathrm{d}\mu(x), \quad \alpha \notin \{0, 1\},
can be interpreted as a difference of two means, the arithmetic mean and the geometric mean:
I_\alpha(p:q) = \int \big( A_\alpha(q(x) : p(x)) - G_\alpha(q(x) : p(x)) \big) \, \mathrm{d}\mu(x).
We can also define the generalized Jeffreys divergence as follows:
Definition 4 (N-Jeffreys divergence).
For a mean N, the skew N-Jeffreys divergence (for β [ 0 , 1 ] ) is defined by
J^{N_\beta}(p:q) := N_\beta\big( KL(p:q), \, KL(q:p) \big).
This definition includes the (scaled) resistor average distance [5] R ( p ; q ) , obtained for the harmonic mean N = H for the KLD with skew parameter β = 1 2 :
\frac{1}{R(p;q)} = \frac{1}{2}\left( \frac{1}{KL(p:q)} + \frac{1}{KL(q:p)} \right),
R(p;q) = \frac{2\, KL(p:q)\, KL(q:p)}{J(p;q)}.
In [5], the factor 1 2 is omitted to keep the spirit of the original Jeffreys divergence.
We can further extend this definition for any arbitrary divergence D as follows:
Definition 5 (Skew ( M , N ) -D divergence).
The skew (M,N)-divergence is defined with respect to the weighted means M_\alpha and N_\beta as follows:
JS_D^{M_\alpha, N_\beta}(p:q) := N_\beta\Big( D\big(p : (pq)_\alpha^M\big), \, D\big(q : (pq)_\alpha^M\big) \Big).
We now show how to choose the abstract mean according to the parametric family of distributions to obtain some closed-form formula for some statistical distances.

4. Some Closed-Form Formula for the M-Jensen–Shannon Divergences

Our motivation to introduce these novel families of M-Jensen–Shannon divergences is to obtain closed-form formula when probability densities belong to some given parametric families P Θ . We shall illustrate the principle of the method to choose the right abstract mean for the considered parametric family, and report corresponding formula for the following two case studies:
  • The geometric G-Jensen–Shannon divergence for the exponential families (Section 4.1), and
  • the harmonic H-Jensen–Shannon divergence for the family of Cauchy scale distributions (Section 4.2).
Recall that the arithmetic A-Jensen–Shannon divergence is well-suited for mixture families (Theorem 1).

4.1. The Geometric G-Jensen–Shannon Divergence

Consider an exponential family [37] E F with log-normalizer F:
\mathcal{E}_F = \left\{ p_\theta(x)\, \mathrm{d}\mu = \exp\big(\theta^\top x - F(\theta)\big)\, \mathrm{d}\mu \;:\; \theta \in \Theta \right\},
and natural parameter space
\Theta = \left\{ \theta \;:\; \int_{\mathcal{X}} \exp(\theta^\top x) \, \mathrm{d}\mu < \infty \right\}.
The log-normalizer (a log-Laplace function also called log-partition or cumulant function) is a real analytic convex function.
We seek a mean M such that the weighted M-mixture density (p_{\theta_1} p_{\theta_2})_\alpha^M of two densities p_{\theta_1} and p_{\theta_2} of the same exponential family yields another density of that exponential family (e.g., p_{(\theta_1\theta_2)_\alpha}). When considering exponential families, we choose the weighted geometric mean G_\alpha for the abstract mean M_\alpha(x,y): M_\alpha(x,y) = G_\alpha(x,y) = x^{1-\alpha} y^{\alpha}, for x, y > 0. Indeed, it is well-known that the normalized weighted product of distributions belonging to the same exponential family also belongs to this exponential family [45]:
\forall x \in \mathcal{X}, \quad (p_{\theta_1} p_{\theta_2})_\alpha^G(x) := \frac{G_\alpha(p_{\theta_1}(x), p_{\theta_2}(x))}{\int G_\alpha(p_{\theta_1}(t), p_{\theta_2}(t)) \, \mathrm{d}\mu(t)} = \frac{p_{\theta_1}^{1-\alpha}(x)\, p_{\theta_2}^{\alpha}(x)}{Z_\alpha^G(p_{\theta_1} : p_{\theta_2})}
= p_{(\theta_1\theta_2)_\alpha}(x),
where the normalization factor is
Z_\alpha^G(p_{\theta_1} : p_{\theta_2}) = \exp\big( -J_F^\alpha(\theta_1 : \theta_2) \big),
for the skew Jensen divergence J_F^\alpha defined by:
J_F^\alpha(\theta_1 : \theta_2) := (F(\theta_1) F(\theta_2))_\alpha - F\big((\theta_1\theta_2)_\alpha\big).
Notice that since the natural parameter space Θ is convex, the distribution p ( θ 1 θ 2 ) α E F (since ( θ 1 θ 2 ) α Θ ).
Thus, it follows that we have:
KL\big(p_\theta : (p_{\theta_1} p_{\theta_2})_\alpha^G\big) = KL\big(p_\theta : p_{(\theta_1\theta_2)_\alpha}\big)
= B_F\big( (\theta_1\theta_2)_\alpha : \theta \big).
This allows us to conclude that the G-Jensen–Shannon divergence admits the following closed-form expression between densities belonging to the same exponential family:
JS^{G_\alpha}(p_{\theta_1} : p_{\theta_2}) := (1-\alpha)\, KL\big(p_{\theta_1} : (p_{\theta_1} p_{\theta_2})_\alpha^G\big) + \alpha\, KL\big(p_{\theta_2} : (p_{\theta_1} p_{\theta_2})_\alpha^G\big)
= (1-\alpha)\, B_F\big((\theta_1\theta_2)_\alpha : \theta_1\big) + \alpha\, B_F\big((\theta_1\theta_2)_\alpha : \theta_2\big).
Please note that since (\theta_1\theta_2)_\alpha - \theta_1 = \alpha(\theta_2 - \theta_1) and (\theta_1\theta_2)_\alpha - \theta_2 = (1-\alpha)(\theta_1 - \theta_2), it follows that (1-\alpha)\, B_F\big(\theta_1 : (\theta_1\theta_2)_\alpha\big) + \alpha\, B_F\big(\theta_2 : (\theta_1\theta_2)_\alpha\big) = J_F^\alpha(\theta_1 : \theta_2).
The dual divergence [46] D^* (with respect to the reference argument) or reverse divergence of a divergence D is defined by swapping the calling arguments: D^*(\theta : \theta') := D(\theta' : \theta).
Thus, if we define the Jensen–Shannon divergence for the dual KL divergence KL^*(p:q) := KL(q:p) as
JS_{KL^*}(p:q) := \frac{1}{2}\left( KL^*\!\left(p : \frac{p+q}{2}\right) + KL^*\!\left(q : \frac{p+q}{2}\right) \right)
= \frac{1}{2}\left( KL\!\left(\frac{p+q}{2} : p\right) + KL\!\left(\frac{p+q}{2} : q\right) \right),
then we obtain:
JS_{KL^*}^{G_\alpha}(p_{\theta_1} : p_{\theta_2}) := (1-\alpha)\, KL\big( (p_{\theta_1} p_{\theta_2})_\alpha^G : p_{\theta_1} \big) + \alpha\, KL\big( (p_{\theta_1} p_{\theta_2})_\alpha^G : p_{\theta_2} \big)
= (1-\alpha)\, B_F\big(\theta_1 : (\theta_1\theta_2)_\alpha\big) + \alpha\, B_F\big(\theta_2 : (\theta_1\theta_2)_\alpha\big) = JB_F^\alpha(\theta_1 : \theta_2)
= (1-\alpha) F(\theta_1) + \alpha F(\theta_2) - F\big((\theta_1\theta_2)_\alpha\big)
= J_F^\alpha(\theta_1 : \theta_2).
Please note that JS_{D^*} \neq (JS_D)^* in general.
In general, the JS-symmetrization for the reverse KL divergence is
JS_{KL^*}(p;q) = \frac{1}{2}\left( KL\!\left(\frac{p+q}{2} : p\right) + KL\!\left(\frac{p+q}{2} : q\right) \right)
= \int m \log \frac{m}{\sqrt{p q}} \, \mathrm{d}\mu = \int A(p,q) \log \frac{A(p,q)}{G(p,q)} \, \mathrm{d}\mu,
where m = \frac{p+q}{2} = A(p,q) and G(p,q) = \sqrt{p q}. Since A \geq G (arithmetic-geometric mean inequality), it follows that JS_{KL^*}(p;q) \geq 0.
Theorem 2 (G-JSD and its dual JS-symmetrization in exponential families).
The α-skew G-Jensen–Shannon divergence JS^{G_\alpha} between two distributions p_{\theta_1} and p_{\theta_2} of the same exponential family \mathcal{E}_F is expressed in closed form for \alpha \in (0,1) as:
JS^{G_\alpha}(p_{\theta_1} : p_{\theta_2}) = (1-\alpha)\, B_F\big( (\theta_1\theta_2)_\alpha : \theta_1 \big) + \alpha\, B_F\big( (\theta_1\theta_2)_\alpha : \theta_2 \big),
JS_{KL^*}^{G_\alpha}(p_{\theta_1} : p_{\theta_2}) = JB_F^\alpha(\theta_1 : \theta_2) = J_F^\alpha(\theta_1 : \theta_2).

4.1.1. Case Study: The Multivariate Gaussian Family

Consider the exponential family [18,37] of multivariate Gaussian distributions [47,48,49]
\{ \mathcal{N}(\mu, \Sigma) \;:\; \mu \in \mathbb{R}^d, \; \Sigma \succ 0 \}.
The multivariate Gaussian family is also called the multivariate normal family in the literature, or MVN family for short.
Let λ : = ( λ v , λ M ) = ( μ , Σ ) denote the composite (vector,matrix) parameter of an MVN. The d-dimensional MVN density is given by
p_\lambda(x; \lambda) := \frac{1}{(2\pi)^{d/2} \sqrt{|\lambda_M|}} \exp\left( -\frac{1}{2} (x - \lambda_v)^\top \lambda_M^{-1} (x - \lambda_v) \right),
where | · | denotes the matrix determinant. The natural parameters θ are also expressed using both a vector parameter θ v and a matrix parameter θ M in a compound object θ = ( θ v , θ M ) . By defining the following compound inner product on a composite (vector,matrix) object
\langle \theta, \theta' \rangle := \theta_v^\top \theta_v' + \mathrm{tr}\big( \theta_M^\top \theta_M' \big),
where tr ( · ) denotes the matrix trace, we rewrite the MVN density of Equation (83) in the canonical form of an exponential family [37]:
p_\theta(x; \theta) := \exp\big( \langle t(x), \theta \rangle - F_\theta(\theta) \big) = p_\lambda(x; \lambda(\theta)),
where
\theta = (\theta_v, \theta_M) = \left( \Sigma^{-1} \mu, \; \frac{1}{2} \Sigma^{-1} \right) = \theta(\lambda) = \left( \lambda_M^{-1} \lambda_v, \; \frac{1}{2} \lambda_M^{-1} \right)
is the compound natural parameter and
t(x) = (x, -x x^\top)
is the compound sufficient statistic. The function F θ is the strictly convex and continuously differentiable log-normalizer defined by:
F_\theta(\theta) = \frac{1}{2}\left( d \log \pi - \log |\theta_M| + \frac{1}{2} \theta_v^\top \theta_M^{-1} \theta_v \right).
The log-normalizer can be expressed using the ordinary parameters, λ = ( μ , Σ ) , as:
F_\lambda(\lambda) = \frac{1}{2}\left( \lambda_v^\top \lambda_M^{-1} \lambda_v + \log |\lambda_M| + d \log 2\pi \right)
= \frac{1}{2}\left( \mu^\top \Sigma^{-1} \mu + \log |\Sigma| + d \log 2\pi \right).
The moment/expectation parameters [18,49] are
\eta = (\eta_v, \eta_M) = E[t(x)] = \nabla F(\theta).
We report the conversion formula between the three types of coordinate systems (namely the ordinary parameter λ , the natural parameter θ and the moment parameter η ) as follows:
\theta_v(\lambda) = \lambda_M^{-1} \lambda_v = \Sigma^{-1} \mu, \quad \theta_M(\lambda) = \frac{1}{2} \lambda_M^{-1} = \frac{1}{2} \Sigma^{-1}, \quad \lambda_v(\theta) = \frac{1}{2} \theta_M^{-1} \theta_v = \mu, \quad \lambda_M(\theta) = \frac{1}{2} \theta_M^{-1} = \Sigma,
\eta_v(\theta) = \frac{1}{2} \theta_M^{-1} \theta_v, \quad \eta_M(\theta) = -\frac{1}{2} \theta_M^{-1} - \frac{1}{4} (\theta_M^{-1} \theta_v)(\theta_M^{-1} \theta_v)^\top, \quad \theta_v(\eta) = -(\eta_M + \eta_v \eta_v^\top)^{-1} \eta_v, \quad \theta_M(\eta) = -\frac{1}{2} (\eta_M + \eta_v \eta_v^\top)^{-1},
\lambda_v(\eta) = \eta_v = \mu, \quad \lambda_M(\eta) = -\eta_M - \eta_v \eta_v^\top = \Sigma, \quad \eta_v(\lambda) = \lambda_v = \mu, \quad \eta_M(\lambda) = -\lambda_M - \lambda_v \lambda_v^\top = -\Sigma - \mu \mu^\top.
The dual Legendre convex conjugate [18,49] is
F_\eta^*(\eta) = -\frac{1}{2}\left( \log\big( 1 + \eta_v^\top \eta_M^{-1} \eta_v \big) + \log |-\eta_M| + d (1 + \log 2\pi) \right),
and \theta = \nabla_\eta F_\eta^*(\eta).
We check the Fenchel–Young equality when \eta = \nabla F(\theta) and \theta = \nabla F^*(\eta):
F_\theta(\theta) + F_\eta^*(\eta) - \langle \theta, \eta \rangle = 0.
The Kullback–Leibler divergence between two d-dimensional Gaussian distributions p_{(\mu_1,\Sigma_1)} and p_{(\mu_2,\Sigma_2)} (with \Delta\mu = \mu_2 - \mu_1) is
KL\big(p_{(\mu_1,\Sigma_1)} : p_{(\mu_2,\Sigma_2)}\big) = \frac{1}{2}\left( \mathrm{tr}\big(\Sigma_2^{-1} \Sigma_1\big) + \Delta\mu^\top \Sigma_2^{-1} \Delta\mu + \log \frac{|\Sigma_2|}{|\Sigma_1|} - d \right) = KL(p_{\lambda_1} : p_{\lambda_2}).
We check that KL ( p ( μ , Σ ) : p ( μ , Σ ) ) = 0 since Δ μ = 0 and tr ( Σ 1 Σ ) = tr ( I ) = d . Notice that when Σ 1 = Σ 2 = Σ , we have
KL\big(p_{(\mu_1,\Sigma)} : p_{(\mu_2,\Sigma)}\big) = \frac{1}{2} \Delta\mu^\top \Sigma^{-1} \Delta\mu = \frac{1}{2} D_{\Sigma^{-1}}^2(\mu_1, \mu_2),
that is, half the squared Mahalanobis distance for the precision matrix \Sigma^{-1} (a positive-definite matrix: \Sigma^{-1} \succ 0), where the Mahalanobis distance is defined for any positive-definite matrix Q \succ 0 as follows:
D_Q(p_1 : p_2) = \sqrt{ (p_1 - p_2)^\top Q (p_1 - p_2) }.
The Kullback–Leibler divergence between two probability densities of the same exponential family amounts to a Bregman divergence [18]:
KL\big(p_{(\mu_1,\Sigma_1)} : p_{(\mu_2,\Sigma_2)}\big) = KL(p_{\lambda_1} : p_{\lambda_2}) = B_F(\theta_2 : \theta_1) = B_{F^*}(\eta_1 : \eta_2),
where the Bregman divergence is defined by
B_F(\theta : \theta') := F(\theta) - F(\theta') - \langle \theta - \theta', \nabla F(\theta') \rangle,
with \eta = \nabla F(\theta). Define the canonical divergence [18]
A_F(\theta_1 : \eta_2) = F(\theta_1) + F^*(\eta_2) - \langle \theta_1, \eta_2 \rangle = A_{F^*}(\eta_2 : \theta_1),
since F^{**} = F. We have B_F(\theta_1 : \theta_2) = A_F(\theta_1 : \eta_2).
Now, observe that p_\theta(0; \theta) = \exp(-F(\theta)) when \langle t(0), \theta \rangle = 0. In particular, this holds for the multivariate normal family. Thus, we have the following proposition.
Proposition 1. 
For the MVN family, we have
p_\theta\big(x; (\theta_1\theta_2)_\alpha\big) = \frac{ p_\theta(x; \theta_1)^{1-\alpha} \, p_\theta(x; \theta_2)^{\alpha} }{ Z_\alpha^G(p_{\theta_1} : p_{\theta_2}) },
with the scaling normalization factor:
Z_\alpha^G(p_{\theta_1} : p_{\theta_2}) = \exp\big( -J_F^\alpha(\theta_1 : \theta_2) \big) = \frac{ p_\theta(0; \theta_1)^{1-\alpha} \, p_\theta(0; \theta_2)^{\alpha} }{ p_\theta\big(0; (\theta_1\theta_2)_\alpha\big) }.
More generally, we have for a k-dimensional weight vector α belonging to the ( k 1 ) -dimensional standard simplex:
Z_\alpha^G(p_{\theta_1}, \ldots, p_{\theta_k}) = \frac{ \prod_{i=1}^k p_\theta(0; \theta_i)^{\alpha_i} }{ p_\theta(0; \bar\theta) },
where \bar\theta = \sum_{i=1}^k \alpha_i \theta_i.
Finally, we state the formulas for the G-JS divergence between MVNs for the KL and reverse KL, respectively:
Corollary 1 (G-JSD between Gaussians).
The skew G-Jensen–Shannon divergence JS^{G_\alpha} and the dual skew G-Jensen–Shannon divergence JS_*^{G_\alpha} between two multivariate Gaussians N(\mu_1, \Sigma_1) and N(\mu_2, \Sigma_2) are
JS^{G_\alpha}\big(p_{(\mu_1,\Sigma_1)} : p_{(\mu_2,\Sigma_2)}\big) = (1-\alpha)\, KL\big(p_{(\mu_1,\Sigma_1)} : p_{(\mu_\alpha,\Sigma_\alpha)}\big) + \alpha\, KL\big(p_{(\mu_2,\Sigma_2)} : p_{(\mu_\alpha,\Sigma_\alpha)}\big)
= (1-\alpha)\, B_F\big((\theta_1\theta_2)_\alpha : \theta_1\big) + \alpha\, B_F\big((\theta_1\theta_2)_\alpha : \theta_2\big)
= \frac{1}{2}\Big( \mathrm{tr}\big( \Sigma_\alpha^{-1} ((1-\alpha)\Sigma_1 + \alpha\Sigma_2) \big) + \log \frac{|\Sigma_\alpha|}{|\Sigma_1|^{1-\alpha} |\Sigma_2|^{\alpha}} + (1-\alpha)\, (\mu_\alpha - \mu_1)^\top \Sigma_\alpha^{-1} (\mu_\alpha - \mu_1) + \alpha\, (\mu_\alpha - \mu_2)^\top \Sigma_\alpha^{-1} (\mu_\alpha - \mu_2) - d \Big),
JS_*^{G_\alpha}\big(p_{(\mu_1,\Sigma_1)} : p_{(\mu_2,\Sigma_2)}\big) = (1-\alpha)\, KL\big(p_{(\mu_\alpha,\Sigma_\alpha)} : p_{(\mu_1,\Sigma_1)}\big) + \alpha\, KL\big(p_{(\mu_\alpha,\Sigma_\alpha)} : p_{(\mu_2,\Sigma_2)}\big)
= (1-\alpha)\, B_F\big(\theta_1 : (\theta_1\theta_2)_\alpha\big) + \alpha\, B_F\big(\theta_2 : (\theta_1\theta_2)_\alpha\big)
= J_F^\alpha(\theta_1 : \theta_2)
= \frac{1}{2}\left( (1-\alpha)\, \mu_1^\top \Sigma_1^{-1} \mu_1 + \alpha\, \mu_2^\top \Sigma_2^{-1} \mu_2 - \mu_\alpha^\top \Sigma_\alpha^{-1} \mu_\alpha + \log \frac{|\Sigma_1|^{1-\alpha} |\Sigma_2|^{\alpha}}{|\Sigma_\alpha|} \right),
where
\Sigma_\alpha = (\Sigma_1 \Sigma_2)_\alpha^{\Sigma} = \big( (1-\alpha)\Sigma_1^{-1} + \alpha\Sigma_2^{-1} \big)^{-1}
(matrix harmonic barycenter) and
\mu_\alpha = (\mu_1 \mu_2)_\alpha^{\mu} = \Sigma_\alpha \big( (1-\alpha)\Sigma_1^{-1}\mu_1 + \alpha\Sigma_2^{-1}\mu_2 \big).
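Corollary 1 yields a direct implementation. The sketch below (our own code, not the Java implementation mentioned in Section 5) evaluates the skew G-JSD between two multivariate Gaussians via the Gaussian KLD and the interpolated parameters (μ_α, Σ_α):

import numpy as np

def kl_gauss(mu1, S1, mu2, S2):
    # KL(N(mu1,S1) : N(mu2,S2)) for d-dimensional Gaussians.
    d = len(mu1)
    S2inv = np.linalg.inv(S2)
    dmu = mu2 - mu1
    return 0.5 * (np.trace(S2inv @ S1) + dmu @ S2inv @ dmu
                  + np.log(np.linalg.det(S2) / np.linalg.det(S1)) - d)

def g_jsd_gauss(mu1, S1, mu2, S2, alpha=0.5):
    # The geometric mixture of two Gaussians is the Gaussian N(mu_alpha, Sigma_alpha).
    S1inv, S2inv = np.linalg.inv(S1), np.linalg.inv(S2)
    Sa = np.linalg.inv((1 - alpha) * S1inv + alpha * S2inv)  # matrix harmonic barycenter
    mua = Sa @ ((1 - alpha) * S1inv @ mu1 + alpha * S2inv @ mu2)
    return (1 - alpha) * kl_gauss(mu1, S1, mua, Sa) + alpha * kl_gauss(mu2, S2, mua, Sa)

mu1, S1 = np.array([0.0, 0.0]), np.eye(2)
mu2, S2 = np.array([1.0, 2.0]), np.diag([2.0, 0.5])
print(g_jsd_gauss(mu1, S1, mu2, S2))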
Notice that the α-skew Bhattacharyya distance [7]:
B_\alpha(p:q) = -\log \int_{\mathcal{X}} p^{1-\alpha} q^{\alpha} \, \mathrm{d}\mu
between two members of the same exponential family amounts to an α-skew Jensen divergence between the corresponding natural parameters:
B_\alpha(p_{\theta_1} : p_{\theta_2}) = J_F^\alpha(\theta_1 : \theta_2).
A simple proof follows from the fact that
\int p_{(\theta_1\theta_2)_\alpha}(x) \, \mathrm{d}\mu(x) = 1 = \int \frac{ p_{\theta_1}^{1-\alpha}(x)\, p_{\theta_2}^{\alpha}(x) }{ Z_\alpha^G(p_{\theta_1} : p_{\theta_2}) } \, \mathrm{d}\mu(x).
Therefore, we have
\log 1 = 0 = \log \int p_{\theta_1}^{1-\alpha}(x)\, p_{\theta_2}^{\alpha}(x) \, \mathrm{d}\mu(x) - \log Z_\alpha^G(p_{\theta_1} : p_{\theta_2}),
with Z_\alpha^G(p_{\theta_1} : p_{\theta_2}) = \exp\big( -J_F^\alpha(\theta_1 : \theta_2) \big). Thus, it follows that
B_\alpha(p_{\theta_1} : p_{\theta_2}) = -\log \int p_{\theta_1}^{1-\alpha}(x)\, p_{\theta_2}^{\alpha}(x) \, \mathrm{d}\mu(x)
= -\log Z_\alpha^G(p_{\theta_1} : p_{\theta_2})
= J_F^\alpha(\theta_1 : \theta_2).
Corollary 2. 
The JS-symmetrization of the reverse Kullback–Leibler divergence between densities of the same exponential family amounts to calculating a Jensen/Burbea–Rao divergence between the corresponding natural parameters.

4.1.2. Applications to k-Means Clustering

Let P = { p 1 , , p n } denote a point set, and C = { c 1 , , c k } denote a set of k (cluster) centers. The generalized k-means objective [23] with respect to a distance D is defined by:
E_D(P, C) = \frac{1}{n} \sum_{i=1}^n \min_{j \in \{1, \ldots, k\}} D(p_i : c_j).
By defining the distance D ( p , C ) = min j { 1 , , k } D ( p : c j ) of a point to a set of points, we can rewrite compactly the objective function as E D ( P , C ) = 1 n i = 1 n D ( p i , C ) . Denote by E D * ( P , k ) the minimum objective loss for a set of k = | C | clusters: E D * ( P , k ) = min | C | = k E D ( P , C ) . It is NP-hard [50] to compute E D * ( P , k ) when k > 1 and the dimension d > 1 . The most common heuristic is Lloyd’s batched k-means [23] that yields a local minimum.
The performance of the probabilistic k-means++ initialization [51] has been extended to arbitrary distances in [52] as follows:
Theorem 3 
(Generalized k-means++ performance, [53]). Let κ 1 and κ 2 be two constants such that κ 1 defines the quasi-triangular inequality property:
D(x:z) \leq \kappa_1 \big( D(x:y) + D(y:z) \big), \quad \forall x, y, z \in \Delta^d,
and κ 2 handles the symmetry inequality:
D(x:y) \leq \kappa_2\, D(y:x), \quad \forall x, y \in \Delta^d.
Then the generalized k-means++ seeding guarantees with high probability a configuration C of cluster centers such that:
E_D(P, C) \leq 2 \kappa_1^2 (1 + \kappa_2)(2 + \log k)\, E_D^*(P, k).
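A sketch of the generalized k-means++ seeding analyzed in Theorem 3 is given below (our own code; D can be any distance, here instantiated with the squared Euclidean distance, a Bregman divergence):

import numpy as np

def generalized_kmeans_pp_seeding(points, k, D, seed=None):
    # Draw the first center uniformly, then each new center with probability
    # proportional to the distance of a point to its closest chosen center.
    rng = np.random.default_rng(seed)
    centers = [points[rng.integers(len(points))]]
    for _ in range(k - 1):
        dists = np.array([min(D(p, c) for c in centers) for p in points])
        centers.append(points[rng.choice(len(points), p=dists / dists.sum())])
    return centers

pts = np.random.default_rng(0).normal(size=(100, 2))
sqeuclid = lambda p, c: float(np.sum((p - c) ** 2))
print(generalized_kmeans_pp_seeding(pts, 3, sqeuclid, seed=1))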
To bound the constants κ 1 and κ 2 , we rewrite the generalized Jensen–Shannon divergences using quadratic form expressions: That is, using a squared Mahalanobis distance:
D_Q(p:q) = (p - q)^\top Q\, (p - q),
for a positive-definite matrix Q 0 . Since the Bregman divergence can be interpreted as the tail of a first-order Taylor expansion, we have:
B_F(\theta_1 : \theta_2) = \frac{1}{2} (\theta_1 - \theta_2)^\top \nabla^2 F(\xi)\, (\theta_1 - \theta_2),
for ξ Θ (open convex). Similarly, the Jensen divergence can be interpreted as a Jensen–Bregman divergence, and thus we have
J F ( θ 1 : θ 2 ) 1 2 ( θ 1 θ 2 ) 2 F ( ξ ) ( θ 1 θ 2 ) ,
for \xi' \in \Theta. More precisely, for a prescribed point set \{\theta_1, \ldots, \theta_n\}, we have \xi, \xi' \in CH(\{\theta_1, \ldots, \theta_n\}), where CH denotes the closed convex hull. We can therefore upper bound \kappa_1 and \kappa_2 using the ratio of the maximum to the minimum of \nabla^2 F(\theta) over CH(\{\theta_1, \ldots, \theta_n\}). See [54] for further details.
A centroid for a set of parameters θ 1 , , θ n is defined as the minimizer of the functional
E_D(\theta) = \frac{1}{n} \sum_i D(\theta_i : \theta).
In particular, the symmetrized Bregman centroids have been studied in [55] (for JS G α ), and the Jensen centroids (for JS * G α ) have been investigated in [7] using the convex-concave iterative procedure.

4.2. The Harmonic Jensen–Shannon Divergence (H- J S )

The principle to get closed-form formula for generalized Jensen–Shannon divergences between distributions belonging to a parametric family P Θ = { p θ : θ Θ } consists of finding an abstract mean M such that the M-mixture ( p θ 1 p θ 2 ) α M belongs to the family P Θ . In particular, when Θ is a convex domain, we seek a mean M such that ( p θ 1 p θ 2 ) α M = p ( θ 1 θ 2 ) α with ( θ 1 θ 2 ) α Θ .
Let us consider the weighted harmonic mean [34] (induced by the harmonic mean) H:
H_\alpha(x, y) := \frac{1}{ (1-\alpha)\frac{1}{x} + \alpha \frac{1}{y} } = \frac{x y}{(1-\alpha) y + \alpha x} = \frac{x y}{(x y)_{1-\alpha}}, \quad \alpha \in [0, 1].
The harmonic mean is a quasi-arithmetic mean H_\alpha(x,y) = M_\alpha^h(x,y) obtained for the monotone (decreasing) generator h(u) = \frac{1}{u} (or equivalently for the increasing monotone generator h(u) = -\frac{1}{u}).
This harmonic mean is well-suited for the scale family C of Cauchy probability distributions (also called Lorentzian distributions):
\mathcal{C}_\Gamma := \left\{ p_\gamma(x) = \frac{1}{\gamma} p_{\mathrm{std}}\!\left(\frac{x}{\gamma}\right) = \frac{\gamma}{\pi (\gamma^2 + x^2)} \;:\; \gamma \in \Gamma = (0, \infty) \right\},
where \gamma denotes the scale and p_{\mathrm{std}}(x) = \frac{1}{\pi (1 + x^2)} the standard Cauchy distribution.
Using the computer algebra system Maxima (http://maxima.sourceforge.net/) we find that (see Appendix B)
(p_{\gamma_1} p_{\gamma_2})_\alpha^H(x) = \frac{ H_\alpha(p_{\gamma_1}(x) : p_{\gamma_2}(x)) }{ Z_\alpha^H(\gamma_1, \gamma_2) } = p_{(\gamma_1\gamma_2)_\alpha}(x),
where the normalizing coefficient is
Z_\alpha^H(\gamma_1, \gamma_2) := \frac{ \gamma_1 \gamma_2 }{ (\gamma_1\gamma_2)_\alpha \, (\gamma_1\gamma_2)_{1-\alpha} } = \frac{ \gamma_1 \gamma_2 }{ (\gamma_1\gamma_2)_\alpha \, (\gamma_2\gamma_1)_\alpha },
since we have (\gamma_1\gamma_2)_{1-\alpha} = (\gamma_2\gamma_1)_\alpha.
The H-Jensen–Shannon symmetrization of a distance D between distributions writes as:
JS_D^{H_\alpha}(p:q) = (1-\alpha)\, D\big(p : (pq)_\alpha^H\big) + \alpha\, D\big(q : (pq)_\alpha^H\big),
where H_\alpha denotes the weighted harmonic mean. When D is available in closed form for distributions belonging to the scale Cauchy family, so is JS_D^{H_\alpha}(p:q).
For example, consider the KL divergence formula between two scale Cauchy distributions:
KL(p_{\gamma_1} : p_{\gamma_2}) = 2 \log \frac{A(\gamma_1, \gamma_2)}{G(\gamma_1, \gamma_2)} = 2 \log \frac{\gamma_1 + \gamma_2}{2\sqrt{\gamma_1\gamma_2}},
where A and G denote the arithmetic and geometric means, respectively. The formula initially reported in [56] has been corrected by the authors. Since A \geq G (and \frac{A}{G} \geq 1), it follows that KL(p_{\gamma_1} : p_{\gamma_2}) \geq 0. Notice that the KL divergence is symmetric for Cauchy scale distributions. We note in passing that for exponential families, the KL divergence is symmetric only for the location Gaussian family (since the only symmetric Bregman divergences are the squared Mahalanobis distances [57]). The cross-entropy between scale Cauchy distributions is h^\times(p_{\gamma_1} : p_{\gamma_2}) = \log \frac{\pi (\gamma_1 + \gamma_2)^2}{\gamma_2}, and the differential entropy is h(p_\gamma) = h^\times(p_\gamma : p_\gamma) = \log 4\pi\gamma.
Then the H-JS divergence between p = p γ 1 and q = p γ 2 is:
JS^H(p:q) = \frac{1}{2}\left( KL\big(p : (pq)_{1/2}^H\big) + KL\big(q : (pq)_{1/2}^H\big) \right),
JS^H(p_{\gamma_1} : p_{\gamma_2}) = \frac{1}{2}\left( KL\!\left(p_{\gamma_1} : p_{\frac{\gamma_1+\gamma_2}{2}}\right) + KL\!\left(p_{\gamma_2} : p_{\frac{\gamma_1+\gamma_2}{2}}\right) \right)
= \log \frac{ (3\gamma_1 + \gamma_2)(3\gamma_2 + \gamma_1) }{ 8 \sqrt{\gamma_1\gamma_2}\, (\gamma_1 + \gamma_2) }.
We check that when γ 1 = γ 2 = γ , we have JS H α ( p γ : p γ ) = 0 .
Theorem 4 (Harmonic JSD between scale Cauchy distributions).
The harmonic Jensen–Shannon divergence between two scale Cauchy distributions p_{\gamma_1} and p_{\gamma_2} is JS^H(p_{\gamma_1} : p_{\gamma_2}) = \log \frac{ (3\gamma_1 + \gamma_2)(3\gamma_2 + \gamma_1) }{ 8 \sqrt{\gamma_1\gamma_2}\, (\gamma_1 + \gamma_2) }.
Let us report some numerical examples: Consider \gamma_1 = 0.1 and \gamma_2 = 0.5; we find that JS^H(p_{\gamma_1} : p_{\gamma_2}) \approx 0.176. When \gamma_1 = 0.2 and \gamma_2 = 0.8, we find that JS^H(p_{\gamma_1} : p_{\gamma_2}) \approx 0.129.
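The closed-form formula of Theorem 4 reproduces these numerical values; the sketch below (our own helper name) simply evaluates it:

import math

def harmonic_js_cauchy(gamma1, gamma2):
    # Closed form of Theorem 4: log[(3 g1 + g2)(3 g2 + g1) / (8 sqrt(g1 g2) (g1 + g2))].
    num = (3 * gamma1 + gamma2) * (3 * gamma2 + gamma1)
    den = 8 * math.sqrt(gamma1 * gamma2) * (gamma1 + gamma2)
    return math.log(num / den)

print(harmonic_js_cauchy(0.1, 0.5))  # ~0.176
print(harmonic_js_cauchy(0.2, 0.8))  # ~0.129
print(harmonic_js_cauchy(0.7, 0.7))  # 0 when gamma1 = gamma2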
Notice that KL formula is scale-invariant, and this property holds for any scale family:
Lemma 1. 
The Kullback–Leibler divergence between two distributions p_{s_1} and p_{s_2} belonging to the same scale family \{ p_s(x) = \frac{1}{s} p(\frac{x}{s}) \}_{s \in (0,\infty)} with standard density p is scale-invariant: KL(p_{\lambda s_1} : p_{\lambda s_2}) = KL(p_{s_1} : p_{s_2}) = KL(p : p_{s_2/s_1}) = KL(p_{s_1/s_2} : p) for any \lambda > 0.
A direct proof follows from a change of variable in the KL integral with y = \frac{x}{\lambda} and \mathrm{d}x = \lambda\, \mathrm{d}y. Please note that although the KLD between scale Cauchy distributions is symmetric, this is not the case for all scale families: For example, the Rayleigh distributions form a scale family for which the KLD amounts to computing an asymmetric Itakura–Saito Bregman divergence between parameters [37].
Instead of the KLD, we can choose the total variation distance for which a formula has been reported in [38] between two Cauchy distributions. Notice that the Cauchy distributions are alpha-stable distributions for α = 1 and q Gaussian distributions for q = 2 ([58], p. 104). A closed-form formula for the divergence between two q-Gaussians is given in [58] when q < 2 . The definite integral h q ( p ) = + p ( x ) q d μ is available in closed form for Cauchy distributions. When q = 2 , we have h 2 ( p γ ) = 1 2 π γ .
We refer to [38] for yet other illustrative examples considering the family of Pearson type VII distributions and central multivariate t-distributions which use the power means (quasi-arithmetic means M h induced by h ( u ) = u α for α > 0 ) for defining mixtures.
Table 1 summarizes the various examples introduced in the paper.

4.3. The M-Jensen–Shannon Matrix Distances

In this section, we consider distances between matrices which play an important role in quantum computing [59,60]. We refer to [61] for the matrix Jensen–Bregman logdet divergence. The Hellinger distance can be interpreted as the difference of an arithmetic mean A and a geometric mean G:
D_H(p, q) = \sqrt{ 1 - \int_{\mathcal{X}} \sqrt{ p(x)\, q(x) } \, \mathrm{d}\mu(x) } = \sqrt{ \int_{\mathcal{X}} \big( A(p(x), q(x)) - G(p(x), q(x)) \big) \, \mathrm{d}\mu(x) }.
Notice that since A \geq G, we have D_H(p, q) \geq 0. The scaled and squared Hellinger distance is an α-divergence I_\alpha for \alpha = 0. Recall that the α-divergence can be interpreted as the difference of a weighted arithmetic mean minus a weighted geometric mean.
In general, if a mean M 1 dominates a mean M 2 , we may define the distance as
D_{M_1, M_2}(p, q) = \int_{\mathcal{X}} \big( M_1(p, q) - M_2(p, q) \big) \, \mathrm{d}\mu(x).
When considering matrices [62], there is not a unique definition of a geometric matrix mean, and thus we have different notions of matrix Hellinger distances [62], some of them are divergences (smooth distances defining a dualistic structure in information geometry).
We define the matrix M-Jensen–Shannon divergence for a matrix divergence [63,64] D as follows:
JS_D^M(X_1, X_2) = \frac{1}{2}\Big( D\big(X_1 : M(X_1, X_2)\big) + D\big(X_2 : M(X_1, X_2)\big) \Big) = JS_D^M(X_2, X_1).
For example, we can choose the von Neumann matrix divergence [63]:
D_{\mathrm{vN}}(X_1 : X_2) := \mathrm{tr}\big( X_1 \log X_1 - X_1 \log X_2 - X_1 + X_2 \big),
or the LogDet matrix divergence [63]:
D_{\mathrm{ld}}(X_1 : X_2) := \mathrm{tr}\big( X_1 X_2^{-1} \big) - \log \big| X_1 X_2^{-1} \big| - d,
where square matrices X 1 and X 2 have dimension d.
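As an illustrative sketch (our own code, using the arithmetic matrix mean as one possible choice of M; scipy.linalg.logm computes the matrix logarithm), the matrix M-Jensen–Shannon divergence with the von Neumann matrix divergence can be evaluated as follows:

import numpy as np
from scipy.linalg import logm

def von_neumann_divergence(X1, X2):
    # D_vN(X1:X2) = tr(X1 log X1 - X1 log X2 - X1 + X2) for symmetric positive-definite matrices.
    return float(np.trace(X1 @ logm(X1) - X1 @ logm(X2) - X1 + X2).real)

def matrix_m_jsd(X1, X2, D, M):
    # (D(X1 : M(X1,X2)) + D(X2 : M(X1,X2))) / 2, symmetric in X1 and X2.
    Xm = M(X1, X2)
    return 0.5 * (D(X1, Xm) + D(X2, Xm))

arithmetic_mean = lambda A, B: 0.5 * (A + B)
X1 = np.array([[2.0, 0.3], [0.3, 1.0]])
X2 = np.array([[1.0, -0.2], [-0.2, 3.0]])
print(matrix_m_jsd(X1, X2, von_neumann_divergence, arithmetic_mean))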

5. Conclusions and Perspectives

We introduced a generalization of the celebrated Jensen–Shannon divergence [6], termed the ( M , N ) -Jensen–Shannon divergences, based on M-mixtures derived from abstract means M. This new family of divergences includes the ordinary Jensen–Shannon divergence when both M and N are set to the arithmetic mean. We reported closed-form expressions of the M Jensen–Shannon divergences for mixture families and exponential families in information geometry by choosing the arithmetic and geometric weighted mean, respectively. The α -skewed geometric Jensen–Shannon divergence (G-Jensen–Shannon divergence) between densities p θ 1 and p θ 2 of the same exponential family with cumulant function F is
JS_{KL}^{G_\alpha}[p_{\theta_1} : p_{\theta_2}] = JS_{B_F^*}^{A_\alpha}(\theta_1 : \theta_2).
Here, we used the bracket notation to emphasize that the statistical distance JS KL G α is between densities, and the parenthesis notation to emphasize that the distance JS B F * A α is between parameters. We also have JS KL * G α [ p θ 1 : p θ 2 ] = J F α ( θ 1 : θ 2 ) . We also show how to get a closed-form formula for the harmonic Jensen–Shannon divergence of Cauchy scale distributions by taking harmonic mixtures.
For an arbitrary distance D, we define the skew N-Jeffreys symmetrization:
J_D^{N_\beta}(p_1 : p_2) = N_\beta\big( D(p_1 : p_2), \, D(p_2 : p_1) \big),
and the skew (M,N)-JS-symmetrization:
JS_D^{M_\alpha, N_\beta}(p_1 : p_2) = N_\beta\Big( D\big(p_1 : (p_1 p_2)_\alpha^M\big), \, D\big(p_2 : (p_1 p_2)_\alpha^M\big) \Big).
A Java™ source code for computing the geometric Jensen–Shannon divergence between multivariate Gaussian distributions is available online at https://franknielsen.github.io/M-JS/.

Funding

This research received no external funding.

Conflicts of Interest

The author declares no conflict of interest.

Appendix A. Summary of Distances and Their Notations

Table A1 lists the main distances with their notations.
Table A1. Summary of Distances and Their Notations.
Weighted mean M α , α ( 0 , 1 )
Arithmetic mean A α ( x , y ) = ( 1 α ) x + α y
Geometric mean G α ( x , y ) = x 1 α y α
Harmonic mean H α ( x , y ) = x y ( 1 α ) y + α x
Power mean P α p ( x , y ) = ( ( 1 α ) x p + α y p ) 1 p , p R { 0 } , lim p 0 P α p = G
Quasi-arithmetic mean M α f ( x , y ) = f 1 ( ( 1 α ) f ( x ) + α f ( y ) ) , f strictly monotonous
M-mixture (p q)_\alpha^M(x) = \frac{M_\alpha(p(x), q(x))}{Z_\alpha^M(p,q)}
with Z_\alpha^M(p,q) = \int_{t \in \mathcal{X}} M_\alpha(p(t), q(t)) \, \mathrm{d}\mu(t)
Statistical distance D ( p : q )
Dual/reverse distance D * D * ( p : q ) : = D ( q : p )
Kullback-Leibler divergence KL ( p : q ) = p ( x ) log p ( x ) q ( x ) d μ ( x )
reverse Kullback-Leibler divergence KL * ( p : q ) = KL ( q : p ) = q ( x ) log q ( x ) p ( x ) d μ ( x )
Jeffreys divergence J ( p ; q ) = KL ( p : q ) + KL ( q : p ) = ( p ( x ) q ( x ) ) log p ( x ) q ( x ) d μ ( x )
Resistor divergence \frac{1}{R(p;q)} = \frac{1}{2}\left( \frac{1}{KL(p:q)} + \frac{1}{KL(q:p)} \right), \quad R(p;q) = \frac{2\, KL(p:q)\, KL(q:p)}{J(p;q)}
skew K-divergence K α ( p : q ) = p ( x ) log p ( x ) ( 1 α ) p ( x ) + α q ( x ) d μ ( x )
Jensen-Shannon divergence JS ( p , q ) = 1 2 KL p : p + q 2 + KL q : p + q 2
skew Bhattacharyya divergence B_\alpha(p:q) = -\log \int_{\mathcal{X}} p(x)^{1-\alpha} q(x)^{\alpha} \, \mathrm{d}\mu(x)
Hellinger distance D H ( p , q ) = 1 X p ( x ) q ( x ) d μ ( x )
α -divergences I α ( p : q ) = α p ( x ) + ( 1 α ) q ( x ) p ( x ) α q ( x ) 1 α d μ ( x ) , α { 0 , 1 }
I α ( p : q ) = A α ( q : p ) G α ( q : p )
Mahalanobis distance D Q ( p : q ) = ( p q ) Q ( p q ) for a positive-definite matrix Q 0
f-divergence I f ( p : q ) = p ( x ) f q ( x ) p ( x ) d μ ( x ) , with f ( 1 ) = f ( 1 ) = 0
f strictly convex at 1
reverse f-divergence I f * ( p : q ) = q ( x ) f p ( x ) q ( x ) d μ ( x ) = I f ( p : q )
for f ( u ) = u f ( 1 u )
J-symmetrized f-divergence J f ( p ; q ) = 1 2 ( I f ( p : q ) + I f ( q : p ) )
JS-symmetrized f-divergence I f α ( p ; q ) : = ( 1 α ) I f ( p : ( p q ) α ) + α I f ( q : ( p q ) α ) = I f α JS ( p : q )
for f α JS ( u ) : = ( 1 α ) f ( α u + 1 α ) + α f α + 1 α u
Parameter distance
Bregman divergence B F ( θ : θ ) : = F ( θ ) F ( θ ) θ θ , F ( θ )
skew Jeffreys-Bregman divergence S F α = ( 1 α ) B F ( θ : θ ) + α B F ( θ : θ )
skew Jensen divergence J F α ( θ : θ ) : = ( F ( θ ) F ( θ ) ) α F ( ( θ θ ) α )
Jensen-Bregman divergence JB F ( θ ; θ ) = 1 2 B F θ : θ + θ 2 + B F θ : θ + θ 2 = J F ( θ ; θ ) .
Generalized Jensen-Shannon divergences
skew J-symmetrization J D α ( p : q ) : = ( 1 α ) D p : q + α D q : p
skew JS -symmetrization JS D α ( p : q ) : = ( 1 α ) D p : ( 1 α ) p + α q + α D q : ( 1 α ) p + α q
skew M-Jensen-Shannon divergence JS M α ( p : q ) : = ( 1 α ) KL p : ( p q ) α M + α KL q : ( p q ) α M
skew M- JS -symmetrization JS D M α ( p : q ) : = ( 1 α ) D p : ( p q ) α M + α D q : ( p q ) α M
N-Jeffreys divergence J N β ( p : q ) : = N β ( KL p : q , KL q : p )
N-J D divergence J D N β ( p : q ) = N β ( D ( p : q ) , D ( q : p ) )
skew ( M , N ) -D JS divergence JS D M α , N β ( p : q ) : = N β D p : ( p q ) α M , D q : ( p q ) α M

Appendix B. Symbolic Calculations in Maxima

The program below calculates the normalizer Z for the harmonic H-mixtures of Cauchy distributions (Equation (133)).
assume(gamma>0);
Cauchy(x,gamma) := gamma/(%pi*(x**2+gamma**2));
assume(alpha>0);
assume(alpha<1);
h(x,y,alpha) := (x*y)/((1-alpha)*y+alpha*x);
assume(gamma1>0);
assume(gamma2>0);
m(x,alpha) := ratsimp(h(Cauchy(x,gamma1),Cauchy(x,gamma2),alpha));
/* calculate Z */
integrate(m(x,alpha),x,-inf,inf);

References

  1. Billingsley, P. Probability and Measure; John Wiley & Sons: Hoboken, NJ, USA, 2008. [Google Scholar]
  2. Cover, T.M.; Thomas, J.A. Elements of Information Theory; John Wiley & Sons: Hoboken, NJ, USA, 2012. [Google Scholar]
  3. Ho, S.W.; Yeung, R.W. On the discontinuity of the Shannon information measures. In Proceedings of the IEEE International Symposium on Information Theory (ISIT), Adelaide, Australia, 4–9 September 2005; pp. 159–163. [Google Scholar]
  4. Nielsen, F. Jeffreys centroids: A closed-form expression for positive histograms and a guaranteed tight approximation for frequency histograms. IEEE Signal Process. Lett. 2013, 20, 657–660. [Google Scholar] [CrossRef]
  5. Johnson, D.; Sinanovic, S. Symmetrizing the Kullback-Leibler Distance. Technical report of Rice University (US). 2001. Available online: https://scholarship.rice.edu/handle/1911/19969 (accessed on 11 May 2019).
  6. Lin, J. Divergence measures based on the Shannon entropy. IEEE Trans. Inf. Theory 1991, 37, 145–151. [Google Scholar] [CrossRef]
  7. Nielsen, F.; Boltz, S. The Burbea-Rao and Bhattacharyya centroids. IEEE Trans. Inf. Theory 2011, 57, 5455–5466. [Google Scholar] [CrossRef]
  8. Vajda, I. On metric divergences of probability measures. Kybernetika 2009, 45, 885–900. [Google Scholar]
  9. Fuglede, B.; Topsoe, F. Jensen-Shannon divergence and Hilbert space embedding. In Proceedings of the IEEE International Symposium on Information Theory (ISIT), Waikiki, HI, USA, 29 June–4 July 2014; p. 31. [Google Scholar]
  10. Sims, G.E.; Jun, S.R.; Wu, G.A.; Kim, S.H. Alignment-free genome comparison with feature frequency profiles (FFP) and optimal resolutions. Proc. Natl. Acad. Sci. USA 2009, 106, 2677–2682. [Google Scholar] [CrossRef]
  11. DeDeo, S.; Hawkins, R.X.; Klingenstein, S.; Hitchcock, T. Bootstrap methods for the empirical study of decision-making and information flows in social systems. Entropy 2013, 15, 2246–2276. [Google Scholar] [CrossRef]
  12. Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative adversarial nets. In Advances in Neural Information Processing Systems; Curran Associates, Inc.: Red Hook, NY, USA, 2014; pp. 2672–2680. [Google Scholar]
  13. Wang, Y.; Woods, K.; McClain, M. Information-theoretic matching of two point sets. IEEE Trans. Image Process. 2002, 11, 868–872. [Google Scholar] [CrossRef]
  14. Peter, A.M.; Rangarajan, A. Information geometry for landmark shape analysis: Unifying shape representation and deformation. IEEE Trans. Pattern Anal. Mach. Intell. 2009, 31, 337–350. [Google Scholar] [CrossRef] [PubMed]
  15. Nielsen, F.; Sun, K. Guaranteed bounds on information-theoretic measures of univariate mixtures using piecewise log-sum-exp inequalities. Entropy 2016, 18, 442. [Google Scholar] [CrossRef]
  16. Wang, F.; Syeda-Mahmood, T.; Vemuri, B.C.; Beymer, D.; Rangarajan, A. Closed-form Jensen-Rényi divergence for mixture of Gaussians and applications to group-wise shape registration. In International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI); Springer: Berlin, Germany, 2009; pp. 648–655. [Google Scholar]
  17. Nielsen, F. Closed-form information-theoretic divergences for statistical mixtures. In Proceedings of the IEEE 21st International Conference on Pattern Recognition (ICPR2012), Tsukuba, Japan, 11–15 November 2012; pp. 1723–1726. [Google Scholar]
  18. Amari, S.I. Information Geometry and Its Applications; Springer: Berlin, Germany, 2016. [Google Scholar]
  19. Csiszár, I. Information-type measures of difference of probability distributions and indirect observation. Stud. Sci. Math. Hung. 1967, 2, 229–318. [Google Scholar]
  20. Eguchi, S. Geometry of minimum contrast. Hiroshima Math. J. 1992, 22, 631–647. [Google Scholar] [CrossRef]
  21. Amari, S.I.; Cichocki, A. Information geometry of divergence functions. Bull. Pol. Acad. Sci. Tech. Sci. 2010, 58, 183–195. [Google Scholar] [CrossRef]
  22. Ciaglia, F.M.; Di Cosmo, F.; Felice, D.; Mancini, S.; Marmo, G.; Pérez-Pardo, J.M. Hamilton-Jacobi approach to potential functions in information geometry. J. Math. Phys. 2017, 58, 063506. [Google Scholar] [CrossRef]
  23. Banerjee, A.; Merugu, S.; Dhillon, I.S.; Ghosh, J. Clustering with Bregman divergences. J. Mach. Learn. Res. 2005, 6, 1705–1749. [Google Scholar]
  24. Nielsen, F. A family of statistical symmetric divergences based on Jensen’s inequality. arXiv 2010, arXiv:1009.4004. [Google Scholar]
  25. Chen, P.; Chen, Y.; Rao, M. Metrics defined by Bregman divergences. Commun. Math. Sci. 2008, 6, 915–926. [Google Scholar] [CrossRef]
  26. Chen, P.; Chen, Y.; Rao, M. Metrics defined by Bregman divergences: Part 2. Commun. Math. Sci. 2008, 6, 927–948. [Google Scholar] [CrossRef]
  27. Kafka, P.; Österreicher, F.; Vincze, I. On powers of f-divergences defining a distance. Stud. Sci. Math. Hung. 1991, 26, 415–422. [Google Scholar]
  28. Österreicher, F.; Vajda, I. A new class of metric divergences on probability spaces and its applicability in statistics. Ann. Inst. Stat. Math. 2003, 55, 639–653. [Google Scholar] [CrossRef]
  29. Nielsen, F.; Nock, R. On the geometry of mixtures of prescribed distributions. In Proceeding of the 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Calgary, AB, Canada, 15–20 April 2018; pp. 2861–2865. [Google Scholar]
  30. Nielsen, F.; Hadjeres, G. Monte Carlo Information Geometry: The dually flat case. arXiv 2018, arXiv:1803.07225. [Google Scholar]
  31. Watanabe, S.; Yamazaki, K.; Aoyagi, M. Kullback information of normal mixture is not an analytic function. IEICE Tech. Rep. Neurocomput. 2004, 104, 41–46. [Google Scholar]
  32. Nielsen, F.; Nock, R. On the chi square and higher-order chi distances for approximating f-divergences. IEEE Signal Process. Lett. 2014, 21, 10–13. [Google Scholar] [CrossRef]
  33. Nielsen, F.; Hadjeres, G. On power chi expansions of f-divergences. arXiv 2019, arXiv:1903.05818. [Google Scholar]
  34. Niculescu, C.; Persson, L.E. Convex Functions and Their Applications, 2nd ed.; Springer: Berlin, Germany, 2018. [Google Scholar]
  35. Rényi, A. On measures of entropy and information. In Proceedings of the Fourth Berkeley Symposium on Mathematical Statistics and Probability, Volume 1: Contributions to the Theory of Statistics; The Regents of the University of California: Oakland, CA, USA, 1961. [Google Scholar]
  36. McLachlan, G.J.; Lee, S.X.; Rathnayake, S.I. Finite mixture models. Ann. Rev. Stat. Appl. 2019, 6, 355–378. [Google Scholar] [CrossRef]
  37. Nielsen, F.; Garcia, V. Statistical exponential families: A digest with flash cards. arXiv 2009, arXiv:0911.4863. [Google Scholar]
  38. Nielsen, F. Generalized Bhattacharyya and Chernoff upper bounds on Bayes error using quasi-arithmetic means. Pattern Recognit. Lett. 2014, 42, 25–34. [Google Scholar] [CrossRef]
  39. Eguchi, S.; Komori, O. Path connectedness on a space of probability density functions. In Geometric Science of Information (GSI); Springer: Cham, Switzerland, 2015; pp. 615–624. [Google Scholar]
  40. Eguchi, S.; Komori, O.; Ohara, A. Information geometry associated with generalized means. In Information Geometry and its Applications IV; Springer: Berlin, Germany, 2016; pp. 279–295. [Google Scholar]
  41. Asadi, M.; Ebrahimi, N.; Kharazmi, O.; Soofi, E.S. Mixture models, Bayes Fisher information, and divergence measures. IEEE Trans. Inf. Theory 2019, 65, 2316–2321. [Google Scholar] [CrossRef]
  42. Amari, S.I. Integration of stochastic models by minimizing α-divergence. Neural Comput. 2007, 19, 2780–2796. [Google Scholar] [CrossRef]
  43. Nielsen, F.; Nock, R. Generalizing skew Jensen divergences and Bregman divergences with comparative convexity. IEEE Signal Process. Lett. 2017, 24, 1123–1127. [Google Scholar] [CrossRef]
  44. Lee, L. Measures of distributional similarity. In Proceedings of the 37th Annual Meeting of the Association for Computational Linguistics on Computational Linguistics, Association for Computational Linguistics, Stroudsburg, PA, USA, 20–26 June 1999; pp. 25–32. [Google Scholar] [CrossRef]
  45. Nielsen, F. The statistical Minkowski distances: Closed-form formula for Gaussian mixture models. arXiv 2019, arXiv:1901.03732. [Google Scholar]
  46. Zhang, J. Reference duality and representation duality in information geometry. AIP Conf. Proc. 2015, 1641, 130–146. [Google Scholar]
  47. Yoshizawa, S.; Tanabe, K. Dual differential geometry associated with the Kullback-Leibler information on the Gaussian distributions and its 2-parameter deformations. SUT J. Math. 1999, 35, 113–137. [Google Scholar]
  48. Nielsen, F.; Nock, R. A closed-form expression for the Sharma–Mittal entropy of exponential families. J. Phys. A Math. Theor. 2011, 45, 032003. [Google Scholar] [CrossRef]
  49. Nielsen, F. An elementary introduction to information geometry. arXiv 2018, arXiv:1808.08271. [Google Scholar]
  50. Nielsen, F.; Nock, R. Optimal interval clustering: Application to Bregman clustering and statistical mixture learning. IEEE Signal Process. Lett. 2014, 21, 1289–1292. [Google Scholar] [CrossRef]
  51. Arthur, D.; Vassilvitskii, S. k-means++: The advantages of careful seeding. In Proceedings of the Eighteenth Annual ACM-SIAM Symposium on Discrete Algorithms. Society for Industrial and Applied Mathematics; ACM: New York, NY, USA, 2007; pp. 1027–1035. [Google Scholar]
  52. Nielsen, F.; Nock, R.; Amari, S.I. On clustering histograms with k-means by using mixed α-divergences. Entropy 2014, 16, 3273–3301. [Google Scholar] [CrossRef]
  53. Nielsen, F.; Nock, R. Total Jensen divergences: definition, properties and clustering. In Proceedings of the 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Brisbane, QLD, Australia, 19–24 August 2015; pp. 2016–2020. [Google Scholar]
  54. Ackermann, M.R.; Blömer, J. Bregman clustering for separable instances. In Scandinavian Workshop on Algorithm Theory; Springer: Berlin, Germany, 2010; pp. 212–223. [Google Scholar]
  55. Nielsen, F.; Nock, R. Sided and symmetrized Bregman centroids. IEEE Trans. Inf. Theory 2009, 55, 2882–2904. [Google Scholar] [CrossRef]
  56. Tzagkarakis, G.; Tsakalides, P. A statistical approach to texture image retrieval via alpha-stable modeling of wavelet decompositions. In Proceedings of the 5th International Workshop on Image Analysis for Multimedia Interactive Services, Instituto Superior Técnico, Lisboa, Portugal, 21–23 April 2004; pp. 21–23. [Google Scholar]
  57. Boissonnat, J.D.; Nielsen, F.; Nock, R. Bregman Voronoi diagrams. Discrete Comput. Geom. 2010, 44, 281–307. [Google Scholar] [CrossRef]
  58. Naudts, J. Generalised Thermostatistics; Springer Science & Business Media: Berlin, Germany, 2011. [Google Scholar]
  59. Briët, J.; Harremoës, P. Properties of classical and quantum Jensen-Shannon divergence. Phys. Rev. A 2009, 79, 052311. [Google Scholar] [CrossRef]
  60. Audenaert, K.M. Quantum skew divergence. J. Math. Phys. 2014, 55, 112202. [Google Scholar] [CrossRef]
  61. Cherian, A.; Sra, S.; Banerjee, A.; Papanikolopoulos, N. Jensen-Bregman logdet divergence with application to efficient similarity search for covariance matrices. IEEE Trans. Pattern Anal. Mach. Intell. 2013, 35, 2161–2174. [Google Scholar] [CrossRef]
  62. Bhatia, R.; Jain, T.; Lim, Y. Strong convexity of sandwiched entropies and related optimization problems. Rev. Math. Phys. 2018, 30, 1850014. [Google Scholar] [CrossRef]
  63. Kulis, B.; Sustik, M.A.; Dhillon, I.S. Low-rank kernel learning with Bregman matrix divergences. J. Mach. Learn. Res. 2009, 10, 341–376. [Google Scholar]
  64. Nock, R.; Magdalou, B.; Briys, E.; Nielsen, F. Mining matrix data with Bregman matrix divergences for portfolio selection. In Matrix Information Geometry; Springer: Berlin, Germany, 2013; pp. 373–402. [Google Scholar]
Table 1. Summary of the weighted means M chosen according to the parametric family in order to ensure that the family is closed under M-mixturing: (p_{\theta_1} p_{\theta_2})_\alpha^M = p_{(\theta_1\theta_2)_\alpha}.
JS^{M_\alpha} | mean M | parametric family | normalizer Z_\alpha^M(p:q)
JS^{A_\alpha} | arithmetic A | mixture family | Z_\alpha^A(\theta_1:\theta_2) = 1
JS^{G_\alpha} | geometric G | exponential family | Z_\alpha^G(\theta_1:\theta_2) = \exp(-J_F^\alpha(\theta_1:\theta_2))
JS^{H_\alpha} | harmonic H | Cauchy scale family | Z_\alpha^H(\theta_1:\theta_2) = \frac{\theta_1\theta_2}{(\theta_1\theta_2)_\alpha (\theta_1\theta_2)_{1-\alpha}}
