Article

On Relations Between the Relative Entropy and χ2-Divergence, Generalizations and Applications

1
Independent Researcher, Tokyo 206–0003, Japan
2
Faculty of Electrical Engineering, Technion—Israel Institute of Technology, Technion City, Haifa 3200003, Israel
*
Author to whom correspondence should be addressed.
Entropy 2020, 22(5), 563; https://doi.org/10.3390/e22050563
Submission received: 22 April 2020 / Revised: 12 May 2020 / Accepted: 17 May 2020 / Published: 18 May 2020

Abstract:
This paper is focused on a study of integral relations between the relative entropy and the chi-squared divergence, which are two fundamental divergence measures in information theory and statistics, a study of the implications of these relations, their information-theoretic applications, and some generalizations pertaining to the rich class of f-divergences. Applications that are studied in this paper refer to lossless compression, the method of types and large deviations, strong data–processing inequalities, bounds on contraction coefficients and maximal correlation, and the convergence rate to stationarity of a type of discrete-time Markov chains.

1. Introduction

The relative entropy (also known as the Kullback–Leibler divergence [1]) and the chi-squared divergence [2] are divergence measures which play a key role in information theory, statistics, learning, signal processing, and other theoretical and applied branches of mathematics. These divergence measures are fundamental in problems pertaining to source and channel coding, combinatorics and large deviations theory, goodness-of-fit and independence tests in statistics, expectation–maximization iterative algorithms for estimating a distribution from incomplete data, and other sorts of problems (the reader is referred to the tutorial paper by Csiszár and Shields [3]). They both belong to an important class of divergence measures, defined by means of convex functions f, and named f-divergences [4,5,6,7,8]. In addition to the relative entropy and the chi-squared divergence, this class unifies other useful divergence measures such as the total variation distance in functional analysis, and it is also closely related to the Rényi divergence which generalizes the relative entropy [9,10]. In general, f-divergences (defined in Section 2) are attractive since they satisfy pleasing features such as the data–processing inequality, convexity, (semi)continuity, and duality properties, and they therefore find nice applications in information theory and statistics (see, e.g., [6,8,11,12]).
In this work, we study integral relations between the relative entropy and the chi-squared divergence, implications of these relations, and some of their information-theoretic applications. Some generalizations which apply to the class of f-divergences are also explored in detail. In this context, it should be noted that integral representations of general f-divergences, expressed as a function of the DeGroot statistical information [13], the $E_\gamma$-divergence (a parametric sub-class of f-divergences, which generalizes the total variation distance [14] [p. 2314]), or the relative information spectrum, have been derived in [12] [Section 5], [15] [Section 7.B], and [16] [Section 3], respectively.
Applications in this paper are related to lossless source compression, large deviations by the method of types, and strong data–processing inequalities. The relevant background for each of these applications is provided to make the presentation self-contained.
We next outline the paper contributions and the structure of our manuscript.

1.1. Paper Contributions

This work starts by introducing integral relations between the relative entropy and the chi-squared divergence, and some inequalities which relate these two divergences (see Theorem 1, its corollaries, and Proposition 1). It continues with a study of the implications and generalizations of these relations, pertaining to the rich class of f-divergences. One implication leads to a tight lower bound on the relative entropy between a pair of probability measures, expressed as a function of the means and variances under these measures (see Theorem 2). A second implication of Theorem 1 leads to an upper bound on a skew divergence (see Theorem 3 and Corollary 3). In view of the concavity of the Shannon entropy, define the concavity deficit of the entropy function as the non-negative difference between the entropy of a convex combination of distributions and the convex combination of the entropies of these distributions. Corollary 4 provides an upper bound on this deficit, expressed as a function of the pairwise relative entropies between all pairs of distributions. Theorem 4 provides a generalization of Theorem 1 to the class of f-divergences; it recursively constructs non-increasing sequences of f-divergences, and, as a consequence of Theorem 4 combined with the use of polylogarithms, Corollary 5 generalizes the useful integral relation in Theorem 1 between the relative entropy and the chi-squared divergence. Theorem 5 relates probabilities of sets to f-divergences, generalizing a known and useful result by Csiszár for the relative entropy. With respect to Theorem 1, the integral relation between the relative entropy and the chi-squared divergence has been independently derived in [17], which also derived an alternative upper bound on the concavity deficit of the entropy as a function of total variation distances (differing from the bound in Corollary 4, which depends on pairwise relative entropies). The interested reader is referred to [17], with a preprint of the extended version in [18], and to [19] where the connections in Theorem 1 were originally discovered in the quantum setting.
The second part of this work studies information-theoretic applications of the above results. These are ordered by starting from the relatively simple applications, and ending at the more complicated ones. The first one includes a bound on the redundancy of the Shannon code for universal lossless compression with discrete memoryless sources, used in conjunction with Theorem 3 (see Section 4.1). An application of Theorem 2 in the context of the method of types and large deviations analysis is then studied in Section 4.2, providing non-asymptotic bounds which lead to a closed-form expression as a function of the Lambert W-function (see Proposition 2). Strong data–processing inequalities with bounds on contraction coefficients of skew divergences are provided in Theorem 6, Corollary 7 and Proposition 3. Consequently, non-asymptotic bounds on the convergence to stationarity of time-homogeneous, irreducible, and reversible discrete-time Markov chains with finite state spaces are obtained by relying on our bounds on the contraction coefficients of skew divergences (see Theorem 7). The exact asymptotic convergence rate is also obtained in Corollary 8. Finally, a property of maximal correlations is obtained in Proposition 4 as an application of our starting point on the integral relation between the relative entropy and the chi-squared divergence.

1.2. Paper Organization

This paper is structured as follows. Section 2 presents notation and preliminary material which is necessary for, or otherwise related to, the exposition of this work. Section 3 refers to the developed relations between divergences, and Section 4 studies information-theoretic applications. Proofs of the results in Section 3 and Section 4 (except for short proofs) are deferred to Section 5.

2. Preliminaries and Notation

This section provides definitions of divergence measures which are used in this paper, and it also provides relevant notation.
Definition 1.
[12] [p. 4398] Let $P$ and $Q$ be probability measures, let $\mu$ be a dominating measure of $P$ and $Q$ (i.e., $P, Q \ll \mu$), and let $p := \frac{dP}{d\mu}$ and $q := \frac{dQ}{d\mu}$ be the densities of $P$ and $Q$ with respect to $\mu$. The f-divergence from $P$ to $Q$ is given by
$$D_f(P \| Q) := \int q\, f\!\left(\frac{p}{q}\right) d\mu, \tag{1}$$
where
$$f(0) := \lim_{t \to 0^+} f(t), \qquad 0\, f\!\left(\frac{0}{0}\right) := 0, \tag{2}$$
$$0\, f\!\left(\frac{a}{0}\right) := \lim_{t \to 0^+} t\, f\!\left(\frac{a}{t}\right) = a \lim_{u \to \infty} \frac{f(u)}{u}, \qquad a > 0. \tag{3}$$
It should be noted that the right side of (1) does not depend on the dominating measure μ.
Throughout the paper, we denote by $\mathbb{1}\{\text{relation}\}$ the indicator function; it is equal to 1 if the relation is true, and to 0 otherwise. Unless indicated explicitly, logarithms have an arbitrary common base (larger than 1), and $\exp(\cdot)$ denotes the inverse function of the logarithm with that base.
Definition 2.
[1] The relative entropy is the f-divergence with $f(t) := t \log t$ for $t > 0$,
$$\begin{aligned} D(P \| Q) &:= D_f(P \| Q) &&(4)\\ &= \int p \log\frac{p}{q}\, d\mu. &&(5)\end{aligned}$$
Definition 3.
The total variation distance between probability measures $P$ and $Q$ is the f-divergence from $P$ to $Q$ with $f(t) := |t-1|$ for all $t \ge 0$. It is a symmetric f-divergence, denoted by $|P-Q|$, which is given by
$$\begin{aligned} |P - Q| &:= D_f(P \| Q) &&(6)\\ &= \int |p - q|\, d\mu. &&(7)\end{aligned}$$
Definition 4.
[2] The chi-squared divergence from $P$ to $Q$ is defined to be the f-divergence in (1) with $f(t) := (t-1)^2$ or $f(t) := t^2 - 1$ for all $t > 0$,
$$\begin{aligned} \chi^2(P \| Q) &:= D_f(P \| Q) &&(8)\\ &= \int \frac{(p-q)^2}{q}\, d\mu = \int \frac{p^2}{q}\, d\mu - 1. &&(9)\end{aligned}$$
The Rényi divergence, a generalization of the relative entropy, was introduced by Rényi [10] in the special case of finite alphabets. Its general definition is given as follows (see, e.g., [9]).
Definition 5.
[10] Let $P$ and $Q$ be probability measures on $\mathcal{X}$ dominated by $\mu$, and let their densities be respectively denoted by $p = \frac{dP}{d\mu}$ and $q = \frac{dQ}{d\mu}$. The Rényi divergence of order $\alpha \in [0, \infty]$ is defined as follows:
  • If $\alpha \in (0,1) \cup (1,\infty)$, then
$$\begin{aligned} D_\alpha(P \| Q) &= \frac{1}{\alpha-1} \log \mathbb{E}\bigl[p^\alpha(Z)\, q^{1-\alpha}(Z)\bigr] &&(10)\\ &= \frac{1}{\alpha-1} \log \sum_{x \in \mathcal{X}} P^\alpha(x)\, Q^{1-\alpha}(x), &&(11)\end{aligned}$$
    where $Z \sim \mu$ in (10), and (11) holds if $\mathcal{X}$ is a discrete set.
  • By the continuous extension of $D_\alpha(P \| Q)$,
$$\begin{aligned} D_0(P \| Q) &= \max_{A:\, P(A)=1} \log\frac{1}{Q(A)}, &&(12)\\ D_1(P \| Q) &= D(P \| Q), &&(13)\\ D_\infty(P \| Q) &= \log \operatorname*{ess\,sup} \frac{p(Z)}{q(Z)}. &&(14)\end{aligned}$$
The second-order Rényi divergence and the chi-squared divergence are related as follows:
$$D_2(P \| Q) = \log\bigl(1 + \chi^2(P \| Q)\bigr), \tag{15}$$
and the relative entropy and the chi-squared divergence satisfy (see, e.g., [20] [Theorem 5])
$$D(P \| Q) \le \log\bigl(1 + \chi^2(P \| Q)\bigr). \tag{16}$$
Inequality (16) readily follows from (13), (15), and the fact that $D_\alpha(P \| Q)$ is monotonically increasing in $\alpha \in (0, \infty)$ (see [9] [Theorem 3]). A tightened version of (16), introducing an improved and locally-tight upper bound on $D(P\|Q)$ as a function of $\chi^2(P\|Q)$ and $\chi^2(Q\|P)$, is introduced in [15] [Theorem 20]. Another sharpened version of (16) is derived in [15] [Theorem 11] under the assumption of a bounded relative information. Furthermore, under the latter assumption, tight upper and lower bounds on the ratio $\frac{D(P\|Q)}{\chi^2(P\|Q)}$ are obtained in [15] [(169)].
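As a quick numerical illustration (not part of the original text), the following Python sketch evaluates the divergences of Definitions 2–4 and the Rényi divergence for a pair of finite-alphabet distributions, and checks the relations (15) and (16); the helper names (kl_div, chi2_div, tv_dist, renyi_div) are our own.

```python
# Minimal sketch of the divergences in Definitions 2-4 for finite alphabets,
# and a numerical check of (15) and (16). All logarithms are natural (nats).
import numpy as np

def kl_div(p, q):
    """Relative entropy D(P||Q), assuming q > 0 wherever p > 0."""
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

def chi2_div(p, q):
    """Chi-squared divergence chi^2(P||Q), assuming q > 0."""
    return float(np.sum((p - q) ** 2 / q))

def tv_dist(p, q):
    """Total variation distance |P - Q| as in (6)-(7), i.e., the L1 distance."""
    return float(np.sum(np.abs(p - q)))

def renyi_div(p, q, alpha):
    """Renyi divergence of order alpha in (0,1) or (1,inf)."""
    return float(np.log(np.sum(p ** alpha * q ** (1.0 - alpha))) / (alpha - 1.0))

p = np.array([0.5, 0.3, 0.2])
q = np.array([0.2, 0.5, 0.3])

# (15): D_2(P||Q) = log(1 + chi^2(P||Q))
print(renyi_div(p, q, 2.0), np.log1p(chi2_div(p, q)))
# (16): D(P||Q) <= log(1 + chi^2(P||Q))
print(kl_div(p, q), "<=", np.log1p(chi2_div(p, q)))
```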
Definition 6.
[21] The Györfi–Vajda divergence of order $s \in [0,1]$ is an f-divergence with
$$f(t) = \phi_s(t) := \frac{(t-1)^2}{s + (1-s)t}, \qquad t \ge 0. \tag{17}$$
The Vincze–Le Cam distance (also known as the triangular discrimination) [22,23] is the special case with $s = \tfrac12$.
In view of (1), (9) and (17), it can be verified that the Györfi–Vajda divergence is related to the chi-squared divergence as follows:
$$D_{\phi_s}(P \| Q) = \begin{cases} \dfrac{1}{s^2}\, \chi^2\bigl(P \,\|\, (1-s)P + sQ\bigr), & s \in (0,1], \\[4pt] \chi^2(Q \| P), & s = 0. \end{cases} \tag{18}$$
Hence,
$$\begin{aligned} D_{\phi_1}(P \| Q) &= \chi^2(P \| Q), &&(19)\\ D_{\phi_0}(P \| Q) &= \chi^2(Q \| P). &&(20)\end{aligned}$$

3. Relations between Divergences

We introduce in this section results on the relations between the relative entropy and the chi-squared divergence, their implications, and generalizations. Information–theoretic applications are studied in the next section.

3.1. Relations between the Relative Entropy and the Chi-Squared Divergence

The following result relates the relative entropy and the chi-squared divergence, which are two fundamental divergence measures in information theory and statistics. This result was recently obtained in an equivalent form in [17] [(12)] (it is noted that this identity was also independently derived by the coauthors in two separate unpublished works in [24] [(16)] and [25]). It should be noted that these connections between divergences in the quantum setting were originally discovered in [19] [Theorem 6]. Beyond serving as an interesting relation between these two fundamental divergence measures, it is introduced here for the following reasons:
(a)
New consequences and applications of it are obtained, including new shorter proofs of some known results;
(b)
An interesting extension provides new relations between f-divergences (see Section 3.3).
Theorem 1.
Let $P$ and $Q$ be probability measures defined on a measurable space $(\mathcal{X}, \mathscr{F})$, and let
$$R_\lambda := (1-\lambda) P + \lambda Q, \qquad \lambda \in [0,1] \tag{21}$$
be the convex combination of $P$ and $Q$. Then, for all $\lambda \in [0,1]$,
$$\frac{1}{\log e}\, D(P \| R_\lambda) = \int_0^\lambda \chi^2(P \| R_s)\, \frac{ds}{s}, \tag{22}$$
$$\tfrac12\, \lambda^2\, \chi^2(P \| Q) = \int_0^\lambda \chi^2(R_{1-s} \| Q)\, \frac{ds}{s}. \tag{23}$$
Proof. 
See Section 5.1. □
A specialization of Theorem 1 by letting λ = 1 gives the following identities.
Corollary 1.
$$\frac{1}{\log e}\, D(P \| Q) = \int_0^1 \chi^2\bigl(P \,\|\, (1-s)P + sQ\bigr)\, \frac{ds}{s}, \tag{24}$$
$$\tfrac12\, \chi^2(P \| Q) = \int_0^1 \chi^2\bigl(sP + (1-s)Q \,\|\, Q\bigr)\, \frac{ds}{s}. \tag{25}$$
Remark 1.
The substitution $s := \frac{1}{1+t}$ transforms (24) into [26] [Equation (31)], i.e.,
$$\frac{1}{\log e}\, D(P \| Q) = \int_0^\infty \chi^2\!\left(P \,\Big\|\, \frac{tP + Q}{1+t}\right) \frac{dt}{1+t}. \tag{26}$$
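The integral identity (24) can be checked numerically. The following sketch (our own, assuming SciPy is available) evaluates the right side of (24) by quadrature for a pair of discrete distributions and compares it with the relative entropy in nats.

```python
# Numerical check of (24): D(P||Q) in nats versus the chi-squared integral.
import numpy as np
from scipy.integrate import quad

p = np.array([0.5, 0.3, 0.2])
q = np.array([0.2, 0.5, 0.3])

def kl_div(p, q):
    return float(np.sum(p * np.log(p / q)))

def chi2_div(p, q):
    return float(np.sum((p - q) ** 2 / q))

def integrand(s):
    # chi^2(P || (1-s)P + sQ) / s, the integrand on the right side of (24);
    # it tends to 0 as s -> 0, so the integral is proper.
    return chi2_div(p, (1.0 - s) * p + s * q) / s

lhs = kl_div(p, q)                  # (1/log e) D(P||Q), i.e., D(P||Q) in nats
rhs, _ = quad(integrand, 0.0, 1.0)  # quadrature over s in (0, 1)
print(lhs, rhs)                     # the two values agree up to quadrature error
```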
In view of (18) and (21), an equivalent form of (22) and (24) is given as follows:
Corollary 2.
For $s \in [0,1]$, let $\phi_s : [0,\infty) \to \mathbb{R}$ be given in (17). Then,
$$\begin{aligned} \frac{1}{\log e}\, D(P \| R_\lambda) &= \int_0^\lambda s\, D_{\phi_s}(P \| Q)\, ds, \qquad \lambda \in [0,1], &&(27)\\ \frac{1}{\log e}\, D(P \| Q) &= \int_0^1 s\, D_{\phi_s}(P \| Q)\, ds. &&(28)\end{aligned}$$
By Corollary 1, we obtain original and simple proofs of new and old f-divergence inequalities.
Proposition 1.
(f-divergence inequalities).
(a) 
Pinsker’s inequality:
$$D(P \| Q) \ge \tfrac12\, |P - Q|^2 \log e. \tag{29}$$
(b) 
$$\frac{1}{\log e}\, D(P \| Q) \le \tfrac13\, \chi^2(P \| Q) + \tfrac16\, \chi^2(Q \| P). \tag{30}$$
Furthermore, let $\{P_n\}$ be a sequence of probability measures that is defined on a measurable space $(\mathcal{X}, \mathscr{F})$, and which converges to a probability measure $P$ in the sense that
$$\lim_{n \to \infty} \operatorname*{ess\,sup} \frac{dP_n}{dP}(X) = 1, \tag{31}$$
with $X \sim P$. Then, (30) is locally tight in the sense that both of its sides converge to 0, and
$$\lim_{n \to \infty} \frac{\tfrac13\, \chi^2(P_n \| P) + \tfrac16\, \chi^2(P \| P_n)}{\tfrac{1}{\log e}\, D(P_n \| P)} = 1. \tag{32}$$
(c) 
For all $\theta \in (0,1)$,
$$D(P \| Q) \ge (1-\theta)\, \log\!\left(\frac{1}{1-\theta}\right) D_{\phi_\theta}(P \| Q). \tag{33}$$
Moreover, under the assumption in (31), for all $\theta \in [0,1]$,
$$\lim_{n \to \infty} \frac{D(P \| P_n)}{D_{\phi_\theta}(P \| P_n)} = \tfrac12 \log e. \tag{34}$$
(d) 
[15] [Theorem 2]:
$$\frac{1}{\log e}\, D(P \| Q) \le \tfrac12\, \chi^2(P \| Q) + \tfrac14\, |P - Q|. \tag{35}$$
Proof. 
See Section 5.2. □
Remark 2.
Inequality (30) is locally tight in the sense that (31) yields (32). This property, however, is not satisfied by (16) since the assumption in (31) implies that
$$\lim_{n \to \infty} \frac{\log\bigl(1 + \chi^2(P_n \| P)\bigr)}{D(P_n \| P)} = 2. \tag{36}$$
Remark 3.
Inequality (30) readily yields
$$D(P \| Q) + D(Q \| P) \le \tfrac12 \bigl(\chi^2(P \| Q) + \chi^2(Q \| P)\bigr) \log e, \tag{37}$$
which is proved by a different approach in [27] [Proposition 4]. It is further shown in [15] [Theorem 2 b)] that
$$\sup \frac{D(P \| Q) + D(Q \| P)}{\chi^2(P \| Q) + \chi^2(Q \| P)} = \tfrac12 \log e, \tag{38}$$
where the supremum is taken over all probability measures $P \ne Q$ with $P \ll\gg Q$ (i.e., mutually absolutely continuous).

3.2. Implications of Theorem 1

We next provide two implications of Theorem 1. The first implication, which relies on the Hammersley–Chapman–Robbins (HCR) bound for the chi-squared divergence [28,29], gives the following tight lower bound on the relative entropy D ( P Q ) as a function of the means and variances under P and Q.
Theorem 2.
Let $P$ and $Q$ be probability measures defined on the measurable space $(\mathbb{R}, \mathscr{B})$, where $\mathbb{R}$ is the real line and $\mathscr{B}$ is the Borel $\sigma$-algebra of subsets of $\mathbb{R}$. Let $m_P$, $m_Q$, $\sigma_P^2$, and $\sigma_Q^2$ denote the expected values and variances of $X \sim P$ and $Y \sim Q$, i.e.,
$$\mathbb{E}[X] =: m_P, \quad \mathbb{E}[Y] =: m_Q, \quad \mathrm{Var}(X) =: \sigma_P^2, \quad \mathrm{Var}(Y) =: \sigma_Q^2. \tag{39}$$
(a) 
If $m_P \ne m_Q$, then
$$D(P \| Q) \ge d(r \| s), \tag{40}$$
where $d(r\|s) := r \log\frac{r}{s} + (1-r)\log\frac{1-r}{1-s}$, for $r, s \in [0,1]$, denotes the binary relative entropy (with the convention that $0 \log\frac{0}{0} = 0$), and
$$\begin{aligned} r &:= \frac12 + \frac{b}{4av} \in [0,1], &&(41)\\ s &:= r - \frac{a}{2v} \in [0,1], &&(42)\\ a &:= m_P - m_Q, &&(43)\\ b &:= a^2 + \sigma_Q^2 - \sigma_P^2, &&(44)\\ v &:= \sqrt{\sigma_P^2 + \frac{b^2}{4a^2}}. &&(45)\end{aligned}$$
(b) 
The lower bound on the right side of (40) is attained by $P$ and $Q$ which are defined on the two-element set $\mathcal{U} := \{u_1, u_2\}$, with
$$P(u_1) = r, \qquad Q(u_1) = s, \tag{46}$$
where $r$ and $s$ are given in (41) and (42), respectively, and, for $m_P \ne m_Q$,
$$u_1 := m_P + \sqrt{\frac{(1-r)\,\sigma_P^2}{r}}, \qquad u_2 := m_P - \sqrt{\frac{r\,\sigma_P^2}{1-r}}. \tag{47}$$
(c) 
If $m_P = m_Q$, and $\sigma_P^2$ and $\sigma_Q^2$ are selected arbitrarily, then
$$\inf_{P, Q} D(P \| Q) = 0, \tag{48}$$
where the infimum on the left side of (48) is taken over all $P$ and $Q$ which satisfy (39).
Proof. 
See Section 5.3. □
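For concreteness, the lower bound of Theorem 2(a) is straightforward to evaluate. The sketch below (our own code; binary_kl and kl_lower_bound are illustrative names) computes $r$, $s$ from (41)–(45) and the bound $d(r\|s)$ in (40); for the setting of Example 3(i) below, the printed value is about 0.521 nats.

```python
# Sketch of the lower bound in Theorem 2(a), assuming m_p != m_q and sigma_p > 0.
import numpy as np

def binary_kl(r, s):
    """Binary relative entropy d(r||s) in nats, with 0 log 0 := 0."""
    val = 0.0
    if r > 0:
        val += r * np.log(r / s)
    if r < 1:
        val += (1 - r) * np.log((1 - r) / (1 - s))
    return float(val)

def kl_lower_bound(m_p, m_q, var_p, var_q):
    """Lower bound d(r||s) on D(P||Q) from (40)-(45)."""
    a = m_p - m_q                                   # (43)
    b = a ** 2 + var_q - var_p                      # (44)
    v = np.sqrt(var_p + b ** 2 / (4 * a ** 2))      # (45)
    r = 0.5 + b / (4 * a * v)                       # (41)
    s = r - a / (2 * v)                             # (42)
    return binary_kl(r, s), r, s

print(kl_lower_bound(45, 40, 20, 20))   # about 0.521 nats, as in Example 3(i)
```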
Remark 4.
Consider the case of the non-equal means in Items (a) and (b) of Theorem 2. If these means are fixed, then the infimum of $D(P\|Q)$ is zero by choosing arbitrarily large equal variances. Suppose now that the non-equal means $m_P$ and $m_Q$ are fixed, as well as one of the variances (either $\sigma_P^2$ or $\sigma_Q^2$). Numerical experimentation shows that, in this case, the achievable lower bound in (40) is monotonically decreasing as a function of the other variance, and it tends to zero as we let the free variance tend to infinity. This asymptotic convergence to zero can be justified by assuming, for example, that $m_P$, $m_Q$, and $\sigma_Q^2$ are fixed, and $m_P > m_Q$ (the other cases can be justified in a similar way). Then, it can be verified from (41)–(45) that
$$r = \frac{(m_P - m_Q)^2}{\sigma_P^2} + O\!\left(\frac{1}{\sigma_P^4}\right), \qquad s = O\!\left(\frac{1}{\sigma_P^4}\right), \tag{49}$$
which implies that $d(r\|s) \to 0$ as we let $\sigma_P \to \infty$. The infimum of the relative entropy $D(P\|Q)$ is therefore equal to zero since the probability measures $P$ and $Q$ in (46) and (47), which are defined on a two-element set and attain the lower bound on the relative entropy under the constraints in (39), have a vanishing relative entropy in this asymptotic case.
Remark 5.
The proof of Item (c) in Theorem 2 suggests explicit constructions of sequences of pairs of probability measures $\{(P_n, Q_n)\}$ such that
(a) 
The means under P n and Q n are both equal to m (independently of n);
(b) 
The variance under P n is equal to σ P 2 , and the variance under Q n is equal to σ Q 2 (independently of n);
(c) 
The relative entropy D ( P n Q n ) vanishes as we let n .
This yields in particular (48).
A second consequence of Theorem 1 gives the following result. Its first part holds due to the concavity of $\exp\bigl(-D(P \| \cdot)\bigr)$ (see [30] [Problem 4.2]). The second part is new, and its proof relies on Theorem 1. As an educational note, we provide an alternative proof of the first part by relying on Theorem 1.
Theorem 3.
Let $P \ll Q$, and let $F : [0,1] \to [0,\infty)$ be given by
$$F(\lambda) := D\bigl(P \,\|\, (1-\lambda)P + \lambda Q\bigr), \qquad \lambda \in [0,1]. \tag{50}$$
Then, for all $\lambda \in [0,1]$,
$$F(\lambda) \le \log\!\left(\frac{1}{1 - \lambda + \lambda \exp\bigl(-D(P\|Q)\bigr)}\right), \tag{51}$$
with equality if $\lambda = 0$ or $\lambda = 1$. Moreover, $F$ is monotonically increasing, differentiable, and it satisfies
$$F'(\lambda) \ge \frac{1}{\lambda}\bigl(\exp(F(\lambda)) - 1\bigr)\log e, \qquad \lambda \in (0,1], \tag{52}$$
$$\lim_{\lambda \to 0^+} \frac{F'(\lambda)}{\lambda} = \chi^2(Q \| P)\, \log e, \tag{53}$$
so the limit in (53) is twice as large as the value of the lower bound on this limit as it follows from the right side of (52).
Proof. 
See Section 5.4. □
Remark 6.
By the convexity of the relative entropy, it follows that $F(\lambda) \le \lambda D(P\|Q)$ for all $\lambda \in [0,1]$. It can be verified, however, that the inequality $1 - \lambda + \lambda \exp(-x) \ge \exp(-\lambda x)$ holds for all $x \ge 0$ and $\lambda \in [0,1]$. Letting $x := D(P\|Q)$ implies that the upper bound on $F(\lambda)$ on the right side of (51) is tighter than or equal to the upper bound $\lambda D(P\|Q)$ (with an equality if and only if either $\lambda \in \{0,1\}$ or $P \equiv Q$).
Corollary 3.
Let $\{P_j\}_{j=1}^m$, with $m \in \mathbb{N}$, be probability measures defined on a measurable space $(\mathcal{X}, \mathscr{F})$, and let $\{\alpha_j\}_{j=1}^m$ be a sequence of non-negative numbers that sum to 1. Then, for all $i \in \{1, \ldots, m\}$,
$$D\Bigl(P_i \,\Big\|\, \sum_{j=1}^m \alpha_j P_j\Bigr) \le \log\frac{1}{\alpha_i + (1-\alpha_i)\exp\!\left(-\dfrac{1}{1-\alpha_i}\displaystyle\sum_{j \ne i} \alpha_j D(P_i \| P_j)\right)}. \tag{54}$$
Proof. 
For an arbitrary $i \in \{1, \ldots, m\}$, apply the upper bound on the right side of (51) with $\lambda := 1 - \alpha_i$, $P := P_i$, and $Q := \frac{1}{1-\alpha_i}\sum_{j \ne i} \alpha_j P_j$. The right side of (54) is obtained from (51) by invoking the convexity of the relative entropy, which gives $D(P_i \| Q) \le \frac{1}{1-\alpha_i}\sum_{j \ne i} \alpha_j D(P_i \| P_j)$. □
The next result provides an upper bound on the non-negative difference between the entropy of a convex combination of distributions and the respective convex combination of the individual entropies (it is also termed as the concavity deficit of the entropy function in [17] [Section 3]).
Corollary 4.
Let $\{P_j\}_{j=1}^m$, with $m \in \mathbb{N}$, be probability measures defined on a measurable space $(\mathcal{X}, \mathscr{F})$, and let $\{\alpha_j\}_{j=1}^m$ be a sequence of non-negative numbers that sum to 1. Then,
$$0 \le H\Bigl(\sum_{j=1}^m \alpha_j P_j\Bigr) - \sum_{j=1}^m \alpha_j H(P_j) \le \sum_{i=1}^m \alpha_i \log\frac{1}{\alpha_i + (1-\alpha_i)\exp\!\left(-\frac{1}{1-\alpha_i}\sum_{j \ne i} \alpha_j D(P_i \| P_j)\right)}. \tag{55}$$
Proof. 
The lower bound holds due to the concavity of the entropy function. The upper bound readily follows from Corollary 3, and the identity
$$H\Bigl(\sum_{j=1}^m \alpha_j P_j\Bigr) - \sum_{j=1}^m \alpha_j H(P_j) = \sum_{i=1}^m \alpha_i\, D\Bigl(P_i \,\Big\|\, \sum_{j=1}^m \alpha_j P_j\Bigr). \tag{56}$$
 □
Remark 7.
The upper bound in (55) refines the known bound (see, e.g., [31] [Lemma 2.2])
$$H\Bigl(\sum_{j=1}^m \alpha_j P_j\Bigr) - \sum_{j=1}^m \alpha_j H(P_j) \le \sum_{j=1}^m \alpha_j \log\frac{1}{\alpha_j} = H(\underline{\alpha}), \tag{57}$$
by relying on all the $\tfrac12 m(m-1)$ pairwise relative entropies between the individual distributions $\{P_j\}_{j=1}^m$. Another refinement of (57), expressed in terms of total variation distances, has been recently provided in [17] [Theorem 3.1].
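The following sketch (our own, with illustrative distributions) compares the exact concavity deficit in (56) with the refined upper bound (55) and the looser bound $H(\underline{\alpha})$ in (57).

```python
# Concavity deficit of the entropy: exact value (56), bound (55), and bound (57).
import numpy as np

def kl_div(p, q):
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

def entropy(p):
    mask = p > 0
    return float(-np.sum(p[mask] * np.log(p[mask])))

P = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.5, 0.3],
              [0.1, 0.2, 0.7]])           # rows are P_1, P_2, P_3
alpha = np.array([0.5, 0.3, 0.2])

mix = alpha @ P                           # mixture sum_j alpha_j P_j
deficit = entropy(mix) - alpha @ np.array([entropy(p) for p in P])   # (56)

bound_55 = sum(
    -alpha[i] * np.log(
        alpha[i] + (1 - alpha[i]) * np.exp(
            -sum(alpha[j] * kl_div(P[i], P[j]) for j in range(len(alpha)) if j != i)
            / (1 - alpha[i])
        )
    )
    for i in range(len(alpha))
)
bound_57 = entropy(alpha)                 # H(alpha), the looser bound (57)
print(deficit, bound_55, bound_57)        # deficit <= bound_55 <= bound_57
```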

3.3. Monotonic Sequences of f-Divergences and an Extension of Theorem 1

The present subsection generalizes Theorem 1, and it also provides relations between f-divergences which are defined in a recursive way.
Theorem 4.
Let $P$ and $Q$ be probability measures defined on a measurable space $(\mathcal{X}, \mathscr{F})$. Let $R_\lambda$, for $\lambda \in [0,1]$, be the convex combination of $P$ and $Q$ as in (21). Let $f_0 : (0,\infty) \to \mathbb{R}$ be a convex function with $f_0(1) = 0$, and let $\{f_k(\cdot)\}_{k=0}^\infty$ be a sequence of functions that are defined on $(0,\infty)$ by the recursive equation
$$f_{k+1}(x) := \int_0^{1-x} f_k(1-s)\, \frac{ds}{s}, \qquad x > 0, \quad k \in \{0, 1, \ldots\}. \tag{58}$$
Then,
(a) 
$\bigl\{D_{f_k}(P \| Q)\bigr\}_{k=0}^\infty$ is a non-increasing (and non-negative) sequence of f-divergences.
(b) 
For all $\lambda \in [0,1]$ and $k \in \{0, 1, \ldots\}$,
$$D_{f_{k+1}}(R_\lambda \| P) = \int_0^\lambda D_{f_k}(R_s \| P)\, \frac{ds}{s}. \tag{59}$$
Proof. 
See Section 5.5. □
We next use the polylogarithm functions, which satisfy the recursive equation [32] [Equation (7.2)]:
$$\mathrm{Li}_k(x) := \begin{cases} \dfrac{x}{1-x}, & k = 0, \\[4pt] \displaystyle\int_0^x \frac{\mathrm{Li}_{k-1}(s)}{s}\, ds, & k \ge 1. \end{cases} \tag{60}$$
This gives $\mathrm{Li}_1(x) = -\log_{\mathrm{e}}(1-x)$, $\mathrm{Li}_2(x) = -\int_0^x \frac{\log_{\mathrm{e}}(1-s)}{s}\, ds$, and so on, which are real-valued and finite for $x < 1$.
Corollary 5.
Let
$$f_k(x) := \mathrm{Li}_k(1-x), \qquad x > 0, \quad k \in \{0, 1, \ldots\}. \tag{61}$$
Then, (59) holds for all λ [ 0 , 1 ] and k { 0 , 1 , } . Furthermore, setting k = 0 in (59) yields (22) as a special case.
Proof. 
See Section 5.6. □

3.4. On Probabilities and f-Divergences

The following result relates probabilities of sets to f-divergences.
Theorem 5.
Let $(\mathcal{X}, \mathscr{F}, \mu)$ be a probability space, and let $\mathcal{C} \in \mathscr{F}$ be a measurable set with $\mu(\mathcal{C}) > 0$. Define the conditional probability measure
$$\mu_{\mathcal{C}}(E) := \frac{\mu(\mathcal{C} \cap E)}{\mu(\mathcal{C})}, \qquad E \in \mathscr{F}. \tag{62}$$
Let $f : (0,\infty) \to \mathbb{R}$ be an arbitrary convex function with $f(1) = 0$, and assume (by a continuous extension of $f$ at zero) that $f(0) := \lim_{t \to 0^+} f(t) < \infty$. Furthermore, let $\widetilde{f} : (0,\infty) \to \mathbb{R}$ be the convex function which is given by
$$\widetilde{f}(t) := t\, f\!\left(\frac{1}{t}\right), \qquad t > 0. \tag{63}$$
Then,
$$D_f(\mu_{\mathcal{C}} \| \mu) = \widetilde{f}\bigl(\mu(\mathcal{C})\bigr) + \bigl(1 - \mu(\mathcal{C})\bigr) f(0). \tag{64}$$
Proof. 
See Section 5.7. □
Connections of probabilities to the relative entropy, and to the chi-squared divergence, are next exemplified as special cases of Theorem 5.
Corollary 6.
In the setting of Theorem 5,
$$D(\mu_{\mathcal{C}} \| \mu) = \log\frac{1}{\mu(\mathcal{C})}, \tag{65}$$
$$\chi^2(\mu_{\mathcal{C}} \| \mu) = \frac{1}{\mu(\mathcal{C})} - 1, \tag{66}$$
so (16) is satisfied in this case with equality. More generally, for all $\alpha \in (0,\infty)$,
$$D_\alpha(\mu_{\mathcal{C}} \| \mu) = \log\frac{1}{\mu(\mathcal{C})}. \tag{67}$$
Proof. 
See Section 5.7. □
Remark 8.
In spite of its simplicity, (65) proved very useful in the seminal work by Marton on transportation–cost inequalities, proving concentration of measures by information-theoretic tools [33,34] (see also [35] [Chapter 8] and [36] [Chapter 3]). As a side note, the simple identity (65) was apparently first explicitly used by Csiszár (see [37] [Equation (4.13)]).

4. Applications

This section provides applications of our results in Section 3. These include universal lossless compression, method of types and large deviations, and strong data–processing inequalities (SDPIs).

4.1. Application of Corollary 3: Shannon Code for Universal Lossless Compression

Consider m > 1 discrete, memoryless, and stationary sources with probability mass functions { P i } i = 1 m , and assume that the symbols are emitted by one of these sources with an a priori probability α i for source no. i, where { α i } i = 1 m are positive and sum to 1.
For lossless data compression by a universal source code, suppose that a single source code is designed with respect to the average probability mass function P : = j = 1 m α j P j .
Assume that the designer uses a Shannon code, where the codeword assigned to a symbol $x \in \mathcal{X}$ has length $\ell(x) = \bigl\lceil \log\frac{1}{P(x)} \bigr\rceil$ bits (logarithms are on base 2). Due to the mismatch in the source distribution, the average codeword length $\ell_{\mathrm{avg}}$ satisfies (see [38] [Proposition 3.B])
$$\sum_{i=1}^m \alpha_i H(P_i) + \sum_{i=1}^m \alpha_i D(P_i \| P) \;\le\; \ell_{\mathrm{avg}} \;\le\; \sum_{i=1}^m \alpha_i H(P_i) + \sum_{i=1}^m \alpha_i D(P_i \| P) + 1. \tag{68}$$
The fractional penalty in the average codeword length, denoted by $\nu$, is defined as the ratio of the penalty in the average codeword length due to the source mismatch to the average codeword length in the case of a perfect matching. From (68), it follows that
$$\frac{\sum_{i=1}^m \alpha_i D(P_i \| P)}{1 + \sum_{i=1}^m \alpha_i H(P_i)} \;\le\; \nu \;\le\; \frac{1 + \sum_{i=1}^m \alpha_i D(P_i \| P)}{\sum_{i=1}^m \alpha_i H(P_i)}. \tag{69}$$
We next rely on Corollary 3 to obtain an upper bound on $\nu$ which is expressed as a function of the $m(m-1)$ relative entropies $D(P_i \| P_j)$ for all $i \ne j$ in $\{1, \ldots, m\}$. This is useful if, e.g., the $m$ relative entropies on the left and right sides of (69) do not admit closed-form expressions, in contrast to the $m(m-1)$ relative entropies $D(P_i \| P_j)$ for $i \ne j$. We next exemplify this case.
For i { 1 , , m } , let P i be a Poisson distribution with parameter λ i > 0 . For all i , j { 1 , , m } , the relative entropy from P i to P j admits the closed-form expression
$$D(P_i \| P_j) = \lambda_i \log\frac{\lambda_i}{\lambda_j} + (\lambda_j - \lambda_i) \log e. \tag{70}$$
From (54) and (70), it follows that
$$D(P_i \| P) \le \log\frac{1}{\alpha_i + (1-\alpha_i)\exp\!\left(-\dfrac{f_i(\underline{\alpha}, \underline{\lambda})}{1-\alpha_i}\right)}, \tag{71}$$
where
$$\begin{aligned} f_i(\underline{\alpha}, \underline{\lambda}) &:= \sum_{j \ne i} \alpha_j D(P_i \| P_j) &&(72)\\ &= \sum_{j \ne i} \alpha_j \Bigl(\lambda_i \log\frac{\lambda_i}{\lambda_j} + (\lambda_j - \lambda_i) \log e\Bigr). &&(73)\end{aligned}$$
The entropy of a Poisson distribution, with parameter λ i , is given by the integral representation [39,40,41]
$$H(P_i) = \lambda_i \log\frac{\mathrm{e}}{\lambda_i} + \left[\int_0^\infty \left(\lambda_i - \frac{1 - \mathrm{e}^{-\lambda_i(1-\mathrm{e}^{-u})}}{1 - \mathrm{e}^{-u}}\right) \frac{\mathrm{e}^{-u}}{u}\, du\right] \log \mathrm{e}. \tag{74}$$
Combining (69), (71) and (74) finally gives an upper bound on ν in the considered setup.
Example 1.
Consider five discrete memoryless sources where the probability mass function of source no. $i$ is given by $P_i = \mathrm{Poisson}(\lambda_i)$ with $\underline{\lambda} = [16, 20, 24, 28, 32]$. Suppose that the symbols are emitted from one of the sources with equal probability, so $\underline{\alpha} = \bigl[\tfrac15, \tfrac15, \tfrac15, \tfrac15, \tfrac15\bigr]$. Let $P := \tfrac15 (P_1 + \cdots + P_5)$ be the average probability mass function of the five sources. The term $\sum_i \alpha_i D(P_i \| P)$, which appears in the numerators of the upper and lower bounds on $\nu$ (see (69)), does not lend itself to a closed-form expression, and it is not even an easy task to calculate it numerically due to the need to compute an infinite series which involves factorials. We therefore apply the closed-form upper bound in (71) to get that $\sum_i \alpha_i D(P_i \| P) \le 1.46$ bits, whereas the upper bound which follows from the convexity of the relative entropy (i.e., $\sum_i \alpha_i f_i(\underline{\alpha}, \underline{\lambda})$) is equal to 1.99 bits (both upper bounds are smaller than the trivial bound $\log_2 5 \approx 2.32$ bits). From (69), (74), and the stronger upper bound on $\sum_i \alpha_i D(P_i \| P)$, the improved upper bound on $\nu$ is equal to 57.0% (as compared to a looser upper bound of 69.3%, which follows from (69), (74), and the looser upper bound on $\sum_i \alpha_i D(P_i \| P)$ that is equal to 1.99 bits).
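The numbers in Example 1 can be reproduced along the following lines (our own sketch, in bits, assuming SciPy is available for the quadrature in (74)).

```python
# Bounds of Example 1: pairwise Poisson relative entropies (70), per-source bound (71),
# and the Poisson entropy via the integral representation (74). Everything is in bits.
import numpy as np
from scipy.integrate import quad

lam = np.array([16.0, 20.0, 24.0, 28.0, 32.0])
alpha = np.full(5, 0.2)
LOG2E = np.log2(np.e)

def kl_poisson(li, lj):
    """D(Poisson(li) || Poisson(lj)) in bits; see (70)."""
    return li * np.log2(li / lj) + (lj - li) * LOG2E

def poisson_entropy(lam_i):
    """H(Poisson(lam_i)) in bits via the integral representation (74)."""
    integrand = lambda u: (lam_i - (1 - np.exp(-lam_i * (1 - np.exp(-u))))
                           / (1 - np.exp(-u))) * np.exp(-u) / u
    integral, _ = quad(integrand, 0.0, np.inf)
    return lam_i * np.log2(np.e / lam_i) + integral * LOG2E

# Upper bound (71) on D(P_i || P), and the looser convexity bound f_i itself
f = np.array([sum(alpha[j] * kl_poisson(lam[i], lam[j])
                  for j in range(5) if j != i) for i in range(5)])
bound71 = -np.log2(alpha + (1 - alpha) * np.exp2(-f / (1 - alpha)))

H = np.array([poisson_entropy(l) for l in lam])
num = float(alpha @ bound71)              # upper bound on sum_i alpha_i D(P_i || P)
print(num, float(alpha @ f))              # about 1.46 and 1.99 bits, as in Example 1
print((1 + num) / float(alpha @ H))       # upper bound on nu, about 0.57
```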

4.2. Application of Theorem 2 in the Context of the Method of Types and Large Deviations Theory

Let $X^n = (X_1, \ldots, X_n)$ be a sequence of i.i.d. random variables with $X_1 \sim Q$, where $Q$ is a probability measure defined on a finite set $\mathcal{X}$, and $Q(x) > 0$ for all $x \in \mathcal{X}$. Let $\mathcal{P}$ be a set of probability measures on $\mathcal{X}$ such that $Q \notin \mathcal{P}$, and suppose that the closure of $\mathcal{P}$ coincides with the closure of its interior. Then, by Sanov's theorem (see, e.g., [42] [Theorem 11.4.1] and [43] [Theorem 3.3]), the probability that the empirical distribution $\hat{P}_{X^n}$ belongs to $\mathcal{P}$ vanishes exponentially at the rate
$$\lim_{n \to \infty} \frac{1}{n} \log\frac{1}{\mathbb{P}\bigl[\hat{P}_{X^n} \in \mathcal{P}\bigr]} = \inf_{P \in \mathcal{P}} D(P \| Q). \tag{75}$$
Furthermore, for finite $n$, the method of types yields the following upper bound on the probability of this rare event:
$$\begin{aligned} \mathbb{P}\bigl[\hat{P}_{X^n} \in \mathcal{P}\bigr] &\le \binom{n + |\mathcal{X}| - 1}{|\mathcal{X}| - 1} \exp\Bigl(-n \inf_{P \in \mathcal{P}} D(P \| Q)\Bigr) &&(76)\\ &\le (n+1)^{|\mathcal{X}| - 1} \exp\Bigl(-n \inf_{P \in \mathcal{P}} D(P \| Q)\Bigr), &&(77)\end{aligned}$$
whose exponential decay rate coincides with the exact asymptotic result in (75).
Suppose that $Q$ is not fully known, but its mean $m_Q$ and variance $\sigma_Q^2$ are available. Let $m_1 \in \mathbb{R}$ and $\delta_1, \varepsilon_1, \sigma_1 > 0$ be fixed, and let $\mathcal{P}$ be the set of all probability measures $P$, defined on the finite set $\mathcal{X}$, with mean $m_P \in [m_1 - \delta_1, m_1 + \delta_1]$ and variance $\sigma_P^2 \in [\sigma_1^2 - \varepsilon_1, \sigma_1^2 + \varepsilon_1]$, where $|m_1 - m_Q| > \delta_1$. Hence, $\mathcal{P}$ coincides with the closure of its interior, and $Q \notin \mathcal{P}$.
The lower bound on the relative entropy in Theorem 2, used in conjunction with the upper bound in (77), can serve to obtain an upper bound on the probability of the event that the empirical distribution of X n belongs to the set P , regardless of the uncertainty in Q. This gives
$$\mathbb{P}\bigl[\hat{P}_{X^n} \in \mathcal{P}\bigr] \le (n+1)^{|\mathcal{X}| - 1} \exp(-n\, d^*), \tag{78}$$
where
$$d^* := \inf_{m_P,\, \sigma_P^2} d(r \| s), \tag{79}$$
and, for fixed ( m P , m Q , σ P 2 , σ Q 2 ) , the parameters r and s are given in (41) and (42), respectively.
Standard algebraic manipulations that rely on (78) lead to the following result, which is expressed as a function of the Lambert–W function [44]. This function, which finds applications in various engineering and scientific fields, is a standard built–in function in mathematical software tools such as Mathematica, Matlab, and Maple. Applications of the Lambert–W function in information theory and coding are briefly surveyed in [45].
Proposition 2.
For $\varepsilon \in (0,1)$, let $n^* := n^*(\varepsilon)$ denote the minimal value of $n \in \mathbb{N}$ such that the upper bound on the right side of (78) does not exceed $\varepsilon$. Then, $n^*$ admits the following closed-form expression:
$$n^* = \max\left\{\left\lceil -\frac{(|\mathcal{X}| - 1)\, W_{-1}(\eta)\, \log \mathrm{e}}{d^*} - 1\right\rceil,\; 1\right\}, \tag{80}$$
with
$$\eta := -\frac{d^*\, \bigl(\varepsilon \exp(-d^*)\bigr)^{1/(|\mathcal{X}| - 1)}}{(|\mathcal{X}| - 1) \log \mathrm{e}} \;\in\; \left[-\tfrac{1}{\mathrm{e}}, 0\right), \tag{81}$$
where $W_{-1}(\cdot)$ on the right side of (80) denotes the secondary real-valued branch of the Lambert W function (i.e., $x := W_{-1}(y)$, where $W_{-1} : [-\tfrac{1}{\mathrm{e}}, 0) \to (-\infty, -1]$ is the inverse function of $y := x\, \mathrm{e}^x$ on that branch).
Example 2.
Let $Q$ be an arbitrary probability measure, defined on a finite set $\mathcal{X}$, with mean $m_Q = 40$ and variance $\sigma_Q^2 = 20$. Let $\mathcal{P}$ be the set of all probability measures $P$, defined on $\mathcal{X}$, whose mean $m_P$ and variance $\sigma_P^2$ lie in the intervals $[43, 47]$ and $[18, 22]$, respectively. Suppose that it is required that, for all probability measures $Q$ as above, the probability that the empirical distribution of the i.i.d. sequence $X^n \sim Q^n$ belongs to the set $\mathcal{P}$ is at most $\varepsilon = 10^{-10}$. We rely here on the upper bound in (78), and impose the stronger condition that it should not exceed $\varepsilon$. By this approach, it is obtained numerically from (79) that $d^* = 0.203$ nats. We next examine two cases:
(i)
If $|\mathcal{X}| = 2$, then it follows from (80) that $n^* = 138$.
(ii)
Consider a richer alphabet size of the i.i.d. samples where, e.g., $|\mathcal{X}| = 100$. By relying on the same universal lower bound $d^*$, which holds independently of the value of $|\mathcal{X}|$ ($\mathcal{X}$ can possibly be an infinite set), it follows from (80) that $n^* = 4170$ is the minimal value such that the upper bound in (78) does not exceed $10^{-10}$ (a short numerical sketch of (80) is given below).
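The closed-form expression (80) is easy to evaluate with SciPy's Lambert W implementation. In the sketch below (our own; n_star is an illustrative name), we use the rounded value $d^* = 0.203$ nats quoted in Example 2, which reproduces $n^* = 138$ for $|\mathcal{X}| = 2$; the $|\mathcal{X}| = 100$ case is sensitive to the rounding of $d^*$.

```python
# Sketch of (80)-(81) using scipy's Lambert W; working in nats, so log e = 1.
import numpy as np
from scipy.special import lambertw

def n_star(d_star, alphabet_size, eps):
    """Minimal n such that (n+1)^(|X|-1) * exp(-n d*) <= eps, via (80)-(81)."""
    c = alphabet_size - 1
    eta = -d_star * (eps * np.exp(-d_star)) ** (1.0 / c) / c   # (81)
    w = np.real(lambertw(eta, k=-1))                           # secondary branch W_{-1}
    return max(int(np.ceil(-c * w / d_star - 1.0)), 1)         # (80)

print(n_star(0.203, 2, 1e-10))     # 138, matching Example 2(i)
print(n_star(0.203, 100, 1e-10))   # about 4.2e3; Example 2(ii) reports 4170 with the unrounded d*
```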
We close this discussion by providing numerical experimentation of the lower bound on the relative entropy in Theorem 2, and comparing this attainable lower bound (see Item b) of Theorem 2) with the following closed-form expressions for relative entropies:
(a)
The relative entropy between real-valued Gaussian distributions is given by
$$D\bigl(\mathcal{N}(m_P, \sigma_P^2) \,\|\, \mathcal{N}(m_Q, \sigma_Q^2)\bigr) = \log\frac{\sigma_Q}{\sigma_P} + \frac12\left(\frac{(m_P - m_Q)^2 + \sigma_P^2}{\sigma_Q^2} - 1\right) \log \mathrm{e}. \tag{82}$$
(b)
Let $E_\mu$ denote a random variable which is exponentially distributed with mean $\mu > 0$; its probability density function is given by
$$e_\mu(x) = \frac{1}{\mu}\, \mathrm{e}^{-x/\mu}\, \mathbb{1}\{x \ge 0\}. \tag{83}$$
Then, for $a_1, a_2 > 0$ and $d_1, d_2 \in \mathbb{R}$,
$$D(E_{a_1} + d_1 \,\|\, E_{a_2} + d_2) = \begin{cases} \log\dfrac{a_2}{a_1} + \dfrac{d_1 + a_1 - d_2 - a_2}{a_2}\, \log \mathrm{e}, & d_1 \ge d_2, \\[4pt] \infty, & d_1 < d_2. \end{cases} \tag{84}$$
In this case, the means under $P$ and $Q$ are $m_P = d_1 + a_1$ and $m_Q = d_2 + a_2$, respectively, and the variances are $\sigma_P^2 = a_1^2$ and $\sigma_Q^2 = a_2^2$. Hence, to obtain the required means and variances, set
$$a_1 = \sigma_P, \quad a_2 = \sigma_Q, \quad d_1 = m_P - \sigma_P, \quad d_2 = m_Q - \sigma_Q. \tag{85}$$
Example 3.
We compare numerically the attainable lower bound on the relative entropy, as given in (40), with the two relative entropies in (82) and (84):
(i)
If $(m_P, m_Q, \sigma_P^2, \sigma_Q^2) = (45, 40, 20, 20)$, then the lower bound in (40) is equal to 0.521 nats, and the two relative entropies in (82) and (84) are equal to 0.625 and 1.118 nats, respectively.
(ii)
If $(m_P, m_Q, \sigma_P^2, \sigma_Q^2) = (50, 35, 10, 20)$, then the lower bound in (40) is equal to 2.332 nats, and the two relative entropies in (82) and (84) are equal to 5.722 and 3.701 nats, respectively (a short numerical check follows).
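The closed-form relative entropies (82) and (84)–(85) used in Example 3 can be evaluated as follows (our own sketch, in nats).

```python
# Closed-form relative entropies of Example 3, in nats.
import numpy as np

def kl_gauss(m_p, m_q, var_p, var_q):
    """(82): D(N(m_p, var_p) || N(m_q, var_q))."""
    return 0.5 * np.log(var_q / var_p) + 0.5 * ((m_p - m_q) ** 2 + var_p) / var_q - 0.5

def kl_shifted_exp(m_p, m_q, var_p, var_q):
    """(84)-(85): D between shifted exponentials with the given means/variances."""
    a1, a2 = np.sqrt(var_p), np.sqrt(var_q)
    d1, d2 = m_p - a1, m_q - a2
    if d1 < d2:
        return np.inf
    return np.log(a2 / a1) + (d1 + a1 - d2 - a2) / a2

print(kl_gauss(45, 40, 20, 20), kl_shifted_exp(45, 40, 20, 20))   # about 0.625 and 1.118
print(kl_gauss(50, 35, 10, 20), kl_shifted_exp(50, 35, 10, 20))   # about 5.722 and 3.701
```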

4.3. Strong Data–Processing Inequalities and Maximal Correlation

The information contraction is a fundamental concept in information theory. The contraction of f-divergences through channels is captured by data–processing inequalities, which can be further tightened by the derivation of SDPIs with channel-dependent or source-channel dependent contraction coefficients (see, e.g., [26,46,47,48,49,50,51,52]).
We next provide necessary definitions which are relevant for the presentation in this subsection.
Definition 7.
Let $Q_X$ be a probability distribution which is defined on a set $\mathcal{X}$, and that is not a point mass, and let $W_{Y|X} : \mathcal{X} \to \mathcal{Y}$ be a stochastic transformation. The contraction coefficient for f-divergences is defined as
$$\mu_f(Q_X, W_{Y|X}) := \sup_{P_X:\, D_f(P_X \| Q_X) \in (0,\infty)} \frac{D_f(P_Y \| Q_Y)}{D_f(P_X \| Q_X)}, \tag{86}$$
where, for all $y \in \mathcal{Y}$,
$$P_Y(y) = (P_X W_{Y|X})(y) := \int_{\mathcal{X}} dP_X(x)\, W_{Y|X}(y|x), \tag{87}$$
$$Q_Y(y) = (Q_X W_{Y|X})(y) := \int_{\mathcal{X}} dQ_X(x)\, W_{Y|X}(y|x). \tag{88}$$
The notation in (87) and (88) is consistent with the standard notation used in information theory (see, e.g., the first displayed equation after (3.2) in [53]).
The derivation of good upper bounds on contraction coefficients for f-divergences, which are strictly smaller than 1, leads to SDPIs. These inequalities find their applications, e.g., in studying the exponential convergence rate of an irreducible, time-homogeneous, and reversible discrete-time Markov chain to its unique invariant distribution over its state space (see, e.g., [49] [Section 2.4.3] and [50] [Section 2]). This is in sharp contrast to DPIs, which by themselves do not yield convergence to stationarity at any rate. We return to this point later in this subsection, and determine the exact convergence rate to stationarity under two parametric families of f-divergences.
We next rely on Theorem 1 to obtain upper bounds on the contraction coefficients for the following f-divergences.
Definition 8.
For α ( 0 , 1 ] , the α-skew K-divergence is given by
$$K_\alpha(P \| Q) := D\bigl(P \,\|\, (1-\alpha)P + \alpha Q\bigr), \tag{89}$$
and, for $\alpha \in [0,1]$, let
$$\begin{aligned} S_\alpha(P \| Q) &:= \alpha\, D\bigl(P \,\|\, (1-\alpha)P + \alpha Q\bigr) + (1-\alpha)\, D\bigl(Q \,\|\, (1-\alpha)P + \alpha Q\bigr) &&(90)\\ &= \alpha\, K_\alpha(P \| Q) + (1-\alpha)\, K_{1-\alpha}(Q \| P), &&(91)\end{aligned}$$
with the convention that $K_0(P\|Q) \equiv 0$ (by a continuous extension at $\alpha = 0$ in (89)). These divergence measures specialize to relative entropies:
$$K_1(P \| Q) = D(P \| Q) = S_1(P \| Q), \qquad S_0(P \| Q) = D(Q \| P), \tag{92}$$
and $S_{1/2}(P \| Q)$ is the Jensen–Shannon divergence [54,55,56] (also known as the capacitory discrimination [57]):
$$\begin{aligned} S_{1/2}(P \| Q) &= \tfrac12\, D\bigl(P \,\|\, \tfrac12(P+Q)\bigr) + \tfrac12\, D\bigl(Q \,\|\, \tfrac12(P+Q)\bigr) &&(93)\\ &= H\bigl(\tfrac12(P+Q)\bigr) - \tfrac12 H(P) - \tfrac12 H(Q) =: \mathrm{JS}(P \| Q). &&(94)\end{aligned}$$
It can be verified that the divergence measures in (89) and (90) are f-divergences:
$$K_\alpha(P \| Q) = D_{k_\alpha}(P \| Q), \qquad \alpha \in (0,1], \tag{95}$$
$$S_\alpha(P \| Q) = D_{s_\alpha}(P \| Q), \qquad \alpha \in [0,1], \tag{96}$$
with
$$\begin{aligned} k_\alpha(t) &:= t \log t - t \log\bigl(\alpha + (1-\alpha) t\bigr), \qquad t > 0, \ \alpha \in (0,1], &&(97)\\ s_\alpha(t) &:= \alpha\, t \log t - \bigl(\alpha t + 1 - \alpha\bigr) \log\bigl(\alpha + (1-\alpha) t\bigr) &&(98)\\ &= \alpha\, k_\alpha(t) + (1-\alpha)\, t\, k_{1-\alpha}\!\left(\frac{1}{t}\right), \qquad t > 0, \ \alpha \in [0,1], &&(99)\end{aligned}$$
where k α ( · ) and s α ( · ) are strictly convex functions on ( 0 , ) , and vanish at 1.
Remark 9.
The α-skew K-divergence in (89) is considered in [55] and [58] [(13)] (including pointers in the latter paper to its utility). The divergence in (90) is akin to Lin’s measure in [55] [(4.1)], the asymmetric α-skew Jensen–Shannon divergence in [58] [(11)–(12)], the symmetric α-skew Jensen–Shannon divergence in [58] [(16)], and divergence measures in [59] which involve arithmetic and geometric means of two probability distributions. Properties and applications of quantum skew divergences are studied in [19] and references therein.
Theorem 6.
The f-divergences in (89) and (90) satisfy the following integral identities, which are expressed in terms of the Györfi–Vajda divergence in (17):
$$\frac{1}{\log e}\, K_\alpha(P \| Q) = \int_0^\alpha s\, D_{\phi_s}(P \| Q)\, ds, \qquad \alpha \in (0,1], \tag{100}$$
$$\frac{1}{\log e}\, S_\alpha(P \| Q) = \int_0^1 g_\alpha(s)\, D_{\phi_s}(P \| Q)\, ds, \qquad \alpha \in [0,1], \tag{101}$$
with
$$g_\alpha(s) := \alpha s\, \mathbb{1}\{s \in (0,\alpha]\} + (1-\alpha)(1-s)\, \mathbb{1}\{s \in [\alpha, 1)\}, \qquad (\alpha, s) \in [0,1] \times [0,1]. \tag{102}$$
Moreover, the contraction coefficients for these f-divergences are related as follows:
$$\mu_{\chi^2}(Q_X, W_{Y|X}) \le \mu_{k_\alpha}(Q_X, W_{Y|X}) \le \sup_{s \in (0,\alpha]} \mu_{\phi_s}(Q_X, W_{Y|X}), \qquad \alpha \in (0,1], \tag{103}$$
$$\mu_{\chi^2}(Q_X, W_{Y|X}) \le \mu_{s_\alpha}(Q_X, W_{Y|X}) \le \sup_{s \in (0,1)} \mu_{\phi_s}(Q_X, W_{Y|X}), \qquad \alpha \in [0,1], \tag{104}$$
where μ χ 2 ( Q X , W Y | X ) denotes the contraction coefficient for the chi-squared divergence.
Proof. 
See Section 5.8. □
Remark 10.
The upper bounds on the contraction coefficients for the parametric f-divergences in (89) and (90) generalize the upper bound on the contraction coefficient for the relative entropy in [51] [Theorem III.6] (recall that K 1 ( P Q ) = D ( P Q ) = S 1 ( P Q ) ), so the upper bounds in Theorem 6 are specialized to the latter bound at α = 1 .
Corollary 7.
Let
$$\mu_{\chi^2}(W_{Y|X}) := \sup_{Q_X} \mu_{\chi^2}(Q_X, W_{Y|X}), \tag{105}$$
where the supremum on the right side is over all probability measures $Q_X$ defined on $\mathcal{X}$. Then,
$$\mu_{\chi^2}(Q_X, W_{Y|X}) \le \mu_{k_\alpha}(Q_X, W_{Y|X}) \le \mu_{\chi^2}(W_{Y|X}), \qquad \alpha \in (0,1], \tag{106}$$
$$\mu_{\chi^2}(Q_X, W_{Y|X}) \le \mu_{s_\alpha}(Q_X, W_{Y|X}) \le \mu_{\chi^2}(W_{Y|X}), \qquad \alpha \in [0,1]. \tag{107}$$
Proof. 
See Section 5.9. □
Example 4.
Let $Q_X = \mathrm{Bernoulli}\bigl(\tfrac12\bigr)$, and let $W_{Y|X}$ correspond to a binary symmetric channel (BSC) with crossover probability $\varepsilon$. Then, $\mu_{\chi^2}(Q_X, W_{Y|X}) = \mu_{\chi^2}(W_{Y|X}) = (1-2\varepsilon)^2$. The upper and lower bounds on $\mu_{k_\alpha}(Q_X, W_{Y|X})$ and $\mu_{s_\alpha}(Q_X, W_{Y|X})$ in (106) and (107) match for all $\alpha$, and they are all equal to $(1-2\varepsilon)^2$.
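The value $\mu_{\chi^2}(Q_X, W_{Y|X})$ in Example 4 can be computed numerically. The sketch below (our own, not from the paper) uses the standard characterization of the chi-squared contraction coefficient as the squared second-largest singular value of the matrix $\mathrm{diag}(\sqrt{Q_X})\, W\, \mathrm{diag}(1/\sqrt{Q_Y})$, which, for a BSC with a uniform input, equals $(1-2\varepsilon)^2$.

```python
# Chi-squared contraction coefficient of a BSC with uniform input (Example 4).
import numpy as np

eps = 0.11
W = np.array([[1 - eps, eps],
              [eps, 1 - eps]])            # BSC transition matrix, rows sum to 1
q_x = np.array([0.5, 0.5])                # uniform input distribution
q_y = q_x @ W

B = np.diag(np.sqrt(q_x)) @ W @ np.diag(1.0 / np.sqrt(q_y))
singular_values = np.linalg.svd(B, compute_uv=False)   # the largest one is always 1
mu_chi2 = singular_values[1] ** 2
print(mu_chi2, (1 - 2 * eps) ** 2)        # both equal (1 - 2*eps)^2
```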
The upper bound on the contraction coefficients in Corollary 7 is given by μ χ 2 ( W Y | X ) , whereas the lower bound is given by μ χ 2 ( Q X , W Y | X ) , which depends on the input distribution Q X . We next provide alternative upper bounds on the contraction coefficients for the considered (parametric) f-divergences, which, similarly to the lower bound, scale like μ χ 2 ( Q X , W Y | X ) . Although the upper bound in Corollary 7 may be tighter in some cases than the alternative upper bounds which are next presented in Proposition 3 (and in fact, the former upper bound may be even achieved with equality as in Example 4), the bounds in Proposition 3 are used shortly to determine the exponential rate of the convergence to stationarity of a type of Markov chains.
Proposition 3.
For all α ( 0 , 1 ] ,
$$\begin{aligned} \mu_{\chi^2}(Q_X, W_{Y|X}) \le \mu_{k_\alpha}(Q_X, W_{Y|X}) &\le \frac{\mu_{\chi^2}(Q_X, W_{Y|X})}{\alpha\, Q_{\min}}, &&(108)\\ \mu_{\chi^2}(Q_X, W_{Y|X}) \le \mu_{s_\alpha}(Q_X, W_{Y|X}) &\le \frac{(1-\alpha)\log_{\mathrm{e}}\frac{1}{\alpha} + 2\alpha - 1}{(1 - 3\alpha + 3\alpha^2)\, Q_{\min}}\; \mu_{\chi^2}(Q_X, W_{Y|X}), &&(109)\end{aligned}$$
where Q min denotes the minimal positive mass of the input distribution Q X .
Proof. 
See Section 5.10. □
Remark 11.
In view of (92), at α = 1 , (108) and (109) specialize to an upper bound on the contraction coefficient of the relative entropy (KL divergence) as a function of the contraction coefficient of the chi-squared divergence. In this special case, both (108) and (109) give
$$\mu_{\chi^2}(Q_X, W_{Y|X}) \le \mu_{\mathrm{KL}}(Q_X, W_{Y|X}) \le \frac{\mu_{\chi^2}(Q_X, W_{Y|X})}{Q_{\min}}, \tag{110}$$
which then coincides with [48] [Theorem 10].
We next apply Proposition 3 to study the convergence rate to stationarity of Markov chains with respect to the f-divergences introduced in Definition 8. The next result follows from [49] [Section 2.4.3], and it provides a generalization of the result there.
Theorem 7.
Consider a time-homogeneous, irreducible, and reversible discrete-time Markov chain with a finite state space $\mathcal{X}$, let $\mathbf{W}$ be its probability transition matrix, and let $Q_X$ be its unique stationary distribution (reversibility means that $Q_X(x)\,[\mathbf{W}]_{x,y} = Q_X(y)\,[\mathbf{W}]_{y,x}$ for all $x, y \in \mathcal{X}$). Let $P_X$ be an initial probability distribution over $\mathcal{X}$. Then, for all $\alpha \in (0,1]$ and $n \in \mathbb{N}$,
$$K_\alpha(P_X \mathbf{W}^n \| Q_X) \le \mu_{k_\alpha}(Q_X, \mathbf{W}^n)\, K_\alpha(P_X \| Q_X), \tag{111}$$
$$S_\alpha(P_X \mathbf{W}^n \| Q_X) \le \mu_{s_\alpha}(Q_X, \mathbf{W}^n)\, S_\alpha(P_X \| Q_X), \tag{112}$$
and the contraction coefficients on the right sides of (111) and (112) scale like the $n$-th power of the contraction coefficient for the chi-squared divergence as follows:
$$\mu_{\chi^2}(Q_X, \mathbf{W})^n \le \mu_{k_\alpha}(Q_X, \mathbf{W}^n) \le \frac{\mu_{\chi^2}(Q_X, \mathbf{W})^n}{\alpha\, Q_{\min}}, \tag{113}$$
$$\mu_{\chi^2}(Q_X, \mathbf{W})^n \le \mu_{s_\alpha}(Q_X, \mathbf{W}^n) \le \frac{(1-\alpha)\log_{\mathrm{e}}\frac{1}{\alpha} + 2\alpha - 1}{(1 - 3\alpha + 3\alpha^2)\, Q_{\min}}\; \mu_{\chi^2}(Q_X, \mathbf{W})^n. \tag{114}$$
Proof. 
Inequalities (111) and (112) hold since $Q_X \mathbf{W}^n = Q_X$ for all $n \in \mathbb{N}$, and due to Definition 7 together with (95) and (96). Inequalities (113) and (114) hold by Proposition 3, and due to the reversibility of the Markov chain, which implies that (see [49] [Equation (2.92)])
$$\mu_{\chi^2}(Q_X, \mathbf{W}^n) = \mu_{\chi^2}(Q_X, \mathbf{W})^n, \qquad n \in \mathbb{N}. \tag{115}$$
 □
In view of (113) and (114), Theorem 7 readily gives the following result on the exponential decay rate of the upper bounds on the divergences on the left sides of (111) and (112).
Corollary 8.
For all α ( 0 , 1 ] ,
$$\lim_{n \to \infty} \mu_{k_\alpha}(Q_X, \mathbf{W}^n)^{1/n} = \mu_{\chi^2}(Q_X, \mathbf{W}) = \lim_{n \to \infty} \mu_{s_\alpha}(Q_X, \mathbf{W}^n)^{1/n}. \tag{116}$$
Remark 12.
Theorem 7 and Corollary 8 generalize the results in [49] [Section 2.4.3], which follow as a special case at α = 1 (see (92)).
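As an illustration of Theorem 7 and Corollary 8 (our own construction, not from the paper), the sketch below builds a reversible chain from symmetric edge weights and verifies the identity (115) numerically; the chi-squared contraction coefficient is computed as the squared second-largest singular value of $\mathrm{diag}(\sqrt{Q})\, K\, \mathrm{diag}(1/\sqrt{QK})$.

```python
# Reversible random walk on a weighted graph; check of (115) for a small n.
import numpy as np

weights = np.array([[2.0, 1.0, 1.0],
                    [1.0, 3.0, 2.0],
                    [1.0, 2.0, 4.0]])               # symmetric => reversible chain
W = weights / weights.sum(axis=1, keepdims=True)    # transition matrix
q = weights.sum(axis=1) / weights.sum()             # its stationary distribution

def mu_chi2(Q, K):
    """Chi-squared contraction coefficient of kernel K at input distribution Q."""
    B = np.diag(np.sqrt(Q)) @ K @ np.diag(1.0 / np.sqrt(Q @ K))
    return np.linalg.svd(B, compute_uv=False)[1] ** 2

n = 3
print(mu_chi2(q, W) ** n, mu_chi2(q, np.linalg.matrix_power(W, n)))  # agree, as in (115)
```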
We end this subsection by considering maximal correlations, which are closely related to the contraction coefficient for the chi-squared divergence.
Definition 9.
The maximal correlation between two random variables X and Y is defined as
$$\rho_m(X; Y) := \sup_{f, g} \mathbb{E}\bigl[f(X)\, g(Y)\bigr], \tag{117}$$
where the supremum is taken over all real-valued functions $f$ and $g$ such that
$$\mathbb{E}[f(X)] = \mathbb{E}[g(Y)] = 0, \qquad \mathbb{E}[f^2(X)] \le 1, \quad \mathbb{E}[g^2(Y)] \le 1. \tag{118}$$
It is well known [60] that, if $X \sim Q_X$ and $Y \sim Q_Y = Q_X W_{Y|X}$, then the contraction coefficient for the chi-squared divergence $\mu_{\chi^2}(Q_X, W_{Y|X})$ is equal to the square of the maximal correlation between the random variables $X$ and $Y$, i.e.,
$$\rho_m(X; Y) = \sqrt{\mu_{\chi^2}(Q_X, W_{Y|X})}. \tag{119}$$
A simple application of Corollary 1 and (119) gives the following result.
Proposition 4.
In the setting of Definition 7, for $s \in [0,1]$, let $X_s \sim (1-s) P_X + s Q_X$ and $Y_s \sim (1-s) P_Y + s Q_Y$, with $P_X \ne Q_X$ and $P_X \ll\gg Q_X$. Then, the following inequality holds:
$$\sup_{s \in [0,1]} \rho_m(X_s; Y_s) \ge \max\left\{\sqrt{\frac{D(P_Y \| Q_Y)}{D(P_X \| Q_X)}},\; \sqrt{\frac{D(Q_Y \| P_Y)}{D(Q_X \| P_X)}}\right\}. \tag{120}$$
Proof. 
See Section 5.11. □
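The identity (119) can be checked numerically for finite alphabets: the maximal correlation is the second-largest singular value of the matrix with entries $P_{XY}(x,y)/\sqrt{Q_X(x) Q_Y(y)}$, a standard fact used in the sketch below (our own code).

```python
# Maximal correlation and the chi-squared contraction coefficient, per (119).
import numpy as np

q_x = np.array([0.3, 0.7])
W = np.array([[0.9, 0.1],
              [0.2, 0.8]])                 # W_{Y|X}, rows indexed by x
p_xy = q_x[:, None] * W                    # joint distribution of (X, Y)
q_y = p_xy.sum(axis=0)

B = p_xy / np.sqrt(np.outer(q_x, q_y))
rho_m = np.linalg.svd(B, compute_uv=False)[1]   # maximal correlation rho_m(X;Y)
print(rho_m, rho_m ** 2)                        # rho_m and mu_chi2(Q_X, W_{Y|X})
```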

5. Proofs

This section provides proofs of the results in Section 3 and Section 4.

5.1. Proof of Theorem 1

Proof of (22): We rely on an integral representation of the logarithm function (on base e ):
$$\log_{\mathrm{e}} x = \int_0^1 \frac{x - 1}{x + (1-x)v}\, dv, \qquad x > 0. \tag{121}$$
Let $\mu$ be a dominating measure of $P$ and $Q$ (i.e., $P, Q \ll \mu$), and let $p := \frac{dP}{d\mu}$, $q := \frac{dQ}{d\mu}$, and
$$r_\lambda := \frac{dR_\lambda}{d\mu} = (1-\lambda) p + \lambda q, \qquad \lambda \in [0,1], \tag{122}$$
where the last equality is due to (21). For all λ [ 0 , 1 ] ,
$$\begin{aligned} \frac{1}{\log e}\, D(P \| R_\lambda) &= \int p\, \log_{\mathrm{e}} \frac{p}{r_\lambda}\, d\mu &&(123)\\ &= \int_0^1 \!\int \frac{p\,(p - r_\lambda)}{p + v(r_\lambda - p)}\, d\mu\, dv, &&(124)\end{aligned}$$
where (124) holds due to (121) with $x := \frac{p}{r_\lambda}$, and by swapping the order of integration. The inner integral on the right side of (124) satisfies, for all $v \in (0,1]$,
$$\begin{aligned} \int \frac{p\,(p - r_\lambda)}{p + v(r_\lambda - p)}\, d\mu &= \int (p - r_\lambda)\left(1 + \frac{v\,(p - r_\lambda)}{p + v(r_\lambda - p)}\right) d\mu &&(125)\\ &= \int (p - r_\lambda)\, d\mu + v \int \frac{(p - r_\lambda)^2}{p + v(r_\lambda - p)}\, d\mu &&(126)\\ &= v \int \frac{(p - r_\lambda)^2}{(1-v) p + v r_\lambda}\, d\mu &&(127)\\ &= \frac{1}{v} \int \frac{\bigl(p - [(1-v)p + v r_\lambda]\bigr)^2}{(1-v) p + v r_\lambda}\, d\mu &&(128)\\ &= \frac{1}{v}\, \chi^2\bigl(P \,\|\, (1-v) P + v R_\lambda\bigr), &&(129)\end{aligned}$$
where (127) holds since $\int p\, d\mu = 1$ and $\int r_\lambda\, d\mu = 1$. From (21), for all $(\lambda, v) \in [0,1] \times [0,1]$,
$$(1-v) P + v R_\lambda = (1 - \lambda v) P + \lambda v\, Q = R_{\lambda v}. \tag{130}$$
Substituting (130) into the right side of (129) gives that, for all $(\lambda, v) \in [0,1] \times (0,1]$,
$$\int \frac{p\,(p - r_\lambda)}{p + v(r_\lambda - p)}\, d\mu = \frac{1}{v}\, \chi^2(P \| R_{\lambda v}). \tag{131}$$
Finally, substituting (131) into the right side of (124) gives that, for all λ ( 0 , 1 ] ,
$$\begin{aligned} \frac{1}{\log e}\, D(P \| R_\lambda) &= \int_0^1 \frac{1}{v}\, \chi^2(P \| R_{\lambda v})\, dv &&(132)\\ &= \int_0^\lambda \frac{1}{s}\, \chi^2(P \| R_s)\, ds, &&(133)\end{aligned}$$
where (133) holds by the transformation $s := \lambda v$. Equality (133) also holds for $\lambda = 0$ since $D(P \| R_0) = D(P \| P) = 0$.
Proof of (23): For all s ( 0 , 1 ] ,
$$\begin{aligned} \chi^2(P \| Q) &= \int \frac{(p-q)^2}{q}\, d\mu &&\\ &= \frac{1}{s^2} \int \frac{\bigl(sp + (1-s)q - q\bigr)^2}{q}\, d\mu &&(134)\\ &= \frac{1}{s^2} \int \frac{(r_{1-s} - q)^2}{q}\, d\mu &&(135)\\ &= \frac{1}{s^2}\, \chi^2(R_{1-s} \| Q), &&(136)\end{aligned}$$
where (135) holds due to (122). From (136), it follows that, for all $\lambda \in [0,1]$,
$$\int_0^\lambda \frac{1}{s}\, \chi^2(R_{1-s} \| Q)\, ds = \chi^2(P \| Q) \int_0^\lambda s\, ds = \tfrac12\, \lambda^2\, \chi^2(P \| Q). \tag{137}$$

5.2. Proof of Proposition 1

(a)
Simple Proof of Pinsker’s Inequality: By [61] or [62] [(58)],
$$\chi^2(P \| Q) \ge \begin{cases} |P - Q|^2, & \text{if } |P - Q| \in [0,1], \\[2pt] \dfrac{|P - Q|}{2 - |P - Q|}, & \text{if } |P - Q| \in (1,2]. \end{cases} \tag{138}$$
We only need the weaker inequality $\chi^2(P\|Q) \ge |P-Q|^2$, which is proved by the Cauchy–Schwarz inequality:
$$\begin{aligned} \chi^2(P \| Q) &= \int \frac{(p-q)^2}{q}\, d\mu \int q\, d\mu &&(139)\\ &\ge \left(\int \frac{|p-q|}{\sqrt{q}} \cdot \sqrt{q}\, d\mu\right)^{2} &&(140)\\ &= |P - Q|^2. &&(141)\end{aligned}$$
By combining (24) and (139)–(141), it follows that
$$\begin{aligned} \frac{1}{\log e}\, D(P \| Q) &= \int_0^1 \chi^2\bigl(P \,\|\, (1-s)P + sQ\bigr)\, \frac{ds}{s} &&(142)\\ &\ge \int_0^1 \bigl|P - [(1-s)P + sQ]\bigr|^2\, \frac{ds}{s} &&(143)\\ &= \int_0^1 s\, |P - Q|^2\, ds &&(144)\\ &= \tfrac12\, |P - Q|^2. &&(145)\end{aligned}$$
(b)
Proof of (30) and its local tightness:
$$\begin{aligned} \frac{1}{\log e}\, D(P \| Q) &= \int_0^1 \chi^2\bigl(P \,\|\, (1-s)P + sQ\bigr)\, \frac{ds}{s} &&(146)\\ &= \int_0^1 \int \frac{\bigl(p - (1-s)p - sq\bigr)^2}{(1-s)p + sq}\, d\mu\, \frac{ds}{s} &&(147)\\ &= \int_0^1 \int \frac{s\,(p-q)^2}{(1-s)p + sq}\, d\mu\, ds &&(148)\\ &\le \int_0^1 \int s\,(p-q)^2 \left(\frac{1-s}{p} + \frac{s}{q}\right) d\mu\, ds &&(149)\\ &= \int_0^1 s^2\, ds \int \frac{(p-q)^2}{q}\, d\mu + \int_0^1 s(1-s)\, ds \int \frac{(p-q)^2}{p}\, d\mu &&(150)\\ &= \tfrac13\, \chi^2(P \| Q) + \tfrac16\, \chi^2(Q \| P), &&(151)\end{aligned}$$
where (146) is (24), and (149) holds due to Jensen’s inequality and the convexity of the hyperbola.
We next show the local tightness of inequality (30) by proving that (31) yields (32). Let { P n } be a sequence of probability measures, defined on a measurable space ( X , F ) , and assume that { P n } converges to a probability measure P in the sense that (31) holds. In view of [16] [Theorem 7] (see also [15] [Section 4.F] and [63]), it follows that
$$\lim_{n \to \infty} D(P_n \| P) = \lim_{n \to \infty} \chi^2(P_n \| P) = 0, \tag{152}$$
and
$$\lim_{n \to \infty} \frac{D(P_n \| P)}{\chi^2(P_n \| P)} = \tfrac12 \log e, \qquad \lim_{n \to \infty} \frac{\chi^2(P_n \| P)}{\chi^2(P \| P_n)} = 1, \tag{153, 154}$$
which therefore yields (32).
(c)
Proof of (33) and (34): The proof of (33) relies on (28) and the following lemma.
Lemma 1. 
For all s , θ ( 0 , 1 ) ,
$$D_{\phi_s}(P \| Q) \ge \min\left\{\frac{1-\theta}{1-s},\; \frac{\theta}{s}\right\} D_{\phi_\theta}(P \| Q). \tag{155}$$
Proof. 
$$\begin{aligned} D_{\phi_s}(P \| Q) &= \int \frac{(p-q)^2}{(1-s)p + sq}\, d\mu &&(156)\\ &= \int \frac{(p-q)^2}{(1-\theta)p + \theta q} \cdot \frac{(1-\theta)p + \theta q}{(1-s)p + sq}\, d\mu &&(157)\\ &\ge \min\left\{\frac{1-\theta}{1-s}, \frac{\theta}{s}\right\} \int \frac{(p-q)^2}{(1-\theta)p + \theta q}\, d\mu &&(158)\\ &= \min\left\{\frac{1-\theta}{1-s}, \frac{\theta}{s}\right\} D_{\phi_\theta}(P \| Q). &&(159)\end{aligned}$$
 □
From (28) and (155), for all θ ( 0 , 1 ) ,
$$\begin{aligned} \frac{1}{\log e}\, D(P \| Q) &= \int_0^\theta s\, D_{\phi_s}(P \| Q)\, ds + \int_\theta^1 s\, D_{\phi_s}(P \| Q)\, ds &&(160)\\ &\ge \int_0^\theta \frac{s(1-\theta)}{1-s}\, D_{\phi_\theta}(P \| Q)\, ds + \int_\theta^1 \theta\, D_{\phi_\theta}(P \| Q)\, ds &&(161)\\ &= \left(\log_{\mathrm{e}}\frac{1}{1-\theta} - \theta\right)(1-\theta)\, D_{\phi_\theta}(P \| Q) + \theta(1-\theta)\, D_{\phi_\theta}(P \| Q) &&(162)\\ &= (1-\theta)\, \log_{\mathrm{e}}\!\left(\frac{1}{1-\theta}\right) D_{\phi_\theta}(P \| Q). &&(163)\end{aligned}$$
This proves (33). Furthermore, under the assumption in (31), for all θ [ 0 , 1 ] ,
$$\begin{aligned} \lim_{n \to \infty} \frac{D(P \| P_n)}{D_{\phi_\theta}(P \| P_n)} &= \lim_{n \to \infty} \frac{D(P \| P_n)}{\chi^2(P \| P_n)} \cdot \lim_{n \to \infty} \frac{\chi^2(P \| P_n)}{D_{\phi_\theta}(P \| P_n)} &&(164)\\ &= \tfrac12 \log e \cdot \frac{2}{\phi_\theta''(1)} &&(165)\\ &= \tfrac12 \log e, &&(166)\end{aligned}$$
where (165) holds due to (153) and the local behavior of f-divergences [63], and (166) holds due to (17), which implies that $\phi_\theta''(1) = 2$ for all $\theta \in [0,1]$. This proves (34).
(d)
Proof of (35): From (24), we get
$$\begin{aligned} \frac{1}{\log e}\, D(P \| Q) &= \int_0^1 \chi^2\bigl(P \,\|\, (1-s)P + sQ\bigr)\, \frac{ds}{s} &&(167)\\ &= \int_0^1 \Bigl[\chi^2\bigl(P \,\|\, (1-s)P + sQ\bigr) - s^2\, \chi^2(P \| Q)\Bigr] \frac{ds}{s} + \int_0^1 s\, ds\; \chi^2(P \| Q) &&(168)\\ &= \int_0^1 \Bigl[\chi^2\bigl(P \,\|\, (1-s)P + sQ\bigr) - s^2\, \chi^2(P \| Q)\Bigr] \frac{ds}{s} + \tfrac12\, \chi^2(P \| Q). &&(169)\end{aligned}$$
Referring to the integrand of the first term on the right side of (169), for all s ( 0 , 1 ] ,
$$\begin{aligned} \frac{1}{s}\Bigl[\chi^2\bigl(P \,\|\, (1-s)P + sQ\bigr) - s^2\, \chi^2(P \| Q)\Bigr] &= s \int (p-q)^2 \left(\frac{1}{(1-s)p + sq} - \frac{1}{q}\right) d\mu &&(170)\\ &= s(1-s) \int \frac{(q-p)^3}{q\,\bigl[(1-s)p + sq\bigr]}\, d\mu &&(171)\\ &= s(1-s) \int |q-p| \cdot \frac{|q-p|}{q} \cdot \frac{q-p}{p + s(q-p)}\, d\mu &&(172)\\ &\le (1-s) \int (q-p)\, \mathbb{1}\{q \ge p\}\, d\mu &&(173)\\ &= \tfrac12\, (1-s)\, |P - Q|, &&(174)\end{aligned}$$
where the last equality holds since $\int (q-p)\, d\mu = 0$ implies that
$$\int (q-p)\, \mathbb{1}\{q \ge p\}\, d\mu = \int (p-q)\, \mathbb{1}\{p \ge q\}\, d\mu = \tfrac12 \int |p-q|\, d\mu = \tfrac12\, |P - Q|. \tag{175, 176}$$
From (170)–(174), an upper bound on the right side of (169) results. This gives
$$\begin{aligned} \frac{1}{\log e}\, D(P \| Q) &\le \tfrac12 \int_0^1 (1-s)\, ds\; |P - Q| + \tfrac12\, \chi^2(P \| Q) &&(177)\\ &= \tfrac14\, |P - Q| + \tfrac12\, \chi^2(P \| Q). &&(178)\end{aligned}$$
It should be noted that [15] [Theorem 2(a)] shows that inequality (35) is tight. To that end, let $\varepsilon \in (0,1)$, and define probability measures $P_\varepsilon$ and $Q_\varepsilon$ on the set $\mathcal{A} = \{0,1\}$ with $P_\varepsilon(1) = \varepsilon^2$ and $Q_\varepsilon(1) = \varepsilon$. Then,
$$\lim_{\varepsilon \to 0} \frac{\frac{1}{\log e}\, D(P_\varepsilon \| Q_\varepsilon)}{\tfrac14\, |P_\varepsilon - Q_\varepsilon| + \tfrac12\, \chi^2(P_\varepsilon \| Q_\varepsilon)} = 1. \tag{179}$$

5.3. Proof of Theorem 2

We first prove Item (a) in Theorem 2. In view of the Hammersley–Chapman–Robbins lower bound on the χ 2 divergence, for all λ [ 0 , 1 ]
$$\chi^2\bigl(P \,\|\, (1-\lambda)P + \lambda Q\bigr) \ge \frac{\bigl(\mathbb{E}[X] - \mathbb{E}[Z_\lambda]\bigr)^2}{\mathrm{Var}(Z_\lambda)}, \tag{180}$$
where X P , Y Q and Z λ R λ : = ( 1 λ ) P + λ Q is defined by
$$Z_\lambda := \begin{cases} X, & \text{with probability } 1-\lambda, \\ Y, & \text{with probability } \lambda. \end{cases} \tag{181}$$
For λ [ 0 , 1 ] ,
$$\mathbb{E}[Z_\lambda] = (1-\lambda)\, m_P + \lambda\, m_Q, \tag{182}$$
and it can be verified that
$$\mathrm{Var}(Z_\lambda) = (1-\lambda)\, \sigma_P^2 + \lambda\, \sigma_Q^2 + \lambda(1-\lambda)(m_P - m_Q)^2. \tag{183}$$
We now rely on identity (24)
$$\frac{1}{\log e}\, D(P \| Q) = \int_0^1 \chi^2\bigl(P \,\|\, (1-\lambda)P + \lambda Q\bigr)\, \frac{d\lambda}{\lambda} \tag{184}$$
to get a lower bound on the relative entropy. Combining (180), (183) and (184) yields
$$\frac{1}{\log e}\, D(P \| Q) \ge (m_P - m_Q)^2 \int_0^1 \frac{\lambda\, d\lambda}{(1-\lambda)\sigma_P^2 + \lambda\sigma_Q^2 + \lambda(1-\lambda)(m_P - m_Q)^2}. \tag{185}$$
From (43) and (44), we get
$$\int_0^1 \frac{\lambda\, d\lambda}{(1-\lambda)\sigma_P^2 + \lambda\sigma_Q^2 + \lambda(1-\lambda)(m_P - m_Q)^2} = \int_0^1 \frac{\lambda\, d\lambda}{(\alpha - a\lambda)(\beta + a\lambda)}, \tag{186}$$
where
$$\alpha := \sqrt{\sigma_P^2 + \frac{b^2}{4a^2}} + \frac{b}{2a}, \tag{187}$$
$$\beta := \sqrt{\sigma_P^2 + \frac{b^2}{4a^2}} - \frac{b}{2a}. \tag{188}$$
By using the partial fraction decomposition of the integrand on the right side of (186), we get (after multiplying both sides of (185) by log e )
$$\begin{aligned} D(P \| Q) &\ge \frac{(m_P - m_Q)^2}{a^2}\left(\frac{\alpha}{\alpha+\beta}\, \log\frac{\alpha}{\alpha - a} + \frac{\beta}{\alpha+\beta}\, \log\frac{\beta}{\beta + a}\right) &&(189)\\ &= \frac{\alpha}{\alpha+\beta}\, \log\frac{\alpha}{\alpha - a} + \frac{\beta}{\alpha+\beta}\, \log\frac{\beta}{\beta + a} &&(190)\\ &= d\!\left(\frac{\alpha}{\alpha+\beta}\, \middle\|\, \frac{\alpha - a}{\alpha+\beta}\right), &&(191)\end{aligned}$$
where (189) holds by integration since α a λ and β + a λ are both non-negative for all λ [ 0 , 1 ] . To verify the latter claim, it should be noted that (43) and the assumption that m P m Q imply that a 0 . Since α , β > 0 , it follows that, for all λ [ 0 , 1 ] , either α a λ > 0 or β + a λ > 0 (if a < 0 , then the former is positive, and, if a > 0 , then the latter is positive). By comparing the denominators of both integrands on the left and right sides of (186), it follows that ( α a λ ) ( β + a λ ) 0 for all λ [ 0 , 1 ] . Since the product of α a λ and β + a λ is non-negative and at least one of these terms is positive, it follows that α a λ and β + a λ are both non-negative for all λ [ 0 , 1 ] . Finally, (190) follows from (43).
If $m_P - m_Q \to 0$ and $\sigma_P \ne \sigma_Q$, then it follows from (43) and (44) that $a \to 0$ and $b \to \sigma_Q^2 - \sigma_P^2 \ne 0$. Hence, from (187) and (188), $\alpha \to \frac{b}{a}$ and $\beta \to 0$, which implies that the lower bound on $D(P\|Q)$ in (191) tends to zero.
Letting $r := \frac{\alpha}{\alpha+\beta}$ and $s := \frac{\alpha - a}{\alpha+\beta}$, we obtain the lower bound on $D(P\|Q)$ in (40). This bound is consistent with the expressions for $r$ and $s$ in (41) and (42) since, from (45), (187), and (188),
$$r = \frac{\alpha}{\alpha+\beta} = \frac{v + \frac{b}{2a}}{2v} = \frac12 + \frac{b}{4av}, \tag{192}$$
$$s = \frac{\alpha - a}{\alpha+\beta} = r - \frac{a}{\alpha+\beta} = r - \frac{a}{2v}. \tag{193}$$
It should be noted that r , s [ 0 , 1 ] . First, from (187) and (188), α and β are positive if σ P 0 , which yields r = α α + β ( 0 , 1 ) . We next show that s [ 0 , 1 ] . Recall that α a λ and β + a λ are both non-negative for all λ [ 0 , 1 ] . Setting λ = 1 yields α a , which (from (193)) implies that s 0 . Furthermore, from (193) and the positivity of α + β , it follows that s 1 if and only if β a . The latter holds since β + a λ 0 for all λ [ 0 , 1 ] (in particular, for λ = 1 ). If σ P = 0 , then it follows from (41)–(45) that v = b 2 | a | , b = a 2 + σ Q 2 , and (recall that a 0 )
(i)
If $a > 0$, then $v = \frac{b}{2a}$ implies that $r = \frac12 + \frac{b}{4av} = 1$, and $s = r - \frac{a}{2v} = 1 - \frac{a^2}{b} = \frac{\sigma_Q^2}{\sigma_Q^2 + a^2} \in [0,1]$;
(ii)
If $a < 0$, then $v = -\frac{b}{2a}$ implies that $r = 0$, and $s = r - \frac{a}{2v} = \frac{a^2}{b} = \frac{a^2}{a^2 + \sigma_Q^2} \in [0,1]$.
We next prove Item (b) in Theorem 2 (i.e., the achievability of the lower bound in (40)). To that end, we provide a technical lemma, which can be verified by the reader.
Lemma 2.
Let r , s be given in (41)–(45), and let u 1 , 2 be given in (47). Then,
$$(s - r)(u_1 - u_2) = m_Q - m_P, \tag{194}$$
$$u_1 + u_2 = m_P + m_Q + \frac{\sigma_Q^2 - \sigma_P^2}{m_Q - m_P}. \tag{195}$$
Let X P and Y Q be defined on a set U = { u 1 , u 2 } (for the moment, the values of u 1 and u 2 are not yet specified) with P [ X = u 1 ] = r , P [ X = u 2 ] = 1 r , Q [ Y = u 1 ] = s , and Q [ Y = u 2 ] = 1 s . We now calculate u 1 and u 2 such that E [ X ] = m P and Var ( X ) = σ P 2 . This is equivalent to
$$r u_1 + (1-r) u_2 = m_P, \qquad r u_1^2 + (1-r) u_2^2 = m_P^2 + \sigma_P^2. \tag{196, 197}$$
Substituting (196) into the right side of (197) gives
$$r u_1^2 + (1-r) u_2^2 = \bigl(r u_1 + (1-r) u_2\bigr)^2 + \sigma_P^2, \tag{198}$$
which, by rearranging terms, also gives
$$u_1 - u_2 = \pm\sqrt{\frac{\sigma_P^2}{r(1-r)}}. \tag{199}$$
Solving simultaneously (196) and (199) gives
$$u_1 = m_P \pm \sqrt{\frac{(1-r)\,\sigma_P^2}{r}}, \qquad u_2 = m_P \mp \sqrt{\frac{r\,\sigma_P^2}{1-r}}. \tag{200, 201}$$
We next verify that, by setting u 1 , 2 as in (47), one also gets (as desired) that E [ Y ] = m Q and Var ( Y ) = σ Q 2 . From Lemma 2, and, from (196) and (197), we have
$$\begin{aligned} \mathbb{E}[Y] &= s u_1 + (1-s) u_2 &&(202)\\ &= r u_1 + (1-r) u_2 + (s-r)(u_1 - u_2) &&(203)\\ &= m_P + (s-r)(u_1 - u_2) = m_Q, &&(204)\\ \mathbb{E}[Y^2] &= s u_1^2 + (1-s) u_2^2 &&(205)\\ &= r u_1^2 + (1-r) u_2^2 + (s-r)(u_1^2 - u_2^2) &&(206)\\ &= \mathbb{E}[X^2] + (s-r)(u_1 - u_2)(u_1 + u_2) &&(207)\\ &= m_P^2 + \sigma_P^2 + (m_Q - m_P)\left(m_P + m_Q + \frac{\sigma_Q^2 - \sigma_P^2}{m_Q - m_P}\right) &&(208)\\ &= m_Q^2 + \sigma_Q^2. &&(209)\end{aligned}$$
By combining (204) and (209), we obtain Var ( Y ) = σ Q 2 . Hence, the probability mass functions P and Q defined on U = { u 1 , u 2 } (with u 1 and u 2 in (47)) such that
$$P(u_1) = 1 - P(u_2) = r, \qquad Q(u_1) = 1 - Q(u_2) = s \tag{210}$$
satisfy the equality constraints in (39), while also achieving the lower bound on D ( P Q ) that is equal to d ( r s ) . It can be also verified that the second option where
$$u_1 = m_P - \sqrt{\frac{(1-r)\,\sigma_P^2}{r}}, \qquad u_2 = m_P + \sqrt{\frac{r\,\sigma_P^2}{1-r}}, \tag{211}$$
does not yield the satisfiability of the conditions E [ Y ] = m Q and Var ( Y ) = σ Q 2 , so there is only a unique pair of probability measures P and Q, defined on a two-element set that achieves the lower bound in (40) under the equality constraints in (39).
We finally prove Item (c) in Theorem 2. Let $m \in \mathbb{R}$, $\sigma_P^2$, and $\sigma_Q^2$ be selected arbitrarily such that $\sigma_Q^2 \ge \sigma_P^2$. We construct probability measures $P_\varepsilon$ and $Q_\varepsilon$, depending on a free parameter $\varepsilon$, with means $m_P = m_Q := m$ and variances $\sigma_P^2$ and $\sigma_Q^2$, respectively (the means and variances are independent of $\varepsilon$), and which are defined on a three-element set $\mathcal{U} := \{u_1, u_2, u_3\}$ as follows:
$$P_\varepsilon(u_1) = r, \quad P_\varepsilon(u_2) = 1 - r, \quad P_\varepsilon(u_3) = 0, \tag{212}$$
$$Q_\varepsilon(u_1) = s, \quad Q_\varepsilon(u_2) = 1 - s - \varepsilon, \quad Q_\varepsilon(u_3) = \varepsilon, \tag{213}$$
with ε > 0 . We aim to set the parameters r , s , u 1 , u 2 and u 3 (as a function of m , σ P , σ Q and ε ) such that
$$\lim_{\varepsilon \to 0^+} D(P_\varepsilon \| Q_\varepsilon) = 0. \tag{214}$$
Proving (214) yields (48), while it also follows that the infimum on the left side of (48) can be restricted to probability measures which are defined on a three-element set.
In view of the constraints on the means and variances in (39), with equal means m, we get the following set of equations from (212) and (213):
$$\begin{aligned} r u_1 + (1-r) u_2 &= m, \\ s u_1 + (1 - s - \varepsilon) u_2 + \varepsilon u_3 &= m, \\ r u_1^2 + (1-r) u_2^2 &= m^2 + \sigma_P^2, \\ s u_1^2 + (1 - s - \varepsilon) u_2^2 + \varepsilon u_3^2 &= m^2 + \sigma_Q^2. \end{aligned} \tag{215}$$
The first and second equations in (215) refer to the equal means under P and Q, and the third and fourth equations in (215) refer to the second moments in (39). Furthermore, in view of (212) and (213), the relative entropy is given by
$$D(P_\varepsilon \| Q_\varepsilon) = r \log\frac{r}{s} + (1-r) \log\frac{1-r}{1 - s - \varepsilon}. \tag{216}$$
Subtracting the square of the first equation in (215) from its third equation gives the equivalent set of equations
$$\begin{aligned}
& r u_1 + (1-r)\, u_2 = m, \\
& s u_1 + (1 - s - \varepsilon)\, u_2 + \varepsilon u_3 = m, \\
& r(1-r)(u_1 - u_2)^2 = \sigma_P^2, \\
& s u_1^2 + (1 - s - \varepsilon)\, u_2^2 + \varepsilon u_3^2 = m^2 + \sigma_Q^2.
\end{aligned} \tag{217}$$
We next select $u_1$ and $u_2$ such that $u_1 - u_2 := 2\sigma_P$. Then, the third equation in (217) gives $r(1-r) = \tfrac14$, so $r = \tfrac12$. Furthermore, the first equation in (217) gives
$$u_1 = m + \sigma_P, \tag{218}$$
$$u_2 = m - \sigma_P. \tag{219}$$
Since $r$, $u_1$, and $u_2$ are independent of $\varepsilon$, so is the probability measure $P_\varepsilon := P$. Combining the second equation in (217) with (218) and (219) gives
$$u_3 = m - \Bigl(1 + \frac{2s - 1}{\varepsilon}\Bigr)\sigma_P. \tag{220}$$
Substituting (218)–(220) into the fourth equation of (217) gives a quadratic equation for $s$, whose selected solution (such that $s$ and $r = \tfrac12$ are close for small $\varepsilon > 0$) is equal to
$$s = \frac12\biggl(1 - \varepsilon + \sqrt{\Bigl(\frac{\sigma_Q^2}{\sigma_P^2} - 1 + \varepsilon\Bigr)\varepsilon}\,\biggr). \tag{221}$$
Hence, $s = \tfrac12 + O(\sqrt{\varepsilon})$, which implies that $s \in (0, 1-\varepsilon)$ for sufficiently small $\varepsilon > 0$ (as required in (213)). In view of (216), it also follows that $D(P \| Q_\varepsilon)$ vanishes as we let $\varepsilon$ tend to zero.
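The three-point construction (212)–(221) can be probed numerically. The Python sketch below uses the expressions for $u_3$ and $s$ as reconstructed in (220)–(221); the values of $m$, $\sigma_P^2 \leq \sigma_Q^2$ and $\varepsilon$ are arbitrary test choices.

    import numpy as np

    m, var_P, var_Q, eps = 0.5, 1.0, 2.0, 1e-4    # arbitrary, with var_Q >= var_P
    sig_P = np.sqrt(var_P)
    r = 0.5
    u1, u2 = m + sig_P, m - sig_P                                    # (218)-(219)
    s = 0.5 * (1 - eps + np.sqrt(eps * (var_Q / var_P - 1 + eps)))   # (221)
    u3 = m - (1 + (2 * s - 1) / eps) * sig_P                         # (220)

    P = np.array([r, 1 - r, 0.0])          # (212)
    Q = np.array([s, 1 - s - eps, eps])    # (213)
    u = np.array([u1, u2, u3])

    # equal means m, and variances var_P, var_Q, as required by (39)
    print(np.isclose(P @ u, m), np.isclose(P @ u**2 - m**2, var_P))
    print(np.isclose(Q @ u, m), np.isclose(Q @ u**2 - m**2, var_Q))

    # D(P_eps || Q_eps) per (216), in nats; it tends to 0 with eps
    D = r * np.log(r / s) + (1 - r) * np.log((1 - r) / (1 - s - eps))
    print(D)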
We finally outline an alternative proof, which refers to the case of equal means with arbitrarily selected $\sigma_P^2$ and $\sigma_Q^2$. Let $(\sigma_P^2, \sigma_Q^2) \in (0, \infty)^2$. We next construct a sequence of pairs of probability measures $\{(P_n, Q_n)\}$ with zero mean and respective variances $(\sigma_P^2, \sigma_Q^2)$ for which $D(P_n \| Q_n) \to 0$ as $n \to \infty$ (without any loss of generality, one can assume that the equal means are equal to zero). We start by assuming $(\sigma_P^2, \sigma_Q^2) \in (1, \infty)^2$. Let
$$\mu_n := \sqrt{1 + n(\sigma_Q^2 - 1)}, \tag{222}$$
and define a sequence of quaternary real-valued random variables with probability mass functions
$$Q_n(a) := \begin{cases} \dfrac12 - \dfrac{1}{2n}, & a = \pm 1, \\[4pt] \dfrac{1}{2n}, & a = \pm \mu_n. \end{cases} \tag{223}$$
It can be verified that, for all $n \in \mathbb{N}$, $Q_n$ has zero mean and variance $\sigma_Q^2$. Furthermore, let
$$P_n(a) := \begin{cases} \dfrac12 - \dfrac{\xi}{2n}, & a = \pm 1, \\[4pt] \dfrac{\xi}{2n}, & a = \pm \mu_n, \end{cases} \tag{224}$$
with
$$\xi := \frac{\sigma_P^2 - 1}{\sigma_Q^2 - 1}. \tag{225}$$
If $\xi > 1$, then, for $n = 1, \ldots, \lceil \xi \rceil$, we choose $P_n$ arbitrarily with mean 0 and variance $\sigma_P^2$. Then,
$$\mathrm{Var}(P_n) = 1 - \frac{\xi}{n} + \frac{\xi \mu_n^2}{n} = \sigma_P^2, \tag{226}$$
$$D(P_n \| Q_n) = d\Bigl(\frac{\xi}{n} \,\Big\|\, \frac{1}{n}\Bigr) \xrightarrow[n \to \infty]{} 0. \tag{227}$$
Next, suppose that $\min\{\sigma_P^2, \sigma_Q^2\} =: \sigma^2 \leq 1$; then, construct $\widetilde{P}_n$ and $\widetilde{Q}_n$ as before with variances $\frac{2\sigma_P^2}{\sigma^2} > 1$ and $\frac{2\sigma_Q^2}{\sigma^2} > 1$, respectively. If $P_n$ and $Q_n$ denote the random variables $\widetilde{P}_n$ and $\widetilde{Q}_n$ scaled by a factor of $\frac{\sigma}{\sqrt{2}}$, then their variances are $\sigma_P^2$ and $\sigma_Q^2$, respectively, and $D(P_n \| Q_n) = D(\widetilde{P}_n \| \widetilde{Q}_n) \to 0$ as we let $n \to \infty$ (the relative entropy is invariant under a common scaling of the pair).
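A Python sketch of the quaternary construction (222)–(227), for arbitrarily chosen variances in $(1,\infty)^2$ (here $\xi < 1$, so the probability masses in (224) are valid for every $n$):

    import numpy as np

    var_P, var_Q = 3.0, 5.0                  # arbitrary test values in (1, inf)
    xi = (var_P - 1) / (var_Q - 1)           # (225)

    def d_binary(p, q):
        """Binary relative entropy d(p || q) in nats."""
        return p * np.log(p / q) + (1 - p) * np.log((1 - p) / (1 - q))

    for n in [10, 100, 1000, 10000]:
        mu_n = np.sqrt(1 + n * (var_Q - 1))                               # (222)
        atoms = np.array([1.0, -1.0, mu_n, -mu_n])
        Q_n = np.array([0.5 - 0.5 / n, 0.5 - 0.5 / n, 0.5 / n, 0.5 / n])  # (223)
        P_n = np.array([0.5 - 0.5 * xi / n] * 2 + [0.5 * xi / n] * 2)     # (224)
        assert np.isclose(P_n @ atoms, 0) and np.isclose(Q_n @ atoms, 0)  # zero means
        assert np.isclose(P_n @ atoms**2, var_P) and np.isclose(Q_n @ atoms**2, var_Q)  # (226)
        D = np.sum(P_n * np.log(P_n / Q_n))
        print(n, D, d_binary(xi / n, 1 / n))  # D(P_n || Q_n) = d(xi/n || 1/n) -> 0, cf. (227)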
To conclude, it should be noted that the sequences of probability measures in the latter proof are defined on a four-element set. Recall that, in the earlier proof, specialized to the case of equal means with $\sigma_P^2 \leq \sigma_Q^2$, the introduced probability measures are defined on a three-element set, and the reference probability measure $P$ is fixed while referring to an equiprobable binary random variable.

5.4. Proof of Theorem 3

We first prove (52). Differentiating both sides of (22) gives that, for all $\lambda \in (0, 1]$,
$$\begin{aligned}
F'(\lambda) &= \frac{1}{\lambda}\, \chi^2(P \| R_\lambda)\, \log \mathrm{e} && (228)\\
&\geq \frac{1}{\lambda}\, \bigl(\exp\bigl(D(P \| R_\lambda)\bigr) - 1\bigr) \log \mathrm{e} && (229)\\
&= \frac{1}{\lambda}\, \bigl(\exp\bigl(F(\lambda)\bigr) - 1\bigr) \log \mathrm{e}, && (230)
\end{aligned}$$
where (228) holds due to (21), (22) and (50); (229) holds by (16); and (230) is due to (21) and (50). This gives (52).
We next prove (53) and the conclusion which appears after it. In view of [16] [Theorem 8], applied to $f(t) := -\log t$ for all $t > 0$, we get (it should be noted that, by the definition of $F$ in (50), the result in [16] [(195)–(196)] is used here by swapping $P$ and $Q$)
$$\lim_{\lambda \to 0^+} \frac{F(\lambda)}{\lambda^2} = \frac12\, \chi^2(Q \| P) \log \mathrm{e}. \tag{231}$$
Since $\lim_{\lambda \to 0^+} F(\lambda) = 0$, it follows by L'Hôpital's rule that
$$\lim_{\lambda \to 0^+} \frac{F'(\lambda)}{\lambda} = 2 \lim_{\lambda \to 0^+} \frac{F(\lambda)}{\lambda^2} = \chi^2(Q \| P) \log \mathrm{e}, \tag{232}$$
which gives (53). A comparison of the limit in (53) with a lower bound which follows from (52) gives
$$\begin{aligned}
\lim_{\lambda \to 0^+} \frac{F'(\lambda)}{\lambda}
&\geq \lim_{\lambda \to 0^+} \frac{1}{\lambda^2}\bigl(\exp(F(\lambda)) - 1\bigr)\log \mathrm{e} && (233)\\
&= \lim_{\lambda \to 0^+} \frac{F(\lambda)}{\lambda^2} \cdot \lim_{\lambda \to 0^+} \frac{\exp(F(\lambda)) - 1}{F(\lambda)} \cdot \log \mathrm{e} && (234)\\
&= \lim_{\lambda \to 0^+} \frac{F(\lambda)}{\lambda^2} \cdot \lim_{u \to 0} \frac{\mathrm{e}^u - 1}{u} && (235)\\
&= \frac12\, \chi^2(Q \| P) \log \mathrm{e}, && (236)
\end{aligned}$$
where (236) relies on (231). Hence, the limit in (53) is twice as large as its lower bound on the right side of (236). This proves the conclusion which comes right after (53).
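The limits (231)–(232) are easy to probe numerically. The Python sketch below works in nats (so the $\log \mathrm{e}$ factors are absorbed) and uses $F(\lambda) := D(P \| (1-\lambda)P + \lambda Q)$, which is the reading of (21) and (50) adopted in this proof; the PMFs are randomly generated.

    import numpy as np

    rng = np.random.default_rng(0)
    P = rng.random(6); P /= P.sum()
    Q = rng.random(6); Q /= Q.sum()

    D = lambda A, B: np.sum(A * np.log(A / B))        # relative entropy (nats)
    chi2 = lambda A, B: np.sum((A - B)**2 / B)        # chi-squared divergence

    F = lambda lam: D(P, (1 - lam) * P + lam * Q)
    c = chi2(Q, P)

    for lam in [1e-1, 1e-2, 1e-3]:
        h = 0.01 * lam
        dF = (F(lam + h) - F(lam - h)) / (2 * h)      # numerical F'(lam)
        R = (1 - lam) * P + lam * Q
        print(F(lam) / lam**2, 0.5 * c,               # -> chi^2(Q||P)/2, cf. (231)
              dF / lam, c,                            # -> chi^2(Q||P),   cf. (232)
              dF, chi2(P, R) / lam)                   # the identity behind (228), in nats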
We finally prove the known result in (51) by showing an alternative proof which is based on (52). The function $F$ is non-negative on $[0,1]$, and it is strictly positive on $(0,1]$ if $P \neq Q$. Let $P \neq Q$ (otherwise, (51) is trivial). Rearranging terms in (52) and integrating both sides over the interval $[\lambda, 1]$, for $\lambda \in (0,1]$, gives that
$$\begin{aligned}
\int_\lambda^1 \frac{F'(t)}{\exp(F(t)) - 1}\, \mathrm{d}t &\geq \int_\lambda^1 \frac{\mathrm{d}t}{t}\, \log \mathrm{e} && (237)\\
&= \log\frac{1}{\lambda}, \qquad \lambda \in (0,1]. && (238)
\end{aligned}$$
The left side of (237) satisfies
$$\begin{aligned}
\int_\lambda^1 \frac{F'(t)}{\exp(F(t)) - 1}\, \mathrm{d}t
&= \int_\lambda^1 \frac{F'(t)\exp(-F(t))}{1 - \exp(-F(t))}\, \mathrm{d}t && (239)\\
&= \int_\lambda^1 \frac{\mathrm{d}}{\mathrm{d}t}\Bigl\{ \log\bigl(1 - \exp(-F(t))\bigr) \Bigr\}\, \mathrm{d}t && (240)\\
&= \log\frac{1 - \exp\bigl(-D(P\|Q)\bigr)}{1 - \exp\bigl(-F(\lambda)\bigr)}, && (241)
\end{aligned}$$
where (241) holds since $F(1) = D(P \| Q)$ (see (50)). Combining (237)–(241) gives
$$\frac{1 - \exp\bigl(-D(P\|Q)\bigr)}{1 - \exp\bigl(-F(\lambda)\bigr)} \geq \frac{1}{\lambda}, \qquad \lambda \in (0,1], \tag{242}$$
which, due to the non-negativity of $F$, gives the right-side inequality in (51) after rearranging terms in (242).

5.5. Proof of Theorem 4

Lemma 3.
Let $f_0 \colon (0, \infty) \to \mathbb{R}$ be a convex function with $f_0(1) = 0$, and let $\{f_k(\cdot)\}_{k=0}^{\infty}$ be defined as in (58). Then, $\{f_k(\cdot)\}_{k=0}^{\infty}$ is a sequence of convex functions on $(0,\infty)$, and
$$f_k(x) \geq f_{k+1}(x), \qquad x > 0, \quad k \in \{0, 1, \ldots\}. \tag{243}$$
Proof. 
We prove the convexity of $\{f_k(\cdot)\}$ on $(0,\infty)$ by induction. Suppose that $f_k(\cdot)$ is a convex function with $f_k(1) = 0$ for a fixed integer $k \geq 0$. The recursion in (58) yields $f_{k+1}(1) = 0$ and, by the change of integration variable $s' := (1-x)s$,
$$f_{k+1}(x) = \int_0^1 f_k(sx - s + 1)\, \frac{\mathrm{d}s}{s}, \qquad x > 0. \tag{244}$$
Consequently, for $t \in (0,1)$ and $x \neq y$ with $x, y > 0$, applying (244) gives
$$\begin{aligned}
f_{k+1}\bigl((1-t)x + ty\bigr) &= \int_0^1 f_k\bigl(s[(1-t)x + ty] - s + 1\bigr)\, \frac{\mathrm{d}s}{s} && (245)\\
&= \int_0^1 f_k\bigl((1-t)(sx - s + 1) + t(sy - s + 1)\bigr)\, \frac{\mathrm{d}s}{s} && (246)\\
&\leq (1-t)\int_0^1 f_k(sx - s + 1)\, \frac{\mathrm{d}s}{s} + t\int_0^1 f_k(sy - s + 1)\, \frac{\mathrm{d}s}{s} && (247)\\
&= (1-t)\, f_{k+1}(x) + t\, f_{k+1}(y), && (248)
\end{aligned}$$
where (247) holds since $f_k(\cdot)$ is convex on $(0,\infty)$ (by assumption). Hence, from (245)–(248), $f_{k+1}(\cdot)$ is also convex on $(0,\infty)$ with $f_{k+1}(1) = 0$. By mathematical induction and our assumptions on $f_0$, it follows that $\{f_k(\cdot)\}_{k=0}^{\infty}$ is a sequence of convex functions on $(0,\infty)$ which vanish at 1.
We next prove (243). For all $x, y > 0$ and $k \in \{0, 1, \ldots\}$,
$$\begin{aligned}
f_{k+1}(y) &\geq f_{k+1}(x) + f_{k+1}'(x)\,(y - x) && (249)\\
&= f_{k+1}(x) + \frac{f_k(x)}{x - 1}\,(y - x), && (250)
\end{aligned}$$
where (249) holds since $f_{k+1}(\cdot)$ is convex on $(0,\infty)$, and (250) relies on the recursive equation in (58). Substituting $y = 1$ into (249)–(250), and using the equality $f_{k+1}(1) = 0$, gives (243). □
We next prove Theorem 4. From Lemma 3, it follows that $D_{f_k}(P\|Q)$ is an $f$-divergence for all integers $k \geq 0$, and the non-negative sequence $\{D_{f_k}(P\|Q)\}_{k=0}^{\infty}$ is monotonically non-increasing. From (21) and (58), it also follows that, for all $\lambda \in [0,1]$ and integers $k \in \{0, 1, \ldots\}$,
$$\begin{aligned}
D_{f_{k+1}}(R_\lambda \| P) &= \int p\, f_{k+1}\Bigl(\frac{r_\lambda}{p}\Bigr)\, \mathrm{d}\mu && (251)\\
&= \int p \int_0^{(p-q)\lambda/p} f_k(1 - s')\, \frac{\mathrm{d}s'}{s'}\, \mathrm{d}\mu && (252)\\
&= \int p \int_0^{\lambda} f_k\Bigl(1 + \frac{(q-p)s}{p}\Bigr)\, \frac{\mathrm{d}s}{s}\, \mathrm{d}\mu && (253)\\
&= \int_0^{\lambda} \int p\, f_k\Bigl(\frac{r_s}{p}\Bigr)\, \mathrm{d}\mu\, \frac{\mathrm{d}s}{s} && (254)\\
&= \int_0^{\lambda} D_{f_k}(R_s \| P)\, \frac{\mathrm{d}s}{s}, && (255)
\end{aligned}$$
where the substitution $s' := \frac{(p-q)s}{p}$ is invoked in (253), and then (254) holds since $\frac{r_s}{p} = 1 + \frac{(q-p)s}{p}$ for $s \in [0,1]$ (this follows from (21)), and by interchanging the order of the integrations.
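A small Python sketch of the recursion, in the form $f_{k+1}(x) = \int_0^{1-x} f_k(1-s)\,\frac{\mathrm{d}s}{s}$ used above as the reading of (58): starting from $f_0(x) = \frac1x - 1$, it recovers $f_1(x) = -\log_{\mathrm e} x$ and exhibits the pointwise monotonicity (243) at a few test points.

    import numpy as np
    from scipy.integrate import quad

    def f0(x):
        return 1.0 / x - 1.0

    def next_f(f):
        """One recursion step f_{k+1}(x) = int_0^{1-x} f(1-s) ds/s (assumed form of (58))."""
        def f_next(x):
            val, _ = quad(lambda s: f(1.0 - s) / s, 0.0, 1.0 - x)
            return val
        return f_next

    f1 = next_f(f0)
    f2 = next_f(f1)   # nested quadrature; slow but fine for a few points

    for x in [0.3, 0.8, 1.5, 3.0]:
        print(x, f1(x), -np.log(x))                    # f_1(x) = -log_e(x)
        print(x, f0(x) >= f1(x) >= f2(x) - 1e-9)       # monotonicity (243), up to quad error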

5.6. Proof of Corollary 5

Combining (60) and (61) yields (58); furthermore, $f_0 \colon (0,\infty) \to \mathbb{R}$, given by $f_0(x) = \frac1x - 1$ for all $x > 0$, is convex on $(0,\infty)$ with $f_0(1) = 0$. Hence, Theorem 4 holds for the selected functions $\{f_k(\cdot)\}_{k=0}^{\infty}$ in (61), which are therefore all convex on $(0,\infty)$ and vanish at 1. This proves that (59) holds for all $\lambda \in [0,1]$ and $k \in \{0,1,\ldots\}$. Since $f_0(x) = \frac1x - 1$ and $f_1(x) = -\log_{\mathrm{e}}(x)$ for all $x > 0$ (see (60) and (61)), it follows that, for every pair of probability measures $P$ and $Q$,
$$D_{f_0}(P\|Q) = \chi^2(Q\|P), \qquad D_{f_1}(P\|Q) = \frac{1}{\log \mathrm{e}}\, D(Q\|P). \tag{256}$$
Finally, combining (59) for $k = 0$ with (256) gives (22) as a special case.
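Equality (256) is easy to confirm numerically (Python, natural logarithms; for PMFs, $D_f(P\|Q) = \sum q\, f(p/q)$):

    import numpy as np

    rng = np.random.default_rng(1)
    P = rng.random(5); P /= P.sum()
    Q = rng.random(5); Q /= Q.sum()

    Df = lambda f, A, B: np.sum(B * f(A / B))     # f-divergence for PMFs

    print(Df(lambda t: 1 / t - 1, P, Q), np.sum((Q - P)**2 / P))       # = chi^2(Q||P)
    print(Df(lambda t: -np.log(t), P, Q), np.sum(Q * np.log(Q / P)))   # = D(Q||P) in nats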

5.7. Proof of Theorem 5 and Corollary 6

For an arbitrary measurable set $\mathcal{E} \subseteq \mathcal{X}$, we have from (62)
$$\mu_C(\mathcal{E}) = \int_{\mathcal{E}} \frac{\mathbb{1}_C(x)}{\mu(C)}\, \mathrm{d}\mu(x), \tag{257}$$
where $\mathbb{1}_C \colon \mathcal{X} \to \{0,1\}$ is the indicator function of $C \subseteq \mathcal{X}$, i.e., $\mathbb{1}_C(x) := 1\{x \in C\}$ for $x \in \mathcal{X}$. Hence,
$$\frac{\mathrm{d}\mu_C}{\mathrm{d}\mu}(x) = \frac{\mathbb{1}_C(x)}{\mu(C)}, \qquad x \in \mathcal{X}, \tag{258}$$
and
$$\begin{aligned}
D_f(\mu_C \| \mu) &= \int_{\mathcal{X}} f\Bigl(\frac{\mathrm{d}\mu_C}{\mathrm{d}\mu}\Bigr)\, \mathrm{d}\mu && (259)\\
&= \int_C f\Bigl(\frac{1}{\mu(C)}\Bigr)\, \mathrm{d}\mu(x) + \int_{\mathcal{X}\setminus C} f(0)\, \mathrm{d}\mu(x) && (260)\\
&= \mu(C)\, f\Bigl(\frac{1}{\mu(C)}\Bigr) + \mu(\mathcal{X}\setminus C)\, f(0) && (261)\\
&= \widetilde{f}\bigl(\mu(C)\bigr) + \bigl(1 - \mu(C)\bigr) f(0), && (262)
\end{aligned}$$
where the last equality holds by the definition of $\widetilde{f}$ in (63). This proves Theorem 5. Corollary 6 is next proved by first proving (67) for the Rényi divergence. For all $\alpha \in (0,1) \cup (1,\infty)$,
$$\begin{aligned}
D_\alpha(\mu_C \| \mu) &= \frac{1}{\alpha - 1} \log \int_{\mathcal{X}} \Bigl(\frac{\mathrm{d}\mu_C}{\mathrm{d}\mu}\Bigr)^{\alpha} \mathrm{d}\mu && (263)\\
&= \frac{1}{\alpha - 1} \log \int_C \Bigl(\frac{1}{\mu(C)}\Bigr)^{\alpha} \mathrm{d}\mu && (264)\\
&= \frac{1}{\alpha - 1} \log \biggl( \Bigl(\frac{1}{\mu(C)}\Bigr)^{\alpha} \mu(C) \biggr) && (265)\\
&= \log\frac{1}{\mu(C)}. && (266)
\end{aligned}$$
The justification of (67) for $\alpha = 1$ is due to the continuous extension of the order-$\alpha$ Rényi divergence at $\alpha = 1$, which gives the relative entropy (see (13)). Equality (65) is obtained from (67) at $\alpha = 1$. Finally, (66) is obtained by combining (15) and (67) with $\alpha = 2$.
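The chain (263)–(266) can be verified numerically for a finite space (Python, natural logarithms; $C$ is an arbitrary subset of the support, and $\mu_C$ is the conditional measure of (62) as used in (257)–(258)):

    import numpy as np

    rng = np.random.default_rng(2)
    mu = rng.random(8); mu /= mu.sum()
    C = np.array([True, False, True, True, False, False, True, False])
    mu_C = np.where(C, mu, 0.0) / mu[C].sum()       # conditional measure, cf. (257)-(258)
    target = np.log(1.0 / mu[C].sum())              # log(1/mu(C)), cf. (266)

    def renyi(alpha, A, B):
        """Order-alpha Renyi divergence (nats), alpha != 1, supp(A) within supp(B)."""
        m = A > 0
        return np.log(np.sum(A[m]**alpha * B[m]**(1 - alpha))) / (alpha - 1)

    for alpha in [0.3, 0.7, 2.0, 5.0]:
        print(alpha, renyi(alpha, mu_C, mu), target)

    m = mu_C > 0
    print(np.sum(mu_C[m] * np.log(mu_C[m] / mu[m])), target)      # relative entropy, cf. (65)
    print(np.sum((mu_C - mu)**2 / mu), 1.0 / mu[C].sum() - 1.0)   # chi^2 = 1/mu(C) - 1, cf. (66)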

5.8. Proof of Theorem 6

Equation (100) is an equivalent form of (27). From (91) and (100), for all $\alpha \in [0,1]$,
$$\begin{aligned}
\frac{1}{\log \mathrm{e}}\, S_\alpha(P\|Q) &= \frac{\alpha}{\log \mathrm{e}}\, K_\alpha(P\|Q) + \frac{1-\alpha}{\log \mathrm{e}}\, K_{1-\alpha}(Q\|P) && (267)\\
&= \alpha \int_0^{\alpha} s\, D_{\phi_s}(P\|Q)\, \mathrm{d}s + (1-\alpha) \int_0^{1-\alpha} s\, D_{\phi_s}(Q\|P)\, \mathrm{d}s && (268)\\
&= \alpha \int_0^{\alpha} s\, D_{\phi_s}(P\|Q)\, \mathrm{d}s + (1-\alpha) \int_{\alpha}^{1} (1-s)\, D_{\phi_{1-s}}(Q\|P)\, \mathrm{d}s. && (269)
\end{aligned}$$
Regarding the integrand of the second term in (269), in view of (18), for all $s \in (0,1)$,
$$\begin{aligned}
D_{\phi_{1-s}}(Q\|P) &= \frac{1}{(1-s)^2}\, \chi^2\bigl(Q \,\|\, (1-s)P + sQ\bigr) && (270)\\
&= \frac{1}{s^2}\, \chi^2\bigl(P \,\|\, (1-s)P + sQ\bigr) && (271)\\
&= D_{\phi_s}(P\|Q), && (272)
\end{aligned}$$
where (271) readily follows from (9). Since we also have $D_{\phi_1}(P\|Q) = \chi^2(P\|Q) = D_{\phi_0}(Q\|P)$ (see (18)), it follows that
$$D_{\phi_{1-s}}(Q\|P) = D_{\phi_s}(P\|Q), \qquad s \in [0,1]. \tag{273}$$
By using this identity, we get from (269) that, for all $\alpha \in [0,1]$,
$$\begin{aligned}
\frac{1}{\log \mathrm{e}}\, S_\alpha(P\|Q) &= \alpha \int_0^{\alpha} s\, D_{\phi_s}(P\|Q)\, \mathrm{d}s + (1-\alpha) \int_{\alpha}^{1} (1-s)\, D_{\phi_s}(P\|Q)\, \mathrm{d}s && (274)\\
&= \int_0^1 g_\alpha(s)\, D_{\phi_s}(P\|Q)\, \mathrm{d}s, && (275)
\end{aligned}$$
where the function $g_\alpha \colon [0,1] \to \mathbb{R}$ is defined in (102). This proves the integral identity (101).
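The identity (273) and the representation (275) can be checked numerically. The Python sketch below (natural logarithms) relies on the readings adopted above, which are reconstructions rather than restatements of (18), (100) and (102): $D_{\phi_s}(P\|Q) = s^{-2}\chi^2(P\|(1-s)P+sQ)$, $S_\alpha(P\|Q) = \alpha D(P\|R) + (1-\alpha)D(Q\|R)$ with $R = (1-\alpha)P + \alpha Q$, and $g_\alpha(s) = \alpha s$ on $[0,\alpha]$ and $(1-\alpha)(1-s)$ on $(\alpha,1]$.

    import numpy as np
    from scipy.integrate import quad

    rng = np.random.default_rng(3)
    P = rng.random(5); P /= P.sum()
    Q = rng.random(5); Q /= Q.sum()

    chi2 = lambda A, B: np.sum((A - B)**2 / B)
    D = lambda A, B: np.sum(A * np.log(A / B))
    D_phi = lambda s, A, B: chi2(A, (1 - s) * A + s * B) / s**2    # assumed form of (18)

    for s in [0.2, 0.5, 0.9]:                                      # identity (273)
        print(D_phi(1 - s, Q, P), D_phi(s, P, Q))

    alpha = 0.35
    R = (1 - alpha) * P + alpha * Q
    S_alpha = alpha * D(P, R) + (1 - alpha) * D(Q, R)              # assumed definition of S_alpha
    g = lambda s: alpha * s if s <= alpha else (1 - alpha) * (1 - s)   # assumed form of (102)
    integral, _ = quad(lambda s: g(s) * D_phi(s, P, Q), 1e-9, 1, points=[alpha])
    print(S_alpha, integral)                                       # the two sides of (275)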
The lower bounds in (103) and (104) hold since, if $f \colon (0,\infty) \to \mathbb{R}$ is convex, twice continuously differentiable, and strictly convex at 1, then
$$\mu_{\chi^2}(Q_X, W_{Y|X}) \leq \mu_f(Q_X, W_{Y|X}) \tag{276}$$
(see, e.g., [46] [Proposition II.6.5] and [50] [Theorem 2]). Hence, this holds in particular for the $f$-divergences in (95) and (96) (since the required properties are satisfied by the parametric functions in (97) and (98), respectively). We next prove the upper bounds on the contraction coefficients in (103) and (104) by relying on (100) and (101), respectively. In the setting of Definition 7, if $P_X \neq Q_X$, then it follows from (100) that, for $\alpha \in (0,1]$,
$$\begin{aligned}
\frac{K_\alpha(P_Y \| Q_Y)}{K_\alpha(P_X \| Q_X)} &= \frac{\displaystyle\int_0^{\alpha} s\, D_{\phi_s}(P_Y\|Q_Y)\, \mathrm{d}s}{\displaystyle\int_0^{\alpha} s\, D_{\phi_s}(P_X\|Q_X)\, \mathrm{d}s} && (277)\\
&\leq \frac{\displaystyle\int_0^{\alpha} s\, \mu_{\phi_s}(Q_X, W_{Y|X})\, D_{\phi_s}(P_X\|Q_X)\, \mathrm{d}s}{\displaystyle\int_0^{\alpha} s\, D_{\phi_s}(P_X\|Q_X)\, \mathrm{d}s} && (278)\\
&\leq \sup_{s \in (0,\alpha]} \mu_{\phi_s}(Q_X, W_{Y|X}). && (279)
\end{aligned}$$
Finally, taking the supremum of the left side of (277) over all probability measures $P_X$ such that $0 < K_\alpha(P_X \| Q_X) < \infty$ gives the upper bound on $\mu_{k_\alpha}(Q_X, W_{Y|X})$ in (103). The proof of the upper bound on $\mu_{s_\alpha}(Q_X, W_{Y|X})$, for all $\alpha \in [0,1]$, follows similarly from (101), since the function $g_\alpha(\cdot)$, as defined in (102), is positive over the interval $(0,1)$.

5.9. Proof of Corollary 7

The upper bounds in (106) and (107) rely on those in (103) and (104), respectively, by showing that
$$\sup_{s \in (0,1]} \mu_{\phi_s}(Q_X, W_{Y|X}) \leq \mu_{\chi^2}(W_{Y|X}). \tag{280}$$
Inequality (280) is obtained as follows, similarly in concept to the proof of [51] [Remark 3.8]. For all $s \in (0,1]$ and $P_X \neq Q_X$,
$$\begin{aligned}
\frac{D_{\phi_s}(P_X W_{Y|X} \,\|\, Q_X W_{Y|X})}{D_{\phi_s}(P_X \| Q_X)}
&= \frac{\chi^2\bigl(P_X W_{Y|X} \,\|\, (1-s)\, P_X W_{Y|X} + s\, Q_X W_{Y|X}\bigr)}{\chi^2\bigl(P_X \,\|\, (1-s)P_X + s Q_X\bigr)} && (281)\\
&\leq \mu_{\chi^2}\bigl((1-s)P_X + sQ_X,\, W_{Y|X}\bigr) && (282)\\
&\leq \mu_{\chi^2}(W_{Y|X}), && (283)
\end{aligned}$$
where (281) holds due to (18), and (283) is due to the definition in (105).
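A brute-force Python illustration of (281)–(283): the data-processing ratio of $D_{\phi_s}$ coincides with a $\chi^2$ ratio evaluated at the mixture input, and it should not exceed $\mu_{\chi^2}(W_{Y|X})$; the latter is only estimated from below here by random search, so this is a plausibility check rather than a proof.

    import numpy as np

    rng = np.random.default_rng(4)
    nX, nY = 4, 3
    W = rng.random((nX, nY)); W /= W.sum(axis=1, keepdims=True)    # channel W_{Y|X}
    chi2 = lambda A, B: np.sum((A - B)**2 / B)
    ratio = lambda A, B: chi2(A @ W, B @ W) / chi2(A, B)           # chi^2 data-processing ratio

    mu_est = 0.0                                                   # crude estimate of (105)
    for _ in range(5000):
        A = rng.random(nX); A /= A.sum()
        B = rng.random(nX); B /= B.sum()
        mu_est = max(mu_est, ratio(A, B))

    P_X = rng.random(nX); P_X /= P_X.sum()
    Q_X = rng.random(nX); Q_X /= Q_X.sum()
    for s in [0.25, 0.6, 1.0]:
        M_X = (1 - s) * P_X + s * Q_X
        lhs = ratio(P_X, M_X)      # equals the ratio in (281): the s^{-2} factors cancel
        print(s, lhs, mu_est)      # lhs should not exceed mu_chi2(W), estimated by mu_est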

5.10. Proof of Proposition 3

The lower bound on the contraction coefficients in (108) and (109) is due to (276). The derivation of the upper bounds relies on [49] [Theorem 2.2], which states the following. Let $f \colon [0,\infty) \to \mathbb{R}$ be a three-times differentiable, convex function with $f(1) = 0$ and $f''(1) > 0$, and let the function $z \colon (0,\infty) \to \mathbb{R}$, defined as $z(t) := \frac{f(t) - f(0)}{t}$ for all $t > 0$, be concave. Then,
$$\mu_f(Q_X, W_{Y|X}) \leq \frac{f'(1) + f(0)}{f''(1)\, Q_{\min}} \cdot \mu_{\chi^2}(Q_X, W_{Y|X}). \tag{284}$$
For $\alpha \in (0,1]$, let $z_{\alpha,1} \colon (0,\infty) \to \mathbb{R}$ and $z_{\alpha,2} \colon (0,\infty) \to \mathbb{R}$ be given by
$$z_{\alpha,1}(t) := \frac{k_\alpha(t) - k_\alpha(0)}{t}, \qquad t > 0, \tag{285}$$
$$z_{\alpha,2}(t) := \frac{s_\alpha(t) - s_\alpha(0)}{t}, \qquad t > 0, \tag{286}$$
with $k_\alpha$ and $s_\alpha$ in (97) and (98). Straightforward calculus shows that, for $\alpha \in (0,1]$ and $t > 0$,
$$\begin{aligned}
\frac{1}{\log \mathrm{e}}\, z_{\alpha,1}''(t) &= -\frac{\alpha^2 + 2\alpha(1-\alpha)t}{t^2\bigl(\alpha + (1-\alpha)t\bigr)^2} < 0, && (287)\\
\frac{1}{\log \mathrm{e}}\, z_{\alpha,2}''(t) &= -\frac{\alpha^2\bigl(\alpha + 2(1-\alpha)t\bigr)}{t^2\bigl(\alpha + (1-\alpha)t\bigr)^2} \\
&\quad - \frac{2(1-\alpha)}{t^3}\biggl( \log_{\mathrm{e}}\Bigl(1 + \frac{(1-\alpha)t}{\alpha}\Bigr) - \frac{(1-\alpha)t}{\alpha + (1-\alpha)t} - \frac{(1-\alpha)^2 t^2}{2\bigl(\alpha + (1-\alpha)t\bigr)^2} \biggr). && (288)
\end{aligned}$$
The first term on the right side of (288) is negative. For showing that the second term is also negative, we rely on the power series expansion $\log_{\mathrm{e}}(1+u) = u - \tfrac12 u^2 + \tfrac13 u^3 - \ldots$ for $u \in (-1, 1]$. Setting $u := \frac{x}{1+x}$, for $x > 0$, and using the Leibniz theorem for alternating series yields
$$\log_{\mathrm{e}}(1+x) = -\log_{\mathrm{e}}\Bigl(1 - \frac{x}{1+x}\Bigr) > \frac{x}{1+x} + \frac{x^2}{2(1+x)^2}, \qquad x > 0. \tag{289}$$
Consequently, setting $x := \frac{(1-\alpha)t}{\alpha} \in [0, \infty)$ in (289), for $t > 0$ and $\alpha \in (0,1]$, proves that the second term on the right side of (288) is negative. Hence, $z_{\alpha,1}''(t),\, z_{\alpha,2}''(t) < 0$, so both $z_{\alpha,1}, z_{\alpha,2} \colon (0,\infty) \to \mathbb{R}$ are concave functions.
In view of the satisfiability of the conditions of [49] [Theorem 2.2] for the $f$-divergences with $f = k_\alpha$ or $f = s_\alpha$, the upper bounds in (108) and (109) follow from (284), and also since
$$\begin{aligned}
& k_\alpha(0) = 0, \quad k_\alpha'(1) = \alpha \log \mathrm{e}, \quad k_\alpha''(1) = \alpha^2 \log \mathrm{e}, && (290)\\
& s_\alpha(0) = (1-\alpha) \log\frac{1}{\alpha}, \quad s_\alpha'(1) = (2\alpha - 1) \log \mathrm{e}, \quad s_\alpha''(1) = (1 - 3\alpha + 3\alpha^2) \log \mathrm{e}. && (291)
\end{aligned}$$
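A quick numerical confirmation of (289) and of the concavity claims (Python; $k_\alpha(t) = t\log_{\mathrm e}\frac{t}{(1-\alpha)t+\alpha}$ and $s_\alpha(t) = \alpha k_\alpha(t) + (1-\alpha)\log_{\mathrm e}\frac{1}{(1-\alpha)t+\alpha}$ are the forms consistent with (290)–(291) and are used here as assumptions about (97)–(98)):

    import numpy as np

    # (289): log(1+x) > x/(1+x) + x^2/(2(1+x)^2) for x > 0
    x = np.linspace(1e-3, 50, 1000)
    print(np.all(np.log(1 + x) > x / (1 + x) + x**2 / (2 * (1 + x)**2)))   # True

    def k(t, a):       # assumed form of k_alpha in (97), natural logarithms
        return t * np.log(t / ((1 - a) * t + a))

    def s_f(t, a):     # assumed form of s_alpha in (98), natural logarithms
        return a * k(t, a) + (1 - a) * np.log(1.0 / ((1 - a) * t + a))

    def second_diff(f, t, h=1e-4):
        return (f(t + h) - 2 * f(t) + f(t - h)) / h**2

    t = np.linspace(0.05, 10, 500)
    for a in [0.2, 0.5, 0.9]:
        z1 = lambda u: (k(u, a) - 0.0) / u                          # k_alpha(0) = 0, cf. (290)
        z2 = lambda u: (s_f(u, a) - (1 - a) * np.log(1 / a)) / u    # s_alpha(0), cf. (291)
        print(a, np.all(second_diff(z1, t) < 0), np.all(second_diff(z2, t) < 0))   # concavity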

5.11. Proof of Proposition 4

In view of (24), we get
$$\begin{aligned}
\frac{D(P_Y \| Q_Y)}{D(P_X \| Q_X)}
&= \frac{\displaystyle\int_0^1 \chi^2\bigl(P_Y \,\|\, (1-s)P_Y + sQ_Y\bigr)\, \frac{\mathrm{d}s}{s}}{\displaystyle\int_0^1 \chi^2\bigl(P_X \,\|\, (1-s)P_X + sQ_X\bigr)\, \frac{\mathrm{d}s}{s}} && (292)\\
&\leq \frac{\displaystyle\int_0^1 \mu_{\chi^2}\bigl((1-s)P_X + sQ_X,\, W_{Y|X}\bigr)\, \chi^2\bigl(P_X \,\|\, (1-s)P_X + sQ_X\bigr)\, \frac{\mathrm{d}s}{s}}{\displaystyle\int_0^1 \chi^2\bigl(P_X \,\|\, (1-s)P_X + sQ_X\bigr)\, \frac{\mathrm{d}s}{s}} && (293)\\
&\leq \sup_{s \in [0,1]} \mu_{\chi^2}\bigl((1-s)P_X + sQ_X,\, W_{Y|X}\bigr). && (294)
\end{aligned}$$
In view of (119), the distributions of $X_s$ and $Y_s$, and since $\bigl((1-s)P_X + sQ_X\bigr) W_{Y|X} = (1-s)P_Y + sQ_Y$ holds for all $s \in [0,1]$, it follows that
$$\rho_{\mathrm m}(X_s; Y_s) = \sqrt{\mu_{\chi^2}\bigl((1-s)P_X + sQ_X,\, W_{Y|X}\bigr)}, \qquad s \in [0,1], \tag{295}$$
which, from (292)–(295), implies that
$$\sup_{s \in [0,1]} \rho_{\mathrm m}(X_s; Y_s) \geq \sqrt{\frac{D(P_Y \| Q_Y)}{D(P_X \| Q_X)}}. \tag{296}$$
Switching $P_X$ and $Q_X$ in (292)–(294), and using the mapping $s \mapsto 1-s$ in (294), gives (due to the symmetry of the maximal correlation)
$$\sup_{s \in [0,1]} \rho_{\mathrm m}(X_s; Y_s) \geq \sqrt{\frac{D(Q_Y \| P_Y)}{D(Q_X \| P_X)}}, \tag{297}$$
and, finally, taking the maximal lower bound among those in (296) and (297) gives (120).
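Proposition 4 can be probed numerically via the standard singular-value characterization of the maximal correlation (for a finite joint PMF $\Pi$, $\rho_{\mathrm m}(X;Y)$ equals the second-largest singular value of the matrix with entries $\Pi(x,y)/\sqrt{P_X(x) P_Y(y)}$); this characterization is a known fact invoked here as an outside assumption, not a statement of this section. The Python sketch below compares an approximation of $\sup_s \rho_{\mathrm m}(X_s; Y_s)$ with the lower bounds in (296)–(297) for randomly generated $P_X$, $Q_X$ and $W_{Y|X}$.

    import numpy as np

    rng = np.random.default_rng(5)
    nX, nY = 4, 4
    W = rng.random((nX, nY)); W /= W.sum(axis=1, keepdims=True)    # channel W_{Y|X}
    P_X = rng.random(nX); P_X /= P_X.sum()
    Q_X = rng.random(nX); Q_X /= Q_X.sum()

    D = lambda A, B: np.sum(A * np.log(A / B))                     # relative entropy (nats)

    def rho_m(input_dist):
        """Maximal correlation of (X_s, Y_s): X_s ~ input_dist, Y_s the output of W_{Y|X}."""
        joint = input_dist[:, None] * W
        out = input_dist @ W
        B = joint / np.sqrt(np.outer(input_dist, out))
        return np.linalg.svd(B, compute_uv=False)[1]   # largest singular value is 1

    # approximate the supremum over s by a fine grid (an under-estimate of the true sup)
    sup_rho = max(rho_m((1 - s) * P_X + s * Q_X) for s in np.linspace(0, 1, 101))
    P_Y, Q_Y = P_X @ W, Q_X @ W
    lower_1 = np.sqrt(D(P_Y, Q_Y) / D(P_X, Q_X))   # right side of (296)
    lower_2 = np.sqrt(D(Q_Y, P_Y) / D(Q_X, P_X))   # right side of (297)
    print(sup_rho, max(lower_1, lower_2))          # sup_rho should dominate both, cf. (120)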

Author Contributions

Both coauthors contributed to this research work, and to the writing and proofreading of this article. The starting point of this work was the independent derivation of preliminary versions of Theorems 1 and 2 in two separate unpublished works [24,25]. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Acknowledgments

Sergio Verdú is gratefully acknowledged for a careful reading, and well-appreciated feedback on the submitted version of this paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Kullback, S.; Leibler, R.A. On information and sufficiency. Ann. Math. Stat. 1951, 22, 79–86.
2. Pearson, K. On the criterion that a given system of deviations from the probable in the case of a correlated system of variables is such that it can be reasonably supposed to have arisen from random sampling. Lond. Edinb. Dublin Philos. Mag. J. Sci. 1900, 50, 157–175.
3. Csiszár, I.; Shields, P.C. Information Theory and Statistics: A Tutorial. Found. Trends Commun. Inf. Theory 2004, 1, 417–528.
4. Ali, S.M.; Silvey, S.D. A general class of coefficients of divergence of one distribution from another. J. R. Stat. Soc. 1966, 28, 131–142.
5. Csiszár, I. Eine Informationstheoretische Ungleichung und ihre Anwendung auf den Beweis der Ergodizität von Markhoffschen Ketten. Publ. Math. Inst. Hungar. Acad. Sci. 1963, 8, 85–108.
6. Csiszár, I. Information-type measures of difference of probability distributions and indirect observations. Stud. Sci. Math. Hung. 1967, 2, 299–318.
7. Csiszár, I. On topological properties of f-divergences. Stud. Sci. Math. Hung. 1967, 2, 329–339.
8. Csiszár, I. A class of measures of informativity of observation channels. Period. Math. Hung. 1972, 2, 191–213.
9. Van Erven, T.; Harremoës, P. Rényi divergence and Kullback–Leibler divergence. IEEE Trans. Inf. Theory 2014, 60, 3797–3820.
10. Rényi, A. On measures of entropy and information. In Proceedings of the Fourth Berkeley Symposium on Mathematical Statistics and Probability, Volume 1: Contributions to the Theory of Statistics; University of California Press: Berkeley, CA, USA, 1961; pp. 547–561.
11. Liese, F.; Vajda, I. Convex Statistical Distances; Teubner-Texte Zur Mathematik: Leipzig, Germany, 1987.
12. Liese, F.; Vajda, I. On divergences and informations in statistics and information theory. IEEE Trans. Inf. Theory 2006, 52, 4394–4412.
13. DeGroot, M.H. Uncertainty, information and sequential experiments. Ann. Math. Stat. 1962, 33, 404–419.
14. Polyanskiy, Y.; Poor, H.V.; Verdú, S. Channel coding rate in the finite blocklength regime. IEEE Trans. Inf. Theory 2010, 56, 2307–2359.
15. Sason, I.; Verdú, S. f-divergence inequalities. IEEE Trans. Inf. Theory 2016, 62, 5973–6006.
16. Sason, I. On f-divergences: Integral representations, local behavior, and inequalities. Entropy 2018, 20, 383.
17. Melbourne, J.; Madiman, M.; Salapaka, M.V. Relationships between certain f-divergences. In Proceedings of the 57th Annual Allerton Conference on Communication, Control and Computing, Urbana, IL, USA, 24–27 September 2019; pp. 1068–1073.
18. Melbourne, J.; Talukdar, S.; Bhaban, S.; Madiman, M.; Salapaka, M.V. The Differential Entropy of Mixtures: New Bounds and Applications. Available online: https://arxiv.org/pdf/1805.11257.pdf (accessed on 22 April 2020).
19. Audenaert, K.M.R. Quantum skew divergence. J. Math. Phys. 2014, 55, 112202.
20. Gibbs, A.L.; Su, F.E. On choosing and bounding probability metrics. Int. Stat. Rev. 2002, 70, 419–435.
21. Györfi, L.; Vajda, I. A class of modified Pearson and Neyman statistics. Stat. Decis. 2001, 19, 239–251.
22. Le Cam, L. Asymptotic Methods in Statistical Decision Theory; Series in Statistics; Springer: New York, NY, USA, 1986.
23. Vincze, I. On the concept and measure of information contained in an observation. In Contributions to Probability; Gani, J., Rohatgi, V.K., Eds.; Academic Press: New York, NY, USA, 1981; pp. 207–214.
24. Nishiyama, T. A New Lower Bound for Kullback–Leibler Divergence Based on Hammersley-Chapman-Robbins Bound. Available online: https://arxiv.org/abs/1907.00288v3 (accessed on 2 November 2019).
25. Sason, I. On Csiszár's f-divergences and informativities with applications. In Workshop on Channels, Statistics, Information, Secrecy and Randomness for the 80th birthday of I. Csiszár; The Rényi Institute of Mathematics, Hungarian Academy of Sciences: Budapest, Hungary, 2018.
26. Makur, A.; Polyanskiy, Y. Comparison of channels: Criteria for domination by a symmetric channel. IEEE Trans. Inf. Theory 2018, 64, 5704–5725.
27. Simic, S. On a new moments inequality. Stat. Probab. Lett. 2008, 78, 2671–2678.
28. Chapman, D.G.; Robbins, H. Minimum variance estimation without regularity assumptions. Ann. Math. Stat. 1951, 22, 581–586.
29. Hammersley, J.M. On estimating restricted parameters. J. R. Stat. Soc. Ser. B 1950, 12, 192–240.
30. Verdú, S. Information Theory, in preparation.
31. Wang, L.; Madiman, M. Beyond the entropy power inequality, via rearrangements. IEEE Trans. Inf. Theory 2014, 60, 5116–5137.
32. Lewin, L. Polylogarithms and Associated Functions; Elsevier North Holland: Amsterdam, The Netherlands, 1981.
33. Marton, K. Bounding d¯-distance by informational divergence: A method to prove measure concentration. Ann. Probab. 1996, 24, 857–866.
34. Marton, K. Distance-divergence inequalities. IEEE Inf. Theory Soc. Newsl. 2014, 64, 9–13.
35. Boucheron, S.; Lugosi, G.; Massart, P. Concentration Inequalities—A Nonasymptotic Theory of Independence; Oxford University Press: Oxford, UK, 2013.
36. Raginsky, M.; Sason, I. Concentration of Measure Inequalities in Information Theory, Communications and Coding: Third Edition. In Foundations and Trends in Communications and Information Theory; NOW Publishers: Boston, MA, USA; Delft, The Netherlands, 2018.
37. Csiszár, I. Sanov property, generalized I-projection and a conditional limit theorem. Ann. Probab. 1984, 12, 768–793.
38. Clarke, B.S.; Barron, A.R. Information-theoretic asymptotics of Bayes methods. IEEE Trans. Inf. Theory 1990, 36, 453–471.
39. Evans, R.J.; Boersma, J.; Blachman, N.M.; Jagers, A.A. The entropy of a Poisson distribution. SIAM Rev. 1988, 30, 314–317.
40. Knessl, C. Integral representations and asymptotic expansions for Shannon and Rényi entropies. Appl. Math. Lett. 1998, 11, 69–74.
41. Merhav, N.; Sason, I. An integral representation of the logarithmic function with applications in information theory. Entropy 2020, 22, 51.
42. Cover, T.M.; Thomas, J.A. Elements of Information Theory, 2nd ed.; John Wiley & Sons: Hoboken, NJ, USA, 2006.
43. Csiszár, I. The method of types. IEEE Trans. Inf. Theory 1998, 44, 2505–2523.
44. Corless, R.M.; Gonnet, G.H.; Hare, D.E.G.; Jeffrey, D.J.; Knuth, D.E. On the Lambert W function. Adv. Comput. Math. 1996, 5, 329–359.
45. Tamm, U. Some reflections about the Lambert W function as inverse of x·log(x). In Proceedings of the 2014 IEEE Information Theory and Applications Workshop, San Diego, CA, USA, 9–14 February 2014.
46. Cohen, J.E.; Kemperman, J.H.B.; Zbăganu, G. Comparison of Stochastic Matrices with Applications in Information Theory, Statistics, Economics and Population Sciences; Birkhäuser: Boston, MA, USA, 1998.
47. Cohen, J.E.; Iwasa, Y.; Rautu, G.; Ruskai, M.B.; Seneta, E.; Zbăganu, G. Relative entropy under mappings by stochastic matrices. Linear Algebra Its Appl. 1993, 179, 211–235.
48. Makur, A.; Zheng, L. Bounds between contraction coefficients. In Proceedings of the 53rd Annual Allerton Conference on Communication, Control and Computing, Urbana, IL, USA, 29 September–2 October 2015; pp. 1422–1429.
49. Makur, A. Information Contraction and Decomposition. Ph.D. Thesis, MIT, Cambridge, MA, USA, May 2019.
50. Polyanskiy, Y.; Wu, Y. Strong data processing inequalities for channels and Bayesian networks. In Convexity and Concentration; The IMA Volumes in Mathematics and its Applications; Carlen, E., Madiman, M., Werner, E.M., Eds.; Springer: New York, NY, USA, 2017; Volume 161, pp. 211–249.
51. Raginsky, M. Strong data processing inequalities and Φ-Sobolev inequalities for discrete channels. IEEE Trans. Inf. Theory 2016, 62, 3355–3389.
52. Sason, I. On data-processing and majorization inequalities for f-divergences with applications. Entropy 2019, 21, 1022.
53. Csiszár, I.; Körner, J. Information Theory: Coding Theorems for Discrete Memoryless Systems, 2nd ed.; Cambridge University Press: Cambridge, UK, 2011.
54. Burbea, J.; Rao, C.R. On the convexity of some divergence measures based on entropy functions. IEEE Trans. Inf. Theory 1982, 28, 489–495.
55. Lin, J. Divergence measures based on the Shannon entropy. IEEE Trans. Inf. Theory 1991, 37, 145–151.
56. Menéndez, M.L.; Pardo, J.A.; Pardo, L.; Pardo, M.C. The Jensen–Shannon divergence. J. Frankl. Inst. 1997, 334, 307–318.
57. Topsøe, F. Some inequalities for information divergence and related measures of discrimination. IEEE Trans. Inf. Theory 2000, 46, 1602–1609.
58. Nielsen, F. On a generalization of the Jensen–Shannon divergence and the Jensen–Shannon centroids. Entropy 2020, 22, 221.
59. Asadi, M.; Ebrahimi, N.; Karazmi, O.; Soofi, E.S. Mixture models, Bayes Fisher information, and divergence measures. IEEE Trans. Inf. Theory 2019, 65, 2316–2321.
60. Sarmanov, O.V. Maximum correlation coefficient (non-symmetric case). Sel. Transl. Math. Stat. Probab. 1962, 2, 207–210. (In Russian)
61. Gilardoni, G.L. Corrigendum to the note on the minimum f-divergence for given total variation. Comptes Rendus Math. 2010, 348, 299.
62. Reid, M.D.; Williamson, R.C. Information, divergence and risk for binary experiments. J. Mach. Learn. Res. 2011, 12, 731–817.
63. Pardo, M.C.; Vajda, I. On asymptotic properties of information-theoretic divergences. IEEE Trans. Inf. Theory 2003, 49, 1860–1868.
