Article

Ensemble Estimation of Information Divergence

by Kevin R. Moon 1,‡, Kumar Sricharan 2, Kristjan Greenewald 3 and Alfred O. Hero III 4,*
1 Genetics Department and Applied Math Program, Yale University, New Haven, CT 06520, USA
2 Intuit Inc., Mountain View, CA 94043, USA
3 IBM Research, Cambridge, MA 02142, USA
4 Electrical Engineering and Computer Science Department, University of Michigan, Ann Arbor, MI 48109, USA
* Author to whom correspondence should be addressed.
This paper is an extended version of our paper published in the 2016 IEEE International Symposium on Information Theory (ISIT), Barcelona, Spain, 10–15 July 2016; pp. 1133–1137.
‡ Current address: Department of Mathematics and Statistics, Utah State University, Logan, UT 84322, USA.
Entropy 2018, 20(8), 560; https://doi.org/10.3390/e20080560
Submission received: 29 June 2018 / Revised: 23 July 2018 / Accepted: 26 July 2018 / Published: 27 July 2018
(This article belongs to the Special Issue Information Theory in Machine Learning and Data Science)

Abstract:
Recent work has focused on the problem of nonparametric estimation of information divergence functionals between two continuous random variables. Many existing approaches require either restrictive assumptions about the density support set or difficult calculations at the support set boundary, which must be known a priori. The mean squared error (MSE) convergence rate of a leave-one-out kernel density plug-in divergence functional estimator is derived for general bounded density support sets, where knowledge of the support boundary, and therefore explicit boundary correction, is not required. The theory of optimally weighted ensemble estimation is generalized to derive a divergence estimator that achieves the parametric rate when the densities are sufficiently smooth. Guidelines for the tuning parameter selection and the asymptotic distribution of this estimator are provided. Based on the theory, an empirical estimator of Rényi-α divergence is proposed that greatly outperforms the standard kernel density plug-in estimator in terms of mean squared error, especially in high dimensions. The estimator is shown to be robust to the choice of tuning parameters. We present extensive simulation results that verify the theoretical results of the paper. Finally, we apply the proposed estimator to estimate the bounds on the Bayes error rate of a cell classification problem.

1. Introduction

Information divergences are integral functionals of two probability distributions and have many applications in the fields of information theory, statistics, signal processing, and machine learning. Some applications of divergences include estimating the decay rates of error probabilities [1], estimating bounds on the Bayes error [2,3,4,5,6,7,8] or the minimax error [9] for a classification problem, extending machine learning algorithms to distributional features [10,11,12,13], testing the hypothesis that two sets of samples come from the same probability distribution [14], clustering [15,16,17], feature selection and classification [18,19,20], blind source separation [21,22], image segmentation [23,24,25], and steganography [26]. For many more applications of divergence measures, see reference [27]. There are many information divergence families including Alpha- and Beta-divergences [28] as well as f-divergences [29,30]. In particular, the f-divergence family includes the well-known Kullback–Leibler (KL) divergence [31], the Rényi- α divergence integral [32], the Hellinger–Bhattacharyya distance [33,34], the Chernoff- α divergence [5], the total variation distance, and the Henze–Penrose divergence [6].
Despite the many applications of divergences between continuous random variables, there are no nonparametric estimators of these functionals that achieve the parametric mean squared error (MSE) convergence rate, are simple to implement, do not require knowledge of the boundary of the density support set, and apply to a large set of divergence functionals. In this paper, we present the first information divergence estimator that achieves all of the above. Specifically, we address the problem of estimating divergence functionals when only a finite population of independent and identically distributed (i.i.d.) samples is available from the two d-dimensional distributions that are unknown, nonparametric, and smooth. Our contributions are as follows:
  • We propose the first information divergence estimator, referred to as EnDive, that is based on ensemble methods. The ensemble estimator takes a weighted average of an ensemble of weak kernel density plug-in estimators of divergence where the weights are chosen to improve the MSE convergence rate. This ensemble construction makes it very easy to implement EnDive.
  • We prove that the proposed ensemble divergence estimator achieves the optimal parametric MSE rate of O(1/N), where N is the sample size, when the densities are sufficiently smooth. In particular, EnDive achieves these rates without explicitly performing the boundary correction that is required by most other estimators. Furthermore, we show that the convergence rates are uniform.
  • We prove that EnDive obeys a central limit theorem and thus, can be used to perform inference tasks on the divergence such as testing that two populations have identical distributions or constructing confidence intervals.

1.1. Related Work

Much work has focused on the problem of estimating the entropy and the information divergence of discrete random variables [1,29,35,36,37,38,39,40,41,42,43]. However, the estimation problem for discrete random variables differs significantly from the continuous case and thus employs different tools for both estimation and analysis.
One approach to estimating the differential entropy and information divergence of continuous random variables is to assume a parametric model for the underlying probability distributions [44,45,46]. However, these methods perform poorly when the parametric model does not fit the data well. Unfortunately, the structure of the underlying data distribution is unknown for many applications, and thus the chance for model misspecification is high. Thus, in many of these applications, parametric methods are insufficient, and nonparametric estimators must be used.
While several nonparametric estimators of divergence functionals between continuous random variables have been previously defined, the convergence rates are known for only a few of them. Furthermore, the asymptotic distributions of these estimators are unknown for nearly all of them. For example, Póczos and Schneider [10] established weak consistency for a bias-corrected k-nearest neighbor (nn) estimator for the Rényi-α and other divergences of a similar form where k was fixed. Li et al. [47] examined k-nn estimators of entropy and the KL divergence using hyperspherical data. Wang et al. [48] provided a k-nn based estimator for the KL divergence. Plug-in histogram estimators of mutual information and divergence have been proven to be consistent [49,50,51,52]. Hero et al. [53] provided a consistent estimator for the Rényi-α divergence when one of the densities is known. However, none of these works studied the convergence rates or the asymptotic distribution of their estimators.
There has been recent interest in deriving convergence rates for divergence estimators for continuous data [54,55,56,57,58,59,60]. The rates are typically derived in terms of a smoothness condition on the densities, such as the Hölder condition [61]:
Definition 1 (Hölder Class).
Let X ⊂ R^d be a compact space. For r = (r_1, ..., r_d), r_i ∈ N, define |r| = Σ_{i=1}^d r_i and D^r = ∂^{|r|}/(∂x_1^{r_1} ⋯ ∂x_d^{r_d}). The Hölder class Σ(s, K_H) of functions on L_2(X) consists of the functions f that satisfy
\| D^r f(x) - D^r f(y) \| \le K_H\, \| x - y \|^{\min(s - |r|,\, 1)},
for all x, y ∈ X and for all r such that |r| ≤ ⌊s⌋.
From Definition 1, it is clear that if a function f belongs to Σ(s, K_H), then f is continuously differentiable up to order ⌊s⌋. In this work, we show that EnDive achieves the parametric MSE convergence rate of O(1/N) when s ≥ d or s > d/2, depending on the specific form of the divergence functional.
Nguyen et al. [56] proposed an f-divergence estimator that estimates the likelihood ratio of the two densities by solving a convex optimization problem and then plugging it into the divergence formulas. The authors proved that the minimax MSE convergence rate is parametric when the likelihood ratio is a member of the bounded Hölder class Σ(s, K_H) with s ≥ d/2. However, this estimator is restricted to true f-divergences and may not apply to the broader class of divergence functionals that we consider here (as an example, the L_2^2 divergence is not an f-divergence). Additionally, solving the convex problem of [56] has similar computational complexity to that of training a support vector machine (SVM) (between O(N^2) and O(N^3)), which can be demanding when N is large. In contrast, the EnDive estimator that we propose requires only the construction of simple density plug-in estimates and the solution of an offline convex optimization problem. Therefore, the most computationally demanding step in the EnDive estimator is the calculation of the density estimates, which has a computational complexity no greater than O(N^2).
Singh and Póczos [58,59] provided an estimator for Rényi-α divergences as well as general density functionals that uses a “mirror image” kernel density estimator. They proved that these estimators obtain an MSE convergence rate of O(1/N) when s ≥ d for each of the densities. However, their approach requires several computations at each boundary of the support of the densities, which is difficult to implement as d gets large. Also, this computation requires knowledge of the support (specifically, the boundaries) of the densities, which is unknown in most practical settings. In contrast, while our assumptions require the density support sets to be bounded and the boundaries to be smooth, knowledge of the support is not required to implement EnDive.
The “linear” and “quadratic” estimators presented by Krishnamurthy et al. [57] estimate divergence functionals that include the form ∫ f_1^α(x) f_2^β(x) dμ(x) for given α and β, where f_1 and f_2 are probability densities. These estimators achieve the parametric rate when s ≥ d/2 and s ≥ d/4 for the linear and quadratic estimators, respectively. However, the latter estimator is computationally infeasible for most functionals, and the former requires numerical integration for some divergence functionals, which can be computationally difficult. Additionally, while a suitable α–β indexed sequence of divergence functionals of this form can be constructed that converges to the KL divergence, this does not guarantee convergence of the corresponding sequence of divergence estimators, as shown in reference [57]. In contrast, EnDive can be used to estimate the KL divergence directly. Other important f-divergence functionals are also excluded from this form, including some that bound the Bayes error [2,4,6]. In contrast, our method applies to a large class of divergence functionals and avoids numerical integration.
Finally, Kandasamy et al. [60] proposed influence function-based estimators of distributional functionals, including divergences, that achieve the parametric rate when s ≥ d/2. While this method can be applied to general functionals, the estimator requires numerical integration for some functionals. Additionally, the estimators in both Kandasamy et al. [60] and Krishnamurthy et al. [57] require an optimal kernel density estimator. This is difficult to construct when the density support is bounded, as it requires difficult computations at the density support set boundary and therefore knowledge of the density support set. In contrast, EnDive does not require knowledge of the support boundary.
In addition to the MSE convergence rates, the asymptotic distribution of divergence estimators is of interest. Asymptotic normality has been established for certain divergences between a specific density estimator and the true density [62,63,64]. This differs from the problem we consider where we assume that both densities are unknown. The asymptotic distributions of the estimators in references [56,57,58,59] are currently unknown. Thus, it is difficult to use these estimators for hypothesis testing which is crucial in many scientific applications. Kandasamy et al. [60] derived the asymptotic distribution of their data-splitting estimator but did not prove similar results for their leave-one-out estimator. We establish a central limit theorem for EnDive which greatly enhances its applicability in scientific settings.
Our ensemble divergence estimator reduces to an ensemble entropy estimator as a special case when data from only one distribution is considered and the other density is set to a uniform measure (see reference [28] for more on the relationship between entropy and information divergence). The resultant entropy estimator differs from the ensemble entropy estimator proposed by Sricharan et al. [65] in several important ways. First, the density support set must be known for the estimator in reference [65] to perform the explicit boundary correction. In contrast, the EnDive estimator does not require any boundary correction. To show this requires a significantly different approach to prove the bias and variance rates of the EnDive estimator. Furthermore, the EnDive results apply under more general assumptions for the densities and the kernel used in the weak estimators. Finally, the central limit theorem applies to the EnDive estimator which is currently unknown for the estimator in reference [65].
We also note that Berrett et al. [66] proposed a modification of the Kozachenko and Leonenko estimator of entropy [67] that takes a weighted ensemble estimation approach. While their results require stronger assumptions for the smoothness of the densities than ours do, they did obtain the asymptotic distribution of their weighted estimator and they also showed that the asymptotic variance of the estimator is not increased by taking a weighted average. This latter point is an important selling point of the ensemble framework—we can improve the asymptotic bias of an estimator without increasing the asymptotic variance.

1.2. Organization and Notation

The paper is organized as follows. We first derive the MSE convergence rates in Section 2 for a weak divergence estimator, which is a kernel density plug-in divergence estimator. We then generalize the theory of optimally weighted ensemble entropy estimation developed in reference [65] to obtain the ensemble divergence estimator EnDive from an ensemble of weak estimators in Section 3. A central limit theorem and uniform convergence rate for the ensemble estimator are also presented in Section 3. In Section 4, we provide guidelines for selecting the tuning parameters based on experiments and the theory derived in the previous sections. We then perform experiments in Section 4 that validate the theory and establish the robustness of the proposed estimators to the tuning parameters.
Bold face type is used for random variables and random vectors. The conditional expectation given a random variable Z is denoted as E Z . The variance of a random variable is denoted as V , and the bias of an estimator is denoted as B .

2. The Divergence Functional Weak Estimator

This paper focuses on estimating functionals of the form
G(f_1, f_2) = \int g\big( f_1(x), f_2(x) \big)\, f_2(x)\, dx,
where g(x, y) is a smooth functional, and f_1 and f_2 are smooth d-dimensional probability densities. If g(f_1(x), f_2(x)) = g(f_1(x)/f_2(x)), g is convex, and g(1) = 0, then G(f_1, f_2) defines the family of f-divergences. Some common divergences that belong to this family include the KL divergence (g(t) = −ln t) and the total variation distance (g(t) = |t − 1|). In this work, we consider a broader class of functionals than the f-divergences, since g is allowed to be very general.
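As a quick worked instance of (1) (added here for concreteness), the Rényi-α divergence integral estimated in Section 4 corresponds to the choice g(x, y) = (x/y)^α:

```latex
G(f_1, f_2)
  = \int \left( \frac{f_1(x)}{f_2(x)} \right)^{\alpha} f_2(x)\, dx
  = \int f_1^{\alpha}(x)\, f_2^{1-\alpha}(x)\, dx .
```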
To estimate G(f_1, f_2), we first define a weak plug-in estimator based on kernel density estimators (KDEs), that is, a simple estimator that converges slowly to the true value G(f_1, f_2) in terms of MSE. We then derive the bias and variance expressions for this weak estimator as a function of the sample size and bandwidth. We then use the resulting bias and variance expressions to derive an ensemble estimator that takes a weighted average of weak estimators with different bandwidths and achieves superior MSE performance.

2.1. The Kernel Density Plug-in Estimator

We use a kernel density plug-in estimator of the divergence functional in (1) as the weak estimator. Assume that N_1 i.i.d. realizations Y_1, ..., Y_{N_1} are available from f_1 and N_2 i.i.d. realizations X_1, ..., X_{N_2} are available from f_2. Let h_i > 0 be the kernel bandwidth for the density estimator of f_i. For simplicity of presentation, assume that N_1 = N_2 = N and h_1 = h_2 = h. The results for the more general case of differing sample sizes and bandwidths are given in Appendix C. Let K(·) be a kernel function with ∫ K(x) dx = 1 and ‖K‖_∞ < ∞, where ‖K‖_∞ is the supremum norm of the kernel K. The KDEs for f_1 and f_2 are, respectively,
\tilde{f}_{1,h}(X_j) = \frac{1}{N h^d} \sum_{i=1}^{N} K\!\left( \frac{X_j - Y_i}{h} \right), \qquad \tilde{f}_{2,h}(X_j) = \frac{1}{M h^d} \sum_{\substack{i=1 \\ i \ne j}}^{N} K\!\left( \frac{X_j - X_i}{h} \right),
where M = N − 1. G(f_1, f_2) is then approximated as
\tilde{G}_h = \frac{1}{N} \sum_{i=1}^{N} g\big( \tilde{f}_{1,h}(X_i), \tilde{f}_{2,h}(X_i) \big).
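To make the construction concrete, the following is a minimal sketch (ours, not the authors' released code) of the weak plug-in estimator in (2) using a uniform product kernel on the cube ‖u‖_∞ ≤ 1/2; the function names are illustrative, and no guard against zero-valued KDEs is included (in practice, the bandwidth is chosen large enough to avoid this; see Section 4.1).

```python
import numpy as np

def uniform_product_kernel(u):
    """K(u) = 1 on the cube ||u||_inf <= 1/2, so that K integrates to one."""
    return np.all(np.abs(u) <= 0.5, axis=-1).astype(float)

def kde(query, data, h, leave_one_out=False):
    """Kernel density estimate (1 / (M h^d)) * sum_i K((x - X_i) / h) at each query point."""
    N, d = data.shape
    K = uniform_product_kernel((query[:, None, :] - data[None, :, :]) / h)
    if leave_one_out:
        np.fill_diagonal(K, 0.0)           # drop the i = j term
        return K.sum(axis=1) / ((N - 1) * h**d)
    return K.sum(axis=1) / (N * h**d)

def weak_divergence_estimate(X, Y, h, g):
    """Plug-in estimate G_h = (1/N) sum_i g(f1_tilde(X_i), f2_tilde(X_i)) from (2)."""
    f1_tilde = kde(X, Y, h)                      # estimate of f_1 at the X_i
    f2_tilde = kde(X, X, h, leave_one_out=True)  # leave-one-out estimate of f_2 at the X_i
    return float(np.mean(g(f1_tilde, f2_tilde)))

# Example: Rényi-alpha divergence integral with alpha = 0.5
# G_h = weak_divergence_estimate(X, Y, h=0.3, g=lambda a, b: (a / b) ** 0.5)
```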

2.2. Convergence Rates

For many estimators, MSE convergence rates are typically provided in the form of upper (or sometimes lower) bounds on the bias and the variance. Therefore, only the slowest converging terms (as a function of the sample size N) are presented in these cases. However, to apply our generalized ensemble theory to obtain estimators that guarantee the parametric MSE rate, we required explicit expressions for the bias of the weak estimators in terms of the sample size N and the kernel bandwidth h. Thus, an upper bound was insufficient for our work. Furthermore, to guarantee the parametric rate, we required explicit expressions for all bias terms that converge to zero more slowly than O(1/√N).
To obtain the bias expressions, we required multiple assumptions on the densities f_1 and f_2, the functional g, and the kernel K. Similar to references [7,54,65], the principal assumptions we make are that (1) f_1, f_2, and g are smooth; (2) f_1 and f_2 have a common bounded support set S; (3) f_1 and f_2 are strictly lower bounded on S; and (4) the boundary of the density support set is smooth with respect to the kernel K(u). The full technical assumptions and a discussion of them are contained in Appendix A. Given these assumptions, we have the following result on the bias of G̃_h:
Theorem 1.
For a general g, the bias of the plug-in estimator G ˜ h is given by
\mathbb{B}\big[ \tilde{G}_h \big] = \sum_{j=1}^{\lfloor s \rfloor} c_{10,j}\, h^j + c_{11}\, \frac{1}{N h^d} + O\!\left( h^s + \frac{1}{N h^d} \right).
To apply our generalized ensemble theory to the KDE plug-in estimator ( G ˜ h ), we required only an upper bound on its variance. The following variance result required much less strict assumptions than the bias results in Theorem 1:
Theorem 2.
Assume that the functional g in (1) is Lipschitz continuous in both of its arguments with the Lipschitz constant ( C g ). Then, the variance of the plug-in estimator ( G ˜ h ) is bounded by
\mathbb{V}\big[ \tilde{G}_h \big] \le \frac{11\, C_g^2\, \|K\|_\infty^2}{N}.
From Theorems 1 and 2, we observe that h → 0 and N h^d → ∞ are required for G̃_h to be asymptotically unbiased, while the variance of the plug-in estimator depends primarily on the sample size N. Note that the constants depend on the densities f_1 and f_2 and their derivatives, which are often unknown.

2.3. Optimal MSE Rate

From Theorem 1, the dominating terms in the bias are observed to be Θ(h) and Θ(1/(N h^d)). If no bias correction is performed, the optimal choice of h that minimizes the MSE is
h = \Theta\!\left( N^{-\frac{1}{d+1}} \right).
This results in a dominant bias term of order Θ(N^{-1/(d+1)}). Note that this differs from the standard result for the optimal KDE bandwidth for minimum MSE density estimation, which is Θ(N^{-1/(d+4)}) for a symmetric uniform kernel when the boundary bias is ignored [68].
Figure 1 shows a heatmap of the leading bias term O(h) as a function of d and N when h = N^{-1/(d+1)}. The heatmap indicates that the bias of the plug-in estimator in (2) is small only for relatively small values of d. This is consistent with the empirical results in reference [69], which examined the MSE of multiple plug-in KDE and k-nn estimators. In the next section, we propose an ensemble estimator that achieves a superior convergence rate regardless of the dimension d, as long as the density is sufficiently smooth.

2.4. Proof Sketches of Theorems 1 and 2

To prove the bias expressions in Theorem 1, the bias is first decomposed into two parts by adding and subtracting g(E_Z[f̃_{1,h}(Z)], E_Z[f̃_{2,h}(Z)]) within the expectation, creating a “bias” term and a “variance” term. Applying a Taylor series expansion to the bias and variance terms results in expressions that depend on powers of B_Z[f̃_{i,h}(Z)] := E_Z[f̃_{i,h}(Z)] − f_i(Z) and ẽ_{i,h}(Z) := f̃_{i,h}(Z) − E_Z[f̃_{i,h}(Z)], respectively. Within the interior of the support, moment bounds can be derived from properties of the KDEs and a Taylor series expansion of the densities. Near the boundary of the support, the boundary smoothness assumption A.5 is required to obtain an expression for the bias in terms of the KDE bandwidth h and the sample size N. The full proof of Theorem 1 is given in Appendix E.
The proof of the variance result takes a different approach. The proof uses the Efron–Stein inequality [70] which bounds the variance by analyzing the expected squared difference between the plug-in estimator when one sample is allowed to differ. This approach provides a bound on the variance under much less strict assumptions on the densities and the functional g than is required for Theorem 1. The full proof of Theorem 2 is given in Appendix F.

3. Weighted Ensemble Estimation

From Theorem 1 and Figure 1, we can observe that the bias of the MSE-optimal plug-in estimator G ˜ h decreases very slowly as a function of the sample size (N) when the data dimensions (d) are not small, resulting in a large MSE. However, by applying the theory of optimally weighted ensemble estimation, we can obtain an estimator with improved performance by taking a weighted sum of an ensemble of weak estimators where the weights are chosen to significantly reduce the bias.
The ensemble of weak estimators is formed by choosing different values of the bandwidth parameter h as follows. Let L = {l_1, ..., l_L} be a set of real positive numbers that index the bandwidths h(l). Thus, the parameter l indexes over different neighborhood sizes for the KDEs. Define the weight vector w := [w(l_1), ..., w(l_L)] and G̃_w := Σ_{l ∈ L} w(l) G̃_{h(l)}. That is, for each estimator G̃_{h(l)} there is a corresponding weight w(l). The key to reducing the MSE is to choose the weight vector w to reduce the lower-order terms in the bias while minimizing the impact of the weighted average on the variance.

3.1. Finding the Optimal Weight

The theory of optimally weighted ensemble estimation is a general theory that is applicable to any estimation problem as long as the bias and variance of the estimator can be expressed in a specific way. An early version of this theory was presented in reference [65]. We now generalize this theory so that it can be applied to a wider variety of estimation problems. Let N be the number of available samples and let L = {l_1, ..., l_L} be a set of index values. Given an indexed ensemble of estimators {Ê_l}_{l ∈ L} of some parameter E, the weighted ensemble estimator with weights w = [w(l_1), ..., w(l_L)] satisfying Σ_{l ∈ L} w(l) = 1 is defined as
\hat{E}_w = \sum_{l \in L} w(l)\, \hat{E}_l.
Ê_w is asymptotically unbiased as long as the estimators {Ê_l}_{l ∈ L} are asymptotically unbiased. Consider the following conditions on {Ê_l}_{l ∈ L}:
  • C.1 The bias is expressible as
    \mathbb{B}\big[ \hat{E}_l \big] = \sum_{i \in J} c_i\, \psi_i(l)\, \phi_{i,d}(N) + O\!\left( \frac{1}{\sqrt{N}} \right),
    where the c_i are constants that depend on the underlying density and are independent of N and l, J = {i_1, ..., i_I} is a finite index set with I < L, and the ψ_i(l) are basis functions depending only on the parameter l and not on the sample size N.
  • C.2 The variance is expressible as
    \mathbb{V}\big[ \hat{E}_l \big] = c_v\, \frac{1}{N} + o\!\left( \frac{1}{N} \right).
Theorem 3.
Assume that conditions C.1 and C.2 hold for an ensemble of estimators {Ê_l}_{l ∈ L}. Then, there exists a weight vector w_0 such that the MSE of the weighted ensemble estimator attains the parametric rate of convergence:
\mathbb{E}\left[ \left( \hat{E}_{w_0} - E \right)^2 \right] = O\!\left( \frac{1}{N} \right).
The weight vector w_0 is the solution to the following convex optimization problem:
\min_{w} \|w\|_2 \quad \text{subject to} \quad \sum_{l \in L} w(l) = 1, \qquad \gamma_w(i) = \sum_{l \in L} w(l)\, \psi_i(l) = 0, \; i \in J.
Proof. 
From condition C.1, we can write the bias of the weighted estimator as
\mathbb{B}\big[ \hat{E}_w \big] = \sum_{i \in J} c_i\, \gamma_w(i)\, \phi_{i,d}(N) + O\!\left( \frac{\sqrt{L}\, \|w\|_2}{\sqrt{N}} \right).
The variance of the weighted estimator is bounded as
\mathbb{V}\big[ \hat{E}_w \big] \le \frac{L\, \|w\|_2^2}{N}.
The optimization problem in (4) zeroes out the lower-order bias terms and limits the ℓ_2 norm of the weight vector w to prevent the variance from exploding. This results in an MSE rate of O(1/N) when the dimension d is fixed and when L is fixed independently of the sample size N. Furthermore, a solution to (4) is guaranteed to exist if L > I and the vectors a_i = [ψ_i(l_1), ..., ψ_i(l_L)] are linearly independent. This completes our sketch of the proof of Theorem 3. ☐
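To make Theorem 3 concrete, the following minimal numpy sketch (our illustration, not the authors' code) computes the least-norm weight vector solving (4) for a user-supplied list of basis functions ψ_i; the function and variable names are ours.

```python
import numpy as np

def exact_weights(l_vals, basis_fns):
    """Minimum-norm w with sum_l w(l) = 1 and sum_l w(l) psi_i(l) = 0 for each psi_i."""
    l = np.asarray(l_vals, dtype=float)
    # Stack the constraints: the first row is the sum-to-one condition,
    # the remaining rows zero out the lower-order bias terms gamma_w(i).
    A = np.vstack([np.ones_like(l)] + [psi(l) for psi in basis_fns])
    b = np.zeros(A.shape[0])
    b[0] = 1.0
    # For an underdetermined system (L > I), lstsq returns the minimum-norm solution.
    w, *_ = np.linalg.lstsq(A, b, rcond=None)
    return w

# Example with the polynomial basis psi_i(l) = l**i, i = 1, ..., 4, and 50 indices:
# w0 = exact_weights(np.linspace(1.5, 3.0, 50), [lambda l, i=i: l**i for i in range(1, 5)])
```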

3.2. The EnDive Estimator

The parametric MSE convergence rate of O(1/N) can be achieved without requiring γ_w(i) = 0 for all i ∈ J. This can be accomplished by solving the following convex optimization problem in place of the optimization problem in Theorem 3:
\min_{w} \epsilon \quad \text{subject to} \quad \sum_{l \in L} w(l) = 1, \qquad \left| \gamma_w(i) \right| N^{\frac{1}{2}}\, \phi_{i,d}(N) \le \epsilon, \; i \in J, \qquad \|w\|_2^2 \le \eta\, \epsilon,
where the parameter η is chosen to achieve a trade-off between bias and variance. Instead of forcing γ_w(i) = 0, the relaxed optimization problem uses the weights to decrease the bias terms at a rate of O(1/√N), yielding an MSE convergence rate of O(1/N). In fact, it was shown in reference [71] that the optimization problem in (6) guarantees the parametric MSE rate as long as the conditions of Theorem 3 are satisfied and a solution to the optimization problem in (4) exists (the conditions for this existence are given in the proof of Theorem 3).
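The relaxed problem (6) is a small offline convex program. The sketch below (ours, not the authors' implementation) solves it with the cvxpy library for user-supplied basis functions ψ_i and rates φ_{i,d}; the function name, argument structure, and choice of solver library are all our own.

```python
import numpy as np
import cvxpy as cp

def relaxed_weights(l_vals, N, basis_fns, rate_fns, eta=1.0):
    """Solve the relaxed convex problem (6): minimize eps subject to
    sum(w) = 1, |gamma_w(i)| * N**0.5 * phi_{i,d}(N) <= eps, and ||w||_2^2 <= eta * eps."""
    l = np.asarray(l_vals, dtype=float)
    w = cp.Variable(len(l))
    eps = cp.Variable(nonneg=True)
    constraints = [cp.sum(w) == 1, cp.sum_squares(w) <= eta * eps]
    for psi, phi in zip(basis_fns, rate_fns):
        # gamma_w(i) = sum_l w(l) * psi_i(l)
        constraints.append(cp.abs(psi(l) @ w) * np.sqrt(N) * phi(N) <= eps)
    cp.Problem(cp.Minimize(eps), constraints).solve()
    return w.value
```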
We now construct a divergence ensemble estimator from an ensemble of plug-in KDE divergence estimators. Consider first the bias result in (3) where g is general, and assume that s ≥ d. In this case, the bias contains a O(1/(h^d N)) term. To guarantee the parametric MSE rate, any remaining lower-order bias terms in the ensemble estimator must decay no more slowly than O(1/√N). Let h(l) = l N^{-1/(2d)} where l ∈ L. Then O(1/(h(l)^d N)) = O(1/(l^d √N)). We therefore obtain an ensemble of plug-in estimators {G̃_{h(l)}}_{l ∈ L} and a weighted ensemble estimator G̃_w = Σ_{l ∈ L} w(l) G̃_{h(l)}. The bias of each estimator in the ensemble satisfies condition C.1 with ψ_i(l) = l^i and φ_{i,d}(N) = N^{-i/(2d)} for i = 1, ..., d. To obtain a uniform bound on the bias with respect to w and L, we also include the function ψ_{d+1}(l) = l^{-d} with corresponding φ_{d+1,d}(N) = N^{-1/2}. The variance also satisfies condition C.2. The optimal weight w_0 is found by using (6) to obtain an optimally weighted plug-in divergence functional estimator G̃_{w_0} with an MSE convergence rate of O(1/N) as long as s ≥ d and L ≥ d. Otherwise, if s < d, we can only guarantee the MSE rate up to O(N^{-s/d}). We refer to this estimator as the Ensemble Divergence (EnDive) estimator and denote it as G̃_EnDive.
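Putting the pieces together, a minimal sketch of the resulting ensemble estimator (ours) is shown below. It reuses weak_divergence_estimate from the Section 2.1 sketch and relaxed_weights from the sketch above; both are our illustrative implementations rather than the authors' code, and the example bandwidth-index grid is arbitrary.

```python
import numpy as np

def endive(X, Y, l_vals, g):
    """Weighted ensemble of weak plug-in estimators with h(l) = l * N**(-1/(2d))."""
    N, d = X.shape
    # EnDive basis of Section 3.2: psi_i(l) = l**i (i = 1..d) and psi_{d+1}(l) = l**(-d)
    basis = [lambda l, i=i: l ** i for i in range(1, d + 1)] + [lambda l: l ** (-float(d))]
    rates = [lambda n, i=i: n ** (-i / (2.0 * d)) for i in range(1, d + 1)] + [lambda n: n ** (-0.5)]
    w0 = relaxed_weights(l_vals, N, basis, rates)
    ests = np.array([weak_divergence_estimate(X, Y, l * N ** (-1.0 / (2 * d)), g)
                     for l in l_vals])
    return float(w0 @ ests)

# Example (Rényi-alpha integral, alpha = 0.5):
# G_hat = endive(X, Y, np.linspace(1.5, 3.0, 50), lambda a, b: (a / b) ** 0.5)
```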
We note that for some functionals g (including the KL divergence and the Rényi-α divergence integral), we can modify the EnDive estimator to obtain the parametric rate under the less strict assumption that s > d/2. For details on this approach, see Appendix B.

3.3. Central Limit Theorem

The following theorem shows that an appropriately normalized ensemble estimator G ˜ w converges in distribution to a normal random variable under rather general conditions. Thus, the same result applies to the EnDive estimator G ˜ EnDive . This enables us to perform hypothesis testing on the divergence functional which is very useful in many scientific applications. The proof is based on the Efron–Stein inequality and an application of Slutsky’s Theorem (Appendix G).
Theorem 4.
Assume that the functional g is Lipschitz in both arguments with Lipschitz constant C_g. Further assume that h(l) = o(1), N → ∞, and N h(l)^d → ∞ for each l ∈ L. Then, for fixed L, the asymptotic distribution of the weighted ensemble estimator G̃_w is given by
\Pr\!\left( \frac{ \tilde{G}_w - \mathbb{E}\big[ \tilde{G}_w \big] }{ \sqrt{ \mathbb{V}\big[ \tilde{G}_w \big] } } \le t \right) \to \Pr(S \le t),
where S is a standard normal random variable.
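One practical use of Theorem 4 (our illustration, not a procedure prescribed by the paper) is to form approximate confidence intervals for the divergence by combining the normal limit with an estimate of the estimator's standard error, for instance from a nonparametric bootstrap; endive_estimate below is a placeholder for any implementation of G̃_w, such as the sketches above.

```python
import numpy as np
from scipy.stats import norm

def divergence_ci(X, Y, endive_estimate, n_boot=200, level=0.95, seed=0):
    """Normal-approximation CI motivated by Theorem 4, with a bootstrap standard error."""
    rng = np.random.default_rng(seed)
    point = endive_estimate(X, Y)
    boots = [endive_estimate(X[rng.integers(0, len(X), len(X))],
                             Y[rng.integers(0, len(Y), len(Y))])
             for _ in range(n_boot)]
    se = float(np.std(boots, ddof=1))
    z = norm.ppf(0.5 + level / 2.0)
    return point - z * se, point + z * se
```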

3.4. Uniform Convergence Rates

Here, we show that the optimally weighted ensemble estimators achieve the parametric MSE convergence rate uniformly. Denote by Σ(s, K_H, ε_0, ε_∞) the subset of Σ(s, K_H) consisting of densities bounded between ε_0 and ε_∞.
Theorem 5.
Let G ˜ EnDive be the EnDive estimator of the functional
G(p, q) = \int g\big( p(x), q(x) \big)\, q(x)\, dx,
where p and q are d-dimensional probability densities. Additionally, let r = d and assume that s > r . Then,
\sup_{p, q \in \Sigma(s, K_H, \epsilon_0, \epsilon_\infty)} \mathbb{E}\left[ \left( \tilde{G}_{w_0} - G(p, q) \right)^2 \right] \le \frac{C}{N},
where C is a constant.
The proof decomposes the MSE into the variance plus the square of the bias. The variance is bounded easily by using Theorem 2. To bound the bias, we show that the constants in the bias terms are continuous with respect to the densities p and q under an appropriate norm. We then show that Σ s , K H , ϵ 0 , ϵ is compact with respect to this norm and then apply an extreme value theorem. Details are given in Appendix H.

4. Experimental Results

In this section, we discuss the choice of tuning parameters and validate the EnDive estimator’s convergence rates and the central limit theorem. We then use the EnDive estimator to estimate bounds on the Bayes error for a single-cell bone marrow data classification problem.

4.1. Tuning Parameter Selection

The optimization problem in (6) has the parameters η, L, and the set L. By applying (6), the resulting MSE of the ensemble estimator is
O\!\left( \epsilon^2 / N \right) + O\!\left( L\, \eta^2\, \epsilon^2 / N \right),
where each term in the sum comes from the bias and variance, respectively. From this expression and (6), we see that the parameter η provides a tradeoff between bias and variance. Increasing η enables the norm of the weight vector to be larger. This means the feasible region for the variable w increases in size as η increases which can result in decreased bias. However, as η contributes to the variance term, increasing η may result in increased variance.
If all of the constants in (3) and an exact expression for the variance of the ensemble estimator were known, then η could be chosen to optimize this tradeoff in bias and variance and thus minimize the MSE. Since these constants are unknown, we can only choose η based on the asymptotic results. From (8), this would suggest setting η = 1 / L . In practice, we find that for finite sample sizes, the variance in the ensemble estimator is less than the upper bound of L η 2 ϵ 2 / N . Thus, setting η = 1 / L is unnecessarily restrictive. We find that, in practice, setting η = 1 works well.
Upon first glance, it appears that for fixed L, the set L that parameterizes the kernel widths can, in theory, be chosen by minimizing ϵ in (6) over L in addition to w. However, adding this constraint results in a non-convex optimization problem since w does not lie in the non-negative orthant. A parameter search over possible values for L is another possibility. However, this may not be practical as ϵ generally decreases as the size and spread of L increases. In addition, for finite sample sizes, decreasing ϵ does not always directly correspond to a decrease in MSE, as very high or very low values of h ( l ) can lead to inaccurate density estimates, resulting in a larger MSE.
Given these limitations, we provide the following recommendations for L. Denote by l_min the minimum value of l such that f̃_{i,h(l_min)}(X_j) > 0 for i = 1, 2 and all j, and denote the diameter of the support S by D. To ensure that the KDEs are bounded away from zero, we require that min(L) ≥ l_min. As demonstrated in Figure 2, the weights in w_0 are generally largest for the smallest values of L. This indicates that min(L) should also be sufficiently larger than l_min to render an adequate density estimate. Similarly, max(L) should be sufficiently smaller than the diameter D, as high bandwidth values can lead to high bias in the KDEs. Once these values are chosen, all other L values can then be chosen to be equally spaced between min(L) and max(L).
An efficient way to choose l_min and l_max is to select integers k_min and k_max and compute the k_min-th and k_max-th nearest neighbor distances of all the data points. The bandwidths h(l_min) and h(l_max) can then be chosen to be the maximums of these corresponding distances. The parameters l_min and l_max can then be computed from the expression h(l) = l N^{-1/(2d)} (see the sketch below). This choice ensures that a minimum of k_min points are within the kernel bandwidth for the density estimates at all points and that a maximum of k_max points are within the kernel bandwidth for the density estimates at one of the points.
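A minimal sketch of this heuristic (ours; the helper name and the example values of k_min and k_max are illustrative):

```python
import numpy as np
from scipy.spatial import cKDTree

def bandwidth_index_range(X, k_min=3, k_max=30):
    """Pick l_min and l_max from k-NN distances, inverting h(l) = l * N**(-1/(2d))."""
    N, d = X.shape
    # query k_max + 1 neighbours because each point's nearest neighbour is itself
    dists, _ = cKDTree(X).query(X, k=k_max + 1)
    h_min = dists[:, k_min].max()   # largest k_min-NN distance over all points
    h_max = dists[:, k_max].max()   # largest k_max-NN distance over all points
    scale = N ** (1.0 / (2 * d))
    return h_min * scale, h_max * scale

# l_min, l_max = bandwidth_index_range(X)
# L_set = np.linspace(l_min, l_max, 50)   # equally spaced values of l
```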
Once min(L) and max(L) have been chosen, the similarity of the bandwidth values h(l) and basis functions ψ_i(l) increases as L increases, so adding further values eventually yields a negligible decrease in the bias. Hence, L should be chosen to be large enough for sufficient bias reduction but small enough that the bandwidth values h(l) remain sufficiently distinct. In our experiments, we found 30 ≤ L ≤ 60 to be sufficient.

4.2. Convergence Rates Validation: Rényi- α Divergence

To validate our theoretical convergence rate results, we estimated the Rényi-α divergence integral between two truncated multivariate Gaussian distributions with varying dimension and sample sizes. The densities had means μ̄_1 = 0.7 × 1̄_d and μ̄_2 = 0.3 × 1̄_d and covariance matrices 0.4 × I_d, where 1̄_d is the d-dimensional vector of ones and I_d is the d × d identity matrix. We restricted the Gaussians to the unit cube and used α = 0.5.
The left plots in Figure 3 show the MSE (200 trials) of the standard plug-in estimator implemented with a uniform kernel and of the proposed optimally weighted estimator EnDive for various dimensions and sample sizes. The parameter set L was selected based on a range of k-nearest neighbor distances. The bandwidth used for the standard plug-in estimator was selected by setting h_fixed(l) = l N^{-1/(d+1)}, where l was chosen from L to minimize the MSE of the plug-in estimator. For all dimensions and sample sizes, EnDive outperformed the plug-in estimator in terms of MSE. EnDive was also less biased than the plug-in estimator and even had lower variance at smaller sample sizes (e.g., N = 100). This reflects the strength of ensemble estimators: the weighted sum of a set of relatively poor estimators can result in a very good estimator. Note also that for the larger values of N, the ensemble estimator MSE rates approached the theoretical rate based on the estimated log–log slope given in Table 1.
To illustrate the difference between the problems of density estimation and divergence functional estimation, we estimated the average pointwise squared error between the KDE f̃_{1,h} and f_1 in the previous experiment. We used exactly the same bandwidth and kernel as the standard plug-in estimators in Figure 3 and calculated the pointwise error at 10,000 points sampled from f_1. The results are shown in Figure 4. From these results, we see that the KDEs performed worse as the dimension of the densities increased. Additionally, by comparing Figure 3 and Figure 4, we observe that the average pointwise squared error decreased at a much slower rate as a function of the sample size N than the MSE of the plug-in divergence estimators, especially for larger dimensions d.
Our experiments indicated that the proposed ensemble estimator is not sensitive to the tuning parameters. See reference [72] for more details.

4.3. Central Limit Theorem Validation: KL Divergence

To verify the central limit theorem of the EnDive estimator, we estimated the KL divergence between two truncated Gaussian densities, again restricted to the unit cube. We conducted two experiments where (1) the densities were different, with means μ̄_1 = 0.7 × 1̄_d and μ̄_2 = 0.3 × 1̄_d and covariance matrices σ_i × I_d with σ_1 = 0.1 and σ_2 = 0.3; and (2) the densities were the same, with means 0.3 × 1̄_d and covariance matrices 0.3 × I_d. For both experiments, we chose d = 6 and four different sample sizes N. We found that the correspondence between the quantiles of the standard normal distribution and the quantiles of the centered and scaled EnDive estimator was very high under all settings (see Table 2 and Figure 5), which validates Theorem 4.

4.4. Bayes Error Rate Estimation on Single-Cell Data

Using the EnDive estimator, we estimated bounds on the Bayes error rate (BER) of a classification problem involving MARS-seq single-cell RNA-sequencing (scRNA-seq) data measured from developing mouse bone marrow cells enriched for the myeloid and erythroid lineages [73]. However, we first demonstrated the ability of EnDive to estimate the bounds on the BER of a simulated problem. In this simulation, the data were drawn from two classes where each class distribution was a d = 10 dimensional Gaussian distribution with a different mean and the identity covariance matrix. We considered two cases, namely, where the distance between the means was 1 or 3. The BER was calculated in both cases. We then estimated upper and lower bounds on the BER by estimating the Henze–Penrose (HP) divergence [4,6]. Figure 6 shows the average estimated upper and lower bounds on the BER with standard error bars for both cases. For all tested sample sizes, the BER was within one standard deviation of the estimated lower bound. The lower bound was also closer, on average, to the BER for most of the tested sample sizes (lower sample sizes with smaller distances between means were the exceptions). Generally, these results indicate that the true BER is relatively close to the estimated lower bound, on average.
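For reference, the true BER in this Gaussian simulation has a closed form; the short sketch below (ours) computes it under the assumption of equal class priors, which the text does not state explicitly.

```python
from scipy.stats import norm

def gaussian_bayes_error(delta):
    """BER for two identity-covariance Gaussians whose means are `delta` apart
    (equal priors assumed): the problem reduces to a one-dimensional threshold
    test along the mean-difference direction, giving Phi(-delta / 2)."""
    return norm.cdf(-delta / 2.0)

print(gaussian_bayes_error(1.0))   # approximately 0.31
print(gaussian_bayes_error(3.0))   # approximately 0.07
```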
We then estimated similar bounds on the scRNA-seq classification problem using EnDive. We considered the three most common cell types within the data: erythrocytes (eryth.), monocytes (mono.), and basophils (baso.) ( N = 1095 , 559 , 300 , respectively). We estimated the upper and lower bounds on the pairwise BER between these classes using different combinations of genes selected from the Kyoto Encyclopedia of Genes and Genomes (KEGG) pathways associated with the hematopoietic cell lineage [74,75,76]. Each collection of genes contained 11–14 genes. The upper and lower bounds on the BER were estimated using the Henze–Penrose divergence [4,6]. The standard deviations of the bounds for the KEGG-based genes were estimated via 1000 bootstrap iterations. The KEGG-based bounds were compared to BER bounds obtained from 1000 random selections of 12 genes. In all cases, we compared the bounds to the performance of a quadratic discriminant analysis classifier (QDA) with 10-fold cross validation. Note that to correct for undersampling in scRNA-seq data, we first imputed the undersampled data using MAGIC [77].
All results are given in Table 3. From these results, we note that erythrocytes are relatively easy to distinguish from the other two cell types as the BER lower bounds were within nearly two standard deviations of zero when using genes associated with platelet, erythrocyte, and neutrophil development as well as a random selection of 12 genes. This is corroborated by the QDA cross-validated results which were all within two standard deviations of either the upper or lower bound for these gene sets. In contrast, the macrophage-associated genes seem to be less useful for distinguishing erythrocytes than the other gene sets.
We also found that basophils are difficult to distinguish from monocytes using these gene sets. Assuming the relative abundance of each cell type is representative of the population, a trivial upper bound on the BER is 300 / ( 300 + 559 ) 0 . 35 which is between all of the estimated lower and upper bounds. The QDA results were also relatively high (and may be overfitting the data in some cases based on the estimated BER bounds), suggesting that different genes should be explored for this classification problem.

5. Conclusions

We derived the MSE convergence rates for a kernel density plug-in estimator for a large class of divergence functionals. We generalized the theory of optimally weighted ensemble estimation and derived an ensemble divergence estimator EnDive that achieves the parametric rate when the densities are more than d times differentiable. The estimator we derived can be applied to general bounded density support sets and can be implemented without knowledge of the support, which is a distinct advantage over other competing estimators. We also derived the asymptotic distribution of the estimator, provided some guidelines for tuning parameter selection, and experimentally validated the theoretical convergence rates for the case of empirical estimation of the Rényi- α divergence integral. We then performed experiments to examine the estimator’s robustness to the choice of tuning parameters, validated the central limit theorem for KL divergence estimation, and estimated bounds on the Bayes error rate for a single cell classification problem.
We note that based on the proof techniques employed in our work, our weighted ensemble estimators are easily extended beyond divergence estimation to more general distributional functionals which may be integral functionals of any number of probability distributions. We also show in Appendix B that EnDive can be easily modified to obtain an estimator that achieves the parametric rate when the densities are more than d / 2 times differentiable and the functional g has a specific form that includes the Rényi and KL divergences. Future work includes extending this modification to functionals with more general forms. An important divergence of interest in this context is the Henze–Penrose divergence that we used to bound the Bayes error. Further future work will focus on extending this work on divergence estimation to k-nn based estimators where knowledge of the support is, again, not required. This will improve the computational burden, as k-nn estimators require fewer computations than standard KDEs.

Author Contributions

K.M. wrote this article primarily as part of his PhD dissertation under the supervision of A.H. and in collaboration with K.S. A.H., K.M., and K.S. edited the paper. K.S. provided the primary contribution to the proof of Theorem A1 and assisted with all other proofs. K.M. provided the primary contributions for the proofs of all other theorems and performed all other experiments. K.G. contributed to the bias proof.

Funding

This research was funded by Army Research Office (ARO) Multidisciplinary University Research Initiative (MURI) grant number W911NF-15-1-0479, National Science Foundation (NSF) grant number CCF-1217880, and a National Science Foundation (NSF) Graduate Research Fellowship to the first author under grant number F031543.

Conflicts of Interest

The authors declare no conflict of interest. The funding sponsors had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Abbreviations

The following abbreviations are used in this manuscript:
KL          Kullback–Leibler
MSE         Mean squared error
SVM         Support vector machine
KDE         Kernel density estimator
EnDive      Ensemble Divergence
BER         Bayes error rate
scRNA-seq   Single-cell RNA-sequencing
HP          Henze–Penrose

Appendix A. Bias Assumptions

Our full assumptions to prove the bias expressions for the estimator G ˜ h were as follows:
  • ( A . 0 ) : Assume that the kernel K is symmetric, is a product kernel, and has bounded support in each dimension.
  • (A.1): Assume that there exist constants ε_0, ε_∞ such that 0 < ε_0 ≤ f_i(x) ≤ ε_∞ < ∞ for all x ∈ S.
  • (A.2): Assume that the densities satisfy f_i ∈ Σ(s, K_H) in the interior of S with s ≥ 2.
  • ( A . 3 ) : Assume that g has an infinite number of mixed derivatives.
  • (A.4): Assume that the mixed derivatives ∂^{k+l} g(x, y)/(∂x^k ∂y^l), k, l = 0, 1, ..., are strictly upper bounded for ε_0 ≤ x, y ≤ ε_∞.
  • (A.5): Assume the following boundary smoothness condition: Let p_x(u): R^d → R be a polynomial in u of order q ≤ r = ⌊s⌋ whose coefficients are a function of x and are r − q times differentiable. Then, assume that
    \int_{x \in S} \left( \int_{u : K(u) > 0,\; x + uh \notin S} K(u)\, p_x(u)\, du \right)^t dx = v_t(h),
    where v_t(h) admits the expansion
    v_t(h) = \sum_{i=1}^{r-q} e_{i,q,t}\, h^i + o\!\left( h^{r-q} \right),
    for some constants e_{i,q,t}.
We focused on finite support kernels for simplicity in the proofs, although it is likely that our results extend to some infinitely supported kernels as well. We assumed relatively strong conditions on the smoothness of g in A . 3 to enable us to obtain an estimator that achieves good convergence rates without knowledge of the boundary of the support set. While this smoothness condition may seem restrictive, in practice, nearly all divergence and entropy functionals of interest satisfy this condition. Functionals of interest that do not satisfy this assumption (e.g., the total variation distance) typically have at least one point that is not differentiable which violates the assumptions of all competing estimators [54,57,58,59,60,65]. We also note that to obtain simply an upper bound on the bias for the plug-in estimator, much less restrictive assumptions on the functional g are sufficient.
Assumption A . 5 requires the boundary of the density support set to be smooth with respect to the kernel ( K ( u ) ) in the sense that the expectation of the area outside of S with respect to any random variable u with smooth distribution is a smooth function of the bandwidth (h). Note that we do not require knowledge of the support of the unknown densities to actually implement the estimator ( G ˜ h ). As long as assumptions A . 0 A . 5 are satisfied, then the bias results we obtain are valid, and therefore, we can obtain the parametric rate with the EnDive estimator. This is in contrast to many other estimators of information theoretic measures such as those presented in references [59,60,65]. In these cases, the boundary of the support set must be known precisely to perform boundary correction to obtain the parametric rate, since the boundary correction is an explicit step in these estimators. In contrast, we do not need to explicitly perform a boundary correction.
It is not necessary for the boundary of S to have smooth contours with no edges or corners as assumption A . 5 is satisfied by the following case:
Theorem A1.
Assumption A.5 is satisfied when S = [−1, 1]^d and when K is the uniform rectangular kernel; that is, K(x) = 1 for all x such that ‖x‖_1 ≤ 1/2.
The proof is given in Appendix D. The methods used to prove this can be easily extended to show that A . 5 is satisfied with the uniform rectangular kernel and other similar supports with flat surfaces and corners. Furthermore, we showed in reference [78] that A . 5 is satisfied using the uniform spherical kernel with a density support set equal to the unit cube. Note that assumption A . 0 is trivially satisfied by the uniform rectangular kernel as well. Again, this is easily extended to more complicated density support sets that have boundaries that contain flat surfaces and corners. Determining other combinations of kernels and density support sets that satisfy A . 5 is left for future work.
Densities for which assumptions A.1–A.2 hold include the truncated Gaussian distribution and the beta distribution on the unit cube. Functions for which assumptions A.3–A.4 hold include g(x, y) = ln(x/y) and g(x, y) = (x/y)^α.

Appendix B. Modified EnDive

If the functional g has a specific form, we can modify the EnDive estimator to obtain an estimator that achieves the parametric rate when s > d / 2 . Specifically, we have the following theorem:
Theorem A2.
Assume that assumptions A.0–A.5 hold. Furthermore, if g(x, y) has mixed derivatives ∂^{k+l} g(x, y)/(∂x^k ∂y^l) that depend on x, y only through x^α y^β for some α, β ∈ R, then for any positive integer λ ≥ 2, the bias of G̃_h is
\mathbb{B}\big[ \tilde{G}_h \big] = \sum_{j=1}^{\lfloor s \rfloor} c_{10,j}\, h^j + \sum_{q=1}^{\lfloor \lambda/2 \rfloor} \sum_{j=0}^{\lfloor s \rfloor} c_{11,q,j}\, \frac{h^j}{\left( N h^d \right)^q} + O\!\left( h^s + \frac{1}{\left( N h^d \right)^{\lambda/2}} \right).
Divergence functionals that satisfy the mixed derivatives condition required for (A1) include the KL divergence and the Rényi- α divergence. Obtaining similar terms for other divergence functionals requires us to separate the dependence on h of the derivatives of g evaluated at E Z f ˜ i , h ( Z ) . This is left for future work. See Appendix E for details.
As compared to (3), there are many more terms in (A1). These terms enable us to modify the EnDive estimator to achieve the parametric MSE convergence rate when s > d/2 for an appropriate choice of bandwidths, whereas the terms in (3) require s ≥ d to achieve the same rate. This is accomplished by letting h(l) decrease at a faster rate, as follows.
Let δ > 0 and h(l) = l N^{-1/(d+δ)} where l ∈ L. The bias of each estimator in the resulting ensemble has terms proportional to l^{j−dq} N^{−(j+q)/(d+δ)}, where j, q ≥ 0 and j + q > 0. Then, the bias of G̃_{h(l)} satisfies condition C.1 if φ_{j,q,d}(N) = N^{−(j+q)/(d+δ)}, ψ_{j,q}(l) = l^{j−dq}, and
J = \left\{ (j, q) : 0 < j + q < (d + 1)/2,\; q \in \{0, 1, 2, \ldots, \lfloor \lambda/2 \rfloor\},\; j \in \{0, 1, 2, \ldots, \lfloor s \rfloor\} \right\},
as long as L > |J| = I. The variance also satisfies condition C.2. The optimal weight w_0 is found by using (6) to obtain an optimally weighted plug-in divergence functional estimator G̃_{w_0} that achieves the parametric convergence rate if λ ≥ d/δ + 1 and s ≥ (d + δ)/2. Otherwise, if s < (d + δ)/2, we can only guarantee the MSE rate up to O(N^{−2s/(d+δ)}). We refer to this estimator as the modified EnDive estimator and denote it as G̃_Mod. The ensemble estimator G̃_Mod is summarized in Algorithm A1 when δ = 1.
Algorithm A1: The Modified EnDive Estimator
Input: η; L positive real numbers L = {l_1, ..., l_L}; samples Y_1, ..., Y_N from f_1; samples X_1, ..., X_N from f_2; dimension d; function g; kernel K
Output: The modified EnDive estimator G̃_Mod
 1: Solve for w_0 using (6) with φ_{j,q,d}(N) = N^{−(j+q)/(d+1)} and basis functions ψ_{j,q}(l) = l^{j−dq}, l ∈ L, with (j, q) ∈ J defined in (A2)
 2: for all l ∈ L do
 3:   h(l) ← l N^{−1/(d+1)}
 4:   for i = 1 to N do
 5:     f̃_{1,h(l)}(X_i) ← (1/(N h(l)^d)) Σ_{j=1}^{N} K((X_i − Y_j)/h(l)),   f̃_{2,h(l)}(X_i) ← (1/((N − 1) h(l)^d)) Σ_{j=1, j≠i}^{N} K((X_i − X_j)/h(l))
 6:   end for
 7:   G̃_{h(l)} ← (1/N) Σ_{i=1}^{N} g(f̃_{1,h(l)}(X_i), f̃_{2,h(l)}(X_i))
 8: end for
 9: G̃_Mod ← Σ_{l ∈ L} w_0(l) G̃_{h(l)}
The parametric rate can be achieved with G ˜ Mod under less strict assumptions on the smoothness of the densities than those required for G ˜ EnDive . Since δ > 0 can be arbitrary, it is theoretically possible to achieve the parametric rate with the modified estimator as long as s > d / 2 . This is consistent with the rate achieved by the more complex estimators proposed in reference [57]. We also note that the central limit theorem applies and that the convergence is uniform as Theorem 5 applies for s > ( d + δ ) / 2 and s ( d + δ ) / 2 .
These rate improvements come at a cost in the number of parameters L required to implement the weighted ensemble estimator. If s ≥ (d + δ)/2, then the size of J for G̃_Mod is on the order of d²/(8δ). This may lead to increased variance in the ensemble estimator, as indicated by (5).
So far, G̃_Mod can only be applied to functionals g(x, y) with mixed derivatives of the form x^α y^β. Future work is required to extend this estimator to other functionals of interest.

Appendix C. General Results

Here we present the generalized forms of Theorems 1 and 2 where the sample sizes and bandwidths of the two datasets are allowed to differ. In this case, the KDEs are
\tilde{f}_{1,h_1}(X_j) = \frac{1}{N_1 h_1^d} \sum_{i=1}^{N_1} K\!\left( \frac{X_j - Y_i}{h_1} \right), \qquad \tilde{f}_{2,h_2}(X_j) = \frac{1}{M_2 h_2^d} \sum_{\substack{i=1 \\ i \ne j}}^{N_2} K\!\left( \frac{X_j - X_i}{h_2} \right),
where M_2 = N_2 − 1. G(f_1, f_2) is then approximated as
\tilde{G}_{h_1,h_2} = \frac{1}{N_2} \sum_{i=1}^{N_2} g\big( \tilde{f}_{1,h_1}(X_i), \tilde{f}_{2,h_2}(X_i) \big).
We also generalize the bias result to the case where the kernel K has order ν, which means that the j-th moment of the kernel K_i, defined as ∫ t^j K_i(t) dt, is zero for all j = 1, ..., ν − 1 and i = 1, ..., d, where K_i is the kernel in the i-th coordinate. Note that symmetric product kernels have order ν ≥ 2. The following theorem on the bias follows under assumptions A.0–A.5:
Theorem A3.
For general g, the bias of the plug-in estimator ( G ˜ h 1 , h 2 ) is of the form
\mathbb{B}\big[ \tilde{G}_{h_1,h_2} \big] = \sum_{j=1}^{r} \left( c_{4,1,j}\, h_1^j + c_{4,2,j}\, h_2^j \right) + \sum_{j=1}^{r} \sum_{i=1}^{r} c_{5,i,j}\, h_1^j h_2^i + O\!\left( h_1^s + h_2^s \right) + c_{9,1}\, \frac{1}{N_1 h_1^d} + c_{9,2}\, \frac{1}{N_2 h_2^d} + o\!\left( \frac{1}{N_1 h_1^d} + \frac{1}{N_2 h_2^d} \right).
Furthermore, if g(x, y) has mixed derivatives ∂^{k+l} g(x, y)/(∂x^k ∂y^l) that depend on x, y only through x^α y^β for some α, β ∈ R, then for any positive integer λ ≥ 2, the bias is of the form
\mathbb{B}\big[ \tilde{G}_{h_1,h_2} \big] = \sum_{j=1}^{r} \left( c_{4,1,j}\, h_1^j + c_{4,2,j}\, h_2^j \right) + \sum_{j=1}^{r} \sum_{i=1}^{r} c_{5,i,j}\, h_1^j h_2^i + O\!\left( h_1^s + h_2^s \right) + \sum_{j=1}^{\lfloor \lambda/2 \rfloor} \sum_{m=0}^{r} \left( c_{9,1,j,m}\, \frac{h_1^m}{\left( N_1 h_1^d \right)^j} + c_{9,2,j,m}\, \frac{h_2^m}{\left( N_2 h_2^d \right)^j} \right) + \sum_{j=1}^{\lfloor \lambda/2 \rfloor} \sum_{m=0}^{r} \sum_{i=1}^{\lfloor \lambda/2 \rfloor} \sum_{n=0}^{r} c_{9,j,i,m,n}\, \frac{h_1^m h_2^n}{\left( N_1 h_1^d \right)^j \left( N_2 h_2^d \right)^i} + O\!\left( \frac{1}{\left( N_1 h_1^d \right)^{\lambda/2}} + \frac{1}{\left( N_2 h_2^d \right)^{\lambda/2}} \right).
Note that the bandwidth and sample size terms do not depend on the order of the kernel ( ν ). Thus, using a higher-order kernel does not provide any benefit to the convergence rates. This lack of improvement is due to the bias of the density estimators at the boundary of the density support sets. To obtain better convergence rates using higher-order kernels, boundary correction would be necessary [57,60]. In contrast, we improve the convergence rates by using a weighted ensemble that does not require boundary correction.
The variance result requires much less strict assumptions than the bias results:
Theorem A4.
Assume that the functional g in (1) is Lipschitz continuous in both of its arguments with Lipschitz constant C_g. Then, the variance of the plug-in estimator G̃_{h_1,h_2} is bounded by
\mathbb{V}\big[ \tilde{G}_{h_1,h_2} \big] \le C_g^2\, \|K\|_\infty^2 \left( \frac{10}{N_2} + \frac{N_1}{N_2^2} \right).
The proofs of these theorems are in Appendix E and Appendix F. Theorems 1 and 2 then follow.

Appendix D. Proof of Theorem A1 (Boundary Conditions)

Consider a uniform rectangular kernel K(x) that satisfies K(x) = 1 for all x such that ‖x‖_1 ≤ 1/2. Also, consider the family of probability densities f with rectangular support S = [−1, 1]^d. We prove Theorem A1, which states that S satisfies the following smoothness condition (A.5): for any polynomial p_x(u): R^d → R of order q ≤ r = ⌊s⌋ with coefficients that are r − q times differentiable with respect to x,
\int_{x \in S} \left( \int_{u : \|u\|_1 \le \frac{1}{2},\; x + uh \notin S} p_x(u)\, du \right)^t dx = v_t(h),
where v_t(h) has the expansion
v_t(h) = \sum_{i=1}^{r-q} e_{i,q,t}\, h^i + o\!\left( h^{r-q} \right).
Note that the inner integral forces the points x under consideration to be boundary points via the constraint x + uh ∉ S.

Appendix D.1. Single Coordinate Boundary Point

We begin by focusing on points x that are boundary points by virtue of a single coordinate x_i, such that x_i + u_i h ∉ [−1, 1]. Without loss of generality, assume that x_i + u_i h > 1. The inner integral in (A6) can then be evaluated first with respect to (wrt) all coordinates other than i. Since all of these coordinates lie within the support, the inner integral over these coordinates amounts to integration of the polynomial p_x(u) over a symmetric (d − 1)-dimensional rectangular region |u_j| ≤ 1/2 for all j ≠ i. This yields a function Σ_{m=1}^{q} p̃_m(x) u_i^m, where the coefficients p̃_m(x) are each r − q times differentiable wrt x.
With respect to the u i coordinate, the inner integral will have limits from 1 x i h to 1 2 for some 1 > x i > 1 h 2 . Consider the p ˜ q ( x ) u i q monomial term. The inner integral wrt this term yields
m = 1 q p ˜ m ( x ) 1 x i h 1 2 u i m d u i = m = 1 q p ˜ m ( x ) 1 m + 1 1 2 m + 1 1 x i h m + 1 .
Raising the right-hand-side of (A7) to the power of t results in an expression of the form
j = 0 q t p ˇ j ( x ) 1 x i h j ,
where the coefficients p ˇ j ( x ) are r q times differentiable wrt x. Integrating (A8) over all the coordinates in x other than x i results in an expression of the form
j = 0 q t p ¯ j ( x i ) 1 x i h j ,
where, again, the coefficients p ¯ j ( x i ) are r q times differentiable wrt x i . Note that since the other coordinates of x other than x i are far away from the boundary, the coefficients p ¯ j ( x i ) are independent of h. To evaluate the integral of (A9), consider the r q term Taylor series expansion of p ¯ j ( x i ) around x i = 1 . This will yield terms of the form
1 h / 2 1 1 x i j + k h k d x i = 1 x i j + k + 1 h k ( j + k + 1 ) x i = 1 h / 2 x i = 1 = h j + 1 ( j + k + 1 ) 2 j + k + 1 ,
for 0 j r q , and 0 k q t . Combining terms results in the expansion v t ( h ) = i = 1 r q e i , q , t h i + o h r q .
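As a quick numerical sanity check of the integral identity above, the following sketch compares the integral with its closed form for one arbitrary choice of $h$, $j$, and $k$ (the particular values are ours):

```python
from scipy.integrate import quad

h, j, k = 0.2, 2, 1
numeric, _ = quad(lambda x: (1 - x) ** (j + k) / h ** k, 1 - h / 2, 1)
closed_form = h ** (j + 1) / ((j + k + 1) * 2 ** (j + k + 1))
print(numeric, closed_form)   # the two values agree
```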

Appendix D.2. Multiple Coordinate Boundary Point

The case where multiple coordinates of the point $x$ are near the boundary is a straightforward extension of the single-coordinate case, so we only sketch the main ideas here. As an example, consider the case where two of the coordinates are near the boundary. Assume for notational ease that they are $x_1$ and $x_2$ and that $x_1 + u_1 h > 1$ and $x_2 + u_2 h > 1$. The inner integral in (A6) can again be evaluated first wrt all coordinates other than 1 and 2. This yields a function $\sum_{m,j=1}^{q} \tilde{p}_{m,j}(x)\, u_1^m u_2^j$, where the coefficients $\tilde{p}_{m,j}(x)$ are each $r - q$ times differentiable wrt $x$. Integrating this wrt $u_1$ and $u_2$ and then raising the result to the power of $t$ yields a double sum similar to (A8). Integrating this over all the coordinates of $x$ other than $x_1$ and $x_2$ gives a double sum similar to (A9). Then, a Taylor series expansion of the coefficients and integration over $x_1$ and $x_2$ yields the result.

Appendix E. Proof of Theorem A3 (Bias)

In this appendix, we prove the bias results in Theorem A3. The bias of the base kernel density plug-in estimator $\tilde{G}_{h_1,h_2}$ can be expressed as
$$\mathbb{B}\left[\tilde{G}_{h_1,h_2}\right] = \mathbb{E}\left[ g\left(\tilde{f}_{1,h_1}(Z), \tilde{f}_{2,h_2}(Z)\right) - g\left(f_1(Z), f_2(Z)\right) \right] = \mathbb{E}\left[ g\left(\tilde{f}_{1,h_1}(Z), \tilde{f}_{2,h_2}(Z)\right) - g\left(\mathbb{E}_{Z} \tilde{f}_{1,h_1}(Z), \mathbb{E}_{Z} \tilde{f}_{2,h_2}(Z)\right) \right] + \mathbb{E}\left[ g\left(\mathbb{E}_{Z} \tilde{f}_{1,h_1}(Z), \mathbb{E}_{Z} \tilde{f}_{2,h_2}(Z)\right) - g\left(f_1(Z), f_2(Z)\right) \right],$$
where $Z$ is drawn from $f_2$. The first term is the "variance" term, while the second is the "bias" term. We bound these terms using Taylor series expansions under the assumption that $g$ is infinitely differentiable. The Taylor series expansion of the variance term in (A11) depends on variance-like terms of the KDEs, while the Taylor series expansion of the bias term in (A11) depends on the bias of the KDEs.
The Taylor series expansion of $g\left(\mathbb{E}_{Z} \tilde{f}_{1,h_1}(Z), \mathbb{E}_{Z} \tilde{f}_{2,h_2}(Z)\right)$ around $f_1(Z)$ and $f_2(Z)$ is
$$g\left(\mathbb{E}_{Z} \tilde{f}_{1,h_1}(Z), \mathbb{E}_{Z} \tilde{f}_{2,h_2}(Z)\right) = \sum_{i=0}^{\infty} \sum_{j=0}^{\infty} \left. \frac{\partial^{i+j} g(x,y)}{\partial x^i\, \partial y^j} \right|_{\substack{x = f_1(Z) \\ y = f_2(Z)}} \frac{\mathbb{B}_{Z}^{i}\left[\tilde{f}_{1,h_1}(Z)\right] \mathbb{B}_{Z}^{j}\left[\tilde{f}_{2,h_2}(Z)\right]}{i!\, j!},$$
where $\mathbb{B}_{Z}^{j}\left[\tilde{f}_{i,h_i}(Z)\right] = \left(\mathbb{E}_{Z} \tilde{f}_{i,h_i}(Z) - f_i(Z)\right)^{j}$ is the bias of $\tilde{f}_{i,h_i}$ at the point $Z$ raised to the power of $j$. This expansion can be used to control the second term (the bias term) in (A11). To accomplish this, we require an expression for $\mathbb{E}_{Z} \tilde{f}_{i,h_i}(Z) - f_i(Z) = \mathbb{B}_{Z}\left[\tilde{f}_{i,h_i}(Z)\right]$.
To obtain an expression for $\mathbb{B}_{Z}\left[\tilde{f}_{i,h_i}(Z)\right]$, we separately consider the cases where $Z$ is in the interior of the support $\mathcal{S}$ or near the boundary of the support. A point $X \in \mathcal{S}$ is defined to be in the interior of $\mathcal{S}$ if, for all $Y \notin \mathcal{S}$, $K\left(\frac{X - Y}{h_i}\right) = 0$. A point $X \in \mathcal{S}$ is near the boundary of the support if it is not in the interior. Denote the regions in the interior and near the boundary wrt $h_i$ as $\mathcal{S}_{I_i}$ and $\mathcal{S}_{B_i}$, respectively. We will need the following:
Lemma A1.
Let $Z$ be a realization of the density $f_2$ independent of $\tilde{f}_{i,h_i}$ for $i = 1, 2$. Assume that the densities $f_1$ and $f_2$ belong to $\Sigma(s, L)$. Then, for $Z \in \mathcal{S}_{I_i}$,
$$\mathbb{E}_{Z}\left[\tilde{f}_{i,h_i}(Z)\right] = f_i(Z) + \sum_{j=\nu/2}^{\lfloor s/2 \rfloor} c_{i,j}(Z)\, h_i^{2j} + O\left(h_i^{s}\right).$$
Proof. 
Obtaining the lower-order terms in (A12) is a common result in kernel density estimation. However, since we also require the higher-order terms, we present the proof here. Additionally, some of the results in this proof will be useful later. From the linearity of the KDE, we have that if $X$ is drawn from $f_i$ and is independent of $Z$, then
$$\mathbb{E}_{Z}\left[\tilde{f}_{i,h_i}(Z)\right] = \mathbb{E}_{Z}\left[\frac{1}{h_i^d} K\left(\frac{X - Z}{h_i}\right)\right] = \frac{1}{h_i^d} \int K\left(\frac{x - Z}{h_i}\right) f_i(x)\, dx = \int K(t)\, f_i(t h_i + Z)\, dt,$$
where the last step follows from the substitution $t = \frac{x - Z}{h_i}$. Since the density $f_i$ belongs to $\Sigma(s, L)$, by using multi-index notation we can expand it as
$$f_i(t h_i + Z) = f_i(Z) + \sum_{0 < |\alpha| \leq \lfloor s \rfloor} \frac{D^{\alpha} f_i(Z)}{\alpha!}\, (t h_i)^{\alpha} + O\left(\|t h_i\|^{s}\right),$$
where $\alpha! = \alpha_1! \alpha_2! \cdots \alpha_d!$ and $Y^{\alpha} = Y_1^{\alpha_1} Y_2^{\alpha_2} \cdots Y_d^{\alpha_d}$. Combining (A13) and (A14) gives
$$\mathbb{E}_{Z}\left[\tilde{f}_{i,h_i}(Z)\right] = f_i(Z) + \sum_{0 < |\alpha| \leq \lfloor s \rfloor} \frac{D^{\alpha} f_i(Z)}{\alpha!}\, h_i^{|\alpha|} \int t^{\alpha} K(t)\, dt + O\left(h_i^{s}\right) = f_i(Z) + \sum_{j=\nu/2}^{\lfloor s/2 \rfloor} c_{i,j}(Z)\, h_i^{2j} + O\left(h_i^{s}\right),$$
where the last step follows from the fact that K is symmetric and of order ν . ☐
To obtain a similar result for the case where $Z$ is near the boundary of $\mathcal{S}$, we use Assumption A.5.
Lemma A2.
Let $\gamma(x, y)$ be an arbitrary function satisfying $\sup_{x,y} |\gamma(x,y)| < \infty$. Let $\mathcal{S}$ satisfy the boundary smoothness conditions of Assumption A.5. Assume that the densities $f_1$ and $f_2$ belong to $\Sigma(s, L)$, and let $Z$ be a realization of the density $f_2$, independent of $\tilde{f}_{i,h_i}$ for $i = 1, 2$. Let $h = \min(h_1, h_2)$. Then,
$$\mathbb{E}\left[\mathbf{1}_{\left\{Z \in \mathcal{S}_{B_i}\right\}}\, \gamma\left(f_1(Z), f_2(Z)\right)\, \mathbb{B}_{Z}^{t}\left[\tilde{f}_{i,h_i}(Z)\right]\right] = \sum_{j=1}^{r} c_{4,i,j,t}\, h_i^{j} + o\left(h_i^{r}\right),$$
$$\mathbb{E}\left[\mathbf{1}_{\left\{Z \in \mathcal{S}_{B_1} \cap \mathcal{S}_{B_2}\right\}}\, \gamma\left(f_1(Z), f_2(Z)\right)\, \mathbb{B}_{Z}^{t}\left[\tilde{f}_{1,h_1}(Z)\right] \mathbb{B}_{Z}^{q}\left[\tilde{f}_{2,h_2}(Z)\right]\right] = \sum_{j=0}^{r-1} \sum_{i=0}^{r-1} c_{4,j,i,q,t}\, h_1^{j} h_2^{i}\, h + o\left(h^{r}\right).$$
Proof. 
For a fixed $X$ near the boundary of $\mathcal{S}$, we have
$$\mathbb{E}\left[\tilde{f}_{i,h_i}(X)\right] - f_i(X) = \frac{1}{h_i^d} \int_{Y \in \mathcal{S}} K\left(\frac{X - Y}{h_i}\right) f_i(Y)\, dY - f_i(X) = \left[ \frac{1}{h_i^d} \int_{Y :\, K\left(\frac{X-Y}{h_i}\right) > 0} K\left(\frac{X - Y}{h_i}\right) f_i(Y)\, dY - f_i(X) \right] - \frac{1}{h_i^d} \int_{Y \notin \mathcal{S}} K\left(\frac{X - Y}{h_i}\right) f_i(Y)\, dY =: T_{1,i}(X) - T_{2,i}(X).$$
Note that, in $T_{1,i}(X)$, we are extending the integral beyond the support of the density $f_i$. However, by using the same Taylor series expansion method as in the proof of Lemma A1, we always evaluate $f_i$ and its derivatives at the point $X$, which is within the support of $f_i$. It therefore does not matter how we define an extension of $f_i$, since the Taylor series remains the same, and $T_{1,i}(X)$ results in an expression identical to that obtained in (A12).
For the $T_{2,i}(X)$ term, we expand it using multi-index notation as
$$T_{2,i}(X) = \frac{1}{h_i^d} \int_{Y \notin \mathcal{S}} K\left(\frac{X - Y}{h_i}\right) f_i(Y)\, dY = \int_{u :\, h_i u + X \notin \mathcal{S},\, K(u) > 0} K(u)\, f_i(X + h_i u)\, du = \sum_{|\alpha| \leq r} \frac{h_i^{|\alpha|}}{\alpha!} \int_{u :\, h_i u + X \notin \mathcal{S},\, K(u) > 0} K(u)\, D^{\alpha} f_i(X)\, u^{\alpha}\, du + o\left(h_i^{r}\right).$$
Recognizing that the $|\alpha|$-th derivative of $f_i$ is $r - |\alpha|$ times differentiable, we can apply Assumption A.5 to obtain the expectation of $T_{2,i}(X)$ wrt $X$:
$$\mathbb{E}\left[T_{2,i}(X)\right] = \frac{1}{h_i^d} \int_{X} \int_{Y \notin \mathcal{S}} K\left(\frac{X - Y}{h_i}\right) f_i(Y)\, dY\, f_2(X)\, dX = \sum_{|\alpha| \leq r} \frac{h_i^{|\alpha|}}{\alpha!} \int_{X} \int_{u :\, h_i u + X \notin \mathcal{S},\, K(u) > 0} K(u)\, D^{\alpha} f_i(X)\, u^{\alpha}\, du\, f_2(X)\, dX + o\left(h_i^{r}\right) = \sum_{|\alpha| \leq r} \frac{h_i^{|\alpha|}}{\alpha!} \left( \sum_{1 \leq |\beta| \leq r - |\alpha|} e_{\beta, r - |\alpha|}\, h_i^{|\beta|} + o\left(h_i^{r - |\alpha|}\right) \right) + o\left(h_i^{r}\right) = \sum_{j=1}^{r} e_j\, h_i^{j} + o\left(h_i^{r}\right).$$
Similarly, we find that
$$\mathbb{E}\left[T_{2,i}^{t}(X)\right] = \frac{1}{h_i^{dt}} \int_{X} \left( \int_{Y \notin \mathcal{S}} K\left(\frac{X - Y}{h_i}\right) f_i(Y)\, dY \right)^{t} f_2(X)\, dX = \int_{X} \left( \sum_{|\alpha| \leq r} \frac{h_i^{|\alpha|}}{\alpha!} \int_{u :\, h_i u + X \notin \mathcal{S},\, K(u) > 0} K(u)\, D^{\alpha} f_i(X)\, u^{\alpha}\, du \right)^{t} f_2(X)\, dX = \sum_{j=1}^{r} e_{j,t}\, h_i^{j} + o\left(h_i^{r}\right).$$
Combining these results gives
$$\mathbb{E}\left[\mathbf{1}_{\left\{Z \in \mathcal{S}_{B_i}\right\}}\, \gamma\left(f_1(Z), f_2(Z)\right) \left(\mathbb{E}_{Z}\left[\tilde{f}_{i,h_i}(Z)\right] - f_i(Z)\right)^{t}\right] = \mathbb{E}\left[\gamma\left(f_1(Z), f_2(Z)\right) \left(T_{1,i}(Z) - T_{2,i}(Z)\right)^{t}\right] = \mathbb{E}\left[\gamma\left(f_1(Z), f_2(Z)\right) \sum_{j=0}^{t} \binom{t}{j}\, T_{1,i}^{j}(Z) \left(-T_{2,i}(Z)\right)^{t-j}\right] = \sum_{j=1}^{r} c_{4,i,j,t}\, h_i^{j} + o\left(h_i^{r}\right),$$
where the constants are functionals of the kernel, $\gamma$, and the densities.
The expression in (A16) can be proved in a similar manner. ☐
Applying Lemmas A1 and A2 to (A11) gives
$$\mathbb{E}\left[ g\left(\mathbb{E}_{Z} \tilde{f}_{1,h_1}(Z), \mathbb{E}_{Z} \tilde{f}_{2,h_2}(Z)\right) - g\left(f_1(Z), f_2(Z)\right) \right] = \sum_{j=1}^{r} \left( c_{4,1,j}\, h_1^{j} + c_{4,2,j}\, h_2^{j} \right) + \sum_{j=0}^{r-1} \sum_{i=0}^{r-1} c_{5,i,j}\, h_1^{j} h_2^{i}\, h + o\left(h_1^{r} + h_2^{r}\right).$$
For the variance term (the first term) in (A10), the truncated Taylor series expansion of $g\left(\tilde{f}_{1,h_1}(Z), \tilde{f}_{2,h_2}(Z)\right)$ around $\mathbb{E}_{Z} \tilde{f}_{1,h_1}(Z)$ and $\mathbb{E}_{Z} \tilde{f}_{2,h_2}(Z)$ gives
$$g\left(\tilde{f}_{1,h_1}(Z), \tilde{f}_{2,h_2}(Z)\right) = \sum_{i=0}^{\lambda} \sum_{j=0}^{\lambda} \left. \frac{\partial^{i+j} g(x,y)}{\partial x^i\, \partial y^j} \right|_{\substack{x = \mathbb{E}_{Z} \tilde{f}_{1,h_1}(Z) \\ y = \mathbb{E}_{Z} \tilde{f}_{2,h_2}(Z)}} \frac{\tilde{e}_{1,h_1}^{i}(Z)\, \tilde{e}_{2,h_2}^{j}(Z)}{i!\, j!} + o\left(\tilde{e}_{1,h_1}^{\lambda}(Z) + \tilde{e}_{2,h_2}^{\lambda}(Z)\right),$$
where $\tilde{e}_{i,h_i}(Z) := \tilde{f}_{i,h_i}(Z) - \mathbb{E}_{Z} \tilde{f}_{i,h_i}(Z)$. To control the variance term in (A11), we thus require expressions for $\mathbb{E}_{Z}\left[\tilde{e}_{i,h_i}^{j}(Z)\right]$.
Lemma A3.
Let $Z$ be a realization of the density $f_2$ that is in the interior of the support and is independent of $\tilde{f}_{i,h_i}$ for $i = 1, 2$. Let $n(q)$ be the set of integer divisors of $q$, including 1 but excluding $q$. Then,
$$\mathbb{E}_{Z}\left[\tilde{e}_{i,h_i}^{q}(Z)\right] = \begin{cases} \displaystyle \sum_{j \in n(q)} \frac{1}{\left(N_i h_i^d\right)^{q-j}} \sum_{m=0}^{\lfloor s/2 \rfloor} c_{6,i,q,j,m}(Z)\, h_i^{2m} + O\left(\frac{1}{N_i}\right), & q \geq 2, \\ 0, & q = 1, \end{cases}$$
$$\mathbb{E}_{Z}\left[\tilde{e}_{1,h_1}^{q}(Z)\, \tilde{e}_{2,h_2}^{l}(Z)\right] = \begin{cases} \displaystyle \left( \sum_{i \in n(q)} \frac{1}{\left(N_1 h_1^d\right)^{q-i}} \sum_{m=0}^{\lfloor s/2 \rfloor} c_{6,1,q,i,m}(Z)\, h_1^{2m} \right) \left( \sum_{j \in n(l)} \frac{1}{\left(N_2 h_2^d\right)^{l-j}} \sum_{t=0}^{\lfloor s/2 \rfloor} c_{6,2,l,j,t}(Z)\, h_2^{2t} \right) + O\left(\frac{1}{N_1} + \frac{1}{N_2}\right), & q, l \geq 2, \\ 0, & q = 1 \text{ or } l = 1, \end{cases}$$
where $c_{6,i,q,j,m}$ is a functional of $f_1$ and $f_2$.
Proof. 
Define the random variable $V_i(Z) = K\left(\frac{X_i - Z}{h_2}\right) - \mathbb{E}_{Z} K\left(\frac{X_i - Z}{h_2}\right)$. This gives
$$\tilde{e}_{2,h_2}(Z) = \tilde{f}_{2,h_2}(Z) - \mathbb{E}_{Z} \tilde{f}_{2,h_2}(Z) = \frac{1}{N_2 h_2^d} \sum_{i=1}^{N_2} V_i(Z).$$
Clearly, $\mathbb{E}_{Z} V_i(Z) = 0$. From (A13), we have for integer $j \geq 1$
$$\mathbb{E}_{Z}\left[K^{j}\left(\frac{X_i - Z}{h_2}\right)\right] = h_2^d \int K^{j}(t)\, f_2(t h_2 + Z)\, dt = h_2^d \sum_{m=0}^{\lfloor s/2 \rfloor} c_{3,2,j,m}(Z)\, h_2^{2m},$$
where the constants $c_{3,2,j,m}$ depend on the density $f_2$, its derivatives, and the moments of the kernel $K^{j}$. Note that since $K$ is symmetric, the odd moments of $K^{j}$ are zero for $Z$ in the interior of the support. However, all even moments may now be non-zero since $K^{j}$ may be non-negative. By the binomial theorem,
$$\mathbb{E}_{Z}\left[V_i^{j}(Z)\right] = \sum_{k=0}^{j} \binom{j}{k}\, \mathbb{E}_{Z}\left[K^{k}\left(\frac{X_i - Z}{h_2}\right)\right] \left( - \mathbb{E}_{Z}\left[K\left(\frac{X_i - Z}{h_2}\right)\right] \right)^{j-k} = \sum_{k=0}^{j} \binom{j}{k}\, h_2^d\, O\left(h_2^{d(j-k)}\right) \sum_{m=0}^{\lfloor s/2 \rfloor} c_{3,2,k,m}(Z)\, h_2^{2m} = h_2^d \sum_{m=0}^{\lfloor s/2 \rfloor} c_{3,2,j,m}(Z)\, h_2^{2m} + O\left(h_2^{2d}\right).$$
We can use these expressions to simplify $\mathbb{E}_{Z}\left[\tilde{e}_{2,h_2}^{q}(Z)\right]$. As an example, let $q = 2$. Then, since the $X_i$ are independent,
$$\mathbb{E}_{Z}\left[\tilde{e}_{2,h_2}^{2}(Z)\right] = \frac{1}{N_2 h_2^{2d}}\, \mathbb{E}_{Z}\left[V_i^{2}(Z)\right] = \frac{1}{N_2 h_2^{d}} \sum_{m=0}^{\lfloor s/2 \rfloor} c_{3,2,2,m}(Z)\, h_2^{2m} + O\left(\frac{1}{N_2}\right).$$
Similarly, we find that
$$\mathbb{E}_{Z}\left[\tilde{e}_{2,h_2}^{3}(Z)\right] = \frac{1}{N_2^{2} h_2^{3d}}\, \mathbb{E}_{Z}\left[V_i^{3}(Z)\right] = \frac{1}{\left(N_2 h_2^{d}\right)^{2}} \sum_{m=0}^{\lfloor s/2 \rfloor} c_{3,2,3,m}(Z)\, h_2^{2m} + o\left(\frac{1}{N_2}\right).$$
For $q = 4$, we have
$$\mathbb{E}_{Z}\left[\tilde{e}_{2,h_2}^{4}(Z)\right] = \frac{1}{N_2^{3} h_2^{4d}}\, \mathbb{E}_{Z}\left[V_i^{4}(Z)\right] + \frac{N_2 - 1}{N_2^{3} h_2^{4d}} \left(\mathbb{E}_{Z}\left[V_i^{2}(Z)\right]\right)^{2} = \frac{1}{\left(N_2 h_2^{d}\right)^{3}} \sum_{m=0}^{\lfloor s/2 \rfloor} c_{3,2,4,m}(Z)\, h_2^{2m} + \frac{1}{\left(N_2 h_2^{d}\right)^{2}} \sum_{m=0}^{\lfloor s/2 \rfloor} c_{6,2,2,m}(Z)\, h_2^{2m} + o\left(\frac{1}{N_2}\right).$$
The pattern for $q \geq 2$ is then
$$\mathbb{E}_{Z}\left[\tilde{e}_{2,h_2}^{q}(Z)\right] = \sum_{i \in n(q)} \frac{1}{\left(N_2 h_2^{d}\right)^{q-i}} \sum_{m=0}^{\lfloor s/2 \rfloor} c_{6,2,q,i,m}(Z)\, h_2^{2m} + O\left(\frac{1}{N_2}\right).$$
For any integer $q$, the largest possible divisor in $n(q)$ is at most $q/2$. Thus, for a given $q$, the smallest possible exponent on the $N_2 h_2^{d}$ term is $q/2$, and this exponent increases as $q$ increases. A similar expression holds for $\mathbb{E}_{Z}\left[\tilde{e}_{1,h_1}^{q}(Z)\right]$, except that the $X_i$ are replaced with $Y_i$, $f_2$ is replaced with $f_1$, and $N_2$ and $h_2$ are replaced with $N_1$ and $h_1$, respectively, all resulting in different constants. Then, since $\tilde{e}_{1,h_1}(Z)$ and $\tilde{e}_{2,h_2}(Z)$ are conditionally independent given $Z$,
$$\mathbb{E}_{Z}\left[\tilde{e}_{1,h_1}^{q}(Z)\, \tilde{e}_{2,h_2}^{l}(Z)\right] = \left( \sum_{i \in n(q)} \frac{1}{\left(N_1 h_1^{d}\right)^{q-i}} \sum_{m=0}^{\lfloor s/2 \rfloor} c_{6,1,q,i,m}(Z)\, h_1^{2m} \right) \left( \sum_{j \in n(l)} \frac{1}{\left(N_2 h_2^{d}\right)^{l-j}} \sum_{t=0}^{\lfloor s/2 \rfloor} c_{6,2,l,j,t}(Z)\, h_2^{2t} \right) + O\left(\frac{1}{N_1} + \frac{1}{N_2}\right).$$
 ☐
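The leading behavior in Lemma A3 can be checked numerically: the conditional variance of a KDE at an interior point should scale as $1/(N_2 h_2^d)$. The following is a small Monte Carlo sketch; the uniform density, the uniform product kernel, and all constants are our illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)
d, h, trials = 2, 0.3, 4000
z = np.zeros(d)                                   # interior evaluation point

def kde_at_z(N):
    # KDE of f2 = Uniform[-1, 1]^d at z, using a uniform product kernel.
    X = rng.uniform(-1, 1, size=(trials, N, d))
    inside = np.all(np.abs((z - X) / h) <= 0.5, axis=-1)
    return inside.mean(axis=1) / h ** d           # one KDE value per trial

for N in (200, 400, 800):
    vals = kde_at_z(N)
    print(N, vals.var() * N * h ** d)             # roughly constant across N
```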
Applying Lemma A3 to (A18) when taking the conditional expectation given $Z$ in the interior gives an expression of the form
$$\sum_{j=1}^{\lambda/2} \sum_{m=0}^{\lfloor s/2 \rfloor} \left( c_{7,1,j,m}\left(\mathbb{E}_{Z} \tilde{f}_{1,h_1}(Z), \mathbb{E}_{Z} \tilde{f}_{2,h_2}(Z)\right) \frac{h_1^{2m}}{\left(N_1 h_1^{d}\right)^{j}} + c_{7,2,j,m}\left(\mathbb{E}_{Z} \tilde{f}_{1,h_1}(Z), \mathbb{E}_{Z} \tilde{f}_{2,h_2}(Z)\right) \frac{h_2^{2m}}{\left(N_2 h_2^{d}\right)^{j}} \right) + \sum_{j=1}^{\lambda/2} \sum_{m=0}^{\lfloor s/2 \rfloor} \sum_{i=1}^{\lambda/2} \sum_{n=0}^{\lfloor s/2 \rfloor} c_{7,j,i,m,n}\left(\mathbb{E}_{Z} \tilde{f}_{1,h_1}(Z), \mathbb{E}_{Z} \tilde{f}_{2,h_2}(Z)\right) \frac{h_1^{2m} h_2^{2n}}{\left(N_1 h_1^{d}\right)^{j} \left(N_2 h_2^{d}\right)^{i}} + O\left(\frac{1}{\left(N_1 h_1^{d}\right)^{\lambda/2}} + \frac{1}{\left(N_2 h_2^{d}\right)^{\lambda/2}}\right).$$
Note that the functionals $c_{7,i,j,m}$ and $c_{7,j,i,m,n}$ depend on the derivatives of $g$ and on $\mathbb{E}_{Z} \tilde{f}_{i,h_i}(Z)$, which depends on $h_i$. To apply ensemble estimation, we need to separate the dependence on $h_i$ from the constants. If we use ODin1, then it is sufficient to note that in the interior of the support, $\mathbb{E}_{Z} \tilde{f}_{i,h_i}(Z) = f_i(Z) + o(1)$, and therefore $c\left(\mathbb{E}_{Z} \tilde{f}_{1,h_1}(Z), \mathbb{E}_{Z} \tilde{f}_{2,h_2}(Z)\right) = c\left(f_1(Z), f_2(Z)\right) + o(1)$ for some functional $c$. The terms in (A22) then reduce to
$$c_{7,1,1,0}\left(f_1(Z), f_2(Z)\right) \frac{1}{N_1 h_1^{d}} + c_{7,2,1,0}\left(f_1(Z), f_2(Z)\right) \frac{1}{N_2 h_2^{d}} + o\left(\frac{1}{N_1 h_1^{d}} + \frac{1}{N_2 h_2^{d}}\right).$$
For ODin2, we need the higher-order terms. To separate the dependence on $h_i$ from the constants, we need more information about the functional $g$ and its derivatives. Consider the special case where the functional $g(x,y)$ has derivatives of the form $x^{\alpha} y^{\beta}$ with $\alpha, \beta < 0$. This includes the important cases of the KL divergence and the Rényi divergence. The generalized binomial theorem states that if $\binom{\alpha}{m} := \frac{\alpha (\alpha - 1) \cdots (\alpha - m + 1)}{m!}$ and if $q$ and $t$ are real numbers with $|q| > |t|$, then for any complex number $\alpha$,
$$(q + t)^{\alpha} = \sum_{m=0}^{\infty} \binom{\alpha}{m}\, q^{\alpha - m}\, t^{m}.$$
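A short numerical check of this expansion (a sketch; the particular values of $q$, $t$, and $\alpha$ are ours, chosen with $|q| > |t|$ as required):

```python
from math import prod, factorial

def gen_binom(alpha, m):
    # Generalized binomial coefficient: alpha * (alpha - 1) * ... * (alpha - m + 1) / m!
    return prod(alpha - i for i in range(m)) / factorial(m)

q, t, alpha = 2.0, 0.3, -0.5
partial_sum = sum(gen_binom(alpha, m) * q ** (alpha - m) * t ** m for m in range(40))
print(partial_sum, (q + t) ** alpha)   # the partial sums converge to (q + t)**alpha
```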
Since the densities are bounded away from zero, for sufficiently small $h_i$ we have $f_i(Z) > \left| \sum_{j=\nu/2}^{\lfloor s/2 \rfloor} c_{i,j}(Z)\, h_i^{2j} + O\left(h_i^{s}\right) \right|$. Applying the generalized binomial theorem and Lemma A1 gives
$$\left(\mathbb{E}_{Z}\left[\tilde{f}_{i,h_i}(Z)\right]\right)^{\alpha} = \sum_{m=0}^{\infty} \binom{\alpha}{m}\, f_i^{\alpha - m}(Z) \left( \sum_{j=\nu/2}^{\lfloor s/2 \rfloor} c_{i,j}(Z)\, h_i^{2j} + O\left(h_i^{s}\right) \right)^{m}.$$
Since $m$ is an integer, the exponents of the $h_i$ terms are also integers. Thus, in this case, (A22) gives
$$\mathbb{E}_{Z}\left[ g\left(\tilde{f}_{1,h_1}(Z), \tilde{f}_{2,h_2}(Z)\right) - g\left(\mathbb{E}_{Z} \tilde{f}_{1,h_1}(Z), \mathbb{E}_{Z} \tilde{f}_{2,h_2}(Z)\right) \right] = \sum_{j=1}^{\lambda/2} \sum_{m=0}^{\lfloor s/2 \rfloor} \left( c_{8,1,j,m}(Z)\, \frac{h_1^{2m}}{\left(N_1 h_1^{d}\right)^{j}} + c_{8,2,j,m}(Z)\, \frac{h_2^{2m}}{\left(N_2 h_2^{d}\right)^{j}} \right) + \sum_{j=1}^{\lambda/2} \sum_{m=0}^{\lfloor s/2 \rfloor} \sum_{i=1}^{\lambda/2} \sum_{n=0}^{\lfloor s/2 \rfloor} c_{8,j,i,m,n}(Z)\, \frac{h_1^{2m} h_2^{2n}}{\left(N_1 h_1^{d}\right)^{j} \left(N_2 h_2^{d}\right)^{i}} + O\left(\frac{1}{\left(N_1 h_1^{d}\right)^{\lambda/2}} + \frac{1}{\left(N_2 h_2^{d}\right)^{\lambda/2}} + h_1^{s} + h_2^{s}\right).$$
As before, the case where $Z$ is close to the boundary of the support is more complicated. However, by using a technique similar to that in the proof of Lemma A2 for $Z$ at the boundary and combining with the previous results, we find that for general $g$,
$$\mathbb{E}\left[ g\left(\tilde{f}_{1,h_1}(Z), \tilde{f}_{2,h_2}(Z)\right) - g\left(\mathbb{E}_{Z} \tilde{f}_{1,h_1}(Z), \mathbb{E}_{Z} \tilde{f}_{2,h_2}(Z)\right) \right] = c_{9,1}\frac{1}{N_1 h_1^{d}} + c_{9,2}\frac{1}{N_2 h_2^{d}} + o\left(\frac{1}{N_1 h_1^{d}} + \frac{1}{N_2 h_2^{d}}\right).$$
If $g(x,y)$ has derivatives of the form $x^{\alpha} y^{\beta}$ with $\alpha, \beta < 0$, then we similarly obtain
$$\mathbb{E}\left[ g\left(\tilde{f}_{1,h_1}(Z), \tilde{f}_{2,h_2}(Z)\right) - g\left(\mathbb{E}_{Z} \tilde{f}_{1,h_1}(Z), \mathbb{E}_{Z} \tilde{f}_{2,h_2}(Z)\right) \right] = \sum_{j=1}^{\lambda/2} \sum_{m=0}^{r} \left( c_{9,1,j,m}\frac{h_1^{m}}{\left(N_1 h_1^{d}\right)^{j}} + c_{9,2,j,m}\frac{h_2^{m}}{\left(N_2 h_2^{d}\right)^{j}} \right) + \sum_{j=1}^{\lambda/2} \sum_{m=0}^{r} \sum_{i=1}^{\lambda/2} \sum_{n=0}^{r} c_{9,j,i,m,n}\frac{h_1^{m} h_2^{n}}{\left(N_1 h_1^{d}\right)^{j} \left(N_2 h_2^{d}\right)^{i}} + O\left(\frac{1}{\left(N_1 h_1^{d}\right)^{\lambda/2}} + \frac{1}{\left(N_2 h_2^{d}\right)^{\lambda/2}} + h_1^{s} + h_2^{s}\right).$$
Combining (A17) with either (A24) or (A26) completes the proof.

Appendix F. Proof of Theorem A4 (Variance)

To bound the variance of the plug-in estimator G ˜ h 1 , h 2 , we use the Efron–Stein inequality [70]:
Lemma A4 (Efron–Stein Inequality).
Let $X_1, \dots, X_n, X_1', \dots, X_n'$ be independent random variables on the space $\mathcal{S}$. Then, if $f : \mathcal{S} \times \cdots \times \mathcal{S} \to \mathbb{R}$, we have
$$\mathbb{V}\left[f(X_1, \dots, X_n)\right] \leq \frac{1}{2} \sum_{i=1}^{n} \mathbb{E}\left[\left(f(X_1, \dots, X_n) - f(X_1, \dots, X_i', \dots, X_n)\right)^{2}\right].$$
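Before applying the inequality to $\tilde{G}_{h_1,h_2}$, the following sketch illustrates how it is used by checking it via Monte Carlo for a simple statistic; the statistic, the sample size, and the number of trials are our illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(2)
n, trials = 30, 2000

def statistic(x):
    # Any measurable function of the sample; here a smooth nonlinear example.
    return np.mean(np.exp(-x))

values, bounds = [], []
for _ in range(trials):
    x = rng.normal(size=n)
    x_prime = rng.normal(size=n)                 # independent copies X_i'
    values.append(statistic(x))
    sq_diffs = []
    for i in range(n):
        x_i = x.copy()
        x_i[i] = x_prime[i]                      # replace only the i-th coordinate
        sq_diffs.append((statistic(x) - statistic(x_i)) ** 2)
    bounds.append(0.5 * np.sum(sq_diffs))
# Variance is bounded by the Efron-Stein quantity (nearly tight for this additive statistic).
print(np.var(values), np.mean(bounds))
```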
Suppose we have samples $\{X_1, \dots, X_{N_2}, Y_1, \dots, Y_{N_1}\}$ and $\{X_1', X_2, \dots, X_{N_2}, Y_1, \dots, Y_{N_1}\}$, differing only in the first $X$ sample, and denote the respective estimators as $\tilde{G}_{h_1,h_2}$ and $\tilde{G}_{h_1,h_2}'$. We have
$$\left| \tilde{G}_{h_1,h_2} - \tilde{G}_{h_1,h_2}' \right| \leq \frac{1}{N_2} \left| g\left(\tilde{f}_{1,h_1}(X_1), \tilde{f}_{2,h_2}(X_1)\right) - g\left(\tilde{f}_{1,h_1}(X_1'), \tilde{f}_{2,h_2}(X_1')\right) \right| + \frac{1}{N_2} \sum_{j=2}^{N_2} \left| g\left(\tilde{f}_{1,h_1}(X_j), \tilde{f}_{2,h_2}(X_j)\right) - g\left(\tilde{f}_{1,h_1}(X_j), \tilde{f}_{2,h_2}'(X_j)\right) \right|,$$
where $\tilde{f}_{2,h_2}'$ denotes the KDE computed with $X_1$ replaced by $X_1'$. Since $g$ is Lipschitz continuous with constant $C_g$, we have
$$\left| g\left(\tilde{f}_{1,h_1}(X_1), \tilde{f}_{2,h_2}(X_1)\right) - g\left(\tilde{f}_{1,h_1}(X_1'), \tilde{f}_{2,h_2}(X_1')\right) \right| \leq C_g \left( \left| \tilde{f}_{1,h_1}(X_1) - \tilde{f}_{1,h_1}(X_1') \right| + \left| \tilde{f}_{2,h_2}(X_1) - \tilde{f}_{2,h_2}(X_1') \right| \right),$$
$$\left| \tilde{f}_{1,h_1}(X_1) - \tilde{f}_{1,h_1}(X_1') \right| = \frac{1}{N_1 h_1^{d}} \left| \sum_{i=1}^{N_1} \left( K\left(\frac{X_1 - Y_i}{h_1}\right) - K\left(\frac{X_1' - Y_i}{h_1}\right) \right) \right| \leq \frac{1}{N_1 h_1^{d}} \sum_{i=1}^{N_1} \left| K\left(\frac{X_1 - Y_i}{h_1}\right) - K\left(\frac{X_1' - Y_i}{h_1}\right) \right| \;\Longrightarrow\; \mathbb{E}\left[\left( \tilde{f}_{1,h_1}(X_1) - \tilde{f}_{1,h_1}(X_1') \right)^{2}\right] \leq \frac{1}{N_1 h_1^{2d}} \sum_{i=1}^{N_1} \mathbb{E}\left[\left( K\left(\frac{X_1 - Y_i}{h_1}\right) - K\left(\frac{X_1' - Y_i}{h_1}\right) \right)^{2}\right],$$
where the last step follows from Jensen's inequality. Making the substitutions $u_i = \frac{X_1 - Y_i}{h_1}$ and $u_i' = \frac{X_1' - Y_i}{h_1}$ gives
$$\frac{1}{h_1^{2d}}\, \mathbb{E}\left[\left( K\left(\frac{X_1 - Y_i}{h_1}\right) - K\left(\frac{X_1' - Y_i}{h_1}\right) \right)^{2}\right] = \frac{1}{h_1^{2d}} \int \left( K\left(\frac{X_1 - Y_i}{h_1}\right) - K\left(\frac{X_1' - Y_i}{h_1}\right) \right)^{2} f_2(X_1)\, f_2(X_1')\, f_1(Y_i)\, dX_1\, dX_1'\, dY_i \leq 2\, \|K\|^{2}.$$
Combining this with (A29) gives
$$\mathbb{E}\left[\left( \tilde{f}_{1,h_1}(X_1) - \tilde{f}_{1,h_1}(X_1') \right)^{2}\right] \leq 2\, \|K\|^{2}.$$
Similarly,
$$\mathbb{E}\left[\left( \tilde{f}_{2,h_2}(X_1) - \tilde{f}_{2,h_2}(X_1') \right)^{2}\right] \leq 2\, \|K\|^{2}.$$
Combining these results with (A27) gives
$$\mathbb{E}\left[\left( g\left(\tilde{f}_{1,h_1}(X_1), \tilde{f}_{2,h_2}(X_1)\right) - g\left(\tilde{f}_{1,h_1}(X_1'), \tilde{f}_{2,h_2}(X_1')\right) \right)^{2}\right] \leq 8\, C_g^{2} \|K\|^{2}.$$
The second term in (A26) is controlled in a similar way. From the Lipschitz condition,
$$\left| g\left(\tilde{f}_{1,h_1}(X_j), \tilde{f}_{2,h_2}(X_j)\right) - g\left(\tilde{f}_{1,h_1}(X_j), \tilde{f}_{2,h_2}'(X_j)\right) \right|^{2} \leq C_g^{2} \left| \tilde{f}_{2,h_2}(X_j) - \tilde{f}_{2,h_2}'(X_j) \right|^{2} = \frac{C_g^{2}}{M_2^{2} h_2^{2d}} \left( K\left(\frac{X_j - X_1}{h_2}\right) - K\left(\frac{X_j - X_1'}{h_2}\right) \right)^{2}.$$
The $h_2^{2d}$ terms are eliminated by making the substitutions $u_j = \frac{X_j - X_1}{h_2}$ and $u_j' = \frac{X_j - X_1'}{h_2}$ within the expectation, which yields
$$\mathbb{E}\left[\left( g\left(\tilde{f}_{1,h_1}(X_j), \tilde{f}_{2,h_2}(X_j)\right) - g\left(\tilde{f}_{1,h_1}(X_j), \tilde{f}_{2,h_2}'(X_j)\right) \right)^{2}\right] \leq \frac{2\, C_g^{2} \|K\|^{2}}{M_2^{2}},$$
$$\mathbb{E}\left[\left( \sum_{j=2}^{N_2} \left| g\left(\tilde{f}_{1,h_1}(X_j), \tilde{f}_{2,h_2}(X_j)\right) - g\left(\tilde{f}_{1,h_1}(X_j), \tilde{f}_{2,h_2}'(X_j)\right) \right| \right)^{2}\right] = \sum_{j=2}^{N_2} \sum_{i=2}^{N_2} \mathbb{E}\left[ \left| g\left(\tilde{f}_{1,h_1}(X_j), \tilde{f}_{2,h_2}(X_j)\right) - g\left(\tilde{f}_{1,h_1}(X_j), \tilde{f}_{2,h_2}'(X_j)\right) \right| \left| g\left(\tilde{f}_{1,h_1}(X_i), \tilde{f}_{2,h_2}(X_i)\right) - g\left(\tilde{f}_{1,h_1}(X_i), \tilde{f}_{2,h_2}'(X_i)\right) \right| \right] \leq M_2^{2}\, \mathbb{E}\left[\left( g\left(\tilde{f}_{1,h_1}(X_j), \tilde{f}_{2,h_2}(X_j)\right) - g\left(\tilde{f}_{1,h_1}(X_j), \tilde{f}_{2,h_2}'(X_j)\right) \right)^{2}\right] \leq 2\, C_g^{2} \|K\|^{2},$$
where we use the Cauchy–Schwarz inequality to bound the expectation within each summand. Finally, applying Jensen's inequality together with (A29) and (A32) gives
$$\mathbb{E}\left[\left( \tilde{G}_{h_1,h_2} - \tilde{G}_{h_1,h_2}' \right)^{2}\right] \leq \frac{2}{N_2^{2}}\, \mathbb{E}\left[\left( g\left(\tilde{f}_{1,h_1}(X_1), \tilde{f}_{2,h_2}(X_1)\right) - g\left(\tilde{f}_{1,h_1}(X_1'), \tilde{f}_{2,h_2}(X_1')\right) \right)^{2}\right] + \frac{2}{N_2^{2}}\, \mathbb{E}\left[\left( \sum_{j=2}^{N_2} \left| g\left(\tilde{f}_{1,h_1}(X_j), \tilde{f}_{2,h_2}(X_j)\right) - g\left(\tilde{f}_{1,h_1}(X_j), \tilde{f}_{2,h_2}'(X_j)\right) \right| \right)^{2}\right] \leq \frac{20\, C_g^{2} \|K\|^{2}}{N_2^{2}}.$$
Now, suppose we have samples $\{X_1, \dots, X_{N_2}, Y_1, \dots, Y_{N_1}\}$ and $\{X_1, \dots, X_{N_2}, Y_1', Y_2, \dots, Y_{N_1}\}$, differing only in the first $Y$ sample, and denote the respective estimators as $\tilde{G}_{h_1,h_2}$ and $\tilde{G}_{h_1,h_2}'$. Then,
$$\left| g\left(\tilde{f}_{1,h_1}(X_j), \tilde{f}_{2,h_2}(X_j)\right) - g\left(\tilde{f}_{1,h_1}'(X_j), \tilde{f}_{2,h_2}(X_j)\right) \right| \leq C_g \left| \tilde{f}_{1,h_1}(X_j) - \tilde{f}_{1,h_1}'(X_j) \right| = \frac{C_g}{N_1 h_1^{d}} \left| K\left(\frac{X_j - Y_1}{h_1}\right) - K\left(\frac{X_j - Y_1'}{h_1}\right) \right| \;\Longrightarrow\; \mathbb{E}\left[\left( g\left(\tilde{f}_{1,h_1}(X_j), \tilde{f}_{2,h_2}(X_j)\right) - g\left(\tilde{f}_{1,h_1}'(X_j), \tilde{f}_{2,h_2}(X_j)\right) \right)^{2}\right] \leq \frac{2\, C_g^{2} \|K\|^{2}}{N_1^{2}}.$$
Thus, using a similar argument as was used to obtain (A32),
$$\mathbb{E}\left[\left( \tilde{G}_{h_1,h_2} - \tilde{G}_{h_1,h_2}' \right)^{2}\right] \leq \frac{1}{N_2^{2}}\, \mathbb{E}\left[\left( \sum_{j=1}^{N_2} \left| g\left(\tilde{f}_{1,h_1}(X_j), \tilde{f}_{2,h_2}(X_j)\right) - g\left(\tilde{f}_{1,h_1}'(X_j), \tilde{f}_{2,h_2}(X_j)\right) \right| \right)^{2}\right] \leq \frac{2\, C_g^{2} \|K\|^{2}}{N_2^{2}}.$$
Applying the Efron–Stein inequality gives
$$\mathbb{V}\left[\tilde{G}_{h_1,h_2}\right] \leq \frac{10\, C_g^{2} \|K\|^{2}}{N_2} + \frac{C_g^{2} \|K\|^{2}\, N_1}{N_2^{2}}.$$

Appendix G. Proof of Theorem 4 (CLT)

We are interested in the asymptotic distribution of
$$\sqrt{N_2}\left( \tilde{G}_{h_1,h_2} - \mathbb{E}\left[\tilde{G}_{h_1,h_2}\right] \right) = \frac{1}{\sqrt{N_2}} \sum_{j=1}^{N_2} \left( g\left(\tilde{f}_{1,h_1}(X_j), \tilde{f}_{2,h_2}(X_j)\right) - \mathbb{E}_{X_j}\left[ g\left(\tilde{f}_{1,h_1}(X_j), \tilde{f}_{2,h_2}(X_j)\right)\right] \right) + \frac{1}{\sqrt{N_2}} \sum_{j=1}^{N_2} \left( \mathbb{E}_{X_j}\left[ g\left(\tilde{f}_{1,h_1}(X_j), \tilde{f}_{2,h_2}(X_j)\right)\right] - \mathbb{E}\left[ g\left(\tilde{f}_{1,h_1}(X_j), \tilde{f}_{2,h_2}(X_j)\right)\right] \right).$$
By the standard central limit theorem [79], the second term converges in distribution to a Gaussian random variable. If the first term converges in probability to a constant (specifically, 0), then we can use Slutsky's theorem [80] to find the asymptotic distribution. We therefore focus on the first term, which we denote as $\mathbf{V}_{N_2}$.
To prove convergence in probability, we use Chebyshev's inequality. Note that $\mathbb{E}\left[\mathbf{V}_{N_2}\right] = 0$. To bound the variance of $\mathbf{V}_{N_2}$, we again use the Efron–Stein inequality. Let $X_1'$ be drawn from $f_2$, and denote by $\mathbf{V}_{N_2}$ and $\mathbf{V}_{N_2}'$ the sequences using $X_1$ and $X_1'$, respectively. Then,
$$\mathbf{V}_{N_2} - \mathbf{V}_{N_2}' = \frac{1}{\sqrt{N_2}} \left( g\left(\tilde{f}_{1,h_1}(X_1), \tilde{f}_{2,h_2}(X_1)\right) - \mathbb{E}_{X_1}\left[ g\left(\tilde{f}_{1,h_1}(X_1), \tilde{f}_{2,h_2}(X_1)\right)\right] \right) - \frac{1}{\sqrt{N_2}} \left( g\left(\tilde{f}_{1,h_1}(X_1'), \tilde{f}_{2,h_2}(X_1')\right) - \mathbb{E}_{X_1'}\left[ g\left(\tilde{f}_{1,h_1}(X_1'), \tilde{f}_{2,h_2}(X_1')\right)\right] \right) + \frac{1}{\sqrt{N_2}} \sum_{j=2}^{N_2} \left( g\left(\tilde{f}_{1,h_1}(X_j), \tilde{f}_{2,h_2}(X_j)\right) - g\left(\tilde{f}_{1,h_1}(X_j), \tilde{f}_{2,h_2}'(X_j)\right) \right).$$
Note that
$$\mathbb{E}\left[\left( g\left(\tilde{f}_{1,h_1}(X_1), \tilde{f}_{2,h_2}(X_1)\right) - \mathbb{E}_{X_1}\left[ g\left(\tilde{f}_{1,h_1}(X_1), \tilde{f}_{2,h_2}(X_1)\right)\right] \right)^{2}\right] = \mathbb{E}\left[ \mathbb{V}_{X_1}\left[ g\left(\tilde{f}_{1,h_1}(X_1), \tilde{f}_{2,h_2}(X_1)\right) \right] \right].$$
We use the Efron–Stein inequality to bound $\mathbb{V}_{X_1}\left[ g\left(\tilde{f}_{1,h_1}(X_1), \tilde{f}_{2,h_2}(X_1)\right) \right]$.
We do this by bounding the conditional expectation of the term
$$g\left(\tilde{f}_{1,h_1}(X_1), \tilde{f}_{2,h_2}(X_1)\right) - g\left(\tilde{f}_{1,h_1}(X_1), \tilde{f}_{2,h_2}'(X_1)\right),$$
where $\tilde{f}_{2,h_2}'$ is the KDE in which $X_i$ is replaced with $X_i'$ for some $i \neq 1$. Using steps similar to those in Appendix F, we have
$$\mathbb{E}_{X_1}\left[\left( g\left(\tilde{f}_{1,h_1}(X_1), \tilde{f}_{2,h_2}(X_1)\right) - g\left(\tilde{f}_{1,h_1}(X_1), \tilde{f}_{2,h_2}'(X_1)\right) \right)^{2}\right] = O\left(\frac{1}{N_2^{2}}\right).$$
A similar result is obtained when $Y_i$ is replaced with $Y_i'$. Then, by the Efron–Stein inequality, $\mathbb{V}_{X_1}\left[ g\left(\tilde{f}_{1,h_1}(X_1), \tilde{f}_{2,h_2}(X_1)\right) \right] = O\left(\frac{1}{N_2} + \frac{1}{N_1}\right)$.
Therefore,
$$\mathbb{E}\left[\left( \frac{1}{\sqrt{N_2}} \left( g\left(\tilde{f}_{1,h_1}(X_1), \tilde{f}_{2,h_2}(X_1)\right) - \mathbb{E}_{X_1}\left[ g\left(\tilde{f}_{1,h_1}(X_1), \tilde{f}_{2,h_2}(X_1)\right)\right] \right) \right)^{2}\right] = O\left(\frac{1}{N_2^{2}} + \frac{1}{N_1 N_2}\right).$$
A similar result holds for the $g\left(\tilde{f}_{1,h_1}(X_1'), \tilde{f}_{2,h_2}(X_1')\right)$ terms in (A33).
For the third term in (A33),
$$\mathbb{E}\left[\left( \sum_{j=2}^{N_2} \left( g\left(\tilde{f}_{1,h_1}(X_j), \tilde{f}_{2,h_2}(X_j)\right) - g\left(\tilde{f}_{1,h_1}(X_j), \tilde{f}_{2,h_2}'(X_j)\right) \right) \right)^{2}\right] = \sum_{j=2}^{N_2} \sum_{i=2}^{N_2} \mathbb{E}\left[ \left( g\left(\tilde{f}_{1,h_1}(X_j), \tilde{f}_{2,h_2}(X_j)\right) - g\left(\tilde{f}_{1,h_1}(X_j), \tilde{f}_{2,h_2}'(X_j)\right) \right) \left( g\left(\tilde{f}_{1,h_1}(X_i), \tilde{f}_{2,h_2}(X_i)\right) - g\left(\tilde{f}_{1,h_1}(X_i), \tilde{f}_{2,h_2}'(X_i)\right) \right) \right].$$
There are $M_2$ terms where $i = j$, and we have from Appendix F (see (A30)) that
$$\mathbb{E}\left[\left( g\left(\tilde{f}_{1,h_1}(X_j), \tilde{f}_{2,h_2}(X_j)\right) - g\left(\tilde{f}_{1,h_1}(X_j), \tilde{f}_{2,h_2}'(X_j)\right) \right)^{2}\right] \leq \frac{2\, C_g^{2} \|K\|^{2}}{M_2^{2}}.$$
Thus, these terms are $O\left(\frac{1}{M_2}\right)$. There are $M_2^{2} - M_2$ terms with $i \neq j$. In this case, we can perform four substitutions of the form $u_j = \frac{X_j - X_1}{h_2}$ to obtain
$$\mathbb{E}\left[ \left( g\left(\tilde{f}_{1,h_1}(X_j), \tilde{f}_{2,h_2}(X_j)\right) - g\left(\tilde{f}_{1,h_1}(X_j), \tilde{f}_{2,h_2}'(X_j)\right) \right) \left( g\left(\tilde{f}_{1,h_1}(X_i), \tilde{f}_{2,h_2}(X_i)\right) - g\left(\tilde{f}_{1,h_1}(X_i), \tilde{f}_{2,h_2}'(X_i)\right) \right) \right] \leq \frac{4\, C_g^{2} \|K\|^{2}\, h_2^{2d}}{M_2^{2}}.$$
Then, since $h_2^{d} = o(1)$, we get
$$\mathbb{E}\left[\left( \sum_{j=2}^{N_2} \left( g\left(\tilde{f}_{1,h_1}(X_j), \tilde{f}_{2,h_2}(X_j)\right) - g\left(\tilde{f}_{1,h_1}(X_j), \tilde{f}_{2,h_2}'(X_j)\right) \right) \right)^{2}\right] = o(1),$$
$$\mathbb{E}\left[\left( \mathbf{V}_{N_2} - \mathbf{V}_{N_2}' \right)^{2}\right] \leq \frac{3}{N_2}\, \mathbb{E}\left[\left( g\left(\tilde{f}_{1,h_1}(X_1), \tilde{f}_{2,h_2}(X_1)\right) - \mathbb{E}_{X_1}\left[ g\left(\tilde{f}_{1,h_1}(X_1), \tilde{f}_{2,h_2}(X_1)\right)\right] \right)^{2}\right] + \frac{3}{N_2}\, \mathbb{E}\left[\left( g\left(\tilde{f}_{1,h_1}(X_1'), \tilde{f}_{2,h_2}(X_1')\right) - \mathbb{E}_{X_1'}\left[ g\left(\tilde{f}_{1,h_1}(X_1'), \tilde{f}_{2,h_2}(X_1')\right)\right] \right)^{2}\right] + \frac{3}{N_2}\, \mathbb{E}\left[\left( \sum_{j=2}^{N_2} \left( g\left(\tilde{f}_{1,h_1}(X_j), \tilde{f}_{2,h_2}(X_j)\right) - g\left(\tilde{f}_{1,h_1}(X_j), \tilde{f}_{2,h_2}'(X_j)\right) \right) \right)^{2}\right] = o\left(\frac{1}{N_2}\right).$$
Now, consider samples $\{X_1, \dots, X_{N_2}, Y_1, \dots, Y_{N_1}\}$ and $\{X_1, \dots, X_{N_2}, Y_1', Y_2, \dots, Y_{N_1}\}$ and the respective sequences $\mathbf{V}_{N_2}$ and $\mathbf{V}_{N_2}'$. Then,
$$\mathbf{V}_{N_2} - \mathbf{V}_{N_2}' = \frac{1}{\sqrt{N_2}} \sum_{j=1}^{N_2} \left( g\left(\tilde{f}_{1,h_1}(X_j), \tilde{f}_{2,h_2}(X_j)\right) - g\left(\tilde{f}_{1,h_1}'(X_j), \tilde{f}_{2,h_2}(X_j)\right) \right).$$
Using an argument similar to that used to obtain (A33), we have that if $h_1^{d} = o(1)$ and $N_1 \to \infty$, then
$$\mathbb{E}\left[\left( \sum_{j=2}^{N_2} \left( g\left(\tilde{f}_{1,h_1}(X_j), \tilde{f}_{2,h_2}(X_j)\right) - g\left(\tilde{f}_{1,h_1}'(X_j), \tilde{f}_{2,h_2}(X_j)\right) \right) \right)^{2}\right] = o(1),$$
$$\mathbb{E}\left[\left( \mathbf{V}_{N_2} - \mathbf{V}_{N_2}' \right)^{2}\right] = o\left(\frac{1}{N_2}\right).$$
Applying the Efron–Stein inequality gives
$$\mathbb{V}\left[\mathbf{V}_{N_2}\right] = o\left(\frac{N_2 + N_1}{N_2}\right) = o(1).$$
Thus, based on Chebyshev’s inequality,
$$\Pr\left( \left| \mathbf{V}_{N_2} \right| > \epsilon \right) \leq \frac{\mathbb{V}\left[\mathbf{V}_{N_2}\right]}{\epsilon^{2}} = o(1),$$
and therefore, $\mathbf{V}_{N_2}$ converges to zero in probability. By Slutsky's theorem, $\sqrt{N_2}\left( \tilde{G}_{h_1,h_2} - \mathbb{E}\left[\tilde{G}_{h_1,h_2}\right] \right)$ converges in distribution to a zero-mean Gaussian random variable with variance
$$\mathbb{V}\left[ \mathbb{E}_{X}\left[ g\left(\tilde{f}_{1,h_1}(X), \tilde{f}_{2,h_2}(X)\right) \right] \right],$$
where X is drawn from f 2 .
For the weighted ensemble estimator, we wish to know the asymptotic distribution of $\sqrt{N_2}\left( \tilde{G}_{\mathbf{w}} - \mathbb{E}\left[\tilde{G}_{\mathbf{w}}\right] \right)$, where $\tilde{G}_{\mathbf{w}} = \sum_{l \in \bar{l}} w(l)\, \tilde{G}_{h(l)}$. We have
$$\sqrt{N_2}\left( \tilde{G}_{\mathbf{w}} - \mathbb{E}\left[\tilde{G}_{\mathbf{w}}\right] \right) = \frac{1}{\sqrt{N_2}} \sum_{j=1}^{N_2} \sum_{l \in \bar{l}} w(l) \left( g\left(\tilde{f}_{1,h(l)}(X_j), \tilde{f}_{2,h(l)}(X_j)\right) - \mathbb{E}_{X_j}\left[ g\left(\tilde{f}_{1,h(l)}(X_j), \tilde{f}_{2,h(l)}(X_j)\right)\right] \right) + \frac{1}{\sqrt{N_2}} \sum_{j=1}^{N_2} \left( \mathbb{E}_{X_j}\left[ \sum_{l \in \bar{l}} w(l)\, g\left(\tilde{f}_{1,h(l)}(X_j), \tilde{f}_{2,h(l)}(X_j)\right)\right] - \mathbb{E}\left[ \sum_{l \in \bar{l}} w(l)\, g\left(\tilde{f}_{1,h(l)}(X_j), \tilde{f}_{2,h(l)}(X_j)\right)\right] \right).$$
The second term again converges in distribution to a Gaussian random variable by the central limit theorem. The mean and variance are, respectively, zero and
$$\mathbb{V}\left[ \sum_{l \in \bar{l}} w(l)\, \mathbb{E}_{X}\left[ g\left(\tilde{f}_{1,h(l)}(X), \tilde{f}_{2,h(l)}(X)\right) \right] \right].$$
The first term is equal to
$$\sum_{l \in \bar{l}} w(l) \left( \frac{1}{\sqrt{N_2}} \sum_{j=1}^{N_2} \left( g\left(\tilde{f}_{1,h(l)}(X_j), \tilde{f}_{2,h(l)}(X_j)\right) - \mathbb{E}_{X_j}\left[ g\left(\tilde{f}_{1,h(l)}(X_j), \tilde{f}_{2,h(l)}(X_j)\right)\right] \right) \right) = \sum_{l \in \bar{l}} w(l)\, o_P(1) = o_P(1),$$
where $o_P(1)$ denotes convergence to zero in probability. In the last step, we use the fact that if two random variables converge in probability to constants, then any linear combination of them converges in probability to the same linear combination of the constants. Combining this result with Slutsky's theorem completes the proof.

Appendix H. Proof of Theorem 5 (Uniform MSE)

Since the MSE is equal to the square of the bias plus the variance, we can upper bound the left hand side of (7) with
$$\sup_{p, q \in \Sigma(s, K_H, \epsilon_0, \epsilon_\infty)} \mathbb{E}\left[\left( \tilde{G}_{\mathbf{w}_0} - G(p, q) \right)^{2}\right] = \sup_{p, q \in \Sigma(s, K_H, \epsilon_0, \epsilon_\infty)} \left( \mathrm{Bias}\left(\tilde{G}_{\mathbf{w}_0}\right)^{2} + \mathrm{Var}\left(\tilde{G}_{\mathbf{w}_0}\right) \right) \leq \sup_{p, q \in \Sigma(s, K_H, \epsilon_0, \epsilon_\infty)} \mathrm{Bias}\left(\tilde{G}_{\mathbf{w}_0}\right)^{2} + \sup_{p, q \in \Sigma(s, K_H, \epsilon_0, \epsilon_\infty)} \mathrm{Var}\left(\tilde{G}_{\mathbf{w}_0}\right).$$
From the assumptions (Lipschitz $g$, bounded kernel, and the weight calculated from the relaxed optimization problem), we have
$$\sup_{p, q \in \Sigma(s, K_H, \epsilon_0, \epsilon_\infty)} \mathrm{Var}\left(\tilde{G}_{\mathbf{w}_0}\right) \leq \sup_{p, q \in \Sigma(s, K_H, \epsilon_0, \epsilon_\infty)} \frac{11\, C_g^{2} \|\mathbf{w}_0\|_2^{2}\, \|K\|^{2}}{N} = \frac{11\, C_g^{2} \|\mathbf{w}_0\|_2^{2}\, \|K\|^{2}}{N},$$
where the last step follows from the fact that all of the terms are independent of $p$ and $q$.
For the bias, recall that if $g$ is infinitely differentiable and if the optimal weight $\mathbf{w}_0$ is calculated using the relaxed convex optimization problem, then
$$\mathrm{Bias}\left(\tilde{G}_{\mathbf{w}_0}\right) = \sum_{i \in J} c_i(p, q)\, \frac{\epsilon}{\sqrt{N}}, \qquad \mathrm{Bias}\left(\tilde{G}_{\mathbf{w}_0}\right)^{2} = \frac{\epsilon^{2}}{N} \left( \sum_{i \in J} c_i(p, q) \right)^{2}.$$
We use a topology argument to bound the supremum of this term. We use the Extreme Value Theorem [81]:
Theorem A5 (Extreme Value Theorem).
Let $f : X \to \mathbb{R}$ be continuous. If $X$ is compact, then there exist points $c, d \in X$ such that $f(c) \leq f(x) \leq f(d)$ for every $x \in X$.
Based on this theorem, f achieves its minimum and maximum on X. Our approach is to first show that the functionals c i ( p , q ) are continuous wrt p and q in some appropriate norm. We then show that the space Σ ( s , K H , ϵ 0 , ϵ ) is compact wrt this norm. The Extreme Value Theorem can then be applied to bound the supremum of (A34).
We first define the norm. Let $\alpha = s - r > 0$. We use the standard norm on the space $\Sigma(s, K_H)$ [82]:
$$\|f\| = \|f\|_{\Sigma(s, K_H)} = \|f\|_{C^r} + \max_{|\beta| = r} \left| D^{\beta} f \right|_{C^{0, \alpha}},$$
where
$$\|f\|_{C^r} = \max_{|\beta| \leq r}\; \sup_{x \in \mathcal{S}} \left| D^{\beta} f(x) \right|, \qquad |f|_{C^{0, \alpha}} = \sup_{x \neq y \in \mathcal{S}} \frac{|f(x) - f(y)|}{|x - y|^{\alpha}}.$$
Lemma A5.
The functionals $c_i(p, q)$ are continuous wrt the norm $\max\left(\|p\|_{C^r}, \|q\|_{C^r}\right)$.
Proof. 
The functionals $c_i(p, q)$ depend on terms of the form
$$c(p, q) = \int \left. \frac{\partial^{i+j} g(t_1, t_2)}{\partial t_1^i\, \partial t_2^j} \right|_{\substack{t_1 = p(x) \\ t_2 = q(x)}} D^{\beta} p(x)\, D^{\gamma} q(x)\, q(x)\, dx.$$
It is sufficient to show that this is continuous. Let $\epsilon > 0$ and $\max\left( \|p - p_0\|_{C^r}, \|q - q_0\|_{C^r} \right) < \delta$, where $\delta > 0$ will be chosen later. Then, by applying the triangle inequality for integrals and adding and subtracting terms, we have
$$\left| c(p, q) - c(p_0, q_0) \right| \leq \int \left| \left. \frac{\partial^{i+j} g(t_1, t_2)}{\partial t_1^i\, \partial t_2^j} \right|_{\substack{t_1 = p(x) \\ t_2 = q(x)}} \right| \left| D^{\beta} p(x) \right| \left| D^{\gamma} q(x) \right| \left| q(x) - q_0(x) \right| dx + \int \left| \left. \frac{\partial^{i+j} g(t_1, t_2)}{\partial t_1^i\, \partial t_2^j} \right|_{\substack{t_1 = p(x) \\ t_2 = q(x)}} \right| \left| D^{\beta} p(x) \right| \left| q_0(x) \right| \left| D^{\gamma} q(x) - D^{\gamma} q_0(x) \right| dx + \int \left| D^{\beta} p_0(x) \right| \left| D^{\gamma} q_0(x) \right| \left| q_0(x) \right| \left| \left. \frac{\partial^{i+j} g(t_1, t_2)}{\partial t_1^i\, \partial t_2^j} \right|_{\substack{t_1 = p(x) \\ t_2 = q(x)}} - \left. \frac{\partial^{i+j} g(t_1, t_2)}{\partial t_1^i\, \partial t_2^j} \right|_{\substack{t_1 = p_0(x) \\ t_2 = q_0(x)}} \right| dx + \int \left| \left. \frac{\partial^{i+j} g(t_1, t_2)}{\partial t_1^i\, \partial t_2^j} \right|_{\substack{t_1 = p(x) \\ t_2 = q(x)}} \right| \left| D^{\gamma} q_0(x) \right| \left| q_0(x) \right| \left| D^{\beta} p(x) - D^{\beta} p_0(x) \right| dx.$$
Based on Assumption A.4, the absolute value of the mixed derivatives of $g$ is bounded on the range defined for $p$ and $q$ by some constant $C_{i,j}$. Also, $q_0(x) \leq \epsilon_\infty$. Furthermore, since $D^{\gamma} q_0$ and $D^{\beta} p$ are continuous and $\mathcal{S} \subset \mathbb{R}^d$ is compact, the absolute values of the derivatives $D^{\gamma} q_0$ and $D^{\beta} p$ are also bounded by a constant $\epsilon_\infty$. Let $\delta_0 > 0$. Then, since the mixed derivatives of $g$ are continuous on the interval $[\epsilon_0, \epsilon_\infty]$, they are uniformly continuous. Therefore, we can choose a sufficiently small $\delta$ such that (s.t.)
$$\left| \left. \frac{\partial^{i+j} g(t_1, t_2)}{\partial t_1^i\, \partial t_2^j} \right|_{\substack{t_1 = p(x) \\ t_2 = q(x)}} - \left. \frac{\partial^{i+j} g(t_1, t_2)}{\partial t_1^i\, \partial t_2^j} \right|_{\substack{t_1 = p_0(x) \\ t_2 = q_0(x)}} \right| < \delta_0.$$
Combining all of these results with (A36) gives
$$\left| c(p, q) - c(p_0, q_0) \right| \leq \lambda(\mathcal{S})\, \delta\, C_{i,j}\, \epsilon_\infty\left(2 + \epsilon_\infty\right) + \lambda(\mathcal{S})\, \epsilon_\infty \left( \epsilon_\infty^2\, \delta_0 + C_{i,j}\, \delta \right),$$
where $\lambda(\mathcal{S})$ is the Lebesgue measure of $\mathcal{S}$, which is finite since $\mathcal{S}$ is compact. Let $\delta_0' > 0$ be s.t. if $\max\left( \|p - p_0\|_{C^r}, \|q - q_0\|_{C^r} \right) < \delta_0'$, then (A37) is less than $\frac{\epsilon}{4 \lambda(\mathcal{S})\, \epsilon_\infty^3}$. Let $\delta_1 = \frac{\epsilon}{4 \lambda(\mathcal{S})\, C_{i,j}\, \epsilon_\infty\left(1 + \epsilon_\infty\right)}$. Then, if $\delta < \min(\delta_0', \delta_1)$,
$$\left| c(p, q) - c(p_0, q_0) \right| < \epsilon.$$
 ☐
Given that each $c_i(p, q)$ is continuous, $\left( \sum_{i \in J} c_i(p, q) \right)^{2}$ is also continuous wrt $p$ and $q$.
We now argue that $\Sigma(s, K_H)$ is compact. First, recall that a set is relatively compact if its closure is compact. By the Arzelà–Ascoli theorem [83], the space $\Sigma(s, K_H)$ is relatively compact in the topology induced by the $\|\cdot\|_{\Sigma(t, K_H)}$ norm for any $t < s$. We choose $t = r$. It can then be shown that under the $\|\cdot\|_{\Sigma(r, K_H)}$ norm, $\Sigma(s, K_H)$ is complete [82]. Since $\Sigma(s, K_H)$ is contained in a metric space, it is also closed and is therefore equal to its closure. Thus, $\Sigma(s, K_H)$ is compact. Then, since $\Sigma(s, K_H, \epsilon_0, \epsilon_\infty)$ is closed in $\Sigma(s, K_H)$, it is also compact. Therefore, since $\left( \sum_{i \in J} c_i(p, q) \right)^{2} < \infty$ for each $p, q \in \Sigma(s, K_H, \epsilon_0, \epsilon_\infty)$, the Extreme Value Theorem gives
$$\sup_{p, q \in \Sigma(s, K_H, \epsilon_0, \epsilon_\infty)} \mathrm{Bias}\left(\tilde{G}_{\mathbf{w}_0}\right)^{2} = \sup_{p, q \in \Sigma(s, K_H, \epsilon_0, \epsilon_\infty)} \frac{\epsilon^{2}}{N} \left( \sum_{i \in J} c_i(p, q) \right)^{2} = \frac{\epsilon^{2}}{N}\, C,$$
where we use the fact that J is finite (see Section 3.2 or Appendix B for the set J).

References

  1. Cover, T.M.; Thomas, J.A. Elements of Information Theory; John Wiley & Sons: New York, NY, USA, 2012. [Google Scholar]
  2. Avi-Itzhak, H.; Diep, T. Arbitrarily tight upper and lower bounds on the Bayesian probability of error. IEEE Trans. Pattern Anal. Mach. Intell. 1996, 18, 89–91. [Google Scholar] [CrossRef]
  3. Hashlamoun, W.A.; Varshney, P.K.; Samarasooriya, V. A tight upper bound on the Bayesian probability of error. IEEE Trans. Pattern Anal. Mach. Intell. 1994, 16, 220–224. [Google Scholar] [CrossRef]
  4. Moon, K.; Delouille, V.; Hero, A.O., III. Meta learning of bounds on the Bayes classifier error. In Proceedings of the 2015 IEEE Signal Processing and Signal Processing Education Workshop (SP/SPE), Salt Lake City, UT, USA, 9–12 August 2015; pp. 13–18. [Google Scholar]
  5. Chernoff, H. A measure of asymptotic efficiency for tests of a hypothesis based on the sum of observations. Ann. Math. Stat. 1952, 23, 493–507. [Google Scholar] [CrossRef]
  6. Berisha, V.; Wisler, A.; Hero, A.O., III; Spanias, A. Empirically Estimable Classification Bounds Based on a New Divergence Measure. IEEE Trans. Signal Process. 2016, 64, 580–591. [Google Scholar] [CrossRef] [PubMed]
  7. Moon, K.R.; Hero, A.O., III. Multivariate f-Divergence Estimation With Confidence. In Proceedings of the 27th International Conference on Neural Information Processing Systems (NIPS 2014), Montreal, QC, Canada, 8–13 December 2014; pp. 2420–2428. [Google Scholar]
  8. Gliske, S.V.; Moon, K.R.; Stacey, W.C.; Hero, A.O., III. The intrinsic value of HFO features as a biomarker of epileptic activity. In Proceedings of the 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Shanghai, China, 20–25 March 2016. [Google Scholar]
  9. Loh, P.-L. On Lower Bounds for Statistical Learning Theory. Entropy 2017, 19, 617. [Google Scholar] [CrossRef]
  10. Póczos, B.; Schneider, J.G. On the estimation of alpha-divergences. In Proceedings of the 22nd International Conference on Artificial Intelligence and Statistics, Fort Lauderdale, FL, USA, 11–13 April 2011; pp. 609–617. [Google Scholar]
  11. Oliva, J.; Póczos, B.; Schneider, J. Distribution to distribution regression. In Proceedings of the 30th International Conference on Machine Learning, Atlanta, GA, USA, 16–21 June 2013; pp. 1049–1057. [Google Scholar]
  12. Szabó, Z.; Gretton, A.; Póczos, B.; Sriperumbudur, B. Two-stage sampled learning theory on distributions. In Proceeding of The 18th International Conference on Artificial Intelligence and Statistics, San Diego, CA, USA, 9–12 May 2015. [Google Scholar]
  13. Moon, K.R.; Delouille, V.; Li, J.J.; De Visscher, R.; Watson, F.; Hero, A.O., III. Image patch analysis of sunspots and active regions. II. Clustering via matrix factorization. J. Space Weather Space Clim. 2016, 6, A3. [Google Scholar] [CrossRef]
  14. Moon, K.R.; Li, J.J.; Delouille, V.; De Visscher, R.; Watson, F.; Hero, A.O., III. Image patch analysis of sunspots and active regions. I. Intrinsic dimension and correlation analysis. J. Space Weather Space Clim. 2016, 6, A2. [Google Scholar] [CrossRef]
  15. Dhillon, I.S.; Mallela, S.; Kumar, R. A divisive information theoretic feature clustering algorithm for text classification. J. Mach. Learn. Res. 2003, 3, 1265–1287. [Google Scholar]
  16. Banerjee, A.; Merugu, S.; Dhillon, I.S.; Ghosh, J. Clustering with Bregman divergences. J. Mach. Learn. Res. 2005, 6, 1705–1749. [Google Scholar]
  17. Lewi, J.; Butera, R.; Paninski, L. Real-time adaptive information-theoretic optimization of neurophysiology experiments. In Proceedings of the 19th International Conference on Neural Information Processing Systems (NIPS 2006), Vancouver, BC, Canada, 4–9 December 2006; pp. 857–864. [Google Scholar]
  18. Bruzzone, L.; Roli, F.; Serpico, S.B. An extension of the Jeffreys-Matusita distance to multiclass cases for feature selection. IEEE Trans. Geosci. Remote Sens. 1995, 33, 1318–1321. [Google Scholar] [CrossRef]
  19. Guorong, X.; Peiqi, C.; Minhui, W. Bhattacharyya distance feature selection. In Proceedings of the 13th International Conference on Pattern Recognition, Vienna, Austria, 25–29 August 1996; Volume 2, pp. 195–199. [Google Scholar]
  20. Sakate, D.M.; Kashid, D.N. Variable selection via penalized minimum φ-divergence estimation in logistic regression. J. Appl. Stat. 2014, 41, 1233–1246. [Google Scholar] [CrossRef]
  21. Hild, K.E.; Erdogmus, D.; Principe, J.C. Blind source separation using Renyi’s mutual information. IEEE Signal Process. Lett. 2001, 8, 174–176. [Google Scholar] [CrossRef]
  22. Mihoko, M.; Eguchi, S. Robust blind source separation by beta divergence. Neural Comput. 2002, 14, 1859–1886. [Google Scholar] [CrossRef] [PubMed]
  23. Vemuri, B.C.; Liu, M.; Amari, S.; Nielsen, F. Total Bregman divergence and its applications to DTI analysis. IEEE Trans. Med. Imaging 2011, 30, 475–483. [Google Scholar] [CrossRef] [PubMed]
  24. Hamza, A.B.; Krim, H. Image registration and segmentation by maximizing the Jensen-Rényi divergence. In Proceedings of the 4th International Workshop Energy Minimization Methods in Computer Vision and Pattern Recognition (EMMCVPR 2003), Lisbon, Portugal, 7–9 July 2003; pp. 147–163. [Google Scholar]
  25. Liu, G.; Xia, G.; Yang, W.; Xue, N. SAR image segmentation via non-local active contours. In Proceedings of the 2014 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Quebec City, QC, Canada, 13–18 July 2014; pp. 3730–3733. [Google Scholar]
  26. Korzhik, V.; Fedyanin, I. Steganographic applications of the nearest-neighbor approach to Kullback-Leibler divergence estimation. In Proceedings of the 2015 Third International Conference on Digital Information, Networking, and Wireless Communications (DINWC), Moscow, Russia, 3–5 February 2015; pp. 133–138. [Google Scholar]
  27. Basseville, M. Divergence measures for statistical data processing–An annotated bibliography. Signal Process. 2013, 93, 621–633. [Google Scholar] [CrossRef]
  28. Cichocki, A.; Amari, S. Families of alpha-beta-and gamma-divergences: Flexible and robust measures of similarities. Entropy 2010, 12, 1532–1568. [Google Scholar] [CrossRef]
  29. Csiszar, I. Information-type measures of difference of probability distributions and indirect observations. Stud. Sci. Math. Hungar. 1967, 2, 299–318. [Google Scholar]
  30. Ali, S.M.; Silvey, S.D. A general class of coefficients of divergence of one distribution from another. J. R. Stat. Soc. Ser. B Stat. Methodol. 1966, 28, 131–142. [Google Scholar]
  31. Kullback, S.; Leibler, R.A. On information and sufficiency. Ann. Math. Stat. 1951, 22, 79–86. [Google Scholar] [CrossRef]
  32. Rényi, A. On measures of entropy and information. In Proceedings of the Fourth Berkeley Symposium on Mathematical Statistics and Probability, Berkeley, CA, USA, 20 June–30 July 1960; University of California Press: Berkeley, CA, USA, 1961; pp. 547–561. [Google Scholar]
  33. Hellinger, E. Neue Begründung der Theorie quadratischer Formen von unendlichvielen Veränderlichen. J. Rein. Angew. Math. 1909, 136, 210–271. (In German) [Google Scholar]
  34. Bhattacharyya, A. On a measure of divergence between two multinomial populations. Indian J. Stat. 1946, 7, 401–406. [Google Scholar]
  35. Silva, J.F.; Parada, P.A. Shannon entropy convergence results in the countable infinite case. In Proceedings of the 2012 IEEE International Symposium on Information Theory Proceedings (ISIT), Cambridge, MA, USA, 1–6 July 2012; pp. 155–159. [Google Scholar]
  36. Antos, A.; Kontoyiannis, I. Convergence properties of functional estimates for discrete distributions. Random Struct. Algorithms 2001, 19, 163–193. [Google Scholar] [CrossRef]
  37. Valiant, G.; Valiant, P. Estimating the unseen: An n/log (n)-sample estimator for entropy and support size, shown optimal via new CLTs. In Proceedings of the 43rd Annual ACM Symposium on Theory of Computing, San Jose, CA, USA, 6–8 June 2011; pp. 685–694. [Google Scholar]
  38. Jiao, J.; Venkat, K.; Han, Y.; Weissman, T. Minimax estimation of functionals of discrete distributions. IEEE Trans. Inf. Theory 2015, 61, 2835–2885. [Google Scholar] [CrossRef] [PubMed]
  39. Jiao, J.; Venkat, K.; Han, Y.; Weissman, T. Maximum likelihood estimation of functionals of discrete distributions. IEEE Trans. Inf. Theory 2017, 63, 6774–6798. [Google Scholar] [CrossRef]
  40. Valiant, G.; Valiant, P. The power of linear estimators. In Proceedings of the 2011 IEEE 52nd Annual Symposium on Foundations of Computer Science (FOCS), Palm Springs, CA, USA, 22–25 October 2011; pp. 403–412. [Google Scholar]
  41. Paninski, L. Estimation of entropy and mutual information. Neural Comput. 2003, 15, 1191–1253. [Google Scholar] [CrossRef]
  42. Paninski, L. Estimating entropy on m bins given fewer than m samples. IEEE Trans. Inf. Theory 2004, 50, 2200–2203. [Google Scholar] [CrossRef]
  43. Alba-Fernández, M.V.; Jiménez-Gamero, M.D.; Ariza-López, F.J. Minimum Penalized ϕ-Divergence Estimation under Model Misspecification. Entropy 2018, 20, 329. [Google Scholar] [CrossRef]
  44. Ahmed, N.A.; Gokhale, D. Entropy expressions and their estimators for multivariate distributions. IEEE Trans. Inf. Theory 1989, 35, 688–692. [Google Scholar] [CrossRef]
  45. Misra, N.; Singh, H.; Demchuk, E. Estimation of the entropy of a multivariate normal distribution. J. Multivar. Anal. 2005, 92, 324–342. [Google Scholar] [CrossRef]
  46. Gupta, M.; Srivastava, S. Parametric Bayesian estimation of differential entropy and relative entropy. Entropy 2010, 12, 818–843. [Google Scholar] [CrossRef]
  47. Li, S.; Mnatsakanov, R.M.; Andrew, M.E. K-nearest neighbor based consistent entropy estimation for hyperspherical distributions. Entropy 2011, 13, 650–667. [Google Scholar] [CrossRef]
  48. Wang, Q.; Kulkarni, S.R.; Verdú, S. Divergence estimation for multidimensional densities via k-nearest-neighbor distances. IEEE Trans. Inf. Theory 2009, 55, 2392–2405. [Google Scholar] [CrossRef]
  49. Darbellay, G.A.; Vajda, I. Estimation of the information by an adaptive partitioning of the observation space. IEEE Trans. Inf. Theory 1999, 45, 1315–1321. [Google Scholar] [CrossRef] [Green Version]
  50. Silva, J.; Narayanan, S.S. Information divergence estimation based on data-dependent partitions. J. Stat. Plan. Inference 2010, 140, 3180–3198. [Google Scholar] [CrossRef]
  51. Le, T.K. Information dependency: Strong consistency of Darbellay–Vajda partition estimators. J. Stat. Plan. Inference 2013, 143, 2089–2100. [Google Scholar] [CrossRef]
  52. Wang, Q.; Kulkarni, S.R.; Verdú, S. Divergence estimation of continuous distributions based on data-dependent partitions. IEEE Trans. Inf. Theory 2005, 51, 3064–3074. [Google Scholar] [CrossRef]
  53. Hero, A.O., III; Ma, B.; Michel, O.; Gorman, J. Applications of entropic spanning graphs. IEEE Signal Process. Mag. 2002, 19, 85–95. [Google Scholar] [CrossRef] [Green Version]
  54. Moon, K.R.; Hero, A.O., III. Ensemble estimation of multivariate f-divergence. In Proceedings of the 2014 IEEE International Symposium on Information Theory (ISIT), Honolulu, HI, USA, 29 June–4 July 2014; pp. 356–360. [Google Scholar]
  55. Moon, K.R.; Sricharan, K.; Greenewald, K.; Hero, A.O., III. Improving convergence of divergence functional ensemble estimators. In Proceedings of the 2016 IEEE International Symposium on Information Theory (ISIT), Barcelona, Spain, 10–15 July 2016; pp. 1133–1137. [Google Scholar]
  56. Nguyen, X.; Wainwright, M.J.; Jordan, M.I. Estimating divergence functionals and the likelihood ratio by convex risk minimization. IEEE Trans. Inf. Theory 2010, 56, 5847–5861. [Google Scholar] [CrossRef]
  57. Krishnamurthy, A.; Kandasamy, K.; Poczos, B.; Wasserman, L. Nonparametric Estimation of Renyi Divergence and Friends. In Proceedings of the 31st International Conference on Machine Learning, Beijing, China, 21–26 June 2014; pp. 919–927. [Google Scholar]
  58. Singh, S.; Póczos, B. Generalized exponential concentration inequality for Rényi divergence estimation. In Proceedings of the 31st International Conference on Machine Learning (ICML-14), Beijing, China, 21–26 June 2014; pp. 333–341. [Google Scholar]
  59. Singh, S.; Póczos, B. Exponential Concentration of a Density Functional Estimator. In Proceedings of the 27th International Conference on Neural Information Processing Systems (NIPS 2014), Montreal, QC, Canada, 8–13 December 2014; pp. 3032–3040. [Google Scholar]
  60. Kandasamy, K.; Krishnamurthy, A.; Poczos, B.; Wasserman, L.; Robins, J. Nonparametric von Mises Estimators for Entropies, Divergences and Mutual Informations. In Proceedings of the 28th International Conference on Neural Information Processing Systems (NIPS 2015), Montreal, QC, Canada, 7–12 December 2015; pp. 397–405. [Google Scholar]
  61. Härdle, W. Applied Nonparametric Regression; Cambridge University Press: Cambridge, UK, 1990. [Google Scholar]
  62. Berlinet, A.; Devroye, L.; Györfi, L. Asymptotic normality of L1-error in density estimation. Statistics 1995, 26, 329–343. [Google Scholar] [CrossRef]
  63. Berlinet, A.; Györfi, L.; Dénes, I. Asymptotic normality of relative entropy in multivariate density estimation. Publ. l’Inst. Stat. l’Univ. Paris 1997, 41, 3–27. [Google Scholar]
  64. Bickel, P.J.; Rosenblatt, M. On some global measures of the deviations of density function estimates. Ann. Stat. 1973, 1, 1071–1095. [Google Scholar] [CrossRef]
  65. Sricharan, K.; Wei, D.; Hero, A.O., III. Ensemble estimators for multivariate entropy estimation. IEEE Trans. Inf. Theory 2013, 59, 4374–4388. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  66. Berrett, T.B.; Samworth, R.J.; Yuan, M. Efficient multivariate entropy estimation via k-nearest neighbour distances. arXiv 2017, arXiv:1606.00304. [Google Scholar]
  67. Kozachenko, L.; Leonenko, N.N. Sample estimate of the entropy of a random vector. Probl. Peredachi Inf. 1987, 23, 9–16. [Google Scholar]
  68. Hansen, B.E.; (University of Wisconsin, Madison, WI, USA). Lecture Notes on Nonparametrics. 2009. [Google Scholar]
  69. Budka, M.; Gabrys, B.; Musial, K. On accuracy of PDF divergence estimators and their applicability to representative data sampling. Entropy 2011, 13, 1229–1266. [Google Scholar] [CrossRef]
  70. Efron, B.; Stein, C. The jackknife estimate of variance. Ann. Stat. 1981, 9, 586–596. [Google Scholar] [CrossRef]
  71. Wisler, A.; Moon, K.; Berisha, V. Direct ensemble estimation of density functionals. In Proceedings of the 2018 International Conference on Acoustics, Speech, and Signal Processing (ICASSP), Calgary, AB, Canada, 15–20 April 2018. [Google Scholar]
  72. Moon, K.R.; Sricharan, K.; Greenewald, K.; Hero, A.O., III. Nonparametric Ensemble Estimation of Distributional Functionals. arXiv 2016, arXiv:1601.06884v2. [Google Scholar]
  73. Paul, F.; Arkin, Y.; Giladi, A.; Jaitin, D.A.; Kenigsberg, E.; Keren-Shaul, H.; Winter, D.; Lara-Astiaso, D.; Gury, M.; Weiner, A.; et al. Transcriptional heterogeneity and lineage commitment in myeloid progenitors. Cell 2015, 163, 1663–1677. [Google Scholar] [CrossRef] [PubMed]
  74. Kanehisa, M.; Goto, S. KEGG: Kyoto encyclopedia of genes and genomes. Nucleic Acids Res. 2000, 28, 27–30. [Google Scholar] [CrossRef] [PubMed]
  75. Kanehisa, M.; Sato, Y.; Kawashima, M.; Furumichi, M.; Tanabe, M. KEGG as a reference resource for gene and protein annotation. Nucleic Acids Res. 2015, 44, D457–D462. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  76. Kanehisa, M.; Furumichi, M.; Tanabe, M.; Sato, Y.; Morishima, K. KEGG: New perspectives on genomes, pathways, diseases and drugs. Nucleic Acids Res. 2016, 45, D353–D361. [Google Scholar] [CrossRef] [PubMed]
  77. Van Dijk, D.; Sharma, R.; Nainys, J.; Yim, K.; Kathail, P.; Carr, A.J.; Burdsiak, C.; Moon, K.R.; Chaffer, C.; Pattabiraman, D.; et al. Recovering Gene Interactions from Single-Cell Data Using Data Diffusion. Cell 2018, 174, 716–729. [Google Scholar] [CrossRef] [PubMed]
  78. Moon, K.R.; Sricharan, K.; Hero, A.O., III. Ensemble Estimation of Distributional Functionals via k-Nearest Neighbors. arXiv 2017, arXiv:1707.03083. [Google Scholar]
  79. Durrett, R. Probability: Theory and Examples; Cambridge University Press: Cambridge, UK, 2010. [Google Scholar]
  80. Gut, A. Probability: A Graduate Course; Springer: Berlin/Heidelberg, Germany, 2012. [Google Scholar]
  81. Munkres, J. Topology; Prentice Hall: Englewood Cliffs, NJ, USA, 2000. [Google Scholar]
  82. Evans, L.C. Partial Differential Equations; American Mathematical Society: Providence, RI, USA, 2010. [Google Scholar]
  83. Gilbarg, D.; Trudinger, N.S. Elliptic Partial Differential Equations of Second Order; Springer: Berlin/Heidelberg, Germany, 2001. [Google Scholar]
Figure 1. Heat map showing the predicted bias of the divergence functional plug-in estimator G ˜ h based on Theorem 1 as a function of the dimension (d) and sample size (N) when h = N 1 d + 1 . Note the phase transition in the bias as the dimension d increases for a fixed sample size N; the bias remains small only for relatively small values of d. The proposed weighted ensemble estimator EnDive eliminates this phase transition when the densities and the function g are sufficiently smooth.
Figure 2. The optimal weights from (6) when d = 4 , N = 3100 , L = 50 , and l are uniformly spaced between 1.5 and 3. The lowest values of l are given the highest weight. Thus, the minimum value of bandwidth parameters L should be sufficiently large to render an adequate estimate.
Figure 3. (Left) Log–log plot of MSE of the uniform kernel plug-in (“Kernel”) and the optimally weighted EnDive estimator for various dimensions and sample sizes. (Right) Plot of the true values being estimated compared to the average values of the same estimators with standard error bars. The proposed weighted ensemble estimator approaches the theoretical rate (see Table 1), performed better than the plug-in estimator in terms of MSE and was less biased.
Figure 4. Log–log plot of the average pointwise squared error between the KDE f ˜ 1 , h and f 1 for various dimensions and sample sizes using the same bandwidth and kernel as the standard plug-in estimators in Figure 3. The KDE and the density were compared at 10,000 points sampled from f 1 .
Figure 5. QQ-plots comparing the quantiles of a standard normal random variable and the quantiles of the centered and scaled EnDive estimator applied to the Kullback–Leibler (KL) divergence when the distributions were the same and different. Quantiles were computed from 10,000 trials. These plots correspond to the same experiments as in Table 2 when N = 100 and N = 1000. The correspondence between quantiles is high for all cases.
Figure 6. Estimated upper (UB) and lower bounds (LB) on the Bayes error rate (BER) based on estimating the HP divergence between two 10-dimensional Gaussian distributions with identity covariance matrices and distances between means of 1 (left) and 3 (right), respectively. Estimates were calculated using EnDive, with error bars indicating the standard deviation from 400 trials. The upper bound was closer, on average, to the true BER when N was small (≈100–300) and the distance between the means was small. The lower bound was closer, on average, in all other cases.
Table 1. Negative log–log slope of the EnDive mean squared error (MSE) as a function of the sample size for various dimensions. The slope was calculated beginning at N s t a r t . The negative slope was closer to 1 with N s t a r t = 10 2.375 than for N s t a r t = 10 2 indicating that the asymptotic rate had not yet taken effect at N s t a r t = 10 2 .
Estimator | d = 5 | d = 10 | d = 15
N_start = 10^2 | 0.85 | 0.84 | 0.80
N_start = 10^2.375 | 0.96 | 0.96 | 0.95
Table 2. Comparison between quantiles of a standard normal random variable and the quantiles of the centered and scaled EnDive estimator applied to the KL divergence when the distributions were the same and different. Quantiles were computed from 10,000 trials. The parameter ρ gives the correlation coefficient between the quantiles, while β is the estimated slope between the quantiles. The correspondence between quantiles was very high for all cases.
N | Same: 1 − ρ | Same: β | Different: 1 − ρ | Different: β
100 | 2.35 × 10^−4 | 1.014 | 9.97 × 10^−4 | 0.993
500 | 9.48 × 10^−5 | 1.007 | 5.06 × 10^−4 | 0.999
1000 | 8.27 × 10^−5 | 0.996 | 4.30 × 10^−4 | 0.988
5000 | 8.59 × 10^−5 | 0.995 | 4.47 × 10^−4 | 1.005
Table 3. Misclassification rate of a quadratic discriminant analysis classifier (QDA) classifier and estimated upper bounds (UB) and lower bounds (LB) of the pairwise BER between mouse bone marrow cell types using the Henze–Penrose divergence applied to different combinations of genes selected from the KEGG pathways associated with the hematopoietic cell lineage. Results are presented as percentages in the form of mean ± standard deviation. Based on these results, erythrocytes are relatively easy to distinguish from the other two cell types using these gene sets.
Comparison | Platelets | Erythrocytes | Neutrophils | Macrophages | Random
Eryth. vs. Mono., LB | 2.8 ± 1.5 | 1.2 ± 0.6 | 0.6 ± 0.6 | 8.5 ± 1.2 | 14.4 ± 8.4
Eryth. vs. Mono., UB | 5.3 ± 2.9 | 2.4 ± 1.3 | 1.2 ± 1.3 | 15.5 ± 1.9 | 23.2 ± 12.3
Eryth. vs. Mono., Prob. Error | 0.9 | 0.4 | 1.3 | 3.4 | 7.2 ± 5.4
Eryth. vs. Baso., LB | 0.5 ± 0.6 | 0.05 ± 0.12 | 0.6 ± 0.5 | 5.1 ± 0.9 | 11.9 ± 5.5
Eryth. vs. Baso., UB | 1.0 ± 1.1 | 0.1 ± 0.2 | 1.1 ± 0.9 | 9.6 ± 1.6 | 20.3 ± 8.8
Eryth. vs. Baso., Prob. Error | 1.2 | 0.3 | 1.9 | 3.6 | 6.8 ± 5.0
Baso. vs. Mono., LB | 31.1 ± 1.8 | 27.8 ± 3.1 | 27.1 ± 2.6 | 31.6 ± 1.3 | 32.1 ± 2.6
Baso. vs. Mono., UB | 42.8 ± 1.4 | 39.9 ± 2.8 | 39.4 ± 2.4 | 43.2 ± 1.0 | 43.5 ± 1.2
Baso. vs. Mono., Prob. Error | 28.8 | 30.9 | 23.9 | 22.4 | 29.7 ± 5.7
