Article

Bayesian Update with Importance Sampling: Required Sample Size

Department of Statistics, University of Chicago, Chicago, IL 60637, USA
*
Author to whom correspondence should be addressed.
Entropy 2021, 23(1), 22; https://doi.org/10.3390/e23010022
Submission received: 23 September 2020 / Revised: 22 December 2020 / Accepted: 23 December 2020 / Published: 26 December 2020

Abstract

Importance sampling is used to approximate Bayes’ rule in many computational approaches to Bayesian inverse problems, data assimilation and machine learning. This paper reviews and further investigates the required sample size for importance sampling in terms of the χ 2 -divergence between target and proposal. We illustrate through examples the roles that dimension, noise-level and other model parameters play in approximating the Bayesian update with importance sampling. Our examples also facilitate a new direct comparison of standard and optimal proposals for particle filtering.

1. Introduction

Importance sampling is a mechanism to approximate expectations with respect to a target distribution using independent weighted samples from a proposal distribution. The variance of the weights—quantified by the χ 2 -divergence between target and proposal— gives both necessary and sufficient conditions on the sample size to achieve a desired worst-case error over large classes of test functions. This paper contributes to the understanding of importance sampling to approximate the Bayesian update, where the target is a posterior distribution obtained by conditioning the proposal to observed data. We consider illustrative examples where the χ 2 -divergence between target and proposal admits a closed formula and it is hence possible to characterize explicitly the required sample size. These examples showcase the fundamental challenges that importance sampling encounters in high dimension and small noise regimes where target and proposal are far apart. They also facilitate a direct comparison of standard and optimal proposals for particle filtering.
We denote the target distribution by μ and the proposal by π and assume that both are probability distributions in Euclidean space R d . We further suppose that the target is absolutely continuous with respect to the proposal and denote by g the un-normalized density between target and proposal so that, for any suitable test function φ ,
$$\int_{\mathbb{R}^d} \varphi(u)\,\mu(du) = \frac{\int_{\mathbb{R}^d} \varphi(u)\,g(u)\,\pi(du)}{\int_{\mathbb{R}^d} g(u)\,\pi(du)}. \tag{1}$$
We write this succinctly as μ ( φ ) = π ( φ g ) / π ( g ) . For simplicity of exposition, we will assume that g is positive π -almost surely. Importance sampling approximates μ ( φ ) using independent samples { u ( n ) } n = 1 N from the proposal π , computing the numerator and denominator in (1) by Monte Carlo integration,
$$\mu(\varphi) \approx \frac{\frac{1}{N}\sum_{n=1}^{N}\varphi\big(u^{(n)}\big)\,g\big(u^{(n)}\big)}{\frac{1}{N}\sum_{n=1}^{N} g\big(u^{(n)}\big)} = \sum_{n=1}^{N} w^{(n)}\varphi\big(u^{(n)}\big), \qquad w^{(n)} := \frac{g\big(u^{(n)}\big)}{\sum_{\ell=1}^{N} g\big(u^{(\ell)}\big)}. \tag{2}$$
The weights w ( n ) —called autonormalized or self-normalized since they add up to one—can be computed as long as the un-normalized density g can be evaluated point-wise; knowledge of the normalizing constant π ( g ) is not needed. We write (2) briefly as μ ( φ ) μ N ( φ ) , where μ N is the random autonormalized particle approximation measure
$$\mu^N := \sum_{n=1}^{N} w^{(n)}\,\delta_{u^{(n)}}, \qquad u^{(n)} \overset{\mathrm{i.i.d.}}{\sim} \pi.$$
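In code, the estimator in (2) amounts to a few lines. The following Python sketch is our own illustration, not part of the paper, and the helper names are hypothetical; it only requires point-wise evaluation of the un-normalized density g and sampling from the proposal π.

```python
# A minimal sketch (ours, not from the paper) of autonormalized importance sampling:
# approximate mu(phi) = pi(phi * g) / pi(g) with i.i.d. samples from the proposal pi.
import numpy as np

def autonormalized_is(phi, g, sample_proposal, N, rng=None):
    """Self-normalized estimate sum_n w^(n) * phi(u^(n)) from N proposal samples."""
    rng = np.random.default_rng(0) if rng is None else rng
    u = sample_proposal(N, rng)              # shape (N, d), drawn i.i.d. from pi
    w = g(u)                                 # un-normalized weights g(u^(n))
    w = w / w.sum()                          # autonormalized weights, summing to one
    return np.sum(w * phi(u))

# Toy example: target mu = N(1, 1), proposal pi = N(0, 1), so g(u) = exp(u - 1/2) up to a constant.
estimate = autonormalized_is(
    phi=lambda u: u[:, 0],                               # test function phi(u) = u
    g=lambda u: np.exp(u[:, 0] - 0.5),                   # un-normalized density dmu/dpi
    sample_proposal=lambda N, rng: rng.normal(size=(N, 1)),
    N=10_000,
)
print(estimate)   # close to the target mean 1
```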
This paper is concerned with the study of importance sampling in Bayesian formulations of inverse problems, data assimilation and machine learning tasks [1,2,3,4,5], where the relationship μ(du) ∝ g(u)π(du) arises from an application of Bayes' rule P(u|y) ∝ P(y|u)P(u); we interpret u ∈ R^d as a parameter of interest, π ≡ P(u) as a prior distribution on u, g(u) ≡ g(u;y) ∝ P(y|u) as a likelihood function which tacitly depends on observed data y ∈ R^k, and μ ≡ P(u|y) as the posterior distribution of u given y. With this interpretation and terminology, the goal of importance sampling is to approximate posterior expectations using prior samples. Since the prior has fatter tails than the posterior, the Bayesian setting imposes further structure on the analysis of importance sampling. In addition, there are several specific features of the application of importance sampling in Bayesian inverse problems, data assimilation and machine learning that shape our presentation and results.
First, Bayesian formulations have the potential to provide uncertainty quantification by computing several posterior quantiles. This motivates considering a worst-case error analysis [6] of importance sampling over large classes of test functions φ or, equivalently, bounding a certain distance between the random particle approximation measure μ N and the target μ , see [1]. As we will review in Section 2, a key quantity in controlling the error of importance sampling with bounded test functions is the χ 2 -divergence between target and proposal, given by
$$d_{\chi^2}(\mu \,\|\, \pi) = \frac{\pi(g^2)}{\pi(g)^2} - 1.$$
Second, importance sampling in inverse problems, data assimilation and machine learning applications is often used as a building block of more sophisticated computational methods, and in such a case there may be little or no freedom in the choice of proposal. For this reason, throughout this paper we view both target and proposal as given and we focus on investigating the required sample size for accurate importance sampling with bounded test functions, following a similar perspective as [1,7,8]. The complementary question of how to choose the proposal to achieve a small variance for a given test function is not considered here. This latter question is of central interest in the simulation of rare events [9] and has been widely studied since the introduction of importance sampling in [10,11], leading to a plethora of adaptive importance sampling schemes [12].
Third, high dimensional and small noise settings are standard in inverse problems, data assimilation and machine learning, and it is essential to understand the scalability of sampling algorithms in these challenging regimes. The curse of dimension of importance sampling has been extensively investigated [1,13,14,15,16,17]. The early works [13,14] demonstrated a weight collapse phenomenon, by which unless the number of samples is scaled exponentially with the dimension of the parameter, the maximum weight converges to one. The paper [1] also considered small noise limits and further emphasized the need to define precisely the dimension of learning problems. Indeed, while many inverse problems, data assimilation models and machine learning tasks are defined in terms of millions of parameters, their intrinsic dimension can be substantially smaller since ( i ) all parameters may not be equally important; ( i i ) a priori information about some parameters may be available; and ( i i i ) the data may be lower dimensional than the parameter space. If the intrinsic dimension is still large, which occurs often in applications in geophysics and machine learning, it is essential to leverage the correlation structure of the parameters or the observations by performing localization [18,19,20]. Local particle filters are reviewed in [21] and their potential to beat the curse of dimension is investigated from a theoretical viewpoint in [16]. Localization is popular in ensemble Kalman filters [20] and has been employed in Markov chain Monte Carlo [22,23]. Our focus in this paper is not on localization but rather on providing a unified and accessible understanding of the roles that dimension, noise-level and other model parameters play in approximating the Bayesian update. We will do so through examples where it is possible to compute explicitly the χ 2 -divergence between target and proposal, and hence the required sample size.
Finally, in the Bayesian context the normalizing constant π ( g ) represents the marginal likelihood and is often computationally intractable. This motivates our focus on the autonormalized importance sampling estimator in (2), which estimates both π ( g φ ) and π ( g ) using Monte Carlo integration, as opposed to un-normalized variants of importance sampling [8].

Main Goals, Specific Contributions and Outline

The main goal of this paper is to provide a rich and unified understanding of the use of importance sampling to approximate the Bayesian update, while keeping the presentation accessible to a large audience. In Section 2 we investigate the required sample size for importance sampling in terms of the χ 2 -divergence between target and proposal. Section 3 builds on the results in Section 2 to illustrate through numerous examples the fundamental challenges that importance sampling encounters when approximating the Bayesian update in small noise and high dimensional settings. In Section 4 we show how our concrete examples facilitate a new direct comparison of standard and optimal proposals for particle filtering. These examples also allow us to identify model problems where the advantage of the optimal proposal over the standard one can be dramatic.
Next, we provide further details on the specific contributions of each section and link them to the literature. We refer to [1] for a more exhaustive literature review.
  • Section 2 provides a unified perspective on the sufficiency and necessity of having a sample size of the order of the χ 2 -divergence between target and proposal to guarantee accurate importance sampling with bounded test functions. Our analysis and presentation are informed by the specific features that shape the use of importance sampling to approximate Bayes’ rule. The key role of the second moment of the χ 2 -divergence has long been acknowledged [24,25], and it is intimately related to an effective sample size used by practitioners to monitor the performance of importance sampling [26,27]. A topic of recent interest is the development of adaptive importance sampling schemes where the proposal is chosen by minimizing—over some admissible family of distributions—the χ 2 -divergence with respect to the target [28,29]. The main original contributions of Section 2 are Proposition 2 and Theorem 1, which demonstrate the necessity of suitably increasing the sample size with the χ 2 -divergence along singular limit regimes. The idea of Proposition 2 is inspired by [7], but adapted here from relative entropy to χ 2 -divergence. Our results complement sufficient conditions on the sample size derived in [1] and necessary conditions for un-normalized (as opposed to autonormalized) importance sampling in [8].
  • In Section 3, Proposition 4 gives a closed formula for the χ 2 -divergence between posterior and prior in a linear-Gaussian Bayesian inverse problem setting. This formula allows us to investigate the scaling of the χ 2 -divergence (and thereby the rate at which the sample size needs to grow) in several singular limit regimes, including small observation noise, large prior covariance and large dimension. Numerical examples motivate and complement the theoretical results. Large dimension and small noise singular limits were studied in [1] in a diagonal setting. The results here are generalized to a nondiagonal setting, and the presentation is simplified by using the closed formula in Proposition 4. Moreover, we include singular limits arising from large prior covariance. In an infinite dimensional setting, Corollary 1 establishes an equivalence between absolute continuity, finite χ 2 -divergence and finite intrinsic dimension. A similar result was proved in more generality in [1] using the advanced theory of Gaussian measures in Hilbert space [30]; our presentation and proof here are elementary, while still giving the same degree of understanding.
  • In Section 4 we follow [1,13,14,15,31] and investigate the use of importance sampling to approximate Bayes’ rule within one filtering step in a linear-Gaussian setting. We build on the examples and results in Section 3 to identify model regimes where the performance of standard and optimal proposals can be dramatically different. We refer to [2,32] for an introduction to standard and optimal proposals for particle filtering and to [33] for a more advanced presentation. The main original contribution of this section is Theorem 2, which gives a direct comparison of the χ 2 -divergence between target and standard/optimal proposals. This result improves on [1], where only a comparison between the intrinsic dimension was established.

2. Importance Sampling and χ²-Divergence

The aim of this section is to demonstrate the central role of the χ 2 -divergence between target and proposal in determining the accuracy of importance sampling. In Section 2.1 we show how the χ 2 -divergence arises in both sufficient and necessary conditions on the sample size for accurate importance sampling with bounded test functions. Section 2.2 describes a well-known connection between the effective sample size and the χ 2 -divergence. Our investigation of importance sampling to approximate the Bayesian update—developed in Section 3 and Section 4—will make use of a closed formula for the χ 2 -divergence between Gaussians, which we include in Section 2.3 for later reference.

2.1. Sufficient and Necessary Sample Size

Here we provide general sufficient and necessary conditions on the sample size in terms of
$$\rho := d_{\chi^2}(\mu \,\|\, \pi) + 1.$$
We first review upper bounds on the worst-case bias and mean-squared error of importance sampling with bounded test functions, which imply that accurate importance sampling is guaranteed if N ≫ ρ. The proof of the bound for the mean-squared error can be found in [1] and the bound for the bias in [2].
Proposition 1
(Sufficient Sample Size). It holds that
$$\sup_{|\varphi| \le 1}\Big|\mathbb{E}\big[\mu^N(\varphi) - \mu(\varphi)\big]\Big| \le \frac{4}{N}\,\rho, \qquad \sup_{|\varphi| \le 1}\mathbb{E}\Big[\big(\mu^N(\varphi) - \mu(\varphi)\big)^2\Big] \le \frac{4}{N}\,\rho.$$
The next result shows the existence of bounded test functions for which the error may be large with high probability if N ≪ ρ. The idea is taken from [7], but we adapt it here to obtain a result in terms of the χ²-divergence rather than relative entropy. We denote by ḡ := g/π(g) the normalized density between μ and π, and note that ρ = π(ḡ²) = μ(ḡ).
Proposition 2
(Necessary Sample Size). Let U ∼ μ. For any N ≥ 1 and α ∈ (0, 1) there exists a test function φ with |φ| ≤ 1 such that
$$\mathbb{P}\Big(\big|\mu^N(\varphi) - \mu(\varphi)\big| = \mathbb{P}\big(\bar{g}(U) > \alpha\rho\big)\Big) \ge 1 - \frac{N}{\alpha\rho}.$$
Proof. 
Observe that for the test function φ(u) := 1{ḡ(u) ≤ αρ}, we have μ(φ) = P(ḡ(U) ≤ αρ). On the other hand, μ^N(φ) = 1 if and only if ḡ(u^{(n)}) ≤ αρ for all 1 ≤ n ≤ N. This implies that
$$\mathbb{P}\Big(\big|\mu^N(\varphi) - \mu(\varphi)\big| = \mathbb{P}\big(\bar{g}(U) > \alpha\rho\big)\Big) \ge 1 - N\,\mathbb{P}\big(\bar{g}\big(u^{(1)}\big) > \alpha\rho\big) \ge 1 - \frac{N}{\alpha\rho}. \qquad \square$$
The power of Proposition 2 is due to the fact that in some singular limit regimes the distribution of ḡ(U) concentrates around its expected value ρ. In such a case, for any fixed α ∈ (0, 1) the probability of the event ḡ(U) > αρ will not vanish as the singular limit is approached. This idea will become clear in the proof of Theorem 1 below.
In Section 3 and Section 4 we will investigate the required sample size for importance sampling approximation of the Bayesian update in various singular limits, where target and proposal become further apart as a result of reducing the observation noise, increasing the prior uncertainty or increasing the dimension of the problem. To formalize the discussion in a general abstract setting, let {(μ_θ, π_θ)}_{θ>0} be a family of targets and proposals such that ρ_θ := d_{χ²}(μ_θ ∥ π_θ) + 1 → ∞ as θ → ∞. The parameter θ may represent, for instance, the size of the precision of the observation noise, the size of the prior covariance or a suitable notion of dimension. Our next result shows a clear dichotomy in the performance of importance sampling along the singular limit depending on whether the sample size grows sublinearly or superlinearly with ρ_θ.
Theorem 1.
Suppose that ρ_θ → ∞ and that V := sup_θ 𝕍[ḡ_θ(U_θ)]/ρ_θ² < 1, where U_θ ∼ μ_θ. Let δ > 0.
(i) If N_θ = ρ_θ^{1+δ}, then
$$\lim_{\theta\to\infty}\,\sup_{|\varphi| \le 1}\mathbb{E}\Big[\big(\mu_\theta^{N_\theta}(\varphi) - \mu_\theta(\varphi)\big)^2\Big] = 0.$$
(ii) If N_θ = ρ_θ^{1-δ}, then there exists a fixed c ∈ (0, 1) such that
$$\lim_{\theta\to\infty}\,\sup_{|\varphi| \le 1}\mathbb{P}\Big(\big|\mu_\theta^{N_\theta}(\varphi) - \mu_\theta(\varphi)\big| > c\Big) = 1.$$
Proof. 
The proof of (i) follows directly from Proposition 1. For (ii) we fix α ∈ (0, 1 − √V) and c ∈ (0, 1 − V/(1−α)²). Let φ_θ(u) := 1{ḡ_θ(u) ≤ αρ_θ} as in the proof of Proposition 2. Then,
$$\mathbb{P}\big(\bar{g}_\theta(U_\theta) > \alpha\rho_\theta\big) \ge 1 - \mathbb{P}\big(|\rho_\theta - \bar{g}_\theta(U_\theta)| \ge (1-\alpha)\rho_\theta\big) \ge 1 - \frac{\mathbb{V}[\bar{g}_\theta(U_\theta)]}{(1-\alpha)^2\rho_\theta^2} \ge 1 - \frac{V}{(1-\alpha)^2} > c.$$
The bound in (5) implies that
$$\mathbb{P}\Big(\big|\mu_\theta^{N_\theta}(\varphi_\theta) - \mu_\theta(\varphi_\theta)\big| > c\Big) \ge \mathbb{P}\Big(\big|\mu_\theta^{N_\theta}(\varphi_\theta) - \mu_\theta(\varphi_\theta)\big| = \mathbb{P}\big(\bar{g}_\theta(U_\theta) > \alpha\rho_\theta\big)\Big) \ge 1 - \frac{N_\theta}{\alpha\rho_\theta}.$$
This completes the proof, since if N_θ = ρ_θ^{1−δ} the right-hand side goes to 1 as θ → ∞. □
Remark 1.
As noted in [1], the bound in Proposition 1 is sharp in the asymptotic limit N → ∞. This implies that, for any fixed θ, the bound 4ρ_θ/N becomes sharp as N → ∞. We point out that this statement does not provide direct understanding of the joint limit θ → ∞, N_θ → ∞ analyzed in Theorem 1.
The assumption that V < 1 can be verified for some singular limits of interest, in particular for the small noise and large prior covariance limits studied in Section 3 and Section 4; details will be given in Example 1. While the assumption V < 1 may fail to hold in high dimensional singular limit regimes, the works [13,14] and our numerical example in Section 4.4 provide compelling evidence of the need to suitably scale N with ρ along those singular limits in order to avoid a weight-collapse phenomenon. Further theoretical evidence was given for un-normalized importance sampling in [8].

2.2. χ²-Divergence and Effective Sample Size

The previous subsection provides theoretical nonasymptotic and asymptotic evidence that a sample size larger than ρ is necessary and sufficient for accurate importance sampling. Here we recall a well known connection between the χ 2 -divergence and the effective sample size
$$\mathrm{ESS} := \frac{1}{\sum_{n=1}^{N}\big(w^{(n)}\big)^2},$$
widely used by practitioners to monitor the performance of importance sampling. Note that always 1 ≤ ESS ≤ N; it is intuitive that ESS = 1 if the maximum weight is one and ESS = N if the maximum weight is 1/N. To see the connection between ESS and ρ, note that
$$\frac{\mathrm{ESS}}{N} = \frac{1}{N\sum_{n=1}^{N}\big(w^{(n)}\big)^2} = \frac{\Big(\sum_{n=1}^{N} g\big(u^{(n)}\big)\Big)^2}{N\sum_{n=1}^{N} g\big(u^{(n)}\big)^2} = \frac{\Big(\frac{1}{N}\sum_{n=1}^{N} g\big(u^{(n)}\big)\Big)^2}{\frac{1}{N}\sum_{n=1}^{N} g\big(u^{(n)}\big)^2} \approx \frac{\pi(g)^2}{\pi(g^2)}.$$
Therefore, ESS ≈ N/ρ: if the sample-based estimate of ρ is significantly larger than N, the ESS will be small, which gives a warning sign that a larger sample size N may be needed.
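As a concrete illustration (ours, with hypothetical helper names), the ESS diagnostic can be computed directly from the un-normalized weights:

```python
# A minimal sketch (ours) of the effective sample size diagnostic ESS = 1 / sum_n (w^(n))^2.
import numpy as np

def effective_sample_size(unnormalized_weights):
    w = np.asarray(unnormalized_weights, dtype=float)
    w = w / w.sum()                   # autonormalize
    return 1.0 / np.sum(w ** 2)       # between 1 (weight collapse) and N (uniform weights)

# Usage with the Gaussian toy example of Section 1: target N(1, 1), proposal N(0, 1).
rng = np.random.default_rng(1)
u = rng.normal(size=10_000)
ess = effective_sample_size(np.exp(u - 0.5))
print(ess, ess / u.size)              # ESS / N estimates 1 / rho, so ESS is roughly N / rho
```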

2.3. χ²-Divergence between Gaussians

We conclude this section by recalling an analytical expression for the χ 2 -divergence between Gaussians. In order to make our presentation self-contained, we include a proof in Appendix A.
Proposition 3.
Let μ = N(m, C) and π = N(0, Σ). If 2Σ − C ≻ 0, then
$$\rho = \frac{|\Sigma|}{\sqrt{|2\Sigma - C|\,|C|}}\,\exp\Big(m^\top(2\Sigma - C)^{-1}m\Big).$$
Otherwise, ρ = ∞.
It is important to note that nondegenerate Gaussians μ = N(m, C) and π = N(0, Σ) in R^d are always equivalent. However, ρ = ∞ unless 2Σ − C ≻ 0. In Section 3 and Section 4 we will interpret μ as a posterior and π as a prior, in which case automatically C ⪯ Σ and hence ρ < ∞.
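The closed formula of Proposition 3 is straightforward to evaluate numerically. The sketch below is our own illustration (the function name is hypothetical); the Monte Carlo check uses ρ = π(g²)/π(g)², so agreement is only up to sampling error.

```python
# A sketch (ours) of Proposition 3: rho = chi^2(mu || pi) + 1 for mu = N(m, C), pi = N(0, Sigma);
# rho is infinite unless 2*Sigma - C is positive definite.
import numpy as np

def rho_gaussian(m, C, Sigma):
    D = 2.0 * Sigma - C
    if np.min(np.linalg.eigvalsh(D)) <= 0.0:
        return np.inf
    prefactor = np.linalg.det(Sigma) / np.sqrt(np.linalg.det(D) * np.linalg.det(C))
    return prefactor * np.exp(m @ np.linalg.solve(D, m))

# One-dimensional Monte Carlo sanity check of rho = pi(g^2) / pi(g)^2.
rng = np.random.default_rng(2)
m, C, Sigma = np.array([0.7]), np.array([[0.5]]), np.array([[1.0]])
u = rng.normal(0.0, np.sqrt(Sigma[0, 0]), size=200_000)
g = np.exp(-0.5 * (u - m[0]) ** 2 / C[0, 0] + 0.5 * u ** 2 / Sigma[0, 0])   # dmu/dpi up to a constant
print(rho_gaussian(m, C, Sigma), np.mean(g ** 2) / np.mean(g) ** 2)
```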

3. Importance Sampling for Inverse Problems

In this section we study the use of importance sampling in a linear Bayesian inverse problem setting where the target and the proposal represent, respectively, the posterior and the prior distribution. In Section 3.1 we describe our setting and we also derive an explicit formula for the χ 2 -divergence between the posterior and the prior. This explicit formula allows us to determine the scaling of the χ 2 -divergence in small noise regimes (Section 3.2), in the limit of large prior covariance (Section 3.3) and in a high dimensional limit (Section 3.4). Our overarching goal is to show how the sample size for importance sampling needs to grow along these limiting regimes in order to maintain the same level of accuracy.

3.1. Inverse Problem Setting and χ 2 -Divergence between Posterior and Prior

Let A ∈ R^{k×d} be a given design matrix and consider the linear inverse problem of recovering u ∈ R^d from data y ∈ R^k related by
$$y = Au + \eta, \qquad \eta \sim N(0, \Gamma), \tag{9}$$
where η represents measurement noise. We assume henceforth that we are in the underdetermined case k ≤ d, and that A is full rank. We follow a Bayesian perspective and set a Gaussian prior on u, u ∼ π = N(0, Σ). We assume throughout that Σ and Γ are given symmetric positive definite matrices. The solution to the Bayesian formulation of the inverse problem is the posterior distribution μ of u given y. We are interested in studying the performance of importance sampling with proposal π (the prior) and target μ (the posterior). We recall that under this linear-Gaussian model the posterior distribution is Gaussian [2], and we denote it by μ = N(m, C). In order to characterize the posterior mean m and covariance C, we introduce standard data assimilation notation
$$S := A\Sigma A^\top + \Gamma, \qquad K := \Sigma A^\top S^{-1},$$
where K is the Kalman gain. Then we have
$$m = Ky, \qquad C = (I - KA)\,\Sigma. \tag{10}$$
Proposition 3 allows us to obtain a closed formula for the quantity ρ = d_{χ²}(μ ∥ π) + 1, noting that (10) implies that
$$2\Sigma - C = (I + KA)\,\Sigma = \Sigma + \Sigma A^\top S^{-1}A\Sigma \succ 0.$$
The proof of the following result is then immediate and therefore omitted.
Proposition 4.
Consider the inverse problem (9) with prior u ∼ π = N(0, Σ) and posterior μ = N(m, C) with m and C defined in (10). Then ρ = d_{χ²}(μ ∥ π) + 1 admits the explicit characterization
$$\rho = \big(|I + KA|\,|I - KA|\big)^{-\frac{1}{2}}\exp\Big(y^\top K^\top\big[(I + KA)\,\Sigma\big]^{-1}Ky\Big).$$
In the following two subsections we employ this result to derive by direct calculation the rate at which the posterior and prior become further apart —in χ 2 -divergence— in small noise and large prior regimes. To carry out the analysis we use parameters γ 2 , σ 2 > 0 to scale the noise covariance, γ 2 Γ , and the prior covariance, σ 2 Σ .
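The following sketch (ours; the function name is hypothetical) assembles the posterior of (10) and evaluates ρ through Proposition 3, which is equivalent to the formula in Proposition 4; it can be used to explore the noise, prior and dimension scalings studied next.

```python
# A sketch (ours) combining (10) with Proposition 3: form the posterior N(m, C) of the
# linear-Gaussian inverse problem y = A u + eta and evaluate rho = chi^2(posterior || prior) + 1.
import numpy as np

def posterior_and_rho(A, Gamma, Sigma, y):
    S = A @ Sigma @ A.T + Gamma
    K = Sigma @ A.T @ np.linalg.inv(S)                 # Kalman gain
    m = K @ y
    C = (np.eye(A.shape[1]) - K @ A) @ Sigma           # posterior covariance
    D = 2.0 * Sigma - C                                # positive definite since C <= Sigma
    rho = np.linalg.det(Sigma) / np.sqrt(np.linalg.det(D) * np.linalg.det(C)) \
        * np.exp(m @ np.linalg.solve(D, m))
    return m, C, rho

rng = np.random.default_rng(3)
d = k = 5
A, Sigma, Gamma = rng.normal(size=(k, d)), np.eye(d), 0.1 * np.eye(k)
y = A @ rng.normal(size=d) + rng.multivariate_normal(np.zeros(k), Gamma)
print(posterior_and_rho(A, Gamma, Sigma, y)[2])        # the required sample size scales with this rho
```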

3.2. Importance Sampling in Small Noise Regime

To illustrate the behavior of importance sampling in small noise regimes, we first introduce a motivating numerical study. A similar numerical setup was used in [13] to demonstrate the curse of dimension of importance sampling. We consider the inverse problem setting in Equation (9) with d = k = 5 and noise covariance γ²Γ. We conduct 18 numerical experiments with a fixed data y. For each experiment, we perform importance sampling 400 times and report in Figure 1 a histogram with the largest autonormalized weight in each of the 400 realizations. The 18 experiments differ in the sample size N and the size of the observation noise γ². In both Figure 1a,b we consider three choices of N (rows) and three choices of γ² (columns). These choices are made so that in Figure 1a it holds that N = γ^{-4} along the bottom-left to top-right diagonal, while in Figure 1b N = γ^{-6} along the same diagonal.
We can see from Figure 1a that N = γ^{-4} is not a fast enough growth of N to avoid weight collapse: the histograms skew to the right along the bottom-left to top-right diagonal, suggesting that weight collapse (i.e., one weight dominating the rest, and therefore the variance of the weights being large) is bound to occur in the joint limit N → ∞, γ → 0 with N = γ^{-4}. In contrast, the histograms in Figure 1b skew to the left along the same diagonal, suggesting that the probability of weight collapse is significantly reduced if N = γ^{-6}. We observe a similar behavior with other choices of dimension d by conducting experiments with sample sizes N = γ^{-(d+1)} and N = γ^{-(d-1)}, and we include the histograms with d = k = 4 in Appendix C. Our next result shows that these empirical findings are in agreement with the scaling of the χ²-divergence between target and proposal in the small noise limit.
Proposition 5.
Consider the inverse problem setting
$$y = Au + \eta, \qquad \eta \sim N(0, \gamma^2\Gamma), \qquad u \sim \pi = N(0, \Sigma).$$
Let μ_γ denote the posterior and let ρ_γ = d_{χ²}(μ_γ ∥ π) + 1. Then, for almost every y,
$$\rho_\gamma \asymp \gamma^{-k}$$
in the small noise limit γ → 0.
Proof. 
Let K_γ = ΣAᵀ(AΣAᵀ + γ²Γ)⁻¹ denote the Kalman gain. We observe that K_γ → ΣAᵀ(AΣAᵀ)⁻¹ as γ → 0 under our standing assumption that A is full rank. Let UΞVᵀ be the singular value decomposition of Γ^{-1/2}AΣ^{1/2} and {ξ_i}_{i=1}^k be the singular values. Then we have
$$K_\gamma A \sim \Sigma^{\frac12}A^\top\Gamma^{-\frac12}\big(\Gamma^{-\frac12}A\Sigma A^\top\Gamma^{-\frac12} + \gamma^2 I\big)^{-1}\Gamma^{-\frac12}A\Sigma^{\frac12} = V\Xi^\top U^\top\big(U\Xi V^\top V\Xi^\top U^\top + \gamma^2 I\big)^{-1}U\Xi V^\top \sim \Xi^\top\big(\Xi\Xi^\top + \gamma^2 I\big)^{-1}\Xi,$$
where here "∼" denotes matrix similarity. It follows that I + K_γA converges to a finite limit, and so does the exponent yᵀK_γᵀΣ⁻¹(I + K_γA)⁻¹K_γy in Proposition 4. On the other hand,
$$\big(|I + K_\gamma A|\,|I - K_\gamma A|\big)^{-\frac{1}{2}} \asymp \prod_{i=1}^{k}\Big(\frac{\gamma^2}{\xi_i^2 + \gamma^2}\Big)^{-\frac{1}{2}} \asymp \gamma^{-k}$$
as γ → 0. The conclusion follows. □
Remark 2.
The scaling of ρ with γ 2 obtained in Proposition 5 agrees with the lower bound reported in Table 1 in [1], which was derived in a diagonalized setting.
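A compact version of the experiment behind Figure 1 (and, after exchanging the roles of the noise and prior scalings, Figure 2) can be written as follows. This is our own sketch with hypothetical parameter choices, not the authors' code.

```python
# A sketch (ours) of the weight-collapse experiment of Figure 1: track the largest
# autonormalized weight over repeated importance sampling runs as gamma shrinks and N grows.
import numpy as np

def max_weights(A, Sigma, noise_cov, y, N, reps, rng):
    """Largest self-normalized weight in each of `reps` independent runs."""
    d = A.shape[1]
    L = np.linalg.cholesky(Sigma)
    noise_prec = np.linalg.inv(noise_cov)
    out = []
    for _ in range(reps):
        u = (L @ rng.normal(size=(d, N))).T                    # N prior samples
        resid = y[None, :] - u @ A.T                           # data misfits y - A u
        logw = -0.5 * np.einsum('ij,jk,ik->i', resid, noise_prec, resid)
        w = np.exp(logw - logw.max())
        out.append((w / w.sum()).max())
    return np.array(out)

rng = np.random.default_rng(4)
d = k = 5
A, Sigma, Gamma = rng.normal(size=(k, d)), np.eye(d), np.eye(k)
y = A @ rng.normal(size=d)
for gamma in (0.5, 0.25, 0.125):
    N = int(gamma ** -4)                                       # compare with int(gamma ** -6)
    med = np.median(max_weights(A, Sigma, gamma**2 * Gamma, y, N, 50, rng))
    print(gamma, N, med)                                       # median max-weight drifts toward 1
```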

3.3. Importance Sampling and Prior Scaling

Here we illustrate the behavior of importance sampling in the limit of large prior covariance. We start again with a motivating numerical example, similar to the one reported in Figure 1. The behavior is analogous to the small noise regime, which is expected since the ratio of prior and noise covariances determines the closeness between target and proposal. Figure 2 shows that when d = k = 5 weight collapse is observed frequently when the sample size N grows as σ⁴, but not so often with sample size N = σ⁶. Similar histograms with d = k = 4 are included in Appendix C. These empirical results are in agreement with the theoretical growth rate of the χ²-divergence between target and proposal in the limit of large prior covariance, as we prove next.
Proposition 6.
Consider the inverse problem setting
$$y = Au + \eta, \qquad \eta \sim N(0, \Gamma), \qquad u \sim \pi_\sigma = N(0, \sigma^2\Sigma).$$
Let μ_σ denote the posterior and ρ_σ = d_{χ²}(μ_σ ∥ π_σ) + 1. Then, for almost every y,
$$\rho_\sigma \asymp \sigma^{k}$$
in the large prior limit σ → ∞.
Proof. 
Let Σ_σ = σ²Σ and let K_σ = Σ_σAᵀ(AΣ_σAᵀ + Γ)⁻¹ be the Kalman gain. Observing that K_σ = ΣAᵀ(AΣAᵀ + σ^{-2}Γ)⁻¹ agrees with the gain K_γ of Proposition 5 with γ = 1/σ, we apply the proof of Proposition 5 and deduce that when σ → ∞:
  • K_σ → ΣAᵀ(AΣAᵀ)⁻¹;
  • I + K_σA has a well-defined and invertible limit;
  • |I − K_σA|^{-1/2} ≍ σ^k.
On the other hand, we notice that the quadratic term
$$K_\sigma^\top\Sigma_\sigma^{-1}\big(I + K_\sigma A\big)^{-1}K_\sigma = \sigma^{-2}\,K_\sigma^\top\Sigma^{-1}\big(I + K_\sigma A\big)^{-1}K_\sigma$$
vanishes in the limit. The conclusion follows by Proposition 4. □

3.4. Importance Sampling in High Dimension

In this subsection we study importance sampling in high dimensional limits. To that end, we let {a_i}_{i=1}^∞, {γ_i²}_{i=1}^∞ and {σ_i²}_{i=1}^∞ be infinite sequences and we define, for any d ≥ 1,
$$A_{1:d} := \mathrm{diag}\big(a_1, \ldots, a_d\big) \in \mathbb{R}^{d\times d}, \qquad \Gamma_{1:d} := \mathrm{diag}\big(\gamma_1^2, \ldots, \gamma_d^2\big) \in \mathbb{R}^{d\times d}, \qquad \Sigma_{1:d} := \mathrm{diag}\big(\sigma_1^2, \ldots, \sigma_d^2\big) \in \mathbb{R}^{d\times d}.$$
We then consider the inverse problem of reconstructing u ∈ R^d from data y ∈ R^d under the setting
$$y = A_{1:d}\,u + \eta, \qquad \eta \sim N(0, \Gamma_{1:d}), \qquad u \sim \pi_{1:d} = N(0, \Sigma_{1:d}). \tag{11}$$
We denote the corresponding posterior distribution by μ_{1:d}, which is Gaussian with a diagonal covariance. Given the observation y, we may find the posterior distribution μ_i of u_i by solving the one dimensional linear-Gaussian inverse problem
$$y_i = a_i u_i + \eta_i, \qquad \eta_i \sim N(0, \gamma_i^2), \qquad 1 \le i \le d, \tag{12}$$
with prior π_i = N(0, σ_i²). In this way we have defined, for each d ∈ ℕ ∪ {∞}, an inverse problem with prior and posterior
$$\pi_{1:d} = \bigotimes_{i=1}^{d}\pi_i, \qquad \mu_{1:d} = \bigotimes_{i=1}^{d}\mu_i.$$
In Section 3.4.1 we include an explicit calculation in the one dimensional inverse problem setting (12), which will be used in Section 3.4.2 to establish the rate of growth of ρ_d = d_{χ²}(μ_{1:d} ∥ π_{1:d}) + 1 and thereby how the sample size needs to be scaled along the high dimensional limit d → ∞ to maintain the same accuracy. Finally, in Section 3.4.3 we establish from first principles and our simple one dimensional calculation the equivalence between (i) a certain notion of dimension being finite; (ii) ρ_∞ < ∞; and (iii) absolute continuity of μ_{1:∞} with respect to π_{1:∞}.

3.4.1. One Dimensional Setting

Let a ∈ R be given and consider the one dimensional inverse problem of reconstructing u ∈ R from data y ∈ R, under the setting
$$y = au + \eta, \qquad \eta \sim N(0, \gamma^2), \qquad u \sim \pi = N(0, \sigma^2). \tag{14}$$
By defining
$$g(u) := \exp\Big(-\frac{a^2}{2\gamma^2}\,u^2 + \frac{ay}{\gamma^2}\,u\Big),$$
we can write the posterior density μ(du) as μ(du) ∝ g(u)π(du). The next result gives a simplified closed formula for ρ = d_{χ²}(μ ∥ π) + 1. In addition, it gives a closed formula for the Hellinger integral
$$H(\mu, \pi) := \frac{\pi\big(g^{\frac{1}{2}}\big)}{\pi(g)^{\frac{1}{2}}},$$
which will facilitate the study of the case d = ∞ in Section 3.4.3.
Lemma 1.
Consider the inverse problem in (14). Let λ := a²σ²/γ² and z² := y²/(a²σ² + γ²). Then, for any ℓ > 0,
$$\frac{\pi\big(g^\ell\big)}{\pi(g)^\ell} = \frac{(\lambda+1)^{\frac{\ell}{2}}}{\sqrt{\ell\lambda + 1}}\,\exp\Big(\frac{(\ell^2 - \ell)\,\lambda}{2(\ell\lambda + 1)}\,z^2\Big). \tag{15}$$
In particular,
$$\rho = \frac{\lambda + 1}{\sqrt{2\lambda + 1}}\,\exp\Big(\frac{\lambda}{2\lambda + 1}\,z^2\Big), \tag{16}$$
$$H(\mu, \pi) = \frac{\sqrt{2}\,(\lambda + 1)^{\frac{1}{4}}}{\sqrt{\lambda + 2}}\,\exp\Big(-\frac{\lambda\,z^2}{4(\lambda + 2)}\Big). \tag{17}$$
Proof. 
A direct calculation shows that
$$\pi(g) = \frac{1}{\sqrt{\lambda + 1}}\,\exp\Big(\frac{1}{2}\cdot\frac{\lambda\,y^2}{a^2\sigma^2 + \gamma^2}\Big).$$
The same calculation, but replacing γ² by γ²/ℓ and λ by ℓλ, gives a similar expression for π(g^ℓ), which leads to (15). The other two equations follow by setting ℓ to be 2 and 1/2. □
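The closed formula (16) is easy to check numerically; the sketch below is our own illustration with arbitrary parameter values, comparing it against a Monte Carlo estimate of π(g²)/π(g)².

```python
# A sketch (ours) checking the closed formula (16) against Monte Carlo for the scalar problem
# y = a*u + eta, eta ~ N(0, gamma^2), with prior u ~ N(0, sigma^2).
import numpy as np

def rho_one_dim(a, sigma, gamma, y):
    lam = a**2 * sigma**2 / gamma**2
    z2 = y**2 / (a**2 * sigma**2 + gamma**2)
    return (lam + 1.0) / np.sqrt(2.0 * lam + 1.0) * np.exp(lam / (2.0 * lam + 1.0) * z2)

rng = np.random.default_rng(5)
a, sigma, gamma, y = 1.0, 2.0, 1.0, 1.5
u = rng.normal(0.0, sigma, size=500_000)
g = np.exp(-a**2 / (2 * gamma**2) * u**2 + a * y / gamma**2 * u)   # un-normalized density g(u)
print(rho_one_dim(a, sigma, gamma, y), np.mean(g**2) / np.mean(g)**2)
```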
Lemma 1 will be used in the two following subsections to study high dimensional limits. Here we show how this lemma also allows us to verify directly that the assumption V < 1 in Theorem 1 holds in small noise and large prior limits.
Example 1.
Consider a sequence of inverse problems of the form (14) with λ = a²σ²/γ² approaching infinity. Let {(μ_λ, π_λ)}_{λ>0} be the corresponding family of posteriors and priors and let ḡ_λ be the normalized density. Lemma 1 implies that
$$\frac{\pi_\lambda\big(\bar{g}_\lambda^3\big)}{\pi_\lambda\big(\bar{g}_\lambda^2\big)^2} = \frac{2\lambda + 1}{\sqrt{(3\lambda + 1)(\lambda + 1)}}\,\exp\Big(\frac{\lambda}{(2\lambda + 1)(3\lambda + 1)}\,z^2\Big) \longrightarrow \frac{2}{\sqrt{3}} < 2,$$
as λ → ∞. This implies that, for λ sufficiently large,
$$\frac{\mathbb{V}\big[\bar{g}_\lambda(U_\lambda)\big]}{\rho_\lambda^2} = \frac{\pi_\lambda\big(\bar{g}_\lambda^3\big)}{\pi_\lambda\big(\bar{g}_\lambda^2\big)^2} - 1 < 1.$$

3.4.2. Large Dimensional Limit

Now we investigate the behavior of importance sampling in the limit of large dimension, in the inverse problem setting (11). We start with an example similar to the ones in Figure 1 and Figure 2. Figure 3 shows that for λ = 1.3 fixed, weight collapse happens frequently when the sample size N grows polynomially as d², but not so often if N grows at rate $O\big(\prod_{i=1}^{d}\frac{\lambda+1}{\sqrt{2\lambda+1}}\,e^{\frac{\lambda z_i^2}{2\lambda+1}}\big)$. These empirical results are in agreement with the growth rate of ρ_d in the large d limit.
Proposition 7.
For any d ∈ ℕ ∪ {∞},
$$\rho_d = \prod_{i=1}^{d}\frac{\lambda_i + 1}{\sqrt{2\lambda_i + 1}}\,e^{\frac{\lambda_i z_i^2}{2\lambda_i + 1}}, \qquad \mathbb{E}_{z_{1:d}}\big[\rho_d\big] = \prod_{i=1}^{d}(\lambda_i + 1).$$
Proof. 
The formula for ρ_d is a direct consequence of Equation (16) and the product structure. Similarly, we have
$$\mathbb{E}_{z_i}\Big[\frac{\lambda_i + 1}{\sqrt{2\lambda_i + 1}}\,e^{\frac{\lambda_i z_i^2}{2\lambda_i + 1}}\Big] = \frac{\lambda_i + 1}{\sqrt{2\lambda_i + 1}}\int_{\mathbb{R}}\frac{1}{\sqrt{2\pi}}\,e^{-\frac{z_i^2}{2} + \frac{\lambda_i z_i^2}{2\lambda_i + 1}}\,dz_i = \frac{\lambda_i + 1}{\sqrt{2\lambda_i + 1}}\int_{\mathbb{R}}\frac{1}{\sqrt{2\pi}}\,e^{-\frac{z_i^2}{2(2\lambda_i + 1)}}\,dz_i = \lambda_i + 1. \qquad \square$$
Proposition 7 implies that, for d ∈ ℕ ∪ {∞},
$$\sup_{|\varphi| \le 1}\mathbb{E}\Big[\big(\mu_{1:d}^N(\varphi) - \mu_{1:d}(\varphi)\big)^2\Big] \le \frac{4}{N}\prod_{i=1}^{d}\frac{\lambda_i + 1}{\sqrt{2\lambda_i + 1}}\,e^{\frac{\lambda_i z_i^2}{2\lambda_i + 1}}, \qquad \mathbb{E}\bigg[\sup_{|\varphi| \le 1}\mathbb{E}\Big[\big(\mu_{1:d}^N(\varphi) - \mu_{1:d}(\varphi)\big)^2\Big]\bigg] \le \frac{4}{N}\prod_{i=1}^{d}(\lambda_i + 1).$$
Note that the outer expected value in the latter equation averages over the data, while the inner one averages over sampling from the prior π_{1:d}. This suggests that
$$\log \mathbb{E}\bigg[\sup_{|\varphi| \le 1}\mathbb{E}\Big[\big(\mu_{1:d}^N(\varphi) - \mu_{1:d}(\varphi)\big)^2\Big]\bigg] \lesssim \sum_{i=1}^{d}\lambda_i - \log N.$$
The quantity τ := ∑_{i=1}^{d} λ_i has been used as an intrinsic dimension of the inverse problem (11). This simple heuristic together with Theorem 1 suggests that increasing N exponentially with τ is both necessary and sufficient to maintain accurate importance sampling along the high dimensional limit d → ∞. In particular, if all coordinates of the problem play the same role, this implies that N needs to grow exponentially with d, a manifestation of the curse of dimension of importance sampling [1,13,14].
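The heuristic above is easy to experiment with. The following sketch is ours, with an arbitrary choice of decaying coefficients; it computes the intrinsic dimension τ and the corresponding data-averaged ρ from Proposition 7.

```python
# A sketch (ours) of the intrinsic-dimension heuristic: tau = sum_i lambda_i with
# lambda_i = a_i^2 sigma_i^2 / gamma_i^2, and E[rho_d] = prod_i (lambda_i + 1) from Proposition 7.
import numpy as np

def intrinsic_dimension(a, sigma, gamma):
    return np.sum((a * sigma / gamma) ** 2)

def average_rho(a, sigma, gamma):
    return np.prod((a * sigma / gamma) ** 2 + 1.0)

d = 50
a, gamma = np.ones(d), np.ones(d)
sigma = 1.0 / np.arange(1, d + 1)          # decaying prior standard deviations keep tau moderate
tau = intrinsic_dimension(a, sigma, gamma)
print(tau, average_rho(a, sigma, gamma), np.exp(tau))   # exp(tau) upper bounds E[rho_d], the heuristic benchmark for N
```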

3.4.3. Infinite Dimensional Singularity

Finally, we investigate the case d = ∞. Our goal in this subsection is to establish a connection between the effective dimension, the quantity ρ_∞ and absolute continuity. The main result, Corollary 1, was proved in more generality in [1]. However, our proof and presentation here require minimal technical background and are based on the explicit calculations obtained in the previous subsections and on the following lemma.
Lemma 2.
It holds that μ_{1:∞} is absolutely continuous with respect to π_{1:∞} if and only if
$$H(\mu_{1:\infty}, \pi_{1:\infty}) = \prod_{i=1}^{\infty}\frac{\pi_i\big(g_i^{\frac{1}{2}}\big)}{\pi_i(g_i)^{\frac{1}{2}}} > 0,$$
where g_i is an un-normalized density between μ_i and π_i. Moreover, we have the following explicit characterizations of the Hellinger integral H(μ_{1:∞}, π_{1:∞}) and its average with respect to data realizations,
$$H(\mu_{1:\infty}, \pi_{1:\infty}) = \prod_{i=1}^{\infty}\frac{\sqrt{2}\,(\lambda_i + 1)^{\frac{1}{4}}}{\sqrt{\lambda_i + 2}}\,e^{-\frac{\lambda_i z_i^2}{4(\lambda_i + 2)}}, \qquad \mathbb{E}_{z_{1:\infty}}\big[H(\mu_{1:\infty}, \pi_{1:\infty})\big] = \prod_{i=1}^{\infty}\frac{2\,(\lambda_i + 1)^{\frac{1}{4}}}{\sqrt{3\lambda_i + 4}}.$$
Proof. 
The formula for the Hellinger integral is a direct consequence of Equation (17) and the product structure. On the other hand,
$$\mathbb{E}_{z_i}\Big[\frac{\sqrt{2}\,(\lambda_i + 1)^{\frac{1}{4}}}{\sqrt{\lambda_i + 2}}\,e^{-\frac{\lambda_i z_i^2}{4(\lambda_i + 2)}}\Big] = \frac{\sqrt{2}\,(\lambda_i + 1)^{\frac{1}{4}}}{\sqrt{\lambda_i + 2}}\int_{\mathbb{R}}\frac{1}{\sqrt{2\pi}}\,e^{-\frac{\lambda_i z_i^2}{4(\lambda_i + 2)} - \frac{z_i^2}{2}}\,dz_i = \frac{2\,(\lambda_i + 1)^{\frac{1}{4}}}{\sqrt{3\lambda_i + 4}}.$$
The proof of the equivalence between a positive Hellinger integral and absolute continuity is given in Appendix B. □
Corollary 1.
The following statements are equivalent:
(i) τ = ∑_{i=1}^{∞} λ_i < ∞;
(ii) ρ_∞ < ∞ for almost every y;
(iii) μ_{1:∞} ≪ π_{1:∞} for almost every y.
Proof. 
Observe that λ_i → 0 is a direct consequence of all three statements, so we will assume λ_i → 0 from now on.
(i) ⟺ (ii): By Proposition 7,
$$\log \mathbb{E}_{z_{1:\infty}}\big[\rho_\infty\big] = \sum_{i=1}^{\infty}\log(1 + \lambda_i) = O\Big(\sum_{i=1}^{\infty}\lambda_i\Big),$$
since log(1 + λ_i) ∼ λ_i for large i.
(i) ⟺ (iii): Similarly, we have
$$-\log \mathbb{E}_{z_{1:\infty}}\big[H(\mu_{1:\infty}, \pi_{1:\infty})\big] = \frac{1}{4}\sum_{i=1}^{\infty}\log\frac{(3\lambda_i + 4)^2}{16(\lambda_i + 1)} = \frac{1}{4}\sum_{i=1}^{\infty}\log\Big(1 + \frac{9\lambda_i^2 + 8\lambda_i}{16\lambda_i + 16}\Big) = \frac{1}{4}\,O\Big(\sum_{i=1}^{\infty}\lambda_i\Big).$$
The conclusion follows from Lemma 2. □
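The dichotomy of Corollary 1 can be visualized by truncating the diagonal problem at dimension d. This sketch is ours, with arbitrary example sequences, and uses the data-averaged formula E[ρ_d] = ∏(λ_i + 1) from Proposition 7.

```python
# A sketch (ours) of the dichotomy in Corollary 1: E[rho_d] = prod_{i<=d} (lambda_i + 1)
# stabilizes when sum_i lambda_i < infinity and blows up when the sum diverges.
import numpy as np

def expected_rho_by_dimension(lambdas):
    return np.cumprod(np.asarray(lambdas) + 1.0)     # E[rho_d] for d = 1, 2, ...

i = np.arange(1, 2001)
summable = 1.0 / i**2            # tau < infinity: absolute continuity in the limit d -> infinity
non_summable = 1.0 / i           # tau = infinity: rho diverges as d grows
print(expected_rho_by_dimension(summable)[[9, 99, 1999]])      # flattens out near a finite limit
print(expected_rho_by_dimension(non_summable)[[9, 99, 1999]])  # keeps growing without bound
```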

4. Importance Sampling for Data Assimilation

In this section, we study the use of importance sampling in a particle filtering setting. Following [13,14,15] we focus on one filtering step. Our goal is to provide a new and concrete comparison of two proposals, referred to as standard and optimal in the literature [1]. In Section 4.1 we introduce the setting and both proposals and show that the χ 2 -divergence between target and standard proposal is larger than the χ 2 -divergence between target and optimal proposal. Section 4.3 and Section 4.4 identify small noise and large dimensional limiting regimes where the sample size for the standard proposal needs to grow unboundedly to maintain the same level of accuracy, but the required sample size for the optimal proposal remains bounded.

4.1. One-Step Filtering Setting

Let M and H be given matrices. We consider the one-step filtering problem of recovering v_0, v_1 from y, under the following setting
$$v_1 = Mv_0 + \xi, \qquad v_0 \sim N(0, P), \qquad \xi \sim N(0, Q), \tag{19}$$
$$y = Hv_1 + \zeta, \qquad \zeta \sim N(0, R). \tag{20}$$
Similar to the setting in Section 3.1, we assume that P, Q, R are symmetric positive definite and that M and H are full rank. From a Bayesian point of view, we would like to sample from the target distribution P(v_0, v_1 | y). To achieve this, we can use either π_std = P(v_1 | v_0)P(v_0) or π_opt = P(v_1 | v_0, y)P(v_0) as the proposal distribution.
The standard proposal π_std is the prior distribution of (v_0, v_1) determined by the prior v_0 ∼ N(0, P) and the signal dynamics encoded in Equation (19). Assimilating the observation y then leads to an inverse problem [1,2] with design matrix, noise covariance and prior covariance given by
$$A_{\mathrm{std}} := H, \qquad \Gamma_{\mathrm{std}} := R, \qquad \Sigma_{\mathrm{std}} := MPM^\top + Q. \tag{21}$$
We denote by π_std = N(0, Σ_std) the prior distribution and by μ_std the corresponding posterior distribution.
The optimal proposal π_opt samples v_0 from the prior and v_1 from the conditional kernel P(v_1 | v_0, y). Assimilating y then leads to the inverse problem [1,2]
$$y = HMv_0 + H\xi + \zeta,$$
where the design matrix, noise covariance and prior covariance are given by
$$A_{\mathrm{opt}} := HM, \qquad \Gamma_{\mathrm{opt}} := HQH^\top + R, \qquad \Sigma_{\mathrm{opt}} := P. \tag{22}$$
We denote by π_opt = N(0, Σ_opt) the prior distribution and by μ_opt the corresponding posterior distribution.

4.2. χ²-Divergence Comparison between Standard and Optimal Proposal

Here we show that
$$\rho_{\mathrm{std}} := d_{\chi^2}(\mu_{\mathrm{std}} \,\|\, \pi_{\mathrm{std}}) + 1 \;>\; d_{\chi^2}(\mu_{\mathrm{opt}} \,\|\, \pi_{\mathrm{opt}}) + 1 =: \rho_{\mathrm{opt}}.$$
The proof is a direct calculation using the explicit formula in Proposition 4. We introduce, as in Section 3, standard Kalman notation
$$K_{\mathrm{std}} := \Sigma_{\mathrm{std}}A_{\mathrm{std}}^\top S_{\mathrm{std}}^{-1}, \qquad S_{\mathrm{std}} := A_{\mathrm{std}}\Sigma_{\mathrm{std}}A_{\mathrm{std}}^\top + \Gamma_{\mathrm{std}}, \qquad K_{\mathrm{opt}} := \Sigma_{\mathrm{opt}}A_{\mathrm{opt}}^\top S_{\mathrm{opt}}^{-1}, \qquad S_{\mathrm{opt}} := A_{\mathrm{opt}}\Sigma_{\mathrm{opt}}A_{\mathrm{opt}}^\top + \Gamma_{\mathrm{opt}}.$$
It follows from the definitions in (21) and (22) that
$$S_{\mathrm{std}} = H(MPM^\top + Q)H^\top + R = HMPM^\top H^\top + HQH^\top + R = S_{\mathrm{opt}}.$$
Since S std = S opt we drop the subscripts in what follows and denote both simply by S .
Theorem 2.
Consider the one-step filtering setting in Equations (19) and (20). If M and H are full rank and P , Q , R are symmetric positive definite, then, for almost every y ,
ρ std > ρ opt .
Proof. 
By Proposition 4 we have
$$\rho_{\mathrm{std}} = \big(|I - K_{\mathrm{std}}A_{\mathrm{std}}|\,|I + K_{\mathrm{std}}A_{\mathrm{std}}|\big)^{-\frac{1}{2}}\exp\Big(y^\top K_{\mathrm{std}}^\top\big[(I + K_{\mathrm{std}}A_{\mathrm{std}})\Sigma_{\mathrm{std}}\big]^{-1}K_{\mathrm{std}}\,y\Big), \qquad \rho_{\mathrm{opt}} = \big(|I - K_{\mathrm{opt}}A_{\mathrm{opt}}|\,|I + K_{\mathrm{opt}}A_{\mathrm{opt}}|\big)^{-\frac{1}{2}}\exp\Big(y^\top K_{\mathrm{opt}}^\top\big[(I + K_{\mathrm{opt}}A_{\mathrm{opt}})\Sigma_{\mathrm{opt}}\big]^{-1}K_{\mathrm{opt}}\,y\Big).$$
Therefore, it suffices to prove the following two inequalities:
$$|I - K_{\mathrm{std}}A_{\mathrm{std}}|\,|I + K_{\mathrm{std}}A_{\mathrm{std}}| < |I - K_{\mathrm{opt}}A_{\mathrm{opt}}|\,|I + K_{\mathrm{opt}}A_{\mathrm{opt}}|, \tag{23}$$
$$K_{\mathrm{std}}^\top\big[(I + K_{\mathrm{std}}A_{\mathrm{std}})\Sigma_{\mathrm{std}}\big]^{-1}K_{\mathrm{std}} \succeq K_{\mathrm{opt}}^\top\big[(I + K_{\mathrm{opt}}A_{\mathrm{opt}})\Sigma_{\mathrm{opt}}\big]^{-1}K_{\mathrm{opt}}. \tag{24}$$
We start with inequality (24). Note that
$$(I + K_{\mathrm{std}}A_{\mathrm{std}})\Sigma_{\mathrm{std}} = \Sigma_{\mathrm{std}} + \Sigma_{\mathrm{std}}A_{\mathrm{std}}^\top S^{-1}A_{\mathrm{std}}\Sigma_{\mathrm{std}}, \qquad (I + K_{\mathrm{opt}}A_{\mathrm{opt}})\Sigma_{\mathrm{opt}} = \Sigma_{\mathrm{opt}} + \Sigma_{\mathrm{opt}}A_{\mathrm{opt}}^\top S^{-1}A_{\mathrm{opt}}\Sigma_{\mathrm{opt}}.$$
Using the definitions in (21) and (22) it follows that
$$K_{\mathrm{std}}^\top\big[(I + K_{\mathrm{std}}A_{\mathrm{std}})\Sigma_{\mathrm{std}}\big]^{-1}K_{\mathrm{std}} = S^{-1}H\big[(MPM^\top + Q)^{-1} + H^\top S^{-1}H\big]^{-1}H^\top S^{-1} \succeq S^{-1}H\big[(MPM^\top)^{-1} + H^\top S^{-1}H\big]^{-1}H^\top S^{-1} = K_{\mathrm{opt}}^\top\big[(I + K_{\mathrm{opt}}A_{\mathrm{opt}})\Sigma_{\mathrm{opt}}\big]^{-1}K_{\mathrm{opt}}.$$
For inequality (23), we notice that
$$K_{\mathrm{std}}A_{\mathrm{std}} = (MPM^\top + Q)H^\top S^{-1}H = M\widetilde{P}M^\top H^\top S^{-1}H \sim \big(H^\top S^{-1}H\big)^{\frac{1}{2}}M\widetilde{P}M^\top\big(H^\top S^{-1}H\big)^{\frac{1}{2}}, \qquad K_{\mathrm{opt}}A_{\mathrm{opt}} = PM^\top H^\top S^{-1}HM \sim \big(H^\top S^{-1}H\big)^{\frac{1}{2}}MPM^\top\big(H^\top S^{-1}H\big)^{\frac{1}{2}},$$
where $\widetilde{P} := P + M^{-1}QM^{-\top}$. Therefore
$$K_{\mathrm{opt}}A_{\mathrm{opt}} \preceq K_{\mathrm{std}}A_{\mathrm{std}},$$
which, together with K_std A_std ⪯ I, implies that
$$\frac{|I - K_{\mathrm{std}}A_{\mathrm{std}}|\,|I + K_{\mathrm{std}}A_{\mathrm{std}}|}{|I - K_{\mathrm{opt}}A_{\mathrm{opt}}|\,|I + K_{\mathrm{opt}}A_{\mathrm{opt}}|} = \frac{\big|I - (K_{\mathrm{std}}A_{\mathrm{std}})^2\big|}{\big|I - (K_{\mathrm{opt}}A_{\mathrm{opt}})^2\big|} < 1,$$
as desired. □
Remark 3.
It is well known that if the signal dynamics are deterministic, i.e., if Q = 0 in (19), then the standard and optimal proposals agree, and therefore ρ_opt = ρ_std. Theorem 2 shows that ρ_std > ρ_opt provided that Q is positive definite. Further works that have investigated theoretical and practical benefits of the optimal proposal over the standard proposal include [1,2,31,34]. In particular, [1] shows that use of the optimal proposal reduces the intrinsic dimension. Theorem 2 compares directly the χ²-divergence, which is the key quantity that determines the performance of importance sampling.
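Theorem 2 is easy to check numerically by feeding the two inverse-problem reductions (21) and (22) into the formula of Proposition 4. The sketch below is our own illustration with arbitrary parameter choices (it repeats the small helper for Proposition 4 so that it is self-contained).

```python
# A sketch (ours) comparing rho_std and rho_opt via Proposition 4, using the reductions
# A_std = H, Gamma_std = R, Sigma_std = M P M^T + Q and A_opt = H M, Gamma_opt = H Q H^T + R, Sigma_opt = P.
import numpy as np

def rho_inverse_problem(A, Gamma, Sigma, y):
    S = A @ Sigma @ A.T + Gamma
    K = Sigma @ A.T @ np.linalg.inv(S)
    KA = K @ A
    I = np.eye(A.shape[1])
    pref = (np.linalg.det(I + KA) * np.linalg.det(I - KA)) ** (-0.5)
    return pref * np.exp((K @ y) @ np.linalg.solve((I + KA) @ Sigma, K @ y))

rng = np.random.default_rng(6)
d = k = 4
M, H = rng.normal(size=(d, d)), rng.normal(size=(k, d))
P, Q, R = np.eye(d), np.eye(d), 0.05 * np.eye(k)
v0, xi = rng.normal(size=d), rng.multivariate_normal(np.zeros(d), Q)
y = H @ (M @ v0 + xi) + rng.multivariate_normal(np.zeros(k), R)
rho_std = rho_inverse_problem(H, R, M @ P @ M.T + Q, y)
rho_opt = rho_inverse_problem(H @ M, H @ Q @ H.T + R, P, y)
print(rho_std, rho_opt)   # Theorem 2 predicts rho_std > rho_opt; the gap widens as R shrinks
```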

4.3. Standard and Optimal Proposal in Small Noise Regime

It is possible that along a certain limiting regime, ρ diverges for the standard proposal but not for the optimal proposal. This has been observed in previous work [1,31], and here we provide some concrete examples using the scaling results from Section 3. Precisely, consider the following one-step filtering setting
$$v_1 = Mv_0 + \xi, \qquad v_0 \sim N(0, P), \qquad \xi \sim N(0, Q), \qquad y = Hv_1 + \zeta, \qquad \zeta \sim N(0, r^2R),$$
where r > 0. Let μ_opt(r), μ_std(r) be the optimal/standard targets and π_opt(r), π_std(r) be the optimal/standard proposals. We assume that M ∈ R^{d×d} and H ∈ R^{k×d} are full rank.
Proposition 8.
In the small noise limit r → 0, we have
$$\lim_{r\to 0}\rho_{\mathrm{opt}}(r) < \infty, \qquad \rho_{\mathrm{std}}(r) \asymp r^{-k}.$$
Proof. 
Consider the two inverse problems that correspond to (μ_opt(r), π_opt(r)) and (μ_std(r), π_std(r)). Note that in both problems the prior covariance and the design matrix do not depend on r; only the noise covariances Γ_opt(r) and Γ_std(r) do. When r goes to 0, we observe that
$$\Gamma_{\mathrm{opt}}(r) = r^2R + HQH^\top \longrightarrow HQH^\top, \qquad \Gamma_{\mathrm{std}}(r) = r^2R \longrightarrow 0.$$
Therefore, ρ_opt(r) converges to a finite value, while Proposition 5 implies that ρ_std(r) diverges at rate r^{-k}. □

4.4. Standard and Optimal Proposal in High Dimension

The previous subsection shows that the standard and optimal proposals can have dramatically different behavior in the small noise regime r → 0. Here we show that both proposals can also lead to dramatically different behavior in high dimensional limits. Precisely, as a consequence of Corollary 1 we can easily identify the exact regimes where both proposals converge or diverge in the limit. The notation is analogous to that in Section 3.4, and so we omit the details.
Proposition 9.
Consider the sequence of particle filters defined as above. We have the following convergence criteria:
1. μ_opt(1:∞) ≪ π_opt(1:∞) and ρ_opt < ∞ if and only if $\sum_{i=1}^{\infty}\frac{h_i^2m_i^2p_i^2}{h_i^2q_i^2 + r_i^2} < \infty$;
2. μ_std(1:∞) ≪ π_std(1:∞) and ρ_std < ∞ if and only if $\sum_{i=1}^{\infty}\frac{h_i^2m_i^2p_i^2}{r_i^2} < \infty$ and $\sum_{i=1}^{\infty}\frac{h_i^2q_i^2}{r_i^2} < \infty$.
Proof. 
By direct computation, we have
$$\lambda_{\mathrm{std}}(i) = \frac{h_i^2m_i^2p_i^2 + h_i^2q_i^2}{r_i^2} = \frac{h_i^2m_i^2p_i^2}{r_i^2} + \frac{h_i^2q_i^2}{r_i^2}, \qquad \lambda_{\mathrm{opt}}(i) = \frac{h_i^2m_i^2p_i^2}{h_i^2q_i^2 + r_i^2}.$$
Corollary 1 gives the desired result. □
Example 2.
As a simple example where absolute continuity holds for the optimal proposal but not for the standard one, let h_i = m_i = p_i = r_i = 1. Then ρ_std = ∞, but ρ_opt < ∞ provided that $\sum_{i=1}^{\infty}\frac{1}{q_i^2 + 1} < \infty$.
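A quick numerical rendering of Example 2 is given below; the sketch is ours, and the choice q_i = i is one admissible sequence. It shows the partial sums of the coordinate-wise λ's from the proof of Proposition 9 converging for the optimal proposal and diverging for the standard one.

```python
# A sketch (ours) of Example 2 with h_i = m_i = p_i = r_i = 1 and q_i = i: the lambdas of
# Proposition 9 are summable for the optimal proposal but not for the standard proposal.
import numpy as np

i = np.arange(1, 100_001, dtype=float)
q = i                                        # q_i = i, so sum 1 / (q_i^2 + 1) < infinity
lam_std = 1.0 + q**2                         # h^2 m^2 p^2 / r^2 + h^2 q^2 / r^2
lam_opt = 1.0 / (q**2 + 1.0)                 # h^2 m^2 p^2 / (h^2 q^2 + r^2)
print(np.cumsum(lam_opt)[[99, 9_999, 99_999]])   # converges: mu_opt << pi_opt, rho_opt < infinity
print(np.cumsum(lam_std)[[99, 9_999, 99_999]])   # diverges: rho_std = infinity in the limit
```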

Author Contributions

Funding acquisition, D.S.-A.; Investigation, Z.W.; Supervision, D.S.-A.; Writing—original draft, Z.W.; Writing—review & editing, D.S.-A. and Z.W. All authors have read and agreed to the published version of the manuscript.

Funding

The work of DSA was supported by NSF and NGA through the grant DMS-2027056. DSA also acknowledges partial support from the NSF grant DMS-1912818/1912802.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. χ²-Divergence between Gaussians

We recall that the distribution P_θ parameterized by θ belongs to the exponential family EF(Θ) over a natural parameter space Θ if θ ∈ Θ and P_θ has density of the form
$$f(u; \theta) = e^{\langle t(u), \theta\rangle - F(\theta) + k(u)},$$
where the natural parameter space is given by
$$\Theta = \Big\{\theta : \int e^{\langle t(u), \theta\rangle + k(u)}\,du < \infty\Big\}.$$
The following result can be found in [35].
Lemma A1.
Suppose θ_1, θ_2 ∈ Θ are parameters for probability densities $f(u; \theta_{1,2}) = e^{\langle t(u), \theta_{1,2}\rangle - F(\theta_{1,2}) + k(u)}$ with 2θ_1 − θ_2 ∈ Θ. Then,
$$d_{\chi^2}\big(f(\cdot\,; \theta_1) \,\|\, f(\cdot\,; \theta_2)\big) = e^{F(2\theta_1 - \theta_2) - 2F(\theta_1) + F(\theta_2)} - 1.$$
Proof. 
By direct computation,
$$d_{\chi^2}\big(f(\cdot\,; \theta_1) \,\|\, f(\cdot\,; \theta_2)\big) + 1 = \int f(u; \theta_1)^2 f(u; \theta_2)^{-1}\,du = \int e^{\langle t(u), 2\theta_1 - \theta_2\rangle - \big(2F(\theta_1) - F(\theta_2)\big) + k(u)}\,du = e^{F(2\theta_1 - \theta_2) - 2F(\theta_1) + F(\theta_2)}\int f(u; 2\theta_1 - \theta_2)\,du = e^{F(2\theta_1 - \theta_2) - 2F(\theta_1) + F(\theta_2)}.$$
Note that $\int f(u; 2\theta_1 - \theta_2)\,du = 1$ since 2θ_1 − θ_2 ∈ Θ by assumption. □
Using Lemma A1 we can compute the χ²-divergence between Gaussians. To do so, we note that d-dimensional Gaussians N(μ, Σ) belong to the exponential family over the parameter space R^d × R^{d×d} by letting θ = [Σ⁻¹μ; −½Σ⁻¹] and F(θ) = ½μᵀΣ⁻¹μ + ½log|Σ|. In the context of Gaussians, an exponential parameter θ = [Σ⁻¹μ; −½Σ⁻¹] belongs to the natural parameter space Θ if and only if Σ is symmetric and positive definite. Indeed, the integral ∫exp(−½(u − μ)ᵀΣ⁻¹(u − μ))du is finite if and only if Σ ≻ 0.
Proof of Proposition 3.
Let θ_μ, θ_π be the exponential parameters of μ, π. Then 2θ_μ − θ_π corresponds to a Gaussian with mean (2C⁻¹ − Σ⁻¹)⁻¹(2C⁻¹m) and covariance (2C⁻¹ − Σ⁻¹)⁻¹. We have
$$\begin{aligned} F(2\theta_\mu - \theta_\pi) - 2F(\theta_\mu) + F(\theta_\pi) &= \tfrac{1}{2}\log\big|(2C^{-1} - \Sigma^{-1})^{-1}\big| - \log|C| + \tfrac{1}{2}\log|\Sigma| + \tfrac{1}{2}(2C^{-1}m)^\top(2C^{-1} - \Sigma^{-1})^{-1}(2C^{-1}m) - m^\top C^{-1}m \\ &= \log\frac{|\Sigma|^{\frac{1}{2}}}{|2C^{-1} - \Sigma^{-1}|^{\frac{1}{2}}\,|C|} + m^\top\Big(C^{-1}(2C^{-1} - \Sigma^{-1})^{-1}2C^{-1}\Big)m - m^\top\Big(C^{-1}(2C^{-1} - \Sigma^{-1})^{-1}(2C^{-1} - \Sigma^{-1})\Big)m \\ &= \log\frac{|\Sigma|}{\sqrt{|2\Sigma - C|\,|C|}} + m^\top\Big(C^{-1}(2C^{-1} - \Sigma^{-1})^{-1}\Sigma^{-1}\Big)m \\ &= \log\frac{|\Sigma|}{\sqrt{|2\Sigma - C|\,|C|}} + m^\top(2\Sigma - C)^{-1}m. \end{aligned}$$
Applying Lemma A1 gives
$$d_{\chi^2}(\mu \,\|\, \pi) = \exp\Big(F(2\theta_\mu - \theta_\pi) - 2F(\theta_\mu) + F(\theta_\pi)\Big) - 1 = \frac{|\Sigma|}{\sqrt{|2\Sigma - C|\,|C|}}\,\exp\Big(m^\top(2\Sigma - C)^{-1}m\Big) - 1,$$
provided that 2θ_μ − θ_π ∈ Θ, that is, provided that the corresponding covariance matrix (2C⁻¹ − Σ⁻¹)⁻¹ is positive definite, which holds if and only if 2Σ − C ≻ 0. □
Remark A1.
By translation invariance of the Lebesgue measure, we can obtain the more general formula for the χ²-divergence between two Gaussians with nonzero means by replacing m with the difference between the two mean vectors:
$$d_{\chi^2}\big(N(m_1, C) \,\|\, N(m_2, \Sigma)\big) = \frac{|\Sigma|}{\sqrt{|2\Sigma - C|\,|C|}}\,e^{(m_1 - m_2)^\top(2\Sigma - C)^{-1}(m_1 - m_2)} - 1.$$

Appendix B. Proof of Lemma 2

Proof. 
Dividing each g_i by its normalizing constant, we may assume without loss of generality that g_i is exactly the Radon–Nikodym derivative dμ_i/dπ_i, so that H(μ_i, π_i) = π_i(√g_i).
If μ_{1:∞} ≪ π_{1:∞}, then the Radon–Nikodym derivative g_{1:∞} cannot be π_{1:∞}-a.e. zero since π_{1:∞} and μ_{1:∞} are probability measures. As a consequence, ∏_{i=1}^∞ π_i(√g_i) = π_{1:∞}(√g_{1:∞}) > 0 by the product structure of μ_{1:∞} and π_{1:∞}.
Now we assume ∏_{i=1}^∞ π_i(√g_i) > 0. It suffices to show that g_{1:∞} is well-defined, i.e., that ∏_{i=1}^L g_i converges in L¹_{π_{1:∞}} as L → ∞. It suffices to prove that the sequence is Cauchy, in other words
$$\lim_{L, \ell\to\infty}\pi_{1:\infty}\big(\big|g_{1:L+\ell} - g_{1:L}\big|\big) = 0.$$
We observe that
$$\big\|g_{1:L+\ell} - g_{1:L}\big\|_1 \le \big\|\sqrt{g_{1:L+\ell}} - \sqrt{g_{1:L}}\big\|_2\,\big\|\sqrt{g_{1:L+\ell}} + \sqrt{g_{1:L}}\big\|_2 \le \big\|\sqrt{g_{1:L+\ell}} - \sqrt{g_{1:L}}\big\|_2\,\sqrt{2\big(\big\|\sqrt{g_{1:L+\ell}}\big\|_2^2 + \big\|\sqrt{g_{1:L}}\big\|_2^2\big)} = 2\,\big\|\sqrt{g_{1:L+\ell}} - \sqrt{g_{1:L}}\big\|_2,$$
where the norms are taken with respect to π_{1:∞}.
Expanding the square of the right-hand side gives
$$\pi_{1:\infty}\Big(\big(\sqrt{g_{1:L+\ell}} - \sqrt{g_{1:L}}\big)^2\Big) = \pi_{1:\infty}\Big(g_{1:L+\ell} + g_{1:L} - 2\sqrt{g_{1:L+\ell}\,g_{1:L}}\Big) = 2 - 2\,\pi_{1:L}\big(g_{1:L}\big)\,\pi_{L+1:L+\ell}\big(\sqrt{g_{L+1:L+\ell}}\big) = 2\bigg(1 - \frac{\pi_{1:L+\ell}\big(\sqrt{g_{1:L+\ell}}\big)}{\pi_{1:L}\big(\sqrt{g_{1:L}}\big)}\bigg).$$
Therefore, it is enough to show
$$\lim_{L, \ell\to\infty}\frac{\pi_{1:L+\ell}\big(\sqrt{g_{1:L+\ell}}\big)}{\pi_{1:L}\big(\sqrt{g_{1:L}}\big)} = 1.$$
By Jensen’s inequality, for any two probability measures μ π with density g, we have
$$\pi\big(\sqrt{g}\big) \le \sqrt{\pi(g)} = 1. \tag{A1}$$
Combining with our assumption, we deduce that
$$0 < \prod_{i=1}^{\infty}\pi_i\big(\sqrt{g_i}\big) \le 1,$$
which is equivalent to
$$-\infty < \sum_{i=1}^{\infty}\log\Big(\pi_i\big(\sqrt{g_i}\big)\Big) \le 0.$$
The partial sums of this series are monotonically decreasing by (A1) and bounded below, so the series converges. Consequently,
$$\lim_{L, \ell\to\infty}\frac{\pi_{1:L+\ell}\big(\sqrt{g_{1:L+\ell}}\big)}{\pi_{1:L}\big(\sqrt{g_{1:L}}\big)} = \lim_{L, \ell\to\infty} e^{\sum_{i=L+1}^{L+\ell}\log\big(\pi_i\big(\sqrt{g_i}\big)\big)} = 1. \qquad \square$$

Appendix C. Additional Figures

Figure A1. Noise scaling with d = k = 4 .
Figure A2. Prior scaling with d = k = 4 .

References

  1. Agapiou, S.; Papaspiliopoulos, O.; Sanz-Alonso, D.; Stuart, A.M. Importance sampling: Intrinsic dimension and computational cost. Stat. Sci. 2017, 32, 405–431.
  2. Sanz-Alonso, D.; Stuart, A.M.; Taeb, A. Inverse Problems and Data Assimilation. arXiv 2018, arXiv:1810.06191.
  3. Barber, D. Bayesian Reasoning and Machine Learning; Cambridge University Press: Cambridge, UK, 2012.
  4. Garcia Trillos, N.; Kaplan, Z.; Samakhoana, T.; Sanz-Alonso, D. On the consistency of graph-based Bayesian semi-supervised learning and the scalability of sampling algorithms. J. Mach. Learn. Res. 2020, 21, 1–47.
  5. Garcia Trillos, N.; Sanz-Alonso, D. The Bayesian update: Variational formulations and gradient flows. Bayesian Anal. 2018, 15, 29–56.
  6. Dick, J.; Kuo, F.Y.; Sloan, I.H. High-dimensional integration: The quasi-Monte Carlo way. Acta Numer. 2013, 22, 133.
  7. Chatterjee, S.; Diaconis, P. The sample size required in importance sampling. arXiv 2015, arXiv:1511.01437.
  8. Sanz-Alonso, D. Importance sampling and necessary sample size: An information theory approach. SIAM/ASA J. Uncertain. Quantif. 2018, 6, 867–879.
  9. Rubino, G.; Tuffin, B. Rare Event Simulation Using Monte Carlo Methods; John Wiley & Sons: Hoboken, NJ, USA, 2009.
  10. Kahn, H.; Marshall, A.W. Methods of reducing sample size in Monte Carlo computations. J. Oper. Res. Soc. Am. 1953, 1, 263–278.
  11. Kahn, H. Use of Different Monte Carlo Sampling Techniques; Rand Corporation: Santa Monica, CA, USA, 1955.
  12. Bugallo, M.F.; Elvira, V.; Martino, L.; Luengo, D.; Miguez, J.; Djuric, P.M. Adaptive importance sampling: The past, the present, and the future. IEEE Signal Process. Mag. 2017, 34, 60–79.
  13. Bengtsson, T.; Bickel, P.; Li, B. Curse-of-dimensionality revisited: Collapse of the particle filter in very large scale systems. In Probability and Statistics: Essays in Honor of David A. Freedman; Institute of Mathematical Statistics: Beachwood, OH, USA, 2008; pp. 316–334.
  14. Bickel, P.; Li, B.; Bengtsson, T. Sharp failure rates for the bootstrap particle filter in high dimensions. In Pushing the Limits of Contemporary Statistics: Contributions in Honor of Jayanta K. Ghosh; Institute of Mathematical Statistics: Beachwood, OH, USA, 2008; pp. 318–329.
  15. Snyder, C.; Bengtsson, T.; Bickel, P.; Anderson, J. Obstacles to high-dimensional particle filtering. Mon. Weather Rev. 2008, 136, 4629–4640.
  16. Rebeschini, P.; Van Handel, R. Can local particle filters beat the curse of dimensionality? Ann. Appl. Probab. 2015, 25, 2809–2866.
  17. Chorin, A.J.; Morzfeld, M. Conditions for successful data assimilation. J. Geophys. Res. Atmos. 2013, 118, 11–522.
  18. Houtekamer, P.L.; Mitchell, H.L. Data assimilation using an ensemble Kalman filter technique. Mon. Weather Rev. 1998, 126, 796–811.
  19. Hamill, T.M.; Whitaker, J.S.; Anderson, J.L.; Snyder, C. Comments on "Sigma-point Kalman filter data assimilation methods for strongly nonlinear systems". J. Atmos. Sci. 2009, 66, 3498–3500.
  20. Morzfeld, M.; Hodyss, D.; Snyder, C. What the collapse of the ensemble Kalman filter tells us about particle filters. Tellus A Dyn. Meteorol. Oceanogr. 2017, 69, 1283809.
  21. Farchi, A.; Bocquet, M. Comparison of local particle filters and new implementations. Nonlinear Process. Geophys. 2018, 25.
  22. Morzfeld, M.; Tong, X.T.; Marzouk, Y.M. Localization for MCMC: Sampling high-dimensional posterior distributions with local structure. J. Comput. Phys. 2019, 380, 1–28.
  23. Tong, X.T.; Morzfeld, M.; Marzouk, Y.M. MALA-within-Gibbs samplers for high-dimensional distributions with sparse conditional structure. SIAM J. Sci. Comput. 2020, 42, A1765–A1788.
  24. Liu, J.S. Metropolized independent sampling with comparisons to rejection sampling and importance sampling. Stat. Comput. 1996, 6, 113–119.
  25. Pitt, M.K.; Shephard, N. Filtering via simulation: Auxiliary particle filters. J. Am. Stat. Assoc. 1999, 94, 590–599.
  26. Kong, A. A note on importance sampling using standardized weights. Technical Report 348, Department of Statistics, University of Chicago: Chicago, IL, USA, 1992.
  27. Kong, A.; Liu, J.S.; Wong, W.H. Sequential imputations and Bayesian missing data problems. J. Am. Stat. Assoc. 1994, 89, 278–288.
  28. Ryu, E.K.; Boyd, S.P. Adaptive importance sampling via stochastic convex programming. arXiv 2014, arXiv:1412.4845.
  29. Akyildiz, Ö.D.; Míguez, J. Convergence rates for optimised adaptive importance samplers. arXiv 2019, arXiv:1903.12044.
  30. Bogachev, V.I. Gaussian Measures; Number 62; American Mathematical Society: Providence, RI, USA, 1998.
  31. Snyder, C.; Bengtsson, T.; Morzfeld, M. Performance bounds for particle filters using the optimal proposal. Mon. Weather Rev. 2015, 143, 4750–4761.
  32. Doucet, A.; De Freitas, N.; Gordon, N. An Introduction to Sequential Monte Carlo Methods. In Sequential Monte Carlo Methods in Practice; Springer: Berlin/Heidelberg, Germany, 2001; pp. 3–14.
  33. Del Moral, P. Feynman-Kac Formulae; Springer: Berlin/Heidelberg, Germany, 2004.
  34. Doucet, A.; Godsill, S.; Andrieu, C. On sequential Monte Carlo sampling methods for Bayesian filtering. Stat. Comput. 2000, 10, 197–208.
  35. Nielsen, F.; Nock, R. On the chi square and higher-order chi distances for approximating f-divergences. IEEE Signal Process. Lett. 2013, 21, 10–13.
Figure 1. Noise scaling with d = k = 5. Each histogram represents the empirical distribution of the largest autonormalized weight of importance sampling with a given choice of sample size N and noise level γ². The empirical distribution is obtained using 400 sets of random weights and the histograms are arranged so that in (a) N = γ^{-4} along the bottom-left to top-right diagonal, while in (b) N = γ^{-6} along the same diagonal. With scaling γ^{-4} the distribution of the maximum weight concentrates around 1 along this diagonal, suggesting weight collapse. In contrast, with scaling γ^{-6} weight collapse is avoided with high probability.
Figure 2. Prior scaling with d = k = 5 . The setting is similar to the one considered in Figure 1.
Figure 3. Dimensional scaling with λ = 1.3 . The experimental setting is similar to those in Figure 1 and Figure 2.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
