Article

On the Accuracy of the Exponential Approximation to Random Sums of Alternating Random Variables

by Irina Shevtsova 1,2,3,4,* and Mikhail Tselishchev 2,4,*
1 Department of Mathematics, School of Science, Hangzhou Dianzi University, Hangzhou 310018, China
2 Department of Mathematical Statistics, Faculty of Computational Mathematics and Cybernetics, Lomonosov Moscow State University, 119991 Moscow, Russia
3 Federal Research Center “Computer Science and Control” of the Russian Academy of Sciences, 119333 Moscow, Russia
4 Moscow Center for Fundamental and Applied Mathematics, 119991 Moscow, Russia
* Authors to whom correspondence should be addressed.
Mathematics 2020, 8(11), 1917; https://doi.org/10.3390/math8111917
Submission received: 7 October 2020 / Revised: 17 October 2020 / Accepted: 20 October 2020 / Published: 1 November 2020
(This article belongs to the Special Issue Analytical Methods and Convergence in Probability with Applications)

Abstract: Using the generalized stationary renewal distribution (also called the equilibrium transform) for arbitrary distributions with a finite non-zero first moment, we prove moment-type error bounds in the Kantorovich distance for the exponential approximation to random sums of possibly dependent random variables with positive finite expectations, in particular to geometric random sums, generalizing previous results to alternating and dependent random summands. We also extend the notions of new better than used in expectation (NBUE) and new worse than used in expectation (NWUE) distributions to alternating random variables in terms of the corresponding distribution functions and provide a criterion in terms of conditional expectations similar to the classical one. As a corollary, we provide simplified error bounds in the case of NBUE/NWUE conditional distributions of random summands.

1. Introduction

According to the generalized Rényi theorem, a geometric random sum of independent identically distributed (i.i.d.) nonnegative random variables (r.v.’s), normalized by its mean, converges in distribution to the exponential law as the expectation of the geometric number of summands tends to infinity. Some numerical bounds for the exponential approximation to geometric random sums, as well as their various applications, can be found in the classical monograph of Kalashnikov [1]. Peköz and Röllin [2] developed Stein’s method for the exponential distribution and obtained moment-type estimates for the exponential approximation to geometric and non-geometric random sums with non-negative summands, complementing Kalashnikov’s bounds in the Kantorovich distance. Their method was substantially based on the equilibrium transform (stationary renewal distribution) of non-negative random variables, hence the technical restriction on the support of the random summands under consideration. Moreover, Peköz and Röllin considered dependent random summands with constant conditional expectations and presented some error bounds in this case. The present authors [3] extended Stein’s method to alternating (i.e., taking values of both signs) random summands by generalizing the equilibrium transform to distributions with arbitrary support, and obtained moment-type estimates of the accuracy of the exponential approximation for geometric and non-geometric random sums of independent alternating random variables. The same paper also contains a detailed overview of estimates of the exponential approximation to geometric random sums.
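The Rényi-type convergence described above is easy to see in a quick Monte Carlo sketch (illustrative only; the Uniform(0,1) summands, the parameter values, and the Kolmogorov-distance check below are our choices, not taken from the paper): a geometric random sum, normalized by its mean, is close to the standard exponential law when E N = 1/p is large.

```python
import numpy as np

rng = np.random.default_rng(0)

# N ~ Geom(p) on {1, 2, ...}, independent of the i.i.d. Uniform(0,1) summands
p, n_samples = 0.02, 10000
N = rng.geometric(p, size=n_samples)
sums = np.array([rng.uniform(0.0, 1.0, n).sum() for n in N])
w = sums / sums.mean()                      # normalize by the (empirical) mean

# Kolmogorov distance between the empirical d.f. of W and Exp(1)
x = np.sort(w)
emp = np.arange(1, x.size + 1) / x.size
ks = np.abs(emp - (1.0 - np.exp(-x))).max()
print(ks)                                   # small; the fit improves as p -> 0
```

Repeating the experiment with larger p makes the discrepancy grow roughly linearly in p, in line with the O(p) error bounds discussed below.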
The aim of the present work is to extend the results of [3] to dependent random summands with constant conditional expectations, also generalizing the results of [2] to alternating random summands.
Recall that the Kantorovich distance $\zeta_1$ between the probability distributions of r.v.’s X and Y with distribution functions (d.f.’s) F and G is defined as a simple probability metric with $\zeta$-structure (see [1,4]) as

$$\zeta_1(F, G) \equiv \zeta_1\big(\mathcal L(X), \mathcal L(Y)\big) \equiv \zeta_1(X, Y) := \sup_{h \in \overline{\mathrm{Lip}}_1} \Big| \int_{\mathbb R} h\, dF - \int_{\mathbb R} h\, dG \Big|, \qquad (1)$$

where $\overline{\mathrm{Lip}}_c := \{ h \in \mathrm{Lip}_c : h \text{ is bounded} \}$ and

$$\mathrm{Lip}_c := \big\{ h \colon \mathbb R \to \mathbb R \ \big|\ |h(x) - h(y)| \le c\, |x - y| \ \ \forall\, x, y \in \mathbb R \big\}, \quad c > 0.$$
If both X and Y are integrable, then $\zeta_1(X, Y) < \infty$ and the supremum in (1) can be taken over the wider class $\mathrm{Lip}_1$ of Lipschitz functions. In this case, according to the Kantorovich–Rubinshtein theorem, $\zeta_1$ allows several alternative representations:

$$\zeta_1(F, G) = \min\big\{ \mathbb E|X' - Y'| : \mathcal L(X', Y') \text{ with } X' \stackrel{d}{=} X,\ Y' \stackrel{d}{=} Y \big\} = \int_0^1 \big| F^{-1}(u) - G^{-1}(u) \big|\, du = \int_{\mathbb R} \big| F(x) - G(x) \big|\, dx, \qquad (2)$$

where $F^{-1}$ and $G^{-1}$ are the generalized inverse functions of F and G, respectively.
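The inverse-function representation of $\zeta_1$ gives a one-line empirical estimator for equal-size samples: sort both samples and average the absolute differences. A sketch (the two test distributions are our choice, not from the paper):

```python
import numpy as np

def kantorovich_empirical(x, y):
    """Empirical zeta_1 of two equal-size samples via the
    inverse-d.f. representation: integral over u of |F^{-1}(u) - G^{-1}(u)|."""
    return np.mean(np.abs(np.sort(x) - np.sort(y)))

rng = np.random.default_rng(1)
a = rng.exponential(1.0, 100000)
b = rng.exponential(1.0, 100000) + 0.5   # shifted copy of the same law

d = kantorovich_empirical(a, b)
print(d)   # approx 0.5: a deterministic shift moves zeta_1 by exactly the shift
```

The check works because, for a pure shift, the area between the two d.f.’s equals the difference of the means.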
We will use the generalized equilibrium transform that was introduced and studied in [3]. Given the probability distribution of a r.v. X with d.f. F and finite $a := \mathbb E X \ne 0$, its equilibrium transform is defined as a (signed) measure $\mathcal L^e(X)$ on $(\mathbb R, \mathcal B)$ with the d.f.

$$F^e(x) := \begin{cases} -\dfrac{1}{a} \displaystyle\int_{-\infty}^{x} F(y)\, dy, & x \le 0, \\[2mm] -\dfrac{\mathbb E X^-}{a} + \dfrac{1}{a} \displaystyle\int_0^{x} \big(1 - F(y)\big)\, dy, & x > 0. \end{cases} \qquad (3)$$
Observe that $\mathcal L^e(X)$ is absolutely continuous (a.c.) with respect to (w.r.t.) the Lebesgue measure with the density

$$p^e(x) = \begin{cases} -\dfrac{1}{a}\, F(x), & x \le 0, \\[2mm] \dfrac{1}{a}\, \big(1 - F(x)\big), & x > 0. \end{cases} \qquad (4)$$
The characteristic function (ch.f.) of $\mathcal L^e(X)$ can be expressed in terms of the original ch.f. f of the r.v. X as

$$f^e(t) := \int_{\mathbb R} e^{itx}\, dF^e(x) = \frac{f(t) - 1}{t f'(0)} = \frac{f(t) - 1}{ita}, \quad t \ne 0, \qquad f^e(0) = 1. \qquad (5)$$
If X is nonnegative or nonpositive almost surely (a.s.), then $\mathcal L^e(X)$ is a probability distribution and it is possible to construct a r.v. $X^e \sim \mathcal L^e(X)$.
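For a nonnegative X one can actually sample from $\mathcal L^e(X)$ using the well-known stationary-renewal representation $X^e \stackrel{d}{=} U \cdot X^s$, where $X^s$ has the size-biased distribution of X and $U \sim \mathrm{Uniform}(0,1)$ is independent. A sketch with a Uniform(0,1) example of our choosing (this construction is standard renewal-theory background, not a formula from the paper); the moment identity $\mathbb E X^e = \mathbb E X^2 / (2\, \mathbb E X)$ serves as the check:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200000

# X ~ Uniform(0,1): F(x) = x, E X = 1/2, so F^e(x) = 2x - x**2 on [0, 1].
# Sample X^e as U * X^s, where X^s is size-biased (density x f(x)/E X = 2x,
# obtained as the square root of a uniform) and U ~ Uniform(0,1) independent.
xs = np.sqrt(rng.uniform(0.0, 1.0, n))   # size-biased Uniform(0,1)
xe = rng.uniform(0.0, 1.0, n) * xs       # equilibrium sample

# Moment check: E X^e = E X^2 / (2 E X) = (1/3) / (2 * 1/2) = 1/3
print(xe.mean())   # approx 0.3333
```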
Below, we list some other properties of the equilibrium transform (see ([3], Theorem 1) for details and proofs) which will be used in the present work:
Homogeneity. For any r.v. X with finite $\mathbb E X \ne 0$ and d.f. $F_X$, we have

$$(F_{cX})^e(x) = (F_X)^e(x / c) \quad \text{for all } c \in \mathbb R \setminus \{0\},\ x \in \mathbb R. \qquad (6)$$
Moments. If $\mathbb E|X|^{r+1} < \infty$ for some $r > 0$, then for all $k \in \mathbb N \cap [1, r]$ we have

$$\int_{\mathbb R} x^k\, dF^e(x) = \frac{\mathbb E X^{k+1}}{(k+1)\, \mathbb E X}, \qquad \int_{\mathbb R} |x|^r\, dF^e(x) = \frac{\mathbb E\, X |X|^r}{(r+1)\, \mathbb E X}. \qquad (7)$$
We will also use the following inequality from ([3], Theorem 3), which states that the Kantorovich distance from X to the exponential law is at most twice the distance from the distribution of X to its equilibrium transform.
Lemma 1.
Let X be a square-integrable r.v. with $\mathbb E X = 1$ and let $E \sim \mathrm{Exp}(1)$. Then

$$\zeta_1(X, E) \le 2\, \zeta_1\big(\mathcal L(X), \mathcal L^e(X)\big), \qquad (8)$$

where $\mathcal L^e(X)$ is the equilibrium transform of $\mathcal L(X)$.
The r.-h.s. of (8), in turn, can be bounded from above in terms of the second moment $\mathbb E X^2$ in the following way.
Lemma 2
(see Theorem 2 and Remark 2 in [3]). For any square-integrable r.v. X with $\mathbb E X \ne 0$,

$$\zeta_1\big(\mathcal L(X), \mathcal L^e(X)\big) \le \frac{1}{2} \left( \frac{\mathbb E X^2}{|\mathbb E X|} - |\mathbb E X| \right) + |\mathbb E X| \cdot \mathbb P\big( X \cdot \operatorname{sign}(\mathbb E X) \le 0 \big). \qquad (9)$$
Note the presence of the Kantorovich distance between $\mathcal L(X)$ and the possibly signed measure $\mathcal L^e(X)$ on the r.-h.s.’s of (8) and (9), which requires some extra explanation. As described in [3], it is defined in terms of d.f.’s in the same way as for probability measures in (1) and allows an alternative representation as the area between the d.f.’s of its arguments (similar to the last expression in (2)). Moreover, this generalization retains the property of homogeneity of order 1 (see ([3], Lemma 1)). Namely, if F and G are d.f.’s of (signed) Borel measures on $\mathbb R$ with $F(+\infty) = G(+\infty)$ and $F_c(x) := F(cx)$, $G_c(x) := G(cx)$, $x \in \mathbb R$, $c > 0$, then

$$\zeta_1(F_c, G_c) = \frac{1}{c}\, \zeta_1(F, G). \qquad (10)$$
Using the above notation and techniques, we prove moment-type error bounds in the Kantorovich distance for the exponential approximation to random sums of possibly dependent r.v.’s with positive finite expectations (Theorem 1), which generalize the results of [2] to alternating random summands and results of [3] to dependent random summands.
Moreover, we extend the definitions of new better than used in expectation (NBUE) and new worse than used in expectation (NWUE) distributions to alternating random variables in terms of the corresponding d.f.’s and provide a criteria in terms of conditional expectations similar to the classical one (Theorem 2). Finally, we provide simplified error-bounds in cases of NBUE/NWUE conditional distributions of random summands, generalizing those obtained in [2].

2. Main Results

Lemma 3.
Let $X_1, X_2, \ldots$ be a sequence of random variables such that, for every $n \ge 2$, there exists a regular conditional probability $\mathcal L(X_n \mid X_1, \ldots, X_{n-1})$ with the constant conditional expectation $a_n := \mathbb E(X_n \mid X_1, \ldots, X_{n-1}) \in (0, +\infty)$. Let $S_n := \sum_{i=1}^{n} X_i$ for $n \in \mathbb N$, $S_0 := 0$, and let N be an $\mathbb N_0 := \mathbb N \cup \{0\}$-valued r.v., independent of $\{X_1, X_2, \ldots\}$, with

$$A := \mathbb E S_N = \sum_{n=1}^{\infty} a_n\, \mathbb P(N \ge n) < +\infty.$$
Then the characteristic function of $\mathcal L^e(S_N)$ is

$$f^e_{S_N}(t) = \sum_{m=1}^{\infty} \mathbb P(M = m)\, \mathbb E\Big( e^{itS_{m-1}} \cdot f^e_m(t \mid X_1, \ldots, X_{m-1}) \Big), \quad t \in \mathbb R, \qquad (11)$$

where M is an $\mathbb N$-valued r.v. with

$$\mathbb P(M = m) = \frac{a_m}{A}\, \mathbb P(N \ge m), \quad m \in \mathbb N, \qquad (12)$$
and $f^e_n(t \mid x_1, \ldots, x_{n-1})$ is the characteristic function of the equilibrium transform $\mathcal L^e(X_n \mid X_1 = x_1, \ldots, X_{n-1} = x_{n-1})$ of the conditional distribution $\mathcal L(X_n \mid X_1 = x_1, \ldots, X_{n-1} = x_{n-1})$. In terms of (conditional) distribution functions,

$$F^e_{S_N}(x) = \sum_{m=1}^{\infty} \mathbb P(M = m) \int_{\mathbb R^{m-1}} F^e_m(x - x_1 - \cdots - x_{m-1} \mid x_1, \ldots, x_{m-1})\, dF(x_1, \ldots, x_{m-1}),$$

where $F(x_1, \ldots, x_m)$ denotes the joint d.f. of $X_1, \ldots, X_m$ and $F^e_m(x \mid x_1, \ldots, x_{m-1})$ denotes the conditional d.f. of $\mathcal L^e(X_m \mid X_1 = x_1, \ldots, X_{m-1} = x_{m-1})$, $m \in \mathbb N$. Here, and in what follows, we assume that $f^e_m(t \mid X_1, \ldots, X_{m-1})$ and $F^e_m(x \mid x_1, \ldots, x_{m-1})$ for $m = 1$ denote the unconditional ch.f. and d.f. of $\mathcal L^e(X_1)$. Similar notation will be used for other characteristics of distributions.
Remark 1.
If $X_1, X_2, \ldots$ are independent, then (11)–(12) reduce to the single summand property of the equilibrium transform (see Equation (30) in [3]).
Remark 2.
If all $X_n \ge 0$ a.s. and M is independent of $\{X_1, X_2, \ldots\}$, then (11)–(12) can be expressed in terms of random variables as

$$(S_N)^e \stackrel{d}{=} S_{M-1} + Z_M,$$

where the sequence $\{Z_1, Z_2, \ldots\}$ is independent of M and the conditional distribution of $Z_n$ given $X_1, \ldots, X_{n-1}$ coincides with $\mathcal L^e(X_n \mid X_1, \ldots, X_{n-1})$.
Proof. 
According to ([3], Lemma 2), for every $t \in \mathbb R$ and $n \in \mathbb N$ we have

$$\prod_{k=1}^{n} e^{itX_k} - 1 = \sum_{k=1}^{n} \big( e^{itX_k} - 1 \big) \prod_{j=1}^{k-1} e^{itX_j},$$

where $\prod_{j=1}^{0} \equiv 1$. By applying (5) twice, we obtain for $t \ne 0$

$$\begin{aligned}
f^e_{S_N}(t) = \frac{f_{S_N}(t) - 1}{t f'_{S_N}(0)}
&= \frac{1}{itA} \sum_{n=1}^{\infty} \mathbb P(N = n)\, \mathbb E\left( \prod_{k=1}^{n} e^{itX_k} - 1 \right) \\
&= \sum_{n=1}^{\infty} \mathbb P(N = n) \sum_{k=1}^{n} \mathbb E\left( \frac{e^{itX_k} - 1}{itA} \prod_{j=1}^{k-1} e^{itX_j} \right) \\
&= \sum_{k=1}^{\infty} \frac{a_k}{A}\, \mathbb P(N \ge k)\, \mathbb E\left( \frac{e^{itX_k} - 1}{it a_k} \prod_{j=1}^{k-1} e^{itX_j} \right) \\
&= \sum_{k=1}^{\infty} \mathbb P(M = k)\, \mathbb E\left( e^{itS_{k-1}}\, \mathbb E\left( \frac{e^{itX_k} - 1}{it a_k} \,\middle|\, X_1, \ldots, X_{k-1} \right) \right) \\
&= \sum_{k=1}^{\infty} \mathbb P(M = k)\, \mathbb E\left( e^{itS_{k-1}}\, f^e_k(t \mid X_1, \ldots, X_{k-1}) \right). \qquad \square
\end{aligned}$$
Theorem 1.
Let $X_1, X_2, \ldots$ be a sequence of random variables such that, for every $n \ge 2$, there exists a regular conditional probability $\mathcal L(X_n \mid X_1, \ldots, X_{n-1})$ with the constant conditional expectation $a_n := \mathbb E(X_n \mid X_1, \ldots, X_{n-1}) \in (0, +\infty)$. Let $S_n := \sum_{i=1}^{n} X_i$ for $n \in \mathbb N$, $S_0 := 0$, and let N be an $\mathbb N_0$-valued r.v., independent of $\{X_1, X_2, \ldots\}$, with

$$A := \mathbb E S_N = \sum_{n=1}^{\infty} a_n\, \mathbb P(N \ge n) < +\infty.$$

Let $E \sim \mathrm{Exp}(1)$, $W := S_N / \mathbb E S_N = S_N / A$, and let M be an $\mathbb N$-valued r.v. with

$$\mathbb P(M = m) = \frac{a_m}{A}\, \mathbb P(N \ge m), \quad m \in \mathbb N.$$
Then, for any joint distribution of N and M, we have

$$\zeta_1(W, E) \le 2 A^{-1} \Big( \sup_n \mathbb E|X_n| \cdot \mathbb E|N - M| + D \Big), \qquad (13)$$

where the first term vanishes in the case $N \stackrel{d}{=} M$, and

$$D = \sum_{m \in \mathbb N} \mathbb P(M = m) \int_{\mathbb R^{m-1}} \zeta_1\big( \mathcal L(X_m \mid x_1, \ldots, x_{m-1}),\, \mathcal L^e(X_m \mid x_1, \ldots, x_{m-1}) \big)\, dF(x_1, \ldots, x_{m-1}),$$

where $\mathcal L(X_m \mid x_1, \ldots, x_{m-1})$ and $\mathcal L^e(X_m \mid x_1, \ldots, x_{m-1})$ stand for the short forms of $\mathcal L(X_m \mid X_1 = x_1, \ldots, X_{m-1} = x_{m-1})$ and $\mathcal L^e(X_m \mid X_1 = x_1, \ldots, X_{m-1} = x_{m-1})$, respectively.
Proof. 
By Lemma 1 and the homogeneity of both the Kantorovich distance and the equilibrium transform (see (6) and (10)), we have

$$\zeta_1(W, E) \le 2\, \zeta_1\big(\mathcal L(W), \mathcal L^e(W)\big) = 2 A^{-1}\, \zeta_1\big(\mathcal L(S_N), \mathcal L^e(S_N)\big). \qquad (14)$$

Let us bound $\zeta_1\big(\mathcal L(S_N), \mathcal L^e(S_N)\big)$ from above.
For a given joint distribution $\mathcal L(N, M)$, let $p_{nm} := \mathbb P(N = n, M = m)$, $n \in \mathbb N_0$, $m \in \mathbb N$. Denoting $S_{j,k} := \sum_{i=j}^{k} X_i$ for $j \le k$, writing $F_m(x \mid x_1, \ldots, x_{m-1})$ and $F^e_m(x \mid x_1, \ldots, x_{m-1})$ for the conditional d.f.’s of $\mathcal L(X_m \mid X_1 = x_1, \ldots, X_{m-1} = x_{m-1})$ and $\mathcal L^e(X_m \mid X_1 = x_1, \ldots, X_{m-1} = x_{m-1})$, respectively, $m \in \mathbb N$, and using Lemma 3 together with the representation of the Kantorovich distance between (signed) measures as the area between their distribution functions, we obtain

$$\begin{aligned}
\zeta_1\big(\mathcal L(S_N), \mathcal L^e(S_N)\big)
&= \int_{\mathbb R} \big| F_{S_N}(x) - F^e_{S_N}(x) \big|\, dx \\
&= \int_{\mathbb R} \Big| F_{S_N}(x) - \sum_{m=1}^{\infty} \mathbb P(M = m) \int_{\mathbb R^{m-1}} F^e_m(x - x_1 - \cdots - x_{m-1} \mid x_1, \ldots, x_{m-1})\, dF(x_1, \ldots, x_{m-1}) \Big|\, dx \\
&= \int_{\mathbb R} \Big| \sum_{n \in \mathbb N_0} \sum_{m \in \mathbb N} p_{nm} \Big( F_{S_n}(x) - \int_{\mathbb R^{m-1}} F^e_m(x - x_1 - \cdots - x_{m-1} \mid x_1, \ldots, x_{m-1})\, dF(x_1, \ldots, x_{m-1}) \Big) \Big|\, dx
\le \sum_{n \in \mathbb N_0} \sum_{m \in \mathbb N} p_{nm}\, I_{nm},
\end{aligned}$$

where

$$I_{nm} = \int_{\mathbb R} \Big| F_{S_n}(x) - \int_{\mathbb R^{m-1}} F^e_m(x - x_1 - \cdots - x_{m-1} \mid x_1, \ldots, x_{m-1})\, dF(x_1, \ldots, x_{m-1}) \Big|\, dx.$$
For the summands with $n < m$, by Tonelli’s theorem (and the change of variable $x \mapsto x + x_1 + \cdots + x_n$ in the inner integral) we have

$$\begin{aligned}
I_{nm} &\le \int_{\mathbb R^{n}} \int_{\mathbb R} \Big| \mathbb 1\{x_1 + \cdots + x_n < x\} - \int_{\mathbb R^{m-1-n}} F^e_m(x - x_1 - \cdots - x_{m-1} \mid x_1, \ldots, x_{m-1})\, dF(x_{n+1}, \ldots, x_{m-1} \mid x_1, \ldots, x_n) \Big|\, dx\, dF(x_1, \ldots, x_n) \\
&= \int_{\mathbb R^{n}} \int_{\mathbb R} \Big| \mathbb 1\{0 < x\} - \int_{\mathbb R^{m-1-n}} F^e_m(x - x_{n+1} - \cdots - x_{m-1} \mid x_1, \ldots, x_{m-1})\, dF(x_{n+1}, \ldots, x_{m-1} \mid x_1, \ldots, x_n) \Big|\, dx\, dF(x_1, \ldots, x_n),
\end{aligned}$$
where $F(x_{n+1}, \ldots, x_{m-1} \mid x_1, \ldots, x_n)$ stands for the conditional joint d.f. of $(X_{n+1}, \ldots, X_{m-1})$ given that $X_1 = x_1, \ldots, X_n = x_n$. By adding and subtracting

$$\int_{\mathbb R^{m-1-n}} F_m(x - x_{n+1} - \cdots - x_{m-1} \mid x_1, \ldots, x_{m-1})\, dF(x_{n+1}, \ldots, x_{m-1} \mid x_1, \ldots, x_n)$$
under the modulus sign and using further the triangle inequality, we obtain

$$I_{nm} \le \int_{\mathbb R^{n}} \zeta_1\big( \delta_0,\, \mathcal L(S_{n+1, m} \mid x_1, \ldots, x_n) \big)\, dF(x_1, \ldots, x_n) + \int_{\mathbb R^{m-1}} \zeta_1\big( \mathcal L(X_m \mid x_1, \ldots, x_{m-1}),\, \mathcal L^e(X_m \mid x_1, \ldots, x_{m-1}) \big)\, dF(x_1, \ldots, x_{m-1}),$$

where $\delta_0$ is the Dirac measure concentrated at zero.
For the case $n \ge m$, by Tonelli’s theorem we have

$$I_{nm} \le \int_{\mathbb R^{m-1}} \int_{\mathbb R} \big| F_{S_{m,n}}(x \mid x_1, \ldots, x_{m-1}) - F^e_m(x \mid x_1, \ldots, x_{m-1}) \big|\, dx\, dF(x_1, \ldots, x_{m-1}),$$

where $F_{S_{m,n}}(x \mid x_1, \ldots, x_{m-1})$ stands for the conditional d.f. $F_{S_{m,n}}(x \mid X_1 = x_1, \ldots, X_{m-1} = x_{m-1})$.
By adding and subtracting $F_m(x \mid x_1, \ldots, x_{m-1})$ in the integrand under the modulus sign and using further the triangle inequality, we obtain

$$I_{nm} \le \int_{\mathbb R^{m}} \zeta_1\big( \delta_0,\, \mathcal L(S_{m+1, n} \mid x_1, \ldots, x_m) \big)\, dF(x_1, \ldots, x_m) + \int_{\mathbb R^{m-1}} \zeta_1\big( \mathcal L(X_m \mid x_1, \ldots, x_{m-1}),\, \mathcal L^e(X_m \mid x_1, \ldots, x_{m-1}) \big)\, dF(x_1, \ldots, x_{m-1}).$$
Combining both the $n < m$ and $n \ge m$ cases and using the fact that $\zeta_1(\delta_0, \mathcal L(X)) = \mathbb E|X|$, we get

$$\zeta_1\big(\mathcal L(S_N), \mathcal L^e(S_N)\big) \le \sum_{n \in \mathbb N_0,\, m \in \mathbb N} p_{nm}\, \mathbb E\Big| \sum_{i = (n \wedge m) + 1}^{n \vee m} X_i \Big| + \sum_{m \in \mathbb N} \mathbb P(M = m) \int_{\mathbb R^{m-1}} \zeta_1\big( \mathcal L(X_m \mid x_1, \ldots, x_{m-1}),\, \mathcal L^e(X_m \mid x_1, \ldots, x_{m-1}) \big)\, dF(x_1, \ldots, x_{m-1}), \qquad (15)$$

where the first sum can be bounded from above by

$$\sup_i \mathbb E|X_i| \cdot \sum_{n \in \mathbb N_0,\, m \in \mathbb N} p_{nm}\, |n - m| = \sup_i \mathbb E|X_i| \cdot \mathbb E|N - M|.$$
Substituting the latter bound into (14) yields (13). If $N \stackrel{d}{=} M$, then we take a comonotonic pair $(N, N)$ as $(N, M)$, which eliminates the first term on the r.-h.s. of (15). □
Remark 3.
Theorem 1 reduces to ([2], Theorem 3.1) in the case of nonnegative $\{X_n\}$ and to ([3], Theorem 6) in the case of independent $\{X_n\}$.
If both expectations $\mathbb E N$ and $\mathbb E M$ are finite, then $\mathbb E|N - M|$ in (13) can be replaced with $\zeta_1(N, M)$ due to the dual representation of the $\zeta_1$-metric as

$$\zeta_1(N, M) = \inf\big\{ \mathbb E|N' - M'| : \mathcal L(N', M') \text{ with } N' \stackrel{d}{=} N,\ M' \stackrel{d}{=} M \big\}.$$
Moreover, if N and M are stochastically ordered (that is, $F_N(x) \le F_M(x)$ for all $x \in \mathbb R$, or vice versa), then

$$\zeta_1(N, M) = \int_{\mathbb R} \big| F_N(x) - F_M(x) \big|\, dx = \Big| \int_{\mathbb R} \big( F_N(x) - F_M(x) \big)\, dx \Big| = \big| \mathbb E N - \mathbb E M \big|.$$
If, in addition, all $\mathbb E X_n = a$, then

$$\mathbb E M = \sum_{m=1}^{\infty} \frac{m}{\mathbb E N}\, \mathbb P(N \ge m) = \frac{1}{\mathbb E N} \sum_{n=1}^{\infty} \sum_{m=1}^{n} m\, \mathbb P(N = n) = \frac{1}{\mathbb E N} \cdot \mathbb E \frac{N(N+1)}{2} = \frac{1}{2} \left( \frac{\mathbb E N^2}{\mathbb E N} + 1 \right),$$
and the first term on the r.-h.s. of (13) can be bounded from above as

$$2 A^{-1} \sup_n \mathbb E|X_n| \cdot \big| \mathbb E N - \mathbb E M \big| = \frac{2}{a\, \mathbb E N} \sup_n \mathbb E|X_n| \cdot \Big| \frac{1}{2}\Big( \frac{\mathbb E N^2}{\mathbb E N} + 1 \Big) - \mathbb E N \Big| = \frac{1}{a} \sup_n \mathbb E|X_n| \cdot \Big| \frac{\mathbb E N^2}{(\mathbb E N)^2} + \frac{1}{\mathbb E N} - 2 \Big|.$$
Hence, we arrive at the following.
Corollary 1.
Let, in addition to the conditions of Theorem 1, $\mathbb E X_n = a$ for all $n \in \mathbb N$, and let the r.v.’s N and M be stochastically ordered with finite expectations. Then

$$\zeta_1(W, E) \le \frac{1}{a} \sup_n \mathbb E|X_n| \cdot \Big| \frac{\mathbb E N^2}{(\mathbb E N)^2} + \frac{1}{\mathbb E N} - 2 \Big| + \frac{2 D}{a\, \mathbb E N}.$$
Remark 4.
If $N \sim \mathrm{Geom}(p)$, $p \in (0,1)$, that is, $\mathbb P(N = n) = (1-p)^{n-1} p$, $n \in \mathbb N$, then

$$A = \mathbb E S_N = \sum_{n=1}^{\infty} a_n\, \mathbb P(N \ge n) = \sum_{n=1}^{\infty} a_n (1-p)^{n-1} = \frac{1}{p}\, \mathbb E\, a_N.$$
In this case, for every $h \in \mathrm{Lip}_1$ with $\mathbb E|h(M)| < \infty$ we have

$$\mathbb E h(M) = \sum_{m=1}^{\infty} h(m)\, \frac{a_m}{A}\, (1-p)^{m-1} = \frac{\mathbb E\, h(N)\, a_N}{\mathbb E\, a_N}.$$
Therefore, by the Cauchy–Bunyakovsky–Schwarz inequality, we have

$$\zeta_1(N, M) = \sup_{\substack{h \in \mathrm{Lip}_1 \\ h(0) = 0}} \big| \mathbb E h(M) - \mathbb E h(N) \big| = \sup_{\substack{h \in \mathrm{Lip}_1 \\ h(0) = 0}} \Big| \mathbb E \Big( \frac{a_N}{\mathbb E\, a_N} - 1 \Big) h(N) \Big| \le \mathbb E \Big( \Big| \frac{a_N}{\mathbb E\, a_N} - 1 \Big| \cdot N \Big) \le \frac{\sqrt{\mathrm{Var}\, a_N \cdot \mathbb E N^2}}{\mathbb E\, a_N} = \frac{\sqrt{(2 - p)\, \mathrm{Var}\, a_N}}{p\, \mathbb E\, a_N} < \frac{\sqrt{2\, \mathrm{Var}\, a_N}}{p\, \mathbb E\, a_N}.$$
Thus, the first term on the r.-h.s. of (13) can be bounded from above as

$$2 A^{-1} \sup_n \mathbb E|X_n| \cdot \zeta_1(N, M) \le 2\sqrt{2}\, \sup_n \mathbb E|X_n| \cdot \frac{\sqrt{\mathrm{Var}\, a_N}}{(\mathbb E\, a_N)^2}.$$

This means that in the case $\sup_n \mathbb E|X_n| < \infty$ and $\inf_n a_n > 0$, the first term on the r.-h.s. of (13) is, at most, of order $O\big( \sqrt{\mathrm{Var}\, a_N} \big)$ as $p \to 0+$.
If $N \sim \mathrm{Geom}(p)$, $p \in (0,1)$, and $\mathbb E X_n = a$ for all $n \in \mathbb N$, then $M \sim \mathrm{Geom}(p)$, and thus $\zeta_1(N, M) = 0$. Therefore, if $\sup_n \mathbb E|X_n| < \infty$, then the first term on the r.-h.s. of (13) vanishes.
If $N + 1 \sim \mathrm{Geom}(p)$, $p \in (0,1)$, and $\mathbb E X_n = a$ for all $n \in \mathbb N$, then $M \sim \mathrm{Geom}(p)$ as well, and thus $\zeta_1(N, M) = 1$.
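The geometric case of the distribution of M is easy to verify numerically (a sketch; the parameter values are our choice): with constant $a_n = a$ and $N \sim \mathrm{Geom}(p)$, the definition $\mathbb P(M = m) = (a_m / A)\, \mathbb P(N \ge m)$ with $A = a/p$ returns exactly the $\mathrm{Geom}(p)$ probabilities, so $\zeta_1(N, M) = 0$.

```python
import numpy as np

p, a = 0.2, 1.0
m = np.arange(1, 200)

P_N_ge_m = (1.0 - p) ** (m - 1)     # P(N >= m) for N ~ Geom(p) on {1, 2, ...}
A = a / p                           # A = E S_N = a * E N
P_M = (a / A) * P_N_ge_m            # P(M = m) = (a_m / A) * P(N >= m)

P_geom = p * (1.0 - p) ** (m - 1)   # Geom(p) probabilities
print(np.allclose(P_M, P_geom))     # True: M ~ Geom(p), hence zeta_1(N, M) = 0
```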
Next, let us simplify the second term D in (13).
Corollary 2.
Let, in addition to the conditions of Theorem 1, $b_n := \mathbb E X_n^2 < \infty$ for every $n \in \mathbb N$, and let the r.v. M be independent of $\{X_1, X_2, \ldots\}$. Then

$$\zeta_1(W, E) \le A^{-1} \left( 2 \sup_n \mathbb E|X_n| \cdot \mathbb E|N - M| + \mathbb E\Big( \frac{b_M}{a_M} - a_M + 2 a_M\, \mathbb P(X_M \le 0 \mid M) \Big) \right).$$
Proof. 
By Lemma 2, we have

$$\begin{aligned}
D &\le \sum_{m \in \mathbb N} \mathbb P(M = m) \int_{\mathbb R^{m-1}} \left( \frac{\mathbb E(X_m^2 \mid x_1, \ldots, x_{m-1})}{2 a_m} - \frac{a_m}{2} + a_m\, \mathbb P(X_m \le 0 \mid x_1, \ldots, x_{m-1}) \right) dF(x_1, \ldots, x_{m-1}) \\
&= \sum_{m \in \mathbb N} \mathbb P(M = m) \left( \frac{\mathbb E X_m^2}{2 a_m} - \frac{a_m}{2} + a_m\, \mathbb P(X_m \le 0) \right) = \frac{1}{2}\, \mathbb E\Big( \frac{b_M}{a_M} - a_M + 2 a_M\, \mathbb P(X_M \le 0 \mid M) \Big),
\end{aligned}$$

which, together with (13), proves the statement of the corollary. □
Recall that a nonnegative r.v. X with finite $\mathbb E X > 0$ is said to be new better than used in expectation (NBUE) if

$$\mathbb E X \ge \mathbb E(X - t \mid X > t) \quad \text{for all } t \ge 0,$$

and new worse than used in expectation (NWUE) if

$$\mathbb E X \le \mathbb E(X - t \mid X > t) \quad \text{for all } t \ge 0.$$
Using Tonelli’s theorem, it can be verified that X is NBUE if and only if X stochastically dominates its equilibrium transform $X^e$, that is, $F(x) \le F^e(x)$ for all $x \ge 0$. Similarly, X is NWUE if and only if $X^e$ stochastically dominates X. We will show that the same results hold true if we extend both the NBUE and NWUE notions to the case of r.v.’s without support constraints.
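The stochastic-comparison characterization (X is NBUE iff $F(x) \le F^e(x)$ everywhere, NWUE iff the reverse inequality holds) can be checked on closed-form examples; both examples below are our choices, not taken from the paper:

```python
import numpy as np

# Uniform(0,1): F(x) = x, E X = 1/2, F^e(x) = 2x - x**2.  NBUE: F <= F^e.
x = np.linspace(0.0, 1.0, 1001)
print(np.all(x <= 2*x - x**2 + 1e-12))            # True: Uniform(0,1) is NBUE

# The hyperexponential mixture 0.5*Exp(1) + 0.5*Exp(5) has decreasing failure
# rate and hence is NWUE: F >= F^e.  Here E X = 0.5*1 + 0.5*(1/5) = 0.6 and
# 1 - F^e(t) = (0.5*exp(-t) + 0.1*exp(-5t)) / 0.6.
t = np.linspace(0.0, 10.0, 2001)
F_mix = 1.0 - 0.5*np.exp(-t) - 0.5*np.exp(-5.0*t)
Fe_mix = 1.0 - (0.5*np.exp(-t) + 0.1*np.exp(-5.0*t)) / 0.6
print(np.all(F_mix >= Fe_mix - 1e-12))            # True
```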
Definition 1.
We say that a (possibly alternating) r.v. X with d.f. F and $\mathbb E X \in (0, +\infty)$ is NBUE if $F(x) \le F^e(x)$ for all $x \in \mathbb R$, where $F^e$ is the equilibrium transform of F. Similarly, we say that the r.v. X with d.f. F and $\mathbb E X \in (0, +\infty)$ is NWUE (new worse than used in expectation) if $F(x) \ge F^e(x)$ for all $x \in \mathbb R$.
Theorem 2.
A r.v. X with finite $\mathbb E X > 0$ is NBUE if and only if

$$\mathbb E X \ge \mathbb E(X - t \mid X > t) \quad \text{for all } t \in [0, \operatorname{ess\,sup} X). \qquad (16)$$

Moreover, (16) implies that $X > 0$ a.s. The r.v. X with finite $\mathbb E X > 0$ is NWUE if and only if

$$\mathbb E X \le \mathbb E(X - t \mid X > t) \quad \text{for all } t \in [0, \operatorname{ess\,sup} X). \qquad (17)$$
Proof. 
By Tonelli’s theorem, for every $t < \operatorname{ess\,sup} X$ we have

$$\mathbb E(X - t \mid X > t) = \frac{1}{\mathbb P(X > t)} \int_{(t, +\infty)} (x - t)\, dF(x) = \frac{1}{\mathbb P(X > t)} \int_{(t, +\infty)} \int_{(t, +\infty)} \mathbb 1\{x > y\}\, dy\, dF(x) = \frac{1}{\mathbb P(X > t)} \int_{(t, +\infty)} \int_{(t, +\infty)} \mathbb 1\{x > y\}\, dF(x)\, dy = \frac{1}{\mathbb P(X > t)} \int_{(t, +\infty)} \big( 1 - F(y) \big)\, dy. \qquad (18)$$

Note that the same chain of equalities holds true with the event $\{X \ge t\}$ in place of $\{X > t\}$. If $t \in [0, \operatorname{ess\,sup} X)$, then $\int_{(t, +\infty)} (1 - F(y))\, dy = \mathbb E X \cdot (1 - F^e(t))$ and (18) turns into

$$\mathbb E(X - t \mid X > t) = \mathbb E X\, \frac{1 - F^e(t)}{1 - F(t + 0)} \quad \text{for all } t \in [0, \operatorname{ess\,sup} X). \qquad (19)$$
Let X be NBUE, i.e., $F(x) \le F^e(x)$ for all $x \in \mathbb R$. This implies that $1 - F(t + 0) \ge 1 - F^e(t)$ due to the absolute continuity of $F^e$, and hence, with the account of (19), we obtain (16).
Conversely, let (16) hold true. For $t = 0$, we have

$$\mathbb E X^+ \ge \mathbb E X \ge \mathbb E(X \mid X > 0) = \frac{1}{\mathbb P(X > 0)}\, \mathbb E X^+,$$

which is possible if and only if $\mathbb P(X > 0) = 1$, i.e., $X > 0$ a.s. Hence, $F(t) = F^e(t) = 0$ for $t \le 0$. For positive t, inequality (16) together with Equation (19) yields $1 - F(t) \ge 1 - F^e(t)$. Therefore, X is NBUE.
Let X be NWUE, i.e., $F(x) \ge F^e(x)$ for all $x \in \mathbb R$. This yields $1 - F(t + 0) \le 1 - F^e(t)$, and hence, with the account of (19), we obtain (17).
Conversely, let (17) hold true. For $t \le 0$ we have $F(t) \ge 0 \ge F^e(t)$, since $\mathcal L^e(X)$ has non-positive density on the negative half-line. Finally, (17) and (19) yield $F(t) \ge F^e(t)$ for positive t. □
If X is NBUE or NWUE with $\mathbb E X > 0$ and $\mathbb E X^2 < \infty$, then

$$\zeta_1\big(\mathcal L(X), \mathcal L^e(X)\big) = \int_{\mathbb R} \big| F(x) - F^e(x) \big|\, dx = \Big| \int_{\mathbb R} \big( F(x) - F^e(x) \big)\, dx \Big| = \Big| \int_{\mathbb R} x\, dF^e(x) - \mathbb E X \Big| = \Big| \frac{\mathbb E X^2}{2\, \mathbb E X} - \mathbb E X \Big|,$$

where the last equality follows from (7). Hence, if for all $m \in \mathbb N$ and $x_1, x_2, \ldots \in \mathbb R$ the conditional distribution $\mathcal L(X_m \mid x_1, \ldots, x_{m-1})$ is NBUE or NWUE, then the second term D on the r.-h.s. of (13) takes the form

$$D = \sum_{m \in \mathbb N} \mathbb P(M = m) \int_{\mathbb R^{m-1}} \Big| \frac{\mathbb E(X_m^2 \mid x_1, \ldots, x_{m-1})}{2 a_m} - a_m \Big|\, dF(x_1, \ldots, x_{m-1}). \qquad (20)$$
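The identity $\zeta_1(\mathcal L(X), \mathcal L^e(X)) = |\mathbb E X^2 / (2\, \mathbb E X) - \mathbb E X|$ for NBUE/NWUE laws is easy to confirm numerically; the Uniform(0,1) example below (which is NBUE) is our choice:

```python
import numpy as np

# Uniform(0,1): F(x) = x, F^e(x) = 2x - x**2, E X = 1/2, E X^2 = 1/3.
x = np.linspace(0.0, 1.0, 200001)
area = np.abs(x - (2*x - x**2)).mean()    # integral of |F - F^e| over [0, 1]

exact = abs((1/3) / (2 * 1/2) - 1/2)      # |E X^2 / (2 E X) - E X| = 1/6
print(area, exact)                        # both approx 0.16667
```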
If M is independent of $\{X_1, X_2, \ldots\}$, then the latter expression can be bounded from above with the help of the conditional Jensen's inequality:

$$D \le \sum_{m \in \mathbb N} \mathbb P(M = m)\, \mathbb E\Big| \frac{X_m^2}{2 a_m} - a_m \Big| = \sum_{m \in \mathbb N} \mathbb P(M = m)\, \frac{\mathbb E\big| X_m^2 - 2 a_m^2 \big|}{2 a_m} = \mathbb E\, \frac{\big| X_M^2 - 2 a_M^2 \big|}{2 a_M} \le \mathbb E\, \frac{b_M + 2 a_M^2}{2 a_M},$$

where we use the notation $b_m = \mathbb E X_m^2$ as before. If all the r.v.’s $\{M, X_1, X_2, \ldots\}$ are independent, then (20) may be simplified as

$$D = \sum_{m \in \mathbb N} \mathbb P(M = m)\, \Big| \frac{\mathbb E X_m^2}{2 a_m} - a_m \Big| = \mathbb E\, \frac{\big| b_M - 2 a_M^2 \big|}{2 a_M}.$$
Hence, we arrive at the following.
Corollary 3.
Let, in addition to the conditions of Theorem 1, $b_n = \mathbb E X_n^2 < \infty$ for every $n \in \mathbb N$, let the conditional distributions $\mathcal L(X_n \mid x_1, \ldots, x_{n-1})$ be NBUE or NWUE for all $n \in \mathbb N$ and $x_1, x_2, \ldots \in \mathbb R$, and let the r.v. M be independent of $\{X_1, X_2, \ldots\}$. Then

$$\zeta_1(W, E) \le A^{-1} \left( 2 \sup_n \mathbb E|X_n| \cdot \mathbb E|N - M| + \mathbb E\, \frac{\big| X_M^2 - 2 a_M^2 \big|}{a_M} \right) \le A^{-1} \left( 2 \sup_n \mathbb E|X_n| \cdot \mathbb E|N - M| + \mathbb E\, \frac{b_M + 2 a_M^2}{a_M} \right).$$

Moreover, if all the r.v.’s $\{M, X_1, X_2, \ldots\}$ are independent, then

$$\zeta_1(W, E) \le A^{-1} \left( 2 \sup_n \mathbb E|X_n| \cdot \mathbb E|N - M| + \mathbb E\, \frac{\big| b_M - 2 a_M^2 \big|}{a_M} \right).$$
Corollary 3 reduces to ([2], Corollary 3.1) in the case where all $X_n \ge 0$ are independent, all $\mathbb E X_n = 1$, and the r.v.’s N and M are stochastically ordered (cf. Corollary 1).

Author Contributions

Conceptualization, I.S.; methodology, I.S. and M.T.; formal analysis, I.S. and M.T.; investigation, I.S. and M.T.; writing—original draft preparation, I.S. and M.T.; writing—review and editing, I.S. and M.T.; supervision, I.S.; funding acquisition, I.S. All authors have read and agreed to the published version of the manuscript.

Funding

The work was supported by the Ministry of Science and Higher Education of the Russian Federation, project No. 075-15-2020-799.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
r.v.: random variable
i.i.d.: independent identically distributed
d.f.: distribution function
ch.f.: characteristic function
a.s.: almost surely
a.c.: absolute continuity, absolutely continuous
w.r.t.: with respect to
r.-h.s.: right-hand side

References

1. Kalashnikov, V.V. Geometric Sums: Bounds for Rare Events with Applications: Risk Analysis, Reliability, Queueing; Mathematics and Its Applications; Springer: Dordrecht, The Netherlands, 1997.
2. Peköz, E.A.; Röllin, A. New rates for exponential approximation and the theorems of Rényi and Yaglom. Ann. Probab. 2011, 39, 587–608.
3. Shevtsova, I.; Tselishchev, M. A Generalized Equilibrium Transform with Application to Error Bounds in the Rényi Theorem with No Support Constraints. Mathematics 2020, 8, 577.
4. Zolotarev, V.M. Probability metrics. Theory Probab. Appl. 1984, 28, 278–302.