
5th-Order Multivariate Edgeworth Expansions for Parametric Estimates

Callaghan Innovation (formerly Industrial Research Ltd.), Lower Hutt 5011, New Zealand
Mathematics 2024, 12(6), 905; https://doi.org/10.3390/math12060905
Submission received: 30 January 2024 / Revised: 4 March 2024 / Accepted: 8 March 2024 / Published: 19 March 2024
(This article belongs to the Special Issue Advances in Applied Probability and Statistical Inference)

Abstract
The only cases where exact distributions of estimates are known are for samples from exponential families, and then only for special functions of the parameters. Statistical inference was therefore traditionally based on the asymptotic normality of estimates. To improve on this we need the Edgeworth expansion for the distribution of the standardised estimate. This is an expansion in $n^{-1/2}$ about the normal distribution, where $n$ is typically the sample size. The first few terms of this expansion were originally given for the special case of a sample mean. In earlier work we derived it for any standard estimate, hugely expanding its application. We define an estimate $\hat w$ of an unknown vector $w$ in $R^p$ as a standard estimate if $E\hat w \to w$ as $n \to \infty$, and for $r \ge 1$ the $r$th-order cumulants of $\hat w$ have magnitude $n^{1-r}$ and can be expanded in $n^{-1}$. Here we present a significant extension: we give the expansion of the distribution of any smooth function of $\hat w$, say $t(\hat w)$ in $R^q$, giving its distribution to $O(n^{-5/2})$. We do this by showing that $t(\hat w)$ is a standard estimate of $t(w)$. This provides far more accurate approximations for the distribution of $t(\hat w)$ than its asymptotic normality.
MSC:
60B12; 60B20; 60E05; 62E20; 62F12; 62G86; 62H10

1. Introduction and Summary

Suppose that $\hat w$ is a standard or Type A estimate of an unknown $w$ in $R^p$ with respect to a given parameter $n$. That is, $E\hat w \to w$ as $n \to \infty$, and for $r \ge 1$ its $r$th-order cumulants have magnitude $n^{1-r}$ and can be expanded as
$$\bar k^{1\cdots r} = \kappa(\hat w^{i_1}, \ldots, \hat w^{i_r}) = \sum_{e=r-1}^{\infty} n^{-e}\, \bar k_e^{1\cdots r} \quad \text{for } 1 \le i_1, \ldots, i_r \le p, \tag{1}$$
where the cumulant coefficients $\bar k_e^{1\cdots r} = k_e^{i_1\cdots i_r}$ do not depend on $n$, or at least are bounded as $n \to \infty$. So $\bar k_0^1 = w^{i_1}$. For example, (1) holds for $\hat w$ a function of a sample mean. We show that if $t(\hat w)$ is a smooth function of a standard estimate $\hat w$, then it is a standard estimate of $t(w)$. We establish this for unbiased $\hat w$ in Theorem 2, and for biased $\hat w$ in Theorem 3. More generally, we define $\hat w$ as a Type B estimate if $E\hat w \to w$ as $n \to \infty$, and for $r \ge 1$,
$$\bar k^{1\cdots r} = \sum_{d=2r-2}^{\infty} n^{-d/2}\, \bar b_d^{1\cdots r} \quad \text{for } 1 \le i_1, \ldots, i_r \le p, \qquad \bar b_d^{1\cdots r} = b_d^{i_1\cdots i_r}.$$
For example, this type arises when considering one-sided confidence regions. If $t(\hat w)$ is a smooth function of a Type B estimate, then it is a Type B estimate of $t(w)$. So for a Type A estimate, $\bar b_d^{1\cdots r}$ is $\bar k_e^{1\cdots r}$ for $d = 2e$ and 0 for $d$ odd. $n$ is typically the sample size, or the minimum sample size if there is more than one sample.
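The Type A condition can be checked directly for a sample mean, where the cumulant expansion terminates after a single term. A minimal sketch in Python (the helper name is ours, not the paper's):

```python
import math

def mean_cumulant(kappa_r, r, n):
    # Cumulants are additive over independent summands and scale as
    # kappa_r(a*X) = a^r * kappa_r(X), so for the mean of n iid copies
    #   kappa_r(Xbar) = n * (1/n)^r * kappa_r(X) = n^(1-r) * kappa_r(X).
    # Hence the only nonzero cumulant coefficient is
    # k_bar_e^{1..r} = kappa_r(X) at e = r - 1, matching (1).
    return n ** (1 - r) * kappa_r

# Exponential(1) has kappa_r(X) = (r-1)!; observe the n^(1-r) decay.
for r in (2, 3, 4):
    print(r, mean_cumulant(math.factorial(r - 1), r, 100))
```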
Section 3 and Section 4 show that a smooth function of $\hat w$, say $t(\hat w)$, is a standard estimate of $t = t(w)$. These sections provide the cumulant coefficients of $t(\hat w)$ in terms of those of $\hat w$ and the derivatives of $t(w)$: Section 3 does this for $\hat w$ unbiased, and Section 4 for $\hat w$ biased. So they can be thought of as chain rules for obtaining the cumulant coefficients of $t(\hat w)$ from those of $\hat w$. We use the notation $Y_n = O(n^{-\gamma})$ to mean that $n^{\gamma} Y_n$ is bounded as $n \to \infty$. We provide the cumulant coefficients required for Edgeworth expansions of $\hat t$ to $O(n^{-5/2})$. Cumulant coefficients up to $O(n^{-1})$ were given in [1]. Cumulant coefficients up to $O(n^{-r/2})$ use the $r$th derivatives of $t(w)$. Section 5 specialises to univariate $t(w)$, with examples. Theorem 3 and Corollary 4 rectify $\bar a_2^{12} = K_2^{j_1 j_2}$ and $a_{22}$ on pages 67 and 59 of [2]. Section 2 extends the shorthand bar notation above and gives the foundation theorem.
We now summarise the expressions for the Edgeworth expansions of $\hat w$ for standard and Type B estimates, in terms of the cumulant coefficients $\bar k_e^{1\cdots r}$ and $\bar b_d^{1\cdots r}$, given in [3,4,5]:
$$\mathrm{Prob}(Y_{nw} \le x) = \sum_{r=0}^{\infty} n^{-r/2} P_r(x), \qquad p_{Y_{nw}}(x) = \sum_{r=0}^{\infty} n^{-r/2} p_r(x), \tag{2}$$
$$\text{where } Y_{nw} = n^{1/2}(\hat w - w - b_1 n^{-1/2}), \quad (b_1)_i = b_1^i, \quad P_0(x) = \Phi_V(x), \tag{3}$$
$$P_r(x) = \tilde B_r\big(e(-\partial/\partial x)\big)\, \Phi_V(x) \quad \text{for } r \ge 1, \tag{4}$$
$$e_j(t) = \sum_{r=1}^{j+2} \bar b_{r+j}^{1\cdots r}\, t_{i_1}\cdots t_{i_r}/r!, \qquad \bar b_{r+j}^{1\cdots r} = b_{r+j}^{i_1\cdots i_r}, \tag{5}$$
$\Phi_V(x)$ is the multivariate normal distribution function with zero mean and covariance $V = (\bar b_2^{12})$, and $\tilde B_r(e)$ is the complete ordinary Bell polynomial of [6]:
$$\tilde B_1(e) = e_1, \quad \tilde B_2(e) = e_2 + e_1^2, \quad \tilde B_3(e) = e_3 + 2e_1 e_2 + e_1^3, \quad \tilde B_4(e) = e_4 + 2e_1 e_3 + e_2^2 + 3e_1^2 e_2 + e_1^4.$$
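The complete ordinary Bell polynomials satisfy the recursion $\tilde B_0 = 1$, $\tilde B_r(e) = \sum_{j=1}^{r} e_j \tilde B_{r-j}(e)$, which reproduces the four polynomials listed above. A small numeric sketch (the function name is ours):

```python
def complete_ordinary_bell(e):
    """Return [B0, B1, ..., Bm] for scalar arguments e = [e1, ..., em],
    using the recursion B_r = sum_{j=1}^{r} e_j * B_{r-j} with B_0 = 1."""
    B = [1.0]
    for r in range(1, len(e) + 1):
        B.append(sum(e[j - 1] * B[r - j] for j in range(1, r + 1)))
    return B

# Check against the closed forms above, e.g. B2 = e2 + e1^2 and
# B4 = e4 + 2 e1 e3 + e2^2 + 3 e1^2 e2 + e1^4.
e1, e2, e3, e4 = 2.0, 3.0, 5.0, 7.0
print(complete_ordinary_bell([e1, e2, e3, e4]))  # [1.0, 2.0, 7.0, 25.0, 88.0]
```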
This equation provides the 5th-order Edgeworth expansion for the distribution of $Y_{nw}$, extending it up to $O(n^{-5/2})$. It is important to note that (5) uses the tensor summation convention of implicitly summing $i_1, \ldots, i_r$ over their range $1, \ldots, p$. For example,
$$\text{for } \partial_i = \partial/\partial x_i \text{ and } \bar\partial_k = \partial_{i_k}, \quad P_1(x) = e_1(-\partial/\partial x)\,\Phi_V(x) = \sum_{r=1}^{3} \bar b_{r+1}^{1\cdots r}\,(-\bar\partial_1)\cdots(-\bar\partial_r)\,\Phi_V(x)/r! = \bar k_1^1(-\bar\partial_1)\Phi_V(x) + \bar k_2^{1\cdots 3}(-\bar\partial_1)(-\bar\partial_2)(-\bar\partial_3)\,\Phi_V(x)/6$$
for a standard estimate. For a standard estimate, $b_1 = 0$ in (3), and the cumulant coefficients needed for $P_r(x), p_r(x)$ of (2) are $\bar k_0^1 = w^{i_1}$,
$$\text{for } r = 0:\ \bar k_1^{12}; \qquad \text{for } r = 1:\ \bar k_1^1,\ \bar k_2^{1\cdots 3}; \qquad \text{for } r = 2:\ \bar k_2^{12},\ \bar k_3^{1\cdots 4}; \tag{6}$$
$$\text{for } r = 3:\ \bar k_2^1,\ \bar k_3^{1\cdots 3},\ \bar k_4^{1\cdots 5}; \qquad \text{for } r = 4:\ \bar k_3^{12},\ \bar k_4^{1\cdots 4},\ \bar k_5^{1\cdots 6}. \tag{7}$$
Therefore, to derive the 5th-order Edgeworth expansion for the distribution of $n^{1/2}(t(\hat w) - t(w))$ for $\hat w$ a standard estimate, we simply replace the coefficients in (6) and (7) in the expression for $P_r(x)$, $r \le 4$, with those corresponding to $t(\hat w)$, as provided in Section 3, Section 4 and Section 5.
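In the simplest univariate case ($p = q = 1$, $t(w) = w$, $\hat w = \bar X$), the $r = 1$ term above reduces to the classical one-term Edgeworth correction driven by the skewness. A sketch under that assumption (function names are ours):

```python
import math

def Phi(x):
    # standard normal distribution function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def phi(x):
    # standard normal density
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def edgeworth1_cdf(x, gamma1, n):
    # One-term Edgeworth approximation for the standardised mean
    # Y = sqrt(n) (Xbar - mu) / sigma of an iid sample with skewness gamma1:
    #   Prob(Y <= x) ~ Phi(x) - n^{-1/2} (gamma1 / 6) (x^2 - 1) phi(x).
    return Phi(x) - gamma1 * (x * x - 1.0) * phi(x) / (6.0 * math.sqrt(n))

# Exponential data (gamma1 = 2), n = 100: the skewness correction shifts
# the CDF at 0 away from the normal limit of 0.5.
print(edgeworth1_cdf(0.0, 2.0, 100))
```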
Equation (9) of [3] provides $P_r(x)$ for the more general case where $P_0(x)$ is the distribution function of a $Y$ in $R^p$ that depends on $n$ but is asymptotic to $\Phi_V(x)$ and has a Type B expansion. One can choose $P_0(x)$ so that the number of terms in each $P_r(x)$ is greatly reduced: see Withers and Nadarajah [7,8]. When $\hat w$ is lattice, further terms need to be added: see, for example, Chapter 5 of [9] and [10], and, for the density of $Y_{nw}$, p. 211 of [11], Section 5 of [12], and Section 6 of [13]. Corollary 1 of [3] gives the tilted Edgeworth expansion for $t(\hat w)$, sometimes called the saddlepoint approximation, or the small-sample expansion, as it is a series in $n^{-1}$, not just $n^{-1/2}$. It is very useful in the tails of the distribution, where Edgeworth expansions perform poorly. Cumulant coefficients are also needed for bias reduction, Bayesian inference, confidence regions and power: see, for example, [7,8,14,15,16,17,18]. For a historical overview of Edgeworth expansions, refer to Section 7.
In summary, this paper gives high-order expansions for the distribution of a wide range of estimates, by determining the cumulant coefficients required for any smooth function of a standard estimate. This approach offers unprecedented accuracy for these distributions and eliminates the necessity for simulation methods.

2. Foundations

Considering $w = (w_1, \ldots, w_p)$ in $R^p$ and an estimate $\hat w$, assume that $E\hat w \to w$ as $n \to \infty$ and that for $r \ge 1$, its $r$th-order cumulants have magnitude $n^{1-r}$. Given $i_1, \ldots, i_r$ in $\{1, 2, \ldots, p\}$, we write these cumulants in shorthand as
$$\bar k^{1\cdots r} = k^{i_1\cdots i_r} = \kappa(\hat w^{i_1}, \ldots, \hat w^{i_r}) = O(n^{1-r}) \ \text{ as } n \to \infty. \tag{8}$$
For example, if $\hat w = \bar X$ is the mean of a random sample of size $n$, then (8) holds, since $\bar k^{1\cdots r} = n^{1-r}\kappa(X^{i_1}, \ldots, X^{i_r})$, where $X^i$ is the $i$th component of $X$. By Theorem 1, Equation (8) also holds if $\hat w$ is a smooth function of one or more sample means. Let $t: R^p \to R^q$ be a smooth function in a neighbourhood of $w$, with $j$th component $t^j = t^j(w)$, $j = 1, \ldots, q$, and finite partial derivatives
$$\bar t_{s\cdots r}^{\,k} = t_{i_s\cdots i_r}^{j_k} = \partial_{i_s}\cdots\partial_{i_r}\, t^{j_k}(w) \quad \text{for } s \le r,$$
where $\partial_i = \partial/\partial w_i$. Superscripts $i$ are reserved for the cumulants of $\hat w$, and subscripts for partial derivatives of $t(w)$. Superscripts $j$ are reserved for the components of $t(w)$ and for the joint cumulants of $\hat t = t(\hat w)$. This bar shorthand allows us to shorten expressions by suppressing the $i$s and $j$s. We write the cumulants of $\hat t = t(\hat w)$ as
$$\bar K^{1\cdots r} = K^{j_1\cdots j_r} = \kappa(\hat t^{j_1}, \ldots, \hat t^{j_r}), \quad \text{where } \hat t = t(\hat w),\ \hat t^j = t^j(\hat w). \tag{9}$$
For example, $\bar k^{12} = k^{i_1 i_2}$ and $\bar K^{12} = K^{j_1 j_2}$, so the covariance of $\hat w$ is represented by $(\bar k^{12})$ and the covariance of $\hat t$ by $(\bar K^{12})$, both of which scale as $O(n^{-1})$. Next, we demonstrate that
$$\bar K^{12} = {}_1\bar K^{12} + O(n^{-2}), \quad \text{where } {}_1\bar K^{12} = \bar t_1^{\,1}\bar t_2^{\,2}\,\bar k^{12}.$$
In other words, ${}_1K^{j_1 j_2} = t_{i_1}^{j_1} t_{i_2}^{j_2} k^{i_1 i_2}$, employing the tensor summation convention. The rest of this section and all proofs can be skipped on a first reading. Theorem 1 provides the cumulants of $\hat t = t(\hat w)$ when $\hat w$ is unbiased.
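The leading term ${}_1K^{j_1 j_2} = t_{i_1}^{j_1} t_{i_2}^{j_2} k^{i_1 i_2}$ is the familiar delta-method covariance $J V J^\top$, with $J$ the $q \times p$ Jacobian of $t$ at $w$. A small sketch with plain lists (names are ours):

```python
def delta_covariance(J, V):
    # Leading-order covariance of t(w_hat):
    #   1K^{j1 j2} = t^{j1}_{i1} t^{j2}_{i2} k^{i1 i2} = (J V J^T)_{j1 j2},
    # summing the repeated i-indices per the tensor convention.
    q, p = len(J), len(J[0])
    return [[sum(J[a][i] * V[i][k] * J[b][k]
                 for i in range(p) for k in range(p))
             for b in range(q)] for a in range(q)]

# q = 1, p = 2 example: J is the gradient row of t, V the covariance of w_hat.
J = [[1.0, 2.0]]
V = [[2.0, 1.0], [1.0, 3.0]]
print(delta_covariance(J, V))  # [[18.0]]
```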
We use the notation $\sum_N f^{j_1 j_2 \cdots}$ to denote the sum over all $N$ permutations of $j_1, j_2, \ldots$ that give distinct terms.
Theorem 1.
Suppose $E\hat w = w$ and Equation (8) holds. Then for $r \ge 1$ and $1 \le j_1, \ldots, j_r \le q$, $\bar K^{1\cdots r}$ of (9) satisfies
$$\bar K^{1\cdots r} = \sum_{e=r-1}^{\infty} {}_e\bar K^{1\cdots r}, \quad \text{where } {}_e\bar K^{1\cdots r} = {}_eK^{j_1\cdots j_r} = O(n^{-e}) \ \text{ as } n \to \infty, \tag{10}$$
and the leading ${}_e\bar K^{1\cdots r}$ are as follows.
$${}_0\bar K^1 = \bar t^{\,1}, \ \text{that is,}\ {}_0K^{j_1} = t^{j_1}. \qquad {}_1\bar K^1 = \bar t_{12}^{\,1}\,\bar k^{12}/2, \ \text{that is,}\ {}_1K^{j_1} = t_{i_1 i_2}^{j_1} k^{i_1 i_2}/2 = \sum_{i_1, i_2 = 1}^{p} t_{i_1 i_2}^{j_1} k^{i_1 i_2}/2. \qquad {}_2\bar K^1 = \bar t_{1\cdots3}^{\,1}\,\bar k^{1\cdots3}/6 + \bar t_{1\cdots4}^{\,1}\,\bar k^{12}\bar k^{34}/8,$$
that is,
$${}_2K^{j_1} = t_{i_1 i_2 i_3}^{j_1} k^{i_1 i_2 i_3}/6 + t_{i_1\cdots i_4}^{j_1} k^{i_1 i_2} k^{i_3 i_4}/8.$$
$${}_3\bar K^1 = \bar t_{1\cdots4}^{\,1}\,\bar k^{1\cdots4}/24 + \bar t_{1\cdots5}^{\,1}\,\bar k^{1\cdots3}\bar k^{45}/12 + \bar t_{1\cdots6}^{\,1}\,\bar k^{12}\bar k^{34}\bar k^{56}/48,$$
$${}_4\bar K^1 = \bar t_{1\cdots5}^{\,1}\,\bar k^{1\cdots5}/120 + \bar t_{1\cdots6}^{\,1}\,(\bar k^{1\cdots4}\bar k^{56}/48 + \bar k^{1\cdots3}\bar k^{4\cdots6}/72) + \bar t_{1\cdots7}^{\,1}\,\bar k^{1\cdots3}\bar k^{45}\bar k^{67}/48 + \bar t_{1\cdots8}^{\,1}\,\bar k^{12}\bar k^{34}\bar k^{56}\bar k^{78}/384.$$
$${}_1\bar K^{12} = \bar t_1^{\,1}\bar t_2^{\,2}\,\bar k^{12}, \qquad {}_2\bar K^{12} = T_{1\cdots3}^{12}\,\bar k^{1\cdots3}/2 + T_{1\cdots4}^{12}\,\bar k^{12}\bar k^{34}/2$$
where
$$T_{1\cdots3}^{12} = \sum_2 \bar t_{12}^{\,1}\bar t_3^{\,2}, \qquad T_{1\cdots4}^{12} = \sum_2 \bar t_{1\cdots3}^{\,1}\bar t_4^{\,2} + \bar t_{13}^{\,1}\bar t_{24}^{\,2}, \qquad \sum_2 \bar t_{ab}^{\,1}\bar t_{cd}^{\,2} = \bar t_{ab}^{\,1}\bar t_{cd}^{\,2} + \bar t_{ab}^{\,2}\bar t_{cd}^{\,1}.$$
$${}_3\bar K^{12} = U_{1\cdots4}^{12}\,\bar k^{1\cdots4} + T_{1\cdots5}^{12}\,\bar k^{1\cdots3}\bar k^{45} + T_{1\cdots6}^{12}\,\bar k^{12}\bar k^{34}\bar k^{56}/4$$
where
$$U_{1\cdots4}^{12} = \sum_2 \bar t_{1\cdots3}^{\,1}\bar t_4^{\,2}/6 + \bar t_{12}^{\,1}\bar t_{34}^{\,2}/4, \qquad T_{1\cdots5}^{12} = \sum_2 \big(\bar t_{1\cdots4}^{\,1}\bar t_5^{\,2}/6 + \bar t_{1245}^{\,1}\bar t_3^{\,2}/4 + \bar t_{124}^{\,1}\bar t_{35}^{\,2}/2 + \bar t_{145}^{\,1}\bar t_{23}^{\,2}/4\big),$$
$$T_{1\cdots6}^{12} = \sum_2 \bar t_{1\cdots5}^{\,1}\bar t_6^{\,2}/2 + \sum_2 \bar t_{1235}^{\,1}\bar t_{46}^{\,2} + \bar t_{1\cdots3}^{\,1}\bar t_{4\cdots6}^{\,2} + \sum_2 \bar t_{135}^{\,1}\bar t_{246}^{\,2}/3.$$
$${}_2\bar K^{1\cdots3} = \bar t_1^{\,1}\bar t_2^{\,2}\bar t_3^{\,3}\,\bar k^{1\cdots3} + T_{1\cdots4}^{1\cdots3}\,\bar k^{12}\bar k^{34}$$
where
$$T_{1\cdots4}^{1\cdots3} = \sum_3 \bar t_{13}^{\,1}\bar t_2^{\,2}\bar t_4^{\,3}. \qquad {}_3\bar K^{1\cdots3} = T_{1\cdots4}^{1\cdots3}\,\bar k^{1\cdots4}/2 + T_{1\cdots5}^{1\cdots3}\,\bar k^{1\cdots3}\bar k^{45} + T_{1\cdots6}^{1\cdots3}\,\bar k^{12}\bar k^{34}\bar k^{56}$$
where
$$T_{1\cdots5}^{1\cdots3} = \sum_6 \bar t_{124}^{\,1}\bar t_3^{\,2}\bar t_5^{\,3}/2 + \sum_3 \bar t_{145}^{\,1}\bar t_2^{\,2}\bar t_3^{\,3}/2 + \sum_6 \bar t_{12}^{\,1}\bar t_{34}^{\,2}\bar t_5^{\,3}/2 + \sum_3 \bar t_{14}^{\,1}\bar t_{25}^{\,2}\bar t_3^{\,3},$$
$$T_{1\cdots6}^{1\cdots3} = \sum_3 \bar t_{1235}^{\,1}\bar t_4^{\,2}\bar t_6^{\,3}/2 + \sum_6 \bar t_{1\cdots3}^{\,1}\bar t_{45}^{\,2}\bar t_6^{\,3} + \sum_6 \bar t_{135}^{\,1}\bar t_{24}^{\,2}\bar t_6^{\,3}/2 + \bar t_{13}^{\,1}\bar t_{25}^{\,2}\bar t_{46}^{\,3}.$$
$${}_3\bar K^{1\cdots4} = \bar t_1^{\,1}\cdots\bar t_4^{\,4}\,\bar k^{1\cdots4} + T_{1\cdots5}^{1\cdots4}\,\bar k^{1\cdots3}\bar k^{45} + T_{1\cdots6}^{1\cdots4}\,\bar k^{12}\bar k^{34}\bar k^{56}$$
where
$$T_{1\cdots5}^{1\cdots4} = \sum_{12} \bar t_{14}^{\,1}\bar t_2^{\,2}\bar t_3^{\,3}\bar t_5^{\,4}, \qquad T_{1\cdots6}^{1\cdots4} = \sum_4 \bar t_{135}^{\,1}\bar t_2^{\,2}\bar t_4^{\,3}\bar t_6^{\,4} + \sum_{12} \bar t_{13}^{\,1}\bar t_{25}^{\,2}\bar t_4^{\,3}\bar t_6^{\,4}.$$
$${}_4\bar K^{1\cdots4} = U_{1\cdots5}^{1\cdots4}\,\bar k^{1\cdots5}/2 + U_{1\cdots6}^{1\cdots4}\,\bar k^{1\cdots4}\bar k^{56} + V_{1\cdots6}^{1\cdots4}\,\bar k^{1\cdots3}\bar k^{4\cdots6} + T_{1\cdots7}^{1\cdots4}\,\bar k^{1\cdots3}\bar k^{45}\bar k^{67} + T_{1\cdots8}^{1\cdots4}\,\bar k^{12}\bar k^{34}\bar k^{56}\bar k^{78}$$
where
$$U_{1\cdots5}^{1\cdots4} = \sum_4 \bar t_{12}^{\,1}\bar t_3^{\,2}\bar t_4^{\,3}\bar t_5^{\,4},$$
$$U_{1\cdots6}^{1\cdots4} = \sum_{12} \bar t_{125}^{\,1}\bar t_3^{\,2}\bar t_4^{\,3}\bar t_6^{\,4}/2 + \sum_4 \bar t_{156}^{\,1}\bar t_2^{\,2}\bar t_3^{\,3}\bar t_4^{\,4}/2 + \sum_{24} \bar t_{12}^{\,1}\bar t_{35}^{\,2}\bar t_4^{\,3}\bar t_6^{\,4}/2 + \sum_6 \bar t_{15}^{\,1}\bar t_{26}^{\,2}\bar t_3^{\,3}\bar t_4^{\,4},$$
$$V_{1\cdots6}^{1\cdots4} = \sum_{12} \bar t_{124}^{\,1}\bar t_3^{\,2}\bar t_5^{\,3}\bar t_6^{\,4}/2 + \sum_{12} \bar t_{12}^{\,1}\bar t_{34}^{\,2}\bar t_5^{\,3}\bar t_6^{\,4}/2 + \sum_6 \bar t_{14}^{\,1}\bar t_{25}^{\,2}\bar t_3^{\,3}\bar t_6^{\,4},$$
$$T_{1\cdots7}^{1\cdots4} = \sum_{12} \bar t_{1246}^{\,1}\bar t_3^{\,2}\bar t_5^{\,3}\bar t_7^{\,4}/2 + \sum_{12} \bar t_{1456}^{\,1}\bar t_2^{\,2}\bar t_3^{\,3}\bar t_7^{\,4}/2 + \sum_{24} \bar t_{124}^{\,1}\bar t_{36}^{\,2}\bar t_5^{\,3}\bar t_7^{\,4}/2 + \sum_{24} \bar t_{124}^{\,1}\bar t_{56}^{\,2}\bar t_3^{\,3}\bar t_7^{\,4}/2 + \sum_{24} \bar t_{145}^{\,1}\bar t_{26}^{\,2}\bar t_3^{\,3}\bar t_7^{\,4}/2 + \sum_{12} \bar t_{146}^{\,1}\bar t_{23}^{\,2}\bar t_5^{\,3}\bar t_7^{\,4} + \sum_{24} \bar t_{146}^{\,1}\bar t_{25}^{\,2}\bar t_3^{\,3}\bar t_7^{\,4} + \sum_{12} \bar t_{146}^{\,1}\bar t_{57}^{\,2}\bar t_2^{\,3}\bar t_3^{\,4}/2 + \sum_{12} \bar t_{456}^{\,1}\bar t_{17}^{\,2}\bar t_2^{\,3}\bar t_3^{\,4} + \sum_{24} \bar t_{12}^{\,1}\bar t_{34}^{\,2}\bar t_{56}^{\,3}\bar t_7^{\,4}/2 + \sum_{12} \bar t_{14}^{\,1}\bar t_{25}^{\,2}\bar t_{36}^{\,3}\bar t_7^{\,4} + \sum_{12} \bar t_{14}^{\,1}\bar t_{26}^{\,2}\bar t_{57}^{\,3}\bar t_3^{\,4},$$
$$T_{1\cdots8}^{1\cdots4} = \sum_4 \bar t_{12357}^{\,1}\bar t_4^{\,2}\bar t_6^{\,3}\bar t_8^{\,4}/2 + \sum_{24} \bar t_{1235}^{\,1}\bar t_{47}^{\,2}\bar t_6^{\,3}\bar t_8^{\,4}/2 + \sum_{12} \bar t_{1357}^{\,1}\bar t_{24}^{\,2}\bar t_6^{\,3}\bar t_8^{\,4}/2 + \sum_{12} \bar t_{123}^{\,1}\bar t_{457}^{\,2}\bar t_6^{\,3}\bar t_8^{\,4}/2 + \sum_{12} \bar t_{135}^{\,1}\bar t_{247}^{\,2}\bar t_6^{\,3}\bar t_8^{\,4}/2 + \sum_{24} \bar t_{123}^{\,1}\bar t_{45}^{\,2}\bar t_{67}^{\,3}\bar t_8^{\,4}/2 + \sum_{24} \bar t_{135}^{\,1}\bar t_{24}^{\,2}\bar t_{67}^{\,3}\bar t_8^{\,4} + \sum_{12} \bar t_{135}^{\,1}\bar t_{27}^{\,2}\bar t_{48}^{\,3}\bar t_6^{\,4}/2 + \sum_3 \bar t_{13}^{\,1}\bar t_{25}^{\,2}\bar t_{47}^{\,3}\bar t_{68}^{\,4}.$$
$${}_4\bar K^{1\cdots5} = \bar t_1^{\,1}\cdots\bar t_5^{\,5}\,\bar k^{1\cdots5} + T_{1\cdots6}^{1\cdots5}\,\bar k^{1\cdots4}\bar k^{56} + U_{1\cdots6}^{1\cdots5}\,\bar k^{1\cdots3}\bar k^{4\cdots6} + T_{1\cdots7}^{1\cdots5}\,\bar k^{1\cdots3}\bar k^{45}\bar k^{67} + T_{1\cdots8}^{1\cdots5}\,\bar k^{12}\bar k^{34}\bar k^{56}\bar k^{78}$$
where
$$T_{1\cdots6}^{1\cdots5} = \sum_{20} \bar t_{15}^{\,1}\bar t_2^{\,2}\bar t_3^{\,3}\bar t_4^{\,4}\bar t_6^{\,5}, \qquad U_{1\cdots6}^{1\cdots5} = \sum_{15} \bar t_{14}^{\,1}\bar t_2^{\,2}\bar t_3^{\,3}\bar t_5^{\,4}\bar t_6^{\,5},$$
$$T_{1\cdots7}^{1\cdots5} = \sum_{30} \bar t_{146}^{\,1}\bar t_2^{\,2}\bar t_3^{\,3}\bar t_5^{\,4}\bar t_7^{\,5} + \sum_{60} \bar t_{14}^{\,1}\bar t_{26}^{\,2}\bar t_3^{\,3}\bar t_5^{\,4}\bar t_7^{\,5} + \sum_{60} \bar t_{14}^{\,1}\bar t_{56}^{\,2}\bar t_2^{\,3}\bar t_3^{\,4}\bar t_7^{\,5},$$
$$T_{1\cdots8}^{1\cdots5} = \sum_5 \bar t_{1357}^{\,1}\bar t_2^{\,2}\bar t_4^{\,3}\bar t_6^{\,4}\bar t_8^{\,5}/5 + \sum_{60} \bar t_{135}^{\,1}\bar t_{27}^{\,2}\bar t_4^{\,3}\bar t_6^{\,4}\bar t_8^{\,5} + \sum_{60} \bar t_{13}^{\,1}\bar t_{25}^{\,2}\bar t_{47}^{\,3}\bar t_6^{\,4}\bar t_8^{\,5}.$$
$${}_5\bar K^{1\cdots6} = \bar t_1^{\,1}\cdots\bar t_6^{\,6}\,\bar k^{1\cdots6} + T_{1\cdots7}^{1\cdots6}\,\bar k^{1\cdots5}\bar k^{67} + U_{1\cdots7}^{1\cdots6}\,\bar k^{1\cdots4}\bar k^{5\cdots7} + T_{1\cdots8}^{1\cdots6}\,\bar k^{1\cdots4}\bar k^{56}\bar k^{78} + U_{1\cdots8}^{1\cdots6}\,\bar k^{1\cdots3}\bar k^{4\cdots6}\bar k^{78} + T_{1\cdots9}^{1\cdots6}\,\bar k^{1\cdots3}\bar k^{45}\bar k^{67}\bar k^{89} + T_{1\cdots10}^{1\cdots6}\,\bar k^{12}\bar k^{34}\bar k^{56}\bar k^{78}\bar k^{9,10}$$
where
$$T_{1\cdots7}^{1\cdots6} = \sum_{30} \bar t_{16}^{\,1}\bar t_2^{\,2}\bar t_3^{\,3}\bar t_4^{\,4}\bar t_5^{\,5}\bar t_7^{\,6}, \qquad U_{1\cdots7}^{1\cdots6} = \sum_{60} \bar t_{15}^{\,1}\bar t_2^{\,2}\bar t_3^{\,3}\bar t_4^{\,4}\bar t_6^{\,5}\bar t_7^{\,6},$$
$$T_{1\cdots8}^{1\cdots6} = \sum_{60} \bar t_{157}^{\,1}\bar t_2^{\,2}\bar t_3^{\,3}\bar t_4^{\,4}\bar t_6^{\,5}\bar t_8^{\,6} + \sum_{180} \bar t_{15}^{\,1}\bar t_{27}^{\,2}\bar t_3^{\,3}\bar t_4^{\,4}\bar t_6^{\,5}\bar t_8^{\,6} + \sum_{120} \bar t_{15}^{\,1}\bar t_{67}^{\,2}\bar t_2^{\,3}\bar t_3^{\,4}\bar t_4^{\,5}\bar t_8^{\,6},$$
$$U_{1\cdots8}^{1\cdots6} = \sum_{90} \bar t_{147}^{\,1}\bar t_2^{\,2}\bar t_3^{\,3}\bar t_5^{\,4}\bar t_6^{\,5}\bar t_8^{\,6} + \sum_{360} \bar t_{14}^{\,1}\bar t_{27}^{\,2}\bar t_3^{\,3}\bar t_5^{\,4}\bar t_6^{\,5}\bar t_8^{\,6} + \sum_{90} \bar t_{17}^{\,1}\bar t_{48}^{\,2}\bar t_2^{\,3}\bar t_3^{\,4}\bar t_5^{\,5}\bar t_6^{\,6},$$
$$T_{1\cdots9}^{1\cdots6} = \sum_{60} \bar t_{1468}^{\,1}\bar t_2^{\,2}\bar t_3^{\,3}\bar t_5^{\,4}\bar t_7^{\,5}\bar t_9^{\,6} + \sum_{360} \bar t_{146}^{\,1}\bar t_{28}^{\,2}\bar t_3^{\,3}\bar t_5^{\,4}\bar t_7^{\,5}\bar t_9^{\,6} + \sum_{360} \bar t_{146}^{\,1}\bar t_{58}^{\,2}\bar t_2^{\,3}\bar t_3^{\,4}\bar t_7^{\,5}\bar t_9^{\,6} + \sum_{180} \bar t_{468}^{\,1}\bar t_{15}^{\,2}\bar t_2^{\,3}\bar t_3^{\,4}\bar t_7^{\,5}\bar t_9^{\,6} + \sum_{120} \bar t_{14}^{\,1}\bar t_{26}^{\,2}\bar t_{38}^{\,3}\bar t_5^{\,4}\bar t_7^{\,5}\bar t_9^{\,6} + \sum_{720} \bar t_{14}^{\,1}\bar t_{26}^{\,2}\bar t_{58}^{\,3}\bar t_3^{\,4}\bar t_7^{\,5}\bar t_9^{\,6} + \sum_{360} \bar t_{14}^{\,1}\bar t_{56}^{\,2}\bar t_{78}^{\,3}\bar t_2^{\,4}\bar t_3^{\,5}\bar t_9^{\,6},$$
$$T_{1\cdots10}^{1\cdots6} = \sum_6 \bar t_{13579}^{\,1}\bar t_2^{\,2}\bar t_4^{\,3}\bar t_6^{\,4}\bar t_8^{\,5}\bar t_{10}^{\,6} + \sum_{120} \bar t_{1357}^{\,1}\bar t_{29}^{\,2}\bar t_4^{\,3}\bar t_6^{\,4}\bar t_8^{\,5}\bar t_{10}^{\,6} + \sum_{90} \bar t_{135}^{\,1}\bar t_{279}^{\,2}\bar t_4^{\,3}\bar t_6^{\,4}\bar t_8^{\,5}\bar t_{10}^{\,6} + \sum_{360} \bar t_{135}^{\,1}\bar t_{27}^{\,2}\bar t_{49}^{\,3}\bar t_6^{\,4}\bar t_8^{\,5}\bar t_{10}^{\,6} + \sum_{360} \bar t_{135}^{\,1}\bar t_{27}^{\,2}\bar t_{89}^{\,3}\bar t_4^{\,4}\bar t_6^{\,5}\bar t_{10}^{\,6} + \sum_{360} \bar t_{13}^{\,1}\bar t_{25}^{\,2}\bar t_{47}^{\,3}\bar t_{69}^{\,4}\bar t_8^{\,5}\bar t_{10}^{\,6}.$$
Note 1.
For the meaning of $N$ in $\sum_N$, see page 48 of [19]. Note that the $\sum_N$ notation in terms like $T_{1\cdots s}^{1\cdots r}$ only applies for $N < r!$ in the context where they are used. For example, writing $(abc) = \bar t_{13}^{\,a}\bar t_2^{\,b}\bar t_4^{\,c}$ and recalling that $\sum_N$ only permutes superscripts but leaves subscripts alone, we have
$$T_{1\cdots4}^{1\cdots3} = \sum_N (123) = (123) + (213) + (321) \tag{11}$$
with $N = 3$, not $3!$, since
$$\sum_{3!} (123) = (123) + (132) + (213) + (231) + (321) + (312),$$
which, when multiplied by $\bar k^{12}\bar k^{34}$, as in ${}_2\bar K^{1\cdots3}$, gives $\sum_{k=1}^{6} S_k$, say, where $S_{2k} = S_{2k-1}$ for $k = 1, 2, 3$. For example, $T_{1\cdots4}^{1\cdots3}\,\bar k^{12}\bar k^{34}$ in ${}_2\bar K^{1\cdots3}$ above is shorthand for $\sum_3 \bar t_{13}^{\,1}\bar t_2^{\,2}\bar t_4^{\,3}\,\bar k^{12}\bar k^{34}$. For,
$$S_2 = \bar t_4^{\,2}\bar k^{43}\bar t_{31}^{\,1}\bar k^{12}\bar t_2^{\,3} = \bar t_1^{\,2}\bar k^{12}\bar t_{13}^{\,1}\bar k^{34}\bar t_4^{\,3} = S_1, \qquad T_{1\cdots4}^{1\cdots3}\,\bar k^{12}\bar k^{34} = S_1 + S_3 + S_5.$$
Proof.
This result can be derived by substituting $\bar A_{1\cdots r}^{\,j} = A_{i_1\cdots i_r}^{j}$ by $\bar t_{1\cdots r}^{\,1}/r! = t_{i_1\cdots i_r}^{j_1}/r!$ in [19]. □
Likewise, one can readily derive ${}_4\bar K^{12}$ and ${}_4\bar K^{1\cdots3}$ from pages 51–53 of [19]. The tensor form ${}_1\bar K^1 = \bar t_{12}^{\,1}\bar k^{12}/2$ can be conceptualised as a molecule of 2 atoms, $\bar t_{12}^{\,1}$ and $\bar k^{12}$, connected by the double bond 1, 2, representing $i_1, i_2$. ${}_2\bar K^1$ is a linear combination of $\bar t_{1\cdots3}^{\,1}\bar k^{1\cdots3}$, 2 atoms linked by the triple bond 1, 2, 3, and secondly $\bar k^{12}\bar t_{1\cdots4}^{\,1}\bar k^{34}$. The last expression has the structure of CO$_2$, with 2 identical atoms each linked by a double bond to a central atom. Just as such bonds are depicted in chemistry to illustrate the structure of a molecule, they can be very useful here to illustrate the difference in structure of similar mathematical expressions. $S_1$ of Note 1 is a linear molecular form with the 4 single bonds 1, 2, 3, 4 and 4 distinct atoms, $\bar t_1^{\,2}$, $\bar t_1^{\,1}$, $\bar t_{12}^{\,1}$, and $\bar k^{12}$. Other expressions have more complex structures. Doubling the last term in ${}_2\bar K^{12}$ yields $T_{1\cdots4}^{12}\,\bar k^{12}\bar k^{34} = S^{12} + S^{21} + S$, where $S^{12} = \bar k^{12}\bar t_{1\cdots3}^{\,1}\bar k^{34}\bar t_4^{\,2}$ exhibits a linear structure with a double bond between 1 and 2, followed by two single bonds, 3 and 4. Additionally, $S = \bar t_{31}^{\,1}\bar k^{12}\bar t_{24}^{\,2}\bar k^{43}$ forms a square or rectangle, with the four single bonds 1, 2, 4, 3 arranged along successive edges of the square. These pictorial forms are a very useful way to distinguish similar expressions in $\sum_N f^{j_1 j_2\cdots}$.
Section 6 provides the 'more complicated' terms referred to (but not given) on p. 49 of [19] when $\hat w$ is biased. It can be used for an alternative proof of Theorem 3 below. From Theorem 1, Edgeworth expansions can be obtained for the distribution and density of the standardised form of $t(\hat w)$,
$$Y_{nt} = n^{1/2}(\hat t - t) = n^{1/2}(t(\hat w) - t(w)), \tag{12}$$
of the form
$$\mathrm{Prob}(Y_{nt} \le x) = \sum_{r=0}^{\infty} P_{rn}(x), \qquad p_{Y_{nt}}(x) = \sum_{r=0}^{\infty} p_{rn}(x), \tag{13}$$
where $P_{rn}(x), p_{rn}(x)$ are $O(n^{-r/2})$. The ${}_e\bar K^{1\cdots r}$ of Theorem 1 needed for $P_{rn}(x), p_{rn}(x)$ are as follows.
$$\text{For } P_{0n}(x), p_{0n}(x):\ {}_0\bar K^1 = \bar t^{\,1},\ {}_1\bar K^{12}. \quad \text{For } P_{1n}(x), p_{1n}(x):\ {}_1\bar K^1,\ {}_2\bar K^{1\cdots3}. \quad \text{For } P_{2n}(x), p_{2n}(x):\ {}_2\bar K^{12},\ {}_3\bar K^{1\cdots4}.$$
$$\text{For } P_{3n}(x), p_{3n}(x):\ {}_2\bar K^1,\ {}_3\bar K^{1\cdots3},\ {}_4\bar K^{1\cdots5}. \quad \text{For } P_{4n}(x), p_{4n}(x):\ {}_3\bar K^{12},\ {}_4\bar K^{1\cdots4},\ {}_5\bar K^{1\cdots6}.$$

3. Cumulant Coefficients for $t(\hat w)$ when $E\hat w = w$

We now show that for $r \ge 1$ and $1 \le j_1, \ldots, j_r \le q$, the cumulant $\bar K^{1\cdots r}$ of Equation (10) can be expanded as
$$\bar K^{1\cdots r} = K^{j_1\cdots j_r} = \kappa(\hat t^{j_1}, \ldots, \hat t^{j_r}) = \sum_{e=r-1}^{\infty} n^{-e}\, \bar K_e^{1\cdots r}. \tag{14}$$
Substituting $\{\bar k_e^{1\cdots r}\}$ with $\{\bar K_e^{1\cdots r}\}$ on the right-hand side of (4), denoted RHS (4), provides the Edgeworth expansion for $Y_{nt}$ of Equation (12). If $\pi$ is a product of cumulants as in Equation (1), let $(\pi)_e$ denote the coefficient of $n^{-e}$ in the expansion of $\pi$. For example, $(\bar k^{1\cdots r})_e = \bar k_e^{1\cdots r}$,
$$(\bar k^{12}\bar k^{34})_3 = \bar k_1^{12}\bar k_2^{34} + \bar k_2^{12}\bar k_1^{34}, \qquad (\bar k^{12}\bar k^{34})_4 = \bar k_1^{12}\bar k_3^{34} + \bar k_2^{12}\bar k_2^{34} + \bar k_3^{12}\bar k_1^{34},$$
$$(\bar k^{1\cdots3}\bar k^{45})_4 = \bar k_2^{1\cdots3}\bar k_2^{45} + \bar k_3^{1\cdots3}\bar k_1^{45}, \qquad (\bar k^{12}\bar k^{34}\bar k^{56})_4 = \bar k_1^{12}\bar k_1^{34}\bar k_2^{56} + \bar k_1^{12}\bar k_2^{34}\bar k_1^{56} + \bar k_2^{12}\bar k_1^{34}\bar k_1^{56}.$$
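Extracting $(\pi)_e$ is a convolution of the coefficient sequences of the factors, since each cumulant is a power series in $n^{-1}$. A sketch with dictionaries mapping $e$ to a (scalar stand-in) coefficient; names are ours:

```python
def product_coeff(a, b, e):
    # Coefficient of n^{-e} in (sum_i n^{-i} a_i) * (sum_j n^{-j} b_j):
    # the convolution (pi)_e = sum_i a_i * b_{e-i}.
    return sum(ai * b.get(e - i, 0.0) for i, ai in a.items())

# Mimic (k^{12} k^{34})_3 = k_1^{12} k_2^{34} + k_2^{12} k_1^{34}
# with scalar stand-ins for the tensor coefficients:
k12 = {1: 2.0, 2: 3.0}   # k_1^{12} = 2, k_2^{12} = 3
k34 = {1: 5.0, 2: 7.0}   # k_1^{34} = 5, k_2^{34} = 7
print(product_coeff(k12, k34, 3))  # 2*7 + 3*5 = 29.0
```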
We now give the elements of the expansion (14) when $E\hat w = w$.
Theorem 2.
Assume that $\hat w$ is an unbiased estimate of $w$ satisfying Equation (1), and that $t(w)$ has finite derivatives. In this case, Equation (14) holds with bounded cumulant coefficients
$$\bar K_e^{1\cdots r} = K_e^{j_1\cdots j_r} = \sum_{k=r-1}^{e} {}_k\bar K_e^{1\cdots r}: \tag{16}$$
$$\bar K_{r-1}^{1\cdots r} = {}_{r-1}\bar K_{r-1}^{1\cdots r}, \qquad \bar K_r^{1\cdots r} = {}_{r-1}\bar K_r^{1\cdots r} + {}_r\bar K_r^{1\cdots r},$$
and so forth. The leading coefficients needed for $P_r(x), p_r(x)$ of (4) for the distribution of $Y_{nt}$ of (12) are given in the $T, U, V$ notation of Theorem 1 as follows.
$$\bar K_0^1 = \bar t^{\,1}, \ \text{that is,}\ K_0^{j_1} = t^{j_1} = t^{j_1}(w). \qquad {}_0\bar K_e^1 = 0 \ \text{for } e \ge 1.$$
$$\text{For } P_0(x):\ \bar K_1^{12} = {}_1\bar K_1^{12} = \bar t_1^{\,1}\bar t_2^{\,2}\,\bar k_1^{12}, \ \text{that is,}\ K_1^{j_1 j_2} = t_{i_1}^{j_1} t_{i_2}^{j_2} k_1^{i_1 i_2}.$$
$$\text{For } P_1(x):\ \bar K_1^1 = {}_1\bar K_1^1 = \bar t_{12}^{\,1}\,\bar k_1^{12}/2, \ \text{that is,}\ K_1^{j_1} = t_{i_1 i_2}^{j_1} k_1^{i_1 i_2}/2, \qquad \bar K_2^{1\cdots3} = {}_2\bar K_2^{1\cdots3} = \bar t_1^{\,1}\bar t_2^{\,2}\bar t_3^{\,3}\,\bar k_2^{1\cdots3} + T_{1\cdots4}^{1\cdots3}\,\bar k_1^{12}\bar k_1^{34}.$$
$$\text{For } P_2(x):\ \bar K_2^{12} = {}_1\bar K_2^{12} + {}_2\bar K_2^{12} \ \text{for } {}_1\bar K_2^{12} = \bar t_1^{\,1}\bar t_2^{\,2}\,\bar k_2^{12}, \quad {}_2\bar K_2^{12} = T_{1\cdots3}^{12}\,\bar k_2^{1\cdots3}/2 + T_{1\cdots4}^{12}\,\bar k_1^{12}\bar k_1^{34}/2,$$
$$\bar K_3^{1\cdots4} = {}_3\bar K_3^{1\cdots4} = (\bar t_1^{\,1}\cdots\bar t_4^{\,4})\,\bar k_3^{1\cdots4} + T_{1\cdots5}^{1\cdots4}\,\bar k_2^{1\cdots3}\bar k_1^{45} + T_{1\cdots6}^{1\cdots4}\,\bar k_1^{12}\bar k_1^{34}\bar k_1^{56}.$$
$$\text{For } P_3(x):\ \bar K_2^1 = {}_1\bar K_2^1 + {}_2\bar K_2^1 \ \text{for } {}_1\bar K_2^1 = \bar t_{12}^{\,1}\,\bar k_2^{12}/2, \quad {}_2\bar K_2^1 = \bar t_{1\cdots3}^{\,1}\,\bar k_2^{1\cdots3}/6 + \bar t_{1\cdots4}^{\,1}\,\bar k_1^{12}\bar k_1^{34}/8,$$
$$\text{that is,}\ K_2^{j_1} = t_{i_1 i_2}^{j_1} k_2^{i_1 i_2}/2 + t_{i_1 i_2 i_3}^{j_1} k_2^{i_1 i_2 i_3}/6 + t_{i_1\cdots i_4}^{j_1} k_1^{i_1 i_2} k_1^{i_3 i_4}/8,$$
$$\bar K_3^{1\cdots3} = \sum_{k=2}^{3} {}_k\bar K_3^{1\cdots3} \ \text{for } {}_2\bar K_3^{1\cdots3} = \bar t_1^{\,1}\bar t_2^{\,2}\bar t_3^{\,3}\,\bar k_3^{1\cdots3} + T_{1\cdots4}^{1\cdots3}\,(\bar k^{12}\bar k^{34})_3, \quad {}_3\bar K_3^{1\cdots3} = T_{1\cdots4}^{1\cdots3}\,\bar k_3^{1\cdots4}/2 + T_{1\cdots5}^{1\cdots3}\,\bar k_2^{1\cdots3}\bar k_1^{45} + T_{1\cdots6}^{1\cdots3}\,\bar k_1^{12}\bar k_1^{34}\bar k_1^{56},$$
$$\bar K_4^{1\cdots5} = {}_4\bar K_4^{1\cdots5} = \bar t_1^{\,1}\cdots\bar t_5^{\,5}\,\bar k_4^{1\cdots5} + T_{1\cdots6}^{1\cdots5}\,\bar k_3^{1\cdots4}\bar k_1^{56} + U_{1\cdots6}^{1\cdots5}\,\bar k_2^{1\cdots3}\bar k_2^{4\cdots6} + T_{1\cdots7}^{1\cdots5}\,\bar k_2^{1\cdots3}\bar k_1^{45}\bar k_1^{67} + T_{1\cdots8}^{1\cdots5}\,\bar k_1^{12}\bar k_1^{34}\bar k_1^{56}\bar k_1^{78}.$$
$$\text{For } P_4(x):\ \bar K_3^{12} = \sum_{k=1}^{3} {}_k\bar K_3^{12} \ \text{for } {}_1\bar K_3^{12} = \bar t_1^{\,1}\bar t_2^{\,2}\,\bar k_3^{12},$$
$${}_2\bar K_3^{12} = T_{1\cdots3}^{12}\,\bar k_3^{1\cdots3}/2 + T_{1\cdots4}^{12}\,(\bar k^{12}\bar k^{34})_3/2, \quad {}_3\bar K_3^{12} = U_{1\cdots4}^{12}\,\bar k_3^{1\cdots4} + T_{1\cdots5}^{12}\,\bar k_2^{1\cdots3}\bar k_1^{45} + T_{1\cdots6}^{12}\,\bar k_1^{12}\bar k_1^{34}\bar k_1^{56}/4,$$
$$\bar K_4^{1\cdots4} = \sum_{k=3}^{4} {}_k\bar K_4^{1\cdots4} \ \text{for } {}_3\bar K_4^{1\cdots4} = (\bar t_1^{\,1}\cdots\bar t_4^{\,4})\,\bar k_4^{1\cdots4} + T_{1\cdots5}^{1\cdots4}\,(\bar k^{1\cdots3}\bar k^{45})_4 + T_{1\cdots6}^{1\cdots4}\,(\bar k^{12}\bar k^{34}\bar k^{56})_4,$$
$${}_4\bar K_4^{1\cdots4} = U_{1\cdots5}^{1\cdots4}\,\bar k_4^{1\cdots5}/2 + U_{1\cdots6}^{1\cdots4}\,\bar k_3^{1\cdots4}\bar k_1^{56} + V_{1\cdots6}^{1\cdots4}\,\bar k_2^{1\cdots3}\bar k_2^{4\cdots6} + T_{1\cdots7}^{1\cdots4}\,\bar k_2^{1\cdots3}\bar k_1^{45}\bar k_1^{67} + T_{1\cdots8}^{1\cdots4}\,\bar k_1^{12}\bar k_1^{34}\bar k_1^{56}\bar k_1^{78},$$
$$\bar K_5^{1\cdots6} = {}_5\bar K_5^{1\cdots6} = \bar t_1^{\,1}\cdots\bar t_6^{\,6}\,\bar k_5^{1\cdots6} + T_{1\cdots7}^{1\cdots6}\,\bar k_4^{1\cdots5}\bar k_1^{67} + U_{1\cdots7}^{1\cdots6}\,\bar k_3^{1\cdots4}\bar k_2^{5\cdots7} + T_{1\cdots8}^{1\cdots6}\,\bar k_3^{1\cdots4}\bar k_1^{56}\bar k_1^{78} + U_{1\cdots8}^{1\cdots6}\,\bar k_2^{1\cdots3}\bar k_2^{4\cdots6}\bar k_1^{78} + T_{1\cdots9}^{1\cdots6}\,\bar k_2^{1\cdots3}\bar k_1^{45}\bar k_1^{67}\bar k_1^{89} + T_{1\cdots10}^{1\cdots6}\,\bar k_1^{12}\bar k_1^{34}\bar k_1^{56}\bar k_1^{78}\bar k_1^{9,10}.$$
Also, together with $\bar K_0^1, \bar K_1^1, \bar K_2^1$ above, $E\,t^{j_1}(\hat w) = \sum_{e=0}^{4} n^{-e}\,\bar K_e^1 + O(n^{-5})$, where
$$\bar K_3^1 = \sum_{k=1}^{3} {}_k\bar K_3^1 \ \text{for } {}_1\bar K_3^1 = \bar t_{12}^{\,1}\,\bar k_3^{12}/2, \quad {}_2\bar K_3^1 = \bar t_{1\cdots3}^{\,1}\,\bar k_3^{1\cdots3}/6 + \bar t_{1\cdots4}^{\,1}\,\bar k_1^{12}\bar k_2^{34}/4,$$
$${}_3\bar K_3^1 = \bar t_{1\cdots4}^{\,1}\,\bar k_3^{1\cdots4}/24 + \bar t_{1\cdots5}^{\,1}\,\bar k_2^{1\cdots3}\bar k_1^{45}/12 + \bar t_{1\cdots6}^{\,1}\,\bar k_1^{12}\bar k_1^{34}\bar k_1^{56}/48,$$
$$\bar K_4^1 = \sum_{k=1}^{4} {}_k\bar K_4^1 \ \text{for } {}_1\bar K_4^1 = \bar t_{12}^{\,1}\,\bar k_4^{12}/2, \quad {}_2\bar K_4^1 = \bar t_{1\cdots3}^{\,1}\,\bar k_4^{1\cdots3}/6 + \bar t_{1\cdots4}^{\,1}\,(2\bar k_1^{12}\bar k_3^{34} + \bar k_2^{12}\bar k_2^{34})/8,$$
$${}_3\bar K_4^1 = \bar t_{1\cdots4}^{\,1}\,\bar k_4^{1\cdots4}/24 + \bar t_{1\cdots5}^{\,1}\,(\bar k_2^{1\cdots3}\bar k_2^{45} + \bar k_3^{1\cdots3}\bar k_1^{45})/12 + \bar t_{1\cdots6}^{\,1}\,\bar k_1^{12}\bar k_1^{34}\bar k_2^{56}/16,$$
$${}_4\bar K_4^1 = \bar t_{1\cdots5}^{\,1}\,\bar k_4^{1\cdots5}/120 + \bar t_{1\cdots6}^{\,1}\,(\bar k_3^{1\cdots4}\bar k_1^{56}/48 + \bar k_2^{1\cdots3}\bar k_2^{4\cdots6}/72) + \bar t_{1\cdots7}^{\,1}\,\bar k_2^{1\cdots3}\bar k_1^{45}\bar k_1^{67}/48 + \bar t_{1\cdots8}^{\,1}\,\bar k_1^{12}\bar k_1^{34}\bar k_1^{56}\bar k_1^{78}/384.$$
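The leading bias term $K_1^{j_1} = t_{i_1 i_2}^{j_1} k_1^{i_1 i_2}/2$ above gives the usual $O(n^{-1})$ bias of a smooth function of an unbiased estimate. A scalar sketch (function name is ours):

```python
def first_order_bias(H, k1, n):
    # E t(w_hat) - t(w) ~ n^{-1} * (1/2) * t_{i1 i2} k_1^{i1 i2},
    # with H the Hessian of t at w and k1 the leading covariance coefficients.
    p = len(H)
    return sum(H[i][j] * k1[i][j] for i in range(p) for j in range(p)) / (2.0 * n)

# t(w) = w^2 with w_hat = Xbar and Var(X) = sigma2: the exact bias of
# Xbar^2 is sigma2 / n, reproduced here with H = [[2]] and k1 = [[sigma2]].
sigma2, n = 4.0, 100
print(first_order_bias([[2.0]], [[sigma2]], n))  # 0.04
```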
Proof.
Substituting (1) into ${}_k\bar K^{1\cdots r}$ of Theorem 1 gives ${}_k\bar K^{1\cdots r} = \sum_{e=k}^{\infty} {}_k\bar K_e^{1\cdots r}\, n^{-e}$, say. So by (10), (14) and (16) hold. ${}_k\bar K_e^{1\cdots r} = {}_kK_e^{j_1\cdots j_r}$ is ${}_k\tilde V_e^{j_1\cdots j_r}$ of [2]. □
Note  2.
(11) made explicit the 3 terms needed in $T_{1\cdots4}^{1\cdots3}$ for $P_1(x)$ of Theorem 2. Similarly, $P_2(x)$ needs the 12 terms
$$T_{1\cdots5}^{1\cdots4} = \sum_{12}(1234) = (1234) + (1243) + (2413) + (2431) + (3124) + (3142) + (3241) + (3412) + (4123) + (4132) + (4231) + (4321),$$
where $(abcd) = \bar t_{14}^{\,a}\bar t_2^{\,b}\bar t_3^{\,c}\bar t_5^{\,d}$. It also needs the $4 + 12$ terms $T_{1\cdots6}^{1\cdots4} = A + B$, where
$$A = \sum_4(1234) = (1234) + (2134) + (3124) + (4123) \quad \text{for } (abcd) = \bar t_{135}^{\,a}\bar t_2^{\,b}\bar t_4^{\,c}\bar t_6^{\,d},$$
$$B = \sum_{12}(1234) = (1234) + (1423) + (1432) + (1324) + (2134) + (2314) + (2413) + (3124) + (3214) + (3412) + (4213) + (4312) \quad \text{for } (abcd) = \bar t_{13}^{\,a}\bar t_{25}^{\,b}\bar t_4^{\,c}\bar t_6^{\,d}.$$

4. Cumulant Coefficients for $t(\hat w)$ when $E\hat w \ne w$

We proceed by removing the assumption that $\hat w$ is unbiased. We use $\bar K_e^{1\cdots r}$ from Theorem 2, and the shorthand $\bar f_{\cdot m} = \partial_{i_m} f$, where again $\partial_i = \partial/\partial w_i$. A significant distinction arises compared with Theorem 2: there, $\bar k_e^{1\cdots r}$ was treated as an algebraic expression, but now we must consider each of them as a function of $w$. Thus, we assume that the distribution of $\hat w$ is determined by $w$. This assumption is necessary to derive higher-order confidence intervals for $t(w)$ when $q = 1$: see [20]. It is demonstrated that, for $Y_{nt}$ of Equation (12), $P_2(x), p_2(x)$ require the first derivatives $\bar k_{1\cdot3}^{12} = \partial_{i_3}\bar k_1^{12}$, $P_3(x), p_3(x)$ need the first derivatives $\bar k_{2\cdot4}^{1\cdots3}$, and so on. The derivatives of $\bar K_e^{1\cdots r}$ are computed using Leibniz's rule for the derivatives of a product. For example,
$$\bar K_{1\cdot3}^{12} = (\bar t_1^{\,1}\bar t_2^{\,2}\,\bar k_1^{12})_{\cdot3} = \Big(\sum_2^{12}\bar t_1^{\,1}\bar t_{23}^{\,2}\Big)\bar k_1^{12} + \bar t_1^{\,1}\bar t_2^{\,2}\,\bar k_{1\cdot3}^{12} \quad \text{for } \sum_2^{12}\bar t_1^{\,1}\bar t_{23}^{\,2} = \bar t_{13}^{\,1}\bar t_2^{\,2} + \bar t_1^{\,1}\bar t_{23}^{\,2},$$
$$\bar K_{1\cdot3}^{1} = (\bar t_{12}^{\,1}\,\bar k_1^{12})_{\cdot3}/2 = \bar t_{123}^{\,1}\,\bar k_1^{12}/2 + \bar t_{12}^{\,1}\,\bar k_{1\cdot3}^{12}/2,$$
$$\bar K_{1\cdot34}^{12} = \sum_2^{12}\big[(\bar t_{14}^{\,1}\bar t_{23}^{\,2} + \bar t_1^{\,1}\bar t_{2\cdots4}^{\,2})\bar k_1^{12} + \bar t_1^{\,1}\bar t_{23}^{\,2}\,\bar k_{1\cdot4}^{12} + \bar t_{14}^{\,1}\bar t_2^{\,2}\,\bar k_{1\cdot3}^{12}\big] + \bar t_1^{\,1}\bar t_2^{\,2}\,\bar k_{1\cdot34}^{12},$$
$$(\bar t_1^{\,1}\bar t_2^{\,2}\bar t_3^{\,3}\,\bar k_2^{1\cdots3})_{\cdot4} = (\bar t_1^{\,1}\bar t_2^{\,2}\bar t_3^{\,3})_{\cdot4}\,\bar k_2^{1\cdots3} + \bar t_1^{\,1}\bar t_2^{\,2}\bar t_3^{\,3}\,\bar k_{2\cdot4}^{1\cdots3}, \qquad (\bar t_1^{\,1}\bar t_2^{\,2}\bar t_3^{\,3})_{\cdot4} = \bar t_1^{\,1}\bar t_2^{\,2}\bar t_{34}^{\,3} + \bar t_1^{\,1}\bar t_{24}^{\,2}\bar t_3^{\,3} + \bar t_{14}^{\,1}\bar t_2^{\,2}\bar t_3^{\,3},$$
$$(T_{1\cdots4}^{1\cdots3}\,\bar k_1^{12}\bar k_1^{34})_{\cdot5} = T_{1\cdots4\cdot5}^{1\cdots3}\,\bar k_1^{12}\bar k_1^{34} + T_{1\cdots4}^{1\cdots3}\,(\bar k_1^{12}\bar k_1^{34})_{\cdot5}, \qquad T_{1\cdots4\cdot5}^{1\cdots3} = \sum_3\big(\bar t_{135}^{\,1}\bar t_2^{\,2}\bar t_4^{\,3} + \bar t_{13}^{\,1}\bar t_{25}^{\,2}\bar t_4^{\,3} + \bar t_{13}^{\,1}\bar t_2^{\,2}\bar t_{45}^{\,3}\big), \qquad (\bar k_1^{12}\bar k_1^{34})_{\cdot5} = \bar k_{1\cdot5}^{12}\bar k_1^{34} + \bar k_1^{12}\bar k_{1\cdot5}^{34}.$$
Theorem 3.
Let $\hat w$ in $R^p$ be a biased standard estimate of $w$ satisfying (1), where the $\bar k_e^{1\cdots r}$ depend on $w$. Then $\hat t = t(\hat w)$ in $R^q$ is a standard estimate of $t(w)$:
$$\kappa(\hat t^{j_1}, \ldots, \hat t^{j_r}) = \sum_{e=r-1}^{\infty} n^{-e}\, \bar a_e^{1\cdots r} \quad \text{for } r \ge 1,\ 1 \le j_1, \ldots, j_r \le q, \tag{17}$$
$$\text{where } \bar a_e^{1\cdots r} = \bar K_e^{1\cdots r} + \bar D_e^{1\cdots r}, \qquad \bar D_{r-1}^{1\cdots r} = 0, \tag{18}$$
for $\bar K_e^{1\cdots r}$ of Theorem 2, and the other $\bar D_e^{1\cdots r} = D_e^{j_1\cdots j_r}$ needed for $P_r(x), p_r(x)$ of (4) for $Y_{nt}$ of (12) are as follows.
$$\text{For } P_0(x):\ \bar D_1^{12} = 0, \ \text{so } \bar a_1^{12} = \bar K_1^{12} = K_1^{j_1 j_2} = \bar t_1^{\,1}\bar t_2^{\,2}\,\bar k_1^{12}.$$
$$\text{For } P_1(x):\ \bar D_1^1 = \bar t_1^{\,1}\,\bar k_1^1, \ \text{so } \bar a_1^1 = \bar K_1^1 + \bar D_1^1 = \bar t_1^{\,1}\,\bar k_1^1 + \bar t_{12}^{\,1}\,\bar k_1^{12}/2.$$
$$\text{For } P_2(x):\ \bar D_2^{12} = \bar K_{1\cdot3}^{12}\,\bar k_1^3 = \big[(\bar t_{13}^{\,1}\bar t_2^{\,2} + \bar t_1^{\,1}\bar t_{23}^{\,2})\bar k_1^{12} + \bar t_1^{\,1}\bar t_2^{\,2}\,\bar k_{1\cdot3}^{12}\big]\bar k_1^3,$$
$$\text{so } \bar a_2^{12} = \bar t_1^{\,1}\bar t_2^{\,2}\,\bar k_2^{12} + T_{1\cdots3}^{12}\,\bar k_2^{1\cdots3}/2 + T_{1\cdots4}^{12}\,\bar k_1^{12}\bar k_1^{34}/2 + \big[(\bar t_{13}^{\,1}\bar t_2^{\,2} + \bar t_1^{\,1}\bar t_{23}^{\,2})\bar k_1^{12} + \bar t_1^{\,1}\bar t_2^{\,2}\,\bar k_{1\cdot3}^{12}\big]\bar k_1^3.$$
$$\text{For } P_3(x):\ \bar D_2^1 = \bar K_{1,1}^1 + \bar K_{0,2}^1, \quad \bar K_{1,1}^1 = \bar K_{1\cdot3}^1\,\bar k_1^3, \quad \bar K_{0,2}^1 = \bar t_1^{\,1}\,\bar k_2^1 + \bar t_{12}^{\,1}\,\bar k_1^1\bar k_1^2/2,$$
$$\text{so } \bar a_2^1 = \bar t_1^{\,1}\,\bar k_2^1 + \bar t_{12}^{\,1}\,(\bar k_2^{12} + \bar k_1^1\bar k_1^2 + \bar k_{1\cdot3}^{12}\bar k_1^3)/2 + \bar t_{1\cdots3}^{\,1}\,(\bar k_2^{1\cdots3}/6 + \bar k_1^1\bar k_1^{23}/2) + \bar t_{1\cdots4}^{\,1}\,\bar k_1^{12}\bar k_1^{34}/8,$$
$$\bar D_3^{1\cdots3} = \bar K_{2\cdot4}^{1\cdots3}\,\bar k_1^4 = (\bar t_1^{\,1}\bar t_2^{\,2}\bar t_3^{\,3}\,\bar k_2^{1\cdots3})_{\cdot4}\,\bar k_1^4 + (T_{1\cdots4}^{1\cdots3}\,\bar k_1^{12}\bar k_1^{34})_{\cdot5}\,\bar k_1^5.$$
$$\text{For } P_4(x):\ \bar D_3^{12} = \bar K_{2,1}^{12} + \bar K_{1,2}^{12}, \quad \bar K_{2,1}^{12} = \bar K_{2\cdot3}^{12}\,\bar k_1^3, \quad \bar K_{1,2}^{12} = \bar K_{1\cdot3}^{12}\,\bar k_2^3 + \bar K_{1\cdot34}^{12}\,\bar k_1^3\bar k_1^4/2, \qquad \bar D_4^{1\cdots4} = \bar K_{3\cdot5}^{1\cdots4}\,\bar k_1^5, \qquad \bar D_5^{1\cdots6} = 0.$$
For $E\,t^{j_1}(\hat w)$ to $O(n^{-5})$, we also need $\bar D_j = \bar D_j^1$, $j = 3, 4$, given by
$$\bar D_3 = \bar K_{2,1} + \bar K_{1,2} + \bar K_{0,3}, \qquad \bar K_{2,1} = \bar K_{2\cdot1}\,\bar k_1^1,$$
$$\bar K_{2\cdot1} = (\bar t_{1\cdots3}\,\bar k_2^{23} + \bar t_{23}\,\bar k_{2\cdot1}^{23})/2 + (\bar t_{1\cdots4}\,\bar k_2^{2\cdots4} + \bar t_{2\cdots4}\,\bar k_{2\cdot1}^{2\cdots4})/6 + \bar t_{1\cdots5}\,\bar k_1^{23}\bar k_1^{45}/8 + \bar t_{2\cdots5}\,\bar k_1^{23}\bar k_{1\cdot1}^{45}/4,$$
$$\bar K_{1,2} = \bar K_{1\cdot1}\,\bar k_2^1 + \bar K_{1\cdot12}\,\bar k_1^1\bar k_1^2/2, \qquad 2\bar K_{1\cdot1} = \bar t_{1\cdots3}\,\bar k_1^{23} + \bar t_{23}\,\bar k_{1\cdot1}^{23}, \qquad 2\bar K_{1\cdot12} = \bar t_{1\cdots4}\,\bar k_1^{34} + \sum_2^{12}\bar t_{2\cdots4}\,\bar k_{1\cdot1}^{34} + \bar t_{34}\,\bar k_{1\cdot12}^{34},$$
$$\bar K_{0,3} = \bar t_1\,\bar k_3^1 + \bar t_{12}\,\bar k_1^1\bar k_2^2 + \bar t_{1\cdots3}\,\bar k_1^1\bar k_1^2\bar k_1^3/6,$$
$$\bar D_4 = \bar K_{3,1} + \bar K_{2,2} + \bar K_{1,3} + \bar K_{0,4}, \qquad \bar K_{3,1} = \bar K_{3\cdot1}\,\bar k_1^1,$$
$$\bar K_{3\cdot1} = \bar t_{1\cdots3}\,\bar k_3^{23}/2 + \bar t_{23}\,\bar k_{3\cdot1}^{23}/2 + \bar t_{1\cdots4}\,\bar k_3^{2\cdots4}/6 + \bar t_{2\cdots4}\,\bar k_{3\cdot1}^{2\cdots4}/6 + \bar t_{1\cdots5}\,\bar k_1^{23}\bar k_2^{45}/4 + \bar t_{2\cdots5}\,\bar k_1^{23}\bar k_{2\cdot1}^{45}/2 + (\bar t_{1\cdots5}\,\bar k_3^{2\cdots5} + \bar t_{2\cdots5}\,\bar k_{3\cdot1}^{2\cdots5})/24 + (\bar t_{1\cdots6}\,\bar k_2^{2\cdots4}\bar k_1^{56} + \bar t_{2\cdots6}\,\bar k_{2\cdot1}^{2\cdots4}\bar k_1^{56} + \bar t_{2\cdots6}\,\bar k_2^{2\cdots4}\bar k_{1\cdot1}^{56})/12 + \bar k_1^{23}\bar k_1^{45}\,(\bar t_{1\cdots7}\,\bar k_1^{67}/48 + \bar t_{2\cdots7}\,\bar k_{1\cdot1}^{67}/16),$$
$$\bar K_{2,2} = \bar K_{2\cdot1}\,\bar k_2^1 + \bar K_{2\cdot12}\,\bar k_1^1\bar k_1^2/2, \qquad \bar K_{2\cdot1} = \bar t_{1\cdots3}\,\bar k_2^{23}/2 + \bar t_{23}\,\bar k_{2\cdot1}^{23}/2 + \bar t_{1\cdots4}\,\bar k_2^{2\cdots4}/6 + \bar t_{2\cdots4}\,\bar k_{2\cdot1}^{2\cdots4}/6 + \bar t_{1\cdots5}\,\bar k_1^{23}\bar k_1^{45}/8 + \bar t_{2\cdots5}\,\bar k_1^{23}\bar k_{1\cdot1}^{45}/4,$$
$$2\bar K_{2\cdot12} = \bar t_{1\cdots4}\,\bar k_2^{34} + \sum_2^{12}\bar t_{2\cdots4}\,\bar k_{2\cdot1}^{34} + \bar t_{34}\,\bar k_{2\cdot12}^{34} + \big(\bar t_{1\cdots5}\,\bar k_2^{3\cdots5} + \sum_2^{12}\bar t_{13\cdots5}\,\bar k_{2\cdot2}^{3\cdots5} + \bar t_{3\cdots5}\,\bar k_{2\cdot12}^{3\cdots5}\big)/3 + \bar t_{1\cdots6}\,\bar k_1^{34}\bar k_1^{56}/4 + \sum_2^{12}\bar t_{13\cdots6}\,\bar k_1^{34}\bar k_{1\cdot2}^{56}/2 + \bar t_{3\cdots6}\,(\bar k_{1\cdot2}^{34}\bar k_{1\cdot1}^{56} + \bar k_1^{34}\bar k_{1\cdot12}^{56})/2,$$
$$\bar K_{1,3} = \bar K_{1\cdot1}\,\bar k_3^1 + \bar K_{1\cdot12}\,\bar k_1^1\bar k_2^2 + \bar K_{1\cdot123}\,\bar k_1^1\bar k_1^2\bar k_1^3/6, \qquad 2\bar K_{1\cdot1} = \bar t_{1\cdots3}\,\bar k_1^{23} + \bar t_{23}\,\bar k_{1\cdot1}^{23}, \qquad 2\bar K_{1\cdot12} = \bar t_{1\cdots4}\,\bar k_1^{34} + \sum_2^{12}\bar t_{134}\,\bar k_{1\cdot2}^{34} + \bar t_{34}\,\bar k_{1\cdot12}^{34},$$
$$2\bar K_{1\cdot123} = \bar t_{1\cdots5}\,\bar k_1^{45} + \sum_3^{1\cdots3}\big(\bar t_{1345}\,\bar k_{1\cdot2}^{45} + \bar t_{3\cdots5}\,\bar k_{1\cdot12}^{45}\big) + \bar t_{45}\,\bar k_{1\cdot1\cdots3}^{45},$$
$$\bar K_{0,4} = \bar t_1\,\bar k_4^1 + \bar t_{12}\,(\bar k_1^1\bar k_3^2 + \bar k_2^1\bar k_2^2/2) + \bar t_{1\cdots3}\,\bar k_1^1\bar k_1^2\bar k_2^3/2 + \bar t_{1\cdots4}\,\bar k_1^1\bar k_1^2\bar k_1^3\bar k_1^4/24.$$
Proof.
$\bar K^{1\cdots r}(w) = \bar K^{1\cdots r}$ and $\bar K_e^{1\cdots r}(w) = \bar K_e^{1\cdots r}$ are functions of $w$. By (14),
$$\bar K^{1\cdots r}(w_n) = \sum_{e=r-1}^{\infty} n^{-e}\,\bar K_e^{1\cdots r}(w_n) \quad \text{for } w_n = E\hat w = w + d_n,$$
where by (1), $d_n$ has $i_1$th component $\bar d_n^1 = d_n^{i_1} = \sum_{e=1}^{\infty} n^{-e}\,\bar k_e^1$. Consider the Taylor series expansion
$$\bar K_k^{1\cdots r}(w + d_n) = \bar K_k^{1\cdots r} + \bar K_{k\cdot1}^{1\cdots r}\,\bar d_n^1 + \bar K_{k\cdot12}^{1\cdots r}\,\bar d_n^1\bar d_n^2/2! + \cdots = \sum_{e=0}^{\infty} \bar K_{k,e}^{1\cdots r}\, n^{-e}, \ \text{say.} \tag{19}$$
Substituting into (14) gives (17) with
$$\bar a_c^{1\cdots r} = \sum_{k+e=c} \bar K_{k,e}^{1\cdots r} = \sum_{e=0}^{c-r+1} \bar K_{c-e,e}^{1\cdots r}. \tag{20}$$
Also $\bar K_{k,0}^{1\cdots r} = \bar K_k^{1\cdots r}$, so that (18) holds with
$$\bar D_c^{1\cdots r} = \sum_{e=1}^{c-r+1} \bar K_{c-e,e}^{1\cdots r}: \quad \bar D_r^{1\cdots r} = \bar K_{r-1,1}^{1\cdots r}, \quad \bar D_{r+1}^{1\cdots r} = \sum_{e=1}^{2} \bar K_{r+1-e,e}^{1\cdots r}. \ \square$$
An alternative proof can be obtained using Section 6. This corrects $C_e = \bar a_e^1$ given in Appendix B of [21]. Ref. [2] uses $K_k^{j_1\cdots j_r} = \bar K_k^{1\cdots r}$ for $\bar a_k^{1\cdots r}$, but the expression for $K_2^{ab}$ on p. 67, lines 2–3, omitted the term $A_i^a A_j^b k_{1\cdot k}^{ij} k_1^k$. That is, the last term in $\bar a_2^{12}$ of Theorem 3 was omitted. Similarly, the results on p. 67 for $r = 3, 4$ are only true when $\hat w$ is unbiased or the cumulant coefficients of $\hat w$ do not depend on $w$, as they omit the derivatives of $\bar k_e^{1\cdots r}$. The examples given there are not affected, as $\hat w$ is unbiased. Nor are the nonparametric examples of [22,23] affected, as the empirical distribution is an unbiased estimate of a distribution. Likewise, $\hat w$ is unbiased for the examples of [20]. M-estimates are biased, but the results of [16] are not affected, as only $K_1^{j_1 j_2}, K_1^{j_1}, K_2^{j_1 j_2 j_3}$ are given. No changes are needed for [3,4,17,24]. Applications to non-parametric and parametric confidence intervals were given in [22] and [20,23], and to ellipsoidal confidence regions and power in [4,25]. For nonparametric problems, $F(x)$ and its empirical distribution $F_n(x)$ play the roles of $w$ and $\hat w$; since the latter is unbiased, no corrections are needed. For $q = 1$, $a_{ri} = \bar a_i^{1\cdots r}$ were given for parametric and non-parametric problems in [22] and [2,23], and expressions for the classic Edgeworth expansion of $Y_{nw}$ in terms of $a_{ri}$ were given in [14]. For $q \ge 1$, $\bar a_i^{1\cdots r}$ for parametric problems were given in [2], and can be obtained easily from the $a_{ri}$ given for $q = 1$ for one-sample and multi-sample non-parametric problems in [22,23] and for semi-parametric problems in [16,24]. All these results can be extended to samples with independent non-identically distributed residuals, as done in Section 6 of [26] and in [17]. The extension to matrix $\hat w$ just needs a slight change in notation.
For example, in [17], w ^ can be viewed as a function of the mean of n independent complex random matrices, although n is actually the number of transmitters or receivers. Extensions to dependent random variables are also possible: see [27].

5. Cumulant Coefficients for Univariate t ( w ^ )

Now suppose that q = 1 . Let   k K r e be the coefficient of n − e in   k K 1 r . We write K ¯ e 1 r as K r e . For E w ^ = w , (14), (16) and (20) become
K r = κ r ( t ^ ) = ∑ e = r − 1 ∞ n − e K r e , r ≥ 1 ; K r e = ∑ k = r − 1 e   k K r e : K r , r − 1 =   r − 1 K r , r − 1 , K r r = ∑ k = r − 1 r   k K r r , K r , r + 1 = ∑ k = r − 1 r + 1   k K r , r + 1 ,
For E w ^ ≠ w , (17)–(19) become
K r = κ r ( t ^ ) = ∑ e = r − 1 ∞ n − e a r e , r ≥ 1 ; a r e = K r e + D r e , D r c = ∑ e = 1 c − r + 1 K r , c − e , e : D r , r − 1 = 0 , D r r = K r , r − 1 , 1 , D r , r + 1 = ∑ e = 1 2 K r , r + 1 − e , e ,
Here, we give the cumulant coefficients K r e needed for the Edgeworth expansion of Y n t of (12) for P r ( x ) , r ≤ 4 . We do this when E w ^ = w in Corollary 1 and when E w ^ ≠ w in Corollaries 3 and 4. To show more clearly the expressions we need in molecular form, we introduce the following ions (expressions with unpaired suffixes):
s i 1 = s ¯ 1 = k ¯ 1 12 t ¯ 2 , u ¯ 1 = t ¯ 12 s ¯ 2 = t ¯ 12 k ¯ 1 23 t ¯ 3 , X ¯ 34 = k ¯ 1 31 t ¯ 12 k ¯ 1 24 , z ¯ 12 = t ¯ 1 3 s ¯ 3 , v ¯ 1 = k ¯ 1 12 u ¯ 2 = X ¯ 14 t ¯ 4 , x ¯ 1 = t ¯ 12 v ¯ 2 , S ¯ 1 = k ¯ 2 12 t ¯ 2 , y ¯ 1 = k ¯ 2 1 3 t ¯ 2 t ¯ 3 , Y ¯ 1 = t ¯ 12 y ¯ 2 .
If a suffix does not have a match, then summation does not occur. For example, the RHS of s ¯ 1 = k ¯ 1 12 t ¯ 2 sums over i 2 but not i 1 . Let v , c 01 , c 02 , c 21 , c 22 , c 23 , c 11 , … , c 1 , 10 , c 31 , … , c 3 , 11 be the 27 functions of ω given on pp. 4234–4235 of [20], labelled there as I 2 2 0 , I 1 1 0 , … , I 301 222 000 . By Corollaries 1 and 3 below, those needed for P r ( x ) , r ≤ 2 , of (4), that is, for the Edgeworth expansion of Y n t of (12) to O ( n − 3 / 2 ) , are the following molecules.
F o r P 0 ( x ) : v = K 21 = t ¯ 1 k ¯ 1 12 t ¯ 2 . F o r P 1 ( x ) , K 11 : c 02 = t ¯ 12 k ¯ 1 12 ; f o r D 11 : c 01 = t ¯ 1 k ¯ 1 1 ; f o r K 32 : c 21 = t ¯ 1 t ¯ 2 t ¯ 3 k ¯ 2 1 3 = t ¯ 1 y ¯ 1 , c 23 = s ¯ 1 t ¯ 12 s ¯ 2 = s ¯ 1 u ¯ 1 . F o r P 2 ( x ) , K 22 : c 11 = t ¯ 1 k ¯ 2 12 t ¯ 2 = t ¯ 1 S ¯ 1 , c 15 = t ¯ 1 k ¯ 2 1 3 t ¯ 23 , c 19 = t ¯ 12 X ¯ 12 , c 1 , 10 = s ¯ 1 t ¯ 1 3 k ¯ 1 23 = z ¯ 23 k ¯ 1 23 ; f o r D 22 : c 12 = k ¯ 1 1 k ¯ 1 1 23 t ¯ 2 t ¯ 3 , c 16 = k ¯ 1 1 u ¯ 1 = k ¯ 1 1 t ¯ . 12 k ¯ 1 23 t ¯ 3 ; f o r K 43 : c 31 = t ¯ 1 t ¯ 2 t ¯ 3 t ¯ 4 k ¯ 3 1 4 , c 36 = y ¯ 3 u ¯ 3 , c 3 , 10 = u ¯ 1 k ¯ 1 12 u ¯ 2 , c 3 , 11 = s ¯ 1 s ¯ 2 s ¯ 3 t ¯ 1 3 .
Each molecule can be written as a shape. For example, c 19 is a rectangle. We now give the molecules L j , L i j needed for the Edgeworth expansion to O ( n 5 / 2 ) , that is, for P r ( x ) for r = 3 , 4 . Note that P r ( x ) needs the derivatives of t ( w ) up to order r + 1 .
F o r P 3 ( x ) , K 12 : L 1 = t ¯ 12 k ¯ 2 12 , L 2 = t ¯ 1 3 k ¯ 2 1 3 , L 3 = t ¯ 1 4 k ¯ 1 12 k ¯ 1 34 ; f o r K 33 : L 4 = t ¯ 1 t ¯ 2 t ¯ 3 k ¯ 3 1 3 , L 5 = u ¯ 1 S ¯ 1 , L 6 = t ¯ 13 t ¯ 2 t ¯ 4 k ¯ 3 1 4 , L 71 = z ¯ 12 k ¯ 2 1 3 t ¯ 3 , L 72 = y ¯ 1 t ¯ 145 k ¯ 1 45 , L 73 = t ¯ 12 k ¯ 2 1 3 u ¯ 3 , L 74 = t ¯ 14 k ¯ 1 45 t ¯ 52 k ¯ 2 1 3 t ¯ 3 , L 81 = k ¯ 1 12 t ¯ 1 4 s ¯ 3 s ¯ 4 , L 82 = k ¯ 1 12 t ¯ 1 3 v ¯ 3 , L 83 = X ¯ 34 z ¯ 34 , L 84 = X ¯ 14 t ¯ 45 k ¯ 1 56 t ¯ 61 , a s e x a g o n , f o r K 54 : L 9 = t ¯ 1 t ¯ 5 k ¯ 4 1 5 , L 10 = u ¯ 1 t ¯ 2 t ¯ 3 t ¯ 4 k ¯ 3 1 4 , L 11 = y ¯ 1 Y ¯ 1 = y ¯ 1 t ¯ 12 y ¯ 2 , L 121 = y ¯ 1 t ¯ 1 3 s ¯ 2 s ¯ 3 , L 122 = t ¯ 1 k ¯ 2 1 3 u ¯ 2 u ¯ 3 , L 123 = Y ¯ 2 v ¯ 2 , L 131 = s ¯ 1 s ¯ 4 t ¯ 1 4 , L 132 = s ¯ 1 s ¯ 2 t ¯ 1 3 v ¯ 3 , L 133 = v ¯ 1 t ¯ 12 v ¯ 2 = v ¯ 1 x ¯ 1 . F o r P 4 ( x ) , K 23 : L 14 = t ¯ 1 t ¯ 2 k ¯ 3 12 , L 15 = t ¯ 12 k ¯ 2 1 3 t ¯ 3 , L 161 = S ¯ 1 t ¯ 1 3 k ¯ 1 23 , L 162 = z ¯ 12 k ¯ 2 12 , L 171 = X ¯ 24 t ¯ 24 . L 181 = t ¯ 1 3 k ¯ 3 1 4 t ¯ 4 , L 182 = t ¯ 12 k ¯ 3 1 4 t ¯ 34 , L 191 = k ¯ 2 1 3 t ¯ 1 4 s ¯ 4 , L 192 = k ¯ 1 12 t ¯ 1 4 k ¯ 2 3 5 t ¯ 5 , L 193 = t ¯ 1 3 k ¯ 2 2 4 t ¯ 45 k ¯ 1 51 , L 194 = t ¯ 12 k ¯ 2 1 3 t ¯ 3 5 k ¯ 1 45 , L 201 = k ¯ 1 12 k ¯ 1 34 t ¯ 1 5 s ¯ 5 , L 202 = k ¯ 1 12 t ¯ 1 4 X ¯ 34 , L 203 = k ¯ 1 12 t ¯ 1 3 k ¯ 1 34 t ¯ 4 6 k ¯ 1 56 , L 204 = t ¯ 135 ( k ¯ 1 12 k ¯ 1 34 k ¯ 1 56 ) t ¯ 246 ; f o r K 44 : L 21 = t ¯ 1 t ¯ 4 k ¯ 4 1 4 , L 221 = Y ¯ 2 k ¯ 2 23 t ¯ 3 , L 222 = u ¯ 1 t ¯ 2 t ¯ 3 k ¯ 3 1 3 , L 231 = s ¯ 1 z ¯ 12 S ¯ 2 . L 241 = x ¯ 2 S ¯ 2 , L 242 = u ¯ 1 k ¯ 2 12 u ¯ 2 . 
L 25 = t ¯ 12 k ¯ 4 1 5 t ¯ 3 t ¯ 4 t ¯ 5 , L 261 = t ¯ 1 t ¯ 2 k ¯ 3 1 4 t ¯ 3 5 s ¯ 5 , L 262 = t ¯ 1 t ¯ 2 t ¯ 3 k ¯ 3 1 4 t ¯ 4 6 k ¯ 1 56 , L 263 = t ¯ 12 k ¯ 3 1 4 t ¯ 3 u ¯ 4 , L 264 = t ¯ 1 t ¯ 2 k ¯ 3 1 4 ( t ¯ 35 t ¯ 46 ) k ¯ 1 56 , L 271 = t ¯ 1 k ¯ 2 1 3 t ¯ 2 4 y ¯ 4 , L 272 = t ¯ 12 k ¯ 2 1 3 Y ¯ 3 , L 273 = t ¯ 1 k ¯ 2 1 3 ( t ¯ 24 t ¯ 35 ) k ¯ 3 4 6 t ¯ 6 , L 281 = t ¯ 1 k ¯ 2 1 3 t ¯ 2 5 s ¯ 4 s ¯ 5 , L 282 = s ¯ 1 k ¯ 1 23 t ¯ 1 4 y ¯ 4 , L 283 = u ¯ 1 k ¯ 2 1 3 z ¯ 23 , L 284 = t ¯ 1 k ¯ 2 1 3 t ¯ 2 4 v ¯ 4 , L 285 = Y ¯ 2 k ¯ 1 23 t ¯ 3 5 k ¯ 1 45 , L 286 = t ¯ 12 k ¯ 2 1 3 z ¯ 34 s ¯ 4 , L 287 = t ¯ 1 k ¯ 2 1 3 t ¯ 24 k ¯ 1 45 z ¯ 53 , L 288 = y ¯ 1 t ¯ 1 3 X ¯ 23 , L 289 = t ¯ 12 k ¯ 2 1 3 x ¯ 3 , L 2810 = u ¯ 1 k ¯ 2 1 3 ( t ¯ 24 t ¯ 35 ) k ¯ 1 45 , L 2811 = t ¯ 1 k ¯ 2 1 3 ( t ¯ 24 k ¯ 1 45 t ¯ 36 k ¯ 1 67 ) t ¯ 57 , L 291 = k ¯ 1 12 t ¯ 1 5 s ¯ 3 s ¯ 4 s ¯ 5 , L 292 = k ¯ 1 12 t ¯ 1 4 v ¯ 3 s ¯ 4 , L 293 = X ¯ 34 t ¯ 3 6 s ¯ 5 s ¯ 6 , L 294 = k ¯ 1 12 t ¯ 1 3 k ¯ 1 34 t ¯ 4 6 s ¯ 5 s ¯ 6 , L 295 = k ¯ 1 12 ( z ¯ 13 z ¯ 24 ) k ¯ 1 34 , L 296 = k ¯ 1 12 t ¯ 1 3 k ¯ 1 34 x ¯ 4 , L 297 = k ¯ 1 12 t ¯ 135 v ¯ 5 t ¯ 24 k ¯ 1 34 , L 298 = X ¯ 14 t ¯ 45 k ¯ 1 56 z ¯ 61 , L 299 = X ¯ 14 t ¯ 45 X ¯ 58 ; f o r K 65 : L 30 = t ¯ 1 t ¯ 6 k 5 1 6 , L 31 = u ¯ 1 t ¯ 2 t ¯ 3 t ¯ 4 t ¯ 5 k ¯ 4 1 5 , L 32 = t ¯ 1 t ¯ 2 t ¯ 3 k ¯ 4 1 5 t ¯ 45 , L 331 = t ¯ 1 t ¯ 2 t ¯ 3 k ¯ 3 1 4 z ¯ 45 s ¯ 5 , L 332 = u ¯ 1 u ¯ 2 k ¯ 3 1 4 t ¯ 3 t ¯ 4 , L 333 = t ¯ 1 t ¯ 2 t ¯ 3 k ¯ 3 1 4 x ¯ 4 L 341 = t ¯ 1 3 y ¯ 1 s ¯ 2 y ¯ 3 , L 342 = Y ¯ 2 k ¯ 2 2 4 t ¯ 3 u ¯ 4 , L 343 = Y ¯ 1 k ¯ 1 12 Y ¯ 2 , L 351 = y ¯ 3 t ¯ 3 6 s ¯ 4 s ¯ 5 s ¯ 6 , L 352 = t ¯ 1 u ¯ 2 k ¯ 2 1 3 z ¯ 34 s ¯ 4 , L 353 = y ¯ 1 t ¯ 12 v ¯ 2 , L 354 = Y ¯ 4 k ¯ 1 45 t ¯ 5 7 s ¯ 6 s ¯ 7 , L 355 = u ¯ 1 u ¯ 2 u ¯ 3 k ¯ 2 1 3 , L 356 = t ¯ 1 u ¯ 2 k ¯ 2 1 3 x ¯ 3 , L 357 = t ¯ 3 5 y ¯ 3 v ¯ 4 s ¯ 5 . 
L 361 = s ¯ 1 s ¯ 5 t ¯ 1 5 , L 362 = s ¯ 1 s ¯ 2 s ¯ 3 t ¯ 1 4 v ¯ 4 , L 363 = s ¯ 1 z ¯ 13 k ¯ 1 34 t ¯ 4 6 s ¯ 5 s ¯ 6 , L 364 = v ¯ 1 z ¯ 12 v ¯ 2 , L 365 = s ¯ 1 z ¯ 13 k ¯ 1 34 x ¯ 4 , L 366 = x ¯ 1 k ¯ 1 12 x ¯ 2 .
These c r s and L j do not use derivatives of k ¯ e 1 r , the cumulant coefficients of w ^ .
Corollary 1.
Suppose that w ^ is an unbiased standard estimate of w in R p with respect to n, and that q = 1 . Then the cumulants of t ^ = t ( w ^ ) can be expanded as (21) with bounded cumulant coefficients K r e . The leading coefficients needed for P r ( x ) of (4) for the distribution of Y n t of (12) are as follows.
K 10 = t ¯ = t ( w ) . F o r P 0 ( x ) : K 21 = v = t ¯ 1 k ¯ 1 12 t ¯ 2 . F o r P 1 ( x ) : K 11 = c 02 / 2 , K 32 = c 21 + 3 c 23 . F o r P 2 ( x ) : K 22 = k = 1 2   k K 22 ,   1 K 22 = c 11 ,   2 K 22 = c 15 + c 19 / 2 + c 1 , 10 , K 43 = c 31 + 12 c 36 + 12 c 3 , 10 + 4 c 3 , 11 . F o r P 3 ( x ) : K 12 = k = 1 2   k K 12 ,   1 K 12 = L 1 / 2 ,   2 K 12 = L 2 / 6 + L 3 / 8 ;
K 33 = k = 2 3   k K 33 ,   2 K 33 = L 4 + 6 L 5 ,   3 K 33 = 3 L 6 / 2 + 3 L 7 + L 8 w h e r e
L 7 = L 71 + 3 L 72 / 2 + 3 k = 3 4 L 7 k , L 8 = 3 L 81 / 2 + 6 L 82 + 3 L 83 + 3 L 84 ,
K 54 = L 9 + 20 L 10 + 15 L 11 + 30 L 12 + L 13   w h e r e
L 12 = L 121 + 2 L 122 + 2 L 123 , L 13 = L 131 + 60 ( L 132 + L 133 ) . F o r P 4 ( x ) : K 23 = k = 1 3   k K 23 ,   1 K 23 = L 14 ,   2 K 23 = L 15 + k = 1 2 L 16 k + L 171 ,   3 K 23 = L 181 / 3 + L 182 / 4 + k = 19 20 L k w h e r e L 19 = L 191 / 3 + k = 2 4 L 19 k , L 20 = L 201 / 4 + L 202 / 2 + L 203 / 4 + L 204 / 6 ; K 44 = k = 3 4   k K 44 ,   3 K 44 = L 21 + 12 k = 1 2 L 22 k + 12 L 231 + 24 L 241 + 12 L 242 .   4 K 44 = 2 L 25 + 2 L 26 + 6 k = 1 3 L 27 k + L 28 + L 29 w h e r e L 26 = 3 L 261 + L 262 + 6 L 263 + 3 L 264 , L 28 = k = 1 11 c k L 28 k , c 1 = 6 , c 3 = 24 , c 8 = 4 , c k = 12 o t h e r w i s e , L 29 = k = 1 9 h k L 29 k , h 1 = 2 , h 2 = h 6 = 12 , h 7 = 24 , h 9 = 3 , h k = 6 o t h e r w i s e ; K 65 = L 30 + 30 L 31 + 60 v L 32 + 60 L 33 + 90 L 34 + 60 L 35 + 6 L 36 f o r L 33 = L 331 + 3 L 332 + 2 L 333 , L 34 = L 341 + 4 L 342 + L 343 , L 35 = k = 1 7 d k L 35 k , d 1 = 1 , d 2 = d 3 = d 7 = 6 , d 4 = 3 , d 5 = 2 , d 6 = 12 , L 36 = k = 1 6 e k L 36 k , e 1 = 1 , e 2 = 20 , e 3 = 15 , e 7 = 6 , e 4 = e 5 = e 6 = 60 . A l s o ,   K 13 = t ¯ 12 k ¯ 3 12 / 2 + t ¯ 1 3 k ¯ 3 1 3 / 6 + t ¯ 1 4 ( k ¯ 12 k ¯ 34 ) 3 / 8 + t ¯ 1 4 k ¯ 3 1 4 / 24 + t ¯ 1 5 k ¯ 2 1 3 k ¯ 1 45 / 12 + t ¯ 1 6 k ¯ 1 12 k ¯ 1 34 k ¯ 1 56 / 48 w h e r e ( a b ) 3 = a 1 b 2 + a 2 b 1 , K 14 = t ¯ 12 k ¯ 4 12 / 2 + t ¯ 1 3 k ¯ 4 1 3 / 6 + t ¯ 1 4 [ ( k ¯ 12 k ¯ 34 ) 4 / 8 + k ¯ 4 1 4 / 24 ] + t ¯ 1 5 [ k ¯ 4 1 5 / 120 + ( k ¯ 1 3 k ¯ 45 ) 4 / 12 ] + t ¯ 1 6 [ ( k ¯ 12 k ¯ 34 k ¯ 56 ) 4 / 48 + k ¯ 3 1 4 k ¯ 1 56 / 48 + k ¯ 2 1 3 k ¯ 2 4 6 / 72 ] + t ¯ 1 7 k ¯ 2 1 3 k ¯ 1 45 k ¯ 1 67 / 48 + t ¯ 1 8 k ¯ 1 12 k ¯ 1 34 k ¯ 1 56 k ¯ 1 78 / 384 w h e r e t h e 1 s t ( a b ) 4 = a 1 b 3 + a 2 b 2 + a 3 b 1 , t h e 2 n d ( a b ) 4 = a 2 b 2 + a 3 b 1 , a n d ( a b c ) 4 = a 1 b 1 c 2 + a 1 b 2 c 1 + a 2 b 1 c 1 .
Proof 
Since q = 1 , N becomes N. We write T 1 s 1 r , U 1 s 1 r , V 1 s 1 r as T 1 s r , U 1 s r , V 1 s r . By Theorem 2 we need the following.
T 1 4 3 / 3 = t ¯ 13 t ¯ 2 t ¯ 4 , T 1 3 2 / 2 = t ¯ 12 t ¯ 3 , T 1 4 2 / 2 = t ¯ 1 3 t ¯ 4 + t ¯ 13 t ¯ 24 , T 1 5 4 / 12 = t ¯ 14 t ¯ 2 t ¯ 3 t ¯ 5 , T 1 6 4 / 4 = t ¯ 135 t ¯ 2 t ¯ 4 t ¯ 6 + 3 t ¯ 13 t ¯ 25 t ¯ 4 t ¯ 6 , T 1 5 3 / 3 = t ¯ 124 t ¯ 3 t ¯ 5 + 3 t ¯ 145 t ¯ 2 t ¯ 3 / 2 + 3 t ¯ 12 t ¯ 34 t ¯ 5 + 3 t ¯ 14 t ¯ 25 t ¯ 3 , T 1 6 3 = 3 t ¯ 1235 t ¯ 4 t ¯ 6 / 2 + 6 t ¯ 1 3 t ¯ 45 t ¯ 6 + 3 t ¯ 135 t ¯ 24 t ¯ 6 + t ¯ 13 t ¯ 25 t ¯ 46 , T 1 6 5 / 20 = t ¯ 15 t ¯ 2 t ¯ 3 t ¯ 4 t ¯ 6 , U 1 6 5 / 15 = t ¯ 14 t ¯ 2 t ¯ 3 t ¯ 5 t ¯ 6 , T 1 7 5 / 30 = t ¯ 146 t ¯ 2 t ¯ 3 t ¯ 5 t ¯ 7 + 2 t ¯ 14 t ¯ 26 t ¯ 3 t ¯ 5 t ¯ 7 + 2 t ¯ 14 t ¯ 56 t ¯ 2 t ¯ 3 t ¯ 7 , T 1 8 5 = t ¯ 1357 t ¯ 2 t ¯ 4 t ¯ 6 t ¯ 8 + 60 t ¯ 135 t ¯ 27 t ¯ 4 t ¯ 6 t ¯ 8 + 60 t ¯ 13 t ¯ 25 t ¯ 47 t ¯ 6 t ¯ 8 . U 1 4 2 = t ¯ 1 3 t ¯ 4 / 3 + t ¯ 12 t ¯ 34 / 4 , T 1 5 2 = t ¯ 1 4 t ¯ 5 / 3 + t ¯ 1245 t ¯ 3 / 2 + t ¯ 124 t ¯ 35 + t ¯ 145 t ¯ 23 / 2 , T 1 6 2 = t ¯ 1 5 t ¯ 6 + 2 t ¯ 1235 t ¯ 46 + t ¯ 1 3 t ¯ 4 6 + 2 t ¯ 135 t ¯ 246 / 3 , T 1 5 4 / 12 = t ¯ 14 t ¯ 2 t ¯ 3 t ¯ 5 , U 1 5 4 / 4 = t ¯ 12 t ¯ 3 t ¯ 4 t ¯ 5 , U 1 6 4 / 2 = 3 t ¯ 125 t ¯ 3 t ¯ 4 t ¯ 6 + 2 t ¯ 156 t ¯ 2 t ¯ 3 t ¯ 4 + 6 t ¯ 12 t ¯ 35 t ¯ 4 t ¯ 6 + 3 t ¯ 15 t ¯ 26 t ¯ 3 t ¯ 4 , V 1 6 4 / 6 = t ¯ 124 t ¯ 3 t ¯ 5 t ¯ 6 + t ¯ 12 t ¯ 34 t ¯ 5 t ¯ 6 + t ¯ 14 t ¯ 25 t ¯ 3 t ¯ 6 , T 1 7 4 = 6 t ¯ 1246 t ¯ 3 t ¯ 5 t ¯ 7 + 3 t ¯ 1456 t ¯ 2 t ¯ 3 t ¯ 7 + 24 t ¯ 124 t ¯ 36 t ¯ 5 t ¯ 7 + 12 t ¯ 124 t ¯ 56 t ¯ 3 t ¯ 7 + 12 t ¯ 145 t ¯ 26 t ¯ 3 t ¯ 7 + 12 t ¯ 146 t ¯ 23 t ¯ 5 t ¯ 7 + 24 t ¯ 146 t ¯ 25 t ¯ 3 t ¯ 7 + 4 t ¯ 146 t ¯ 57 t ¯ 2 t ¯ 3 + 12 t ¯ 456 t ¯ 17 t ¯ 2 t ¯ 3 + 12 t ¯ 12 t ¯ 34 t ¯ 56 t ¯ 7 + 12 t ¯ 14 t ¯ 25 t ¯ 36 t ¯ 7 + 12 t ¯ 14 t ¯ 26 t ¯ 57 t ¯ 3 , T 1 8 4 = 2 t ¯ 12357 t ¯ 4 t ¯ 6 t ¯ 8 + 12 t ¯ 1235 t ¯ 47 t ¯ 6 t ¯ 8 + 6 t ¯ 1357 t ¯ 24 t ¯ 6 t ¯ 8 + 6 t ¯ 123 t ¯ 457 t ¯ 6 t ¯ 8 + 6 t ¯ 135 t ¯ 247 t ¯ 6 t ¯ 8 + 12 t ¯ 123 t ¯ 45 t ¯ 67 t ¯ 8 + 24 t ¯ 135 t ¯ 24 t ¯ 67 t ¯ 8 + 6 t ¯ 135 t ¯ 27 t ¯ 48 t ¯ 6 + 3 t ¯ 13 t 
¯ 25 t ¯ 47 t ¯ 68 , T 1 7 6 / 30 = t ¯ 16 t ¯ 2 t ¯ 3 t ¯ 4 t ¯ 5 t ¯ 7 , U 1 7 6 / 60 = t ¯ 15 t ¯ 2 t ¯ 3 t ¯ 4 t ¯ 6 t ¯ 7 , T 1 8 6 / 60 = t ¯ 157 t ¯ 2 t ¯ 3 t ¯ 4 t ¯ 6 t ¯ 8 + 3 t ¯ 15 t ¯ 27 t ¯ 3 t ¯ 4 t ¯ 6 t ¯ 8 + 2 t ¯ 15 t ¯ 67 t ¯ 2 t ¯ 3 t ¯ 4 t ¯ 8 , U 1 8 6 / 90 = t ¯ 147 t ¯ 2 t ¯ 3 t ¯ 5 t ¯ 6 t ¯ 8 + 4 t ¯ 14 t ¯ 27 t ¯ 3 t ¯ 5 t ¯ 6 t ¯ 8 + t ¯ 17 t ¯ 48 t ¯ 2 t ¯ 3 t ¯ 5 t ¯ 6 , T 1 9 6 / 60 = t ¯ 1468 t ¯ 2 t ¯ 3 t ¯ 5 t ¯ 7 t ¯ 9 + 6 t ¯ 146 t ¯ 28 t ¯ 3 t ¯ 5 t ¯ 7 t ¯ 9 + 6 t ¯ 146 t ¯ 58 t ¯ 2 t ¯ 3 t ¯ 7 t ¯ 9 + 3 t ¯ 468 t ¯ 15 t ¯ 2 t ¯ 3 t ¯ 7 t ¯ 9 + 2 t ¯ 14 t ¯ 26 t ¯ 38 t ¯ 5 t ¯ 7 t ¯ 9 + 12 t ¯ 14 t ¯ 26 t ¯ 58 t ¯ 3 t ¯ 7 t ¯ 9 + 6 t ¯ 14 t ¯ 56 t ¯ 78 t ¯ 2 t ¯ 3 t ¯ 9 , T 1 10 6 / 6 = t ¯ 13579 t ¯ 2 t ¯ 4 t ¯ 6 t ¯ 8 t ¯ 10 + 20 t ¯ 1357 t ¯ 29 t ¯ 4 t ¯ 6 t ¯ 8 t ¯ 10 + 15 t ¯ 135 t ¯ 279 t ¯ 4 t ¯ 6 t ¯ 8 t ¯ 10 + 60 t ¯ 135 t ¯ 27 t ¯ 49 t ¯ 6 t ¯ 8 t ¯ 10 + 60 t ¯ 135 t ¯ 27 t ¯ 89 t ¯ 4 t ¯ 6 t ¯ 10 + 60 t ¯ 13 t ¯ 25 t ¯ 47 t ¯ 69 t ¯ 8 t ¯ 10 . F o r P 1 ( x ) : T 1 4 3 k ¯ 1 12 k ¯ 1 34 = 3 c 23 . For P 2 ( x ) : T 1 3 2 k ¯ 2 1 3 / 2 = c 15 , T 1 4 2 k ¯ 1 12 k ¯ 1 34 / 2 = c 19 / 2 + c 1 , 10 ; K 43 = c 31 + g 1 + g 2 f o r g 1 = T 1 5 4 k ¯ 2 1 3 k ¯ 1 45 = 12 c 36 , g 2 = T 1 6 4 k ¯ 1 12 k ¯ 1 34 k ¯ 1 56 = 4 c 3 , 11 + 12 c 3 , 10 . F o r P 3 ( x ) :   2 K 33 = L 4 + 3 L 5 , L 5 = T 1 4 3 / 3 ( k ¯ 12 k ¯ 34 ) 3 = t ¯ 13 t ¯ 2 t ¯ 4 ( k ¯ 12 k ¯ 34 ) 3 = 2 L 5 , L 6 = T 1 4 3 / 3 k ¯ 3 1 4 , L 7 = T 1 5 3 / 3 k ¯ 2 1 3 k ¯ 1 45 , L 71 = s ¯ 4 t ¯ 412 k ¯ 2 1 3 t ¯ 3 , L 8 = T 1 6 3 k ¯ 1 12 k ¯ 1 34 k ¯ 1 56 , L 81 = t ¯ 1235 t ¯ 4 t ¯ 6 k ¯ 1 12 k ¯ 1 34 k ¯ 1 56 , L 82 = t ¯ 1 3 t ¯ 45 t ¯ 6 k ¯ 1 12 k ¯ 1 34 k ¯ 1 56 , L 83 = t ¯ 135 t ¯ 24 t ¯ 6 k ¯ 1 12 k ¯ 1 34 k ¯ 1 56 , L 84 = t ¯ 13 t ¯ 25 t ¯ 46 k ¯ 1 12 k ¯ 1 34 k ¯ 1 56 . 
K 54 is   given   by   ( 25 ) with   L 10 = T 1 6 5 / 20 k ¯ 3 1 4 k ¯ 1 56 , L 11 = U 1 6 5 / 15 k ¯ 2 1 3 k ¯ 2 4 6 , L 12 = T 1 7 5 / 30 k ¯ 2 1 3 k ¯ 1 45 k ¯ 1 67 , L 121 = t ¯ 2 t ¯ 3 k ¯ 2 1 3 t ¯ 146 s ¯ 4 s ¯ 6 , L 13 = T 1 8 5 k ¯ 1 12 k ¯ 1 34 k ¯ 1 56 k ¯ 1 78 . For   P 4 ( x ) :   2 K 23 = L 15 + L 16 + L 17 / 2 , L 15 = T 1 3 2 / 2 k ¯ 3 1 3 , L 16 = k = 1 2 L 16 k ,
L 162 = s ¯ 2 t ¯ 2 4 k ¯ 2 34 , L 17 = T 1 4 2 / 2 ( k ¯ 12 k ¯ 34 ) 3 = k = 1 2 L 17 k , L 172 = L 171 ;   3 K 23 = k = 18 20 L k : L 18 = U 1 4 2 k ¯ 3 1 4 = L 181 / 3 + L 182 / 4 , L 19 = T 1 5 2 k ¯ 2 1 3 k ¯ 1 45 = k = 1 4 L 19 k / g k for g 1 = 3 , g 2 = g 3 = g 4 = 1 , L 192 = t ¯ 3 k ¯ 2 1 3 t ¯ 1245 k ¯ 1 45 , L 193 = t ¯ 412 k ¯ 2 1 3 t ¯ 35 k ¯ 1 54 , L 194 = k ¯ 1 45 t ¯ 451 k ¯ 2 1 3 t ¯ 23 , L 20 = T 1 6 2 k ¯ 1 12 k ¯ 1 34 k ¯ 1 56 / 4 = L 201 / 4 + L 202 / 2 + L 203 / 4 + L 204 / 6 , L 202 = k ¯ 1 12 t ¯ 1235 ( k ¯ 1 34 k ¯ 1 56 ) t ¯ 46 .   3 K 44 = L 21 + 12 L 22 + 4 L , L = T 1 6 4 / 4 ( k ¯ 12 k ¯ 34 k ¯ 56 ) 4 = L 23 + 3 L 24 , L 22 = T 1 5 4 / 12 ( k ¯ 1 3 k ¯ 45 ) 4 = ( L 221 + L 222 ) by   ( 15 ) , L 23 = t ¯ 135 t ¯ 2 t ¯ 4 t ¯ 6 ( k ¯ 12 k ¯ 34 k ¯ 56 ) 4 = k = 1 3 L 23 k by ( 15 ) , L 23 k L 231 . By ( 15 ) , L 24 = t ¯ 13 t ¯ 25 t ¯ 4 t ¯ 6 ( k ¯ 12 k ¯ 34 k ¯ 56 ) 4 = 2 L 241 + L 242 ;   4 K 44 = 2 L 25 + 2 L 26 + 6 L 27 + L 28 + L 29 , L 25 = k ¯ 4 1 5 U 1 5 4 / 4 , L 26 = U 1 6 4 / 2 k ¯ 3 1 4 k ¯ 1 56 , L 27 = V 1 6 4 / 6 k ¯ 2 1 3 k ¯ 2 4 6 = k = 1 3 L 27 k , L 28 = T 1 7 4 k ¯ 2 1 3 k ¯ 1 45 k ¯ 1 67 , L 29 = T 1 8 4 k ¯ 1 12 k ¯ 1 34 k ¯ 1 56 k ¯ 1 78 ; for   K 65 : L 31 = T 1 7 6 k 4 1 5 k 1 67 / 30 , L 32 = U 1 7 6 k 4 1 5 k 1 67 / 60 v , L 33 = T 1 8 6 k ¯ 3 1 4 k ¯ 1 56 k ¯ 1 78 / 60 , L 34 = U 1 8 6 k ¯ 2 1 3 k ¯ 2 4 6 k ¯ 2 78 / 90 , L 35 = T 1 9 6 k ¯ 2 1 3 k ¯ 1 45 k ¯ 1 67 k ¯ 1 89 / 60 , L 36 = T 1 10 6 k ¯ 1 12 k ¯ 1 34 k ¯ 1 56 k ¯ 1 78 k ¯ 1 9 , 10 / 6 .
Example 1.
Suppose that E w ^ = w and t ( w ) is linear in w. Then K 1 e = 0 for e ≥ 1 . For r ≤ 4 , the K i j needed for P r ( x ) of (4) for the distribution of Y n t of (12) are as follows.
K 10 = t ¯ = t ( w ) . F o r P 0 ( x ) : K 21 = v . F o r P 1 ( x ) : K 32 = c 21 . F o r P 2 ( x ) : K 22 =   1 K 22 = c 11 , K 43 = c 31 . F o r P 3 ( x ) : K 33 =   2 K 33 = L 4 , K 54 = L 9 . F o r P 4 ( x ) : K 23 =   1 K 23 = L 14 , K 44 =   3 K 44 = L 21 , K 65 = L 30 .
This is because u ¯ 1 , v ¯ 1 , x ¯ 1 , z ¯ 12 are 0, as are most c i j , L k and   2 K 22 ,   3 K 33 ,   2 K 23 ,   3 K 23 ,   4 K 44 .
So for r = 0 , … , 4 , for P r ( x ) we only need to calculate these 3 c i j and 5 L j .
Let G γ be a gamma random variable with known mean γ . Its rth cumulant is ( r − 1 ) ! γ . For a standard exponential random variable, γ = 1 .
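As a numerical sanity check (not part of the derivation), the stated cumulants of G γ can be recovered from its raw moments E G γ r = γ ( γ + 1 ) ⋯ ( γ + r − 1 ) via the standard moment-to-cumulant recursion; the shape value below is an arbitrary illustration.

```python
import math

def gamma_moment(shape, r):
    """rth raw moment of a Gamma(shape, scale=1) variable: shape*(shape+1)*...*(shape+r-1)."""
    m = 1.0
    for j in range(r):
        m *= shape + j
    return m

def cumulants_from_moments(m):
    """Convert raw moments m[0..n] (m[0]=1) to cumulants via the standard recursion
    kappa_r = m_r - sum_{j=1}^{r-1} C(r-1, j-1) * kappa_j * m_{r-j}."""
    n = len(m) - 1
    kappa = [0.0] * (n + 1)
    for r in range(1, n + 1):
        kappa[r] = m[r] - sum(math.comb(r - 1, j - 1) * kappa[j] * m[r - j]
                              for j in range(1, r))
    return kappa

shape = 2.5  # illustrative value of gamma
moments = [gamma_moment(shape, r) for r in range(6)]
kappa = cumulants_from_moments(moments)
for r in range(1, 6):
    # kappa_r should equal (r-1)! * gamma
    assert abs(kappa[r] - math.factorial(r - 1) * shape) < 1e-9
```

The recursion is exact, so the check holds for any shape parameter, not just the illustrative 2.5.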
Example 2.
Linear combinations of scale parameters. Suppose that E w ^ = w and t ( w ) is linear, the components of w ^ are independent, and for 1 ≤ i ≤ p , w ^ i / w i has a distribution with known rth cumulant n 1 − r κ r i . Then, K r e = 0 for e ≠ r − 1 and
K r , r − 1 = ∑ i = 1 p t i r κ r i ( w i ) r .
For example, if w ^ i / w i is a gamma random variable with mean γ , then κ r i = ( r − 1 ) ! γ .
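The displayed formula for K r , r − 1 rests on two exact facts: κ r ( c X ) = c r κ r ( X ) and additivity of cumulants over independent summands. A small self-contained check of that identity (the cumulant sequences and coefficients below are illustrative values, not from the paper):

```python
import math

def moments_from_cumulants(kappa, n):
    """m_r = sum_{j=1}^r C(r-1, j-1) * kappa_j * m_{r-j}, with m_0 = 1."""
    m = [1.0] + [0.0] * n
    for r in range(1, n + 1):
        m[r] = sum(math.comb(r - 1, j - 1) * kappa[j] * m[r - j] for j in range(1, r + 1))
    return m

def cumulants_from_moments(m):
    """Inverse recursion: kappa_r = m_r - sum_{j<r} C(r-1, j-1) * kappa_j * m_{r-j}."""
    n = len(m) - 1
    kappa = [0.0] * (n + 1)
    for r in range(1, n + 1):
        kappa[r] = m[r] - sum(math.comb(r - 1, j - 1) * kappa[j] * m[r - j] for j in range(1, r))
    return kappa

def moments_of_sum(ma, mb):
    """Moments of X+Y for independent X, Y: E(X+Y)^r = sum_j C(r,j) m_j(X) m_{r-j}(Y)."""
    n = len(ma) - 1
    return [sum(math.comb(r, j) * ma[j] * mb[r - j] for j in range(r + 1)) for r in range(n + 1)]

n_ord = 4
kap1 = [0.0, 1.0, 2.0, 1.5, 0.7]   # illustrative cumulants of a first component
kap2 = [0.0, 0.5, 1.0, 0.8, 0.3]   # illustrative cumulants of a second component
t = [2.0, -1.0]                     # linear coefficients t_i
# scaling: kappa_r(t_i * X) = t_i^r * kappa_r(X)
skap1 = [t[0] ** r * kap1[r] for r in range(n_ord + 1)]
skap2 = [t[1] ** r * kap2[r] for r in range(n_ord + 1)]
m_sum = moments_of_sum(moments_from_cumulants(skap1, n_ord),
                       moments_from_cumulants(skap2, n_ord))
k_sum = cumulants_from_moments(m_sum)
for r in range(1, n_ord + 1):
    # cumulants of the linear combination equal the sum of the scaled cumulants
    assert abs(k_sum[r] - (skap1[r] + skap2[r])) < 1e-9
```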
For s ≤ r and any function f i s i r , set ∑ s r f ¯ s r = ∑ i s , … , i r f i s i r summed over their range. In Example 3 their range is 1 , 2 ; for example, in L 123 , ∑ 1 t 2 i 1 v ¯ 1 = ∑ i 1 = 1 2 t 2 i 1 v ¯ 1 . In Example 4 their range is 1 , … , k ; for example, u ¯ 1 = ∑ 2 t ¯ 12 s ¯ 2 = ∑ i 2 = 1 k t ¯ 12 s ¯ 2 .
Example 3.
Suppose that μ ^ ∼ N 1 ( μ , V / n ) and V ^ / V = χ f 2 / f = G γ / γ are independent, where γ = f / 2 has magnitude n. Set ν = γ / n . Then κ r ( μ ^ ) = μ δ r 1 + V n − 1 δ r 2 , κ r ( V ^ ) = k r n 1 − r for k r = ( r − 1 ) ! ν 1 − r V r , and cross-cumulants of w ^ are zero. Take p = 2 , w 1 = μ , w 2 = V . Then by Corollary 1, K r e are given in terms of
s 1 = t 1 V , s 2 = t 2 k 2 , u 1 = t 11 t 1 V + t 12 t 2 k 2 , u 2 = t 12 t 1 V + t 22 t 2 k 2 , v 1 = V u 1 , v 2 = k 2 u 2 ,
as follows.
For P 0 ( x ) : K 21 = v = t 1 2 V + t 2 2 k 2 / 2 . For P 1 ( x ) : c 02 = t 11 + t 22 k 2 , c 21 = t 2 3 k 3 , c 23 = i = 1 2 t i i ( s i ) 2 . For P 2 ( x ) : c 11 = 0 , c 15 = t 22 t 2 k 3 , c 19 = ( t 11 V ) 1 + ( t 22 k 2 ) 2 + 2 t 12 V k 2 , c 31 = t 2 4 k 4 , c 36 = t 2 2 k 3 u 2 , c 3 , 10 = u 1 2 V + u 2 2 k 2 , c 3 , 11 = i = 1 2 ( s i ) 3 t i i i + 3 12 2 ( s 1 ) 2 s 2 t 112 w h e r e 12 2 f 12 = f 12 + f 21 . For P 3 ( x ) : L 1 = L 4 = L 5 = 0 , L 2 = t 222 k 3 , L 3 = t 1111 V 2 + 2 t 1122 V k 2 + t 2222 k 2 2 , L 6 = t 22 t 2 2 k 4 , L 71 = z 22 k 3 t 2 , L 72 = y 2 ( t 211 V + t 222 k 3 ) , L 73 = t 22 k 3 u 2 , L 74 = ( V t 12 2 + k 2 t 22 2 ) k 3 t 2 , L 81 = ( V t 1 i 2 i 3 i 4 + k 2 t 2 i 2 i 3 i 4 ) s ¯ 3 s ¯ 4 ,
L 82 = 1 ( V t 11 i 1 + k 2 t 22 i 1 ) v i 1 , L 83 = V 2 t 11 z 11 + 2 V k 2 t 12 z 12 + k 2 2 t 22 z 22 , L 84 = 1 3 k ¯ 1 11 k ¯ 1 22 k ¯ 1 33 t ¯ 12 t ¯ 23 t ¯ 31 , L 9 = t 2 5 k 5 , L 10 = u 2 t 2 3 k 4 , L 11 = ( y 2 ) 2 t 22 , L 121 = y 2 12 t 2 i 1 i 2 s ¯ 1 s ¯ 2 , L 122 = t 2 k 3 u 2 2 , L 123 = y 2 1 t 2 i 1 v ¯ 1 , L 13 = 1 3 s ¯ 1 s ¯ 2 t ¯ 1 3 k ¯ 1 33 u ¯ 3 .
Similarly, one can write down the Ls needed for P 4 ( x ) .
Example 4.
Suppose that we have the summary statistics from k samples of size n i from normal populations with means and variances μ i , V i , 1 ≤ i ≤ k . Take p = 2 k , w i = μ i , w i + k = V i , 1 ≤ i ≤ k . So we have p independent statistics, μ ^ i ∼ N ( μ i , V i / n i ) and V ^ i ∼ V i χ f i 2 / f i = V i G γ i / γ i , 1 ≤ i ≤ k , where γ i = f i / 2 has magnitude n, the total sample size. Set
ν i = γ i / n , λ i = n i / n , τ i = V i / λ i .
Then κ r ( μ ^ i ) = μ i δ r 1 + V i ( λ i n ) − 1 δ r 2 , κ r ( V ^ i ) = k r i ( λ i n ) 1 − r for k r i = ( r − 1 ) ! ν i 1 − r , and cross-cumulants of w ^ are zero. Suppose that t ( w ) only depends on μ 1 , … , μ k , as in Example 3.3 of [2]. (The notation there is slightly different.) Then
s ¯ 1 = t ¯ 1 τ ¯ 1 , u ¯ 1 = ∑ 2 t ¯ 12 s ¯ 2 , v ¯ 1 = τ ¯ 1 u ¯ 1 , z ¯ 12 = ∑ 3 t ¯ 1 3 s ¯ 3 ,
and by Corollary 1, the coefficients needed are as follows.
For P 0 ( x ) : K 21 = v = t ¯ 1 s ¯ 1 = 1 t ¯ 1 2 τ ¯ 1 , For P 1 ( x ) : c 02 = 1 t ¯ 11 τ ¯ 1 , c 21 = 0 , c 23 = s ¯ 1 t ¯ 12 s ¯ 2 = 12 t ¯ 1 τ ¯ 1 t ¯ 12 t ¯ 2 τ ¯ 2 , For P 2 ( x ) , K 22 : c 11 = c 15 = 0 , c 19 = 12 t ¯ 12 2 τ ¯ 1 τ ¯ 2 , c 1 , 10 = 12 t ¯ 1 τ ¯ 1 t ¯ 122 τ ¯ 2 ; For K 43 : c 31 = c 36 = 0 , c 3 , 10 = 1 u ¯ 1 2 τ ¯ 1 , c 3 , 11 = 1 3 s ¯ 1 s ¯ 2 s ¯ 3 t ¯ 1 3 . For P 3 ( x ) , K 12 = L 3 / 6 = 12 t ¯ 1122 τ ¯ 1 τ ¯ 2 / 6 a s L 1 = L 2 = 0 ; for K 33 : L 4 = L 5 = L 6 = L 7 = 0 , L 81 = 1 3 τ ¯ 1 t ¯ 1123 s ¯ 2 s ¯ 3 , L 82 = 12 τ ¯ 1 t ¯ 122 v ¯ 2 , L 83 = 12 t ¯ 12 τ ¯ 1 τ ¯ 2 z ¯ 12 = 1 3 τ ¯ 1 τ ¯ 2 τ ¯ 3 t ¯ 12 t ¯ 123 t ¯ 3 , L 83 = 1 3 τ ¯ 1 τ ¯ 2 τ ¯ 3 t ¯ 12 t ¯ 23 t ¯ 31 ; for K 54 : L k = 0 f o r 10 k 14 . For P 4 ( x ) , K 23 : L i , k = 0 f o r i = 15 , 16 , 18 , 19 , L 171 = 12 t ¯ 12 2 τ ¯ 1 τ ¯ 2 , L 201 = 1 3 τ ¯ 1 τ ¯ 2 t ¯ 11223 s ¯ 5 , L 202 = 1 3 τ ¯ 1 τ ¯ 2 τ ¯ 3 t ¯ 1123 t ¯ 23 , L 203 = 1 3 τ ¯ 1 τ ¯ 2 τ ¯ 3 t ¯ 112 t ¯ 233 , L 204 = 1 3 τ ¯ 1 τ ¯ 2 τ ¯ 3 t ¯ 123 2 ; K 44 = L 29 a s L k = 0 f o r k = 21 , ( 22 , k ) , ( 23 , 1 ) , ( 24 , k ) , 25 , ( 26 , k ) , ( 27 , k ) , ( 28 , k ) , L 291 = 1 4 τ ¯ 1 s ¯ 2 s ¯ 3 s ¯ 4 t ¯ 11234 , L 292 = 1 3 τ ¯ 1 t ¯ 1123 v ¯ 2 s ¯ 3 , L 293 = 1 4 τ ¯ 1 τ ¯ 2 s ¯ 3 s ¯ 4 t ¯ 1234 , L 294 = 1 4 τ ¯ 1 τ ¯ 2 t ¯ 112 t ¯ 234 s ¯ 3 s ¯ 4 , L 295 = 12 τ ¯ 1 z ¯ 12 2 τ ¯ 2 , L 296 = 1 3 τ ¯ 1 τ ¯ 2 τ ¯ 3 t ¯ 112 t ¯ 23 , L 297 = 12 τ ¯ 1 x ¯ 12 t ¯ 12 f o r x ¯ 12 = 3 t ¯ 123 v ¯ 3 , L 298 = 1 3 τ ¯ 1 τ ¯ 2 τ ¯ 3 t ¯ 12 t ¯ 23 z ¯ 31 , L 299 = 1 4 τ ¯ 1 τ ¯ 2 τ ¯ 3 τ ¯ 4 t ¯ 12 t ¯ 23 t ¯ 34 t ¯ 41 ; K 65 = 6 L 36 , a s   t h e   o t h e r   c o m p o n e n t s   o f   K 65   a r e   0 .   A l s o , K 13 = 1 3 τ ¯ 1 τ ¯ 2 τ ¯ 3 t ¯ 112233 / 48 , K 14 = 1 4 τ ¯ 1 τ ¯ 2 τ ¯ 3 τ ¯ 4 t ¯ 11223344 / 384 .
Corollary 2.
Set Y n = v − 1 / 2 Y n t = ( n / v ) 1 / 2 ( t ( w ^ ) − t ( w ) ) . Then
Prob ( Y n ≤ x ) = ∑ r = 0 ∞ n − r / 2 P r v ( x )
where P r v ( x ) is P r ( x ) of Corollary 1 with K r e replaced by K r e / v r / 2 .
Proof 
This is straightforward. □
Viewing K r e , c a b , v , s ¯ 2 , u ¯ 3 as functions of w, we denote their partial derivatives with respect to w ¯ 3 = w i 3 , say, by K ¯ r e 3 , c ¯ a b 3 , v ¯ 3 , s ¯ 3 2 , u ¯ 3 3 , and similarly for higher derivatives. We give the ones we need in Lemma 1. When constructing confidence regions, one needs to assume that the distribution of w ^ is determined by w; so far, we have not assumed this. For w ^ biased, we need
Corollary 3.
Let w ^ in R p be a biased standard estimate of w satisfying (1) where k ¯ e 1 r may depend on w. Then for q = 1 , t ^ = t ( w ^ ) is a standard estimate of t = t ( w ) in R:
κ r ( t ^ ) = ∑ e = r − 1 ∞ n − e a r e for r ≥ 1 , where a r e = K r e + D r e , D r , r − 1 = 0 ,
and the other D r e needed for P j ( x ) of (4) for the distribution of Y n t of (12) are as follows.
F o r   P 0 ( x ) : D 21 = 0 ⇒ a 21 = K 21 = v = t ¯ 1 k ¯ 1 12 t ¯ 2 . F o r   P 1 ( x ) : D 11 = c 01 ⇒ a 11 = c 01 + c 02 / 2 , a 32 = K 32 = c 21 + 3 c 23 . F o r   P 2 ( x ) : D 22 = v ¯ 3 k ¯ 1 3 = c 12 + 2 c 16 , a 43 = K 43 . F o r   P 3 ( x ) : D 12 = K 11 , 1 + K 10 , 2 , K 11 , 1 = K ¯ 11 3 k ¯ 1 3 , K 10 , 2 = t ¯ 1 k ¯ 2 1 + k ¯ 1 1 t ¯ 12 k ¯ 1 2 / 2 , D 33 = K ¯ 32 4 k ¯ 1 4 , a 54 = K 54 . F o r   P 4 ( x ) : D 23 = K 22 , 1 + K 21 , 2 , K 22 , 1 = K ¯ 22 3 k ¯ 1 3 , K 21 , 2 = v ¯ 3 k ¯ 2 3 / 2 + v ¯ 34 k ¯ 1 3 k ¯ 1 4 , D 44 = K ¯ 43 5 k ¯ 1 5 , a 65 = K 65 .
Proof 
This follows from Theorem 3. D r c = ∑ e = 1 c − r + 1 K r , c − e , e , where K r k e is the coefficient of n − e in the expansion of K r k ( E w ^ ) about K r k . □
For r ≤ s and any X ¯ r s , let ∑ r s N X ¯ r s denote the sum over all N permutations of i r , … , i s giving distinct terms. For example,
∑ 3 5 3 t ¯ 13 t ¯ 245 = t ¯ 13 t ¯ 245 + t ¯ 14 t ¯ 235 + t ¯ 15 t ¯ 234 .
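The permutation-sum notation can be made concrete. The illustrative script below (not from the paper) generates the distinct terms obtained by permuting i 3 , i 4 , i 5 in t ¯ 13 t ¯ 245 , treating each factor as symmetric in its indices, and recovers the three-term expansion above.

```python
from itertools import permutations

def distinct_terms(factors, perm_indices):
    """Apply every permutation of perm_indices to the index slots in factors
    (each factor is a tuple of indices, symmetric in its indices, so it is
    canonicalised by sorting) and collect the distinct resulting terms."""
    seen = set()
    for p in permutations(perm_indices):
        sub = dict(zip(perm_indices, p))
        term = tuple(sorted(tuple(sorted(sub.get(i, i) for i in f)) for f in factors))
        seen.add(term)
    return sorted(seen)

# Permuting i_3, i_4, i_5 in t_13 * t_245 gives N = 3 distinct terms.
terms = distinct_terms([(1, 3), (2, 4, 5)], (3, 4, 5))
assert terms == [((1, 3), (2, 4, 5)), ((1, 4), (2, 3, 5)), ((1, 5), (2, 3, 4))]
```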
The derivatives of v = K 21 and K r e needed for Corollary 3 are given by
Lemma 1.
v ¯ 3 = 2 u ¯ 3 + T ¯ 3 , v ¯ 34 = k = 0 2 v ¯ 34 k , w h e r e   T ¯ 3 = t ¯ 1 t ¯ 2 k ¯ 1 3 12 , v ¯ 340 = 2 z ¯ 34 + 2 t ¯ 31 k ¯ 1 12 t ¯ 24 , v ¯ 341 = 2 t ¯ 1 ( k ¯ 1 3 12 t ¯ 24 + k ¯ 1 4 12 t ¯ 23 ) , v ¯ 342 = t ¯ 1 t ¯ 2 k ¯ 1 34 12 .
v ¯ 3 5 = k = 0 3 v ¯ 3 5 k f o r v ¯ 3 50 = 2 k ¯ 1 62 3 5 3 t ¯ 63 t ¯ 245 + s ¯ 6 t ¯ 3 6 , v ¯ 3 51 = 2 3 5 3 ( t ¯ 1 k ¯ 1 3 12 t ¯ 245 + t ¯ 31 k ¯ 1 4 12 t ¯ 25 ) , v ¯ 3 52 = 2 t ¯ 1 3 5 3 t ¯ 23 k ¯ 1 45 12 , v ¯ 3 53 = t ¯ 1 t ¯ 2 k ¯ 1 3 5 12 , 2 K ¯ 11 3 = c 02 3 = k ¯ 1 12 t ¯ 1 3 + t ¯ 12 k ¯ 1 3 12 , K ¯ 32 4 = c 21 4 + 3 c 23 4 f o r   c 21 4 = 3 Y ¯ 4 + t ¯ 1 t ¯ 2 t ¯ 3 k ¯ 2 4 1 3 , c 23 4 = s ¯ 1 s ¯ 2 t ¯ 124 + 2 b 4 , b 4 = s ¯ 4 1 u ¯ 1 = x ¯ 4 + u ¯ 1 k ¯ 1 4 12 t ¯ 2 . K ¯ 22 3 = c 11 3 + c 15 3 + c 19 3 / 2 + c 1 , 10 3 f o r c 11 3 = 2 t ¯ 31 S ¯ 1 + t ¯ 1 t ¯ 2 k ¯ 2 3 12 , c 15 4 = k = 1 3 c 15 k 4 , c 1514 = t ¯ 14 k ¯ 2 1 3 t ¯ 23 , c 1524 = t ¯ 1 k ¯ 2 1 3 t ¯ 2 4 , c 1534 = t ¯ 1 k ¯ 2 4 1 3 t ¯ 23 , c 19 5 = 2 c 1915 + 2 c 1925 , c 1915 = t ¯ 34 k ¯ 1 31 k ¯ 1 42 t ¯ 125 , c 1925 = k ¯ 1 5 23 t ¯ 34 k ¯ 1 41 t ¯ 12 , c 1 , 10 4 = k = 1 3 c 1 , 10 k 4 , c 1 , 1014 = k ¯ 1 23 t ¯ 1 3 s ¯ 4 1 = k ¯ 1 23 t ¯ 1 3 ( k ¯ 1 15 t ¯ 54 + t ¯ 5 k ¯ 1 4 15 ) , c 1 , 1024 = s ¯ 1 k ¯ 1 23 t ¯ 1 4 , c 1 , 1034 = s ¯ 1 t ¯ 1 3 k ¯ 1 4 23 , K ¯ 43 5 = c 31 5 + 12 c 36 5 + 12 c 3 , 10 5 + 4 c 3 , 11 5 f o r c 31 5 = 4 t ¯ 51 k ¯ 3 1 4 t ¯ 2 t ¯ 3 t ¯ 4 + t ¯ 1 t ¯ 4 k ¯ 3 5 1 4 , c 36 5 = 2 t ¯ 51 k ¯ 2 1 3 t ¯ 2 u ¯ 3 + t ¯ 1 t ¯ 2 u ¯ 3 k ¯ 2 5 1 3 + y ¯ 3 t ¯ 3 5 s ¯ 4 + Y ¯ 4 ( k ¯ 1 5 41 t ¯ 1 + k ¯ 1 41 t ¯ 15 ) . c 3 , 10 3 = u ¯ 1 k ¯ 1 3 12 u ¯ 2 + 2 v ¯ 1 s ¯ 2 t ¯ 1 3 + 2 t ¯ 1 x ¯ 2 k ¯ 1 3 12 + 2 x ¯ 1 k ¯ 1 12 t ¯ 23 ,
c 3 , 11 5 = 3 s ¯ 1 s ¯ 2 t ¯ 1 3 s ¯ 5 3 = 3 s ¯ 1 s ¯ 2 t ¯ 1 3 t ¯ 4 k ¯ 1 5 34 + 3 s ¯ 1 s ¯ 2 t ¯ 1 3 k ¯ 1 34 t ¯ 45 .
Proof 
For example, substitute u ¯ 3 5 = t ¯ 3 5 s ¯ 4 + t ¯ 34 k ¯ 1 5 41 t ¯ 1 + t ¯ 34 k ¯ 1 41 t ¯ 15 into c 36 5 = 2 t ¯ 1 t ¯ 25 k ¯ 2 1 3 u ¯ 3 + t ¯ 1 t ¯ 2 ( k ¯ 2 5 1 3 u ¯ 3 + k ¯ 2 1 3 u ¯ 3 5 ) .
So now we can write D r e needed for Corollary 3 in molecular form:
Corollary 4.
Assume that the conditions of Corollary 3 hold. Then D r e and K r e , j given there satisfy
D 22 = c 12 + 2 c 16 a 22 = c 11 + c 15 + c 19 / 2 + c 1 , 10 + c 12 + 2 c 16 . F o r   D 12 , K 11 , 1 = ( t ¯ 1 3 k ¯ 1 12 + t ¯ 12 k ¯ 1 3 12 ) k ¯ 1 3 / 2 . D 33 = ( 3 y ¯ 3 t ¯ 34 + t ¯ 1 t ¯ 2 t ¯ 3 k ¯ 2 4 1 3 + 3 s ¯ 1 s ¯ 2 t ¯ 124 + 6 v ¯ 1 t ¯ 14 + 6 u ¯ 1 k ¯ 1 4 12 t ¯ 2 ) k ¯ 1 4 . F o r   D 23 , K 22 , 1 = ( 2 t ¯ 31 S ¯ 1 + t ¯ 1 t ¯ 2 k ¯ 2 3 12 ) k ¯ 1 3 + ( t ¯ 14 k ¯ 2 1 3 t ¯ 23 + t ¯ 1 k ¯ 2 1 3 t ¯ 2 4 + t ¯ 1 k ¯ 2 4 1 3 t ¯ 23 ) k ¯ 1 4 + ( t ¯ 125 X ¯ 12 + k ¯ 1 5 23 t ¯ 34 k ¯ 1 41 t ¯ 12 ) k ¯ 1 5 + [ k ¯ 1 23 t ¯ 1 3 ( k ¯ 1 15 t ¯ 54 + t ¯ 5 k ¯ 1 4 15 ) + s ¯ 1 k ¯ 1 23 t ¯ 1 4 + s ¯ 1 t ¯ 1 3 k ¯ 1 4 23 ] k ¯ 1 4 ; K 21 , 2 = ( 2 u ¯ 3 + t ¯ 1 t ¯ 2 k ¯ 1 3 12 ) k ¯ 2 3 / 2 + [ 4 t ¯ 1 k ¯ 1 3 12 t ¯ 24 + s ¯ 1 t ¯ 134 + t ¯ 31 k ¯ 1 12 t ¯ 24 + t ¯ 1 t ¯ 2 k ¯ 1 34 12 ] k ¯ 1 3 k ¯ 1 4 . D 44 = [ 4 t ¯ 51 k ¯ 3 1 4 t ¯ 2 t ¯ 3 t ¯ 4 + t ¯ 1 t ¯ 4 k ¯ 3 5 1 4 + 24 t ¯ 51 k ¯ 2 1 3 t ¯ 2 u ¯ 3 + 12 t ¯ 1 t ¯ 2 u ¯ 3 k ¯ 2 5 1 3 + 12 y ¯ 3 ( t ¯ 3 5 s ¯ 4 + t ¯ 34 k ¯ 1 5 41 t ¯ 1 + t ¯ 34 k ¯ 1 41 t ¯ 15 ) ] k ¯ 1 5 + 12 [ 2 v ¯ 2 t ¯ 2 4 s ¯ 4 + 2 t ¯ 1 x ¯ 2 k ¯ 1 3 12 + 2 x ¯ 1 k ¯ 1 12 t ¯ 23 + 12 u ¯ 1 k ¯ 1 3 12 u ¯ 2 ] k ¯ 1 3 + 12 ( s ¯ 1 s ¯ 2 t ¯ 1 3 t ¯ 4 k ¯ 1 5 34 + s ¯ 1 s ¯ 2 t ¯ 1 3 k ¯ 1 34 t ¯ 45 ) k ¯ 1 5 .
Proof 
a i e were given for i ≤ 4 by Theorem 3. Corollaries 3 and 4 agree with a 11 , a 32 , a 22 , a 43 given for P r ( x ) , r ≤ 2 , on p. 59 of [2], except that D 22 in a 22 was overlooked. □
Fisher and Cornish (1960) [28] showed the accuracy attainable using only a few terms of the quantile expansions for the chi-square (or gamma), Student's t, and F distributions. Similar results can be given for the accuracy of their Edgeworth expansions in approximating their distributions.
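To illustrate the kind of accuracy [28] demonstrated, here is a hedged numerical sketch (not taken from either paper) applying the classical fourth-order Cornish–Fisher quantile approximation to χ 10 2 ; the reference value 18.307 is the standard tabulated 95% point.

```python
import math

def cornish_fisher_quantile(z, mean, var, kappa3, kappa4):
    """Fourth-order Cornish-Fisher approximation to the quantile of a distribution
    with the given mean, variance and third/fourth cumulants, evaluated at the
    standard normal quantile z."""
    g1 = kappa3 / var ** 1.5          # skewness
    g2 = kappa4 / var ** 2            # excess kurtosis
    w = (z
         + (z ** 2 - 1) * g1 / 6
         + (z ** 3 - 3 * z) * g2 / 24
         - (2 * z ** 3 - 5 * z) * g1 ** 2 / 36)
    return mean + math.sqrt(var) * w

f = 10                                # degrees of freedom
# cumulants of chi-square_f: kappa_r = 2^(r-1) * (r-1)! * f
mean, var, k3, k4 = f, 2 * f, 8 * f, 48 * f
z95 = 1.6448536269514722              # standard normal 95% quantile
approx = cornish_fisher_quantile(z95, mean, var, k3, k4)
# 18.307 is the standard tabulated 95% point of chi-square_10
assert abs(approx - 18.307) < 0.05
```

With only four cumulants, the approximation is within about 0.01 of the exact 95% point here, consistent with the accuracy [28] reported for low-order terms.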

6. An Extension to Theorem 1

Here we remove the condition in Theorem 1 that E w ^ = w and give the extra terms referred to, but not given, on p. 49 of [19]. We use   e K ¯ 1 r of Theorem 1, and the shorthand f ¯ m = ∂ i m f ¯ , where ∂ i = ∂ / ∂ w i . Suppose that for r ≥ 1 , the rth-order cumulants of (8) can be expanded as
k ¯ 1 r = κ ( w ^ i 1 , … , w ^ i r ) = ∑ e = r − 1 ∞   e k ¯ 1 r for 1 ≤ i 1 , … , i r ≤ p , where   e k ¯ 1 r = O ( n − e ) , and that w ^ → w as n → ∞ , so that   0 k ¯ 1 = w ¯ 1 .
There is a key difference with Theorem 1: there,   e k ¯ 1 r was treated as an algebraic expression. But now we must view each of them as a function of w. So, we assume that the distribution of w ^ is determined by w.
The derivatives of   e K ¯ 1 r of Theorem 1 are given by Leibniz’s rule for the derivatives of a product:
  1 K ¯ 3 12 = ( t ¯ 1 1 t ¯ 2 2 k ¯ 12 ) 3 = ( t ¯ 13 1 t ¯ 2 2 + t ¯ 1 1 t ¯ 23 2 ) k ¯ 12 + t ¯ 1 1 t ¯ 2 2 k ¯ 3 12 ,   1 K ¯ 3 1 = ( t ¯ 12 1 k ¯ 12 ) 3 / 2 = t ¯ 1 3 1 k ¯ 12 / 2 + t ¯ 12 1 k ¯ 3 12 / 2 , ( t ¯ 1 1 t ¯ 2 2 t ¯ 3 3 k ¯ 1 3 ) 4 = ( t ¯ 1 1 t ¯ 2 2 t ¯ 3 3 ) 4 k ¯ 1 3 + t ¯ 1 1 t ¯ 2 2 t ¯ 3 3 k ¯ 4 1 3 , ( t ¯ 1 1 t ¯ 2 2 t ¯ 3 3 ) 4 = t ¯ 1 1 t ¯ 2 2 t ¯ 34 3 + t ¯ 1 1 t ¯ 24 2 t ¯ 3 3 + t ¯ 14 1 t ¯ 2 2 t ¯ 3 3 , ( T 1 4 1 3 k ¯ 12 k ¯ 34 ) 5 = ( T 1 4 1 3 ) 5 k ¯ 12 k ¯ 1 34 + T 1 4 1 3 ( k ¯ 12 k ¯ 34 ) 5 , ( T 1 4 1 3 ) 5 = 3 ( t ¯ 135 1 t ¯ 2 2 t ¯ 4 3 + t ¯ 13 1 t ¯ 25 2 t ¯ 4 3 + t ¯ 13 1 t ¯ 2 2 t ¯ 45 3 ) , ( k ¯ 12 k ¯ 34 ) 5 = k ¯ 5 12 k ¯ 34 + k ¯ 12 k ¯ 5 34 .
Theorem 4.
Let w ^ in R p be a biased standard estimate of w satisfying (30). Then t ^ = t ( w ^ ) in R q is a standard estimate of t = t ( w ) :
κ ( t ^ j 1 , … , t ^ j r ) = ∑ e = r − 1 ∞   e a ¯ 1 r for r ≥ 1 , 1 ≤ j 1 , … , j r ≤ q ,
where   e a ¯ 1 r =   e K ¯ 1 r +   e D ¯ 1 r ,   r − 1 D ¯ 1 r = 0 ,
and the other   e D ¯ 1 r =   e D j 1 j r needed for P r n ( x ) of (13) for the distribution of Y n t of (12) are as follows.
F o r   P 0 ( x ) :   1 D ¯ 12 = 0   1 a ¯ 12 =   1 K ¯ 12 =   1 K j 1 j 2 = t ¯ 1 1 t ¯ 2 2   1 k ¯ 12 ] , F o r   P 1 ( x ) :   1 D ¯ 1 =   1 t ¯ 1 k ¯ 1   1 a ¯ 1 = t ¯ 1 1   1 k ¯ 1 + t ¯ 12 1   1 k ¯ 12 / 2 , F o r   P 2 ( x ) :   2 D ¯ 12 =   1 K ¯ 3 12   1 k ¯ 3 a ¯ 2 12 = t ¯ 1 1 t ¯ 2 2   2 k ¯ 12 + T 1 3 12   2 k ¯ 1 3 / 2 + T 1 4 12   1 k ¯ 12   1 k ¯ 34 / 2 + ( t ¯ 13 1 t ¯ 2 2 + t ¯ 1 1 t ¯ 23 2 )   1 k ¯ 12 + t ¯ 1 1 t ¯ 2 2   1 k ¯ 3 12   1 k ¯ 3 . F o r   P 3 ( x ) :   2 D ¯ 1 =   1 K ¯ 1 1 +   0 K ¯ 2 1 ,   1 K ¯ 1 1 = (   1 K ¯ 1 ) 3   1 k ¯ 3 ,   0 K ¯ 2 1 = t ¯ 1 1   2 k ¯ 1 + t ¯ 12 1   1 k ¯ 1   1 k ¯ 2 / 2   2 a ¯ 1 = t ¯ 1 1   2 k ¯ 1 + t ¯ 12 1 (   2 k ¯ 12 +   1 k ¯ 1   1 k ¯ 2 +   1 K ¯ 3 12   1 k ¯ 3 ) / 2 + t ¯ 1 3 1 (   2 k ¯ 1 3 / 6 +   1 k ¯ 1   1 k ¯ 23 / 2 ) + t ¯ 1 4 1   1 k ¯ 12   1 k ¯ 34 / 8 ,   3 D ¯ 1 3 =   2 K ¯ 4 1 3 k ¯ 1 4 = ( t ¯ 1 1 t ¯ 2 2 t ¯ 3 3   2 k ¯ 1 3 ) 4   1 k ¯ 4 + ( T 1 4 1 3   1 k ¯ 12   1 k ¯ 34 ) 5   1 k ¯ 5 . F o r   P 4 ( x ) :   3 D ¯ 12 =   2 K ¯ 1 12 +   1 K ¯ 2 12 ,   2 K ¯ 1 12 =   2 K ¯ 3 12   1 k ¯ 3 ,   1 K ¯ 2 12 =   1 K ¯ 3 12   2 k ¯ 3 / 2 +   1 K ¯ 34 12   1 k ¯ 3   1 k ¯ 4 ,   4 D ¯ 1 4 =   3 K ¯ 5 1 4   1 k ¯ 5 ,   5 D ¯ 1 5 = 0 .
Proof 
K ¯ 1 r ( w ) = K ¯ 1 r and   e K ¯ 1 r ( w ) =   e K ¯ 1 r are functions of w. By (10),
K ¯ 1 r ( w n ) = ∑ k = r − 1 ∞   k K ¯ 1 r ( w n ) for w n = E w ^ = w + d n ,
where by (30), d n has i 1 th component d ¯ n 1 = d n i 1 = ∑ e = 1 ∞   e k ¯ 1 . Consider the Taylor series expansion
  k K ¯ 1 r ( w + d n ) =   k K ¯ 1 r +   k K ¯ 1 1 r d ¯ n 1 +   k K ¯ 12 1 r d ¯ n 1 d ¯ n 2 / 2 ! + ⋯ = ∑ e = 0 ∞   k K ¯ e 1 r
say. Substituting into (10) gives (31) with
  c a ¯ 1 r = ∑ k + e = c   k K ¯ e 1 r = ∑ e = 0 c − r + 1   c − e K ¯ e 1 r .
Also   k K ¯ 0 1 r =   k K ¯ 1 r so that (32) holds with
  c D ¯ 1 r = ∑ e = 1 c − r + 1   c − e K ¯ e 1 r :   r D ¯ 1 r =   r − 1 K ¯ 1 1 r ,   r + 1 D ¯ 1 r = ∑ e = 1 2   r + 1 − e K ¯ e 1 r ,
The Edgeworth expansion (13) holds if {   e K ¯ 1 r } are replaced by {   e a ¯ 1 r } . □

7. Discussion

Approximations to the distributions of estimates are of vital importance in statistical inference. Asymptotic normality uses just the first term of the Edgeworth expansion. That approximation can be greatly improved with further terms. When the estimate is a sample mean, basic results were given by [29] and [30], with major advances by [31,32,33], Corollary 20.4 of [9], and many others. For an application to the jackknife, see [34]. See [35] for some historical references. For an application to the bootstrap, see [26]. For an application to transport, see [36]. For an application to medical research, see [27]. For an application to econometrics, see [37]. For an extension to order statistics for a finite population, see [38]. For a first-order application to inference on networks, see [39]. For more historical references and a recent application to option and derivative pricing, see [40].
Extensions to stationary sequences were given by [41,42]. For a derivation of the Edgeworth expansion for a sample mean from the Gram–Charlier expansion, see [43] for the univariate case and [5] for the vector case. These papers showed for the first time that the coefficients in these expansions are Bell polynomials in the cumulants.
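That Bell-polynomial structure is easy to verify in a computer algebra system. As an illustration (using sympy's partial Bell polynomials; the helper below is not from the cited papers), the complete Bell polynomial of the first r cumulants recovers the classical moment–cumulant identity $m_r=B_r(k_1,\ldots,k_r)$:

```python
# The coefficients of Charlier/Edgeworth expansions are Bell polynomials in
# the cumulants; the same polynomials map cumulants to raw moments.
import sympy as sp

k = sp.symbols('k1:5')              # cumulants k1..k4

def complete_bell(r, x):
    # complete Bell polynomial assembled from sympy's partial Bell polynomials
    return sum(sp.bell(r, j, x[:r - j + 1]) for j in range(1, r + 1))

m3 = sp.expand(complete_bell(3, k))
print(m3)                           # m3 equals k1**3 + 3*k1*k2 + k3
```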
The first extension beyond a sample mean, for univariate estimates, was by [28,44]. They assumed that the rth cumulant of the estimate was $\kappa_r(\hat w)=n^{1-r}k_r$, where $k_r$ is a constant. However, in applications they assumed that $\hat w$ was a Type A estimate, and collected terms. It was not until [14] that explicit results were given for a univariate Type A estimate. Major advances were made in [3], which gave explicit results for the terms in the Edgeworth expansion of a Type A or B estimate using Bell polynomials, as outlined in Section 1. It also allowed for expansions about asymptotically normal random variables. The advantage of this approach in greatly reducing the number of terms in each $P_r(x)$ was illustrated in [7].
For univariate estimates, Cornish and Fisher (1937) [44] also showed how to invert the Edgeworth expansion to obtain an expansion for the distribution quantiles. This was extended to Type A estimates in [14]. For extensions to transformations of multivariate estimates, like $t(\hat w)=(\hat w-w)'V^{-1}(\hat w-w)$, see [4,45,46]. An application to the amplitude and phase of the mean of a complex sample is given in [47].
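To fix ideas, here is the classical two-term Cornish–Fisher inversion in the simplest setting: a standardised statistic with skewness $\gamma_1$ and excess kurtosis $\gamma_2$. The function name and the numerical example are illustrative only; the Type A machinery generalises this far beyond the sample mean.

```python
# Two-term Cornish-Fisher quantile: correct a normal quantile z_q for the
# skewness and excess kurtosis of the standardised statistic.
from statistics import NormalDist

def cornish_fisher_quantile(q, gamma1, gamma2=0.0):
    z = NormalDist().inv_cdf(q)
    return (z + (z**2 - 1) * gamma1 / 6
              + (z**3 - 3*z) * gamma2 / 24
              - (2*z**3 - 5*z) * gamma1**2 / 36)

# standardised mean of n = 25 Exp(1) variables: gamma1 = 2/5, gamma2 = 6/25
print(cornish_fisher_quantile(0.95, 2/5, 6/25))   # ~1.75, vs z_0.95 = 1.645
```

The corrected 95% point sits visibly to the right of the normal quantile, reflecting the positive skewness of the exponential mean.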
Turning now to smooth functions of a Type A estimate, the first univariate results were given by [2] for parametric problems and [22] for nonparametric problems. These built on a deep result of [19], which explains why, if $\hat w$ is a Type A (or B) estimate of $w$, then a smooth function of $\hat w$, say $t(\hat w)$, is a Type A (or B) estimate of $t(w)$.
The extension from a vector to a matrix estimate is just a matter of relabelling: a single sum becomes a double sum. The first examples of this we know of are in [17,24]. The extension to a complex scalar, vector, or matrix $w$ was given in these same papers. Ref. [24] applied it to the multi-tone problem in electrical engineering, and ref. [17] to channel capacity problems where $\hat w$ is a weighted mean of complex matrix random variables and $n$ is no longer a sample size, but the number of transmitters or receivers.
A different type of extension can be obtained by identifying a sample mean $\hat w=\bar X$ from a distribution $F(x)$ with its empirical distribution $F_n(x)$, and $t(w)$ with $T(F)$, a smooth functional of $F(x)$, such as the bivariate correlation. $T(F_n)$ is a Type A estimate of $T(F)$, and its cumulant coefficients can be read off those of $t(\hat w)$. In this way one obtains the Edgeworth expansion for $n^{1/2}(T(F_n)-T(F))$. See [15] and its references for one or more weighted samples. An extension to samples from a linear process was given by [18].
A caveat on the use of an Edgeworth expansion is that including more terms makes it more inaccurate in the tails. This is where the tilted expansions, also known as saddlepoint or small-sample expansions, become essential. Results for the density of $Y_{nw}$ for a sample mean were given in Section 5 of [12] and Section 6 of [13]. For a discussion and more references on tilting, mainly for a sample mean, see [11]. Ref. [3] shows how the cumulant coefficients given in this paper can be used to obtain the tilted expansion for the distribution and density of any Type A estimate.
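As a concrete univariate illustration of tilting (a textbook special case, not the paper's general construction), the saddlepoint density for the mean of n Exp(1) variables uses the cgf $K(t)=-\log(1-t)$; for the gamma family the approximation is exact up to a Stirling-type normalising constant:

```python
# Saddlepoint (tilted) density for the mean of n Exp(1) variables, compared
# with the exact Gamma(n, 1/n) density of that mean.
import math

def saddlepoint_density_exp_mean(x, n):
    s = 1 - 1/x                          # saddlepoint: solves K'(s) = x
    K, K2 = -math.log(1 - s), 1/(1 - s)**2
    return math.sqrt(n / (2*math.pi*K2)) * math.exp(n*(K - s*x))

def exact_density_exp_mean(x, n):        # mean of n Exp(1) is Gamma(n, 1/n)
    return n**n * x**(n-1) * math.exp(-n*x) / math.gamma(n)

x, n = 1.3, 10
print(saddlepoint_density_exp_mean(x, n), exact_density_exp_mean(x, n))
```

For n = 10 the ratio of the two densities is a constant close to 1 (within about one percent), uniformly in x, which is why tilted expansions retain their accuracy in the tails where the untilted series can go negative.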

8. Conclusions

Let $\hat w$ represent a Type A estimate of an unknown parameter $w$ belonging to $R^p$. Its cumulant coefficients, as defined by (1), serve as the foundational elements for the Edgeworth expansion (2), a series in $n^{-1/2}$, where $n$ is typically the sample size.
The necessary coefficients for the rth term, $P_r(x)$, are provided in (6) and (7). Consider a smooth function $t(\hat w)$ mapping to $R^q$, which in turn is regarded as a Type A estimate of $t(w)$ in $R^q$. This paper presents the cumulant coefficients of $t(\hat w)$ in terms of those of $\hat w$ and the derivatives of $t$. Substituting these coefficients into (2) yields the Edgeworth expansion of $n^{1/2}(t(\hat w)-t(w))$ up to $n^{-5/2}$.
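In the simplest special case this gain is easy to quantify. The sketch below (an illustrative univariate check, not the paper's multivariate machinery) compares plain asymptotic normality with the one-term Edgeworth correction $-\phi(x)\gamma_1(x^2-1)/(6\sqrt n)$ for the standardised mean of n Exp(1) variables, whose exact law is a rescaled Gamma:

```python
# One extra Edgeworth term versus plain asymptotic normality for the
# standardised mean of n Exp(1) variables (skewness gamma1 = 2).
import math
from statistics import NormalDist

def erlang_cdf(t, n):                  # P(Gamma(n, 1) <= t), integer n
    return 1 - math.exp(-t) * sum(t**j / math.factorial(j) for j in range(n))

def edgeworth_cdf(x, n, gamma1=2.0):   # Phi(x) plus the n^{-1/2} term
    nd = NormalDist()
    return nd.cdf(x) - nd.pdf(x) * gamma1 * (x**2 - 1) / (6 * math.sqrt(n))

n, x = 20, 1.5
exact = erlang_cdf(n + math.sqrt(n) * x, n)    # P(sqrt(n)(Xbar - 1) <= x)
print(abs(NormalDist().cdf(x) - exact),        # error of the normal limit
      abs(edgeworth_cdf(x, n) - exact))        # error with one more term
```

At n = 20 and x = 1.5, the single extra term reduces the error of the normal approximation several-fold.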
The tilted Edgeworth expansion for w ^ , crucial for tail accuracy, was previously delineated in [3] in terms of its cumulant coefficients. By incorporating those of t ( w ^ ) as presented here, we derive the tilted Edgeworth expansion for t ( w ^ ) .
In many practical statistical estimation problems, simulations serve as a favored method for approximating distributions. However, their limitation lies in their inability to comprehensively represent the entire parametric landscape.
We have showcased some applications in electrical engineering. For instance, ref. [17] offered numerical comparisons of the initial three approximations to channel capacity for multiple arrays with multiple frequencies and delay spread. Given p = 1 , this permitted an expansion for the percentile. There exist myriad other potential applications across electrical engineering and allied fields.
Lastly, we outline potential future research avenues. Chain rules applied to $t(\hat w)$ can yield the cumulant coefficients of its Studentised form, paving the way for expansions in the coverage probability of confidence regions and enhancements in their accuracy. These coefficients find applications in bias reduction, Bayesian inference, confidence regions, and power analysis. While the Edgeworth expansion can sometimes yield negative values in distribution tails, tilted expansions circumvent this issue. Alternatively, one can select $y$ in $R^p$ such that $\Delta_{nx}=\mathrm{Prob}(Y_{nw}\le x+n^{-1/2}y)-\Phi_V(x)$ is $O(n^{-1})$. For $p>1$ the diversity of $y$ choices allows for potential reductions to $\Delta_{nx}=O(n^{-3/2})$ or smaller. Replacing $n^{-1/2}y$ by, say, $y(n)=n^{-1/2}y_0+n^{-1}y_1$ further expands the range of options.

Funding

This research received no external funding.

Data Availability Statement

Data are contained within the article and Appendix A.

Acknowledgments

I wish to thank the reviewers for their careful reading of the manuscript. Their suggestions have greatly improved the paper.

Conflicts of Interest

The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Appendix A. Some Comments on the References

Here, we give some comments and corrections to some of our papers.
Withers (1982) [2]: To the expression for a 22 on p59 add c 12 + 2 c 16 where
c 12 = I 2 12 01 = t i t j k 1 , k i j k 1 k , c 16 = I 11 12 00 = t i k 1 i j t j k k 1 k .
This correction does not affect applications in which $\hat\omega$ is unbiased, as in [2,23].
In the expression on p60 for ( a 22 ) 2 , I 31 23 22 should be c 36 = I 31 23 00 .
On p61, 4 lines before Table 1, replace n / 2 ) r 1 by n / 2 ) 1 r .
On p67, add to K 2 a b , t i a t j b k 1 , k i j k 1 k . For r = 3 , 4 see Section 4.
On p68 in (A3), replace r + 2 r by r + 2 2 . Changing to the simpler notation of [15], denote the expressions for I 2 2 0 , . . . , I 301 222 000 and I 3 22 01 , . . . , I 31 222 001 given on pages 58–59 by σ 2 = V = V ( w ) = t i k 1 i j t j , c 01 , c 02 , c 21 , and c 23 , c 11 , c 15 , c 19 , c 1 , 10 , c 31 , c 36 , c 3 , 10 , c 3 , 11 , c 22 , c 12 , c 14 , c 17 , c 32 , c 34 , c 35 , c 38 , c 39 . So, the expressions on pages 59–60 become
a 11 = c 01 + c 02 / 2 , a 32 = c 21 + 3 c 23 , a 22 = c 11 + c 15 + c 19 / 2 + c 1 , 10 + c 12 + 2 c 16 by Corollay 3 , a 43 = c 31 + 12 c 36 + 12 c 3 , 10 + 4 c 3 , 11 , ( a 11 ) 1 = σ 1 ( c 01 + c 02 / 2 ) σ 3 ( c 22 / 2 + c 23 ) , ( a 32 ) 2 = σ 3 ( c 21 3 c 22 3 c 23 ) , ( a 22 ) 2 = i = 1 3 V i A i , ( a 43 ) 4 = i = 2 3 V i B i for A 1 = c 11 c 14 / 2 + c 15 2 c 17 c 19 / 2 , A 2 = ( c 01 + c 02 / 2 ) ( c 22 + 2 c 23 ) c 32 c 34 + c 35 2 c 36 4 c 38 + 2 c 39 2 c 3 , 10 2 c 3 , 11 , A 3 = 7 ( c 22 + 2 c 23 ) 2 / 4 , B 2 = c 31 6 c 32 6 c 34 + 3 c 35 24 c 38 12 c 3 , 10 8 c 3 , 11 , B 3 = 6 ( c 22 + 2 c 23 ) ( c 21 + 3 c 22 + 3 c 23 ) .
We now illustrate how the results on p.60 were obtained. Let c r s denote c r s when t ( w ^ ) is replaced by its Studentised form t ( 0 ) ( w ^ ) . Then
( a 22 ) 2 = c 11 + c 15 + c 19 / 2 + c 1 , 10 + c 12 + 2 c 16 ,
The first few derivatives of t ( 0 ) ( ω ^ ) at w, and of V ( w ) , are
t 0 . i = V 1 / 2 t i , t 0 . i j = V 1 / 2 t i j V 3 / 2 ( t i V j + t j V i ) / 2 , t 0 . i j k = V 1 / 2 t i j k V 3 / 2 i j k 3 ( t i j V k + V i j t k ) / 2 + 3 V 5 / 2 i j k 3 t i V j V k / 4 , V i = 2 t i a k 1 a b t b + k 1 , i a b t a t b , and V i j = 2 t i j a k 1 a b t b + 2 t i a k 1 a b t j b + 2 i j 2 t i a k 1 , j a b t b + k 1 , i j a b t a t b , where i j 2 a i b j = a i b j + a j b i , i j k 3 a i j b k = a i j b k + a i k b j + a j k b i f o r a i j = a j i . S o c 11 = V 1 c 11 , c 15 = V 1 c 15 V 2 M 1 , c 19 = V 1 c 19 2 V 2 M 4 + V 3 ( V M 2 + M 3 2 ) / 2 , c 1 , 10 = V 1 c 1 , 10 V 2 ( M 3 c 02 + 2 M 4 + V M 5 + 2 M 6 ) / 2 + 3 V 3 ( V M 2 + 2 M 3 2 ) / 4 , c 12 = V 1 c 12 , c 16 = V 1 c 16 V 2 ( V M 7 + c 01 M 3 ) / 2 , where M 1 = t i t j k 2 i j k V k = c 32 + 2 c 36 , M 2 = V i k 1 i j V j = c 35 + 4 c 39 + 4 c 3 , 10 , M 3 = t i k 1 i j V j = c 22 + 2 c 23 = c ˜ 22 say , M 4 = t i k 1 i j t j k k k l V l = c 39 + 2 c 3 , 10 , M 5 = k 1 i j V i j = 2 c 1 , 10 + 2 c 19 + c 14 + 4 c 17 , M 6 = t i k 1 i j V j k k 1 k l t l = 2 c 3 , 10 + 2 c 3 , 11 + c 34 + 4 c 38 , M 7 = k 1 i V i = c 12 + 2 c 16 c 19 = V 1 c 19 + 2 V 2 ( 4 c 35 c 3 , 10 ) + V 3 c ˜ 22 2 / 2 , c 1 , 10 = i = 1 3 V i b i f o r b 1 = c 14 / 2 2 c 17 c 19 , b 3 = 3 c ˜ 22 2 / 2 , b 2 = c ˜ 22 c 02 / 2 c 34 + 3 c 35 / 4 4 c 38 + 2 c 39 c 3 , 10 2 c 3 , 11 .
Substitution into (A1) yields ( a 22 ) 2 . The other a r i = ( a r i ) r given on p.60 are obtained similarly.
Withers (1984) [14]:
p393: In the 5-line expression for f 4 ( x , L ) , replace ( x 3 x ) / 30 by ( x 3 3 x ) / 30 , and + 2473 ) / 7776 by + 2473 x ) / 7776 .
p394: In (3.4), replace ( x 2 1 ) / 2 , by ( x 2 1 ) / 6 ,
p394: In line 2 of Section 4, ‘of Section 2’ should be ‘of Section 3’.
The following corrigendum for a printer’s error appeared in Withers, C.S. J. R. Stat. Soc. B, 1986, 48, p258:
The expression l 3 4 ( 252 x 5 1688 x 3 + 1511 x ) / 7776 should be added to the last line on p393.
That also gives g 5 ( y ) and g 6 ( y ) for the last line on p393.
Withers (1987) [21]:
p2371: (2.4): i = 1 n i C i need not converge. We only require an asymptotic expansion. The same is true for (3.2) p2375.
p2371, 3rd to last paragraph: Replace ‘Appendix C, which also’ with ‘Appendix D. Appendix C’
p2372, Example 2.2, line 2: Replace $t_{2j}(\theta)$ with $t^{(2j)}(\theta)$, the (2j)th derivative of $t(\theta)$. In line 3 and in Example 3.1, $(y)_j=y(y-1)\cdots(y-j+1)$.
p2377 line 2: Replace (1.2) with (2.4)
p2377 line 3: Replace / N | with / N
p2377 line 9: Replace ‘Section 2’ with ‘Section 3’ Since E 1 = 0 , C 1 of Example 3.2 p2376 is unaffected.
p2378: These expressions for $C_j$ are correct if $\hat\theta$ is unbiased. In that case, the terms on p2378 with a 1 in the top line are 0, so that $C_j$ has only $m_j$ terms, where $m_1=1$, $m_2=3$, $m_3=6$, $m_4=12$. However, if $\hat\theta$ is biased, then these expressions for $C_j$ did not allow for contributions from replacing $\theta$ by $E\hat\theta$ in the cumulant coefficients $k_j^{a_1\cdots a_r}$ of (3.2). These are corrected in Withers, C.S. and Nadarajah, S. (Submitted), Bias-reduced estimates for parametric problems.
p2379 Appendix D: Add at start: For p = 1 see (3.4) of
Withers, C.S. and Nadarajah, S., Journal of Multivariate Analysis, 2013, 118, 138–147.
Withers (1988) [23]:
p729: In the 10th line from the bottom, replace “their range $1\le p$” with “their range $1\le k$”.
p732 line 9: T x i x j a i a j should be T x 1 x 2 a 1 a 2 .
p734: In the expression for h 1 in the 5th to last line, replace H e 2 with H e 2 / 6 .
p737: In line 11, “Sections 1 and 2 of Withers (1983a)” should read “Sections 1 and 2 of Withers (1983b)”.
p741: In the 4th equation from the bottom, at the end of the line, replace $[1^j]T_2^j$, with $[1^j]^{-1}T_2^j$,
Withers and Nadarajah (2008) [15]:
p743 para 2, line 4: Replace ‘about zero.’ with ‘about zero when G puts mass 1 at x.’
p754, p756: Replace w i n a with w i n a . Different samples can have different weights.
p754, 2nd-to-last line: The first term on RHS, c 11 / 2 , should be c 11 .
p755, line 6: There is a typesetting error in the first of the 2 lines for $a_{220}$. Replace the first line with
$a_{220}=\sigma^{-2}(c_{11}-c_{12}-c_{14}/2+c_{15}-2c_{17}-c_{19}/2)-\sigma^{-4}[(c_{01}+c_{02}/2)(c_{22}+2c_{23})+c_{32}$
p756: The 3rd and 4th lines after (8), should be
[ 1 r ] a = T F   x a r d F a ( x ) , [ 1 , 12 , 2 r ] a 1 a 2 = T F   x 1 a 1 T F   x 1 x 2 a 1 a 2 T F   x 2 a 2 r d F a 1 ( x 1 ) d F a 2 ( x 2 ) ,
Withers and Nadarajah (2009) [43]:
p272. Line 3: Convergence of S ( t ) is not needed, since B r k is a finite sum.
$\kappa_r$ on the LHS of (1.1) should be $k_r$.
p273 last paragraph: Also, f ( x ) / ϕ ( x ) is only meaningful if X is dimension-free.
p275. (2.8) is correct, but since B 1 = B 2 = 0 , (2.8) can also be written as
f 2 / ϕ = 1 + k = 3 B k 2 / k ! .
In the 5th line of Section 3, insert after B r k ( α ) , ‘at α 1 = α 2 = 0 ’.
In the 5th line of Section 3, insert after $B_{rk}(\alpha)$, ‘at $\alpha_1=\alpha_2=0$’.
The first line of (3.1) should read
$B_r=(B_{rk}\,\epsilon^{r-2k}:1\le k\le r/3)$
(3.2) can be written $K_s=s-2[s/3]$, where $[x]$ is the integral part of $x$.
p276: In the expression for B 10 , B 10 , 18 should be B 10 , 1 .
p277: In the 2nd-to-last line, b 64 should be B 61 .
p278: In the expression for b 4 , the first term should be doubled. In the expression for b 5 , b 82 should be B 82 .
Withers and Nadarajah (2010a) [16]:
p3: In the 5th- and 6th-to-last lines, replace $=n^{-2}K_{ab}+O(n^{-2})$ with $=n^{-1}K_{ab}+O(n^{-2})$
p5: 2 lines above Theorem 2.2, replace “third moments” with “third central moments”
p7, lines 2–3: delete “and its Studentised version”
p7, lines 3–4: delete “or $n^{1/2}(\hat\beta_a-\beta_a)\hat J_{aa}^{-1/2}$”
p7, line 7–10: delete from “So, a one-sided” to “by O ( n 1 / 2 ) ;
p9: Move “Set $\phi=m=p+1$.” on the last 2 lines of p9 and 1st line of p10 to just before “Set” on p9 line 9.
p10, lines 14–15: replace “ g i j where” by “ g i j .” and move the rest of the sentence, “ g i j = g N . i j ” to the line after (6.1) p9, preceded by the word “Set”
Withers and Nadarajah (2010b) [3]:
p1129, line 7: replace $b_k^{i_1\ldots i_r}(Y_n)$ by $b_k^{i_1\ldots i_r}(Y)$
To the 9th-to-last line we can add
$\tilde P_3(t,B)=e_3(t)+e_1(t)e_2(t)+e_1(t)^3/6,$
From p1130 line 6 to the end of Section 5: Replace s with p, the dimension of θ .
So that p1130 line 7 is clearer, we replace line 8 with
for $p^{(n)(k)}(y)=\partial_{k_1}\cdots\partial_{k_p}\,p^{(n)}(y)$, $\partial_k=\partial/\partial y_k$,
p1130 line 9: Replace H ν + k ( y ) with H ν + k ( y , V )
p1130, 5th- and 6th-to-last lines: for example, $K_j^{i_1\ldots i_r}=k_j^{(i_1\ldots i_r)}(t)=\partial_{i_1}\cdots\partial_{i_r}k_j(t)$, where $\partial_i=\partial/\partial t_i$.
p1132: A note on Corollary 3.2. For the duality of I ( x ) and k 0 ( t ) see p176 of McCullagh, P., Tensor methods in statistics. Chapman and Hall, London, 1987.
p1133: In line 14, replace H ν + λ ( θ , V t ) with H ν + λ ( 0 , V t )
Withers and Nadarajah (2014a) [5]:
p81: In (2.14), replace J r ( x ) and J r ( x ) with J r ( x ) / r ! and J r ( x ) / r ! .
p81: The 2nd line after (2.15) should read
J r ( x ) = V y ¯ V x ¯ H ¯ r ( y ¯ , V ¯ ) ϕ V ¯ ( y ¯ ) d y ¯
The next line is correct:
J r ( x ) = V y ¯ V x ¯ ( y ¯ ) r ϕ V ¯ ( y ¯ ) d y ¯ .
p82: In (2.20), replace J r ( y 1 , y 2 ) with J r ( y 1 , y 2 ) / r ! .
p85: In Withers, C.S. and Nadarajah, S. (2009), replace ‘via’ with ‘in terms of’.
Withers and Nadarajah (2014b) [7]:
p676. Multiply RHS of (1.13) by n 1 / 2 . That is, replace it by
Y J K θ = ( n / s 2 K θ ) 1 / 2 ( θ ^ s 1 J θ ) , J 0 , K 1 .
p699: In the editing of the original paper of 64 pages down to 21 pages, some details had to be removed. Here, are some more details for Theorem 1.2 after (1.24)
1 = 0 if J 1 , 2 = A ¯ 43 H 2 if K 2 , 3 = A ¯ 33 H 2 + A ¯ 54 H 4 if J 2 , 4 = A ¯ 44 H 3 + A ¯ 65 H 5 if K 3 , 5 = A ¯ 34 H 2 + A ¯ 55 H 4 + A ¯ 76 H 6 if J 3 , 6 = A ¯ 45 H 3 + A ¯ 66 H 5 + A ¯ 87 H 7 if K 4 . r e = 0 if r 3 for e = h , f , g , 4 e = [ 4 2 ] 0 e ( 4 2 ) , 5 e = [ 45 ] 0 e ( 45 ) + [ 34 ] 1 e ( 34 ) , 6 e = { [ π ] 0 e ( π ) : π = 5 2 , 46 , 4 3 } + { [ π ] 1 e ( π ) : π = 4 2 , 35 } + [ 3 2 ] 2 e ( 3 2 ) ,
where the e ( π ) , [ π ] i needed for e r ( x ) , 1 r 6 , are as follows.
h ( i j ) = H i + j + 1 so   that   h ( 1 i 1 2 i 2 ) = H 1 i 1 + 2 i 2 + 1 . For   r = 4 : [ 4 2 ] 0 = A ¯ 43 2 / 2 ! , h ( 4 2 ) = H 7 , f ( 4 2 ) = H 7 H 1 H 3 2 , g ( 4 2 ) = H 7 2 H 3 H 4 + H 1 H 3 2 . For   r = 5 : [ 45 ] 0 = A ¯ 43 A ¯ 54 , h ( 45 ) = H 8 , f ( 45 ) = H 8 H 1 H 3 H 4 , g ( 45 ) = H 8 H 3 H 5 H 4 2 + H 1 H 3 H 4 , [ 34 ] 1 = A ¯ 33 A ¯ 43 , h ( 34 ) = H 6 , f ( 34 ) = H 6 H 1 H 2 H 3 , g ( 34 ) = H 6 H 2 H 4 H 3 2 + H 1 H 2 H 3 . For   r = 6 : [ 5 2 ] 0 = A ¯ 54 2 / 2 ! , h ( 5 2 ) = H 9 , f ( 5 2 ) = H 9 H 1 H 4 2 , g ( 5 2 ) = H 9 2 H 4 H 5 + H 1 H 4 2 , [ 46 ] 0 = A ¯ 43 A ¯ 65 , h ( 46 ) = H 9 , f ( 46 ) = H 9 H 1 H 3 H 5 , g ( 46 ) = H 9 H 3 H 6 H 4 H 5 + H 1 H 3 H 5 , [ 4 3 ] 0 = A ¯ 43 3 / 3 ! , h ( 4 3 ) = H 11 , f ( 4 3 ) = H 11 3 H 1 H 3 H 7 H 2 H 3 3 + 3 H 1 2 H 3 3 , g ( 4 3 ) = H 11 3 H 3 H 8 3 H 4 H 7 + 3 H 1 H 3 H 7 + 3 H 3 2 H 5 + 6 H 3 H 4 2 9 H 1 H 3 2 H 4 H 2 H 3 3 + 3 H 1 2 H 3 3 , [ 4 2 ] 1 = A ¯ 43 A ¯ 44 , [ 35 ] 1 = A ¯ 33 A ¯ 54 , f ( 35 ) = H 7 H 1 H 2 H 4 , g ( 35 ) = H 7 H 3 H 4 H 2 H 5 , [ 3 2 ] 2 = A ¯ 33 2 / 2 ! , f ( 3 2 ) = H 5 H 1 H 2 2 , g ( 3 2 ) = H 5 2 H 2 H 3 + H 1 H 2 2 .
p702 Section 2: In the 3rd equation of Theorem 1, ln Γ ( μ ) should be ln Γ ( m ) . p704: Disregard Table 3.
Withers and Nadarajah (2015) [8]:
In (22) and the formulas for $d_2,\ldots,d_6$ that follow, replace $d_r$ by
$\bar d_r=r\,d_r=c_r(x)/(r-1)!.$
As stated, this gives $c_r=c_r(x)=r!\,d_r$. For example,
$c_2=a_1$, $c_3=2a_1^2+a_2$, $c_4=3!\,a_1^3+7a_1a_2+a_3$, $c_5=4!\,a_1^4+46a_1^2a_2+11a_1a_3+7a_2^2+a_4$, $c_6=5!\,a_1^5+326a_1^3a_2+101a_1^2a_3+127a_1a_2^2+16a_1a_4+25a_2a_3+a_5$.
In the first reference, [1], replace J.J. Alfredo with J.A. Jimenez.

References

  1. Withers, C.S.; Nadarajah, S. Chain rules for multivariate cumulant coefficients. Stat 2022, 11, e451. [Google Scholar] [CrossRef]
  2. Withers, C.S. The distribution and quantiles of a function of parameter estimates. Ann. Inst. Stat. Math. Ser. A 1982, 34, 55–68. [Google Scholar] [CrossRef]
  3. Withers, C.S.; Nadarajah, S. Tilted Edgeworth expansions for asymptotically normal vectors. Ann. Inst. Stat. Math. 2010, 62, 1113–1142. [Google Scholar] [CrossRef]
  4. Withers, C.S.; Nadarajah, S. Improved confidence regions based on Edgeworth expansions. Comput. Stat. Data Anal. 2012, 56, 4366–4380. [Google Scholar] [CrossRef]
  5. Withers, C.S.; Nadarajah, S. The dual multivariate Charlier and Edgeworth expansions. Stat. Probab. Lett. 2014, 87, 76–85. [Google Scholar] [CrossRef]
  6. Comtet, L. Advanced Combinatorics; Reidel: Dordrecht, The Netherlands, 1974. [Google Scholar]
  7. Withers, C.S.; Nadarajah, S. Expansions about the gamma for the distribution and quantiles of a standard estimate. Methodol. Comput. Appl. Probab. 2014, 16, 693–713. [Google Scholar] [CrossRef]
  8. Withers, C.S.; Nadarajah, S. Edgeworth-Cornish-Fisher-Hill-Davis expansions for normal and non-normal limits via Bell polynomials. Stochastics Int. J. Probab. Stoch. Process. 2015, 87, 794–805. [Google Scholar] [CrossRef]
  9. Bhattacharya, R.N.; Rao, R.R. Normal Approximation and Asymptotic Expansions; Wiley: New York, NY, USA, 1976. [Google Scholar]
  10. Cai, T. One-sided confidence intervals in discrete distributions. J. Stat. Plan. Inference 2005, 131, 63–88. [Google Scholar]
  11. Barndorff-Nielsen, O.E.; Cox, D.R. Asymptotic Techniques for Use in Statistics; Chapman & Hall: London, UK, 1989. [Google Scholar]
  12. Daniels, H.E. Saddlepoint approximations for estimating equations. Biometrika 1983, 70, 89–96. [Google Scholar] [CrossRef]
  13. Daniels, H.E. Tail probability expansions. Intern. Stat. Rev. 1987, 55, 37–48. [Google Scholar] [CrossRef]
  14. Withers, C.S. Asymptotic expansions for distributions and quantiles with power series cumulants. J. R. Stat. Soc. B 1984, 46, 389–396. [Google Scholar] [CrossRef]
  15. Withers, C.S.; Nadarajah, S. Edgeworth expansions for functions of weighted empirical distributions with applications to nonparametric confidence intervals. J. Nonparametr. Stat. 2008, 20, 751–768. [Google Scholar] [CrossRef]
  16. Withers, C.S.; Nadarajah, S. The bias and skewness of M-estimators in regression. Electron. J. Stat. 2010, 4, 1–14. [Google Scholar] [CrossRef]
  17. Withers, C.S.; Nadarajah, S. Channel capacity for MIMO systems with multiple frequencies and delay spread. Appl. Math. Inf. Sci. 2011, 5, 480–483. [Google Scholar]
  18. Withers, C.S.; Nadarajah, S. Cornish-Fisher expansions for sample autocovariances and other functions of sample moments of linear processes. Braz. J. Probab. Stat. 2012, 26, 149–166. [Google Scholar] [CrossRef]
  19. James, G.S.; Mayne, A.J. Cumulants of functions of random variables. Sankhya A 1962, 24, 47–54. [Google Scholar]
  20. Withers, C.S. Accurate confidence intervals when nuisance parameters are present. Commun. Stat.—Theory Methods 1989, 18, 4229–4259. [Google Scholar] [CrossRef]
  21. Withers, C.S. Bias reduction by Taylor series. Commun. Stat.—Theory Methods 1987, 16, 2369–2383. [Google Scholar] [CrossRef]
  22. Withers, C.S. Expansions for the distribution and quantiles of a regular functional of the empirical distribution with applications to nonparametric confidence intervals. Ann. Stat. 1983, 11, 577–587. [Google Scholar] [CrossRef]
  23. Withers, C.S. Nonparametric confidence intervals for functions of several distributions. Ann. Inst. Stat. Math. Part A 1988, 40, 727–746. [Google Scholar] [CrossRef]
  24. Withers, C.S.; Nadarajah, S. Expansions for the distribution of M-estimates with applications to the multi-tone problem. ESAIM Probab. Stat. 2011, 15, 139–167. [Google Scholar] [CrossRef]
  25. Kakizawa, Y. Some integrals involving multivariate Hermite polynomials: Application to evaluating higher-order local powers. Stat. Prob. Lett. 2016, 110, 162–168. [Google Scholar] [CrossRef]
  26. Hall, P. The Bootstrap and Edgeworth Expansion; Springer: New York, NY, USA, 1992. [Google Scholar]
  27. Yang, J.J.; Trucco, E.M.; Buu, A. A hybrid method of the sequential Monte Carlo and the Edgeworth expansion for computation of very small p-values in permutation tests. Stat. Methods Med. Res. 2019, 28, 2937–2951. [Google Scholar] [CrossRef] [PubMed]
  28. Fisher, R.A.; Cornish, E.A. The percentile points of distributions having known cumulants. Technometrics 1960, 2, 209–225. [Google Scholar] [CrossRef]
  29. Chebyshev, P. Sur deux théorèmes relatifs aux probabilités. Acta Math. 1890, 14, 305–315. [Google Scholar]
  30. Edgeworth, F.Y. The asymmetrical probability curve. Proc. R. Soc. Lond. 1894, 56, 271–272. [Google Scholar] [CrossRef]
  31. Charlier, C.V.L. Applications de la Theorie des Probabilites a l’Astronomie; Tome II, Les Applications de la Theorie des Probabilites Aux Sciences Mathematiques et Aux Science Physique; Borel, E., Ed.; Gauthier-Villars: Paris, France, 1931. [Google Scholar]
  32. Cramer, H. On the composition of elementary errors. Skand Aktuarietidskr 1928, 11, 13–74. [Google Scholar]
  33. Ibragimov, I.A. On the accuracy of approximation by the normal distribution of distribution functions of sums of independent random variables. Teor. Verojatnost. Primenen 1966, 11, 632–655. [Google Scholar]
  34. Efron, B. Bootstrap methods: Another look at the jackknife. Ann. Stat. 1979, 7, 1–26. [Google Scholar]
  35. Stuart, A.; Ord, K. Kendall’s Advanced Theory of Statistics, 5th ed.; Griffin: London, UK, 1987; Volume 1. [Google Scholar]
  36. Bobkov, S.G. Berry-Esseen bounds and Edgeworth expansions in the central limit theorem for transport distances. Prob. Theory Relat. Fields 2018, 170, 229–262. [Google Scholar] [CrossRef]
  37. Velasco, C. Edgeworth Expansions for Spectral Density Estimates and Studentized Sample Mean; Econometrics Discussion Paper; Suntory and Toyota International Centres for Economics and Related Discipline: London, UK, 2000. [Google Scholar]
  38. Ciginas, A.; Pumputis, D. Calibrated Edgeworth expansions of finite population L-statistics. Math. Popul. Stud. 2020, 27, 59–80. [Google Scholar] [CrossRef]
  39. Zhang, Y.; Xia, D. Edgeworth expansions for network moments. arXiv 2021. [Google Scholar] [CrossRef]
  40. Jirak, M. Edgeworth expansions for volatility models. Electron. J. Probab. 2023, 28, 171. [Google Scholar] [CrossRef]
  41. Gotze, F.; Hipp, C. Asymptotic expansions for sums of weakly dependent random vectors. Z. Wahrsch. Verw. Gebiete 1983, 64, 211–239. [Google Scholar] [CrossRef]
  42. Ibragimov, I.A.; Linnik, Y.V. Independent and Stationary Sequences of Random Variables; Wolters-Noordhoff: Groningen, The Netherlands, 1971. [Google Scholar]
  43. Withers, C.S.; Nadarajah, S. Charlier and Edgeworth expansions for distributions and densities in terms of Bell polynomials. Probab. Math. Stat. 2009, 29, 271–280. [Google Scholar]
  44. Cornish, E.A.; Fisher, R.A. Moments and cumulants in the specification of distributions. Rev. l’Inst. Int. Stat. 1937, 5, 307–322. [Google Scholar] [CrossRef]
  45. Hill, G.W.; Davis, A.W. Generalised asymptotic expansions of Cornish-Fisher type. Ann. Math. Stat. 1968, 39, 1264–1273. [Google Scholar] [CrossRef]
  46. Withers, C.S.; Nadarajah, S. Transformations of multivariate Edgeworth-type expansions. Stat. Methodol. 2012, 9, 423–439. [Google Scholar] [CrossRef]
  47. Withers, C.S.; Nadarajah, S. The distribution of the amplitude and phase of the mean of a sample of complex random variables. J. Multivar. Anal. 2013, 113, 128–152. [Google Scholar] [CrossRef]