Article

A Posterior p-Value for Homogeneity Testing of the Three-Sample Problem

Yufan Wang and Xingzhong Xu
1 School of Mathematics and Statistics, Beijing Institute of Technology, Beijing 100081, China
2 School of Mathematical Science, Shenzhen University, Shenzhen 518060, China
* Author to whom correspondence should be addressed.
Mathematics 2023, 11(18), 3849; https://doi.org/10.3390/math11183849
Submission received: 15 August 2023 / Revised: 6 September 2023 / Accepted: 6 September 2023 / Published: 8 September 2023

Abstract

In this paper, we study a special kind of finite mixture model. The sample drawn from the model consists of three parts. The first two parts are drawn from specified density functions, $f_1$ and $f_2$, while the third is drawn from the mixture of the two. A problem of interest is whether the two functions, $f_1$ and $f_2$, are the same. To test this hypothesis, we first define the regular location-scale family of distributions and assume that $f_1$ and $f_2$ are regular density functions. The hypothesis then transforms into the equality of the location and scale parameters, respectively. To utilize the information in the sample, we use Bayes' theorem to obtain the posterior distribution and give a sampling method. We then propose a posterior p-value to test the hypothesis. The simulation studies show that our posterior p-value largely improves the power in both the normal and logistic cases and controls the Type-I error well. A real halibut dataset is used to illustrate the validity of our method.

1. Introduction

In this paper, we focus on the model proposed by Hosmer [1], which was used to study halibut data. There are two sources of halibut data. One is research cruises, where the sex, age and length of the halibut are available; the other is the commercial catch, where only age and length can be obtained, since the fish have been cleaned before the boats return to port. The length distribution of an age class of halibut is closely approximated by a mixture of two normal distributions:
$$X_i \overset{iid}{\sim} f_1(y), \quad i = 1, \dots, n_1; \qquad Y_j \overset{iid}{\sim} f_2(y), \quad j = 1, \dots, n_2; \qquad Z_k \overset{iid}{\sim} \lambda f_1(y) + (1 - \lambda) f_2(y), \quad k = 1, \dots, n_3, \qquad (1)$$
where $f_1$ and $f_2$ are the probability density functions of the two normal distributions and $\lambda$ is the proportion of male halibut in the commercial catches. Hosmer [1] estimated the parameters of the two normal distributions using an iterative maximum likelihood method. Murray and Titterington [2] generalized the problem to higher dimensions and summarized a variety of possible techniques, such as maximum likelihood estimation and Bayesian analysis. Anderson [3] proposed a semiparametric modeling assumption known as the exponential tilt mixture model, in which the proportion is estimated by a general method based on direct estimation of the likelihood ratio. This semiparametric model was further studied by Qin [4], who extended Owen's [5] empirical likelihood to the semiparametric model and gave the asymptotic variance formula for the maximum semiparametric likelihood estimator. However, empirical likelihood may suffer from computational difficulties. Therefore, Zou et al. [6] proposed the use of partial likelihood and showed that the asymptotic null distribution of the log partial likelihood ratio is chi-square. To estimate the mixing proportion, an EM algorithm was given by Zhang [7]; the sequence of EM iterates, irrespective of the starting value, converges to the maximum semiparametric likelihood estimator of the parameters in the mixture model. Furthermore, Inagaki and Komaki [8] and Tan [9] modified the profile likelihood function to provide better estimators of the parameters.
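To fix ideas, the following R sketch draws one dataset from model (1) in the normal case; the sample sizes and parameter values are illustrative only and are not taken from the paper.

    # Draw one dataset from model (1) in the normal case (illustrative values).
    n1 <- 20; n2 <- 20; n3 <- 20
    lambda <- 0.5
    x1 <- rnorm(n1, mean = 0, sd = 1)         # X_i ~ f1
    x2 <- rnorm(n2, mean = 1, sd = 1.5)       # Y_j ~ f2
    from1 <- rbinom(n3, size = 1, prob = lambda) == 1
    x3 <- ifelse(from1,
                 rnorm(n3, mean = 0, sd = 1),     # Z_k drawn from f1
                 rnorm(n3, mean = 1, sd = 1.5))   # Z_k drawn from f2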
Besides the estimation of parameters, another important issue is to test the homogeneity of the model. The null hypothesis is
$$H_0: f_1 = f_2. \qquad (2)$$
To test this null hypothesis, the classical results on the likelihood ratio test (LRT) may be invalid, owing to the lack of identifiability of some nuisance parameters. To solve this problem, Liang and Rathouz [10] proposed a score test and applied it to genetic linkage analysis; they showed that the score test has a simple asymptotic distribution under the null hypothesis and maintains adequate power in detecting alternatives. This idea was further generalized by Duan et al. [11] and Fu et al. [12]. On the other hand, Chen et al. [13,14] proposed modified likelihood functions to make the LRT available. They gave the asymptotic theory of the modified LRT, showing that its asymptotic null distribution is a mixture of $\chi^2$-type distributions and that the test is asymptotically most powerful under local alternatives. Furthermore, Chen and Li [15] designed an EM-test for finite normal mixture models, which performed promisingly in their simulation study. To handle the degeneration of the Fisher information, Li et al. [16] used a high-order expansion to establish a nonstandard convergence rate, $N^{1/4}$, for the odds ratio parameter estimator. The methods mentioned above have been applied successfully in many real applications, such as genetic imprinting and quantitative trait locus mapping; see Li et al. [17] and Liu et al. [18].
Most of the mixture models described above consider the case where $f_1$ and $f_2$ are normal density functions or are linked by an exponential tilt. In this paper, we extend the results to more general cases. A similar question was studied by Ren et al. [19], who proposed a two-block Gibbs sampling method to obtain samples of the generalized pivotal quantities of the parameters and treated the cases where $f_1$ and $f_2$ are normal and logistic density functions. In our paper, we assume that $f_1$ and $f_2$ belong to a specified location-scale family with location parameter $\mu$ and scale parameter $\sigma$, and we propose a posterior p-value based on the posterior distribution to test homogeneity. We aim to give a p-value under the posterior distribution that has the same frequentist properties as the classical p-value; that is, under a proper definition, the Bayesian p-value can play the same role as the classical one. To sample from the posterior distribution, we propose to use the approximate Bayesian computation (ABC) method when $f_1$ and $f_2$ are normal density functions, which differs from the treatment of the general case. This is because the posterior distribution in the normal case can be regarded as using the information contained in the first two samples as a prior distribution and updating it with the third sample without loss of information. Our simulations show that this method is promising and efficient, even though we use the simplest rejection sampling. For the general case, since the ABC method is no longer available, we use Markov chain Monte Carlo (MCMC) methods, such as the Metropolis–Hastings sampling method described by Hannig et al. [20] and the two-block Gibbs sampling proposed by Ren et al. [19], to sample from the posterior distribution.
The paper is organized as follows. In Section 2, we define the regular location-scale family, give some of its properties, propose the posterior p-value for testing homogeneity, and introduce the sampling methods for the different cases. A real halibut dataset is studied in Section 3 to illustrate the validity of our method. The simulation study is given in Section 4, and the conclusions are given in Section 5.

2. Test Procedure

In this section, we consider model (1), where the distributions are in a certain regular location-scale family. Thus, we first give the definition in the following subsection.

2.1. Regular Location-Scale Family

In this section, we first give the definition of the regular location-scale family.
Definition 1
(regular location-scale family). Let $f(x)$ be a probability density function. If $f(x)$ satisfies
(1) $f(x) > 0$, $-\infty < x < \infty$;
(2) $f(x)$ is twice differentiable and $f''(x)$ is continuous;
(3) $\lim_{x \to -\infty} x^2 f'(x) = \lim_{x \to +\infty} x^2 f'(x) = 0$;
(4) $\int_{-\infty}^{+\infty} x^2 [f'(x)]^2 / f(x) \, dx < \infty$;
then $f(x)$ is called a regular density function, and
$$\mathcal{R}_f = \left\{ \frac{1}{\sigma} f\!\left(\frac{x - \mu}{\sigma}\right) : \mu \in (-\infty, +\infty), \ \sigma \in (0, +\infty) \right\}$$
is called the regular location-scale family.
It is easy to verify that many families of distributions are regular location-scale families. For example, let
$$f_1(x) = \frac{1}{\sqrt{2\pi}} e^{-x^2/2}, \qquad f_2(x) = \frac{e^{-x}}{(1 + e^{-x})^2}.$$
Then $f_1(x)$ and $f_2(x)$ are regular density functions, and the location-scale families constructed from $f_1(x)$ and $f_2(x)$ are regular; they are the families of normal and logistic distributions, respectively. These two families are used throughout the paper.
The following lemma highlights some properties of this family.
Lemma 1.
If $f(x)$ is a regular density function, then we have:
(1) $\lim_{x \to -\infty} x f(x) = \lim_{x \to +\infty} x f(x) = 0$;
(2) $\int_{-\infty}^{+\infty} f'(x) \, dx = 0$;
(3) $\int_{-\infty}^{+\infty} x f'(x) \, dx = -1$;
(4) $\int_{-\infty}^{+\infty} f''(x) \, dx = 0$;
(5) $\int_{-\infty}^{+\infty} x f''(x) \, dx = 0$;
(6) $\int_{-\infty}^{+\infty} x^2 f''(x) \, dx = 2$.
The proof of this lemma is given in Appendix A.
We further calculate the Fisher information matrix of the regular location-scale family with the following proposition.
Proposition 1.
Assume that $f(x; \xi) = \frac{1}{\sigma} f(\frac{x - \mu}{\sigma})$ is in the regular location-scale family, where $\xi = (\mu, \sigma)$. The parameter space is $\Omega = \{(\mu, \sigma): -\infty < \mu < \infty, \sigma > 0\}$. Let $l(X; \xi) = \log f(X; \xi)$. Then:
(1) The score function satisfies
$$E_\xi \left[ \frac{\partial l(X; \xi)}{\partial \xi} \right] = 0,$$
where
$$\frac{\partial l(X; \xi)}{\partial \xi} = \left( \frac{\partial l(X; \xi)}{\partial \mu}, \ \frac{\partial l(X; \xi)}{\partial \sigma} \right)^\top$$
is a two-dimensional vector and $E_\xi$ denotes the expectation under the distribution with parameter $\xi = (\mu, \sigma)$.
(2) The Fisher information matrix satisfies
$$0 < I_f(\xi) = E_\xi \left[ \frac{\partial l(X; \xi)}{\partial \xi} \left( \frac{\partial l(X; \xi)}{\partial \xi} \right)^\top \right] = \frac{1}{\sigma^2} C(f) = \frac{1}{\sigma^2} \begin{pmatrix} C_{11}(f) & C_{12}(f) \\ C_{21}(f) & C_{22}(f) \end{pmatrix} < \infty,$$
where
$$C_{11}(f) = \int \frac{[f'(y)]^2}{f(y)} \, dy, \qquad C_{12}(f) = C_{21}(f) = \int \frac{y [f'(y)]^2}{f(y)} \, dy, \qquad C_{22}(f) = \int \left( 1 + \frac{y f'(y)}{f(y)} \right)^2 f(y) \, dy = \int \frac{y^2 [f'(y)]^2}{f(y)} \, dy - 1.$$
(3) The Fisher information matrix is also given by
$$I_f(\xi) = -E_\xi \left[ \frac{\partial^2 l(X; \xi)}{\partial \xi \, \partial \xi^\top} \right].$$
The proof is given in Appendix A.
Proposition 2.
Assume that $0 < \lambda_0 < 1$ and $f(\cdot)$ is regular. Then the family $\{ g(x; \theta): \theta \in \Omega \}$ given by
$$g(x; \theta) = \frac{\lambda_0}{\sigma_1} f\left( \frac{x - \mu_1}{\sigma_1} \right) + \frac{1 - \lambda_0}{\sigma_2} f\left( \frac{x - \mu_2}{\sigma_2} \right),$$
where $\theta = (\mu_1, \mu_2, \sigma_1, \sigma_2)$ and $\Omega = \mathbb{R}^2 \times \mathbb{R}_+^2$, has the following properties:
(1) $E_\theta \left[ \frac{\partial \log g(x; \theta)}{\partial \theta} \right] = 0$;
(2) $I(\theta) = E_\theta \left[ \frac{\partial \log g(x; \theta)}{\partial \theta} \left( \frac{\partial \log g(x; \theta)}{\partial \theta} \right)^\top \right] < \infty$;
(3) $I(\theta) = -E_\theta \left[ \frac{\partial^2 \log g(x; \theta)}{\partial \theta \, \partial \theta^\top} \right]$.
The proof is given in Appendix A.
We then give the Fisher information matrices of the normal and logistic distributions. For the normal distribution, we have
$$C_{11}(f) = \int \frac{1}{\sqrt{2\pi}} e^{-\frac{y^2}{2}} y^2 \, dy = 1, \qquad C_{12}(f) = C_{21}(f) = \int \frac{1}{\sqrt{2\pi}} e^{-\frac{y^2}{2}} y^3 \, dy = 0, \qquad C_{22}(f) = \int \frac{1}{\sqrt{2\pi}} e^{-\frac{y^2}{2}} y^4 \, dy - 1 = 2.$$
Thus, the Fisher information matrix of the normal distribution is
$$I_f(\xi) = \frac{1}{\sigma^2} \begin{pmatrix} 1 & 0 \\ 0 & 2 \end{pmatrix}.$$
Similarly, for the logistic distribution,
$$C_{11}(f) = \int \frac{e^{-y} (e^{-y} - 1)^2}{(1 + e^{-y})^4} \, dy = \frac{1}{3}, \qquad C_{12}(f) = C_{21}(f) = \int \frac{y \, e^{-y} (e^{-y} - 1)^2}{(1 + e^{-y})^4} \, dy = 0, \qquad C_{22}(f) = \int \frac{y^2 e^{-y} (e^{-y} - 1)^2}{(1 + e^{-y})^4} \, dy - 1 = \frac{1}{3} + \frac{\pi^2}{9}.$$
Thus, the Fisher information matrix of the logistic distribution is
$$I_f(\xi) = \frac{1}{\sigma^2} \begin{pmatrix} \frac{1}{3} & 0 \\ 0 & \frac{1}{3} + \frac{\pi^2}{9} \end{pmatrix}.$$
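As a quick numerical check (our own sketch, not part of the paper), the entries of $C(f)$ for the logistic case can be verified in R via the identity $f'(y) = f(y)(e^{-y} - 1)/(1 + e^{-y})$, which in R reads fp(y) = dlogis(y) * (1 - 2 * plogis(y)):

    # Numerical verification of C(f) for the logistic density.
    f  <- dlogis
    fp <- function(y) dlogis(y) * (1 - 2 * plogis(y))    # f'(y)
    C11 <- integrate(function(y) fp(y)^2 / f(y), -Inf, Inf)$value            # approx 1/3
    C12 <- integrate(function(y) y * fp(y)^2 / f(y), -Inf, Inf)$value        # approx 0
    C22 <- integrate(function(y) y^2 * fp(y)^2 / f(y), -Inf, Inf)$value - 1  # approx 1/3 + pi^2/9
    c(C11, C12, C22) - c(1/3, 0, 1/3 + pi^2/9)   # all entries should be near 0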

2.2. A Posterior p-Value

We now consider testing the homogeneity of model (1), where $f_1$ and $f_2$ are in $\mathcal{R}_f$:
$$f_1(x) = \frac{1}{\sigma_1} f\left( \frac{x - \mu_1}{\sigma_1} \right), \qquad f_2(x) = \frac{1}{\sigma_2} f\left( \frac{x - \mu_2}{\sigma_2} \right).$$
This is equivalent to testing the equality of the parameters of the two density functions, that is,
$$H_0: \mu_1 = \mu_2, \ \sigma_1 = \sigma_2 \quad \text{vs.} \quad H_1: \mu_1 \neq \mu_2 \ \text{or} \ \sigma_1 \neq \sigma_2. \qquad (3)$$
Consider the density function
$$g(x; \theta) = \frac{\lambda}{\sigma_1} f\left( \frac{x - \mu_1}{\sigma_1} \right) + \frac{1 - \lambda}{\sigma_2} f\left( \frac{x - \mu_2}{\sigma_2} \right),$$
where $\theta = (\mu_1, \mu_2, \sigma_1, \sigma_2, \lambda)$ is the unknown parameter. When $f(x)$ is a regular density function, the Fisher information matrix is
$$I_g(\theta) = E_\theta \left[ \frac{\partial \log g(x; \theta)}{\partial \theta} \left( \frac{\partial \log g(x; \theta)}{\partial \theta} \right)^\top \right],$$
where
$$\frac{\partial \log g(x; \theta)}{\partial \theta} = \frac{1}{g(x; \theta)} \begin{pmatrix} -\frac{\lambda}{\sigma_1^2} f'\left( \frac{x - \mu_1}{\sigma_1} \right) \\[4pt] -\frac{1 - \lambda}{\sigma_2^2} f'\left( \frac{x - \mu_2}{\sigma_2} \right) \\[4pt] -\frac{\lambda}{\sigma_1^2} f\left( \frac{x - \mu_1}{\sigma_1} \right) - \frac{\lambda (x - \mu_1)}{\sigma_1^3} f'\left( \frac{x - \mu_1}{\sigma_1} \right) \\[4pt] -\frac{1 - \lambda}{\sigma_2^2} f\left( \frac{x - \mu_2}{\sigma_2} \right) - \frac{(1 - \lambda)(x - \mu_2)}{\sigma_2^3} f'\left( \frac{x - \mu_2}{\sigma_2} \right) \\[4pt] \frac{1}{\sigma_1} f\left( \frac{x - \mu_1}{\sigma_1} \right) - \frac{1}{\sigma_2} f\left( \frac{x - \mu_2}{\sigma_2} \right) \end{pmatrix}.$$
When $\mu_1 = \mu_2$ and $\sigma_1 = \sigma_2$,
$$\frac{\partial \log g(x; \theta)}{\partial \lambda} = 0,$$
so the last row and column of $I_g(\theta)$ are zero, which means that $|I_g(\theta)| = 0$ and $I_g(\theta)$ is singular rather than positive definite. Thus, we may encounter difficulties when using traditional test methods, such as the likelihood ratio test.
We suggest a solution here. First, assume that $\lambda = \lambda_0$ is known. There are then four parameters, which we still denote by $\theta = (\mu_1, \mu_2, \sigma_1, \sigma_2)$. Since $\lambda$ is in fact unknown, we plug in an estimate of $\lambda$. This is justified because, when the homogeneity hypothesis holds, the distribution of the population does not depend on $\lambda$, so the level of the test does not depend on the estimate of $\lambda$. We now give the inference on $\theta$. For the first two samples, the fiducial densities of $(\mu_1, \sigma_1)$ and $(\mu_2, \sigma_2)$ are
$$\pi(\mu_1, \sigma_1) \propto \prod_{i=1}^{n_1} \frac{1}{\sigma_1} f\left( \frac{x_{1i} - \mu_1}{\sigma_1} \right) \cdot \frac{1}{\sigma_1}, \qquad \pi(\mu_2, \sigma_2) \propto \prod_{j=1}^{n_2} \frac{1}{\sigma_2} f\left( \frac{x_{2j} - \mu_2}{\sigma_2} \right) \cdot \frac{1}{\sigma_2}, \qquad (4)$$
where "∝" denotes "proportional to"; see Example 3 of Hannig et al. [20]. To combine (4) with the third sample, we regard (4) as the prior distribution. By Bayes' theorem,
$$\pi(\theta \mid x) \propto \prod_{k=1}^{n_3} \left[ \frac{\lambda_0}{\sigma_1} f\left( \frac{x_{3k} - \mu_1}{\sigma_1} \right) + \frac{1 - \lambda_0}{\sigma_2} f\left( \frac{x_{3k} - \mu_2}{\sigma_2} \right) \right] \cdot \prod_{i=1}^{n_1} \frac{1}{\sigma_1} f\left( \frac{x_{1i} - \mu_1}{\sigma_1} \right) \cdot \frac{1}{\sigma_1} \cdot \prod_{j=1}^{n_2} \frac{1}{\sigma_2} f\left( \frac{x_{2j} - \mu_2}{\sigma_2} \right) \cdot \frac{1}{\sigma_2}. \qquad (5)$$
Denote by $P_{\Theta \mid x}$ the probability measure on the parameter space determined by (5), where $x = (x_1, x_2, x_3)$, $x_1 = (x_{11}, x_{12}, \dots, x_{1n_1})$, $x_2 = (x_{21}, x_{22}, \dots, x_{2n_2})$, $x_3 = (x_{31}, x_{32}, \dots, x_{3n_3})$, and $\Theta$ denotes the corresponding random parameter. We can see from expression (5) that $P_{\Theta \mid x}$ is the posterior distribution under the prior
$$\pi(\theta) \, d\theta = \frac{1}{\sigma_1 \sigma_2} \, d\mu_1 \, d\mu_2 \, d\sigma_1 \, d\sigma_2.$$
Let
$$A = \begin{pmatrix} 1 & -1 & 0 & 0 \\ 0 & 0 & 1 & -1 \end{pmatrix}, \qquad b = (0, 0)^\top.$$
Then, hypotheses (3) are equivalent to
$$H_0: A\theta = b \quad \text{vs.} \quad H_1: A\theta \neq b, \qquad (6)$$
where $\theta = (\mu_1, \mu_2, \sigma_1, \sigma_2)$.
To establish the Bernstein–von Mises theorem for multiple samples, we first introduce some necessary assumptions. Let $l_i(\theta)$ be the log-likelihood function of the $i$th sample, $i = 1, 2, 3$.
Assumption 1.
Given any $\varepsilon > 0$, there exists $\delta > 0$ such that, in the expansion
$$l_i(\theta) = l_i(\theta_0) + (\theta - \theta_0)^\top l_i'(\theta_0) - \frac{1}{2} (\theta - \theta_0)^\top \left[ n I_i(\theta_0) + R_{n_i}(\theta) \right] (\theta - \theta_0), \quad i = 1, 2, 3,$$
where $\theta_0$ is the true value of the parameter and $I_i(\theta_0)$ is the Fisher information matrix, the probability of the event
$$\sup \left\{ \frac{1}{n} \lambda_{\max}\left( R_{n_i}(\theta) \right) : \|\theta - \theta_0\| \le \delta \right\} > \varepsilon$$
tends to 0 as $n \to \infty$, where $\|\cdot\|$ is the Euclidean norm and $\lambda_{\max}(A)$ denotes the largest absolute eigenvalue of a square matrix $A$.
Assumption 2.
For any $\delta > 0$, there exists $\varepsilon > 0$ such that the probability of the event
$$\sup \left\{ \frac{1}{n} \left[ l_i(\theta) - l_i(\theta_0) \right] : \|\theta - \theta_0\| \ge \delta \right\} \le -\varepsilon$$
tends to 1 as $n \to \infty$.
Assumption 3.
Under the prior $\pi$, there exists $k_0$ such that the integral below exists:
$$\int \|\theta\|^2 \prod_{i=1}^{k_0} \frac{1}{\sigma_1} f\left( \frac{x_{1i} - \mu_1}{\sigma_1} \right) \prod_{j=1}^{k_0} \frac{1}{\sigma_2} f\left( \frac{x_{2j} - \mu_2}{\sigma_2} \right) \pi(\theta) \, d\theta < \infty.$$
Assumption 4.
Let $n = n_1 + n_2 + n_3$. Then
$$\frac{n_i}{n} \to r_i \in (0, 1), \quad i = 1, 2, 3.$$
We now state the Bernstein–von Mises theorem for multiple samples.
Theorem 1.
Denote the posterior density of $t = \sqrt{n} (\theta - T_n)$ by $\pi(t \mid x)$, where
$$T_n = \theta_0 + \frac{1}{n} I^{-1}(\theta_0) \, l'(\theta_0).$$
If Assumptions 1, 2 and 4 hold, then
$$\int_\Omega \left| \pi(t \mid x) - (2\pi)^{-\frac{k}{2}} |I(\theta_0)|^{\frac{1}{2}} e^{-\frac{1}{2} t^\top I(\theta_0) t} \right| dt \overset{P}{\to} 0.$$
Furthermore, if Assumption 3 also holds, then
$$\int_\Omega \left( 1 + \|t\|^2 \right) \left| \pi(t \mid x) - (2\pi)^{-\frac{k}{2}} |I(\theta_0)|^{\frac{1}{2}} e^{-\frac{1}{2} t^\top I(\theta_0) t} \right| dt \overset{P}{\to} 0.$$
We can then define the posterior p-value as follows.
Definition 2.
Let
$$p(x) = P_{\Theta \mid x} \left( (\Theta - \theta_B)^\top A^\top (A \Sigma_B A^\top)^{-1} A (\Theta - \theta_B) \ \ge \ (b - A \theta_B)^\top (A \Sigma_B A^\top)^{-1} (b - A \theta_B) \right), \qquad (7)$$
where $P_{\Theta \mid x}(\cdot)$ is the probability under the posterior distribution, $\theta_B$ is the posterior mean and $\Sigma_B$ is the posterior covariance matrix. We call $p(x)$ a posterior p-value.
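In practice, (7) is evaluated by Monte Carlo over posterior draws. The following R sketch (our own helper, not from the paper) computes the posterior p-value from an m × 4 matrix of draws of $(\mu_1, \mu_2, \sigma_1, \sigma_2)$:

    # Posterior p-value (7) from a matrix of posterior draws.
    posterior_pvalue <- function(draws) {
      A <- rbind(c(1, -1, 0, 0),
                 c(0,  0, 1, -1))
      b <- c(0, 0)
      theta_B <- colMeans(draws)            # posterior mean
      Sigma_B <- cov(draws)                 # posterior covariance matrix
      M  <- solve(A %*% Sigma_B %*% t(A))   # (A Sigma_B A')^{-1}
      d0 <- b - A %*% theta_B
      q_obs <- drop(t(d0) %*% M %*% d0)     # right-hand side of (7)
      q_draws <- apply(draws, 1, function(th) {
        d <- A %*% (th - theta_B)
        drop(t(d) %*% M %*% d)
      })
      mean(q_draws >= q_obs)                # posterior probability in (7)
    }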
It should be noted that $p(x)$ is defined under the posterior distribution, i.e., the distribution of the parameters given the observation $X = x$. However, when studying the properties of $p(x)$, we regard it as a random variable and denote it by $p(X)$. The theorem below guarantees the validity of the posterior p-value.
Theorem 2.
Under the assumptions of Theorem 1, if the null hypothesis in (3) is true, that is, $\mu_1 = \mu_2$ and $\sigma_1 = \sigma_2$, then the p-value defined by (7) satisfies
$$p(X) \overset{d}{\to} U(0, 1),$$
where "$\overset{d}{\to}$" denotes convergence in distribution and $U(0, 1)$ is the uniform distribution on the interval $(0, 1)$.
The proof is given in Appendix A. For a given significance level $\alpha$, we reject the null hypothesis if the p-value is less than $\alpha$.

2.3. Sampling Method

The posterior mean, $\theta_B$, and the posterior covariance, $\Sigma_B$, in Equation (7) can be estimated by the sample mean and sample covariance of posterior draws, respectively. The remaining problem is how to sample from the posterior distribution. When $\lambda$ is unknown, we first use an EM algorithm to estimate $\lambda$, and then sample from the posterior distribution with $\lambda$ fixed at its estimate. Markov chain Monte Carlo (MCMC) methods are commonly used for this task. However, as mentioned earlier, MCMC needs to discard a large number of samples in the burn-in period to guarantee that the accepted samples are sufficiently close to draws from the target distribution. Fortunately, when $f_1$ and $f_2$ are normal density functions, the posterior distribution can be transformed and sampled using the approximate Bayesian computation (ABC) method. When $f_1$ and $f_2$ are more general, such as logistic density functions, the two-block Gibbs sampling proposed by Ren et al. [19] is an appropriate substitute. We discuss the details in the following subsections.

2.3.1. EM Algorithm for λ

In this subsection, we propose the EM algorithm for estimating λ .
The log-likelihood function of the model is
$$L(x; \theta, \lambda) = \sum_{i=1}^{n_1} \log f_1(x_{1i}; \theta) + \sum_{j=1}^{n_2} \log f_2(x_{2j}; \theta) + \sum_{k=1}^{n_3} \log p(x_{3k}; \theta, \lambda),$$
where $f_1$ and $f_2$ are in the same regular location-scale family, $\mathcal{R}_f$, with parameters $(\mu_1, \sigma_1)$ and $(\mu_2, \sigma_2)$, respectively, and, in the log-likelihood of the third sample,
$$p(x_{3k}; \theta, \lambda) = \lambda f_1(x_{3k}; \theta) + (1 - \lambda) f_2(x_{3k}; \theta).$$
The EM algorithm was first proposed by Dempster et al. [21] and has been broadly applied to a wide variety of parametric models; see McLachlan and Krishnan [22] for a review.
Assume that we have obtained estimates of the parameters after $m$ iterations, and denote them by $\theta^{(m)} = (\mu_1^{(m)}, \sigma_1^{(m)}, \mu_2^{(m)}, \sigma_2^{(m)})$ and $\lambda^{(m)}$. We introduce the latent variables $\gamma = (\gamma_1, \gamma_2, \dots, \gamma_{n_3})$; the component $\gamma_k$ indicates which component distribution the observation $x_{3k}$ is drawn from: $\gamma_k = 1$ if it is drawn from $f_1(x_{3k}; \theta)$, and $\gamma_k = 0$ otherwise. We then have
$$P(\gamma_k = 1) = \lambda, \qquad P(\gamma_k = 0) = 1 - \lambda, \qquad k = 1, 2, \dots, n_3.$$
The density of the joint distribution of $X_1, X_2, X_3, \gamma$ is
$$\prod_{i=1}^{n_1} f_1(x_{1i}; \theta) \prod_{j=1}^{n_2} f_2(x_{2j}; \theta) \prod_{k=1}^{n_3} \left[ \lambda f_1(x_{3k}; \theta) \right]^{\gamma_k} \left[ (1 - \lambda) f_2(x_{3k}; \theta) \right]^{1 - \gamma_k}.$$
Given $X_1 = x_1$, $X_2 = x_2$, $X_3 = x_3$, the conditional distribution of $\gamma_k$ is
$$\left[ \frac{\lambda f_1(x_{3k}; \theta)}{\lambda f_1(x_{3k}; \theta) + (1 - \lambda) f_2(x_{3k}; \theta)} \right]^{\gamma_k} \left[ \frac{(1 - \lambda) f_2(x_{3k}; \theta)}{\lambda f_1(x_{3k}; \theta) + (1 - \lambda) f_2(x_{3k}; \theta)} \right]^{1 - \gamma_k},$$
where $\gamma_k \in \{0, 1\}$, $k = 1, 2, \dots, n_3$. Thus, the conditional expectation of $\gamma_k$ is
$$E_{(\theta, \lambda)}[\gamma_k] = P_{(\theta, \lambda)}(\gamma_k = 1) = \frac{\lambda f_1(x_{3k}; \theta)}{\lambda f_1(x_{3k}; \theta) + (1 - \lambda) f_2(x_{3k}; \theta)}.$$
When $\theta = \theta^{(m)}$ and $\lambda = \lambda^{(m)}$, this conditional expectation serves as the estimate of $\gamma_k$:
$$\hat{\gamma}_k(\theta^{(m)}, \lambda^{(m)}) = \frac{\lambda^{(m)} f_1(x_{3k}; \theta^{(m)})}{\lambda^{(m)} f_1(x_{3k}; \theta^{(m)}) + (1 - \lambda^{(m)}) f_2(x_{3k}; \theta^{(m)})}, \qquad k = 1, 2, \dots, n_3.$$
The complete-data log-likelihood function is
$$\sum_{i=1}^{n_1} \log f_1(x_{1i}; \theta) + \sum_{j=1}^{n_2} \log f_2(x_{2j}; \theta) + \sum_{k=1}^{n_3} \gamma_k \log f_1(x_{3k}; \theta) + \sum_{k=1}^{n_3} (1 - \gamma_k) \log f_2(x_{3k}; \theta) + \sum_{k=1}^{n_3} \gamma_k \log \lambda + \left( n_3 - \sum_{k=1}^{n_3} \gamma_k \right) \log(1 - \lambda).$$
Since the latent variables are unknown, we use their conditional expectations. Maximizing over $\lambda$ then gives
$$\lambda^{(m+1)} = \frac{\sum_{k=1}^{n_3} \hat{\gamma}_k(\theta^{(m)}, \lambda^{(m)})}{n_3}.$$
Then, in the E-step, we compute the conditional expectation of the complete-data log-likelihood given $(\theta^{(m)}, \lambda^{(m)})$; its terms involving $\theta$ are
$$Q(\theta, \lambda \mid \theta^{(m)}, \lambda^{(m)}) = \sum_{i=1}^{n_1} \log f_1(x_{1i}; \theta) + \sum_{j=1}^{n_2} \log f_2(x_{2j}; \theta) + \sum_{k=1}^{n_3} \hat{\gamma}_k(\theta^{(m)}, \lambda^{(m)}) \log f_1(x_{3k}; \theta) + \sum_{k=1}^{n_3} \left( 1 - \hat{\gamma}_k(\theta^{(m)}, \lambda^{(m)}) \right) \log f_2(x_{3k}; \theta).$$
Writing $\hat{\gamma}_k = \hat{\gamma}_k(\theta^{(m)}, \lambda^{(m)})$, this becomes
$$Q(\theta, \lambda \mid \theta^{(m)}, \lambda^{(m)}) = \sum_{i=1}^{n_1} \left[ -\log \sigma_1 + \log f\left( \frac{x_{1i} - \mu_1}{\sigma_1} \right) \right] + \sum_{j=1}^{n_2} \left[ -\log \sigma_2 + \log f\left( \frac{x_{2j} - \mu_2}{\sigma_2} \right) \right] + \sum_{k=1}^{n_3} \hat{\gamma}_k \left[ -\log \sigma_1 + \log f\left( \frac{x_{3k} - \mu_1}{\sigma_1} \right) \right] + \sum_{k=1}^{n_3} (1 - \hat{\gamma}_k) \left[ -\log \sigma_2 + \log f\left( \frac{x_{3k} - \mu_2}{\sigma_2} \right) \right].$$
In the M-step, we solve the simultaneous equations below to maximize $Q(\theta, \lambda \mid \theta^{(m)}, \lambda^{(m)})$; the solutions are the new parameters $(\theta^{(m+1)}, \lambda^{(m+1)})$. We give the equations for $(\mu_1, \sigma_1)$; those for $(\mu_2, \sigma_2)$ are analogous:
$$0 = \sum_{i=1}^{n_1} \frac{f'\left( \frac{x_{1i} - \mu_1}{\sigma_1} \right)}{f\left( \frac{x_{1i} - \mu_1}{\sigma_1} \right)} + \sum_{k=1}^{n_3} \hat{\gamma}_k \frac{f'\left( \frac{x_{3k} - \mu_1}{\sigma_1} \right)}{f\left( \frac{x_{3k} - \mu_1}{\sigma_1} \right)},$$
$$0 = \sum_{i=1}^{n_1} \left[ 1 + \frac{x_{1i} - \mu_1}{\sigma_1} \cdot \frac{f'\left( \frac{x_{1i} - \mu_1}{\sigma_1} \right)}{f\left( \frac{x_{1i} - \mu_1}{\sigma_1} \right)} \right] + \sum_{k=1}^{n_3} \hat{\gamma}_k \left[ 1 + \frac{x_{3k} - \mu_1}{\sigma_1} \cdot \frac{f'\left( \frac{x_{3k} - \mu_1}{\sigma_1} \right)}{f\left( \frac{x_{3k} - \mu_1}{\sigma_1} \right)} \right],$$
$$\lambda = \frac{1}{n_3} \sum_{k=1}^{n_3} \hat{\gamma}_k.$$
In the simulation study, we consider the normal and logistic cases. For the normal case, the M-step has the closed form
$$\mu_1 = \frac{\sum_{k=1}^{n_3} \hat{\gamma}_k x_{3k} + \sum_{i=1}^{n_1} x_{1i}}{\sum_{k=1}^{n_3} \hat{\gamma}_k + n_1}, \qquad \sigma_1^2 = \frac{\sum_{k=1}^{n_3} \hat{\gamma}_k (x_{3k} - \mu_1)^2 + \sum_{i=1}^{n_1} (x_{1i} - \mu_1)^2}{\sum_{k=1}^{n_3} \hat{\gamma}_k + n_1},$$
while for the logistic case, with $f'(y)/f(y) = (e^{-y} - 1)/(1 + e^{-y})$, the stationarity equations become
$$0 = \sum_{k=1}^{n_3} \hat{\gamma}_k \frac{e^{-\frac{x_{3k} - \mu_1}{\sigma_1}} - 1}{e^{-\frac{x_{3k} - \mu_1}{\sigma_1}} + 1} + \sum_{i=1}^{n_1} \frac{e^{-\frac{x_{1i} - \mu_1}{\sigma_1}} - 1}{e^{-\frac{x_{1i} - \mu_1}{\sigma_1}} + 1},$$
$$0 = \sum_{k=1}^{n_3} \hat{\gamma}_k \left[ \frac{x_{3k} - \mu_1}{\sigma_1} \cdot \frac{e^{-\frac{x_{3k} - \mu_1}{\sigma_1}} - 1}{e^{-\frac{x_{3k} - \mu_1}{\sigma_1}} + 1} + 1 \right] + \sum_{i=1}^{n_1} \left[ \frac{x_{1i} - \mu_1}{\sigma_1} \cdot \frac{e^{-\frac{x_{1i} - \mu_1}{\sigma_1}} - 1}{e^{-\frac{x_{1i} - \mu_1}{\sigma_1}} + 1} + 1 \right].$$
The two steps are repeated until convergence, yielding the MLE of the parameters. A sketch of the resulting EM update for the normal case is given below.
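The following R sketch implements this EM iteration for the normal case, with a fixed number of passes in place of a formal convergence check; the function name and interface are ours, not the paper's.

    # EM for the normal case of model (1); x1, x2, x3 are the three samples
    # and (mu1, s1, mu2, s2, lam) are starting values.
    em_normal <- function(x1, x2, x3, mu1, s1, mu2, s2, lam, iter = 200) {
      for (m in seq_len(iter)) {
        # E-step: estimated weight that each x3 observation comes from f1
        g1  <- lam * dnorm(x3, mu1, s1)
        g2  <- (1 - lam) * dnorm(x3, mu2, s2)
        gam <- g1 / (g1 + g2)
        # M-step: closed-form updates pooling x1 (resp. x2) with weighted x3
        mu1 <- (sum(gam * x3) + sum(x1)) / (sum(gam) + length(x1))
        s1  <- sqrt((sum(gam * (x3 - mu1)^2) + sum((x1 - mu1)^2)) /
                    (sum(gam) + length(x1)))
        mu2 <- (sum((1 - gam) * x3) + sum(x2)) / (sum(1 - gam) + length(x2))
        s2  <- sqrt((sum((1 - gam) * (x3 - mu2)^2) + sum((x2 - mu2)^2)) /
                    (sum(1 - gam) + length(x2)))
        lam <- mean(gam)
      }
      list(mu1 = mu1, sigma1 = s1, mu2 = mu2, sigma2 = s2, lambda = lam)
    }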

2.3.2. Normal Case

When the estimate of $\lambda$ is obtained, the posterior distribution (5) can be rewritten as
$$\pi(\theta \mid X_1, X_2, X_3) \propto \frac{1}{\sigma_1} \prod_{i=1}^{n_1} f(X_{1i}; \mu_1, \sigma_1) \times \frac{1}{\sigma_2} \prod_{j=1}^{n_2} f(X_{2j}; \mu_2, \sigma_2) \times \prod_{k=1}^{n_3} \left[ \hat{\lambda} f(X_{3k}; \mu_1, \sigma_1) + (1 - \hat{\lambda}) f(X_{3k}; \mu_2, \sigma_2) \right]. \qquad (8)$$
This means that the posterior distribution is equivalent to using the first two terms on the right-hand side as the "prior distribution" and the third term as the likelihood function. For the first term, we have
$$\frac{1}{\sigma_1} \prod_{i=1}^{n_1} f(X_{1i}; \mu_1, \sigma_1) = \frac{1}{\sigma_1} \left( \frac{1}{\sqrt{2\pi} \sigma_1} \right)^{n_1} \exp\left( -\sum_{i=1}^{n_1} \frac{(X_{1i} - \mu_1)^2}{2\sigma_1^2} \right).$$
Denote the sample mean and variance of the first sample by
$$\bar{X}_1 = \frac{1}{n_1} \sum_{i=1}^{n_1} X_{1i}, \qquad S_1^2 = \frac{1}{n_1 - 1} \sum_{i=1}^{n_1} (X_{1i} - \bar{X}_1)^2,$$
which follow a normal and a scaled $\chi^2(n_1 - 1)$ distribution, respectively; that is,
$$\bar{X}_1 \sim N\left( \mu_1, \frac{\sigma_1^2}{n_1} \right), \qquad \frac{(n_1 - 1) S_1^2}{\sigma_1^2} \sim \chi^2(n_1 - 1).$$
Let $U \sim N(0, 1)$ and $V \sim \chi^2(n_1 - 1)$ be two independent random variables. Then
$$\bar{X}_1 = \mu_1 + \frac{\sigma_1}{\sqrt{n_1}} U, \qquad (n_1 - 1) S_1^2 = \sigma_1^2 V.$$
Given $\bar{X}_1 = \bar{x}_1$ and $S_1^2 = s_1^2$, the parameters $\mu_1$ and $\sigma_1$ can be regarded as functions of $U$ and $V$:
$$\mu_1 = \bar{x}_1 - \frac{\sigma_1}{\sqrt{n_1}} U, \qquad \sigma_1^2 = \frac{(n_1 - 1) s_1^2}{V}.$$
The joint density of $(U, V)$ is
$$\frac{1}{\sqrt{2\pi}} e^{-\frac{u^2}{2}} \cdot \frac{v^{\frac{n_1 - 1}{2} - 1}}{\Gamma\left( \frac{n_1 - 1}{2} \right) 2^{\frac{n_1 - 1}{2}}} e^{-\frac{v}{2}}.$$
The joint density of $(\mu_1, \sigma_1)$ can then be calculated as
$$\pi(\mu_1, \sigma_1 \mid x_{1,obs}) \propto \frac{\sqrt{n_1}}{\sqrt{2\pi} \sigma_1} e^{-\frac{n_1 (\bar{x}_1 - \mu_1)^2}{2\sigma_1^2}} \cdot \frac{(s_1^2)^{\frac{n_1 - 1}{2}} (n_1 - 1)^{\frac{n_1 - 1}{2}}}{\Gamma\left( \frac{n_1 - 1}{2} \right) 2^{\frac{n_1 - 1}{2}}} \left( \frac{1}{\sigma_1^2} \right)^{\frac{n_1}{2}} e^{-\frac{(n_1 - 1) s_1^2}{2\sigma_1^2}},$$
where $x_{1,obs} = (x_{11}, x_{12}, \dots, x_{1n_1})$. This coincides with the joint fiducial density proposed by Fisher [23], which means that the fiducial distribution of $(\mu_1, \sigma_1)$ is
$$\mu_1 \mid \sigma_1^2 \sim N\left( \bar{x}_1, \frac{\sigma_1^2}{n_1} \right), \qquad \frac{1}{\sigma_1^2} \sim \frac{\chi^2(n_1 - 1)}{(n_1 - 1) s_1^2}. \qquad (10)$$
Similarly, we can obtain
$$\mu_2 \mid \sigma_2^2 \sim N\left( \bar{x}_2, \frac{\sigma_2^2}{n_2} \right), \qquad \frac{1}{\sigma_2^2} \sim \frac{\chi^2(n_2 - 1)}{(n_2 - 1) s_2^2}, \qquad (11)$$
where $\bar{x}_2$ and $s_2^2$ are the sample mean and variance of the second sample and $x_{2,obs} = (x_{21}, x_{22}, \dots, x_{2n_2})$.
With the conclusions above, sampling from the posterior distribution (5) can be performed by first sampling from the fiducial distributions of the parameters and then combining this information with the likelihood function of the third sample from the mixture model (1). This can be carried out simply using the approximate Bayesian computation (ABC) method, regarding the fiducial distributions of $(\mu_1, \sigma_1, \mu_2, \sigma_2)$ as the prior distribution. After drawing parameters $(\mu_1, \sigma_1, \mu_2, \sigma_2)$ from (10) and (11), we generate a simulated sample $x_{3,sim} = (x_{31}, x_{32}, \dots, x_{3n_3})$ from
$$\hat{\lambda} f(x; \mu_1, \sigma_1) + (1 - \hat{\lambda}) f(x; \mu_2, \sigma_2),$$
where $\hat{\lambda}$ is the MLE of $\lambda$, estimated beforehand using the EM algorithm proposed in the last subsection. We then calculate the distance between the simulated and observed samples and accept those parameters whose distance is below a given threshold, $\varepsilon$. The algorithm is given below.
  • Compute the sample means and variances of the first two samples and denote them by $\bar{x}_1$, $s_1^2$, $\bar{x}_2$ and $s_2^2$. Calculate the MLE of $\lambda$ using the EM algorithm and denote it by $\hat{\lambda}$.
  • Sample $U_1$ and $U_2$ from the standard normal distribution, and $V_1$ and $V_2$ from the $\chi^2(n_1 - 1)$ and $\chi^2(n_2 - 1)$ distributions, respectively. To sample from the fiducial distributions of the parameters, calculate
$$\mu_1 = \bar{x}_1 - \frac{U_1}{\sqrt{V_1 / (n_1 - 1)}} \cdot \frac{s_1}{\sqrt{n_1}}, \quad \sigma_1^2 = \frac{(n_1 - 1) s_1^2}{V_1}, \quad \mu_2 = \bar{x}_2 - \frac{U_2}{\sqrt{V_2 / (n_2 - 1)}} \cdot \frac{s_2}{\sqrt{n_2}}, \quad \sigma_2^2 = \frac{(n_2 - 1) s_2^2}{V_2}.$$
    Denote the sampled parameters by $\theta = (\mu_1, \mu_2, \sigma_1, \sigma_2)$.
  • Generate a simulation of size $n_3$ from
$$\hat{\lambda} f(x; \mu_1, \sigma_1) + (1 - \hat{\lambda}) f(x; \mu_2, \sigma_2),$$
    and denote it by $x_{3,sim} = (x_{31}, x_{32}, \dots, x_{3n_3})$.
  • Calculate the Euclidean distance between the order statistics of the observed third sample and those of the simulation. Accept the parameters if the distance is below a given threshold, $\varepsilon$; otherwise, reject them.
  • Repeat the procedure until a prescribed number of parameters has been accepted.
Note that the samples accepted by this algorithm are only an approximation of the posterior distribution (5); we actually sample from
$$\pi_\varepsilon(\theta, x \mid x_{1,obs}, x_{2,obs}, x_{3,obs}) \propto \pi(\theta \mid x_{1,obs}, x_{2,obs}, x) \, I\left( \|x - x_{3,obs}\| \le \varepsilon \right), \qquad (12)$$
where $I$ is the indicator function. The threshold $\varepsilon$ controls the proximity of (12) to (5) and can be adjusted to balance accuracy against computational cost. A sketch of this rejection-ABC step is given below.
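The following R sketch implements the rejection-ABC step for the normal case; the function name, interface and default number of accepted draws are ours, not the paper's.

    # Reject-ABC for the normal case; x1, x2, x3 are the observed samples,
    # lam_hat is the EM estimate of lambda, eps the acceptance threshold.
    abc_normal <- function(x1, x2, x3, lam_hat, eps, n_keep = 4000) {
      n1 <- length(x1); n2 <- length(x2); n3 <- length(x3)
      z_obs <- sort(x3)
      out <- matrix(NA_real_, nrow = 0, ncol = 4)
      while (nrow(out) < n_keep) {
        # draw (mu1, sigma1), (mu2, sigma2) from the fiducial laws (10)-(11)
        s1sq <- (n1 - 1) * var(x1) / rchisq(1, n1 - 1)
        mu1  <- rnorm(1, mean(x1), sqrt(s1sq / n1))
        s2sq <- (n2 - 1) * var(x2) / rchisq(1, n2 - 1)
        mu2  <- rnorm(1, mean(x2), sqrt(s2sq / n2))
        # simulate a third sample of size n3 from the plug-in mixture
        from1 <- rbinom(n3, 1, lam_hat) == 1
        z_sim <- ifelse(from1, rnorm(n3, mu1, sqrt(s1sq)), rnorm(n3, mu2, sqrt(s2sq)))
        # accept if the order statistics are close enough to the observed ones
        if (sqrt(sum((sort(z_sim) - z_obs)^2)) <= eps)
          out <- rbind(out, c(mu1, mu2, sqrt(s1sq), sqrt(s2sq)))
      }
      colnames(out) <- c("mu1", "mu2", "sigma1", "sigma2")
      out
    }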

2.3.3. General Case

When $f_1$ and $f_2$ are not normal, it is natural to sample from the posterior (8) using the Markov chain Monte Carlo (MCMC) method. The Metropolis–Hastings (MH) and Gibbs sampling methods are commonly used. An early version of the MH algorithm was given by Metropolis et al. [24] in a statistical physics context, with a subsequent generalization by Hastings [25], who focused on statistical problems. Further computational issues and solutions can be found in Owen and Glynn [26].
The initial values of the parameters can be determined by the EM algorithm described above. For the proposal distribution, we choose
$$q(\mu_k' \mid \mu_k^{(\tau)}) = N(\cdot \,; \mu_k^{(\tau)}, 1), \qquad q(\sigma_k' \mid \sigma_k^{(\tau)}) = Ga(\cdot \,; \sigma_k^{(\tau)}, 1), \qquad (13)$$
where $N(\cdot)$ and $Ga(\cdot)$ denote the normal and gamma distributions, respectively, $k = 1, 2$, and $(\mu_k^{(\tau)}, \sigma_k^{(\tau)})$ denotes the state accepted in the $\tau$th iteration. Given $\theta^{(\tau)} = (\mu_1^{(\tau)}, \sigma_1^{(\tau)}, \mu_2^{(\tau)}, \sigma_2^{(\tau)})$, we obtain $\theta^{(\tau + 1)}$ via the following two-step algorithm.
  • Sample $\theta' = (\mu_1', \sigma_1', \mu_2', \sigma_2')$ from the proposal distributions (13) and compute
$$\log Q(\theta' \mid \theta^{(\tau)}) = \log q(\mu_1^{(\tau)} \mid \mu_1') + \log q(\mu_2^{(\tau)} \mid \mu_2') + \log q(\sigma_1^{(\tau)} \mid \sigma_1') + \log q(\sigma_2^{(\tau)} \mid \sigma_2') + \sum_{i=1}^{n_1} \log f(x_{1i}; \mu_1', \sigma_1') + \sum_{j=1}^{n_2} \log f(x_{2j}; \mu_2', \sigma_2') + \sum_{k=1}^{n_3} \log \left[ \hat{\lambda} f(x_{3k}; \mu_1', \sigma_1') + (1 - \hat{\lambda}) f(x_{3k}; \mu_2', \sigma_2') \right] - \log(\sigma_1' \sigma_2'),$$
    where the $q$-terms are the reverse proposal densities, which corrects for the asymmetry of the gamma proposal.
  • Accept $\theta'$ with probability
$$P(\theta', \theta^{(\tau)}) = \exp\left( \min\left\{ 0, \ \log Q(\theta' \mid \theta^{(\tau)}) - \log Q(\theta^{(\tau)} \mid \theta') \right\} \right),$$
    in which case $\theta^{(\tau + 1)} = \theta'$; otherwise, set $\theta^{(\tau + 1)} = \theta^{(\tau)}$.
The chain must be run for a sufficiently long burn-in period before the retained draws can be treated as samples from the posterior distribution, which costs much more time than the ABC algorithm for the normal case. Moreover, in our simulations we found that the MH algorithm can be too conservative. A better substitute is the two-block Gibbs sampling proposed by Ren et al. [19]: $\lambda$ is first estimated using the EM algorithm, and then, in each loop, the parameters are updated through their conditional generalized pivotal quantities. A minimal sketch of the MH step is given below.
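The following R sketch shows one MH update for the logistic case under the proposal (13); log_post evaluates the log of the unnormalized posterior, and the proposal correction reflects the asymmetry of the gamma proposal (function names and structure are ours, not the paper's).

    # Log unnormalized posterior for the logistic case, prior 1/(sigma1*sigma2).
    log_post <- function(th, x1, x2, x3, lam_hat) {
      mu1 <- th[1]; mu2 <- th[2]; s1 <- th[3]; s2 <- th[4]
      if (s1 <= 0 || s2 <= 0) return(-Inf)
      sum(dlogis(x1, mu1, s1, log = TRUE)) +
        sum(dlogis(x2, mu2, s2, log = TRUE)) +
        sum(log(lam_hat * dlogis(x3, mu1, s1) +
                (1 - lam_hat) * dlogis(x3, mu2, s2))) -
        log(s1) - log(s2)
    }
    # One Metropolis-Hastings update with the proposal (13);
    # th = (mu1, mu2, sigma1, sigma2) is the current state.
    mh_step <- function(th, x1, x2, x3, lam_hat) {
      prop <- c(rnorm(2, th[1:2], 1), rgamma(2, shape = th[3:4], rate = 1))
      # the normal proposal is symmetric; only the gamma part needs correcting
      log_q_fwd <- sum(dgamma(prop[3:4], shape = th[3:4],  rate = 1, log = TRUE))
      log_q_bwd <- sum(dgamma(th[3:4],  shape = prop[3:4], rate = 1, log = TRUE))
      log_acc <- log_post(prop, x1, x2, x3, lam_hat) + log_q_bwd -
                 log_post(th,   x1, x2, x3, lam_hat) - log_q_fwd
      if (log(runif(1)) < log_acc) prop else th
    }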

3. Real Data Example

In this section, we apply the proposed posterior p-value to the real halibut dataset studied by Hosmer [1], which was provided by the International Halibut Commission in Seattle, Washington. The dataset consists of the lengths of 208 halibut caught on one of their research cruises, of which 134 are female and 74 are male. The data are summarized by Karunamuni and Wu [27]. Following their method, we randomly select 14 males and 26 females from the samples and regard them as the first and second samples of the mixture model (1). The remaining 168 fish then contain a male proportion of 60/168 ≈ 0.357, approximately equal to the original male proportion of 74/208 = 0.3558. One hundred replications are generated with the same procedure. Hosmer [1] pointed out that each component of the dataset can be fitted by a normal distribution. A problem of interest is whether sex affects the length of the halibut.
To test the homogeneity, for each replication we first use the EM algorithm to estimate $\lambda$ and then use the reject-ABC method to generate 8000 posterior samples, choosing a moderate threshold, $\varepsilon$, to balance accuracy against computational cost. Over the 100 replications, the mean estimate of the male proportion, $\lambda$, is 0.3381, with a mean squared error of 0.0045, which illustrates the accuracy of our EM algorithm. The estimates of the location and scale parameters of the male halibut are $\hat{\mu}_1 = 96.655$ and $\hat{\sigma}_1 = 12.983$, while those of the females are $\hat{\mu}_2 = 118.806$ and $\hat{\sigma}_2 = 9.077$. These are close to the estimates of Ren et al. [19]. For the hypothesis test of $f_1 = f_2$, we calculate the posterior p-value for each of the 100 replications. Given the significance level $\alpha = 0.05$, all the p-values are less than $\alpha$. Thus, the null hypothesis is rejected, which indicates an association between the sex and length of the halibut.

4. Simulation Study

In this section, we present simulation studies of the cases discussed above. We compare the results of the posterior p-value (7) under different sampling methods with the generalized fiducial method proposed by Ren et al. [19]. As the simulations show, the proposed posterior p-value largely improves the testing of homogeneity. The R programming language is used for all calculations and simulations.

4.1. Normal Case

When $f_1$ and $f_2$ are normal density functions, we compare the results of three different tests. The first two are the proposed posterior p-value using the two-block Gibbs sampling and reject-ABC sampling methods, respectively; the third is the generalized fiducial method proposed by [19]. In the following tables, the first two are denoted by "$T_G$" and "$T_R$", while the last is denoted by "G". We fix $f_1 = N(0, 1)$ while $f_2$ is set to $N(0, 1)$, $N(1, 1)$, $N(0, 1.5^2)$ or $N(1, 1.5^2)$. For each pair $(f_1, f_2)$ we consider $\lambda = 0.3, 0.5, 0.7$ and different sample sizes $(n_1, n_2, n_3)$. We simulate $N = 10{,}000$ repetitions for each case. For the Gibbs sampling, we retain 3000 samples after a burn-in of 2000. For the reject-ABC sampling method, we first calculate the estimate of $\lambda$ and accept 4000 parameters with $\varepsilon$ set to $n_3 / 2$. We then calculate the posterior p-value from the samples, set the significance level to $\alpha = 0.05$, and reject the null hypothesis when the posterior p-value is below $\alpha$. The results are shown in Table 1, Table 2, Table 3 and Table 4. We further provide the QQ-plot of $T_R$ in Figure 1, which supports Theorem 2. In each table, the first rows are the cases of $(n_1, n_2, n_3) = (10, 10, 10)$, $(20, 20, 20)$ and $(30, 30, 30)$, while the last rows are the cases of $(10, 20, 30)$, $(30, 20, 10)$ and $(15, 25, 150)$. A sketch of one simulated repetition, combining the pieces above, follows.
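Putting the earlier sketches together, one repetition of the normal-case study could look as follows; em_normal, abc_normal and posterior_pvalue are the illustrative helpers defined above, not functions from the paper.

    # One repetition of the normal-case study at lambda = 0.5, f2 = N(1, 1).
    n1 <- 20; n2 <- 20; n3 <- 20
    x1 <- rnorm(n1, 0, 1)
    x2 <- rnorm(n2, 1, 1)
    x3 <- ifelse(rbinom(n3, 1, 0.5) == 1, rnorm(n3, 0, 1), rnorm(n3, 1, 1))
    est    <- em_normal(x1, x2, x3, mean(x1), sd(x1), mean(x2), sd(x2), lam = 0.5)
    draws  <- abc_normal(x1, x2, x3, lam_hat = est$lambda, eps = n3 / 2)
    reject <- posterior_pvalue(draws) < 0.05   # decision at level alpha = 0.05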
We can see from the results that the posterior p-value largely improves the testing of homogeneity in the normal cases. The Type-I error is controlled as well as with the generalized fiducial method. Moreover, our method significantly improves the power, especially when the scale parameters differ. The reject-ABC sampling method has the advantage of lower computational cost compared with the two-block Gibbs sampling method; however, its power is smaller when $n_3$ is much larger than $n_1$ and $n_2$. Thus, we recommend the reject-ABC sampling method when the sample size is small or moderate and two-block Gibbs sampling when the sample size is large.

4.2. General Case

For the general case, we assume that $f_1$ and $f_2$ are logistic density functions, with location and scale parameters set as in the normal case, and we simulate 10,000 repetitions for each sample size. We compare three methods in this simulation. The first two are the generalized fiducial method proposed by Ren et al. [19] and our posterior p-value using two-block Gibbs sampling, denoted by "G" and "$T_G$" as in the previous simulation. The third is the posterior p-value using the MH algorithm, denoted by "$T_M$". We first calculate the MLE of $\lambda$ using the EM algorithm and then run the Metropolis–Hastings algorithm to obtain 12,000 samples after a burn-in of 8000. To reduce the dependence between samples, we keep the first of every three samples, which leaves 4000 samples used to calculate the posterior p-value. The algorithm is natural and seems feasible; however, Table 5 shows that with this sampling method the results are rather conservative: given the significance level $\alpha = 0.05$, the Type-I error of $T_M$ is always much smaller than 0.05, which also makes the power of $T_M$ smaller than that of the other two methods when $f_1 \neq f_2$. We find that the two-block Gibbs sampling method successfully solves this problem: the Type-I error of "$T_G$" is well controlled, while the power is largely improved compared with the generalized fiducial method. The results are shown in Table 6, Table 7 and Table 8. We also provide the QQ-plot of "$T_G$" in Figure 2.

5. Conclusions

In this paper, we propose a new posterior p-value for testing the homogeneity of the three-sample problem. We define the regular location-scale family and assume that $f_1$ and $f_2$ belong to the same such family, so that testing homogeneity is equivalent to testing the equality of the location and scale parameters. We use Bayes' theorem to obtain the posterior distribution of the parameters, establish a Bernstein–von Mises theorem for multiple samples, and propose the posterior p-value for testing the equality of the parameters. To sample from the posterior distribution, we compare different sampling methods; the simulation studies illustrate that the reject-ABC sampling method is a good choice for the normal case, while two-block Gibbs sampling is better for the general ones. Finally, note that we transform the homogeneity hypotheses into the form (6); with a different matrix $A$, our method therefore generalizes to a variety of hypotheses.

Author Contributions

Conceptualization, X.X.; methodology, X.X.; software, Y.W.; validation, Y.W. and X.X.; formal analysis, Y.W. and X.X.; writing—original draft preparation, Y.W.; writing—review and editing, Y.W.; visualization, Y.W.; supervision, X.X.; project administration, X.X.; funding acquisition, X.X. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China under Grant No. 11471030 and No. 11471035.

Institutional Review Board Statement

The study did not require ethical approval.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The authors are very grateful to the referees and to the assistant editor for their kind and professional remarks.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Appendix A

Proof of Lemma 1.
(1) First, we show that
$$\lim_{x \to -\infty} f(x) = \lim_{x \to +\infty} f(x) = 0. \qquad (A1)$$
By the third condition in Definition 1, there exists $M > 0$ such that $x^2 |f'(x)| < 1$ whenever $|x| > M$. Let $\{x_n, n = 1, 2, \dots\}$ be a sequence with $\lim_{n \to \infty} x_n = -\infty$ and let $a_n = f(x_n)$. Then, for $m$ and $n$ sufficiently large that $|x_n| > M$ and $|x_m| > M$, we have
$$|a_m - a_n| = |f(x_m) - f(x_n)| = \left| \int_{x_n}^{x_m} f'(x) \, dx \right| \le \left| \int_{x_n}^{x_m} \frac{1}{x^2} \, dx \right| = \left| \frac{1}{x_m} - \frac{1}{x_n} \right|.$$
This indicates that $|a_m - a_n| \to 0$ as $m, n \to \infty$, so $\{a_n, n = 1, 2, \dots\}$ is a Cauchy sequence and must converge. Since $\{x_n\}$ is an arbitrary sequence, the limit $\lim_{x \to -\infty} f(x)$ exists; and since $f(x)$ is a continuous density function, $\lim_{x \to -\infty} f(x) = 0$. Similarly, $\lim_{x \to +\infty} f(x) = 0$.
By condition (3) in Definition 1, for arbitrary $\varepsilon > 0$ there exists $B > 0$ such that $x^2 |f'(x)| < \varepsilon$ whenever $|x| > B$. Then, by Equation (A1), if $x < -B$ we have
$$|x f(x)| = \left| x \int_{-\infty}^{x} f'(t) \, dt \right| \le |x| \int_{-\infty}^{x} |f'(t)| \, dt \le |x| \int_{-\infty}^{x} \frac{\varepsilon}{t^2} \, dt = \varepsilon,$$
which means that $\lim_{x \to -\infty} x f(x) = 0$. If $x > B$, we have
$$|x f(x)| = \left| x \int_{x}^{+\infty} f'(t) \, dt \right| \le |x| \int_{x}^{+\infty} |f'(t)| \, dt \le |x| \int_{x}^{+\infty} \frac{\varepsilon}{t^2} \, dt = \varepsilon,$$
which means that $\lim_{x \to +\infty} x f(x) = 0$.
(2) From (A1), we get
$$\int_{-\infty}^{+\infty} f'(x) \, dx = f(x) \Big|_{-\infty}^{+\infty} = 0.$$
(3) Integrating by parts,
$$\int_{-\infty}^{+\infty} x f'(x) \, dx = x f(x) \Big|_{-\infty}^{+\infty} - \int_{-\infty}^{+\infty} f(x) \, dx.$$
Then, by part (1) and the fact that $f(x)$ is a density function,
$$\int_{-\infty}^{+\infty} x f'(x) \, dx = -1. \qquad (A2)$$
(4) Since condition (3) of Definition 1 implies $\lim_{x \to -\infty} f'(x) = \lim_{x \to +\infty} f'(x) = 0$, it is easy to get
$$\int_{-\infty}^{+\infty} f''(x) \, dx = f'(x) \Big|_{-\infty}^{+\infty} = 0.$$
(5) Likewise, condition (3) gives
$$\lim_{x \to -\infty} x f'(x) = \lim_{x \to +\infty} x f'(x) = 0,$$
so
$$\int_{-\infty}^{+\infty} x f''(x) \, dx = x f'(x) \Big|_{-\infty}^{+\infty} - \int_{-\infty}^{+\infty} f'(x) \, dx = 0 - f(x) \Big|_{-\infty}^{+\infty} = 0,$$
where the last equality uses Equation (A1).
(6) Similarly,
$$\int_{-\infty}^{+\infty} x^2 f''(x) \, dx = x^2 f'(x) \Big|_{-\infty}^{+\infty} - 2 \int_{-\infty}^{+\infty} x f'(x) \, dx = -2 \int_{-\infty}^{+\infty} x f'(x) \, dx.$$
Then, by Equation (A2), we have
$$\int_{-\infty}^{+\infty} x^2 f''(x) \, dx = 2. \qquad \square$$
Proof of Proposition 1.
(1) The log-likelihood function is
$$l(\xi; x) = \log f(x; \xi) = -\log \sigma + \log f\left( \frac{x - \mu}{\sigma} \right).$$
We can then get the derivatives
$$\frac{\partial l(\xi; x)}{\partial \mu} = -\frac{1}{\sigma} \cdot \frac{f'\left( \frac{x - \mu}{\sigma} \right)}{f\left( \frac{x - \mu}{\sigma} \right)}, \qquad \frac{\partial l(\xi; x)}{\partial \sigma} = -\frac{1}{\sigma} - \frac{1}{\sigma} \cdot \frac{x - \mu}{\sigma} \cdot \frac{f'\left( \frac{x - \mu}{\sigma} \right)}{f\left( \frac{x - \mu}{\sigma} \right)}.$$
Using parts (2) and (3) of Lemma 1, the expectations of the first derivatives are
$$E_\xi \left[ \frac{\partial l(\xi; x)}{\partial \mu} \right] = -\frac{1}{\sigma^2} \int f'\left( \frac{x - \mu}{\sigma} \right) dx = -\frac{1}{\sigma} \int f'(y) \, dy = 0, \qquad E_\xi \left[ \frac{\partial l(\xi; x)}{\partial \sigma} \right] = -\frac{1}{\sigma} - \frac{1}{\sigma} \int y f'(y) \, dy = 0.$$
(2) The elements of the Fisher information matrix are computed as
$$E_\xi \left[ \left( \frac{\partial l(\xi; x)}{\partial \mu} \right)^2 \right] = \int \frac{1}{\sigma^2} \frac{f'\left( \frac{x - \mu}{\sigma} \right)^2}{f\left( \frac{x - \mu}{\sigma} \right)^2} \cdot \frac{1}{\sigma} f\left( \frac{x - \mu}{\sigma} \right) dx = \frac{1}{\sigma^2} \int \frac{[f'(y)]^2}{f(y)} \, dy = \frac{C_{11}(f)}{\sigma^2},$$
$$E_\xi \left[ \frac{\partial l(\xi; x)}{\partial \mu} \cdot \frac{\partial l(\xi; x)}{\partial \sigma} \right] = \frac{1}{\sigma^2} \int \left[ f'(y) + \frac{y [f'(y)]^2}{f(y)} \right] dy = \frac{1}{\sigma^2} \int \frac{y [f'(y)]^2}{f(y)} \, dy = \frac{C_{12}(f)}{\sigma^2},$$
$$E_\xi \left[ \left( \frac{\partial l(\xi; x)}{\partial \sigma} \right)^2 \right] = \frac{1}{\sigma^2} \int \left( 1 + \frac{y f'(y)}{f(y)} \right)^2 f(y) \, dy = \frac{1}{\sigma^2} \left[ \int \frac{y^2 [f'(y)]^2}{f(y)} \, dy - 1 \right] = \frac{C_{22}(f)}{\sigma^2},$$
where the last equality uses $\int f(y) \, dy = 1$ and $\int y f'(y) \, dy = -1$. So the equation holds, and by the fourth condition in Definition 1,
$$I_f(\xi) = \frac{1}{\sigma^2} C(f) < \infty.$$
Now we show that $C(f) > 0$. Suppose that $|C(f)| = 0$. Then there exists a nonzero vector $a = (a_1, a_2)^\top$ such that $a^\top C(f) a = 0$, which also means that
$$a^\top \frac{\partial l(\xi; x)}{\partial \xi} \bigg|_{\xi = (0, 1)} = 0, \quad a.e. \ f(x).$$
Since $f(x) > 0$, we have
$$a^\top \frac{\partial l(\xi; x)}{\partial \xi} \bigg|_{\xi = (0, 1)} = -a_1 \frac{f'(x)}{f(x)} - a_2 \left( 1 + \frac{x f'(x)}{f(x)} \right) = 0, \quad a.e. \ L,$$
where $L$ is the Lebesgue measure. Because $a$ is nonzero, $a_2 \neq 0$: otherwise $a_1 \neq 0$ and $f'(x) = 0$ a.e., which is impossible for a density. Then
$$(x + b) \frac{f'(x)}{f(x)} + 1 = 0,$$
where $b = a_1 / a_2$. When $x > -b$,
$$(\log f(x))' = \frac{f'(x)}{f(x)} = -\frac{1}{x + b}, \qquad \log f(x) = -\ln(x + b) + D,$$
where $D$ is a constant. Then $f(x) = e^D / (x + b)$, which contradicts the first statement of Lemma 1, since $x f(x) \to e^D \neq 0$. Thus, the assumption $|C(f)| = 0$ is not true, and $I_f(\xi) > 0$.
(3) We first calculate the second derivatives:
$$\frac{\partial^2 l(\xi; x)}{\partial \mu^2} = \frac{1}{\sigma^2} \cdot \frac{f''\left( \frac{x - \mu}{\sigma} \right) f\left( \frac{x - \mu}{\sigma} \right) - f'\left( \frac{x - \mu}{\sigma} \right)^2}{f\left( \frac{x - \mu}{\sigma} \right)^2},$$
$$\frac{\partial^2 l(\xi; x)}{\partial \mu \, \partial \sigma} = \frac{1}{\sigma^2} \left[ \frac{f'\left( \frac{x - \mu}{\sigma} \right)}{f\left( \frac{x - \mu}{\sigma} \right)} + \frac{x - \mu}{\sigma} \cdot \frac{f''\left( \frac{x - \mu}{\sigma} \right)}{f\left( \frac{x - \mu}{\sigma} \right)} - \frac{x - \mu}{\sigma} \cdot \frac{f'\left( \frac{x - \mu}{\sigma} \right)^2}{f\left( \frac{x - \mu}{\sigma} \right)^2} \right],$$
$$\frac{\partial^2 l(\xi; x)}{\partial \sigma^2} = \frac{1}{\sigma^2} + \frac{2(x - \mu)}{\sigma^3} \cdot \frac{f'\left( \frac{x - \mu}{\sigma} \right)}{f\left( \frac{x - \mu}{\sigma} \right)} + \frac{(x - \mu)^2}{\sigma^4} \left[ \frac{f''\left( \frac{x - \mu}{\sigma} \right)}{f\left( \frac{x - \mu}{\sigma} \right)} - \frac{f'\left( \frac{x - \mu}{\sigma} \right)^2}{f\left( \frac{x - \mu}{\sigma} \right)^2} \right].$$
Then, by Lemma 1, we have
$$E_\xi \left[ \frac{\partial^2 l(\xi; x)}{\partial \mu^2} \right] = -\frac{C_{11}(f)}{\sigma^2}, \qquad E_\xi \left[ \frac{\partial^2 l(\xi; x)}{\partial \mu \, \partial \sigma} \right] = -\frac{C_{12}(f)}{\sigma^2}, \qquad E_\xi \left[ \frac{\partial^2 l(\xi; x)}{\partial \sigma^2} \right] = -\frac{C_{22}(f)}{\sigma^2}. \qquad \square$$
Proof of Proposition 2.
(1) First, we calculate the derivatives:
$$\frac{\partial \log g(x; \theta)}{\partial \mu_1} = -\frac{\lambda_0}{\sigma_1^2} f'\left( \frac{x - \mu_1}{\sigma_1} \right) \Big/ g(x; \theta), \qquad \frac{\partial \log g(x; \theta)}{\partial \mu_2} = -\frac{1 - \lambda_0}{\sigma_2^2} f'\left( \frac{x - \mu_2}{\sigma_2} \right) \Big/ g(x; \theta),$$
$$\frac{\partial \log g(x; \theta)}{\partial \sigma_1} = \left[ -\frac{\lambda_0}{\sigma_1^2} f\left( \frac{x - \mu_1}{\sigma_1} \right) - \frac{\lambda_0 (x - \mu_1)}{\sigma_1^3} f'\left( \frac{x - \mu_1}{\sigma_1} \right) \right] \Big/ g(x; \theta),$$
$$\frac{\partial \log g(x; \theta)}{\partial \sigma_2} = \left[ -\frac{1 - \lambda_0}{\sigma_2^2} f\left( \frac{x - \mu_2}{\sigma_2} \right) - \frac{(1 - \lambda_0)(x - \mu_2)}{\sigma_2^3} f'\left( \frac{x - \mu_2}{\sigma_2} \right) \right] \Big/ g(x; \theta).$$
Then, by part (2) of Lemma 1,
$$E_\theta \left[ \frac{\partial \log g(x; \theta)}{\partial \mu_1} \right] = -\frac{\lambda_0}{\sigma_1^2} \int f'\left( \frac{x - \mu_1}{\sigma_1} \right) dx = -\frac{\lambda_0}{\sigma_1} \int f'(y) \, dy = 0,$$
and by Lemma 1(3),
$$E_\theta \left[ \frac{\partial \log g(x; \theta)}{\partial \sigma_1} \right] = \int \left[ -\frac{\lambda_0}{\sigma_1^2} f\left( \frac{x - \mu_1}{\sigma_1} \right) - \frac{\lambda_0 (x - \mu_1)}{\sigma_1^3} f'\left( \frac{x - \mu_1}{\sigma_1} \right) \right] dx = -\frac{\lambda_0}{\sigma_1} \left[ 1 + \int y f'(y) \, dy \right] = 0.$$
Similarly, we can prove that the expectations of the derivatives with respect to $\mu_2$ and $\sigma_2$ vanish.
(2) For the location parameters, using the bound $g(x; \theta) \ge \frac{\lambda_0}{\sigma_1} f\left( \frac{x - \mu_1}{\sigma_1} \right)$, we have
$$E_\theta \left[ \left( \frac{\partial \log g(x; \theta)}{\partial \mu_1} \right)^2 \right] = \int \frac{\lambda_0^2}{\sigma_1^4} f'\left( \frac{x - \mu_1}{\sigma_1} \right)^2 \Big/ g(x; \theta) \, dx \le \int \frac{\lambda_0^2}{\sigma_1^4} f'\left( \frac{x - \mu_1}{\sigma_1} \right)^2 \Big/ \frac{\lambda_0}{\sigma_1} f\left( \frac{x - \mu_1}{\sigma_1} \right) dx = \frac{\lambda_0}{\sigma_1^2} \int \frac{[f'(y)]^2}{f(y)} \, dy = \frac{\lambda_0}{\sigma_1^2} C_{11}(f) < \infty.$$
Similarly, we have
$$E_\theta \left[ \left( \frac{\partial \log g(x; \theta)}{\partial \mu_2} \right)^2 \right] \le \frac{1 - \lambda_0}{\sigma_2^2} C_{11}(f) < \infty.$$
Then, for the scale parameters, we have
$$E_\theta \left[ \left( \frac{\partial \log g(x; \theta)}{\partial \sigma_1} \right)^2 \right] = \int \frac{\lambda_0^2}{\sigma_1^2} \left[ \frac{1}{\sigma_1} f\left( \frac{x - \mu_1}{\sigma_1} \right) + \frac{x - \mu_1}{\sigma_1^2} f'\left( \frac{x - \mu_1}{\sigma_1} \right) \right]^2 \Big/ g(x; \theta) \, dx \le \frac{\lambda_0}{\sigma_1^2} \int \left( 1 + \frac{y f'(y)}{f(y)} \right)^2 f(y) \, dy = \frac{\lambda_0}{\sigma_1^2} C_{22}(f) < \infty,$$
$$E_\theta \left[ \left( \frac{\partial \log g(x; \theta)}{\partial \sigma_2} \right)^2 \right] \le \frac{1 - \lambda_0}{\sigma_2^2} C_{22}(f) < \infty.$$
The off-diagonal entries are finite by the Cauchy–Schwarz inequality, so $I(\theta) < \infty$.
(3) For the second derivative with respect to $\mu_1$,
$$\frac{\partial^2 \log g(x; \theta)}{\partial \mu_1^2} = \frac{\lambda_0}{\sigma_1^3} f''\left( \frac{x - \mu_1}{\sigma_1} \right) \Big/ g(x; \theta) - \frac{\lambda_0^2}{\sigma_1^4} f'\left( \frac{x - \mu_1}{\sigma_1} \right)^2 \Big/ g(x; \theta)^2.$$
The expectation of the first term is $\frac{\lambda_0}{\sigma_1^2} \int f''(y) \, dy = 0$ by part (4) of Lemma 1, so
$$E_\theta \left[ \frac{\partial^2 \log g(x; \theta)}{\partial \mu_1^2} \right] = -E_\theta \left[ \frac{\lambda_0^2}{\sigma_1^4} f'\left( \frac{x - \mu_1}{\sigma_1} \right)^2 \Big/ g(x; \theta)^2 \right] = -E_\theta \left[ \left( \frac{\partial \log g(x; \theta)}{\partial \mu_1} \right)^2 \right].$$
The same procedure can be applied to the other nine second-order derivatives, so the conclusion holds. $\square$
Proof of Theorem 1.
First, we invoke the Bernstein–von Mises theorem for multiple samples; see Theorem 2 in Long and Xu [28]. Besides Assumptions 1 to 4 above, it requires the following assumption.
Assumption A1.
For all $i = 1, 2, \dots, k$, the density function $f_i(x \mid \theta)$ of the population $G_i$ satisfies the following conditions:
(a) The parameter space of $\theta$ contains an open subset, $\omega \subset \Omega$, in which the true value is included.
(b) The set $A_i = \{x : f_i(x \mid \theta) > 0\}$ is independent of $\theta$.
(c) For almost all $x \in A_i$, $f_i(x \mid \theta)$, as a function of $\theta$, admits continuous second derivatives $\frac{\partial^2}{\partial \theta_j \partial \theta_h} f_i(x \mid \theta)$, $j, h = 1, 2, \dots, d$, for all $\theta \in \omega$.
(d) Denote by $I^{(i)}(\theta)$ the Fisher information matrix of $f_i(x \mid \theta)$. The first and second derivatives of the logarithm of $f_i(x \mid \theta)$ satisfy the equations
$$E_\theta \left[ \frac{\partial}{\partial \theta_j} \log f_i(x \mid \theta) \right] = 0, \quad j = 1, \dots, d,$$
$$I_{jh}^{(i)}(\theta) = E_\theta \left[ \frac{\partial}{\partial \theta_j} \log f_i(x \mid \theta) \cdot \frac{\partial}{\partial \theta_h} \log f_i(x \mid \theta) \right] = -E_\theta \left[ \frac{\partial^2}{\partial \theta_j \partial \theta_h} \log f_i(x \mid \theta) \right], \quad j, h = 1, 2, \dots, d.$$
(e) Suppose the sample size $n_i$ of $G_i$ satisfies $n_i / n \to r_i \in (0, 1)$ as $n \to \infty$. Let
$$I(\theta) = \sum_{i=1}^{k} r_i I^{(i)}(\theta).$$
We assume that all entries of $I(\theta)$ are finite and that $I(\theta)$ is positive definite.
Then, by Definition 1, Propositions 1 and 2, and Assumptions 1–4 and A1, Theorem 1 holds. It should be noted that, since the prior is $\pi(\theta) = 1 / (\sigma_1 \sigma_2)$, its second moment does not exist. Therefore, we take $k_0$ observations from each of the first two samples and combine their likelihood with $\pi(\theta)$ to form the new prior, a technique commonly used in the analysis of big data. $\square$
Proof of Theorem 2.
First, we present two conclusions:
$$\sqrt{n} (\theta_B - T_n) \overset{P}{\to} 0, \qquad n \Sigma_B \overset{P}{\to} I^{-1}(\theta_0). \qquad (A3)$$
Let $E_P[g(\theta)]$ denote the expectation of $g(\theta)$ under the distribution $P$. Then
$$\sqrt{n} (\theta_B - T_n) = \sqrt{n} \left( E_\pi[\theta] - T_n \right) = E_\pi\left[ \sqrt{n}(\theta - T_n) \right] = E_\pi[t] - E_{N(0, I^{-1}(\theta_0))}[t],$$
since the limiting normal distribution has mean zero, and therefore
$$\left\| \sqrt{n} (\theta_B - T_n) \right\| \le \int \|t\| \left| \pi(t \mid x) - (2\pi)^{-2} |I(\theta_0)|^{\frac{1}{2}} e^{-\frac{1}{2} t^\top I(\theta_0) t} \right| dt.$$
By Theorem 1, the right-hand side converges in probability to 0. Next,
$$n \Sigma_B = n E_\pi \left[ (\theta - \theta_B)(\theta - \theta_B)^\top \right] = n E_\pi \left[ (\theta - T_n + T_n - \theta_B)(\theta - T_n + T_n - \theta_B)^\top \right] = E_\pi[t t^\top] + E_\pi[t] \left[ \sqrt{n}(T_n - \theta_B) \right]^\top + \left[ \sqrt{n}(T_n - \theta_B) \right] E_\pi[t]^\top + \left[ \sqrt{n}(T_n - \theta_B) \right] \left[ \sqrt{n}(T_n - \theta_B) \right]^\top.$$
From the conclusion above we have
$$\sqrt{n}(T_n - \theta_B) \overset{P}{\to} 0, \qquad E_\pi[t] \overset{P}{\to} 0,$$
and, by Theorem 1,
$$E_\pi[t t^\top] \overset{P}{\to} I^{-1}(\theta_0);$$
thus,
$$n \Sigma_B \overset{P}{\to} I^{-1}(\theta_0).$$
We then get, using $\sqrt{n}(\theta - \theta_B) = t - \sqrt{n}(\theta_B - T_n)$,
$$(\theta - \theta_B)^\top A^\top (A \Sigma_B A^\top)^{-1} A (\theta - \theta_B) = \left[ \sqrt{n}(\theta - \theta_B) \right]^\top A^\top \left( A (n \Sigma_B) A^\top \right)^{-1} A \left[ \sqrt{n}(\theta - \theta_B) \right] = t^\top A^\top \left( A (n \Sigma_B) A^\top \right)^{-1} A t - 2 t^\top A^\top \left( A (n \Sigma_B) A^\top \right)^{-1} A \sqrt{n}(\theta_B - T_n) + \left[ \sqrt{n}(\theta_B - T_n) \right]^\top A^\top \left( A (n \Sigma_B) A^\top \right)^{-1} A \left[ \sqrt{n}(\theta_B - T_n) \right].$$
The expression above has the same asymptotic distribution as $t^\top A^\top (A (n \Sigma_B) A^\top)^{-1} A t$, where $t \sim N_p(0, I^{-1}(\theta_0))$ and $A t \sim N_k(0, A I^{-1}(\theta_0) A^\top)$. From the conclusion above, $A (n \Sigma_B) A^\top \overset{P}{\to} A I^{-1}(\theta_0) A^\top$; thus, we get
$$t^\top A^\top \left( A (n \Sigma_B) A^\top \right)^{-1} A t \overset{d}{\to} \chi^2(k),$$
where the degrees of freedom $k$ equal the number of rows of the matrix $A$. Thus,
$$(\theta - \theta_B)^\top A^\top (A \Sigma_B A^\top)^{-1} A (\theta - \theta_B) \overset{d}{\to} \chi^2(k).$$
Under the null hypothesis,
$$(b - A \theta_B)^\top (A \Sigma_B A^\top)^{-1} (b - A \theta_B) = (b - A T_n)^\top (A \Sigma_B A^\top)^{-1} (b - A T_n) + (\theta_B - T_n)^\top A^\top (A \Sigma_B A^\top)^{-1} A (\theta_B - T_n) - 2 (b - A T_n)^\top (A \Sigma_B A^\top)^{-1} A (\theta_B - T_n).$$
Since $b = A \theta_0$ and $T_n = \theta_0 + \frac{1}{n} I^{-1}(\theta_0) l'(\theta_0)$, the expression above is equal to
$$\left[ \frac{1}{\sqrt{n}} I^{-1}(\theta_0) l'(\theta_0) \right]^\top A^\top \left( A (n \Sigma_B) A^\top \right)^{-1} A \left[ \frac{1}{\sqrt{n}} I^{-1}(\theta_0) l'(\theta_0) \right] + \left[ \sqrt{n}(\theta_B - T_n) \right]^\top A^\top \left( A (n \Sigma_B) A^\top \right)^{-1} A \left[ \sqrt{n}(\theta_B - T_n) \right] + 2 \left[ \frac{1}{\sqrt{n}} I^{-1}(\theta_0) l'(\theta_0) \right]^\top A^\top \left( A (n \Sigma_B) A^\top \right)^{-1} A \left[ \sqrt{n}(\theta_B - T_n) \right].$$
The first term asymptotically follows the $\chi^2(k)$ distribution, since $\frac{1}{\sqrt{n}} I^{-1}(\theta_0) l'(\theta_0) \overset{d}{\to} N(0, I^{-1}(\theta_0))$; the second and third terms tend to 0 in probability by Equation (A3). Thus,
$$p(x) - \left[ 1 - F_k \left( \left[ \frac{1}{\sqrt{n}} I^{-1}(\theta_0) l'(\theta_0) \right]^\top A^\top \left( A (n \Sigma_B) A^\top \right)^{-1} A \left[ \frac{1}{\sqrt{n}} I^{-1}(\theta_0) l'(\theta_0) \right] \right) \right] \overset{P}{\to} 0,$$
where $F_k$ is the cumulative distribution function of $\chi^2(k)$. Since the argument of $F_k$ converges in distribution to $\chi^2(k)$ and $F_k$ is continuous, it follows that
$$p(X) \overset{d}{\to} U(0, 1). \qquad \square$$

References

  1. Hosmer, D.W. A Comparison of Iterative Maximum Likelihood Estimates of the Parameters of a Mixture of Two Normal Distributions Under Three Different Types of Sample. Biometrics 1973, 29, 761–770. [Google Scholar] [CrossRef]
  2. Murray, G.D.; Titterington, D.M. Estimation Problems with Data from a Mixture. J. R. Stat. Soc. Ser. (Appl. Stat.) 1978, 27, 325–334. [Google Scholar] [CrossRef]
  3. Anderson, J.A. Multivariate logistic compounds. Biometrika 1979, 66, 17–26. [Google Scholar] [CrossRef]
  4. Qin, J. Empirical likelihood ratio based confidence intervals for mixture proportions. Ann. Stat. 1999, 27, 1368–1384. [Google Scholar] [CrossRef]
  5. Owen, A. Empirical Likelihood Ratio Confidence Regions. Ann. Stat. 1990, 18, 90–120. [Google Scholar] [CrossRef]
  6. Zou, F.; Fine, J.P.; Yandell, B.S. On empirical likelihood for a semiparametric mixture model. Biometrika 2002, 89, 61–75. [Google Scholar] [CrossRef]
  7. Zhang, B. Assessing goodness-of-fit of generalized logit models based on case-control data. J. Multivar. Anal. 2002, 82, 17–38. [Google Scholar] [CrossRef]
  8. Inagaki, K.; Komaki, F. A modification of profile empirical likelihood for the exponential-tilt model. Stat. Probab. Lett. 2010, 80, 997–1004. [Google Scholar] [CrossRef]
  9. Tan, Z. A note on profile likelihood for exponential tilt mixture models. Biometrika 2009, 96, 229–236. [Google Scholar] [CrossRef]
  10. Liang, K.Y.; Rathouz, P.J. Hypothesis Testing Under Mixture Models: Application to Genetic Linkage Analysis. Biometrics 1999, 55, 65–74. [Google Scholar] [CrossRef]
  11. Duan, R.; Ning, Y.; Wang, S.; Lindsay, B.G.; Carroll, R.J.; Chen, Y. A fast score test for generalized mixture models. Biometrics 2019, 76, 811–820. [Google Scholar] [CrossRef] [PubMed]
  12. Fu, Y.; Chen, J.; Kalbfleisch, J.D. Testing for homogeneity in genetic linkage analysis. Stat. Sin. 2006, 16, 805–823. [Google Scholar]
  13. Chen, H.; Chen, J.; Kalbfleisch, J.D. A Modified Likelihood Ratio Test for Homogeneity in Finite Mixture Models. J. R. Stat. Soc. Ser. Stat. Methodol. 2002, 63, 19–29. [Google Scholar] [CrossRef]
  14. Chen, H.; Chen, J.; Kalbfleisch, J.D. Testing for a Finite Mixture Model with Two Components. J. R. Stat. Soc. Ser. Stat. Methodol. 2003, 66, 95–115. [Google Scholar] [CrossRef]
  15. Chen, J.; Li, P. Hypothesis test for normal mixture models: The EM approach. Ann. Stat. 2009, 37, 2523–2542. [Google Scholar] [CrossRef]
  16. Li, P.; Liu, Y.; Qin, J. Semiparametric Inference in a Genetic Mixture Model. J. Am. Stat. Assoc. 2017, 112, 1250–1260. [Google Scholar] [CrossRef]
  17. Li, S.; Chen, J.; Guo, J.; Jing, B.Y.; Tsang, S.Y.; Xue, H. Likelihood Ratio Test for Multi-Sample Mixture Model and Its Application to Genetic Imprinting. J. Am. Stat. Assoc. 2015, 110, 867–877. [Google Scholar] [CrossRef]
  18. Liu, G.; Li, P.; Liu, Y.; Pu, X. Hypothesis testing for quantitative trait locus effects in both location and scale in genetic backcross studies. Scand. J. Stat. 2020, 47, 1064–1089. [Google Scholar] [CrossRef]
  19. Ren, P.; Liu, G.; Pu, X. Generalized fiducial methods for testing the homogeneity of a three-sample problem with a mixture structure. J. Appl. Stat. 2023, 50, 1094–1114. [Google Scholar] [CrossRef]
  20. Hannig, J.; Iyer, H.; Lai, R.C.S.; Lee, T.C.M. Generalized Fiducial Inference: A Review and New Results. J. Am. Stat. Assoc. 2016, 111, 1346–1361. [Google Scholar] [CrossRef]
  21. Dempster, A.P.; Laird, N.M.; Rubin, D.B. Maximum likelihood from incomplete data via the EM algorithm (with discussion). J. R. Stat. Soc. Ser. B 1977, 39, 1–22. [Google Scholar]
  22. McLachlan, G.J.; Krishnan, T. The EM Algorithm and Extensions; Wiley Series in Probability and Statistics: Applied Probability and Statistics; A Wiley-Interscience Publication; John Wiley & Sons, Inc.: New York, NY, USA, 1997; pp. xviii+274. [Google Scholar]
  23. Fisher, R.A. The fiducial argument in statistical inference. Ann. Eugen. 1935, 6, 391–398. [Google Scholar] [CrossRef]
  24. Metropolis, N.; Rosenbluth, A.W.; Rosenbluth, M.N.; Teller, A.H.; Teller, E. Equation of State Calculations by Fast Computing Machines. J. Chem. Phys. 1953, 21, 1087–1092. [Google Scholar] [CrossRef]
  25. Hastings, W.K. Monte Carlo Sampling Methods Using Markov Chains and Their Applications. Biometrika 1970, 57, 97–109. [Google Scholar] [CrossRef]
  26. Owen, A.B.; Glynn, P.W. (Eds.) Monte Carlo and Quasi-Monte Carlo Methods; Springer International Publishing: Cham, Switzerland, 2018. [Google Scholar]
  27. Karunamuni, R.; Wu, J. Minimum Hellinger distance estimation in a nonparametric mixture model. J. Stat. Plan. Inference 2009, 139, 1118–1133. [Google Scholar] [CrossRef]
  28. Long, Y.; Xu, X. Bayesian decision rules to classification problems. Aust. N. Z. J. Stat. 2021, 63, 394–415. [Google Scholar] [CrossRef]
Figure 1. QQ-plot of the normal cases.
Figure 2. QQ-plot of the logistic cases.
Table 1. Type-I errors (%) of the three methods in normal cases with nominal level α = 0.05.

(n1, n2, n3)     G      T_G    T_R
(10, 10, 10)     3.89   3.92   4.47
(20, 20, 20)     4.63   4.84   4.72
(30, 30, 30)     4.63   4.75   5.42
(10, 20, 30)     4.36   4.85   4.36
(30, 20, 10)     4.46   4.46   4.79
(10, 10, 100)    3.92   4.64   4.77
(15, 25, 150)    4.64   5.65   4.51
Table 2. Power comparison (%) of the cases when f1 = N(0, 1) and f2 = N(1, 1).

                 λ = 0.3               λ = 0.5               λ = 0.7
(n1, n2, n3)     G     T_G   T_R       G     T_G   T_R       G     T_G   T_R
(10, 10, 10)     32.2  33.9  36.8      32.2  33.9  36.5      33.3  35.9  36.5
(20, 20, 20)     70.6  74.2  76.3      70.3  74.0  76.0      72.3  76.2  75.6
(30, 30, 30)     90.2  93.3  92.1      89.2  91.2  92.2      90.7  92.2  92.0
(10, 20, 30)     46.3  50.9  46.4      51.3  56.1  45.6      56.8  63.2  45.6
(30, 20, 10)     84.3  85.8  85.3      81.3  83.3  85.4      82.8  84.3  85.2
(10, 10, 100)    35.4  43.0  41.8      33.1  41.9  41.7      34.3  44.5  41.2
(15, 25, 150)    63.5  70.7  71.5      66.8  75.1  74.0      75.8  82.3  72.6
Table 3. Power comparison (%) of the cases when f1 = N(0, 1) and f2 = N(0, 1.5^2).

                 λ = 0.3               λ = 0.5               λ = 0.7
(n1, n2, n3)     G     T_G   T_R       G     T_G   T_R       G     T_G   T_R
(10, 10, 10)     11.8  20.8  22.5      13.9  22.3  22.8      14.0  24.3  22.4
(20, 20, 20)     28.4  39.9  43.8      29.7  41.6  44.6      32.8  44.5  43.4
(30, 30, 30)     42.2  57.1  58.5      44.1  56.3  58.3      45.5  57.4  58.5
(10, 20, 30)     14.3  23.8  26.3      15.5  26.8  26.5      20.9  33.5  26.8
(30, 20, 10)     38.4  50.5  50.1      38.6  49.8  51.3      38.9  49.2  50.7
(10, 10, 100)    9.4   18.0  25.2      14.4  22.2  23.7      18.5  27.6  25.4
(15, 25, 150)    20.9  35.4  35.6      28.2  42.5  37.7      35.3  48.8  38.4
Table 4. Power comparison (%) of the cases when f1 = N(0, 1) and f2 = N(1, 1.5^2).

                 λ = 0.3               λ = 0.5               λ = 0.7
(n1, n2, n3)     G     T_G   T_R       G     T_G   T_R       G     T_G   T_R
(10, 10, 10)     38.9  42.6  46.6      38.2  42.1  41.7      42.0  46.2  42.6
(20, 20, 20)     77.3  79.7  82.3      75.2  78.0  80.6      80.0  82.6  80.8
(30, 30, 30)     93.6  94.2  95.5      93.8  94.0  93.3      93.9  94.5  94.4
(10, 20, 30)     47.2  50.0  55.2      55.4  59.6  57.2      62.1  66.0  56.1
(30, 20, 10)     88.4  89.0  88.9      85.8  87.2  86.5      86.3  87.6  85.3
(10, 10, 100)    46.0  48.9  49.1      53.8  56.9  45.9      59.1  64.6  48.6
(15, 25, 150)    77.1  78.6  78.1      83.9  84.8  74.4      89.6  91.5  75.1
Table 5. Type-I errors (%) of the three methods in logistic cases with nominal level α = 0.05.

(n1, n2, n3)     G      T_G    T_M
(10, 10, 10)     4.15   3.34   2.98
(20, 20, 20)     4.81   4.83   2.81
(10, 20, 30)     4.18   3.85   2.67
(30, 20, 10)     4.73   4.61   2.31
(30, 30, 30)     4.72   4.92   3.41
(10, 10, 100)    4.16   4.92   2.58
(15, 25, 150)    5.06   5.96   2.91
Table 6. Power comparison (%) of the cases when f1 = Logis(0, 1) and f2 = Logis(1, 1).

                 λ = 0.3               λ = 0.5               λ = 0.7
(n1, n2, n3)     G     T_G   T_M       G     T_G   T_M       G     T_G   T_M
(10, 10, 10)     12.2  12.0  13.2      12.6  13.3  14.5      13.2  13.0  13.7
(20, 20, 20)     29.4  30.3  26.2      28.3  29.9  26.4      28.9  30.2  26.0
(30, 30, 30)     43.2  45.4  36.6      43.1  45.2  35.9      43.9  46.5  37.3
(10, 20, 30)     17.5  18.1  15.1      19.3  20.2  16.9      22.2  23.2  18.7
(30, 20, 10)     35.4  36.6  31.8      33.9  35.8  32.4      33.5  35.1  30.4
(10, 10, 100)    13.8  17.8  12.7      13.8  17.1  11.8      13.9  16.6  11.9
(15, 25, 150)    24.4  29.3  20.1      25.3  30.0  22.7      30.9  38.1  24.1
Table 7. Power comparison (%) of the cases when f1 = Logis(0, 1) and f2 = Logis(0, 1.5).

                 λ = 0.3               λ = 0.5               λ = 0.7
(n1, n2, n3)     G     T_G   T_M       G     T_G   T_M       G     T_G   T_M
(10, 10, 10)     10.3  13.5  16.2      10.5  14.7  16.3      11.2  15.9  15.9
(20, 20, 20)     20.8  32.4  23.6      21.7  33.3  23.7      23.0  33.4  24.1
(30, 30, 30)     32.4  44.7  33.3      33.8  44.9  33.2      34.9  46.2  34.1
(10, 20, 30)     9.6   15.3  12.1      11.1  18.0  15.2      14.2  21.9  17.9
(30, 20, 10)     27.4  37.1  21.9      27.1  36.7  22.4      27.1  36.3  22.3
(10, 10, 100)    8.9   14.8  10.6      10.2  16.3  12.2      12.7  18.4  16.8
(15, 25, 150)    15.1  26.2  20.6      21.1  32.0  20.7      24.2  35.6  22.0
Table 8. Power comparison (%) of the cases when f1 = Logis(0, 1) and f2 = Logis(1, 1.5).

                 λ = 0.3               λ = 0.5               λ = 0.7
(n1, n2, n3)     G     T_G   T_M       G     T_G   T_M       G     T_G   T_M
(10, 10, 10)     17.4  17.9  18.1      18.3  21.7  18.8      18.5  22.6  19.6
(20, 20, 20)     43.7  49.4  36.3      43.7  49.3  36.2      45.2  51.3  37.7
(30, 30, 30)     62.9  67.3  58.1      62.2  67.2  57.9      63.6  68.4  59.6
(10, 20, 30)     20.6  24.2  21.3      24.1  29.4  26.2      30.1  35.8  35.8
(30, 20, 10)     52.7  57.0  37.6      51.7  55.9  39.7      51.5  56.0  38.8
(10, 10, 100)    19.9  24.1  18.7      23.1  27.3  19.8      23.5  30.2  19.7
(15, 25, 150)    36.7  43.1  40.4      44.6  51.5  40.7      51.0  57.8  40.1
