Article

Statistical Inference of the Generalized Inverted Exponential Distribution under Joint Progressively Type-II Censoring

School of Mathematics and Statistics, Beijing Jiaotong University, Beijing 100044, China
* Author to whom correspondence should be addressed.
Entropy 2022, 24(5), 576; https://doi.org/10.3390/e24050576
Submission received: 24 March 2022 / Revised: 15 April 2022 / Accepted: 15 April 2022 / Published: 20 April 2022
(This article belongs to the Section Information Theory, Probability and Statistics)

Abstract
In this paper, we study the statistical inference of the generalized inverted exponential distribution with the same scale parameter and different shape parameters based on joint progressively type-II censored data. The expectation maximization (EM) algorithm is applied to calculate the maximum likelihood estimates (MLEs) of the parameters, and the observed information matrix is obtained based on the missing value principle. Interval estimates are computed by the bootstrap method. We provide Bayesian inference under both informative and non-informative priors: the importance sampling technique is performed to derive the Bayes estimates and credible intervals under the squared error loss function and the linex loss function, respectively. Finally, we conduct a Monte Carlo simulation and a real data analysis. Moreover, we consider parameters with order restrictions and provide the corresponding maximum likelihood estimates and Bayesian inference.

1. Introduction

1.1. Generalized Inverted Exponential Distribution

The generalized inverted exponential distribution (GIED), introduced by [1], is a modification of the inverted exponential distribution (IED) that fits lifetime data better. The distribution has a non-constant hazard rate function, which is unimodal, and its density is positively skewed. Owing to these properties, the distribution can model various shapes of failure rates in aging criteria. Reference [2] proposed the maximum product of spacings method for point estimation of the parameters of the GIED. In [3], acceptance sampling plans were developed based on truncated lifetimes when the life of an item follows a generalized inverted exponential distribution. Reference [4] performed a Monte Carlo simulation for the GIED to analyze the performance of the estimations. Reference [5] studied point estimation of the parameters of the GIED when the test units are progressively type-II censored. Reference [6] generated samples from the GIED and computed the Bayes estimates. Reference [7] investigated the MLEs of the GIED when the test units are progressively type-II censored. Reference [8] proposed a two-stage group acceptance sampling plan for the GIED under a truncated life experiment.
Suppose that X is a random variable following the GIED. The corresponding probability density function (pdf), cumulative distribution function (cdf), and hazard function are given below, where $\theta$ is the shape parameter and $\lambda$ is the scale parameter, both positive.
$$f(x;\theta,\lambda)=\frac{\theta\lambda}{x^{2}}\,e^{-\lambda/x}\left(1-e^{-\lambda/x}\right)^{\theta-1},\quad x>0 \tag{1}$$

$$F(x;\theta,\lambda)=1-\left(1-e^{-\lambda/x}\right)^{\theta},\quad x>0 \tag{2}$$

$$h(x;\theta,\lambda)=\frac{\theta\lambda}{x^{2}}\,\frac{e^{-\lambda/x}}{1-e^{-\lambda/x}},\quad x>0 \tag{3}$$
The plots of the pdf and hazard function are presented in Figure 1 and Figure 2.
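For concreteness, the pdf, cdf, and hazard function above can be evaluated directly. The following minimal Python sketch implements them; the function names are ours, not from the paper:

```python
import math

def gied_pdf(x, theta, lam):
    """GIED density f(x; theta, lam) as defined above (theta: shape, lam: scale)."""
    t = math.exp(-lam / x)
    return theta * lam * t * (1.0 - t) ** (theta - 1.0) / x ** 2

def gied_cdf(x, theta, lam):
    """GIED distribution function F(x; theta, lam)."""
    return 1.0 - (1.0 - math.exp(-lam / x)) ** theta

def gied_hazard(x, theta, lam):
    """Hazard rate h(x) = f(x) / (1 - F(x))."""
    return gied_pdf(x, theta, lam) / (1.0 - gied_cdf(x, theta, lam))
```

A quick consistency check is that the numerical derivative of `gied_cdf` matches `gied_pdf`.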

1.2. The Joint Progressive Type-II Censoring Scheme

In many practical situations, it is difficult to obtain the lifetime data of all the products because of cost and time constraints. Hence, censoring is of great importance in life-testing experiments, and a great deal of work has been performed on a variety of censoring schemes. Under the type-I and type-II censoring schemes, the experimental units cannot be withdrawn during the experiment. Reference [9] described the progressive type-II censoring scheme, in which some units are allowed to be withdrawn during the test. We describe progressive type-II censoring briefly. Suppose n units are placed on a lifetime test, and let k be the effective sample size, i.e., the number of observed failures, which satisfies $k<n$. Let $R_1,\dots,R_k$ denote the numbers of units to be withdrawn at each failure time; they are non-negative integers satisfying $\sum_{i=1}^{k}(R_i+1)=n$. At the first failure time, $R_1$ units are withdrawn at random from the remaining $n-1$ surviving units. When the second failure happens, $R_2$ units are withdrawn at random from the remaining $n-2-R_1$ surviving units. Analogously, when the k-th failure happens, the remaining $R_k$ surviving units are withdrawn and the test ceases. Reference [10] provided a large amount of work on progressive censoring schemes. Reference [11] dealt with Bayesian inference for step-stress partially accelerated life tests using type-II progressive censored data in the presence of competing failure causes.
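The withdrawal mechanism just described is straightforward to simulate. The sketch below (helper names are ours; the inverse-CDF draw follows from solving $F(x)=u$ for the GIED cdf) generates a progressively type-II censored sample:

```python
import math, random

def gied_quantile(u, theta, lam):
    """Inverse CDF of the GIED, obtained by solving F(x) = u."""
    return -lam / math.log(1.0 - (1.0 - u) ** (1.0 / theta))

def progressive_type2_sample(theta, lam, R, seed=1):
    """Observed failure times w_1 < ... < w_k under the scheme (R_1, ..., R_k);
    the total sample size is n = k + sum(R_i)."""
    rng = random.Random(seed)
    n = len(R) + sum(R)
    alive = sorted(gied_quantile(rng.random(), theta, lam) for _ in range(n))
    observed = []
    for r in R:
        observed.append(alive.pop(0))        # next failure among the survivors
        for _ in range(r):                   # withdraw r surviving units at random
            alive.pop(rng.randrange(len(alive)))
    return observed
```

This mirrors the scheme literally: at the i-th failure, $R_i$ survivors are discarded at random before the test continues.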
Much research on progressive censoring schemes for one sample has been performed by many scholars, but there is little research on two samples. Reference [12] first introduced the joint progressive censoring scheme for two samples. It is particularly useful for comparing the life distributions of products produced by two different assembly lines on diverse equipment under the same environmental conditions. The joint progressively type-II censoring (JPC) scheme can be briefly described as follows. Suppose the samples come from two different lines, Line 1 and Line 2, with m products from Line 1 and n products from Line 2. The two samples are combined and placed on a lifetime test, so that $N=m+n$ is the size of the combined sample. Let $R_1,\dots,R_k$ denote the numbers of units to be withdrawn at each failure time; they are non-negative integers satisfying $\sum_{i=1}^{k}(R_i+1)=N$. At the first failure time $w_1$, $R_1$ units are removed at random from the combined sample, consisting of $s_1$ units from Line 1 and $t_1$ units from Line 2. At the second failure time $w_2$, $R_2$ units are withdrawn at random from the remaining $m+n-2-R_1$ units, consisting of $s_2$ units from Line 1 and $t_2$ units from Line 2. Analogously, at the k-th failure time $w_k$, the remaining $R_k$ surviving units are withdrawn and the test ceases. Let $z_1,\dots,z_k$ be indicator variables: $z_i=1$ if the i-th failure is from Line 1, and $z_i=0$ otherwise. The censored sample is then $((w_1,z_1,s_1),\dots,(w_k,z_k,s_k))$. Here, $k_1=\sum_{i=1}^{k}z_i$ denotes the number of failures from Line 1, and $k_2=\sum_{i=1}^{k}(1-z_i)=k-k_1$ denotes the number of failures from Line 2. Figure 3 shows the scheme.
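A hypothetical simulation of the JPC scheme, producing the censored sample $((w_i,z_i,s_i,t_i))$, might look as follows; `jpc_sample` and its arguments are our illustrative names:

```python
import math, random

def jpc_sample(theta1, theta2, lam, m, n, R, seed=7):
    """Joint progressively type-II censored data [(w_i, z_i, s_i, t_i)] from
    GIED(theta1, lam) (Line 1, m units) and GIED(theta2, lam) (Line 2, n units)."""
    assert len(R) + sum(R) == m + n              # scheme must exhaust all N units
    rng = random.Random(seed)
    draw = lambda th: -lam / math.log(1.0 - (1.0 - rng.random()) ** (1.0 / th))
    pool = sorted([(draw(theta1), 1) for _ in range(m)]
                  + [(draw(theta2), 0) for _ in range(n)])
    data = []
    for r in R:
        w, z = pool.pop(0)                       # i-th failure; z_i = 1 if from Line 1
        removed = [pool.pop(rng.randrange(len(pool))) for _ in range(r)]
        s = sum(line for _, line in removed)     # s_i: withdrawn Line 1 units
        data.append((w, z, s, r - s))            # t_i = R_i - s_i
    return data
```

The pool stays sorted because removals preserve order, so successive `pop(0)` calls return the ordered failure times $w_1<\cdots<w_k$.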
Reference [12] applied Bayesian estimation techniques to two exponential distributions for the JPC scheme. Reference [13] considered the JPC scheme for more than two exponential populations and studied the statistical inference. Reference [14] investigated the conditional maximum likelihood estimations and the interval estimations of the Weibull distribution for the JPC scheme. Reference [15] discussed the point estimation and obtained the confidence intervals of two Weibull populations for the JPC scheme. Reference [16] obtained the Bayes estimation when data were sampled in the JPC scheme from a general class of distributions. Reference [17] studied the expected number of failures in the lifetime tests under the JPC scheme for various distributions. Besides, a new type-II progressive censoring scheme for two groups was introduced by [18]. Reference [19] extended the JPC scheme for multiple exponential populations and studied the statistical inference. Reference [20] studied the likelihood and Bayesian inference when data were sampled in the JPC scheme from the GIED.
A few scholars have studied the statistical inference of the generalized inverted exponential distribution when the test units are progressively type-II censored. However, no one has studied the statistical inference of the generalized inverted exponential distribution under joint progressively type-II censoring. The research on this aspect is of great significance.
In this article, we provide statistical inference and study the JPC scheme for two groups that follow the GIED with the same scale parameter. The expectation maximization algorithm is adopted to calculate the maximum likelihood estimates of the parameters. In light of the missing value principle, we derive the observed information matrix. We obtain the interval estimations by the bootstrap-p method based on the Fisher information matrix. We assume a Gamma prior for the shape and scale parameters. The Bayes estimates and credible intervals for the informative prior and the non-informative prior under the linex loss function and squared error loss function are calculated by adopting the importance sampling technique. The performances of various methods are compared through the Monte Carlo simulation. Besides, we conduct real data analysis. Moreover, in many practical cases, the experimenters may know that the lifetime of various populations is orderly. We investigate the problem that the parameters have order restrictions. We discuss the maximum likelihood estimation and Bayesian inference of the parameters.
The rest of the article is arranged as follows. In Section 2, we obtain the likelihood function. In Section 3, we apply the EM algorithm to calculate the MLEs. In Section 4, we compute the observed information matrix based on the missing value principle. Next, we adopt the bootstrap method to obtain the confidence intervals. The Bayesian inference is presented in Section 5. In Section 6, a Monte Carlo simulation and real data analysis are shown. In Section 7, we derive the maximum likelihood estimation and Bayesian inference of the parameters that have order restrictions.

2. Likelihood Function

Generate lifetimes $X_{1:m:n},X_{2:m:n},\dots,X_{m:m:n}$ from the GIED under the progressive type-II censoring scheme $(R_1,\dots,R_m)$. The observed data are $x=(x_1,x_2,\dots,x_m)$. The likelihood function is

$$L(\theta,\lambda)=C\prod_{i=1}^{m}f\left(x_i\right)\left[1-F\left(x_i\right)\right]^{R_i} \tag{4}$$

where $x_1<x_2<\cdots<x_m$.
Substituting (1) and (2) into (4), we obtain the observed likelihood function:
$$L(\theta,\lambda)=C\,(\theta\lambda)^{m}\,e^{-\lambda\sum_{i=1}^{m}\frac{1}{x_i}}\prod_{i=1}^{m}x_i^{-2}\left(1-e^{-\lambda/x_i}\right)^{\theta-1}\prod_{i=1}^{m}\left(1-e^{-\lambda/x_i}\right)^{\theta R_i} \tag{5}$$
Taking the logarithm of (5), we obtain the log-likelihood function (up to an additive constant):
$$l(\theta,\lambda)\propto m\ln\theta+m\ln\lambda-\lambda\sum_{i=1}^{m}\frac{1}{x_i}-2\sum_{i=1}^{m}\ln x_i+(\theta-1)\sum_{i=1}^{m}\ln\left(1-e^{-\lambda/x_i}\right)+\theta\sum_{i=1}^{m}R_i\ln\left(1-e^{-\lambda/x_i}\right) \tag{6}$$
Suppose $X_1,\dots,X_m$ are m items from Line 1 that are i.i.d. GIED$(\theta_1,\lambda)$, and $Y_1,\dots,Y_n$ are n items from Line 2 that are i.i.d. GIED$(\theta_2,\lambda)$. For a given joint progressive type-II censoring scheme $(R_1,\dots,R_k)$, the observed data are $((w_1,z_1,s_1),\dots,(w_k,z_k,s_k))$. Thus, the likelihood function without the normalizing constant is
$$L(\theta_1,\theta_2,\lambda\mid data)=\theta_1^{k_1}\theta_2^{k_2}\lambda^{k}\prod_{i=1}^{k}w_i^{-2}\,e^{-\lambda/w_i}\left(1-e^{-\lambda/w_i}\right)^{z_i\theta_1+(1-z_i)\theta_2-1}\times\prod_{i=1}^{k}\left(1-e^{-\lambda/w_i}\right)^{\theta_1 s_i}\left(1-e^{-\lambda/w_i}\right)^{\theta_2 t_i} \tag{7}$$
where $k_1=\sum_{i=1}^{k}z_i$ and $k_2=\sum_{i=1}^{k}(1-z_i)=k-k_1$.
When $k_1=0$ and $k_2=k$, the likelihood function becomes
$$L(\theta_1,\theta_2,\lambda\mid data)=\left(\theta_2\lambda\right)^{k}\prod_{i=1}^{k}w_i^{-2}\,e^{-\lambda/w_i}\left(1-e^{-\lambda/w_i}\right)^{\theta_2-1}\prod_{i=1}^{k}\left(1-e^{-\lambda/w_i}\right)^{\theta_1 s_i}\left(1-e^{-\lambda/w_i}\right)^{\theta_2 t_i}$$
For $s_i=0$, $\left(1-e^{-\lambda/w_i}\right)^{\theta_1 s_i}=1$. For $s_i\neq 0$ and fixed $\lambda$, $\left(1-e^{-\lambda/w_i}\right)^{\theta_1 s_i}$ is a strictly decreasing function of $\theta_1$ that decreases to 0. Thus, for $k_1=0$ and fixed $\theta_2$ and $\lambda$, $L(\theta_1,\theta_2,\lambda\mid data)$ is a strictly decreasing function of $\theta_1$. Therefore, there is no maximum likelihood estimate when $k_1$ or $k_2$ equals 0, and we assume that $k_1>0$ and $k_2>0$.
The log-likelihood function is:
$$\begin{aligned}\ln L(\theta_1,\theta_2,\lambda\mid data)={}&k_1\ln\theta_1+k_2\ln\theta_2+k\ln\lambda+\sum_{i=1}^{k}\left[\theta_1 s_i\ln\left(1-e^{-\lambda/w_i}\right)+\theta_2 t_i\ln\left(1-e^{-\lambda/w_i}\right)\right]\\&+\sum_{i=1}^{k}\left[-2\ln w_i-\frac{\lambda}{w_i}+\left(z_i\theta_1+(1-z_i)\theta_2-1\right)\ln\left(1-e^{-\lambda/w_i}\right)\right]\end{aligned} \tag{8}$$
Next, we show that for a given $\lambda$ the MLEs of $\theta_1$ and $\theta_2$ are unique.
Theorem 1.
For a fixed $\lambda>0$, if $k_1>0$ and $k_2>0$, then $l_1(\theta_1,\theta_2)=\ln L(\theta_1,\theta_2,\lambda\mid data)$ is a unimodal function of $(\theta_1,\theta_2)$.
Proof. 
Because the Hessian matrix of $l_1(\theta_1,\theta_2)$ is negative definite, $l_1(\theta_1,\theta_2)$ is a concave function. Moreover, for fixed $\theta_1$ ($\theta_2$), $l_1(\theta_1,\theta_2)$ tends to $-\infty$ both as $\theta_2$ ($\theta_1$) tends to 0 and as $\theta_2$ ($\theta_1$) tends to $\infty$.
For a given $\lambda$, the MLEs of $\theta_1$ and $\theta_2$, say $\hat{\theta}_1(\lambda)$ and $\hat{\theta}_2(\lambda)$, can be written as:
$$\hat{\theta}_1(\lambda)=\frac{k_1}{M(\lambda)}\quad\text{and}\quad\hat{\theta}_2(\lambda)=\frac{k_2}{N(\lambda)} \tag{9}$$
where
$$M(\lambda)=-\sum_{i=1}^{k}z_i\ln\left(1-e^{-\lambda/w_i}\right)-\sum_{i=1}^{k}s_i\ln\left(1-e^{-\lambda/w_i}\right),\qquad N(\lambda)=-\sum_{i=1}^{k}(1-z_i)\ln\left(1-e^{-\lambda/w_i}\right)-\sum_{i=1}^{k}t_i\ln\left(1-e^{-\lambda/w_i}\right) \tag{10}$$
When $\lambda$ is unknown, the profile log-likelihood function of $\lambda$ is, up to an additive constant,

$$p_1(\lambda)=\ln L(\hat{\theta}_1,\hat{\theta}_2,\lambda\mid data)\propto -k_1\ln M(\lambda)-k_2\ln N(\lambda)+k\ln\lambda-\sum_{i=1}^{k}\left[\frac{\lambda}{w_i}+\ln\left(1-e^{-\lambda/w_i}\right)\right] \tag{11}$$
Maximizing (11) yields the MLE of $\lambda$. We prove below that the MLE of $\lambda$ exists and is unique. □
Theorem 2.
If k 1 > 0 and k 2 > 0 , p 1 ( λ ) is a unimodal function of λ.
Proof. 
The proof is given in Appendix A. □
Thus, from Theorems 1 and 2, the MLEs of $(\theta_1,\theta_2,\lambda)$ are unique when $k_1>0$ and $k_2>0$. Setting the partial derivatives of Equation (8) equal to 0 yields the MLEs of the parameters. However, the resulting equations are nonlinear and cumbersome to solve directly. Therefore, we propose to calculate the MLEs of $(\theta_1,\theta_2,\lambda)$ through the EM algorithm.
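As an illustration, the following sketch computes $M(\lambda)$, $N(\lambda)$, the resulting $\hat{\theta}_1(\lambda)$ and $\hat{\theta}_2(\lambda)$, and the profile log-likelihood $p_1(\lambda)$, maximizing the latter by a simple grid search (our stand-in for the paper's EM approach; all names are ours):

```python
import math

def profile_mles(data, lam_grid):
    """MLEs of (theta1, theta2, lambda) via the profile log-likelihood p1(lam);
    data = [(w_i, z_i, s_i, t_i)], lam_grid is a grid of candidate lam values."""
    k = len(data)
    k1 = sum(z for _, z, _, _ in data)
    k2 = k - k1

    def MN(lam):
        logg = [math.log1p(-math.exp(-lam / w)) for w, _, _, _ in data]  # ln(1-e^{-lam/w_i}) < 0
        M = -sum((z + s) * g for (_, z, s, _), g in zip(data, logg))
        N = -sum((1 - z + t) * g for (_, z, _, t), g in zip(data, logg))
        return M, N, logg

    def p1(lam):
        M, N, logg = MN(lam)
        return (-k1 * math.log(M) - k2 * math.log(N) + k * math.log(lam)
                - sum(lam / w + g for (w, _, _, _), g in zip(data, logg)))

    lam_hat = max(lam_grid, key=p1)              # grid search over the profile
    M, N, _ = MN(lam_hat)
    return k1 / M, k2 / N, lam_hat
```

By Theorem 2 the profile is unimodal, so a sufficiently fine grid locates its maximizer reliably.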

3. Expectation Maximization Algorithm

The expectation maximization algorithm is an iterative optimization strategy for computing the MLEs of the parameters. First, we introduce the potential data, which can be interpreted as the data without missing variables; adding these extra variables makes the problem simpler to process. Here, the potential data are the lifetimes of the censored samples at each failure time point.
It is assumed that at the i-th failure time point $w_i$, $U_{ij}$ is the lifetime of the j-th censored sample from Line 1 and $V_{ij'}$ is the lifetime of the j'-th censored sample from Line 2, for $j=1,\dots,s_i$, $j'=1,\dots,t_i$, and $i=1,\dots,k$. The observed data are $((w_1,z_1,s_1),\dots,(w_k,z_k,s_k))$. The potential data are $U=((u_{11},\dots,u_{1s_1}),\dots,(u_{k1},\dots,u_{ks_k}))$ and $V=((v_{11},\dots,v_{1t_1}),\dots,(v_{k1},\dots,v_{kt_k}))$. Therefore, the complete data are $((w_1,z_1,s_1),\dots,(w_k,z_k,s_k),U,V)=data$, the combination of the observed data and the potential data. The log-likelihood function based on the complete data is
$$\begin{aligned}\ln L(\theta_1,\theta_2,\lambda\mid data)={}&m\ln\theta_1+n\ln\theta_2+(m+n)\ln\lambda-2\sum_{i=1}^{k}\left(\ln w_i+\sum_{j=1}^{s_i}\ln u_{ij}+\sum_{j=1}^{t_i}\ln v_{ij}\right)\\&-\lambda\sum_{i=1}^{k}\left(\frac{1}{w_i}+\sum_{j=1}^{s_i}\frac{1}{u_{ij}}+\sum_{j=1}^{t_i}\frac{1}{v_{ij}}\right)+(\theta_1-1)\sum_{i=1}^{k}\sum_{j=1}^{s_i}\ln\left(1-e^{-\lambda/u_{ij}}\right)\\&+(\theta_2-1)\sum_{i=1}^{k}\sum_{j=1}^{t_i}\ln\left(1-e^{-\lambda/v_{ij}}\right)+\sum_{i=1}^{k}\left(z_i\theta_1+(1-z_i)\theta_2-1\right)\ln\left(1-e^{-\lambda/w_i}\right)\end{aligned} \tag{12}$$
In the “E”-step, the pseudo log-likelihood function is given by
$$\begin{aligned}l_s(\theta_1,\theta_2,\lambda)={}&m\ln\theta_1+n\ln\theta_2+(m+n)\ln\lambda-2\sum_{i=1}^{k}\left[\ln w_i+s_i\,E\left(\ln U_{ij}\mid U_{ij}>w_i\right)+t_i\,E\left(\ln V_{ij}\mid V_{ij}>w_i\right)\right]\\&-\lambda\sum_{i=1}^{k}\left[\frac{1}{w_i}+s_i\,E\left(\frac{1}{U_{ij}}\,\Big|\,U_{ij}>w_i\right)+t_i\,E\left(\frac{1}{V_{ij}}\,\Big|\,V_{ij}>w_i\right)\right]\\&+(\theta_1-1)\sum_{i=1}^{k}s_i\,E\left[\ln\left(1-e^{-\lambda/U_{ij}}\right)\mid U_{ij}>w_i\right]+(\theta_2-1)\sum_{i=1}^{k}t_i\,E\left[\ln\left(1-e^{-\lambda/V_{ij}}\right)\mid V_{ij}>w_i\right]\\&+\sum_{i=1}^{k}\left(z_i\theta_1+(1-z_i)\theta_2-1\right)\ln\left(1-e^{-\lambda/w_i}\right)\end{aligned} \tag{13}$$
The conditional pdfs of U i j and V i j can be expressed, respectively, as
$$f_{U_{ij}\mid W_i}\left(u_{ij}\mid w_i\right)=\frac{f_{GIED}\left(u_{ij};\theta_1,\lambda\right)}{1-F_{GIED}\left(w_i;\theta_1,\lambda\right)}$$

$$f_{V_{ij}\mid W_i}\left(v_{ij}\mid w_i\right)=\frac{f_{GIED}\left(v_{ij};\theta_2,\lambda\right)}{1-F_{GIED}\left(w_i;\theta_2,\lambda\right)}$$

where $i=1,\dots,k$.
The expectations associated with the functions of U i j can be written as follows:
$$E\left(\ln U_{ij}\mid U_{ij}>w_i\right)=\int_{w_i}^{\infty}\frac{\theta_1\lambda}{u_{ij}^{2}}\,\ln u_{ij}\,\frac{e^{-\lambda/u_{ij}}\left(1-e^{-\lambda/u_{ij}}\right)^{\theta_1-1}}{\left(1-e^{-\lambda/w_i}\right)^{\theta_1}}\,du_{ij},$$

$$E\left(\frac{1}{U_{ij}}\,\Big|\,U_{ij}>w_i\right)=\int_{w_i}^{\infty}\frac{\theta_1\lambda}{u_{ij}^{3}}\,\frac{e^{-\lambda/u_{ij}}\left(1-e^{-\lambda/u_{ij}}\right)^{\theta_1-1}}{\left(1-e^{-\lambda/w_i}\right)^{\theta_1}}\,du_{ij},$$

$$E\left[\ln\left(1-e^{-\lambda/U_{ij}}\right)\mid U_{ij}>w_i\right]=\int_{w_i}^{\infty}\frac{\theta_1\lambda}{u_{ij}^{2}}\,\ln\left(1-e^{-\lambda/u_{ij}}\right)\frac{e^{-\lambda/u_{ij}}\left(1-e^{-\lambda/u_{ij}}\right)^{\theta_1-1}}{\left(1-e^{-\lambda/w_i}\right)^{\theta_1}}\,du_{ij}.$$
Similarly, we can obtain the expectations associated with the functions of V i j .
In the “M”-step, the estimates of $\theta_1$, $\theta_2$, and $\lambda$ are calculated by maximizing the pseudo log-likelihood function over finitely many iterations. Given $\lambda$, $\theta_1$ and $\theta_2$ can be obtained by setting the partial derivatives of (13) to zero, as follows:
$$\hat{\theta}_1(\lambda)=-\frac{m}{\sum_{i=1}^{k}s_i\,E\left[\ln\left(1-e^{-\lambda/U_{ij}}\right)\mid U_{ij}>w_i\right]+\sum_{i=1}^{k}z_i\ln\left(1-e^{-\lambda/w_i}\right)}$$

$$\hat{\theta}_2(\lambda)=-\frac{n}{\sum_{i=1}^{k}t_i\,E\left[\ln\left(1-e^{-\lambda/V_{ij}}\right)\mid V_{ij}>w_i\right]+\sum_{i=1}^{k}(1-z_i)\ln\left(1-e^{-\lambda/w_i}\right)}$$
Using the equations above, (13) can be rewritten as a function of $\lambda$ alone, which transforms the three-dimensional optimization problem into a one-dimensional one. At the r-th iteration, let $(\theta_1^{(r)},\theta_2^{(r)},\lambda^{(r)})$ denote the estimate of $(\theta_1,\theta_2,\lambda)$. For fixed $(\theta_1^{(r-1)},\theta_2^{(r-1)})$, maximize $l_s(\theta_1,\theta_2,\lambda)$ to obtain $\lambda^{(r)}$. For fixed $(\theta_1^{(r-1)},\theta_2^{(r-1)},\lambda^{(r)})$, $\theta_1^{(r)}$ and $\theta_2^{(r)}$ can be obtained as follows:
$$\theta_1^{(r)}=-\frac{m}{\sum_{i=1}^{k}s_i\,E_{\left(\theta_1^{(r-1)},\theta_2^{(r-1)},\lambda^{(r)}\right)}\left[\ln\left(1-e^{-\lambda/U_{ij}}\right)\mid U_{ij}>w_i\right]+\sum_{i=1}^{k}z_i\ln\left(1-e^{-\lambda^{(r)}/w_i}\right)}$$

$$\theta_2^{(r)}=-\frac{n}{\sum_{i=1}^{k}t_i\,E_{\left(\theta_1^{(r-1)},\theta_2^{(r-1)},\lambda^{(r)}\right)}\left[\ln\left(1-e^{-\lambda/V_{ij}}\right)\mid V_{ij}>w_i\right]+\sum_{i=1}^{k}(1-z_i)\ln\left(1-e^{-\lambda^{(r)}/w_i}\right)}$$
We stop the iterations when $|\theta_1^{(r)}-\theta_1^{(r-1)}|\leq 0.0001$, $|\theta_2^{(r)}-\theta_2^{(r-1)}|\leq 0.0001$, and $|\lambda^{(r)}-\lambda^{(r-1)}|\leq 0.0001$. The MLEs of $\theta_1$, $\theta_2$, and $\lambda$ are then $\theta_1^{(r)}$, $\theta_2^{(r)}$, and $\lambda^{(r)}$.
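The conditional expectations in the E-step have no closed form; the sketch below computes them by simple numerical integration (a midpoint rule after the substitution $y=\lambda/u$), and the M-step updates of $\theta_1$ and $\theta_2$ then follow the closed-form expressions above. This is a rough illustration with our own function names, not the authors' code:

```python
import math

def cond_expectations(theta, lam, w, nodes=2000):
    """Conditional expectations E[ln U | U > w], E[1/U | U > w], and
    E[ln(1 - e^{-lam/U}) | U > w] for U ~ GIED(theta, lam), using a
    midpoint rule after the substitution y = lam/u, y in (0, lam/w)."""
    b = lam / w
    h = b / nodes
    surv_w = (1.0 - math.exp(-b)) ** theta       # P(U > w) = (1 - e^{-lam/w})^theta
    e_ln_u = e_inv_u = e_ln_g = 0.0
    for i in range(nodes):
        y = (i + 0.5) * h
        dens = theta * math.exp(-y) * (1.0 - math.exp(-y)) ** (theta - 1.0) / surv_w
        e_ln_u += (math.log(lam) - math.log(y)) * dens * h   # ln u = ln(lam) - ln y
        e_inv_u += (y / lam) * dens * h                      # 1/u = y/lam
        e_ln_g += math.log1p(-math.exp(-y)) * dens * h       # ln(1 - e^{-lam/u})
    return e_ln_u, e_inv_u, e_ln_g

def m_step_thetas(data, lam, theta1, theta2, m, n):
    """Closed-form M-step updates of theta1 and theta2 for fixed lam;
    data = [(w_i, z_i, s_i, t_i)], m and n are the Line 1 / Line 2 sample sizes."""
    d1 = d2 = 0.0
    for w, z, s, t in data:
        g = math.log1p(-math.exp(-lam / w))
        d1 += s * cond_expectations(theta1, lam, w)[2] + z * g
        d2 += t * cond_expectations(theta2, lam, w)[2] + (1 - z) * g
    return -m / d1, -n / d2
```

A useful check: since the survival function of $U$ is uniform given $U>w$, one can show $E[\ln(1-e^{-\lambda/U})\mid U>w]=\ln(1-e^{-\lambda/w})-1/\theta$ exactly, which the numerical integral reproduces.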

4. Confidence Interval Estimation

Our interest in this section is to obtain the observed information matrix based on the missing value principle. Then, we compute the interval estimations using the bootstrap-p method.

4.1. Observed Fisher Information Matrix

Based on the idea of [21], we have
$$I_o(\theta_1,\theta_2,\lambda)=m\,I_1(\theta_1,\theta_2,\lambda)+n\,I_2(\theta_1,\theta_2,\lambda)-\left[\sum_{i=1}^{k}\sum_{j=1}^{s_i}I_{U_{ij}\mid W_i}(\theta_1,\theta_2,\lambda)+\sum_{i=1}^{k}\sum_{j=1}^{t_i}I_{V_{ij}\mid W_i}(\theta_1,\theta_2,\lambda)\right]$$
where $I_o$ represents the observed information matrix, and
$$I_1(\theta_1,\theta_2,\lambda)=\begin{pmatrix}I_{11}^{(1)}&I_{12}^{(1)}&I_{13}^{(1)}\\I_{21}^{(1)}&I_{22}^{(1)}&I_{23}^{(1)}\\I_{31}^{(1)}&I_{32}^{(1)}&I_{33}^{(1)}\end{pmatrix},\qquad I_2(\theta_1,\theta_2,\lambda)=\begin{pmatrix}I_{11}^{(2)}&I_{12}^{(2)}&I_{13}^{(2)}\\I_{21}^{(2)}&I_{22}^{(2)}&I_{23}^{(2)}\\I_{31}^{(2)}&I_{32}^{(2)}&I_{33}^{(2)}\end{pmatrix}$$
where
$$\begin{gathered}I_{11}^{(1)}=-E\left(\frac{\partial^{2}\ln f_{GIED}(x;\theta_1,\lambda)}{\partial\theta_1^{2}}\right),\qquad I_{12}^{(1)}=I_{21}^{(1)}=I_{22}^{(1)}=I_{23}^{(1)}=I_{32}^{(1)}=0\\ I_{13}^{(1)}=I_{31}^{(1)}=-E\left(\frac{\partial^{2}\ln f_{GIED}(x;\theta_1,\lambda)}{\partial\theta_1\,\partial\lambda}\right),\qquad I_{33}^{(1)}=-E\left(\frac{\partial^{2}\ln f_{GIED}(x;\theta_1,\lambda)}{\partial\lambda^{2}}\right)\\ I_{11}^{(2)}=I_{12}^{(2)}=I_{13}^{(2)}=I_{21}^{(2)}=I_{31}^{(2)}=0,\qquad I_{22}^{(2)}=-E\left(\frac{\partial^{2}\ln f_{GIED}(x;\theta_2,\lambda)}{\partial\theta_2^{2}}\right)\\ I_{23}^{(2)}=I_{32}^{(2)}=-E\left(\frac{\partial^{2}\ln f_{GIED}(x;\theta_2,\lambda)}{\partial\theta_2\,\partial\lambda}\right),\qquad I_{33}^{(2)}=-E\left(\frac{\partial^{2}\ln f_{GIED}(x;\theta_2,\lambda)}{\partial\lambda^{2}}\right).\end{gathered}$$
The missing information matrices are
$$I_{U_{ij}\mid W_i}(\theta_1,\theta_2,\lambda)=\begin{pmatrix}I_{11}^{(3)}&I_{12}^{(3)}&I_{13}^{(3)}\\I_{21}^{(3)}&I_{22}^{(3)}&I_{23}^{(3)}\\I_{31}^{(3)}&I_{32}^{(3)}&I_{33}^{(3)}\end{pmatrix},\qquad I_{V_{ij}\mid W_i}(\theta_1,\theta_2,\lambda)=\begin{pmatrix}I_{11}^{(4)}&I_{12}^{(4)}&I_{13}^{(4)}\\I_{21}^{(4)}&I_{22}^{(4)}&I_{23}^{(4)}\\I_{31}^{(4)}&I_{32}^{(4)}&I_{33}^{(4)}\end{pmatrix}$$
where
$$\begin{gathered}I_{11}^{(3)}=-E\left(\frac{\partial^{2}\ln f_{U_{i1}\mid W_i}(u_{ij}\mid w_i)}{\partial\theta_1^{2}}\right),\qquad I_{12}^{(3)}=I_{21}^{(3)}=I_{22}^{(3)}=I_{23}^{(3)}=I_{32}^{(3)}=0\\ I_{13}^{(3)}=I_{31}^{(3)}=-E\left(\frac{\partial^{2}\ln f_{U_{i1}\mid W_i}(u_{ij}\mid w_i)}{\partial\theta_1\,\partial\lambda}\right),\qquad I_{33}^{(3)}=-E\left(\frac{\partial^{2}\ln f_{U_{i1}\mid W_i}(u_{ij}\mid w_i)}{\partial\lambda^{2}}\right)\\ I_{11}^{(4)}=I_{12}^{(4)}=I_{13}^{(4)}=I_{21}^{(4)}=I_{31}^{(4)}=0,\qquad I_{22}^{(4)}=-E\left(\frac{\partial^{2}\ln f_{V_{i1}\mid W_i}(v_{ij}\mid w_i)}{\partial\theta_2^{2}}\right)\\ I_{23}^{(4)}=I_{32}^{(4)}=-E\left(\frac{\partial^{2}\ln f_{V_{i1}\mid W_i}(v_{ij}\mid w_i)}{\partial\theta_2\,\partial\lambda}\right),\qquad I_{33}^{(4)}=-E\left(\frac{\partial^{2}\ln f_{V_{i1}\mid W_i}(v_{ij}\mid w_i)}{\partial\lambda^{2}}\right).\end{gathered}$$
All the expectation expressions are given in Appendix B. For every fixed $(\theta_1,\theta_2,\lambda)$, the asymptotic covariance matrix of the estimators is the inverse of the observed information matrix.

4.2. Bootstrap-p Method

Asymptotic confidence interval methods rely on large-sample theory, and in many practical cases the sample size is not large enough, so these methods have limitations for small samples. Reference [22] introduced the bootstrap method for constructing confidence intervals. We therefore suggest the percentile bootstrap (bootstrap-p) method to obtain parametric bootstrap confidence intervals. The steps for estimating the confidence intervals are briefly summarized as follows:
Step 1:
Compute the MLEs $\hat{\theta}_1$, $\hat{\theta}_2$, and $\hat{\lambda}$ from the joint progressively type-II censored sample.
Step 2:
Utilize the same censoring scheme and the MLEs from Step 1 to generate a joint progressively type-II bootstrap censored sample.
Step 3:
Calculate new MLEs of $\theta_1$, $\theta_2$, and $\lambda$ from the bootstrap sample, say $\hat{\theta}_1^{(1)}$, $\hat{\theta}_2^{(1)}$, and $\hat{\lambda}^{(1)}$.
Step 4:
Repeat Step 2 and Step 3 B times to obtain a sequence of bootstrap estimates.
Step 5:
Sort $(\hat{\theta}_1^{(1)},\hat{\theta}_1^{(2)},\dots,\hat{\theta}_1^{(B)})$, $(\hat{\theta}_2^{(1)},\hat{\theta}_2^{(2)},\dots,\hat{\theta}_2^{(B)})$, and $(\hat{\lambda}^{(1)},\hat{\lambda}^{(2)},\dots,\hat{\lambda}^{(B)})$ in ascending order, respectively. Then, we obtain the order statistics $(\hat{\theta}_{1(1)},\hat{\theta}_{1(2)},\dots,\hat{\theta}_{1(B)})$, $(\hat{\theta}_{2(1)},\hat{\theta}_{2(2)},\dots,\hat{\theta}_{2(B)})$, and $(\hat{\lambda}_{(1)},\hat{\lambda}_{(2)},\dots,\hat{\lambda}_{(B)})$.
Step 6:
The 100 ( 1 ζ ) % bootstrap-p confidence intervals of θ 1 , θ 2 , and λ are
$$\left(\hat{\theta}_{1([B\zeta/2])},\,\hat{\theta}_{1([B(1-\zeta/2)])}\right),\qquad\left(\hat{\theta}_{2([B\zeta/2])},\,\hat{\theta}_{2([B(1-\zeta/2)])}\right)\qquad\text{and}\qquad\left(\hat{\lambda}_{([B\zeta/2])},\,\hat{\lambda}_{([B(1-\zeta/2)])}\right)$$
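Step 6 reduces to taking empirical percentiles of the sorted bootstrap estimates. A minimal sketch, mapping the 1-based order-statistic ranks above to 0-based Python indices (the exact rounding convention is our choice):

```python
import math

def bootstrap_p_ci(estimates, zeta=0.10):
    """100(1 - zeta)% percentile (bootstrap-p) interval: roughly the
    [B*zeta/2]-th and [B*(1 - zeta/2)]-th order statistics of the estimates."""
    srt = sorted(estimates)
    B = len(srt)
    j_lo = max(math.ceil(B * zeta / 2), 1)   # lower rank, 1-based
    j_hi = B - j_lo                          # symmetric upper rank
    return srt[j_lo - 1], srt[j_hi - 1]
```

Applied separately to the B bootstrap estimates of $\theta_1$, $\theta_2$, and $\lambda$, this yields the three intervals displayed above.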

5. Bayes Estimation

Different from traditional statistics, Bayes estimation incorporates prior information about the lifetime parameters. Thus, Bayesian estimation combines the observed data with the prior distribution to infer the parameters of interest, which makes the inference more objective and reasonable.

5.1. Bayes Estimation

Suppose that the unknown parameters $\theta_1$, $\theta_2$, and $\lambda$ have independent Gamma prior distributions:
$$\pi_1(\theta_1)=\frac{b_1^{a_1}}{\Gamma(a_1)}\theta_1^{a_1-1}e^{-b_1\theta_1},\quad \theta_1>0;\;a_1,b_1>0$$

$$\pi_2(\theta_2)=\frac{b_2^{a_2}}{\Gamma(a_2)}\theta_2^{a_2-1}e^{-b_2\theta_2},\quad \theta_2>0;\;a_2,b_2>0$$

$$\pi_3(\lambda)=\frac{d^{c}}{\Gamma(c)}\lambda^{c-1}e^{-d\lambda},\quad \lambda>0;\;c,d>0$$

where $a_1,a_2,b_1,b_2,c,d$ are the hyper-parameters that contain the prior information.
Thus, the joint prior distribution can be written as
$$\pi_0(\theta_1,\theta_2,\lambda)\propto\theta_1^{a_1-1}\theta_2^{a_2-1}\lambda^{c-1}e^{-b_1\theta_1}e^{-b_2\theta_2}e^{-d\lambda}$$
The joint posterior probability distribution is
$$\pi(\theta_1,\theta_2,\lambda\mid data)=\frac{\pi_0(\theta_1,\theta_2,\lambda)\,L(\theta_1,\theta_2,\lambda\mid data)}{\int_0^{\infty}\int_0^{\infty}\int_0^{\infty}\pi_0(\theta_1,\theta_2,\lambda)\,L(\theta_1,\theta_2,\lambda\mid data)\,d\theta_1\,d\theta_2\,d\lambda}$$
The denominator of $\pi(\theta_1,\theta_2,\lambda\mid data)$ depends only on the observed data, so the posterior is proportional to $\pi_0(\theta_1,\theta_2,\lambda)\,L(\theta_1,\theta_2,\lambda\mid data)$. Therefore, the joint posterior probability distribution is
$$\begin{aligned}\pi(\theta_1,\theta_2,\lambda\mid data)&\propto\pi_0(\theta_1,\theta_2,\lambda)\,L(\theta_1,\theta_2,\lambda\mid data)\\&\propto\theta_1^{a_1+k_1-1}\,e^{-\left[b_1-\sum_{i=1}^{k}(z_i+s_i)\ln\left(1-e^{-\lambda/w_i}\right)\right]\theta_1}\times\theta_2^{a_2+k_2-1}\,e^{-\left[b_2-\sum_{i=1}^{k}(1-z_i+t_i)\ln\left(1-e^{-\lambda/w_i}\right)\right]\theta_2}\\&\quad\times\lambda^{k+c-1}\,e^{-\left(d+\sum_{i=1}^{k}\frac{1}{w_i}\right)\lambda}\times\prod_{i=1}^{k}\frac{1}{1-e^{-\lambda/w_i}}\end{aligned} \tag{24}$$
We rewrite (24) as follows:
$$\pi(\theta_1,\theta_2,\lambda\mid data)\propto\pi(\theta_1\mid\lambda,data)\times\pi(\theta_2\mid\lambda,data)\times\pi(\lambda\mid data)\times d(\theta_1,\theta_2,\lambda)$$
where
$$\pi(\theta_1\mid\lambda,data)\sim Ga\left(a_1+k_1,\;b_1-\sum_{i=1}^{k}(z_i+s_i)\ln\left(1-e^{-\lambda/w_i}\right)\right)$$

$$\pi(\theta_2\mid\lambda,data)\sim Ga\left(a_2+k_2,\;b_2-\sum_{i=1}^{k}(1-z_i+t_i)\ln\left(1-e^{-\lambda/w_i}\right)\right)$$

$$\pi(\lambda\mid data)\sim Ga\left(c+k,\;d+\sum_{i=1}^{k}\frac{1}{w_i}\right)$$

$$d(\theta_1,\theta_2,\lambda)\propto\prod_{i=1}^{k}\frac{1}{1-e^{-\lambda/w_i}}\times\left[b_1-\sum_{i=1}^{k}(z_i+s_i)\ln\left(1-e^{-\lambda/w_i}\right)\right]^{-(a_1+k_1)}\times\left[b_2-\sum_{i=1}^{k}(1-z_i+t_i)\ln\left(1-e^{-\lambda/w_i}\right)\right]^{-(a_2+k_2)}$$

5.2. Loss Functions

In Bayesian statistics, the Bayes estimate of a function $\phi(\theta_1,\theta_2,\lambda)$ is derived under a prescribed loss function. Thus, the choice of loss function is critical:
  • Squared error loss function (SEL)
The SEL function is given by
$$L_s(\omega,\hat{\omega})=\left(\hat{\omega}-\omega\right)^{2}$$
Here, ω ^ is an estimate of ω .
The corresponding Bayes estimate ω ^ s of ω can be obtained from
$$\hat{\omega}_s=E\left[\omega\mid x\right]$$
Thus, ϕ ^ ( θ 1 , θ 2 , λ ) s represents the Bayes estimation of ϕ ( θ 1 , θ 2 , λ ) under the SEL function, that is,
$$\hat{\phi}(\theta_1,\theta_2,\lambda)_s=\int_0^{\infty}\int_0^{\infty}\int_0^{\infty}\phi(\theta_1,\theta_2,\lambda)\,\pi(\theta_1,\theta_2,\lambda\mid data)\,d\theta_1\,d\theta_2\,d\lambda$$
  • Linex loss function (LL)
The LL function is the most widely used asymmetric loss function; asymmetric loss functions are considered more comprehensive in many respects. The linex loss function is given below:
$$L_l(\omega,\hat{\omega})=e^{h(\hat{\omega}-\omega)}-h(\hat{\omega}-\omega)-1,\quad h\neq 0$$
where $\hat{\omega}$ is an estimate of $\omega$ and the constant h determines the direction and degree of the asymmetry.
The corresponding Bayesian estimate ω ^ l of ω can be derived from
$$\hat{\omega}_l=-\frac{1}{h}\ln\left[E\left(e^{-h\omega}\mid x\right)\right]$$
Then, the Bayes estimate $\hat{\phi}(\theta_1,\theta_2,\lambda)_l$ of $\phi(\theta_1,\theta_2,\lambda)$ under the LL function takes the form
$$\hat{\phi}(\theta_1,\theta_2,\lambda)_l=-\frac{1}{h}\ln\left[\int_0^{\infty}\int_0^{\infty}\int_0^{\infty}e^{-h\,\phi(\theta_1,\theta_2,\lambda)}\,\pi(\theta_1,\theta_2,\lambda\mid data)\,d\theta_1\,d\theta_2\,d\lambda\right]$$
Clearly, the Bayes estimates above are ratios of multiple integrals, and it is analytically intractable to obtain explicit solutions. Therefore, we propose the importance sampling method to obtain approximate explicit forms of the Bayes estimates.

5.3. Importance Sampling Method

The importance sampling method can be applied in Bayesian estimation under different loss functions. The importance sampling method algorithm can be described briefly as follows:
Step 1:
Generate λ from π ( λ d a t a ) for the given data.
Step 2:
For given λ , generate θ 1 , θ 2 from π ( θ 1 λ , d a t a ) and π ( θ 2 λ , d a t a ) , respectively.
Step 3:
Repeat Step 1 and Step 2 M times to generate ( θ 1 i , θ 2 i , λ i ) , i = 1 , 2 , , M.
Step 4:
Calculate $g(\theta_{1i},\theta_{2i},\lambda_i)$, $d(\theta_{1i},\theta_{2i},\lambda_i)$, and the importance weights $q_i$. Here,
$$q_i=\frac{d_i}{\sum_{j=1}^{M}d_j},\qquad d_i=d(\theta_{1i},\theta_{2i},\lambda_i)$$
Step 5:
The estimates of g ( θ 1 , θ 2 , λ ) under the squared error loss function and linex loss function are
$$\hat{g}(\theta_1,\theta_2,\lambda)_s=\sum_{i=1}^{M}q_i\,g(\theta_{1i},\theta_{2i},\lambda_i)$$

$$\hat{g}(\theta_1,\theta_2,\lambda)_l=-\frac{1}{h}\ln\left[\sum_{i=1}^{M}q_i\,e^{-h\,g(\theta_{1i},\theta_{2i},\lambda_i)}\right]$$
Sort the $g_i$ in ascending order to obtain $(g_{(1)},g_{(2)},\dots,g_{(M)})$, and record the corresponding weights as $(q_{(1)},q_{(2)},\dots,q_{(M)})$. A credible interval $(g_{(n_1)},g_{(n_2)})$ is obtained when $n_1,n_2$ satisfy
$$n_1<n_2,\qquad n_1,n_2\in\{1,2,\dots,M\}\qquad\text{and}\qquad\sum_{i=n_1}^{n_2}q_{(i)}\leq 1-\zeta<\sum_{i=n_1}^{n_2+1}q_{(i)}$$
Thus, the $100(1-\zeta)\%$ symmetric credible interval of $g(\theta_1,\theta_2,\lambda)$ is $\left(g_{([M\zeta/2])},\,g_{([M(1-\zeta/2)])}\right)$.
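Steps 1–5 can be sketched as follows. We draw $\lambda$ and then $\theta_1,\theta_2$ from the Gamma distributions given in Section 5.1, compute the weights $d_i$ on the log scale for numerical stability (a detail not spelled out in the paper), and form the SEL and linex estimates of $(\theta_1,\theta_2,\lambda)$; all function and variable names are ours:

```python
import math, random

def importance_sampling(data, a1, b1, a2, b2, c, d, M=5000, h=2.0, seed=3):
    """Bayes estimates of (theta1, theta2, lambda) under the SEL and linex
    losses, following Steps 1-5; data = [(w_i, z_i, s_i, t_i)]."""
    rng = random.Random(seed)
    k = len(data)
    k1 = sum(z for _, z, _, _ in data)
    k2 = k - k1
    inv_w = sum(1.0 / w for w, _, _, _ in data)
    draws, log_wts = [], []
    for _ in range(M):
        lam = rng.gammavariate(c + k, 1.0 / (d + inv_w))          # Step 1
        logg = [math.log1p(-math.exp(-lam / w)) for w, _, _, _ in data]
        B1 = b1 - sum((z + s) * g for (_, z, s, _), g in zip(data, logg))
        B2 = b2 - sum((1 - z + t) * g for (_, z, _, t), g in zip(data, logg))
        th1 = rng.gammavariate(a1 + k1, 1.0 / B1)                 # Step 2
        th2 = rng.gammavariate(a2 + k2, 1.0 / B2)
        # Step 4: weight d(theta1, theta2, lam), kept on the log scale
        log_wts.append(-sum(logg) - (a1 + k1) * math.log(B1)
                       - (a2 + k2) * math.log(B2))
        draws.append((th1, th2, lam))
    mx = max(log_wts)
    q = [math.exp(v - mx) for v in log_wts]
    tot = sum(q)
    q = [v / tot for v in q]                                      # weights q_i
    sel = tuple(sum(qi * x[j] for qi, x in zip(q, draws)) for j in range(3))
    linex = tuple(-math.log(sum(qi * math.exp(-h * x[j])
                                for qi, x in zip(q, draws))) / h for j in range(3))
    return sel, linex
```

Note that `random.gammavariate` is parameterized by shape and scale, so the Gamma rate parameters above enter as reciprocals. By Jensen's inequality, for $h>0$ each linex estimate is at most the corresponding SEL estimate.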

6. Simulation and Data Analysis

6.1. Numerical Simulation

We conducted simulation experiments to analyze the performance of the different methods mentioned above, for various censoring schemes. Here, the notation $(0^{(6)},10^{(4)},0^{(10)})$ implies $R_1=\cdots=R_6=0$, $R_7=\cdots=R_{10}=10$, $R_{11}=\cdots=R_{20}=0$. Let $m=20$, $n=25$, and $k=20$, 25, and 30. Set $\theta_1=1$, $\theta_2=1$, $\lambda=0.5$ as the true values, which are also taken as the initial values in the EM algorithm. The Bayesian estimates under the squared error loss function and the linex loss function are calculated for both informative and non-informative priors. In light of [23], the hyper-parameters for the non-informative prior are set to $10^{-5}$. For the informative prior, the corresponding hyper-parameters are $a_1=2$, $b_1=1$, $a_2=1$, $b_2=2$, $c=3$, $d=2$. The linex constant is $h=2$. Based on the MLEs calculated by the EM algorithm, the percentile bootstrap confidence intervals can be derived. Then, we constructed the Bayesian symmetric credible intervals. IP denotes the informative prior and NIP the non-informative prior.
Table 1 shows the average estimates (AEs) and the mean-squared errors (MSEs) of the MLEs for various JPC schemes based on 1000 repetitions of the EM algorithm. Table 2 and Table 3 present the Bayesian estimates and the MSEs for informative prior under the squared error loss function and the linex loss function based on 1000 repetitions, respectively. Table 4 and Table 5 show the Bayesian estimates and the MSEs for the non-informative prior under the squared error loss function and the linex loss function, which are based on 1000 repetitions, respectively. Table 6 presents the 90 % percentile bootstrap confidence intervals, which contain 1000 bootstrap samples in each replication, as well as 90 % symmetric credible intervals based on informative and non-informative priors. The average lengths (ALs) and the coverage percentages (CPs) of these intervals were calculated based on 1000 repetitions.
From Table 1, we find that the MLEs perform better in terms of the MSEs as k increases. The MSEs of $\lambda$ are always much smaller than those of $\theta_1$ and $\theta_2$, which means $\lambda$ is estimated better; this is reasonable because $\lambda$ is common to the two populations. From Table 2 and Table 3, it is clear that the MSEs of the Bayesian estimates become smaller as k increases, and that the MSEs are smaller and the AEs closer to the true values under the linex loss function than under the squared error loss function. From Table 4 and Table 5, the larger k is, the closer the results are to the true values. Comparing the MLEs and the Bayesian estimates, Bayesian inference is superior to maximum likelihood. Bayesian estimates with the informative prior perform better than those with the non-informative prior, and the results are better under the linex loss function. All in all, Bayesian estimates with the informative prior under the linex loss function perform best among the methods discussed. From Table 6, we find that the ALs of the symmetric credible intervals are shorter than those of the bootstrap confidence intervals; the ALs of the NIP are larger than those of the IP, and the CPs of the NIP are lower than those of the IP. Therefore, the IP performs better than the NIP. Comparing these approaches, the Bayesian method obtains better confidence intervals. When k is large, the CPs of both the Bayes method and the bootstrap method increase greatly. Thus, the Bayesian approach with the informative prior is superior to the other two methods.

6.2. Real Data Analysis

The two real data sets record the breaking strength of jute fiber and are taken from [24]. We analyzed them using the methods described above. Each set has 30 observations. The data are shown in Table 7 and Table 8.
The data sets were divided by 1000 for ease of use. We conducted the Kolmogorov–Smirnov test, which computes the distance between the empirical distribution function and the fitted distribution, and obtained the MLEs, the K-S distances, and the corresponding p values. We found that the GIED fits both data sets well. The results are presented in Table 9.
The likelihood ratio test was used to test whether the scale parameters can be considered equal, i.e., $H_0:\lambda_1=\lambda_2$. The resulting p value was 0.688, so the null hypothesis cannot be rejected and the two scale parameters can be considered the same. Under the null hypothesis, the MLEs of the parameters were calculated to be $\hat{\theta}_1=1.454$, $\hat{\theta}_2=1.596$, $\hat{\lambda}=0.228$.
We generated the observed data for two censoring schemes: ( 0 ( 14 ) , 30 , 0 ( 15 ) ) and ( 0 ( 14 ) , 15 , 15 , 0 ( 14 ) ) from the complete data above. Thus, the MLEs of the parameters along with the associated AEs and MSEs were computed. The results are shown in Table 10.
Since no genuine prior information was available for these data, an informative prior could not be constructed; therefore, all Bayesian estimates in the real data analysis are based on the non-informative prior. For the non-informative prior, the results of the Bayesian approach under the squared error loss function and the linex loss function are presented in Table 11 and Table 12, respectively. We also computed the 90% confidence/credible intervals with the bootstrap-p method and the Bayesian estimation; the results are shown in Table 13, where LB denotes the lower bound and UB the upper bound.

7. Order-Restricted Inference

In many practical cases, the experimenters may know in advance that the lifetimes of the populations are ordered. In this section, we discuss the problem in which the parameters have order restrictions. The order restriction on the shape parameters is $\theta_1 < \theta_2$.

7.1. Maximum Likelihood Estimates

For a given λ, the function (8) is a concave function of $\theta_1$ and $\theta_2$. Its maximum is unique and is attained at the point $(\hat{\theta}_1(\lambda), \hat{\theta}_2(\lambda))$. Denote the order-restricted MLEs of $\theta_1$ and $\theta_2$ by $\tilde{\theta}_1(\lambda)$ and $\tilde{\theta}_2(\lambda)$, respectively. If $\hat{\theta}_1(\lambda) < \hat{\theta}_2(\lambda)$, then $\tilde{\theta}_1(\lambda) = \hat{\theta}_1(\lambda)$ and $\tilde{\theta}_2(\lambda) = \hat{\theta}_2(\lambda)$. Otherwise, under the order restriction $\theta_1 < \theta_2$ the maximum of $l_1(\theta_1, \theta_2)$ lies on the line $\theta_1 = \theta_2$. Hence, we can obtain
$$\tilde{\theta}_1(\lambda) = \tilde{\theta}_2(\lambda) = \mathop{\arg\max}_{\theta_1 = \theta_2} \, l_1(\theta_1, \theta_2)$$
Therefore, for a given λ , we can obtain the following results:
$$(\tilde{\theta}_1(\lambda), \tilde{\theta}_2(\lambda)) =
\begin{cases}
\left( \dfrac{-k}{\sum_{i=1}^{k}(R_i+1)\ln\!\left(1-e^{-\lambda/w_i}\right)},\ \dfrac{-k}{\sum_{i=1}^{k}(R_i+1)\ln\!\left(1-e^{-\lambda/w_i}\right)} \right), & \hat{\theta}_1(\lambda) \ge \hat{\theta}_2(\lambda) \\[2ex]
\left( \hat{\theta}_1(\lambda),\ \hat{\theta}_2(\lambda) \right), & \hat{\theta}_1(\lambda) < \hat{\theta}_2(\lambda)
\end{cases} \qquad (32)$$
Maximizing $p_2(\lambda) = \ln L(\lambda, \tilde{\theta}_1(\lambda), \tilde{\theta}_2(\lambda) \mid \mathrm{data})$ yields the MLE of λ, say $\tilde{\lambda}$. We then show that this MLE exists and is unique.
Theorem 3.
If k 1 > 0 and k 2 > 0 , p 2 ( λ ) is a unimodal function of λ.
Proof. 
The proof parallels that of Theorem 2: $p_2(\lambda)$ is continuous and log-concave, and is therefore unimodal. □
After obtaining $\tilde{\lambda}$, we can calculate $\tilde{\theta}_1(\tilde{\lambda})$ and $\tilde{\theta}_2(\tilde{\lambda})$ explicitly from (32). The bootstrap method mentioned in Section 4 can then be applied to derive the confidence intervals.
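A minimal sketch of this profiling step, assuming SciPy, follows the case logic of Equation (32); function and variable names are hypothetical, and $s_i$, $t_i$ denote the numbers of units withdrawn from populations 1 and 2 at the i-th failure, with $s_i + t_i = R_i$:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def profile_restricted(w, z, s, t, R):
    """Order-restricted profile MLEs (theta1 <= theta2), a sketch of
    Equation (32): keep the unconstrained estimates when they are already
    ordered, otherwise pool them on the boundary theta1 = theta2."""
    w, z, s, t, R = map(np.asarray, (w, z, s, t, R))
    k = len(w)
    k1 = z.sum()
    k2 = k - k1

    def theta_hats(lam):
        A = np.log1p(-np.exp(-lam / w))      # ln(1 - e^{-lam/w_i}) < 0
        th1 = -k1 / np.sum((z + s) * A)      # unconstrained MLEs
        th2 = -k2 / np.sum((1 - z + t) * A)
        if th1 >= th2:                        # boundary case of (32)
            pooled = -k / np.sum((R + 1) * A)
            return pooled, pooled
        return th1, th2

    def p2(lam):
        # Profile log-likelihood (up to an additive constant).
        th1, th2 = theta_hats(lam)
        A = np.log1p(-np.exp(-lam / w))
        return (k1 * np.log(th1) + k2 * np.log(th2) + k * np.log(lam)
                - lam * np.sum(1.0 / w)
                + np.sum(((z + s) * th1 + (1 - z + t) * th2 - 1) * A))

    res = minimize_scalar(lambda L: -p2(L), bounds=(1e-3, 20.0),
                          method="bounded")
    return (res.x, *theta_hats(res.x))

# Illustrative (hypothetical) censored data.
w = np.array([0.08, 0.12, 0.15, 0.22, 0.28, 0.35, 0.41, 0.48, 0.55, 0.70])
z = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 0])
R = np.array([0, 0, 0, 0, 4, 0, 0, 0, 0, 0])
s = np.array([0, 0, 0, 0, 2, 0, 0, 0, 0, 0])
t = R - s
lam_t, th1_t, th2_t = profile_restricted(w, z, s, t, R)
```

By construction the returned estimates always satisfy $\tilde{\theta}_1 \le \tilde{\theta}_2$.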

7.2. Bayes Estimation

Suppose $\theta_1 < \theta_2$. Based on the idea of [15], we adopt the following prior assumption:
$$\pi_0(\theta_1, \theta_2, \lambda) \propto \left( \theta_1^{a_1-1}\theta_2^{a_2-1} + \theta_1^{a_2-1}\theta_2^{a_1-1} \right) \lambda^{c-1}\, e^{-b_1\theta_1}\, e^{-b_2\theta_2}\, e^{-d\lambda}$$
Therefore, the joint posterior probability distribution for λ > 0 , 0 < θ 1 < θ 2 is
$$\begin{aligned}
\pi(\theta_1,\theta_2,\lambda \mid \mathrm{data}) &\propto \pi_0(\theta_1,\theta_2,\lambda)\, L(\theta_1,\theta_2,\lambda \mid \mathrm{data}) \\
&\propto \left( \theta_1^{a_1-1}\theta_2^{a_2-1} + \theta_1^{a_2-1}\theta_2^{a_1-1} \right)
\times \theta_1^{k_1} e^{-\left( b_1 - \sum_{i=1}^{k}(z_i+s_i)\ln\left(1-e^{-\lambda/w_i}\right) \right)\theta_1} \\
&\quad \times \lambda^{k+c-1} e^{-\left( d + \sum_{i=1}^{k}\frac{1}{w_i} \right)\lambda}
\times \theta_2^{k_2} e^{-\left( b_2 - \sum_{i=1}^{k}(1-z_i+t_i)\ln\left(1-e^{-\lambda/w_i}\right) \right)\theta_2}
\times \frac{1}{\prod_{i=1}^{k}\left(1-e^{-\lambda/w_i}\right)}
\end{aligned}$$
Each factor on the right-hand side can then be matched to a known distribution family, so the importance sampling technique mentioned in Section 5 can again be conducted to derive the Bayesian inference and the credible intervals of the parameters.
More work will be performed in the future.
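The loss-function estimators themselves are straightforward once weighted posterior draws are available. The sketch below uses toy draws and stand-in weights purely to illustrate the squared error and linex estimators and a weighted-quantile credible interval; in the actual scheme, the draws and weights come from the importance sampling decomposition above:

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy posterior draws of theta_1 with stand-in importance weights.
draws = rng.gamma(shape=2.0, scale=0.5, size=5000)
weights = rng.uniform(0.5, 1.5, size=5000)
weights /= weights.sum()

# Squared error loss: the Bayes estimate is the weighted posterior mean.
est_se = np.sum(weights * draws)

# Linex loss with shape a: -(1/a) * ln E[exp(-a * theta)].
a = 1.0
est_linex = -np.log(np.sum(weights * np.exp(-a * draws))) / a

# 90% symmetric credible interval from weighted empirical quantiles.
order = np.argsort(draws)
cdf = np.cumsum(weights[order])
lo = draws[order][np.searchsorted(cdf, 0.05)]
hi = draws[order][np.searchsorted(cdf, 0.95)]
```

By Jensen's inequality the linex estimate (with a > 0) never exceeds the posterior mean, which matches the pattern seen in Table 11 and Table 12.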

8. Conclusions

In this article, we studied the JPC scheme for two populations that follow the GIED with a common scale parameter. We adopted the EM algorithm to calculate the MLEs of the parameters, and the missing value principle was used to derive the observed information matrix. Interval estimates were obtained through the bootstrap-p method. Assuming Gamma priors, the Bayes estimates and credible intervals for the informative and the non-informative prior under the squared error and the linex loss function were calculated by applying the importance sampling technique, and we compared the different methods, priors, and loss functions for interval estimation. Moreover, since in many practical cases the experimenters may know that the lifetimes of the populations are ordered, we investigated the problem in which the parameters have order restrictions and considered both maximum likelihood estimation and Bayesian inference of the parameters.
In the future, this work can be extended to more populations and other distributions. We will also consider the case of different scale parameters and independent samples for the GIED.

Author Contributions

Investigation, Q.C.; Supervision, W.G. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by Project 202210004004 of the National Training Program of Innovation and Entrepreneurship for Undergraduates.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are openly available in [24].

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Suppose $a_j \ge 0$ for $j = 1, \ldots, k$, and let $g(\lambda) = -\sum_{j=1}^{k} a_j \ln\left(1-e^{-\lambda/w_j}\right) > 0$; then $-\ln(g(\lambda))$ is a concave function of λ. The first and second derivatives of $g(\lambda)$ are
$$g'(\lambda) = -\sum_{j=1}^{k} \frac{a_j e^{-\lambda/w_j}}{w_j\left(1-e^{-\lambda/w_j}\right)} \quad \mathrm{and} \quad g''(\lambda) = \sum_{j=1}^{k} \frac{a_j e^{-\lambda/w_j}}{w_j^2\left(1-e^{-\lambda/w_j}\right)^2}$$
Moreover, by the Cauchy–Schwarz inequality together with $-\ln(1-x) \ge x$ for $x \in (0,1)$,
$$g''(\lambda)\,g(\lambda) - \left(g'(\lambda)\right)^2 = \left( \sum_{j=1}^{k} \frac{a_j e^{-\lambda/w_j}}{w_j^2\left(1-e^{-\lambda/w_j}\right)^2} \right)\left( -\sum_{j=1}^{k} a_j \ln\left(1-e^{-\lambda/w_j}\right) \right) - \left( \sum_{j=1}^{k} \frac{a_j e^{-\lambda/w_j}}{w_j\left(1-e^{-\lambda/w_j}\right)} \right)^2 > 0$$
Thus,
$$\frac{d^2\left(-\ln g(\lambda)\right)}{d\lambda^2} = -\frac{g''(\lambda)\,g(\lambda) - \left(g'(\lambda)\right)^2}{\left(g(\lambda)\right)^2} < 0$$
Therefore, we can infer that $p_1(\lambda)$ is a concave function. When λ tends to 0, $p_1(\lambda)$ tends to $-\infty$; when λ tends to ∞, $p_1(\lambda)$ also tends to $-\infty$.

Appendix B

$$\begin{aligned}
E\left(-\frac{\partial^2 \ln f_{GIED}(x;\theta_1,\lambda)}{\partial \theta_1^2}\right) &= \frac{1}{\theta_1^2} \\
E\left(-\frac{\partial^2 \ln f_{GIED}(x;\theta_1,\lambda)}{\partial \theta_1 \partial \lambda}\right) &= -\int_0^\infty \theta_1 \lambda x^{-3} e^{-2\lambda/x} \left(1-e^{-\lambda/x}\right)^{\theta_1-2} dx \\
E\left(-\frac{\partial^2 \ln f_{GIED}(x;\theta_1,\lambda)}{\partial \lambda^2}\right) &= (\theta_1-1) \int_0^\infty \theta_1 \lambda x^{-4} e^{-2\lambda/x} \left(1-e^{-\lambda/x}\right)^{\theta_1-3} dx + \frac{1}{\lambda^2} \\
E\left(-\frac{\partial^2 \ln f_{U_{i1}\mid W_i}(u_{ij}\mid w_i)}{\partial \theta_1^2}\right) &= \frac{1}{\theta_1^2} \\
E\left(-\frac{\partial^2 \ln f_{U_{i1}\mid W_i}(u_{ij}\mid w_i)}{\partial \theta_1 \partial \lambda}\right) &= -\int_{w_i}^\infty \frac{\theta_1 \lambda x^{-3} e^{-2\lambda/x} \left(1-e^{-\lambda/x}\right)^{\theta_1-2}}{\left(1-e^{-\lambda/w_i}\right)^{\theta_1}}\, dx + \frac{e^{-\lambda/w_i}}{w_i\left(1-e^{-\lambda/w_i}\right)} \\
E\left(-\frac{\partial^2 \ln f_{U_{i1}\mid W_i}(u_{ij}\mid w_i)}{\partial \lambda^2}\right) &= -\frac{\theta_1 e^{-\lambda/w_i}}{w_i^2\left(1-e^{-\lambda/w_i}\right)^2} + \frac{1}{\lambda^2} + (\theta_1-1)\int_{w_i}^\infty \frac{\theta_1 \lambda x^{-4} e^{-2\lambda/x} \left(1-e^{-\lambda/x}\right)^{\theta_1-3}}{\left(1-e^{-\lambda/w_i}\right)^{\theta_1}}\, dx
\end{aligned}$$
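As a sanity check, each expectation can be evaluated numerically. The sketch below computes the third expression both with `scipy.integrate.quad` and by Monte Carlo over GIED draws (inverse-CDF sampling), at illustrative parameter values; the two results should agree closely:

```python
import numpy as np
from scipy.integrate import quad

theta, lam = 1.5, 0.5   # illustrative parameter values

# Closed-form integral for E(-d^2 ln f_GIED / d lambda^2).
integrand = lambda x: (theta * lam * x**-4 * np.exp(-2 * lam / x)
                       * (1 - np.exp(-lam / x)) ** (theta - 3))
val_quad = (theta - 1) * quad(integrand, 0, np.inf)[0] + 1 / lam**2

# Monte Carlo cross-check: sample X from the GIED by inverting
# F(x) = 1 - (1 - exp(-lam/x))**theta, then average the expectation's
# source expression 1/lam^2 + (theta-1) e^{-lam/x} / (x^2 (1-e^{-lam/x})^2).
rng = np.random.default_rng(3)
u = rng.uniform(1e-12, 1 - 1e-12, size=200_000)
x = -lam / np.log(1 - (1 - u) ** (1 / theta))
mc = np.mean(1 / lam**2 + (theta - 1) * np.exp(-lam / x)
             / (x**2 * (1 - np.exp(-lam / x)) ** 2))
```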

References

  1. Abouammoh, A.M.; Alshingiti, A.M. Reliability estimation of generalized inverted exponential distribution. J. Stat. Comput. Simul. 2009, 79, 1301–1315. [Google Scholar] [CrossRef]
  2. Singh, U.; Singh, S.K.; Singh, R.K. A comparative study of traditional estimation methods and maximum product spacings method in generalized inverted exponential distribution. J. Stat. Appl. Probab. 2014, 3, 153. [Google Scholar] [CrossRef]
  3. Al-Omari, A.I. Time truncated acceptance sampling plans for generalized inverted exponential distribution. Electron. J. Appl. Stat. Anal. 2015, 8, 1–12. [Google Scholar]
  4. Dube, M.; Krishna, H.; Garg, R. Generalized inverted exponential distribution under progressive first-failure censoring. J. Stat. Comput. Simul. 2016, 86, 1095–1114. [Google Scholar] [CrossRef]
  5. Kumar Singh, R.; Kumar Singh, S.; Singh, U. Maximum product spacings method for the estimation of parameters of generalized inverted exponential distribution under Progressive Type II Censoring. J. Stat. Manag. Syst. 2016, 19, 219–245. [Google Scholar] [CrossRef]
  6. Dey, S.; Dey, T.; Luckett, D.J. Statistical inference for the generalized inverted exponential distribution based on upper record values. Math. Comput. Simul. 2016, 120, 64–78. [Google Scholar] [CrossRef]
  7. Njeri, K.G.; Njenga, E.G. Maximum Likelihood Estimation for a Progressively Type II Censored Generalized Inverted Exponential Distribution via EM Algorithm. Am. J. Theor. Appl. Stat. 2021, 10, 14. [Google Scholar] [CrossRef]
  8. Singh, S.; Tripathi, Y.M.; Jun, C.H. Sampling plans based on truncated life test for a generalized inverted exponential distribution. Ind. Eng. Manag. Syst. 2015, 14, 183–195. [Google Scholar] [CrossRef] [Green Version]
  9. Balakrishnan, N.; Aggarwala, R. Progressive Censoring: Theory, Methods, and Applications; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2000. [Google Scholar]
  10. Balakrishnan, N.; Cramer, E. The Art of Progressive Censoring. Applications to Reliability and Quality; Birkhäuser: New York, NY, USA, 2014. [Google Scholar]
  11. Shi, X.; Liu, F.; Shi, Y. Bayesian inference for step-stress partially accelerated competing failure model under Type II progressive censoring. Math. Probl. Eng. 2016, 2016, 2097581. [Google Scholar] [CrossRef] [Green Version]
  12. Rasouli, A.; Balakrishnan, N. Exact likelihood inference for two exponential populations under joint progressive type-II censoring. Commun. Stat. Theory Methods 2010, 39, 2172–2191. [Google Scholar] [CrossRef]
  13. Balakrishnan, N.; Su, F.; Liu, K.Y. Exact Likelihood Inference for k Exponential Populations Under Joint Progressive Type-II Censoring. Commun. Stat. Simul. Comput. 2015, 44, 902–923. [Google Scholar] [CrossRef]
  14. Parsi, S.; Ganjali, M.; Farsipour, N.S. Conditional maximum likelihood and interval estimation for two Weibull populations under joint Type-II progressive censoring. Commun. Stat. Theory Methods 2011, 40, 2117–2135. [Google Scholar] [CrossRef]
  15. Mondal, S.; Kundu, D. Point and interval estimation of Weibull parameters based on joint progressively censored data. Sankhya B 2019, 81, 1–25. [Google Scholar] [CrossRef] [Green Version]
  16. Doostparast, M.; Ahmadi, M.V.; Ahmadi, J. Bayes Estimation Based on Joint Progressive Type II Censored Data Under LINEX Loss Function. Commun. Stat. Simul. Comput. 2013, 42, 1865–1886. [Google Scholar] [CrossRef]
  17. Parsi, S.; Bairamov, I. Expected values of the number of failures for two populations under joint Type-II progressive censoring. Comput. Stat. Data Anal. 2009, 53, 3560–3570. [Google Scholar] [CrossRef]
  18. Mondal, S.; Kundu, D. A new two sample type-II progressive censoring scheme. Commun. Stat. Theory Methods 2019, 48, 2602–2618. [Google Scholar] [CrossRef]
  19. Mondal, S.; Kundu, D. Exact inference on multiple exponential populations under a joint type-II progressive censoring scheme. Statistics 2019, 53, 1329–1356. [Google Scholar] [CrossRef]
  20. Mondal, S.; Kundu, D. On the joint Type-II progressive censoring scheme. Commun. Stat. Theory Methods 2020, 49, 958–976. [Google Scholar] [CrossRef]
  21. Louis, T.A. Finding the Observed Information Matrix When Using the EM Algorithm. J. R. Stat. Soc. Ser. B 1982, 44, 226–233. [Google Scholar]
  22. Efron, B.; Tibshirani, R. Bootstrap methods for standard errors, confidence intervals, and other measures of statistical accuracy. Stat. Sci. 1986, 1, 54–75. [Google Scholar] [CrossRef]
  23. Congdon, P. Applied Bayesian Modelling; John Wiley & Sons: Hoboken, NJ, USA, 2014. [Google Scholar]
  24. Xia, Z.; Yu, J.; Cheng, L.; Liu, L.; Wang, W. Study on the breaking strength of jute fibres using modified Weibull distribution. Compos. Part A Appl. Sci. Manuf. 2009, 40, 54–59. [Google Scholar] [CrossRef]
Figure 1. pdf of GIED.
Figure 2. Hazard function of GIED.
Figure 3. JPC scheme.
Table 1. The AEs and MSEs of the MLEs with $n = 20$, $m = 25$, $\theta_1 = 1$, $\theta_2 = 1$, $\lambda = 0.5$.

| k | Scheme | $\theta_1$ AE | $\theta_1$ MSE | $\theta_2$ AE | $\theta_2$ MSE | $\lambda$ AE | $\lambda$ MSE |
| 20 | (0^(4), 25, 0^(15)) | 1.890 | 0.791 | 0.464 | 0.287 | 0.573 | 0.005 |
| | (0^(9), 25, 0^(10)) | 1.812 | 0.659 | 1.582 | 0.338 | 0.584 | 0.007 |
| | (0^(14), 25, 0^(5)) | 0.853 | 0.022 | 0.340 | 0.436 | 0.848 | 0.121 |
| | (0^(5), 5^(5), 0^(10)) | 1.922 | 0.851 | 1.116 | 0.014 | 0.633 | 0.018 |
| | (25, 0^(19)) | 2.091 | 1.190 | 0.716 | 0.081 | 0.597 | 0.009 |
| | (2^(12), 1, 0^(7)) | 1.973 | 0.947 | 0.667 | 0.111 | 0.569 | 0.005 |
| 25 | (0^(4), 20, 0^(20)) | 1.879 | 0.772 | 1.492 | 0.242 | 0.653 | 0.023 |
| | (0^(9), 20, 0^(15)) | 1.725 | 0.526 | 0.790 | 0.044 | 0.803 | 0.092 |
| | (0^(14), 20, 0^(10)) | 0.783 | 0.047 | 0.615 | 0.148 | 0.892 | 0.153 |
| | (0^(5), 5^(4), 0^(16)) | 1.486 | 0.236 | 1.693 | 0.480 | 0.412 | 0.008 |
| | (20, 0^(14)) | 1.896 | 0.803 | 1.402 | 0.162 | 0.576 | 0.006 |
| | (2^(10), 1, 0^(15)) | 1.862 | 0.743 | 1.114 | 0.013 | 0.741 | 0.058 |
| 30 | (0^(4), 15, 0^(25)) | 1.825 | 0.681 | 0.764 | 0.056 | 0.643 | 0.020 |
| | (0^(9), 15, 0^(20)) | 1.414 | 0.172 | 0.512 | 0.238 | 0.849 | 0.121 |
| | (0^(14), 15, 0^(15)) | 1.219 | 0.048 | 0.683 | 0.101 | 0.622 | 0.015 |
| | (0^(5), 5^(6), 0^(19)) | 1.825 | 0.681 | 0.883 | 0.014 | 0.769 | 0.072 |
| | (15, 0^(29)) | 1.634 | 0.402 | 1.166 | 0.027 | 0.430 | 0.005 |
| | (2^(7), 1, 0^(22)) | 1.676 | 0.457 | 0.815 | 0.034 | 0.647 | 0.022 |
Table 2. The AEs and MSEs of the Bayes estimates for the informative prior under the squared error loss function.

| k | Scheme | $\theta_1$ AE | $\theta_1$ MSE | $\theta_2$ AE | $\theta_2$ MSE | $\lambda$ AE | $\lambda$ MSE |
| 20 | (0^(4), 25, 0^(15)) | 1.812 | 0.659 | 0.546 | 0.206 | 0.548 | 0.002 |
| | (0^(9), 25, 0^(10)) | 1.511 | 0.261 | 1.480 | 0.231 | 0.576 | 0.006 |
| | (0^(14), 25, 0^(5)) | 0.892 | 0.012 | 0.454 | 0.298 | 0.831 | 0.110 |
| | (0^(5), 5^(5), 0^(10)) | 1.865 | 0.748 | 1.042 | 0.002 | 0.585 | 0.007 |
| | (25, 0^(19)) | 2.072 | 1.149 | 0.619 | 0.145 | 0.551 | 0.003 |
| | (2^(12), 1, 0^(7)) | 1.814 | 0.663 | 0.757 | 0.059 | 0.566 | 0.004 |
| 25 | (0^(4), 20, 0^(20)) | 1.865 | 0.748 | 1.399 | 0.159 | 0.629 | 0.017 |
| | (0^(9), 20, 0^(15)) | 1.645 | 0.416 | 0.849 | 0.023 | 0.742 | 0.059 |
| | (0^(14), 20, 0^(10)) | 0.678 | 0.104 | 0.752 | 0.062 | 0.796 | 0.088 |
| | (0^(5), 5^(4), 0^(16)) | 1.449 | 0.201 | 1.579 | 0.336 | 0.433 | 0.005 |
| | (20, 0^(14)) | 1.883 | 0.780 | 1.320 | 0.103 | 0.547 | 0.002 |
| | (2^(10), 1, 0^(15)) | 1.852 | 0.726 | 1.104 | 0.011 | 0.704 | 0.042 |
| 30 | (0^(4), 15, 0^(25)) | 1.837 | 0.701 | 0.846 | 0.024 | 0.607 | 0.011 |
| | (0^(9), 15, 0^(20)) | 1.479 | 0.230 | 0.601 | 0.159 | 0.852 | 0.124 |
| | (0^(14), 15, 0^(15)) | 1.218 | 0.048 | 0.650 | 0.123 | 0.607 | 0.011 |
| | (0^(5), 5^(6), 0^(19)) | 1.873 | 0.762 | 0.928 | 0.005 | 0.747 | 0.061 |
| | (15, 0^(29)) | 1.764 | 0.584 | 1.103 | 0.011 | 0.443 | 0.003 |
| | (2^(7), 1, 0^(22)) | 1.568 | 0.322 | 0.864 | 0.018 | 0.596 | 0.009 |
Table 3. The AEs and MSEs of the Bayes estimates for the informative prior under the linex loss function.

| k | Scheme | $\theta_1$ AE | $\theta_1$ MSE | $\theta_2$ AE | $\theta_2$ MSE | $\lambda$ AE | $\lambda$ MSE |
| 20 | (0^(4), 25, 0^(15)) | 1.746 | 0.557 | 1.407 | 0.166 | 0.546 | 0.002 |
| | (0^(9), 25, 0^(10)) | 1.433 | 0.188 | 1.068 | 0.005 | 0.573 | 0.005 |
| | (0^(14), 25, 0^(5)) | 0.784 | 0.047 | 0.738 | 0.069 | 0.815 | 0.099 |
| | (0^(5), 5^(5), 0^(10)) | 1.847 | 0.718 | 1.465 | 0.216 | 0.583 | 0.007 |
| | (25, 0^(19)) | 1.878 | 0.771 | 1.910 | 0.828 | 0.547 | 0.002 |
| | (2^(12), 1, 0^(7)) | 1.872 | 0.761 | 1.585 | 0.343 | 0.564 | 0.004 |
| 25 | (0^(4), 20, 0^(20)) | 1.693 | 0.480 | 2.218 | 1.484 | 0.625 | 0.016 |
| | (0^(9), 20, 0^(15)) | 1.541 | 0.292 | 1.994 | 0.988 | 0.737 | 0.056 |
| | (0^(14), 20, 0^(10)) | 0.614 | 0.149 | 0.915 | 0.007 | 0.774 | 0.075 |
| | (0^(5), 5^(4), 0^(16)) | 1.235 | 0.055 | 1.554 | 0.307 | 0.431 | 0.005 |
| | (20, 0^(14)) | 1.894 | 0.799 | 1.976 | 0.952 | 0.541 | 0.002 |
| | (2^(10), 1, 0^(15)) | 2.063 | 1.129 | 1.668 | 0.446 | 0.698 | 0.039 |
| 30 | (0^(4), 15, 0^(25)) | 1.763 | 0.581 | 1.422 | 0.178 | 0.603 | 0.011 |
| | (0^(9), 15, 0^(20)) | 1.227 | 0.051 | 1.178 | 0.032 | 0.819 | 0.102 |
| | (0^(14), 15, 0^(15)) | 1.024 | 0.001 | 1.030 | 0.001 | 0.592 | 0.009 |
| | (0^(5), 5^(6), 0^(19)) | 1.984 | 0.969 | 1.299 | 0.090 | 0.744 | 0.059 |
| | (15, 0^(29)) | 1.825 | 0.681 | 1.604 | 0.364 | 0.441 | 0.003 |
| | (2^(7), 1, 0^(22)) | 1.639 | 0.408 | 1.433 | 0.188 | 0.592 | 0.008 |
Table 4. The AEs and MSEs of the Bayes estimates for the non-informative prior under the squared error loss function.

| k | Scheme | $\theta_1$ AE | $\theta_1$ MSE | $\theta_2$ AE | $\theta_2$ MSE | $\lambda$ AE | $\lambda$ MSE |
| 20 | (0^(4), 25, 0^(15)) | 2.026 | 1.053 | 0.824 | 0.031 | 0.442 | 0.003 |
| | (0^(9), 25, 0^(10)) | 1.338 | 0.114 | 0.363 | 0.406 | 0.462 | 0.001 |
| | (0^(14), 25, 0^(5)) | 0.738 | 0.069 | 0.664 | 0.113 | 0.747 | 0.061 |
| | (0^(5), 5^(5), 0^(10)) | 1.476 | 0.227 | 1.425 | 0.181 | 0.546 | 0.002 |
| | (25, 0^(19)) | 2.038 | 1.078 | 2.003 | 1.006 | 0.558 | 0.003 |
| | (2^(12), 1, 0^(7)) | 2.218 | 1.484 | 0.741 | 0.067 | 0.462 | 0.001 |
| 25 | (0^(4), 20, 0^(20)) | 1.203 | 0.041 | 0.746 | 0.064 | 0.619 | 0.014 |
| | (0^(9), 20, 0^(15)) | 1.738 | 0.545 | 0.757 | 0.059 | 0.795 | 0.087 |
| | (0^(14), 20, 0^(10)) | 1.320 | 0.103 | 0.536 | 0.215 | 0.686 | 0.035 |
| | (0^(5), 5^(4), 0^(16)) | 1.526 | 0.276 | 0.919 | 0.006 | 0.570 | 0.005 |
| | (20, 0^(14)) | 2.286 | 1.654 | 1.658 | 0.433 | 0.632 | 0.018 |
| | (2^(10), 1, 0^(15)) | 1.135 | 0.018 | 0.881 | 0.014 | 0.420 | 0.006 |
| 30 | (0^(4), 15, 0^(25)) | 1.622 | 0.387 | 1.130 | 0.017 | 0.633 | 0.018 |
| | (0^(9), 15, 0^(20)) | 1.396 | 0.157 | 1.093 | 0.009 | 0.527 | 0.001 |
| | (0^(14), 15, 0^(15)) | 1.443 | 0.196 | 0.738 | 0.069 | 0.569 | 0.005 |
| | (0^(5), 5^(6), 0^(19)) | 1.257 | 0.066 | 1.080 | 0.006 | 0.527 | 0.001 |
| | (15, 0^(29)) | 1.620 | 0.385 | 0.964 | 0.001 | 0.368 | 0.017 |
| | (2^(7), 1, 0^(22)) | 1.879 | 0.773 | 1.580 | 0.336 | 0.454 | 0.002 |
Table 5. The AEs and MSEs of the Bayes estimates for the non-informative prior under the linex loss function.

| k | Scheme | $\theta_1$ AE | $\theta_1$ MSE | $\theta_2$ AE | $\theta_2$ MSE | $\lambda$ AE | $\lambda$ MSE |
| 20 | (0^(4), 25, 0^(15)) | 1.546 | 0.298 | 1.553 | 0.305 | 0.441 | 0.004 |
| | (0^(9), 25, 0^(10)) | 1.047 | 0.002 | 0.977 | 0.001 | 0.472 | 0.001 |
| | (0^(14), 25, 0^(5)) | 0.661 | 0.115 | 0.841 | 0.025 | 0.725 | 0.051 |
| | (0^(5), 5^(5), 0^(10)) | 1.232 | 0.054 | 1.186 | 0.035 | 0.532 | 0.001 |
| | (25, 0^(19)) | 1.695 | 0.484 | 1.970 | 0.942 | 0.556 | 0.003 |
| | (2^(12), 1, 0^(7)) | 1.643 | 0.414 | 1.423 | 0.179 | 0.473 | 0.001 |
| 25 | (0^(4), 20, 0^(20)) | 1.108 | 0.012 | 1.790 | 0.625 | 0.615 | 0.013 |
| | (0^(9), 20, 0^(15)) | 1.615 | 0.378 | 1.575 | 0.331 | 0.789 | 0.083 |
| | (0^(14), 20, 0^(10)) | 1.143 | 0.021 | 0.800 | 0.040 | 0.673 | 0.030 |
| | (0^(5), 5^(4), 0^(16)) | 1.222 | 0.049 | 1.768 | 0.590 | 0.567 | 0.004 |
| | (20, 0^(14)) | 1.923 | 0.852 | 2.091 | 1.189 | 0.628 | 0.016 |
| | (2^(10), 1, 0^(15)) | 0.957 | 0.002 | 1.051 | 0.003 | 0.414 | 0.007 |
| 30 | (0^(4), 15, 0^(25)) | 1.452 | 0.204 | 1.695 | 0.482 | 0.628 | 0.016 |
| | (0^(9), 15, 0^(20)) | 1.233 | 0.054 | 1.248 | 0.061 | 0.522 | 0.001 |
| | (0^(14), 15, 0^(15)) | 1.322 | 0.104 | 0.879 | 0.015 | 0.557 | 0.003 |
| | (0^(5), 5^(6), 0^(19)) | 1.071 | 0.005 | 0.953 | 0.002 | 0.523 | 0.001 |
| | (15, 0^(29)) | 1.420 | 0.177 | 1.635 | 0.403 | 0.365 | 0.018 |
| | (2^(7), 1, 0^(22)) | 1.544 | 0.296 | 1.445 | 0.198 | 0.451 | 0.002 |
Table 6. The interval estimations.

| k | Scheme | Parameter | Bootstrap AL | Bootstrap CP | IP AL | IP CP | NIP AL | NIP CP |
| 25 | (0^(4), 20, 0^(20)) | $\theta_1$ | 2.474 | 85.9 | 1.742 | 88.6 | 2.318 | 86.3 |
| | | $\theta_2$ | 0.751 | 86.1 | 0.739 | 87.3 | 0.538 | 88.7 |
| | | $\lambda$ | 0.223 | 84.3 | 0.118 | 84.2 | 0.136 | 82.9 |
| | (0^(9), 20, 0^(15)) | $\theta_1$ | 1.623 | 81.4 | 1.391 | 83.6 | 1.591 | 82.8 |
| | | $\theta_2$ | 0.884 | 80.6 | 0.932 | 82.1 | 0.663 | 82.6 |
| | | $\lambda$ | 0.167 | 80.3 | 0.142 | 84.7 | 0.163 | 81.2 |
| | (0^(14), 20, 0^(10)) | $\theta_1$ | 0.981 | 85.2 | 0.914 | 88.1 | 0.946 | 87.4 |
| | | $\theta_2$ | 0.732 | 87.3 | 0.774 | 89.2 | 0.692 | 88.8 |
| | | $\lambda$ | 0.452 | 82.4 | 0.292 | 85.5 | 0.327 | 84.1 |
| | (0^(5), 5^(4), 0^(16)) | $\theta_1$ | 1.638 | 81.5 | 1.508 | 83.8 | 1.688 | 81.9 |
| | | $\theta_2$ | 0.519 | 83.8 | 0.624 | 86.1 | 0.366 | 85.1 |
| | | $\lambda$ | 0.206 | 84.4 | 0.163 | 83.7 | 0.120 | 82.3 |
| | (20, 0^(14)) | $\theta_1$ | 1.547 | 80.4 | 1.508 | 82.8 | 1.598 | 81.5 |
| | | $\theta_2$ | 0.982 | 81.6 | 0.624 | 81.7 | 1.392 | 80.2 |
| | | $\lambda$ | 0.186 | 82.7 | 0.163 | 85.2 | 0.109 | 83.9 |
| | (2^(10), 1, 0^(15)) | $\theta_1$ | 1.472 | 85.2 | 1.126 | 84.6 | 1.406 | 86.2 |
| | | $\theta_2$ | 1.154 | 87.3 | 0.876 | 85.3 | 1.028 | 88.5 |
| | | $\lambda$ | 0.184 | 82.3 | 0.157 | 85.4 | 0.125 | 83.6 |
| 30 | (0^(4), 15, 0^(25)) | $\theta_1$ | 2.317 | 89.1 | 2.175 | 91.7 | 1.454 | 90.2 |
| | | $\theta_2$ | 1.135 | 90.2 | 0.726 | 92.4 | 1.021 | 91.3 |
| | | $\lambda$ | 0.143 | 88.2 | 0.138 | 87.4 | 0.137 | 86.2 |
| | (0^(9), 15, 0^(20)) | $\theta_1$ | 1.432 | 84.6 | 1.537 | 86.8 | 1.000 | 85.7 |
| | | $\theta_2$ | 1.275 | 83.9 | 1.037 | 85.2 | 1.162 | 85.9 |
| | | $\lambda$ | 0.261 | 83.7 | 0.229 | 88.1 | 0.182 | 84.5 |
| | (0^(14), 15, 0^(15)) | $\theta_1$ | 1.427 | 88.3 | 1.447 | 93.2 | 1.008 | 90.4 |
| | | $\theta_2$ | 0.982 | 91.5 | 0.675 | 94.3 | 0.913 | 92.1 |
| | | $\lambda$ | 0.518 | 85.4 | 0.256 | 89.6 | 0.516 | 87.3 |
| | (0^(5), 5^(6), 0^(19)) | $\theta_1$ | 1.214 | 84.8 | 1.168 | 87.1 | 0.910 | 85.2 |
| | | $\theta_2$ | 1.036 | 87.1 | 0.938 | 89.2 | 0.590 | 88.4 |
| | | $\lambda$ | 0.213 | 87.5 | 0.218 | 86.8 | 0.145 | 85.6 |
| | (15, 0^(29)) | $\theta_1$ | 1.529 | 83.3 | 1.467 | 85.9 | 1.566 | 83.8 |
| | | $\theta_2$ | 1.325 | 82.8 | 1.316 | 84.7 | 1.026 | 83.6 |
| | | $\lambda$ | 0.248 | 85.7 | 0.228 | 88.3 | 0.115 | 85.2 |
| | (2^(7), 1, 0^(22)) | $\theta_1$ | 1.993 | 88.4 | 1.632 | 87.1 | 1.986 | 89.5 |
| | | $\theta_2$ | 0.641 | 90.5 | 0.455 | 87.4 | 1.003 | 91.8 |
| | | $\lambda$ | 0.292 | 85.5 | 0.152 | 88.2 | 0.204 | 86.9 |
Table 7. Data Set 1 (Gauge length 10 mm).

| 43.93 | 50.16 | 101.15 | 108.94 | 123.06 | 141.38 |
| 151.48 | 163.4 | 177.25 | 183.16 | 212.13 | 257.44 |
| 262.9 | 291.27 | 303.9 | 323.83 | 353.24 | 376.42 |
| 383.43 | 422.11 | 506.6 | 530.55 | 590.48 | 637.66 |
| 671.49 | 693.73 | 700.74 | 704.66 | 727.23 | 778.17 |
Table 8. Data Set 2 (Gauge length 20 mm).

| 36.75 | 45.58 | 48.01 | 71.46 | 83.55 | 99.72 |
| 113.85 | 116.99 | 119.86 | 145.96 | 166.49 | 187.13 |
| 187.85 | 200.16 | 244.53 | 284.64 | 350.7 | 375.81 |
| 419.02 | 456.6 | 547.44 | 578.62 | 581.6 | 585.57 |
| 594.29 | 662.66 | 688.16 | 707.36 | 756.7 | 765.14 |
Table 9. MLEs and K-S distance.

| Data Set | $\hat{\theta}$ | $\hat{\lambda}$ | K-S Distance | p Value |
| Data set 1 | 1.841 | 0.293 | 0.121 | 0.798 |
| Data set 2 | 1.353 | 0.188 | 0.151 | 0.474 |
Table 10. MLEs under two schemes.

| Scheme | $\hat{\theta}_1$ | $\hat{\theta}_2$ | $\hat{\lambda}$ |
| (0^(14), 30, 0^(15)) | 1.611 | 2.854 | 0.258 |
| (0^(14), 15, 15, 0^(14)) | 1.530 | 2.916 | 0.250 |
Table 11. Bayesian estimates under the squared error loss function.

| Scheme | $\hat{\theta}_1$ | $\hat{\theta}_2$ | $\hat{\lambda}$ |
| (0^(14), 30, 0^(15)) | 1.575 | 2.489 | 0.154 |
| (0^(14), 15, 15, 0^(14)) | 1.831 | 2.509 | 0.149 |
Table 12. Bayesian estimates under the linex loss function.

| Scheme | $\hat{\theta}_1$ | $\hat{\theta}_2$ | $\hat{\lambda}$ |
| (0^(14), 30, 0^(15)) | 1.517 | 2.380 | 0.153 |
| (0^(14), 15, 15, 0^(14)) | 1.531 | 2.274 | 0.147 |
Table 13. The interval estimation of real data.

| Scheme | Parameter | Bootstrap LB | Bootstrap UB | NIP LB | NIP UB |
| (0^(14), 30, 0^(15)) | $\theta_1$ | 0.975 | 3.243 | 0.872 | 2.955 |
| | $\theta_2$ | 1.113 | 3.947 | 1.102 | 3.768 |
| | $\lambda$ | 0.104 | 0.341 | 0.096 | 0.329 |
| (0^(14), 15, 15, 0^(14)) | $\theta_1$ | 0.988 | 3.576 | 0.891 | 3.158 |
| | $\theta_2$ | 1.016 | 3.825 | 0.963 | 3.549 |
| | $\lambda$ | 0.091 | 0.324 | 0.086 | 0.314 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Chen, Q.; Gui, W. Statistical Inference of the Generalized Inverted Exponential Distribution under Joint Progressively Type-II Censoring. Entropy 2022, 24, 576. https://doi.org/10.3390/e24050576
