Article

E-Bayesian and H-Bayesian Inferences for a Simple Step-Stress Model with Competing Failure Model under Progressively Type-II Censoring

1  College of Science, Inner Mongolia University of Technology, Hohhot 010051, China
2  School of Statistics and Mathematics, Inner Mongolia University of Finance and Economics, Hohhot 010070, China
3  Institute of Mathematics and Statistics, Wuhan University, Wuhan 430072, China
*  Author to whom correspondence should be addressed.
Entropy 2022, 24(10), 1405; https://doi.org/10.3390/e24101405
Submission received: 2 August 2022 / Revised: 26 August 2022 / Accepted: 26 September 2022 / Published: 1 October 2022

Abstract

In this paper, we discuss the statistical analysis of a simple step-stress accelerated competing failure model under progressively Type-II censoring. It is assumed that there is more than one cause of failure and that the lifetime of the experimental units at each stress level follows an exponential distribution. The distribution functions under different stress levels are connected through the cumulative exposure model. The maximum likelihood, Bayesian, Expected Bayesian, and Hierarchical Bayesian estimations of the model parameters are derived under different loss functions. Based on Monte Carlo simulations, we also obtain the average lengths and the coverage probabilities of the 95% confidence intervals and highest posterior density credible intervals of the parameters. The numerical studies show that the proposed Expected Bayesian and Hierarchical Bayesian estimations have better performance in terms of the average estimates and mean squared errors, respectively. Finally, the methods of statistical inference discussed here are illustrated with a numerical example.

1. Introduction

In recent years, with the increasing reliability of products, it has become difficult to obtain sufficient failure data under normal operating conditions. Accelerated life testing (ALT) is conducted to overcome this difficulty. In ALT, the test units are subjected to higher-than-normal stress levels to shorten the total testing time, so that sufficient failure data can be obtained for reliability assessment. Authoritative books in the field of ALT include Nelson et al. [1], Nelson [2], Meeker et al. [3], and Bagdonavicius et al. [4]. Constant-stress accelerated life testing (C-SALT) and step-stress accelerated life testing (S-SALT) are two special types of ALT. S-SALT has the advantage of yielding more failure data in a limited testing time; it changes the stress level at a prefixed time or after a prefixed number of failures during the test. To analyze data from S-SALT, one requires a model that relates the lifetime distributions under different stress levels. The cumulative exposure model (CEM), first introduced by Sedyakin [5], is the most studied model in the literature. S-SALT under the CEM assumption has attracted great attention, for example in Balakrishnan et al. [6,7], Lee et al. [8], and Tang [9]. Furthermore, it is common that a product's failure may be due to more than one cause, and these causes compete with each other to produce the final failure; we call such possible failure causes competing failures. The competing failure model plays an important role, and competing failure data are analyzed by Cox [10] and David et al. [11]. In S-SALT, several researchers have discussed the competing failure model, such as Balakrishnan et al. [12], Beltrami [13,14], Shi and Liu [15], Srivastava et al. [16], Xu et al. [17], Zhang et al. [18,19], Ganguly et al. [20], Han et al. [21,22,23], Varghese et al. [24], Liu et al. [25], Abu-Zinadah et al. [26], and Aljohani et al. [27]. The maximum likelihood estimation (MLE) and the Bayesian estimation (BE) based on different loss functions (LFs) are the common inference methods for analyzing such data. The Hierarchical Bayesian estimation (H-BE) was first introduced by Lindley and Smith [28] and further examined by Han [29]. The Expected Bayesian estimation (E-BE) is the expectation of the BE over the hyper-parameters, and it was introduced by Han [30]. Han [31,32] derived the E-BE and H-BE of the reliability parameter from test data. However, there are few works concerning E-Bayesian and H-Bayesian inference for the step-stress accelerated competing failure model.
We discuss the E-Bayesian and H-Bayesian inference for a simple step-stress accelerated competing failure model, which has only two stress levels. The rest of the paper is organized as follows. Section 2 gives the model and basic assumptions. Asymptotic confidence intervals (CIs) and bootstrap confidence intervals (BCIs) are constructed in Section 3. Bayesian estimations (BEs) are derived under different LFs in Section 4, the E-BEs in Section 5, and the H-BEs in Section 6. A simulation study is provided in Section 7, and Section 8 illustrates the proposed methods with a numerical example. Finally, we provide concluding remarks and directions for future research.

2. Model Description and MLEs

In this section, we describe the simple step-stress life test and provide the MLEs of the unknown parameters based on the observed data.

2.1. Basic Assumption

To describe the simple S-SALT clearly, some assumptions are made as follows:
(1)
The unit fails due to only one of two independent competing failure causes, with latent failure times $T_1$ and $T_2$, respectively. A failure is recorded as the joint random variable $(T, \delta)$, where $T = \min(T_1, T_2)$ and the indicator of the failure cause is
$\delta = \begin{cases} 1, & \text{the failure is caused by the first cause}, \\ 2, & \text{the failure is caused by the second cause}. \end{cases}$
(2)
The lifetime follows an exponential distribution with scale parameter λ i j . Let 1 / λ i j be the mean time-to-failure of a test unit at the stress level S i by the failure cause j for i , j = 1 , 2 . The cumulative distribution function (CDF) and probability density function (PDF) are given as follows, respectively
$F_{ij}(t) = F(t;\lambda_{ij}) = 1 - \exp(-\lambda_{ij} t)$
$f_{ij}(t) = f(t;\lambda_{ij}) = \lambda_{ij}\exp(-\lambda_{ij} t)$
where t 0 , λ i j > 0 and i , j = 1 , 2 .
(3)
The scale parameter λ i j agrees with a log-linear function of stress
$\ln \lambda_{ij} = a_j + b_j \varphi(S_i)$
where $a_j$ and $b_j$ are unknown coefficient parameters and $\varphi(S_i) = 1/S_i$ is chosen as the Arrhenius model [2], $i, j = 1, 2$; however, this relation is not used explicitly in this paper.
(4)
The lifetime distributions at different stress levels are related through the CEM. The CEM assumes that the remaining lifetime of a unit depends only on the exposure it has accumulated so far, regardless of how that exposure was accumulated; that is, a unit that has worked for time $t_1$ under stress $S_1$ has the same failure probability as a unit that has worked for an equivalent time $t_2$ under stress $S_2$. At the time $\tau$ when the stress level increases from $S_1$ to $S_2$, the CDF of the lifetime of a test unit that fails due to cause $j$ ($j = 1, 2$) can be written as follows:
$F_j(t) = \begin{cases} F_{1j}(t), & 0 < t < \tau, \\ F_{2j}(t - a_j), & t > \tau, \end{cases}$
where $F_{1j}$ and $F_{2j}$ are the CDFs of the lifetime of the test unit failed under stresses $S_1$ and $S_2$, respectively, and $a_j$ is such that it satisfies $F_{1j}(\tau) = F_{2j}(\tau - a_j)$, which gives $a_j = (1 - \lambda_{1j}/\lambda_{2j})\tau$.
Under these assumptions, the CDF of the lifetime of the test unit failed due to cause j is
$F_j(t) = F_j(t;\lambda_{1j},\lambda_{2j}) = \begin{cases} 1 - \exp(-\lambda_{1j} t), & 0 < t < \tau, \\ 1 - \exp\{-(\lambda_{1j} - \lambda_{2j})\tau - \lambda_{2j} t\}, & t > \tau, \end{cases}$
and the corresponding PDF is given by
$f_j(t) = f_j(t;\lambda_{1j},\lambda_{2j}) = \begin{cases} \lambda_{1j}\exp(-\lambda_{1j} t), & 0 < t < \tau, \\ \lambda_{2j}\exp\{-(\lambda_{1j} - \lambda_{2j})\tau - \lambda_{2j} t\}, & t > \tau, \end{cases}$
where j = 1 , 2 .
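As a quick illustration of assumptions (1)-(4), the following Python sketch (an illustrative helper of our own, not part of the paper) draws one competing-risks observation under the CEM by inverting the cause-specific CDF above for each latent cause and taking the minimum; the function names, the fixed stress-change time tau, and the parameter values are assumptions made only for this example.

```python
import numpy as np

def cem_inverse_cdf(u, lam1, lam2, tau):
    """Invert the cause-specific CEM CDF:
    F_j(t) = 1 - exp(-lam1*t)                      for t <= tau,
           = 1 - exp(-(lam1 - lam2)*tau - lam2*t)  for t  > tau."""
    t = -np.log(1.0 - u) / lam1                 # candidate under stress S1
    if t <= tau:
        return t
    # otherwise solve the second branch for t > tau
    return (-np.log(1.0 - u) - (lam1 - lam2) * tau) / lam2

def draw_competing_lifetime(lams, tau, rng):
    """lams[i-1][j-1] = lambda_ij; return (T, delta) with T = min(T1, T2)."""
    u1, u2 = rng.random(2)
    t1 = cem_inverse_cdf(u1, lams[0][0], lams[1][0], tau)   # cause 1
    t2 = cem_inverse_cdf(u2, lams[0][1], lams[1][1], tau)   # cause 2
    return (t1, 1) if t1 <= t2 else (t2, 2)

rng = np.random.default_rng(0)
lams = [[2.0, 1.0], [4.0, 2.0]]   # lambda_11, lambda_12; lambda_21, lambda_22
print(draw_competing_lifetime(lams, tau=0.3, rng=rng))
```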

2.2. Model Description

Under a progressively Type-Ⅱ censored (PT-II-C) scheme, the simple S-SALT is described as follows:
The $n$ test units are placed on test under the initial stress level $S_1$. At the first failure time $t_{1:n}$, $R_1$ units are removed from the remaining $n-1$ units and the data $(t_{1:n}, \delta_1, R_1)$ are recorded. The test continues in this way until the time $t_{N_1:n}$, when $R_{N_1}$ units are removed and the data $(t_{N_1:n}, \delta_{N_1}, R_{N_1})$ are recorded. The stress level is then increased to $S_2$, and the remaining $(n - N_1 - R_1 - \cdots - R_{N_1})$ units continue to be tested. At the time of the $(N_1+1)$th failure, $R_{N_1+1}$ units are removed, giving the sample $(t_{N_1+1:n}, \delta_{N_1+1}, R_{N_1+1})$. The test continues until the $(N_1+N_2)$th failure is observed, the remaining $R_{N_1+N_2}$ units are removed, and the test terminates. Here, $N_1$, $N_2$, $R_1, \ldots, R_{N_1+N_2}$ (with $N_1 + N_2 + R_1 + \cdots + R_{N_1+N_2} = n$) are prefixed. Therefore, under the PT-II-C scheme, the observed data for the simple S-SALT are:
S 1 : ( t 1 : n , δ 1 , R 1 ) , ( t 2 : n , δ 2 , R 2 ) , , ( t N 1 : n , δ N 1 , R N 1 ) ; S 2 : ( t N 1 + 1 : n , δ N 1 + 1 , R N 1 + 1 ) , ( t N 1 + 2 : n , δ N 1 + 2 , R N 1 + 2 ) , , ( t N 1 + N 2 : n , δ N 1 + N 2 , R N 1 + N 2 ) .
Here, t 1 : n , , t N 1 + N 2 : n are order statistics, δ i { 1 , 2 } , i = 1 , 2 , , N 1 + N 2 .

2.3. Maximum Likelihood Estimates

Based on the assumptions (1)–(4) and the lifetime T = min ( T 1 , T 2 ) of the test unit, the CDF and PDF of T can be obtained as follows:
$F_T(t) = 1 - \prod_{j=1}^{2}[1 - F_j(t)] = \begin{cases} 1 - \exp\{-(\lambda_{11}+\lambda_{12})t\}, & 0 \le t \le \tau, \\ 1 - \exp\{-(\lambda_{11}+\lambda_{12}-\lambda_{21}-\lambda_{22})\tau - (\lambda_{21}+\lambda_{22})t\}, & t > \tau, \end{cases}$
$f_T(t) = \begin{cases} (\lambda_{11}+\lambda_{12})\exp\{-(\lambda_{11}+\lambda_{12})t\}, & 0 \le t \le \tau, \\ (\lambda_{21}+\lambda_{22})\exp\{-(\lambda_{11}+\lambda_{12}-\lambda_{21}-\lambda_{22})\tau - (\lambda_{21}+\lambda_{22})t\}, & t > \tau. \end{cases}$
Then the joint distribution of ( T , δ ) is given by
$f_{T,\delta}(t,j) = f_j(t)\left[1 - F_{j'}(t)\right] = \begin{cases} \lambda_{1j}\exp\{-(\lambda_{11}+\lambda_{12})t\}, & 0 \le t \le \tau, \\ \lambda_{2j}\exp\{-(\lambda_{11}+\lambda_{12}-\lambda_{21}-\lambda_{22})\tau - (\lambda_{21}+\lambda_{22})t\}, & t > \tau, \end{cases}$
where $j, j' = 1, 2$ and $j \ne j'$.
With the life-testing scheme described above, the following ordered failure time will be observed:
t 1 : n < t 2 : n < < t N 1 : n < t N 1 + 1 : n < < t N 1 + N 2 : n
Here, $n_{1j}$ is the number of units that fail under stress $S_1$ due to failure cause $j$, and $n_{2j}$ is the number of units that fail under stress $S_2$ due to failure cause $j$, for $j = 1, 2$, so that $N_1 = n_{11} + n_{12}$ and $N_2 = n_{21} + n_{22}$. Let the observed failure times be $t = (t_{1:n}, t_{2:n}, \ldots, t_{N_1:n}, t_{N_1+1:n}, \ldots, t_{N_1+N_2:n})$ and $\Theta = (\lambda_{11}, \lambda_{12}, \lambda_{21}, \lambda_{22})$. Then, the likelihood function can be written as
$L(\Theta \mid t) \propto \prod_{i=1}^{N_1} f_{T,\delta}(t_{i:n},\delta_i)\left[1 - F_T(t_{i:n})\right]^{R_i} \prod_{i=N_1+1}^{N_1+N_2} f_{T,\delta}(t_{i:n},\delta_i)\left[1 - F_T(t_{i:n})\right]^{R_i} \propto \prod_{i=1}^{2}\prod_{j=1}^{2} \lambda_{ij}^{n_{ij}} \exp\{-(\lambda_{11}+\lambda_{12})(T_1+T_{21}) - (\lambda_{21}+\lambda_{22})T_{22}\}$
where $T_1 = \sum_{i=1}^{N_1}(1+R_i)t_{i:n}$, $T_{21} = \sum_{i=N_1+1}^{N_1+N_2}(1+R_i)\tau$, $T_{22} = \sum_{i=N_1+1}^{N_1+N_2}(1+R_i)(t_{i:n}-\tau)$, and $\tau = t_{N_1:n}$.
By using Equation (10), the log-likelihood function can be written as:
$l = \sum_{i=1}^{2}\sum_{j=1}^{2} n_{ij}\ln\lambda_{ij} - (\lambda_{11}+\lambda_{12})(T_1+T_{21}) - (\lambda_{21}+\lambda_{22})T_{22}$
Taking the first partial derivative of Equation (11) with respect to each $\lambda_{ij}$ and setting it equal to zero, we have
$\frac{\partial l}{\partial \lambda_{11}} = \frac{n_{11}}{\lambda_{11}} - (T_1+T_{21}) = 0, \quad \frac{\partial l}{\partial \lambda_{12}} = \frac{n_{12}}{\lambda_{12}} - (T_1+T_{21}) = 0, \quad \frac{\partial l}{\partial \lambda_{21}} = \frac{n_{21}}{\lambda_{21}} - T_{22} = 0, \quad \frac{\partial l}{\partial \lambda_{22}} = \frac{n_{22}}{\lambda_{22}} - T_{22} = 0$
Using simple algebra calculations, we obtain
$\hat{\lambda}_{1j}^{(MLE)} = \frac{n_{1j}}{T_1+T_{21}}$
$\hat{\lambda}_{2j}^{(MLE)} = \frac{n_{2j}}{T_{22}}$
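The MLEs above are simple plug-in formulas once $T_1$, $T_{21}$, $T_{22}$ and the failure counts $n_{ij}$ have been computed from the PT-II-C data. The following Python sketch (an illustrative helper we supply; the function name and data layout are assumptions) computes them:

```python
import numpy as np

def mle_step_stress(times, causes, R, N1):
    """MLEs of lambda_ij from PT-II-C data (Equations (13)-(14)).
    times  : ordered failure times t_{1:n} <= ... <= t_{N1+N2:n}
    causes : failure cause (1 or 2) for each failure
    R      : number of units removed at each failure
    N1     : number of failures observed under stress S1"""
    times, causes, R = map(np.asarray, (times, causes, R))
    tau = times[N1 - 1]                                  # stress-change time t_{N1:n}
    T1  = np.sum((1 + R[:N1]) * times[:N1])
    T21 = np.sum((1 + R[N1:]) * tau)
    T22 = np.sum((1 + R[N1:]) * (times[N1:] - tau))
    n = {(1, 1): int(np.sum(causes[:N1] == 1)), (1, 2): int(np.sum(causes[:N1] == 2)),
         (2, 1): int(np.sum(causes[N1:] == 1)), (2, 2): int(np.sum(causes[N1:] == 2))}
    lam = {(1, j): n[(1, j)] / (T1 + T21) for j in (1, 2)}
    lam.update({(2, j): n[(2, j)] / T22 for j in (1, 2)})
    return lam, n, (T1, T21, T22)
```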

3. Interval Estimations

In this section, we propose the asymptotic confidence intervals (ACIs) and the bootstrap confidence intervals (BCIs) for the parameters λ i j .

3.1. Asymptotic Confidence Intervals (ACIs)

According to the asymptotic likelihood theory, we get the information matrix of Θ , the elements of which are
$I_{11} = -\frac{\partial^2 l}{\partial\lambda_{11}^2}\Big|_{\lambda_{11}=\hat{\lambda}_{11}} = \frac{n_{11}}{\hat{\lambda}_{11}^2}, \quad I_{22} = -\frac{\partial^2 l}{\partial\lambda_{12}^2}\Big|_{\lambda_{12}=\hat{\lambda}_{12}} = \frac{n_{12}}{\hat{\lambda}_{12}^2}, \quad I_{33} = -\frac{\partial^2 l}{\partial\lambda_{21}^2}\Big|_{\lambda_{21}=\hat{\lambda}_{21}} = \frac{n_{21}}{\hat{\lambda}_{21}^2}, \quad I_{44} = -\frac{\partial^2 l}{\partial\lambda_{22}^2}\Big|_{\lambda_{22}=\hat{\lambda}_{22}} = \frac{n_{22}}{\hat{\lambda}_{22}^2}, \quad I_{ij} = I_{ji} = 0,\ i \ne j,\ i, j = 1, 2, 3, 4$
Then, the observed Fisher information matrix of Θ is that
$\hat{I}(\Theta) = \begin{pmatrix} I_{11} & \cdots & I_{14} \\ \vdots & \ddots & \vdots \\ I_{41} & \cdots & I_{44} \end{pmatrix}$
The approximate asymptotic variance-covariance matrix is given by $\hat{I}(\Theta)^{-1}$; denote $\hat{V}(\Theta) = \hat{I}(\Theta)^{-1} = \mathrm{Diag}\left(\frac{\hat{\lambda}_{11}^2}{n_{11}}, \frac{\hat{\lambda}_{12}^2}{n_{12}}, \frac{\hat{\lambda}_{21}^2}{n_{21}}, \frac{\hat{\lambda}_{22}^2}{n_{22}}\right)$. Therefore, the $100(1-\gamma)\%$ ACIs for the parameter $\lambda_{ij}$ are
$\left(\hat{\lambda}_{ij} - Z_{\gamma/2}\frac{\hat{\lambda}_{ij}}{\sqrt{n_{ij}}},\ \hat{\lambda}_{ij} + Z_{\gamma/2}\frac{\hat{\lambda}_{ij}}{\sqrt{n_{ij}}}\right)$
where Z γ / 2 is the γ / 2 th upper percentile of the standard normal distribution.
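A minimal sketch of this interval, assuming the MLE and the corresponding failure count are already available (the function name and the test values are ours):

```python
import numpy as np
from scipy.stats import norm

def asymptotic_ci(lam_hat, n_ij, gamma=0.05):
    """100(1-gamma)% asymptotic CI for lambda_ij (Equation (16)):
    lam_hat -/+ z_{gamma/2} * lam_hat / sqrt(n_ij)."""
    z = norm.ppf(1 - gamma / 2)
    half = z * lam_hat / np.sqrt(n_ij)
    return lam_hat - half, lam_hat + half

print(asymptotic_ci(2.1, 15))   # hypothetical values of lam_hat and n_ij
```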

3.2. Bootstrap Confidence Intervals (BCIs)

3.2.1. Bootstrap-p Method

Efron (1982) proposed a Bootstrap-p (Percentile bootstrap) method, and the bootstrap-p samples are generated as follows:
Step 1. Given n , N 1 , N 2 and progressive censored scheme ( R 1 , , R N 1 + N 2 ) , compute MLEs λ ^ i j of unknown parameters λ i j ( i , j = 1 , 2 ) based on the PT-IIC data ( t 1 : n , δ 1 , R 1 ) , , ( t N 1 : n , δ N 1 , R N 1 ) , ( t N 1 + 1 : n , δ N 1 + 1 , R N 1 + 1 ) , , ( t N 1 + N 2 : n , δ N 1 + N 2 , R N 1 + N 2 ) .
Step 2. Generate a bootstrap sample $(t_{1:n}, \delta_1, R_1), \ldots, (t_{N_1:n}, \delta_{N_1}, R_{N_1}), (t_{N_1+1:n}, \delta_{N_1+1}, R_{N_1+1}), \ldots, (t_{N_1+N_2:n}, \delta_{N_1+N_2}, R_{N_1+N_2})$ by using $\hat{\lambda}_{ij}$, $N_1$, $N_2$, and $(R_1, \ldots, R_{N_1}, \ldots, R_{N_1+N_2})$, and calculate the new MLE of $\lambda_{ij}$, denoted $\hat{\lambda}_{ij}^{[1]}$ ($i, j = 1, 2$), from Equation (13).
Step 3. Repeat Step 2 $N$ times, where $N$ is the number of bootstrap samples, to obtain the estimators $\hat{\lambda}_{ij}^{[m]}$ ($i, j = 1, 2$; $m = 1, \ldots, N$).
Step 4. Arrange the $\hat{\lambda}_{ij}^{[m]}$ in ascending order: $\hat{\lambda}_{ij}^{[1]} < \hat{\lambda}_{ij}^{[2]} < \cdots < \hat{\lambda}_{ij}^{[N]}$, $i, j = 1, 2$.
Step 5. Obtain the two-sided $100(1-\gamma)\%$ Bootstrap-p confidence intervals (BPCIs) for the parameters as:
$\left(\hat{\lambda}_{ij}^{[N\gamma/2]},\ \hat{\lambda}_{ij}^{[N(1-\gamma/2)]}\right)$
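Assuming Steps 1-3 have already produced a vector of bootstrap MLEs, Steps 4-5 reduce to sorting and reading off empirical quantiles; a small Python sketch (names are our own):

```python
import numpy as np

def bootstrap_p_ci(boot_estimates, gamma=0.05):
    """Percentile (Bootstrap-p) CI from N bootstrap MLEs of lambda_ij:
    sort the replicates and take the gamma/2 and 1-gamma/2 empirical quantiles."""
    lam_sorted = np.sort(np.asarray(boot_estimates))
    N = lam_sorted.size
    lo = lam_sorted[int(np.floor(N * gamma / 2))]
    hi = lam_sorted[min(int(np.ceil(N * (1 - gamma / 2))), N) - 1]
    return lo, hi
```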

3.2.2. Bootstrap-t Method

Hall (1988) developed Bootstrap-t method, and the Bootstrap-t samples are generated as follows:
Step 1. Same as Bootstrap-p method step 1.
Step 2. Same as Bootstrap-p method step 2.
Step 3. Obtain the bootstrap-t statistic $\hat{T}_{ij} = \dfrac{\hat{\lambda}_{ij}^{[1]} - \hat{\lambda}_{ij}}{\sqrt{\operatorname{Var}(\hat{\lambda}_{ij})}}$ for the parameter $\lambda_{ij}$, and denote it $\hat{T}_{ij}^{[1]}$ ($i, j = 1, 2$).
Step 4. Repeat Steps 2-3 $N$ times, where $N$ is the number of bootstrap samples, to obtain the estimators $\hat{T}_{ij}^{[m]}$ ($i, j = 1, 2$; $m = 1, \ldots, N$).
Step 5. Arrange T ^ i j [ m ] ( i , j = 1 , 2 ; m = 1 , , N ) in ascending order, and obtain the credible interval of the parameter λ i j , then T ^ i j [ 1 ] < T ^ i j [ 2 ] < < T ^ i j [ N ] ( i , j = 1 , 2 ) .
Step 6. Obtain the two-sided $100(1-\gamma)\%$ Bootstrap-t confidence intervals (BTCIs) for the parameters as
$\left(\hat{\lambda}_{ij} - \hat{T}_{ij}^{[(1-\gamma/2)N]}\sqrt{\operatorname{Var}(\hat{\lambda}_{ij})},\ \hat{\lambda}_{ij} - \hat{T}_{ij}^{[\gamma/2\,N]}\sqrt{\operatorname{Var}(\hat{\lambda}_{ij})}\right)$
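A matching sketch for the Bootstrap-t interval, assuming the original MLE, its variance estimate $\operatorname{Var}(\hat{\lambda}_{ij}) = \hat{\lambda}_{ij}^2/n_{ij}$ from Section 3.1, and the bootstrap replicates are available (function and argument names are ours):

```python
import numpy as np

def bootstrap_t_ci(lam_hat, var_hat, boot_estimates, gamma=0.05):
    """Bootstrap-t CI (Equation (18)): studentize each replicate with
    sqrt(var_hat), sort the T statistics, and invert around lam_hat."""
    se = np.sqrt(var_hat)
    T = np.sort((np.asarray(boot_estimates) - lam_hat) / se)
    N = T.size
    t_lo = T[int(np.floor(N * gamma / 2))]
    t_hi = T[min(int(np.ceil(N * (1 - gamma / 2))), N) - 1]
    return lam_hat - t_hi * se, lam_hat - t_lo * se
```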

4. Bayesian Analysis

In this section, we consider the BEs of λ i j based on the squared error loss function (SELF), entropy loss function (ELF), and LINEX (linear-exponential) loss function (LLF). The loss functions expressions are in the Appendix A.
As conjugate priors, independent gamma distributions $Ga(\alpha_{ij}, \beta_{ij})$ are chosen for $\lambda_{ij}$, so
$\pi(\lambda_{ij} \mid \alpha_{ij}, \beta_{ij}) = \frac{\beta_{ij}^{\alpha_{ij}}}{\Gamma(\alpha_{ij})}\lambda_{ij}^{\alpha_{ij}-1}e^{-\beta_{ij}\lambda_{ij}} \propto \lambda_{ij}^{\alpha_{ij}-1}e^{-\beta_{ij}\lambda_{ij}}, \qquad \lambda_{ij} \ge 0$
Based on Equations (10) and (19), the joint posterior density function of λ i j can be written as:
$\pi(\lambda_{11},\lambda_{12},\lambda_{21},\lambda_{22} \mid \xi) = \frac{\prod_{i=1}^{2}\prod_{j=1}^{2} L(t \mid \lambda_{ij})\pi(\lambda_{ij})}{\prod_{i=1}^{2}\prod_{j=1}^{2}\int_0^{+\infty} L(t \mid \lambda_{ij})\pi(\lambda_{ij})\,d\lambda_{ij}}$
where ξ = ( n i j , t i , R i , τ , i = 1 , , N ; j = 1 , 2 ) .
The marginal posterior distribution of λ i j is
$\pi(\lambda_{1j} \mid \xi) = \frac{L(t \mid \lambda_{1j})\pi(\lambda_{1j})}{\int_0^{+\infty} L(t \mid \lambda_{1j})\pi(\lambda_{1j})\,d\lambda_{1j}} = \frac{(\beta_{1j}+T_1+T_{21})^{n_{1j}+\alpha_{1j}}}{\Gamma(\alpha_{1j}+n_{1j})}\lambda_{1j}^{n_{1j}+\alpha_{1j}-1}e^{-\lambda_{1j}(\beta_{1j}+T_1+T_{21})} \propto \lambda_{1j}^{n_{1j}+\alpha_{1j}-1}e^{-\lambda_{1j}(\beta_{1j}+T_1+T_{21})}$
$\pi(\lambda_{2j} \mid \xi) = \frac{L(t \mid \lambda_{2j})\pi(\lambda_{2j})}{\int_0^{+\infty} L(t \mid \lambda_{2j})\pi(\lambda_{2j})\,d\lambda_{2j}} = \frac{(\beta_{2j}+T_{22})^{n_{2j}+\alpha_{2j}}}{\Gamma(\alpha_{2j}+n_{2j})}\lambda_{2j}^{n_{2j}+\alpha_{2j}-1}e^{-\lambda_{2j}(\beta_{2j}+T_{22})} \propto \lambda_{2j}^{n_{2j}+\alpha_{2j}-1}e^{-\lambda_{2j}(\beta_{2j}+T_{22})}$
Given the data, the posterior distribution of λ 1 j is G a ( n 1 j + α 1 j , β 1 j + T 1 + T 21 ) , and the posterior distribution of λ 2 j is G a ( n 2 j + α 2 j , β 2 j + T 22 ) , j = 1 , 2 .

4.1. Bayesian Estimation of λ i j under SELF

The BE of λ i j under the SELF is the expectation of the posterior distribution. Therefore, λ ^ i j ( B S ) can be derived as
$\hat{\lambda}_{ij}^{(BS)} = E(\lambda_{ij} \mid \xi) = \int_{\Theta}\lambda_{ij}\pi(\lambda_{ij} \mid \xi)\,d\lambda_{ij}, \qquad i, j = 1, 2$
Thus, the BEs of λ 1 j and λ 2 j are obtained under Equations (21) and (22) as
$\hat{\lambda}_{1j}^{(BS)} = \frac{n_{1j}+\alpha_{1j}}{\beta_{1j}+T_1+T_{21}}, \qquad j = 1, 2$
$\hat{\lambda}_{2j}^{(BS)} = \frac{n_{2j}+\alpha_{2j}}{\beta_{2j}+T_{22}}, \qquad j = 1, 2$

4.2. Bayesian Estimation of λ i j under ELF

The BEs of $\lambda_{ij}$ ($i, j = 1, 2$) under the ELF are as follows:
$\hat{\lambda}_{ij}^{(BE)} = \left[E(\lambda_{ij}^{-1} \mid \xi)\right]^{-1} = 1\Big/\int_{\Theta}\lambda_{ij}^{-1}\pi(\lambda_{ij} \mid \xi)\,d\lambda_{ij}$
Thus, the BEs of λ 1 j and λ 2 j are obtained under Equations (21) and (24) as
$\hat{\lambda}_{1j}^{(BE)} = \frac{n_{1j}+\alpha_{1j}-1}{\beta_{1j}+T_1+T_{21}}, \qquad j = 1, 2$
$\hat{\lambda}_{2j}^{(BE)} = \frac{n_{2j}+\alpha_{2j}-1}{\beta_{2j}+T_{22}}, \qquad j = 1, 2$

4.3. Bayesian Estimation of λ i j under LLF

The BEs of $\lambda_{ij}$ ($i, j = 1, 2$) under the LLF are as follows:
$\hat{\lambda}_{ij}^{(BL)} = -\frac{1}{k}\ln\left[E(e^{-k\lambda_{ij}} \mid \xi)\right]$
Thus, the BEs of λ 1 j and λ 2 j are obtained under Equations (21) and (26) as
$\hat{\lambda}_{1j}^{(BL)} = \frac{n_{1j}+\alpha_{1j}}{k}\ln\left(1+\frac{k}{\beta_{1j}+T_1+T_{21}}\right), \qquad j = 1, 2$
$\hat{\lambda}_{2j}^{(BL)} = \frac{n_{2j}+\alpha_{2j}}{k}\ln\left(1+\frac{k}{\beta_{2j}+T_{22}}\right), \qquad j = 1, 2$
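Since the posterior of $\lambda_{ij}$ is a gamma distribution, all three BEs are closed-form; a compact Python sketch evaluating them (the argument values are hypothetical; T_sum stands for $T_1+T_{21}$ at stress $S_1$ and for $T_{22}$ at stress $S_2$):

```python
import numpy as np

def bayes_estimates(n_ij, alpha, beta, T_sum, k=4.0):
    """BEs of lambda_ij under SELF, ELF and LLF for the Ga(n_ij+alpha, beta+T_sum)
    posterior; k is the LINEX parameter."""
    a_post, b_post = n_ij + alpha, beta + T_sum
    self_est = a_post / b_post                          # posterior mean
    elf_est  = (a_post - 1.0) / b_post                  # 1 / E[lambda^(-1)]
    llf_est  = (a_post / k) * np.log1p(k / b_post)      # -(1/k) ln E[exp(-k*lambda)]
    return self_est, elf_est, llf_est

print(bayes_estimates(15, 0.5, 0.1, 7.2))   # hypothetical n_ij, alpha, beta, T_sum
```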

5. Expected Bayesian Analysis

Since the values of the hyper-parameters are not easy to determine, they are treated as random. The Expected Bayesian estimation is the average of the Bayes estimates of $\theta$ over the hyper-parameters $a$ and $b$ in the domain $\Theta$. In this section, we obtain the E-BEs of $\lambda_{ij}$ based on the SELF, ELF, and LLF. The definition of E-BE was originally addressed by Han [29] as follows.
Definition 1.
With θ ^ B ( a , b ) being continuous,
$\hat{\theta}_{EB} = E\left[\hat{\theta}_B(a,b)\right] = \iint_{\Theta}\hat{\theta}_B(a,b)\pi(a,b)\,da\,db$
is called the E-BE of $\theta$, which is assumed to be finite, where $\Theta$ is the domain of $a$ and $b$, $\hat{\theta}_B(a,b)$ is the BE of $\theta$ with hyper-parameters $a$ and $b$, and $\pi(a,b)$ is the joint density function of $a$ and $b$ over $\Theta$. By Definition 1, the E-BE is the expectation of $\hat{\theta}_B(a,b)$ with respect to the hyper-parameters $a$ and $b$.
According to the prior information, $\lambda_{ij}$ takes large values with low probability and small values with high probability, so the hyper-parameters $\alpha_{ij}$ and $\beta_{ij}$ should be chosen to guarantee that $\pi(\lambda_{ij} \mid \alpha_{ij}, \beta_{ij})$ is a decreasing function of $\lambda_{ij}$. For more details, see Han [29]. The derivative of $\pi(\lambda_{ij} \mid \alpha_{ij}, \beta_{ij})$ with respect to $\lambda_{ij}$ is
$\frac{d\,\pi(\lambda_{ij} \mid \alpha_{ij}, \beta_{ij})}{d\lambda_{ij}} = \frac{\beta_{ij}^{\alpha_{ij}}}{\Gamma(\alpha_{ij})}\lambda_{ij}^{\alpha_{ij}-2}e^{-\beta_{ij}\lambda_{ij}}\left[(\alpha_{ij}-1) - \beta_{ij}\lambda_{ij}\right]$
When 0 < α i j < 1 , β i j > 0 , then d π ( λ i j | α i j , β i j ) d λ i j < 0 . Given 0 < α i j < 1 , then the larger the value of β i j , the thinner the tail of the density function. Berger [33] showed that the hyper parameter β i j should be chosen under the restriction 0 < β i j < c ( c is a constant). Suppose that α i j and β i j are independent, the joint density of α i j and β i j is given by π α i j , β i j = π α i j π β i j . Depending on different distribution of the parameters α i j and β i j , E-BE estimation of λ i j is obtained. Several authors have applied the E-BE method to analyze data, such as Abdul-Sathar et al. [34] and Shahram [35].
The E-BE of $\lambda_{ij}$ is obtained for different distributions of $\alpha_{ij}$ and $\beta_{ij}$; these distributions are used to examine the effect of different prior choices on the E-BE of $\lambda_{ij}$. In this paper, the following joint densities $\pi(\alpha_{ij}, \beta_{ij})$ are used:
$\pi(\alpha_{ij}, \beta_{ij}) = \frac{1}{c}, \qquad 0 < \alpha_{ij} < 1,\ 0 < \beta_{ij} < c \qquad \text{(30a)}$
$\pi(\alpha_{ij}, \beta_{ij}) = \frac{2\beta_{ij}}{c^2}, \qquad 0 < \alpha_{ij} < 1,\ 0 < \beta_{ij} < c \qquad \text{(30b)}$

5.1. E-Bayesian Estimation of λ i j under SELF

The E-BEs of λ i j are obtained under Equations (23) and (30a) as
$\hat{\lambda}_{1j}^{(EBS1)} = \frac{1}{c}\int_0^1\!\!\int_0^c \frac{n_{1j}+\alpha_{1j}}{\beta_{1j}+T_1+T_{21}}\,d\alpha_{1j}\,d\beta_{1j} = \frac{2n_{1j}+1}{2c}\ln\left(1+\frac{c}{T_1+T_{21}}\right)$
$\hat{\lambda}_{2j}^{(EBS1)} = \frac{1}{c}\int_0^1\!\!\int_0^c \frac{n_{2j}+\alpha_{2j}}{\beta_{2j}+T_{22}}\,d\alpha_{2j}\,d\beta_{2j} = \frac{2n_{2j}+1}{2c}\ln\left(1+\frac{c}{T_{22}}\right)$
The E-BEs of λ i j are obtained under Equations (23) and (30b) as
$\hat{\lambda}_{1j}^{(EBS2)} = \frac{2}{c^2}\int_0^1\!\!\int_0^c \frac{(n_{1j}+\alpha_{1j})\beta_{1j}}{\beta_{1j}+T_1+T_{21}}\,d\alpha_{1j}\,d\beta_{1j} = \frac{2n_{1j}+1}{c^2}\left[c - (T_1+T_{21})\ln\left(1+\frac{c}{T_1+T_{21}}\right)\right]$
$\hat{\lambda}_{2j}^{(EBS2)} = \frac{2}{c^2}\int_0^1\!\!\int_0^c \frac{(n_{2j}+\alpha_{2j})\beta_{2j}}{\beta_{2j}+T_{22}}\,d\alpha_{2j}\,d\beta_{2j} = \frac{2n_{2j}+1}{c^2}\left[c - T_{22}\ln\left(1+\frac{c}{T_{22}}\right)\right]$
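The two closed forms above are easy to check numerically by averaging the SELF Bayes estimate directly over the hyper-prior; a short sketch with hypothetical values of $n_{ij}$, $T$ and $c$:

```python
import numpy as np
from scipy import integrate

def e_bayes_self(n_ij, T_sum, c=0.2):
    """E-BEs of lambda_ij under SELF for the hyper-priors (30a) and (30b)."""
    ebs1 = (2 * n_ij + 1) / (2 * c) * np.log1p(c / T_sum)
    ebs2 = (2 * n_ij + 1) / c**2 * (c - T_sum * np.log1p(c / T_sum))
    return ebs1, ebs2

# sanity check of the first closed form by direct double integration of the BE
n, T, c = 15, 7.2, 0.2
num = integrate.dblquad(lambda b, a: (n + a) / (b + T), 0, 1, 0, c)[0] / c
print(e_bayes_self(n, T, c)[0], num)   # the two numbers should agree
```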

5.2. E-Bayesian Estimation of λ i j under ELF

The E-BEs of λ i j are obtained under Equations (25) and (30a) as
$\hat{\lambda}_{1j}^{(EBE1)} = \frac{1}{c}\int_0^1\!\!\int_0^c \frac{n_{1j}+\alpha_{1j}-1}{\beta_{1j}+T_1+T_{21}}\,d\alpha_{1j}\,d\beta_{1j} = \frac{2n_{1j}-1}{2c}\ln\left(1+\frac{c}{T_1+T_{21}}\right)$
$\hat{\lambda}_{2j}^{(EBE1)} = \frac{1}{c}\int_0^1\!\!\int_0^c \frac{n_{2j}+\alpha_{2j}-1}{\beta_{2j}+T_{22}}\,d\alpha_{2j}\,d\beta_{2j} = \frac{2n_{2j}-1}{2c}\ln\left(1+\frac{c}{T_{22}}\right)$
The E-BEs of λ i j are obtained under Equations (25) and (30b) as
$\hat{\lambda}_{1j}^{(EBE2)} = \frac{2}{c^2}\int_0^1\!\!\int_0^c \frac{(n_{1j}+\alpha_{1j}-1)\beta_{1j}}{\beta_{1j}+T_1+T_{21}}\,d\alpha_{1j}\,d\beta_{1j} = \frac{2n_{1j}-1}{c^2}\left[c - (T_1+T_{21})\ln\left(1+\frac{c}{T_1+T_{21}}\right)\right]$
$\hat{\lambda}_{2j}^{(EBE2)} = \frac{2}{c^2}\int_0^1\!\!\int_0^c \frac{(n_{2j}+\alpha_{2j}-1)\beta_{2j}}{\beta_{2j}+T_{22}}\,d\alpha_{2j}\,d\beta_{2j} = \frac{2n_{2j}-1}{c^2}\left[c - T_{22}\ln\left(1+\frac{c}{T_{22}}\right)\right]$

5.3. E-Bayesian Estimation of λ i j under LLF

The E-BEs of λ i j are obtained under Equations (27) and (30a) as
$\hat{\lambda}_{1j}^{(EBL1)} = \frac{1}{c}\int_0^1\!\!\int_0^c \frac{n_{1j}+\alpha_{1j}}{k}\ln\left(1+\frac{k}{\beta_{1j}+T_1+T_{21}}\right)d\alpha_{1j}\,d\beta_{1j} = \frac{2n_{1j}+1}{2k}\left[\ln\left(1+\frac{k}{c+T_1+T_{21}}\right) + \frac{T_1+T_{21}+k}{c}\ln\left(1+\frac{c}{T_1+T_{21}+k}\right) - \frac{T_1+T_{21}}{c}\ln\left(1+\frac{c}{T_1+T_{21}}\right)\right]$
$\hat{\lambda}_{2j}^{(EBL1)} = \frac{1}{c}\int_0^1\!\!\int_0^c \frac{n_{2j}+\alpha_{2j}}{k}\ln\left(1+\frac{k}{\beta_{2j}+T_{22}}\right)d\alpha_{2j}\,d\beta_{2j} = \frac{2n_{2j}+1}{2k}\left[\ln\left(1+\frac{k}{c+T_{22}}\right) + \frac{T_{22}+k}{c}\ln\left(1+\frac{c}{T_{22}+k}\right) - \frac{T_{22}}{c}\ln\left(1+\frac{c}{T_{22}}\right)\right]$
The E-BEs of λ i j are obtained under Equations (27) and (30b) as
$\hat{\lambda}_{1j}^{(EBL2)} = \frac{2}{c^2}\int_0^1\!\!\int_0^c \frac{(n_{1j}+\alpha_{1j})\beta_{1j}}{k}\ln\left(1+\frac{k}{\beta_{1j}+T_1+T_{21}}\right)d\alpha_{1j}\,d\beta_{1j} = \frac{2n_{1j}+1}{2k}\left[\ln\left(1+\frac{k}{c+T_1+T_{21}}\right) - \frac{(T_1+T_{21}+k)^2}{c^2}\ln\left(1+\frac{c}{T_1+T_{21}+k}\right) + \frac{(T_1+T_{21})^2}{c^2}\ln\left(1+\frac{c}{T_1+T_{21}}\right) + \frac{k}{c}\right]$
$\hat{\lambda}_{2j}^{(EBL2)} = \frac{2}{c^2}\int_0^1\!\!\int_0^c \frac{(n_{2j}+\alpha_{2j})\beta_{2j}}{k}\ln\left(1+\frac{k}{\beta_{2j}+T_{22}}\right)d\alpha_{2j}\,d\beta_{2j} = \frac{2n_{2j}+1}{2k}\left[\ln\left(1+\frac{k}{c+T_{22}}\right) - \frac{(T_{22}+k)^2}{c^2}\ln\left(1+\frac{c}{T_{22}+k}\right) + \frac{T_{22}^2}{c^2}\ln\left(1+\frac{c}{T_{22}}\right) + \frac{k}{c}\right]$

6. Hierarchical Bayesian Estimation

In this section, we obtain the H-BEs of λ i j based on the SELF, ELF, and LLF. The definition of H-BE was originally addressed by Lindley and Smith [28] as follows.
Definition 2.
If $a$ and $b$ are the hyper-parameters of the prior of $\theta$, the prior density function of $\theta$ is $\pi(\theta \mid a, b)$, and the prior density function of the hyper-parameters $a$ and $b$ is $\pi(a,b)$, then the hierarchical prior (H-P) density function of $\theta$ is defined as follows:
$\pi(\theta) = \iint_{\Theta}\pi(\theta \mid a, b)\pi(a,b)\,da\,db$
where Θ is the domain of a   and   b .
The H-P density functions of the parameters λ i j are obtained under Equations (19) and (30a) as:
$\pi(\lambda_{ij}) = \frac{1}{c}\int_0^1\!\!\int_0^c \frac{\beta_{ij}^{\alpha_{ij}}}{\Gamma(\alpha_{ij})}\lambda_{ij}^{\alpha_{ij}-1}e^{-\beta_{ij}\lambda_{ij}}\,d\alpha_{ij}\,d\beta_{ij}$
We get the posterior density function of λ i j under Equations (20) and (38) as:
$\pi(\lambda_{1j} \mid \xi) = \frac{\pi(t \mid \lambda_{1j})\pi(\lambda_{1j})}{\int_0^{+\infty}\pi(t \mid \lambda_{1j})\pi(\lambda_{1j})\,d\lambda_{1j}} = \frac{\lambda_{1j}^{n_{1j}}e^{-\lambda_{1j}(T_1+T_{21})}\cdot\frac{1}{c}\int_0^1\!\int_0^c \frac{\beta_{1j}^{\alpha_{1j}}}{\Gamma(\alpha_{1j})}\lambda_{1j}^{\alpha_{1j}-1}e^{-\beta_{1j}\lambda_{1j}}\,d\alpha_{1j}\,d\beta_{1j}}{\frac{1}{c}\int_0^1\!\int_0^c \frac{\beta_{1j}^{\alpha_{1j}}}{\Gamma(\alpha_{1j})}\left(\int_0^{+\infty}\lambda_{1j}^{n_{1j}+\alpha_{1j}-1}e^{-(\beta_{1j}+T_1+T_{21})\lambda_{1j}}\,d\lambda_{1j}\right)d\alpha_{1j}\,d\beta_{1j}} = \frac{\lambda_{1j}^{n_{1j}}e^{-\lambda_{1j}(T_1+T_{21})}\int_0^1\!\int_0^c \frac{\beta_{1j}^{\alpha_{1j}}}{\Gamma(\alpha_{1j})}\lambda_{1j}^{\alpha_{1j}-1}e^{-\beta_{1j}\lambda_{1j}}\,d\alpha_{1j}\,d\beta_{1j}}{\int_0^1\!\int_0^c \frac{\Gamma(\alpha_{1j}+n_{1j})\,\beta_{1j}^{\alpha_{1j}}}{\Gamma(\alpha_{1j})\,(\beta_{1j}+T_1+T_{21})^{n_{1j}+\alpha_{1j}-1}}\,d\alpha_{1j}\,d\beta_{1j}}$
In the same way,
$\pi(\lambda_{2j} \mid \xi) = \frac{\lambda_{2j}^{n_{2j}}e^{-\lambda_{2j}T_{22}}\int_0^1\!\int_0^c \frac{\beta_{2j}^{\alpha_{2j}}}{\Gamma(\alpha_{2j})}\lambda_{2j}^{\alpha_{2j}-1}e^{-\beta_{2j}\lambda_{2j}}\,d\alpha_{2j}\,d\beta_{2j}}{\int_0^1\!\int_0^c \frac{\Gamma(\alpha_{2j}+n_{2j})\,\beta_{2j}^{\alpha_{2j}}}{\Gamma(\alpha_{2j})\,(\beta_{2j}+T_{22})^{n_{2j}+\alpha_{2j}-1}}\,d\alpha_{2j}\,d\beta_{2j}}$
The H-P density function of the parameters λ i j are obtained under Equations (19) and (30b) as:
$\pi(\lambda_{ij}) = \frac{2}{c^2}\int_0^1\!\!\int_0^c \frac{\beta_{ij}^{\alpha_{ij}+1}}{\Gamma(\alpha_{ij})}\lambda_{ij}^{\alpha_{ij}-1}e^{-\beta_{ij}\lambda_{ij}}\,d\alpha_{ij}\,d\beta_{ij}$
We get the posterior density function of λ i j under Equations (20) and (40) as:
$\pi(\lambda_{1j} \mid \xi) = \frac{\lambda_{1j}^{n_{1j}}e^{-\lambda_{1j}(T_1+T_{21})}\int_0^1\!\int_0^c \frac{\beta_{1j}^{\alpha_{1j}+1}}{\Gamma(\alpha_{1j})}\lambda_{1j}^{\alpha_{1j}-1}e^{-\beta_{1j}\lambda_{1j}}\,d\alpha_{1j}\,d\beta_{1j}}{\int_0^1\!\int_0^c \frac{\Gamma(\alpha_{1j}+n_{1j})\,\beta_{1j}^{\alpha_{1j}+1}}{\Gamma(\alpha_{1j})\,(\beta_{1j}+T_1+T_{21})^{n_{1j}+\alpha_{1j}-1}}\,d\alpha_{1j}\,d\beta_{1j}}$
$\pi(\lambda_{2j} \mid \xi) = \frac{\lambda_{2j}^{n_{2j}}e^{-\lambda_{2j}T_{22}}\int_0^1\!\int_0^c \frac{\beta_{2j}^{\alpha_{2j}+1}}{\Gamma(\alpha_{2j})}\lambda_{2j}^{\alpha_{2j}-1}e^{-\beta_{2j}\lambda_{2j}}\,d\alpha_{2j}\,d\beta_{2j}}{\int_0^1\!\int_0^c \frac{\Gamma(\alpha_{2j}+n_{2j})\,\beta_{2j}^{\alpha_{2j}+1}}{\Gamma(\alpha_{2j})\,(\beta_{2j}+T_{22})^{n_{2j}+\alpha_{2j}-1}}\,d\alpha_{2j}\,d\beta_{2j}}$

6.1. H-Bayesian Estimation of λ i j under SELF

The H-BEs of λ i j are obtained under Equations (22) and (39) as:
$\hat{\lambda}_{1j}^{(HBS1)} = \int_0^{+\infty}\lambda_{1j}\pi(\lambda_{1j} \mid \xi)\,d\lambda_{1j} = \frac{\int_0^1\!\int_0^c \frac{\Gamma(\alpha_{1j}+n_{1j}+1)\,\beta_{1j}^{\alpha_{1j}}}{\Gamma(\alpha_{1j})\,(\beta_{1j}+T_1+T_{21})^{n_{1j}+\alpha_{1j}}}\,d\alpha_{1j}\,d\beta_{1j}}{\int_0^1\!\int_0^c \frac{\Gamma(\alpha_{1j}+n_{1j})\,\beta_{1j}^{\alpha_{1j}}}{\Gamma(\alpha_{1j})\,(\beta_{1j}+T_1+T_{21})^{n_{1j}+\alpha_{1j}-1}}\,d\alpha_{1j}\,d\beta_{1j}}$
$\hat{\lambda}_{2j}^{(HBS1)} = \int_0^{+\infty}\lambda_{2j}\pi(\lambda_{2j} \mid \xi)\,d\lambda_{2j} = \frac{\int_0^1\!\int_0^c \frac{\Gamma(\alpha_{2j}+n_{2j}+1)\,\beta_{2j}^{\alpha_{2j}}}{\Gamma(\alpha_{2j})\,(\beta_{2j}+T_{22})^{n_{2j}+\alpha_{2j}}}\,d\alpha_{2j}\,d\beta_{2j}}{\int_0^1\!\int_0^c \frac{\Gamma(\alpha_{2j}+n_{2j})\,\beta_{2j}^{\alpha_{2j}}}{\Gamma(\alpha_{2j})\,(\beta_{2j}+T_{22})^{n_{2j}+\alpha_{2j}-1}}\,d\alpha_{2j}\,d\beta_{2j}}$
The H-BEs of λ i j are obtained under Equations (22) and (41) as:
$\hat{\lambda}_{1j}^{(HBS2)} = \int_0^{+\infty}\lambda_{1j}\pi(\lambda_{1j} \mid \xi)\,d\lambda_{1j} = \frac{\int_0^1\!\int_0^c \frac{\Gamma(\alpha_{1j}+n_{1j}+1)\,\beta_{1j}^{\alpha_{1j}+1}}{\Gamma(\alpha_{1j})\,(\beta_{1j}+T_1+T_{21})^{n_{1j}+\alpha_{1j}}}\,d\alpha_{1j}\,d\beta_{1j}}{\int_0^1\!\int_0^c \frac{\Gamma(\alpha_{1j}+n_{1j})\,\beta_{1j}^{\alpha_{1j}+1}}{\Gamma(\alpha_{1j})\,(\beta_{1j}+T_1+T_{21})^{n_{1j}+\alpha_{1j}-1}}\,d\alpha_{1j}\,d\beta_{1j}}$
$\hat{\lambda}_{2j}^{(HBS2)} = \int_0^{+\infty}\lambda_{2j}\pi(\lambda_{2j} \mid \xi)\,d\lambda_{2j} = \frac{\int_0^1\!\int_0^c \frac{\Gamma(\alpha_{2j}+n_{2j}+1)\,\beta_{2j}^{\alpha_{2j}+1}}{\Gamma(\alpha_{2j})\,(\beta_{2j}+T_{22})^{n_{2j}+\alpha_{2j}}}\,d\alpha_{2j}\,d\beta_{2j}}{\int_0^1\!\int_0^c \frac{\Gamma(\alpha_{2j}+n_{2j})\,\beta_{2j}^{\alpha_{2j}+1}}{\Gamma(\alpha_{2j})\,(\beta_{2j}+T_{22})^{n_{2j}+\alpha_{2j}-1}}\,d\alpha_{2j}\,d\beta_{2j}}$
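Unlike the BEs and E-BEs, the H-BEs involve double integrals with no simple closed form. The sketch below evaluates the first expression above (SELF, hyper-prior (30a)) as printed, by numerical integration on the log scale; the function name and the test values are our own.

```python
import numpy as np
from scipy import integrate
from scipy.special import gammaln

def h_bayes_self(n_ij, T_sum, c=0.2):
    """H-BE of lambda_ij under SELF and hyper-prior (30a), as a ratio of
    double integrals over 0 < alpha < 1, 0 < beta < c."""
    def integrand(beta, alpha, shift):
        # Gamma(alpha+n+shift) * beta^alpha / [Gamma(alpha) * (beta+T)^(n+alpha+shift-1)]
        log_val = (gammaln(alpha + n_ij + shift) - gammaln(alpha)
                   + alpha * np.log(beta)
                   - (n_ij + alpha + shift - 1) * np.log(beta + T_sum))
        return np.exp(log_val)
    num = integrate.dblquad(integrand, 0, 1, 0, c, args=(1,))[0]  # numerator
    den = integrate.dblquad(integrand, 0, 1, 0, c, args=(0,))[0]  # denominator
    return num / den

print(h_bayes_self(15, 7.2))   # hypothetical n_ij and T_1 + T_21
```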

6.2. H-Bayesian Estimation of λ i j under ELF

The H-BEs of λ i j are obtained under Equations (24) and (39) as:
$\hat{\lambda}_{1j}^{(HBE1)} = 1\Big/\int_0^{+\infty}\lambda_{1j}^{-1}\pi(\lambda_{1j} \mid \xi)\,d\lambda_{1j} = \frac{\int_0^1\!\int_0^c \frac{\Gamma(\alpha_{1j}+n_{1j})\,\beta_{1j}^{\alpha_{1j}}}{\Gamma(\alpha_{1j})\,(\beta_{1j}+T_1+T_{21})^{n_{1j}+\alpha_{1j}-1}}\,d\alpha_{1j}\,d\beta_{1j}}{\int_0^1\!\int_0^c \frac{\Gamma(\alpha_{1j}+n_{1j}-1)\,\beta_{1j}^{\alpha_{1j}}}{\Gamma(\alpha_{1j})\,(\beta_{1j}+T_1+T_{21})^{n_{1j}+\alpha_{1j}-2}}\,d\alpha_{1j}\,d\beta_{1j}}$
$\hat{\lambda}_{2j}^{(HBE1)} = 1\Big/\int_0^{+\infty}\lambda_{2j}^{-1}\pi(\lambda_{2j} \mid \xi)\,d\lambda_{2j} = \frac{\int_0^1\!\int_0^c \frac{\Gamma(\alpha_{2j}+n_{2j})\,\beta_{2j}^{\alpha_{2j}}}{\Gamma(\alpha_{2j})\,(\beta_{2j}+T_{22})^{n_{2j}+\alpha_{2j}-1}}\,d\alpha_{2j}\,d\beta_{2j}}{\int_0^1\!\int_0^c \frac{\Gamma(\alpha_{2j}+n_{2j}-1)\,\beta_{2j}^{\alpha_{2j}}}{\Gamma(\alpha_{2j})\,(\beta_{2j}+T_{22})^{n_{2j}+\alpha_{2j}-2}}\,d\alpha_{2j}\,d\beta_{2j}}$
The H-BEs of λ i j are obtained under Equations (24) and (41) as:
$\hat{\lambda}_{1j}^{(HBE2)} = 1\Big/\int_0^{+\infty}\lambda_{1j}^{-1}\pi(\lambda_{1j} \mid \xi)\,d\lambda_{1j} = \frac{\int_0^1\!\int_0^c \frac{\Gamma(\alpha_{1j}+n_{1j})\,\beta_{1j}^{\alpha_{1j}+1}}{\Gamma(\alpha_{1j})\,(\beta_{1j}+T_1+T_{21})^{n_{1j}+\alpha_{1j}-1}}\,d\alpha_{1j}\,d\beta_{1j}}{\int_0^1\!\int_0^c \frac{\Gamma(\alpha_{1j}+n_{1j}-1)\,\beta_{1j}^{\alpha_{1j}+1}}{\Gamma(\alpha_{1j})\,(\beta_{1j}+T_1+T_{21})^{n_{1j}+\alpha_{1j}-2}}\,d\alpha_{1j}\,d\beta_{1j}}$
$\hat{\lambda}_{2j}^{(HBE2)} = 1\Big/\int_0^{+\infty}\lambda_{2j}^{-1}\pi(\lambda_{2j} \mid \xi)\,d\lambda_{2j} = \frac{\int_0^1\!\int_0^c \frac{\Gamma(\alpha_{2j}+n_{2j})\,\beta_{2j}^{\alpha_{2j}+1}}{\Gamma(\alpha_{2j})\,(\beta_{2j}+T_{22})^{n_{2j}+\alpha_{2j}-1}}\,d\alpha_{2j}\,d\beta_{2j}}{\int_0^1\!\int_0^c \frac{\Gamma(\alpha_{2j}+n_{2j}-1)\,\beta_{2j}^{\alpha_{2j}+1}}{\Gamma(\alpha_{2j})\,(\beta_{2j}+T_{22})^{n_{2j}+\alpha_{2j}-2}}\,d\alpha_{2j}\,d\beta_{2j}}$

6.3. H-Bayesian Estimation of λ i j under LLF

The H-BEs of λ i j are obtained under Equations (26) and (39) as:
$\hat{\lambda}_{1j}^{(HBL1)} = -\frac{1}{k}\ln\int_0^{+\infty}e^{-k\lambda_{1j}}\pi(\lambda_{1j} \mid \xi)\,d\lambda_{1j} = -\frac{1}{k}\ln\frac{\int_0^1\!\int_0^c \frac{\Gamma(\alpha_{1j}+n_{1j})\,\beta_{1j}^{\alpha_{1j}}}{\Gamma(\alpha_{1j})\,(\beta_{1j}+T_1+T_{21}+k)^{n_{1j}+\alpha_{1j}-1}}\,d\alpha_{1j}\,d\beta_{1j}}{\int_0^1\!\int_0^c \frac{\Gamma(\alpha_{1j}+n_{1j})\,\beta_{1j}^{\alpha_{1j}}}{\Gamma(\alpha_{1j})\,(\beta_{1j}+T_1+T_{21})^{n_{1j}+\alpha_{1j}-1}}\,d\alpha_{1j}\,d\beta_{1j}}$
$\hat{\lambda}_{2j}^{(HBL1)} = -\frac{1}{k}\ln\int_0^{+\infty}e^{-k\lambda_{2j}}\pi(\lambda_{2j} \mid \xi)\,d\lambda_{2j} = -\frac{1}{k}\ln\frac{\int_0^1\!\int_0^c \frac{\Gamma(\alpha_{2j}+n_{2j})\,\beta_{2j}^{\alpha_{2j}}}{\Gamma(\alpha_{2j})\,(\beta_{2j}+T_{22}+k)^{n_{2j}+\alpha_{2j}-1}}\,d\alpha_{2j}\,d\beta_{2j}}{\int_0^1\!\int_0^c \frac{\Gamma(\alpha_{2j}+n_{2j})\,\beta_{2j}^{\alpha_{2j}}}{\Gamma(\alpha_{2j})\,(\beta_{2j}+T_{22})^{n_{2j}+\alpha_{2j}-1}}\,d\alpha_{2j}\,d\beta_{2j}}$
The H-BEs of λ i j are obtained under Equations (26) and (41) as:
$\hat{\lambda}_{1j}^{(HBL2)} = -\frac{1}{k}\ln\int_0^{+\infty}e^{-k\lambda_{1j}}\pi(\lambda_{1j} \mid \xi)\,d\lambda_{1j} = -\frac{1}{k}\ln\frac{\int_0^1\!\int_0^c \frac{\Gamma(\alpha_{1j}+n_{1j})\,\beta_{1j}^{\alpha_{1j}+1}}{\Gamma(\alpha_{1j})\,(\beta_{1j}+T_1+T_{21}+k)^{n_{1j}+\alpha_{1j}-1}}\,d\alpha_{1j}\,d\beta_{1j}}{\int_0^1\!\int_0^c \frac{\Gamma(\alpha_{1j}+n_{1j})\,\beta_{1j}^{\alpha_{1j}+1}}{\Gamma(\alpha_{1j})\,(\beta_{1j}+T_1+T_{21})^{n_{1j}+\alpha_{1j}-1}}\,d\alpha_{1j}\,d\beta_{1j}}$
$\hat{\lambda}_{2j}^{(HBL2)} = -\frac{1}{k}\ln\int_0^{+\infty}e^{-k\lambda_{2j}}\pi(\lambda_{2j} \mid \xi)\,d\lambda_{2j} = -\frac{1}{k}\ln\frac{\int_0^1\!\int_0^c \frac{\Gamma(\alpha_{2j}+n_{2j})\,\beta_{2j}^{\alpha_{2j}+1}}{\Gamma(\alpha_{2j})\,(\beta_{2j}+T_{22}+k)^{n_{2j}+\alpha_{2j}-1}}\,d\alpha_{2j}\,d\beta_{2j}}{\int_0^1\!\int_0^c \frac{\Gamma(\alpha_{2j}+n_{2j})\,\beta_{2j}^{\alpha_{2j}+1}}{\Gamma(\alpha_{2j})\,(\beta_{2j}+T_{22})^{n_{2j}+\alpha_{2j}-1}}\,d\alpha_{2j}\,d\beta_{2j}}$

6.4. Highest Posterior Density (HPD) Credible Intervals (CRIs)

Here, we propose the following algorithm to compute the associated CRI and HPD CRI (reference [36]).
Step 1. Set N = 1000 , and generate a Markov Chain Monte Carlo sample ( λ ^ i j k , i , j = 1 , 2 ; k = 1 , , N ) from π ( λ i j | ξ ) .
Step 2. Arrange the sample in ascending order to obtain the CRIs of the parameters λ i j , then λ ^ i j [ 1 ] < λ ^ i j [ 2 ] < < λ ^ i j [ N ] , i , j = 1 , 2 .
Step 3. Obtain the two-sided $100(1-\gamma)\%$ CRIs for the parameters as
$\left(\hat{\lambda}_{ij}^{[N\gamma/2]},\ \hat{\lambda}_{ij}^{[N(1-\gamma/2)]}\right)$
Step 4. To construct the $100(1-\gamma)\%$ HPD CRIs of the parameter $\lambda_{ij}$, consider the set of CRIs $\left(\hat{\lambda}_{ij}^{[m]},\ \hat{\lambda}_{ij}^{[m+(1-\gamma)N]}\right)$, $m = 1, 2, \ldots, [\gamma N]$. The $100(1-\gamma)\%$ HPD CRI of the parameter $\lambda_{ij}$ is $\left(\hat{\lambda}_{ij}^{[m^*]},\ \hat{\lambda}_{ij}^{[m^*+(1-\gamma)N]}\right)$, where $m^*$ is such that
$\hat{\lambda}_{ij}^{[m^*+(1-\gamma)N]} - \hat{\lambda}_{ij}^{[m^*]} \le \hat{\lambda}_{ij}^{[m+(1-\gamma)N]} - \hat{\lambda}_{ij}^{[m]}$
for all $m = 1, 2, \ldots, [\gamma N]$.
Therefore, the $100(1-\gamma)\%$ HPD CRI has the smallest interval width among all the CRIs considered.
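The minimum-width search in Steps 2-4 takes only a few lines of Python; a sketch assuming a posterior sample has already been drawn (here directly from a gamma posterior of the form in Section 4, with hypothetical parameters):

```python
import numpy as np

def hpd_interval(posterior_sample, gamma=0.05):
    """HPD CRI: among all 100(1-gamma)% credible intervals formed from the
    sorted sample, return the one with the smallest width."""
    s = np.sort(np.asarray(posterior_sample))
    N = s.size
    keep = int(np.floor((1 - gamma) * N))       # number of points spanned
    widths = s[keep:] - s[:N - keep]
    m = int(np.argmin(widths))
    return s[m], s[m + keep]

rng = np.random.default_rng(1)
sample = rng.gamma(shape=15.5, scale=1 / 7.3, size=1000)   # e.g. Ga(n+alpha, beta+T)
print(hpd_interval(sample))
```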

7. Simulation Study and Data Analysis

7.1. Simulation Study

In order to investigate the proposed methods, we use Monte Carlo simulations to compare different methods for different sample size and different progressive censoring schemes, which are shown in Table 1. The values of the parameters are chosen to be λ 11 = 2.0 , λ 12 = 1.0 , λ 21 = 4.0 , λ 22 = 2.0 , N = 1000 , c = 1 / 5 and k = 4 . For different censoring schemes, we compute the Average Estimates (AEs) and the mean square errors (MSEs) of the MLEs, BEs, E-BEs, and H-BEs, respectively. AEs and MSEs of the estimator of λ i j can be calculated as
$AE_{ij} = \frac{1}{N}\sum_{k=1}^{N}\hat{\lambda}_{ij}^{(k)}$
$MSE_{ij} = \frac{1}{N}\sum_{k=1}^{N}\left(\lambda_{ij} - \hat{\lambda}_{ij}^{(k)}\right)^2$
where λ ^ i j ( k ) is the k th estimator of the parameter λ i j ( i , j = 1 , 2 ) .
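For reference, these two summary measures are computed over the Monte Carlo replications as in the short sketch below (a hypothetical helper of our own):

```python
import numpy as np

def ae_mse(estimates, true_value):
    """Average estimate (AE) and mean squared error (MSE) of one parameter
    over N Monte Carlo replications."""
    est = np.asarray(estimates)
    return float(est.mean()), float(np.mean((true_value - est) ** 2))
```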
Simulation study has been done according to the following steps:
Step 1. For given $\lambda_{11} = 2.0$, $\lambda_{12} = 1.0$, $\lambda_{21} = 4.0$, $\lambda_{22} = 2.0$, generate PT-IIC samples based on the different sample sizes.
Step 2. The MLEs and the asymptotic CIs of the parameter λ i j ( i , j = 1 , 2 ) are computed using Equations (14) and (16).
Step 3. BPCIs and BTCIs are obtained for the parameter λ i j ( i , j = 1 , 2 ) using Equations (17) and (18). Here, we have taken B = 1000 in Bootstrap CIs.
Step 4. The BEs are computed using Equations (23), (25) and (27), the E-BEs are computed using Equations (31)-(36), and the H-BEs are computed using Equations (42)-(47). Here, the HPD CRIs are also obtained using Equation (49).
Step 5. Repeat step 1 to step 4 N times and calculate the AEs and MSEs for each estimate. The results are presented in Table 2, Table 3, Table 4, Table 5, Table 6, Table 7, Table 8, Table 9, Table 10, Table 11, Table 12 and Table 13.
Step 6. Compute the average lengths (ALs) and the coverage probabilities (CPs) of the 95% CIs obtained from the MLEs (asymptotic), the Bootstrap-p method, the Bootstrap-t method, the Bayesian method, the EB method, and the HB method. The results are presented in Table 14, Table 15, Table 16 and Table 17.
Step 7. Compute the ALs and the CPs of the HPD CRIs using the Bayesian method, the EB method, and the HB method; the results are presented in Table 18, Table 19, Table 20 and Table 21.
Table 1. The prefixed sample sizes and PT-IIC cases.
Scheme   n    N1   N2   Sum R_i (i <= N1)   Sum R_i (i > N1)   (R_1, ..., R_N1)           (R_N1+1, ..., R_N1+N2)
1        40   15   15   5                   5                  (0,…,0,1,1,2,1)            (0,…,0,1,1,2,1)
         40   20   10   5                   5                  (0,…,0,1,2,2)              (0,…,0,1,2,2)
         40   10   20   5                   5                  (0,…,0,1,2,2)              (0,…,0,1,2,2)
2        60   23   23   7                   7                  (0,…,0,1,2,2,2)            (0,…,0,1,2,2,2)
         60   30   16   8                   6                  (0,…,0,2,2,2,2)            (0,…,0,2,2,2,2)
         60   16   30   6                   8                  (0,…,0,2,2,2)              (0,…,0,2,2,2,2)
3        80   30   30   10                  10                 (1,1,2,1,0,…,1,2,1,1)      (1,1,2,1,0,…,1,2,1,1)
         80   40   20   14                  6                  (2,2,2,1,0,…,0,1,2,2,2)    (1,1,1,0,…,0,1,1,1)
         80   20   40   6                   14                 (1,1,1,0,…,0,1,1,1)        (2,2,2,1,0,…,0,1,2,2,2)
Table 2. AEs and MSEs of the parameter λ 11 = 2 based on SELF.
n  N1  N2 | λ̂11(MLE): AE, MSE | λ̂11(BS): AE, MSE | λ̂11(EBS): AE, MSE | λ̂11(HBS): AE, MSE | Best Estimator
4015152.13160.36752.10960.24682.1184
2.1186
0.2479
0.2393
2.1303
2.1324
0.2127
0.1796
Bayesian
20101.90620.24722.09190.20852.0926
2.0942
0.1945
0.1914
2.1075
2.1123
0.2110
0.2063
E-Bayesian
10201.88030.42432.10680.32961.9524
1.9449
0.2451
0.2557
2.1519
2.1570
0.3470
0.3449
E-Bayesian
6023232.12540.24422.09990.25491.9859
1.9780
0.2123
0.2091
2.1227
2.1152
0.2425
0.2124
E-Bayesian
30162.09850.20492.10710.20371.9273
1.9197
0.1951
0.1901
2.0932
2.0980
0.2031
0.2003
E-Bayesian
16302.14340.38912.12070.26451.8931
1.8860
0.2072
0.2045
2.1113
2.1087
0.1166
0.1019
H-Bayesian
8030302.11360.22581.90850.24851.9216
1.9249
0.2143
0.2112
2.0962
2.1036
0.2040
0.2078
E-Bayesian
40202.10780.15192.09050.13181.9602
1.9630
0.1588
0.1572
2.0461
2.0381
0.1303
0.1322
H-Bayesian
20402.13990.41562.15300.36872.1537
2.1539
0.3251
0.3438
2.0943
2.1067
0.1377
0.1378
H-Bayesian
Table 3. AEs and MSEs of the parameter λ 11 = 2 based on ELF.
n  N1  N2 | λ̂11(MLE): AE, MSE | λ̂11(BE): AE, MSE | λ̂11(EBE): AE, MSE | λ̂11(HBE): AE, MSE | Best Estimator
4015152.13160.36752.10960.24682.0687
2.0592
0.1897
0.1821
2.1296
2.1275
0.2076
0.2080
E-Bayesian
20101.90620.24722.09190.20852.0143
2.0162
0.1382
0.1377
2.0760
2.0786
0.1502
0.1573
E-Bayesian
10201.88030.42432.10680.32961.8916
1.8985
0.3546
0.3549
2.1107
2.1039
0.3380
0.3479
Bayesian
6023232.12540.24422.09990.25491.9082
1.9106
0.2043
0.2013
2.0982
2.0901
0.2124
0.2301
E-Bayesian
30162.09850.20492.10710.20371.9439
1.9366
0.2714
0.2667
2.0956
2.1010
0.1884
0.1879
E-Bayesian
16302.14340.38912.12070.26451.8203
1.8067
0.2962
0.2912
2.1189
2.1027
0.1978
0.1830
H-Bayesian
8030302.11360.22581.90850.24851.9114
1.9149
0.2170
0.2140
2.0937
2.1025
0.2252
0.2267
E-Bayesian
40202.10780.15192.09050.13181.9012
1.8943
0.1528
0.1512
2.0286
2.0417
0.1282
0.1328
H-Bayesian
20402.13990.41562.15300.36872.1560
2.1566
0.3763
0.3592
2.1097
2.1003
0.1128
0.1129
H-Bayesian
Table 4. AEs and MSEs of the parameter λ 11 = 2 based on LLF.
n  N1  N2 | λ̂11(MLE): AE, MSE | λ̂11(BL): AE, MSE | λ̂11(EBL): AE, MSE | λ̂11(HBL): AE, MSE | Best Estimator
4015152.13160.36752.10960.24682.0899
2.0806
0.1552
0.1384
1.9033
1.9002
0.2183
0.2048
E-Bayesian
20101.90620.24722.09190.20851.9982
1.9902
0.1373
0.1336
1.9589
1.9596
0.1540
0.1573
E-Bayesian
10201.88030.42432.10680.32961.8974
1.9001
0.4364
0.4402
1.8978
1.8981
0.3942
2.3899
Bayesian
6023232.12540.24422.09990.25491.9231
1.9257
0.2384
0.2354
1.9777
1.9729
0.2005
0.2034
H-Bayesian
30162.09850.20492.10710.20371.9294
1.9223
0.2624
0.2779
1.88115
1.8821
0.2612
0.2681
E-Bayesian
16302.14340.38912.12070.26451.8967
1.8992
0.3912
0.3878
1.8862
1.8879
0.2804
0.2832
Bayesian
8030302.11360.22581.90850.24851.9185
1.9178
0.1911
0.1882
1.9192
1.9224
0.2528
0.2446
E-Bayesian
40202.10780.15192.09050.13181.9043
1.9075
0.1512
0.1496
2.0596
2.0630
0.1308
0.1353
H-Bayesian
20402.13990.41562.15300.36872.1474
2.1574
0.3265
0.3329
1.9133
1.9262
0.2101
0.2214
H-Bayesian
Table 5. AEs and MSEs of the parameter λ 12 = 1 based on SELF.
n  N1  N2 | λ̂12(MLE): AE, MSE | λ̂12(BS): AE, MSE | λ̂12(EBS): AE, MSE | λ̂12(HBS): AE, MSE | Best Estimator
4015151.10030.18381.09130.10271.1193
1.1148
0.1115
0.1106
1.1468
1.1226
0.1372
0.1327
Bayesian
20101.05760.16011.09120.11861.0477
1.0533
0.1084
0.1074
1.1240
1.1219
0.1581
0.1623
E-Bayesian
10201.14080.26341.17170.27320.8999
0.8958
0.1129
0.1058
1.1526
1.1578
0.2228
0.2178
E-Bayesian
6023231.12170.19441.11310.15781.1003
1.1072
0.1486
0.1369
1.1148
1.1141
0.1780
0.1776
E-Bayesian
30161.09170.114571.09940.11851.0848
1.0801
0.1449
0.1433
1.1096
1.1075
0.1348
0.1388
E-Bayesian
16301.18880.24561.20750.23381.1925
1.1883
0.2225
0.2201
1.1736
1.1739
0.2135
0.2269
H-Bayesian
8030301.12320.19871.13110.12971.1482
1.1446
0.1671
0.1658
1.1202
1.1245
0.1138
0.1172
H-Bayesian
40200.92470.13021.10800.12390.9460
0.9524
0.1163
0.1148
1.0239
1.0268
0.1128
0.1078
H-Bayesian
20401.14380.27841.16050.26971.1414
1.1356
0.1809
0.1758
1.2026
1.1932
0.2323
0.2443
E-Bayesian
Table 6. AEs and MSEs of the parameter λ 12 = 1 based on ELF.
n  N1  N2 | λ̂12(MLE): AE, MSE | λ̂12(BE): AE, MSE | λ̂12(EBE): AE, MSE | λ̂12(HBE): AE, MSE | Best Estimator
4015151.10030.18381.09130.10271.0796
1.0854
0.1088
0.1079
1.1831
1.1816
0.1290
0.1248
E-Bayesian
20101.05760.16011.09120.11861.1394
1.1353
0.1290
0.1281
1.1665
1.1667
0.1238
0.1210
Bayesian
10201.14080.26341.17170.27320.9792
0.9853
0.1233
0.1176
1.1976
1.1955
0.2335
0.2300
E-Bayesian
6023231.12170.19441.11310.15781.1042
1.0997
0.1353
0.1436
1.0804
1.0911
0.1620
0.1694
H-Bayesian
30161.09170.114571.09940.11851.0213
1.0182
0.1418
0.1425
1.0812
1.0907
0.1279
0.1324
E-Bayesian
16301.18880.24561.20750.23381.1797
1.1757
0.2108
0.2086
1.1570
1.1667
0.1902
0.2003
H-Bayesian
8030301.12320.19871.13110.12971.1380
1.1347
0.1658
0.1653
1.0455
1.0448
0.1727
0.1871
H-Bayesian
40200.92470.13021.10800.12390.9740
0.9806
0.1120
0.1106
1.0234
1.0203
0.1339
0.1323
E-Bayesian
20401.14380.27841.16050.26970.9638
0.9671
0.1578
0.1654
1.1224
1.1168
0.1650
0.1901
E-Bayesian
Table 7. AEs and MSEs of the parameter λ 12 = 1 based on LLF.
n  N1  N2 | λ̂12(MLE): AE, MSE | λ̂12(BL): AE, MSE | λ̂12(EBL): AE, MSE | λ̂12(HBL): AE, MSE | Best Estimator
4015151.10030.18381.09130.10271.09727
1.08002
0.1062
0.1123
1.1043
1.0951
0.1034
0.1067
E-Bayesian
20101.05760.16011.09120.11861.0474
1.0465
0.0862
0.0973
1.0404
1.0430
0.1041
0.0936
E-Bayesian
10201.14080.26341.17170.27320.8978
0.9012
0.1638
0.1579
1.0372
1.0348
0.1402
0.1587
H-Bayesian
6023231.12170.19441.11310.15781.1110
1.1023
0.1303
0.1294
1.0933
1.0974
0.1274
0.1017
H-Bayesian
30161.09170.114571.09940.11851.0883
1.0898
0.1401
0.1398
1.0523
1.0638
0.0992
0.1274
H-Bayesian
16301.18880.24561.20750.23381.0379
1.0381
0.2043
0.2396
1.0642
1.0778
0.2473
1.2106
E-Bayesian
8030301.12320.19871.13110.12971.1228
1.1235
0.1628
0.1579
0.9898
0.9954
0.1212
0.1174
H-Bayesian
40200.92470.13021.10800.12390.9398
0.9378
0.1079
0.1022
1.0232
1.0250
0.1103
0.1080
H-Bayesian
20401.14380.27841.16050.26970.9225
0.9279
0.1427
0.1529
1.1789
1.1829
0.2319
0.2296
E-Bayesian
Table 8. AEs and MSEs of the parameter λ 21 = 4 based on SELF.
n  N1  N2 | λ̂21(MLE): AE, MSE | λ̂21(BS): AE, MSE | λ̂21(EBS): AE, MSE | λ̂21(HBS): AE, MSE | Best Estimator
4015154.16630.47254.10640.44144.0965
4.0858
0.3378
0.3412
4.1217
4.1218
0.4314
0.4237
E-Bayesian
20103.87630.37943.88470.33583.8802
3.8544
0.4354
0.4401
4.1941
4.1809
0.4779
0.4649
Bayesian
10203.86470.24393.90820.26293.9169
3.9129
0.3313
0.3137
4.1012
4.1035
0.4153
0.3988
E-Bayesian
6023234.12420.38154.12930.39834.0857
4.0917
0.3735
0.3621
4.0566
4.0863
0.3280
0.3312
H-Bayesian
30164.23650.31783.85940.26154.1617
4.1559
0.3239
0.3190
4.1217
4.1234
0.2613
0.2529
H-Bayesian
16303.85690.31023.89240.27973.9007
3.9021
0.2444
0.2503
4.1224
4.1306
0.3628
0.3604
E-Bayesian
8030303.76810.31483.82230.31253.8469
3.8434
0.2884
0.2935
4.1125
4.1174
0.2189
0.2108
H-Bayesian
40204.21470.40414.15420.37064.1178
4.1126
0.2516
0.2775
4.1345
4.1321
0.3605
0.3768
E-Bayesian
20403.87040.24633.89510.25063.9149
3.9184
0.3190
0.3220
4.0890
4.0927
0.3378
0.3276
H-Bayesian
Table 9. AEs and MSEs of the parameter λ 21 = 4 based on ELF.
n  N1  N2 | λ̂21(MLE): AE, MSE | λ̂21(BE): AE, MSE | λ̂21(EBE): AE, MSE | λ̂21(HBE): AE, MSE | Best Estimator
4015154.16630.47254.10640.44144.0831
4.0788
0.2848
0.2764
4.1256
4.1168
0.3752
0.3710
E-Bayesian
20103.87630.37943.88470.33583.8530
3.8612
0.4329
0.4275
4.1303
4.1295
0.3854
0.3253
Bayesian
10203.86470.24393.90820.26293.8960
3.8994
0.1453
0.1581
4.1041
4.1019
0.2415
0.2482
E-Bayesian
6023234.12420.38154.12930.39834.0739
4.0810
0.4334
0.4130
4.0533
4.0675
0.3940
0.4003
H-Bayesian
30164.23650.31783.85940.26154.1055
4.0981
0.3432
0.3301
4.0896
4.0920
0.2761
0.2724
H-Bayesian
16303.85690.31023.89240.27973.9619
3.9565
0.2952
0.2828
4.0899
4.0882
0.3127
0.3238
E-Bayesian
8030303.76810.31483.82230.31253.8092
3.7971
0.3486
0.3340
4.0930
4.1029
0.3026
0.3248
H-Bayesian
40204.21470.40414.15420.37064.1821
4.2086
0.3515
0.3487
4.1115
4.1262
0.2442
0.2792
H-Bayesian
20403.87040.24633.89510.25063.9033
3.8924
0.2204
0.2634
4.0510
4.0479
0.2044
0.1978
H-Bayesian
Table 10. AEs and MSEs of the parameter λ 21 = 4 based on LLF.
n  N1  N2 | λ̂21(MLE): AE, MSE | λ̂21(BL): AE, MSE | λ̂21(EBL): AE, MSE | λ̂21(HBL): AE, MSE | Best Estimator
4015154.16630.47254.10640.44144.0948
4.0960
0.3552
0.3541
4.1424
4.1470
0.3932
0.4009
E-Bayesian
20103.87630.37943.88470.33583.8703
3.8867
0.4353
0.3922
4.1580
4.1515
0.3646
0.3579
Bayesian
10203.86470.24393.90820.26293.9151
3.9188
0.2539
0.2558
4.1147
4.1088
0.3356
0.3218
E-Bayesian
6023234.12420.38154.12930.39834.0996
4.1078
0.2940
0.2749
4.1030
4.0975
0.3003
0.3187
E-Bayesian
30164.23650.31783.85940.26154.1382
4.1449
0.2248
0.2380
4.1208
4.1252
0.2197
0.2119
H-Bayesian
16303.85690.31023.89240.27973.9138
3.9165
0.2248
0.2527
4.0941
4.0897
0.3228
0.3340
E-Bayesian
8030303.76810.31483.82230.31253.8463
3.8452
0.3203
0.3312
4.1051
4.1046
0.2732
0.2825
H-Bayesian
40204.21470.40414.15420.37064.1073
4.1154
0.2754
0.2831
4.1343
4.1430
0.3541
0.3306
E-Bayesian
20403.87040.24633.89510.25063.9249
3.9211
0.2810
0.2717
4.0749
4.0672
0.1620
0.1521
H-Bayesian
Table 11. AEs and MSEs of the parameter λ 22 = 2 based on SELF.
n  N1  N2 | λ̂22(MLE): AE, MSE | λ̂22(BS): AE, MSE | λ̂22(EBS): AE, MSE | λ̂22(HBS): AE, MSE | Best Estimator
4015151.88960.22612.10120.26621.9012
1.9065
0.1923
0.1713
2.1217
2.1315
0.3897
0.3405
E-Bayesian
20101.84260.35041.84580.32161.8361
1.8327
0.3661
0.3621
2.1612
2.1586
0.4151
0.4164
Bayesian
10202.13610.29641.90440.23712.0502
2.0318
0.1874
0.1916
2.0923
2.0982
0.3362
0.3417
E-Bayesian
6023232.14810.20302.14770.22421.8998
1.9050
0.2087
0.1961
2.1033
2.1027
0.2473
0.2523
E-Bayesian
30162.16510.28932.17780.30022.1414
2.1224
0.2579
0.2637
2.1374
2.1366
0.2225
0.2352
H-Bayesian
16302.13300.21602.12370.18811.9286
1.9244
0.2212
0.2098
2.0864
2.0758
0.2974
0.2736
E-Bayesian
8030302.13310.18422.13880.16672.1481
2.1373
0.1644
0.1693
2.1038
2.1082
0.1346
0.1338
H-Bayesian
40202.18780.29352.18010.30342.1626
2.1708
0.3323
0.3105
2.1826
2.1756
0.3526
0.3781
E-Bayesian
20402.11760.21652.12080.15082.1201
2.1188
0.1237
0.1200
2.0438
2.0386
0.1078
0.1056
H-Bayesian
Table 12. AEs and MSEs of the parameter λ 22 = 2 based on ELF.
n  N1  N2 | λ̂22(MLE): AE, MSE | λ̂22(BE): AE, MSE | λ̂22(EBE): AE, MSE | λ̂22(HBE): AE, MSE | Best Estimator
4015151.88960.22612.10120.26621.9159
1.9019
0.2176
0.2136
2.1008
2.1011
0.2928
0.2613
E-Bayesian
20101.84260.35041.84580.32161.9088
1.9027
0.2637
0.2597
2.1181
2.1202
0.3377
0.3217
E-Bayesian
10202.13610.29641.90440.23712.0558
2.0582
0.2088
2.2134
2.0945
2.0979
0.2798
0.2808
E-Bayesian
6023232.14810.20302.14770.19421.9179
1.9043
0.1815
0.1696
2.0919
2.0902
0.1577
0.1519
H-Bayesian
30162.16510.28932.17780.30021.9852
1.9676
0.2918
0.3069
2.1355
2.1595
0.3353
0.3459
E-Bayesian
16302.13300.21602.12370.18811.8873
1.8942
0.1987
0.1879
2.0558
2.0562
0.2434
0.2106
H-Bayesian
8030302.13310.18422.13880.16672.1234
2.1207
0.1640
0.1629
2.1025
2.1085
0.1493
0.1342
H-Bayesian
40202.18780.29352.18010.30342.1570
2.1468
0.2119
0.2117
2.1654
2.1635
0.2360
0.2387
E-Bayesian
20402.11760.21652.12080.15082.0884
2.0981
0.2137
0.2104
1.9608
1.9799
0.1663
0.1648
H-Bayesian
Table 13. AEs and MSEs of the parameter λ 22 = 2 based on LLF.
n  N1  N2 | λ̂22(MLE): AE, MSE | λ̂22(BL): AE, MSE | λ̂22(EBL): AE, MSE | λ̂22(HBL): AE, MSE | Best Estimator
4015151.88960.22612.10120.26621.9385
1.9249
0.1966
0.2057
1.9063
1.9148
0.3635
0.3901
E-Bayesian
20101.84260.35041.84580.32161.9036
1.9161
0.2558
0.2519
1.8968
1.8983
0.3863
0.3866
E-Bayesian
10202.13610.29641.90440.23712.1150
2.1271
0.2923
0.3013
1.9023
1.9096
0.3463
0.3384
Bayesian
6023232.14810.20302.14770.19421.8912
1.9080
0.2585
0.2474
1.9187
1.9203
0.1410
0.1373
H-Bayesian
30162.16510.28932.17780.30021.8708
1.8839
0.3696
0.3570
1.9764
1.9814
0.2338
0.2282
H-Bayesian
16302.13300.21602.12370.18811.9617
1.9489
0.1778
0.1675
1.9212
1.9334
0.2302
0.2123
E-Bayesian
8030302.13310.18422.13880.16672.0855
2.0885
0.1918
0.1907
2.1242
2.1425
0.1860
0.1845
E-Bayesian
40202.18780.29352.18010.30342.1177
2.1084
0.2436
0.2254
1.8713
1.8687
0.3313
0.3436
E-Bayesian
20402.11760.21652.12080.15082.0792
2.0732
0.2301
0.2437
1.9387
1.9403
0.1172
0.1137
H-Bayesian
Table 14. AL and CP of 95% asymptotic CIs of the parameter λ 11 based on 1000 replications.
n  N1  N2 | MLE | Boot-p | Boot-t | Bay | E-Bay1 | E-Bay2 | H-Bay1 | H-Bay2   (each cell: AL, then CP in %)
4015152.2650
96.1
1.6267
96.2
1.6045
96.3
1.5098
96.3
1.5104
96.5
1.5087
96.5
1.6801
96.7
1.6646
96.8
20101.7281
96.4
1.5522
96.5
1.5475
96.5
1.4935
96.6
1.5012
96.8
1.5109
96.7
1.6382
96.6
1.6262
96.5
10202.3154
96.1
1.6929
96.3
1.6869
96.2
1.55720
96.2
1.5467
96.5
1.5523
96.4
1.5128
96.6
1.5057
96.7
6023231.7720
96.1
1.5697
96.3
1.6001
96.2
1.4564
96.5
1.5028
96.6
1.4875
96.6
1.5016
96.5
1.5405
96.4
30161.6327
96.4
1.5979
96.5
1.6007
96.4
1.4209
96.8
1.3883
97.0
1.4078
96.9
1.2468
97.0
1.2480
97.1
16302.0635
96.0
1.8008
96.2
1.7902
96.1
1.5702
96.3
1.6056
96.4
1.6123
96.5
1.6058
96.3
1.5735
96.3
8030301.5828
96.3
1.6270
96.6
1.6353
96.7
1.5828
96.8
1.5730
96.7
1.5854
96.8
1.3449
97.2
1.3156
97.3
40201.3788
96.8
1.5905
97.0
1.6032
96.9
1.4168
97.1
1.4140
97.0
1.3977
96.9
1.2802
97.4
1.2919
97.5
20401.8671
96.2
1.6587
96.4
1.6462
96.6
1.6249
96.6
1.5897
96.5
1.6117
96.6
1.4772
97.0
1.5190
97.1
Table 15. AL and CP of 95% asymptotic CIs of the parameter λ 12 based on 1000 replications.
n  N1  N2 | MLE | Boot-p | Boot-t | Bay | E-Bay1 | E-Bay2 | H-Bay1 | H-Bay2   (each cell: AL, then CP in %)
4015151.6723
96.4
1.4043
96.6
1.4103
96.5
1.3593
96.8
1.4187
96.8
1.4298
96.7
1.3932
96.6
1.4437
96.7
20101.3693
96.6
1.3354
96.7
1.3168
96.7
1.1784
97.2
1.2657
97.0
1.2701
96.9
1.2939
96.8
1.3024
96.9
10201.7863
96.3
1.6423
96.4
1.5858
96.3
1.4460
96.6
1.3334
96.9
1.3385
96.8
1.4606
96.6
1.4642
96.6
6023231.2831
96.4
1.1406
96.5
1.1318
96.6
1.2003
96.7
1.1975
97.1
1.1893
97.0
1.2199
97.1
1.2108
96.9
30161.1871
96.5
1.1650
96.7
1.1745
96.8
1.0954
96.9
1.0876
97.3
1.0933
97.4
1.1456
97.3
1.1554
97.2
16301.5851
96.4
1.2345
96.6
1.2305
96.7
1.2202
96.6
1.2001
96.8
1.1987
96.9
1.3907
96.9
1.3896
96.8
8030301.0901
96.7
1.1267
96.8
1.1197
96.9
1.0956
97.2
1.1063
97.1
1.1085
97.0
1.0688
97.2
1.0693
97.3
40200.9497
96.8
1.0902
97.1
1.0790
97.2
1.0812
97.4
1.0743
97.3
1.0756
97.3
0.9454
97.4
0.9588
97.4
20401.3015
96.5
1.1716
96.5
1.1569
96.7
1.1277
97.1
1.1376
97.0
1.1297
96.9
1.3364
97.0
1.3992
97.1
Table 16. AL and CP of 95% asymptotic CIs of the parameter λ 21 based on 1000 replications.
n  N1  N2 | MLE | Boot-p | Boot-t | Bay | E-Bay1 | E-Bay2 | H-Bay1 | H-Bay2   (each cell: AL, then CP in %)
4015154.6865
95.6
3.4313
95.8
3.4951
95.9
2.8982
96.1
2.7869
96.0
2.8065
95.9
3.0162
95.7
3.0273
95.7
20103.7323
95.8
3.0256
96.0
3.0088
96.0
2.3586
96.2
2.4067
96.1
2.4198
96.1
3.4455
95.8
3.1465
95.7
10203.2754
95.9
2.9521
96.2
2.9771
96.1
2.1729
96.3
2.2471
96.2
2.3089
96.2
2.8502
95.9
2.8535
96.0
6023233.6721
96.2
3.3443
96.3
3.3672
96.5
2.6876
96.4
2.7013
96.7
2.4987
96.8
2.8644
96.7
2.7576
96.8
30164.4871
96.0
3.4678
96.2
3.5389
96.3
2.6006
96.4
2.5978
96.6
2.5409
96.6
2.9116
96.5
2.9149
96.5
16303.0036
96.4
3.0457
96.5
3.0056
96.7
2.4792
96.6
2.385
96.9
2.455
97.0
2.4469
97.0
2.4607
97.1
8030302.8167
96.5
2.9564
96.5
2.8902
96.7
2.8167
96.8
2.8637
96.9
2.8232
97.0
2.3535
97.2
2.2528
97.3
40203.0092
96.4
3.0532
96.4
3.1671
96.5
2.9869
96.7
3.0011
96.7
2.9945
96.8
2.8054
96.9
2.7909
97.1
20402.5920
96.6
2.8737
96.9
2.7877
97.1
2.6943
97.2
2.7089
97.3
2.6880
97.3
2.2513
97.4
2.1863
97.5
Table 17. AL and CP of 95% asymptotic CIs of the parameter λ 22 based on 1000 replications.
n  N1  N2 | MLE | Boot-p | Boot-t | Bay | E-Bay1 | E-Bay2 | H-Bay1 | H-Bay2   (each cell: AL, then CP in %)
4015152.7919
95.2
2.0592
96.4
2.0366
96.3
1.9583
96.5
2.2453
96.3
2.2567
96.3
2.2574
96.2
2.2464
96.1
20102.8595
95.1
2.3604
95.4
2.3537
95.8
2.3724
96.0
2.3434
96.2
2.3512
96.2
2.5667
96.0
2.6456
95.9
10202.4929
95.4
2.2789
95.7
2.2786
95.8
2.4052
95.9
2.1564
96.5
2.1677
96.4
2.2062
96.2
2.1921
96.3
6023232.526
96.4
2.2596
96.5
2.2972
96.7
1.9104
96.9
1.9234
96.6
1.9097
96.6
2.0271
96.7
2.0414
96.6
30162.6553
96.2
2.5979
96.4
2.5328
96.5
2.0071
96.7
2.0198
96.8
2.0210
96.7
1.9900
96.8
2.0108
96.7
16302.3803
96.6
2.2180
96.7
2.2275
96.8
1.8775
96.9
1.9034
96.9
1.8967
96.8
1.8607
97.1
1.8715
97.0
8030302.2274
96.7
2.2970
96.8
2.3097
96.7
2.2273
97.1
2.2456
96.9
2.2428
96.8
2.0538
97.2
2.0419
97.0
40202.6615
96.2
2.3904
96.4
2.3464
96.4
2.2612
96.7
2.3014
96.4
2.2951
96.5
2.1447
96.7
2.1797
96.8
20401.9461
96.8
2.2842
96.9
2.2096
97.0
2.1535
97.4
2.1837
97.3
2.1756
97.3
1.8597
97.4
1.7808
97.5
Table 18. AL and CP of 95% HPD CIs of the parameter λ 11 based on 1000 replications.
n  N1  N2 | Bay | E-Bay1 | E-Bay2 | H-Bay1 | H-Bay2   (each cell: AL, then CP in %)
4015151.7864
96.7
1.6341
96.8
1.6402
96.8
1.7754
96.7
1.8354
96.7
20101.6324
96.8
1.5776
96.9
1.5809
97.0
1.6393
96.9
1.6238
96.9
10201.8721
96.8
1.9467
96.7
1.9387
96.7
2.0227
96.6
2.0178
96.6
6023231.5403
96.7
1.4650
96.8
1.4701
96.7
1.4806
96.6
1.4776
96.6
30161.3917
96.9
1.2199
97.0
1.2740
97.1
1.3492
96.8
1.3406
96.7
16301.6362
96.5
1.5267
96.7
1.5249
96.8
1.4258
97.1
1.4375
97.1
8030301.3275
96.7
1.3974
96.6
1.2896
96.7
1.2661
97.2
1.2470
97.4
40201.1935
97.1
1.1890
97.2
1.2011
97.0
1.1279
97.5
1.1489
97.6
20401.5047
96.6
1.5104
96.6
1.5098
96.5
1.3785
97.2
1.3088
97.3
Table 19. AL and CP of 95% HPD CIs of the parameter λ 12 based on 1000 replications.
n  N1  N2 | Bay | E-Bay1 | E-Bay2 | H-Bay1 | H-Bay2   (each cell: AL, then CP in %)
4015151.2443
96.2
1.2067
96.5
1.1999
96.8
1.1934
96.9
1.1662
97.0
20101.1185
96.6
1.1367
96.8
1.1289
97.0
1.0630
97.0
1.0445
97.1
10201.3060
96.0
1.2567
96.4
1.2481
96.7
1.2342
96.8
1.1994
96.9
6023231.1831
96.7
1.0165
96.9
1.0189
96.8
1.1312
97.0
1.1338
97.1
30161.1788
96.9
0.8824
97.1
0.8743
97.2
1.0824
97.1
1.0943
97.2
16301.1977
96.7
1.1876
96.9
1.1901
96.8
1.1562
97.0
1.1679
97.1
8030301.0754
96.9
1.0854
97.0
1.0812
97.1
1.1258
97.1
1.1514
97.3
40201.0660
97.2
1.0667
97.1
1.0589
97.2
1.0289
97.2
1.0174
97.5
20401.1099
96.8
1.1123
96.9
1.1143
97.0
1.1638
97.1
1.1769
97.2
Table 20. AL and CP of 95% HPD CIs of the parameter λ 21 based on 1000 replications.
n  N1  N2 | Bay | E-Bay1 | E-Bay2 | H-Bay1 | H-Bay2   (each cell: AL, then CP in %)
4015152.2610
96.7
2.2089
96.8
2.1998
96.9
2.4482
96.7
2.4560
96.7
20102.3346
96.6
2.4067
96.6
2.4145
96.5
2.5248
96.4
2.5436
96.3
10202.2429
96.8
2.1698
97.2
2.1704
97.3
2.2213
97.1
2.2357
97.0
6023232.3493
96.5
2.2719
96.9
2.2987
96.8
2.2024
97.0
2.1932
97.1
30162.3732
96.3
2.3545
96.8
2.3499
96.7
2.2676
96.9
2.2569
96.9
16302.2214
96.8
2.1978
97.0
2.2054
97.0
2.0785
97.3
2.0804
97.4
8030302.3141
96.7
2.2270
97.0
2.2320
97.1
2.1084
97.4
1.9397
97.6
40202.4359
96.4
2.3976
96.9
2.4012
96.9
2.3392
97.3
2.3285
97.4
20402.1551
96.9
2.1485
97.2
2.1432
97.3
2.0285
97.5
1.9038
97.7
Table 21. AL and CP of 95% HPD CIs of the parameter λ 22 based on 1000 replications.
n  N1  N2 | Bay | E-Bay1 | E-Bay2 | H-Bay1 | H-Bay2   (each cell: AL, then CP in %)
4015151.8864
96.5
1.6887
96.8
1.7014
96.7
2.1704
95.9
2.1615
96.0
20102.4531
95.3
2.2454
95.7
2.3089
96.0
2.3015
95.7
2.2925
95.6
10202.0525
96.2
1.9001
96.6
1.8976
96.6
2.0185
96.3
1.9726
96.4
6023232.5403
96.1
2.3867
96.1
2.4009
96.0
1.8808
96.6
1.8783
96.8
30162.6916
95.9
2.4563
96.0
2.5012
95.9
1.9384
96.5
1.9467
96.7
16302.3362
96.2
2.1554
96.4
2.1398
96.5
1.6731
96.9
1.6643
97.0
8030301.6275
97.0
1.6267
96.9
1.6358
96.8
1.5221
97.1
1.5097
97.2
40202.2935
96.4
2.1152
96.6
2.0147
96.6
1.8577
96.8
1.8449
96.9
20401.5047
97.2
1.5107
97.3
1.5132
97.4
1.4590
97.6
1.4424
97.7

7.2. Results Analysis

(1)
From Tables 2-13, we observe that the AEs of $\lambda_{ij}$ ($i, j = 1, 2$) are close to the true values and that the MSEs of $\lambda_{ij}$ ($i, j = 1, 2$) decrease as $n$ increases for all estimates. This indicates that the number of observed failures of the test units affects the estimation accuracy of the parameters.
(2)
From Tables 2-13, the Bayesian estimates perform better than the MLEs, and the E-Bayesian or H-Bayesian estimates perform better than the Bayesian ones for fixed $n$, $N_1$, $N_2$, and censoring scheme. The results show that the Bayesian methods improve the estimation accuracy of the model parameters by incorporating prior information.
(3)
From Tables 2-13, we can infer that the H-BEs perform best in the cases with larger sample sizes, and the E-BEs perform best in the cases with smaller sample sizes, under the different loss functions.
(4)
From Tables 2-13, we observe that when the proportion of failures under stress $S_1$ is greater than that under stress $S_2$, the estimates of the parameters $\lambda_{1j}$ ($j = 1, 2$) are closer to the true values, and vice versa. This shows that the stress-change time $\tau$ in the test also affects the estimation accuracy of the model parameters.
(5)
From Tables 14-21, the ALs of all asymptotic CIs and HPD CRIs become smaller, and the CPs are very close to the corresponding nominal level, as $n$ increases.
(6)
From Tables 14-21, we observe that the H-Bayesian CIs are always narrower than the other CIs, and the HPD CRIs are always narrower than the corresponding asymptotic CIs under the same loss function.

8. An Illustrative Example

In this section, we simulate a PT-IIC sample from a simple step-stress accelerated competing failure model. The dataset is generated with the following choices of the parameters: $\lambda_{11} = 1.0$, $\lambda_{12} = 1.5$, $\lambda_{21} = 2.0$ and $\lambda_{22} = 3.0$, and $n = 30$, $N_1 = 10$, $R_1 = 4$, $N_2 = 12$, and $R_2 = 4$. The data are given in Table 22. From this dataset, we have $n_{11} = 4$, $n_{12} = 6$, $n_{21} = 5$ and $n_{22} = 7$, and the MLEs, BEs, E-BEs, and H-BEs of the parameters are derived based on the SELF. The results are presented in Table 23. From Table 23, it is clearly observed that the E-Bayesian and H-Bayesian estimates perform better than the MLEs.
We also construct the 95% asymptotic CIs, whose lengths are presented in Table 24, and the HPD CRIs, whose lengths are presented in Table 25. From Table 24 and Table 25, we observe that the H-Bayesian CIs are always narrower than the other CIs, and the HPD CRIs are always narrower than the corresponding asymptotic CIs.
Table 22. The data for an illustrative example.
First stress level:   (0.00638,2), (0.01442,1), (0.01738,1), (0.02380,2), (0.04067,2), (0.05375,2), (0.06667,1), (0.08122,1), (0.11568,2), (0.15354,2)
Second stress level:  (0.17226,1), (0.18334,2), (0.20501,2), (0.21434,1), (0.21518,2), (0.22165,1), (0.23910,1), (0.24391,1), (0.26104,2), (0.32582,2), (0.34505,2), (0.65557,2)
Table 23. AEs of the parameters for an illustrative example.
ParameterMLEBEEBE1EBE2HBE1HBE2
λ 11 1.090.9681.020.9661.0161.021
λ 12 1.621.5161.471.4851.5111.520
λ 22 3.212.832.922.8943.0523.092
Table 24. The ALs of the 95% asymptotic CIs for an illustrative example.
Parameter   MLE     BE      EBE1    EBE2    HBE1    HBE2
λ11         2.127   1.709   1.712   1.699   1.377   1.409
λ12         2.605   2.194   2.201   2.217   1.514   1.521
λ21         3.529   3.025   3.125   3.221   2.546   2.691
λ22         3.268   2.997   3.009   3.014   1.795   1.802
Table 25. The ALs of the 95% HPDCRIs for an illustrative example.
Parameter   BE      EBE1    EBE2    HBE1    HBE2
λ11         1.621   1.635   1.659   1.346   1.361
λ12         2.158   2.098   2.117   1.481   1.457
λ21         2.807   2.765   2.769   2.184   2.201
λ22         2.679   2.544   2.608   1.685   1.595
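The HPDCRIs in Table 25 are the shortest intervals carrying 95% posterior probability. A standard way to compute such an interval from posterior draws is the Monte Carlo approach of Chen and Shao [36]: sort the draws and take the narrowest window containing 95% of them. The sketch below illustrates this for an exponential rate with a conjugate gamma posterior; the hyperparameter and data values used here are placeholders, not the ones used in the paper.

```python
import numpy as np

def hpd_interval(draws, cred=0.95):
    """Shortest interval containing a fraction `cred` of the sorted posterior
    draws (Monte Carlo HPD approach in the spirit of Chen and Shao [36])."""
    x = np.sort(np.asarray(draws, dtype=float))
    m = int(np.ceil(cred * len(x)))
    widths = x[m - 1:] - x[: len(x) - m + 1]   # widths of all windows of m draws
    j = int(np.argmin(widths))
    return x[j], x[j + m - 1]

# With a gamma(a, b) prior on an exponential rate and data (n_ij failures,
# total time on test D_i), the posterior is gamma(a + n_ij, b + D_i).
# The values a = 2, b = 1, n_ij = 4, D_i = 3.6 below are illustrative only.
rng = np.random.default_rng(1)
draws = rng.gamma(shape=2 + 4, scale=1.0 / (1 + 3.6), size=20000)
lo, hi = hpd_interval(draws)
print(f"95% HPD credible interval: ({lo:.3f}, {hi:.3f}), length {hi - lo:.3f}")
```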

9. Conclusions

This paper has proposed a simple S-SALT based on the CEM under different PT-IIC schemes. It has been assumed that the lifetimes follow exponential distributions with different scale parameters. We have derived the maximum likelihood, Bayesian, Expected Bayesian, and Hierarchical Bayesian estimations of the scale parameters based on different LFs. Based on Monte Carlo simulations, we also obtained the ALs and the CPs of the 95% ACIs, BPCIs, BTCIs, and HPDCRIs for all the unknown parameters. The results show that the MSEs of the scale parameters decrease as the number of experimental units increases for all estimators, that the H-BEs are the best for the larger sample sizes, and that the E-BEs are the best for the smaller sample sizes. From the perspective of the CIs and CRIs, it has been observed that, under the same loss function, the H-Bayesian CIs are always narrower than the other CIs and the HPDCRIs are always narrower than the other CIs. Note that in this paper we assume that the two competing failure causes are independent. In future work, a simple step-stress accelerated dependent competing failure model will be considered, for example based on copulas [37,38], which are popular models for relaxing the independence restriction.

Author Contributions

Methodology and writing, Y.W.; supervision, Z.Y. and Y.C. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Natural Science Foundation of China (No. 11861049) and the Natural Science Foundation of Inner Mongolia (No. 2022MS01006).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data are available from the corresponding author upon request.

Acknowledgments

The authors would like to thank the Associate Editor, Editor, and the anonymous reviewers for carefully reading the paper and for their comments, which greatly improved the paper.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
ALT: Accelerated life testing
C-SALT: Constant-stress accelerated life testing
S-SALT: Step-stress accelerated life testing
CEM: Cumulative exposure model
MLE: Maximum likelihood estimation
BE: Bayesian estimation
LF: Loss function
H-BE: Hierarchical Bayesian estimation
E-BE: Expected Bayesian estimation
CI: Confidence interval
BCI: Bootstrap confidence interval
CDF: Cumulative distribution function
PDF: Probability density function
PT-IIC: Progressively Type-II censored
ACI: Asymptotic confidence interval
BPCI: Bootstrap-p confidence interval
BTCI: Bootstrap-t confidence interval
SELF: Squared error loss function
ELF: Entropy loss function
LLF: Linear-exponential loss function
H-P: Hierarchical prior
HPD: Highest posterior density
CRI: Credible interval
AE: Average estimate
MSE: Mean square error
AL: Average length
CP: Coverage probability

Appendix A

This appendix gives the Bayesian estimators of a parameter θ under the squared error loss function (SELF), the entropy loss function (ELF), and the LINEX (linear-exponential) loss function (LLF).
Definition A1.
The BE of θ under the SELF is the expectation of the posterior distribution (Mood et al. [39]), so $\hat{\theta}_{B}(x)$ is given by
$\hat{\theta}_{B}(x) = E(\theta \mid x).$
Definition A2.
Dey et al. [40] have discussed the ELF. The BE of θ under the ELF can be derived as:
$\hat{\theta}_{B}(x) = \left[E\left(\theta^{-1} \mid x\right)\right]^{-1}.$
Definition A3.
Zellner [41] has discussed the LLF. The BE of θ under the LLF can be derived as:
$\hat{\theta}_{B}(x) = -\frac{1}{k}\ln\left[E\left(e^{-k\theta} \mid x\right)\right],$
where $k \neq 0$ determines the shape of the loss function.
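When the posterior expectations in Definitions A1–A3 are not available in closed form, they can be approximated from posterior draws. The following sketch assumes only that a sample from the posterior distribution of θ is at hand (the gamma draws below are illustrative placeholders, not the paper's posterior) and evaluates the three estimators by Monte Carlo.

```python
import numpy as np

def bayes_estimates(theta_draws, k=0.5):
    """Monte Carlo approximations of the Bayes estimators in Definitions A1-A3,
    computed from posterior draws of theta; k != 0 is the LINEX shape parameter."""
    theta = np.asarray(theta_draws, dtype=float)
    est_self = theta.mean()                              # SELF: posterior mean, E(theta | x)
    est_elf = 1.0 / np.mean(1.0 / theta)                 # ELF: [E(theta^{-1} | x)]^{-1}
    est_llf = -np.log(np.mean(np.exp(-k * theta))) / k   # LLF: -(1/k) ln E(e^{-k theta} | x)
    return est_self, est_elf, est_llf

# Illustrative posterior draws (placeholder gamma posterior, values chosen arbitrarily).
rng = np.random.default_rng(0)
draws = rng.gamma(shape=5.0, scale=0.25, size=50000)
print([round(v, 3) for v in bayes_estimates(draws, k=0.5)])
```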

References

  1. Nelson, W.; Meeker, W.Q. Theory for optimum accelerated censored life tests for Weibull and extreme value distribution. Technometrics 1978, 20, 171–177. [Google Scholar] [CrossRef]
  2. Nelson, W.B. Accelerated Testing: Statistical Models, Test Plans and Data Analysis; Wiley: New York, NY, USA, 1990. [Google Scholar]
  3. Meeker, W.Q.; Escobar, L.A. Statistical Methods for Reliability Data; Wiley: New York, NY, USA, 1998. [Google Scholar]
  4. Bagdonavičius, V.; Nikulin, M. Accelerated Life Models: Modeling and Statistical Analysis; Chapman & Hall: New York, NY, USA, 2002. [Google Scholar]
  5. Sedyakin, N.M. On one physical principle in reliability theory. Tech. Cybern. 1966, 3, 80–87. [Google Scholar]
  6. Balakrishnan, N.; Xie, Q. Exact inference for a simple step stress model with type-II hybrid censored data from the exponential distribution. J. Stat. Plan. Infer. 2007, 137, 2543–2563. [Google Scholar] [CrossRef]
  7. Balakrishnan, N.; Xie, Q. Exact inference for a simple step stress model with type-I hybrid censored data from the exponential distribution. J. Stat. Plan. Infer. 2007, 137, 3268–3290. [Google Scholar] [CrossRef]
  8. Lee, H.M.; Wu, J.W.; Lei, C.L. Assessing the lifetime performance index of exponential products with step-stress accelerated life-testing data. IEEE Trans. Reliab. 2013, 62, 296–304. [Google Scholar] [CrossRef]
  9. Tang, Y.C.; Guan, Q.; Xu, P.R.; Xu, H. Optimum design for type-I step-stress accelerated life tests of two-parameter Weibull distribution. Commun. Stat.-Theory Methods 2012, 41, 3863–3877. [Google Scholar] [CrossRef]
  10. Cox, D.R. The analysis of exponentially distributed lifetimes with two types of failure. J. R. Statist. Soc. 1959, 21, 411–421. [Google Scholar]
  11. David, H.A.; Moeschberger, M.L.; Buckland, W.R. The Theory of Competing Risks; Griffin: London, UK, 1978. [Google Scholar]
  12. Balakrishnan, N.; Han, D. Exact inference for a simple step-stress model with competing risks for a failure from exponential distribution under Type-II censoring. J. Stat. Plan. Infer. 2008, 138, 4172–4186. [Google Scholar] [CrossRef]
  13. Beltrami, J. Competing risk in step-stress model with lagged effect. Int. J. Math. 2015, 16, 1–24. [Google Scholar]
  14. Beltrami, J. Weibull lagged effect step-stress model with competing risks. Commun. Stat.-Theory Methods 2017, 46, 5419–5442. [Google Scholar] [CrossRef]
  15. Liu, F.; Shi, Y.M. Inference for a simple step-stress model with progressively censored competing risks data from Weibull distribution. Commun. Stat.-Theory Methods 2017, 46, 7238–7255. [Google Scholar] [CrossRef]
  16. Srivastava, P.W.; Sharma, D. Optimum time-censored step-stress PALTSP with competing causes of failure using tampered failure rate model. Int. J. Pre. Eng. Man. 2017, 11, 63–88. [Google Scholar]
  17. Xu, A.; Tang, Y.; Guan, Q. Bayesian analysis of masked data in step-stress accelerated life testing. Commun. Stat.–Simul. Comput. 2014, 43, 2016–2030. [Google Scholar] [CrossRef]
  18. Zhang, C.F.; Shi, Y.M.; Wu, M. Statistical inference for competing risks model in step-stress partially accelerated life test with progressively Type-Ⅰhybrid censored Weibull life data. J. Comput. Appl. Math. 2016, 297, 65–74. [Google Scholar] [CrossRef]
  19. Zhang, C.; Shi, Y.M. Statistical Prediction of failure times under generalized progressive hybrid censoring in a simple step-stress accelerated competing risks model. J. Syst. Eng. Elect. 2017, 28, 282–291. [Google Scholar]
  20. Ganguly, A.; Kundu, D. Analysis of simple step stress model in presence of competing risks. J. Stat. Comput. Sim. 2016, 86, 1989–2006. [Google Scholar] [CrossRef]
  21. Han, D.; Kundu, D. Inference for a step-stress model with competing risks for failure from the generalized exponential distribution under Type-I censoring. IEEE Trans. Reliab. 2015, 64, 31–43. [Google Scholar] [CrossRef]
  22. Han, D.; Ng, H.K.T. Asymptotic comparison between constant-stress testing and step-stress testing for Type-I censored data from exponential distribution. Commun. Stat.-Theory Methods 2014, 43, 2384–2394. [Google Scholar] [CrossRef]
  23. Han, D.; Balakrishnan, N. Inference for a simple step-stress model with competing risks for failure from exponential distribution under time constraint. Comput. Stat. Data Anal. 2010, 54, 2066–2081. [Google Scholar] [CrossRef]
  24. Varghese, S.; Vaidyanathan, V.S. Parameter estimation of Lindley step stress model with independent competing risk under Type-I censoring. Commun. Stat.-Theory Methods 2019, 49, 3026–3043. [Google Scholar] [CrossRef]
  25. Liu, X.; Qiu, W.S. Modeling and planning of step-stress accelerated life tests with independent competing risks. IEEE Trans. Reliab. 2011, 60, 712–720. [Google Scholar] [CrossRef]
  26. Abu-Zinadah, H.H.; Sayed-Ahmed, N. Competing Risks Model with Partially Step-Stress Accelerate Life Tests in Analysis Lifetime Chen Data under Type-II Censoring Scheme. Open Phys. 2019, 17, 192–199. [Google Scholar] [CrossRef]
  27. Aljohani, H.M.; Alfar, N.M. Estimations with step-stress partially accelerated life tests for competing risks Burr XII lifetime model under type-II censored data. Alexandria Eng. J. 2020, 59, 1171–1180. [Google Scholar] [CrossRef]
  28. Lindley, D.V.; Smith, A.F.M. Bayes estimation for the linear model. J. R. Stat. Soc. 1972, 34, 1–41. [Google Scholar]
  29. Han, M. The structure of hierarchical prior distribution and its applications. Oper. Res. Man. 1997, 63, 31–40. [Google Scholar]
  30. Han, M. Expected Bayesian Method for Forecast of Security Investment. J. Oper. Res. Man Sci. 2005, 14, 89–102. [Google Scholar]
  31. Han, M. E-Bayesian and hierarchical Bayesian estimation of failure rate. Appl. Math Model 2011, 33, 1915–1922. [Google Scholar] [CrossRef]
  32. Han, M. E-Bayesian and hierarchical Bayesian estimation for the system reliability parameter. Commun. Stat.-Theory Methods 2017, 46, 1606–1620. [Google Scholar] [CrossRef]
  33. Berger, J.O. Statistical Decision Theory and Bayesian Analysis; Springer: New York, NY, USA, 1985. [Google Scholar]
  34. Abdul-Sathar, E.I.; Athira Krishnan, R.B. E-Bayesian and Hierarchical Bayesian Estimation for the Shape Parameter and Reversed Hazard Rate of Power Function Distribution Under Different Loss Functions. J. Indian Soc. Probab. Stat. 2019, 20, 227–253. [Google Scholar] [CrossRef]
  35. Shahram, Y.S. Estimating E-Bayesian and hierarchical Bayesian of scalar parameter of Gompertz distribution under Type-II censoring schemes based on fuzzy data. Commun. Stat.-Theory Methods 2019, 48, 831–840. [Google Scholar]
  36. Chen, M.H.; Shao, Q.M. Monte Carlo estimation of Bayesian credible and HPD intervals. J. Comput. Graph Stat. 1999, 8, 69–92. [Google Scholar]
  37. Wang, Y.C.; Emura, T.; Fan, T.H.; Lo, S.M.; Wilke, R.A. Likelihood-based inference for a frailty-copula model based on competing risks failure time data. Qual. Reliab. Eng. Int. 2020, 36, 1622–1638. [Google Scholar] [CrossRef]
  38. Michimae, H.; Emura, T. Likelihood Inference for Copula Models Based on Left-Truncated and Competing Risks Data from Field Studies. Mathematics 2022, 10, 2163. [Google Scholar] [CrossRef]
  39. Mood, A.; Graybill, F.A.; Boes, D. Introduction to the Theory of Statistics: McGraw-Hill Series in Probability and Statistics; McGraw-Hill: New York, NY, USA, 1974. [Google Scholar]
  40. Dey, D.K.; Ghosh, M.; Srinivasan, C. Simultaneous estimation of parameters under entropy loss. J. Stat. Plan. Infer. 1987, 15, 347–363. [Google Scholar] [CrossRef]
  41. Zellner, A. Bayesian estimation and prediction using asymmetric loss functions. J. Am. Stat. Assoc. 1986, 81, 446–451. [Google Scholar] [CrossRef]

