Article

Statistical Inference of Weighted Exponential Distribution under Joint Progressive Type-II Censoring

School of Mathematics and Statistics, Beijing Jiaotong University, Beijing 100044, China
*
Author to whom correspondence should be addressed.
Symmetry 2022, 14(10), 2031; https://doi.org/10.3390/sym14102031
Submission received: 24 August 2022 / Revised: 19 September 2022 / Accepted: 24 September 2022 / Published: 28 September 2022
(This article belongs to the Special Issue Symmetry in Statistics and Data Science)

Abstract:
The weighted exponential distribution is a promising skewed distribution for life-testing experiments, and the joint progressive type-II censoring (JPC) scheme is an effective approach to reducing costs. In this paper, we consider estimation of the parameters of the weighted exponential distribution from JPC data. Two populations are studied, sharing a common scale parameter but having different shape parameters. We first estimate the parameters by the maximum likelihood method; because the maximum likelihood estimates cannot be obtained in closed form, we apply the Newton–Raphson method. Bayesian estimates and the corresponding credible intervals under the squared error loss function are computed using the Markov chain Monte Carlo method. We then use the bootstrap method to construct confidence intervals for the unknown parameters. A simulation study tests the feasibility of these methods, and a real data analysis is provided for illustrative purposes.

1. Introduction

1.1. Joint Progressive Type-II Censoring Scheme

Given the difficulty of obtaining complete data, studies of censored data have attracted considerable attention in recent years. Type-II censoring withdraws units only at the end of the experiment, whereas progressive type-II censoring, developed by [1], withdraws units at the time of every failure. Discarding units with long lifetimes saves experiment time, and the scheme still permits parameter estimation even though some units are removed during the course of the experiment. Compared with traditional censored data, progressive type-II censored data therefore have unique advantages. A thorough summary of the work related to progressive censoring is provided in [2].
Applying censoring methods to a single population has several drawbacks. Although progressive type-II censoring allows the removal of some units, collecting sufficient observations is still costly. Moreover, if we are interested in the dependence and interaction of populations, an experiment based on a single population cannot provide evidence. Ref. [3] proposed the joint progressive censoring (JPC) scheme, which largely solves these problems. In the JPC scheme, failures may occur in either of two populations, so the time required to collect the same amount of data is roughly halved. At the same time, it makes it possible to compare the failure times of the two populations under the same conditions. The JPC scheme can be described as follows.
In the beginning, there are m units in population A and n units in population B, and the JPC data arise from a life-testing experiment on these two populations. The number of failures, say k, is fixed in advance. Denote the failure times by w_1, …, w_k; at time w_i, s_i surviving units are withdrawn at random from population A and t_i units from population B, so a total of R_i = s_i + t_i units are removed at the i-th failure. Another group of indicator variables z_1, …, z_k, each taking the value 1 or 0, is also essential: if the unit failing at w_i comes from population A, z_i is recorded as 1; otherwise it is 0. Then k_1 = Σ_{i=1}^k z_i and k_2 = k − k_1 are the numbers of failures in populations A and B, respectively. The experiment concludes after the k-th failure, when the remaining m + n − k − Σ_{i=1}^{k−1} R_i units are withdrawn. The process is shown in Figure 1.
Ref. [4] considered the JPC scheme from the perspective of failure probabilities for different parametric families of distributions. As research on censoring has matured, work on parameter inference under different censoring schemes has mainly focused on point and interval estimates. Ref. [5] made a breakthrough in this respect. Ref. [6] dealt with inference for two Lindley populations when the joint progressive type-II censoring scheme was applied to the two samples jointly. However, these works are all founded on two single-parameter populations. Ref. [7] provided estimates for the two-parameter generalized exponential distribution under the JPC scheme from both frequentist and Bayesian points of view. Ref. [8] considered the joint progressive type-II censoring scheme for two populations following Topp–Leone models with a common unknown scale parameter but different shape parameters.

1.2. Weighted Exponential Distribution

Because of the concise form of its distribution function, the exponential distribution has been widely used in life-testing experiments, and many generalizations of it have been created for practical use. Following earlier successful attempts to add a skewness parameter to a symmetric distribution, Ref. [9] first applied this method to the exponential distribution, and the weighted exponential (WE) distribution was born. A random variable X follows the WE distribution if its cumulative distribution function (CDF) takes the form
F(x; \alpha, \lambda) = 1 - \frac{1}{\alpha} e^{-\lambda x} \left( \alpha + 1 - e^{-\alpha \lambda x} \right), \quad x > 0;\ \alpha, \lambda > 0.
The probability density function (PDF) of X is as follows:
f(x; \alpha, \lambda) = \frac{\alpha + 1}{\alpha}\, \lambda e^{-\lambda x} \left( 1 - e^{-\alpha \lambda x} \right), \quad x > 0;\ \alpha, \lambda > 0.
The hazard function is also given here. It is important and commonly used in survival analysis:
h(x; \alpha, \lambda) = \frac{(\alpha + 1) \lambda \left( 1 - e^{-\alpha \lambda x} \right)}{\alpha + 1 - e^{-\alpha \lambda x}}, \quad x > 0;\ \alpha, \lambda > 0.
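As a quick sanity check, the CDF, PDF, and hazard function above can be transcribed directly into code; this is a sketch, and the function names are our own:

```python
import math

def we_pdf(x, alpha, lam):
    # PDF of WE(alpha, lam): ((alpha+1)/alpha) * lam * e^{-lam x} (1 - e^{-alpha lam x})
    return (alpha + 1) / alpha * lam * math.exp(-lam * x) * (1 - math.exp(-alpha * lam * x))

def we_cdf(x, alpha, lam):
    # CDF of WE(alpha, lam): 1 - (1/alpha) e^{-lam x} (alpha + 1 - e^{-alpha lam x})
    return 1 - (1 / alpha) * math.exp(-lam * x) * (alpha + 1 - math.exp(-alpha * lam * x))

def we_hazard(x, alpha, lam):
    # hazard h(x) = f(x) / (1 - F(x)), simplified to the closed form above
    return ((alpha + 1) * lam * (1 - math.exp(-alpha * lam * x))
            / (alpha + 1 - math.exp(-alpha * lam * x)))
```

The closed-form hazard agrees with f(x)/(1 − F(x)) term by term, which gives a cheap consistency check on the transcription.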
Figure 2 presents some PDFs and hazard functions of the WE distribution for different parameter values. The WE distribution has a shape comparable to other generalizations of the exponential distribution, so it is often employed as an alternative to distributions such as the gamma and Weibull distributions, and its versatility has received frequent attention.
As a matter of fact, the WE distribution has its own advantages. Its moments can be computed in closed form. Therefore, the mean, variance, skewness, and kurtosis of the WE distribution are relatively easy to obtain. The first and second moments of the WE distribution are provided here:
E(X) = \frac{1}{\lambda} + \frac{1}{\lambda (\alpha + 1)},

E(X^2) = \frac{2}{\lambda^2} \left[ 1 + \frac{1}{\alpha + 1} + \frac{1}{(\alpha + 1)^2} \right].
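These closed-form moments can be verified numerically against the PDF. A sketch using crude trapezoidal quadrature (the helper name, grid size, and truncation point are our own choices):

```python
import math

def we_pdf(x, alpha, lam):
    # PDF of WE(alpha, lam)
    return (alpha + 1) / alpha * lam * math.exp(-lam * x) * (1 - math.exp(-alpha * lam * x))

def moment(k, alpha, lam, upper=40.0, n=100000):
    # crude trapezoidal quadrature of x^k f(x) over (0, upper);
    # the integrand vanishes at both endpoints, so they are skipped
    h = upper / n
    total = 0.0
    for i in range(1, n):
        x = i * h
        total += x**k * we_pdf(x, alpha, lam)
    return total * h

# closed forms for alpha = 1, lam = 2: E(X) = 0.75, E(X^2) = 0.875
m1 = moment(1, 1.0, 2.0)
m2 = moment(2, 1.0, 2.0)
```

For α = 1 and λ = 2, the formulas give E(X) = 1/2 + 1/4 = 0.75 and E(X²) = (2/4)(1 + 1/2 + 1/4) = 0.875, which the quadrature reproduces.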
In addition, with a log-concave PDF and an increasing hazard function for any α, the WE distribution is well suited to modeling lifetime data when aging or wear-out is present. The weighted exponential distribution also compensates for shortcomings of other distributions in parameter estimation. For example, the maximum likelihood estimates (MLEs) of the Weibull parameters do not behave well for all parameter values, and the distribution function of the gamma distribution has an explicit form only when its shape parameter is an integer. The WE distribution has neither limitation: its MLEs can be solved from a non-linear equation. Ref. [9] also mentioned that the weighted exponential distribution is a hidden truncation model; therefore, if the data are known to come from a truncated model, the WE distribution has obvious advantages over other skewed distributions. For these reasons, the weighted exponential distribution is very promising in life-testing experiments.
Under type-II progressive censoring with binomial removals, Ref. [10] introduced an estimate of unknown parameters of a WE distribution. Ref. [11] considered the problem of Bayesian prediction intervals for future observations from the WE distribution. Ref. [12] studied a change-point problem of the WE distribution.
In this paper, we address the statistical inference of the unknown parameters for two populations following the weighted exponential distribution under the JPC model. The likelihood function is provided in Section 2. To obtain the maximum likelihood estimates, we apply the Newton–Raphson method to solve a three-dimensional optimization problem iteratively. The algorithm and the Hessian matrix needed are also presented in this section. In Section 3, interval estimates are generated with bootstrap methods. To conduct the Bayesian analysis, three independent gamma distributions with different hyperparameters are chosen. In addition, a Monte Carlo method is applied to solve the problem that unknown parameters have no closed form under the square error loss function. The detailed process is performed in Section 4. In Section 5, simulation study and real data analysis are introduced. At the end of the paper, in Section 6, we summarize the paper and draw some conclusions.

2. Maximum Likelihood Estimate

Following the idea of [7], suppose that the m units in population A follow the WE distribution with shape parameter α_1 and scale parameter λ, and that population B consists of n units following the WE distribution with shape parameter α_2 and the same scale parameter. There will be k failures observed in the experiment, and the time of the i-th failure is recorded as w_i. An indicator variable z_i, equal to 1 or 0, records whether the i-th failed unit comes from population A or population B. After the i-th failure, s_i and t_i units are withdrawn from populations A and B, respectively; the values of s_i and t_i are random, but their sum R_i is predetermined. The total numbers of failures in populations A and B are k_1 = Σ_{i=1}^k z_i and k_2 = k − k_1, respectively. The censored data group {(w_1, z_1, s_1, t_1), …, (w_k, z_k, s_k, t_k)} is denoted as data in the following pages. The likelihood function has the following form:
L(\alpha_1, \alpha_2, \lambda \mid data) = \prod_{w_i \in W_1} f(w_i; \alpha_1, \lambda) \times \prod_{w_l \in W_2} f(w_l; \alpha_2, \lambda) \times \prod_{j=1}^{k} \left[ 1 - F(w_j; \alpha_1, \lambda) \right]^{s_j} \left[ 1 - F(w_j; \alpha_2, \lambda) \right]^{t_j}.
Here, W 1 indicates the set of the failure time points from population A, and W 2 indicates the set of the failure time points from population B. Substituting the PDF and CDF of the WE distribution into (4), the likelihood function becomes
L(\alpha_1, \alpha_2, \lambda \mid data) = \left( \frac{\alpha_1 + 1}{\alpha_1} \right)^{k_1} \left( \frac{\alpha_2 + 1}{\alpha_2} \right)^{k_2} \lambda^{k} e^{-\lambda \sum_{i=1}^{k} w_i} \times \prod_{i=1}^{k} \left( 1 - e^{-\alpha_1 \lambda w_i} \right)^{z_i} \left( 1 - e^{-\alpha_2 \lambda w_i} \right)^{1 - z_i} \times \prod_{i=1}^{k} \left[ \frac{1}{\alpha_1} e^{-\lambda w_i} \left( 1 + \alpha_1 - e^{-\alpha_1 \lambda w_i} \right) \right]^{s_i} \left[ \frac{1}{\alpha_2} e^{-\lambda w_i} \left( 1 + \alpha_2 - e^{-\alpha_2 \lambda w_i} \right) \right]^{t_i}.
Because the log-likelihood function has the same monotonicity as the likelihood function, the log-likelihood function is given in the following:
\ln L = k_1 \left[ \ln(\alpha_1 + 1) - \ln \alpha_1 \right] + k_2 \left[ \ln(\alpha_2 + 1) - \ln \alpha_2 \right] + k \ln \lambda - \lambda \sum_{i=1}^{k} w_i + \sum_{i=1}^{k} z_i \ln \left( 1 - e^{-\alpha_1 \lambda w_i} \right) + \sum_{i=1}^{k} (1 - z_i) \ln \left( 1 - e^{-\alpha_2 \lambda w_i} \right) + \sum_{i=1}^{k} s_i \left[ -\ln \alpha_1 - \lambda w_i + \ln \left( 1 + \alpha_1 - e^{-\alpha_1 \lambda w_i} \right) \right] + \sum_{i=1}^{k} t_i \left[ -\ln \alpha_2 - \lambda w_i + \ln \left( 1 + \alpha_2 - e^{-\alpha_2 \lambda w_i} \right) \right].
The Hessian matrix is a symmetric square matrix of second-order partial derivatives of a multivariable function; it describes the local variation of the function and is an essential ingredient of Newton's method for optimization. The vector of unknown parameters (α_1, α_2, λ) is denoted Θ, and Θ̂ denotes an estimate of Θ. The Hessian matrix has the following form:

H(\hat{\Theta}) = \begin{pmatrix} \dfrac{\partial^2 \ln L}{\partial \alpha_1^2} & \dfrac{\partial^2 \ln L}{\partial \alpha_1 \partial \alpha_2} & \dfrac{\partial^2 \ln L}{\partial \alpha_1 \partial \lambda} \\ \dfrac{\partial^2 \ln L}{\partial \alpha_1 \partial \alpha_2} & \dfrac{\partial^2 \ln L}{\partial \alpha_2^2} & \dfrac{\partial^2 \ln L}{\partial \alpha_2 \partial \lambda} \\ \dfrac{\partial^2 \ln L}{\partial \alpha_1 \partial \lambda} & \dfrac{\partial^2 \ln L}{\partial \alpha_2 \partial \lambda} & \dfrac{\partial^2 \ln L}{\partial \lambda^2} \end{pmatrix}.
The second-order partial derivatives are given in Appendix A.
The MLE is one of the most important techniques in parameter inference. To obtain the MLEs of the unknown parameters, we have to solve the following equations:
\frac{\partial \ln L}{\partial \alpha_1} = -\frac{k_1}{\alpha_1 (\alpha_1 + 1)} + \sum_{i=1}^{k} \frac{z_i \lambda w_i}{e^{\alpha_1 \lambda w_i} - 1} + \sum_{i=1}^{k} s_i \left[ -\frac{1}{\alpha_1} + \frac{1 + \lambda w_i e^{-\alpha_1 \lambda w_i}}{1 + \alpha_1 - e^{-\alpha_1 \lambda w_i}} \right] = 0,

\frac{\partial \ln L}{\partial \alpha_2} = -\frac{k_2}{\alpha_2 (\alpha_2 + 1)} + \sum_{i=1}^{k} \frac{(1 - z_i) \lambda w_i}{e^{\alpha_2 \lambda w_i} - 1} + \sum_{i=1}^{k} t_i \left[ -\frac{1}{\alpha_2} + \frac{1 + \lambda w_i e^{-\alpha_2 \lambda w_i}}{1 + \alpha_2 - e^{-\alpha_2 \lambda w_i}} \right] = 0,

\frac{\partial \ln L}{\partial \lambda} = \frac{k}{\lambda} - \sum_{i=1}^{k} w_i + \sum_{i=1}^{k} \frac{z_i \alpha_1 w_i}{e^{\alpha_1 \lambda w_i} - 1} + \sum_{i=1}^{k} \frac{(1 - z_i) \alpha_2 w_i}{e^{\alpha_2 \lambda w_i} - 1} + \sum_{i=1}^{k} s_i \left[ -w_i + \frac{\alpha_1 w_i e^{-\alpha_1 \lambda w_i}}{1 + \alpha_1 - e^{-\alpha_1 \lambda w_i}} \right] + \sum_{i=1}^{k} t_i \left[ -w_i + \frac{\alpha_2 w_i e^{-\alpha_2 \lambda w_i}}{1 + \alpha_2 - e^{-\alpha_2 \lambda w_i}} \right] = 0.
These equations have no closed-form solution. If the MLE is shown to exist and be unique, the Newton–Raphson method is appropriate: successive iterates approach the true maximizer gradually and ultimately yield the MLE. Existence and uniqueness follow from the claim that the Hessian matrix is negative-definite, which can be supported by decomposing the Hessian as a sum of negative semi-definite matrices and proving that at least one of them is strictly negative-definite. Here, however, we provide another approach, following the idea of [13].
Theorem 1.
For given λ, ln L is a unimodal function of α_1 and α_2.
Proof. 
Note that when α_1 is fixed, ln L and α_2 satisfy

\lim_{\alpha_2 \to 0} \ln L = \lim_{\alpha_2 \to \infty} \ln L = -\infty.

ln L and α_1 satisfy a similar relation when α_2 is fixed. Then, because the Hessian matrix mentioned earlier is negative-definite, ln L is a concave function of α_1 and α_2. Hence, the conclusion follows.    □
Theorem 2.
ln L is a unimodal function of λ, whatever values α_1 and α_2 take.
Proof. 
The proof is provided in Appendix B.    □
The MLEs of unknown parameters can be obtained with Algorithm 1.
Algorithm 1 The algorithm to calculate MLE.
Step 1 
Set the initial values of the unknown parameters as Θ^{(0)}.
Step 2 
Calculate the gradient ∂ln L/∂Θ |_{Θ = Θ^{(i)}} and the Hessian H(Θ^{(i)}).
Step 3 
Make use of
\frac{\partial \ln L}{\partial \Theta} \bigg|_{\Theta = \Theta^{(i)}} = -H\left( \Theta^{(i)} \right) \left( \Theta^{(i+1)} - \Theta^{(i)} \right)
to produce a better approximation Θ ( i + 1 ) .
Step 4 
Repeat Steps 2 and 3 until ‖Θ^{(i+1)} − Θ^{(i)}‖ < ε. The optimal solution Θ̂ = Θ^{(i)} is obtained and the iteration ends.
In the algorithm, H(Θ) is the Hessian matrix at the current point, ∂ln L/∂Θ denotes the gradient, and ε is a small positive constant.
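Algorithm 1 can be illustrated on a scalar toy problem where the maximizer is known in closed form. A minimal sketch, with our own function names; the one-parameter exponential likelihood stands in for the three-parameter WE problem:

```python
def newton_mle(grad, hess, theta0, eps=1e-10, max_iter=100):
    # Steps 1-4 of Algorithm 1 for a scalar parameter:
    # theta_{i+1} = theta_i - grad(theta_i) / hess(theta_i)
    theta = theta0
    for _ in range(max_iter):
        new = theta - grad(theta) / hess(theta)
        if abs(new - theta) < eps:      # stopping rule of Step 4
            return new
        theta = new
    return theta

# toy check: for an Exp(lam) sample, ln L(lam) = n ln(lam) - lam * S,
# whose maximizer is known in closed form as lam_hat = n / S
data = [0.5, 1.2, 0.3, 2.0]
n, S = len(data), sum(data)
lam_hat = newton_mle(lambda l: n / l - S, lambda l: -n / l**2, theta0=0.5)
```

For this sample n/S = 1, and the iteration converges to that value; in the paper's setting the scalar update is replaced by the matrix equation above.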

3. Bootstrap Confidence Intervals

In this section, we consider confidence intervals obtained with the bootstrap method, which is applicable in a wide range of situations, especially with small samples. The bootstrap method involves repeated sampling: new samples are generated using the parameters estimated from the original sample. The first task is therefore to generate bootstrap samples. For simplicity, θ is used to represent any of the three unknown parameters, and the unknown parameters are assumed to be completely independent of each other. The specific steps are as follows.
Proposed and named by Efron, the percentile bootstrap method is concise. With the samples obtained from Algorithm 2, the two-sided 100(1 − γ)% percentile bootstrap confidence interval is given below, where [x] denotes the largest integer not exceeding x:
\left( \hat{\theta}_{\left( [N \gamma / 2] \right)},\ \hat{\theta}_{\left( [N (1 - \gamma / 2)] \right)} \right).
Algorithm 2 The algorithm to generate bootstrap samples.
Step 1 
Perform the experiment on two groups of samples from WE(α_1, λ) and WE(α_2, λ), respectively, and record the observations as the joint progressive censoring data.
Step 2 
Record the MLE of Θ calculated from the censored data as Θ̂.
Step 3 
Generate new bootstrap samples under the JPC scheme with Θ̂.
Step 4 
With the samples obtained in Step 3, compute a new maximum likelihood estimate of the parameter currently being estimated and denote it by θ̂.
Step 5 
Repeat Steps 3–4 N times and sort the solutions to obtain θ̂_(1), …, θ̂_(N).
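The resample-estimate-sort loop and the percentile interval can be sketched as follows. Note this sketch resamples the data nonparametrically for illustration, whereas the paper's Algorithm 2 regenerates JPC samples parametrically from Θ̂; the function names are our own:

```python
import random

def percentile_bootstrap_ci(data, estimator, N=2000, gamma=0.10, seed=42):
    # repeat N times: resample, re-estimate; then sort and take the
    # [N*gamma/2]-th and [N*(1-gamma/2)]-th order statistics
    rng = random.Random(seed)
    ests = sorted(estimator(rng.choices(data, k=len(data))) for _ in range(N))
    return ests[int(N * gamma / 2)], ests[int(N * (1 - gamma / 2))]

# toy usage: a 90% percentile interval for a sample mean
sample = [0.32, 0.55, 0.18, 0.97, 0.41, 0.66, 0.29, 0.83, 0.51, 0.74]
lo, hi = percentile_bootstrap_ci(sample, lambda d: sum(d) / len(d))
```

Swapping in a parametric regeneration step (Step 3 of Algorithm 2) only changes how the resample is produced; the percentile interval construction is identical.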

4. Bayes Analysis

In this section, we turn to Bayesian inference, which is increasingly widely used. In the Bayesian framework, unknown parameters are treated as random variables.

4.1. Prior and Posterior Distribution

Different from frequentist methods, the Bayesian method requires a prior distribution. The following analysis assumes that the prior distributions of the parameters are independent of each other. As all the parameters are unknown, joint conjugate priors do not exist. Since the parameters are non-negative, which matches the support of the gamma distribution, α_1, α_2, and λ are assumed to follow gamma distributions, in line with [14]. Suppose a_k > 0, b_k > 0, k = 0, 1, 2. We have
\alpha_1 \sim \mathrm{Gamma}(a_1, b_1), \quad \pi(\alpha_1) = \frac{b_1^{a_1}}{\Gamma(a_1)} \alpha_1^{a_1 - 1} e^{-b_1 \alpha_1};

\alpha_2 \sim \mathrm{Gamma}(a_2, b_2), \quad \pi(\alpha_2) = \frac{b_2^{a_2}}{\Gamma(a_2)} \alpha_2^{a_2 - 1} e^{-b_2 \alpha_2};

\lambda \sim \mathrm{Gamma}(a_0, b_0), \quad \pi(\lambda) = \frac{b_0^{a_0}}{\Gamma(a_0)} \lambda^{a_0 - 1} e^{-b_0 \lambda}.
Here, a_0, a_1, a_2 and b_0, b_1, b_2 provide the prior information about the unknown parameters. Therefore, the joint prior density of (α_1, α_2, λ) is
\pi(\alpha_1, \alpha_2, \lambda) \propto \alpha_1^{a_1 - 1} \alpha_2^{a_2 - 1} \lambda^{a_0 - 1} e^{-(b_1 \alpha_1 + b_2 \alpha_2 + b_0 \lambda)}.
The joint posterior distribution is presented as
\pi(\alpha_1, \alpha_2, \lambda \mid data) = \frac{\pi(\alpha_1, \alpha_2, \lambda)\, L(\alpha_1, \alpha_2, \lambda \mid data)}{\int_0^{\infty} \int_0^{\infty} \int_0^{\infty} \pi(\alpha_1, \alpha_2, \lambda)\, L(\alpha_1, \alpha_2, \lambda \mid data)\, d\alpha_1\, d\alpha_2\, d\lambda}.
Based on the joint prior distribution and the equation above, the joint posterior distribution is provided as
\pi(\alpha_1, \alpha_2, \lambda \mid data) \propto (\alpha_1 + 1)^{k_1} \alpha_1^{a_1 - k_1 - 1} (\alpha_2 + 1)^{k_2} \alpha_2^{a_2 - k_2 - 1} \lambda^{a_0 + k - 1} e^{-(b_1 \alpha_1 + b_2 \alpha_2)} \times e^{-\lambda \left( b_0 + \sum_{i=1}^{k} w_i \right)} \times \prod_{i=1}^{k} \left( 1 - e^{-\alpha_1 \lambda w_i} \right)^{z_i} \left( 1 - e^{-\alpha_2 \lambda w_i} \right)^{1 - z_i} \times \prod_{i=1}^{k} \left[ \frac{1}{\alpha_1} e^{-\lambda w_i} \left( 1 + \alpha_1 - e^{-\alpha_1 \lambda w_i} \right) \right]^{s_i} \left[ \frac{1}{\alpha_2} e^{-\lambda w_i} \left( 1 + \alpha_2 - e^{-\alpha_2 \lambda w_i} \right) \right]^{t_i}.
The squared error loss function has the form of
l_{SE}\left( \hat{g}(\Theta), g(\Theta) \right) = \left( \hat{g}(\Theta) - g(\Theta) \right)^2.
Here, g(Θ) denotes any function of Θ that may be needed in the following computation. Its Bayesian estimate under the squared error loss function is
E\left( g(\alpha_1, \alpha_2, \lambda) \mid data \right) = \int_0^{\infty} \int_0^{\infty} \int_0^{\infty} g(\alpha_1, \alpha_2, \lambda)\, \pi(\alpha_1, \alpha_2, \lambda \mid data)\, d\alpha_1\, d\alpha_2\, d\lambda.
Obviously, this integral cannot be evaluated directly. Therefore, we apply Markov chain Monte Carlo sampling to obtain the Bayesian estimates under the squared error loss function.

4.2. Metropolis–Hastings Algorithm

The Markov chain Monte Carlo (MCMC) method is a stochastic approximation method: when the target distribution is known up to proportionality, random samples following it can be generated and used to estimate posterior quantities. Proposed by [15,16], the Metropolis–Hastings algorithm is one of the most commonly used MCMC algorithms.
First, the conditional posterior densities are, up to normalizing constants,

\pi(\alpha_1 \mid \lambda, data) \propto (\alpha_1 + 1)^{k_1} \alpha_1^{a_1 - k_1 - 1} e^{-b_1 \alpha_1} \times \prod_{i=1}^{k} \left( 1 - e^{-\alpha_1 \lambda w_i} \right)^{z_i} \times \prod_{i=1}^{k} \left[ \frac{1}{\alpha_1} e^{-\lambda w_i} \left( 1 + \alpha_1 - e^{-\alpha_1 \lambda w_i} \right) \right]^{s_i},

\pi(\alpha_2 \mid \lambda, data) \propto (\alpha_2 + 1)^{k_2} \alpha_2^{a_2 - k_2 - 1} e^{-b_2 \alpha_2} \times \prod_{i=1}^{k} \left( 1 - e^{-\alpha_2 \lambda w_i} \right)^{1 - z_i} \times \prod_{i=1}^{k} \left[ \frac{1}{\alpha_2} e^{-\lambda w_i} \left( 1 + \alpha_2 - e^{-\alpha_2 \lambda w_i} \right) \right]^{t_i},

\pi(\lambda \mid \alpha_1, \alpha_2, data) \propto \lambda^{a_0 + k - 1} e^{-\lambda \left( b_0 + \sum_{i=1}^{k} w_i \right)} \times \prod_{i=1}^{k} \left( 1 - e^{-\alpha_1 \lambda w_i} \right)^{z_i} \left( 1 - e^{-\alpha_2 \lambda w_i} \right)^{1 - z_i} \times \prod_{i=1}^{k} \left[ e^{-\lambda w_i} \left( 1 + \alpha_1 - e^{-\alpha_1 \lambda w_i} \right) \right]^{s_i} \left[ e^{-\lambda w_i} \left( 1 + \alpha_2 - e^{-\alpha_2 \lambda w_i} \right) \right]^{t_i}.
Then, according to [10], the Metropolis–Hastings algorithm with normal proposal distribution is used to generate samples. The steps are given in Algorithm 3.
Algorithm 3 The algorithm to generate samples following the posterior distribution.
Step 1 
Choose initial values of (α_1, α_2, λ) as (α_1^{(0)}, α_2^{(0)}, λ^{(0)}).
Step 2 
Generate candidate values (α_1^*, α_2^*, λ^*) from the proposal normal distributions N(α_1^{(i−1)}, σ_1²), N(α_2^{(i−1)}, σ_2²), and N(λ^{(i−1)}, σ_3²). Here, σ_1², σ_2², and σ_3² are the mean squared errors of the maximum likelihood estimates of α_1, α_2, and λ, respectively.
Step 3 
Calculate
P_1^* = \frac{\pi(\alpha_1^* \mid \lambda, data)}{\pi(\alpha_1^{(i-1)} \mid \lambda, data)}, \quad P_2^* = \frac{\pi(\alpha_2^* \mid \lambda, data)}{\pi(\alpha_2^{(i-1)} \mid \lambda, data)}, \quad P_3^* = \frac{\pi(\lambda^* \mid \alpha_1, \alpha_2, data)}{\pi(\lambda^{(i-1)} \mid \alpha_1, \alpha_2, data)},

where α_1^*, α_2^*, and λ^* denote the candidate values generated in Step 2.
Step 4 
Compute the acceptance probabilities P_1 = min{1, P_1^*}, P_2 = min{1, P_2^*}, and P_3 = min{1, P_3^*}.
Step 5 
Generate samples u_1, u_2, and u_3 from the uniform distribution U(0, 1). If u_1 ≤ P_1, accept α_1^{(i)} = α_1^* (the candidate from Step 2); otherwise, set α_1^{(i)} = α_1^{(i−1)}. The same rule applies to α_2^{(i)} and λ^{(i)} with u_2 and u_3.
Step 6 
Repeat Steps 2–5 N times to obtain enough samples.
Following [10], the first N_0 draws are discarded as burn-in, and the point estimates of the parameters are given as follows:
\tilde{\alpha}_1^{MH} = \frac{1}{N - N_0} \sum_{i = N_0 + 1}^{N} \alpha_1^{(i)}, \quad \tilde{\alpha}_2^{MH} = \frac{1}{N - N_0} \sum_{i = N_0 + 1}^{N} \alpha_2^{(i)}, \quad \tilde{\lambda}^{MH} = \frac{1}{N - N_0} \sum_{i = N_0 + 1}^{N} \lambda^{(i)}.
θ_(1), θ_(2), …, θ_(N−N_0) is obtained by sorting the samples outside the burn-in period in ascending order. With that, the 100(1 − γ)% credible interval of an unknown parameter is constructed as

\left( \theta_{\left( \left[ (N - N_0) \frac{\gamma}{2} \right] \right)},\ \theta_{\left( \left[ (N - N_0) \left( 1 - \frac{\gamma}{2} \right) \right] \right)} \right).
The shortest of all intervals containing [(N − N_0)(1 − γ)] sorted draws is the highest posterior density (HPD) credible interval of θ.
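The accept/reject mechanics of Algorithm 3 and the burn-in step can be sketched for a single parameter. A minimal random-walk sketch with a toy standard-normal target; the function names and settings are our own:

```python
import math
import random

def metropolis_hastings(log_post, theta0, sigma, N=20000, N0=2000, seed=7):
    # random-walk Metropolis-Hastings: normal proposal centred at the
    # current state, accept with probability min(1, P*), then discard
    # the first N0 draws as burn-in
    rng = random.Random(seed)
    theta, lp = theta0, log_post(theta0)
    chain = []
    for _ in range(N):
        prop = rng.gauss(theta, sigma)
        lp_prop = log_post(prop)
        # u <= min(1, exp(lp_prop - lp)) is the acceptance test of Steps 4-5
        if rng.random() < math.exp(min(0.0, lp_prop - lp)):
            theta, lp = prop, lp_prop
        chain.append(theta)
    return chain[N0:]

# toy target: a standard normal log-density, up to an additive constant
draws = metropolis_hastings(lambda t: -0.5 * t * t, theta0=0.0, sigma=1.0)
post_mean = sum(draws) / len(draws)
```

In the paper's setting, `log_post` would be the log of one of the conditional posteriors above, and three such chains run side by side.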

5. Simulation Study and Real Data Analysis

5.1. Numerical Simulation

In this section, we investigate the performance of the methods above by simulating parameter estimates under different JPC schemes. JPC data following independent WE distributions can be generated with Algorithm 4.
Algorithm 4 The algorithm to generate samples under the JPC scheme.
Step 1 
Generate m samples for population A from WE(α_1, λ) and n samples for population B from WE(α_2, λ). Mix these samples and arrange them in ascending order.
Step 2 
At the i-th failure, record the minimum of the remaining joint samples as w_i. If this unit comes from population A, record z_i = 1; otherwise, z_i = 0.
Step 3 
As R_i is fixed in advance, choose s_i and t_i at random subject to s_i + t_i = R_i. Then withdraw s_i units and t_i units at random from populations A and B, respectively.
Step 4 
Record the data as (w_i, z_i, s_i, t_i) and rebuild the joint sample from the remaining units.
Step 5 
Repeat Steps 2–4 until the k-th failure. There are k_1 = Σ_{i=1}^k z_i failures from population A and k_2 = k − k_1 from population B. At this moment, withdraw all remaining units, so s_k = m − Σ_{i=1}^{k−1} s_i − k_1 and t_k = n − Σ_{i=1}^{k−1} t_i − k_2 are recorded.
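Algorithm 4 can be sketched in code. One convenient way to draw WE(α, λ) variates follows from the fact that the WE density equals the convolution of an Exp(λ) density and an Exp((α+1)λ) density, which is easy to verify from the PDF; the function names and the rule for splitting R_i between the populations are our own assumptions:

```python
import random

def rwe(alpha, lam, rng):
    # WE(alpha, lam) variate as a sum of two independent exponentials
    return rng.expovariate(lam) + rng.expovariate((alpha + 1) * lam)

def jpc_sample(m, n, alpha1, alpha2, lam, R, seed=3):
    # Steps 1-5 of Algorithm 4; R = (R_1, ..., R_k) is the removal plan
    rng = random.Random(seed)
    A = [rwe(alpha1, lam, rng) for _ in range(m)]
    B = [rwe(alpha2, lam, rng) for _ in range(n)]
    records = []
    k = len(R)
    for i in range(k):
        z = 1 if (not B or (A and min(A) < min(B))) else 0
        pool = A if z == 1 else B
        w = min(pool)                       # i-th failure time
        pool.remove(w)
        if i == k - 1:                      # final failure: withdraw all survivors
            s, t = len(A), len(B)
        else:                               # split R_i between A and B at random
            s = rng.randint(max(0, R[i] - len(B)), min(R[i], len(A)))
            t = R[i] - s
        for _ in range(s):
            A.remove(rng.choice(A))
        for _ in range(t):
            B.remove(rng.choice(B))
        records.append((w, z, s, t))
    return records

data = jpc_sample(10, 10, 1.0, 2.0, 2.0, R=[2, 0, 0, 1, 0])
```

Every unit ends up either failed or withdrawn, and the recorded failure times are necessarily increasing, which gives two easy invariants to check.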
The simulation is performed with α_1 = 1, α_2 = 2, and λ = 2, with population sizes m = 105 and n = 100. We consider three cases, in which the total numbers of failures are k = 180, k = 200, and k = 203. To calculate the MLEs, the true values are used to initialize the Newton–Raphson method. Approximate confidence intervals for the parameters are then calculated with the bootstrap method. The Bayesian estimates under the squared error loss function are computed with both non-informative and informative priors. For the non-informative priors, we pick the hyperparameters a_1 = b_1 = a_2 = b_2 = a_0 = b_0 = 10^{−5}, based on the experience of [17]. For the informative priors, we select a_1 = 1, b_1 = 1, a_2 = 2, b_2 = 1, a_0 = 2, and b_0 = 1, so that the prior expectations match the true values. During the calculation, the first one-tenth of the iterations are discarded as burn-in samples.
Table 1 presents the average maximum likelihood estimates (AMLEs) of the unknown parameters and their mean squared errors (MSEs). Here, R = (25, 0(179)) means R_1 = 25 and R_2 = R_3 = ⋯ = R_180 = 0. A total of 1000 replications are performed to obtain the results. The table shows that the MSEs of α̂_1 (the AMLE of α_1) and especially of α̂_2 are relatively large. The large MSEs may result from occasional extreme estimates produced during the iterations for a few random datasets, which have a non-negligible impact on the averaged squared error. The MLEs perform better in terms of MSE as the amount of remaining data increases; however, because the increase in data size is small relative to the total sample size, the effect is not obvious.
Given that the MSEs are large when all three parameters are unknown, we consider estimation when one of the shape parameters is known, for selected censoring schemes: the least accurate scheme in each category of total failures. Table 2 provides the AMLEs of α_2 and λ and their MSEs when α_1 is known in advance; the case where α_2 is known is presented in Table 3. Clearly, the MSE of α̂_2 decreases once α_1 is known. When α_2 is known, the MSE of α̂_1 is reduced to less than half of its previous value, and the MSE of λ̂ drops to about one-fifth of its original value in both cases. Hence, better statistical inference is obtained by applying the JPC model when the larger shape parameter is known.
In Table 4, we provide the average Bayesian estimates (ABEs) and their MSEs for non-informative and informative priors. Although the MSE of α̃_2 (the ABE of α_2) is still larger than those of α̃_1 and λ̃, it is controlled to some extent. The accuracy of the ABEs under both priors is significantly better than that of the AMLEs. Occasionally the estimates of α_1 and λ with the non-informative prior have smaller mean squared errors, but in those cases the informative-prior estimates also perform well. More importantly, the inference for α_2 is clearly more stable with the informative prior. Thus, if relevant prior information is available, we suggest conducting the Bayesian analysis with the informative prior.
Table 5 reports the average lengths (ALs) and coverage percentages (CPs) of the 90% percentile bootstrap confidence intervals (CIs) and the 90% symmetric credible intervals (CRIs) derived from the non-informative and informative priors. The bootstrap intervals are wider and therefore have higher coverage, whereas the intervals obtained from the Bayesian method are more precise. Because the sampled values are limited by the acceptance probability, the Metropolis–Hastings algorithm based on the non-informative prior generates a large number of repeated values, resulting in relatively narrow intervals and low coverage. Hence, the intervals with the non-informative prior perform worse than those with the informative prior.
Figure 3 provides the trace plots and autocorrelation plots of simulated draws of α 1 , α 2 , and λ conducted from the Metropolis–Hastings algorithm with the informative prior. Red segments in the trace plots represent the intervals that cannot cover the real values of the parameters. The autocorrelation plots reflect the degree of correlation between multiple different estimates of the same parameter. Autocorrelations of all the three parameters converge quickly to 0, which means the effect of the algorithm is satisfactory.

5.2. Real Data Analysis

The datasets used in this section are taken from [18], which provided the breaking strength of jute fiber of different gauge lengths. We choose 10-mm and 20-mm groups, and the raw data are presented in Table 6.
To make the data meet our requirements, the raw data are divided by 1000, as in [7]; this has no effect on the subsequent analysis. The total time on test (TTT) plot is a commonly used tool for judging the suitability of a model. It plots the values of the function
G(r/n) = \frac{\sum_{i=1}^{r} x_{(i)} + (n - r)\, x_{(r)}}{\sum_{i=1}^{n} x_{(i)}}.
Here, r = 1, 2, …, n and x_{(i)} denotes the i-th order statistic of the dataset.
If the hazard is constant, the plot is a straight diagonal line; it is convex for a decreasing hazard and concave for an increasing hazard. Both TTT plots in Figure 4 are concave, implying increasing failure rate functions. Hence, the WE distribution can be applied to these data. With one parameter fixed and the data from each dataset in ascending order, we display the log-likelihood profiles of the other parameter in Figure 5.
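The TTT values are straightforward to compute. A minimal sketch of the formula above; the function name is our own:

```python
def ttt_values(data):
    # scaled TTT statistics G(r/n), r = 1..n:
    # (sum of the r smallest values + (n - r) * x_(r)) / (sum of all values)
    x = sorted(data)
    n = len(x)
    total = sum(x)
    return [(sum(x[:r]) + (n - r) * x[r - 1]) / total for r in range(1, n + 1)]

# tiny worked example: for [1, 2, 3] the values are 1/2, 5/6, 1
g = ttt_values([1.0, 2.0, 3.0])
```

Plotting these values against r/n and checking concavity reproduces the diagnostic used in Figure 4.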
The maximum likelihood estimates of the parameters of the distributions fitted to the two complete datasets are calculated separately. The agreement between the empirical distribution function and the fitted distribution is verified by the Kolmogorov–Smirnov (K-S) test. The MLEs, K-S distances, and p-values are given in Table 7, and the fit is visualized in Figure 6.
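The K-S distance itself is easy to compute directly. A sketch of the statistic (not the p-value), with names our own:

```python
def ks_distance(data, cdf):
    # Kolmogorov-Smirnov statistic: sup |F_n(x) - F(x)| over the sample,
    # checking the empirical CDF just before and just after each jump
    x = sorted(data)
    n = len(x)
    d = 0.0
    for i, xi in enumerate(x):
        F = cdf(xi)
        d = max(d, abs(F - i / n), abs((i + 1) / n - F))
    return d

# toy check against the uniform CDF on [0, 1]
d = ks_distance([0.25, 0.5, 0.75], lambda x: x)
```

In the paper's application, `cdf` would be the WE CDF with the fitted parameters plugged in.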
The new MLEs of the parameters under the constraint λ_1 = λ_2 are α̂_1 = 1.263, α̂_2 = 3.922, and λ̂ = 3.744. The corresponding p-value of the likelihood-ratio test is 0.596, so we do not reject the null hypothesis of a common scale parameter. With m = 30 and n = 30, we generate two JPC schemes as follows.
Censoring 
Scheme 1. k = 56 and R = (4, 0(55)).
Remove 0.10115, 0.17725, 0.3039, and 0.70074 from the combined data group; the remaining data, in ascending order, form w. In this case, s is generated as (4, 0(55)) and t is set as (0(56)). z is presented as { 0 1 0 0 1 0 0 0 1 0 0 0 1 1 0 1 1 0 1 0 0 0 1 0 1 1 0 1 1 0 1 0 1 1 0 1 0 1 1 0 0 0 0 1 0 1 0 1 0 1 1 1 0 0 0 1 }.
Table 8 presents the maximum likelihood estimates and Bayesian estimates (BEs) of α_1, α_2, and λ under Scheme 1, and Table 9 provides the 90% percentile bootstrap confidence intervals and the 90% Bayesian symmetric credible intervals. Given the limited prior information, a non-informative prior is chosen for the Bayesian analysis. Perhaps because of the non-informative prior, the BEs of α_1 and α_2 are worse than their maximum likelihood estimates. Compared with the bootstrap CIs, the symmetric CRIs contain the MLEs from the complete data and have more reasonable lengths.
Censoring 
Scheme 2. k = 58 and R = (2, 0(57)).
Arrange the combined data in ascending order; removing the 14th and 54th observations yields w. With censoring scheme 2, we set s = (2, 0(57)) and t = (0(58)), and z is set as { 0 1 0 0 1 0 0 0 1 1 0 0 0 1 0 1 1 0 1 1 0 0 0 1 0 1 1 0 1 1 1 0 1 0 1 1 0 1 0 1 1 0 0 0 0 1 0 1 0 1 0 1 1 0 1 0 0 1 }.
The MLEs and the BEs of all the parameters for Scheme 2 are provided in Table 10. Under this scheme, Bayesian estimates of α 1 and λ are better than their maximum likelihood estimates. The 90% percentile bootstrap confidence intervals and the 90% Bayesian symmetric credible intervals are presented in Table 11, from which we can see that the bootstrap CIs are still wide. Because of the better completeness of the data, the symmetric CRIs become narrower and still contain the MLEs calculated from the original data.
Under the above two schemes, the bootstrap CIs are not ideal. A possible reason is that the bootstrap iterations depend on the result of the first estimation, making the results more susceptible to randomness and the intervals wide; this is why the lower bounds for α_1 and α_2 are close to zero. In contrast, the symmetric CRIs incorporate prior information as well as the samples, so they are less affected by randomness and perform better.

6. Conclusions

In this article, we apply the joint progressive type-II censoring scheme to two weighted exponential populations with a common scale parameter and different shape parameters. The classical maximum likelihood estimates are computed first. Since they cannot be expressed in closed form, the Newton–Raphson method is used to obtain them numerically, and the associated Hessian matrix is presented in the paper. Because the bootstrap method provides approximate intervals with low accuracy, we also introduce a Bayesian approach built on three independent gamma priors. The MCMC algorithm then yields the Bayesian estimates under the squared error loss function together with the symmetric credible intervals. With these procedures, we run simulations under different censoring schemes. Finally, two sets of real data following weighted exponential distributions are used to validate the model studied in this paper.
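The MCMC step summarized above can be illustrated with a minimal random-walk Metropolis sketch on a toy gamma-shaped target. This is an assumption for illustration only, not the sampler used for the WE model; it shows how the posterior mean (the Bayes estimate under squared error loss) and a 90% symmetric CRI fall out of the retained draws.

```python
# Minimal random-walk Metropolis sketch for a positive parameter.
import math, random

def metropolis(log_post, x0, step=0.5, n=5000, burn=1000, seed=7):
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    draws = []
    for i in range(n):
        prop = x + rng.gauss(0.0, step)
        # Reject proposals outside the support (parameter must be positive).
        lp_prop = log_post(prop) if prop > 0 else -math.inf
        if rng.random() < math.exp(min(0.0, lp_prop - lp)):
            x, lp = prop, lp_prop
        if i >= burn:
            draws.append(x)
    return draws

# Toy target: Gamma(shape=3, rate=2) log-density up to an additive constant.
log_post = lambda x: 2 * math.log(x) - 2 * x
draws = sorted(metropolis(log_post, x0=1.0))
mean = sum(draws) / len(draws)   # Bayes estimate under squared error loss
cri = (draws[int(0.05 * len(draws))], draws[int(0.95 * len(draws))])
print(mean, cri)
```

For the WE model, `log_post` would be replaced by the JPC log-likelihood plus the gamma log-priors, with one such update per parameter.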
There is great potential in lifetime studies of multi-sample cases, especially for two-parameter exponential family distributions. More work is needed to develop inferential procedures that reduce the MSE. The case in which the parameters satisfy an order restriction deserves particular attention. The model with different scale parameters can also be considered, and the situation in which both parameters differ between populations is worth further exploration.

Author Contributions

Investigation, Y.Q.; Supervision, W.G. All authors have read and agreed to the published version of the manuscript.

Funding

Wenhao’s work was partially supported by the Fund of China Academy of Railway Sciences Corporation Limited (No. 2020YJ120).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are openly available in [18].

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Elements of the Hessian Matrix

The second-order partial derivatives required in the Hessian matrix are provided as follows.
$$\frac{\partial^2 \ln L}{\partial \alpha_1\,\partial \alpha_2}=0$$

$$\frac{\partial^2 \ln L}{\partial \alpha_1^2}=\frac{k_1(2\alpha_1+1)}{\alpha_1^2(\alpha_1+1)^2}-\sum_{i=1}^{k} z_i\,\frac{\lambda^2 w_i^2\, e^{\alpha_1\lambda w_i}}{\left(e^{\alpha_1\lambda w_i}-1\right)^2}+\sum_{i=1}^{k} s_i\left[\frac{1}{\alpha_1^2}-\frac{\lambda w_i e^{-\alpha_1\lambda w_i}\left(\alpha_1\lambda w_i+\lambda w_i+2\right)+1}{\left(1+\alpha_1-e^{-\alpha_1\lambda w_i}\right)^2}\right]$$

$$\frac{\partial^2 \ln L}{\partial \alpha_1\,\partial \lambda}=\sum_{i=1}^{k} z_i\,\frac{w_i}{e^{\alpha_1\lambda w_i}-1}\left(1-\frac{\alpha_1\lambda w_i\, e^{\alpha_1\lambda w_i}}{e^{\alpha_1\lambda w_i}-1}\right)+\sum_{i=1}^{k} s_i\,\frac{w_i e^{-\alpha_1\lambda w_i}\left(1-e^{-\alpha_1\lambda w_i}-\alpha_1\lambda w_i-\alpha_1^2\lambda w_i\right)}{\left(1+\alpha_1-e^{-\alpha_1\lambda w_i}\right)^2}$$

$$\frac{\partial^2 \ln L}{\partial \alpha_2^2}=\frac{k_2(2\alpha_2+1)}{\alpha_2^2(\alpha_2+1)^2}-\sum_{i=1}^{k}(1-z_i)\,\frac{\lambda^2 w_i^2\, e^{\alpha_2\lambda w_i}}{\left(e^{\alpha_2\lambda w_i}-1\right)^2}+\sum_{i=1}^{k} t_i\left[\frac{1}{\alpha_2^2}-\frac{\lambda w_i e^{-\alpha_2\lambda w_i}\left(\alpha_2\lambda w_i+\lambda w_i+2\right)+1}{\left(1+\alpha_2-e^{-\alpha_2\lambda w_i}\right)^2}\right]$$

$$\frac{\partial^2 \ln L}{\partial \alpha_2\,\partial \lambda}=\sum_{i=1}^{k}(1-z_i)\,\frac{w_i}{e^{\alpha_2\lambda w_i}-1}\left(1-\frac{\alpha_2\lambda w_i\, e^{\alpha_2\lambda w_i}}{e^{\alpha_2\lambda w_i}-1}\right)+\sum_{i=1}^{k} t_i\,\frac{w_i e^{-\alpha_2\lambda w_i}\left(1-e^{-\alpha_2\lambda w_i}-\alpha_2\lambda w_i-\alpha_2^2\lambda w_i\right)}{\left(1+\alpha_2-e^{-\alpha_2\lambda w_i}\right)^2}$$

$$\frac{\partial^2 \ln L}{\partial \lambda^2}=-\frac{k}{\lambda^2}-\sum_{i=1}^{k} z_i\,\frac{\alpha_1^2 w_i^2\, e^{\alpha_1\lambda w_i}}{\left(e^{\alpha_1\lambda w_i}-1\right)^2}-\sum_{i=1}^{k}(1-z_i)\,\frac{\alpha_2^2 w_i^2\, e^{\alpha_2\lambda w_i}}{\left(e^{\alpha_2\lambda w_i}-1\right)^2}-\sum_{i=1}^{k} s_i\,\frac{\alpha_1^2 w_i^2\, e^{-\alpha_1\lambda w_i}(1+\alpha_1)}{\left(1+\alpha_1-e^{-\alpha_1\lambda w_i}\right)^2}-\sum_{i=1}^{k} t_i\,\frac{\alpha_2^2 w_i^2\, e^{-\alpha_2\lambda w_i}(1+\alpha_2)}{\left(1+\alpha_2-e^{-\alpha_2\lambda w_i}\right)^2}$$
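These closed-form expressions can be spot-checked numerically. The sketch below compares the Appendix A expression for the second derivative with respect to λ against a central finite difference of the JPC log-likelihood; the data, censoring pattern, and parameter values are small synthetic placeholders, not the paper's.

```python
# Finite-difference sanity check of d^2 ln L / d lambda^2.
import math

def log_lik(a1, a2, lam, w, z, s, t):
    # JPC log-likelihood for two WE populations with common scale lam:
    # density  f(x) = ((a+1)/a) lam e^{-lam x} (1 - e^{-a lam x}),
    # survival S(x) = e^{-lam x} (1 + a - e^{-a lam x}) / a.
    ll = 0.0
    for wi, zi, si, ti in zip(w, z, s, t):
        a = a1 if zi else a2
        ll += (math.log((a + 1) / a) + math.log(lam) - lam * wi
               + math.log(1 - math.exp(-a * lam * wi)))
        for cnt, aa in ((si, a1), (ti, a2)):
            ll += cnt * (-lam * wi - math.log(aa)
                         + math.log(1 + aa - math.exp(-aa * lam * wi)))
    return ll

def d2_dlam2(a1, a2, lam, w, z, s, t):
    # Closed-form second derivative with respect to lam (Appendix A).
    val = -len(w) / lam ** 2
    for wi, zi, si, ti in zip(w, z, s, t):
        a = a1 if zi else a2
        ea = math.exp(a * lam * wi)
        val -= a ** 2 * wi ** 2 * ea / (ea - 1) ** 2
        for cnt, aa in ((si, a1), (ti, a2)):
            e = math.exp(-aa * lam * wi)
            val -= cnt * aa ** 2 * wi ** 2 * e * (1 + aa) / (1 + aa - e) ** 2
    return val

# Synthetic placeholders:
w, z = [0.3, 0.8, 1.4, 2.0], [1, 0, 1, 0]
s, t = [1, 0, 0, 0], [0, 1, 0, 0]
a1, a2, lam, h = 1.0, 2.0, 2.0, 1e-4
num = (log_lik(a1, a2, lam + h, w, z, s, t)
       - 2 * log_lik(a1, a2, lam, w, z, s, t)
       + log_lik(a1, a2, lam - h, w, z, s, t)) / h ** 2
ana = d2_dlam2(a1, a2, lam, w, z, s, t)
print(num, ana)  # the two values agree to several decimal places
```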

Appendix B. Proof of Theorem 2

In this part, the proof of Theorem 2 is presented. From the form of the log-likelihood function, it suffices to establish the following two facts. First, g(λ) = ln λ is a concave function of λ, since
$$g'(\lambda)=\frac{1}{\lambda}\qquad\text{and}\qquad g''(\lambda)=-\frac{1}{\lambda^2}<0.$$
Next, let a and b be any positive numbers such that a − e^(−bλ) > 0. Then h(λ) = ln(a − e^(−bλ)) is also a concave function, because
$$h'(\lambda)=\frac{b\,e^{-b\lambda}}{a-e^{-b\lambda}}\qquad\text{and}\qquad h''(\lambda)=-\frac{a b^2 e^{-b\lambda}}{\left(a-e^{-b\lambda}\right)^2}.$$
Whatever positive values a and b take, h''(λ) < 0, so ln L is a concave function of λ. Moreover, ln L tends to −∞ as λ tends to 0 or ∞. In summary, ln L is a unimodal function of λ.
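The concavity argument above admits a quick numerical check (an illustration, not part of the proof): the central second difference of h is negative over a grid of positive (a, b, λ) values.

```python
# Numerical check that h(lambda) = ln(a - e^{-b lambda}) is concave where defined.
import math

def h(lam, a, b):
    # Defined on the region where a - e^{-b lambda} > 0.
    return math.log(a - math.exp(-b * lam))

def second_diff(lam, a, b, eps=1e-4):
    # Central finite-difference approximation to h''(lambda).
    return (h(lam + eps, a, b) - 2 * h(lam, a, b) + h(lam - eps, a, b)) / eps ** 2

checks = [second_diff(lam, a, b)
          for a, b in [(1.0, 0.5), (2.5, 1.0), (1.5, 3.0)]
          for lam in [0.5, 1.0, 2.0, 4.0]]
print(all(c < 0 for c in checks))  # True: h'' < 0 on the whole grid
```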

References

1. Balakrishnan, N.; Burkschat, M.; Cramer, E.; Hofmann, G. Fisher information based progressive censoring plans. Comput. Stat. Data Anal. 2008, 53, 366–380.
2. Balakrishnan, N.; Cramer, E. The Art of Progressive Censoring: Applications to Reliability and Quality; Statistics for Industry and Technology; Springer: New York, NY, USA, 2014.
3. Rasouli, A.; Balakrishnan, N. Exact Likelihood Inference for Two Exponential Populations under Joint Progressive Type-II Censoring. Commun. Stat.-Theory Methods 2010, 39, 2172–2191.
4. Parsi, S.; Bairamov, I. Expected values of the number of failures for two populations under joint Type-II progressive censoring. Comput. Stat. Data Anal. 2009, 53, 3560–3570.
5. Balakrishnan, N.; Su, F.; Liu, K.Y. Exact Likelihood Inference for k Exponential Populations under Joint Progressive Type-II Censoring. Commun. Stat.-Simul. Comput. 2015, 44, 902–923.
6. Krishna, H.; Goel, R. Inferences for two Lindley populations based on joint progressive type-II censored data. Commun. Stat.-Simul. Comput. 2020, 1–18.
7. Mondal, S.; Kundu, D. On the joint Type-II progressive censoring scheme. Commun. Stat. 2020, 49, 958–976.
8. Bayoud, H.A.; Raqab, M.Z. Classical and Bayesian inferences for two Topp-Leone models under joint progressive Type-II censoring. Commun. Stat.-Simul. Comput. 2022, 1–19.
9. Gupta, R.D.; Kundu, D. A new class of weighted exponential distributions. Statistics 2009, 43, 621–634.
10. Dey, S.; Kayal, T.; Tripathi, Y.M. Statistical Inference for the Weighted Exponential Distribution under Progressive Type-II Censoring with Binomial Removal. Am. J. Math. Manag. Sci. 2018, 37, 188–208.
11. Ahmad, A.E.B.A.; Fawzy, M.A.; Ouda, H. Bayesian prediction of future observations from weighted exponential distribution constant-stress model based on Type-II hybrid censored data. Commun. Stat.-Theory Methods 2021, 50, 2732–2746.
12. Tian, W.; Yang, Y. Change point analysis for weighted exponential distribution. Commun. Stat.-Simul. Comput. 2022, 1–13.
13. Mondal, S.; Kundu, D. Point and interval estimation of Weibull parameters based on joint progressively censored data. Sankhya B 2019, 81, 1–25.
14. Dey, S.; Ali, S.; Park, C. Weighted exponential distribution: Properties and different methods of estimation. J. Stat. Comput. Simul. 2015, 85, 3641–3661.
15. Metropolis, N.; Rosenbluth, A.W.; Rosenbluth, M.N.; Teller, A.H.; Teller, E. Equation of State Calculations by Fast Computing Machines. J. Chem. Phys. 1953, 21, 1087–1092.
16. Hastings, W.K. Monte Carlo sampling methods using Markov chains and their applications. Biometrika 1970, 57, 97–109.
17. Congdon, P. Applied Bayesian Modelling; John Wiley & Sons: Hoboken, NJ, USA, 2014.
18. Xia, Z.; Yu, J.; Cheng, L.; Liu, L.; Wang, W. Study on the breaking strength of jute fibres using modified Weibull distribution. Compos. Part A Appl. Sci. Manuf. 2009, 40, 54–59.
Figure 1. JPC scheme.
Figure 2. PDFs and hazard functions of the WE distribution for different parameters. (a) PDF of WE. (b) Hazard function of WE.
Figure 3. Trace plots and autocorrelation plots of simulated draws of unknown parameters. (a) Trace plot of simulated draws of α 1 . (b) Autocorrelation plot of simulated draws of α 1 . (c) Trace plot of simulated draws of α 2 . (d) Autocorrelation plot of simulated draws of α 2 . (e) Trace plot of simulated draws of λ . (f) Autocorrelation plot of simulated draws of λ .
Figure 4. TTT plot for the breaking strength of jute fiber with gauge lengths of 10 mm and 20 mm. (a) TTT plot for data set 1. (b) TTT plot for data set 2.
Figure 5. Log-likelihood profiles. Here P ( · ) means the value of the profile log-likelihood function. (a) Profile log-likelihood function of α 1 . (b) Profile log-likelihood function of λ 1 . (c) Profile log-likelihood function of α 2 . (d) Profile log-likelihood function of λ 2 .
Figure 6. Fitness between the fitted distribution and the empirical distribution and Q-Q plots of the two datasets. Here, x means the value of a random variable. F ( x ) means the value of the function. (a) Fitness of dataset 1. (b) Q-Q plot of dataset 1. (c) Fitness of dataset 2. (d) Q-Q plot of dataset 2.
Table 1. AMLEs and MSEs of unknown parameters based on 1000 simulations for different JPC schemes.

| Censoring Scheme | Parameter | AMLE | MSE |
| --- | --- | --- | --- |
| k = 180, R = (25, 0^(179)) | α1 | 1.0109 | 0.9884 |
| | α2 | 2.0352 | 3.1777 |
| | λ | 2.1523 | 0.1488 |
| k = 180, R = (0^(90), 25, 0^(89)) | α1 | 1.0173 | 0.9498 |
| | α2 | 2.1207 | 3.3621 |
| | λ | 2.1405 | 0.1422 |
| k = 180, R = (0^(179), 25) | α1 | 1.0557 | 1.0703 |
| | α2 | 2.1839 | 3.5919 |
| | λ | 2.1373 | 0.1538 |
| k = 180, R = (10, 0^(178), 15) | α1 | 0.9928 | 1.0968 |
| | α2 | 2.0722 | 3.1370 |
| | λ | 2.1636 | 0.1578 |
| k = 180, R = (2^(12), 1, 0^(167)) | α1 | 1.0427 | 1.2400 |
| | α2 | 2.1555 | 3.2140 |
| | λ | 2.1268 | 0.1282 |
| k = 200, R = (5, 0^(199)) | α1 | 1.0047 | 0.8945 |
| | α2 | 2.0779 | 3.0046 |
| | λ | 2.1412 | 0.1378 |
| k = 200, R = (0^(199), 5) | α1 | 1.0380 | 0.9558 |
| | α2 | 2.1265 | 2.8904 |
| | λ | 2.1350 | 0.1366 |
| k = 200, R = (1^(5), 0^(195)) | α1 | 1.0086 | 0.9361 |
| | α2 | 2.0326 | 2.9477 |
| | λ | 2.1294 | 0.1202 |
| k = 203, R = (0^(101), 2, 0^(101)) | α1 | 0.9639 | 0.7899 |
| | α2 | 2.0115 | 2.8106 |
| | λ | 2.1475 | 0.1335 |
| k = 203, R = (1, 0^(201), 1) | α1 | 1.0324 | 0.9280 |
| | α2 | 2.1174 | 2.7633 |
| | λ | 2.1295 | 0.1348 |
Table 2. AMLEs and MSEs of α2 and λ when α1 is known, for selected censoring schemes.

| Censoring Scheme | Parameter | AMLE | MSE |
| --- | --- | --- | --- |
| k = 180, R = (0^(179), 25) | α2 | 2.1948 | 2.8800 |
| | λ | 2.0173 | 0.0231 |
| k = 200, R = (5, 0^(199)) | α2 | 2.1716 | 2.9085 |
| | λ | 2.0210 | 0.0190 |
| k = 203, R = (0^(101), 2, 0^(101)) | α2 | 2.1778 | 2.7660 |
| | λ | 2.0212 | 0.0198 |
Table 3. AMLEs and MSEs of α1 and λ when α2 is known, for selected censoring schemes.

| Censoring Scheme | Parameter | AMLE | MSE |
| --- | --- | --- | --- |
| k = 180, R = (0^(179), 25) | α1 | 1.1049 | 0.4455 |
| | λ | 2.0171 | 0.0267 |
| k = 200, R = (5, 0^(199)) | α1 | 1.1083 | 0.4135 |
| | λ | 2.0148 | 0.0241 |
| k = 203, R = (0^(101), 2, 0^(101)) | α1 | 1.0945 | 0.3954 |
| | λ | 2.0216 | 0.0233 |
Table 4. ABEs and MSEs of unknown parameters with informative (IP) and non-informative (NIP) priors for different JPC schemes.

| Censoring Scheme | Parameter | ABE (NIP) | MSE (NIP) | ABE (IP) | MSE (IP) |
| --- | --- | --- | --- | --- | --- |
| k = 180, R = (25, 0^(179)) | α1 | 0.9799 | 0.0007 | 1.0240 | 0.1686 |
| | α2 | 1.8593 | 0.9139 | 1.9076 | 0.4202 |
| | λ | 1.9699 | 0.0138 | 2.1263 | 0.0435 |
| k = 180, R = (0^(90), 25, 0^(89)) | α1 | 1.0163 | 0.0004 | 1.0050 | 0.1815 |
| | α2 | 1.9567 | 0.1212 | 1.9650 | 0.4836 |
| | λ | 2.0008 | 0.0138 | 2.1004 | 0.0314 |
| k = 180, R = (0^(179), 25) | α1 | 0.9546 | 0.0402 | 1.0097 | 0.1675 |
| | α2 | 1.9788 | 0.1152 | 2.0268 | 0.3218 |
| | λ | 1.9613 | 0.0126 | 2.0973 | 0.0333 |
| k = 180, R = (10, 0^(178), 15) | α1 | 1.0002 | 0.0000 | 0.9974 | 0.1727 |
| | α2 | 2.0499 | 0.0796 | 1.8663 | 0.3166 |
| | λ | 2.0245 | 0.0125 | 2.1273 | 0.0407 |
| k = 180, R = (2^(12), 1, 0^(167)) | α1 | 0.9668 | 0.0154 | 1.0100 | 0.1746 |
| | α2 | 1.9146 | 0.2699 | 1.9387 | 0.3636 |
| | λ | 1.9766 | 0.0141 | 2.0790 | 0.0262 |
| k = 200, R = (5, 0^(199)) | α1 | 1.0074 | 0.0001 | 1.0710 | 0.1654 |
| | α2 | 1.9728 | 0.0335 | 2.0189 | 0.3814 |
| | λ | 1.9953 | 0.0121 | 2.0799 | 0.0261 |
| k = 200, R = (0^(199), 5) | α1 | 0.9983 | 0.0000 | 0.9704 | 0.1437 |
| | α2 | 2.0330 | 0.7948 | 2.0539 | 0.4415 |
| | λ | 1.9880 | 0.0107 | 2.0825 | 0.0248 |
| k = 200, R = (1^(5), 0^(195)) | α1 | 1.0087 | 0.0001 | 1.0062 | 0.1702 |
| | α2 | 2.0021 | 0.0004 | 1.8481 | 0.3280 |
| | λ | 2.0046 | 0.0114 | 2.0913 | 0.0301 |
| k = 203, R = (0^(101), 2, 0^(101)) | α1 | 0.9852 | 0.0038 | 0.9903 | 0.1246 |
| | α2 | 1.9998 | 0.0000 | 1.9765 | 0.3851 |
| | λ | 1.9984 | 0.0102 | 2.0764 | 0.0240 |
| k = 203, R = (1, 0^(201), 1) | α1 | 1.0331 | 0.0029 | 0.9414 | 0.1807 |
| | α2 | 1.9615 | 0.0763 | 1.9307 | 0.3147 |
| | λ | 1.9649 | 0.0123 | 2.0958 | 0.0358 |
Table 5. ALs and CPs of 90% bootstrap CIs and Bayesian 90% symmetric CRIs with informative and non-informative priors for different JPC schemes.

| Censoring Scheme | Parameter | AL (Bootstrap CI) | CP (Bootstrap CI) | AL (CRI, NIP) | CP (CRI, NIP) | AL (CRI, IP) | CP (CRI, IP) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| k = 180, R = (25, 0^(179)) | α1 | 3.0310 | 100% | 0.8200 | 75% | 2.0553 | 97% |
| | α2 | 5.9612 | 100% | 0.2602 | 80% | 3.1134 | 96% |
| | λ | 1.1783 | 100% | 0.3665 | 90% | 0.7150 | 91% |
| k = 180, R = (0^(90), 25, 0^(89)) | α1 | 2.8514 | 100% | 0.2420 | 68% | 1.9871 | 100% |
| | α2 | 5.5680 | 100% | 0.4466 | 85% | 3.2323 | 99% |
| | λ | 1.1946 | 100% | 0.3731 | 86% | 0.7188 | 96% |
| k = 180, R = (0^(179), 25) | α1 | 3.0145 | 100% | 0.7125 | 69% | 2.0779 | 99% |
| | α2 | 5.9320 | 100% | 0.3863 | 83% | 3.3462 | 100% |
| | λ | 1.2603 | 100% | 0.3578 | 82% | 0.7474 | 97% |
| k = 180, R = (10, 0^(178), 15) | α1 | 2.8812 | 100% | 0.4958 | 67% | 2.0596 | 99% |
| | α2 | 5.6835 | 100% | 3.3377 | 75% | 3.0956 | 99% |
| | λ | 1.2358 | 100% | 0.3762 | 88% | 0.7421 | 96% |
| k = 180, R = (2^(12), 1, 0^(167)) | α1 | 2.9570 | 100% | 1.2308 | 61% | 2.0331 | 98% |
| | α2 | 5.4753 | 100% | 0.9064 | 77% | 3.1535 | 98% |
| | λ | 1.1744 | 100% | 0.3498 | 92% | 0.6966 | 99% |
| k = 200, R = (5, 0^(199)) | α1 | 2.6735 | 100% | 0.1332 | 65% | 2.0797 | 99% |
| | α2 | 5.4960 | 100% | 0.1901 | 76% | 3.2403 | 98% |
| | λ | 1.1272 | 100% | 0.3523 | 89% | 0.6863 | 99% |
| k = 200, R = (0^(199), 5) | α1 | 2.7077 | 100% | 0.1135 | 62% | 1.9661 | 99% |
| | α2 | 5.4113 | 100% | 1.1103 | 73% | 3.2687 | 100% |
| | λ | 1.1193 | 100% | 0.3438 | 91% | 0.6945 | 99% |
| k = 200, R = (1^(5), 0^(195)) | α1 | 2.7460 | 100% | 0.2483 | 67% | 1.9849 | 99% |
| | α2 | 5.0433 | 100% | 0.2070 | 83% | 3.0137 | 96% |
| | λ | 1.1295 | 100% | 0.3533 | 91% | 0.6911 | 95% |
| k = 203, R = (0^(101), 2, 0^(101)) | α1 | 2.5670 | 100% | 0.1094 | 69% | 2.0241 | 99% |
| | α2 | 5.1235 | 100% | 0.2065 | 76% | 3.1697 | 99% |
| | λ | 1.1254 | 100% | 0.3405 | 88% | 0.6869 | 97% |
| k = 203, R = (1, 0^(201), 1) | α1 | 2.7682 | 100% | 0.7282 | 62% | 1.9261 | 95% |
| | α2 | 5.3929 | 100% | 0.3076 | 82% | 3.0591 | 98% |
| | λ | 1.1237 | 100% | 0.3483 | 84% | 0.6853 | 93% |
Table 6. The breaking strength of jute fiber. Dataset 1 is the breaking strength of jute fiber with a gauge length of 10 mm, and dataset 2 is the breaking strength of jute fiber with a gauge length of 20 mm.

Dataset 1: 43.93, 50.16, 101.15, 108.94, 123.06, 141.38, 151.48, 163.40, 177.25, 183.16, 212.13, 257.44, 262.90, 291.27, 303.90, 323.83, 353.24, 376.42, 383.43, 422.11, 506.60, 530.55, 590.48, 637.66, 671.49, 693.73, 700.74, 704.66, 727.23, 778.17

Dataset 2: 36.75, 45.58, 48.01, 71.46, 83.55, 99.72, 113.85, 116.99, 119.86, 145.96, 166.49, 187.13, 187.85, 200.16, 244.53, 284.64, 350.70, 375.81, 419.02, 456.60, 547.44, 578.62, 581.60, 585.57, 594.29, 662.66, 688.16, 707.36, 756.70, 765.14
Table 7. MLEs from the complete data and K-S distances.

| Data Set | Shape Parameter (MLE) | Scale Parameter (MLE) | K-S Distance | p-Value |
| --- | --- | --- | --- | --- |
| Dataset 1 | α1 = 0.001 | λ1 = 5.465 | 0.099 | 0.901 |
| Dataset 2 | α2 = 6.625 | λ2 = 3.319 | 0.144 | 0.514 |
Table 8. MLEs and BEs for Scheme 1.

| Parameter | Maximum Likelihood Estimate | Bayes Estimate |
| --- | --- | --- |
| α1 | 1.1434 | 1.0926 |
| α2 | 4.1220 | 3.4510 |
| λ | 3.7111 | 3.7446 |
Table 9. 90% bootstrap CIs and 90% symmetric CRIs for Scheme 1.

| Parameter | Bootstrap CI (Lower, Upper) | Symmetric CRI (Lower, Upper) |
| --- | --- | --- |
| α1 | (0.0000, 1.6562) | (0.6004, 1.6166) |
| α2 | (0.0000, 6.6751) | (2.6698, 4.3559) |
| λ | (4.0137, 7.0838) | (3.7011, 3.7887) |
Table 10. MLEs and BEs for Scheme 2.

| Parameter | Maximum Likelihood Estimate | Bayes Estimate |
| --- | --- | --- |
| α1 | 1.2053 | 1.2500 |
| α2 | 3.7233 | 3.6424 |
| λ | 3.7774 | 3.7463 |
Table 11. 90% bootstrap CIs and 90% symmetric CRIs for Scheme 2.

| Parameter | Bootstrap CI (Lower, Upper) | Symmetric CRI (Lower, Upper) |
| --- | --- | --- |
| α1 | (0.0000, 3.4458) | (1.1096, 1.3899) |
| α2 | (0.0000, 7.5122) | (2.9214, 4.4671) |
| λ | (3.6319, 6.5470) | (3.6979, 3.7936) |