1. Introduction
There are many situations in life-testing and reliability experiments in which units are lost or removed from the test before failure. The data observed from such experiments are called censored data, and censoring is used to save test time and cost. Type-I and Type-II censoring are the two most frequently used censoring schemes. In Type-I censoring (time censoring), failures are observed until a predetermined time $\tau $, while in Type-II censoring (failure censoring) the experiment is terminated when the time of the rth failure is reached, where r is specified before the experiment begins with n items on test, $0<r<n$. Various modified censoring schemes, such as progressive censoring and multiple censoring, are also available and are used to analyze lifetime data. In many situations, it is desirable to specify both an optimal test period and the corresponding number of failures needed for statistical inference. A mixture of the Type-I and Type-II censoring schemes is known as a hybrid censoring scheme (HCS). This type of scheme has received considerable attention among practitioners, and several HCSs have been introduced in the literature. For example, Childs et al. [1] introduced the generalized Type-I and Type-II HCSs, Kundu and Joarder [2] introduced the progressive Type-II HCS, and Balakrishnan et al. [3] and Lone and Panahi [4] introduced unified HCSs.
In real-life experiments, a product can fail for a variety of reasons; these reasons are referred to as competing risks, since we can only observe the product failing from one cause but not the others. In reliability and survival analysis, this kind of observation is modeled by a competing risks model. When using competing risks models, our goal is to assess the risk of a particular cause of failure in relation to the other potential causes. This model has been used by different authors; for example, Cox [5] discussed the competing risks model using exponential populations.
Several properties of competing risks models have been presented by Crowder [6], Balakrishnan and Han [7], Modhesh and Abd-Elmougod [8], Bakoban and Abd-Elmougod [9], Debnath and Mohiuddine [10] and Alghamdi [11]. Recently, the characteristics of the competing risks model under the accelerated life test model were discussed by many authors, for example, Ganguly and Kundu [12] and Hanaa and Neveen [13]. A joint censoring scheme (JCS) may arise when conducting comparative life tests on products from different lines of production under the same conditions. This type of censoring scheme has been discussed by different authors. For example, Rao et al. [14] developed the rank order theory under JCS, while Johnson and Mehrotra [15] presented the locally most powerful rank tests under JCS. Mehrotra and Bhattacharyya [16] used JCS to explore the problem of testing the equality of two exponential distributions. Confidence intervals under JCS for the exponential distribution were developed by Mehrotra and Bhattacharyya [17]. Balakrishnan and Rasouli [18] and Rasouli and Balakrishnan [19] developed exact likelihood inference for exponential distributions under JCSs and progressive JCSs. The estimation and prediction of two exponential distributions are discussed in the work of Shafay et al. [20]. Recently, this problem has been handled by Algarni et al. [21], Mondal and Kundu [22,23], Almarashi et al. [24], Tahani et al. [25] and Abdulaziz et al. [26]. The Gompertz distribution was developed to describe human mortality and to construct actuarial tables. It is widely used as a lifetime distribution in demography, actuarial science, biology and medical research, and plays a vital role in modelling survival times. In reliability and survival studies, the lifetimes of many products are modelled by the Gompertz distribution, which has an increasing hazard rate. Since the skewness and kurtosis of this distribution are fixed constants, independent of the distribution parameters, the Gompertz distribution has also been used to model age-specific fertility rates. Comparative life tests are adopted for products deriving from different lines of production under the same conditions in the presence of a competing risks model. The problem of inference on the unknown quantities of the population is formulated using the population characteristics and the censoring methodology. Here, we discuss these problems when the failure times of the population units follow a Gompertz lifetime distribution with a CDF given by
The density function of the Gompertz distribution has its mode at zero when $0<\beta \le \theta $ and hence monotonically decreases on $(0,\infty ).$ However, if $\beta >\theta ,$ the mode is ${t}_{mod}=\left(\frac{1}{\beta}\right)log\left(\frac{\beta}{\theta}\right)$; hence, the density increases on $(0,{t}_{mod})$ and decreases on $({t}_{mod},\infty ).$ For more details, see Soliman et al. [27,28]. Statistical inference for the Gompertz distribution under the independent competing risks model was developed by Lodhi et al. [29], and under the dependent competing risks model by Wang et al. [30]. The Gompertz distribution reduces to the exponential distribution when $\beta \to 0$. To the best of our knowledge, no work has considered the joint Type-II GHCS in the case of the Gompertz distribution. In this paper, we adopt the joint Type-II GHCS for comparative Gompertz populations in the presence of a competing risks model. We use different methods of estimation: the ML, bootstrap and Bayes methods. The model parameters and the reliability of the system are estimated using point and interval estimates. Different tools, such as the MSE and the coverage percentage, are used to assess and compare the results through Monte Carlo simulation studies. Finally, we analyse a real data set to demonstrate our goals.
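For concreteness, the shape behaviour described above can be checked numerically. The sketch below assumes the standard parameterization of the Gompertz CDF, $F(t)=1-\exp \left(-\frac{\theta}{\beta}\left({e}^{\beta t}-1\right)\right)$, which is consistent with the mode formula quoted above; the function names are ours.

```python
import math

def gompertz_cdf(t, beta, theta):
    # F(t) = 1 - exp(-(theta/beta) * (exp(beta*t) - 1)), for t >= 0
    return 1.0 - math.exp(-(theta / beta) * (math.exp(beta * t) - 1.0))

def gompertz_pdf(t, beta, theta):
    # f(t) = theta * exp(beta*t) * exp(-(theta/beta) * (exp(beta*t) - 1))
    return theta * math.exp(beta * t) * math.exp(-(theta / beta) * (math.exp(beta * t) - 1.0))

def gompertz_mode(beta, theta):
    # Mode is 0 when 0 < beta <= theta; otherwise (1/beta) * log(beta/theta)
    if beta <= theta:
        return 0.0
    return (1.0 / beta) * math.log(beta / theta)
```

For example, with $\beta =2$ and $\theta =1$ the mode is $\frac{1}{2}\log 2\approx 0.347$, while for $\beta \le \theta $ the density is maximal at zero.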
The rest of the article is organized as follows: a description of the generalized hybrid censoring scheme is presented in Section 2. The model and its assumptions are formulated in Section 3. In Section 4, using joint Type-II GHC competing risks data, we discuss the maximum likelihood estimation (MLE) of the parameters, as well as of the reliability and failure rate functions; based on the asymptotic normality of the MLEs, approximate confidence intervals are also obtained in the same section. Two bootstrap confidence intervals (based on the bootstrap-p and bootstrap-t methods) are discussed in Section 5. The Bayes estimates under the squared error loss function and gamma priors are obtained in Section 6. An assessment and comparison of the results, using a Monte Carlo simulation study, are reported in Section 7. Section 8 deals with a real-life data set for illustration purposes. Finally, conclusions are drawn in Section 9.
2. Generalized Hybrid Censoring Scheme
For an HCS, suppose that ($\tau ,$ m) are the ideal test time and the corresponding number of failures. In Type-I HCS, the test is terminated at min($\tau ,$ ${T}_{m}$), where ${T}_{m}$ is the time of the mth failure. In Type-II HCS, the test is terminated at max($\tau ,$ ${T}_{m}$). For an extensive review of HCSs, see Childs et al. [1], Gupta and Kundu [31], Zhang et al. [32], Kundu and Pradhan [33] and Algarni et al. [34]. The problems of a low expected number of failures and a long test time are still present in Type-I and Type-II HCSs. To solve these problems, Chandrasekar et al. [35] established the generalized hybrid censoring scheme (GHCS), which can be described as follows:
Type-I GHCS: Consider a life-testing experiment with n units, two fixed positive integers $({m}_{1},{m}_{2})$ with $1<{m}_{1}<{m}_{2}\le n$, and a prefixed ideal test time $\tau $. When the test is running, the failure times ${T}_{i},$ $i\ge 1$, are recorded until the failure ${T}_{{m}_{1}}$ is observed. If ${T}_{{m}_{1}}<\tau $, then the test is terminated at $\omega =$ min(${T}_{{m}_{2}}$, $\tau $); if ${T}_{{m}_{1}}\ge \tau ,$ the test is terminated at $\omega ={T}_{{m}_{1}}$. Therefore, the data under Type-I GHCS are $\underline{\mathbf{t}}=({t}_{1},{t}_{2},\dots ,{t}_{k}),$ where the number of failed units k and the corresponding test termination time $\omega $ are defined by $\left(k,\omega \right)=\left({m}_{1},{T}_{{m}_{1}}\right)$ if ${T}_{{m}_{1}}\ge \tau $; $\left(k,\omega \right)=\left({m}_{2},{T}_{{m}_{2}}\right)$ if ${T}_{{m}_{1}}<{T}_{{m}_{2}}\le \tau $; and $\left(k,\omega \right)=\left(k,\tau \right)$ with ${m}_{1}\le k\le {m}_{2}$ if ${T}_{{m}_{1}}<\tau <{T}_{{m}_{2}}$. For more details, see Chakrabarty et al. [36].
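The Type-I GHCS termination rule above can be summarized by a short sketch (illustrative Python; the function name and the representation of the failure times as a plain list are ours). The code observes the full set of failure times and then applies the rule, which reproduces the observed $(k,\omega )$ even though a real test would stop early.

```python
def type1_ghcs(times, m1, m2, tau):
    """Return (k, omega) under Type-I GHCS from the full list of failure times."""
    t = sorted(times)
    if t[m1 - 1] >= tau:                  # T_{m1} >= tau: stop at the m1-th failure
        return m1, t[m1 - 1]
    if t[m2 - 1] <= tau:                  # T_{m2} <= tau: stop at the m2-th failure
        return m2, t[m2 - 1]
    k = sum(1 for x in t if x <= tau)     # T_{m1} < tau < T_{m2}: stop at tau
    return k, tau
```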
Type-II GHCS: Assume that n units are involved in the experiment. Two times $({\tau}_{1},{\tau}_{2}),$ $0<{\tau}_{1}<{\tau}_{2}\le \infty $, and an integer m are fixed in advance. When the test is running, the failure times ${T}_{i},$ $i\ge 1$, are recorded. If ${T}_{m}<{\tau}_{1}$, then the test is terminated at $\omega ={\tau}_{1}$; if ${\tau}_{1}<{T}_{m}<{\tau}_{2}$, the test is terminated at $\omega ={T}_{m}$; and if ${\tau}_{1}<{\tau}_{2}<{T}_{m}$, the test is terminated at $\omega ={\tau}_{2}$. Therefore, the data under Type-II GHCS are $\underline{\mathbf{t}}\phantom{\rule{3.33333pt}{0ex}}=\{{t}_{1},{t}_{2},\dots ,{t}_{k}\},$ where k is the number of failed units. The integer k and the corresponding test termination time $\omega $ are defined by $\left(k,\omega \right)=\left(k>m,{\tau}_{1}\right)$ if ${T}_{m}<{\tau}_{1}$; $\left(k,\omega \right)=\left(m,{T}_{m}\right)$ if ${\tau}_{1}<{T}_{m}<{\tau}_{2}$; and $\left(k,\omega \right)=\left(k<m,{\tau}_{2}\right)$ if ${\tau}_{1}<{\tau}_{2}<{T}_{m}$. In this paper, we adopt Type-II GHCS, with ${\tau}_{1}$ and ${\tau}_{2}$ as the shortest and longest test times, respectively. The time ${\tau}_{2}$ is the absolute longest time for which the experimenter is willing to let the experiment continue, so tests under Type-II GHCS are guaranteed to terminate by the prefixed time ${\tau}_{2}$, which is suitable for many applications. The possibility of removing units from the test before the final termination point is not available in the two GHCS schemes (Type-I and Type-II); it is, however, available in generalized progressive censoring schemes (GPCSs); see Balakrishnan [37], Balakrishnan and Cramer [38] and Elsherpieny et al. [39].
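Analogously to the Type-I case, the Type-II GHCS termination rule can be sketched as follows (illustrative Python; the function name and list-based representation are ours):

```python
def type2_ghcs(times, m, tau1, tau2):
    """Return (k, omega) under Type-II GHCS from the full list of failure times."""
    t = sorted(times)
    tm = t[m - 1]
    if tm < tau1:                                   # many early failures: run until tau1
        return sum(1 for x in t if x <= tau1), tau1
    if tm < tau2:                                   # stop at the m-th failure
        return m, tm
    return sum(1 for x in t if x <= tau2), tau2     # guaranteed stop at tau2
```

The three return branches correspond to the cases $k>m$, $k=m$ and $k<m$ described above.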
3. Modeling
Suppose that, from a population consisting of two lines ${\mathsf{\Omega}}_{1}$ and ${\mathsf{\Omega}}_{2}$, a joint random sample of size $N={n}_{1}+{n}_{2}$ is randomly selected (${n}_{1}$ units from ${\mathsf{\Omega}}_{1}$ and ${n}_{2}$ from ${\mathsf{\Omega}}_{2}$). We consider only two potential causes of failure, and we adopt Type-II GHCS with two times $({\tau}_{1},{\tau}_{2}),$ $0<{\tau}_{1}<{\tau}_{2}\le \infty $, and an integer $m.$ During the experiment, the failure time ${T}_{i}$, the unit type ${\eta}_{i}\in \{1,0\}$ (where 1 indicates a unit from line ${\mathsf{\Omega}}_{1}$ and 0 a unit from line ${\mathsf{\Omega}}_{2}$) and the cause of failure ${\rho}_{i}\in \{1,2\}$ (failure under cause one or two) are recorded. When the first failure is observed, we record (${t}_{1},$ ${\eta}_{1},$ ${\rho}_{1}$); when the second failure is observed, (${t}_{2},{\eta}_{2},{\rho}_{2}$) is recorded, and so on. Under Type-II GHCS, the number of failed units and the corresponding test termination time are denoted by (k, $\omega $), respectively. If ${T}_{m}<{\tau}_{1}$, the test is terminated at $\omega ={\tau}_{1}$; if ${\tau}_{1}<{T}_{m}<{\tau}_{2}$, the test is terminated at $\omega ={T}_{m}$; and if ${\tau}_{1}<{\tau}_{2}<{T}_{m}$, the test is terminated at $\omega ={\tau}_{2}$. 
Therefore, the observed joint Type-II GHC competing risks data are defined by $\underline{\mathbf{t}}\phantom{\rule{3.33333pt}{0ex}}=\left\{\right({t}_{1},$ ${\eta}_{1},$ ${\rho}_{1}),({t}_{2},$ ${\eta}_{2},$ ${\rho}_{2}),$ $\dots ,({t}_{k},{\eta}_{k},{\rho}_{k})\}$, where $\left(k,\omega \right)=\left(k>m,{\tau}_{1}\right)$ if ${T}_{m}<{\tau}_{1}$; $\left(k,\omega \right)=\left(m,{T}_{m}\right)$ if ${\tau}_{1}<{T}_{m}<{\tau}_{2}$; and $\left(k,\omega \right)=\left(k<m,{\tau}_{2}\right)$ if ${\tau}_{1}<{\tau}_{2}<{T}_{m}.$ The proposed model under the joint Type-II GHC competing risks data $\underline{\mathbf{t}}$ includes the following assumptions:
The number of failures taken from line ${\mathsf{\Omega}}_{1}$ is given by ${k}_{1}={\displaystyle \sum _{i=1}^{k}}{\eta}_{i}$ and that from line ${\mathsf{\Omega}}_{2}$ is given by ${k}_{2}={\displaystyle \sum _{i=1}^{k}}(1-{\eta}_{i})$.
The number of failures taken from line ${\mathsf{\Omega}}_{1}$ under cause j is given by ${m}_{1j}={\displaystyle \sum _{i=1}^{k}}{\eta}_{i}\,\delta ({\rho}_{i}=j)$ and that from line ${\mathsf{\Omega}}_{2}$ is given by ${m}_{2j}={\displaystyle \sum _{i=1}^{k}}(1-{\eta}_{i})\,\delta ({\rho}_{i}=j)$.
The latent failure time ${T}_{i}$ is defined by ${T}_{i}=min$(${T}_{is1},{T}_{is2}$), where s denotes the unit type, $i=1,2,\dots ,k$.
The ith failure time ${T}_{isj}$ of line ${\mathsf{\Omega}}_{s}$ under cause $j,$ $i=1,2,\dots ,k$, has the Gompertz lifetime distribution with CDF given by
The latent failure time ${T}_{i}=min$(${T}_{is1},{T}_{is2}$) has a Gompertz lifetime distribution with a CDF given by
The integer number of failures ${m}_{sj}$ obtained from line ${\mathsf{\Omega}}_{s}$ under cause j, $s,$ $j=1,$ 2, has the binomial distribution $B\left({k}_{s},\frac{{\theta}_{sj}}{{\theta}_{s1}+{\theta}_{s2}}\right)$.
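The binomial split of failures between the two causes in the last assumption can be mimicked as follows (a sketch; split_causes is a hypothetical helper name of ours):

```python
import random

def split_causes(k_s, theta_s1, theta_s2, rng=random):
    """Draw m_{s1} ~ Binomial(k_s, theta_s1/(theta_s1+theta_s2)); return (m_{s1}, m_{s2})."""
    p = theta_s1 / (theta_s1 + theta_s2)
    m_s1 = sum(1 for _ in range(k_s) if rng.random() < p)
    return m_s1, k_s - m_s1
```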
The likelihood function of the joint Type-II GHC competing risks data $\underline{\mathbf{t}}\phantom{\rule{3.33333pt}{0ex}}=\left\{\right({t}_{1},$ ${\eta}_{1},$ ${\rho}_{1}),({t}_{2},$ ${\eta}_{2},$ ${\rho}_{2}),$ $\dots ,({t}_{k},{\eta}_{k},{\rho}_{k})\}$ (see Abdulaziz et al. [26]) is given by
where ${f}_{sj}(.)$ and ${S}_{sj}(.)$ are the density and reliability functions of type s and cause j, $s,$ $j=1,$ 2, and $\delta ({\rho}_{i}=j)$ is defined by
5. Bootstrap Confidence Intervals
The bootstrap method is a resampling technique for statistical inference that can be used to construct confidence intervals (CIs) for the model parameters. In the literature, the bootstrap technique is frequently used to gauge an estimator’s bias and variance, and it is widely used to calibrate hypothesis tests. There are two types of bootstrap techniques, parametric and nonparametric; see Davison and Hinkley [44] and Efron and Tibshirani [45]. Within the parametric bootstrap, the percentile bootstrap-p and bootstrap-t techniques are applied; see Efron [46] and Hall [47]. In this section, we adopt the percentile bootstrap-p and bootstrap-t techniques to formulate the confidence intervals of the model parameters, which can be implemented with the following algorithm (Algorithm 1).
Algorithm 1 Percentile bootstrap-p and bootstrap-t confidence intervals. 
 Step 1:
For the given original joint Type-II GHC competing risks data $\underline{\mathbf{t}}\phantom{\rule{3.33333pt}{0ex}}=\left\{\right({t}_{1},$ ${\eta}_{1},$ ${\rho}_{1}),({t}_{2},$ ${\eta}_{2},$ ${\rho}_{2}),$ $\dots ,({t}_{k},{\eta}_{k},{\rho}_{k})\}$, compute the ML estimates of the model parameters $\widehat{\mathsf{\Theta}}=\{{\widehat{\theta}}_{11},$ ${\widehat{\theta}}_{12},$ ${\widehat{\theta}}_{21},$ ${\widehat{\theta}}_{22},$ ${\widehat{\beta}}_{1},$ ${\widehat{\beta}}_{2}\}$.  Step 2:
Generate a sample of size ${n}_{1}$ from Gompertz(${\widehat{\beta}}_{1},$ ${\widehat{\theta}}_{11}+{\widehat{\theta}}_{12}$) and a sample of size ${n}_{2}$ from Gompertz(${\widehat{\beta}}_{2},$ ${\widehat{\theta}}_{21}+{\widehat{\theta}}_{22}$).  Step 3:
For a given $({\tau}_{1},$ ${\tau}_{2})$ and $m,$ generate the joint TypeII GHC competing risks data defined by ${\underline{\mathbf{t}}}^{*}=\left\{\right({t}_{1}^{*},$ ${\eta}_{1},$ ${\rho}_{1}),({t}_{2}^{*},$ ${\eta}_{2},$ ${\rho}_{2}),$ $\dots ,({t}_{k}^{*},{\eta}_{k},{\rho}_{k})\}.$  Step 4:
Using the bootstrap sample ${\underline{\mathbf{t}}}^{*}$, compute the integers $k,$ ${k}_{1},$ ${k}_{2}$ and determine the termination time $\omega .$  Step 5:
The numbers of failures ${m}_{sj}$ (obtained from line ${\mathsf{\Omega}}_{s}$ under cause $j,$ where $s,$ $j=1,$ 2) are generated from the binomial distribution with parameters ${k}_{s}$ and $\frac{{\theta}_{sj}}{{\theta}_{s1}+{\theta}_{s2}}.$  Step 6:
The bootstrap estimates ${\widehat{\mathsf{\Theta}}}^{*}=\{{\widehat{\theta}}_{11}^{*},{\widehat{\theta}}_{12}^{*},{\widehat{\theta}}_{21}^{*},{\widehat{\theta}}_{22}^{*},{\widehat{\beta}}_{1}^{*},{\widehat{\beta}}_{2}^{*}\}$ are computed using (10) and (15).  Step 7:
Repeat Steps (2–6) $\mathbf{N}$ times.  Step 8:
The resulting bootstrap estimates are arranged in ascending order, $({\widehat{\mathsf{\Theta}}}_{i}^{*\left(1\right)},$ ${\widehat{\mathsf{\Theta}}}_{i}^{*\left(2\right)},$ $\dots ,{\widehat{\mathsf{\Theta}}}_{i}^{*\left(\mathbf{N}\right)})$, $i=1,$ $2,$ $\dots ,6.$

Percentile bootstrap confidence interval (PBCI): Let $\u03dd\left(z\right)=P({\widehat{\mathsf{\Theta}}}_{i}^{*}\u2a7dz),$ $i=1,$ $2,$ $\dots ,6$, be the empirical cumulative distribution function of ${\widehat{\mathsf{\Theta}}}_{i}^{*};$ then, the point bootstrap estimate of ${\mathsf{\Theta}}_{i}$ is given by
The corresponding $100(1-2\alpha )\%$ PBCIs are given by
where ${\widehat{\mathsf{\Theta}}}_{i}^{*}={\u03dd}^{-1}\left(z\right)$.
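The PBCI construction above amounts to reading off the empirical $\alpha $ and $1-\alpha $ quantiles of the ordered bootstrap replicates; a minimal sketch (the indexing convention for the empirical quantiles is our own choice):

```python
import math

def percentile_ci(boot, alpha=0.05):
    """100(1-2*alpha)% percentile bootstrap CI from the ordered replicates."""
    s = sorted(boot)
    n = len(s)
    lo = s[math.floor(alpha * n)]                 # empirical alpha quantile
    hi = s[math.ceil((1.0 - alpha) * n) - 1]      # empirical (1-alpha) quantile
    return lo, hi
```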
Bootstrap-t confidence interval (BTCI)
From the ordered sample $({\widehat{\mathsf{\Theta}}}_{i}^{*\left(1\right)},$ ${\widehat{\mathsf{\Theta}}}_{i}^{*\left(2\right)},$ $\dots ,{\widehat{\mathsf{\Theta}}}_{i}^{*\left(\mathbf{N}\right)})$, $i=1,$ $2,$ $\dots ,$ 6, we construct the order statistics ${\Delta}_{i}^{*\left(1\right)}<{\Delta}_{i}^{*\left(2\right)}<\dots <{\Delta}_{i}^{*\left(\mathbf{N}\right)},$ where
Hence, the $100(1-2\alpha )\%$ BTCIs are given by
where ${\tilde{\Delta}}_{l,\mathrm{boot-t}}^{*}$ is given by
and $\u03dd\left(z\right)=P({\widehat{\Delta}}_{i}^{*}\u2a7dz)$ is the empirical cumulative distribution function of ${\widehat{\Delta}}_{i}^{*}$.
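The bootstrap-t interval studentizes each replicate before inverting the empirical quantiles. A sketch under the usual plug-in form (the standard-error estimates boot_se are assumed to come from, e.g., the observed information of each bootstrap sample; this is our illustration, not the paper's exact formula):

```python
import math

def boot_t_ci(theta_hat, boot_est, boot_se, alpha=0.05):
    """Bootstrap-t CI: studentize each replicate, then invert the quantiles of Delta*."""
    delta = sorted((b - theta_hat) / se for b, se in zip(boot_est, boot_se))
    n = len(delta)
    d_lo = delta[math.floor(alpha * n)]
    d_hi = delta[math.ceil((1.0 - alpha) * n) - 1]
    mean = sum(boot_est) / n
    se_hat = math.sqrt(sum((b - mean) ** 2 for b in boot_est) / (n - 1))  # plug-in SE
    return theta_hat - d_hi * se_hat, theta_hat - d_lo * se_hat
```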
6. Bayesian Approach
In this section, given the joint Type-II GHC competing risks data $\underline{\mathbf{t}}\phantom{\rule{3.33333pt}{0ex}}=\left\{\right({t}_{1},$ ${\eta}_{1},$ ${\rho}_{1}),({t}_{2},$ ${\eta}_{2},$ ${\rho}_{2}),$ $\dots ,({t}_{k},{\eta}_{k},{\rho}_{k})\},$ we consider the problem of the Bayesian estimation of the model parameters. We assume that the priors of the unknown parameters are independent gamma distributions. Therefore, the prior information for the parameter vector $\mathsf{\Theta}=\{{\theta}_{11},$ ${\theta}_{12},$ ${\theta}_{21},$ ${\theta}_{22},$ ${\beta}_{1},$ ${\beta}_{2}\}$ is formulated as
Hence, the joint prior density function of the model parameters is given by
The joint posterior density function of the model parameters is given by
Inserting (6) and (37) in (38) and ignoring the additive constant, the joint posterior density can be expressed as
Under the squared error loss (SEL) function, the Bayes estimate of a parameter is the posterior mean. Then, the Bayes estimate of the parameters, or of any function of the parameters such as the reliability or failure rate functions, say $\mathsf{\Psi}\left(\mathsf{\Theta}\right)$, is given by
Equation (40) shows that the Bayes estimate of $\mathsf{\Psi}\left(\mathsf{\Theta}\right)$ requires computing a high-dimensional integral. Appropriate numerical methods can be used to approximate the Bayesian estimates.
One of the most common methods, applied in this paper, is the Markov chain Monte Carlo (MCMC) method. Compared with traditional methods, the MCMC method is more flexible and provides an alternative approach to parameter estimation. The key to the MCMC technique is to obtain the posterior distribution in empirical form by generating MCMC samples from it, and then to compute the Bayes estimates and construct the associated credible intervals. We describe this technique as follows.
From Equation (39), the full conditional posterior density functions of the parameters given the data can be obtained as
and
where $j=1,2$ and, for example, $({\theta}_{11}|{\mathsf{\Theta}}_{-{\theta}_{11}},\underline{\mathbf{t}})$ means $({\theta}_{11}|{\theta}_{12},{\theta}_{21},{\theta}_{22},{\beta}_{1},{\beta}_{2},\underline{\mathbf{t}}).$ The full conditional posterior distributions show that four of the conditionals reduce to gamma distributions, from which random numbers can be generated by any conventional method, while the two remaining conditionals have no standard form, so random samples cannot be drawn from them directly. Therefore, to generate random samples from these two distributions, the Metropolis–Hastings (M–H) algorithm with a normal proposal distribution can be used; see [48]. The following steps describe the algorithm used to generate from the posterior distribution (Algorithm 2).
Algorithm 2 Gibbs with M–H sampler algorithm. 
 Step 1:
Begin with the iteration index $J=1$ and the initial parameter values ${\mathsf{\Theta}}^{\left(0\right)}=\{{\widehat{\theta}}_{11},$ ${\widehat{\theta}}_{12},$ ${\widehat{\theta}}_{21},$ ${\widehat{\theta}}_{22},$ ${\widehat{\beta}}_{1},$ ${\widehat{\beta}}_{2}\}$.  Step 2:
The values ${\theta}_{1j}^{\left(J\right)}$ and ${\theta}_{2j}^{\left(J\right)}$ are generated from gamma distributions given by (40) and (41), respectively, $j=$ 1, 2.  Step 3:
The values ${\beta}_{j}^{\left(J\right)}$ are generated using the M–H algorithm with a normal proposal distribution with mean ${\beta}_{j}^{(J-1)}$ and variance ${e}_{j+4,j+4}$, obtained from the approximate information matrix, $j=$ 1, 2, as follows
 (I)
For the index $j=1,2$, begin with the starting point ${\beta}_{j}^{(J-1)},$ where ${\beta}_{j}^{\left(0\right)}={\widehat{\beta}}_{j}$.  (II)
Generate a candidate point ${\beta}_{j}^{(*)}$ from N(${\beta}_{j}^{(J-1)},{e}_{j+4,j+4}$) as the proposal distribution.  (III)
Compute the acceptance probability from (43) and (44)
 (IV)
Generate ${U}_{j}$ from uniform (0, 1).  (V)
If ${U}_{j}\le {P}_{j}\left({\beta}_{j}^{(J-1)},{\beta}_{j}^{(*)}\right),$ we accept the candidate point ${\beta}_{j}^{(*)}$ as ${\beta}_{j}^{\left(J\right)}$; otherwise, ${\beta}_{j}^{(*)}$ is rejected and ${\beta}_{j}^{\left(J\right)}={\beta}_{j}^{(J-1)}$ is set.
 Step 4:
Set $J=J+1$.  Step 5:
Repeat steps (2–4) N times.  Step 6:
Arrange the generated parameter values ${\mathsf{\Theta}}_{i}^{\left(J\right)}$ in ascending order, denoted ${\mathsf{\Theta}}_{i}^{\left[J\right]}$, $i=1,2,\dots ,6.$

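Step 3 of Algorithm 2 is a standard random-walk Metropolis–Hastings update. One update can be sketched as follows (log_post stands for the log of the full conditional in (43) and (44), which we leave abstract here; the helper name is ours):

```python
import math
import random

def mh_step(beta_cur, log_post, prop_sd):
    """One M-H update with a normal random-walk proposal N(beta_cur, prop_sd^2)."""
    beta_prop = random.gauss(beta_cur, prop_sd)
    if beta_prop <= 0.0:                 # the Gompertz parameter must stay positive
        return beta_cur
    log_ratio = log_post(beta_prop) - log_post(beta_cur)
    if math.log(random.random()) < log_ratio:
        return beta_prop                 # accept the candidate
    return beta_cur                      # reject and keep the current value
```

For instance, targeting a full conditional proportional to ${e}^{-\beta}$ corresponds to log_post = lambda b: -b.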
6.1. MCMC Bayesian Point Estimations
The initial simulated variates of the algorithm are often discarded at the start of the analysis (the burn-in period) to eliminate the bias caused by the initially selected values. Suppose that the number of iterations needed to reach the stationary distribution is ${\mathbf{N}}^{*}$ (the burn-in). In all computations, we take ${\mathbf{N}}^{*}=1000$ iterations. Hence, the Bayes point estimator obtained using the MCMC method is given by
The corresponding variance in the Bayes estimate is given by
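The burn-in, posterior-mean and posterior-variance computations above can be sketched as follows (chain is the vector of MCMC draws of one parameter; the helper name is ours):

```python
def mcmc_estimate(chain, burn_in=1000):
    """SEL Bayes estimate (posterior mean) and its variance from post-burn-in draws."""
    kept = chain[burn_in:]
    n = len(kept)
    mean = sum(kept) / n                               # posterior mean
    var = sum((x - mean) ** 2 for x in kept) / (n - 1) # posterior variance
    return mean, var
```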
6.2. MCMC Bayesian Interval Estimations
To establish the two-sided credible intervals of $\mathsf{\Psi}\left(\mathsf{\Theta}\right)$, sort $\mathsf{\Psi}\left({\mathsf{\Theta}}_{i}^{\left(j\right)}\right)$, $i=1,2,3,4,5,6,$ $j={\mathbf{N}}^{*}+1,{\mathbf{N}}^{*}+2,\dots ,N,$ in ascending order. Hence, the $100(1-2\alpha )\%$ credible intervals of $\mathsf{\Psi}\left(\mathsf{\Theta}\right)$ can be constructed as:
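The credible-interval construction can be sketched as follows (same empirical-quantile convention as in the bootstrap section; the helper name is ours):

```python
import math

def credible_interval(chain, burn_in=1000, alpha=0.05):
    """100(1-2*alpha)% credible interval from the sorted post-burn-in draws."""
    kept = sorted(chain[burn_in:])
    n = len(kept)
    lo = kept[math.floor(alpha * n)]
    hi = kept[math.ceil((1.0 - alpha) * n) - 1]
    return lo, hi
```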
7. Simulation Studies
In this section, the estimation results obtained and developed in this paper are assessed and compared using a Monte Carlo simulation study. In our study, we assessed the effect of changing the sample size $N={n}_{1}+{n}_{2},$ the effective sample size $m,$ the two times (${\tau}_{1}$, ${\tau}_{2})$ and the parameter vector $\mathsf{\Theta}=({\theta}_{11},$ ${\theta}_{12},$ ${\theta}_{21},$ ${\theta}_{22},$ ${\beta}_{1},$ ${\beta}_{2})$. Therefore, we adopted two sets of parameter values, ${\mathsf{\Theta}}_{1}=\{0.05,0.1,0.07,0.12,0.4,0.5\}$ and ${\mathsf{\Theta}}_{2}=\{0.2$, $0.3,0.4,0.2,1.0,1.0\}$. For the censoring schemes, different combinations were adopted and are shown in Table 1, Table 2, Table 3 and Table 4. The prior information was selected using the relation (prior mean $\simeq \frac{{a}_{i}}{{b}_{i}}),$ where ${a}_{i}$ and ${b}_{i}$ are the hyperparameters of the gamma prior. The point estimates were assessed by computing the mean squared error (MSE). The interval estimates were evaluated using the average length (AL) criterion, as well as the coverage probabilities (CPs). Using the Bayesian approach, we adopted the following priors.
A noninformative prior (${P}^{1}$) and an informative prior (${P}^{2}$) were selected, where ${P}^{1}\equiv $ (${a}_{i},$ ${b}_{i})=($0.0001, 0.0001$),$ ${P}^{2}=\{$(0.5, 5), (0.5, 4), (1, 6), (1, 4), (1, 3), (2, 4)} for ${\mathsf{\Theta}}_{1}$ and ${P}^{2}=\{$(1, 3), (2, 5), (2, 4), (1, 3), (2, 2), (3, 2)} for ${\mathsf{\Theta}}_{2}$. For the MCMC method, we ran 11,000 iterations and discarded the first 1000 iterations as burn-in. The simulation results were obtained according to the following algorithm (Algorithm 3).
Algorithm 3 Monte Carlo simulation study. 
 Step 1:
From the Gompertz distribution with parameters ${\beta}_{s}$ and ${\theta}_{s1}+{\theta}_{s2}$, generate samples of size ${n}_{1}$ and ${n}_{2},$ $s=1,2,$ respectively.  Step 2:
From the joint sample of size $n={n}_{1}+{n}_{2}$ and for given censoring parameters $m,$ ${\tau}_{1},$ ${\tau}_{2}$: if ${T}_{m}$ $<{\tau}_{1}$, then $k\ge m$ and the test is terminated at $\omega ={\tau}_{1}$; if ${\tau}_{1}<{T}_{m}$ $<{\tau}_{2},$ then $k=m$ and the test is terminated at $\omega ={T}_{m}$; and if ${\tau}_{1}<{\tau}_{2}$ $<{T}_{m},$ then $k\le m$ and the test is terminated at $\omega ={\tau}_{2}.$  Step 3:
From step 2, the number of failures $\mathbf{k}$, test termination time $\omega $ and failure times are generated. Hence, the observed joint TypeII GHC competing risks data are obtained.  Step 4:
The two values ${k}_{1}$ and ${k}_{2}$ (number of units from the first and second line in joint TypeII GHC competing risks data) are observed.  Step 5:
The integer numbers ${m}_{sj},$ $s,j=1,2$ are generated from binomial distributions.  Step 6:
We obtain the various estimates by considering 1000 replications of the samples; Steps (1–4) are repeated 1000 times.  Step 7:
For each sample, the ML, bootstrap and Bayes estimates are computed.  Step 8:

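Step 1 of the simulation requires Gompertz variates; these can be drawn by inverting the CDF, $t={\beta}^{-1}\log \left(1-\frac{\beta}{\theta}\log (1-u)\right)$ for $u\sim U(0,1)$ (a sketch under the parameterization assumed earlier; the function name is ours):

```python
import math
import random

def rgompertz(n, beta, theta, rng=random):
    """Draw n Gompertz(beta, theta) variates by inverting
    F(t) = 1 - exp(-(theta/beta) * (exp(beta*t) - 1))."""
    out = []
    for _ in range(n):
        u = rng.random()
        out.append((1.0 / beta) * math.log(1.0 - (beta / theta) * math.log(1.0 - u)))
    return out
```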
Discussion: Recently, the problem of obtaining adequate information about competing lifetime distributions and their parameters has been of interest to many authors. Therefore, the reliability experimenter may resort to censoring techniques. In this paper, we proposed the joint Type-II GHCS. The behavior of the different estimation methods under different censoring schemes can be assessed from the simulation study. The numerical results presented in Table 1, Table 2, Table 3 and Table 4 show that the proposed model and the estimation methods work well. The quality of the proposed model did not change for different model parameter values. We summarize some points that describe the capabilities and behavior of the estimators as follows.
The values of the MSEs decrease when the sample size ${n}_{1}$ + ${n}_{2}$ or the effective sample size m increases.
The model quality improves with increasing ${\tau}_{1}$ and ${\tau}_{2}$.
The results under classical ML and noninformative Bayes estimation are close to each other.
Bayes estimates under informative priors provide the best estimation.
Estimation results under two Gompertz distribution parameters are more acceptable.
Interval estimates are more acceptable when using the bootstrap-t and informative Bayes methods.
8. Real Data Analysis
Real data sets obtained from laboratory experiments were used to illustrate the results of this paper. These data, presented by Hoel [49], describe the survival times of male mice kept in a conventional laboratory environment and exposed to a radiation dose of 300 roentgens at an age of 5–6 weeks. These data were analyzed by Pareek et al. [50], Sarhan et al. [51] and Cramer and Schmiedt [52]. Data obtained under progressive first-failure censoring of a Gompertz population were analyzed by Soliman et al. [27,28]. In this section, we considered two groups of irradiated male mice, as shown in Table 5. For the causes of failure, we considered thymic lymphoma as the first cause, and all other causes were combined as the second cause of failure. The data were divided by 1000 for simplicity of computation. To generate the joint Type-II GHC competing risks sample, the following algorithm was used (Algorithm 4).
Table 5.
Two groups of failure for the laboratory radiation male mice ${\mathsf{\Omega}}_{1}$ and ${\mathsf{\Omega}}_{2}$.
Thymic Lymphoma 
${\mathsf{\Omega}}_{1}$  159  189  191  198  200  207  220  235  245  250  256  261  265  266 
 280  343  356  383  403  414  428  432       
Other causes 
${\mathsf{\Omega}}_{1}$  40  42  51  62  163  179  206  222  228  252  249  282  324  333 
 341  366  385  407  420  431  441  461  462  482  517  517  524  564 
 567  586  619  620  621  622  647  651  686  761  763    
Thymic Lymphoma 
${\mathsf{\Omega}}_{2}$  158  192  193  194  195  202  212  215  229  230  237  240  244  247 
 259  300  301  321  337  415  434  444  485  496  529  537  624  707 
 800              
Other causes 
${\mathsf{\Omega}}_{2}$  136  246  255  376  421  565  616  617  652  655  658  660  662  675 
 681  734  736  737  757  769  777  800  807  825  855  857  864  868 
 870  870  873  882  895  910  934  942  1015  1019     
Algorithm 4 Generate joint TypeII GHC competing risks data. 
 Step 1:
Suppose that the censoring scheme has m = 70, ${\tau}_{1}=0.2$, ${\tau}_{2}=0.4$ and (${n}_{1},$ ${n}_{2})=(61,$ $67)$.  Step 2:
For the joint sample of size $n={n}_{1}+{n}_{2}$ given in Table 5 and the corresponding censoring scheme, we observed that ${\tau}_{1}<{\tau}_{2}$ $<{T}_{m}.$ Step 3:
Hence, the value of $k=58<m$ and the test was terminated at $\omega ={\tau}_{2}=0.4$.  Step 4:
For the joint Type-II GHC data of size 58 given in Table 6, we obtained ${k}_{1}=35$ failures from the first line and $\phantom{\rule{4pt}{0ex}}{k}_{2}$ = 23 from the second line, and ( ${m}_{11},$ ${m}_{12},$ ${m}_{21},$ ${m}_{22}$) = (18, 17, 19, 4).

Table 6.
Joint Type-II GHCS competing risks sample from the Hoel data with $m=50$.
${t}_{i}$  0.04  0.042  0.051  0.062  0.136  0.158  0.159  0.163  0.179  0.189  0.191  0.192  0.193  0.194 
${\eta}_{i}$  1  1  1  1  0  0  1  1  1  1  1  0  0  0 
${\rho}_{i}$  2  2  2  2  2  1  1  2  2  1  1  1  1  1 
${t}_{i}$  0.195  0.198  0.2  0.202  0.206  0.207  0.212  0.215  0.22  0.222  0.228  0.229  0.23  0.235 
${\eta}_{i}$  0  1  1  0  1  1  0  0  1  1  1  0  0  1 
${\rho}_{i}$  1  1  1  1  2  1  1  1  1  2  2  1  1  1 
${t}_{i}$  0.237  0.24  0.244  0.245  0.246  0.247  0.249  0.25  0.252  0.255  0.256  0.259  0.261  0.265 
${\eta}_{i}$  0  0  0  1  0  0  1  1  1  0  1  0  1  1 
${\rho}_{i}$  1  1  1  1  2  1  2  1  2  2  1  1  1  1 
${t}_{i}$  0.266  0.28  0.282  0.3  0.301  0.321  0.324  0.333  0.337  0.341  0.343  0.356  0.366  0.376 
${\eta}_{i}$  1  1  1  0  0  0  1  1  0  1  1  1  1  0 
${\rho}_{i}$  1  1  2  1  1  1  2  2  1  2  1  1  2  2 
${t}_{i}$  0.383  0.385             
${\eta}_{i}$  1  1             
${\rho}_{i}$  1  2             
Table 7.
Point estimates with 95% CIs of the parameters.
Pa.  (.)${}_{\mathbf{ML}}$  (.)${}_{\mathbf{Boot}}$  (.)${}_{\mathbf{BMCMC}}$  ACI  Boot-p  Boot-t  CI 

${\theta}_{11}$  0.3365  0.5412  0.4675  (0.0501, 0.6228)  (0.0472, 1.3214)  (0.1784, 0.9115)  (0.1903, 0.9029) 
${\theta}_{12}$  0.3178  0.4578  0.4442  (0.0450, 0.5905)  (0.1472, 0.8897)  (0.1954, 0.8874)  (0.1782, 0.8789) 
${\theta}_{21}$  0.3156  0.4652  0.4636  (0.0006, 0.6306)  (0.0015, 0.9541)  (0.1924, 0.8556)  (0.1778, 0.8789) 
${\theta}_{22}$  0.0664  0.1243  0.1184  (0.0216, 0.1545)  (0.0824, 0.4123)  (0.0336, 0.2911)  (0.0298, 0.2824) 
${\beta}_{1}$  5.1907  5.3254  4.1962  (2.1508, 8.2306)  (2.3652, 8.4562)  (1.4215, 7.1921)  (1.3725, 7.0651) 
${\beta}_{2}$  4.5269  4.7771  3.3093  (0.8129, 8.2408)  (0.9112, 8.7214)  (0.741, 6.4007)  (0.6402, 6.4894) 
Using the joint Type-II GHCS data presented in Table 6, we plotted the profile log-likelihood function (16), as shown in Figure 1. The maximization requires initial values of the parameters ${\beta}_{1}$ and ${\beta}_{2}$; Figure 1 shows that the iterations can be started from initial values in the neighborhood of the maximizing values, so the initial values were taken to be (${\beta}_{1}$, ${\beta}_{2}$) = (5, 6). For Bayes estimation, we adopted a noninformative prior with ${a}_{i}={b}_{i}=0.0001,$ $i=1,$ $2,$ …, 6. For the MCMC approach in the Bayes method, we ran the chain for 11,000 iterations, with the first 1000 values discarded as burn-in. The empirical posterior distributions obtained from the MCMC approach are shown in Figure 2, Figure 3, Figure 4, Figure 5, Figure 6 and Figure 7. Hence, the ML point and interval estimates and the different Bayes estimates were computed, and the results are presented in Table 7 and Table 8.