Article

Parameter Estimation of Lindley Distribution Based on Progressive Type-II Censored Competing Risks Data with Binomial Removals

Department of Mathematics, Beijing Jiaotong University, Beijing 100044, China
*
Author to whom correspondence should be addressed.
Mathematics 2019, 7(7), 646; https://doi.org/10.3390/math7070646
Submission received: 12 June 2019 / Revised: 15 July 2019 / Accepted: 15 July 2019 / Published: 19 July 2019
(This article belongs to the Section Mathematics and Computer Science)

Abstract

The competing risks model based on the Lindley distribution is discussed under progressive type-II censoring with binomial removals. The maximum likelihood estimates of the unknown parameters of the distribution are established. Using the Lindley approximation method, we also obtain Bayesian estimates of the unknown parameters under different loss functions. The performance of the different estimates is studied in this article. A real dataset is analyzed for illustration.

1. Introduction

1.1. Lindley Distribution

In order to better investigate problems of product lifetime, Reference [1] proposed a new distribution, now known as the Lindley distribution. Since its introduction, the Lindley distribution has attracted the attention of many statisticians, and much research has been devoted to it in recent years. For example, Reference [2] explored the mathematical and statistical properties of the Lindley distribution, Reference [3] put forward an extended form of the distribution, a new two-parameter Lindley distribution was proposed and introduced by Reference [4], and a new lifetime-data modeling distribution based on the Lindley distribution was developed by Reference [5].
The Lindley distribution has a very wide range of applications in industry, medicine, biology and other fields. For instance, Reference [6] used the Lindley distribution to study the reliability of stress-strength systems. Considering a generalization of the Lindley distribution, Reference [7] came up with a new probability density function on a bounded domain and introduced a distorted premium principle based on this special class of distributions.
If a random variable obeys the Lindley distribution, its probability density function (PDF) and cumulative distribution function (CDF) are as follows:
$$f(x;\theta) = \frac{\theta^2}{\theta+1}(1+x)e^{-\theta x}, \quad x>0,\ \theta>0.$$
$$F(x;\theta) = 1-\left(1+\frac{\theta}{\theta+1}x\right)e^{-\theta x}, \quad x>0,\ \theta>0.$$
Here $\theta$ is the shape parameter of the Lindley distribution and is a positive real number. Moreover, the density of the Lindley distribution has a thin tail, since it decreases exponentially for large x.
Reference [2] showed that the Lindley distribution is a two-component mixture of an exponential distribution and a gamma distribution. Therefore, in many cases, the Lindley distribution is more flexible than either of these two distributions.
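To make the formulas above concrete, the following Python sketch (the helper names are ours, not from any statistics library) implements the pdf and cdf and numerically checks the mixture representation from Reference [2]: Lindley($\theta$) is a mixture of Exponential($\theta$) and Gamma($2,\theta$) with weights $\theta/(\theta+1)$ and $1/(\theta+1)$.

```python
import numpy as np

def lindley_pdf(x, theta):
    """Lindley pdf: theta^2/(theta+1) * (1+x) * exp(-theta*x) for x > 0."""
    x = np.asarray(x, dtype=float)
    return theta**2 / (theta + 1.0) * (1.0 + x) * np.exp(-theta * x)

def lindley_cdf(x, theta):
    """Lindley cdf: 1 - (1 + theta*x/(theta+1)) * exp(-theta*x) for x > 0."""
    x = np.asarray(x, dtype=float)
    return 1.0 - (1.0 + theta * x / (theta + 1.0)) * np.exp(-theta * x)

# Numerical check of the mixture representation:
# Lindley(theta) = w * Exponential(theta) + (1 - w) * Gamma(2, theta), w = theta/(theta+1)
theta = 1.0
x = np.linspace(0.01, 10.0, 200)
w = theta / (theta + 1.0)
mixture = w * theta * np.exp(-theta * x) + (1.0 - w) * theta**2 * x * np.exp(-theta * x)
assert np.allclose(lindley_pdf(x, theta), mixture)
```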

1.2. Progressive Type-II Censored Data with Binomial Removals

In many lifetime studies, the lifetimes of test units may not be recorded exactly. In practice, investigators often have to work with censored data because they do not have enough time to observe the lifetimes of all subjects in the experiment. There are different types of censoring patterns. The most commonly used schemes are type-I censoring and type-II censoring, in which the experiment stops at a fixed time and at a fixed number of failures, respectively. In addition, for various reasons, some test units may have to be removed at intermediate stages of the study, which leads to progressive censoring. For example, in medical research, reduced budgets or the withdrawal of patients can lead to progressively censored data. A great deal of research has addressed progressively censored data. For example, Reference [8] considered the progressive censoring model, assuming that the lifetime distributions of the competing causes are independent, and obtained the maximum likelihood estimates and approximate maximum likelihood estimates of the unknown parameters of the Weibull distribution. Data from a joint progressive type-II censoring scheme with samples from two production lines were studied in Reference [9]. Reference [10] investigated Bayesian interval prediction under a progressive-stress accelerated life test based on progressive type-II censored data.
There are also differences in how units are removed in progressive type-II censoring: the number of removals at each stage may follow different distributions, such as the discrete uniform distribution or the binomial distribution. The binomial distribution describes the number of successes in n independent trials. Assuming that each unit is removed from the study independently of the others with the same probability p, the number of removed units follows a binomial distribution. Reference [11] assumed that the number of censored units at every stage is random and follows a binomial distribution, and analyzed competing risks data from the Weibull distribution under the progressive type-II censoring model.
How to obtain progressive type-II censored data with binomial removals is described below.
Assume there are n units in the experiment at the start. When the first failure happens, $R_1$ units are removed at random from the remaining $n-1$ units. When the second failure happens, $R_2$ units are removed at random from the remaining $n - R_1 - 2$ units, and so on. Finally, when the m-th failure happens, all remaining surviving units (denoted $R_m$) are removed from the experiment. Each of the removal counts above follows a binomial distribution. Over the whole experiment we obtain m complete failure observations, and all the other units are removed, so $n = m + R_1 + R_2 + \cdots + R_m$. We denote the progressive censoring scheme by the vector $R = (R_1, R_2, \ldots, R_m)$. The complete sample and type-II right censoring are special cases of progressive type-II censoring: setting $R_1 = R_2 = \cdots = R_m = 0$ gives a complete sample, while setting $R_1 = \cdots = R_{m-1} = 0$, $R_m \ne 0$ gives a type-II censored sample.

1.3. Competing Risks

The competing risks model arises in reliability theory and is widely applied in many fields such as biomedical science and finance. In many situations, several risks lead to the failure of units in an experiment, each of which alone can make a unit fail; we call them competing risks. For instance, a light bulb may fail due to several factors, such as voltage magnitude and temperature level, and researchers studying the lifetime of light bulbs must account for all of these factors.
In recent years, many scholars have proposed methods for analyzing data with competing risks. Reference [12] analyzed competing risks data with missing failure causes under the accelerated failure time model. Reference [13] introduced competing risks data and critically reviewed widely used statistical methods for estimating and modeling the quantities of interest. Reference [14] analyzed competing risks data, including methods for calculating the cumulative incidence of events in the presence of competing risks, comparing cumulative incidence curves, and performing regression analysis for competing risks. Reference [10] studied and analyzed a set of survival data with competing risks through a number of methodologies based on a study of cardiovascular diseases. Reference [15] presented a Bayesian method for the joint analysis of longitudinal measurements and competing risks failure time data.
Without loss of generality, assume that there are two independent risks leading to unit failure. We denote the i-th failure time by $X_{i:m:n}$; the failure time of a unit is determined by whichever failure cause occurs first:
$$X_{i:m:n} = \min\left(X_{i:m:n}^{1},\, X_{i:m:n}^{2}\right).$$
Assume that the product's lifetime under a single risk follows the Lindley distribution; the corresponding probability density and survival functions are
$$f_j(x) = \frac{\theta_j^2}{\theta_j+1}(1+x)e^{-\theta_j x}, \quad x>0,\ \theta_j>0,\ j=1,2.$$
$$\bar{F}_j(x) = 1-F_j(x) = \left(1+\frac{\theta_j}{\theta_j+1}x\right)e^{-\theta_j x}, \quad x>0,\ \theta_j>0,\ j=1,2.$$
Considering progressive type-II censored data, this paper will discuss the parameter estimation of the Lindley distribution under this model.
This article is organized as follows. In Section 2, we theoretically derive the maximum likelihood estimates of the model parameters and also obtain the Bayesian estimates under three different loss functions. In Section 3, we use the results of Section 2 to carry out simulation experiments. In Section 4, we analyze a set of real data. In Section 5, we summarize the conclusions of this article.

2. Parameter Estimation

2.1. Maximum Likelihood Estimation

Maximum likelihood estimation is a classical and effective method for estimating unknown parameters. In this part, we discuss theoretically how to estimate the unknown parameters of this model by the maximum likelihood method.
In reality, the sample data we collect are often arranged in chronological order. We can denote the data as
$$\mathbf{x}^0 = \left((x_1^0, S_1^0, R_1^0), \ldots, (x_{n_1}^0, S_{n_1}^0, R_{n_1}^0), (x_{(n_1+1)}^0, S_{(n_1+1)}^0, R_{(n_1+1)}^0), \ldots, (x_m^0, S_m^0, R_m^0)\right),$$
where $x_i^0$, $S_i^0$ and $R_i^0$ ($1 \le i \le m$) represent the i-th failure time, the cause of the i-th failure and the number of units removed at the i-th failure, respectively.
Next, we discuss the number of removals at each censoring stage. Because the number of units removed at each stage follows a binomial distribution with probability p, we obtain
$$P(R_1^0 = r_1^0) = \binom{n-m}{r_1^0} p^{r_1^0}(1-p)^{n-m-r_1^0}, \quad 0 \le r_1^0 \le n-m.$$
$$P(R_i^0 = r_i^0 \mid R_{(i-1)}^0 = r_{(i-1)}^0, \ldots, R_1^0 = r_1^0) = \binom{n-m-\sum_{l=1}^{i-1} r_l^0}{r_i^0}\, p^{r_i^0}(1-p)^{n-m-\sum_{l=1}^{i} r_l^0},$$
where $0 \le r_i^0 \le n-m-\sum_{l=1}^{i-1} r_l^0$, $i = 2, 3, \ldots, m-1$. Multiplying these conditional probabilities gives
$$P(R^0 = r^0) = P(R_m^0 = r_m^0 \mid R_{m-1}^0 = r_{m-1}^0, \ldots, R_1^0 = r_1^0) \cdots P(R_2^0 = r_2^0 \mid R_1^0 = r_1^0)\, P(R_1^0 = r_1^0) = \frac{(n-m)!}{\prod_{i=1}^{m-1} r_i^0!\, \left(n-m-\sum_{i=1}^{m-1} r_i^0\right)!}\, p^{\sum_{i=1}^{m-1} r_i^0}(1-p)^{(m-1)(n-m)-\sum_{i=1}^{m-1}(m-i) r_i^0}.$$
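To illustrate the removal mechanism, a minimal Python sketch (the helper name is our own, written under the assumptions above) draws one removal vector and verifies that $n = m + \sum_i r_i^0$:

```python
import numpy as np

def draw_removals(n, m, p, seed=None):
    """Draw binomial removal counts r_1,...,r_m for progressive type-II censoring.

    r_i ~ Binomial(n - m - sum of earlier removals, p) for i < m;
    r_m removes all remaining surviving units.
    """
    rng = np.random.default_rng(seed)
    r = np.zeros(m, dtype=int)
    for i in range(m - 1):
        remaining = n - m - r[:i].sum()
        r[i] = rng.binomial(remaining, p)
    r[m - 1] = n - m - r[:-1].sum()
    return r

r = draw_removals(n=30, m=20, p=0.3, seed=42)
assert r.sum() == 30 - 20  # n = m + sum(r)
```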
Then, the likelihood function is expressed as follows [16]
$$f(\mathbf{x}^0 \mid \theta_1, \theta_2) = h(\mathbf{x}^0 \mid R^0 = r^0)\, P(R^0 = r^0),$$
$$h(\mathbf{x}^0 \mid R^0 = r^0) = C \prod_{i=1}^{m} \left[f_1(x_i^0)\bar{F}_2(x_i^0)\right]^{I(\delta_i^0=1)} \left[f_2(x_i^0)\bar{F}_1(x_i^0)\right]^{I(\delta_i^0=2)} \left[\bar{F}_1(x_i^0)\bar{F}_2(x_i^0)\right]^{r_i^0}.$$
Substituting (8) and (10) into (9), we can get
$$f(\mathbf{x}^0 \mid \theta_1, \theta_2) = C \prod_{i=1}^{m} \left[f_1(x_i^0)\bar{F}_2(x_i^0)\right]^{I(\delta_i^0=1)} \left[f_2(x_i^0)\bar{F}_1(x_i^0)\right]^{I(\delta_i^0=2)} \left[\bar{F}_1(x_i^0)\bar{F}_2(x_i^0)\right]^{r_i^0} \times \frac{(n-m)!}{\prod_{i=1}^{m-1} r_i^0!\, \left(n-m-\sum_{i=1}^{m-1} r_i^0\right)!}\, p^{\sum_{i=1}^{m-1} r_i^0}(1-p)^{(m-1)(n-m)-\sum_{i=1}^{m-1}(m-i) r_i^0},$$
where $f_j$ and $\bar{F}_j$ ($j = 1, 2$) are given in (3) and (4).
In order to simplify the above expression, we organize the data. We classify the data according to the failure causes of the units; within each class, the data are still sorted by failure time. Hence, the sorted data can be written as follows.
$$\mathbf{x} = \left((x_1, S_1=1, R_1), \ldots, (x_{n_1}, S_{n_1}=1, R_{n_1}), (x_{n_1+1}, S_{n_1+1}=2, R_{n_1+1}), \ldots, (x_m, S_m=2, R_m)\right),$$
where $n_1$ is the number of units that failed due to cause 1.
The data before and after sorting are in one-to-one correspondence and can be recovered from each other. In other words, the rearrangement only reorders the data and does not affect the data values themselves. Therefore,
$$h(\mathbf{x} \mid R = r) = h(\mathbf{x}^0 \mid R^0 = r^0) = C_1 \prod_{i=1}^{m} \left[f_1(x_i)\bar{F}_2(x_i)\right]^{I(\delta_i=1)} \left[f_2(x_i)\bar{F}_1(x_i)\right]^{I(\delta_i=2)} \left[\bar{F}_1(x_i)\bar{F}_2(x_i)\right]^{r_i}.$$
Further, we can get
$$h(\mathbf{x} \mid R = r) = C_1 \left(\frac{\theta_1^2}{\theta_1+1}\right)^{n_1} \left(\frac{\theta_2^2}{\theta_2+1}\right)^{n_2} \times \prod_{i=1}^{m} e^{-(r_i+1)(\theta_1+\theta_2)x_i}(1+x_i)\left(1+\frac{\theta_1}{\theta_1+1}x_i\right)^{r_i}\left(1+\frac{\theta_2}{\theta_2+1}x_i\right)^{r_i} \times \prod_{i=1}^{n_1}\left(1+\frac{\theta_2}{\theta_2+1}x_i\right) \times \prod_{i=n_1+1}^{m}\left(1+\frac{\theta_1}{\theta_1+1}x_i\right).$$
Here, C 1 is a constant.
For sorted data, the likelihood function can be expressed as
$$f(\mathbf{x} \mid \theta_1, \theta_2) = f(\mathbf{x}^0 \mid \theta_1, \theta_2) = h(\mathbf{x} \mid R = r)\, P(R^0 = r^0).$$
Combining (14) and (15), we can obtain the likelihood function of observed data, which is,
$$L(\theta_1, \theta_2, p \mid \mathbf{x}, R) = f(\mathbf{x} \mid \theta_1, \theta_2) \propto \left(\frac{\theta_1^2}{\theta_1+1}\right)^{n_1} \left(\frac{\theta_2^2}{\theta_2+1}\right)^{n_2} \times \prod_{i=1}^{m} e^{-(r_i+1)(\theta_1+\theta_2)x_i}(1+x_i)\left(1+\frac{\theta_1}{\theta_1+1}x_i\right)^{r_i}\left(1+\frac{\theta_2}{\theta_2+1}x_i\right)^{r_i} \times \prod_{i=1}^{n_1}\left(1+\frac{\theta_2}{\theta_2+1}x_i\right) \times \prod_{i=n_1+1}^{m}\left(1+\frac{\theta_1}{\theta_1+1}x_i\right) \times p^{\sum_{i=1}^{m-1} r_i^0}(1-p)^{(m-1)(n-m)-\sum_{i=1}^{m-1}(m-i) r_i^0}.$$
The log-likelihood function is as follows:
$$\begin{aligned} l(\theta_1, \theta_2, p \mid \mathbf{x}, R) ={}& C_2 + 2n_1\log\theta_1 - n_1\log(\theta_1+1) + 2n_2\log\theta_2 - n_2\log(\theta_2+1) \\ & - (\theta_1+\theta_2)\sum_{i=1}^{m} x_i(r_i+1) + \sum_{i=1}^{m}\log(1+x_i) + \sum_{i=1}^{m} r_i\log\left(1+\frac{\theta_1}{\theta_1+1}x_i\right) \\ & + \sum_{i=1}^{m} r_i\log\left(1+\frac{\theta_2}{\theta_2+1}x_i\right) + \sum_{i=1}^{n_1}\log\left(1+\frac{\theta_2}{\theta_2+1}x_i\right) + \sum_{i=n_1+1}^{m}\log\left(1+\frac{\theta_1}{\theta_1+1}x_i\right) \\ & + \sum_{i=1}^{m-1} r_i^0 \log p + \left[(m-1)(n-m) - \sum_{i=1}^{m-1}(m-i) r_i^0\right]\log(1-p),\end{aligned}$$
where C 2 is a constant.
Theorem 1.
Suppose the competing risks failure times follow Lindley distributions with parameters $\theta_1$ and $\theta_2$ under progressive type-II censoring with binomial removals (with parameter p). For $\theta_1 > 0$, $\theta_2 > 0$ and $0 < p < 1$, the MLE of $\theta_j$, $j = 1, 2$, exists and is unique; it can be obtained by solving the following equations, respectively.
$$\frac{\partial l(\theta_1, \theta_2, p \mid \mathbf{x}, R)}{\partial \theta_1} = 0.$$
$$\frac{\partial l(\theta_1, \theta_2, p \mid \mathbf{x}, R)}{\partial \theta_2} = 0.$$
Proof of Theorem 1.
If other parameters ( θ 2 , p) are fixed, taking the partial derivative of (17), we can obtain
l 1 = l ( θ 1 , θ 2 , p | x , R ) θ 1 = 2 n 1 θ 1 n 1 θ 1 + 1 i = 1 m x i ( r i + 1 ) + 1 θ 1 + 1 i = 1 m r i x i θ 1 + 1 + θ 1 x i + 1 θ 1 + 1 i = n 1 + 1 m x i r i θ 1 + 1 + θ 1 x i .
We can deduce that
$$\lim_{\theta_1 \to +\infty} l_1 = -\sum_{i=1}^{m} x_i(r_i+1) < 0,$$
$$\lim_{\theta_1 \to 0^+} l_1 > \lim_{\theta_1 \to 0^+}\left(\frac{2n_1}{\theta_1} - \frac{n_1}{\theta_1+1}\right) = +\infty.$$
Differentiating $l_1$ again, we obtain the second derivative
$$l_1' = -\frac{2n_1}{\theta_1^2} + \frac{n_1}{(\theta_1+1)^2} - \frac{1}{(\theta_1+1)^2}\sum_{i=1}^{m} \frac{r_i x_i}{\theta_1+1+\theta_1 x_i} - \frac{1}{\theta_1+1}\sum_{i=1}^{m} \frac{r_i x_i(1+x_i)}{(\theta_1+1+\theta_1 x_i)^2} - \frac{1}{(\theta_1+1)^2}\sum_{i=n_1+1}^{m} \frac{x_i}{\theta_1+1+\theta_1 x_i} - \frac{1}{\theta_1+1}\sum_{i=n_1+1}^{m} \frac{x_i(1+x_i)}{(\theta_1+1+\theta_1 x_i)^2}.$$
Because
$$l_1' < -\frac{2n_1}{\theta_1^2} + \frac{n_1}{(\theta_1+1)^2} < -\frac{2n_1}{(\theta_1+1)^2} + \frac{n_1}{(\theta_1+1)^2} < 0,$$
if $\theta_2$ and p are fixed, l is concave in $\theta_1$ and $l_1$ is strictly decreasing. Thus, under this condition, $l_1$ has a unique root, which maximizes (17).
Similarly, if θ 1 and p are fixed, for θ 2 , we know
$$l_2 = \frac{\partial l(\theta_1, \theta_2, p \mid \mathbf{x}, R)}{\partial \theta_2} = \frac{2n_2}{\theta_2} - \frac{n_2}{\theta_2+1} - \sum_{i=1}^{m} x_i(r_i+1) + \frac{1}{\theta_2+1}\sum_{i=1}^{m} \frac{r_i x_i}{\theta_2+1+\theta_2 x_i} + \frac{1}{\theta_2+1}\sum_{i=1}^{n_1} \frac{x_i}{\theta_2+1+\theta_2 x_i}.$$
Following similar steps, we conclude that under this condition $l_2$ also has a unique root, which maximizes (17). ☐
Theorem 2.
Suppose the competing risks failure times follow Lindley distributions with parameters $\theta_1$ and $\theta_2$ under progressive type-II censoring with binomial removals (with parameter p). For $\theta_1 > 0$, $\theta_2 > 0$ and $0 < p < 1$, the MLE of p exists and is unique; it can be obtained by solving the following equation.
$$\frac{\partial l(\theta_1, \theta_2, p \mid \mathbf{x}, R)}{\partial p} = 0.$$
Proof of Theorem 2.
If other parameters ( θ 1 , p) are fixed, taking the partial derivative of (17), we can obtain
l 3 = l ( θ 1 , θ 2 , p | x , R ) p = i = 1 m 1 r i 0 p ( m 1 ) ( n m ) i = 1 m 1 ( m i ) r i 0 1 p .
Since $\sum_{i=1}^{m-1}(m-i) r_i^0 < \sum_{i=1}^{m-1}(m-1) r_i^0 \le (m-1)(n-m)$, we know that $(m-1)(n-m) - \sum_{i=1}^{m-1}(m-i) r_i^0 > 0$.
We can deduce that
$$\lim_{p \to 1^-} l_3 = -\infty, \qquad \lim_{p \to 0^+} l_3 = +\infty.$$
Besides, in this situation, the second derivative of (17) can be written as
$$l_3' = -\frac{\sum_{i=1}^{m-1} r_i^0}{p^2} - \frac{(m-1)(n-m) - \sum_{i=1}^{m-1}(m-i) r_i^0}{(1-p)^2} < 0.$$
Therefore, if $\theta_1$ and $\theta_2$ are fixed, (17) is concave in p and $l_3$ is strictly decreasing. Hence $l_3$ has a unique root, which maximizes (17). ☐
Hence, the maximum likelihood estimates of the unknown parameters can be obtained by solving (18), (19) and (22). Since (18) and (19) are non-linear, explicit solutions are infeasible; in this situation, approximate solutions can be obtained with a numerical method such as the Newton-Raphson algorithm.
As for p, the explicit solution $\hat{p}$ is available:
$$\hat{p} = \frac{\sum_{i=1}^{m-1} r_i^0}{(m-1)(n-m) - \sum_{i=1}^{m-1}(m-i) r_i^0 + \sum_{i=1}^{m-1} r_i^0}.$$
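Since $l_1$ is strictly decreasing in $\theta_1$, a plain Newton-Raphson iteration on (18) behaves well. The sketch below is a minimal illustration: the function names, starting value and data layout are our own assumptions (x sorted so that the first n1 entries failed from cause 1). It solves for $\hat{\theta}_1$ and evaluates the closed-form $\hat{p}$ from (25); $\hat{\theta}_2$ follows by symmetry, with the last sum running over the cause-1 indices instead.

```python
import numpy as np

def theta1_mle(x, r, n1, theta0=1.0, tol=1e-10, max_iter=100):
    """Newton-Raphson root of l1(theta1) = 0; x sorted so x[:n1] failed from cause 1."""
    x, r = np.asarray(x, float), np.asarray(r, float)
    th = theta0
    for _ in range(max_iter):
        d = th + 1.0 + th * x  # recurring denominator theta1 + 1 + theta1 * x_i
        l1 = (2*n1/th - n1/(th + 1) - np.sum(x*(r + 1))
              + np.sum(r*x/d)/(th + 1) + np.sum(x[n1:]/d[n1:])/(th + 1))
        # second derivative: always negative, so the iteration is stable
        l1p = (-2*n1/th**2 + n1/(th + 1)**2
               - np.sum(r*x/d)/(th + 1)**2 - np.sum(r*x*(1 + x)/d**2)/(th + 1)
               - np.sum(x[n1:]/d[n1:])/(th + 1)**2
               - np.sum(x[n1:]*(1 + x[n1:])/d[n1:]**2)/(th + 1))
        step = l1 / l1p
        th = max(th - step, 1e-8)  # keep theta positive
        if abs(step) < tol:
            break
    return th

def p_mle(r0, n, m):
    """Closed-form MLE of p from (25); r0 holds the removal counts r_1,...,r_{m-1}."""
    r0 = np.asarray(r0, float)[:m - 1]
    s = r0.sum()
    weights = m - np.arange(1, m)  # (m - i) for i = 1,...,m-1
    return s / ((m - 1)*(n - m) - np.sum(weights * r0) + s)
```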

2.2. Bayesian Estimation

Compared with maximum likelihood estimation, Bayesian estimation takes both prior information and sample information into account when estimating the unknown parameters of interest. With suitable prior information, Bayesian estimates are often more accurate than maximum likelihood estimates. In this section, we give the Bayesian estimation of the model parameters $\theta_1$, $\theta_2$ and p.
According to the discussion in the previous section, we have obtained
$$f(\mathbf{x} \mid R = r) = h(\mathbf{x} \mid R = r) \times C_2 \times p^{\sum_{i=1}^{m-1} r_i^0}(1-p)^{(m-1)(n-m)-\sum_{i=1}^{m-1}(m-i) r_i^0}.$$
Now, we need to give the prior distribution of these parameters. Suppose that the prior distributions of θ 1 , θ 2 and p are G a m m a ( a 1 , b 1 ) , G a m m a ( a 2 , b 2 ) and B e t a ( c , d ) respectively ( a 1 > 0 , b 1 > 0 , a 2 > 0 , b 2 > 0 , c > 0 and d > 0 ). The pdfs for the prior distributions are given by
$$\pi_1(\theta_1) = \frac{b_1^{a_1}}{\Gamma(a_1)} e^{-b_1\theta_1}\theta_1^{a_1-1}, \quad \theta_1 > 0.$$
$$\pi_2(\theta_2) = \frac{b_2^{a_2}}{\Gamma(a_2)} e^{-b_2\theta_2}\theta_2^{a_2-1}, \quad \theta_2 > 0.$$
$$\pi_3(p) \propto p^{c-1}(1-p)^{d-1}, \quad 0 < p < 1.$$
Therefore, the joint posterior distribution of the parameters $\theta_1$, $\theta_2$ and p is
$$\pi(\theta_1, \theta_2, p \mid \mathbf{x}) = \frac{f(\mathbf{x} \mid \theta_1, \theta_2)\,\pi_1(\theta_1)\,\pi_2(\theta_2)\,\pi_3(p)}{\int_{\theta_1}\int_{\theta_2}\int_{p} f(\mathbf{x} \mid \theta_1, \theta_2)\,\pi_1(\theta_1)\,\pi_2(\theta_2)\,\pi_3(p)\, d\theta_1\, d\theta_2\, dp}.$$
Further, we can get
$$\begin{aligned} \pi(\theta_1, \theta_2, p \mid \mathbf{x}) \propto{}& e^{-b_1\theta_1}\theta_1^{a_1-1}\, e^{-b_2\theta_2}\theta_2^{a_2-1} \left(\frac{\theta_1^2}{\theta_1+1}\right)^{n_1}\left(\frac{\theta_2^2}{\theta_2+1}\right)^{n_2} \prod_{i=1}^{m} e^{-(\theta_1+\theta_2)(r_i+1)x_i}\left(1+\frac{\theta_1}{\theta_1+1}x_i\right)^{r_i}\left(1+\frac{\theta_2}{\theta_2+1}x_i\right)^{r_i} \\ & \times \prod_{i=1}^{n_1}\left(1+\frac{\theta_2}{\theta_2+1}x_i\right) \prod_{i=n_1+1}^{m}\left(1+\frac{\theta_1}{\theta_1+1}x_i\right) p^{\sum_{i=1}^{m-1} r_i^0 + c - 1}(1-p)^{(m-1)(n-m)-\sum_{i=1}^{m-1}(m-i) r_i^0 + d - 1}.\end{aligned}$$
Hence, we can deduce the conditional posterior distributions as follows.
$$\pi_1(\theta_1 \mid \theta_2, p, \mathbf{x}) \propto e^{-b_1\theta_1}\theta_1^{a_1-1}\left(\frac{\theta_1^2}{\theta_1+1}\right)^{n_1} \prod_{i=1}^{m} e^{-(r_i+1)\theta_1 x_i}\left(1+\frac{\theta_1}{\theta_1+1}x_i\right)^{r_i} \prod_{i=n_1+1}^{m}\left(1+\frac{\theta_1}{\theta_1+1}x_i\right).$$
$$\pi_2(\theta_2 \mid \theta_1, p, \mathbf{x}) \propto e^{-b_2\theta_2}\theta_2^{a_2-1}\left(\frac{\theta_2^2}{\theta_2+1}\right)^{n_2} \prod_{i=1}^{m} e^{-(r_i+1)\theta_2 x_i}\left(1+\frac{\theta_2}{\theta_2+1}x_i\right)^{r_i} \prod_{i=1}^{n_1}\left(1+\frac{\theta_2}{\theta_2+1}x_i\right).$$
$$\pi_3(p \mid \theta_1, \theta_2, \mathbf{x}) \propto p^{\sum_{i=1}^{m-1} r_i^0 + c - 1}(1-p)^{(m-1)(n-m)-\sum_{i=1}^{m-1}(m-i) r_i^0 + d - 1}.$$
In Bayesian estimation, the loss function is used to judge the performance of a parameter estimate. There are many different loss functions, symmetric and asymmetric; in this article, we consider three. The first is the squared error loss (SEL) function $L_1$, which is symmetric. It is defined as
$$L_1(d(\gamma), \hat{d}(\gamma)) = (d(\gamma) - \hat{d}(\gamma))^2.$$
The Bayes estimate of $d(\gamma)$ under SEL is the posterior mean,
$$\hat{d}_{SEL} = E_\gamma(\gamma \mid \underline{x}).$$
The second loss function we consider is the LINEX loss (LL) function, which is asymmetric, and its definition is shown below.
$$L_2(d(\gamma), \hat{d}(\gamma)) = e^{h(\hat{d}(\gamma) - d(\gamma))} - h(\hat{d}(\gamma) - d(\gamma)) - 1, \quad h \ne 0.$$
The Bayes estimate of $d(\gamma)$ under the LL function is given by
$$\hat{d}_{LL} = -\frac{1}{h}\log E_\gamma\left(e^{-h\gamma} \mid \underline{x}\right),$$
as long as E γ ( · ) exists.
The last one, also asymmetric, is the general entropy loss (EL) function, defined as
$$L_3(d(\gamma), \hat{d}(\gamma)) = \left(\frac{\hat{d}(\gamma)}{d(\gamma)}\right)^q - q\log\frac{\hat{d}(\gamma)}{d(\gamma)} - 1, \quad q \ne 0.$$
The corresponding Bayes estimate of $d(\gamma)$ is
$$\hat{d}_{EL} = \left(E_\gamma(\gamma^{-q} \mid \underline{x})\right)^{-\frac{1}{q}}.$$
Throughout, $d(\gamma)$ denotes the true value of the unknown parameter and $\hat{d}(\gamma)$ denotes its estimate.
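When posterior draws of a parameter are available (for instance, from sampling the conditional posteriors above), the three Bayes estimates reduce to simple Monte Carlo expectations. The sketch below is a generic illustration of these formulas, not the Lindley-approximation route the paper follows next; all names are our own.

```python
import numpy as np

def bayes_estimates(post, h=1.0, q=1.0):
    """Bayes estimates of a positive scalar parameter from posterior draws."""
    post = np.asarray(post, dtype=float)
    d_sel = post.mean()                              # squared error loss
    d_ll = -np.log(np.mean(np.exp(-h * post))) / h   # LINEX loss
    d_el = np.mean(post ** (-q)) ** (-1.0 / q)       # general entropy loss
    return d_sel, d_ll, d_el

# Example with draws from a Gamma(2, 1)-shaped posterior:
draws = np.random.default_rng(0).gamma(shape=2.0, scale=1.0, size=100_000)
print(bayes_estimates(draws, h=1.0, q=1.0))
```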
Formulas (29) and (30) are nonlinear, so for $\theta_1$ and $\theta_2$ the Bayes estimates under the different loss functions cannot be solved directly. We therefore use the Lindley approximation method to compute the corresponding numerical solutions.
Taking the Bayes posteriori estimate under SEL as an example, the Bayes posteriori estimate we require is
$$E(\theta_j \mid \mathbf{x}) = \frac{\int_\Theta \theta_j f(\mathbf{x} \mid \theta_j)\pi(\theta_j)\, d\theta_j}{\int_\Theta f(\mathbf{x} \mid \theta_j)\pi(\theta_j)\, d\theta_j}, \quad j = 1, 2.$$
It is not practicable to obtain an explicit solution of (32), which takes the form of a ratio of two integrals. Reference [17] studied a method of approximating such a ratio.
Here is a brief introduction to Lindley’s approximation method. Assume that the integral ratio to be calculated has the following form
$$\frac{\int_\Theta \omega(\theta) e^{l(\theta)}\, d\theta}{\int_\Theta v(\theta) e^{l(\theta)}\, d\theta},$$
where $\theta = (\theta_1, \theta_2, \ldots, \theta_n)$, $l(\theta)$ is the log-likelihood function, and $\omega(\theta)$ and $v(\theta)$ are arbitrary functions of $\theta$. Suppose $v(\theta)$ is the prior probability density function of $\theta$ and $\omega(\theta) = u(\theta)\pi(\theta)$.
Further, we can see that the Bayes posteriori estimate is
$$E[u(\theta) \mid \mathbf{x}] = \frac{\int_\Theta u(\theta) e^{l(\theta)+\rho(\theta)}\, d\theta}{\int_\Theta e^{l(\theta)+\rho(\theta)}\, d\theta},$$
where $\rho(\theta) = \ln\pi(\theta)$. The approximate expression of the above equation is
$$\hat{u}_{BE} = E[u(\theta) \mid \mathbf{x}] \approx \hat{u}_{MLE} + \frac{1}{2}\sum_{i,j}\left(u_{ij} + 2u_i\rho_j\right)\sigma_{ij} + \frac{1}{2}\sum_{i,j,k,l} l_{ijk}\, u_l\, \sigma_{ij}\sigma_{kl},$$
evaluated at the MLE of $\theta$.
Here, $l_{ijk} = \frac{\partial^3 l}{\partial\theta_i\partial\theta_j\partial\theta_k}$, $u_{ij} = \frac{\partial^2 u}{\partial\theta_i\partial\theta_j}$ and $l_{ij} = \frac{\partial^2 l}{\partial\theta_i\partial\theta_j}$ for $i, j, k = 1, \ldots, m$, where m is the dimension of $\theta$; $\sigma_{ij}$ is the $(i,j)$ element of the inverse of the matrix $[-l_{ij}]$.
This is the approximate expression of $E[u(\theta) \mid \mathbf{x}]$. Here, we use the Lindley approximation to solve the Bayes estimates of $\theta_1$ and $\theta_2$. The posterior distribution of the parameter p is a Beta distribution, so its Bayesian estimates can be solved directly.
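For a single parameter, the expression above reduces to $\hat{u}_{BE} \approx u(\hat{\theta}) + \frac{1}{2}\left[u''(\hat{\theta}) + 2u'(\hat{\theta})\rho'(\hat{\theta})\right]\sigma^2 + \frac{1}{2}l'''(\hat{\theta})\,u'(\hat{\theta})\,\sigma^4$ with $\sigma^2 = -1/l''(\hat{\theta})$. The sketch below evaluates this one-dimensional version with finite-difference derivatives; it is only a generic illustration of the formula under our own naming, not the exact two-parameter computation carried out in the paper.

```python
def lindley_approx_1d(u, loglik, logprior, theta_hat, eps=1e-3):
    """One-parameter Lindley approximation of E[u(theta) | x] around the MLE.

    u, loglik, logprior are scalar functions of theta; eps is the
    finite-difference step (kept moderate to limit round-off in l''').
    """
    d1 = lambda f, t: (f(t + eps) - f(t - eps)) / (2 * eps)
    d2 = lambda f, t: (f(t + eps) - 2 * f(t) + f(t - eps)) / eps**2
    d3 = lambda f, t: (f(t + 2*eps) - 2*f(t + eps) + 2*f(t - eps) - f(t - 2*eps)) / (2 * eps**3)

    sigma2 = -1.0 / d2(loglik, theta_hat)  # inverse observed information
    return (u(theta_hat)
            + 0.5 * (d2(u, theta_hat) + 2.0 * d1(u, theta_hat) * d1(logprior, theta_hat)) * sigma2
            + 0.5 * d3(loglik, theta_hat) * d1(u, theta_hat) * sigma2**2)
```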

3. Simulation Study

The first step in the simulation experiment is to generate the random numbers we need. Chapter 3 of Reference [18] describes how to generate progressive type-II censored data for continuous distributions. Based on those results, we propose the following procedure, given as Algorithm 1; a sketch implementation in code follows the algorithm.
Algorithm 1 Generating the progressively type-II censored samples with competing risks.
1: Generate $r_i \sim B\left(n - m - \sum_{j=1}^{i-1} r_j,\, p\right)$, $i = 1, 2, \ldots, m-1$, and set $r_m = n - m - \sum_{i=1}^{m-1} r_i$.
2: Generate m independent variables $W_1, \ldots, W_m$ following the standard uniform distribution.
3: Set $V_i = W_i^{1/(i + r_m + \cdots + r_{m-i+1})}$, $i = 1, \ldots, m$.
4: Set $U_i = 1 - V_m V_{m-1}\cdots V_{m-i+1}$, $i = 1, \ldots, m$; this yields progressive type-II censored competing risks data with binomial removals from the standard uniform distribution.
5: Let $x_i = F^{-1}(U_i)$, where $F(x) = 1 - \bar{F}_1(x)\bar{F}_2(x)$.
6: Generate the failure causes $\delta_i$, $i = 1, \ldots, m$, where $\delta_i$ follows a Bernoulli distribution with probability $p_i$ obtained from
$$p_i = \frac{f_1(x_i)\bar{F}_2(x_i)}{f_1(x_i)\bar{F}_2(x_i) + f_2(x_i)\bar{F}_1(x_i)}.$$
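A Python sketch of Algorithm 1 for the Lindley competing-risks model is given below (helper names are our own; since $F^{-1}$ has no closed form here, Step 5 inverts the cdf by bisection):

```python
import numpy as np

def lindley_sf(x, theta):
    """Survival function of Lindley(theta)."""
    return (1.0 + theta * x / (theta + 1.0)) * np.exp(-theta * x)

def lindley_pdf(x, theta):
    """Density of Lindley(theta)."""
    return theta**2 / (theta + 1.0) * (1.0 + x) * np.exp(-theta * x)

def sample_progressive_competing(n, m, p, th1, th2, seed=None):
    """Algorithm 1: progressive type-II censored competing-risks sample
    with binomial removals; lifetimes follow Lindley(th1) and Lindley(th2)."""
    rng = np.random.default_rng(seed)
    # Step 1: binomial removal counts r_1..r_{m-1}; r_m removes the rest
    r = np.zeros(m, dtype=int)
    for i in range(m - 1):
        r[i] = rng.binomial(n - m - r[:i].sum(), p)
    r[m - 1] = n - m - r[:-1].sum()
    # Steps 2-4: progressively censored uniform order statistics
    w = rng.uniform(size=m)
    v = w ** (1.0 / (np.arange(1, m + 1) + np.cumsum(r[::-1])))
    u = 1.0 - np.cumprod(v[::-1])
    # Step 5: invert F(x) = 1 - sf1(x) * sf2(x) by bisection
    def inv_cdf(ui):
        lo, hi = 0.0, 1.0
        while 1.0 - lindley_sf(hi, th1) * lindley_sf(hi, th2) < ui:
            hi *= 2.0
        for _ in range(80):
            mid = 0.5 * (lo + hi)
            if 1.0 - lindley_sf(mid, th1) * lindley_sf(mid, th2) < ui:
                lo = mid
            else:
                hi = mid
        return 0.5 * (lo + hi)
    x = np.array([inv_cdf(ui) for ui in u])
    # Step 6: failure cause delta_i = 1 with probability p_i
    a = lindley_pdf(x, th1) * lindley_sf(x, th2)
    b = lindley_pdf(x, th2) * lindley_sf(x, th1)
    delta = np.where(rng.uniform(size=m) < a / (a + b), 1, 2)
    return x, delta, r

x, delta, r = sample_progressive_competing(n=30, m=20, p=0.3, th1=1.0, th2=1.0, seed=1)
```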
We set appropriate parameters for the simulation experiment. We take the true values of $\theta_1$ and $\theta_2$ to be 1. With prior information, we let the hyper-parameters be $a_1 = a_2 = 2$, $b_1 = b_2 = 1$; without prior information, we let $a_1 = a_2 = b_1 = b_2 = 0$. To study the statistical regularity of these estimates, we adjust the other parameters accordingly. To study the estimation performance for p, we set the true value of p to 0.3 and $\theta_1 = \theta_2 = 1$. For comparison, we consider two informative priors for p: Informative-I with $c = d = 0.5$ and Informative-II with $c = d = 2.5$; in the non-informative case, $c = d = 1$. As for the parameters in the loss functions, we set $h = 1$ for LL and $q = 1$ for EL.
At the same time, we choose the absolute bias (Bias) and the mean square error (MSE) to measure the quality of the estimates. They are defined as
$$\mathrm{Bias} = \frac{1}{N}\sum_{i=1}^{N}\left|\hat{\lambda}_i - \lambda\right|, \qquad \mathrm{MSE} = \frac{1}{N}\sum_{i=1}^{N}\left(\hat{\lambda}_i - \lambda\right)^2,$$
where $\hat{\lambda}_i$ is the estimate in the i-th replication, $\lambda$ is the true value of the parameter, and N is the number of simulation replications.
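In code, the two criteria are immediate (a minimal sketch, where `estimates` holds the N replicated values of $\hat{\lambda}$):

```python
import numpy as np

def bias_mse(estimates, true_value):
    """Absolute bias and mean square error over N simulation replications."""
    est = np.asarray(estimates, dtype=float)
    return np.mean(np.abs(est - true_value)), np.mean((est - true_value) ** 2)
```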
The simulation results are shown in Table 1, Table 2 and Table 3.
From Table 1 and Table 2, one can observe that
(1)
Under the same loss function measure, the Bias and MSE of the informative Bayesian estimates tend to be smaller than the Bias and MSE of the non-informative Bayesian estimates.
(2)
With n and the other parameters fixed, the Bias and MSE of the estimates decrease as m increases. In general, they also decrease as n increases.
(3)
With prior information, the Bias and MSE of the Bayesian estimates under the SEL and LL loss functions are smaller than those of the maximum likelihood estimates, while the estimates under the EL loss function show no such clear trend.
From Table 3, one can observe that
(1)
The selection of the prior information is very important, and the optimal prior differs across loss functions. Under SEL, the estimation is best with the Informative-I prior and worst with the Informative-II prior. Under EL, it is best with Informative-II and worst with Informative-I. Under LL, the Informative-II prior gives the best estimation.
(2)
With n and the other parameters fixed, the Bias and MSE of the estimates of p become larger as m increases in most cases. Only under the EL loss function with the Informative-II prior do the Bias and MSE decrease as m increases.
(3)
On the whole, the Bias and MSE decrease as n increases, although this is less evident under the EL loss function.
(4)
Under LL with the Informative-II prior, the Bias and MSE of the Bayesian estimates are smaller than those of the maximum likelihood estimates. In the other cases, the Bias and MSE of the Bayesian and maximum likelihood estimates are similar.

4. Data Analysis

To illustrate the validity of the previous estimates, an example from Reference [19] is considered. The dataset consists of failure times and failure modes (or test truncation times) for small electrical appliances. Although 18 different failure modes were possible, only 7 appeared in the test, and only failure modes 6 and 9 occurred more than twice. Since failure mode 9 is of primary interest, it is treated as one cause of failure ($\delta = 1$) and all other failure modes are pooled as the second cause ($\delta = 2$). The specific data are given below.
Data Set
(11, 2), (35, 2), (49, 2), (170, 2), (329, 2), (35, 2), (381, 2), (708, 2), (958, 2), (1062, 2), (1167, 1), (1594, 2),
(1925, 1), (1990, 2), (2223, 1), (2327, 2), (2400, 1), (2551, 2), (2565, 0), (2568, 1), (2694, 1), (2702, 2), (2761, 2),
(2831, 2), (3034, 1), (3059, 2), (3112, 1), (3214, 1), (3478, 1), (3504, 1), (4329, 1), (6976, 1), (7846, 1).
We use Formulas (18), (19), (25), (29)–(31) and (35), derived above, to estimate the parameters from this real dataset. To assess the goodness of fit, we apply the Kolmogorov-Smirnov (K-S) test; the corresponding maximum distance D and p-value are 0.16546 and 0.263, respectively. Since the p-value is large, we do not reject the null hypothesis that the data come from the Lindley distribution. We then apply the censoring schemes to obtain the sample data (Table 4). Substituting these data into Formulas (18), (19), (25), (29)–(31) and (35) yields the maximum likelihood estimates of $\theta_1$, $\theta_2$ and p and the Bayes estimates under the three loss functions, listed in Table 5.
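The K-S test can be reproduced along the following lines with SciPy, testing the failure times against the fitted minimum-model cdf $F(x) = 1 - \bar{F}_1(x)\bar{F}_2(x)$. This is a sketch of one reasonable reading of the test: the data excerpt is only the first few failure times, and the plugged-in parameter values (of order $10^{-4}$, cf. the ML column of Table 5, Scheme 1) are illustrative, since the paper does not spell out the exact test input.

```python
import numpy as np
from scipy import stats

def lindley_sf(x, theta):
    """Lindley survival function."""
    return (1.0 + theta * x / (theta + 1.0)) * np.exp(-theta * x)

def model_cdf(x, th1, th2):
    """cdf of the minimum of two independent Lindley lifetimes."""
    return 1.0 - lindley_sf(x, th1) * lindley_sf(x, th2)

# Excerpt of the failure times; illustrative parameter values
times = np.array([11, 35, 49, 170, 329, 381, 708, 958, 1062, 1167], dtype=float)
th1_hat, th2_hat = 4.768372e-4, 3.112606e-4
d_stat, p_value = stats.kstest(times, lambda x: model_cdf(x, th1_hat, th2_hat))
print(d_stat, p_value)
```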

5. Conclusions

In this paper, lifetimes following the Lindley distribution are studied using progressive type-II censored competing risks data with binomial removals. Considering two competing risks, the maximum likelihood estimates of the distribution parameters ($\theta_1$, $\theta_2$ and p) are obtained, and the Bayes estimates of these parameters are derived under the squared error, LINEX and general entropy loss functions. The Newton-Raphson algorithm and the Lindley approximation method are used to compute and evaluate the performance of the estimators. Among the Bayesian estimators of $\theta_1$ and $\theta_2$, those based on informative priors perform better than those based on non-informative priors, and with prior information the estimators under the LINEX and squared error loss functions perform best. For the Bayes estimates of p, given appropriate prior information, the estimator under the LINEX loss function performs best. Comparing Bayesian estimation with maximum likelihood estimation: for $\theta_1$ and $\theta_2$ with prior information, the Bayesian estimators under the LINEX and squared error loss functions are superior to the maximum likelihood estimates; for p with appropriate prior information, the Bayesian estimator under the LINEX loss function is superior to the maximum likelihood estimate.

Author Contributions

Methodology and Writing, J.N.; Supervision, W.G.

Funding

This research was supported by Project 202010004001 of the National Training Program of Innovation and Entrepreneurship for Undergraduates.

Acknowledgments

The authors would like to thank the three referees and the editor for their careful reading and constructive comments, which led to this substantially improved version.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Lindley, D.V. Fiducial Distributions and Bayes' Theorem. J. R. Stat. Soc. Ser. B Methodol. 1958, 20, 102–107.
2. Ghitany, M.E.; Atieh, B.; Nadarajah, S. Lindley distribution and its application. Math. Comput. Simul. 2008, 78, 493–506.
3. Bakouch, H.S.; Al-Zahrani, B.M.; Al-Shomrani, A.A.; Marchi, V.A.A.; Louzada, F. An extended Lindley distribution. J. Korean Stat. Soc. 2012, 41, 75–85.
4. Ghitany, M.E.; Al-Mutairi, D.K.; Balakrishnan, N.; Al-Enezi, L.J. Power Lindley distribution and associated inference. Comput. Stat. Data Anal. 2013, 64, 20–33.
5. Nadarajah, S.; Bakouch, H.S.; Tahmasbi, R. A generalized Lindley distribution. Sankhya B 2011, 73, 331–359.
6. Ghitany, M.E.; Al-Mutairi, D.K.; Aboukhamseen, S.M. Estimation of the Reliability of a Stress-Strength System from Power Lindley Distributions. Commun. Stat. Simul. Comput. 2015, 44, 118–136.
7. Gómez-Déniz, E.; Sordo, M.A.; Calderín-Ojeda, E. The log-Lindley distribution as an alternative to the beta regression model with applications in insurance. Insur. Math. Econ. 2014, 54, 49–57.
8. Pareek, B.; Kundu, D.; Kumar, S. On progressively censored competing risks data for Weibull distributions. Comput. Stat. Data Anal. 2009, 53, 4083–4094.
9. Doostparast, M.; Ahmadi, M.V.; Ahmadi, J. Bayes Estimation Based on Joint Progressive Type II Censored Data Under LINEX Loss Function. Commun. Stat. Simul. Comput. 2013, 42, 1865–1886.
10. Al-Hussaini, E.K.; Abdel-Hamid, A.H.; Hashem, A.F. One-sample Bayesian prediction intervals based on progressively type-II censored data from the half-logistic distribution under progressive stress model. Metrika 2015, 78, 771–783.
11. Chacko, M.; Mohan, R. Bayesian analysis of Weibull distribution based on progressive type-II censored competing risks data with binomial removals. Comput. Stat. 2019, 34, 233–252.
12. Kim, H.T. Cumulative Incidence in Competing Risks Data and Competing Risks Regression Analysis. Clin. Cancer Res. 2007, 13, 559–565.
13. Wenhua, H.; Gang, L.; Ning, L. A Bayesian approach to joint analysis of longitudinal measurements and competing risks failure time data. Stat. Med. 2010, 28, 1601–1619.
14. Bakoyannis, G.; Touloumi, G. Practical methods for competing risks data: A review. Stat. Methods Med. Res. 2012, 21, 257–272.
15. Austin, P.C.; Lee, D.S.; Fine, J.P. Introduction to the Analysis of Survival Data in the Presence of Competing Risks. Circulation 2016, 133, 601–609.
16. Sarhan, A.M.; Alameri, M.; Al-Wasel, I. Analysis of progressive censoring competing risks data with binomial removals. Int. J. Math. Anal. 2008, 2, 965–976.
17. Lindley, D.V. Approximate Bayesian methods. Trabajos de Estadistica y de Investigacion Operativa 1980, 31, 223–245.
18. Balakrishnan, N.; Cramer, E. Progressive Type-II Censoring: Distribution Theory; Springer: New York, NY, USA, 2014.
19. Lawless, J.F. Statistical Models and Methods for Lifetime Data. Technometrics 1983, 25, 111–112.
Table 1. Bias and mean square error (MSE) of the Bayes estimates and maximum likelihood estimates of $\theta_1$. Each cell shows Bias/MSE.

| p | n | m | SEL (Inf-I) | SEL (Non-Inf) | EL (Inf-I) | EL (Non-Inf) | LL (Inf-I) | LL (Non-Inf) | ML |
|---|---|---|---|---|---|---|---|---|---|
| 0.3 | 30 | 20 | 0.1896/0.0573 | 0.2022/0.0653 | 0.1976/0.0628 | 0.2051/0.0656 | 0.1868/0.0565 | 0.1950/0.0579 | 0.1977/0.0655 |
| | 30 | 25 | 0.1687/0.0453 | 0.1780/0.0511 | 0.1774/0.0484 | 0.1876/0.0529 | 0.1678/0.0452 | 0.1761/0.0496 | 0.1808/0.0515 |
| | 50 | 30 | 0.1586/0.0404 | 0.1655/0.0425 | 0.1562/0.0373 | 0.1726/0.0458 | 0.1605/0.0416 | 0.1678/0.0433 | 0.1608/0.0405 |
| | 50 | 40 | 0.1425/0.0316 | 0.1454/0.0329 | 0.1451/0.0323 | 0.1487/0.0342 | 0.1460/0.0331 | 0.1484/0.0333 | 0.1429/0.0314 |
| | 60 | 40 | 0.1375/0.0294 | 0.1509/0.0355 | 0.1481/0.0333 | 0.1435/0.0320 | 0.1406/0.0306 | 0.1509/0.0352 | 0.1466/0.0331 |
| | 60 | 50 | 0.1289/0.0254 | 0.1315/0.0261 | 0.1292/0.0254 | 0.1394/0.0295 | 0.1292/0.0256 | 0.1366/0.0287 | 0.1313/0.0260 |
| 0.6 | 30 | 20 | 0.2011/0.0664 | 0.2102/0.0712 | 0.1857/0.0541 | 0.2049/0.0653 | 0.1886/0.0569 | 0.2076/0.0650 | 0.2094/0.0727 |
| | 30 | 25 | 0.1694/0.0459 | 0.1788/0.0482 | 0.1818/0.0501 | 0.1858/0.0526 | 0.1699/0.0463 | 0.1940/0.0590 | 0.1808/0.0503 |
| | 50 | 30 | 0.1620/0.0411 | 0.1664/0.0436 | 0.1615/0.0405 | 0.1703/0.0443 | 0.1655/0.0417 | 0.1716/0.0451 | 0.1663/0.0435 |
| | 50 | 40 | 0.1442/0.0322 | 0.1437/0.0320 | 0.1463/0.0324 | 0.1621/0.0389 | 0.1410/0.0307 | 0.1454/0.0316 | 0.1512/0.0352 |
| | 60 | 40 | 0.1364/0.0296 | 0.1465/0.0341 | 0.1503/0.0339 | 0.1472/0.0335 | 0.1388/0.0294 | 0.1464/0.0325 | 0.1521/0.0347 |
| | 60 | 50 | 0.1294/0.0261 | 0.1331/0.0269 | 0.1313/0.0268 | 0.1351/0.0279 | 0.1299/0.0252 | 0.1374/0.0292 | 0.1323/0.0261 |
| 0.9 | 30 | 20 | 0.2011/0.0656 | 0.1999/0.0635 | 0.1955/0.0613 | 0.1977/0.0622 | 0.1886/0.0575 | 0.2101/0.0696 | 0.2097/0.0686 |
| | 30 | 25 | 0.1813/0.0511 | 0.1870/0.0546 | 0.1837/0.0515 | 0.1923/0.0578 | 0.1673/0.0447 | 0.1794/0.0496 | 0.1814/0.0498 |
| | 50 | 30 | 0.1592/0.0390 | 0.1619/0.0403 | 0.1639/0.0411 | 0.1800/0.0496 | 0.1583/0.0398 | 0.1673/0.0440 | 0.1755/0.0471 |
| | 50 | 40 | 0.1389/0.0300 | 0.1438/0.0315 | 0.1457/0.0323 | 0.1529/0.0349 | 0.1400/0.0304 | 0.1485/0.0342 | 0.1488/0.0337 |
| | 60 | 40 | 0.1418/0.0329 | 0.1484/0.0332 | 0.1476/0.0335 | 0.1468/0.0322 | 0.1439/0.0318 | 0.1484/0.0339 | 0.1433/0.0326 |
| | 60 | 50 | 0.1358/0.0276 | 0.1370/0.0279 | 0.1316/0.0267 | 0.1414/0.0301 | 0.1301/0.0259 | 0.1393/0.0290 | 0.1323/0.0267 |
Table 2. Bias and MSE of the Bayes estimates and maximum likelihood estimates of $\theta_2$. Each cell shows Bias/MSE.

| p | n | m | SEL (Inf-I) | SEL (Non-Inf) | EL (Inf-I) | EL (Non-Inf) | LL (Inf-I) | LL (Non-Inf) | ML |
|---|---|---|---|---|---|---|---|---|---|
| 0.3 | 30 | 20 | 0.2011/0.0653 | 0.2036/0.0646 | 0.2086/0.0687 | 0.2121/0.0697 | 0.1917/0.0595 | 0.2004/0.0616 | 0.1986/0.0619 |
| | 30 | 25 | 0.1707/0.0461 | 0.1877/0.0552 | 0.1945/0.0602 | 0.1908/0.0565 | 0.1765/0.0497 | 0.1857/0.0540 | 0.1841/0.0528 |
| | 50 | 30 | 0.1599/0.0408 | 0.1654/0.0431 | 0.1659/0.0439 | 0.1677/0.0448 | 0.1624/0.0411 | 0.1718/0.0459 | 0.1680/0.0429 |
| | 50 | 40 | 0.1407/0.0304 | 0.1523/0.0356 | 0.1465/0.0327 | 0.1532/0.0358 | 0.1415/0.0308 | 0.1479/0.0342 | 0.1450/0.0327 |
| | 60 | 40 | 0.1412/0.0318 | 0.1447/0.0329 | 0.1515/0.0358 | 0.1568/0.0372 | 0.1475/0.0334 | 0.1481/0.0331 | 0.1468/0.0329 |
| | 60 | 50 | 0.1309/0.0265 | 0.1313/0.0263 | 0.1308/0.0267 | 0.1444/0.0302 | 0.1349/0.0275 | 0.1326/0.0266 | 0.1331/0.0265 |
| 0.6 | 30 | 20 | 0.1958/0.0650 | 0.1985/0.0608 | 0.2147/0.0745 | 0.2141/0.0744 | 0.1902/0.0595 | 0.2102/0.0703 | 0.2094/0.0687 |
| | 30 | 25 | 0.1809/0.0525 | 0.1819/0.0530 | 0.1877/0.0536 | 0.1977/0.0604 | 0.1811/0.0525 | 0.1880/0.0558 | 0.1833/0.0521 |
| | 50 | 30 | 0.1631/0.0413 | 0.1700/0.0443 | 0.1760/0.0480 | 0.1813/0.0528 | 0.1666/0.0433 | 0.1703/0.0431 | 0.1703/0.0445 |
| | 50 | 40 | 0.1395/0.0308 | 0.1526/0.0350 | 0.1459/0.0329 | 0.1575/0.0372 | 0.1389/0.0297 | 0.1480/0.0334 | 0.1450/0.0319 |
| | 60 | 40 | 0.1387/0.0301 | 0.1485/0.0339 | 0.1462/0.0334 | 0.1520/0.0358 | 0.1448/0.0325 | 0.1510/0.0349 | 0.1442/0.0320 |
| | 60 | 50 | 0.1274/0.0256 | 0.1359/0.0285 | 0.1326/0.0269 | 0.1422/0.0310 | 0.1310/0.0262 | 0.1402/0.0293 | 0.1374/0.0286 |
| 0.9 | 30 | 20 | 0.1871/0.0581 | 0.1998/0.0644 | 0.2113/0.0705 | 0.2205/0.0747 | 0.2013/0.0633 | 0.2047/0.0641 | 0.2066/0.0695 |
| | 30 | 25 | 0.1744/0.0492 | 0.1793/0.0526 | 0.1830/0.0515 | 0.1984/0.0611 | 0.1868/0.0552 | 0.1914/0.0563 | 0.1830/0.0518 |
| | 50 | 30 | 0.1703/0.0451 | 0.1651/0.0419 | 0.1738/0.0462 | 0.1833/0.0519 | 0.1631/0.0414 | 0.1709/0.0458 | 0.1710/0.0451 |
| | 50 | 40 | 0.1436/0.0320 | 0.1468/0.0338 | 0.1498/0.0346 | 0.1548/0.0364 | 0.1447/0.0323 | 0.1517/0.0349 | 0.1523/0.0347 |
| | 60 | 40 | 0.1401/0.0312 | 0.1486/0.0340 | 0.1456/0.0338 | 0.1511/0.0356 | 0.1428/0.0312 | 0.1450/0.0325 | 0.1462/0.0317 |
| | 60 | 50 | 0.1241/0.0243 | 0.1335/0.0272 | 0.1365/0.0282 | 0.1409/0.0299 | 0.1289/0.0253 | 0.1402/0.0293 | 0.1294/0.0259 |
Table 3. Bias and MSE of the Bayes estimates and the maximum likelihood estimates of p. Each cell shows Bias/MSE.

| n | m | SEL (Non-Inf) | SEL (Inf-I) | SEL (Inf-II) | EL (Non-Inf) | EL (Inf-I) | EL (Inf-II) | LL (Non-Inf) | LL (Inf-I) | LL (Inf-II) | ML |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 30 | 20 | 0.1000/0.0164 | 0.0970/0.0157 | 0.1040/0.0176 | 0.1712/0.0313 | 0.1764/0.0334 | 0.1553/0.0259 | 0.0931/0.0145 | 0.1043/0.0178 | 0.0788/0.0100 | 0.0917/0.0136 |
| 30 | 25 | 0.1453/0.0359 | 0.1370/0.0317 | 0.1609/0.0414 | 0.1763/0.0346 | 0.1880/0.0388 | 0.1497/0.0253 | 0.1266/0.0265 | 0.1548/0.0430 | 0.0960/0.0142 | 0.1328/0.0303 |
| 50 | 30 | 0.0667/0.0076 | 0.0650/0.0069 | 0.0710/0.0082 | 0.1698/0.0300 | 0.1736/0.0312 | 0.1616/0.0271 | 0.0629/0.0066 | 0.0676/0.0075 | 0.0586/0.0054 | 0.0640/0.0067 |
| 50 | 40 | 0.0984/0.0160 | 0.0921/0.0142 | 0.1119/0.0201 | 0.1704/0.0312 | 0.1759/0.0331 | 0.1584/0.0270 | 0.0936/0.0141 | 0.0963/0.0154 | 0.0788/0.0099 | 0.0941/0.0151 |
| 60 | 40 | 0.0653/0.0073 | 0.0636/0.0066 | 0.0675/0.0077 | 0.1690/0.0297 | 0.1712/0.0305 | 0.1612/0.0271 | 0.0614/0.0061 | 0.0671/0.0073 | 0.0599/0.0059 | 0.0651/0.0067 |
| 60 | 50 | 0.0974/0.0157 | 0.0934/0.0138 | 0.1022/0.0175 | 0.1720/0.0317 | 0.1771/0.0335 | 0.1564/0.0262 | 0.0904/0.0137 | 0.0935/0.0150 | 0.0768/0.0094 | 0.0915/0.0137 |
Table 4. Progressive type-II censored data with binomial removals.

Scheme 1 (p = 0.3, m = 25)
$R_i^0$: [3,3,1,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
Data: (11, 2), (329, 2), (1062, 2), (1594, 2), (1925, 1), (1990, 1), (2327, 2), (2400, 1), (2451, 2), (2471, 1), (2551, 1), (2568, 1), (2694, 1), (2702, 2), (2761, 2), (2831, 2), (3034, 1), (3059, 2), (3112, 1), (3214, 1), (3478, 1), (3504, 1), (4329, 1), (6976, 1), (7846, 1)

Scheme 2 (p = 0.3, m = 20)
$R_i^0$: [2,5,2,1,1,0,1,0,0,1,0,0,0,0,0,0,0,0,0,0]
Data: (11, 2), (170, 2), (1167, 1), (1990, 1), (2327, 2), (2451, 2), (2471, 1), (2568, 1), (2694, 1), (2702, 2), (2831, 2), (3034, 1), (3059, 2), (3112, 1), (3214, 1), (3478, 1), (3504, 1), (4329, 1), (6976, 1), (7846, 1)

Scheme 3 (p = 0.3, m = 15)
$R_i^0$: [6,4,2,2,0,1,2,0,0,0,0,0,1,0,0]
Data: (11, 2), (958, 2), (1990, 1), (2400, 1), (2551, 1), (2568, 1), (2702, 2), (3034, 1), (3059, 2), (3112, 1), (3214, 1), (3478, 1), (3504, 1), (6976, 1), (7846, 1)

Scheme 4 (p = 0.6, m = 25)
$R_i^0$: [7,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
Data: (11, 2), (1062, 2), (1167, 1), (1925, 1), (1990, 1), (2223, 1), (2327, 2), (2400, 1), (2451, 2), (2471, 1), (2551, 1), (2568, 1), (2694, 1), (2702, 2), (2761, 2), (2831, 2), (3034, 1), (3059, 2), (3112, 1), (3214, 1), (3478, 1), (3504, 1), (4329, 1), (6976, 1), (7846, 1)

Scheme 5 (p = 0.6, m = 20)
$R_i^0$: [5,5,2,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
Data: (11, 2), (708, 2), (1990, 1), (2400, 1), (2451, 2), (2471, 1), (2568, 1), (2694, 1), (2702, 2), (2761, 2), (2831, 2), (3034, 1), (3059, 2), (3112, 1), (3214, 1), (3478, 1), (3504, 1), (4329, 1), (6976, 1), (7846, 1)

Scheme 6 (p = 0.6, m = 15)
$R_i^0$: [13,3,0,2,0,0,0,0,0,0,0,0,0,0,0]
Data: (11, 2), (2327, 2), (2551, 1), (2568, 1), (2761, 2), (2831, 2), (3034, 1), (3059, 2), (3112, 1), (3214, 1), (3478, 1), (3504, 1), (4329, 1), (6976, 1), (7846, 1)

Scheme 7 (p = 0.9, m = 25)
$R_i^0$: [8,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
Data: (11, 2), (1167, 1), (1594, 2), (1925, 1), (1990, 1), (2223, 1), (2327, 2), (2400, 1), (2451, 2), (2471, 1), (2551, 1), (2568, 1), (2694, 1), (2702, 2), (2761, 2), (2831, 2), (3034, 1), (3059, 2), (3112, 1), (3214, 1), (3478, 1), (3504, 1), (4329, 1), (6976, 1), (7846, 1)

Scheme 8 (p = 0.9, m = 20)
$R_i^0$: [12,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
Data: (11, 2), (2223, 1), (2400, 1), (2451, 2), (2471, 1), (2551, 1), (2568, 1), (2694, 1), (2702, 2), (2761, 2), (2831, 2), (3034, 1), (3059, 2), (3112, 1), (3214, 1), (3478, 1), (3504, 1), (4329, 1), (6976, 1), (7846, 1)

Scheme 9 (p = 0.9, m = 15)
$R_i^0$: [17,1,0,0,0,0,0,0,0,0,0,0,0,0,0]
Data: (11, 2), (2551, 1), (2694, 1), (2702, 2), (2761, 2), (2831, 2), (3034, 1), (3059, 2), (3112, 1), (3214, 1), (3478, 1), (3504, 1), (4329, 1), (6976, 1), (7846, 1)
Table 5. Maximum likelihood estimates and Bayes estimates of $\theta_1$, $\theta_2$ and p.

| Scheme | SEL $\theta_1$ ($\times 10^{-4}$) | SEL $\theta_2$ ($\times 10^{-4}$) | SEL p | EL $\theta_1$ ($\times 10^{-4}$) | EL $\theta_2$ ($\times 10^{-4}$) | EL p |
|---|---|---|---|---|---|---|
| 1 | 4.7627 | 3.1069 | 0.4737 | 4.6212 | 2.8987 | 0.2963 |
| 2 | 4.1513 | 2.3775 | 0.3111 | 4.0113 | 2.1496 | 0.2241 |
| 3 | 4.1476 | 2.3738 | 0.3065 | 3.9891 | 2.1182 | 0.2250 |
| 4 | 4.7648 | 2.3806 | 0.8182 | 4.6366 | 2.1436 | 0.4211 |
| 5 | 4.1516 | 2.3777 | 0.5000 | 4.0106 | 2.1483 | 0.3171 |
| 6 | 4.0876 | 2.3748 | 0.6786 | 3.9130 | 2.0987 | 0.3913 |
| 7 | 4.7650 | 2.3808 | 1.0000 | 4.6365 | 2.1435 | 0.4706 |
| 8 | 4.7640 | 2.3798 | 0.9333 | 4.6098 | 2.0993 | 0.4643 |
| 9 | 4.1219 | 2.3782 | 0.9500 | 3.9384 | 2.0870 | 0.4737 |

| Scheme | LL $\theta_1$ ($\times 10^{-4}$) | LL $\theta_2$ ($\times 10^{-4}$) | LL p | ML $\theta_1$ ($\times 10^{-4}$) | ML $\theta_2$ ($\times 10^{-4}$) | ML p |
|---|---|---|---|---|---|---|
| 1 | 4.7627 | 3.1069 | 0.4380 | 4.768372 | 3.112606 | 0.4444 |
| 2 | 4.1513 | 2.3775 | 0.2932 | 4.158020 | 2.384186 | 0.2955 |
| 3 | 4.1476 | 2.3738 | 0.2934 | 4.158020 | 2.384186 | 0.2951 |
| 4 | 4.7647 | 2.3806 | 0.7925 | 4.768372 | 2.384186 | 0.8000 |
| 5 | 4.1515 | 2.3777 | 0.4770 | 4.158020 | 2.384186 | 0.4815 |
| 6 | 4.0876 | 2.3747 | 0.6627 | 4.097034 | 2.384186 | 0.6667 |
| 7 | 4.7649 | 2.3807 | 1.0000 | 4.768372 | 2.384186 | 0.8889 |
| 8 | 4.7639 | 2.3797 | 0.9263 | 4.768372 | 2.384186 | 0.9286 |
| 9 | 4.1219 | 2.3782 | 0.9461 | 4.127871 | 2.384186 | 0.9474 |
