Article

Bayesian Inference for Inverse Power Exponentiated Pareto Distribution Using Progressive Type-II Censoring with Application to Flood-Level Data Analysis

1 Department of Mathematics, Faculty of Science, Mansoura University, Mansoura 35516, Egypt
2 Department of Statistics and Operation Research, College of Science, Qassim University, Buraydah 51482, Saudi Arabia
* Authors to whom correspondence should be addressed.
Symmetry 2024, 16(3), 309; https://doi.org/10.3390/sym16030309
Submission received: 31 January 2024 / Revised: 23 February 2024 / Accepted: 27 February 2024 / Published: 5 March 2024

Abstract

Progressive type-II (Prog-II) censoring schemes are gaining traction for estimating the parameters and reliability characteristics of lifetime distributions. The focus of this paper is to enhance the accuracy and reliability of such estimations for the inverse power exponentiated Pareto (IPEP) distribution, a flexible extension of the exponentiated Pareto distribution suitable for modeling engineering and medical data. We aim to develop statistical inference methods applicable under Prog-II censoring, leading to a deeper understanding of failure-time behavior, improved decision-making, and enhanced overall model reliability. Our investigation employs both classical and Bayesian approaches. The classical technique involves constructing maximum likelihood estimators of the model parameters and their bootstrap confidence intervals. Bayesian estimates of the unknown parameters are obtained by the Markov chain Monte Carlo method, using a Gibbs procedure with Metropolis–Hastings sampling steps. In addition, a real data analysis is carried out to examine the performance of the estimation process under this scheme.

1. Introduction

In survival analysis and reliability studies, censoring occurs when it is not feasible to collect complete data on the sample; in such cases, lifetime data are analyzed from censored samples. The analysis of data from various life-testing experiments is receiving increasing attention, and several censoring schemes have been introduced in the literature for this purpose. Type-I and type-II censoring are the two most widely used schemes in reliability analysis: type-I censoring fixes an end time for the experiment regardless of the number of failures, while type-II censoring terminates the experiment after a pre-specified number of failures, regardless of the elapsed time. Neither scheme, however, permits the removal of experimental units while the experiment is in progress, which is one of their main drawbacks. As a solution to this drawback, Cohen [1] introduced progressive censoring schemes, and progressive type-I and type-II censoring have received considerable attention in the statistical literature over the last decade because of their flexibility. Progressive censoring has a significant impact on the design of duration experiments for reliability studies. In industrial experiments, it is common to terminate experiments early and to limit the overall number of failures for various reasons (for example, when expensive items would have to be destroyed or when experiments take a long time to complete). Progressive censoring techniques reduce testing time and cost by permitting the removal of surviving units at various stages of the experiment.
The three-parameter IPEP distribution considered in this study was proposed by Khalifa et al. [2]. A random variable y follows the IPEP distribution if its pdf is given by the following:
f(y;\theta,\eta,\gamma) = \theta\eta\gamma\, y^{-\gamma-1}\left(1+y^{-\gamma}\right)^{-\eta-1}\left[1-\left(1+y^{-\gamma}\right)^{-\eta}\right]^{\theta-1}; \quad y,\theta,\eta,\gamma>0,  (1)
where θ and η are the shape parameters and γ is the scale parameter.
The cdf is defined as follows:
F(y;\theta,\eta,\gamma) = 1-\left[1-\left(1+y^{-\gamma}\right)^{-\eta}\right]^{\theta}; \quad y,\theta,\eta,\gamma>0,  (2)
whereas the S ( y ) and h ( y ) functions for the IPEP distribution, respectively, are obtained by the following:
S(y;\theta,\eta,\gamma) = 1-F(y) = \left[1-\left(1+y^{-\gamma}\right)^{-\eta}\right]^{\theta}; \quad y,\theta,\eta,\gamma>0  (3)
and,
h(y;\theta,\eta,\gamma) = \frac{f(y)}{S(y)} = \theta\eta\gamma\, y^{-\gamma-1}\left(1+y^{-\gamma}\right)^{-\eta-1}\left[1-\left(1+y^{-\gamma}\right)^{-\eta}\right]^{-1}; \quad y,\theta,\eta,\gamma>0.  (4)
Khalifa et al. [2] explained the IPEP model, a more accurate and flexible extension of the exponentiated Pareto distribution for fitting engineering and medical data, and introduced some statistical characteristics.
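For readers who wish to experiment with the model numerically, the following is a minimal Python sketch of the four functions above (pdf, cdf, reliability, and hazard). The function names (ipep_pdf, ipep_cdf, ipep_sf, ipep_hazard) are our own illustrative choices and are not part of the paper.

```python
import numpy as np

def ipep_pdf(y, theta, eta, gamma):
    # f(y) = theta*eta*gamma * y^(-gamma-1) * (1+y^-gamma)^(-eta-1) * [1-(1+y^-gamma)^(-eta)]^(theta-1)
    y = np.asarray(y, dtype=float)
    t = 1.0 + y ** (-gamma)
    return theta * eta * gamma * y ** (-gamma - 1) * t ** (-eta - 1) * (1.0 - t ** (-eta)) ** (theta - 1)

def ipep_cdf(y, theta, eta, gamma):
    # F(y) = 1 - [1 - (1+y^-gamma)^(-eta)]^theta
    y = np.asarray(y, dtype=float)
    return 1.0 - (1.0 - (1.0 + y ** (-gamma)) ** (-eta)) ** theta

def ipep_sf(y, theta, eta, gamma):
    # S(y) = 1 - F(y) = [1 - (1+y^-gamma)^(-eta)]^theta
    y = np.asarray(y, dtype=float)
    return (1.0 - (1.0 + y ** (-gamma)) ** (-eta)) ** theta

def ipep_hazard(y, theta, eta, gamma):
    # h(y) = f(y) / S(y)
    return ipep_pdf(y, theta, eta, gamma) / ipep_sf(y, theta, eta, gamma)

# sanity check: F(y) + S(y) = 1 at an arbitrary point
print(ipep_cdf(0.4, 1.2, 1.0, 0.5) + ipep_sf(0.4, 1.2, 1.0, 0.5))  # -> 1.0
```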
Furthermore, the proposed model is applied to a dataset from a novel area of application, namely weather phenomena (rainfall, floods, droughts, etc.); specifically, the flood data provided by Dumonceaux and Antle [3] are used. The data are part of a sequence of inundation levels intended to connect the observations and forecasts obtained from the river gauge with a graphical depiction of the areas affected by high water levels. One purpose of this paper is therefore to investigate how to select models that represent river floods. Our focus is on methods that use the maximum annual flood series, as well as on some features of the most commonly used parametric models in hydrology. Classical goodness-of-fit procedures have limitations, and a simple method based on the "flood rate" is used to screen alternative models.
As a result, the remainder of this work is arranged as follows: In Section 2, Prog-II censoring is presented. In Section 3, the literature review is discussed. In Section 4, the M L E s of θ , η and γ , and the Fisher information matrix are formulated. In Section 5, Bootstrap CIs are introduced. In Section 6, for various loss functions, including SEL and LINEX functions, Bayes estimates for the previously mentioned parameters and reliability functions are additionally obtained. In Section 7, the M-H algorithm is introduced. In Section 8, the dataset is studied. In Section 9, a simulation study is performed. Lastly, the conclusion is provided in Section 10.

2. Progressive Type-II Censoring

A brief description of a Prog-II censored experiment is as follows. Select m non-negative integers R_1, ..., R_m such that R_1 + ... + R_m = n - m, for m < n. Consider a trial in which n identical items are placed on test, and suppose that the lifetimes of the n units are i.i.d. random variables with the same distribution function F. When the first failure occurs, at time x_{1:m:n}, R_1 of the surviving items are randomly selected and removed. Similarly, at the time of the second failure, x_{2:m:n}, R_2 of the surviving items are removed, and so on. Finally, at the time of the m-th failure, x_{m:m:n}, all of the remaining items are removed; see Balakrishnan and Cramer [4].
As a result, for a Prog-II censoring scheme, ( n , m , R 1 , , R m ) , we observe the following sample: x 1 : m : n < x 2 : m : n < < x m : m : n . The joint pdf for x 1 : m : n < x 2 : m : n < < x m : m : n is as follows:
f x 1 : m : n < x 2 : m : n < < x m : m : n = C i = 1 m f x i : m : n 1 F x i : m : n R i , 0 < x 1 < < x m < ,
where C is a constant independent of the parameters, determined by C = n(n - R_1 - 1)(n - R_1 - R_2 - 2) \cdots (n - R_1 - R_2 - \cdots - R_{m-1} - m + 1); R_i is the censoring scheme, and F(x_{i:m:n}) and f(x_{i:m:n}) are the cdf and pdf of the observed lifetimes X_{1:m:n}, X_{2:m:n}, ..., X_{m:m:n}, respectively; see Figure 1.
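A Prog-II censored sample can be simulated with the standard algorithm of Balakrishnan and Sandhu [17] (which is also used later in Section 5.1). Below is a sketch of that algorithm in Python, under the assumption that a quantile function of the target distribution is available; prog_type2_uniform and prog_type2_sample are illustrative names of our own.

```python
import numpy as np

def prog_type2_uniform(n, m, R, rng):
    # Balakrishnan-Sandhu algorithm: progressively type-II censored U(0,1) order statistics
    R = np.asarray(R)
    assert len(R) == m and R.sum() + m == n
    w = rng.uniform(size=m)
    # exponents i + R_m + R_{m-1} + ... + R_{m-i+1}, for i = 1, ..., m
    expo = np.arange(1, m + 1) + np.cumsum(R[::-1])
    v = w ** (1.0 / expo)
    # U_{i:m:n} = 1 - V_m * V_{m-1} * ... * V_{m-i+1}
    return 1.0 - np.cumprod(v[::-1])

def prog_type2_sample(n, m, R, quantile, rng):
    # transform the uniform censored sample through the target quantile function F^{-1}
    return quantile(prog_type2_uniform(n, m, R, rng))

rng = np.random.default_rng(2024)
# e.g. n = 20, m = 10, R = (5, 5, 0, ..., 0); identity quantile -> uniform lifetimes
print(prog_type2_sample(20, 10, [5, 5] + [0] * 8, lambda u: u, rng))
```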

3. Literature Review

Many studies, such as Maiti and Kayal [5], Meeker et al. [6], and Cai et al. [7], have explored Prog-II censoring with different failure-time distributions. Additionally, Panahi and Asadi [8] and Mousa and Jaheen [9] applied Prog-II censoring to estimate parameters of different lifetime distributions. Balakrishnan [10] appraised progressive censoring methodology and surveyed a collection of advances in inferential procedures based on type-I and type-II progressively censored samples. Almetwally et al. [11] studied MLE and BE methods for the parameters of the WGE distribution under progressive censoring. Buzaridah et al. [12] and Khalifa et al. [13] studied estimation of distribution parameters under Prog-II censored data, and Chen and Gui [14] presented statistical inference for the generalized inverted exponential distribution under joint Prog-II censoring.
This study seeks to fill this gap by pursuing three objectives. Parameter estimation: we estimate the parameters, S(y), and h(y) of the IPEP distribution using the standard MLE technique, and we examine the ACI and two CIs based on bootstrap methods for the model parameters. To the best of our knowledge, no study is currently available on the estimation of the model parameters or reliability characteristics of the IPEP distribution under Prog-II censoring schemes. Bayesian estimation (BE): we calculate the BEs of the unknown IPEP parameters, together with the corresponding S(y) and h(y) functions, under two commonly used loss functions in Bayesian estimation, the SEL and the LINEX. To determine the BEs of the unknown parameters, we utilize MCMC methods; specifically, we apply the Gibbs procedure with M-H sampling steps. Simulation comparison: we evaluate the effectiveness of the proposed approaches through a comprehensive simulation study, focusing on their simulated MSEs, and we compare the 95% confidence intervals derived from the asymptotic MLEs with the CRIs; for this comparison, the CPs and ACLs are used.

4. Inference Procedures

In this part, we will develop the inference processes for the parameters and reliability function of the IPEP distribution using the scheme R = ( R 1 , R 2 , , R m ) .

4.1. Maximum Likelihood Estimator

Suppose that n components are placed on test, with lifetimes that are identically distributed with the cdf in (2) and the pdf in (1). Let y_1 < y_2 < ... < y_m denote a Prog-II censored sample of size m. The likelihood function based on the Prog-II censored data is then given by the following:
L(\underline{y} \mid \theta,\eta,\gamma) = C \prod_{i=1}^{m} f\left(y_{i:m:n} \mid \theta,\eta,\gamma\right)\left[1 - F\left(y_{i:m:n} \mid \theta,\eta,\gamma\right)\right]^{R_i},  (6)
where the constant C = n(n - R_1 - 1)(n - R_1 - R_2 - 2) \cdots (n - \sum_{i=1}^{m-1} R_i - m + 1) does not depend on the parameters, f(y_{i:m:n} | \theta, \eta, \gamma) is the pdf of y in (1), and F(y_{i:m:n} | \theta, \eta, \gamma) is the cdf of y in (2).
Consequently, the likelihood function takes the following explicit form:
L(y;\theta,\eta,\gamma) = C\,\theta^{m}\eta^{m}\gamma^{m} \prod_{i=1}^{m} y_i^{-\gamma-1}\left(1+y_i^{-\gamma}\right)^{-\eta-1}\left[1-\left(1+y_i^{-\gamma}\right)^{-\eta}\right]^{\theta-1} \times \prod_{i=1}^{m}\left[1-\left(1+y_i^{-\gamma}\right)^{-\eta}\right]^{\theta R_i}.  (7)
Following that, the natural logarithm of the likelihood function, without the normalization constant, is as follows:
\ell(y;\theta,\eta,\gamma) = m\ln\theta + m\ln\eta + m\ln\gamma - (\gamma+1)\sum_{i=1}^{m}\ln y_i - (\eta+1)\sum_{i=1}^{m}\ln\left(1+y_i^{-\gamma}\right) + (\theta-1)\sum_{i=1}^{m}\ln\left[1-\left(1+y_i^{-\gamma}\right)^{-\eta}\right] + \theta\sum_{i=1}^{m}R_i\ln\left[1-\left(1+y_i^{-\gamma}\right)^{-\eta}\right].  (8)
Using the log-likelihood (8), the MLEs of the parameters θ, η and γ are obtained by equating the following partial derivatives to zero:
\frac{\partial \ell}{\partial \theta} = \frac{m}{\theta} + \sum_{i=1}^{m}\ln\left[1-\left(1+y_i^{-\gamma}\right)^{-\eta}\right] + \sum_{i=1}^{m}R_i\ln\left[1-\left(1+y_i^{-\gamma}\right)^{-\eta}\right],  (9)
\frac{\partial \ell}{\partial \eta} = \frac{m}{\eta} - \sum_{i=1}^{m}\ln\left(1+y_i^{-\gamma}\right) + (\theta-1)\sum_{i=1}^{m}\frac{\left(1+y_i^{-\gamma}\right)^{-\eta}\ln\left(1+y_i^{-\gamma}\right)}{1-\left(1+y_i^{-\gamma}\right)^{-\eta}} + \theta\sum_{i=1}^{m}R_i\frac{\left(1+y_i^{-\gamma}\right)^{-\eta}\ln\left(1+y_i^{-\gamma}\right)}{1-\left(1+y_i^{-\gamma}\right)^{-\eta}}  (10)
and
\frac{\partial \ell}{\partial \gamma} = \frac{m}{\gamma} - \sum_{i=1}^{m}\ln y_i + (\eta+1)\sum_{i=1}^{m}\frac{y_i^{-\gamma}\ln y_i}{1+y_i^{-\gamma}} - \eta(\theta-1)\sum_{i=1}^{m}\frac{y_i^{-\gamma}\ln y_i\left(1+y_i^{-\gamma}\right)^{-\eta-1}}{1-\left(1+y_i^{-\gamma}\right)^{-\eta}} - \eta\theta\sum_{i=1}^{m}R_i\frac{y_i^{-\gamma}\ln y_i\left(1+y_i^{-\gamma}\right)^{-\eta-1}}{1-\left(1+y_i^{-\gamma}\right)^{-\eta}}.  (11)
The estimates should be obtained using a numerical method like Newton–Raphson since (9)–(11) cannot be solved analytically. Ahmed [15] provides a clear illustration of the algorithm.
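As a concrete illustration of this numerical step, the following sketch maximizes the Prog-II log-likelihood (8) with a general-purpose optimizer rather than a hand-coded Newton–Raphson routine. The sample y, the scheme R, and the starting values shown are hypothetical placeholders, not the paper's actual censored data.

```python
import numpy as np
from scipy.optimize import minimize

def neg_loglik(params, y, R):
    # negative Prog-II log-likelihood (8) of the IPEP model (constant C dropped)
    theta, eta, gamma = params
    if min(theta, eta, gamma) <= 0:
        return np.inf
    t = 1.0 + y ** (-gamma)
    a = np.log1p(-t ** (-eta))          # log[1 - (1 + y^-gamma)^(-eta)]
    m = len(y)
    ll = (m * (np.log(theta) + np.log(eta) + np.log(gamma))
          - (gamma + 1.0) * np.sum(np.log(y))
          - (eta + 1.0) * np.sum(np.log(t))
          + (theta - 1.0) * np.sum(a)
          + theta * np.sum(R * a))
    return -ll

# hypothetical censored sample and scheme, for illustration only
y = np.array([0.22, 0.27, 0.30, 0.32, 0.34, 0.38, 0.39, 0.40, 0.42, 0.48])
R = np.array([5, 5, 0, 0, 0, 0, 0, 0, 0, 0])
fit = minimize(neg_loglik, x0=np.array([1.0, 1.0, 1.0]), args=(y, R), method="Nelder-Mead")
print(fit.x)   # approximate (theta_hat, eta_hat, gamma_hat)
```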

4.2. Fisher’s Information Matrix

The normal approximation of the MLEs can be used to construct approximate CIs and to conduct hypothesis tests for the parameters θ, η, and γ. The observed information matrix I_n(θ, η, γ) is the corresponding 3 × 3 matrix of negative second partial derivatives of the log-likelihood, given as follows:
I_n = -\begin{pmatrix} \dfrac{\partial^2 \ell}{\partial \theta^2} & \dfrac{\partial^2 \ell}{\partial \theta\,\partial \eta} & \dfrac{\partial^2 \ell}{\partial \theta\,\partial \gamma} \\ \dfrac{\partial^2 \ell}{\partial \eta\,\partial \theta} & \dfrac{\partial^2 \ell}{\partial \eta^2} & \dfrac{\partial^2 \ell}{\partial \eta\,\partial \gamma} \\ \dfrac{\partial^2 \ell}{\partial \gamma\,\partial \theta} & \dfrac{\partial^2 \ell}{\partial \gamma\,\partial \eta} & \dfrac{\partial^2 \ell}{\partial \gamma^2} \end{pmatrix} = \begin{pmatrix} I_{11} & I_{12} & I_{13} \\ I_{21} & I_{22} & I_{23} \\ I_{31} & I_{32} & I_{33} \end{pmatrix}.
Since the MLE \hat{\phi} = (\hat{\theta}, \hat{\eta}, \hat{\gamma}) exhibits asymptotic normality, we may write \hat{\phi} \sim N(\phi, I_n^{-1}(\hat{\phi})). As a result, the 100(1-\vartheta)\% approximate CIs are defined by the following:
\hat{\phi}_L = \hat{\phi} - z_{\vartheta/2}\sqrt{\operatorname{var}(\hat{\phi})} \quad \text{and} \quad \hat{\phi}_U = \hat{\phi} + z_{\vartheta/2}\sqrt{\operatorname{var}(\hat{\phi})},
where 0 < ϑ < 1 and z ϑ 2 represent the upper ϑ 2 percentiles of the standard normal, respectively.
The main diagonal of I_n^{-1}(\hat{\phi}) provides \operatorname{var}(\hat{\theta}), \operatorname{var}(\hat{\eta}) and \operatorname{var}(\hat{\gamma}). These variances are also needed to construct the ACIs of the S(t) and h(t) functions; see Greene [16] for more information on using the delta method to approximate the variances of \hat{S}(t) and \hat{h}(t). Following this technique, the variances of \hat{S}(t) and \hat{h}(t) are, respectively,
\operatorname{var}(\hat{S}(t)) = \left[\nabla \hat{S}(t)\right]^{T}\hat{V}\left[\nabla \hat{S}(t)\right]
and
\operatorname{var}(\hat{h}(t)) = \left[\nabla \hat{h}(t)\right]^{T}\hat{V}\left[\nabla \hat{h}(t)\right],
where \nabla \hat{S}(t) and \nabla \hat{h}(t) are the gradient vectors of \hat{S}(t) and \hat{h}(t) with respect to θ, η, and γ, and \hat{V} = I_n^{-1}(\hat{\phi}).
Following that, two-sided 100(1-\vartheta)\% confidence intervals for S(t) and h(t) are given by
\hat{S}(t) \pm z_{\vartheta/2}\sqrt{\operatorname{var}(\hat{S}(t))}
and
\hat{h}(t) \pm z_{\vartheta/2}\sqrt{\operatorname{var}(\hat{h}(t))}.
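A possible numerical route to these intervals, continuing the earlier sketch (it reuses neg_loglik, fit and ipep_sf defined above), approximates the observed information matrix by finite differences and then applies the delta method; the step sizes and helper names are our own choices.

```python
import numpy as np
from scipy.stats import norm

def hessian(f, x, h=1e-4):
    # central-difference Hessian of a scalar function f at the point x
    x = np.asarray(x, float); k = len(x); H = np.zeros((k, k))
    for i in range(k):
        for j in range(k):
            ei = np.zeros(k); ei[i] = h
            ej = np.zeros(k); ej[j] = h
            H[i, j] = (f(x + ei + ej) - f(x + ei - ej)
                       - f(x - ei + ej) + f(x - ei - ej)) / (4.0 * h * h)
    return H

def gradient(f, x, h=1e-5):
    # central-difference gradient of a scalar function f at x
    x = np.asarray(x, float); g = np.zeros(len(x))
    for i in range(len(x)):
        e = np.zeros(len(x)); e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2.0 * h)
    return g

# observed information = Hessian of the negative log-likelihood at the MLE
I_obs = hessian(lambda p: neg_loglik(p, y, R), fit.x)
V = np.linalg.inv(I_obs)                     # asymptotic variance-covariance matrix
se = np.sqrt(np.diag(V))
z = norm.ppf(0.975)
aci = np.column_stack([fit.x - z * se, fit.x + z * se])   # 95% ACIs for theta, eta, gamma

# delta method for S(t) at t = 0.4: var(S_hat) = grad(S)^T V grad(S)
gS = gradient(lambda p: ipep_sf(0.4, *p), fit.x)
var_S = gS @ V @ gS
S_ci = (ipep_sf(0.4, *fit.x) - z * np.sqrt(var_S), ipep_sf(0.4, *fit.x) + z * np.sqrt(var_S))
```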

5. Bootstrap Confidence Intervals

Bootstrap techniques can provide confidence intervals with better finite-sample accuracy than the asymptotic approximation. Consequently, we also present CIs based on two parametric bootstrap methods: the percentile bootstrap (Boot-p) and the bootstrap-t (Boot-t).

5.1. Boot-p Method

(1) From the original data y = (y_{1:m:n}, y_{2:m:n}, ..., y_{m:m:n}), compute the MLEs \hat{\theta}, \hat{\eta}, and \hat{\gamma} by solving Equations (9)–(11).
(2) Using the algorithm explained in Balakrishnan and Sandhu [17], generate a Prog-II censored bootstrap sample y* = (y*_{1:m:n}, y*_{2:m:n}, ..., y*_{m:m:n}) from the IPEP distribution with parameters (\hat{\theta}, \hat{\eta}, \hat{\gamma}) and the same Prog-II censoring scheme R_1, R_2, ..., R_m.
(3) From the bootstrap sample, derive the MLEs \hat{\omega}* (in our case, \omega = \theta, \eta, \gamma, S(t) and h(t)).
(4) Repeat steps (2) and (3) N_{Boot} times to obtain \hat{\omega}*_1, \hat{\omega}*_2, ..., \hat{\omega}*_{N_{Boot}}, where \hat{\omega}*_i = (\hat{\theta}*_i, \hat{\eta}*_i, \hat{\gamma}*_i, \hat{S}*_i(t), \hat{h}*_i(t)).
(5) Arrange the \hat{\omega}*_i, i = 1, 2, ..., N_{Boot}, in ascending order to obtain \hat{\omega}*_{(1)}, \hat{\omega}*_{(2)}, ..., \hat{\omega}*_{(N_{Boot})}.
Let G_1(z) = P(\hat{\omega}* \le z) be the cdf of \hat{\omega}*. For a given probability z, define \hat{\omega}_{boot\text{-}p}(z) = G_1^{-1}(z). The approximate bootstrap-p 100(1-\vartheta)\% CI of \hat{\omega} is then given by the following:
\left(\hat{\omega}_{boot\text{-}p}\left(\tfrac{\vartheta}{2}\right),\; \hat{\omega}_{boot\text{-}p}\left(1-\tfrac{\vartheta}{2}\right)\right).
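A compact sketch of the Boot-p procedure for the three parameters follows, assuming the prog_type2_uniform and neg_loglik helpers defined in the earlier sketches; B, the seed, and the function name boot_p_ci are illustrative choices.

```python
import numpy as np
from scipy.optimize import minimize

def boot_p_ci(theta_hat, eta_hat, gamma_hat, n, m, R, B=1000, level=0.95, seed=1):
    # percentile (Boot-p) CIs for (theta, eta, gamma) under Prog-II censoring;
    # reuses prog_type2_uniform and neg_loglik from the earlier sketches
    rng = np.random.default_rng(seed)
    R = np.asarray(R)
    est = np.empty((B, 3))
    for b in range(B):
        u = prog_type2_uniform(n, m, R, rng)
        # invert the IPEP cdf at the fitted parameters (parametric bootstrap sample)
        yb = ((1.0 - (1.0 - u) ** (1.0 / theta_hat)) ** (-1.0 / eta_hat) - 1.0) ** (-1.0 / gamma_hat)
        est[b] = minimize(neg_loglik, x0=[theta_hat, eta_hat, gamma_hat],
                          args=(yb, R), method="Nelder-Mead").x
    alpha = 1.0 - level
    return np.quantile(est, alpha / 2, axis=0), np.quantile(est, 1 - alpha / 2, axis=0)

# e.g. lo, hi = boot_p_ci(*fit.x, n=20, m=10, R=[5, 5] + [0] * 8, B=500)
```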

5.2. Boot-t Method

(1)–(3) The same as in the parametric Boot-p method.
(4) Using the asymptotic variance–covariance matrix I^{-1}(\hat{\theta}*, \hat{\eta}*, \hat{\gamma}*), obtain the estimated variances of the bootstrap estimates, including those of \hat{S}*(t) and \hat{h}*(t).
(5) Compute the statistic T_\omega as follows:
T_{\omega} = \frac{\hat{\omega}^{*} - \hat{\omega}}{\sqrt{\operatorname{var}(\hat{\omega}^{*})}},
where \hat{\omega} denotes the MLE of \omega from the original sample and \operatorname{var}(\hat{\omega}^{*}) denotes the asymptotic variance of the bootstrap estimate.
(6) Repeat steps (2)–(5) N_{Boot} times and obtain the bootstrap statistics T_{\omega}^{(1)}, T_{\omega}^{(2)}, ..., T_{\omega}^{(N_{Boot})}.
Assume that G_2(z) = P(T \le z) is the cdf of T_\omega. For a given z, define \hat{\omega}_{boot\text{-}t}(z) = \hat{\omega} + G_2^{-1}(z)\sqrt{\operatorname{var}(\hat{\omega})}. The approximate bootstrap-t 100(1-\vartheta)\% CI of \hat{\omega} = (\hat{\theta}, \hat{\eta}, \hat{\gamma}, \hat{S}(t), \hat{h}(t)) is then as follows:
\left(\hat{\omega}_{boot\text{-}t}\left(\tfrac{\vartheta}{2}\right),\; \hat{\omega}_{boot\text{-}t}\left(1-\tfrac{\vartheta}{2}\right)\right).

6. Bayesian Estimation

Now, we derive the BEs under the assumption that the parameters are random. Parameter uncertainty is described by a joint prior distribution specified before the failure data are gathered. The Bayesian method is helpful in this situation because it allows prior information to be incorporated into the analysis of failure data; the unknown parameters are treated as random variables whose distributions are updated by the data. The BEs of the unknown parameters, together with S(t) and h(t), are obtained under both the SEL and LINEX loss functions. We assume θ, η, and γ to be three independent parameters following gamma prior distributions, as follows:
\pi_1(\theta) \propto \theta^{a_1-1}e^{-b_1\theta}, \quad \theta>0,\; a_1>0,\; b_1>0, \qquad \pi_2(\eta) \propto \eta^{a_2-1}e^{-b_2\eta}, \quad \eta>0,\; a_2>0,\; b_2>0, \qquad \pi_3(\gamma) \propto \gamma^{a_3-1}e^{-b_3\gamma}, \quad \gamma>0,\; a_3>0,\; b_3>0,
where the parameters are assumed to be unknown and the hyperparameters a_i and b_i, i = 1, 2, 3, are selected to reflect the prior knowledge. By combining the priors with the likelihood function in (7) and applying Bayes' theorem, the posterior distribution of the parameters, denoted by \pi(\theta, \eta, \gamma \mid \underline{y}), can be expressed as follows:
\pi(\theta,\eta,\gamma \mid \underline{y}) = \frac{\pi_1(\theta)\pi_2(\eta)\pi_3(\gamma)\,L(\theta,\eta,\gamma \mid \underline{y})}{\int_0^\infty\int_0^\infty\int_0^\infty \pi_1(\theta)\pi_2(\eta)\pi_3(\gamma)\,L(\theta,\eta,\gamma \mid \underline{y})\,d\theta\,d\eta\,d\gamma}.  (15)
The joint posterior density function in (15) is used to generate samples via the MCMC approach, whose major objective here is to approximate the required posterior integrals. We implement the MCMC method using the Gibbs procedure with M-H sampling steps. The joint posterior distribution is proportional to the following:
\pi(\theta,\eta,\gamma \mid \underline{y}) \propto \theta^{a_1+m-1}\eta^{a_2+m-1}\gamma^{a_3+m-1}e^{-b_1\theta-b_2\eta-b_3\gamma}\prod_{i=1}^{m}y_i^{-\gamma-1}\left(1+y_i^{-\gamma}\right)^{-\eta-1}\left[1-\left(1+y_i^{-\gamma}\right)^{-\eta}\right]^{\theta-1}\prod_{i=1}^{m}\left[1-\left(1+y_i^{-\gamma}\right)^{-\eta}\right]^{\theta R_i}.  (16)
For θ , η , and γ , the full conditionals are obtained as follows:
\pi_1(\theta \mid \eta,\gamma,\underline{y}) \propto \theta^{a_1+m-1}e^{-b_1\theta}\prod_{i=1}^{m}\left[1-\left(1+y_i^{-\gamma}\right)^{-\eta}\right]^{\theta-1}\left[1-\left(1+y_i^{-\gamma}\right)^{-\eta}\right]^{\theta R_i},  (17)
\pi_2(\eta \mid \theta,\gamma,\underline{y}) \propto \eta^{a_2+m-1}e^{-b_2\eta}\prod_{i=1}^{m}\left(1+y_i^{-\gamma}\right)^{-\eta-1}\left[1-\left(1+y_i^{-\gamma}\right)^{-\eta}\right]^{\theta-1} \times \prod_{i=1}^{m}\left[1-\left(1+y_i^{-\gamma}\right)^{-\eta}\right]^{\theta R_i}  (18)
and
\pi_3(\gamma \mid \theta,\eta,\underline{y}) \propto \gamma^{a_3+m-1}e^{-b_3\gamma}\prod_{i=1}^{m}y_i^{-\gamma-1}\left(1+y_i^{-\gamma}\right)^{-\eta-1}\left[1-\left(1+y_i^{-\gamma}\right)^{-\eta}\right]^{\theta-1} \times \prod_{i=1}^{m}\left[1-\left(1+y_i^{-\gamma}\right)^{-\eta}\right]^{\theta R_i}.  (19)
It should be observed that the integrals involving (17)–(19) cannot be evaluated analytically. Therefore, samples are drawn from the joint posterior density function in (16) using the MCMC approach. The plots shown in Figure 2, Figure 3 and Figure 4 indicate that, although these conditionals are difficult to sample directly by conventional methods, they are similar in shape to normal distributions.
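For later use in the sampler, the log of the unnormalized joint posterior (16) can be evaluated directly; each full conditional (17)–(19) is simply this function viewed as a function of one parameter with the other two held fixed. A minimal sketch follows; the gamma hyperparameter values are placeholders, not the ones used in the paper.

```python
import numpy as np

def log_post(theta, eta, gamma, y, R, a=(1.0, 1.0, 1.0), b=(0.1, 0.1, 0.1)):
    # log of the unnormalised joint posterior (16); hyperparameters a, b are illustrative
    if min(theta, eta, gamma) <= 0:
        return -np.inf
    m = len(y)
    t = 1.0 + y ** (-gamma)
    A = np.log1p(-t ** (-eta))           # log[1 - (1 + y^-gamma)^(-eta)]
    return ((a[0] + m - 1) * np.log(theta) - b[0] * theta
            + (a[1] + m - 1) * np.log(eta) - b[1] * eta
            + (a[2] + m - 1) * np.log(gamma) - b[2] * gamma
            - (gamma + 1.0) * np.sum(np.log(y))
            - (eta + 1.0) * np.sum(np.log(t))
            + (theta - 1.0) * np.sum(A)
            + theta * np.sum(R * A))
```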

6.1. Loss Function

In Bayesian analysis, the loss function (LF) plays an essential role in quantifying the risk associated with an estimator. We consider both symmetric and asymmetric LFs in this analysis. The symmetric SEL function is straightforward and widely used, and its Bayes estimator is the posterior mean, whereas an asymmetric LF such as LINEX yields a different Bayes estimator by weighting over- and under-estimation unequally.

6.1.1. Symmetric Bayes Estimation

Under the SEL function, the Bayes estimates of the parameters θ, η, and γ are the corresponding posterior means:
\hat{\theta}_{SB} = E(\theta \mid \underline{y}) = K^{-1}\int_0^\infty\int_0^\infty\int_0^\infty \theta\, g(\theta,\eta,\gamma)\,d\theta\,d\eta\,d\gamma,
\hat{\eta}_{SB} = E(\eta \mid \underline{y}) = K^{-1}\int_0^\infty\int_0^\infty\int_0^\infty \eta\, g(\theta,\eta,\gamma)\,d\theta\,d\eta\,d\gamma
and
\hat{\gamma}_{SB} = E(\gamma \mid \underline{y}) = K^{-1}\int_0^\infty\int_0^\infty\int_0^\infty \gamma\, g(\theta,\eta,\gamma)\,d\theta\,d\eta\,d\gamma,
where g(\theta,\eta,\gamma) denotes the unnormalized joint posterior on the right-hand side of (16) and
K = \int_0^\infty\int_0^\infty\int_0^\infty g(\theta,\eta,\gamma)\,d\theta\,d\eta\,d\gamma
is the normalizing constant (the denominator of (15)).

6.1.2. Asymmetric Bayes Estimation

The LINEX function can be used to derive the BE of θ, η and γ as follows:
\hat{\theta}_{LB} = -\frac{1}{c}\ln E\left(e^{-c\theta} \mid \underline{y}\right), \quad c \neq 0,
where
E\left(e^{-c\theta} \mid \underline{y}\right) = K^{-1}\int_0^\infty\int_0^\infty\int_0^\infty e^{-c\theta}\, g(\theta,\eta,\gamma)\,d\theta\,d\eta\,d\gamma.
The LINEX estimates of η and γ are obtained analogously as
\hat{\eta}_{LB} = -\frac{1}{c}\ln E\left(e^{-c\eta} \mid \underline{y}\right) \quad \text{and} \quad \hat{\gamma}_{LB} = -\frac{1}{c}\ln E\left(e^{-c\gamma} \mid \underline{y}\right), \quad c \neq 0,
with E(e^{-c\eta} \mid \underline{y}) and E(e^{-c\gamma} \mid \underline{y}) defined by replacing e^{-c\theta} with e^{-c\eta} and e^{-c\gamma}, respectively, in the integral above.
In order to obtain posterior distribution samples for computing the BEs of θ, η and γ of the IPEP(θ, η, γ) distribution under Prog-II censoring, we suggest using the MCMC approach; in particular, Gibbs sampling and Metropolis-within-Gibbs samplers, which are the classes of MCMC techniques employed here.
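Given MCMC draws of a quantity ω, the SEL and LINEX estimates reduce to simple sample averages, as in the following sketch (the function names are ours):

```python
import numpy as np

def sel_estimate(draws):
    # SEL Bayes estimate: posterior mean of the MCMC draws
    return float(np.mean(draws))

def linex_estimate(draws, c):
    # LINEX Bayes estimate: -(1/c) * ln E[exp(-c * omega) | data], c != 0
    draws = np.asarray(draws, float)
    return float(-np.log(np.mean(np.exp(-c * draws))) / c)

# usage with posterior draws from the sampler of Section 7, e.g.
# theta_sel = sel_estimate(theta_draws); theta_linex = linex_estimate(theta_draws, c=2.0)
```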

7. Metropolis–Hastings Algorithm

A general-purpose M C M C approach is the M-H algorithm, originally presented by Metropolis et al. [18] and extended by Hastings [19]. The M-H algorithm can select random samples from the known target distribution, regardless of how complicated it is. We can produce samples from the posterior distribution for unknown parameters to estimate interval approximations and Bayesian point estimators.
Then, the M–H algorithm is employed as follows:
(1) Start with initial values θ^(0), η^(0), γ^(0).
(2) Set j = 1.
(3) Generate θ^(j), η^(j), and γ^(j) with the following M-H steps, drawing from the full conditionals
\pi_1(\theta \mid \eta^{(j-1)}, \gamma^{(j-1)}, \underline{y}), \pi_2(\eta \mid \theta^{(j)}, \gamma^{(j-1)}, \underline{y}) and \pi_3(\gamma \mid \theta^{(j)}, \eta^{(j)}, \underline{y}),
using the normal proposal distributions
N(\theta^{(j-1)}, \operatorname{var}(\theta)), N(\eta^{(j-1)}, \operatorname{var}(\eta)) and N(\gamma^{(j-1)}, \operatorname{var}(\gamma)):
(i) Generate a proposal θ* from N(\theta^{(j-1)}, \operatorname{var}(\theta)), η* from N(\eta^{(j-1)}, \operatorname{var}(\eta)) and γ* from N(\gamma^{(j-1)}, \operatorname{var}(\gamma)).
(ii) Compute the acceptance probabilities
\psi_\theta = \min\left\{1, \frac{\pi_1(\theta^{*} \mid \eta^{(j-1)}, \gamma^{(j-1)}, \underline{y})}{\pi_1(\theta^{(j-1)} \mid \eta^{(j-1)}, \gamma^{(j-1)}, \underline{y})}\right\},
\psi_\eta = \min\left\{1, \frac{\pi_2(\eta^{*} \mid \theta^{(j)}, \gamma^{(j-1)}, \underline{y})}{\pi_2(\eta^{(j-1)} \mid \theta^{(j)}, \gamma^{(j-1)}, \underline{y})}\right\}
and
\psi_\gamma = \min\left\{1, \frac{\pi_3(\gamma^{*} \mid \theta^{(j)}, \eta^{(j)}, \underline{y})}{\pi_3(\gamma^{(j-1)} \mid \theta^{(j)}, \eta^{(j)}, \underline{y})}\right\}.
(iii) From a Uniform ( 0 , 1 ) distribution, we produce u 1 , u 2 and u 3 .
(iv) If u_1 \le \psi_\theta, accept the proposal and set θ^(j) = θ*; otherwise, set θ^(j) = θ^(j-1).
(v) If u_2 \le \psi_\eta, accept the proposal and set η^(j) = η*; otherwise, set η^(j) = η^(j-1).
(vi) If u_3 \le \psi_\gamma, accept the proposal and set γ^(j) = γ*; otherwise, set γ^(j) = γ^(j-1).
(vii) The S ( t ) and h ( t ) calculations are as follows:
S^{(j)}(t) = \left[1-\left(1+t^{-\gamma^{(j)}}\right)^{-\eta^{(j)}}\right]^{\theta^{(j)}}; \quad t>0, \qquad h^{(j)}(t) = \frac{\theta^{(j)}\eta^{(j)}\gamma^{(j)}\, t^{-\gamma^{(j)}-1}\left(1+t^{-\gamma^{(j)}}\right)^{-\eta^{(j)}-1}\left[1-\left(1+t^{-\gamma^{(j)}}\right)^{-\eta^{(j)}}\right]^{\theta^{(j)}-1}}{\left[1-\left(1+t^{-\gamma^{(j)}}\right)^{-\eta^{(j)}}\right]^{\theta^{(j)}}}; \quad t>0.
(4) Set j = j + 1.
(5) Repeat steps (3) and (4) N times.
(6) To remove the effect of the initial values, the first M simulated draws (the burn-in) are discarded after convergence is reached. The retained sample \omega^{(j)} = (\theta^{(j)}, \eta^{(j)}, \gamma^{(j)}, S^{(j)}(t), h^{(j)}(t)), j = M+1, \ldots, N, for sufficiently large N, is an approximate posterior sample that can be used to obtain the Bayes estimate
\hat{\omega}_{M} = \frac{1}{N-M}\sum_{j=M+1}^{N}\omega^{(j)}.
To compute the HPD credible interval of ω = (θ, η, γ, S(t), h(t)), a burn-in of 1000 draws is used and the retained MCMC draws \omega^{(j)} are sorted in ascending order as \omega_{[1]} < \omega_{[2]} < \cdots < \omega_{[N-M]}.
Then, the 100(1-ν)% credible interval of ω is taken as the shortest interval among (\omega_{[1]}, \omega_{[(N-M)(1-\nu)+1]}), \ldots, (\omega_{[(N-M)\nu]}, \omega_{[N-M]}).
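A condensed Python sketch of the Metropolis-within-Gibbs sampler described above, reusing log_post from the Section 6 sketch; the proposal standard deviations, chain length, and burn-in are illustrative tuning choices, not the values used in the paper.

```python
import numpy as np

def mh_within_gibbs(y, R, start, prop_sd, N=11000, burn=1000, seed=7):
    # Metropolis-within-Gibbs sampler for (theta, eta, gamma) using log_post;
    # normal random-walk proposals, updating one parameter at a time
    rng = np.random.default_rng(seed)
    cur = np.asarray(start, float)
    out = np.empty((N, 3))
    for j in range(N):
        for k in range(3):                          # k = 0: theta, 1: eta, 2: gamma
            prop = cur.copy()
            prop[k] = rng.normal(cur[k], prop_sd[k])
            log_ratio = log_post(*prop, y, R) - log_post(*cur, y, R)
            if np.log(rng.uniform()) < log_ratio:   # accept with probability min(1, ratio)
                cur = prop
        out[j] = cur
    return out[burn:]                               # discard the burn-in draws

# e.g. draws = mh_within_gibbs(y, R, start=fit.x, prop_sd=[0.5, 0.2, 0.1])
#      S_draws = ipep_sf(0.4, draws[:, 0], draws[:, 1], draws[:, 2])
```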

8. Application to Real Data and Simulation Study

To illustrate the practical applications of the provided techniques on estimation and prediction problems, we provide numerical outcomes for the estimation of IPEP model parameters using Prog-II censoring on actual datasets. The analysis involves comparing the performance of BE with MLE by analyzing both simulated and actual data.

Flood-Level Data

The real dataset, provided by Dumonceaux and Antle [3], consists of 20 observations of the maximum flood level of the Susquehanna River at Harrisburg, Pennsylvania, for the period 1890–1969. The data are listed in Table 1.
A Prog-II sample was generated from the complete dataset with m = 10 and the censoring scheme R = (5(2), 0(8)), as presented in Table 1. Prior to conducting the analysis, it is crucial to evaluate the adequacy of the presented model for these data. The K-S distance measures the difference between the fitted and empirical distribution functions, yielding a K-S value of 0.084 with an associated p-value of 0.95. These findings indicate that the IPEP distribution provides a satisfactory fit to the given dataset. Figure 5 displays the fitted IPEP distribution against the flood-level data.
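The K-S check can be reproduced in outline as follows, reusing ipep_cdf from the earlier sketch; the plugged-in parameter values are hypothetical placeholders, so the statistic and p-value will not necessarily match the ones reported above.

```python
import numpy as np
from scipy.stats import kstest

# the 20 flood-level observations as printed in Table 1
flood = np.array([0.645, 0.613, 0.315, 0.449, 0.297, 0.402, 0.379, 0.423, 0.379, 0.324,
                  0.269, 0.740, 0.218, 0.412, 0.494, 0.416, 0.338, 0.392, 0.484, 0.265])

# hypothetical fitted parameter values, for illustration only
theta_hat, eta_hat, gamma_hat = 18.5, 3.25, 1.19
stat, pval = kstest(flood, lambda x: ipep_cdf(x, theta_hat, eta_hat, gamma_hat))
print(stat, pval)
```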
Point estimates for the parameters θ, η, and γ, along with S(t) and h(t) at time t = 0.4, were obtained from the Prog-II censored real data presented in Table 1. All these point estimates, including MLE, Boot-p, Boot-t, and BE under SEL and LINEX, are presented in Table 2. The 95% confidence intervals (CIs) for the BE estimators and MLEs are reported in Table 3.
It is well established that, for values of c close to zero, the LINEX function approximates the behavior of the symmetric SEL function. Figure 6, Figure 7, Figure 8, Figure 9 and Figure 10 display the MCMC posterior density plots as well as the trace plots of the unknown parameters θ, η, γ and of the S(t) and h(t) functions. Furthermore, the MLEs of the parameters θ, η, γ, as well as of S(t) and h(t), are unique and correspond to the actual maxima, as shown in Figure 11.
Furthermore, a comparison of the CIs for the MLEs and BEs in Table 3 demonstrates that the Bayes estimators possess narrower CIs than the corresponding MLEs for all three parameters (θ, η, and γ). For the parameters and reliability functions, Table 3 shows the 95% ACIs for the MLEs and the CRIs for the MCMC (BE) results. Figure 11 visualizes the profile log-likelihoods, illustrating that the maxima are unique and agree with the MLE results. From Table 2 and Table 3, it is seen that the BEs outperform the MLEs for the Prog-II censored sample, and the Bayes CRIs have the shortest confidence lengths compared with the approximate CIs. Since the IPEP distribution is appropriate for the applicable data, this study may effectively assist in the analysis of flood-level data.

9. Simulation Study

To produce the Prog-II censored samples with initial values θ_0 = 1.2, η_0 = 1 and γ_0 = 0.5 from the IPEP distribution, the various estimators of θ, η, γ, S(t) and h(t), at t = 0.4, are compared in terms of their MSEs, \operatorname{MSE}(\hat{\omega}_k) = \frac{1}{M}\sum_{i=1}^{M}\left(\hat{\omega}_k^{(i)} - \omega_k\right)^2, k = 1, 2, 3, 4, 5, where M = 10,000 is the number of generated samples. A second criterion is used to compare the 95% CIs obtained from the asymptotic MLEs with the CRIs; for this comparison, the CP and ACL are used. The CP of a confidence interval is the percentage of times that the true parameter value is contained in the interval. Monte Carlo experiments were conducted on data generated from the IPEP distribution using the quantile function of y, obtained by solving F_{IPEP}(y; \theta, \eta, \gamma) = U as follows:
y = \left[\left(1-(1-U)^{1/\theta}\right)^{-1/\eta} - 1\right]^{-1/\gamma}.  (26)
The random variable Y = Q ( U ) is given by Equation ( 26 ) , if U is a uniform variate on the unit interval ( 0 , 1 ) .
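Equation (26) gives an immediate inverse-transform sampler; a minimal sketch, with ipep_quantile as our own illustrative name:

```python
import numpy as np

def ipep_quantile(u, theta, eta, gamma):
    # inverse of the cdf (2), i.e. Equation (26):
    # y = ([1 - (1 - u)^(1/theta)]^(-1/eta) - 1)^(-1/gamma)
    u = np.asarray(u, dtype=float)
    return ((1.0 - (1.0 - u) ** (1.0 / theta)) ** (-1.0 / eta) - 1.0) ** (-1.0 / gamma)

rng = np.random.default_rng(0)
print(ipep_quantile(rng.uniform(size=5), theta=1.2, eta=1.0, gamma=0.5))  # five IPEP variates
```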
The following progressive censoring schemes are used:
I: R_1 = n - m, R_i = 0 for i ≠ 1.
II: R_{m/2} = R_{(m/2)+1} = (n - m)/2, R_i = 0 for i ≠ m/2 and i ≠ (m/2) + 1.
III: R_m = n - m, R_i = 0 for i ≠ m.
Table 4, Table 5, Table 6, Table 7 and Table 8 show the results of the estimate parameters and their M S E s , while Table 9 displays the values of the A C L and C P of C I s .

Simulation Results

From the simulation results, we note the following:
  • According to Table 4, Table 5, Table 6, Table 7 and Table 8, the MSEs become smaller as the sample size increases, and the BEs have the smallest MSEs for the parameters θ, η, γ, S(t) and h(t). Therefore, when all variables are considered, the BEs outperform the MLEs.
  • The Bayes estimates of θ, η, γ, S(t), and h(t) are preferable in the sense that their MSEs are smaller.
  • Among the LINEX estimates, those with c = 2.0 and c = 0.0001 yield smaller MSEs, whereas the estimates obtained with c = −2.0 are larger.
  • Scheme I performs better than Schemes II and III, giving smaller MSEs for fixed values of n and m.
  • The results from the CRIs are better than those from the ACIs for all identified failures, schemes, and sample sizes, as shown in Table 9.

10. Conclusions

In the field of distribution theory, there is a continuous effort to generalize existing distributions. The aim of this work is to develop more reliable and adaptable models that may be used in a variety of censoring contexts; many different kinds of techniques have been investigated to attain this goal, as indicated by the extensive literature. The reliability and efficiency of the distribution used to fit the given data have a considerable impact on the subsequent analysis and empirical results. This work focuses on the problem of estimating the unknown parameters of the IPEP distribution using a Prog-II censoring strategy. Our strategy combines both classical and Bayesian approaches. We calculated approximate CIs and bootstrap CIs for the unknown parameters of the IPEP distribution. Furthermore, we used MCMC with the M-H technique to calculate BEs under both loss functions, together with their related HPD interval estimates. According to the simulation results, the BEs with the LINEX loss function outperform all of the traditional estimates. We also compared three censoring schemes to identify the most suitable one for life-testing experiments, which is an important consideration for reliability researchers. As a real-data application, we used the flood-level dataset for all estimations in our study. Future studies might investigate further inferential procedures for the IPEP distribution. In future work, we will use joint Prog-II censored data to estimate the IPEP distribution parameters and compare the results with those obtained under other censoring schemes.

Author Contributions

Conceptualization, D.A.R., B.S.E.-D., E.H.K. and H.N.A.; methodology, D.A.R. and E.H.K.; software, D.A.R.; validation, D.A.R., B.S.E.-D., H.N.A. and E.H.K.; formal analysis, D.A.R. and H.N.A.; investigation, D.A.R.; resources, D.A.R., B.S.E.-D. and H.N.A.; data curation, E.H.K.; writing—original draft preparation, D.A.R., B.S.E.-D., H.N.A. and E.H.K.; writing—review and editing, D.A.R., B.S.E.-D., H.N.A. and E.H.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Deanship of Scientific Research, Qassim University.

Data Availability Statement

Data are available in this paper.

Acknowledgments

The researchers would like to thank the Deanship of Scientific Research, Qassim University.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

IPEP: inverse power exponentiated Pareto
WGE: Weibull generalized exponential
Prog-II: progressive type-II
Y: continuous random variable
i.i.d.: independent, identically distributed
cdf: cumulative distribution function
pdf: probability density function
S(y): reliability function
h(y): hazard function
MLE: maximum likelihood estimation
BE: Bayes estimation
LFs: symmetric and asymmetric loss functions
CIs: confidence intervals
ACI: asymptotic confidence interval
M-H: Metropolis–Hastings
MCMC: Markov chain Monte Carlo
MSE: mean square error
CRI: credible interval
SEL: squared error loss function
LINEX: linear exponential loss function
ACL: average confidence interval length
CP: coverage probability
R_i: number of surviving units removed at the i-th failure time
∇Ŝ(y): gradient of Ŝ(y)
∇ĥ(y): gradient of ĥ(y)

References

  1. Cohen, A.C. Progressively censored samples in life testing. Technometrics 1963, 5, 327–329. [Google Scholar] [CrossRef]
  2. Khalifa, E.H.; Ramadan, D.A.; El-Desouky, B.S. A new three-Parameters Inverse Power Exponentiated Pareto Distribution: Properties and its Applications. J. Fac. Sci. 2022, 36, 2022. [Google Scholar]
  3. Dumonceaux, R.; Antle, C.E. Discrimination between the log-normal and the Weibull distributions. Technometrics 1973, 15, 923–926. [Google Scholar] [CrossRef]
  4. Balakrishnan, N.; Cramer, E. The Art of Progressive Censoring; Statistics for Industry and Technology; Birkhäuser: New York, NY, USA, 2014. [Google Scholar]
  5. Maiti, K.; Kayal, S. Estimation of parameters and reliability characteristics for a generalized Rayleigh distribution under progressive type-II censored sample. Commun. Stat.-Simul. Comput. 2021, 50, 3669–3698. [Google Scholar] [CrossRef]
  6. Meeker, W.Q.; Escobar, L.A.; Pascual, F.G. Statistical Methods for Reliability Data; John Wiley and Sons: Hoboken, NJ, USA, 2022. [Google Scholar]
  7. Cai, Y.; Gui, W. Classical and Bayesian inference for a progressive first-failure censored left-truncated normal distribution. Symmetry 2021, 13, 490. [Google Scholar] [CrossRef]
  8. Panahi, H.; Asadi, S. On adaptive progressive hybrid censored Burr type III distribution: Application to the nano droplet dispersion data. Qual. Technol. Quant. Manag. 2021, 18, 79–201. [Google Scholar] [CrossRef]
  9. Mousa, M.A.; Jaheen, Z.F. Bayesian prediction for progressively censored data from the Burr model. Stat. Pap. 2002, 43, 587–593. [Google Scholar] [CrossRef]
  10. Balakrishnan, N. Progressive censoring methodology: An appraisal. Test 2007, 16, 211–259. [Google Scholar] [CrossRef]
  11. Almetwally, E.M.; Almongy, H.M.; El sayed Mubarak, A. Bayesian and maximum likelihood estimation for the Weibull generalized exponential distribution parameters using progressive censoring schemes. Pak. J. Stat. Oper. Res. 2018, 14, 853–868. [Google Scholar] [CrossRef]
  12. Buzaridah, M.M.; Ramadan, D.A.; El-Desouky, B.S. Estimation of some lifetime parameters of flexible reduced logarithmic-inverse Lomax distribution under progressive type-II censored data. J. Math. 2022, 2022, 1690458. [Google Scholar] [CrossRef]
  13. Khalifa, E.; Ahmed, D.; El Desouky, B. Estimation to the Parameters of Truncated Weibull Rayleigh Distribution under Progressive type-II Censoring Scheme with an application to lifetime data. Alfarama J. Basic Appl. Sci. 2022, 3, 315–334. [Google Scholar] [CrossRef]
  14. Chen, Q.; Gui, W. Statistical inference of the generalized inverted exponential distribution under joint progressively type-II censoring. Entropy 2022, 24, 576. [Google Scholar] [CrossRef] [PubMed]
  15. Ahmed, E.A. Estimation of some lifetime parameters of generalized Gompertz distribution under progressively type-II censored data. Appl. Math. Model. 2009, 39, 5567–5578. [Google Scholar] [CrossRef]
  16. Greene, W. Discrete Choice Modeling; Palgrave Macmillan: London, UK, 2009; pp. 473–556. [Google Scholar]
  17. Balakrishnan, N.; Sandhu, R.A. Best linear unbiased and maximum likelihood estimation for exponential distributions under general progressive type-II censored samples. Sankhyā Indian J. Stat. Ser. B 1996, 58, 1–9. [Google Scholar]
  18. Metropolis, N.; Rosenbluth, A.W.; Rosenbluth, M.N.; Teller, A.H.; Teller, E. Equation of state calculations by fast computing machines. J. Chem. Phys. 1953, 21, 1087–1092. [Google Scholar] [CrossRef]
  19. Hastings, W.K. Monte Carlo sampling methods using Markov chains and their applications. Biometrika 1970, 57, 97–109. [Google Scholar] [CrossRef]
Figure 1. The Prog-II censoring scheme is represented schematically.
Figure 2. For θ, we display the conditional posterior density of MCMC.
Figure 3. For η, we display the conditional posterior density of MCMC.
Figure 4. For γ, we display the conditional posterior density of MCMC.
Figure 5. The IPEPD fitted to the real dataset.
Figure 6. For θ, we display the posterior density function and trace plots.
Figure 7. For η, we display the posterior density function and trace plots.
Figure 8. For γ, we display the posterior density function and trace plots.
Figure 9. For S(t), we display the posterior density function and trace plots.
Figure 10. For h(t), we display the posterior density function and trace plots.
Figure 11. The profile log-likelihood plots for MLE for the parameters and reliability functions.
Table 1. Progressively censored sample based on data of 20 flood levels.

y_i | 0.645 | 0.613 | 0.315 | 0.449 | 0.297
R_i | 5 | 5 | 0 | 0 | 0
y_i | 0.402 | 0.379 | 0.423 | 0.379 | 0.324
R_i | 0 | 0 | 0 | 0 | 0
y_i | 0.269 | 0.740 | 0.218 | 0.412 | 0.494
R_i | 0 | 0 | 0 | 0 | 0
y_i | 0.416 | 0.338 | 0.392 | 0.484 | 0.265
R_i | 0 | 0 | 0 | 0 | 0
Table 2. MLE and Bayes MCMC estimations for real data under SEL and LINEX.

Parameters | MLE | Boot-p | Boot-t | SEL | LINEX c = −0.2 | LINEX c = 0.2 | LINEX c = 0.0001
θ | 18.4965 | 21.451 | 19.752 | 26.7214 | 31.1442 | 20.924 | 26.7209
η | 3.24641 | 3.532 | 3.125 | 3.19389 | 3.34409 | 2.86674 | 3.19388
γ | 1.19367 | 1.354 | 1.124 | 1.06675 | 1.08007 | 1.05268 | 1.06675
S(0.4) | 0.675392 | 0.732 | 0.643 | 0.475241 | 0.496605 | 0.447277 | 0.47524
h(0.4) | 2.13903 | 2.975 | 2.562 | 3.75233 | 9.59553 | 2.75506 | 3.75213
Table 3. The 95% CIs of the parameters and reliability functions for real data.

Parameters | MCMC (CRIs) | ACIs
θ | (19.8089, 31.4986) | (−140.335, 177.328)
η | (1.87614, 3.70367) | (−8.0912, 14.584)
γ | (0.846438, 1.23629) | (−2.04768, 4.43503)
S(0.4) | (0.0481525, 0.670927) | (0.500616, 0.850169)
h(0.4) | (1.86308, 10.4146) | (0.640487, 3.63758)
Table 4. The bias and MSE (in brackets) of the ML and BE for θ.

(n, m) | CS | MLE | SEL | LINEX c = −2.0 | LINEX c = 2.0 | LINEX c = 0.0001
(15, 10) | I | 1.1804 (1.7371) | 1.2234 (1.7403) | 1.2238 (1.7412) | 1.223 (1.7395) | 1.2234 (1.7403)
(15, 10) | II | 1.137 (1.4764) | 1.1376 (1.4774) | 1.1394 (1.4814) | 1.1361 (1.4753) | 1.1376 (1.4774)
(15, 10) | III | 1.2754 (1.8757) | 1.2746 (1.8731) | 1.2747 (1.8732) | 1.2745 (1.8729) | 1.2746 (1.8731)
(20, 15) | I | 0.7293 (0.267) | 0.7298 (0.2675) | 0.7298 (0.2675) | 0.7298 (0.2675) | 0.7298 (0.2675)
(20, 15) | II | 1.0663 (1.3621) | 1.066 (1.3614) | 1.0662 (1.3616) | 1.0658 (1.3612) | 1.066 (1.3614)
(20, 15) | III | 1.2044 (1.6615) | 1.2046 (1.6617) | 1.2047 (1.6619) | 1.2045 (1.6615) | 1.2046 (1.6617)
(30, 20) | I | 0.9835 (1.0346) | 0.9854 (1.041) | 0.9863 (1.0464) | 0.9844 (1.0368) | 0.9854 (1.041)
(30, 20) | II | 1.0083 (1.0445) | 1.0089 (1.0454) | 1.009 (1.0454) | 1.0089 (1.0453) | 1.0089 (1.0454)
(30, 20) | III | 1.1336 (1.383) | 1.134 (1.3833) | 1.1342 (1.3837) | 1.1338 (1.3829) | 1.134 (1.3833)
(40, 20) | I | 1.104 (1.4097) | 1.1043 (1.4095) | 1.1044 (1.4098) | 1.1042 (1.4093) | 1.1043 (1.4095)
(40, 20) | II | 1.0195 (1.1165) | 1.0202 (1.119) | 1.0207 (1.1211) | 1.0198 (1.1179) | 1.0202 (1.119)
(40, 20) | III | 1.2004 (1.8254) | 1.2012 (1.8297) | 1.2013 (1.8299) | 1.2011 (1.8295) | 1.2012 (1.8297)
(50, 20) | I | 1.1123 (1.3729) | 1.1126 (1.3734) | 1.1126 (1.3734) | 1.1126 (1.3733) | 1.1126 (1.3734)
(50, 20) | II | 0.9788 (0.9609) | 0.9799 (0.9643) | 0.9803 (0.9655) | 0.9795 (0.9633) | 0.9799 (0.9643)
(50, 20) | III | 1.1804 (1.8265) | 1.18 (1.8246) | 1.18 (1.8247) | 1.18 (1.8244) | 1.18 (1.8246)
Table 5. The bias and MSE (in brackets) of the ML and BE for η.

(n, m) | CS | MLE | SEL | LINEX c = −2.0 | LINEX c = 2.0 | LINEX c = 0.0001
(15, 10) | I | 1.1914 (1.0985) | 1.1905 (1.0958) | 1.1907 (1.0963) | 1.1907 (1.0952) | 1.1905 (1.0958)
(15, 10) | II | 1.132 (1.0029) | 1.1309 (1.0027) | 1.132 (1.0053) | 1.132 (1.0005) | 1.1309 (1.0027)
(15, 10) | III | 1.2519 (1.3043) | 1.2513 (1.3032) | 1.2514 (1.3034) | 1.2514 (1.3031) | 1.2513 (1.3032)
(20, 15) | I | 0.9739 (0.8107) | 0.9745 (0.8133) | 0.9745 (0.8133) | 0.9745 (0.8133) | 0.9745 (0.8133)
(20, 15) | II | 1.0529 (0.9099) | 1.0536 (0.9114) | 1.0536 (0.9116) | 1.0536 (0.9113) | 1.0536 (0.9114)
(20, 15) | III | 1.1891 (1.1426) | 1.1887 (1.142) | 1.1889 (1.1423) | 1.1889 (1.1417) | 1.1887 (1.142)
(30, 20) | I | 1.0363 (0.8282) | 1.0362 (0.8275) | 1.0364 (0.8281) | 1.0364 (0.8271) | 1.0362 (0.8275)
(30, 20) | II | 1.0433 (0.8355) | 1.0442 (0.837) | 1.0444 (0.8378) | 1.0444 (0.8364) | 1.0442 (0.837)
(30, 20) | III | 1.1396 (1.0202) | 1.1394 (1.0206) | 1.1397 (1.021) | 1.1397 (1.0201) | 1.1394 (1.0206)
(40, 20) | I | 1.1084 (1.0142) | 1.1082 (1.014) | 1.1084 (1.0142) | 1.1084 (1.0138) | 1.1082 (1.014)
(40, 20) | II | 1.0482 (0.8571) | 1.0475 (0.8559) | 1.0476 (0.8561) | 1.0476 (0.8556) | 1.0475 (0.8559)
(40, 20) | III | 1.1136 (1.1825) | 1.1132 (1.1816) | 1.1132 (1.1817) | 1.1132 (1.1815) | 1.1132 (1.1816)
(50, 20) | I | 1.1107 (0.9518) | 1.1108 (0.9518) | 1.1108 (0.9518) | 1.1108 (0.9518) | 1.1108 (0.9518)
(50, 20) | II | 1.0042 (0.7413) | 1.0057 (0.744) | 1.0061 (0.7449) | 1.0061 (0.7428) | 1.0057 (0.744)
(50, 20) | III | 1.057 (1.1936) | 1.057 (1.1938) | 1.0571 (1.1939) | 1.0571 (1.1938) | 1.057 (1.1938)
Table 6. The bias and MSE (in brackets) of the ML and BE for γ.

(n, m) | CS | MLE | SEL | LINEX c = −2.0 | LINEX c = 2.0 | LINEX c = 0.0001
(15, 10) | I | 0.6214 (0.426) | 0.6217 (0.4261) | 0.6217 (0.4261) | 0.6217 (0.4261) | 0.6217 (0.4261)
(15, 10) | II | 0.6941 (0.6214) | 0.6943 (0.6217) | 0.6944 (0.6217) | 0.6944 (0.6217) | 0.6943 (0.6217)
(15, 10) | III | 0.8014 (1.0307) | 0.8013 (1.0302) | 0.8013 (1.0302) | 0.8013 (1.0302) | 0.8013 (1.0302)
(20, 15) | I | 0.7573 (0.6106) | 0.7575 (0.6102) | 0.7575 (0.6102) | 0.7575 (0.6102) | 0.7575 (0.6102)
(20, 15) | II | 0.658 (0.3965) | 0.6582 (0.3964) | 0.6582 (0.3964) | 0.6582 (0.3964) | 0.6582 (0.3964)
(20, 15) | III | 0.6391 (0.4083) | 0.6389 (0.4084) | 0.6389 (0.4084) | 0.6389 (0.4084) | 0.6389 (0.4084)
(30, 20) | I | 0.5899 (0.2561) | 0.5899 (0.2562) | 0.5899 (0.2562) | 0.5899 (0.2562) | 0.5899 (0.2562)
(30, 20) | II | 0.6055 (0.2851) | 0.6054 (0.2851) | 0.6054 (0.2851) | 0.6054 (0.2851) | 0.6054 (0.2851)
(30, 20) | III | 0.6025 (0.3124) | 0.6027 (0.3124) | 0.6027 (0.3124) | 0.6027 (0.3124) | 0.6027 (0.3124)
(40, 20) | I | 0.5828 (0.2621) | 0.5828 (0.2622) | 0.5828 (0.2622) | 0.5828 (0.2622) | 0.5828 (0.2622)
(40, 20) | II | 0.5815 (0.2204) | 0.5817 (0.2203) | 0.5817 (0.2203) | 0.5817 (0.2203) | 0.5817 (0.2203)
(40, 20) | III | 0.7225 (0.5905) | 0.7225 (0.5907) | 0.7225 (0.5907) | 0.7225 (0.5907) | 0.7225 (0.5907)
(50, 20) | I | 0.557 (0.2226) | 0.557 (0.2227) | 0.557 (0.2227) | 0.557 (0.2227) | 0.557 (0.2227)
(50, 20) | II | 0.5896 (0.2418) | 0.5896 (0.2417) | 0.5896 (0.2417) | 0.5896 (0.2417) | 0.5896 (0.2417)
(50, 20) | III | 0.9735 (1.4031) | 0.9735 (1.4029) | 0.9735 (1.403) | 0.9735 (1.4029) | 0.9735 (1.4029)
Table 7. The bias and MSE (in brackets) of the ML and BE for S(t).

(n, m) | CS | MLE | SEL | LINEX c = −2.0 | LINEX c = 2.0 | LINEX c = 0.0001
(15, 10) | I | 0.6588 (0.0125) | 0.6586 (0.0125) | 0.6586 (0.0125) | 0.6586 (0.0125) | 0.6586 (0.0125)
(15, 10) | II | 0.6604 (0.0121) | 0.6599 (0.0121) | 0.6601 (0.0121) | 0.6601 (0.0121) | 0.6599 (0.0121)
(15, 10) | III | 0.6689 (0.0121) | 0.6688 (0.0121) | 0.6688 (0.0121) | 0.6688 (0.0121) | 0.6688 (0.0121)
(20, 15) | I | 0.6877 (0.0068) | 0.6877 (0.0068) | 0.6877 (0.0068) | 0.6877 (0.0068) | 0.6877 (0.0068)
(20, 15) | II | 0.6608 (0.0081) | 0.661 (0.0081) | 0.661 (0.0081) | 0.661 (0.0081) | 0.661 (0.0081)
(20, 15) | III | 0.6652 (0.0082) | 0.665 (0.0082) | 0.665 (0.0082) | 0.665 (0.0082) | 0.665 (0.0082)
(30, 20) | I | 0.6678 (0.0065) | 0.6675 (0.0066) | 0.6676 (0.0065) | 0.6676 (0.0066) | 0.6675 (0.0066)
(30, 20) | II | 0.6613 (0.0055) | 0.6614 (0.0055) | 0.6614 (0.0055) | 0.6614 (0.0055) | 0.6614 (0.0055)
(30, 20) | III | 0.6641 (0.0051) | 0.664 (0.0051) | 0.664 (0.0051) | 0.664 (0.0051) | 0.664 (0.0051)
(40, 20) | I | 0.6651 (0.0056) | 0.6649 (0.0056) | 0.6649 (0.0056) | 0.6649 (0.0057) | 0.6649 (0.0056)
(40, 20) | II | 0.6598 (0.0049) | 0.6594 (0.005) | 0.6594 (0.005) | 0.6594 (0.005) | 0.6594 (0.005)
(40, 20) | III | 0.6547 (0.0048) | 0.6545 (0.0048) | 0.6545 (0.0048) | 0.6545 (0.0048) | 0.6545 (0.0048)
(50, 20) | I | 0.6582 (0.0055) | 0.6581 (0.0055) | 0.6581 (0.0055) | 0.6581 (0.0055) | 0.6581 (0.0055)
(50, 20) | II | 0.6534 (0.0045) | 0.6536 (0.0045) | 0.6537 (0.0045) | 0.6537 (0.0045) | 0.6536 (0.0045)
(50, 20) | III | 0.6449 (0.0064) | 0.645 (0.0064) | 0.645 (0.0064) | 0.645 (0.0064) | 0.645 (0.0064)
Table 8. The bias and MSE (in brackets) of the ML and BE for h(t).

(n, m) | CS | MLE | SEL | LINEX c = −2.0 | LINEX c = 2.0 | LINEX c = 0.0001
(15, 10) | I | 0.3279 (0.0191) | 0.3285 (0.0192) | 0.3286 (0.0192) | 0.3286 (0.0192) | 0.3285 (0.0192)
(15, 10) | II | 0.348 (0.039) | 0.3484 (0.0391) | 0.3486 (0.0392) | 0.3486 (0.0391) | 0.3484 (0.0391)
(15, 10) | III | 0.3653 (0.0801) | 0.3653 (0.08) | 0.3653 (0.08) | 0.3653 (0.08) | 0.3653 (0.08)
(20, 15) | I | 0.3123 (0.0231) | 0.3125 (0.0231) | 0.3125 (0.0231) | 0.3125 (0.0231) | 0.3125 (0.0231)
(20, 15) | II | 0.3256 (0.0141) | 0.3255 (0.0141) | 0.3255 (0.0141) | 0.3255 (0.0141) | 0.3255 (0.0141)
(20, 15) | III | 0.3263 (0.0155) | 0.3263 (0.0156) | 0.3263 (0.0156) | 0.3263 (0.0156) | 0.3263 (0.0156)
(30, 20) | I | 0.3016 (0.0062) | 0.3019 (0.0062) | 0.302 (0.0062) | 0.302 (0.0062) | 0.3019 (0.0062)
(30, 20) | II | 0.3192 (0.0092) | 0.3192 (0.0093) | 0.3192 (0.0093) | 0.3192 (0.0093) | 0.3192 (0.0093)
(30, 20) | III | 0.3212 (0.0087) | 0.3214 (0.0087) | 0.3214 (0.0087) | 0.3214 (0.0087) | 0.3214 (0.0087)
(40, 20) | I | 0.3064 (0.0063) | 0.3066 (0.0063) | 0.3066 (0.0063) | 0.3066 (0.0063) | 0.3066 (0.0063)
(40, 20) | II | 0.3245 (0.0124) | 0.3252 (0.0128) | 0.3252 (0.0128) | 0.3252 (0.0128) | 0.3252 (0.0128)
(40, 20) | III | 0.3556 (0.0314) | 0.3557 (0.0315) | 0.3557 (0.0315) | 0.3557 (0.0315) | 0.3557 (0.0315)
(50, 20) | I | 0.3101 (0.0066) | 0.3102 (0.0066) | 0.3102 (0.0066) | 0.3102 (0.0066) | 0.3102 (0.0066)
(50, 20) | II | 0.3323 (0.0119) | 0.3323 (0.0119) | 0.3324 (0.0119) | 0.3324 (0.0119) | 0.3323 (0.0119)
(50, 20) | III | 0.4108 (0.0743) | 0.4107 (0.0743) | 0.4107 (0.0743) | 0.4107 (0.0743) | 0.4107 (0.0743)
Table 9. The ACLs and CPs (in brackets) of the 95% CIs for the parameters and reliability functions.

(n, m) | CS | θ MLE | θ MCMC | η MLE | η MCMC | γ MLE | γ MCMC | S(t) MLE | S(t) MCMC | h(t) MLE | h(t) MCMC
(15, 10) | I | 17.5868 (0.9544) | 0.0123 (0.9737) | 14.7808 (0.93) | 0.0095 (0.9311) | 44.8695 (0.9427) | 0.003 (0.9433) | 0.5004 (0.9586) | 0.0036 (0.925) | 0.5997 (0.9456) | 0.004 (0.956)
(15, 10) | II | 29.5999 (0.9262) | 0.0206 (0.9468) | 30.8139 (0.9464) | 0.018 (0.931) | 368.392 (0.9613) | 0.0047 (0.9679) | 0.7324 (0.9568) | 0.0068 (0.9682) | 1.0057 (0.926) | 0.0065 (0.9676)
(15, 10) | III | 18.991 (0.9676) | 0.0127 (0.953) | 14.1414 (0.9307) | 0.0095 (0.9268) | 36.015 (0.9476) | 0.0033 (0.9312) | 0.4436 (0.9486) | 0.0034 (0.9421) | 0.6129 (0.9577) | 0.0034 (0.9709)
(20, 15) | I | 4.7392 (0.9673) | 0.0032 (0.9253) | 4.9634 (0.9671) | 0.003 (0.9576) | 1.8623 (0.953) | 0.0015 (0.9345) | 0.434 (0.96) | 0.001 (0.9362) | 0.4675 (0.9373) | 0.0012 (0.9562)
(20, 15) | II | 12.933 (0.9645) | 0.008 (0.9673) | 10.5979 (0.9375) | 0.0058 (0.9502) | 51.1296 (0.9523) | 0.0022 (0.9741) | 0.4007 (0.926) | 0.0023 (0.941) | 0.4346 (0.9633) | 0.0025 (0.9442)
(20, 15) | III | 16.0775 (0.9539) | 0.01 (0.9292) | 12.5219 (0.9464) | 0.0077 (0.9675) | 37.8186 (0.963) | 0.0022 (0.97) | 0.3714 (0.9341) | 0.0031 (0.931) | 0.408 (0.9605) | 0.0028 (0.949)
(30, 20) | I | 10.1025 (0.9602) | 0.0084 (0.9504) | 9.0456 (0.9551) | 0.0056 (0.9301) | 27.1896 (0.9259) | 0.0018 (0.9547) | 0.3698 (0.9708) | 0.0026 (0.9601) | 0.3686 (0.9461) | 0.0024 (0.9723)
(30, 20) | II | 9.2107 (0.9305) | 0.0058 (0.9251) | 8.2765 (0.9593) | 0.0061 (0.9364) | 64.0026 (0.9499) | 0.0019 (0.9687) | 0.3549 (0.9412) | 0.0023 (0.9289) | 0.4618 (0.9523) | 0.0022 (0.9495)
(30, 20) | III | 16.9095 (0.9603) | 0.0114 (0.9468) | 14.7537 (0.9279) | 0.0097 (0.9253) | 78.1061 (0.9649) | 0.0024 (0.9354) | 0.4086 (0.9272) | 0.0037 (0.967) | 0.5387 (0.9749) | 0.0032 (0.9633)
(40, 20) | I | 11.0281 (0.9487) | 0.007 (0.9568) | 9.151 (0.9525) | 0.0063 (0.9511) | 36.255 (0.9374) | 0.0018 (0.9493) | 0.3995 (0.9602) | 0.0027 (0.9396) | 0.3577 (0.9555) | 0.0025 (0.9591)
(40, 20) | II | 15.0009 (0.9709) | 0.0106 (0.9469) | 14.0093 (0.9721) | 0.0084 (0.9281) | 49.6802 (0.9748) | 0.0023 (0.9332) | 0.2979 (0.9492) | 0.0033 (0.9629) | 0.4564 (0.9531) | 0.0033 (0.966)
(40, 20) | III | 16.2131 (0.9422) | 0.0106 (0.9384) | 11.1727 (0.9699) | 0.0072 (0.931) | 20.4228 (0.9561) | 0.0025 (0.9254) | 0.2753 (0.9499) | 0.0028 (0.9683) | 0.4936 (0.9697) | 0.0025 (0.9289)
(50, 20) | I | 8.4469 (0.9474) | 0.0058 (0.9484) | 6.5389 (0.9671) | 0.0045 (0.9646) | 3.003 (0.966) | 0.0016 (0.9447) | 0.3366 (0.9305) | 0.0019 (0.9365) | 0.3503 (0.9274) | 0.0018 (0.9386)
(50, 20) | II | 14.27 (0.9707) | 0.0102 (0.9353) | 13.3122 (0.9712) | 0.0088 (0.927) | 108.914 (0.9532) | 0.0023 (0.9442) | 0.3081 (0.9662) | 0.0033 (0.9515) | 0.5134 (0.9434) | 0.0034 (0.937)
(50, 20) | III | 14.3368 (0.9707) | 0.0097 (0.9501) | 8.7387 (0.9386) | 0.0058 (0.9325) | 31.4154 (0.94) | 0.0042 (0.9449) | 0.2689 (0.9519) | 0.0024 (0.9269) | 0.7921 (0.9728) | 0.0024 (0.9656)
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
