Article

Generalized Bayes Estimation Based on a Joint Type-II Censored Sample from K-Exponential Populations

Yahia Abdel-Aty, Mohamed Kayid and Ghadah Alomani
1 Department of Mathematics, College of Science, Taibah University, Al-Madinah Al-Munawarah 30002, Saudi Arabia
2 Department of Mathematics, Faculty of Science, Al-Azhar University, Nasr City 11884, Egypt
3 Department of Statistics and Operations Research, College of Science, King Saud University, P.O. Box 2455, Riyadh 11451, Saudi Arabia
4 Department of Mathematical Sciences, College of Science, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
* Author to whom correspondence should be addressed.
Mathematics 2023, 11(9), 2190; https://doi.org/10.3390/math11092190
Submission received: 17 March 2023 / Revised: 3 May 2023 / Accepted: 4 May 2023 / Published: 6 May 2023
(This article belongs to the Special Issue Mathematical Modeling and Optimization of Process Industries)

Abstract

Generalized Bayes is a Bayesian study based on a learning rate parameter. This paper considers generalized Bayes estimation to study the effect of the learning rate parameter on the estimation results based on a joint type-II censored sample from exponential populations. Squared error, Linex, and general entropy loss functions are used in the Bayesian approach. Monte Carlo simulations were performed to assess how well the different approaches perform. The simulation study compares the Bayesian estimators for different values of the learning rate parameter and different loss functions.

1. Introduction

Generalized Bayes is a Bayesian analysis based on a learning rate parameter $\eta$, $0 < \eta < 1$, applied as a fractional power to the likelihood function $L \equiv L(\theta;\mathrm{data})$ of the parameter $\theta \in \Theta$; the traditional Bayesian framework is recovered when $\eta = 1$. In this paper, we show the effect of the learning rate parameter on the estimation results. That is, if the prior distribution of the parameter $\theta$ is $\pi(\theta)$, then the generalized Bayes posterior distribution for $\theta$ is:
$$\pi^{*}(\theta \mid \mathrm{data}) \propto L^{\eta}\,\pi(\theta), \qquad \theta \in \Theta,\quad 0 < \eta < 1. \tag{1}$$
For more details on the generalized Bayes method and the choice of the value of the rate parameter, see, for example [1,2,3,4,5,6,7,8,9,10]. In addition, we refer readers to [11,12] for recent work on Bayesian inversion.
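To make the role of the learning rate concrete, the following minimal sketch (our illustration, not code from this paper; the prior values and sample size are assumptions) shows how raising an exponential likelihood to the power $\eta$ tempers a conjugate gamma posterior: smaller $\eta$ leaves the posterior closer to the prior and more dispersed.

```python
# Minimal sketch: for n iid Exp(theta) observations with a Gamma(a, b) prior,
# tempering the likelihood with eta gives the posterior
# Gamma(a + eta*n, b + eta*sum(x)); eta = 1 recovers the usual posterior.
import numpy as np

rng = np.random.default_rng(0)
theta_true, n = 2.0, 30
x = rng.exponential(scale=1.0 / theta_true, size=n)

a, b = 1.0, 1.0                      # illustrative prior hyperparameters
for eta in (0.1, 0.5, 1.0):
    shape, rate = a + eta * n, b + eta * x.sum()
    print(f"eta={eta:3.1f}: posterior mean={shape/rate:.3f}, variance={shape/rate**2:.4f}")
```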
An exact inference method based on maximum likelihood estimates (MLEs) was developed by [13], who compared its performance with approximate, Bayesian, and bootstrap methods. Joint progressive type-II censoring and the expected number of failures for two populations under this scheme were introduced and studied by [14]. Exact likelihood inference for two exponential populations under joint progressive type-II censoring was studied by [15]. Exact likelihood inference for multiple exponential populations under joint censoring was developed by [16]. A study of Bayesian estimation and prediction based on a joint type-II censored sample from two exponential populations was presented by [17]. Exact likelihood inference for two populations with two-parameter exponential distributions under joint type-II censoring was studied by [18].
Suppose that products from $k$ different lines are manufactured in the same factory and that $k$ independent samples of sizes $n_j$, $1 \le j \le k$, are selected from these lines and simultaneously subjected to a lifetime test. To reduce the cost and duration of the experiment, the experimenter can terminate the lifetime test once a certain number (say $r$) of failures has occurred. In this situation, one is interested in a point or interval estimate of the mean lifetime of the units produced by these $k$ lines.
Suppose $\{X_{jn_j},\ j = 1, \dots, k\}$ are $k$ samples, where $X_{jn_j} = \{X_{j1}, X_{j2}, \dots, X_{jn_j}\}$ denotes the lifetimes of the $n_j$ units from product line $A_j$, assumed to be independent and identically distributed (iid) random variables from a population with cumulative distribution function (cdf) $F_j(x)$ and probability density function (pdf) $f_j(x)$.
Furthermore, let $N = \sum_{j=1}^{k} n_j$ be the total sample size and $r$ the total number of observed failures. Let $W_1 \le \cdots \le W_N$ denote the order statistics of the $N$ random variables $\{X_{jn_j},\ j = 1, \dots, k\}$. Under the joint Type-II censoring scheme for the $k$ samples, the observable data consist of $(\boldsymbol{\delta}, \boldsymbol{W})$, where $\boldsymbol{W} = (W_1, \dots, W_r)$, $W_i \in \{X_{j_i n_{j_i}},\ j_i = 1, \dots, k\}$, $r$ is a prefixed integer, and $\boldsymbol{\delta} = (\delta_{1j}, \dots, \delta_{rj})$ associated with $(j_1, \dots, j_r)$ is defined by:
$$\delta_{ij} = \begin{cases} 1, & \text{if } j = j_i, \\ 0, & \text{otherwise}. \end{cases} \tag{2}$$
If $r_j = \sum_{i=1}^{r} \delta_{ij}$ denotes the number of $X_j$-failures in $\boldsymbol{W}$ and $r = \sum_{j=1}^{k} r_j$, then the joint density function of $(\boldsymbol{\delta}, \boldsymbol{W})$ is given by:
$$f(\boldsymbol{\delta}, \boldsymbol{w}) = c_r \prod_{i=1}^{r} \prod_{j=1}^{k} \big(f_j(w_i)\big)^{\delta_{ij}} \cdot \prod_{j=1}^{k} \big(\bar{F}_j(w_r)\big)^{n_j - r_j}, \tag{3}$$
where $\bar{F}_j = 1 - F_j$ is the survival function of the $j$th population and $c_r = \prod_{j=1}^{k} \frac{n_j!}{(n_j - r_j)!}$.
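The joint Type-II censoring scheme described above is straightforward to simulate. The sketch below is our illustration (the helper name joint_type2_sample is hypothetical, not from the article): it pools the $k$ exponential samples, keeps the first $r$ order statistics together with their population labels, and returns the per-population failure counts and the total time-on-test statistics that appear in the likelihood of Section 2.

```python
# Illustrative sketch of generating a joint Type-II censored sample from
# k exponential populations (names are ours, not from the paper).
import numpy as np

def joint_type2_sample(thetas, ns, r, rng):
    """First r ordered failures from k pooled exponential samples, their
    population labels, the counts r_j and the statistics u_j of Section 2."""
    k = len(thetas)
    times = np.concatenate([rng.exponential(1.0 / th, n) for th, n in zip(thetas, ns)])
    labels = np.repeat(np.arange(k), ns)            # population index of each unit
    order = np.argsort(times)
    w, j_i = times[order][:r], labels[order][:r]    # (W_1..W_r) and (j_1..j_r)
    r_j = np.bincount(j_i, minlength=k)             # observed failures per line
    # u_j = sum of the observed X_j failures + w_r * (n_j - r_j)
    u_j = np.array([w[j_i == j].sum() + w[r - 1] * (ns[j] - r_j[j]) for j in range(k)])
    return w, j_i, r_j, u_j

rng = np.random.default_rng(1)
w, j_i, r_j, u_j = joint_type2_sample([1.0, 2.0, 3.0], [10, 10, 10], r=20, rng=rng)
print(r_j, u_j.round(3))
```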
The main goal of this paper is to consider the Bayesian estimation of the parameter based on the learning rate parameter under a joint censoring scheme of type-II for exponential populations when censoring is applied to the samples in a combined manner. Section 2 presents the maximum likelihood and generalized Bayes estimators, using squared error, Linex, and general entropy loss functions in the Bayesian approach to estimate the population parameters. A numerical investigation of the results from Section 2 is presented in Section 3. Finally, we conclude the paper in Section 4.

2. Estimation of the Parameters

Suppose that for $1 \le j \le k$, the $k$ populations are exponential with the following pdf and cdf:
$$f_j(x) = \theta_j e^{-\theta_j x}, \qquad F_j(x) = 1 - e^{-\theta_j x}, \qquad x > 0,\ \theta_j > 0. \tag{4}$$
Then, the likelihood function in (3) becomes:
$$L(\Theta; \boldsymbol{\delta}, \boldsymbol{w}) = c_r \prod_{i=1}^{r} \prod_{j=1}^{k} \left\{\theta_j e^{-\theta_j w_i}\right\}^{\delta_{ij}} \prod_{j=1}^{k} \left\{e^{-\theta_j w_r}\right\}^{n_j - r_j} = c_r \prod_{j=1}^{k} \theta_j^{r_j} e^{-\theta_j u_j}, \tag{5}$$
where $\Theta = (\theta_1, \dots, \theta_k)$ and $u_j = \sum_{i=1}^{r} w_i \delta_{ij} + w_r (n_j - r_j)$.

2.1. Maximum Likelihood Estimation

From (5), the MLE of $\theta_j$, for $1 \le j \le k$, is given by:
$$\hat{\theta}_{jM} = \frac{r_j}{u_j}. \tag{6}$$
Remark 1.
The MLEs of $\theta_j$ exist if we have at least $k$ failures ($r \ge k$) with at least one failure from each sample, i.e., $1 \le r_j \le r - k + 1$ and $r_j \le n_j$.
We determined the MLEs in order to compare their results with those of the Bayesian estimators, which use the three types of loss functions for different values of the learning rate parameter, as described in Section 3.
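As a quick illustration (a sketch under our own naming, not the authors' code), the MLE in (6) is a one-line computation once $r_j$ and $u_j$ are available; the check mirrors the existence condition of Remark 1.

```python
# Sketch of the MLEs in (6): theta_hat_jM = r_j / u_j, with the existence
# condition of Remark 1 checked explicitly (illustrative code).
import numpy as np

def mle_joint_exponential(r_j, u_j):
    r_j, u_j = np.asarray(r_j, dtype=float), np.asarray(u_j, dtype=float)
    if np.any(r_j == 0):
        # Remark 1: every population must contribute at least one failure.
        raise ValueError("MLE does not exist: a population has no observed failure")
    return r_j / u_j

# example with hypothetical summary statistics
print(mle_joint_exponential([5, 7, 8], [4.2, 3.1, 2.7]).round(4))
```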

2.2. Generalized Bayes Estimation

Since the parameters $\Theta$ are assumed to be unknown, we consider conjugate priors for $\Theta$ in the form of independent gamma distributions, i.e., $\theta_j \sim \mathrm{Gam}(a_j, b_j)$. Therefore, the joint prior distribution of $\Theta$ is given by:
$$\pi(\Theta) = \prod_{j=1}^{k} \pi_j(\theta_j), \tag{7}$$
where
$$\pi_j(\theta_j) = \frac{b_j^{a_j}}{\Gamma(a_j)}\, \theta_j^{a_j - 1} e^{-b_j \theta_j}, \tag{8}$$
and $\Gamma(\cdot)$ denotes the complete gamma function.
Combining (5) and (7), after raising (5) to the fractional power $\eta$, the posterior joint density function of $\Theta$ is then:
$$\pi^{*}(\Theta \mid \mathrm{data}) = \prod_{j=1}^{k} \frac{(u_j \eta + b_j)^{r_j \eta + a_j}}{\Gamma(r_j \eta + a_j)}\, \theta_j^{r_j \eta + a_j - 1} \exp\{-\theta_j (u_j \eta + b_j)\}. \tag{9}$$
Since $\pi_j$ is a conjugate prior, we see that if $\theta_j \sim \mathrm{Gam}(a_j, b_j)$, then the posterior distribution is $(\theta_j \mid \mathrm{data}) \sim \mathrm{Gam}(r_j \eta + a_j,\ u_j \eta + b_j)$.
In generalized Bayes estimation, we consider three types of loss functions:
(i)
The squared error loss function (SE), which is classified as a symmetric function and gives equal importance to losses for overestimates and underestimates of the same magnitude;
(ii)
The Linex loss function, which is asymmetric;
(iii)
The general entropy (GE) loss function.
Using (9), the Bayesian estimators of $\theta_j$ under the squared error (SE) loss function are:
$$\hat{\theta}_{jS} = E(\theta_j) = \frac{r_j \eta + a_j}{u_j \eta + b_j}, \qquad 1 \le j \le k. \tag{10}$$
Under the Linex loss function, the Bayesian estimators of $\theta_j$ are given by:
$$\hat{\theta}_{jL} = -\frac{1}{\nu}\log E\!\left(e^{-\nu \theta_j}\right) = \frac{r_j \eta + a_j}{\nu}\log\!\left(1 + \frac{\nu}{u_j \eta + b_j}\right), \qquad \nu \neq 0,\quad 1 \le j \le k, \tag{11}$$
and under the GE loss function, the Bayesian estimators of $\theta_j$ are given by:
$$\hat{\theta}_{jE} = \left\{E\!\left(\theta_j^{-c}\right)\right\}^{-1/c} = \left[\frac{\Gamma(r_j \eta + a_j - c)}{\Gamma(r_j \eta + a_j)}\right]^{-1/c} \frac{1}{u_j \eta + b_j}, \qquad 1 \le j \le k. \tag{12}$$
Remark 2.
Obviously, $\hat{\theta}_j$ for $1 \le j \le k$ in the above three cases are the unique Bayes estimators of $\theta_j$ and are thus admissible. The estimators $\hat{\theta}_{jJ}$ are the Bayes estimators of $\theta_j$ under the noninformative Jeffreys prior $\pi_J \propto \prod_{j=1}^{k} \theta_j^{-1}$, obtained directly by substituting $a_j = b_j = 0$ into (9), so that (10) leads to the MLEs $\hat{\theta}_{jM}$.
Remark 3.
For $c = 1, -1, -2$, the Bayes estimates $\hat{\theta}_{jE}$ agree with the Bayes estimates under the weighted squared error, squared error, and precautionary loss functions, respectively.
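For completeness, the three generalized Bayes estimators (10)-(12) can be evaluated directly from the gamma posterior (9). The sketch below is ours and purely illustrative (function and argument names are assumptions); the log-gamma function is used so that the GE estimator remains numerically stable for large shape parameters.

```python
# Sketch of the generalized Bayes estimators (10)-(12) obtained from the
# Gamma(r_j*eta + a_j, u_j*eta + b_j) posterior in (9).
import numpy as np
from scipy.special import gammaln

def generalized_bayes_estimates(r_j, u_j, a_j, b_j, eta, nu=0.5, c=-1.0):
    shape = np.asarray(r_j, float) * eta + np.asarray(a_j, float)   # posterior shape
    rate = np.asarray(u_j, float) * eta + np.asarray(b_j, float)    # posterior rate
    se = shape / rate                                               # eq. (10)
    linex = (shape / nu) * np.log1p(nu / rate)                      # eq. (11)
    # eq. (12): [Gamma(shape - c)/Gamma(shape)]^(-1/c) / rate, via log-gamma
    ge = np.exp(-(gammaln(shape - c) - gammaln(shape)) / c) / rate
    return se, linex, ge

# hypothetical summary statistics with the prior Delta_1 = (1, 1, 2, 1, 3, 1)
se, linex, ge = generalized_bayes_estimates(r_j=[5, 7, 8], u_j=[4.2, 3.1, 2.7],
                                            a_j=[1, 2, 3], b_j=[1, 1, 1], eta=0.5)
print(se.round(4), linex.round(4), ge.round(4))
```

Note that with $c = -1$ the GE estimator coincides with the SE estimator, which provides a simple internal check of the code.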

3. Numerical Study

This section examines the results of a Monte Carlo simulation study to evaluate the performance of the inference procedures derived in the previous section. An example is then presented to illustrate the inference methods discussed here.

3.1. Simulation Study

We considered different choices of the sample sizes $(n_1, n_2, n_3)$ of the three populations and of $r$. The exponential parameters $(\theta_1, \theta_2, \theta_3)$ were taken as (1, 2, 3), and 10,000 replicates were used in the Monte Carlo simulations. Using (6), we obtained the MLEs of $\theta_1, \theta_2, \theta_3$ and their estimated risks, which are shown in Table 1.
It should be noted that some of the simulated samples do not meet the condition in Remark 1 and therefore must be discarded. Thus, the average values of the observed numbers of failures $(\bar{r}_1, \bar{r}_2, \bar{r}_3)$ are calculated and reported in Table 1. For the Bayesian study, the hyperparameters are represented by $\Delta = (a_1, b_1, a_2, b_2, a_3, b_3)$, with $\Delta = \Delta_1 = (1, 1, 2, 1, 3, 1)$.
The results of the Bayesian estimators of $\theta_1, \theta_2, \theta_3$ at $\Delta_1$; $c = -1, -0.75, -0.5$; $\nu = 0.1, 0.5$; and $\eta = 0.1, 0.5, 0.9$ are shown in Table 2, Table 3, Table 4, Table 5, Table 6 and Table 7, where the loss at $c = -1$ is SE. Table 2, Table 3 and Table 4 report the generalized Bayes estimates under the general entropy loss function for $c = -1, -0.75, -0.5$.
Table 5, Table 6 and Table 7 report the generalized Bayes estimates under the Linex loss function for $\nu = 0.1, 0.5$ and $\eta = 0.1, 0.5, 0.9$.
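A condensed version of the simulation loop is sketched below. It is illustrative only (it uses 2000 replicates rather than 10,000 and evaluates only the SE-loss estimator of (10)); as in the study above, replicates violating the condition of Remark 1 are discarded.

```python
# Condensed sketch of the Monte Carlo study: average estimate and estimated
# risk (mean squared deviation from the true value) of the SE-loss estimator.
import numpy as np

rng = np.random.default_rng(2)
thetas = np.array([1.0, 2.0, 3.0])
ns, r, eta = (10, 10, 10), 20, 0.1
a_j, b_j = np.array([1.0, 2.0, 3.0]), np.array([1.0, 1.0, 1.0])   # Delta_1

estimates = []
for _ in range(2000):                                  # paper uses 10,000 replicates
    times = np.concatenate([rng.exponential(1.0 / th, n) for th, n in zip(thetas, ns)])
    labels = np.repeat(np.arange(len(ns)), ns)
    order = np.argsort(times)
    w, j_i = times[order][:r], labels[order][:r]
    r_j = np.bincount(j_i, minlength=len(ns))
    if np.any(r_j == 0):                               # Remark 1: discard this replicate
        continue
    u_j = np.array([w[j_i == j].sum() + w[-1] * (ns[j] - r_j[j]) for j in range(len(ns))])
    estimates.append((r_j * eta + a_j) / (u_j * eta + b_j))        # eq. (10)

estimates = np.array(estimates)
print("average estimate:", estimates.mean(axis=0).round(4))
print("estimated risk  :", ((estimates - thetas) ** 2).mean(axis=0).round(4))
```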

3.2. Illustrative Example

To illustrate the usefulness of the results developed in the previous sections, we consider three samples, each of size $n_1 = n_2 = n_3 = 10$, from Nelson's data (groups 1, 4, and 5) (p. 462, [19]), corresponding to the breakdown times, in minutes, of an insulating fluid subjected to a high load. These failure times, taken as the samples $X_i$, $i = 1, 2, 3$, and their order statistics $(W, j_i)$ are shown in Table 8.
For $r = 20, 25, 30$, the MLEs and Bayesian estimates of the parameters are shown in Table 9 and Table 10 for $\eta = 0.1$ and $0.5$, respectively, using $\Delta = \Delta_2 = (1, 2.6, 1, 2, 1, 3)$; $c = -1, -0.75, -0.5$; and $\nu = 0.1, 0.5$.
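For the complete-sample case $r = 30$ the calculation can be checked directly; the following sketch (our code, not part of the article) reproduces the corresponding MLE and SE-loss rows of Table 9 from the data in Table 8.

```python
# Sketch reproducing the r = 30 rows of Table 9: MLEs and SE-loss generalized
# Bayes estimates with eta = 0.1 and Delta_2 = (1, 2.6, 1, 2, 1, 3).
import numpy as np

x1 = [1.89, 4.03, 1.54, 0.31, 0.66, 1.7, 2.17, 1.82, 9.99, 2.24]
x2 = [1.17, 3.87, 2.8, 0.7, 3.82, 0.02, 0.5, 3.72, 0.06, 3.57]
x3 = [8.11, 3.17, 5.55, 0.80, 0.20, 1.13, 6.63, 1.08, 2.44, 0.78]

eta = 0.1
a_j, b_j = np.array([1.0, 1.0, 1.0]), np.array([2.6, 2.0, 3.0])
r_j = np.array([len(x1), len(x2), len(x3)])          # r = 30: no unit is censored
u_j = np.array([sum(x1), sum(x2), sum(x3)])          # u_j reduces to the total lifetime

print("ML      :", (r_j / u_j).round(4))                              # 0.3795 0.4943 0.3346
print("SE Bayes:", ((r_j * eta + a_j) / (u_j * eta + b_j)).round(4))  # 0.3820 0.4971 0.3339
```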

4. Conclusions

In this work, we considered a joint type-II censoring scheme when the lifetimes of three populations follow exponential distributions. We obtained the MLEs and the Bayesian estimates of the parameters using different values of the learning rate parameter $\eta$ and the SE, GE, and Linex loss functions in a simulation study and an illustrative example. For both methods, the estimates $\hat{\theta}_1, \hat{\theta}_2, \hat{\theta}_3$ improve as the sample sizes $n_i$, $i = 1, 2, 3$, and the number of observed failures $r$ increase, and the Bayes estimators perform better than the MLEs. From Table 2 to Table 7, it can be seen that the results improve as $c$ increases. In general, the best results are obtained by the generalized Bayes estimators with $\eta = 0.1$; that is, the results improve as $\eta$ becomes small. Studying this problem under other types of censoring schemes would be an interesting direction for future work.

Author Contributions

Conceptualization, Y.A.-A.; methodology, Y.A.-A. and M.K.; software, G.A.; validation, G.A.; formal analysis, M.K.; resources, Y.A.-A.; writing—original draft, Y.A.-A. and M.K.; writing—review & editing, G.A.; supervision, M.K.; project administration, Y.A.-A. and G.A. All authors have read and agreed to the published version of the manuscript.

Funding

Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2023R226), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.

Data Availability Statement

The data used to support the findings of this study are included in the article.

Acknowledgments

The authors thank the anonymous reviewers and the editor for their constructive criticism and valuable suggestions, which have greatly improved the presentation and explanations in this article. This work was supported by Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2023R226), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Bissiri, P.G.; Holmes, C.C.; Walker, S.G. A general framework for updating belief distributions. J. R. Stat. Soc. Ser. B Stat. Methodol. 2016, 78, 1103–1130.
2. De Heide, R.; Kirichenko, A.; Grünwald, P.; Mehta, N. Safe-Bayesian generalized linear regression. Int. Conf. Artif. Intell. Stat. 2020, 108, 2623–2633.
3. Grünwald, P. The safe Bayesian: Learning the learning rate via the mixability gap. In Algorithmic Learning Theory: 23rd International Conference, ALT 2012, Lyon, France, 29–31 October 2012; Springer: Berlin/Heidelberg, Germany, 2012; Volume 7568, pp. 169–183.
4. Grünwald, P. Safe probability. J. Stat. Plan. Inference 2018, 195, 47–63.
5. Grünwald, P.; van Ommen, T. Inconsistency of Bayesian inference for misspecified linear models, and a proposal for repairing it. Bayesian Anal. 2017, 12, 1069–1103.
6. Holmes, C.C.; Walker, S.G. Assigning a value to a power likelihood in a general Bayesian model. Biometrika 2017, 104, 497–503.
7. Lyddon, S.P.; Holmes, C.C.; Walker, S.G. General Bayesian updating and the loss-likelihood bootstrap. Biometrika 2019, 106, 465–478.
8. Martin, R. Invited comment on the article by van der Pas, Szabo, and van der Vaart. Bayesian Anal. 2017, 12, 1254–1258.
9. Martin, R.; Ning, B. Empirical priors and coverage of posterior credible sets in a sparse normal mean model. Sankhya A 2020, 82, 477–498.
10. Miller, J.W.; Dunson, D.B. Robust Bayesian inference via coarsening. J. Am. Stat. Assoc. 2019, 114, 1113–1125.
11. Khodadadian, A.; Noii, N.; Parvizi, M.; Abbaszadeh, M.; Wick, T.; Heitzinger, C. A Bayesian estimation method for variational phase-field fracture problems. Comput. Mech. 2020, 66, 827–849.
12. Noii, N.; Khodadadian, A.; Ulloa, J.; Aldakheel, F.; Wick, T.; François, S.; Wriggers, P. Bayesian inversion with open-source codes for various one-dimensional model problems in computational mechanics. Arch. Comput. Methods Eng. 2022, 29, 4285–4318.
13. Balakrishnan, N.; Rasouli, A. Exact likelihood inference for two exponential populations under joint Type-II censoring. Comput. Stat. Data Anal. 2008, 52, 2725–2738.
14. Parsi, S.; Bairamov, I. Expected values of the number of failures for two populations under joint Type-II progressive censoring. Comput. Stat. Data Anal. 2009, 53, 3560–3570.
15. Rasouli, A.; Balakrishnan, N. Exact likelihood inference for two exponential populations under joint progressive Type-II censoring. Commun. Stat. Theory Methods 2010, 39, 2172–2191.
16. Su, F. Exact Likelihood Inference for Multiple Exponential Populations under Joint Censoring. Ph.D. Thesis, McMaster University, Hamilton, ON, Canada, 2013.
17. Shafay, A.R.; Balakrishnan, N.; Abdel-Aty, Y. Bayesian inference based on a jointly type-II censored sample from two exponential populations. J. Stat. Comput. Simul. 2014, 84, 2427–2440.
18. Abdel-Aty, Y. Exact likelihood inference for two populations from two-parameter exponential distributions under joint Type-II censoring. Commun. Stat. Theory Methods 2017, 46, 9026–9041.
19. Nelson, W. Applied Life Data Analysis; Wiley: New York, NY, USA, 1982.
Table 1. Average values of r̄1, r̄2, r̄3 and the average value and estimated risk (ER) of the MLEs θ̂1M, θ̂2M, θ̂3M for different choices of n1, n2, n3, r.

(n1, n2, n3) | r | (r̄1, r̄2, r̄3) | θ̂1M | ER(θ̂1M) | θ̂2M | ER(θ̂2M) | θ̂3M | ER(θ̂3M)
(10, 10, 10) | 15 | (3.2, 5.2, 6.6) | 1.6578 | 2.4878 | 2.5768 | 1.4083 | 3.6371 | 1.7036
(10, 10, 10) | 20 | (4.6, 7, 8.4) | 1.3293 | 0.8232 | 2.3981 | 1.0986 | 3.5154 | 1.4353
(10, 10, 10) | 25 | (6.6, 8.8, 9.6) | 1.2090 | 0.5635 | 2.3328 | 0.9205 | 3.4128 | 1.2615
(7, 10, 13) | 15 | (2.1, 4.8, 8.1) | 2.4920 | 8.3330 | 2.6255 | 1.5384 | 3.4742 | 1.4147
(7, 10, 13) | 20 | (3, 6.6, 10.4) | 1.6938 | 2.2329 | 2.4420 | 1.1270 | 3.4024 | 1.2115
(7, 10, 13) | 25 | (4.3, 8.5, 12.2) | 1.3595 | 0.8856 | 2.3450 | 0.9430 | 3.2879 | 1.0874
(13, 10, 7) | 15 | (4.4, 5.6, 5) | 1.3523 | 0.8575 | 2.5333 | 1.3429 | 3.9389 | 2.2551
(13, 10, 7) | 20 | (6.4, 7.5, 6.1) | 1.2105 | 0.5738 | 2.4012 | 1.0682 | 3.7394 | 1.9000
(13, 10, 7) | 25 | (9.1, 9.1, 6.8) | 1.1381 | 0.4423 | 2.3021 | 0.8948 | 3.5907 | 1.7008
(20, 20, 20) | 30 | (6.1, 10.4, 13.4) | 1.2188 | 0.5791 | 2.2398 | 0.7672 | 3.3125 | 1.0128
(20, 20, 20) | 40 | (9.1, 14.1, 16.8) | 1.1385 | 0.4186 | 2.1901 | 0.6409 | 3.2480 | 0.8648
(20, 20, 20) | 50 | (13.2, 17.6, 19.2) | 1.0948 | 0.3280 | 2.1634 | 0.5686 | 3.2370 | 0.8013
(14, 20, 26) | 30 | (3.9, 9.7, 16.4) | 1.4297 | 1.5887 | 2.2755 | 0.8272 | 3.2351 | 0.8782
(14, 20, 26) | 40 | (5.8, 13.3, 20.9) | 1.2491 | 0.6099 | 2.2036 | 0.6618 | 3.1838 | 0.7458
(14, 20, 26) | 50 | (8.6, 17, 24.4) | 1.1622 | 0.4534 | 2.1639 | 0.5724 | 3.1772 | 0.6906
(26, 20, 14) | 30 | (8.8, 11.2, 10) | 1.1386 | 0.4196 | 2.2234 | 0.7277 | 3.4242 | 1.2191
(26, 20, 14) | 40 | (12.9, 14.9, 12.2) | 1.0949 | 0.3263 | 2.1899 | 0.6219 | 3.3609 | 1.0732
(26, 20, 14) | 50 | (18.2, 18.2, 13.6) | 1.0667 | 0.2733 | 2.1621 | 0.5621 | 3.3013 | 0.9981
(30, 30, 30) | 45 | (9.3, 15.6, 20.1) | 1.1421 | 0.4090 | 2.1576 | 0.5835 | 3.1945 | 0.7693
(30, 30, 30) | 60 | (13.7, 21.1, 25.2) | 1.089 | 0.3150 | 2.1253 | 0.4954 | 3.1738 | 0.6840
(30, 30, 30) | 75 | (19.7, 26.5, 28.8) | 1.0629 | 0.2525 | 2.1112 | 0.4364 | 3.1598 | 0.6248
(21, 30, 39) | 45 | (5.9, 14.5, 24.6) | 1.2336 | 0.61024 | 2.1745 | 0.6116 | 3.1535 | 0.6711
(21, 30, 39) | 60 | (8.8, 19.9, 31.3) | 1.1510 | 0.4301 | 2.1295 | 0.5067 | 3.1324 | 0.5986
(21, 30, 39) | 75 | (12.9, 25.4, 36.7) | 1.1138 | 0.3452 | 2.1040 | 0.4508 | 3.1181 | 0.5397
(39, 30, 21) | 45 | (13.2, 16.9, 14.9) | 1.0910 | 0.3179 | 2.1477 | 0.55604 | 3.2719 | 0.9112
(39, 30, 21) | 60 | (19.3, 22.4, 18.3) | 1.0633 | 0.2549 | 2.1212 | 0.4872 | 3.2461 | 0.8316
(39, 30, 21) | 75 | (27.3, 27.3, 20.4) | 1.0424 | 0.2127 | 2.11531 | 0.4348 | 3.2057 | 0.7510
Table 2. Average value of the Bayesian estimators θ̂1E, θ̂2E, θ̂3E and estimated risk for different choices of n1, n2, n3, r and Δ = Δ1, c = −1, −0.75, −0.5, η = 0.1.

η = 0.1, c = −1
(n1, n2, n3) | r | θ̂1E | ER(θ̂1E) | θ̂2E | ER(θ̂2E) | θ̂3E | ER(θ̂3E)
(10, 10, 10) | 15 | 1.0639 | 0.1289 | 2.0514 | 0.1792 | 3.0520 | 0.2142
(10, 10, 10) | 20 | 1.0578 | 0.1496 | 2.0468 | 0.2033 | 3.0579 | 0.2191
(10, 10, 10) | 25 | 1.0487 | 0.1610 | 2.0341 | 0.2164 | 3.0451 | 0.2218
(20, 20, 20) | 30 | 1.0547 | 0.1607 | 2.0533 | 0.2211 | 3.0606 | 0.2535
(20, 20, 20) | 40 | 1.0454 | 0.1700 | 2.0483 | 0.2282 | 3.0492 | 0.2677
(20, 20, 20) | 50 | 1.0446 | 0.1655 | 2.0537 | 0.2323 | 3.0512 | 0.2691
(30, 30, 30) | 45 | 1.0477 | 0.1676 | 2.0383 | 0.2294 | 3.0393 | 0.2778
(30, 30, 30) | 60 | 1.0439 | 0.16566 | 2.0537 | 0.2305 | 3.0545 | 0.2792
(30, 30, 30) | 75 | 1.0351 | 0.1604 | 2.0213 | 0.2299 | 3.0516 | 0.2775

η = 0.1, c = −0.75
(n1, n2, n3) | r | θ̂1E | ER(θ̂1E) | θ̂2E | ER(θ̂2E) | θ̂3E | ER(θ̂3E)
(10, 10, 10) | 15 | 0.9690 | 0.1198 | 1.9528 | 0.1692 | 2.9585 | 0.2070
(10, 10, 10) | 20 | 0.9689 | 0.1396 | 1.9652 | 0.1899 | 2.9515 | 0.2180
(10, 10, 10) | 25 | 0.9680 | 0.1524 | 1.9530 | 0.1987 | 2.9515 | 0.2228
(20, 20, 20) | 30 | 0.9837 | 0.1519 | 1.9816 | 0.2124 | 2.9822 | 0.2563
(20, 20, 20) | 40 | 0.9911 | 0.1579 | 1.9748 | 0.2184 | 2.9777 | 0.2579
(20, 20, 20) | 50 | 0.9884 | 0.1574 | 1.9756 | 0.2178 | 2.9714 | 0.2590
(30, 30, 30) | 45 | 0.9730 | 0.1538 | 1.9882 | 0.2189 | 2.9654 | 0.2641
(30, 30, 30) | 60 | 0.9939 | 0.1562 | 1.9855 | 0.2222 | 2.9854 | 0.2720
(30, 30, 30) | 75 | 0.9992 | 0.1526 | 1.9837 | 0.2204 | 3.0073 | 0.2705

η = 0.1, c = −0.5
(n1, n2, n3) | r | θ̂1E | ER(θ̂1E) | θ̂2E | ER(θ̂2E) | θ̂3E | ER(θ̂3E)
(10, 10, 10) | 15 | 0.8779 | 0.1573 | 1.8597 | 0.2176 | 2.8503 | 0.2425
(10, 10, 10) | 20 | 0.8906 | 0.1771 | 1.8768 | 0.2376 | 2.8590 | 0.2488
(10, 10, 10) | 25 | 0.8894 | 0.1645 | 1.8830 | 0.2310 | 2.8545 | 0.2594
(20, 20, 20) | 30 | 0.9112 | 0.1775 | 1.8823 | 0.2258 | 2.8996 | 0.2945
(20, 20, 20) | 40 | 0.9194 | 0.1726 | 1.9015 | 0.2273 | 2.8834 | 0.2640
(20, 20, 20) | 50 | 0.9343 | 0.1666 | 1.9111 | 0.2256 | 2.9009 | 0.2709
(30, 30, 30) | 45 | 0.9281 | 0.1771 | 1.9025 | 0.2301 | 2.8995 | 0.2723
(30, 30, 30) | 60 | 0.9300 | 0.1638 | 1.9281 | 0.2316 | 2.9065 | 0.2813
(30, 30, 30) | 75 | 0.9460 | 0.1518 | 1.9388 | 0.2241 | 2.8961 | 0.2675
Table 3. Average value of the Bayesian estimators θ̂1E, θ̂2E, θ̂3E and estimated risk for different choices of n1, n2, n3, r and Δ = Δ1, c = −1, −0.75, −0.5, η = 0.5.

η = 0.5, c = −1
(n1, n2, n3) | r | θ̂1E | ER(θ̂1E) | θ̂2E | ER(θ̂2E) | θ̂3E | ER(θ̂3E)
(10, 10, 10) | 15 | 1.1972 | 0.4274 | 2.2105 | 0.5615 | 3.1933 | 0.6662
(10, 10, 10) | 20 | 1.1473 | 0.4071 | 2.1902 | 0.5528 | 3.1851 | 0.6597
(10, 10, 10) | 25 | 1.1119 | 0.3704 | 2.1729 | 0.5229 | 3.1584 | 0.6367
(20, 20, 20) | 30 | 1.1510 | 0.3735 | 2.1389 | 0.5018 | 3.1645 | 0.6193
(20, 20, 20) | 40 | 1.1177 | 0.3201 | 2.0983 | 0.4543 | 3.1587 | 0.5890
(20, 20, 20) | 50 | 1.0396 | 0.2742 | 2.0935 | 0.4334 | 3.1136 | 0.5620
(30, 30, 30) | 45 | 1.0881 | 0.3076 | 2.1095 | 0.4423 | 3.0956 | 0.5516
(30, 30, 30) | 60 | 1.0669 | 0.2672 | 2.0774 | 0.4059 | 3.1780 | 0.5307
(30, 30, 30) | 75 | 1.0524 | 0.2298 | 2.0732 | 0.3664 | 3.1116 | 0.5005

η = 0.5, c = −0.75
(n1, n2, n3) | r | θ̂1E | ER(θ̂1E) | θ̂2E | ER(θ̂2E) | θ̂3E | ER(θ̂3E)
(10, 10, 10) | 15 | 1.1439 | 0.3947 | 2.1345 | 0.5446 | 3.1490 | 0.6481
(10, 10, 10) | 20 | 1.1412 | 0.3874 | 2.1343 | 0.5279 | 3.1305 | 0.6435
(10, 10, 10) | 25 | 1.1130 | 0.3539 | 2.1098 | 0.5055 | 3.1251 | 0.6169
(20, 20, 20) | 30 | 1.0886 | 0.3475 | 2.1274 | 0.4826 | 3.1118 | 0.6067
(20, 20, 20) | 40 | 1.0645 | 0.3073 | 2.0875 | 0.4494 | 3.0735 | 0.5728
(20, 20, 20) | 50 | 1.0652 | 0.2710 | 2.1057 | 0.4221 | 3.1000 | 0.5494
(30, 30, 30) | 45 | 1.0565 | 0.2997 | 2.1020 | 0.4353 | 3.1070 | 0.5508
(30, 30, 30) | 60 | 1.0676 | 0.2587 | 2.0958 | 0.4019 | 3.0733 | 0.5188
(30, 30, 30) | 75 | 1.0402 | 0.2230 | 2.0471 | 0.3625 | 3.0563 | 0.4894

η = 0.5, c = −0.5
(n1, n2, n3) | r | θ̂1E | ER(θ̂1E) | θ̂2E | ER(θ̂2E) | θ̂3E | ER(θ̂3E)
(10, 10, 10) | 15 | 1.0931 | 0.3704 | 2.0916 | 0.5080 | 3.0920 | 0.6268
(10, 10, 10) | 20 | 1.0857 | 0.3639 | 2.0800 | 0.5061 | 3.0810 | 0.6153
(10, 10, 10) | 25 | 1.0680 | 0.3335 | 2.0566 | 0.4885 | 3.0687 | 0.6097
(20, 20, 20) | 30 | 1.0768 | 0.3401 | 2.0929 | 0.4806 | 3.0683 | 0.5892
(20, 20, 20) | 40 | 1.0515 | 0.2975 | 2.0572 | 0.4450 | 3.1014 | 0.5688
(20, 20, 20) | 50 | 1.0339 | 0.2639 | 2.0787 | 0.4191 | 3.0639 | 0.5405
(30, 30, 30) | 45 | 1.0724 | 0.2963 | 2.0583 | 0.4274 | 3.0872 | 0.5358
(30, 30, 30) | 60 | 1.0252 | 0.2540 | 2.0957 | 0.3907 | 3.0558 | 0.5065
(30, 30, 30) | 75 | 1.0153 | 0.2169 | 2.0354 | 0.3572 | 3.0766 | 0.4857
Table 4. Average value of the Bayesian estimators θ̂1E, θ̂2E, θ̂3E and estimated risk for different choices of n1, n2, n3, r and Δ = Δ1, c = −1, −0.75, −0.5, η = 0.9.

η = 0.9, c = −1
(n1, n2, n3) | r | θ̂1E | ER(θ̂1E) | θ̂2E | ER(θ̂2E) | θ̂3E | ER(θ̂3E)
(10, 10, 10) | 15 | 1.2924 | 0.5662 | 2.2548 | 0.7427 | 3.3118 | 0.9021
(10, 10, 10) | 20 | 1.1990 | 0.5045 | 2.2362 | 0.6836 | 3.2827 | 0.8566
(10, 10, 10) | 25 | 1.1805 | 0.4457 | 2.2098 | 0.6479 | 3.2353 | 0.8091
(20, 20, 20) | 30 | 1.1664 | 0.4280 | 2.1739 | 0.5844 | 3.2105 | 0.7427
(20, 20, 20) | 40 | 1.1012 | 0.3541 | 2.1414 | 0.5186 | 3.1937 | 0.7014
(20, 20, 20) | 50 | 1.0763 | 0.2996 | 2.1398 | 0.4826 | 3.1701 | 0.6397
(30, 30, 30) | 45 | 1.1127 | 0.3468 | 2.1500 | 0.5000 | 3.2210 | 0.6480
(30, 30, 30) | 60 | 1.0911 | 0.2916 | 2.0927 | 0.4305 | 3.1516 | 0.5857
(30, 30, 30) | 75 | 1.0764 | 0.2452 | 2.1118 | 0.4011 | 3.1384 | 0.5517

η = 0.9, c = −0.75
(n1, n2, n3) | r | θ̂1E | ER(θ̂1E) | θ̂2E | ER(θ̂2E) | θ̂3E | ER(θ̂3E)
(10, 10, 10) | 15 | 1.2242 | 0.5390 | 2.2262 | 0.7190 | 3.2448 | 0.8698
(10, 10, 10) | 20 | 1.1704 | 0.4868 | 2.1929 | 0.6805 | 3.2537 | 0.8441
(10, 10, 10) | 25 | 1.1217 | 0.4186 | 2.1849 | 0.6323 | 3.2040 | 0.7893
(20, 20, 20) | 30 | 1.1633 | 0.4295 | 2.1419 | 0.5709 | 3.1816 | 0.7413
(20, 20, 20) | 40 | 1.1009 | 0.3548 | 2.0763 | 0.5131 | 3.1570 | 0.6703
(20, 20, 20) | 50 | 1.0710 | 0.2932 | 2.1185 | 0.4773 | 3.1497 | 0.6551
(30, 30, 30) | 45 | 1.0610 | 0.3314 | 2.1260 | 0.4933 | 3.1146 | 0.6249
(30, 30, 30) | 60 | 1.0666 | 0.2838 | 2.1065 | 0.4281 | 3.1417 | 0.5752
(30, 30, 30) | 75 | 1.0353 | 0.2327 | 2.1163 | 0.3986 | 3.0907 | 0.5386

η = 0.9, c = −0.5
(n1, n2, n3) | r | θ̂1E | ER(θ̂1E) | θ̂2E | ER(θ̂2E) | θ̂3E | ER(θ̂3E)
(10, 10, 10) | 15 | 1.1966 | 0.5169 | 2.2178 | 0.7047 | 3.1951 | 0.8582
(10, 10, 10) | 20 | 1.1408 | 0.4747 | 2.1926 | 0.6692 | 3.1770 | 0.8263
(10, 10, 10) | 25 | 1.1171 | 0.4128 | 2.1510 | 0.6177 | 3.1755 | 0.7808
(20, 20, 20) | 30 | 1.1210 | 0.4090 | 2.1776 | 0.5749 | 3.1854 | 0.7334
(20, 20, 20) | 40 | 1.0767 | 0.3399 | 2.0959 | 0.5071 | 3.1243 | 0.6774
(20, 20, 20) | 50 | 1.0738 | 0.2905 | 2.0930 | 0.4726 | 3.1130 | 0.6230
(30, 30, 30) | 45 | 1.0529 | 0.3257 | 2.1211 | 0.4892 | 3.0968 | 0.6160
(30, 30, 30) | 60 | 1.0611 | 0.2799 | 2.1081 | 0.4329 | 3.0707 | 0.5613
(30, 30, 30) | 75 | 1.0411 | 0.2381 | 2.0810 | 0.3891 | 3.0885 | 0.5405
Table 5. Average value of the Bayesian estimators θ̂1L, θ̂2L, θ̂3L and estimated risk for different choices of n1, n2, n3, r and Δ = Δ1, ν = 0.1, 0.5, η = 0.1.

η = 0.1, ν = 0.1
(n1, n2, n3) | r | θ̂1L | ER(θ̂1L) | θ̂2L | ER(θ̂2L) | θ̂3L | ER(θ̂3L)
(10, 10, 10) | 15 | 1.0210 | 0.1170 | 1.9743 | 0.1653 | 2.9286 | 0.2008
(10, 10, 10) | 20 | 1.0142 | 0.1380 | 1.9768 | 0.1841 | 2.9291 | 0.2122
(10, 10, 10) | 25 | 1.0101 | 0.1492 | 1.9707 | 0.1888 | 2.9283 | 0.2173
(20, 20, 20) | 30 | 1.0051 | 0.1531 | 1.9962 | 0.2068 | 2.9430 | 0.2423
(20, 20, 20) | 40 | 1.0176 | 0.1589 | 1.9997 | 0.2128 | 2.9496 | 0.2489
(20, 20, 20) | 50 | 1.0070 | 0.1599 | 2.0026 | 0.2152 | 2.9737 | 0.2562
(30, 30, 30) | 45 | 1.0132 | 0.1556 | 1.9823 | 0.2144 | 2.9669 | 0.2624
(30, 30, 30) | 60 | 1.0328 | 0.1571 | 2.0014 | 0.2161 | 2.9830 | 0.2670
(30, 30, 30) | 75 | 1.0286 | 0.1525 | 1.9916 | 0.2147 | 2.9812 | 0.2658

η = 0.1, ν = 0.5
(n1, n2, n3) | r | θ̂1L | ER(θ̂1L) | θ̂2L | ER(θ̂2L) | θ̂3L | ER(θ̂3L)
(10, 10, 10) | 15 | 0.8878 | 0.1371 | 1.7221 | 0.3093 | 2.5511 | 0.4943
(10, 10, 10) | 20 | 0.8993 | 0.1604 | 1.7433 | 0.3208 | 2.5696 | 0.4673
(10, 10, 10) | 25 | 0.9016 | 0.1529 | 1.7496 | 0.2950 | 2.5751 | 0.4683
(20, 20, 20) | 30 | 0.9071 | 0.1482 | 1.7606 | 0.2691 | 2.6117 | 0.4220
(20, 20, 20) | 40 | 0.9295 | 0.1565 | 1.7884 | 0.2818 | 2.6441 | 0.4162
(20, 20, 20) | 50 | 0.9393 | 0.1527 | 1.8166 | 0.2720 | 2.6610 | 0.4167
(30, 30, 30) | 45 | 0.9200 | 0.1488 | 1.8062 | 0.2794 | 2.6713 | 0.4136
(30, 30, 30) | 60 | 0.9372 | 0.1515 | 1.8286 | 0.2558 | 2.6870 | 0.3724
(30, 30, 30) | 75 | 0.9495 | 0.1436 | 1.8550 | 0.2523 | 2.7153 | 0.3719
Table 6. Average value of the Bayesian estimators θ̂1L, θ̂2L, θ̂3L and estimated risk for different choices of n1, n2, n3, r and Δ = Δ1, ν = 0.1, 0.5, η = 0.5.

η = 0.5, ν = 0.1
(n1, n2, n3) | r | θ̂1L | ER(θ̂1L) | θ̂2L | ER(θ̂2L) | θ̂3L | ER(θ̂3L)
(10, 10, 10) | 15 | 1.1664 | 0.3981 | 2.1587 | 0.5353 | 3.1247 | 0.6201
(10, 10, 10) | 20 | 1.1304 | 0.3859 | 2.1463 | 0.5218 | 3.1495 | 0.6164
(10, 10, 10) | 25 | 1.1077 | 0.3516 | 2.1239 | 0.4970 | 3.0751 | 0.6016
(20, 20, 20) | 30 | 1.1254 | 0.3542 | 2.1249 | 0.4814 | 3.1296 | 0.5894
(20, 20, 20) | 40 | 1.0891 | 0.3111 | 2.1256 | 0.4537 | 3.1219 | 0.5709
(20, 20, 20) | 50 | 1.0726 | 0.2733 | 2.0812 | 0.4193 | 3.1096 | 0.5423
(30, 30, 30) | 45 | 1.0923 | 0.3031 | 2.0966 | 0.4231 | 3.0681 | 0.5447
(30, 30, 30) | 60 | 1.0882 | 0.2650 | 2.0497 | 0.3894 | 3.0733 | 0.5108
(30, 30, 30) | 75 | 1.0281 | 0.2246 | 2.0805 | 0.3624 | 3.1313 | 0.4823

η = 0.5, ν = 0.5
(n1, n2, n3) | r | θ̂1L | ER(θ̂1L) | θ̂2L | ER(θ̂2L) | θ̂3L | ER(θ̂3L)
(10, 10, 10) | 15 | 1.0765 | 0.3300 | 1.9605 | 0.4364 | 2.8693 | 0.5419
(10, 10, 10) | 20 | 1.0658 | 0.3294 | 1.9662 | 0.4388 | 2.8899 | 0.5379
(10, 10, 10) | 25 | 1.0588 | 0.3133 | 1.9698 | 0.4253 | 2.8856 | 0.5215
(20, 20, 20) | 30 | 1.0418 | 0.3126 | 2.0013 | 0.4207 | 2.9282 | 0.5278
(20, 20, 20) | 40 | 1.0619 | 0.2852 | 1.9951 | 0.4012 | 2.9664 | 0.5122
(20, 20, 20) | 50 | 1.0700 | 0.2616 | 2.0169 | 0.3881 | 2.9589 | 0.4893
(30, 30, 30) | 45 | 1.0319 | 0.2783 | 2.0101 | 0.3914 | 2.9623 | 0.5000
(30, 30, 30) | 60 | 1.0443 | 0.2474 | 2.0633 | 0.3658 | 2.9677 | 0.4717
(30, 30, 30) | 75 | 1.0523 | 0.2186 | 2.0166 | 0.3378 | 2.9503 | 0.4375
Table 7. Average value of the Bayesian estimators θ̂1L, θ̂2L, θ̂3L and estimated risk for different choices of n1, n2, n3, r and Δ = Δ1, ν = 0.1, 0.5, η = 0.9.

η = 0.9, ν = 0.1
(n1, n2, n3) | r | θ̂1L | ER(θ̂1L) | θ̂2L | ER(θ̂2L) | θ̂3L | ER(θ̂3L)
(10, 10, 10) | 15 | 1.2562 | 0.5469 | 2.2520 | 0.7191 | 3.2732 | 0.8542
(10, 10, 10) | 20 | 1.1807 | 0.4765 | 2.1859 | 0.6733 | 3.2325 | 0.8172
(10, 10, 10) | 25 | 1.1446 | 0.4205 | 2.1617 | 0.6125 | 3.1943 | 0.7835
(20, 20, 20) | 30 | 1.1599 | 0.4211 | 2.1708 | 0.5825 | 3.1887 | 0.7187
(20, 20, 20) | 40 | 1.0989 | 0.3501 | 2.1293 | 0.5098 | 3.1658 | 0.6590
(20, 20, 20) | 50 | 1.0761 | 0.2956 | 2.1474 | 0.4839 | 3.1444 | 0.6305
(30, 30, 30) | 45 | 1.0794 | 0.3410 | 2.1228 | 0.4857 | 3.1381 | 0.6270
(30, 30, 30) | 60 | 1.0735 | 0.2820 | 2.0872 | 0.4301 | 3.0966 | 0.5732
(30, 30, 30) | 75 | 1.0380 | 0.2370 | 2.0931 | 0.3952 | 3.1160 | 0.5311

η = 0.9, ν = 0.5
(n1, n2, n3) | r | θ̂1L | ER(θ̂1L) | θ̂2L | ER(θ̂2L) | θ̂3L | ER(θ̂3L)
(10, 10, 10) | 15 | 1.1740 | 0.4682 | 2.1289 | 0.6062 | 3.0448 | 0.7299
(10, 10, 10) | 20 | 1.1486 | 0.4443 | 2.1051 | 0.5959 | 3.0577 | 0.7124
(10, 10, 10) | 25 | 1.0937 | 0.3885 | 2.0651 | 0.5546 | 3.0121 | 0.6752
(20, 20, 20) | 30 | 1.1146 | 0.3892 | 2.0960 | 0.5320 | 3.0475 | 0.6468
(20, 20, 20) | 40 | 1.0707 | 0.3259 | 2.1091 | 0.4891 | 3.0329 | 0.6075
(20, 20, 20) | 50 | 1.0769 | 0.2872 | 2.0808 | 0.4544 | 3.0330 | 0.5710
(30, 30, 30) | 45 | 1.0966 | 0.3287 | 2.0929 | 0.4571 | 3.0283 | 0.5798
(30, 30, 30) | 60 | 1.0432 | 0.2743 | 2.0772 | 0.4116 | 3.0108 | 0.5305
(30, 30, 30) | 75 | 1.0484 | 0.2314 | 2.0513 | 0.3707 | 3.0292 | 0.5052
Table 8. The failure time data for X1, X2, and X3, and their order statistics (w, ji), where δji = 1.

Sample | Data
X1 | 1.89, 4.03, 1.54, 0.31, 0.66, 1.7, 2.17, 1.82, 9.99, 2.24
X2 | 1.17, 3.87, 2.8, 0.7, 3.82, 0.02, 0.5, 3.72, 0.06, 3.57
X3 | 8.11, 3.17, 5.55, 0.80, 0.20, 1.13, 6.63, 1.08, 2.44, 0.78

Ordered data (w, ji):
(0.02,2), (0.06,2), (0.20,3), (0.31,1), (0.50,2), (0.66,1), (0.70,2), (0.78,3), (0.80,3), (1.08,3)
(1.13,3), (1.17,2), (1.54,1), (1.70,1), (1.82,1), (1.89,1), (2.17,1), (2.24,1), (2.44,3), (2.80,2)
(3.17,3), (3.57,2), (3.72,2), (3.82,2), (3.87,2), (4.03,1), (5.55,3), (6.63,3), (8.11,3), (9.99,1)
Table 9. ML and Bayesian estimates of the parameters θ1, θ2, θ3 for different choices of r, with η = 0.1 and Δ = Δ2.

r | (r1, r2, r3) | Method | θ̂1 | θ̂2 | θ̂3
20 | (8, 6, 6) | ML | 0.4759 | 0.3647 | 0.3706
20 | (8, 6, 6) | SE (c = −1) | 0.4205 | 0.4390 | 0.3464
20 | (8, 6, 6) | GE (c = −0.75) | 0.3937 | 0.4079 | 0.3219
20 | (8, 6, 6) | GE (c = −0.5) | 0.4098 | 0.4266 | 0.3366
20 | (8, 6, 6) | Linex (ν = 0.1) | 0.4156 | 0.4330 | 0.3427
20 | (8, 6, 6) | Linex (ν = 0.5) | 0.3977 | 0.4113 | 0.3289
25 | (8, 10, 7) | ML | 0.4759 | 0.4943 | 0.3663
25 | (8, 10, 7) | SE (c = −1) | 0.4205 | 0.4971 | 0.3462
25 | (8, 10, 7) | GE (c = −0.75) | 0.3937 | 0.4684 | 0.3230
25 | (8, 10, 7) | GE (c = −0.5) | 0.3665 | 0.4393 | 0.2994
25 | (8, 10, 7) | Linex (ν = 0.1) | 0.4156 | 0.4911 | 0.3427
25 | (8, 10, 7) | Linex (ν = 0.5) | 0.3977 | 0.4686 | 0.3297
30 | (10, 10, 10) | ML | 0.3795 | 0.4943 | 0.3346
30 | (10, 10, 10) | SE (c = −1) | 0.3820 | 0.4971 | 0.3339
30 | (10, 10, 10) | GE (c = −0.75) | 0.3600 | 0.4684 | 0.3146
30 | (10, 10, 10) | GE (c = −0.5) | 0.3376 | 0.4393 | 0.2951
30 | (10, 10, 10) | Linex (ν = 0.1) | 0.3784 | 0.4911 | 0.3312
30 | (10, 10, 10) | Linex (ν = 0.5) | 0.3649 | 0.4686 | 0.3207
Table 10. Bayesian estimates of the parameters θ1, θ2, θ3 for different choices of r, with η = 0.5 and Δ = Δ2.

r | (r1, r2, r3) | Loss | θ̂1 | θ̂2 | θ̂3
20 | (8, 6, 6) | SE (c = −1) | 0.4543 | 0.3912 | 0.3605
20 | (8, 6, 6) | GE (c = −0.75) | 0.4433 | 0.3794 | 0.3497
20 | (8, 6, 6) | GE (c = −0.5) | 0.4322 | 0.3676 | 0.3387
20 | (8, 6, 6) | Linex (ν = 0.1) | 0.4523 | 0.3893 | 0.3589
20 | (8, 6, 6) | Linex (ν = 0.5) | 0.4443 | 0.3819 | 0.3526
25 | (8, 10, 7) | SE (c = −1) | 0.4543 | 0.4953 | 0.3584
25 | (8, 10, 7) | GE (c = −0.75) | 0.4433 | 0.4852 | 0.3488
25 | (8, 10, 7) | GE (c = −0.5) | 0.4322 | 0.4751 | 0.3391
25 | (8, 10, 7) | Linex (ν = 0.1) | 0.4523 | 0.4932 | 0.3570
25 | (8, 10, 7) | Linex (ν = 0.5) | 0.4443 | 0.4853 | 0.3515
30 | (10, 10, 10) | SE (c = −1) | 0.3803 | 0.4953 | 0.3344
30 | (10, 10, 10) | GE (c = −0.75) | 0.3726 | 0.4852 | 0.3276
30 | (10, 10, 10) | GE (c = −0.5) | 0.3648 | 0.4751 | 0.3207
30 | (10, 10, 10) | Linex (ν = 0.1) | 0.3791 | 0.4932 | 0.3334
30 | (10, 10, 10) | Linex (ν = 0.5) | 0.3744 | 0.4853 | 0.3298