Article

Generalized Fiducial Inference for the Generalized Rayleigh Distribution

Xuan Zhu, Weizhong Tian and Chengliang Tian
1 Department of Mathematics, Xi’an University of Technology, Xi’an 710054, China
2 College of Big Data and Internet, Shenzhen Technology University, Shenzhen 518118, China
3 College of Computer Science and Technology, Qingdao University, Qingdao 266071, China
* Author to whom correspondence should be addressed.
Modelling 2023, 4(4), 611-627; https://doi.org/10.3390/modelling4040035
Submission received: 16 October 2023 / Revised: 11 November 2023 / Accepted: 14 November 2023 / Published: 17 November 2023

Abstract

This article focuses on the interval estimation of the generalized Rayleigh distribution with scale and shape parameters. The generalized fiducial method is used to construct the fiducial point estimators as well as the fiducial confidence intervals, and then their performance is compared with other methods such as the maximum likelihood estimation, Bayesian estimation and parametric bootstrap method. Monte Carlo simulation studies are carried out to examine the efficiency of the methods in terms of the mean square error, coverage probability and average length. Finally, two real data sets are presented to demonstrate the applicability of the proposed method.

1. Introduction

In parametric models, the Rayleigh distribution plays a crucial role in reliability theory and lifetime data analysis. Statisticians have therefore been interested in defining new classes of univariate distributions by adding one or more shape parameters to provide greater flexibility for modeling real data in a wider range of applications. Surles and Padgett [1] introduced the two-parameter Burr Type X distribution and formally named it the generalized Rayleigh (GR) distribution. A random variable X follows the GR distribution if its probability density function (PDF) is
\[ f(x;\beta,\lambda)=2\beta\lambda^{2}x\,e^{-(\lambda x)^{2}}\left(1-e^{-(\lambda x)^{2}}\right)^{\beta-1},\qquad x>0, \]
and the corresponding cumulative distribution function (CDF) is
\[ F(x;\beta,\lambda)=\left(1-e^{-(\lambda x)^{2}}\right)^{\beta},\qquad x>0, \]
which is denoted by $GR(\beta,\lambda)$, where $\beta>0$ and $\lambda>0$ are the shape and scale parameters, respectively.
Moreover, the survival function and the hazard function of $X\sim GR(\beta,\lambda)$ are
\[ S(x;\beta,\lambda)=1-\left(1-e^{-(\lambda x)^{2}}\right)^{\beta},\qquad x>0, \]
\[ h(x;\beta,\lambda)=\frac{2\beta\lambda^{2}x\,e^{-(\lambda x)^{2}}\left(1-e^{-(\lambda x)^{2}}\right)^{\beta-1}}{1-\left(1-e^{-(\lambda x)^{2}}\right)^{\beta}},\qquad x>0, \]
respectively. It is observed that the PDF of the GR distribution is a decreasing function for $\beta\le 0.5$ and a right-skewed unimodal function for $\beta>0.5$. The GR distribution can be used quite effectively to model both strength data and general lifetime data, and in many cases it serves as an alternative to the gamma or Weibull distribution. It is worth noting that the two-parameter GR distribution is a special case of the exponentiated Weibull family proposed by Mudholkar and Srivastava [2]. It is one of the canonical life models in areas such as software reliability, bearings and other industrial equipment, and vacuum electronic components, and it has a wide range of applications.
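For concreteness, the density, distribution and hazard functions above translate directly into R; the short sketch below is our own illustration (the function names dGR, pGR and hGR are not from any package) and assumes positive scalars beta and lambda.
# density, CDF and hazard function of the GR(beta, lambda) distribution
dGR <- function(x, beta, lambda) {
  2*beta*lambda^2*x*exp(-(lambda*x)^2)*(1 - exp(-(lambda*x)^2))^(beta - 1)
}
pGR <- function(x, beta, lambda) (1 - exp(-(lambda*x)^2))^beta
hGR <- function(x, beta, lambda) dGR(x, beta, lambda)/(1 - pGR(x, beta, lambda))
# example: for beta > 0.5 the density is right-skewed and unimodal
curve(dGR(x, beta = 1, lambda = 2), from = 0, to = 2)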
Many authors have handled the GR distribution in terms of statistical inference. For instance, Kundu and Raqab [3] proposed several estimation procedures for the unknown parameters. Abd-Elfattah [4] studied goodness-of-fit tests for the GR distribution with unknown parameters. Raqab and Madi [5] studied inference for the GR distribution based on progressively censored data. Naqash et al. [6] studied Bayesian estimation of the GR distribution under different priors when λ is known, finding that the Bayesian estimator under a double gamma-exponential prior has a smaller posterior standard error. Zhang and Gui [7] studied estimation of the reliability of the GR distribution under progressive type-II censoring.
In addition to the classical and Bayesian methods, fiducial inference, originally proposed by Fisher [8], stands as a powerful statistical approach. Fisher's original motivation was to overcome a limitation of the Bayesian framework, namely the need to assume a prior distribution when there is insufficient or no information about the parameters. The generalized fiducial inference (GFI) framework introduced by Hannig [9] has gained increasing recognition. Wandler and Hannig [10] applied GFI to the largest mean of a multivariate normal distribution. Wandler and Hannig [11] used the fiducial framework to make inference about both the parameters and the extreme quantiles of the generalized Pareto distribution; these intervals were designed to maintain stated coverage while having average lengths comparable to or shorter than those of other methods, as summarized in Hannig et al. [12]. Li and Xu [13] studied fiducial inference for the Birnbaum–Saunders distribution. Yan and Liu [14] studied generalized fiducial inference for the generalized exponential distribution. Qi et al. [15] studied the fiducial distribution for the skew normal distribution. Çetinkaya [16] studied the Chen distribution with the fiducial inference method. Tian et al. [17] studied generalized fiducial confidence intervals for the difference of medians of independent log-normal distributions. In our experience, the performance of the maximum likelihood estimator (MLE) is often unstable for small samples, so it is worth exploring alternative inference methods for the parameters of the GR distribution. Moreover, when prior information about the parameters is insufficient, the choice of prior distribution can strongly affect Bayesian inference. In this context, the GFI method can help address both issues, and we therefore introduce GFI for the GR distribution. The performance of the interval estimators under different parameter values and sample sizes is studied and compared with the other three methods.
In this paper, we use the GFI to construct fiducial point estimators and fiducial confidence intervals for the GR distribution. The rest of the article is organized as follows. The methods for constructing confidence intervals based on classical frequentist inference, the bootstrap, GFI and the Bayesian approach are presented in Section 2. Simulations comparing the performance of these estimation methods are reported in Section 3. Two real data sets are analyzed in Section 4 to illustrate the usefulness of the proposed method. Some conclusions are provided in Section 5.

2. Methods

2.1. Frequentist Inference

Given observed data $\mathbf{x}=(x_{1},\ldots,x_{n})^{T}$ from $GR(\beta,\lambda)$, the log-likelihood function $L(\beta,\lambda\mid\mathbf{x})$ is
\[ L(\beta,\lambda\mid\mathbf{x})=C+n\ln\beta+2n\ln\lambda+\sum_{i=1}^{n}\ln x_{i}-\lambda^{2}\sum_{i=1}^{n}x_{i}^{2}+(\beta-1)\sum_{i=1}^{n}\ln\left(1-e^{-(\lambda x_{i})^{2}}\right), \]
where $C=n\ln 2$ is a constant that does not depend on the parameters.
The MLEs of β and λ can be derived from
\[ \frac{\partial L}{\partial\beta}=\frac{n}{\beta}+\sum_{i=1}^{n}\ln\left(1-e^{-(\lambda x_{i})^{2}}\right)=0,\qquad \frac{\partial L}{\partial\lambda}=\frac{2n}{\lambda}-2\lambda\sum_{i=1}^{n}x_{i}^{2}+2\lambda(\beta-1)\sum_{i=1}^{n}\frac{x_{i}^{2}e^{-(\lambda x_{i})^{2}}}{1-e^{-(\lambda x_{i})^{2}}}=0. \]
The MLEs of $\beta$ and $\lambda$ are computed by solving these nonlinear equations numerically; the profile log-likelihood with respect to $\lambda$ is unimodal and its second-order partial derivative is negative, so the solution is well defined. For more details, see Kundu and Raqab [3].
Confidence intervals for $\beta$ and $\lambda$ can be obtained from the following asymptotic normal distribution of $\hat\beta$ and $\hat\lambda$:
\[ (\hat\beta,\hat\lambda)^{T}\xrightarrow{L}N_{2}\left((\beta,\lambda)^{T},I_{0}^{-1}\right), \]
where $\xrightarrow{L}$ denotes convergence in distribution as the sample size increases, and the inverse of the observed Fisher information matrix $I_{0}$ is
\[ I_{0}^{-1}=\left.\begin{pmatrix}-\frac{\partial^{2}L}{\partial\beta^{2}} & -\frac{\partial^{2}L}{\partial\beta\,\partial\lambda}\\ -\frac{\partial^{2}L}{\partial\beta\,\partial\lambda} & -\frac{\partial^{2}L}{\partial\lambda^{2}}\end{pmatrix}^{-1}\right|_{(\beta,\lambda)^{T}=(\hat\beta,\hat\lambda)^{T}}=\begin{pmatrix}\operatorname{var}(\hat\beta) & \operatorname{cov}(\hat\beta,\hat\lambda)\\ \operatorname{cov}(\hat\beta,\hat\lambda) & \operatorname{var}(\hat\lambda)\end{pmatrix}. \]
Consequently, the asymptotic $100(1-\alpha)\%$ confidence intervals for $\beta$ and $\lambda$ are
\[ I_{\beta}:\ \hat\beta\pm z_{\alpha/2}\sqrt{\operatorname{var}(\hat\beta)}\qquad\text{and}\qquad I_{\lambda}:\ \hat\lambda\pm z_{\alpha/2}\sqrt{\operatorname{var}(\hat\lambda)}, \]
where $\operatorname{var}(\hat\beta)$ and $\operatorname{var}(\hat\lambda)$ are obtained from the observed information matrix and $z_{\alpha/2}$ denotes the upper $\alpha/2$ quantile of the standard normal distribution.
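As an illustration, the Wald-type intervals above can be obtained by maximizing the log-likelihood numerically and inverting the observed information matrix; the sketch below mirrors the ACI function of Appendix B, with the parameter order c(beta, lambda), the default starting values and the function name GR.wald being our own choices.
GR.wald <- function(x, alpha = 0.05, start = c(1, 1)) {
  negloglik <- function(theta) {
    beta <- theta[1]; lambda <- theta[2]; n <- length(x)
    -(n*log(2) + n*log(beta) + 2*n*log(lambda) + sum(log(x)) -
      lambda^2*sum(x^2) + (beta - 1)*sum(log(1 - exp(-(lambda*x)^2))))
  }
  fit <- optim(start, negloglik, method = "L-BFGS-B",
               lower = c(0.01, 0.01), hessian = TRUE)
  se <- sqrt(diag(solve(fit$hessian)))   # observed-information standard errors
  z <- qnorm(1 - alpha/2)
  cbind(estimate = fit$par, lower = fit$par - z*se, upper = fit$par + z*se)
}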

2.2. Bootstrap Technique

The bootstrap method introduced by Efron [18] is a resampling technique based on randomly drawing new samples from the original sample to construct the sampling distribution of a particular statistic. The bootstrap procedure requires the following steps (a short code sketch follows the list):
(1)
Randomly generate sample data $(x_{1},\ldots,x_{n})$ from $GR(\beta,\lambda)$. Compute the MLEs of the unknown parameters $(\beta,\lambda)$ and denote the result by $(\hat\beta,\hat\lambda)$.
(2)
Use $(\hat\beta,\hat\lambda)$ to generate a bootstrap sample of observations $(x_{1}^{*},\ldots,x_{n}^{*})$ from the GR distribution.
(3)
Based on the bootstrap sample, compute the MLEs of $(\beta,\lambda)$ and denote the result by $(\hat\beta^{*},\hat\lambda^{*})$.
(4)
Repeat Steps (2)–(3) $B$ times to obtain estimates $(\hat\beta_{1}^{*},\ldots,\hat\beta_{B}^{*})$ and $(\hat\lambda_{1}^{*},\ldots,\hat\lambda_{B}^{*})$.
(5)
Sort the $B$ estimates of $\beta$ and of $\lambda$ in ascending order, respectively. The $100(1-\alpha)\%$ percentile bootstrap confidence interval (PBCI) for $\beta$ is
\[ I_{PBCI,\beta}:\ \hat\beta^{*}_{B(\alpha/2)}\le\beta\le\hat\beta^{*}_{B(1-\alpha/2)}. \]
The same treatment of $\hat\lambda^{*}$ leads to the PBCI for $\lambda$,
\[ I_{PBCI,\lambda}:\ \hat\lambda^{*}_{B(\alpha/2)}\le\lambda\le\hat\lambda^{*}_{B(1-\alpha/2)}. \]
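A compact R version of these steps might look as follows; this is a sketch that assumes rGR() (the GR sampler of Appendix B) and a function GR.mle() returning the vector of MLEs $(\hat\beta,\hat\lambda)$, with B = 1000 resamples as in the simulations.
boot.pci <- function(x, B = 1000, alpha = 0.05) {
  hat <- GR.mle(x)                                               # step (1): MLEs of the original sample
  stars <- replicate(B, GR.mle(rGR(length(x), hat[1], hat[2])))  # steps (2)-(4): bootstrap MLEs
  rbind(beta   = quantile(stars[1, ], c(alpha/2, 1 - alpha/2)),  # step (5): percentile intervals
        lambda = quantile(stars[2, ], c(alpha/2, 1 - alpha/2)))
}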

2.3. Generalized Fiducial Inference

Let the data-generating equation be
\[ \mathbf{x}=G(\mathbf{U},\theta), \]
where $\mathbf{x}=(x_{1},\ldots,x_{n})^{T}$ is the data, $\theta\in\Theta\subset\mathbb{R}^{p}$ is a $p$-dimensional parameter vector and $\mathbf{U}$ is a random vector with a completely known distribution. Under certain differentiability conditions, Hannig et al. [12] derived a user-friendly formula for calculating the generalized fiducial distribution (GFD) of $\theta$:
\[ f_{F}(\theta\mid\mathbf{x})=\frac{f(\mathbf{x}\mid\theta)\,J(\mathbf{x},\theta)}{\int_{\Theta}f(\mathbf{x}\mid\theta')\,J(\mathbf{x},\theta')\,d\theta'}, \]
where $f(\mathbf{x}\mid\theta)$ represents the joint density function of $\mathbf{x}$ and
\[ J(\mathbf{x},\theta)=D\left(\left.\frac{d}{d\theta}G(\mathbf{u},\theta)\right|_{\mathbf{u}=G^{-1}(\mathbf{x},\theta)}\right), \]
where $D(A)=\sum_{\mathbf{i}}\left|\det\,(A)_{\mathbf{i}}\right|$, the sum runs over all $\binom{n}{p}$ $p$-tuples of indices $\mathbf{i}=(i_{1},\ldots,i_{p})$ with $1\le i_{1}<\cdots<i_{p}\le n$, and the submatrix $(A)_{\mathbf{i}}$ is the $p\times p$ matrix formed by the rows $i_{1},\ldots,i_{p}$ of $A$. For more details, see Hannig et al. [12].
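To make the definition of $D(\cdot)$ concrete, the brute-force evaluation over all $p$-row subsets can be written in a few lines of R; the helper below is our own illustration (the function name D.fun is not from the paper) and is only feasible for moderate $n$ and $p$.
# D(A): sum of |det| over all p x p submatrices formed by choosing p of the n rows of A
D.fun <- function(A) {
  p <- ncol(A)
  idx <- combn(nrow(A), p)              # all index tuples i1 < ... < ip
  sum(apply(idx, 2, function(i) abs(det(A[i, , drop = FALSE]))))
}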
For the problem at hand, we have
\[ U_{i}=F(x_{i};\beta,\lambda),\qquad i=1,\ldots,n, \]
where $F(x_{i};\beta,\lambda)=\left(1-e^{-(\lambda x_{i})^{2}}\right)^{\beta}$ is the distribution function of $GR(\beta,\lambda)$ and $U_{i}$ follows a uniform distribution on $(0,1)$. Inverting this relation yields the data-generating equation $x_{i}=G(u_{i},\beta,\lambda)$, namely
\[ x_{i}=\frac{1}{\lambda}\sqrt{-\ln\left(1-u_{i}^{1/\beta}\right)}. \]
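This inversion is exactly how observations from $GR(\beta,\lambda)$ can be simulated; a one-line R version, equivalent to the rGR function in Appendix B (which calls the shape alpha and the scale beta), is:
# generate n observations from GR(beta, lambda) by inverting the CDF at uniform draws
rGR <- function(n, beta, lambda) sqrt(-log(1 - runif(n)^(1/beta)))/lambda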
Then, we have
\[ \left.\frac{\partial G}{\partial\beta}\right|_{u_{i}=\left(1-e^{-(\lambda x_{i})^{2}}\right)^{\beta}}=-\frac{1}{2\beta\lambda^{2}x_{i}}\left(e^{(\lambda x_{i})^{2}}-1\right)\ln\left(1-e^{-(\lambda x_{i})^{2}}\right), \]
and
\[ \left.\frac{\partial G}{\partial\lambda}\right|_{u_{i}=\left(1-e^{-(\lambda x_{i})^{2}}\right)^{\beta}}=-\frac{x_{i}}{\lambda}. \]
Combining the above partial derivatives with the Jacobian formula, it follows that
\[ J(\mathbf{x},\beta,\lambda)=D\left(\left.\frac{d}{d(\beta,\lambda)}G(\mathbf{u},\beta,\lambda)\right|_{\mathbf{u}=G^{-1}(\mathbf{x},\beta,\lambda)}\right)=\frac{1}{2\beta\lambda^{3}}\sum_{1\le i<j\le n}\left|\frac{x_{i}}{x_{j}}\left(e^{(\lambda x_{j})^{2}}-1\right)\ln\left(1-e^{-(\lambda x_{j})^{2}}\right)-\frac{x_{j}}{x_{i}}\left(e^{(\lambda x_{i})^{2}}-1\right)\ln\left(1-e^{-(\lambda x_{i})^{2}}\right)\right|, \]
where $g(x_{i},x_{j},\lambda)=\left|\frac{x_{i}}{x_{j}}\left(e^{(\lambda x_{j})^{2}}-1\right)\ln\left(1-e^{-(\lambda x_{j})^{2}}\right)-\frac{x_{j}}{x_{i}}\left(e^{(\lambda x_{i})^{2}}-1\right)\ln\left(1-e^{-(\lambda x_{i})^{2}}\right)\right|$ denotes the summand.
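In R, this Jacobian can be accumulated with a double loop over the pairs $(i,j)$, exactly as in the fbeta function of Appendix B; the transcription below (the function name J.fun is ours) is a direct implementation.
# J(x, beta, lambda) = (2*beta*lambda^3)^(-1) * sum over i<j of g(x_i, x_j, lambda)
J.fun <- function(x, beta, lambda) {
  g <- function(xi, xj) {
    abs(xi/xj*(exp((lambda*xj)^2) - 1)*log(1 - exp(-(lambda*xj)^2)) -
        xj/xi*(exp((lambda*xi)^2) - 1)*log(1 - exp(-(lambda*xi)^2)))
  }
  s <- 0
  n <- length(x)
  for (i in 1:(n - 1)) for (j in (i + 1):n) s <- s + g(x[i], x[j])
  s/(2*beta*lambda^3)
}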
Finally, we can derive the following GFD of $(\beta,\lambda)$:
\[ f_{F}(\beta,\lambda\mid\mathbf{x})=\frac{f(\mathbf{x}\mid\beta,\lambda)\,J(\mathbf{x},\beta,\lambda)}{\int_{0}^{\infty}\int_{0}^{\infty}f(\mathbf{x}\mid\beta,\lambda)\,J(\mathbf{x},\beta,\lambda)\,d\beta\,d\lambda}, \]
where $f(\mathbf{x}\mid\beta,\lambda)=\prod_{i=1}^{n}f(x_{i};\beta,\lambda)$ and
\[ f(x_{i};\beta,\lambda)=2\beta\lambda^{2}x_{i}\,e^{-(\lambda x_{i})^{2}}\left(1-e^{-(\lambda x_{i})^{2}}\right)^{\beta-1}. \]
Specifically,
\[ f_{F}(\beta,\lambda\mid\mathbf{x})\propto\beta^{n}\lambda^{2n}\prod_{i=1}^{n}x_{i}\,e^{-\sum_{i=1}^{n}(\lambda x_{i})^{2}}\prod_{i=1}^{n}\left(1-e^{-(\lambda x_{i})^{2}}\right)^{\beta-1}\cdot\frac{1}{2\beta\lambda^{3}}\sum_{1\le i<j\le n}g(x_{i},x_{j},\lambda)\propto\beta^{n-1}\lambda^{2n-3}\prod_{i=1}^{n}x_{i}\,e^{\sum_{i=1}^{n}\left[-(\lambda x_{i})^{2}+(\beta-1)\ln\left(1-e^{-(\lambda x_{i})^{2}}\right)\right]}\sum_{1\le i<j\le n}g(x_{i},x_{j},\lambda). \]
The conditional fiducial density function of $\beta$ given $\lambda$ can then be derived as
\[ f_{F}(\beta\mid\lambda,\mathbf{x})\propto\beta^{n-1}e^{\beta\sum_{i=1}^{n}\ln\left(1-e^{-(\lambda x_{i})^{2}}\right)}, \]
and the conditional fiducial density function of λ given β is
\[ f_{F}(\lambda\mid\beta,\mathbf{x})\propto\lambda^{2n-3}e^{\sum_{i=1}^{n}\left[-(\lambda x_{i})^{2}+(\beta-1)\ln\left(1-e^{-(\lambda x_{i})^{2}}\right)\right]}\sum_{1\le i<j\le n}g(x_{i},x_{j},\lambda). \]
Obviously, sampling $\beta$ can be carried out directly with a $Ga\left(n,\,-\sum_{i=1}^{n}\ln\left(1-e^{-(\lambda x_{i})^{2}}\right)\right)$ generating routine. In contrast, the conditional fiducial density of $\lambda$ given $\beta$ cannot be reduced analytically to a standard distribution. Therefore, we use a normal proposal distribution in a Metropolis–Hastings (M-H) step within the Gibbs algorithm to obtain samples; for more details, see Tierney [19]. The Gibbs algorithm with M-H sampling for fiducial inference of the GR distribution is given as follows:
Step 1.
Set the initial values $(\beta_{0},\lambda_{0})=(\hat\beta,\hat\lambda)$, given by the MLEs.
Step 2.
Set i = 1.
Step 3.
Generate the values $\beta_{i}$ and $\lambda_{i}$ of the $i$th iteration as follows:
  • Step 3-1 Generate $\beta_{i}$ from its gamma full conditional distribution $Ga\left(n,\,-\sum_{k=1}^{n}\ln\left(1-e^{-(\lambda_{i-1}x_{k})^{2}}\right)\right)$.
  • Step 3-2 Generate $\lambda_{i}$ using the M-H algorithm with the following procedures:
    (i)
    Generate $\lambda^{*}$ from the normal proposal distribution $N(\lambda_{i-1},\operatorname{var}(\hat\lambda))$.
    (ii)
    Evaluate the acceptance probability
    \[ \Omega_{\lambda}=\min\left\{1,\ \frac{f_{F}(\lambda^{*}\mid\beta_{i},\mathbf{x})}{f_{F}(\lambda_{i-1}\mid\beta_{i},\mathbf{x})}\right\}. \]
    (iii)
    Generate a random variable v from a uniform ( 0 , 1 ) distribution.
    (iv)
    If $v<\Omega_{\lambda}$, accept the proposal and set $\lambda_{i}=\lambda^{*}$; otherwise set $\lambda_{i}=\lambda_{i-1}$.
Step 4.
Set i = i + 1 .
Step 5.
Repeat Steps 3 and 4 N times.
Step 6.
Calculate fiducial point estimators of β and λ by
\[ \hat\beta_{F}=\frac{1}{N-M}\sum_{i=M+1}^{N}\beta_{i},\qquad\hat\lambda_{F}=\frac{1}{N-M}\sum_{i=M+1}^{N}\lambda_{i}, \]
where $M$ is the length of the burn-in period, so the estimators are based on the $N-M$ saved samples. The burn-in period is the initial segment of the generated Markov chain that is discarded. After the burn-in period, the Markov chain is considered to be in its stationary state, and the saved samples represent the target distribution. Arrange $\beta_{M+1},\ldots,\beta_{N}$ in ascending order and select the $\left((N-M)\alpha/2\right)$th and $\left((N-M)(1-\alpha/2)\right)$th values of this ordering as $\hat\beta_{\alpha/2}$ and $\hat\beta_{1-\alpha/2}$, respectively; $\hat\lambda_{\alpha/2}$ and $\hat\lambda_{1-\alpha/2}$ are obtained in the same way. A short R sketch of this post-processing is given after the interval definitions below.
Then, the $100(1-\alpha)\%$ fiducial confidence intervals are
\[ I_{F,\beta}:\ \hat\beta_{\alpha/2}\le\beta\le\hat\beta_{1-\alpha/2}, \]
\[ I_{F,\lambda}:\ \hat\lambda_{\alpha/2}\le\lambda\le\hat\lambda_{1-\alpha/2}. \]
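Once a chain of fiducial draws is available (for instance from the gibbs_mh routine in Appendix B), the point estimators and percentile intervals above reduce to a few lines of R; the object names chain and M below are placeholders for the stored draws and the burn-in length.
# chain: N x 2 matrix of (beta, lambda) draws from the Gibbs/M-H sampler; M: burn-in length
kept <- chain[-(1:M), ]
beta.F   <- mean(kept[, 1])                        # fiducial point estimate of beta
lambda.F <- mean(kept[, 2])                        # fiducial point estimate of lambda
ci.beta   <- quantile(kept[, 1], c(0.025, 0.975))  # 95% fiducial interval for beta
ci.lambda <- quantile(kept[, 2], c(0.025, 0.975))  # 95% fiducial interval for lambda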

2.4. Bayesian Inference

Naqash et al. [6] showed that the reference prior for $GR(\beta,\lambda)$ results in an improper posterior distribution. We therefore adopt a Bayesian approach with gamma priors on $\beta$ and $\lambda$. These priors are not conjugate, so we additionally assume that $\beta$ and $\lambda$ are independent a priori, as follows:
\[ \pi(\beta)\propto\beta^{a_{1}-1}e^{-b_{1}\beta},\quad\beta>0;\qquad\pi(\lambda)\propto\lambda^{a_{2}-1}e^{-b_{2}\lambda},\quad\lambda>0. \]
Since all the hyperparameters $a_{1},a_{2},b_{1},b_{2}$ are assumed to be known and non-negative, the joint posterior density function of $\beta$ and $\lambda$ is expressed as follows:
\[ f_{B}(\beta,\lambda\mid\mathbf{x})=\frac{f(\mathbf{x}\mid\beta,\lambda)\,\pi(\beta)\,\pi(\lambda)}{\int_{0}^{\infty}\int_{0}^{\infty}f(\mathbf{x}\mid\beta,\lambda)\,\pi(\beta)\,\pi(\lambda)\,d\beta\,d\lambda}, \]
where $f(\mathbf{x}\mid\beta,\lambda)$ is the joint density of $\mathbf{x}=(x_{1},\ldots,x_{n})^{T}$. If no prior information is available, let $a_{1}=a_{2}=b_{1}=b_{2}=0$. Therefore,
\[ f_{B}(\beta,\lambda\mid\mathbf{x})\propto\beta^{n-1}\lambda^{2n-1}\prod_{i=1}^{n}x_{i}\,e^{-\lambda^{2}\sum_{i=1}^{n}x_{i}^{2}}\prod_{i=1}^{n}\left(1-e^{-(\lambda x_{i})^{2}}\right)^{\beta-1}. \]
Then, the full conditional function of β given λ is
\[ f_{B}(\beta\mid\lambda,\mathbf{x})\propto\beta^{n-1}e^{\beta\sum_{i=1}^{n}\ln\left(1-e^{-(\lambda x_{i})^{2}}\right)}. \]
Therefore, sampling $\beta$ can again be accomplished with the $Ga\left(n,\,-\sum_{i=1}^{n}\ln\left(1-e^{-(\lambda x_{i})^{2}}\right)\right)$ generating routine. The full conditional density of $\lambda$ given $\beta$ is as follows:
\[ f_{B}(\lambda\mid\beta,\mathbf{x})\propto\lambda^{2n-1}e^{\sum_{i=1}^{n}\left[-(\lambda x_{i})^{2}+(\beta-1)\ln\left(1-e^{-(\lambda x_{i})^{2}}\right)\right]}. \]
Following the same Gibbs algorithm with M-H sampling as in the fiducial procedure, we can easily obtain the Bayesian estimates of $\beta$ and $\lambda$. Finally, the $100(1-\alpha)\%$ credible intervals for the proposed Bayesian estimates can be formulated as
\[ I_{B,\beta}:\ \hat\beta_{\alpha/2}\le\beta\le\hat\beta_{1-\alpha/2},\qquad I_{B,\lambda}:\ \hat\lambda_{\alpha/2}\le\lambda\le\hat\lambda_{1-\alpha/2}. \]
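The only change relative to the fiducial sampler is the full conditional of $\lambda$, which drops the Jacobian sum and has exponent $2n-1$ instead of $2n-3$; written as an (unnormalized) R density, this is essentially the commented-out alternative in Appendix B.
# Bayesian full conditional of lambda given beta (up to a constant), noninformative case a1=a2=b1=b2=0
cond.lambda.bayes <- function(lambda, beta, x) {
  n <- length(x)
  lambda^(2*n - 1)*exp(-lambda^2*sum(x^2))*prod((1 - exp(-(lambda*x)^2))^(beta - 1))
}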

3. Simulation

In this section, we use R (version 4.1.1) to compare the performance of the different methods proposed in the previous section for parameter estimation and confidence intervals of the GR distribution.
In point estimation, the mean square error (MSE) is used to compare the performance of the MLE, GFI and Bayesian methods, where $\mathrm{MSE}(\theta)=\frac{1}{N}\sum_{k=1}^{N}\left(\hat\theta^{(k)}-\theta\right)^{2}$ and $N=1000$ is the number of simulation replications.
In interval estimation, we use the coverage probability (CP) and the average length (AL) to compare the confidence intervals based on the MLE, the bootstrap confidence intervals, the fiducial confidence intervals based on GFI and the Bayesian credible intervals.
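For each parameter, these criteria are computed from the $N$ replications in the obvious way; the sketch below uses illustrative vector names (est, lower, upper) for the simulated point estimates and interval endpoints and theta for the true value.
# est: N point estimates; lower, upper: N interval endpoints; theta: true parameter value
MSE <- mean((est - theta)^2)                  # mean square error
CP  <- mean(lower < theta & theta < upper)    # coverage probability
AL  <- mean(upper - lower)                    # average length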
In the following, the frequentist confidence intervals (FrCI), the percentile bootstrap confidence intervals (PBCI), the Bayesian credible intervals (BaCI) and the fiducial confidence intervals (FiCI) are considered to illustrate the usefulness of the proposed method.
In the simulation, the Markov chain Monte Carlo (MCMC) method is used to obtain the parameter estimates. We run the Markov chain for N = 10,500 iterations and discard the first M = 500 values as the burn-in period. In the bootstrap simulation, we use B = 1000 resamples. The simulation study is carried out for different parameter values and sample sizes: we consider six combinations of (β, λ), namely (0.5, 1), (0.5, 2), (0.5, 3), (1, 1), (1, 2) and (1, 3), and sample sizes n = 10, 20, 30 and 50. Based on these samples, the MSE, CP and AL of the 95% fiducial (confidence or credible) intervals are calculated for each method.
We simulated the MSE of the GR distribution parameters under the MLE, GFI and Bayesian estimation for different parameter values and sample sizes; the results are shown in Table 1. As the sample size increases, the MSEs of the MLE, GFI and Bayesian methods gradually become smaller and very close to each other. The table also shows that for n = 10 and 20, GFI is superior to the MLE and the Bayesian method.
From Table 2, Table 3, Table 4, Table 5, Table 6 and Table 7, we draw the following conclusions. As the sample size increases, the ALs of the confidence intervals decrease. The FiCI and BaCI have better CPs when the sample size is small; their actual coverages for both β and λ are very close to the nominal coverage. The actual coverage of the FrCI for β is too conservative, while for λ the difference between the actual and nominal coverage of the FrCI is evident. The actual coverage of the PBCI for β and λ is clearly smaller than the nominal level. The FrCI performs better than the BaCI in terms of AL in most cases with large samples.

4. Application

4.1. Ball Bearing Data

In this section, a real data set is used to compare the different estimation procedures studied in this paper. Schafer [20] provides the numbers of $10^{6}$ revolutions before failure for each of 23 ball bearings in a life test. Meintanis et al. [21] pointed out that the GR distribution fits this sample well. The data are given in Appendix A.1.
We compute the Kolmogorov–Smirnov (KS) statistic and the corresponding p-value between the empirical distribution and the fitted cumulative distribution function. The KS statistic is 0.157 with a p-value of 0.625; hence, we cannot reject the hypothesis that this data set comes from the GR distribution. Figure 1 shows the fitted density and CDF plots for the data set. The point estimates and 95% confidence (fiducial and credible) intervals of β and λ are shown in Table 8 and Table 9. From Table 9, we can see that the FiCI is narrower than the intervals of the other methods, which is consistent with our simulation study; the GFI method provides better results than the MLE, bootstrap and Bayesian methods. Figure 2 depicts the trace plots, estimated marginal density plots and autocorrelation plots; they indicate that the Markov chain generated by the GFI procedure has converged and is stable.
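The reported KS statistic and p-value can be checked along the following lines, where ballbearing denotes the data vector of Appendix A.1 and the plugged-in parameter values are the MLEs from Table 8; since the parameters are estimated from the same data, the resulting p-value is only approximate.
# Kolmogorov-Smirnov test of the ball bearing data against the fitted GR CDF
beta.hat <- 1.202; lambda.hat <- 0.013
ks.test(ballbearing, function(q) (1 - exp(-(lambda.hat*q)^2))^beta.hat)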

4.2. COVID-19 Mortality Rate Data

In this section, for illustrative purposes, we use real data on the COVID-19 mortality rate to compare the different estimation procedures studied in this paper. The data, reported in Almetwally [21], represent the COVID-19 mortality rate in France over 51 days, recorded from 1 January to 20 February 2021. The data are given in Appendix A.2.
For this example, the KS statistic is 0.118 and the associated p-value is 0.474; therefore, the GR distribution fits this data set well. Figure 3 shows the fitted density and CDF plots for the data set. The point estimates and 95% confidence (fiducial and credible) intervals of β and λ are shown in Table 10 and Table 11. Because the sample size is fairly large, the estimates of the three methods in Table 10 are very close. According to Table 11, the GFI method gives the best interval estimates for the GR distribution. Figure 4 depicts the trace plots, estimated marginal density plots and autocorrelation plots; they show that the Markov chain generated by the GFI procedure has converged and is stable.

5. Conclusions

This paper studies statistical inference for the GR distribution. We construct the maximum likelihood, bootstrap and Bayesian estimators and propose the GFI method as an alternative, and we evaluate its performance through extensive simulation studies and two real examples. In the simulation studies, the MSEs of the GFI estimators of the GR parameters are smaller than those of the other methods in most cases, even for small sample sizes. The ALs of the fiducial (confidence or credible) intervals of all methods decrease as the sample size increases. The MLE does not perform well when the sample size is small, and the bootstrap method does not reach the nominal coverage level anywhere in the simulations. The fiducial estimates based on Hannig's method outperform the Bayesian estimates even for small sample sizes. In the future, we will improve or propose new methods to increase the CP values without inflating the ALs. In addition, we will extend our method to multivariate distributions.

Author Contributions

W.T. and C.T.: conceptualization, methodology, validation, investigation, resources, supervision, project administration, visualization, writing—review and editing; X.Z.: software, formal analysis, data curation, writing—original draft preparation, visualization. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by Natural Science Foundation of Top Talent of SZTU under grant GDRC202214.

Data Availability Statement

Datasets are provided in the paper.

Acknowledgments

The authors would like to thank the editor, and two anonymous referees for their careful reading of this article and for their constructive suggestions which considerably improved this article.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Data Set

Appendix A.1. Ball Bearing Data

17.88, 28.92, 33.00, 41.52, 42.12, 45.60, 48.80, 51.84,
51.96, 54.12, 55.56, 67.80, 68.64, 68.64, 68.88, 84.12,
93.12, 98.64, 105.12, 105.84, 127.92, 128.04, 173.40

Appendix A.2. COVID-19 Mortality Rate Data

0.0995, 0.0525, 0.0615, 0.0455, 0.1474, 0.3373, 0.1087, 0.1055, 0.2235,
0.0633, 0.0565, 0.2577, 0.1345, 0.0843, 0.1023, 0.2296, 0.0691, 0.0505,
0.1434, 0.2326, 0.1089, 0.1206, 0.2242, 0.0786, 0.0587, 0.1516, 0.2070,
0.1170, 0.1141, 0.2705, 0.0793, 0.0635, 0.1474, 0.2345, 0.1131, 0.1129,
0.2054, 0.0600, 0.0534, 0.1422, 0.2235, 0.0908, 0.1092, 0.1958, 0.0580,
0.0502, 0.1229, 0.1738, 0.0917, 0.0787, 0.1654

Appendix B. Code

ACI<-function(alpha,beta,n){
x<-rGR(n,alpha,beta)
fn<-function(theta){
alpha<-theta[1]
beta<-theta[2]
n<-length(x)
logL<-n*log(alpha)+2*n*log(beta)+sum(log(x))-beta^2*sum(x^2)+(alpha-1)*sum(log(1-exp(-beta^2*x^2)))
return(-logL)
}
res<-optim(theta<-c(alpha,beta),fn,method="L-BFGS-B",lower=c(0.01,0.01),hessian=T)
res$par
sqrt(diag(solve(res$hessian)))
down<-res$par-qnorm(0.975)*sqrt(diag(solve(res$hessian)))
up<-res$par+qnorm(0.975)*sqrt(diag(solve(res$hessian)))
return(c(res$par[1],res$par[2],down[1],up[1],down[2],up[2]))
}
p=0
q=0
m=1000
est1<-rep(0,m)
est2<-rep(0,m)
AL1<-rep(0,m)
AL2<-rep(0,m)
# note: rGR(), n, alpha and beta are defined further below and must be loaded before running this loop
for(i in 1:m){
ci<-ACI(alpha,beta,n)
est1[i]<-ci[1]
est2[i]<-ci[2]
AL1[i]<-ci[4]-ci[3]
AL2[i]<-ci[6]-ci[5]
if(ci[3]<alpha&alpha<ci[4]){
p=p+1
}
if(ci[5]<beta&beta<ci[6]){
q=q+1
}
}
bias1<-mean(est1)-alpha
var1<-var(est1)*(m-1)/m
mse1<-bias1^2+var1
bias2<-mean(est2)-beta
var2<-var(est2)*(m-1)/m
mse2<-bias2^2+var2
coverage1<-c(mean(est1),mean(est2), mse1, mse2,p/m,q/m,mean(AL1),mean(AL2))
#bootstrap
sim<-1000
NBOOT<-500
n<-30
alpha<-1
beta<-1
dis<-0
num<-0
DIS<-0
NUM<-0
# inverse-CDF sampler: here alpha is the shape (beta in the paper) and beta the scale (lambda in the paper)
rGR<-function(n,alpha,beta){
sqrt(-log(1-runif(n)^(1/alpha)))/beta
}
mle<-function(n,alpha,beta,x){
fn<-function(theta){
alpha<-theta[1]
beta<-theta[2]
n<-length(x)
logL<-n*log(2)+n*log(alpha)+2*n*log(beta)+sum(log(x))-beta^2*sum(x^2)+
(alpha-1)*sum(log(1-exp(-beta^2*x^2)))
return(-logL)
}
res<-optim(theta<-c(alpha,beta),fn,method="L-BFGS-B",lower=c(0.01,0.01),hessian=F)
res$par
return(res$par)
}
alphastar<-rep(0,NBOOT)
betastar<-rep(0,NBOOT)
BOOT.P<-function(n,alpha,beta){
para<-mle(n,alpha,beta,x)
alphahat<-para[1]
betahat<-para[2]
for(i in 1:NBOOT){
xstar<-rGR(n,alphahat,betahat)
par<-mle(n,alphahat,betahat,xstar)
par1<-par[1]
par2<-par[2]
alphastar[i]<-par1
betastar[i]<-par2
}
low1<-quantile(alphastar,0.025,names=F)
up1<-quantile(alphastar,0.975,names=F)
low2<-quantile(betastar,0.025,names=F)
up2<-quantile(betastar,0.975,names=F)
num1<-sum(alpha>low1 && alpha<up1)
dis1<-up1-low1
num2<-sum(beta>low2 && beta<up2)
dis2<-up2-low2
return(c(dis1,num1,dis2,num2))
}
for(i in 1:sim){
x<-rGR(n, alpha,beta)
samples<- BOOT.P(n,alpha,beta)
dis<-dis+samples[1]
num<-num+samples[2]
DIS<-DIS+samples[3]
NUM<-NUM+samples[4]
}
dis/sim
num/sim
DIS/sim
NUM/sim
#fiducial or bayes
# Bayesian full conditional of lambda (theta2) given beta (theta1): no Jacobian term, exponent 2n-1
# fbeta<-function(theta1,theta2){
#  theta2^(2*n-1)*exp(-theta2^2*sum(data^2))*prod((1-exp(-theta2^2*data^2))^(theta1-1))
# }
# fiducial full conditional of lambda: exponent 2n-3 times the Jacobian sum over pairs (i,j)
fbeta<-function(theta1,theta2){
result<-0
for(i in 1:(n-1)){
for(j in (i+1):n){
g<-abs(data[i]*(exp((theta2*data[j])^2)-1)*log(1-exp(-(theta2*data[j])^2))/data[j]-data[j]*
(exp((theta2*data[i])^2)-1)*log(1-exp(-(theta2*data[i])^2))/data[i])
result<-result+g
}
}
theta2^(2*n-3)*exp(-theta2^2*sum(data^2))*prod((1-exp(-theta2^2*data^2))^(theta1-1))*result
}
gibbs_mh <- function(n,theta1,theta2) {
#data<-rGR(n,theta1,theta2)
num_samples <- 10500
burn_in <- 500
theta1 <- 1
theta2 <- 1
samples <- matrix(NA, nrow = num_samples, ncol = 2)
for (i in 1:num_samples) {
# Gibbs step: draw beta (theta1) from its gamma full conditional
theta1 <- rgamma(1, shape = n, rate = -sum(log(1-exp(-(theta2*data)^2))))
# M-H step: propose lambda (theta2) from a normal random walk and accept/reject
theta2_candidate <- rnorm(1, mean = theta2, sd =0.2)
acceptance_prob <- min(1,fbeta(theta1,theta2_candidate)/fbeta(theta1,theta2))
if (runif(1) < acceptance_prob) {
theta2 <- theta2_candidate
}
samples[i, ] <- c(theta1, theta2)
}
samples <- samples[-c(1:burn_in), ]
theta1hat<-mean(samples[,1])
theta2hat<-mean(samples[,2])
low<-quantile(samples[seq(1,num_samples-burn_in,1),1],0.025,names=F)
up<-quantile(samples[seq(1,num_samples-burn_in,1),1],0.975,names=F)
LOW<-quantile(samples[seq(1,num_samples-burn_in,1),2],0.025,names=F)
UP<-quantile(samples[seq(1,num_samples-burn_in,1),2],0.975,names=F)
return(c(theta1hat,theta2hat,low,up,LOW,UP))
}
result<-0
# true parameter values used for this simulation run (illustrative choice; theta1 = beta, theta2 = lambda)
theta1<-1
theta2<-1
p=0
q=0
est1<-rep(0,m)
est2<-rep(0,m)
AL1<-rep(0,m)
AL2<-rep(0,m)
for(i in 1:m){
data<-rGR(n,theta1,theta2)
ACI=gibbs_mh(n,theta1,theta2)
est1[i]<-ACI[1]
est2[i]<-ACI[2]
AL1[i]<-ACI[4]-ACI[3]
AL2[i]<-ACI[6]-ACI[5]
if(ACI[3]<theta1&theta1<ACI[4]){
p=p+1
}
if(ACI[5]<theta2&theta2<ACI[6]){
q=q+1
}
}
bias1<-mean(est1)-theta1
var1<-var(est1)*(m-1)/m
mse1<-bias1^2+var1
bias2<-mean(est2)-theta2
var2<-var(est2)*(m-1)/m
mse2<-bias2^2+var2
coverage1<-c(mean(est1),mean(est2), mse1, mse2,p/m,q/m,mean(AL1),mean(AL2))

References

1. Surles, J.G.; Padgett, W.J. Inference for reliability and stress-strength for a scaled Burr type X distribution. Lifetime Data Anal. 2001, 7, 187–200.
2. Mudholkar, G.S.; Srivastava, D.K. Exponentiated Weibull family for analyzing bathtub failure-rate data. IEEE Trans. Reliab. 1993, 42, 299–302.
3. Kundu, D.; Raqab, M.Z. Generalized Rayleigh distribution: Different methods of estimations. Comput. Stat. Data Anal. 2005, 49, 187–200.
4. Abd-Elfattah, A.M. Goodness of fit test for the generalized Rayleigh distribution with unknown parameters. Math. Sci. Lett. 2011, 81, 357–366.
5. Raqab, M.Z.; Madi, M.T. Inference for the generalized Rayleigh distribution based on progressively censored data. J. Stat. Plan. Inference 2011, 141, 3313–3322.
6. Naqash, S.; Ahmad, S.P.; Ahmed, A. Bayesian analysis of generalized Rayleigh distribution. J. Stat. Comput. Simul. 2016, 6, 85–96.
7. Zhang, Z.; Gui, W. Statistical inference of reliability of generalized Rayleigh distribution under progressively type-II censoring. J. Comput. Appl. Math. 2019, 361, 295–312.
8. Fisher, R.A. Inverse probability. In Mathematical Proceedings of the Cambridge Philosophical Society; Cambridge University Press: Cambridge, UK, 1930; Volume 26, pp. 528–535.
9. Hannig, J. On generalized fiducial inference. Stat. Sin. 2009, 19, 491–544.
10. Wandler, D.V.; Hannig, J. Fiducial inference on the largest mean of a multivariate normal distribution. J. Multivar. Anal. 2011, 102, 87–104.
11. Wandler, D.V.; Hannig, J. Generalized fiducial confidence intervals for extremes. Extremes 2012, 15, 67–87.
12. Hannig, J.; Iyer, H.; Lai, R.C.; Lee, T.C. Generalized fiducial inference: A review and new results. J. Am. Stat. Assoc. 2016, 111, 1346–1361.
13. Li, Y.; Xu, A. Fiducial inference for Birnbaum–Saunders distribution. J. Stat. Comput. Simul. 2016, 86, 1673–1685.
14. Yan, L.; Liu, X. Generalized fiducial inference for generalized exponential distribution. J. Stat. Comput. Simul. 2018, 88, 1369–1381.
15. Qi, X.; Li, H.; Tian, W.; Yang, Y. Confidence interval, prediction interval and tolerance interval for the skew normal distribution: A pivotal approach. Symmetry 2022, 14, 855.
16. Çetinkaya, Ç. Generalized fiducial inference for the Chen distribution. Istat. J. Turk. Stat. Assoc. 2022, 14, 74–86.
17. Tian, W.; Yang, Y.; Tong, T. Confidence intervals based on the difference of medians for independent log-normal distributions. Mathematics 2022, 10, 2989.
18. DiCiccio, T.J.; Efron, B. Bootstrap confidence intervals. Stat. Sci. 1996, 11, 189–228.
19. Tierney, L. Markov chains for exploring posterior distributions. Ann. Stat. 1994, 22, 1701–1728.
20. Meintanis, S.G. A new approach of goodness-of-fit testing for exponentiated laws applied to the generalized Rayleigh distribution. Comput. Stat. Data Anal. 2008, 52, 2496–2503.
21. Almetwally, E.M. Application of COVID-19 pandemic by using odd Lomax-G inverse Weibull distribution. Math. Sci. Lett. 2021, 10, 47–57.
Figure 1. Graphical fitting of the GR distribution.
Figure 2. Trace, density and autocorrelation coefficient plots for β (top) and λ (bottom) from the GFI method.
Figure 3. Graphical fitting of the GR distribution.
Figure 4. Trace, density and autocorrelation coefficient plots for β (top) and λ (bottom) from the GFI method.
Table 1. Mean (MSE) for the point estimations of β and λ.

(β, λ)     n    Estimates of β (MSE)                          Estimates of λ (MSE)
                MLE            GFI            BAY             MLE            GFI            BAY
(0.5, 1)   10   0.668 (0.188)  0.613 (0.131)  0.621 (0.133)   1.175 (0.156)  1.054 (0.108)  1.077 (0.109)
           20   0.562 (0.033)  0.546 (0.030)  0.552 (0.033)   1.068 (0.041)  1.034 (0.041)  1.037 (0.042)
           30   0.541 (0.018)  0.524 (0.017)  0.533 (0.018)   1.047 (0.025)  1.012 (0.025)  1.025 (0.026)
           50   0.524 (0.009)  0.517 (0.009)  0.518 (0.009)   1.030 (0.014)  1.012 (0.014)  1.013 (0.014)
(0.5, 2)   10   0.660 (0.152)  0.615 (0.138)  0.639 (0.163)   2.309 (0.545)  2.094 (0.431)  2.166 (0.426)
           20   0.561 (0.033)  0.546 (0.033)  0.549 (0.031)   2.141 (0.195)  2.068 (0.163)  2.075 (0.172)
           30   0.543 (0.019)  0.530 (0.017)  0.535 (0.019)   2.092 (0.110)  2.052 (0.103)  2.055 (0.105)
           50   0.524 (0.009)  0.517 (0.009)  0.516 (0.009)   2.057 (0.058)  2.017 (0.058)  2.022 (0.056)
(0.5, 3)   10   0.649 (0.137)  0.621 (0.165)  0.643 (0.185)   3.416 (1.131)  3.145 (0.880)  3.259 (1.058)
           20   0.580 (0.041)  0.549 (0.031)  0.554 (0.031)   3.239 (0.465)  3.044 (0.365)  3.125 (0.411)
           30   0.542 (0.019)  0.526 (0.016)  0.534 (0.017)   3.128 (0.244)  3.021 (0.212)  3.074 (0.238)
           50   0.520 (0.009)  0.519 (0.009)  0.519 (0.009)   3.074 (0.131)  3.031 (0.123)  3.027 (0.123)
(1, 1)     10   1.458 (1.851)  1.358 (1.461)  1.484 (3.108)   1.115 (0.077)  1.031 (0.060)  1.071 (0.067)
           20   1.160 (0.185)  1.133 (0.182)  1.148 (0.203)   1.053 (0.028)  1.006 (0.024)  1.029 (0.026)
           30   1.092 (0.095)  1.064 (0.090)  1.082 (0.091)   1.028 (0.017)  1.012 (0.016)  1.016 (0.016)
           50   1.061 (0.049)  1.038 (0.048)  1.053 (0.049)   1.021 (0.010)  1.008 (0.009)  1.009 (0.009)
(1, 2)     10   1.457 (1.697)  1.337 (1.548)  1.476 (3.637)   2.222 (0.308)  2.046 (0.222)  2.132 (0.285)
           20   1.155 (0.188)  1.124 (0.187)  0.145 (0.195)   2.089 (0.118)  2.026 (0.100)  2.054 (0.106)
           30   1.096 (0.098)  1.078 (0.094)  1.080 (0.097)   2.064 (0.069)  2.010 (0.062)  2.030 (0.066)
           50   1.058 (0.048)  1.042 (0.045)  1.052 (0.047)   2.039 (0.037)  2.009 (0.037)  2.019 (0.037)
(1, 3)     10   1.440 (1.530)  1.325 (1.299)  1.375 (1.548)   3.355 (0.715)  3.089 (0.555)  3.142 (0.581)
           20   1.180 (0.193)  1.109 (0.143)  1.148 (0.180)   3.164 (0.250)  3.028 (0.211)  3.086 (0.232)
           30   1.109 (0.102)  1.081 (0.096)  1.078 (0.092)   3.123 (0.173)  3.031 (0.146)  3.048 (0.142)
           50   1.066 (0.053)  1.048 (0.043)  1.052 (0.045)   3.062 (0.091)  3.023 (0.082)  3.031 (0.084)
Table 2. Empirical coverage and average length of 95% two-sided confidence intervals for β = 0.5, λ = 1.

                 CP                                AL
n    θ    FrCI    PBCI    BaCI    FiCI      FrCI    PBCI    BaCI    FiCI
10   β    0.969   0.834   0.941   0.947     1.047   1.492   0.984   1.012
     λ    0.928   0.843   0.942   0.949     1.102   1.324   1.099   1.102
20   β    0.961   0.882   0.941   0.951     0.599   0.637   0.588   0.599
     λ    0.947   0.879   0.944   0.954     0.741   0.885   0.741   0.732
30   β    0.954   0.906   0.948   0.954     0.463   0.550   0.459   0.460
     λ    0.942   0.905   0.946   0.948     0.594   0.634   0.597   0.591
50   β    0.954   0.926   0.949   0.953     0.345   0.356   0.342   0.341
     λ    0.941   0.925   0.951   0.948     0.456   0.475   0.455   0.454
Table 3. Empirical coverage and average length of 95% two-sided confidence intervals for β = 0.5, λ = 2.

                 CP                                AL
n    θ    FrCI    PBCI    BaCI    FiCI      FrCI    PBCI    BaCI    FiCI
10   β    0.977   0.841   0.941   0.948     1.029   1.651   1.016   0.982
     λ    0.927   0.854   0.938   0.943     2.190   2.888   2.164   2.162
20   β    0.965   0.902   0.948   0.951     0.595   0.627   0.586   0.584
     λ    0.939   0.897   0.944   0.947     1.483   1.646   1.483   1.475
30   β    0.959   0.931   0.951   0.944     0.466   0.675   0.460   0.457
     λ    0.943   0.915   0.951   0.948     1.187   1.228   1.188   1.184
50   β    0.954   0.937   0.953   0.951     0.345   0.375   0.341   0.342
     λ    0.946   0.934   0.946   0.947     0.910   0.948   0.911   0.910
Table 4. Empirical coverage and average length of 95% two-sided confidence intervals for β = 0.5, λ = 3.

                 CP                                AL
n    θ    FrCI    PBCI    BaCI    FiCI      FrCI    PBCI    BaCI    FiCI
10   β    0.972   0.859   0.955   0.944     1.026   1.490   1.044   0.999
     λ    0.932   0.843   0.942   0.950     3.291   3.811   3.301   3.290
20   β    0.970   0.886   0.946   0.946     0.598   0.763   0.580   0.586
     λ    0.940   0.911   0.951   0.942     2.226   2.692   2.243   2.210
30   β    0.958   0.896   0.950   0.948     0.465   0.584   0.460   0.453
     λ    0.939   0.923   0.954   0.938     1.788   1.969   1.780   1.778
50   β    0.959   0.910   0.953   0.950     0.346   0.402   0.343   0.344
     λ    0.944   0.917   0.941   0.947     1.364   1.388   1.363   1.369
Table 5. Empirical coverage and average length of 95% two-sided confidence intervals for β = 1, λ = 1.

                 CP                                AL
n    θ    FrCI    PBCI    BaCI    FiCI      FrCI    PBCI    BaCI    FiCI
10   β    0.968   0.863   0.940   0.946     2.757   3.089   2.841   2.623
     λ    0.925   0.868   0.944   0.945     0.854   0.887   0.873   0.855
20   β    0.967   0.883   0.949   0.952     1.392   1.978   1.388   1.364
     λ    0.939   0.896   0.950   0.942     0.588   0.637   0.594   0.584
30   β    0.962   0.914   0.948   0.949     1.063   1.332   1.049   1.039
     λ    0.943   0.912   0.947   0.950     0.475   0.499   0.478   0.473
50   β    0.958   0.930   0.944   0.946     0.781   0.885   0.770   0.770
     λ    0.947   0.926   0.943   0.948     0.366   0.376   0.366   0.364
Table 6. Empirical coverage and average length of 95% two-sided confidence intervals for β = 1, λ = 2.

                 CP                                AL
n    θ    FrCI    PBCI    BaCI    FiCI      FrCI    PBCI    BaCI    FiCI
10   β    0.971   0.825   0.937   0.947     2.704   3.495   2.656   2.557
     λ    0.917   0.880   0.949   0.950     1.708   2.008   1.746   1.731
20   β    0.968   0.891   0.947   0.950     1.395   1.951   1.377   1.356
     λ    0.938   0.883   0.944   0.941     1.177   1.275   1.190   1.182
30   β    0.961   0.904   0.945   0.951     1.063   1.323   1.054   1.041
     λ    0.941   0.914   0.941   0.949     0.952   0.997   0.956   0.950
50   β    0.954   0.926   0.947   0.943     0.778   0.884   0.776   0.768
     λ    0.947   0.928   0.944   0.949     0.731   0.749   0.733   0.730
Table 7. Empirical coverage and average length of 95% two-sided confidence intervals for β = 1, λ = 3.

                 CP                                AL
n    θ    FrCI    PBCI    BaCI    FiCI      FrCI    PBCI    BaCI    FiCI
10   β    0.961   0.857   0.943   0.943     2.734   3.471   2.670   2.617
     λ    0.929   0.845   0.948   0.941     2.549   3.034   2.614   2.583
20   β    0.966   0.872   0.955   0.945     1.394   2.065   1.385   1.331
     λ    0.941   0.876   0.950   0.946     1.783   1.909   1.783   1.760
30   β    0.968   0.918   0.949   0.945     1.064   1.312   1.041   1.054
     λ    0.936   0.904   0.945   0.949     1.435   1.501   1.440   1.428
50   β    0.952   0.926   0.946   0.947     0.786   0.859   0.777   0.772
     λ    0.945   0.931   0.952   0.948     1.098   1.128   1.101   1.103
Table 8. Estimations for the data of ball bearings.

            MLE             GFI             Bayes
β (SE)      1.202 (0.072)   1.160 (0.071)   1.192 (0.071)
λ (SE)      0.013 (0.0004)  0.013 (0.0004)  0.013 (0.0004)
Table 9. Confidence intervals and lengths for the result of ball bearings.

        β                          λ
        Interval         Length    Interval         Length
FrCI    [0.526, 1.878]   1.352     [0.010, 0.017]   0.007
PBCI    [0.776, 2.372]   1.596     [0.010, 0.018]   0.008
FiCI    [0.623, 1.876]   1.253     [0.009, 0.016]   0.007
BaCI    [0.673, 1.973]   1.300     [0.010, 0.017]   0.007
Table 10. Estimations for the data of COVID-19 mortality rate.

            MLE             GFI             Bayes
β (SE)      1.109 (0.029)   1.093 (0.029)   1.093 (0.029)
λ (SE)      7.022 (0.089)   6.919 (0.089)   6.924 (0.089)
Table 11. Confidence intervals and lengths for the result of COVID-19 mortality rate.

        β                          λ
        Interval         Length    Interval         Length
FrCI    [0.696, 1.521]   0.825     [5.773, 8.270]   2.497
PBCI    [0.794, 1.811]   1.017     [6.040, 8.548]   2.508
FiCI    [0.750, 1.534]   0.784     [5.718, 8.075]   2.377
BaCI    [0.719, 1.512]   0.793     [5.706, 8.111]   2.405
