Article

Wald Intervals via Profile Likelihood for the Mean of the Inverse Gaussian Distribution

by Patchanok Srisuradetchai 1, Ausaina Niyomdecha 1 and Wikanda Phaphan 2,3,*
1 Department of Mathematics and Statistics, Faculty of Science and Technology, Thammasat University, Pathum Thani 12120, Thailand
2 Department of Applied Statistics, Faculty of Applied Science, King Mongkut’s University of Technology North Bangkok, Bangkok 10800, Thailand
3 Research Group in Statistical Learning and Inference, King Mongkut’s University of Technology North Bangkok, Bangkok 10800, Thailand
* Author to whom correspondence should be addressed.
Symmetry 2024, 16(1), 93; https://doi.org/10.3390/sym16010093
Submission received: 25 December 2023 / Revised: 9 January 2024 / Accepted: 10 January 2024 / Published: 11 January 2024
(This article belongs to the Special Issue Symmetry in Probability Theory and Statistics)

Abstract:
The inverse Gaussian (IG) distribution, known for its flexible shape, is widely used across various applications. Existing confidence intervals for the mean parameter, such as the profile likelihood, reparametrized profile likelihood, and Wald-type reparametrized profile likelihood with observed Fisher information intervals, are generally effective. However, our simulation study identifies scenarios where the coverage probability falls below the nominal confidence level. Wald-type intervals are widely used in statistics and possess a symmetry property. We mathematically derive the Wald-type profile likelihood (WPL) interval and the Wald-type reparametrized profile likelihood with expected Fisher information (WRPLE) interval and compare their performance to existing methods. Our results indicate that the WRPLE interval outperforms the others in terms of coverage probability, while the WPL typically yields the shortest interval. Additionally, we apply the proposed intervals to a real dataset, demonstrating their applicability to other datasets that follow the IG distribution.

1. Introduction

The inverse Gaussian (IG) distribution, also known as the Wald distribution, is of considerable significance in various scientific and applied research fields owing to its distinctive properties and flexibility [1]. Researchers have widely applied the IG distribution across multiple disciplines since Schrödinger [2] introduced it and Wald [3] extensively studied it. Notably, its skewness and relationship with Brownian motion make it particularly effective for modeling asymmetric data. Folks and Chhikara [1] have thoroughly explored the mathematical and statistical properties of this distribution.
Researchers utilized the IG distribution to examine particle movement in bloodstreams [4], while Onar and Padgett [5] applied it to determine the tensile strength of carbon fibers. Jain and Jain [6] used it for estimating device failure time reliability. In finance, it has been instrumental in modeling stock returns, particularly addressing data skewness [7]. Environmental applications include modeling ecological phenomena and air pollution, as explored in various studies [8,9]. The IG distribution has proven invaluable in medical research, especially in survival analysis, due to its efficacy in handling time-to-event data [10]. Its application extends to engineering and quality control, aiding in reliability and life data analysis [11].
The IG distribution has found applications in traffic engineering, where it models vehicular flow [12], and in neuroscience, particularly in the study of spike-response variability in locust auditory neurons, which helps identify two noise sources that impact spike timing [13]. Its adaptability is also evident in agricultural settings for modeling growth rates [14]. In the field of insurance and risk analysis, the IG distribution has been used to model bodily injury claims and to analyze economic data concerning Italian households’ incomes [15].
The IG distribution for a random variable X has a probability density function given by:
f(x; \mu, \lambda) = \sqrt{\frac{\lambda}{2\pi x^{3}}}\, \exp\!\left\{ -\frac{\lambda (x-\mu)^{2}}{2\mu^{2} x} \right\},
where x > 0, \mu > 0, and \lambda > 0. The shapes of the IG distributions with varying parameter sets are depicted in Figure 1. For a low value of \lambda (0.5), the distribution exhibits a high peak followed by a steep decline in probability, together with a heavy right tail. As \lambda increases, the distribution becomes less skewed, with a lighter tail, approaching a more symmetric, normal-like shape. Furthermore, an increase in \mu shifts the distribution to the right, representing higher mean values.
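The density above is straightforward to evaluate directly. The following short Python sketch (function names are our own, not from the paper) computes the pdf so the shape behavior described here can be checked numerically:

```python
import math

def ig_pdf(x, mu, lam):
    """Density of the inverse Gaussian IG(mu, lambda) at x > 0."""
    return math.sqrt(lam / (2.0 * math.pi * x ** 3)) * \
        math.exp(-lam * (x - mu) ** 2 / (2.0 * mu ** 2 * x))

# At x = mu the exponent vanishes, so f(mu; mu, lam) = sqrt(lam / (2*pi*mu**3)).
print(ig_pdf(1.0, 1.0, 1.0))
```

Evaluating this function on a grid of x values for several (\mu, \lambda) pairs reproduces the curves in Figure 1.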
The mean of the IG distribution is \mu, and the variance is \mu^{3}/\lambda [3]. The maximum likelihood estimates (MLEs) of \mu and \lambda are:
\hat{\mu}_{ML} = \bar{X} = \sum_{i=1}^{n} X_i / n \quad \text{and} \quad \hat{\lambda}_{ML} = n \Big/ \sum_{i=1}^{n} \left( \frac{1}{X_i} - \frac{1}{\bar{X}} \right),
where X_1, \ldots, X_n are a random sample from IG(\mu, \lambda). Furthermore, it is known that \bar{X} \sim IG(\mu, n\lambda), that \lambda \sum_{i=1}^{n} (1/X_i - 1/\bar{X}) \sim \chi^{2}_{n-1}, and that \bar{X} and \lambda \sum_{i=1}^{n} (1/X_i - 1/\bar{X}) are independent [16]. Folks and Chhikara [1] proved that the uniformly minimum variance unbiased estimators of \mu and \lambda are:
\hat{\mu}_{UMVUE} = \bar{X} = \sum_{i=1}^{n} X_i / n \quad \text{and} \quad \hat{\lambda}_{UMVUE} = (n-3) \Big/ \sum_{i=1}^{n} \left( \frac{1}{X_i} - \frac{1}{\bar{X}} \right).
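As a quick illustration (a sketch of our own, not code from the paper), the point estimators above can be computed in a few lines; `ig_estimates` is a hypothetical helper name:

```python
def ig_estimates(xs):
    """Return (mu_hat, lambda_MLE, lambda_UMVUE) for an IG sample xs."""
    n = len(xs)
    xbar = sum(xs) / n
    s = sum(1.0 / x - 1.0 / xbar for x in xs)  # sum of (1/X_i - 1/X_bar)
    return xbar, n / s, (n - 3) / s

mu_hat, lam_ml, lam_umvue = ig_estimates([1.0, 2.0, 4.0, 8.0])
```

Note that the UMVUE of \lambda requires n > 3 to be positive.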
The necessity of using confidence intervals (CIs) for estimating the mean of the IG distribution rather than relying solely on point estimators is pivotal in statistical analysis. Confidence intervals provide a range within which the true mean is likely to fall, reflecting the uncertainty inherent in using sample data to estimate population parameters. In contrast to a single point estimate, confidence intervals offer a more nuanced and informative picture, essential for rigorous statistical inference and decision-making in fields like survival analysis and reliability engineering, where precise estimation is key [17].
This paper focuses on Wald’s interval, a fundamental tool in statistical inference known for its simplicity and broad applicability. Wald’s interval is derived from the Wald test, which is based on the asymptotic normality of maximum likelihood estimators. It offers a direct method for constructing confidence intervals, particularly when the sample size is large [18]. However, the finite-sample distributions of Wald statistics are often not well defined [19]. Constructing Wald CIs for single-parameter models is straightforward, but for the IG distribution, which involves two parameters, the profile likelihood method is more advantageous. Non-normal distributions or small sample sizes may render Wald CIs unsuitable. To address this, reparameterization can be combined with the profile likelihood method when constructing Wald CIs, bringing the sampling distribution of the estimator closer to normality, a key assumption of the Wald method. Furthermore, using expected Fisher information in Wald CIs can offer more stability and less sensitivity to sample-specific irregularities.
In this study, CIs for the mean parameter of the IG distribution are constructed for scenarios where the shape parameter is unknown, focusing primarily on marginal intervals. We investigate two specific types of CIs: the first is the Wald-type CI using profile likelihood without reparameterization, and the second is the Wald-type CI incorporating reparameterization and utilizing expected Fisher information. These two proposed intervals are compared with existing intervals through simulation studies.
The rest of the paper is structured as follows: Section 2 reviews intervals in the literature. Section 3 presents the mathematical derivation of the Wald-type profile likelihood and Wald-type reparametrized profile likelihood with expected Fisher information intervals. Section 4 details the properties of these proposed intervals. Section 5 describes the simulation studies conducted to compare the performance of the proposed intervals with existing methods. Section 6 applies the proposed intervals to a real dataset. The paper concludes with a discussion in Section 7, where the findings are summarized and potential avenues for future research are outlined. Table 1 lists the abbreviations and definitions used in this paper.

2. Intervals in Literature

2.1. Wald-Type Confidence Interval

The Wald confidence interval is typically constructed around a maximum likelihood estimator (MLE), leveraging the properties of a normal distribution, especially for large samples [20]. The calculation of this interval is based on the Wald test, which is used to evaluate the null hypothesis H_0: \theta = \theta_0 against an alternative H_a: \theta = \theta_1. Under H_0, two key statistics are used, both exhibiting asymptotic normal distributions as the sample size increases:
\sqrt{I(\hat{\theta}_{ML})}\, (\hat{\theta}_{ML} - \theta_0) \overset{a}{\sim} N(0,1) \quad \text{and} \quad \sqrt{J(\hat{\theta}_{ML})}\, (\hat{\theta}_{ML} - \theta_0) \overset{a}{\sim} N(0,1),
where I(\hat{\theta}_{ML}) and J(\hat{\theta}_{ML}) represent the estimated observed and expected Fisher information, respectively [21]. The observed and expected Fisher information are defined as:
I(\theta) = -\frac{\partial^{2} l(\theta; \tilde{x})}{\partial \theta^{2}} \quad \text{and} \quad J(\theta) = -E\left[ \frac{\partial^{2} l(\theta; \tilde{X})}{\partial \theta^{2}} \right].
The Wald confidence interval is constructed using the formula:
\hat{\theta}_{ML} \pm z_{\alpha/2} \sqrt{1/I(\hat{\theta}_{ML})} \quad \text{or} \quad \hat{\theta}_{ML} \pm z_{\alpha/2} \sqrt{1/J(\hat{\theta}_{ML})}.
It is worth noting that the Wald statistic can be written as
W = I(\hat{\theta}_{ML}) (\hat{\theta}_{ML} - \theta)^{2},
which is the quadratic approximation of -2 \log \Lambda(\theta) = -2 \log \left[ L(\theta; \tilde{x}) / L(\hat{\theta}_{ML}; \tilde{x}) \right]. The Wald statistic follows an asymptotic chi-squared distribution, with degrees of freedom equal to the number of parameters being tested.

2.2. Profile-Likelihood-Based Confidence Interval

Statistical inference uses the profile likelihood confidence interval method to estimate confidence intervals for a parameter of interest in a model with multiple parameters. This approach is particularly useful in complex models where direct computation of the confidence interval for a parameter is challenging due to the presence of nuisance parameters—other parameters in the model that are not of primary interest [22,23]. In the profile likelihood method, the process involves:
  • The likelihood function: Suppose we have a likelihood function L(\theta, \phi), where \theta is the parameter of interest and \phi represents the nuisance parameters. The full likelihood is a function of both sets of parameters.
  • Profiling out nuisance parameters: To focus on \theta, we maximize the likelihood function over the nuisance parameters \phi for each fixed value of \theta. This gives the profile likelihood function for \theta: L_P(\theta) = \max_{\phi} L(\theta, \phi).
  • Estimation of the parameter of interest: The estimate \hat{\theta} is obtained by maximizing the profile likelihood as follows:
\hat{\theta} = \arg\max_{\theta} L_P(\theta);
  • Constructing the confidence interval: The confidence interval for \theta is then constructed based on the profile likelihood, so the interval is as follows:
\left\{ \theta : -2 \log \left[ L_P(\theta) / L_P(\hat{\theta}) \right] \le \chi^{2}_{1-\alpha, d.f.} \right\},
where \chi^{2}_{1-\alpha, d.f.} is the critical value from the chi-squared distribution with degrees of freedom equal to the number of parameters being estimated (often 1 for a single parameter), and \alpha is the significance level (e.g., 0.05 for a 95% confidence interval).
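The four steps above can be sketched numerically for the IG mean, whose profile log-likelihood is derived in Section 3. This is a minimal illustration under our own naming, with the 95% chi-squared cutoff hard-coded; it assumes both bounds exist for the given data (they may not for very small samples):

```python
import math

CHI2_95_1 = 3.841458820694124  # 0.95 quantile of chi-squared, 1 d.f.

def profile_loglik(mu, xs):
    """Log profile likelihood of the IG mean (additive constants dropped)."""
    n = len(xs)
    sx, sinv = sum(xs), sum(1.0 / x for x in xs)
    return (n / 2.0) * math.log(n * mu) \
        - (n / 2.0) * math.log(mu * sinv + sx / mu - 2.0 * n)

def profile_ci(xs, chi2=CHI2_95_1):
    """Invert -2 log Lambda(mu) <= chi2 by bisection on each side of mu_hat."""
    mu_hat = sum(xs) / len(xs)
    target = profile_loglik(mu_hat, xs) - chi2 / 2.0

    def root(lo, hi):
        # profile_loglik - target changes sign exactly once on (lo, hi)
        for _ in range(200):
            mid = 0.5 * (lo + hi)
            same_sign = (profile_loglik(lo, xs) - target > 0) == \
                        (profile_loglik(mid, xs) - target > 0)
            if same_sign:
                lo = mid
            else:
                hi = mid
        return 0.5 * (lo + hi)

    return root(mu_hat * 1e-6, mu_hat), root(mu_hat, mu_hat * 1e6)
```

At each returned bound the profile deviance equals the chi-squared cutoff, which is exactly the defining condition in step 4.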
While likelihood-based and profile-likelihood-based intervals typically lack a closed form, which can be seen as a drawback compared to the more straightforward Wald-type interval with its closed-form solutions for many situations, this study focuses on applying the construction of a Wald-type interval to the profile likelihood. This approach aims to leverage the benefits of both methods, offering a more practical solution for statistical analysis.

2.3. Existing Confidence Intervals

For the IG distribution, as shown in (1), there are two parameters. In cases where the shape parameter is known, Arefi et al. [24] proposed CIs for the mean parameter: (1) the Wald CI, \bar{X} \pm z_{1-\alpha/2} \sqrt{\bar{X}^{3}/(n\lambda)}; (2) the score CI, derived from solving -z_{1-\alpha/2} \le \sqrt{n\lambda}\, (\bar{X} - \mu)/\sqrt{\mu^{3}} \le z_{1-\alpha/2}; and (3) the CI obtained from the likelihood ratio:
\frac{n\lambda \bar{X}}{n\lambda + k\sqrt{n\lambda \bar{X}}} \le \mu \le \frac{n\lambda \bar{X}}{n\lambda - k\sqrt{n\lambda \bar{X}}},
where k = \sqrt{\chi^{2}_{1,(1-\alpha)}} and 0 < k < \sqrt{n\lambda/\bar{X}}. In a case where both parameters are unknown, Srisuradetchai [25] proposed the formula for the profile-likelihood-based (PL) CI as:
\frac{-n + \sqrt{n^{2} + Bn\bar{X}}}{B} \le \mu \le \frac{-n - \sqrt{n^{2} + Bn\bar{X}}}{B},
where B = \left[ \left( \hat{\mu} \sum_{i=1}^{n} X_i^{-1} - n \right) \Big/ \hat{\mu} \right] \exp\!\left( \chi^{2}_{1,(1-\alpha)}/n \right) - \sum_{i=1}^{n} X_i^{-1}. Díaz-Francés [26] derived the reparameterized profile likelihood (RPL) CI as:
\left[ \hat{\varphi} + \sqrt{ \frac{n (c^{2/n} - 1)}{\hat{\lambda} \sum_{i=1}^{n} X_i} } \right]^{-1} \le \mu \le \left[ \hat{\varphi} - \sqrt{ \frac{n (c^{2/n} - 1)}{\hat{\lambda} \sum_{i=1}^{n} X_i} } \right]^{-1},
where c = \exp\!\left( \chi^{2}_{1-\alpha,1}/2 \right) and \hat{\varphi} = \hat{\mu}^{-1} = n / \sum_{i=1}^{n} X_i. Srisuradetchai [27] used reparameterized profile likelihoods to construct a Wald-type reparameterized profile likelihood interval with observed Fisher information (WRPLO) for the mean of the IG distribution. The interval is:
\left[ \frac{n}{\sum_{i=1}^{n} X_i} + z_{1-\alpha/2} \sqrt{ \frac{1}{\hat{\lambda} \sum_{i=1}^{n} X_i} } \right]^{-1} \le \mu \le \left[ \frac{n}{\sum_{i=1}^{n} X_i} - z_{1-\alpha/2} \sqrt{ \frac{1}{\hat{\lambda} \sum_{i=1}^{n} X_i} } \right]^{-1},
where \hat{\lambda} = \left( \sum_{i=1}^{n} X_i^{-1} - n \bar{X}^{-1} \right)^{-1}.
In the literature, the Wald-type profile-likelihood-based (WPL) interval and the Wald-type reparameterized profile-likelihood interval with expected Fisher information (WRPLE) have not been presented. Using expected Fisher information generally leads to intervals that are more stable across different samples, while intervals based on observed Fisher information can be more sensitive to the specifics of the dataset. In this paper, these two types of CIs are derived and compared to the PL, RPL, and WRPLO intervals through Monte Carlo simulations.

3. Mathematical Results

This section will focus on the mathematical derivation of two statistical intervals: the WPL and the WRPLE.

3.1. Wald-Type Profile-Likelihood-Based Interval

The full log-likelihood function based on an observed random sample of size n, x_{obs} = (x_1, x_2, \ldots, x_n), is as follows:
l(\mu, \lambda) = \log L(\mu, \lambda) = \frac{n}{2} \log \lambda - \frac{n}{2} \log(2\pi) - \frac{3}{2} \sum_{i=1}^{n} \log x_i - \frac{\lambda}{2} \left[ \frac{\sum_{i=1}^{n} x_i}{\mu^{2}} + \sum_{i=1}^{n} \frac{1}{x_i} \right] + \frac{n\lambda}{\mu}.
To focus on \mu, we maximize the likelihood function over the nuisance parameter \lambda for each fixed value of \mu by solving \partial l(\mu, \lambda)/\partial \lambda = 0. Then,
\tilde{\lambda} = \left[ \frac{\sum_{i=1}^{n} x_i}{n\mu^{2}} + \frac{\sum_{i=1}^{n} x_i^{-1}}{n} - \frac{2}{\mu} \right]^{-1}.
This gives the log profile likelihood function for \mu:
l_P(\mu) = l(\mu, \tilde{\lambda}) = c - \frac{n}{2} + \frac{n}{2} \log(n\mu) - \frac{n}{2} \log\!\left( \mu \sum_{i=1}^{n} x_i^{-1} + \frac{\sum_{i=1}^{n} x_i}{\mu} - 2n \right).
The estimate \hat{\mu} is obtained by maximizing the profile likelihood l_P(\mu). Consider the score function:
S_p(\mu) = \frac{\partial l_p(\mu)}{\partial \mu} = \frac{n}{2\mu} - \frac{n}{2} \cdot \frac{ \sum_{i=1}^{n} x_i^{-1} - \mu^{-2} \sum_{i=1}^{n} x_i }{ \mu \sum_{i=1}^{n} x_i^{-1} + \mu^{-1} \sum_{i=1}^{n} x_i - 2n }.
Then, solving S_p(\mu) = 0 gives \hat{\mu} = \sum_{i=1}^{n} x_i / n = \bar{x}. Next, we find the observed Fisher information:
I_p(\mu) = -\frac{\partial S_p(\mu)}{\partial \mu} = \frac{n}{2\mu^{2}} + \frac{n}{2} \cdot \frac{ \frac{2 \sum_{i=1}^{n} x_i}{\mu^{3}} \left( \mu \sum_{i=1}^{n} x_i^{-1} + \frac{\sum_{i=1}^{n} x_i}{\mu} - 2n \right) - \left( \sum_{i=1}^{n} x_i^{-1} - \frac{\sum_{i=1}^{n} x_i}{\mu^{2}} \right)^{2} }{ \left( \mu \sum_{i=1}^{n} x_i^{-1} + \frac{\sum_{i=1}^{n} x_i}{\mu} - 2n \right)^{2} }.
The inverse of the Fisher information is I_p^{-1}(\mu) = 1/I_p(\mu). At \hat{\mu} = \bar{x}, we have \sum_{i=1}^{n} x_i^{-1} - n/\bar{x} = n/\hat{\lambda} and \bar{x} \sum_{i=1}^{n} x_i^{-1} + \sum_{i=1}^{n} x_i / \bar{x} - 2n = n\bar{x}/\hat{\lambda}, so the term I_p(\hat{\mu}) simplifies as follows:
I_p(\hat{\mu}) = \frac{n}{2\bar{x}^{2}} + \frac{n}{2} \cdot \frac{ (2n/\bar{x}^{2})(n\bar{x}/\hat{\lambda}) - (n/\hat{\lambda})^{2} }{ (n\bar{x}/\hat{\lambda})^{2} } = \frac{n}{2\bar{x}^{2}} + \frac{n}{2} \left( \frac{2\hat{\lambda}}{\bar{x}^{3}} - \frac{1}{\bar{x}^{2}} \right) = \frac{n\hat{\lambda}}{\bar{x}^{3}}, \quad \text{so} \quad I_p^{-1}(\hat{\mu}) = \frac{\bar{x}^{3}}{n\hat{\lambda}}.
Thus, the 100(1-\alpha)% WPL interval is
\bar{X} - z_{1-\alpha/2} \sqrt{ \frac{\bar{X}^{3}}{n\hat{\lambda}} } \le \mu \le \bar{X} + z_{1-\alpha/2} \sqrt{ \frac{\bar{X}^{3}}{n\hat{\lambda}} },
and when we substitute \hat{\lambda} = n \big/ \sum_{i=1}^{n} (1/X_i - 1/\bar{X}), the corresponding WPL interval for \mu becomes
\bar{X} - z_{1-\alpha/2} \frac{\bar{X}}{n} \sqrt{ \bar{X} \sum_{i=1}^{n} \left( \frac{1}{X_i} - \frac{1}{\bar{X}} \right) } \le \mu \le \bar{X} + z_{1-\alpha/2} \frac{\bar{X}}{n} \sqrt{ \bar{X} \sum_{i=1}^{n} \left( \frac{1}{X_i} - \frac{1}{\bar{X}} \right) },
where z_{1-\alpha/2} is the (1-\alpha/2)th quantile of the standard normal distribution.
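A direct implementation of this closed-form WPL interval (our own sketch; the 0.975 standard normal quantile is hard-coded for a 95% interval):

```python
import math

Z_975 = 1.959963984540054  # 0.975 quantile of the standard normal

def wpl_interval(xs, z=Z_975):
    """Wald-type profile likelihood (WPL) interval for the IG mean."""
    n = len(xs)
    xbar = sum(xs) / n
    s = sum(1.0 / x - 1.0 / xbar for x in xs)
    lam_hat = n / s                        # MLE of the shape parameter
    half = z * math.sqrt(xbar ** 3 / (n * lam_hat))
    return xbar - half, xbar + half
```

The interval is symmetric about \bar{X}, reflecting the symmetry property of Wald-type intervals.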

3.2. Wald-Type Reparameterized Profile Likelihood with Expected Fisher Information

The full log-likelihood function based on an observed random sample of size n, x_{obs} = (x_1, x_2, \ldots, x_n), is as follows:
l(\mu, \lambda) = \log L(\mu, \lambda) = \frac{n}{2} \log \lambda - \frac{n}{2} \log(2\pi) - \frac{3}{2} \sum_{i=1}^{n} \log x_i - \frac{\lambda}{2} \left[ \frac{\sum_{i=1}^{n} x_i}{\mu^{2}} + \sum_{i=1}^{n} \frac{1}{x_i} \right] + \frac{n\lambda}{\mu}.
Using the reparameterization \mu = \varphi^{-1}, the full log-likelihood function becomes
l(\varphi, \lambda) = \frac{n}{2} \log \lambda - \frac{n}{2} \log(2\pi) - \frac{3}{2} \sum_{i=1}^{n} \log x_i - \frac{\lambda \varphi^{2}}{2} \sum_{i=1}^{n} x_i - \frac{\lambda}{2} \sum_{i=1}^{n} x_i^{-1} + n\lambda\varphi.
To focus on \varphi, we maximize l(\varphi, \lambda) over the nuisance parameter \lambda for each fixed value of \varphi by solving \partial l(\varphi, \lambda)/\partial \lambda = 0. Then,
\tilde{\lambda}(\varphi) = n \left( \varphi^{2} \sum_{i=1}^{n} x_i - 2n\varphi + \sum_{i=1}^{n} x_i^{-1} \right)^{-1}.
By plugging \tilde{\lambda}(\varphi) into l(\varphi, \lambda), we obtain the log reparameterized profile likelihood function as follows:
l_p(\varphi) = l(\varphi, \tilde{\lambda}(\varphi)) = \frac{n}{2} \log \tilde{\lambda}(\varphi) - \frac{\tilde{\lambda}(\varphi)}{2} \left( \varphi^{2} \sum_{i=1}^{n} x_i - 2n\varphi + \sum_{i=1}^{n} x_i^{-1} \right) + c = \frac{n}{2} \log n - \frac{n}{2} \log\!\left( \varphi^{2} \sum_{i=1}^{n} x_i - 2n\varphi + \sum_{i=1}^{n} x_i^{-1} \right) - \frac{n}{2} + c,
where c = -\frac{n}{2} \log(2\pi) - \frac{3}{2} \sum_{i=1}^{n} \log x_i. The score function of the log reparameterized profile likelihood is as follows:
S_p(\varphi) = \frac{\partial l_p(\varphi)}{\partial \varphi} = -\frac{n}{2} \cdot \frac{ 2\varphi \sum_{i=1}^{n} x_i - 2n }{ \varphi^{2} \sum_{i=1}^{n} x_i - 2n\varphi + \sum_{i=1}^{n} x_i^{-1} } = -\frac{ n\varphi \sum_{i=1}^{n} x_i - n^{2} }{ \varphi^{2} \sum_{i=1}^{n} x_i - 2n\varphi + \sum_{i=1}^{n} x_i^{-1} }.
The observed Fisher information is the negative of the second derivative of the log reparameterized profile likelihood:
I_p(\varphi) = -\frac{\partial S_p(\varphi)}{\partial \varphi} = \frac{ n \sum_{i=1}^{n} x_i }{ \varphi^{2} \sum_{i=1}^{n} x_i - 2n\varphi + \sum_{i=1}^{n} x_i^{-1} } - \frac{ \left( n\varphi \sum_{i=1}^{n} x_i - n^{2} \right) \left( 2\varphi \sum_{i=1}^{n} x_i - 2n \right) }{ \left( \varphi^{2} \sum_{i=1}^{n} x_i - 2n\varphi + \sum_{i=1}^{n} x_i^{-1} \right)^{2} } = \frac{ -n\varphi^{2} \left( \sum_{i=1}^{n} x_i \right)^{2} + n \sum_{i=1}^{n} x_i \sum_{i=1}^{n} x_i^{-1} + 2n^{2} \varphi \sum_{i=1}^{n} x_i - 2n^{3} }{ \left( \varphi^{2} \sum_{i=1}^{n} x_i - 2n\varphi + \sum_{i=1}^{n} x_i^{-1} \right)^{2} }.
The expectation of the observed Fisher information can be calculated as follows:
J_p(\varphi) = E\left[ I_p(\varphi) \right] = E\left[ \frac{ -n\varphi^{2} \left( \sum_{i=1}^{n} X_i \right)^{2} + n \sum_{i=1}^{n} X_i \sum_{i=1}^{n} X_i^{-1} + 2n^{2} \varphi \sum_{i=1}^{n} X_i - 2n^{3} }{ \left( \varphi^{2} \sum_{i=1}^{n} X_i - 2n\varphi + \sum_{i=1}^{n} X_i^{-1} \right)^{2} } \right].
Using a first-order Taylor approximation (approximating the expectation of the ratio by the ratio of expectations), the expectation becomes:
J_p(\varphi) \approx \frac{ E\left[ -n\varphi^{2} \left( \sum_{i=1}^{n} X_i \right)^{2} + n \sum_{i=1}^{n} X_i \sum_{i=1}^{n} X_i^{-1} + 2n^{2} \varphi \sum_{i=1}^{n} X_i - 2n^{3} \right] }{ E\left[ \left( \varphi^{2} \sum_{i=1}^{n} X_i - 2n\varphi + \sum_{i=1}^{n} X_i^{-1} \right)^{2} \right] }.
Since E(X) = \mu, \mathrm{Var}(X) = \mu^{3}/\lambda, E(1/X) = 1/\mu + 1/\lambda, \mathrm{Var}(1/X) = 1/(\mu\lambda) + 2/\lambda^{2}, and E\left[ \sum_{i=1}^{n} X_i \sum_{i=1}^{n} X_i^{-1} \right] = n^{2} + n^{2}\mu/\lambda - n\mu/\lambda, the numerator becomes
-n\varphi^{2} \left( \frac{n\mu^{3}}{\lambda} + n^{2}\mu^{2} \right) + n \left( n^{2} + \frac{n^{2}\mu}{\lambda} - \frac{n\mu}{\lambda} \right) + 2n^{2}\varphi \cdot n\mu - 2n^{3},
and substituting \mu = \varphi^{-1} gives
-\frac{n^{2}}{\varphi\lambda} - n^{3} + n^{3} + \frac{n^{3}}{\varphi\lambda} - \frac{n^{2}}{\varphi\lambda} + 2n^{3} - 2n^{3} = \frac{n^{3}}{\varphi\lambda} - \frac{2n^{2}}{\varphi\lambda} = \frac{n}{\lambda} \cdot \frac{n(n-2)}{\varphi}.
The denominator of (18) can also be expressed as:
E\left[ \left( \varphi^{2} \sum_{i=1}^{n} X_i - 2n\varphi + \sum_{i=1}^{n} X_i^{-1} \right)^{2} \right] = E\left[ \varphi^{4} \left( \sum_{i=1}^{n} X_i \right)^{2} - 4n\varphi^{3} \sum_{i=1}^{n} X_i + 2\varphi^{2} \sum_{i=1}^{n} X_i \sum_{i=1}^{n} X_i^{-1} + 4n^{2}\varphi^{2} - 4n\varphi \sum_{i=1}^{n} X_i^{-1} + \left( \sum_{i=1}^{n} X_i^{-1} \right)^{2} \right].
Because
E\left[ \sum_{i=1}^{n} X_i^{-1} \right] = \frac{n}{\mu} + \frac{n}{\lambda} \quad \text{and} \quad E\left[ \left( \sum_{i=1}^{n} X_i^{-1} \right)^{2} \right] = \frac{n}{\mu\lambda} + \frac{2n}{\lambda^{2}} + \frac{n^{2}}{\mu^{2}} + \frac{2n^{2}}{\mu\lambda} + \frac{n^{2}}{\lambda^{2}},
(19) will become:
E\left[ \left( \varphi^{2} \sum_{i=1}^{n} X_i - 2n\varphi + \sum_{i=1}^{n} X_i^{-1} \right)^{2} \right] = \varphi^{4} \left( \frac{n\mu^{3}}{\lambda} + n^{2}\mu^{2} \right) - 4n\varphi^{3} \cdot n\mu + 2\varphi^{2} \left( n^{2} + \frac{n^{2}\mu}{\lambda} - \frac{n\mu}{\lambda} \right) + 4n^{2}\varphi^{2} - 4n\varphi \left( \frac{n}{\mu} + \frac{n}{\lambda} \right) + \frac{n}{\mu\lambda} + \frac{2n}{\lambda^{2}} + \frac{n^{2}}{\mu^{2}} + \frac{2n^{2}}{\mu\lambda} + \frac{n^{2}}{\lambda^{2}}.
With \mu = \varphi^{-1}, all terms involving \varphi cancel, leaving
E\left[ \left( \varphi^{2} \sum_{i=1}^{n} X_i - 2n\varphi + \sum_{i=1}^{n} X_i^{-1} \right)^{2} \right] = \frac{n(n+2)}{\lambda^{2}}.
The expected Fisher information is therefore:
J_p(\varphi) = \frac{ n^{2}(n-2)/(\varphi\lambda) }{ n(n+2)/\lambda^{2} } = \frac{(n-2)}{(n+2)} \cdot \frac{n\lambda}{\varphi},
and the corresponding standard error of the estimator \hat{\varphi} is as follows:
s.e.(\hat{\varphi}) = \sqrt{ J_p^{-1}(\hat{\varphi}) } = \sqrt{ \frac{(n+2)\, \hat{\varphi}}{(n-2)\, n \hat{\lambda}} } = \sqrt{ \frac{(n+2) \sum_{i=1}^{n} (1/X_i - 1/\bar{X})}{(n-2)\, n^{2} \bar{X}} }.
The WRPLE interval for \varphi is
\frac{1}{\bar{X}} - z_{1-\alpha/2} \frac{1}{n} \sqrt{ \frac{(n+2)}{(n-2)} \cdot \frac{ \sum_{i=1}^{n} (1/X_i - 1/\bar{X}) }{ \bar{X} } } \le \varphi \le \frac{1}{\bar{X}} + z_{1-\alpha/2} \frac{1}{n} \sqrt{ \frac{(n+2)}{(n-2)} \cdot \frac{ \sum_{i=1}^{n} (1/X_i - 1/\bar{X}) }{ \bar{X} } },
where z_{1-\alpha/2} is the (1-\alpha/2)th quantile of the standard normal distribution. Therefore, the WRPLE interval for \mu is as follows:
\left[ \frac{1}{\bar{X}} + z_{1-\alpha/2} \frac{1}{n} \sqrt{ \frac{(n+2)}{(n-2)} \cdot \frac{ \sum_{i=1}^{n} (1/X_i - 1/\bar{X}) }{ \bar{X} } } \right]^{-1} \le \mu \le \left[ \frac{1}{\bar{X}} - z_{1-\alpha/2} \frac{1}{n} \sqrt{ \frac{(n+2)}{(n-2)} \cdot \frac{ \sum_{i=1}^{n} (1/X_i - 1/\bar{X}) }{ \bar{X} } } \right]^{-1}.
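The WRPLE interval can likewise be coded directly (a sketch with our own naming; it assumes n > 2 and a positive lower bound for \varphi = 1/\mu, since otherwise the upper \mu-bound is not finite):

```python
import math

def wrple_interval(xs, z=1.959963984540054):
    """WRPLE interval for the IG mean, computed via phi = 1/mu.

    Assumes n > 2 and that the lower phi-bound is positive.
    """
    n = len(xs)
    xbar = sum(xs) / n
    s = sum(1.0 / x - 1.0 / xbar for x in xs)
    half = (z / n) * math.sqrt((n + 2.0) / (n - 2.0) * s / xbar)
    lo_phi, hi_phi = 1.0 / xbar - half, 1.0 / xbar + half
    return 1.0 / hi_phi, 1.0 / lo_phi  # larger phi maps to smaller mu
```

Because the interval is built on the \varphi scale and then inverted, it is generally asymmetric about \bar{X} on the \mu scale.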

4. Some Properties of the Proposed Intervals

Because the Wald statistic in (6) is the quadratic approximation of -2 \log \Lambda(\theta), and from (7), certain conditions are required for constructing a confidence interval.

4.1. A Condition for the WPL Interval

Let X_1, X_2, \ldots, X_n be a random sample of size n from an inverse Gaussian population with unknown mean parameter \mu and shape parameter \lambda. Because the Wald statistic is a quadratic approximation of -2 \log \Lambda(\mu) = -2 \log \left[ L_P(\mu; \tilde{x}) / L_P(\hat{\mu}; \tilde{x}) \right], and -2 \log \Lambda(\mu) has an asymptotic chi-squared distribution, the lower and upper bounds of the WPL interval can be obtained if
\lim_{\mu \to 0^{+}} I_p(\hat{\mu}) (\mu - \hat{\mu})^{2} \ge \chi^{2}_{1-\alpha,1} \quad \text{and} \quad \lim_{\mu \to \infty} I_p(\hat{\mu}) (\mu - \hat{\mu})^{2} \ge \chi^{2}_{1-\alpha,1},
respectively. Because \lim_{\mu \to 0^{+}} I_p(\hat{\mu})(\mu - \hat{\mu})^{2} = I_p(\hat{\mu}) \hat{\mu}^{2} and, from (14), I_p(\hat{\mu}) = n\hat{\lambda}/\hat{\mu}^{3}, we have \lim_{\mu \to 0^{+}} I_p(\hat{\mu})(\mu - \hat{\mu})^{2} = n\hat{\lambda}/\hat{\mu}. By substituting \hat{\mu} = \bar{X} and \hat{\lambda} = n \big/ \sum_{i=1}^{n} (1/X_i - 1/\bar{X}), the condition for the lower bound of the WPL interval is as follows:
\frac{n^{2}}{\bar{X} \sum_{i=1}^{n} \left( 1/X_i - 1/\bar{X} \right)} \ge \chi^{2}_{1-\alpha,1}.
Moreover, \lim_{\mu \to \infty} I_p(\hat{\mu})(\mu - \hat{\mu})^{2} = \infty, which always exceeds \chi^{2}_{1-\alpha,1}. This means that an upper bound for the WPL interval always exists, whereas the existence of a lower bound depends on the data.

4.2. A Condition for the WRPLE Interval

As with the WPL interval, the WRPLE interval can be found if and only if
\lim_{\varphi \to 0^{+}} J_p(\hat{\varphi}) (\varphi - \hat{\varphi})^{2} \ge \chi^{2}_{1-\alpha,1} \quad \text{and} \quad \lim_{\varphi \to \infty} J_p(\hat{\varphi}) (\varphi - \hat{\varphi})^{2} \ge \chi^{2}_{1-\alpha,1}.
Because \lim_{\varphi \to 0^{+}} J_p(\hat{\varphi})(\varphi - \hat{\varphi})^{2} = J_p(\hat{\varphi}) \hat{\varphi}^{2} and from (18), the condition for the lower bound of the WRPLE interval is
\frac{n^{2} (n-2)}{(n+2)\, \bar{X} \sum_{i=1}^{n} \left( 1/X_i - 1/\bar{X} \right)} \ge \chi^{2}_{1-\alpha,1}.
For the upper bound, \lim_{\varphi \to \infty} J_p(\hat{\varphi})(\varphi - \hat{\varphi})^{2} = \infty, so the upper bound always exists.
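Both lower-bound conditions can be checked programmatically before applying the corresponding interval (a sketch of our own; the 95% chi-squared cutoff is hard-coded):

```python
CHI2_95_1 = 3.841458820694124  # 0.95 quantile of chi-squared, 1 d.f.

def wpl_lower_bound_exists(xs, chi2=CHI2_95_1):
    """Check the WPL lower-bound condition n^2 / (xbar * s) >= chi2."""
    n = len(xs)
    xbar = sum(xs) / n
    s = sum(1.0 / x - 1.0 / xbar for x in xs)
    return n ** 2 / (xbar * s) >= chi2

def wrple_lower_bound_exists(xs, chi2=CHI2_95_1):
    """Check the WRPLE condition n^2 (n-2) / ((n+2) * xbar * s) >= chi2."""
    n = len(xs)
    xbar = sum(xs) / n
    s = sum(1.0 / x - 1.0 / xbar for x in xs)
    return n ** 2 * (n - 2.0) / ((n + 2.0) * xbar * s) >= chi2
```

Since (n-2)/(n+2) < 1, the WRPLE condition is slightly stricter than the WPL one for the same data.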

5. Simulation Studies

In the simulation studies, the sample sizes vary, including 5, 10, 15, 30, 45, 60, and 100. The mean parameter values are set at 1, 3, and 7, while the shape parameter values are 0.5, 1, and 3. The performance of the two proposed intervals is compared with the PL interval proposed by Srisuradetchai [25], the RPL approach by Díaz-Francés [26], and the WRPLO interval by Srisuradetchai [27]. Performance is evaluated in terms of coverage probability (CP) and average interval length (AIL). Results for the PL, RPL, and WRPLO are summarized in Table 2, Table 3 and Table 4, and those for the proposed intervals are in Table 5 and Table 6.
From Table 2, Table 3, Table 4, Table 5 and Table 6, we observe that with a constant shape parameter and sample size, an increase in the mean of the IG distribution tends to decrease the CP value, while the AIL increases noticeably. Conversely, with a fixed mean and sample size, an increase in the shape parameter slightly raises the CP. For instance, Table 5 shows that with a mean of 3 and a sample size of 15, the CPs are 0.4913, 0.7718, and 0.8788 for shape parameters of 0.5, 1, and 3, respectively. Moreover, as the sample size increases, the CP generally increases, but the AIL decreases. Table 3 illustrates that for the RPL approach, a larger sample size is required as the mean and/or shape of the IG distribution increase.
Generally, the average lengths of the intervals PL, RPL, WRPLO, WRPLE, and WPL vary notably. The WPL tends to provide shorter intervals compared to WRPLE, which exhibits longer average lengths in many scenarios. PL and RPL usually fall in between these extremes, with WRPLO showing variable performance.
Figure 2 shows the performance of the proposed intervals WRPLE and WPL compared with the existing intervals PL, RPL, and WRPLO. The WRPLE method demonstrates high CP across various sample sizes and distribution parameters, making it a strong contender. Interestingly, the PL method also shows robust performance, particularly in certain conditions, potentially ranking as the second best in terms of CP. For instance, with a mean of 3 and a shape parameter of 0.5 at a sample size of 15, PL achieves a CP of around 0.92, notably higher than WPL. This highlights PL’s effectiveness under specific parameter configurations.
Both the mean and shape parameters of the IG distribution indeed affect the CP. A higher mean tends to decrease CP across most methods, indicating sensitivity to central tendency changes. Conversely, an increase in the shape parameter generally leads to a slight increase in CP, reflecting its impact on data skewness and variability.
Sample size is a crucial factor affecting CP. As the sample size increases, the CP generally improves, aligning more closely with the nominal level. This increase in CP with larger sample sizes is particularly pronounced for the WRPLE and WPL methods, underscoring their suitability for larger datasets.
In summary, the WRPLE method consistently shows the highest CP, making it the top performer. The PL method, often outperforming the WPL, ranks second in many scenarios. The WRPLO method follows, demonstrating solid performance but not quite matching the PL. The RPL and WPL methods, while effective, generally show lower CPs, positioning them lower in the hierarchy. This ranking, based on our simulation studies, suggests that while the WRPLE method is the most reliable overall, the effectiveness of each method varies significantly depending on the specific sample size, mean, and shape parameters of the inverse Gaussian distribution.

6. Application to a Real Dataset

The dataset, sourced from Lu and Chi [28], comprises 30 sequential observations of March precipitation in Minneapolis/St. Paul. The dataset is as follows:
0.77, 1.74, 0.81, 1.20, 1.95, 1.20, 0.47, 1.43, 3.37, 2.20, 3.00, 3.09, 1.51, 2.10, 0.52,
1.62, 1.31, 0.32, 0.59, 0.81, 2.81, 1.87, 1.18, 1.35, 4.75, 2.48, 0.96, 1.89, 0.90, 2.05.
Table 7 summarizes the descriptive statistics. The mean and standard deviation of precipitation are 1.675 and 1, respectively. Four candidate distributions—exponential, Cauchy, log-logistic, and inverse Gaussian—are fitted to the dataset. The results, summarized in Table 8, indicate that the fitted inverse Gaussian distribution has the highest p-value and the lowest Akaike information criterion (AIC), suggesting that it is the most suitable for this dataset. The estimated mean and shape parameters for the inverse Gaussian distribution are 1.675 and 3.584, respectively. From Figure 3, which displays the probability plot, it is evident that the points of the log-logistic and inverse Gaussian distributions align more closely with the diagonal line compared to the exponential and Cauchy distributions.
The 95% confidence intervals for the March precipitation dataset are computed using formulas (9), (10), (11), (15), and (23) for the PL, RPL, WRPLO, WPL, and WRPLE methods, respectively. Observation of the interval lengths in Table 9 indicates that the results align with the simulation study. This study found that WPL typically produces shorter intervals compared to WRPLE, while both PL and RPL generally fall between these two in terms of interval length.
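The point estimates reported above, and the two proposed intervals for this dataset, can be reproduced with a short script (our own sketch; the 95%-level constants are hard-coded):

```python
import math

PRECIP = [0.77, 1.74, 0.81, 1.20, 1.95, 1.20, 0.47, 1.43, 3.37, 2.20,
          3.00, 3.09, 1.51, 2.10, 0.52, 1.62, 1.31, 0.32, 0.59, 0.81,
          2.81, 1.87, 1.18, 1.35, 4.75, 2.48, 0.96, 1.89, 0.90, 2.05]
Z = 1.959963984540054  # 0.975 standard normal quantile

n = len(PRECIP)
xbar = sum(PRECIP) / n                        # 1.675, as reported
s = sum(1.0 / x - 1.0 / xbar for x in PRECIP)
lam_hat = n / s                               # about 3.584, as reported

# WPL interval, symmetric about the sample mean
half = Z * math.sqrt(xbar ** 3 / (n * lam_hat))
wpl = (xbar - half, xbar + half)

# WRPLE interval, built on phi = 1/mu and then inverted
h = (Z / n) * math.sqrt((n + 2.0) / (n - 2.0) * s / xbar)
wrple = (1.0 / (1.0 / xbar + h), 1.0 / (1.0 / xbar - h))
print(wpl, wrple)
```

Running this reproduces the estimated mean and shape parameters and yields intervals consistent with those in Table 9.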

7. Conclusions

The mathematically derived WPL and WRPLE intervals have closed forms, making them easy to calculate for a given dataset. Simulation studies show that the WRPLE interval provides a coverage probability closer to the nominal 0.95 confidence level than the other intervals. For large sample sizes (at least 30), the WRPLE and WRPLO are comparable, but the WRPLE is superior for small sample sizes. The WPL, however, tends to perform worse than the WRPLE and other existing intervals in many cases, which implies that reparameterization is essential for constructing Wald-type confidence intervals. Additionally, as the sample size increases, the coverage probability improves for all interval types.
Future research could explore the simultaneous confidence interval construction for both parameters of the IG distribution. Additionally, there is potential to focus on other characteristics as parameters of interest, such as the variance of the IG distribution. This could broaden the applicability of confidence intervals in various statistical contexts. Another significant direction for future work includes developing software packages to facilitate the practical application of these intervals, enhancing their usability in statistical analysis.

Author Contributions

Conceptualization, P.S. and W.P.; methodology, P.S.; software, A.N.; validation, P.S., A.N. and W.P.; formal analysis, P.S.; investigation, W.P. and A.N.; resources, W.P.; data curation, A.N.; writing—original draft preparation, P.S. and A.N.; writing—review and editing, W.P.; visualization, P.S. and A.N.; supervision, P.S.; project administration, W.P.; funding acquisition, W.P. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by King Mongkut’s University of Technology North Bangkok. Contract no. KMUTNB-66-BASIC-04.

Data Availability Statement

The dataset used in this study can be found in the work of Lu and Chi [28].

Acknowledgments

The authors would like to thank the editor and the reviewers for their valuable comments and suggestions to improve this paper.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Folks, J.L.; Chhikara, R.S. The Inverse Gaussian Distribution and Its Statistical Application—A Review. J. R. Stat. Soc. Ser. B Methodol. 1978, 40, 263–289.
  2. Schrödinger, E. Theory of Parabolic and Rising Experiments on Particles with Brownian Motion. Phys. Z. 1915, 16, 289–295.
  3. Wald, A. Sequential Analysis; Wiley: New York, NY, USA, 1947.
  4. Wise, M.E. Skew Distributions in Biomedicine Including Some with Negative Powers of Time. In A Modern Course on Statistical Distributions in Scientific Work; Patil, G.P., Kotz, S., Ord, J.K., Eds.; NATO Advanced Study Institutes Series; Springer: Dordrecht, The Netherlands, 1975; Volume 17.
  5. Onar, A.; Padgett, W.J. Accelerated Test Models with the Inverse Gaussian Distribution. J. Stat. Plan. Inference 2000, 89, 119–133.
  6. Jain, R.K.; Jain, S. Inverse Gaussian Distribution and Its Application to Reliability. Microelectron. Reliab. 1996, 36, 1323–1335.
  7. Barndorff-Nielsen, O.E.; Shephard, N. Modelling by Lévy Processes for Financial Econometrics. In Lévy Processes; Barndorff-Nielsen, O.E., Resnick, S.I., Mikosch, T., Eds.; Birkhäuser: Boston, MA, USA, 2001.
  8. McCarthy, M. Bayesian Methods for Ecology; Cambridge University Press: Cambridge, UK, 2007.
  9. Chankham, W.; Niwitpong, S.; Niwitpong, S. Measurement of Dispersion of PM 2.5 in Thailand Using Confidence Intervals for the Coefficient of Variation of an Inverse Gaussian Distribution. PeerJ 2022, 10, e12988.
  10. Hougaard, P. Frailty Models for Survival Data. Lifetime Data Anal. 1995, 1, 255–273.
  11. Lai, C.-D.; Xie, M. Stochastic Ageing and Dependence for Reliability, 1st ed.; Springer: New York, NY, USA, 2006.
  12. Krbálek, M.; Hobza, T.; Patočka, M.; Krbálková, M.; Apeltauer, J.; Groverová, N. Statistical Aspects of Gap-Acceptance Theory for Unsignalized Intersection Capacity. Physica A Stat. Mech. Appl. 2022, 594, 127043.
  13. Fisch, K.; Schwalger, T.; Lindner, B.; Herz, A.V.M.; Benda, J. Channel Noise from Both Slow Adaptation Currents and Fast Currents Is Required to Explain Spike-Response Variability in a Sensory Neuron. J. Neurosci. 2012, 32, 17332–17344.
  14. Stein, A.J.; Lindsey, J.K. Statistical Analysis of Stochastic Processes in Time. Environ. Ecol. Stat. 2006, 13, 247–248.
  15. Punzo, A. A New Look at the Inverse Gaussian Distribution with Applications to Insurance and Economic Data. J. Appl. Stat. 2019, 46, 1260–1287.
  16. Tweedie, M.C.K. Statistical Properties of Inverse Gaussian Distributions. II. Ann. Math. Statist. 1957, 28, 696–705.
  17. Hougaard, P. Univariate Survival Data. In Analysis of Multivariate Survival Data; Statistics for Biology and Health; Springer: New York, NY, USA, 2000.
  18. Davidson, R.; MacKinnon, J.G. Estimation and Inference in Econometrics; Oxford University Press: New York, NY, USA, 1993.
  19. Martin, V.; Hurn, S.; Harris, D. Econometric Modelling with Time Series: Specification, Estimation and Testing; Cambridge University Press: Cambridge, UK, 2013; p. 138.
  20. Rohde, C.A. Introductory Statistical Inference with the Likelihood Function, 1st ed.; Springer: London, UK, 2014.
  21. Kummaraka, U.; Srisuradetchai, P. Interval Estimation of the Dependence Parameter in Bivariate Clayton Copulas. Emerg. Sci. J. 2023, 7, 1478–1490.
  22. Pawitan, Y. In All Likelihood: Statistical Modelling and Inference Using Likelihood; Clarendon Press: Oxford, UK, 2001.
  23. Murphy, S.A.; van der Vaart, A.W. On Profile Likelihood. J. Am. Stat. Assoc. 2000, 95, 449–465.
  24. Arefi, M.; Mohtashami Borzadaran, G.R.; Vaghei, Y. A Note on Interval Estimation for the Mean of Inverse Gaussian Distribution. SORT 2008, 32, 49–56.
  25. Srisuradetchai, P. Simple Formulas for Profile- and Estimated-Likelihood Based Confidence Intervals for the Mean of Inverse Gaussian. J. KMUTNB 2017, 27, 467–479. (In Thai)
  26. Díaz-Francés, E. Simple Estimation Intervals for Poisson, Exponential, and Inverse Gaussian Means Obtained by Symmetrizing the Likelihood Function. Am. Stat. 2016, 70, 171–180.
  27. Srisuradetchai, P. Using Re-Parametrized Profile Likelihoods to Construct Wald Confidence Intervals for the Mean of Inverse Gaussian Distribution. In Proceedings of the 19th National Graduate Research Conference, Khon Kaen University, Khon Kaen, Thailand, 9 March 2018. (In Thai)
  28. Lu, W.; Shi, D. A New Compounding Life Distribution: The Weibull–Poisson Distribution. J. Appl. Stat. 2012, 39, 21–38.
Figure 1. Shapes of the IG distributions with different parameters.
Figure 2. Coverage probability across different intervals in relation to sample size for each scenario of inverse Gaussian distribution (the dashed line represents the nominal confidence level, 0.95).
Figure 3. A plot of the probabilities of each fitted distribution (x-axis) against the empirical probabilities (y-axis).
Table 1. List of abbreviations and acronyms used in the paper.

Abbreviation | Definition
AIC | Akaike information criterion
AIL | Average interval length
CI | Confidence interval
CP | Coverage probability
IG | Inverse Gaussian
MLE | Maximum likelihood estimator
PL | Profile likelihood
RPL | Reparameterized profile likelihood
WPL | Wald-type profile likelihood (without reparameterization)
WRPLE | Wald-type reparameterized profile likelihood using expected Fisher information
WRPLO | Wald-type reparameterized profile likelihood using observed Fisher information
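The CP and AIL figures reported in Tables 2–6 are Monte Carlo quantities: for each (λ, μ, n) scenario, many IG samples are drawn, an interval is computed from each, and the fraction covering the true μ and the mean interval length are recorded. The sketch below illustrates this for a plain Wald-type interval of the symmetric form μ̂ ± z√(μ̂³/(nλ̂)), using the standard IG MLEs μ̂ = x̄ and λ̂ = n/Σ(1/xᵢ − 1/x̄); the function names and replication count are illustrative, not taken from the paper.

```python
import numpy as np

Z_975 = 1.959964  # standard normal 0.975 quantile (95% two-sided)

def wald_ci_ig_mean(x):
    """Symmetric Wald-type 95% CI for the IG mean, using the MLEs
    mu_hat = x_bar and lam_hat = n / sum(1/x_i - 1/x_bar)."""
    n = len(x)
    mu_hat = x.mean()
    lam_hat = n / np.sum(1.0 / x - 1.0 / mu_hat)
    half = Z_975 * np.sqrt(mu_hat**3 / (n * lam_hat))
    return mu_hat - half, mu_hat + half

def cp_and_ail(mu, lam, n, reps=2000, seed=1):
    """Monte Carlo coverage probability and average interval length."""
    rng = np.random.default_rng(seed)
    hits, total_len = 0, 0.0
    for _ in range(reps):
        x = rng.wald(mu, lam, size=n)  # NumPy's 'wald' is IG(mean=mu, scale=lam)
        lo, hi = wald_ci_ig_mean(x)
        hits += (lo <= mu <= hi)
        total_len += hi - lo
    return hits / reps, total_len / reps

cp, ail = cp_and_ail(mu=1.0, lam=3.0, n=100)
```

With enough replications, this scenario (λ = 3, μ = 1, n = 100) should yield a CP slightly below the nominal 0.95, in line with the pattern the tables report for Wald-type intervals at moderate sample sizes.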
Table 2. CPs and AILs of the PL interval proposed by Srisuradetchai [25].

λ (Shape) | n | CP (μ = 1) | CP (μ = 3) | CP (μ = 7) | AIL (μ = 1) | AIL (μ = 3) | AIL (μ = 7)
0.5 | 5 | 0.7964 | 0.5760 | 0.3520 | 22.5367 | 24.4815 | 33.5161
0.5 | 10 | 0.9208 | 0.7760 | 0.5451 | 22.2293 | 43.1449 | 122.6111
0.5 | 15 | 0.9315 | 0.8651 | 0.6673 | 8.1085 | 101.0214 | 79.8984
0.5 | 30 | 0.9421 | 0.9393 | 0.8488 | 1.5772 | 48.3700 | 238.8934
0.5 | 45 | 0.9453 | 0.9506 | 0.9129 | 1.0463 | 25.0285 | 151.0399
0.5 | 60 | 0.9506 | 0.9499 | 0.9435 | 0.8505 | 8.8848 | 170.8275
0.5 | 100 | 0.9486 | 0.9458 | 0.9566 | 0.6091 | 4.0184 | 93.0158
1 | 5 | 0.8673 | 0.7191 | 0.5269 | 33.0340 | 36.8387 | 50.0072
1 | 10 | 0.9282 | 0.8887 | 0.7439 | 5.0073 | 49.1924 | 96.2266
1 | 15 | 0.9398 | 0.9314 | 0.8373 | 1.7234 | 131.7031 | 125.8797
1 | 30 | 0.9428 | 0.9479 | 0.9407 | 0.8561 | 12.5698 | 210.7568
1 | 45 | 0.9459 | 0.9470 | 0.9507 | 0.6504 | 4.5521 | 112.0056
1 | 60 | 0.9491 | 0.9495 | 0.9490 | 0.5468 | 3.4503 | 29.6077
1 | 100 | 0.9510 | 0.9513 | 0.9487 | 0.4101 | 2.3437 | 10.9928
3 | 5 | 0.9000 | 0.8619 | 0.7681 | 2.9671 | 31.4322 | 94.2752
3 | 10 | 0.9278 | 0.9276 | 0.9107 | 0.8994 | 14.7727 | 125.6737
3 | 15 | 0.9397 | 0.9309 | 0.9388 | 0.6658 | 5.2189 | 68.1577
3 | 30 | 0.9429 | 0.9445 | 0.9457 | 0.4372 | 2.5664 | 13.3481
3 | 45 | 0.9478 | 0.9468 | 0.9453 | 0.3516 | 1.9552 | 8.3623
3 | 60 | 0.9491 | 0.9468 | 0.9461 | 0.3011 | 1.6459 | 6.6398
3 | 100 | 0.9477 | 0.9525 | 0.9477 | 0.2304 | 1.2300 | 4.6786
Table 3. CPs and AILs of the RPL approach proposed by Díaz-Francés [26].

λ (Shape) | n | CP (μ = 1) | CP (μ = 3) | CP (μ = 7) | AIL (μ = 1) | AIL (μ = 3) | AIL (μ = 7)
0.5 | 5 | 0.3295 | 0.1119 | 0.0472 | 6.4585 | 5.4885 | 5.3135
0.5 | 10 | 0.6504 | 0.1939 | 0.073 | 1.0830 | 11.8711 | 33.6625
0.5 | 15 | 0.8665 | 0.3058 | 0.1064 | 2.8495 | 3.7286 | 15.1804
0.5 | 30 | 0.9427 | 0.7177 | 0.2481 | 1.5366 | 7.3637 | 55.9877
0.5 | 45 | 0.9439 | 0.9147 | 0.4273 | 1.0440 | 4.2846 | 57.7240
0.5 | 60 | 0.9494 | 0.9485 | 0.6258 | 0.8431 | 7.1018 | 23.9830
0.5 | 100 | 0.9489 | 0.9470 | 0.9209 | 0.6089 | 3.9953 | 102.7412
1 | 5 | 0.5821 | 0.2221 | 0.101 | 2.9889 | 5.2536 | 6.6847
1 | 10 | 0.8994 | 0.4506 | 0.1606 | 2.5446 | 4.5644 | 31.1419
1 | 15 | 0.9390 | 0.6735 | 0.2613 | 1.6503 | 7.4503 | 6.2086
1 | 30 | 0.9430 | 0.9399 | 0.6112 | 0.8524 | 7.2613 | 59.9609
1 | 45 | 0.9484 | 0.9389 | 0.8747 | 0.6516 | 4.5042 | 26.5516
1 | 60 | 0.9474 | 0.9446 | 0.9407 | 0.5494 | 3.4511 | 33.0396
1 | 100 | 0.9539 | 0.9483 | 0.9502 | 0.4103 | 2.3522 | 10.9170
3 | 5 | 0.8826 | 0.5718 | 0.2852 | 0.4542 | 11.3641 | 23.5666
3 | 10 | 0.9321 | 0.8996 | 0.5670 | 0.8896 | 2.2913 | 10.6808
3 | 15 | 0.9328 | 0.9368 | 0.8145 | 0.6661 | 5.1338 | 35.5910
3 | 30 | 0.9426 | 0.9431 | 0.9448 | 0.4393 | 2.5513 | 13.2340
3 | 45 | 0.9452 | 0.9476 | 0.9464 | 0.3519 | 1.9498 | 8.3212
3 | 60 | 0.9475 | 0.9484 | 0.9452 | 0.3004 | 1.6465 | 6.6264
3 | 100 | 0.9453 | 0.9456 | 0.9455 | 0.2303 | 1.2319 | 4.6810
Table 4. CPs and AILs of the WRPLO proposed by Srisuradetchai [27].

λ (Shape) | n | CP (μ = 1) | CP (μ = 3) | CP (μ = 7) | AIL (μ = 1) | AIL (μ = 3) | AIL (μ = 7)
0.5 | 5 | 0.7648 | 0.5589 | 0.3605 | 27.1944 | 28.6415 | 33.6888
0.5 | 10 | 0.9019 | 0.7685 | 0.5335 | 13.7167 | 47.1982 | 64.8575
0.5 | 15 | 0.9258 | 0.8642 | 0.6606 | 41.7971 | 69.0019 | 77.2883
0.5 | 30 | 0.9299 | 0.9391 | 0.8450 | 1.4503 | 60.1097 | 109.0317
0.5 | 45 | 0.9419 | 0.9473 | 0.9078 | 1.0094 | 20.2874 | 236.1456
0.5 | 60 | 0.9436 | 0.9463 | 0.9392 | 0.8315 | 7.8830 | 210.4629
0.5 | 100 | 0.9460 | 0.9484 | 0.9565 | 0.6034 | 3.9255 | 54.9630
1 | 5 | 0.8231 | 0.7120 | 0.5208 | 14.1464 | 42.5124 | 66.7006
1 | 10 | 0.9064 | 0.8742 | 0.7348 | 3.3469 | 51.0088 | 137.9784
1 | 15 | 0.9199 | 0.9165 | 0.8348 | 1.4419 | 56.3881 | 139.6932
1 | 30 | 0.9358 | 0.9349 | 0.9357 | 0.8242 | 8.5811 | 112.2808
1 | 45 | 0.9433 | 0.9429 | 0.9505 | 0.6347 | 4.3697 | 101.5731
1 | 60 | 0.9416 | 0.9409 | 0.9480 | 0.5385 | 3.3564 | 34.0264
1 | 100 | 0.9462 | 0.9470 | 0.9461 | 0.4058 | 2.3274 | 10.7083
3 | 5 | 0.8433 | 0.8216 | 0.7539 | 2.2341 | 79.7818 | 63.8266
3 | 10 | 0.9026 | 0.9005 | 0.8899 | 0.7785 | 12.3920 | 115.2107
3 | 15 | 0.9207 | 0.9221 | 0.9250 | 0.6161 | 4.3373 | 69.4735
3 | 30 | 0.9363 | 0.9374 | 0.9397 | 0.4228 | 2.4487 | 11.9999
3 | 45 | 0.9425 | 0.9460 | 0.9418 | 0.3426 | 1.9033 | 8.0710
3 | 60 | 0.9445 | 0.9461 | 0.9432 | 0.2960 | 1.6115 | 6.4183
3 | 100 | 0.9453 | 0.9485 | 0.9465 | 0.2280 | 1.2217 | 4.6426
Table 5. CPs and AILs of the WPL interval.

λ (Shape) | n | CP (μ = 1) | CP (μ = 3) | CP (μ = 7) | AIL (μ = 1) | AIL (μ = 3) | AIL (μ = 7)
0.5 | 5 | 0.5525 | 0.1726 | 0.0420 | 1.0580 | 1.7659 | 2.0279
0.5 | 10 | 0.7749 | 0.3031 | 0.0339 | 1.2895 | 2.3467 | 2.6358
0.5 | 15 | 0.8488 | 0.4913 | 0.0586 | 1.3239 | 2.9065 | 3.3715
0.5 | 30 | 0.8880 | 0.7904 | 0.3135 | 1.0114 | 4.0526 | 5.5555
0.5 | 45 | 0.9109 | 0.8500 | 0.5950 | 0.8222 | 4.1752 | 7.3170
0.5 | 60 | 0.9197 | 0.8799 | 0.7379 | 0.7182 | 3.8010 | 8.7419
0.5 | 100 | 0.9296 | 0.8976 | 0.8482 | 0.5554 | 2.9112 | 9.9903
1 | 5 | 0.7302 | 0.4029 | 0.1370 | 1.1727 | 3.6745 | 3.6544
1 | 10 | 0.8492 | 0.6596 | 0.2333 | 1.1591 | 3.4925 | 4.8826
1 | 15 | 0.8818 | 0.7718 | 0.3898 | 0.9894 | 3.9176 | 6.0905
1 | 30 | 0.9118 | 0.8648 | 0.7396 | 0.7104 | 3.7168 | 8.8713
1 | 45 | 0.9263 | 0.8970 | 0.8410 | 0.5785 | 3.0658 | 9.9320
1 | 60 | 0.9308 | 0.8981 | 0.8655 | 0.5030 | 2.6446 | 9.4724
1 | 100 | 0.9364 | 0.9217 | 0.8947 | 0.3900 | 2.0447 | 7.3870
3 | 5 | 0.8172 | 0.7364 | 0.5044 | 0.8597 | 3.5417 | 7.0483
3 | 10 | 0.8842 | 0.8549 | 0.7385 | 0.6686 | 3.4748 | 8.8143
3 | 15 | 0.9076 | 0.8788 | 0.8220 | 0.5607 | 2.9499 | 9.4084
3 | 30 | 0.9269 | 0.9111 | 0.8830 | 0.4044 | 2.1150 | 7.6076
3 | 45 | 0.9360 | 0.9238 | 0.9009 | 0.3327 | 1.7348 | 6.2790
3 | 60 | 0.9414 | 0.9310 | 0.9094 | 0.2896 | 1.5055 | 5.3961
3 | 100 | 0.9470 | 0.9418 | 0.9315 | 0.2250 | 1.1740 | 4.2096
Table 6. CPs and AILs of the WRPLE interval.

λ (Shape) | n | CP (μ = 1) | CP (μ = 3) | CP (μ = 7) | AIL (μ = 1) | AIL (μ = 3) | AIL (μ = 7)
0.5 | 5 | 0.8153 | 0.5725 | 0.3391 | 31.7213 | 30.5061 | 23.4398
0.5 | 10 | 0.9322 | 0.7728 | 0.5380 | 18.5406 | 44.8999 | 54.5505
0.5 | 15 | 0.9523 | 0.8732 | 0.6671 | 23.4037 | 284.2358 | 89.1180
0.5 | 30 | 0.9494 | 0.9482 | 0.8528 | 1.6982 | 81.8574 | 176.3652
0.5 | 45 | 0.9500 | 0.9543 | 0.9168 | 1.0811 | 25.1390 | 122.5613
0.5 | 60 | 0.9513 | 0.9533 | 0.9457 | 0.8660 | 10.1873 | 131.8133
0.5 | 100 | 0.9553 | 0.9485 | 0.9599 | 0.6175 | 4.0827 | 74.6419
1 | 5 | 0.8928 | 0.7409 | 0.5241 | 16.0907 | 44.3857 | 66.7506
1 | 10 | 0.9511 | 0.8992 | 0.7410 | 16.4258 | 111.8300 | 89.6045
1 | 15 | 0.9516 | 0.9437 | 0.8447 | 2.1310 | 125.7302 | 125.1637
1 | 30 | 0.9561 | 0.9558 | 0.9424 | 0.8933 | 11.1814 | 145.2397
1 | 45 | 0.9518 | 0.9495 | 0.9619 | 0.6745 | 4.7340 | 124.8338
1 | 60 | 0.9494 | 0.9482 | 0.9552 | 0.5590 | 3.5254 | 39.7383
1 | 100 | 0.9493 | 0.9522 | 0.9529 | 0.4150 | 2.3912 | 11.1029
3 | 5 | 0.9407 | 0.8976 | 0.7898 | 6.5136 | 56.5863 | 120.0086
3 | 10 | 0.9522 | 0.9476 | 0.9201 | 1.0494 | 21.7278 | 187.6656
3 | 15 | 0.9550 | 0.9498 | 0.9536 | 0.7255 | 6.1024 | 165.2150
3 | 30 | 0.9536 | 0.9503 | 0.9526 | 0.4533 | 2.6947 | 15.6644
3 | 45 | 0.9509 | 0.9499 | 0.9521 | 0.3588 | 2.0194 | 8.5897
3 | 60 | 0.9556 | 0.9557 | 0.9498 | 0.3068 | 1.6788 | 6.7826
3 | 100 | 0.9483 | 0.9499 | 0.9538 | 0.2324 | 1.2438 | 4.7345
Table 7. Descriptive statistics of March precipitation.

n | Minimum | Maximum | Median | Mean | Skewness | SD
30 | 0.320 | 4.750 | 1.470 | 1.675 | 1.1447 | 1.0006
Table 8. Maximum likelihood estimates, goodness-of-fit testing, and AIC for the March precipitation dataset.

Distribution | Estimates | Chi-Squared Statistic | p-Value | AIC
Exponential | θ̂ = 0.5970 | 9.0049 | 0.1088 | 92.9487
Cauchy | θ̂ = (1.4251, 0.5457) | 3.6964 | 0.4486 | 94.8484
Log-logistic | θ̂ = (2.7880, 1.4407) | 2.5873 | 0.6290 | 81.8615
Inverse Gaussian | θ̂ = (1.6749, 3.5840) | 2.5662 | 0.6328 | 81.2077
Table 9. 95% confidence intervals for the means of the March precipitation dataset.

Interval | 95% Confidence Interval | Interval Length
Existing intervals:
PL | (1.3371, 2.2413) | 0.9042
RPL | (1.3392, 2.2587) | 0.9195
WRPLO | (1.3457, 2.2175) | 0.8718
Proposed intervals:
WPL | (1.2652, 2.0847) | 0.8197
WRPLE | (1.3277, 2.2682) | 0.9405
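As a check on the table above, the reported WPL interval can be reproduced (up to rounding of the reported estimates) from the Table 7–8 values with the symmetric Wald form μ̂ ± z√(μ̂³/(nλ̂)). This is a sketch of that arithmetic, not code from the paper:

```python
import math

n = 30                             # sample size (Table 7)
mu_hat, lam_hat = 1.6749, 3.5840   # IG maximum likelihood estimates (Table 8)
z = 1.959964                       # standard normal 0.975 quantile

half = z * math.sqrt(mu_hat**3 / (n * lam_hat))
lo, hi = mu_hat - half, mu_hat + half  # close to the reported WPL interval (1.2652, 2.0847)
```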

Share and Cite

Srisuradetchai, P.; Niyomdecha, A.; Phaphan, W. Wald Intervals via Profile Likelihood for the Mean of the Inverse Gaussian Distribution. Symmetry 2024, 16, 93. https://doi.org/10.3390/sym16010093

