Article

Statistical Inference of Normal Distribution Based on Several Divergence Measures: A Comparative Study

1 Department of Mathematics, Al-Balqa Applied University, Alsalt 19117, Jordan
2 Department of Mathematics, University of Petra, Amman 11196, Jordan
3 Department of Statistics, Yarmouk University, Irbid 21163, Jordan
* Authors to whom correspondence should be addressed.
These authors contributed equally to this work.
Symmetry 2024, 16(2), 212; https://doi.org/10.3390/sym16020212
Submission received: 25 December 2023 / Revised: 5 February 2024 / Accepted: 6 February 2024 / Published: 9 February 2024
(This article belongs to the Special Issue Research Topics Related to Skew-Symmetric Distributions)

Abstract:
Statistical predictive analysis is a very useful tool for predicting future observations. Previous literature has addressed both Bayesian and non-Bayesian predictive distributions of future statistics based on past sufficient statistics. This study focused on evaluating Bayesian and Wald predictive-density functions of a future statistic V based on a past sufficient statistic W obtained from a normal distribution. Several divergence measures were used to assess the closeness of the predictive densities to the future density. The difference between these divergence measures was investigated, using a simulation study. A comparison between the two predictive densities was examined, based on the power of a test. The application of a real data set was used to illustrate the results in this article.

1. Introduction

One of the most fundamental and significant fields in statistics is predictive analysis, which involves extracting information from historical and current data to predict future trends and behaviors. One of the many possible forms of prediction is the Bayesian predictive approach, which was first introduced by Aitchison [1], who demonstrated its advantage over plug-in predictive densities under the Kullback–Leibler (KL) divergence. Bayesian predictive-density estimation has been applied to different statistical models, including but not limited to the following: Aitchison and Dunsmore [2], who obtained Bayesian predictive distributions based on random samples from binomial, Poisson, gamma, two-parameter exponential, and normal distributions; Escobar and West [3], who discussed the application of Bayesian inference to density-estimation models, using Dirichlet-process mixtures; and Hamura and Kubokawa [4], who used a number of Bayesian predictive densities to introduce prediction for the exponential distribution. Hamura and Kubokawa [5] studied the Bayesian predictive distribution of a chi-squared distribution, given a random sample from another chi-squared distribution, under the Kullback–Leibler divergence.
Another form of prediction is the Wald predictive approach [6], which follows a non-Bayesian framework. This approach was considered via the concept of predictive likelihood [7]. Awad et al. [8] introduced a review of several prediction procedures and a comparison between them. Wald predictive density was among these procedures.
One way to check the validity of the prediction procedure is by testing the closeness between the classical distribution of future statistics and the predictive distribution by using divergence measures between the two probability distributions. Several divergence measures have been defined in the literature and have been used to measure the distance between pairs of probability-density functions.
Considering the two probability distributions f 1 ( x ) and f 2 ( x ) , Kullback and Leibler [9] introduced the Kullback–Leibler divergence measure, which measures the information gain between two distributions. The Kullback–Leibler divergence measure belongs to the family of Shannon-entropy distance measures, defined as
$$K(f_1 \,\|\, f_2) = E_{f_1}\!\left[\log \frac{f_1(X)}{f_2(X)}\right].$$
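The expectation in (1) can be approximated by Monte Carlo sampling from f_1. The short Python sketch below (not part of the original article) does this for two normal densities and compares the estimate with the well-known closed form for the KL divergence between two univariate normals; the parameter values are arbitrary assumptions.

```python
# Hedged illustration (not from the paper): Monte Carlo estimate of the
# Kullback-Leibler divergence K(f1||f2) = E_f1[log f1(X)/f2(X)] for two
# normal densities, compared with the known closed form.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
mu1, s1, mu2, s2 = 0.0, 1.0, 1.0, 2.0        # illustrative parameters

x = rng.normal(mu1, s1, size=200_000)        # sample from f1
kl_mc = np.mean(norm.logpdf(x, mu1, s1) - norm.logpdf(x, mu2, s2))

# Closed form for two normals: log(s2/s1) + (s1^2 + (mu1-mu2)^2)/(2 s2^2) - 1/2
kl_exact = np.log(s2 / s1) + (s1**2 + (mu1 - mu2)**2) / (2 * s2**2) - 0.5
print(kl_mc, kl_exact)                       # the two values should agree to ~2 decimals
```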
Lin [10] introduced the Jensen–Shannon divergence, which is a symmetric extension of the Kullback–Leibler divergence that has a finite value. In a similar approach, Johnson and Sinanovic [11] provided a new divergence measure known as the resistor-average measure. This measure is closely related to the Kullback–Leibler divergence mentioned in (1), but it is symmetric.
Do et al. [12] combined the similarity measurement of feature extraction into a joint modeling and classification scheme. They computed the Kullback–Leibler distance between the estimated models. Amin et al. [13] used the Kullback–Leibler divergence to develop a data-based Bayesian-network learning strategy; the proposed approach captures the nonlinear dependence of high-dimensional process data.
Jeffreys [14] introduced and studied a divergence measure called the Jeffreys-distance or J-divergence measure, which is considered as a symmetrization of the Kullback–Leibler divergence. The Jeffreys measure is a member of the family of Shannon-entropy distance measures, defined as
$$J(f_1, f_2) = E_{f_1}\!\left[\log \frac{f_1(X)}{f_2(X)}\right] - E_{f_2}\!\left[\log \frac{f_1(X)}{f_2(X)}\right],$$
$$J(f_1, f_2) = K(f_1 \,\|\, f_2) + K(f_2 \,\|\, f_1).$$
Taneja et al. [15] gave two different parametric generalizations of the Jeffreys measure. Cichocki [16] discussed the basic characteristics of the extensive families of alpha, beta, and gamma divergences including the Jeffreys divergence. They linked these divergences and showed their connections to the Tsallis and Rényi entropies. Sharma et al. [17] found the closeness between two probability distributions, using three similarity measures derived from the concepts of Jeffreys divergence and Kullback–Leibler divergence.
Rényi [18] introduced the Rényi divergence measure, which is related to Rényi entropy and depends on a parameter called r-order. The Rényi measure of order r is defined as
$$K_r^1(f_1 \,\|\, f_2) = \frac{1}{r-1}\,\log E_{f_1}\!\left[\left(\frac{f_1(X)}{f_2(X)}\right)^{r-1}\right]; \qquad r \neq 1,\; r > 0.$$
Note that $\lim_{r\to 1} K_r^1(f_1 \,\|\, f_2) = K(f_1 \,\|\, f_2)$. In other words, the Rényi measure of order 1 is basically the Kullback–Leibler measure.
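As a quick numerical illustration (not from the paper), the Rényi measure in (3) can be estimated by Monte Carlo for two assumed normal densities and seen to approach the Kullback–Leibler value as r approaches 1.

```python
# Hedged illustration (not from the paper): Monte Carlo estimate of the
# Renyi divergence of order r in (3) for two normal densities, checking
# numerically that it approaches the Kullback-Leibler value as r -> 1.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
mu1, s1, mu2, s2 = 0.0, 1.0, 0.5, 1.0        # illustrative parameters
x = rng.normal(mu1, s1, size=500_000)
log_ratio = norm.logpdf(x, mu1, s1) - norm.logpdf(x, mu2, s2)

def renyi_mc(r):
    # K_r^1(f1||f2) = (1/(r-1)) * log E_f1[(f1/f2)^(r-1)]
    return np.log(np.mean(np.exp((r - 1) * log_ratio))) / (r - 1)

kl_mc = np.mean(log_ratio)                   # Kullback-Leibler estimate
for r in (0.5, 0.9, 0.99, 1.01, 1.5):
    print(r, renyi_mc(r))                    # values near r = 1 approach kl_mc
print("KL:", kl_mc)
```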
The Rényi measure is used in physics and is called Tsallis entropy [19]. Krishnamurthy et al. [20] constructed a nonparametric estimator of Rényi divergence between continuous distributions. Their method consists of constructing estimators for certain integral functionals of two densities and transforming them into divergence estimators. Sason et al. [21] created integral formulas for the Rényi measure. Using these formulas, one can obtain bounds on the Rényi divergence as a function of the variational distance, assuming bounded relative information.
Sharma and Autar [22] suggested another generalization of Kullback–Leibler’s information, called relative information of type r, which is given by
$${}^1K_r(f_1 \,\|\, f_2) = \frac{1}{r-1}\left[E_{f_1}\!\left[\left(\frac{f_1(X)}{f_2(X)}\right)^{r-1}\right] - 1\right]; \qquad r \neq 1,\; r > 0.$$
Note that $\lim_{r\to 1} {}^1K_r(f_1 \,\|\, f_2) = K(f_1 \,\|\, f_2)$. Taneja and Kumar [23] proposed a modified version of the relative-information-of-type-r measure as a parametric generalization of the Kullback–Leibler measure and then considered it in terms of Csiszár’s f-divergence.
Sharma and Mittal [24] generalized the Kullback–Leibler divergence measure to a measure called the r-order-and-s-degree divergence measure, defined as
$$K_r^s(f_1 \,\|\, f_2) = \frac{1}{s-1}\left[\left(E_{f_1}\!\left[\left(\frac{f_1(X)}{f_2(X)}\right)^{r-1}\right]\right)^{\frac{s-1}{r-1}} - 1\right]; \qquad r, s \neq 1,\; r > 0.$$
From Equation (5), we can see that:
(i)
$\lim_{s\to 1} K_r^s(f_1 \,\|\, f_2) = K_r^1(f_1 \,\|\, f_2)$, which is basically the Rényi measure defined in (3);
(ii)
$\lim_{r\to 1} K_r^s(f_1 \,\|\, f_2) = K_1^s(f_1 \,\|\, f_2)$, where
$$K_1^s(f_1 \,\|\, f_2) = \frac{1}{s-1}\left[e^{(s-1)K(f_1 \,\|\, f_2)} - 1\right]; \qquad s \neq 1;$$
(iii)
$\lim_{s\to 1} K_1^s(f_1 \,\|\, f_2) = K(f_1 \,\|\, f_2)$, which is basically the Kullback–Leibler measure defined in (1).
Taneja and Kumar [23] conducted a full study about the r-order-and-s-degree divergence measure.
The chi-square divergence measure or Pearson χ 2 measure [25] belongs to a family of measures called squared- L 2 distance measures, defined as
$$\chi^2(f_1 \,\|\, f_2) = E_{f_1}\!\left[\frac{\left(f_1(X) - f_2(X)\right)^2}{f_1(X)\, f_2(X)}\right] = E_{f_1}\!\left[\frac{f_1(X)}{f_2(X)}\right] - 1.$$
Note that $\chi^2(f_1 \,\|\, f_2) = K_2^2(f_1 \,\|\, f_2) = {}^1K_2(f_1 \,\|\, f_2)$.
The symmetric version of chi-square divergence is given in [23,26].
Hellinger [27] defined the Hellinger divergence measure. This measure belongs to the family of squared-chord distance measures, defined as
$$H(f_1, f_2) = \frac{1}{2}\, E_{f_1}\!\left[\frac{\left(\sqrt{f_1(X)} - \sqrt{f_2(X)}\right)^2}{f_1(X)}\right] = \frac{1}{2}\, E_{f_1}\!\left[\left(1 - \sqrt{\frac{f_2(X)}{f_1(X)}}\right)^2\right].$$
Note that $H(f_1, f_2) = \frac{1}{2} K_{1/2}^{1/2}(f_1 \,\|\, f_2) = \frac{1}{2}\, {}^1K_{1/2}(f_1 \,\|\, f_2)$.
González-Castro et al. [28] estimated the prior probability that minimizes the divergence by using the Hellinger measure to measure the disparity between the test data distribution and the validation distributions generated in a fully controlled manner. Recently, Dhumras et al. [29] proposed a new kind of Hellinger measure for a single-valued neutrosophic hypersoft set and applied it to dealing with the symptomatic detection of COVID-19 data.
Bhattacharyya [30] defined the Bhattacharyya measure. This measure belongs to the family of squared-chord distance measures, defined as
$$B(f_1, f_2) = E_{f_1}\!\left[\sqrt{\frac{f_2(X)}{f_1(X)}}\right].$$
Note that
$$B(f_1, f_2) = 1 - H(f_1, f_2).$$
The Bhattacharyya measure is closely related to the Hellinger measure defined in (7).
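A small Monte Carlo check (not from the paper) of the relation in (9) for two assumed normal densities: estimating (7) and (8) from the same sample shows that B(f_1, f_2) and 1 − H(f_1, f_2) agree.

```python
# Hedged illustration (not from the paper): Monte Carlo check of the
# relation B(f1,f2) = 1 - H(f1,f2) in (9) for two normal densities.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
mu1, s1, mu2, s2 = 0.0, 1.0, 1.0, 1.5        # illustrative parameters
x = rng.normal(mu1, s1, size=500_000)
ratio = np.exp(norm.logpdf(x, mu2, s2) - norm.logpdf(x, mu1, s1))   # f2/f1

H = 0.5 * np.mean((1.0 - np.sqrt(ratio))**2)   # Hellinger measure (7)
B = np.mean(np.sqrt(ratio))                    # Bhattacharyya measure (8)
print(H, B, 1.0 - H)                           # B should match 1 - H
```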
Aherne et al. [31] presented the original geometric interpretation of the Bhattacharyya measure and explained the use of the metric in the Bhattacharyya bound. Patra et al. [32] proposed an innovative method for determining the similarity of a pair of users in sparse data. The proposed method is used to determine how the two evaluated items are relevant to each other. Similar to both Bhattacharyya’s and Hellinger’s measures, Pianka’s measure [33] between two probability distributions has values between 0 and 1. Recently, Alhihi et al. [34] introduced Pianka’s overlap coefficient for two exponential populations. A complete study of several divergence measures can be found in [26,35,36].
From the divergence measures (1)–(8), one can conclude that:
  • Each of the divergence measures (1)–(6) is positive if $f_1 \neq f_2$ and equals zero if and only if $f_1 = f_2$.
  • From (7), $0 < H(f_1, f_2) < 1$ if $f_1 \neq f_2$, and $H(f_1, f_2) = 0$ if and only if $f_1 = f_2$.
  • From (8), $0 < B(f_1, f_2) < 1$ if $f_1 \neq f_2$, and $B(f_1, f_2) = 1$ if and only if $f_1 = f_2$.
For this paper, we evaluated the Bayesian and the Wald predictive distributions of a future statistic based on a past sufficient statistic from samples taken from the normal density. Several divergence measures were used to test the closeness between the classical distribution of the future statistic and the predictive distributions under the two approaches: Bayesian and Wald. We used the hypothesis-testing technique to compare the behavior of these divergence measures with respect to the closeness test between the two distributions. The main contribution of this work is to provide a comprehensive study of eight divergence measures for the normal distribution using both Bayesian and non-Bayesian (Wald) predictive approaches. To the best of our knowledge, such a study has not previously appeared in the literature.
The rest of this paper is organized as follows: In Section 2, we evaluate the Bayesian and the Wald predictive distributions of a future statistic V based on a past sufficient statistic W from the normal density. In Section 3, we find divergence measures between the classical distribution of the future statistic and the predictive distributions found in Section 2. In Section 4, we obtain percentiles of the divergence measures and test the closeness of the predictive distributions, under the two approaches (Bayesian and Wald), to the classical distribution. A real-life application is presented in Section 5. Finally, we provide a brief conclusion in Section 6.

2. Predictive Distributions

Let $X_1, \ldots, X_n$ be independent and identically distributed (i.i.d.) past random variables and let $Y_1, \ldots, Y_m$ be i.i.d. future random variables, where $X = (X_1, \ldots, X_n)$ and $Y = (Y_1, \ldots, Y_m)$ are independent. Consider a past sufficient statistic $W = r(X_1, \ldots, X_n)$ and a future statistic $V = u(Y_1, \ldots, Y_m)$. In this section, we construct Bayesian and Wald predictive-density functions of the future statistic V based on the past sufficient statistic W from the normal density.

2.1. Bayesian Predictive Distribution

Let X 1 , , X n be i . i . d . past random variables with probability-density function (pdf) f ( x θ ) and let Y 1 , , Y m be i . i . d . future random variables with pdf k ( y θ ) . Assume that θ is a random variable with prior density π ( θ ) for θ Θ . Consider a past sufficient statistic W = r ( X 1 , , X n ) with pdf s ( w θ ) and a future statistic V = u ( Y 1 , , Y m ) with pdf g ( v θ ) . The posterior-density function [37] of θ , given W = w , is defined as
$$p(\theta \mid w) = \frac{s(w \mid \theta)\, \pi(\theta)}{\int_{\Theta} s(w \mid \theta)\, \pi(\theta)\, d\theta}.$$
The following theorem provides a Bayesian method to construct predictive distributions.
Theorem 1 
([37]). Let $W = r(X_1, \ldots, X_n)$ be a past sufficient statistic and let θ be a random variable with prior density $\pi(\theta)$ for $\theta \in \Theta$. Assume that $V = u(Y_1, \ldots, Y_m)$ is a future statistic with pdf $g(v \mid \theta)$. The Bayesian predictive-density function of V, given W = w, is given by
$$h(v \mid w) = \int_{\Theta} g(v \mid \theta)\, p(\theta \mid w)\, d\theta,$$
 where p ( θ w ) is the posterior-density function of θ, given W = w , defined in (10).
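The following sketch (not part of the original article) illustrates Theorem 1 by building h(v|w) through numerical integration of (10) and (11) for a toy normal-mean setting that mirrors Section 2.1; the sample sizes, the observed w, and the N(0, 1) prior are assumptions made only for this example.

```python
# Hedged sketch (not from the paper) of Theorem 1: building the Bayesian
# predictive density h(v|w) by numerical integration of (10)-(11) for a
# toy case with W|theta ~ N(n*theta, n), V|theta ~ N(m*theta, m) and a
# N(0,1) prior on theta; the numerical values below are assumptions.
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

n, m, w = 10, 4, 12.0                                     # assumed sample sizes and observed w

def s(w_, th):  return norm.pdf(w_, n * th, np.sqrt(n))   # pdf of W given theta
def g(v, th):   return norm.pdf(v, m * th, np.sqrt(m))    # pdf of V given theta
def prior(th):  return norm.pdf(th, 0.0, 1.0)

# Posterior (10): p(theta|w) = s(w|theta)*prior(theta) / normalizing constant
Z, _ = quad(lambda th: s(w, th) * prior(th), -np.inf, np.inf)
post = lambda th: s(w, th) * prior(th) / Z

# Predictive (11): h(v|w) = integral of g(v|theta) * p(theta|w) d(theta)
h = lambda v: quad(lambda th: g(v, th) * post(th), -np.inf, np.inf)[0]

total, _ = quad(h, -np.inf, np.inf)
print("h(3.0|w) =", h(3.0), " integral of h over v =", total)   # integral should be ~1
```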
The next two theorems provide the Bayesian predictive distributions for some statistics. Theorem 2 below presents the Bayesian predictive distribution of the future statistic $V = \sum_{i=1}^{m} Y_i$ based on the past sufficient statistic $W = \sum_{i=1}^{n} X_i$ from random samples taken from the normal density; these statistics are chosen because they summarize the characteristics of the data set well and are closely related to the sample mean. Note that W is a complete sufficient statistic for the parameter under consideration. A similar result is obtained in Theorem 4 for the same statistics V and W, to obtain the Wald predictive distribution.
We use the following notations to represent some distributions that appear in the results of Theorems 2 and 3 below: the normal distribution with mean μ and variance σ 2 is denoted by N ( μ , σ 2 ) ; the gamma distribution with shape parameter α and scale parameter β is denoted by Gamma ( α , β ) ; and the generalized inverted beta distribution with shape parameters α , β , and p is denoted by InBe ( α , β , p ) .
Theorem 2. 
Let $X_1, \ldots, X_n \overset{i.i.d.}{\sim} N(\theta_1, 1/\theta_2)$ and $Y_1, \ldots, Y_m \overset{i.i.d.}{\sim} N(\delta\theta_1, 1/\theta_2)$, where δ is known. Assume that $X = (X_1, \ldots, X_n)$ and $Y = (Y_1, \ldots, Y_m)$ are independent and that $V = \sum_{i=1}^{m} Y_i$ and $W = \sum_{i=1}^{n} X_i$. If $\theta_2$ is known and $\theta_1$ is unknown with prior distribution $N(a, 1/b)$, where a and b are assumed to be known, the Bayesian predictive distribution of V, given W = w, is
$$N\!\left(\frac{m\delta\,(ab + w\theta_2)}{b + n\theta_2},\; \frac{m\left(b + (n + m\delta^2)\theta_2\right)}{(b + n\theta_2)\,\theta_2}\right).$$
Proof. 
It is easy to see that the past sufficient statistic $W = \sum_{i=1}^{n} X_i$, given $\theta_1$, follows the normal distribution $N(n\theta_1, n/\theta_2)$ and that $V = \sum_{i=1}^{m} Y_i$, given $\theta_1$, follows the normal distribution $N(m\delta\theta_1, m/\theta_2)$. In other words, the classical distribution of the future statistic V, given $\theta_1$, is
$$g(v \mid \theta_1) = \sqrt{\frac{\theta_2}{2\pi m}}\, \exp\!\left[-\frac{\theta_2}{2m}\left(v - m\delta\theta_1\right)^2\right].$$
Now, using (10), the posterior-density function of θ 1 , given W = w , is equal to
$$p(\theta_1 \mid w) \propto \exp\!\left[-\frac{\theta_2}{2n}\left(w - n\theta_1\right)^2\right]\exp\!\left[-\frac{b}{2}\left(\theta_1 - a\right)^2\right] \propto \exp\!\left[-\frac{b + n\theta_2}{2}\left(\theta_1 - \frac{ab + w\theta_2}{b + n\theta_2}\right)^2\right].$$
Therefore, the posterior-density function of $\theta_1$, given W = w, is the normal distribution $N\!\left(\frac{ab + w\theta_2}{b + n\theta_2}, \frac{1}{b + n\theta_2}\right)$.
Applying Equation (11), the Bayesian predictive-density function of V, given W = w , can be derived as
$$h(v \mid w) = \int_{-\infty}^{\infty} \sqrt{\frac{\theta_2}{2\pi m}}\, \exp\!\left[-\frac{\theta_2}{2m}\left(v - m\delta\theta_1\right)^2\right] \sqrt{\frac{b + n\theta_2}{2\pi}}\, \exp\!\left[-\frac{b + n\theta_2}{2}\left(\theta_1 - \frac{ab + w\theta_2}{b + n\theta_2}\right)^2\right] d\theta_1.$$
By evaluating this integral, the Bayesian predictive-density function of V, given W = w , is
$$h(v \mid w) = \sqrt{\frac{(b + n\theta_2)\,\theta_2}{2\pi m\left(b + (n + m\delta^2)\theta_2\right)}}\, \exp\!\left[-\frac{(b + n\theta_2)\,\theta_2}{2m\left(b + (n + m\delta^2)\theta_2\right)}\left(v - \frac{\delta m (ab + w\theta_2)}{b + n\theta_2}\right)^2\right],$$
where $-\infty < v < \infty$ and $-\infty < w < \infty$, which represents the normal distribution $N\!\left(\frac{m\delta(ab + w\theta_2)}{b + n\theta_2}, \frac{m\left(b + (n + m\delta^2)\theta_2\right)}{(b + n\theta_2)\,\theta_2}\right)$. □
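As a hedged check of Theorem 2 (not part of the original article), one can simulate V given W = w by first drawing θ_1 from its posterior and then V given θ_1, and compare the resulting mean and variance with the stated normal predictive distribution; the numerical values of n, m, δ, a, b, θ_2, and w below are assumptions.

```python
# Hedged check (not from the paper) of Theorem 2: simulate V given W = w by
# drawing theta_1 from its posterior and then V | theta_1, and compare the
# simulated mean/variance with the stated normal predictive distribution.
import numpy as np

rng = np.random.default_rng(3)
n, m, delta, a, b, theta2, w = 18, 5, 0.1, 12.5, 5.0, 1.0, 45.0   # assumed values

post_mean = (a * b + w * theta2) / (b + n * theta2)     # posterior mean of theta_1
post_var = 1.0 / (b + n * theta2)                       # posterior variance of theta_1

theta1 = rng.normal(post_mean, np.sqrt(post_var), size=400_000)
V = rng.normal(m * delta * theta1, np.sqrt(m / theta2))  # V | theta_1 ~ N(m*delta*theta_1, m/theta_2)

pred_mean = m * delta * post_mean
pred_var = m * (b + (n + m * delta**2) * theta2) / ((b + n * theta2) * theta2)
print(V.mean(), pred_mean)   # simulated and stated predictive means should agree
print(V.var(), pred_var)     # simulated and stated predictive variances should agree
```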
The following theorem presents the Bayesian predictive distribution of the future statistic $V = \sum_{i=1}^{m}(Y_i - \theta_1)^2$ based on the past sufficient statistic $W = \sum_{i=1}^{n}(X_i - \theta_1)^2$ from random samples taken from the normal density. These statistics are chosen because they summarize the characteristics of the data set well: V and W are closely related to the variance of the data set, which is also commonly used to describe data. Note that a similar result is obtained in Theorem 5 for the same statistics V and W, to obtain the Wald predictive distribution.
Theorem 3. 
Let $X_1, \ldots, X_n \overset{i.i.d.}{\sim} N(\theta_1, 1/\theta_2)$ and $Y_1, \ldots, Y_m \overset{i.i.d.}{\sim} N(\theta_1, \delta/\theta_2)$, where δ is known. Assume that $X = (X_1, \ldots, X_n)$ and $Y = (Y_1, \ldots, Y_m)$ are independent and that $V = \sum_{i=1}^{m}(Y_i - \theta_1)^2$ and $W = \sum_{i=1}^{n}(X_i - \theta_1)^2$. If $\theta_1$ is known and $\theta_2$ is unknown with prior distribution $\mathrm{Gamma}(a, 1/b)$, where a and b are assumed to be known, the Bayesian predictive distribution of V, given W = w, is
$$\mathrm{InBe}\!\left(\frac{m}{2},\; \frac{n}{2} + a,\; \delta(w + 2b)\right).$$
Proof. 
It is easy to see that the past sufficient statistic $W = \sum_{i=1}^{n}(X_i - \theta_1)^2$, given $\theta_2$, follows the gamma distribution $\mathrm{Gamma}\!\left(\frac{n}{2}, \frac{2}{\theta_2}\right)$ and that $V = \sum_{i=1}^{m}(Y_i - \theta_1)^2$, given $\theta_2$, follows $\mathrm{Gamma}\!\left(\frac{m}{2}, \frac{2\delta}{\theta_2}\right)$. Using Equation (10), the posterior-density function of $\theta_2$, given W = w, is $\mathrm{Gamma}\!\left(\frac{n + 2a}{2}, \frac{2}{w + 2b}\right)$. Applying Equation (11), the Bayesian predictive-density function of V, given W = w, can be derived as
$$h(v \mid w) = \int_{0}^{\infty} \frac{v^{m/2 - 1}\left(\frac{\theta_2}{2\delta}\right)^{m/2} \exp\!\left(-\frac{\theta_2}{2\delta}\, v\right)}{\Gamma\!\left(\frac{m}{2}\right)} \cdot \frac{\theta_2^{(n + 2a)/2 - 1}\left(\frac{w + 2b}{2}\right)^{(n + 2a)/2} \exp\!\left(-\theta_2\,\frac{w + 2b}{2}\right)}{\Gamma\!\left(\frac{n + 2a}{2}\right)}\, d\theta_2 = \frac{v^{m/2 - 1}\left(\delta(w + 2b)\right)^{(n + 2a)/2}}{B\!\left(\frac{m}{2}, \frac{n + 2a}{2}\right)\left(v + \delta(w + 2b)\right)^{(m + n + 2a)/2}}, \quad \text{where } v > 0 \text{ and } w > 0.$$
As a result, the Bayesian predictive-density function of V, given W = w, follows the generalized inverted beta distribution $\mathrm{InBe}\!\left(\frac{m}{2}, \frac{n + 2a}{2}, \delta(w + 2b)\right)$. □
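Similarly, a hedged check of Theorem 3 (not from the paper): simulating θ_2 from its Gamma posterior and then V given θ_2 should reproduce the generalized inverted beta predictive law, which coincides with a beta-prime distribution scaled by δ(w + 2b); all numerical settings below are assumptions.

```python
# Hedged check (not from the paper) of Theorem 3: simulate V given W = w by
# drawing theta_2 from its Gamma posterior and then V | theta_2, and compare
# with the generalized inverted beta InBe(m/2, (n+2a)/2, delta*(w+2b)),
# which is a scaled beta-prime law. All numerical values are assumptions.
import numpy as np
from scipy.stats import betaprime

rng = np.random.default_rng(4)
n, m, delta, a, b, w = 18, 5, 0.5, 2.0, 1.0, 20.0

theta2 = rng.gamma(shape=(n + 2 * a) / 2, scale=2.0 / (w + 2 * b), size=200_000)
V = rng.gamma(shape=m / 2, scale=2.0 * delta / theta2)    # V | theta_2 ~ Gamma(m/2, 2*delta/theta_2)

# InBe(m/2, (n+2a)/2, p) has the same pdf as betaprime(m/2, (n+2a)/2) scaled by p
pred = betaprime(m / 2, (n + 2 * a) / 2, scale=delta * (w + 2 * b))

qs = [0.1, 0.5, 0.9]
print(np.quantile(V, qs))    # empirical quantiles of the simulated V
print(pred.ppf(qs))          # quantiles of the stated predictive law (should match)
```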

2.2. Wald Predictive Distribution

In this subsection, the Wald predictive-density function of the future statistic V based on the past sufficient statistic W from normal density is derived.
Let X 1 , , X n be i . i . d . past random variables with pdf f ( x ; θ ) and let Y 1 , , Y m be i . i . d . future random variables with pdf k ( y ; θ ) , where { θ Θ } is an unknown parameter and X = ( X 1 , , X n ) and Y = ( Y 1 , , Y m ) are independent. Consider a past sufficient statistic W = r ( X 1 , , X n ) with pdf s ( w ; θ ) and future statistic V = u ( Y 1 , , Y m ) with pdf g ( v ; θ ) . The Wald predictive-density function [6] of V, given W = w , is defined as
$$q(v \mid w) = g(v;\, \hat{\theta}_W),$$
where θ ^ W is the maximum likelihood estimator (MLE) of θ based on the distribution of the past sufficient statistic W, for which
$$\sup_{\theta} L(\theta) = L(\hat{\theta}_W), \quad \text{where } L(\theta) = s(w; \theta).$$
The next two theorems present the Wald predictive distribution for some statistics. Theorem 4 considers the future statistic $V = \sum_{i=1}^{m} Y_i$ and the past sufficient statistic $W = \sum_{i=1}^{n} X_i$, while Theorem 5 considers the future statistic $V = \sum_{i=1}^{m}(Y_i - \theta_1)^2$ and the past sufficient statistic $W = \sum_{i=1}^{n}(X_i - \theta_1)^2$.
Theorem 4. 
Let $X_1, \ldots, X_n \overset{i.i.d.}{\sim} N(\theta_1, 1/\theta_2)$ and $Y_1, \ldots, Y_m \overset{i.i.d.}{\sim} N(\delta\theta_1, 1/\theta_2)$, where δ is known. Assume that $X = (X_1, \ldots, X_n)$ and $Y = (Y_1, \ldots, Y_m)$ are independent and that $V = \sum_{i=1}^{m} Y_i$ and $W = \sum_{i=1}^{n} X_i$. If $\theta_1$ is unknown and $\theta_2$ is known, the Wald predictive distribution of V, given W = w, is
$$N\!\left(\frac{\delta m w}{n},\; \frac{m}{\theta_2}\right).$$
Proof. 
Using the fact that $W = \sum_{i=1}^{n} X_i$ follows $N\!\left(n\theta_1, \frac{n}{\theta_2}\right)$, we obtain
$$L(\theta_1) = s(w; \theta_1) = \sqrt{\frac{\theta_2}{2 n \pi}}\, \exp\!\left[-\frac{\theta_2}{2n}\left(w - n\theta_1\right)^2\right]; \qquad -\infty < \theta_1 < \infty,\; -\infty < w < \infty.$$
As a result, the MLE of $\theta_1$ is $\hat{\theta}_{1,W} = W/n$. Now, the distribution of $V = \sum_{i=1}^{m} Y_i$ is $N\!\left(m\delta\theta_1, \frac{m}{\theta_2}\right)$. By applying Equation (16), the Wald predictive distribution of V, given W = w, is equal to
$$q(v \mid w) = \sqrt{\frac{\theta_2}{2 m \pi}}\, \exp\!\left[-\frac{\theta_2}{2m}\left(v - \frac{\delta m w}{n}\right)^2\right]; \qquad -\infty < v < \infty,\; -\infty < w < \infty,$$
which represents the normal distribution $N\!\left(\frac{\delta m w}{n}, \frac{m}{\theta_2}\right)$ as required. □
Theorem 5. 
Let $X_1, \ldots, X_n \overset{i.i.d.}{\sim} N(\theta_1, 1/\theta_2)$ and $Y_1, \ldots, Y_m \overset{i.i.d.}{\sim} N(\theta_1, \delta/\theta_2)$, where δ is known. Assume that $X = (X_1, \ldots, X_n)$ and $Y = (Y_1, \ldots, Y_m)$ are independent and that $V = \sum_{i=1}^{m}(Y_i - \theta_1)^2$ and $W = \sum_{i=1}^{n}(X_i - \theta_1)^2$. If $\theta_1$ is known and $\theta_2$ is unknown, the Wald predictive distribution of V, given W = w, is
$$\mathrm{Gamma}\!\left(\frac{m}{2},\; \frac{2\delta w}{n}\right).$$
Proof. 
Using the fact that the past sufficient statistic $W = \sum_{i=1}^{n}(X_i - \theta_1)^2$ follows $\mathrm{Gamma}\!\left(\frac{n}{2}, \frac{2}{\theta_2}\right)$, we obtain
$$L(\theta_2) = s(w; \theta_2) = \frac{w^{\frac{n}{2} - 1}\left(\frac{\theta_2}{2}\right)^{\frac{n}{2}} e^{-\frac{\theta_2}{2} w}}{\Gamma\!\left(\frac{n}{2}\right)}; \qquad \theta_2 > 0,\; w > 0.$$
As a result, the MLE of $\theta_2$ is $\hat{\theta}_{2,W} = n/W$. Now, the distribution of $V = \sum_{i=1}^{m}(Y_i - \theta_1)^2$ follows $\mathrm{Gamma}\!\left(\frac{m}{2}, \frac{2\delta}{\theta_2}\right)$. By applying (16), we obtain the result that the Wald predictive distribution of V, given W = w, follows $\mathrm{Gamma}\!\left(\frac{m}{2}, \frac{2\delta w}{n}\right)$ as required. □
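The following sketch (not part of the original article) illustrates Theorems 4 and 5 by forming the Wald plug-in predictive distributions from a simulated past sample; the parameter values are assumptions chosen only for the example.

```python
# Hedged illustration (not from the paper) of Theorems 4 and 5: forming the
# Wald (plug-in) predictive distributions from simulated past data. The
# parameter values below are assumptions chosen only for the example.
import numpy as np
from scipy.stats import norm, gamma

rng = np.random.default_rng(5)
n, m, delta, theta1, theta2 = 18, 5, 0.1, 2.5, 1.0
x = rng.normal(theta1, np.sqrt(1.0 / theta2), size=n)          # past sample

# Theorem 4: W = sum(X_i), MLE of theta_1 is w/n, predictive N(delta*m*w/n, m/theta_2)
w_sum = x.sum()
pred4 = norm(delta * m * w_sum / n, np.sqrt(m / theta2))
print("Theorem 4 predictive mean/sd:", pred4.mean(), pred4.std())

# Theorem 5: W = sum((X_i - theta_1)^2), MLE of theta_2 is n/w,
# predictive Gamma(m/2, scale = 2*delta*w/n)
w_sq = np.sum((x - theta1)**2)
pred5 = gamma(a=m / 2, scale=2 * delta * w_sq / n)
print("Theorem 5 predictive mean/var:", pred5.mean(), pred5.var())
```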

3. Divergence Measures between the Classical Distribution of Future Statistic and Predictive Distributions

In this section, several divergence measures between the classical distribution of the future statistic V and the predictive distribution of V, given the past sufficient statistic W, are found for both prediction cases—Bayesian and Wald—by considering the future statistic $V = \sum_{i=1}^{m} Y_i$ and the past sufficient statistic $W = \sum_{i=1}^{n} X_i$, where the Bayesian and Wald predictive distributions are found in Theorems 2 and 4, respectively. The other case, where $V = \sum_{i=1}^{m}(Y_i - \theta_1)^2$ and $W = \sum_{i=1}^{n}(X_i - \theta_1)^2$, can be considered as a possible idea for future research.
The next theorem gives formulas for the following divergence measures between the classical distribution of the future statistic g ( v θ 1 ) in Equation (13) and the Bayesian predictive distribution h ( v w ) in Equation (14): the Kullback–Leibler measure ( K ( g h ) ), the Jeffreys measure ( J ( g , h ) ), the Rényi measure ( K r 1 ( g h ) ), the relative-information-of-type-r measure (   1 K r ( g h ) ), the r-order-and-s-degree measure ( K r s ( g h ) ), the chi-square measure ( χ 2 ( g h ) ), the Hellinger measure ( H ( g , h ) ), and the Bhattacharyya measure ( B ( g , h ) ).
For each of the aforementioned measures, we also find its average under the prior distribution of θ 1. For instance, the Kullback–Leibler measure and its average under the prior distribution of θ 1 between the two densities g and h, respectively, are presented as
$$K(g \,\|\, h) = E_g\!\left[\log \frac{g(V \mid \theta_1)}{h(V \mid w)}\right] \quad \text{and} \quad A K(g \,\|\, h) = E_{\theta_1}\!\left[E_g\!\left[\log \frac{g(V \mid \theta_1)}{h(V \mid w)}\right]\right].$$
Similarly, the average of the divergence measures under the prior distribution of θ 1 between two probability density functions g and h for each of the remaining divergence measures are denoted as A K ( g h ) , A J ( g , h ) , A K r 1 ( g h ) , A 1 K r ( g h ) , A K r s ( g h ) , A K 1 s ( g h ) , A χ 2 ( g h ) , and A H ( g , h ) .
Theorem 6. 
Under the assumption of Theorem 2, let g ( v | θ 1 ) be the classical distribution of the future statistic and let h ( v | w ) be the Bayesian predictive distribution of V, given W = w , from the normal density. The value of the following divergence measures and their average under the prior distribution of θ 1 between g ( v | θ 1 ) and h ( v | w ) —(1) the Kullback–Leibler measure, (2) the Jeffreys measure, (3) the Rényi measure, (4) the relative-information-of-type-r measure, (5) the r-order-and-s-degree measure, (6) the chi-square measure, (7) the Hellinger measure, and (8) the Bhattacharyya measure—are equal to, respectively:
(1) 
K ( g h ) = 1 2 log b + ( n + m δ 2 ) θ 2 b + n θ 2 + m δ 2 ( b + n θ 2 ) θ 2 2 ( b + ( n + m δ 2 ) θ 2 ) θ 1 a b + w θ 2 b + n θ 2 2 1 b + n θ 2 , and
A K ( g h ) = 1 2 log b + ( n + m δ 2 ) θ 2 b + n θ 2 + m δ 2 ( b + n θ 2 ) θ 2 2 ( b + ( n + m δ 2 ) θ 2 ) a a b + w θ 2 b + n θ 2 2 + n θ 2 b + n θ 2 ;
(2) 
J ( g , h ) = m δ 2 θ 2 2 ( b + ( n + m δ 2 ) θ 2 ) m δ 2 θ 2 b + n θ 2 + ( m δ 2 θ 2 + 2 ( b + n θ 2 ) ) ( θ 1 a b + w θ 2 b + n θ 2 ) 2 , and
A J ( g , h ) = m δ 2 θ 2 2 ( b + ( n + m δ 2 ) θ 2 ) m δ 2 θ 2 b + n θ 2 + ( m δ 2 θ 2 + 2 ( b + n θ 2 ) ) ( a a b + w θ 2 b + n θ 2 ) 2 + 1 b ;
(3) 
K r 1 ( g h ) = 1 2 ( r 1 ) log b + ( n + m δ 2 ) θ 2 b + ( n + m r δ 2 ) θ 2 + 1 2 log b + ( n + m δ 2 ) θ 2 b + n θ 2 + m r ( b + n θ 2 ) δ 2 θ 2 2 ( b + ( n + m r δ 2 ) θ 2 ) θ 1 a b + w θ 2 b + n θ 2 2 , where r 1 , r > 0 , and
A K r 1 ( g h ) = 1 2 ( r 1 ) log b ( b + ( n + m r δ 2 ) θ 2 ) c + b m r ( w a n ) 2 δ 2 θ 2 3 2 ( b + n θ 2 ) c , where r 1 , r > 0 , c > 0 ;
(4) 
  1 K r ( g h ) = 1 r 1 1 + b + ( n + m δ 2 ) θ 2 b + n θ 2 r 1 2 b + ( n + m δ 2 ) θ 2 b + ( n + m r δ 2 ) θ 2 × e x p m r ( r 1 ) ( b + n θ 2 ) δ 2 θ 2 2 ( b + ( n + m r δ 2 ) θ 2 ) θ 1 a b + w θ 2 b + n θ 2 2 , where r 1 , r > 0 , and
A 1 K r ( g h ) = 1 r 1 1 + b ( b + ( n + m r δ 2 ) θ 2 ) c e x p b m r ( r 1 ) ( w a n ) 2 δ 2 θ 2 3 2 ( b + n θ 2 ) c ,  where  r 1 , r > 0 , c > 0 ;
(5) 
K r s ( g h = 1 s 1 1 + b + ( n + m δ 2 ) θ 2 b + n θ 2 s 1 2 b + ( n + m δ 2 ) θ 2 b + ( n + m r δ 2 ) θ 2 s 1 2 ( r 1 ) × e x p m r ( s 1 ) ( b + n θ 2 ) δ 2 θ 2 2 ( b + ( n + m r δ 2 ) θ 2 ) θ 1 a b + w θ 2 b + n θ 2 2 , where r , s 1 , r > 0 , and
A K r s ( g h ) = 1 s 1 1 + b ( b + ( n + m r δ 2 ) θ 2 ) c s 1 2 ( r 1 ) e x p b m r ( s 1 ) ( w a n ) 2 δ 2 θ 2 3 2 ( b + n θ 2 ) c ,  where  r , s 1 , r > 0 , c > 0 ;
(6) 
χ 2 ( g h ) = 1 + b + ( n + m δ 2 ) θ 2 b + n θ 2 b + ( n + m δ 2 ) θ 2 b + ( n + 2 m δ 2 ) θ 2 e x p m r ( b + n θ 2 ) δ 2 θ 2 2 ( b + ( n + m r δ 2 ) θ 2 ) θ 1 a b + w θ 2 b + n θ 2 2 , and
A χ 2 ( g h ) = 1 + b ( b + ( n + 2 m δ 2 ) θ 2 ) b 2 + b n θ 2 2 m n δ 2 θ 2 2 e b m ( w a n ) 2 δ 2 θ 2 3 ( b + n θ 2 ) ( b 2 + b n θ 2 2 m n δ 2 θ 2 2 ) , where 0 < δ < b θ 2 b + n θ 2 2 n m b ;
(7) 
H ( g , h ) = 1 b + n θ 2 b + ( n + m δ 2 ) θ 2 4 2 b + 2 ( n + m δ 2 ) θ 2 2 b + ( 2 n + m δ 2 ) θ 2 e x p m r ( b + n θ 2 ) δ 2 θ 2 4 ( b + ( n + m r δ 2 ) θ 2 ) θ 1 a b + w θ 2 b + n θ 2 2 , and
A H ( g , h ) = 1 4 b ( b + ( n + m r δ 2 ) θ 2 ) 4 b 2 + b ( 4 n + 3 m δ 2 ) θ 2 + m n δ 2 θ 2 2 e x p b m ( w a n ) 2 δ 2 θ 2 3 8 ( b + n θ 2 ) ( 4 b 2 + b ( 4 n + 3 m δ 2 ) θ 2 + m n δ 2 θ 2 2 ) ,  where  r 1 , r > 0 ;
(8) 
B ( g , h ) = b + n θ 2 b + ( n + m δ 2 ) θ 2 4 2 b + 2 ( n + m δ 2 ) θ 2 2 b + ( 2 n + m δ 2 ) θ 2 e x p m r ( b + n θ 2 ) δ 2 θ 2 4 ( b + ( n + m r δ 2 ) θ 2 ) θ 1 a b + w θ 2 b + n θ 2 2 , and
A B ( g , h ) = 4 b ( b + ( n + m r δ 2 ) θ 2 ) 4 b 2 + b ( 4 n + 3 m δ 2 ) θ 2 + m n δ 2 θ 2 2 e x p b m ( w a n ) 2 δ 2 θ 2 3 8 ( b + n θ 2 ) ( 4 b 2 + b ( 4 n + 3 m δ 2 ) θ 2 + m n δ 2 θ 2 2 ) ,  where r 1 , r > 0 ,
where c = b 2 + b ( n m r ( r 2 ) δ 2 ) θ 2 m n r ( r 1 ) δ 2 θ 2 2 .
Proof. 
By using (13) and the result of Theorem 2, the following expectations can be calculated easily and will be used in the proof:
E g V m δ θ 1 2 = m θ 2 ,
E g V m δ ( a b + w θ 2 ) b + n θ 2 2 = m θ 2 + m δ θ 1 m δ ( a b + w θ 2 ) b + n θ 2 2 ,
E h V m δ ( a b + w θ 2 ) b + n θ 2 2 = m ( b + ( n + m δ 2 ) θ 2 ( b + n θ 2 ) θ 2 ,
and
E h V m δ θ 1 2 = m ( b + ( n + m δ 2 ) θ 2 ( b + n θ 2 ) θ 2 + m δ ( a b + w θ 2 ) b + n θ 2 m δ θ 1 2 .
In addition, using g ( v θ 1 ) in (13) and h ( v w ) in (14), the following ratios (after simplification) will be used in the proof:
g ( v θ 1 ) h ( v w ) = b + ( n + m δ 2 ) θ 2 b + n θ 2 e x p θ 2 2 m v m δ θ 1 2 + ( b + n θ 2 ) θ 2 2 m ( b + ( n + m δ 2 ) θ 2 ) v m δ ( a b + w θ 2 ) b + n θ 2 2 ,
h ( v w ) g ( v θ 1 ) = b + n θ 2 b + ( n + m δ 2 ) θ 2 e x p θ 2 2 m v m δ θ 1 2 ( b + n θ 2 ) θ 2 2 m ( b + ( n + m δ 2 ) θ 2 ) v m δ ( a b + w θ 2 ) b + n θ 2 2 .
(1)
By using Equation (1), the Kullback–Leibler divergence between g ( v θ 1 ) and h ( v w ) equals
K ( g h ) = E g log g ( V θ 1 ) h ( V w ) = log b + ( n + m δ 2 ) θ 2 b + n θ 2 θ 2 2 m E g V m δ θ 1 2 + ( b + n θ 2 ) θ 2 2 m ( b + ( n + m δ 2 ) θ 2 ) E g V m δ ( a b + w θ 2 ) b + n θ 2 2 , by using ( 20 ) and ( 21 ) = 1 2 log b + ( n + m δ 2 ) θ 2 b + n θ 2 1 2 + ( b + n θ 2 ) θ 2 2 m ( b + ( n + m δ 2 ) θ 2 ) m θ 2 + m δ θ 1 m δ ( a b + w θ 2 ) b + n θ 2 2 = 1 2 log b + ( n + m δ 2 ) θ 2 b + n θ 2 + m δ 2 ( b + n θ 2 ) θ 2 2 ( b + ( n + m δ 2 ) θ 2 θ 1 a b + w θ 2 b + n θ 2 2 1 b + n θ 2 .
Now, to find the average of the Kullback–Leibler measure under the prior distribution of θ 1 between g ( v θ 1 ) and h ( v w ) , we use the assumption θ 1 N ( a , 1 / b ) . As a result,
E θ 1 θ 1 a b + w θ 2 b + n θ 2 2 = 1 b + a a b + w θ 2 b + n θ 2 2 . Thus,
A K ( g h ) = E θ 1 K ( g h ) = 1 2 log b + ( n + m δ 2 ) θ 2 b + n θ 2 + m δ 2 ( b + n θ 2 ) θ 2 2 ( b + ( n + m δ 2 ) θ 2 ) E θ 1 θ 1 a b + w θ 2 b + n θ 2 2 1 b + n θ 2 = 1 2 log b + ( n + m δ 2 ) θ 2 b + n θ 2 + m δ 2 ( b + n θ 2 ) θ 2 2 ( b + ( n + m δ 2 ) θ 2 ) a a b + w θ 2 b + n θ 2 2 + n θ 2 b + n θ 2 .
(2)
First, we need to find K ( h g ) . By applying definition (1) and using Equations (22) and (23) we obtain
K ( h g ) = E h log h ( V w ) g ( V θ 1 ) = log b + ( n + m δ 2 ) θ 2 b + n θ 2 + θ 2 2 m E h V m δ θ 1 2 ( b + n θ 2 ) θ 2 2 m ( b + ( n + m δ 2 ) θ 2 ) ×   E h V m δ ( a b + w θ 2 ) b + n θ 2 2 = 1 2 log b + ( n + m δ 2 ) θ 2 b + n θ 2 + m θ 2 δ 2 2 ( θ 1 a b + w θ 2 b + n θ 2 ) 2 + 1 b + n θ 2 .
From (2) and K ( h g ) calculated in part (1) above, the Jeffreys divergence and its average under the prior distribution of θ 1 between g ( v θ 1 ) and h ( v w ) are, respectively, provided by
J ( g , h ) = K ( g h ) + K ( h g ) = m δ 2 ( b + n θ 2 ) θ 2 2 ( b + ( n + m δ 2 ) θ 2 ) ( θ 1 a b + w θ 2 b + n θ 2 ) 2 1 b + n θ 2 + m θ 2 δ 2 2 ( θ 1 a b + w θ 2 b + n θ 2 ) 2 + 1 b + n θ 2 = m δ 2 θ 2 2 ( b + ( n + m δ 2 ) θ 2 ) m δ 2 θ 2 b + n θ 2 + ( m δ 2 θ 2 + 2 ( b + n θ 2 ) ) ( θ 1 a b + w θ 2 b + n θ 2 ) 2 .
Now, to find the average of the Jeffreys measure under the prior distribution of θ 1 between g ( v θ 1 ) and h ( v w ) , we use the assumption θ 1 N ( a , 1 / b ) . As a result, E θ 1 θ 1 a b + w θ 2 b + n θ 2 2 = 1 b + a a b + w θ 2 b + n θ 2 2 . Thus,
A J ( g , h ) = E θ 1 ( J ( g , h ) ) = m δ 2 θ 2 2 ( b + ( n + m δ 2 ) θ 2 ) m δ 2 θ 2 b + n θ 2 + ( m δ 2 θ 2 + 2 ( b + n θ 2 ) ) E θ 1 ( θ 1 a b + w θ 2 b + n θ 2 ) 2 = m δ 2 θ 2 2 ( b + ( n + m δ 2 ) θ 2 ) m δ 2 θ 2 b + n θ 2 + ( m δ 2 θ 2 + 2 ( b + n θ 2 ) ) ( ( a a b + w θ 2 b + n θ 2 ) 2 + 1 b ) .
(3)
First, we find
E g g ( V θ 1 ) h ( V w ) r 1 = b + ( n + m δ 2 ) θ 2 b + n θ 2 r 1 2 ×   E g e x p ( r 1 ) θ 2 2 m V m δ θ 1 2 + ( r 1 ) ( b + n θ 2 ) θ 2 2 m ( b + ( n + m δ 2 ) θ 2 ) V m δ ( a b + w θ 2 ) b + n θ 2 2 = b + ( n + m δ 2 ) θ 2 b + n θ 2 r 1 2 b + ( n + m δ 2 ) θ 2 b + ( n + m r δ 2 ) θ 2 ×   e x p m r ( r 1 ) ( b + n θ 2 ) δ 2 θ 2 2 ( b + ( n + m r δ 2 ) θ 2 ) θ 1 a b + w θ 2 b + n θ 2 2 .
The average of E g g ( V θ 1 ) h ( V w ) r 1 under the prior distribution of θ 1 is
E θ 1 E g g ( V θ 1 ) h ( V w ) r 1 = b + ( n + m δ 2 ) θ 2 b + n θ 2 r 1 2 b + ( n + m δ 2 ) θ 2 b + ( n + m r δ 2 ) θ 2 × e x p m r ( r 1 ) ( b + n θ 2 ) δ 2 θ 2 2 ( b + ( n + m r δ 2 ) θ 2 ) θ 1 a b + w θ 2 b + n θ 2 2 b 2 π e x p b 2 θ 1 a 2 d θ 1 = b ( b + ( n + m r δ 2 ) θ 2 ) c e x p b m r ( r 1 ) ( w a n ) 2 δ 2 θ 2 3 2 ( b + n θ 2 ) c .
Note that E θ E g g ( V θ ) h ( V w ) r 1 converges if c > 0 ; that is,
b 2 + b ( n m r ( r 2 ) δ 2 ) θ 2 m n r ( r 1 ) δ 2 θ 2 2 > 0 .
From (3) we obtain the Rényi divergence measure and its average under the prior distribution of θ 1 between g ( v θ 1 ) and h ( v w ) as required.
(4)
From the results of part (3) above and using Equation (4) we obtain the relative-information-of-type-r measure and its average under the prior distribution of θ 1 between g ( v θ 1 ) and h ( v w ) as required.
(5)
From the results of part (3) above and using Equation (5) we obtain the r-order-and-s-degree divergence measure and its average under the prior distribution of θ 1 between g ( v θ 1 ) and h ( v w ) as required.
(6)
Substituting r = 2 in the relative-information-of-type-r measure found in part (4) above, we obtain the chi-square divergence and its average under the prior distribution of θ 1 between g ( v θ 1 ) and h ( v w ) as required. Note that A χ 2 ( g h ) converges if
b 2 + b n θ 2 2 m n δ 2 θ 2 2 > 0 ,
which implies 0 < δ < b θ 2 b + n θ 2 2 n m b .
(7)
Substituting r = 1 2 in the relative-information-of-type-r measure found in part (4) above and multiplying by 1 2 , we obtain the Hellinger measure and its average under the prior distribution of θ 1 between g ( v θ 1 ) and h ( v w ) as required. Note that A H ( g , h ) converge if
4 b 2 + b ( 4 n + 3 m δ 2 ) θ 2 + m n δ 2 θ 2 2 > 0 ,
which implies that δ > 0 .
(8)
Using the relationship of the Bhattacharyya and Hellinger measures described in Equation (9), we obtain the result of the Bhattacharyya measure and its average under the prior distribution of θ 1 between g ( v θ 1 ) and h ( v w ) as required.
In the same way, Theorem 7 below provides similar results for the same divergence measures considered in Theorem 6, but this time between the classical distribution of the future statistic g ( v ; θ 1 ) in Equation (13) and the Wald predictive distribution q ( v w ) in Equation (18).
Theorem 7. 
Under the assumption of Theorem 4, let g ( v ; θ 1 ) be the classical distribution of the future statistic and q ( v w ) be the Wald predictive distribution of V, given W = w , from the normal density. The value of the following divergence measures and their average under the distribution of W between g ( v ; θ 1 ) and q ( v w ) —(1) the Kullback–Leibler measure, (2) the Jeffreys measure, (3) the Rényi measure, (4) the relative-information-of-type-r measure, (5) the r-order-and-s-degree-measure, (6) the chi-square measure, (7) the Hellinger measure, and (8) the Bhattacharyya measure—are equal to, respectively:
(1)
$$K(g \,\|\, q) = \frac{m\delta^2\theta_2}{2n^2}\left(w - n\theta_1\right)^2 \quad \text{and} \quad A K(g \,\|\, q) = \frac{m\delta^2}{2n};$$
(2)
$$J(g, q) = \frac{m\delta^2\theta_2}{n^2}\left(w - n\theta_1\right)^2 \quad \text{and} \quad A J(g, q) = \frac{m\delta^2}{n};$$
(3)
$$K_r^1(g \,\|\, q) = \frac{m r \delta^2\theta_2\left(w - n\theta_1\right)^2}{2n^2},\; r \neq 1,\; r > 0,\; -\infty < w < \infty, \quad \text{and} \quad A K_r^1(g \,\|\, q) = \frac{1}{2(r-1)}\log\frac{n}{n - m r (r-1)\delta^2},\; r \neq 1,\; 0 < r < \frac{1}{2} + \sqrt{\frac{1}{4} + \frac{n}{m\delta^2}};$$
(4)
$${}^1K_r(g \,\|\, q) = \frac{1}{r-1}\left[\exp\!\left(\frac{m r (r-1)\delta^2\theta_2\left(w - n\theta_1\right)^2}{2n^2}\right) - 1\right],\; r \neq 1,\; r > 0,\; -\infty < w < \infty, \quad \text{and} \quad A\,{}^1K_r(g \,\|\, q) = \frac{1}{r-1}\left[\sqrt{\frac{n}{n - m r (r-1)\delta^2}} - 1\right],\; r \neq 1,\; 0 < r < \frac{1}{2} + \sqrt{\frac{1}{4} + \frac{n}{m\delta^2}};$$
(5)
$$K_r^s(g \,\|\, q) = \frac{1}{s-1}\left[\exp\!\left(\frac{m r (s-1)\delta^2\theta_2\left(w - n\theta_1\right)^2}{2n^2}\right) - 1\right],\; r, s \neq 1,\; r > 0,\; -\infty < w < \infty, \quad \text{and} \quad A K_r^s(g \,\|\, q) = \frac{1}{s-1}\left[\left(\frac{n}{n - m r (r-1)\delta^2}\right)^{\frac{s-1}{2(r-1)}} - 1\right],\; r \neq 1,\; 0 < r < \frac{1}{2} + \sqrt{\frac{1}{4} + \frac{n}{m\delta^2}};$$
(6)
$$\chi^2(g \,\|\, q) = \exp\!\left(\frac{m\delta^2\theta_2\left(w - n\theta_1\right)^2}{n^2}\right) - 1,\; -\infty < w < \infty, \quad \text{and} \quad A\chi^2(g \,\|\, q) = \sqrt{\frac{n}{n - 2m\delta^2}} - 1,\; 0 < \delta < \sqrt{\frac{n}{2m}};$$
(7)
$$H(g, q) = 1 - \exp\!\left(-\frac{m\delta^2\theta_2\left(w - n\theta_1\right)^2}{8n^2}\right),\; -\infty < w < \infty, \quad \text{and} \quad A H(g, q) = 1 - 2\sqrt{\frac{n}{4n + m\delta^2}};$$
(8)
$$B(g, q) = \exp\!\left(-\frac{m\delta^2\theta_2\left(w - n\theta_1\right)^2}{8n^2}\right),\; -\infty < w < \infty, \quad \text{and} \quad A B(g, q) = 2\sqrt{\frac{n}{4n + m\delta^2}}.$$
Proof. 
As the distribution of $V = \sum_{i=1}^{m} Y_i$ is $N\!\left(m\delta\theta_1, \frac{m}{\theta_2}\right)$ and by Theorem 4 the Wald predictive distribution of V, given W = w, is $N\!\left(\frac{m\delta w}{n}, \frac{m}{\theta_2}\right)$, the following ratios (after simplification) will be used in the proof:
$$\frac{g(v; \theta_1)}{q(v \mid w)} = \frac{\sqrt{\frac{\theta_2}{2\pi m}}\, \exp\!\left[-\frac{\theta_2}{2m}\left(v - m\delta\theta_1\right)^2\right]}{\sqrt{\frac{\theta_2}{2\pi m}}\, \exp\!\left[-\frac{\theta_2}{2m}\left(v - \frac{m\delta w}{n}\right)^2\right]} = \exp\!\left[-\frac{\theta_2}{2m}\left(\left(v - m\delta\theta_1\right)^2 - \left(v - \frac{m\delta w}{n}\right)^2\right)\right] \quad \text{and} \quad \frac{q(v \mid w)}{g(v; \theta_1)} = \exp\!\left[\frac{\theta_2}{2m}\left(\left(v - m\delta\theta_1\right)^2 - \left(v - \frac{m\delta w}{n}\right)^2\right)\right].$$
(1)
The Kullback–Leibler measure between g ( v ; θ ) and q ( v w ) , defined in Equation (1), is equal to
$$K(g \,\|\, q) = E_g\!\left[\log \frac{g(V; \theta_1)}{q(V \mid w)}\right] = -\frac{\theta_2}{2m}\, E_g\!\left[\left(V - m\delta\theta_1\right)^2 - \left(V - \frac{m\delta w}{n}\right)^2 \,\middle|\, W = w\right] = \frac{m\delta^2\theta_2}{2n^2}\left(w - n\theta_1\right)^2.$$
Now, the average of the Kullback–Leibler measure under the distribution of W between g ( v ; θ 1 ) and q ( v w ) is equal to
$$A K(g \,\|\, q) = E_W\!\left[E_g\!\left[\log \frac{g(V; \theta_1)}{q(V \mid W)}\right]\right] = \frac{m\delta^2\theta_2}{2n^2}\, E_W\!\left[\left(W - n\theta_1\right)^2\right] = \frac{m\delta^2\theta_2}{2n^2}\cdot\frac{n}{\theta_2} = \frac{m\delta^2}{2n}.$$
(2)
First, we need to find K ( q g ) . By applying definition (1), we obtain
$$K(q \,\|\, g) = E_q\!\left[\log \frac{q(V \mid w)}{g(V; \theta_1)}\right] = \frac{\theta_2}{2m}\, E_q\!\left[\left(V - m\delta\theta_1\right)^2 - \left(V - \frac{m\delta w}{n}\right)^2 \,\middle|\, W = w\right] = \frac{m\delta^2\theta_2}{2n^2}\left(w - n\theta_1\right)^2.$$
From the definition in (2) and K ( g q ) calculated in part (1) above, the Jeffreys divergence between g ( v ; θ ) and q ( v w ) is given by
$$J(g, q) = K(g \,\|\, q) + K(q \,\|\, g) = \frac{m\delta^2\theta_2}{2n^2}\left(w - n\theta_1\right)^2 + \frac{m\delta^2\theta_2}{2n^2}\left(w - n\theta_1\right)^2 = \frac{m\delta^2\theta_2}{n^2}\left(w - n\theta_1\right)^2.$$
Now, to find the average of the Jeffreys measure under the distribution of W between g ( v ; θ 1 ) and q ( v w ) , first we need A K ( q g ) = m δ 2 2 n , and by using A K ( g q ) found in part (1) above, we obtain
$$A J(g, q) = A K(g \,\|\, q) + A K(q \,\|\, g) = \frac{m\delta^2}{2n} + \frac{m\delta^2}{2n} = \frac{m\delta^2}{n}.$$
(3)
First, we find
$$E_g\!\left[\left(\frac{g(V; \theta_1)}{q(V \mid w)}\right)^{r-1}\right] = E_g\!\left[\exp\!\left(-\frac{(r-1)\,\theta_2}{2m}\left(\left(V - m\delta\theta_1\right)^2 - \left(V - \frac{m\delta w}{n}\right)^2\right)\right) \,\middle|\, W = w\right] = \exp\!\left(\frac{m r (r-1)\delta^2\theta_2\left(w - n\theta_1\right)^2}{2n^2}\right).$$
The average of E g g ( V ; θ 1 ) q ( V w ) r 1 under the distribution of w is
$$E_W\!\left[E_g\!\left[\left(\frac{g(V; \theta_1)}{q(V \mid W)}\right)^{r-1}\right]\right] = \int_{-\infty}^{\infty} \exp\!\left(\frac{m r (r-1)\delta^2\theta_2\left(w - n\theta_1\right)^2}{2n^2}\right) \sqrt{\frac{\theta_2}{2\pi n}}\, e^{-\frac{\theta_2}{2n}\left(w - n\theta_1\right)^2}\, dw = \sqrt{\frac{n}{n - m r (r-1)\delta^2}}.$$
Note that E W E g g ( V ; θ 1 ) q ( V w ) r 1 converges if n m r ( r 1 ) δ 2 > 0 ; that is,
$$0 < r < \frac{1}{2} + \sqrt{\frac{1}{4} + \frac{n}{m\delta^2}}.$$
From the definition in Equation (3), we obtain the Rényi divergence measure and its average under the distribution of W between g ( v ; θ 1 ) and q ( v w ) as required.
(4)
From the results of part (3) above, and using the definition in Equation (4), we obtain the relative-information-of-type-r measure and its average under the distribution of W between g ( v ; θ 1 ) and q ( v w ) as required.
(5)
From the results of part (3) above, and using Equation (5), we obtain the r-order-and-s-degree divergence measure and its average under the distribution of W between g ( v ; θ 1 ) and q ( v w ) as required.
(6)
Substituting r = 2 in the relative-information-of-type-r measure found in part (4) above, we obtain the chi-square divergence and its average under the distribution of W between g ( v ; θ 1 ) and q ( v w ) as required.
(7)
Substituting r = 1 2 in the relative-information-of-type-r measure found in part (4) above and then multiplying by 1 2 , we obtain the Hellinger measure and its average under the distribution of W between g ( v ; θ 1 ) and q ( v w ) as required.
(8)
Using the relationship of the Bhattacharyya and Hellinger measures described in Equation (9), we obtain the result of the Bhattacharyya measure and its average under the distribution of W between g ( v ; θ 1 ) and q ( v w ) as required.
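As a hedged numerical check of part (1) of Theorem 7 (not from the paper), the closed-form K(g‖q) agrees with the standard KL divergence between two equal-variance normals, and averaging over the distribution of W recovers mδ²/(2n); the parameter values below are assumptions.

```python
# Hedged check (not from the paper) of part (1) of Theorem 7: the KL
# divergence between g(v;theta_1) = N(m*delta*theta_1, m/theta_2) and the
# Wald predictive q(v|w) = N(m*delta*w/n, m/theta_2), and its average over
# the distribution of W. Parameter values are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(6)
n, m, delta, theta1, theta2 = 18, 5, 0.1, 2.5, 1.0

def kl_equal_var_normals(mu1, mu2, var):
    # KL(N(mu1,var) || N(mu2,var)) = (mu1-mu2)^2 / (2*var)
    return (mu1 - mu2)**2 / (2 * var)

w = 42.0                                                  # an illustrative observed w
K_formula = m * delta**2 * theta2 * (w - n * theta1)**2 / (2 * n**2)
K_direct = kl_equal_var_normals(m * delta * theta1, m * delta * w / n, m / theta2)
print(K_formula, K_direct)                                # the two values are identical

# Average over W ~ N(n*theta_1, n/theta_2); Theorem 7 gives AK = m*delta^2/(2n)
W = rng.normal(n * theta1, np.sqrt(n / theta2), size=500_000)
AK_mc = np.mean(m * delta**2 * theta2 * (W - n * theta1)**2 / (2 * n**2))
print(AK_mc, m * delta**2 / (2 * n))
```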
In the next section, we test the closeness between the classical distribution and the prediction distributions (Bayesian and Wald), using several divergence measures. In addition, we present a simulation study to compare the two prediction approaches based on the power of a test.

4. Simulation Study

In the previous section, we used several divergence measures to determine the distance between the classical distribution of a future statistic g and the predictive distribution h , using the Bayesian approach, and the predictive distribution q , using the Wald approach with random samples taken from the normal density. For this section, we used a simulation study to achieve two goals: Firstly, we tested the closeness between g and h, based on the first seven divergence measures found in Theorem 6, and the closeness between g and q, based on the first seven divergence measures found in Theorem 7, and we decided whether there was a difference in the behavior of these divergence measures with respect to the closeness test for each case of the two prediction approaches. Secondly, we decided which of the two prediction approaches was more appropriate in the current setting.
To achieve the first goal, we employed hypothesis testing to test the closeness between the classical distribution and the predictive distribution using the two prediction approaches. This technique consisted of two steps: first, the percentiles were simulated from the divergence measures to be used in making the decision based on the test criteria; next, the hypothesis testing was applied, to test the closeness between the classical and the predictive distributions, and this closeness was measured, based on the divergence measures.
For the second goal, the power of the test was determined numerically, based on the results of the simulation procedure, to decide which of the two prediction approaches was more appropriate in the current setting.
Simulation of Percentiles
To find the percentiles of the divergence measures, we applied the following steps (a brief sketch of this procedure is given after the list):
  • A random sample of size n was generated from the past underlying distribution and was used to calculate W, where $W = \sum_{i=1}^{n} X_i$, as defined in Theorem 2.
  • Based on the obtained values of W, the divergence measures and their averages under the prior distribution of θ 1 between g and h derived in (1)–(7) of Theorem 6 were calculated for fixed n, m, δ ,  a, b, and θ 1 .
  • Steps (1) and (2) above were repeated 10,000 times.
  • The 10,000 values of each of the simulated divergence measures obtained in step (3) were used to produce simulated percentiles of size α = 0.005 , 0.01 , 0.025 , 0.05 , 0.1 , 0.25 , 0.5 , 0.75 , 0.9 , 0.95 , 0.975 , 0.99 , and 0.995 .
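The sketch below (not the authors' code) outlines steps (1)–(4) for the Kullback–Leibler measure K(g‖h) from part (1) of Theorem 6, using the settings reported for Table S1 and taking θ_2 = 1, consistent with past samples from N(θ_1, 1); since the random seed and implementation details are assumptions, the resulting percentiles are illustrative and need not reproduce Table S1 exactly.

```python
# Hedged sketch (not the authors' code) of the percentile simulation in
# steps (1)-(4), using the Kullback-Leibler measure K(g||h) of Theorem 6,
# part (1), as the divergence; settings follow Table S1 with theta_2 = 1.
import numpy as np

rng = np.random.default_rng(7)
n, m, delta, theta1, theta2, a, b = 18, 5, 0.1, 2.5, 1.0, 12.5, 5.0
reps = 10_000

def K_gh(w):
    # Kullback-Leibler divergence between g(v|theta_1) and h(v|w)
    C = b + (n + m * delta**2) * theta2
    pm = (a * b + w * theta2) / (b + n * theta2)
    return 0.5 * (np.log(C / (b + n * theta2))
                  + m * delta**2 * (b + n * theta2) * theta2 / C
                  * ((theta1 - pm)**2 - 1.0 / (b + n * theta2)))

x = rng.normal(theta1, np.sqrt(1.0 / theta2), size=(reps, n))   # past samples
W = x.sum(axis=1)                                               # W = sum(X_i)
d = K_gh(W)

alphas = [0.005, 0.01, 0.025, 0.05, 0.1, 0.25, 0.5, 0.75, 0.9, 0.95, 0.975, 0.99, 0.995]
print(dict(zip(alphas, np.round(np.quantile(d, alphas), 5))))   # simulated percentiles
```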
Table S1 in the Supplementary Materials presents the percentiles of the divergence measures (1)–(7) found in Theorem 6 between the classical distribution of future statistic g ( v θ 1 ) and the Bayesian predictive distribution h ( v w ) based on the past sample from N ( θ 1 , 1 ) and the future sample from N ( δ θ 1 , 1 ) , for n = 18 , m = 5 , θ 1 = 2.5 , δ = 0.1 , a = 12.5 , b = 5 . The number of simulated samples was n i = 10,000 .
Table S2 in the Supplementary Materials presents the percentiles of the average divergence measures (1)–(7) found in Theorem 6 between the classical distribution of the future statistic g ( v θ 1 ) and the Bayesian predictive distribution h ( v w ) based on the past sample from N ( θ 1 , 1 ) and the future sample from N ( δ θ 1 , 1 ) , for n = 18 , m = 5 , δ = 0.1 , a = 12.5 , b = 5 . The number of simulated samples was n i = 10,000 .
The above simulation of percentiles procedure steps (1) through (4) was also performed in the Wald predictive distribution approach. In this case, the divergence measures derived in (1)–(7) of Theorem 7 were calculated for fixed n, m, and δ . Table S3 in the Supplementary Materials shows the percentiles of the divergence measures (1)–(7) found in Theorem 7 between g ( v ; θ 1 ) and q ( v w ) based on the past sample from N ( θ 1 , 1 ) and the future sample from N ( δ θ 1 , 1 ) , for n = 18 , m = 5 , θ 1 = 2.5 , δ = 0.1 . The number of simulated samples was n i = 10,000 .
Testing Closeness
At this stage, we tested the closeness of the future density g ( v | θ 1 ) and the Bayesian predictive density h ( v | w ) , using the following hypothesis:
H 0 : d ( g , h ) = 0 versus H 1 : d ( g , h ) > 0
at significance level α , where d was the distance between g and h and it was measured using the divergence measures (1)–(7) found in Theorem 6. In order to apply the closeness criteria, a simulation was carried out by the following steps:
  • All parameters, n, m, δ ,  a, b, and θ 1 , were fixed.
  • One sample of size n was generated from the underlying distribution, and was used to calculate W.
  • For w obtained in step (2), a test statistic d c a l was calculated, based on each of the divergence measures and their averages under the prior distribution of θ 1 between g and h.
  • The test statistic d c a l obtained in step (3) was compared to the corresponding critical value d t a b , from the simulated percentiles Tables S1 and S2 in the Supplementary Materials, at significance level α = 0.01 ,   0.05 .
  • On the basis of each divergence measure, decisions were made to reject (R) H 0 if d c a l > d t a b and to fail to reject (FTR) H 0 if d c a l < d t a b , or by using p-value criteria to reject H 0 if p-value < α and to fail to reject H 0 if p-value α .
Table 1 gives the test statistics d c a l for the divergence measures (1)–(7) of Theorem 6 between g ( v θ 1 ) and h ( v w ) , for n = 18 ,   m = 5 ,   θ 1 = 2.5 ,   δ = 0.1 ,   a = 12.5 ,   b = 5 , and the decisions (FTR, R) to test the hypothesis in (24) at α = 0.01 , 0.05 , 0.1 .
Table 2 gives test statistics d ¯ c a l for the average divergence measures given in (1)–(7) of Theorem 6 for n = 18 ,   m = 5 ,   δ = 0.1 ,   a = 12.5 ,   b = 5 , and the decisions (FTR, R) for testing the hypothesis in (24) at α = 0.01, 0.05, 0.1.
The above testing of closeness procedure steps (1) through (5) was also performed in the Wald predictive distribution approach. In this case, the divergence measures derived in (1)–(7) of Theorem 7 were used to measure the distance between the future density g ( v ; θ 1 ) and the Wald predictive density q ( v ; w ) for fixed n, m, and δ , and the percentiles Table S3 in the Supplementary Materials was used in the comparison criteria, to make a decision. Table 3 shows test statistics d c a l for the divergence measures given in (1)–(7) of Theorem 7, for n = 18 ,   m = 5 ,   δ = 0.1 , and the decisions (FTR, R) for testing the hypothesis in (24), at α = 0.01, 0.05, 0.1.
From Table 1, Table 2 and Table 3, we can see that all the divergence measures gave the same decision for each case in hypothesis (24). As a result, we could choose any of these divergence measures to test the closeness between the classical distribution and the predictive distribution in a normal case.
The Power of a Test
To simulate the power of a test, as described previously, we applied the following steps (a brief sketch is given after the list):
  • The percentiles points P α were taken from the percentiles Table S2 in the Supplementary Materials when we dealt with the Bayesian predictive-distribution approach for fixed parameters, n = 18 ,   m = 5 ,   δ = 0.1 ,   a = 12.5 ,   b = 5 , and θ 1 = θ 0 , and from Table S3 in the Supplementary Materials when we dealt with the Wald predictive-distribution approach for fixed parameters, n = 18 ,   m = 5 ,   δ = 0.1 , and θ 1 = θ 0 , at significance level α = 0.05 .
  • A random sample of size n was generated from the underlying distribution at θ 1 θ 0 , and was used to calculate W.
  • For w obtained in step (2), we calculated the values of d based on each of the divergence measures (1)–(7) of Theorem 6 for the Bayesian predictive-distribution approach and on the divergence measures (1)–(7) of Theorem 7 for the Wald predictive-distribution approach.
  • The values of d obtained in step (3) were compared to P α in step (1).
  • Steps (2)–(4) above were repeated 10,000 times.
  • The power of a test γ ( θ 1 ) , where θ 1 θ 0 , was calculated based on the percentages when d > P α .
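The sketch below (not the authors' code) illustrates the mechanics of the power simulation under the Bayesian approach. For simplicity it uses K(g‖h) evaluated at the hypothesized θ_0 rather than the averaged measures compared against Table S2 in the paper, and it takes θ_2 = 1, so the resulting power values are illustrative only and need not reproduce Table 4.

```python
# Hedged sketch (not the authors' code) of the power simulation for the
# Bayesian approach, using K(g||h) from Theorem 6, part (1), evaluated at
# the hypothesized theta_0 = 2.5; theta_2 = 1 is an assumption.
import numpy as np

rng = np.random.default_rng(8)
n, m, delta, theta2, a, b = 18, 5, 0.1, 1.0, 12.5, 5.0
theta0, alpha, reps = 2.5, 0.05, 10_000

def K_gh(w, theta1):
    # Kullback-Leibler divergence between g(v|theta_1) and h(v|w)
    C = b + (n + m * delta**2) * theta2
    pm = (a * b + w * theta2) / (b + n * theta2)
    return 0.5 * (np.log(C / (b + n * theta2))
                  + m * delta**2 * (b + n * theta2) * theta2 / C
                  * ((theta1 - pm)**2 - 1.0 / (b + n * theta2)))

def simulate_d(theta_true):
    # generate past samples at theta_true, evaluate the divergence at theta_0
    x = rng.normal(theta_true, np.sqrt(1.0 / theta2), size=(reps, n))
    return K_gh(x.sum(axis=1), theta0)

P_alpha = np.quantile(simulate_d(theta0), 1 - alpha)      # simulated critical value
for theta1 in (1.5, 2.0, 3.0, 3.5, 4.0):
    print(theta1, np.mean(simulate_d(theta1) > P_alpha))  # estimated power at each theta_1
```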
Table 4 and Table 5 give the values of the power of a test γ ( θ 1 ) for testing H 0 : d ( θ 1 ) = d ( 2.5 ) vs. H 1 : d ( θ 1 ) d ( 2.5 ) , where d was measured using divergence measures (1)–(7) of Theorem 6 and Theorem 7, respectively, at significance level 0.05 . The number of simulated samples was n i = 10,000.
From Table 4, we can see that the values of γ ( θ 1 ) depended on the value of θ 1 , and as the value of θ 1 became larger or smaller, γ ( θ 1 ) approached 1. On the other hand, Table 5 shows that the values of γ ( θ 1 ) oscillated around 0.05 and did not depend on θ 1 . Thus, the Bayesian predictive distribution was better in predicting future data than the Wald predictive distribution.
To check the validity of the power of a test, plots of γ ( θ 1 ) versus θ 1 are shown in Figure 1 and Figure 2 for the Bayesian predictive-distribution approach and the Wald predictive-distribution approach, respectively.

5. Application

In this section, we present an application of making a prediction using Bayesian predictive density about a future statistic based on a past sufficient statistic calculated from a real data set. In this application, we consider data set number 139 from [38], which represents the measurements L i of the length of the forearms (in inches) taken from 140 adult males. The distribution of this data set is normal, with mean 18.8 and variance 1.25546 ; L i N ( 18.8 , 1.25546 ) , i = 1 , 2 , , 140 .
By using the following transformation,
$$X_i = \frac{L_i - 18.8}{1.12047} + \theta,$$
we transform this data set to N ( θ , 1 ) .
The following data comprise a random sample of size 18 taken from the original transformed data to N ( θ , 1 ) , when θ = 2.5 :
3.12282 2.58734 3.92605 2.67658 1.78410 1.9626 0.26689 1.69486 2.94433 2.23034 3.30132 2.23034 2.49809 4.37229 1.33786 3.39057 1.15937 0.534634 .
From this data set, the value of the past sufficient statistic is $w = \sum_{i=1}^{18} x_i = 42.0204$. At significance level α = 0.05 , we want to test the hypothesis,
H 0 : d ( g , h ) = 0 versus H 1 : d ( g , h ) > 0 .
As all divergence measures give the same decision regarding the hypothesis testing in (24), we use the Kullback–Leibler divergence measure in (1) to calculate the distance d between the classical distribution g and the Bayesian predictive distribution h.
The value of d c a l = 0.0741083 , and from the percentiles Table S1, we obtain d t a b = 6.77810 ; as can be seen, d c a l < d t a b . Furthermore, the value of d ¯ c a l = 2.64472 , and from the percentiles Table S2, we obtain d ¯ t a b = 2.78029 ; as can be seen, d ¯ c a l < d ¯ t a b . Thus, the decision is to fail to reject H 0 . In other words, it is appropriate to use the Bayesian predictive distribution to predict the sum of the forearm lengths of males. As a result, we can predict the average length of forearms for males in general.
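For reference, the following short script (not part of the original article) recomputes the past sufficient statistic w from the transformed sample above and applies the decision rule with the critical values reported in this section.

```python
# Hedged illustration (not from the paper's code): computing the past
# sufficient statistic w from the transformed forearm sample of Section 5
# and applying the decision rule with the reported critical values.
import numpy as np

x = np.array([3.12282, 2.58734, 3.92605, 2.67658, 1.78410, 1.9626,
              0.26689, 1.69486, 2.94433, 2.23034, 3.30132, 2.23034,
              2.49809, 4.37229, 1.33786, 3.39057, 1.15937, 0.534634])
w = x.sum()
print(w)                          # 42.0204, the past sufficient statistic

# Values reported in Section 5 for the Kullback-Leibler measure:
d_cal, d_tab = 0.0741083, 6.77810
print("reject H0" if d_cal > d_tab else "fail to reject H0")   # fail to reject H0
```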

6. Conclusions

In this article, the Bayesian and Wald predictive distributions of future statistic V based on past sufficient statistic W from normal density were derived. Several divergence measures were used as criteria to obtain the distance between the classical distribution of future statistics and each of the Bayesian and the Wald predictive distributions. Hypothesis testing was used to test the closeness between the classical and each of the Bayesian and the Wald predictive distributions. The simulation results showed that all divergence measures used in this paper gave the same decisions in all cases. Therefore, it is recommended to use a divergence measure with simpler computations. Based on the power of a test, we conclude that Bayesian predictive distribution is better than Wald predictive distribution in normal distribution cases.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/sym16020212/s1.

Author Contributions

Conceptualization, S.A. (Suad Alhihi), M.A. and A.A.; methodology, S.A. (Suad Alhihi), M.A. and R.A.A.; software, S.A. (Suad Alhihi), M.A., G.A. and S.A. (Samer Alokaily); validation, S.A. (Suad Alhihi), M.A., R.A.A. and S.A. (Samer Alokaily); formal analysis, G.A. and A.A.; investigation, S.A. (Suad Alhihi), R.A.A. and A.A.; resources, G.A. and R.A.A.; data curation, A.A.; writing—original draft preparation, S.A. (Suad Alhihi) and A.A.; writing—review and editing, M.A., G.A. and S.A. (Samer Alokaily); visualization, S.A. (Samer Alokaily); supervision, M.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Aitchison, J. Goodness of prediction fit. Biometrika 1975, 62, 547–554. [Google Scholar] [CrossRef]
  2. Aitchison, J.; Dunsmore, I. Statistical Prediction Analysis; Cambridge University Press: Cambridge, NY, USA, 1975. [Google Scholar]
  3. Escobar, M.; West, M. Bayesian density estimation and inference using mixtures. J. Am. Stat. Assoc. 1995, 90, 577–588. [Google Scholar] [CrossRef]
  4. Hamura, Y.; Kubokawa, T. Bayesian predictive density estimation with parametric constraints for the exponential distribution with unknown location. Metrika 2021, 85, 515–536. [Google Scholar] [CrossRef]
  5. Hamura, Y.; Kubokawa, T. Bayesian predictive density estimation for a Chi-squared model using information from a normal observation with unknown mean and variance. J. Stat. Plan. Inference 2022, 217, 33–51. [Google Scholar] [CrossRef]
  6. Wald, A. Setting of tolerance limits when the sample is large. Ann. Math. Stat. 1942, 13, 389–399. [Google Scholar] [CrossRef]
  7. Bjornstad, J. Predictive likelihood: A review. Stat. Sci. 1990, 242–254. [Google Scholar] [CrossRef]
  8. Awad, A.; Saad, T. Predictive Density Functions: A Comparative Study. Pak. J. Stat. 1987, 3, 91–118. [Google Scholar]
  9. Kullback, S.; Leibler, R. On information and sufficiency. Ann. Math. Stat. 1951, 22, 79–86. [Google Scholar] [CrossRef]
Figure 1. The power function γ(θ_1) for the Bayesian predictive-distribution approach.
Figure 2. The power function γ(θ_1) for the Wald predictive-distribution approach.
Table 1. Hypothesis-testing decisions for closeness based on divergence measures between the classical distribution of future statistics and the Bayesian predictive distribution.

| Measure | d_cal (w = 30.5161) | α = 0.01 | α = 0.05 | α = 0.1 | d_cal (w = 45.0341) | α = 0.01 | α = 0.05 | α = 0.1 |
| K | 11.7321 | FTR | R | R | 0.31330 | FTR | FTR | FTR |
| J | 23.4964 | FTR | R | R | 0.62746 | FTR | FTR | FTR |
| K_{0.2}^{1} | 2.35157 | FTR | R | R | 0.06280 | FTR | FTR | FTR |
| K_{1.1}^{1} | 12.9018 | FTR | R | R | 0.34454 | FTR | FTR | FTR |
| ^{1}K_{0.2} | 2.34936 | FTR | R | R | 0.06280 | FTR | FTR | FTR |
| ^{1}K_{1.1} | 12.9101 | FTR | R | R | 0.34455 | FTR | FTR | FTR |
| K_{0.2}^{0.3} | 2.34964 | FTR | R | R | 0.06280 | FTR | FTR | FTR |
| K_{0.2}^{1.5} | 2.35296 | FTR | R | R | 0.06280 | FTR | FTR | FTR |
| K_{1.1}^{0.3} | 12.8437 | FTR | R | R | 0.34450 | FTR | FTR | FTR |
| K_{1.1}^{1.5} | 12.9435 | FTR | R | R | 0.34457 | FTR | FTR | FTR |
| χ² | 23.6760 | FTR | R | R | 0.62509 | FTR | FTR | FTR |
| H | 2.93274 | FTR | R | R | 0.07843 | FTR | FTR | FTR |

FTR: fail to reject H_0; R: reject H_0. The d_cal values in the table were multiplied by 1000.
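For readers who wish to reproduce divergence values of this kind, the sketch below shows how the Kullback–Leibler, Jeffreys, Pearson χ², and Hellinger divergences between two continuous densities can be approximated by numerical integration. It is a minimal illustration only: the two normal densities, the integration limits, and the function names are illustrative assumptions, not the predictive densities or the exact computational procedure behind Table 1.

```python
import numpy as np
from scipy import stats
from scipy.integrate import quad

# Illustrative densities only: a "future" density p and a candidate predictive
# density q, both taken to be univariate normal with hypothetical parameters.
p = stats.norm(loc=2.5, scale=1.0).pdf
q = stats.norm(loc=2.6, scale=1.1).pdf

LO, HI = -20.0, 25.0  # integration range wide enough to cover both densities

def kl(p, q):
    """Kullback-Leibler divergence K(p, q) = ∫ p log(p/q) dx."""
    return quad(lambda x: p(x) * np.log(p(x) / q(x)), LO, HI)[0]

def jeffreys(p, q):
    """Jeffreys (symmetrized KL) divergence J(p, q) = K(p, q) + K(q, p)."""
    return kl(p, q) + kl(q, p)

def pearson_chi2(p, q):
    """Pearson chi-square divergence ∫ (p - q)^2 / q dx."""
    return quad(lambda x: (p(x) - q(x)) ** 2 / q(x), LO, HI)[0]

def hellinger(p, q):
    """Squared Hellinger distance 1 - ∫ sqrt(p q) dx (one common convention)."""
    bhattacharyya_coeff = quad(lambda x: np.sqrt(p(x) * q(x)), LO, HI)[0]
    return 1.0 - bhattacharyya_coeff

print("K =", kl(p, q), " J =", jeffreys(p, q),
      " chi2 =", pearson_chi2(p, q), " H =", hellinger(p, q))
```

Decisions such as those reported in the table are then obtained by comparing each computed value with the appropriate critical value at the chosen significance level α.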
Table 2. Hypothesis-testing decisions for closeness based on the average divergence measures between the classical distribution of future statistics and the Bayesian predictive distribution.

| Measure | average d_cal (w = 30.5161) | α = 0.01 | α = 0.05 | α = 0.1 | average d_cal (w = 45.0341) | α = 0.01 | α = 0.05 | α = 0.1 |
| K | 2.97156 | R | R | R | 2.56239 | FTR | FTR | FTR |
| J | 5.95129 | R | R | R | 5.13183 | FTR | FTR | FTR |
| K_{0.2}^{1} | 0.57320 | R | R | R | 0.49434 | FTR | FTR | FTR |
| K_{1.1}^{1} | 3.35800 | R | R | R | 2.89535 | FTR | FTR | FTR |
| ^{1}K_{0.2} | 0.45976 | R | R | R | 0.40829 | FTR | FTR | FTR |
| ^{1}K_{1.1} | 3.99059 | R | R | R | 3.35806 | FTR | FTR | FTR |
| K_{0.2}^{0.3} | 0.47216 | R | R | R | 0.41787 | FTR | FTR | FTR |
| K_{0.2}^{1.5} | 0.66378 | R | R | R | 0.56079 | FTR | FTR | FTR |
| K_{1.1}^{0.3} | 1.29241 | R | R | R | 1.24034 | FTR | FTR | FTR |
| K_{1.1}^{1.5} | 8.72037 | R | R | R | 6.50641 | FTR | FTR | FTR |
| χ² | 113395.0 | R | R | R | 22364.9 | FTR | FTR | FTR |
| H | 0.50396 | R | R | R | 0.45375 | FTR | FTR | FTR |
Table 3. Hypothesis-testing decisions for closeness based on divergence measures between the classical distribution of future statistics and the Wald predictive distribution.

| Measure | d_cal (w = 30.5161) | α = 0.01 | α = 0.05 | α = 0.1 | d_cal (w = 45.0341) | α = 0.01 | α = 0.05 | α = 0.1 |
| K | 161.870 | R | R | R | 0.000897 | FTR | FTR | FTR |
| J | 323.740 | R | R | R | 0.001794 | FTR | FTR | FTR |
| K_{0.2}^{1} | 32.3740 | R | R | R | 0.000179 | FTR | FTR | FTR |
| K_{1.1}^{1} | 178.057 | R | R | R | 0.000987 | FTR | FTR | FTR |
| ^{1}K_{0.2} | 32.3321 | R | R | R | 0.000179 | FTR | FTR | FTR |
| ^{1}K_{1.1} | 178.215 | R | R | R | 0.000987 | FTR | FTR | FTR |
| K_{0.2}^{0.3} | 32.3373 | R | R | R | 0.000179 | FTR | FTR | FTR |
| K_{0.2}^{1.5} | 32.4002 | R | R | R | 0.000179 | FTR | FTR | FTR |
| K_{1.1}^{0.3} | 176.952 | R | R | R | 0.000987 | FTR | FTR | FTR |
| K_{1.1}^{1.5} | 178.852 | R | R | R | 0.000987 | FTR | FTR | FTR |
| χ² | 329.037 | R | R | R | 0.001794 | FTR | FTR | FTR |
| H | 40.3857 | R | R | R | 0.000224 | FTR | FTR | FTR |

FTR: fail to reject H_0; R: reject H_0. The d_cal values in the table were multiplied by 10,000.
Table 4. Values of the power function γ(θ_1) around θ_1 = 2.5 for the Bayesian prediction approach.

| θ_1 | −110 | −60 | −50 | −40 | −30 | −20 |
| γ(θ_1) | 0.9992 | 0.8335 | 0.6897 | 0.5081 | 0.3293 | 0.1907 |
| θ_1 | 2.5 | 50 | 60 | 70 | 100 | 120 |
| γ(θ_1) | 0.0494 | 0.2538 | 0.4152 | 0.6075 | 0.9527 | 0.9947 |
Table 5. Values of the power function γ(θ_1) around θ_1 = 2.5 for the Wald prediction approach.

| θ_1 | −110 | −60 | −50 | −40 | −30 | −20 |
| γ(θ_1) | 0.0454 | 0.0474 | 0.0427 | 0.0464 | 0.0437 | 0.0440 |
| θ_1 | 2.5 | 50 | 60 | 70 | 100 | 120 |
| γ(θ_1) | 0.0474 | 0.0459 | 0.0443 | 0.0423 | 0.0439 | 0.0446 |
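As a rough companion to Tables 4 and 5, the following Monte Carlo sketch illustrates how a power function γ(θ_1) for a test of H_0: θ_1 = 2.5 can be approximated by simulation. The normal model, sample size, and simple z-type rejection rule are hypothetical placeholders chosen only for illustration; they are not the divergence-based tests whose power is reported above.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2024)

def power(theta1, theta0=2.5, sigma=10.0, n=30, alpha=0.05, reps=20_000):
    """Monte Carlo estimate of gamma(theta1) for a two-sided z-test of
    H0: mean = theta0, under a normal(theta1, sigma^2) data-generating model.
    All settings here are illustrative, not those used in the paper."""
    z_crit = stats.norm.ppf(1.0 - alpha / 2.0)
    samples = rng.normal(loc=theta1, scale=sigma, size=(reps, n))
    z = (samples.mean(axis=1) - theta0) / (sigma / np.sqrt(n))
    return float(np.mean(np.abs(z) > z_crit))

# Evaluate the estimated power on a grid around the null value theta0 = 2.5.
for t in (-20.0, -5.0, 2.5, 10.0, 20.0):
    print(f"theta1 = {t:6.1f}   gamma = {power(t):.4f}")
```

In a simulation of this kind, the estimated power is close to α at the null value and grows as θ_1 moves away from it, which matches the qualitative pattern for the Bayesian approach in Table 4; the nearly constant values in Table 5 indicate that the Wald-based test retains power close to α across the whole range of θ_1.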