Article

Bootstrap Tests for the Location Parameter under the Skew-Normal Population with Unknown Scale Parameter and Skewness Parameter

1 School of Economics, Hangzhou Dianzi University, Hangzhou 310018, China
2 Alibaba Business College, Hangzhou Normal University, Hangzhou 310036, China
* Author to whom correspondence should be addressed.
Mathematics 2022, 10(6), 921; https://doi.org/10.3390/math10060921
Submission received: 12 February 2022 / Revised: 6 March 2022 / Accepted: 10 March 2022 / Published: 13 March 2022
(This article belongs to the Special Issue Computational Statistics and Data Analysis)

Abstract: In this paper, inference on the location parameter of the skew-normal population is considered when the scale parameter and skewness parameter are unknown. Firstly, Bootstrap test statistics and Bootstrap confidence intervals for the location parameter of a single population are constructed based on the moment estimators and the maximum likelihood (ML) estimators, respectively. Secondly, the Behrens-Fisher-type testing and interval estimation problems for two skew-normal populations are discussed. Thirdly, Monte Carlo simulations show that the proposed Bootstrap approaches perform satisfactorily in terms of Type I error probability and power in most cases, whether based on the moment estimators or the ML estimators; moreover, the Bootstrap test based on the moment estimator is better than that based on the ML estimator in most situations. Finally, the above approaches are applied to real data examples of leaf area index, carbon fibers' strength and red blood cell counts in athletes to verify their reasonableness and effectiveness.

1. Introduction

In many practical problems, real data tend to follow skewed distributions with unimodal and asymmetric characteristics, such as dental plaque index data [1], freeway speed data [2] and polarizer manufacturing process data [3]. For this reason, Azzalini [4,5] originally proposed the skew-normal distribution and gave its density function. A random variable X follows a skew-normal distribution with location parameter ξ ∈ ℝ, scale parameter η² ∈ ℝ⁺ and skewness parameter λ ∈ ℝ, denoted by X ∼ SN(ξ, η², λ), if its density function is:
f(x;\xi,\eta^2,\lambda) = 2\,\phi(x;\xi,\eta^2)\,\Phi\!\left[\lambda\eta^{-1}(x-\xi)\right],
where φ(x; ξ, η²) is the normal probability density function with mean ξ and variance η², and Φ(·) is the standard normal cumulative distribution function. When λ = 0, Equation (1) reduces to the normal distribution with mean ξ and variance η².
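For readers who want to work with this density numerically, the following minimal Python sketch (not part of the original paper) checks that Equation (1) coincides with SciPy's skew-normal parameterization, with shape a = λ, location ξ and scale η; the numerical values are illustrative only.

```python
import numpy as np
from scipy.stats import norm, skewnorm

xi, eta, lam = 1.0, 2.0, 3.0            # illustrative values, not from the paper
x = np.linspace(xi - 3 * eta, xi + 4 * eta, 201)

# Equation (1): 2 * phi(x; xi, eta^2) * Phi(lambda * (x - xi) / eta)
pdf_eq1 = 2 * norm.pdf(x, loc=xi, scale=eta) * norm.cdf(lam * (x - xi) / eta)

# SciPy's skew-normal with a = lambda, loc = xi, scale = eta gives the same curve
pdf_scipy = skewnorm.pdf(x, a=lam, loc=xi, scale=eta)
assert np.allclose(pdf_eq1, pdf_scipy)

# Random samples from SN(xi, eta^2, lambda)
sample = skewnorm.rvs(a=lam, loc=xi, scale=eta, size=1000, random_state=0)
```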
In view of the wide applications of the skew-normal distribution, many scholars have further explored its statistical properties. Some recent studies include: characterizations of the distribution [6,7], characteristic functions [8], sampling distributions [9], the distribution of quadratic forms [10,11,12], measures of skewness and divergence [13,14], asymptotic expansions for moments of the extremes [15], rates of convergence of the extremes [16], the exact density of the sum of independent random variables [17], identifiability of finite mixtures of skew-normal distributions [18], etc. On this basis, the skew-normal distribution can be used to fit real data and to build statistical models for practical problems. Some recent applications include: modelling of air pollution data [19], modelling of psychiatric measures [20], modelling of bounded health scores [21], modelling of insurance claims [22], asset pricing [23], individual loss reserving [24], robust portfolio estimation [25], growth estimates of cardinalfish [26], age-specific fertility rates [27], reliability studies [28], statistical process control [29], analysis of student satisfaction with university courses [30], detecting differential expression in microRNA data [31], etc.
Due to the complex structure of the skew-normal distribution, traditional parameter estimation methods are difficult to apply directly. To this end, Pewsey [32] studied the weaknesses of the direct parameterization in parameter estimation and proposed the centered parameterization method. Pewsey [33,34] applied this method to the wrapped skew-normal population and gave the corresponding moment and maximum likelihood (ML) estimation methods. Arellano-Valle and Azzalini [35] extended the centered parameterization to the multivariate skew-normal distribution and studied its information matrix. Further, owing to the wide application of the location parameter in econometrics, medicinal chemistry and life testing, research on the location parameter of the skew-normal distribution has attracted much attention. For example, Wang et al. [36] discussed the interval estimation of the location parameter when the coefficient of variation and the skewness parameter are known. Thiuthad and Pal [37] considered the hypothesis testing problem for the location parameter and constructed three test statistics when the scale parameter and skewness parameter are known. Ma et al. [38] studied the interval estimation and hypothesis testing problems for the location parameter with known scale parameter and skewness parameter. Based on approximate likelihood equations, Gui and Guo [39] derived explicit estimators of the scale parameter and location parameter. In practical applications, however, inference on the location parameter with unknown scale parameter and skewness parameter is the rule rather than the exception. Therefore, this paper studies the statistical inference problems for the location parameter of single and two skew-normal populations when the scale parameters and skewness parameters are unknown.
This paper is organized as follows. In Section 2, for a single skew-normal population, the centered parameterization and Bootstrap approaches are used for the hypothesis testing and interval estimation problems of the location parameter with unknown scale parameter and skewness parameter. In Section 3, for two skew-normal populations, the Behrens-Fisher-type testing and interval estimation problems of the location parameters are discussed when the scale parameters and skewness parameters are unknown. In Section 4, the Monte Carlo simulation results of the above approaches are presented and the approaches are compared. In Section 5, the proposed approaches are applied to real data examples of leaf area index (LAI), carbon fibers' strength and red blood cell (RBC) counts in athletes. Section 6 concludes the paper.

2. Inference on the Location Parameter of Single Skew-Normal Population

In this section, the estimation problem of the unknown parameters of a single skew-normal population is considered first. Suppose that X₁, …, Xₙ are mutually independent random samples from the skew-normal distribution X ∼ SN(ξ, η², λ). Let (X̄, S₂, S₃) denote the sample mean and the second and third central moments of the sample, respectively. Namely:
\bar{X} = \frac{1}{n}\sum_{i=1}^{n}X_i,\quad S_2 = \frac{1}{n}\sum_{i=1}^{n}(X_i-\bar{X})^2,\quad S_3 = \frac{1}{n}\sum_{i=1}^{n}(X_i-\bar{X})^3.
Theorem 1.
Let δ = λ/(1 + λ²)^{1/2}. If X ∼ SN(ξ, η², λ), then the moment estimators of (ξ, η², λ) are:
\hat{\xi} = \bar{X} - cS_3^{1/3},\quad \hat{\eta}^2 = S_2 + c^2S_3^{2/3},\quad \hat{\lambda} = \hat{\delta}/(1-\hat{\delta}^2)^{1/2},
where b = (2/π)^{1/2}, c = [2/(4 − π)]^{1/3} and \hat{\delta} = cS_3^{1/3}/\big[b(S_2 + c^2S_3^{2/3})^{1/2}\big].
 Proof. 
Let (x̄, s₂, s₃) be the observed values of (X̄, S₂, S₃), and let Y₁, …, Yₙ, with Y_i = (X_i − x̄)/s₂^{1/2}, i = 1, …, n, be the standardized samples from Y ∼ SN(ξ_s, η_s², λ). Note that:
\xi_s = (\xi - \bar{x})/s_2^{1/2},\quad \eta_s = \eta/s_2^{1/2}.
The moment generating function (MGF) of Y is:
M_Y(t) = \mathrm{E}[\exp(tY)] = 2\exp\!\left(t\xi_s + \frac{t^2\eta_s^2}{2}\right)\Phi(t\eta_s\delta).
By Equation (5), we have:
M_Y'(t)\big|_{t=0} = \xi_s + b\eta_s\delta = 0,\quad M_Y''(t)\big|_{t=0} = \xi_s^2 + 2b\xi_s\eta_s\delta + \eta_s^2 = 1,\quad M_Y'''(t)\big|_{t=0} = \xi_s^3 + 3b\xi_s^2\eta_s\delta + 3\xi_s\eta_s^2 + 3b\eta_s^3\delta - b\eta_s^3\delta^3 = s_2^{-3/2}s_3.
According to Equations (4) and (6), the moment estimates of (ξ, η², λ) can be expressed as:
\hat{\xi}^* = \bar{x} - cs_3^{1/3},\quad \hat{\eta}^{*2} = s_2 + c^2s_3^{2/3},\quad \hat{\lambda}^* = \hat{\delta}^*/(1-\hat{\delta}^{*2})^{1/2},
where \hat{\delta}^* = cs_3^{1/3}/\big[b(s_2 + c^2s_3^{2/3})^{1/2}\big]. The moment estimators of (ξ, η², λ) in Equation (3) then follow, which completes the proof of Theorem 1. □
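A minimal Python sketch of the moment estimators in Theorem 1 (our own illustration; the small guard on δ̂ is an assumption the paper does not specify):

```python
import numpy as np

b = np.sqrt(2 / np.pi)                   # b = (2/pi)^(1/2)
c = (2 / (4 - np.pi)) ** (1 / 3)         # c = [2/(4 - pi)]^(1/3)

def moment_estimates(x):
    """Moment estimates (xi, eta^2, lambda, delta) of Theorem 1."""
    x = np.asarray(x, dtype=float)
    xbar = x.mean()
    s2 = np.mean((x - xbar) ** 2)
    s3 = np.mean((x - xbar) ** 3)
    s3_cbrt = np.cbrt(s3)                # signed (real) cube root of the third moment
    xi_hat = xbar - c * s3_cbrt
    eta2_hat = s2 + c ** 2 * s3_cbrt ** 2
    delta_hat = c * s3_cbrt / (b * np.sqrt(eta2_hat))
    # Guard (an assumption, not in the paper): keep |delta| < 1 for extreme samples.
    delta_hat = np.clip(delta_hat, -0.9999, 0.9999)
    lambda_hat = delta_hat / np.sqrt(1 - delta_hat ** 2)
    return xi_hat, eta2_hat, lambda_hat, delta_hat
```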
Further, the ML estimators of ( ξ , η 2 , λ ) are considered. Pewsey [32] proved that the results of using numerical techniques to maximize the log-likelihood for direct parameters ( ξ , η 2 , λ ) , may be highly misleading as no unique solution exists in this case. For this, we derive the ML estimators of the unknown parameters based on the method of centered parametrization by References [4,32,34,35,40]. Firstly, we give the following definition.
Definition 1.
Suppose X ∼ SN(ξ, η², λ) and let Z = (X − ξ)/η ∼ SN(λ). Then:
X_C = \mu + \sigma\,\frac{Z - \mathrm{E}(Z)}{\sqrt{\mathrm{var}(Z)}} \sim SN_C(\mu,\sigma^2,\gamma),
where SN_C(μ, σ², γ) denotes the skew-normal distribution with mean μ ∈ ℝ, variance σ² ∈ ℝ⁺ and skewness coefficient γ.
The centered parameterization removes the singularity of the expected Fisher information matrix at λ = 0 . Furthermore, the components of centered parameters are less correlated than those of direct parameters. By Definition 1, the relationship between the direct parameters ( ξ , η 2 , λ ) and centered ones ( μ , σ 2 , γ ) is as follows (see [34]).
\xi = \mu - c\gamma^{1/3}\sigma,\quad \eta^2 = \sigma^2(1 + c^2\gamma^{2/3}),\quad \lambda = \frac{c\gamma^{1/3}}{\big[b^2 + c^2(b^2-1)\gamma^{2/3}\big]^{1/2}}.
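The following Python sketch (our illustration, not the authors' code) implements Equation (7) and its inverse; the inverse uses the expressions for μ, σ² and γ in terms of the direct parameters that are derived in the proof of Theorem 2 below.

```python
import numpy as np

b = np.sqrt(2 / np.pi)
c = (2 / (4 - np.pi)) ** (1 / 3)

def direct_from_centered(mu, sigma2, gamma):
    """Map centered parameters (mu, sigma^2, gamma) to direct ones via Equation (7)."""
    sigma = np.sqrt(sigma2)
    g = np.cbrt(gamma)                                  # signed cube root gamma^(1/3)
    xi = mu - c * g * sigma
    eta2 = sigma2 * (1 + c ** 2 * g ** 2)
    lam = c * g / np.sqrt(b ** 2 + c ** 2 * (b ** 2 - 1) * g ** 2)
    return xi, eta2, lam

def centered_from_direct(xi, eta2, lam):
    """Inverse map: mu = xi + b*eta*delta, sigma^2 = eta^2*(1 - b^2*delta^2),
    gamma = b^3*delta^3 / [c^3*(1 - b^2*delta^2)^(3/2)]."""
    eta = np.sqrt(eta2)
    delta = lam / np.sqrt(1 + lam ** 2)
    mu = xi + b * eta * delta
    sigma2 = eta2 * (1 - b ** 2 * delta ** 2)
    gamma = b ** 3 * delta ** 3 / (c ** 3 * (1 - b ** 2 * delta ** 2) ** 1.5)
    return mu, sigma2, gamma
```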
Assume that X_{C1}, …, X_{Cn} are random samples from the skew-normal distribution X_C ∼ SN_C(μ, σ², γ). The sample mean and the second and third central moments of the sample can be written respectively as:
\bar{X}_C = \frac{1}{n}\sum_{i=1}^{n}X_{Ci},\quad S_{C2} = \frac{1}{n}\sum_{i=1}^{n}(X_{Ci}-\bar{X}_C)^2,\quad S_{C3} = \frac{1}{n}\sum_{i=1}^{n}(X_{Ci}-\bar{X}_C)^3.
Theorem 2.
Suppose X ∼ SN(ξ, η², λ). Let Z = (X − ξ)/η and X_C = μ + σ(Z − E(Z))/\sqrt{\mathrm{var}(Z)}; then X_C = X.
 Proof. 
The first three derivatives of the MGF M_X(t) of X give:
\mathrm{E}(X) = M_X'(t)\big|_{t=0} = \xi + b\eta\delta,\quad \mathrm{E}(X^2) = M_X''(t)\big|_{t=0} = \xi^2 + 2b\xi\eta\delta + \eta^2,\quad \mathrm{E}(X^3) = M_X'''(t)\big|_{t=0} = \xi^3 + 3b\xi^2\eta\delta + 3\xi\eta^2 + 3b\eta^3\delta - b\eta^3\delta^3.
By the above three equations, the skewness coefficient γ of X has the form of:
\gamma = \frac{\mathrm{E}\big[X - \mathrm{E}(X)\big]^3}{\big\{\mathrm{E}\big[X - \mathrm{E}(X)\big]^2\big\}^{3/2}} = \frac{b^3\delta^3}{c^3(1-b^2\delta^2)^{3/2}}.
From Equations (7) and (8), we have:
\sigma = \frac{\eta}{(1 + c^2\gamma^{2/3})^{1/2}} = \eta(1-b^2\delta^2)^{1/2},\quad \mu = \xi + c\gamma^{1/3}\sigma = \xi + b\eta\delta.
Because Z ∼ SN(λ), we have E(Z) = bδ and var(Z) = 1 − b²δ². Then:
X_C = \mu + \sigma\,\frac{Z-\mathrm{E}(Z)}{\sqrt{\mathrm{var}(Z)}} = \xi + b\eta\delta + \eta(1-b^2\delta^2)^{1/2}\,\frac{(X-\xi)/\eta - b\delta}{(1-b^2\delta^2)^{1/2}} = X.
Hence, the proof of Theorem 2 is completed. □
Remark 1.
By Theorem 1, δ → ±1 as λ → ±∞. Furthermore, we have γ ∈ (−0.99527, 0.99527) by (8). More details can be found in Pewsey [32].
Besides, we consider the ML estimators of the centered parameters (μ, σ², γ). The observed values of (X̄_C, S_{C2}, S_{C3}) are denoted by (x̄_C, s_{C2}, s_{C3}). Similarly, let X_{si} = (X_{Ci} − x̄_C)/s_{C2}^{1/2}, i = 1, …, n, where X_{s1}, …, X_{sn} are the standardized samples from X_s ∼ SN_C(μ_s, σ_s², γ) with μ_s = (μ − x̄_C)/s_{C2}^{1/2} and σ_s = σ/s_{C2}^{1/2}. The density function of X_s is then:
f(x_s;\mu_s,\sigma_s^2,\gamma) = \frac{2}{\sigma_s s_{C2}^{1/2}(1+c^2\gamma^{2/3})^{1/2}}\,\phi\!\left(\left[\frac{x_s-\mu_s}{\sigma_s}+c\gamma^{1/3}\right]\frac{1}{(1+c^2\gamma^{2/3})^{1/2}}\right) \times \Phi\!\left(\left[\frac{x_s-\mu_s}{\sigma_s}+c\gamma^{1/3}\right]\frac{c\gamma^{1/3}}{(1+c^2\gamma^{2/3})^{1/2}\big[b^2+c^2\gamma^{2/3}(b^2-1)\big]^{1/2}}\right).
By Equation (10), the logarithmic likelihood function (without constant terms) of X s 1 , , X s n can be represented as:
l(x_{s1},\ldots,x_{sn};\mu_s,\sigma_s^2,\gamma) = -n\log\sigma_s - \frac{n}{2}\log(1+c^2\gamma^{2/3}) + \sum_{i=1}^{n}\log\phi\!\left(\left[\frac{x_{si}-\mu_s}{\sigma_s}+c\gamma^{1/3}\right](1+c^2\gamma^{2/3})^{-1/2}\right) + \sum_{i=1}^{n}\log\Phi\!\left(\left[\frac{(x_{si}-\mu_s)c\gamma^{1/3}}{\sigma_s}+c^2\gamma^{2/3}\right](1+c^2\gamma^{2/3})^{-1/2}\big[b^2+c^2\gamma^{2/3}(b^2-1)\big]^{-1/2}\right).
In addition, the ML estimators of ξ and η 2 satisfy the constraint (see [4]):
\eta^2 = \frac{1}{n}\sum_{i=1}^{n}(X_i-\xi)^2.
By Theorem 2, η 2 in Equation (12) can also be expressed as:
\eta^2 = \frac{1}{n}\sum_{i=1}^{n}(X_{Ci}-\xi)^2.
From Equations (7) and (13), the ML estimators (μ̃_s, σ̃_s², γ̃) of (μ_s, σ_s², γ) satisfy the relationship:
\sigma_s^2 = \left\{\big[1+\mu_s^2(1+c^2\gamma^{2/3})\big]^{1/2} - c\mu_s\gamma^{1/3}\right\}^2.
Substituting Equation (14) into Equation (11), we have:
l(x_{s1},\ldots,x_{sn};\mu_s,\sigma_s^2,\gamma) = -\frac{n}{2}\log(1+c^2\gamma^{2/3}) - n\log\!\left(\big[1+\mu_s^2(1+c^2\gamma^{2/3})\big]^{1/2} - c\mu_s\gamma^{1/3}\right) + \sum_{i=1}^{n}\log\phi\!\left(\left[\frac{x_{si}-\mu_s}{\big[1+\mu_s^2(1+c^2\gamma^{2/3})\big]^{1/2}-c\mu_s\gamma^{1/3}}+c\gamma^{1/3}\right](1+c^2\gamma^{2/3})^{-1/2}\right) + \sum_{i=1}^{n}\log\Phi\!\left(\left[\frac{(x_{si}-\mu_s)c\gamma^{1/3}}{\big[1+\mu_s^2(1+c^2\gamma^{2/3})\big]^{1/2}-c\mu_s\gamma^{1/3}}+c^2\gamma^{2/3}\right](1+c^2\gamma^{2/3})^{-1/2}\big[b^2+c^2\gamma^{2/3}(b^2-1)\big]^{-1/2}\right).
Therefore, we define (μ̃_s*, σ̃_s*², γ̃*) as the ML estimates of (μ_s, σ_s², γ) obtained by maximizing Equation (15), with default starting values given by the moment estimates of (μ_s, σ_s², γ), namely:
\hat{\mu}_s^* = c\,s_{C2}^{-1/2}s_{C3}^{1/3},\quad \hat{\sigma}_s^{*2} = 1 + c^2 s_{C2}^{-1}s_{C3}^{2/3},\quad \hat{\gamma}^* = \frac{b\hat{\delta}^{*3}(2b^2-1)}{(1-b^2\hat{\delta}^{*2})^{3/2}}.
Further, the ML estimates of μ and σ 2 are obtained as follows:
\tilde{\mu}^* = \bar{x}_C + s_{C2}^{1/2}\tilde{\mu}_s^*,\quad \tilde{\sigma}^{*2} = s_{C2}\,\tilde{\sigma}_s^{*2}.
By Equation (7), the ML estimates of the direct parameters ( ξ , η 2 , λ ) are:
\tilde{\xi}^* = \tilde{\mu}^* - c\tilde{\gamma}^{*1/3}\tilde{\sigma}^*,\quad \tilde{\eta}^{*2} = \tilde{\sigma}^{*2}(1+c^2\tilde{\gamma}^{*2/3}),\quad \tilde{\lambda}^* = \frac{c\tilde{\gamma}^{*1/3}}{\big[b^2+c^2(b^2-1)\tilde{\gamma}^{*2/3}\big]^{1/2}}.
Then the ML estimate of δ is δ̃* = λ̃*/(1 + λ̃*²)^{1/2}. Hence, we have the following result.
Theorem 3.
Suppose that ( μ ˜ s , σ ˜ s 2 , γ ˜ ) are the ML estimators corresponding to ( μ ˜ s * , σ ˜ s * 2 , γ ˜ * ) in Equation (16), then the ML estimators of direct parameters ( ξ , η 2 , λ ) are:
\tilde{\xi} = \tilde{\mu} - c\tilde{\gamma}^{1/3}\tilde{\sigma},\quad \tilde{\eta}^2 = \tilde{\sigma}^2(1+c^2\tilde{\gamma}^{2/3}),\quad \tilde{\lambda} = \frac{c\tilde{\gamma}^{1/3}}{\big[b^2+c^2(b^2-1)\tilde{\gamma}^{2/3}\big]^{1/2}},
where μ̃ and σ̃² are the ML estimators corresponding to μ̃* and σ̃*² in Equation (17), respectively. Furthermore, the ML estimator of δ is δ̃ = λ̃/(1 + λ̃²)^{1/2}.
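As a rough illustration (not the authors' implementation), the centered-parameterization ML fit of Theorem 3 can be sketched in Python by maximizing the skew-normal log-likelihood over (μ, σ², γ), mapping each candidate to the direct parameters through Equation (7); the starting values are the centered-parameter moment estimates and γ is kept slightly inside the admissible range of Remark 1 for numerical safety.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import skewnorm

b = np.sqrt(2 / np.pi)
c = (2 / (4 - np.pi)) ** (1 / 3)

def ml_estimates_centered(x):
    """Sketch of centered-parameterization ML: returns (xi, eta^2, lambda)."""
    x = np.asarray(x, dtype=float)
    mu0 = x.mean()
    s2 = np.mean((x - mu0) ** 2)
    gamma0 = np.clip(np.mean((x - mu0) ** 3) / s2 ** 1.5, -0.99, 0.99)

    def to_direct(mu, sigma2, gamma):
        g = np.cbrt(gamma)
        xi = mu - c * g * np.sqrt(sigma2)
        eta2 = sigma2 * (1 + c ** 2 * g ** 2)
        lam = c * g / np.sqrt(b ** 2 + c ** 2 * (b ** 2 - 1) * g ** 2)
        return xi, eta2, lam

    def negloglik(theta):
        mu, log_sigma2, gamma = theta
        xi, eta2, lam = to_direct(mu, np.exp(log_sigma2), gamma)
        return -np.sum(skewnorm.logpdf(x, a=lam, loc=xi, scale=np.sqrt(eta2)))

    res = minimize(negloglik, x0=[mu0, np.log(s2), gamma0],
                   bounds=[(None, None), (None, None), (-0.9952, 0.9952)],
                   method="L-BFGS-B")
    mu_ml, sigma2_ml, gamma_ml = res.x[0], np.exp(res.x[1]), res.x[2]
    return to_direct(mu_ml, sigma2_ml, gamma_ml)
```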
It is well known that when λ ≠ 0, the location parameter of the skew-normal population generalizes the mean of the normal population, so statistical inference on the location parameter of a single skew-normal distribution is of particular importance. We therefore propose a Bootstrap approach for the hypothesis testing problem of the location parameter in the single skew-normal population SN(ξ, η², λ). Specifically, the hypothesis of interest is:
H_0: \xi = \xi_0 \quad \text{vs.}\quad H_1: \xi \neq \xi_0,
where ξ 0 is a specified value. Based on the central limit theorem, under H 0 in (19) we have:
T = \frac{\bar{X} - \xi_0 - b\eta\delta}{\sqrt{\frac{1}{n}\eta^2(1-b^2\delta^2)}}.
If η and δ are known, T can be the test statistic for hypothesis testing problem (19). Since η and δ are often unknown, the test statistics might be developed by replacing η and δ with their moment and ML estimators in Equation (20), respectively. Therefore, the test statistics have the form of:
T_1 = \frac{\bar{X} - \xi_0 - b\hat{\eta}\hat{\delta}}{\sqrt{\frac{1}{n}\hat{\eta}^2(1-b^2\hat{\delta}^2)}},\qquad
T_2 = \frac{\bar{X} - \xi_0 - b\tilde{\eta}\tilde{\delta}}{\sqrt{\frac{1}{n}\tilde{\eta}^2(1-b^2\tilde{\delta}^2)}}.
As the exact distributions of T₁ and T₂ are unknown, exact tests cannot be established, and an approximate test can instead be constructed from the central limit theorem. However, Monte Carlo simulations indicate that the Type I error probabilities of this approximate approach exceed the nominal significance level in most cases; that is, the approach is liberal, which may be attributed to the crudeness of the approximate distribution. In view of this, we propose Bootstrap test statistics for hypothesis testing problem (19).
Under H₀ in (19), we define X_{BM1}, …, X_{BMn} as the Bootstrap samples from SN(ξ₀, η̂*², λ̂*), where (X̄_BM, S_{BM2}, S_{BM3}) denote the sample mean and the second and third central moments of the Bootstrap sample and (x̄_BM, s_{BM2}, s_{BM3}) are their observed values. By Theorem 1, the moment estimators of (ξ, η², δ) have the form of:
\hat{\xi}_{BM} = \bar{X}_{BM} - cS_{BM3}^{1/3},\quad \hat{\eta}_{BM}^2 = S_{BM2} + c^2S_{BM3}^{2/3},\quad \hat{\delta}_{BM} = \frac{cS_{BM3}^{1/3}}{b\big(S_{BM2} + c^2S_{BM3}^{2/3}\big)^{1/2}}.
Let ( ξ ^ B M * , η ^ B M * 2 , δ ^ B M * ) be the moment estimates corresponding to ( ξ ^ B M , η ^ B M 2 , δ ^ B M ) . Let X B L 1 , , X B L n be the Bootstrap samples from S N ( ξ 0 , η ˜ * 2 , λ ˜ * ) with the sample mean X ¯ B L . Then the ML estimators ( ξ ˜ B L , η ˜ B L 2 , δ ˜ B L ) can be obtained by Theorem 3. Similar to T 1 and T 2 , the Bootstrap test statistics can be expressed as:
T_{B1} = \frac{\bar{X}_{BM} - \xi_0 - b\hat{\eta}_{BM}\hat{\delta}_{BM}}{\sqrt{\frac{1}{n}\hat{\eta}_{BM}^2(1-b^2\hat{\delta}_{BM}^2)}},\qquad
T_{B2} = \frac{\bar{X}_{BL} - \xi_0 - b\tilde{\eta}_{BL}\tilde{\delta}_{BL}}{\sqrt{\frac{1}{n}\tilde{\eta}_{BL}^2(1-b^2\tilde{\delta}_{BL}^2)}}.
Then the Bootstrap p-values for hypothesis testing problem (19) are defined as:
p i = 2 min { P ( T B i > t i ) , P ( T B i < t i ) } ,   i = 1 , 2 ,
where t 1 and t 2 are the observed values of T 1 and T 2 , respectively. The null hypothesis H 0 in (19) is rejected whenever the above p-values are less than the nominal significance level of α , which means that the difference between ξ and ξ 0 is significant.
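A Python sketch of the moment-estimator Bootstrap test (T₁, T_B1 and p₁); it reuses moment_estimates() and the constant b from the sketch after Theorem 1, and the number of Bootstrap replications is an illustrative choice.

```python
import numpy as np
from scipy.stats import skewnorm

def t_statistic(x, xi0):
    """Observed value of T1 (or T_B1 for a Bootstrap sample) at the null value xi0."""
    x = np.asarray(x, dtype=float)
    _, eta2_hat, _, delta_hat = moment_estimates(x)
    num = x.mean() - xi0 - b * np.sqrt(eta2_hat) * delta_hat
    den = np.sqrt(eta2_hat * (1 - b ** 2 * delta_hat ** 2) / len(x))
    return num / den

def bootstrap_p_value(x, xi0, n_boot=2500, seed=0):
    """Bootstrap p-value p1 for H0: xi = xi0 based on the moment estimators."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    t1 = t_statistic(x, xi0)
    _, eta2_hat, lambda_hat, _ = moment_estimates(x)
    tb = np.empty(n_boot)
    for k in range(n_boot):
        xb = skewnorm.rvs(a=lambda_hat, loc=xi0, scale=np.sqrt(eta2_hat),
                          size=len(x), random_state=rng)
        tb[k] = t_statistic(xb, xi0)
    return 2 * min(np.mean(tb > t1), np.mean(tb < t1))
```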
Remark 2.
According to [41], and similar to T_B1 and T_B2, the Bootstrap pivot quantities of ξ can be constructed as T_B1* and T_B2* based on the moment estimator and the ML estimator, respectively. Suppose that T_B1*(β) is the 100β-th empirical percentile of T_B1*. Then the 100(1 − α)% Bootstrap confidence interval for ξ is given by:
\left[\bar{x} - b\hat{\eta}^*\hat{\delta}^* - T_{B1}^*(1-\alpha/2)\sqrt{\tfrac{1}{n}\hat{\eta}^{*2}(1-b^2\hat{\delta}^{*2})},\ \ \bar{x} - b\hat{\eta}^*\hat{\delta}^* - T_{B1}^*(\alpha/2)\sqrt{\tfrac{1}{n}\hat{\eta}^{*2}(1-b^2\hat{\delta}^{*2})}\right].
Similarly, a 100 ( 1 α ) % Bootstrap confidence interval for ξ based on T B 2 * is also obtained.
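Remark 2 leaves the exact form of the pivot T_B1* implicit; the sketch below (an assumption on our part) takes it to have the same form as T_B1, with the Bootstrap samples drawn at the moment estimate of ξ instead of ξ₀, which yields the interval displayed above. It reuses moment_estimates(), t_statistic() and b from the earlier sketches.

```python
import numpy as np
from scipy.stats import skewnorm

def bootstrap_ci(x, alpha=0.05, n_boot=2500, seed=0):
    """Bootstrap-t confidence interval for xi based on the moment estimators."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    n = len(x)
    xi_hat, eta2_hat, lambda_hat, delta_hat = moment_estimates(x)
    se = np.sqrt(eta2_hat * (1 - b ** 2 * delta_hat ** 2) / n)
    pivots = np.empty(n_boot)
    for k in range(n_boot):
        xb = skewnorm.rvs(a=lambda_hat, loc=xi_hat, scale=np.sqrt(eta2_hat),
                          size=n, random_state=rng)
        pivots[k] = t_statistic(xb, xi_hat)     # pivot assumed to mirror T_B1
    q_hi, q_lo = np.quantile(pivots, [1 - alpha / 2, alpha / 2])
    # Note: x.mean() - b*eta_hat*delta_hat equals xi_hat, the centre used in Remark 2.
    return xi_hat - q_hi * se, xi_hat - q_lo * se
```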

3. Inference on the Location Parameters of Two Skew-Normal Populations

Let X_{i1}, …, X_{i n_i} be random samples from X_i ∼ SN(ξ_i, η_i², λ_i), i = 1, 2, and suppose all samples are mutually independent. The sample mean and the second and third central moments of each sample can be expressed respectively as:
\bar{X}_i = \frac{1}{n_i}\sum_{j=1}^{n_i}X_{ij},\quad S_{i2} = \frac{1}{n_i}\sum_{j=1}^{n_i}(X_{ij}-\bar{X}_i)^2,\quad S_{i3} = \frac{1}{n_i}\sum_{j=1}^{n_i}(X_{ij}-\bar{X}_i)^3,\quad i=1,2.
Firstly, we consider the estimation problems of ( ξ 1 , η 1 2 , λ 1 ) and ( ξ 2 , η 2 2 , λ 2 ) of two skew-normal populations in this section. By Theorem 1, the moment estimators of ( ξ 1 , η 1 2 , λ 1 ) and ( ξ 2 , η 2 2 , λ 2 ) can be given by:
\hat{\xi}_i = \bar{X}_i - cS_{i3}^{1/3},\quad \hat{\eta}_i^2 = S_{i2} + c^2S_{i3}^{2/3},\quad \hat{\lambda}_i = \frac{\hat{\delta}_i}{(1-\hat{\delta}_i^2)^{1/2}},\quad i=1,2,
where \hat{\delta}_i = cS_{i3}^{1/3}/\big[b(S_{i2} + c^2S_{i3}^{2/3})^{1/2}\big]. By Theorem 3, the ML estimators of (ξ_i, η_i², λ_i, δ_i) can be written as (ξ̃_i, η̃_i², λ̃_i, δ̃_i), i = 1, 2. Let (η̂_i*², λ̂_i*, δ̂_i*) be the moment estimates corresponding to (η̂_i², λ̂_i, δ̂_i) and (η̃_i*², λ̃_i*, δ̃_i*) be the ML estimates corresponding to (η̃_i², λ̃_i, δ̃_i), i = 1, 2.
Next, the problem of interest here is to test:
H_0: \xi_1 = \xi_2 = \xi^* \quad \text{vs.}\quad H_1: \xi_1 \neq \xi_2,
where ξ * is a specified value. By the central limit theorem, under H 0 in (27) we have:
T^* = \frac{(\bar{X}_1 - \bar{X}_2) - b(\eta_1\delta_1 - \eta_2\delta_2)}{\sqrt{\frac{1}{n_1}\eta_1^2(1-b^2\delta_1^2) + \frac{1}{n_2}\eta_2^2(1-b^2\delta_2^2)}}.
If η_i and δ_i, i = 1, 2, are known, then T* is a natural test statistic for hypothesis testing problem (27). Since η_i and δ_i are often unknown in practice, test statistics can be obtained by replacing them with their moment estimators and ML estimators, respectively. They are given by:
T_3 = \frac{(\bar{X}_1 - \bar{X}_2) - b(\hat{\eta}_1\hat{\delta}_1 - \hat{\eta}_2\hat{\delta}_2)}{\sqrt{\frac{1}{n_1}\hat{\eta}_1^2(1-b^2\hat{\delta}_1^2) + \frac{1}{n_2}\hat{\eta}_2^2(1-b^2\hat{\delta}_2^2)}},\qquad
T_4 = \frac{(\bar{X}_1 - \bar{X}_2) - b(\tilde{\eta}_1\tilde{\delta}_1 - \tilde{\eta}_2\tilde{\delta}_2)}{\sqrt{\frac{1}{n_1}\tilde{\eta}_1^2(1-b^2\tilde{\delta}_1^2) + \frac{1}{n_2}\tilde{\eta}_2^2(1-b^2\tilde{\delta}_2^2)}}.
Like those of T₁ and T₂, the exact distributions of T₃ and T₄ are unknown. Hence, the Bootstrap approach is used to construct test statistics for hypothesis testing problem (27).
Under H 0 in (27), let X B M i 1 , , X B M i n i denote the Bootstrap samples from S N ( ξ * , η ^ i * 2 , λ ^ i * ) with the sample mean X ¯ B M i , i = 1 , 2 . By Theorem 1, the moment estimators of ( ξ i , η i 2 , δ i ) are ( ξ ^ B M i , η ^ B M i 2 , δ ^ B M i ) , i = 1 , 2 . Likewise, let X B L i 1 , , X B L i n i denote the Bootstrap samples from S N ( ξ * , η ˜ i * 2 , λ ˜ i * ) with the sample mean X ¯ B L i and the ML estimators of ( ξ i , η i 2 , δ i ) be ( ξ ˜ B L i , η ˜ B L i 2 , δ ˜ B L i ) by Theorem 3, i = 1 , 2 . Based on T 3 and T 4 , the Bootstrap test statistics are defined as:
T_{B3} = \frac{(\bar{X}_{BM1} - \bar{X}_{BM2}) - b(\hat{\eta}_{BM1}\hat{\delta}_{BM1} - \hat{\eta}_{BM2}\hat{\delta}_{BM2})}{\sqrt{\frac{1}{n_1}\hat{\eta}_{BM1}^2(1-b^2\hat{\delta}_{BM1}^2) + \frac{1}{n_2}\hat{\eta}_{BM2}^2(1-b^2\hat{\delta}_{BM2}^2)}},\qquad
T_{B4} = \frac{(\bar{X}_{BL1} - \bar{X}_{BL2}) - b(\tilde{\eta}_{BL1}\tilde{\delta}_{BL1} - \tilde{\eta}_{BL2}\tilde{\delta}_{BL2})}{\sqrt{\frac{1}{n_1}\tilde{\eta}_{BL1}^2(1-b^2\tilde{\delta}_{BL1}^2) + \frac{1}{n_2}\tilde{\eta}_{BL2}^2(1-b^2\tilde{\delta}_{BL2}^2)}}.
Then the Bootstrap p-values for hypothesis testing problem (27) are:
p i = 2 min { P ( T B i > t i ) , P ( T B i < t i ) } , i = 3 , 4 ,
where t 3 and t 4 are the observed values of T 3 and T 4 , respectively. The null hypothesis H 0 in (27) is rejected whenever the above p-values are less than the nominal significance level of α , which means that the difference between ξ 1 and ξ 2 is significant.
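A Python sketch of the two-sample Bootstrap test based on the moment estimators (T₃, T_B3 and p₃), again reusing moment_estimates() and b from Section 2; ξ* is the common location specified under H₀ in (27).

```python
import numpy as np
from scipy.stats import skewnorm

def t_statistic_2(x1, x2):
    """Observed value of T3 (or T_B3 for Bootstrap samples)."""
    x1, x2 = np.asarray(x1, dtype=float), np.asarray(x2, dtype=float)
    _, eta2_1, _, delta_1 = moment_estimates(x1)
    _, eta2_2, _, delta_2 = moment_estimates(x2)
    num = (x1.mean() - x2.mean()
           - b * (np.sqrt(eta2_1) * delta_1 - np.sqrt(eta2_2) * delta_2))
    var = (eta2_1 * (1 - b ** 2 * delta_1 ** 2) / len(x1)
           + eta2_2 * (1 - b ** 2 * delta_2 ** 2) / len(x2))
    return num / np.sqrt(var)

def bootstrap_p_value_2(x1, x2, xi_star, n_boot=2500, seed=0):
    """Bootstrap p-value p3 for H0: xi_1 = xi_2 = xi_star."""
    rng = np.random.default_rng(seed)
    x1, x2 = np.asarray(x1, dtype=float), np.asarray(x2, dtype=float)
    t3 = t_statistic_2(x1, x2)
    _, eta2_1, lam_1, _ = moment_estimates(x1)
    _, eta2_2, lam_2, _ = moment_estimates(x2)
    tb = np.empty(n_boot)
    for k in range(n_boot):
        xb1 = skewnorm.rvs(a=lam_1, loc=xi_star, scale=np.sqrt(eta2_1),
                           size=len(x1), random_state=rng)
        xb2 = skewnorm.rvs(a=lam_2, loc=xi_star, scale=np.sqrt(eta2_2),
                           size=len(x2), random_state=rng)
        tb[k] = t_statistic_2(xb1, xb2)
    return 2 * min(np.mean(tb > t3), np.mean(tb < t3))
```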
Remark 3.
Similar to Remark 2, the Bootstrap pivotal quantities of ξ₁ − ξ₂ are constructed as T_B3* and T_B4* based on the moment estimators and ML estimators, respectively. Let T_B3*(β) be the 100β-th empirical percentile of T_B3*. The 100(1 − α)% Bootstrap confidence interval for ξ₁ − ξ₂ is defined as:
\left[(\bar{x}_1-\bar{x}_2) - b(\hat{\eta}_1^*\hat{\delta}_1^* - \hat{\eta}_2^*\hat{\delta}_2^*) - T_{B3}^*(1-\alpha/2)\sqrt{\tfrac{1}{n_1}\hat{\eta}_1^{*2}(1-b^2\hat{\delta}_1^{*2}) + \tfrac{1}{n_2}\hat{\eta}_2^{*2}(1-b^2\hat{\delta}_2^{*2})},\ \ (\bar{x}_1-\bar{x}_2) - b(\hat{\eta}_1^*\hat{\delta}_1^* - \hat{\eta}_2^*\hat{\delta}_2^*) - T_{B3}^*(\alpha/2)\sqrt{\tfrac{1}{n_1}\hat{\eta}_1^{*2}(1-b^2\hat{\delta}_1^{*2}) + \tfrac{1}{n_2}\hat{\eta}_2^{*2}(1-b^2\hat{\delta}_2^{*2})}\right].
Similarly, a 100 ( 1 α ) % Bootstrap confidence interval for ξ 1 ξ 2 based on T B 4 * is also obtained.

4. Simulation Results and Discussion

In this section, Monte Carlo simulation is used to numerically investigate the properties of the above hypothesis testing approaches in terms of Type I error rates and powers. The Type I error is the error of rejecting the null hypothesis when it is actually true, and it measures whether a testing approach is liberal or conservative. For convenience, we only provide the steps of the Bootstrap approach based on the moment estimators for hypothesis testing problem (19).
Step 1: For a given (n, ξ₀, η², λ), generate a group of random samples x₁, …, xₙ from the skew-normal distribution, and compute (x̄, s₂, s₃) by Equation (2).
Step 2: By Theorem 1, compute the moment estimates of (ξ, η², δ), denoted by (ξ̂*, η̂*², δ̂*). Then obtain the observed value t₁ of T₁ by Equation (21).
Step 3: Under H₀ in (19), generate the Bootstrap samples x_{BMi} ∼ SN(ξ₀, η̂*², λ̂*), i = 1, …, n, and compute (x̄_BM, s_{BM2}, s_{BM3}).
Step 4: By Theorem 1, compute the moment estimates of (ξ, η², δ) from the Bootstrap samples, denoted by (ξ̂_BM*, η̂_BM*², δ̂_BM*). Then obtain T_B1 by Equation (23).
Step 5: Repeat Steps 3–4 n₁ times and compute p₁ by Equation (25). If p₁ < 0.05, set Q = 1; otherwise, Q = 0.
Step 6: Repeat Steps 1–5 n₂ times to obtain Q₁, …, Q_{n₂}. Then the Type I error probability is (1/n₂)Σ_{i=1}^{n₂} Q_i.
Based on the above steps, the power of hypothesis testing problem (19) can be obtained similarly.
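A compact Python sketch of Steps 1–6 for the Type I error probability of the moment-estimator Bootstrap test; it reuses bootstrap_p_value() from Section 2. With n₁ = n₂ = 2500 the loop is computationally heavy, so smaller replication numbers may be preferred for a quick check.

```python
import numpy as np
from scipy.stats import skewnorm

def type_one_error(n=50, xi0=2.0, eta2=1.0, lam=4.0,
                   n_inner=2500, n_outer=2500, alpha=0.05, seed=0):
    """Empirical Type I error of the Bootstrap test p1 over n_outer replications."""
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(n_outer):
        x = skewnorm.rvs(a=lam, loc=xi0, scale=np.sqrt(eta2),
                         size=n, random_state=rng)
        p1 = bootstrap_p_value(x, xi0, n_boot=n_inner,
                               seed=int(rng.integers(2 ** 31)))
        rejections += (p1 < alpha)
    return rejections / n_outer
```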
In this simulation, the parameters and sample sizes are set as follows. Firstly, the nominal significance level is 5%, and the numbers of inner loops n₁ and outer loops n₂ are both 2500. Secondly, for hypothesis testing problem (19), we set ξ₀ = 2, η² = 1, λ = (−8, −7.5, −7, −6.5, −6, −5.5, −5, −4.5, −4, 4, 4.5, 5, 5.5, 6, 6.5, 7, 7.5, 8), and n = (40, 45, 50, 55, 60, 70, 100, 150, 200). Finally, for hypothesis testing problem (27), we take ξ* = 2, (η₁², η₂²) = (0.1, 0.3), (0.2, 0.5), (0.3, 0.7), (0.4, 0.9), (0.5, 1.0), (λ₁, λ₂) = (4, 5), (6, 8), and (n₁, n₂) = ((40, 50), (50, 50), (50, 60), (60, 70), (70, 80), (80, 90), (90, 120), (120, 150), (150, 200)).
For hypothesis testing problem (19), Table A1, Table A2 and Table A3 in Appendix A present the simulated Type I error probabilities and powers of the proposed approaches. Since the results are similar for positive and negative skewness, only the positive case is analyzed below. From Table A1, the Type I error probabilities based on p₁ are close to those based on p₂ in most parameter settings. Specifically, for small sample sizes and small skewness parameters the two approaches are slightly liberal, while for large sample sizes both control the Type I error probabilities well. Furthermore, as the sample size increases, the actual levels of the two approaches approach the nominal significance level of 5%. From Table A2 and Table A3, the powers of both approaches increase with the sample size, but the approach based on p₁ always performs better than that based on p₂.
For hypothesis testing problem (27), Table A4, Table A5 and Table A6 in Appendix A give the simulated Type I error probabilities and powers of the proposed approaches. From Table A4, the approach based on p₃ is slightly liberal when the sample size and skewness parameter are small, but it controls the Type I error probabilities effectively under the other parameter settings; as the sample size increases, its actual level approaches the nominal significance level of 5%. The approach based on p₄ is conservative under most parameter settings. From Table A5 and Table A6, the powers of the approach based on p₃ are clearly better than those based on p₄ in most cases.
In a word, for hypothesis testing problems (19) and (27), the proposed Bootstrap approaches perform satisfactorily in terms of Type I error probability and power in most cases, whether based on the moment estimators or the ML estimators. It is well known that the ML estimator depends on the choice of initial value, which may affect its accuracy. Hence, the Bootstrap test based on the moment estimator is better than that based on the ML estimator in most situations, which provides a useful approach for inference on the location parameter in the real data examples.
Remark 4.
For hypothesis testing problem (27), we only provide the simulation results in the case of positive skewness. When the skewness parameter is negative, the results are similar to those of positive skewness parameter, so we omit them.

5. Illustrative Examples

In order to verify the rationality and validity of the proposed approaches, we apply them in this section to the examples of LAI, carbon fibers' strength and RBC counts in athletes.
Example 1.
The above approaches are applied to the LAI data of a Robinia pseudoacacia plantation in Huaiping Forest Farm, Yongshou County, Shanxi Province (see Ye et al. [42]). From Figure 1 and Figure 2, the distribution of LAI does not appear normal but shows asymmetric, right-skewed characteristics. To confirm this, we first test the normality of the data: the p-values from the R output of the Shapiro-Wilk test, Anderson-Darling test and Lilliefors test are 0.0007, 0.0014 and 0.0458, respectively. Hence, the LAI is not normally distributed at the nominal significance level of 5%. Further, we check whether the distribution of LAI is skew-normal by the Chi-square goodness-of-fit test. By calculation, χ² = 5.0929 < χ₂²(0.95) = 5.9915, so the LAI can be regarded as following the skew-normal distribution SN(ξ, η², λ) at the nominal significance level of 5%. Based on the method of moment estimation, the LAI is approximately distributed as SN(1.2585, 1.8332², 2.7966), and its density curve is given in Figure 2.
To illustrate the proposed approach for hypothesis testing problem (19), we take ξ₀ to be a value near the moment estimate of ξ; namely, consider the hypothesis testing problem:
H_0: \xi = 2 \quad \text{vs.}\quad H_1: \xi \neq 2.
Based on the moment and ML estimators, the p-values of the Bootstrap tests are 0.02584 and 0.00097, respectively. Hence, the null hypothesis H₀ is rejected at the nominal significance level of 5%; that is, the location parameter of LAI differs significantly from 2.
Example 2.
Kundu and Gupta [43] presented a data set of the strength (measured in GPa) of single carbon fibers. The Shapiro-Wilk test, Anderson-Darling test and Lilliefors test are used to test the normality of the data; the resulting p-values are 0.0108, 0.0109 and 0.0254, respectively. The P-P plot and the histogram of the data are given in Figure 3 and Figure 4. Furthermore, the Chi-square goodness-of-fit test is used to check whether the distribution of the data is skew-normal; namely, we set H₀: the carbon fibers' strength data are skew-normally distributed. We obtain χ² = 1.1907 < χ₃²(0.95) = 7.8147, so this null hypothesis is not rejected at the nominal significance level of 5%. Similar to Example 1, the carbon fibers' strength data are considered to follow the skew-normal distribution SN(2.0917, 0.9230², 2.9668).
Consider the hypothesis testing problem:
H_0: \xi = 2.1 \quad \text{vs.}\quad H_1: \xi \neq 2.1.
By the moment and ML estimators, the p-values of Bootstrap test are 0.9433 and 0.0814, respectively. Therefore, the null hypothesis is not rejected at the nominal significance level of 5%.
Example 3.
The RBC count data collected by the Australian Institute of Sport for 102 male and 100 female athletes are analyzed in this example (see Cook and Weisberg [44]). Similar to Example 2, the Shapiro-Wilk test, Anderson-Darling test and Lilliefors test are used to test the normality of the RBC counts of male and female athletes. The p-values for male athletes are 0.0000, 0.0019 and 0.0025, respectively, while those for female athletes are 0.0065, 0.0131 and 0.0181, respectively; the corresponding plots are shown in Figure 5 and Figure 6. Therefore, at the nominal significance level of 5%, the above tests all reject the null hypothesis that the RBC counts of male and female athletes follow normal distributions. Furthermore, to verify the skew-normality of the RBC counts, we test the null hypothesis H₀: the RBC count is skew-normally distributed. By calculation, χ²_m = 2.0527 < χ₁²(0.95) = 3.8415 and χ²_f = 0.3469 < χ₂²(0.95) = 5.9915, which means that the RBC counts of male and female athletes follow the skew-normal distributions SN(ξ_m, η_m², λ_m) and SN(ξ_f, η_f², λ_f), respectively, at the nominal significance level of 5%.
Consider the hypothesis testing problem:
H_0: \xi_m = \xi_f \quad \text{vs.}\quad H_1: \xi_m \neq \xi_f.
The p-values of the Bootstrap tests based on the moment estimators and ML estimators are 0.00071 and 0.00153, respectively. Therefore, the null hypothesis H₀ is rejected at the nominal significance level of 5%; that is, the location parameters of the RBC counts of male and female athletes differ significantly.

6. Conclusions

By using the centered parameterization and Bootstrap approaches, we study the hypothesis testing and interval estimation problems of the location parameters of single and two skew-normal populations with unknown scale parameters and skewness parameters. Firstly, the Bootstrap test statistics and Bootstrap confidence intervals for the location parameter of a single population are constructed based on the moment estimators and ML estimators, respectively. Secondly, Bootstrap approaches for the Behrens-Fisher-type testing and interval estimation problems are established for two skew-normal populations. Thirdly, the Monte Carlo simulation results show that the Bootstrap test based on the moment estimator outperforms that based on the ML estimator in most parameter settings, for both the single-population and two-population problems. Finally, the above approaches are applied to the LAI, carbon fibers' strength and RBC count data to verify their rationality and validity. In summary, the Bootstrap approach based on the moment estimator is recommended for inference on the location parameter of the skew-normal population. In the future, we plan to consider the hypothesis testing and confidence interval problems for the location parameter vector of the multivariate skew-normal population and to discuss the applications of the Bootstrap approach to multivariate skew-normal data sets, in order to provide a feasible solution for multivariate skewed data analysis.

Author Contributions

Conceptualization, R.Y. and B.F.; methodology, R.Y. and B.F.; software, B.F.; validation, R.Y. and B.F.; formal analysis, R.Y. and B.F.; investigation, R.Y. and B.F.; data curation, B.F.; writing—original draft preparation, B.F., R.Y. and K.L.; writing—review and editing, R.Y., B.F., W.D., K.L. and Y.L.; supervision, R.Y.; funding acquisition, R.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by National Social Science Foundation of China (Grant No. 21BTJ068).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data that support the findings of this study are available on request from the corresponding author.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Table A1. Simulated Type I error probabilities of hypothesis testing problem (19).

λ      p    n = 40   n = 45   n = 50   n = 55   n = 60   n = 70   n = 100  n = 150
4      p1   0.0672   0.0544   0.0540   0.0528   0.0512   0.0480   0.0452   0.0484
       p2   0.0608   0.0556   0.0484   0.0492   0.0464   0.0456   0.0404   0.0460
4.5    p1   0.0588   0.0456   0.0460   0.0476   0.0440   0.0416   0.0424   0.0440
       p2   0.0520   0.0504   0.0448   0.0428   0.0416   0.0440   0.0380   0.0420
5      p1   0.0512   0.0396   0.0404   0.0408   0.0404   0.0368   0.0388   0.0412
       p2   0.0472   0.0476   0.0408   0.0376   0.0404   0.0400   0.0352   0.0408
5.5    p1   0.0480   0.0376   0.0376   0.0356   0.0380   0.0336   0.0380   0.0372
       p2   0.0444   0.0416   0.0372   0.0356   0.0388   0.0380   0.0328   0.0396
6      p1   0.0444   0.0356   0.0360   0.0344   0.0360   0.0316   0.0356   0.0352
       p2   0.0428   0.0384   0.0344   0.0352   0.0368   0.0364   0.0304   0.0384
6.5    p1   0.0420   0.0332   0.0328   0.0332   0.0348   0.0312   0.0348   0.0332
       p2   0.0408   0.0364   0.0344   0.0324   0.0360   0.0352   0.0300   0.0360
7      p1   0.0396   0.0324   0.0316   0.0316   0.0336   0.0296   0.0340   0.0324
       p2   0.0388   0.0360   0.0332   0.0312   0.0356   0.0340   0.0288   0.0360
7.5    p1   0.0364   0.0300   0.0304   0.0304   0.0332   0.0296   0.0320   0.0312
       p2   0.0376   0.0360   0.0328   0.0312   0.0348   0.0332   0.0280   0.0356
8      p1   0.0360   0.0288   0.0304   0.0296   0.0324   0.0288   0.0308   0.0308
       p2   0.0360   0.0344   0.0328   0.0304   0.0340   0.0332   0.0272   0.0356
−4     p1   0.0680   0.0604   0.0596   0.0532   0.0528   0.0484   0.0488   0.0524
       p2   0.0576   0.0604   0.0520   0.0484   0.0488   0.0460   0.0380   0.0460
−4.5   p1   0.0608   0.0524   0.0492   0.0456   0.0468   0.0448   0.0440   0.0464
       p2   0.0512   0.0532   0.0444   0.0436   0.0424   0.0420   0.0356   0.0460
−5     p1   0.0540   0.0480   0.0448   0.0408   0.0428   0.0396   0.0424   0.0420
       p2   0.0452   0.0472   0.0412   0.0404   0.0400   0.0388   0.0340   0.0428
−5.5   p1   0.0452   0.0452   0.0400   0.0376   0.0376   0.0372   0.0392   0.0392
       p2   0.0400   0.0424   0.0388   0.0384   0.0396   0.0364   0.0312   0.0412
−6     p1   0.0412   0.0408   0.0368   0.0348   0.0352   0.0336   0.0376   0.0364
       p2   0.0368   0.0416   0.0364   0.0364   0.0376   0.0360   0.0296   0.0400
−6.5   p1   0.0400   0.0376   0.0344   0.0320   0.0332   0.0328   0.0352   0.0340
       p2   0.0344   0.0404   0.0356   0.0348   0.0364   0.0352   0.0284   0.0384
−7     p1   0.0372   0.0372   0.0324   0.0312   0.0328   0.0300   0.0340   0.0316
       p2   0.0332   0.0400   0.0356   0.0328   0.0348   0.0348   0.0276   0.0368
−7.5   p1   0.0352   0.0364   0.0312   0.0308   0.0308   0.0292   0.0316   0.0308
       p2   0.0320   0.0392   0.0344   0.0320   0.0344   0.0348   0.0272   0.0368
−8     p1   0.0344   0.0348   0.0312   0.0300   0.0296   0.0292   0.0304   0.0296
       p2   0.0316   0.0380   0.0336   0.0308   0.0340   0.0344   0.0264   0.0360
Table A2. Simulated powers of hypothesis testing problem (19) (positive skewness).
λ   n   p   ξ = 1.9   1.8   1.7   1.6   1.5   1.4
440 p 1 0.15400.29360.45040.61840.76240.8556
p 2 0.12880.20960.26840.31960.37000.4380
50 p 1 0.17760.33880.50760.70280.82800.9016
p 2 0.13680.21840.28600.33760.40600.4780
60 p 1 0.17000.36040.56360.74440.87400.9408
p 2 0.13720.22560.28320.35400.42880.5076
70 p 1 0.18480.38840.59680.78920.90000.9620
p 2 0.13480.21880.28720.35800.44160.5416
100 p 1 0.20520.44240.70960.88720.96160.9904
p 2 0.13760.22120.29800.38320.48920.5972
150 p 1 0.26360.57040.83760.96080.99240.9992
p 2 0.15320.22640.30120.40840.53240.6836
200 p 1 0.29880.66200.91560.98760.99801.0000
p 2 0.14720.23240.33480.45360.59600.7672
4.540 p 1 0.14760.29280.45800.62920.78360.8688
p 2 0.12920.21880.28280.34040.39520.4632
50 p 1 0.17080.34520.52040.71640.84200.9136
p 2 0.14040.23960.31400.36320.43080.5092
60 p 1 0.17080.36560.57800.76440.88880.9484
p 2 0.14280.23800.30800.38360.46640.5456
70 p 1 0.18680.40040.61560.80160.91280.9660
p 2 0.14160.24520.32200.39560.48720.5816
100 p 1 0.21000.46400.72560.90240.96960.9932
p 2 0.15280.24960.33480.42840.53800.6452
150 p 1 0.27200.59280.85280.96840.99400.9996
p 2 0.18160.26440.35240.46600.59920.7288
200 p 1 0.30880.68280.92600.99080.99921.0000
p 2 0.17960.28040.39240.51080.66120.8100
540 p 1 0.13960.29440.46600.64040.78960.8768
p 2 0.12840.22720.29920.35840.41800.4964
50 p 1 0.16680.34720.53320.72680.85560.9260
p 2 0.14000.25240.33000.39000.45680.5380
60 p 1 0.17000.37360.58680.77440.89480.9532
p 2 0.14680.25720.33480.40760.49080.5712
70 p 1 0.18440.40640.62880.81760.92560.9732
p 2 0.14760.26880.35040.42240.51560.6080
100 p 1 0.21400.47920.73960.91080.97560.9936
p 2 0.16200.27800.37240.46440.57200.6788
150 p 1 0.27800.60880.86240.97120.99561.0000
p 2 0.19920.29560.39160.50880.64160.7600
200 p 1 0.32080.69680.93400.99120.99921.0000
p 2 0.21040.31960.43160.55640.71040.8356
5.540 p 1 0.13720.29800.47320.65320.79720.8852
p 2 0.12720.23280.31560.37320.43480.5136
50 p 1 0.16400.35400.54280.74360.86200.9348
p 2 0.14240.26080.34200.40840.48040.5568
60 p 1 0.17000.38000.59480.78480.90160.9604
p 2 0.14960.26880.35560.43120.50920.5988
70 p 1 0.18400.41360.63880.82520.93080.9752
p 2 0.15040.28640.36960.44560.53320.6268
100 p 1 0.21480.49040.74760.91920.97840.9952
p 2 0.16720.30320.39840.49520.60080.7016
150 p 1 0.28000.62200.87040.97440.99601.0000
p 2 0.21280.31800.42280.54520.67920.7908
200 p 1 0.33200.71480.93800.99240.99921.0000
p 2 0.23360.34960.46520.59320.74400.8540
640 p 1 0.13560.30080.47720.66280.80400.8892
p 2 0.12360.23680.32360.38680.44720.5264
50 p 1 0.16200.35920.54960.74680.86880.9416
p 2 0.14160.27040.35600.42360.49640.5768
60 p 1 0.17000.38480.60400.79320.90720.9648
p 2 0.14920.28000.36760.44280.52160.6156
70 p 1 0.18400.42160.64880.83080.93640.9804
p 2 0.15320.29800.38480.46600.55000.6440
100 p 1 0.21400.49680.75720.92240.98240.9960
p 2 0.16880.32320.41840.51880.62080.7212
150 p 1 0.28160.62960.87840.97760.99641.0000
p 2 0.22400.34160.44960.57480.70480.8116
200 p 1 0.34160.72320.94200.99320.99961.0000
p 2 0.25240.37880.49320.62960.77320.8676
6.540 p 1 0.13360.30400.48160.66840.81000.8940
p 2 0.12080.24000.33280.40080.46160.5392
50 p 1 0.15960.36280.55480.75280.87320.9448
p 2 0.14360.27640.36160.43120.50640.5896
60 p 1 0.16760.39000.60800.79920.91000.9680
p 2 0.15160.29160.38360.45840.53840.6308
70 p 1 0.18200.42640.65640.83480.93960.9828
p 2 0.15280.30520.39880.47800.56560.6596
100 p 1 0.21440.50480.76120.92560.98320.9960
p 2 0.17040.33640.43280.53600.64200.7320
150 p 1 0.28360.63680.88520.97840.99721.0000
p 2 0.23200.35880.46760.59720.72480.8232
200 p 1 0.34880.72960.94600.99440.99961.0000
p 2 0.26280.40160.51800.65720.78960.8840
740 p 1 0.13240.30520.48560.67400.81400.8984
p 2 0.11840.24400.34000.40880.46920.5472
50 p 1 0.15880.36680.56080.75880.87560.9468
p 2 0.14320.28040.37040.44040.51480.6016
60 p 1 0.16760.39320.61440.80520.91160.9712
p 2 0.15120.30280.39280.46880.54960.6396
70 p 1 0.18000.43040.66160.84040.94200.9844
p 2 0.15280.31600.40720.48760.58000.6736
100 p 1 0.21680.51120.76520.92840.98360.9964
p 2 0.17080.34760.44320.54480.65200.7468
150 p 1 0.28560.64280.88680.97880.99761.0000
p 2 0.23520.37400.48600.61560.74160.8360
200 p 1 0.35120.73520.94960.99520.99961.0000
p 2 0.27600.41680.53240.67240.80320.8956
7.540 p 1 0.13040.30480.48760.67640.81760.9000
p 2 0.11640.24840.34840.41480.47800.5560
50 p 1 0.15840.36800.56680.76320.87760.9492
p 2 0.14240.28560.38000.45000.52560.6136
60 p 1 0.16600.39680.61880.81000.91360.9740
p 2 0.15200.30840.39960.47560.55880.6500
70 p 1 0.17960.43440.66640.84480.94360.9848
p 2 0.15120.32120.41400.49600.59160.6844
100 p 1 0.21600.51520.77160.92880.98520.9964
p 2 0.17240.35720.45760.55560.66160.7572
150 p 1 0.28640.64840.89000.97920.99761.0000
p 2 0.24120.38560.50120.63080.75320.8456
200 p 1 0.35360.73880.95160.99560.99961.0000
p 2 0.28440.42880.54800.68640.81000.9016
840 p 1 0.12960.30640.49040.68080.82240.9024
p 2 0.11480.24920.35000.41800.48480.5648
50 p 1 0.15800.36920.56840.76800.88360.9500
p 2 0.14240.28880.38480.45880.53400.6196
60 p 1 0.16480.40080.62280.81280.91640.9756
p 2 0.15120.31200.40520.48520.56600.6560
70 p 1 0.17920.43920.67240.84720.94560.9856
p 2 0.15200.32760.42160.50280.59960.6916
100 p 1 0.21520.51880.77520.93160.98560.9968
p 2 0.17280.36960.46720.56560.66800.7644
150 p 1 0.28720.65200.89720.98080.99761.0000
p 2 0.24440.39480.51080.64200.75920.8532
200 p 1 0.35640.74400.95320.99640.99961.0000
p 2 0.28960.43680.56160.69640.81920.9096
Table A3. Simulated powers of hypothesis testing problem (19) (negative skewness).
λ   n   p   ξ = 2.1   2.2   2.3   2.4   2.5   2.6
−440 p 1 0.15920.29800.46000.63440.76960.8628
p 2 0.12920.21000.26520.30600.36600.4392
50 p 1 0.17720.33440.51160.69920.83560.9100
p 2 0.14200.22680.27960.33360.39760.4832
60 p 1 0.17720.35880.56400.74720.87800.9404
p 2 0.14080.22440.29160.35240.42960.5120
70 p 1 0.18760.38840.60400.79200.90160.9620
p 2 0.12960.21880.29160.36800.44200.5280
100 p 1 0.20640.44600.69720.87640.95880.9908
p 2 0.14120.22560.29760.38240.48000.5872
150 p 1 0.26160.57000.83720.96480.99120.9992
p 2 0.15600.23120.30520.41880.54160.6828
200 p 1 0.29560.64960.91720.98880.99881.0000
p 2 0.14880.23560.33960.45080.58720.7512
−4.540 p 1 0.15560.30000.46800.64640.78160.8756
p 2 0.12640.21480.28200.33000.39600.4724
50 p 1 0.17040.33680.52160.71480.84640.9248
p 2 0.14040.23200.29680.35840.43400.5160
60 p 1 0.17440.37040.58080.76640.89520.9512
p 2 0.14400.24320.31440.38800.47120.5468
70 p 1 0.18840.40280.62440.80920.91440.9696
p 2 0.13640.24360.32480.40160.47920.5636
100 p 1 0.21320.46280.72120.89320.96760.9932
p 2 0.15200.25720.33520.42040.52480.6356
150 p 1 0.26920.58760.85360.97160.99360.9996
p 2 0.18320.26160.35800.47440.60000.7300
200 p 1 0.30840.67080.92680.99120.99961.0000
p 2 0.18200.27680.39160.50920.65240.8016
−540 p 1 0.15040.29920.47280.65600.79600.8852
p 2 0.12600.22680.30080.35000.41920.4972
50 p 1 0.17040.34440.53280.72800.85720.9336
p 2 0.14040.24440.31240.37840.45440.5452
60 p 1 0.17360.38040.59400.77920.90200.9564
p 2 0.14680.26000.34000.41480.49480.5752
70 p 1 0.18600.41360.63880.81880.92520.9752
p 2 0.14240.26840.34760.43080.51040.5932
100 p 1 0.21880.47640.73280.90200.97240.9944
p 2 0.16160.28360.36800.46080.56560.6712
150 p 1 0.27400.60040.86560.97320.99521.0000
p 2 0.19960.29400.39400.51640.64520.7640
200 p 1 0.32040.69240.93280.99320.99961.0000
p 2 0.20480.31600.42400.55720.70000.8288
−5.540 p 1 0.14240.29720.47520.65880.80160.8896
p 2 0.12200.23120.30960.36240.43520.5168
50 p 1 0.16680.34800.54000.78340.86800.9400
p 2 0.14360.25320.32880.39960.47320.5656
60 p 1 0.17000.38440.60120.78920.90600.9612
p 2 0.14760.27280.35840.43200.51520.5968
70 p 1 0.18640.41880.64640.82600.93160.9796
p 2 0.14600.28520.36600.45280.52880.6216
100 p 1 0.22080.48880.74160.91000.97560.9956
p 2 0.16760.30480.39400.48840.59240.6948
150 p 1 0.27960.61200.87480.97640.99561.0000
p 2 0.21480.32240.42440.55080.67720.7968
200 p 1 0.33000.70320.93720.99400.99961.0000
p 2 0.23080.34960.45680.59720.73360.8564
−640 p 1 0.13880.30000.48160.66520.80800.8976
p 2 0.11960.23840.32280.37480.44720.5296
50 p 1 0.16440.35440.54840.74920.87480.9440
p 2 0.14240.25760.34120.41600.49000.5832
60 p 1 0.16880.38920.60640.79560.91160.9660
p 2 0.15080.28560.37160.44560.52960.6168
70 p 1 0.18600.42200.65440.83280.93760.9808
p 2 0.14680.30200.38120.46880.54880.6436
100 p 1 0.22040.49640.75120.91400.97920.9960
p 2 0.17000.32360.41000.50880.61200.7160
150 p 1 0.28200.62520.88080.97840.99601.0000
p 2 0.22200.34200.45440.57560.70080.8148
200 p 1 0.33800.71600.94160.99400.99961.0000
p 2 0.25080.37320.48200.62640.76320.8748
−6.540 p 1 0.13760.30240.48760.67080.81640.9032
p 2 0.11880.24240.32960.38560.45600.5420
50 p 1 0.16200.35800.55280.75440.88000.9484
p 2 0.14320.26600.35040.42600.50440.5928
60 p 1 0.16720.39160.61040.79960.91640.9684
p 2 0.15240.29600.38320.45880.54320.6304
70 p 1 0.18480.42840.66000.83760.94120.9824
p 2 0.14920.31520.39280.48040.56200.6584
100 p 1 0.21880.50560.76120.91920.98080.9960
p 2 0.17040.33600.42840.53080.62680.7300
150 p 1 0.28320.63600.88760.98000.99721.0000
p 2 0.23120.36240.47200.59600.71720.8328
200 p 1 0.34440.72160.94400.99480.99961.0000
p 2 0.26600.39040.50400.64960.78200.8864
−740 p 1 0.13480.30600.48960.67560.82200.9068
p 2 0.11680.24360.33480.39640.46640.5520
50 p 1 0.16080.36000.55840.76000.88400.9524
p 2 0.12400.24600.33000.39400.47600.5680
60 p 1 0.16640.39520.61320.80440.92080.9704
p 2 0.14440.27280.36200.43840.51520.6056
70 p 1 0.18320.43080.66400.84080.94440.9828
p 2 0.15160.32440.40160.49120.57240.6728
100 p 1 0.21600.51000.76880.92320.98160.9964
p 2 0.17000.34720.44240.54160.64240.7448
150 p 1 0.28360.64160.89080.98120.99761.0000
p 2 0.23680.37880.48840.61400.73960.8404
200 p 1 0.34960.73000.94720.99600.99961.0000
p 2 0.27280.40680.52560.66960.79600.8952
−7.540 p 1 0.13080.30560.49240.67760.82600.9100
p 2 0.11360.24600.33840.40440.47480.5576
50 p 1 0.16040.36280.56240.76280.88600.9528
p 2 0.14480.28120.37320.44480.52120.6160
60 p 1 0.16440.39560.61800.80760.92120.9728
p 2 0.15160.31120.39920.47400.56040.6492
70 p 1 0.18160.43680.67000.84600.94640.9848
p 2 0.15240.32840.41320.50080.58440.6816
100 p 1 0.21520.51520.77400.92800.98240.9964
p 2 0.17160.35520.45400.55000.65400.7568
150 p 1 0.28520.64600.89320.98240.99761.0000
p 2 0.24200.38800.50520.62560.74880.8488
200 p 1 0.35280.73600.95040.99640.99961.0000
p 2 0.28080.42000.54440.68600.80760.9000
−840 p 1 0.13000.30720.49520.68080.82880.9116
p 2 0.11240.24760.34320.41080.48080.5624
50 p 1 0.15920.36520.56560.76480.89080.9552
p 2 0.14440.28560.37720.45080.52920.6252
60 p 1 0.16480.39840.62360.80920.92240.9732
p 2 0.15240.31760.40120.47840.56720.6592
70 p 1 0.18080.43880.67480.84760.94720.9848
p 2 0.15240.33200.42160.50800.59480.6884
100 p 1 0.21480.51880.77840.92920.98240.9968
p 2 0.17280.36040.46520.55880.66760.7652
150 p 1 0.28560.64960.89680.98280.99841.0000
p 2 0.24400.39880.51800.63840.76200.8576
200 p 1 0.35800.73920.95080.99680.99961.0000
p 2 0.28920.42960.55880.69840.81480.9072
Table A4. Simulated Type I error probabilities of hypothesis testing problem (27).

                               λ1 = 4, λ2 = 5       λ1 = 6, λ2 = 8
n1    n2    η1²   η2²         p3       p4           p3       p4
40    50    0.1   0.3         0.0656   0.0676       0.0516   0.0428
            0.2   0.5         0.0688   0.0516       0.0508   0.0268
            0.3   0.7         0.0684   0.0484       0.0512   0.0244
            0.4   0.9         0.0680   0.0476       0.0516   0.0244
            0.5   1.0         0.0652   0.0392       0.0472   0.0208
50    50    0.1   0.3         0.0652   0.0760       0.0484   0.0528
            0.2   0.5         0.0628   0.0540       0.0504   0.0312
            0.3   0.7         0.0612   0.0408       0.0504   0.0232
            0.4   0.9         0.0600   0.0348       0.0484   0.0188
            0.5   1.0         0.0540   0.0264       0.0440   0.0132
50    60    0.1   0.3         0.0596   0.0576       0.0456   0.0520
            0.2   0.5         0.0580   0.0416       0.0460   0.0236
            0.3   0.7         0.0564   0.0352       0.0448   0.0180
            0.4   0.9         0.0552   0.0304       0.0448   0.0168
            0.5   1.0         0.0500   0.0236       0.0376   0.0096
60    70    0.1   0.3         0.0592   0.0492       0.0464   0.0548
            0.2   0.5         0.0536   0.0356       0.0452   0.0300
            0.3   0.7         0.0508   0.0324       0.0448   0.0232
            0.4   0.9         0.0492   0.0300       0.0432   0.0184
            0.5   1.0         0.0444   0.0220       0.0372   0.0112
70    80    0.1   0.3         0.0576   0.0452       0.0488   0.0444
            0.2   0.5         0.0528   0.0324       0.0464   0.0332
            0.3   0.7         0.0512   0.0272       0.0448   0.0288
            0.4   0.9         0.0504   0.0224       0.0444   0.0260
            0.5   1.0         0.0404   0.0184       0.0396   0.0168
80    90    0.1   0.3         0.0512   0.0456       0.0440   0.0732
            0.2   0.5         0.0468   0.0332       0.0432   0.0532
            0.3   0.7         0.0456   0.0280       0.0416   0.0428
            0.4   0.9         0.0452   0.0284       0.0416   0.0372
            0.5   1.0         0.0404   0.0232       0.0380   0.0256
90    120   0.1   0.3         0.0584   0.0300       0.0476   0.0460
            0.2   0.5         0.0540   0.0364       0.0496   0.0428
            0.3   0.7         0.0508   0.0248       0.0480   0.0404
            0.4   0.9         0.0492   0.0272       0.0464   0.0320
            0.5   1.0         0.0436   0.0208       0.0420   0.0232
120   150   0.1   0.3         0.0576   0.0216       0.0460   0.0496
            0.2   0.5         0.0556   0.0220       0.0480   0.0496
            0.3   0.7         0.0560   0.0164       0.0472   0.0444
            0.4   0.9         0.0552   0.0220       0.0472   0.0408
            0.5   1.0         0.0504   0.0172       0.0472   0.0312
150   200   0.1   0.3         0.0548   0.0188       0.0448   0.0384
            0.2   0.5         0.0552   0.0192       0.0488   0.0340
            0.3   0.7         0.0552   0.0160       0.0476   0.0348
            0.4   0.9         0.0548   0.0168       0.0480   0.0368
            0.5   1.0         0.0512   0.0192       0.0484   0.0372
Table A5. Simulated powers of hypothesis testing problem (27) (λ1 = 4, λ2 = 5).
n1   n2   η1²   η2²   p   ξ1 − ξ2 = 0.1   0.15   0.2   0.25   0.3
40500.10.3 p 3 0.58200.83760.94600.98200.9892
p 4 0.57360.82840.93880.97360.9800
0.20.5 p 3 0.32440.49600.66280.80680.8948
p 4 0.21720.36040.54880.70920.8320
0.30.7 p 3 0.21960.33560.45520.56720.6876
p 4 0.12480.19720.28120.39520.5200
0.40.9 p 3 0.16960.25040.33840.42720.5192
p 4 0.09000.13280.18480.24240.3232
0.51.0 p 3 0.13160.19120.26120.34560.4188
p 4 0.06520.08480.11000.14880.1956
50500.10.3 p 3 0.60680.82880.94400.98120.9888
p 4 0.64080.86960.94920.97440.9840
0.20.5 p 3 0.33080.52880.69000.81120.9004
p 4 0.27400.47440.66920.80920.9040
0.30.7 p 3 0.22640.34720.49120.60600.7228
p 4 0.14920.25080.39000.52720.6608
0.40.9 p 3 0.17800.25280.35320.46600.5568
p 4 0.11080.15800.23720.33920.4480
0.51.0 p 3 0.14280.20920.27640.36720.4616
p 4 0.06360.09920.13640.19680.2672
50600.10.3 p 3 0.63760.87160.96320.98760.9944
p 4 0.63880.87520.96240.98360.9888
0.20.5 p 3 0.34400.54800.72200.85400.9276
p 4 0.27360.47840.68120.83520.9184
0.30.7 p 3 0.22520.35520.49880.63280.7476
p 4 0.13000.25360.39000.52640.6448
0.40.9 p 3 0.17480.25600.36240.47120.5800
p 4 0.09440.14320.24560.34840.4496
0.51.0 p 3 0.14160.20240.28000.37320.4696
p 4 0.05240.08200.12160.18080.2640
60700.10.3 p 3 0.65080.90000.97640.99400.9972
p 4 0.65400.91720.97520.98960.9952
0.20.5 p 3 0.35880.55680.74120.88040.9480
p 4 0.28120.52080.74920.89360.9504
0.30.7 p 3 0.24160.37200.51640.64640.7752
p 4 0.15040.27480.44000.61240.7692
0.40.9 p 3 0.18440.27480.37960.49320.5960
p 4 0.08520.15760.26160.39200.5208
0.51.0 p 3 0.13600.21560.30520.39960.4908
p 4 0.05320.08400.14480.22920.3308
70800.10.3 p 3 0.71840.93240.98960.99760.9992
p 4 0.71360.93560.98640.99480.9972
0.20.5 p 3 0.40240.62320.80600.92320.9680
p 4 0.30840.57960.81600.92360.9712
0.30.7 p 3 0.26640.42520.57400.72120.8368
p 4 0.16840.31080.51080.69680.8304
0.40.9 p 3 0.19320.31200.43560.54400.6712
p 4 0.10920.19960.32160.45800.6220
0.51.0 p 3 0.15960.24320.34600.45200.5552
p 4 0.07000.12560.19800.30360.4316
80900.10.3 p 3 0.75200.94840.99521.00001.0000
p 4 0.73360.94960.98880.99801.0000
0.20.5 p 3 0.40800.65360.84000.93920.9784
p 4 0.32720.60640.84120.94160.9756
0.30.7 p 3 0.25480.42800.60600.75920.8684
p 4 0.17600.33520.53720.72520.8680
0.40.9 p 3 0.18720.30040.43560.58080.6992
p 4 0.17640.21360.34600.49800.6480
0.51.0 p 3 0.14640.24120.34960.47400.5940
p 4 0.06720.12320.21720.33360.4740
901200.10.3 p 3 0.83720.97960.99921.00001.0000
p 4 0.74800.97160.99880.99960.9996
0.20.5 p 3 0.46800.73720.91640.97040.9948
p 4 0.28840.60760.86400.96520.9952
0.30.7 p 3 0.31520.49840.68680.85000.9320
p 4 0.14160.30120.53680.75600.8960
0.40.9 p 3 0.23960.36600.51240.66320.7924
p 4 0.09560.18040.31400.49440.6756
0.51.0 p 3 0.18760.30040.41640.55920.6856
p 4 0.06600.11720.20520.34880.5124
1201500.10.3 p 3 0.90240.99720.99961.00001.0000
p 4 0.78360.99600.99961.00001.0000
0.20.5 p 3 0.56520.82600.96000.99480.9988
p 4 0.30080.66720.92040.99040.9988
0.30.7 p 3 0.37800.60320.78800.91800.9732
p 4 0.14240.35640.62520.84080.9516
0.40.9 p 3 0.27080.44400.62040.75960.8740
p 4 0.09800.19960.38200.58360.7728
0.51.0 p 3 0.22440.36680.51960.66800.7912
p 4 0.06560.13680.26840.44800.6396
Table A6. Simulated powers of hypothesis testing problem (27) (λ1 = 6, λ2 = 8).
n1   n2   η1²   η2²   p   ξ1 − ξ2 = 0.1   0.15   0.2   0.25   0.3
40500.10.3 p 3 0.61880.86000.96200.98960.9964
p 4 0.71000.91440.97520.98840.9904
0.20.5 p 3 0.33240.52680.70680.84120.9180
p 4 0.24480.45280.66280.81760.9112
0.30.7 p 3 0.21960.34680.48720.60920.7296
p 4 0.11360.21000.34000.48960.6348
0.40.9 p 3 0.15960.25280.35360.46240.5536
p 4 0.06760.12080.19640.28560.3952
0.51.0 p 3 0.12480.19320.27240.36640.4584
p 4 0.04040.05920.09320.13800.2008
50500.10.3 p 3 0.64440.86280.96080.98800.9940
p 4 0.78840.94000.98080.98880.9920
0.20.5 p 3 0.35080.55960.72800.84840.9268
p 4 0.34480.58920.79320.90480.9584
0.30.7 p 3 0.23480.36640.52440.64920.7556
p 4 0.16200.30280.46520.64160.7752
0.40.9 p 3 0.17760.26720.37800.50120.5980
p 4 0.10080.17320.27560.40160.5256
0.51.0 p 3 0.14280.21840.29840.40320.5044
p 4 0.04720.08240.13600.20080.2908
50600.10.3 p 3 0.68640.90160.97680.99400.9988
p 4 0.81000.96080.98720.99360.9948
0.20.5 p 3 0.37320.58520.76280.88640.9500
p 4 0.34560.61160.82600.93240.9712
0.30.7 p 3 0.23720.38760.54200.68640.7920
p 4 0.15560.30400.49080.71040.8064
0.40.9 p 3 0.17480.27120.39720.51360.6288
p 4 0.08480.17480.28200.42160.5576
0.51.0 p 3 0.13840.21640.31280.41440.5192
p 4 0.03840.07560.12800.20280.2992
60700.10.3 p 3 0.69560.92560.98800.99720.9992
p 4 0.84760.96920.99280.99600.9984
0.20.5 p 3 0.39760.60400.79160.91200.9652
p 4 0.46840.74040.90840.96480.9824
0.30.7 p 3 0.25600.41440.56400.70600.8260
p 4 0.24640.45880.65880.81640.9180
0.40.9 p 3 0.18680.29800.42200.53680.6472
p 4 0.14440.27600.44520.60840.7352
0.51.0 p 3 0.14600.23560.33760.44760.5512
p 4 0.06800.14480.25200.39240.5268
70800.10.3 p 3 0.75880.95360.99600.99961.0000
p 4 0.87280.98000.99480.99840.9992
0.20.5 p 3 0.44080.66960.84640.94520.9844
p 4 0.54760.79400.93000.97840.9912
0.30.7 p 3 0.29080.46160.62480.77200.8788
p 4 0.33360.56760.75000.87760.9460
0.40.9 p 3 0.20960.34240.47400.60200.7228
p 4 0.22520.40000.57680.72000.8344
0.51.0 p 3 0.16720.27560.39320.50440.6140
p 4 0.13280.26160.41000.57160.7064
80900.10.3 p 3 0.79480.96760.99761.00001.0000
p 4 0.89560.98760.99961.00001.0000
0.20.5 p 3 0.45200.70000.87640.95880.9896
p 4 0.57520.81880.94560.98520.9964
0.30.7 p 3 0.28040.47680.65520.80600.9012
p 4 0.36080.59800.78160.89880.9596
0.40.9 p 3 0.19920.34120.48880.62880.7528
p 4 0.25000.42200.60480.75840.8608
0.51.0 p 3 0.16040.26520.40560.53600.6552
p 4 0.16960.30400.46600.63040.7684
901200.10.3 p 3 0.87200.98881.00001.00001.0000
p 4 0.91520.99321.00001.00001.0000
0.20.5 p 3 0.52520.78960.93720.98520.9992
p 4 0.55600.83520.96520.99320.9992
0.30.7 p 3 0.34680.55600.74320.88640.9500
p 4 0.34120.59160.79360.92480.9764
0.40.9 p 3 0.26160.39920.57480.71560.8440
p 4 0.24040.41960.61480.76120.8808
0.51.0 p 3 0.20680.34000.47560.62680.7424
p 4 0.16800.31640.49880.66720.8004
1201500.10.3 p 3 0.93000.99761.00001.00001.0000
p 4 0.95520.99841.00001.00001.0000
0.20.5 p 3 0.62200.86280.97480.99720.9988
p 4 0.59480.90080.98520.99920.9996
0.30.7 p 3 0.41800.65120.83040.94440.9864
p 4 0.36560.64600.88080.96760.9924
0.40.9 p 3 0.30080.49640.66360.80600.9088
p 4 0.24760.45320.67600.85440.9492
0.51.0 p 3 0.25640.41440.58320.72600.8368
p 4 0.20320.36800.57360.75880.8916

References

1. Ghosh, P.; Bayes, C.L.; Lachos, V.H. A robust Bayesian approach to null intercept measurement error model with application to dental data. Comput. Stat. Data Anal. 2009, 53, 1066–1079.
2. Zou, Y.J.; Zhang, Y.L. Use of skew-normal and skew-t distributions for mixture modeling of freeway speed data. Transport. Res. Rec. 2011, 2260, 67–75.
3. Li, C.I.; Su, N.C.; Su, P.F. The design of X̄ and R control charts for skew normal distributed data. Commun. Statist. Theory Meth. 2014, 43, 4908–4924.
4. Azzalini, A. A class of distributions which includes the normal ones. Scand. J. Statist. 1985, 12, 171–178.
5. Azzalini, A. Further results on a class of distributions which includes the normal ones. Statistica 1986, 49, 199–208.
6. Arnold, B.C.; Lin, G.D. Characterizations of the skew-normal and generalized chi distributions. Sankhya 2004, 66, 593–606.
7. Gupta, A.K.; Nguyen, T.T.; Sanqui, J.A.T. Characterization of the skew-normal distribution. Ann. Inst. Statist. Math. 2004, 56, 351–360.
8. Kim, H.M.; Genton, M.G. Characteristic functions of scale mixtures of multivariate skew-normal distributions. J. Multivar. Anal. 2011, 102, 1105–1117.
9. Su, N.C.; Gupta, A.K. On some sampling distributions for skew-normal population. J. Stat. Comput. Simul. 2015, 85, 3549–3559.
10. Gupta, A.K.; Huang, W.J. Quadratic forms in skew normal variates. J. Math. Anal. Appl. 2002, 273, 558–564.
11. Wang, T.H.; Li, B.K.; Gupta, A.K. Distribution of quadratic forms under skew normal settings. J. Multivar. Anal. 2009, 100, 533–545.
12. Ye, R.D.; Wang, T.H.; Gupta, A.K. Distribution of matrix quadratic forms under skew-normal settings. J. Multivar. Anal. 2014, 131, 229–239.
13. Balakrishnan, N.; Scarpa, B. Multivariate measures of skewness for the skew-normal distribution. J. Multivar. Anal. 2012, 104, 73–87.
14. Contreras-Reyes, J.E.; Arellano-Valle, R.B. Kullback–Leibler divergence measure for multivariate skew-normal distributions. Entropy 2012, 14, 1606–1626.
15. Liao, X.; Peng, Z.X.; Nadarajah, S. Asymptotic expansions for moments of skew-normal extremes. Statist. Probab. Lett. 2013, 83, 1321–1329.
16. Liao, X.; Peng, Z.X.; Nadarajah, S.; Wang, X.Q. Rates of convergence of extremes from skew-normal samples. Statist. Probab. Lett. 2014, 84, 40–47.
17. Nadarajah, S.; Li, R. The exact density of the sum of independent skew normal random variables. J. Comput. Appl. Math. 2017, 311, 1–10.
18. Otiniano, C.E.G.; Rathie, P.N.; Ozelim, L.C.S.M. On the identifiability of finite mixture of skew-normal and skew-t distributions. Statist. Probab. Lett. 2015, 106, 103–108.
19. Bartoletti, S.; Loperfido, N. Modelling air pollution data by the skew-normal distribution. Stoch. Environ. Res. Risk Assess. 2010, 24, 513–517.
20. Counsell, N.; Cortina-Borja, M.; Lehtonen, A.; Stein, A. Modelling psychiatric measures using skew-normal distributions. Eur. Psychiatry 2011, 26, 112–114.
21. Hutton, J.L.; Stanghellini, E. Modelling bounded health scores with censored skew-normal distributions. Statist. Med. 2011, 30, 368–376.
22. Eling, M. Fitting insurance claims to skewed distributions: Are the skew-normal and skew-student good models? Insur. Math. Econom. 2012, 51, 239–248.
23. Carmichael, B.; Coen, A. Asset pricing with skewed-normal return. Financ. Res. Lett. 2013, 10, 50–57.
24. Pigeon, M.; Antonio, K.; Denuit, M. Individual loss reserving with the multivariate skew normal framework. Astin Bull. 2013, 43, 399–428.
25. Taniguchi, M.; Petkovic, A.; Kase, T.; Diciccio, T.; Monti, A.C. Robust portfolio estimation under skew-normal return processes. Eur. J. Financ. 2015, 21, 1091–1112.
26. Contreras-Reyes, J.E.; Arellano-Valle, R.B. Growth estimates of cardinalfish (Epigonus crassicaudus) based on scale mixtures of skew-normal distributions. Fish. Res. 2013, 147, 137–144.
27. Mazzuco, S.; Scarpa, B. Fitting age-specific fertility rates by a flexible generalized skew normal probability density function. J. R. Statist. Soc. 2015, 178, 187–203.
28. Gupta, R.C.; Brown, N. Reliability studies of the skew-normal distribution and its application to a strength-stress model. Commun. Statist. Theory Meth. 2001, 30, 2427–2445.
29. Figueiredo, F.; Gomes, M.I. The skew-normal distribution in SPC. Revstat Stat. J. 2013, 11, 83–104.
30. Montanari, A.; Viroli, C. A skew-normal factor model for the analysis of student satisfaction towards university courses. J. Appl. Stat. 2010, 37, 473–487.
31. Hossain, A.; Beyene, J. Application of skew-normal distribution for detecting differential expression to microRNA data. J. Appl. Stat. 2015, 42, 477–491.
32. Pewsey, A. Problems of inference for Azzalini's skewnormal distribution. J. Appl. Stat. 2000, 27, 859–870.
33. Pewsey, A. The wrapped skew-normal distribution on the circle. Commun. Statist. Theory Meth. 2000, 29, 2459–2472.
34. Pewsey, A. Modelling asymmetrically distributed circular data using the wrapped skew-normal distribution. Environ. Ecol. Stat. 2006, 13, 257–269.
35. Arellano-Valle, R.B.; Azzalini, A. The centred parametrization for the multivariate skew-normal distribution. J. Multivar. Anal. 2008, 99, 1362–1382.
36. Wang, Z.Y.; Wang, C.; Wang, T.H. Estimation of location parameter in the skew normal setting with known coefficient of variation and skewness. Int. J. Intell. Technol. Appl. Stat. 2016, 9, 191–208.
37. Thiuthad, P.; Pal, N. Hypothesis testing on the location parameter of a skew-normal distribution (SND) with application. In Proceedings of the ITM Web of Conferences, International Conference on Mathematics (ICM 2018): Recent Advances in Algebra, Numerical Analysis, Applied Analysis and Statistics, Tokyo, Japan, 21–22 November 2018; p. 03003.
38. Ma, Z.W.; Chen, Y.J.; Wang, T.H.; Peng, W.Z. The inference on the location parameters under multivariate skew normal settings. In Proceedings of the International Econometric Conference of Vietnam, Ho Chi Minh City, Vietnam, 14–16 January 2019; pp. 146–162.
39. Gui, W.H.; Guo, L. Statistical inference for the location and scale parameters of the skew normal distribution. Indian J. Pure Appl. Math. 2018, 49, 633–650.
40. Azzalini, A.; Capitanio, A. The Skew-Normal and Related Families; Cambridge University Press: New York, NY, USA, 2014.
41. Xu, L.W. The Bootstrap Statistical Inference of Complex Data and Its Applications; Science Press: Beijing, China, 2016.
42. Ye, R.D.; Wang, T.H. Inferences in linear mixed models with skew-normal random effects. Acta Math. Sin. Engl. Ser. 2015, 31, 576–594.
43. Kundu, D.; Gupta, R.D. Estimation of P[Y < X] for Weibull distributions. IEEE Trans. Reliab. 2006, 55, 270–280.
44. Cook, R.D.; Weisberg, S. An Introduction to Regression Graphics; John Wiley & Sons: New York, NY, USA, 2009.
Figure 1. Q-Q plot of the LAI data (the red and blue lines denote the standard normal quantiles and the sample quantiles, respectively).
Figure 2. Frequency histogram of the LAI data with the superimposed skew-normal density curve.
Figure 3. P-P plot of the carbon fibers' strength data (the red and blue lines denote the theoretical probabilities and the sample probabilities, respectively).
Figure 4. Frequency histogram of the carbon fibers' strength data with the superimposed skew-normal density curve.
Figure 5. P-P plot of the RBC count in male athletes (the red and blue lines denote the theoretical probabilities and the sample probabilities, respectively).
Figure 6. P-P plot of the RBC count in female athletes (the red and blue lines denote the theoretical probabilities and the sample probabilities, respectively).
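Figures 3, 5 and 6 compare empirical probabilities with probabilities from a fitted skew-normal distribution. A minimal sketch of such a P-P plot is given below; it assumes SciPy's skewnorm parameterization (shape a = λ, loc = ξ, scale = η) and uses a simulated placeholder sample in place of the LAI, carbon fibers' strength and RBC data, which are not reproduced here.

```python
# Sketch only: P-P plot of data against an ML-fitted skew-normal distribution.
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

# Placeholder sample standing in for the real data sets.
rng = np.random.default_rng(1)
data = stats.skewnorm.rvs(a=3.0, loc=2.0, scale=0.8, size=100, random_state=rng)

# Maximum likelihood fit of the skew-normal shape, location and scale.
a_hat, loc_hat, scale_hat = stats.skewnorm.fit(data)

# Empirical probabilities (i - 0.5)/n against fitted theoretical probabilities.
x_sorted = np.sort(data)
emp = (np.arange(1, data.size + 1) - 0.5) / data.size
theo = stats.skewnorm.cdf(x_sorted, a_hat, loc=loc_hat, scale=scale_hat)

plt.plot(theo, emp, "bo", markersize=3, label="sample probabilities")
plt.plot([0, 1], [0, 1], "r-", label="reference line")
plt.xlabel("Fitted skew-normal probability")
plt.ylabel("Empirical probability")
plt.legend()
plt.show()
```

Points lying close to the reference line indicate that the fitted skew-normal distribution describes the data adequately.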