Article

Modified Two-Parameter Liu Estimator for Addressing Multicollinearity in the Poisson Regression Model

by Mahmoud M. Abdelwahab 1,2, Mohamed R. Abonazel 3,*, Ali T. Hammad 4 and Amera M. El-Masry 5

1 Department of Mathematics and Statistics, College of Science, Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh 90950, Saudi Arabia
2 Department of Basic Sciences, Higher Institute of Administrative Sciences, Osim, Cairo 12961, Egypt
3 Department of Applied Statistics and Econometrics, Faculty of Graduate Studies for Statistical Research, Cairo University, Giza 12613, Egypt
4 Department of Mathematics, Faculty of Science, Tanta University, Tanta 31527, Egypt
5 Department of Mathematics and Statistics, Faculty of Management Technology and Information Systems, Port Said University, Port Said 42521, Egypt
* Author to whom correspondence should be addressed.
Axioms 2024, 13(1), 46; https://doi.org/10.3390/axioms13010046
Submission received: 28 September 2023 / Revised: 16 December 2023 / Accepted: 5 January 2024 / Published: 11 January 2024
(This article belongs to the Special Issue Computational Statistics and Its Applications)

Abstract:
This study introduces a new two-parameter Liu estimator (PMTPLE) for addressing the multicollinearity problem in the Poisson regression model (PRM). The estimation of the PRM is traditionally accomplished through the Poisson maximum likelihood estimator (PMLE). However, when the explanatory variables are correlated, thus leading to multicollinearity, the variance or standard error of the PMLE is inflated. To address this issue, several alternative estimators have been introduced, including the Poisson ridge regression estimator (PRRE), Liu estimator (PLE), and adjusted Liu estimator (PALE), each of them relying on a single shrinkage parameter. The PMTPLE uses two shrinkage parameters, which enhances its adaptability and robustness in the presence of multicollinearity between explanatory variables. To assess the performance of the PMTPLE compared to the four existing estimators (the PMLE, PRRE, PLE, and PALE), a simulation study is conducted that encompasses various scenarios and two empirical applications. The evaluation of the performance is based on the mean square error (MSE) criterion. The theoretical comparison, simulation results, and findings of the two applications consistently demonstrate the superiority of the PMTPLE over the other estimators, establishing it as a robust solution for count data analysis under multicollinearity conditions.

1. Introduction

The Poisson regression model (PRM) is a crucial statistical tool applied in various fields for the analysis of count data. It is particularly valuable when examining the relationships between one or more explanatory variables and a response variable that represents rare events or non-negative integer counts. The significance of the PRM lies in its ability to accommodate data with distinct characteristics, such as the distribution of event occurrences. This makes it a fundamental component in epidemiology, ecology, economics, and numerous scientific disciplines. The Poisson regression model is not only the most commonly employed model for count data, but it is also highly popular for estimating the parameters of multiplicative models [1,2]. By employing the maximum likelihood estimation (MLE) method, the PRM enables researchers to estimate regression coefficients. This makes it an indispensable asset for precise modeling and hypothesis testing in data analysis.
In the context of multiple regression modeling, interpreting individual parameter estimates becomes difficult when the explanatory variables are highly correlated with each other—a phenomenon known as multicollinearity. This presents a significant challenge when estimating the unknown regression coefficients, especially in the PRM (which relies on MLE). Månsson and Shukur [3] emphasized the sensitivity of MLE to multicollinearity, highlighting the need to address this concern in statistical analysis. When multicollinearity is present, constructing robust inferences becomes complex as the variances and standard errors of the regression estimates increase, thus potentially leading to incorrect indications of the sign. Additionally, the t and F ratios, which are crucial for hypothesis testing, often lose their statistical significance in the presence of severe multicollinearity. This further underscores the importance of addressing this issue in multiple regression analysis.
Multicollinearity presents a challenge by inflating the variance of the estimated coefficient vector, which makes it difficult to interpret individual parameter estimates. This leads to unreliable statistical inferences and limits the ability to assess the impact of various economic factors on the dependent variable. To address this issue, ridge regression (RR) analysis is widely used. RR was introduced by Hoerl and Kennard [4], and it involves adding a positive value k, known as the ridge parameter, to the diagonal of the cross-product matrix before inversion. Alkhamisi et al. [5]; Alkhamisi and Shukur [6]; Khalaf and Shukur [7]; Kibria [8]; Månsson and Shukur [3]; and Muniz and Kibria [9] have proposed different techniques for estimating k. These studies compared the performance of the RR estimator through simulations and found it to be an effective method. Månsson and Shukur [3] introduced the Poisson ridge regression estimator (PRRE) for addressing multicollinearity and demonstrated that the PRRE outperforms the MLE method in Poisson regression analysis. Another approach to combating multicollinearity is the Liu estimator (LE) [10], which has gained popularity because it is a linear function of the shrinkage parameter (d). Månsson et al. [11] extended this concept to propose the Poisson Liu estimator (PLE), showing its superior performance over the PRRE in Poisson regression analysis. Amin et al. [12] introduced the adjusted Liu estimator for Poisson regression, referred to as the PALE, which is a modified version of the one-parameter LE for linear regression models [13]. According to the current literature, the PALE is effective in addressing multicollinearity and is superior to both the PRRE and the PLE, thereby enhancing the toolkit for robust statistical analysis.
Several research articles have suggested new two-parameter estimators for various regression models as a solution to the problem of multicollinearity. These articles have demonstrated that estimators relying on two parameters outperform those relying on only one parameter. Notable studies include those of Algamal and Abonazel [14]; Yang and Chang [15]; Abonazel et al. [16]; Omara [17]; and Abonazel et al. [18]. The objective of this article was to introduce a new modified two-parameter Liu estimator for the Poisson model, and to propose methods for selecting its parameters. Additionally, the maximum likelihood, ridge, Liu, and adjusted Liu estimators were compared with the proposed estimator.
The structure of this paper is as follows: Section 2 defines the Poisson regression model; introduces the PRRE, PLE, and PALE; and presents our proposed estimator. Section 3 offers theoretical comparisons between the proposed estimator and the other estimators. Section 4 presents methods for selecting the biasing parameters. Section 5 discusses the simulation study conducted to evaluate the performance of the proposed estimator. Section 6 provides the results of two real-world applications. Finally, Section 7 concludes the paper.

2. Methodology

2.1. Poisson Regression Model

The PRM is used to analyze count data. In this model, the response variable, denoted as y_i, follows a Poisson distribution, whose probability mass function is expressed as follows:
f(y_i) = \frac{\mu_i^{y_i} e^{-\mu_i}}{y_i!}, \quad y_i = 0, 1, 2, \ldots,
where \mu_i > 0 for all i = 1, 2, \ldots, n, and the expected value and the variance of y_i are both equal to \mu_i, i.e., E(y_i) = \mathrm{var}(y_i) = \mu_i. The mean \mu_i is expressed through the canonical log link function as a function of a linear combination of the explanatory variables, \mu_i = \exp(x_i^T \beta), where x_i^T is the ith row of the data matrix X. The data matrix X has dimensions n \times (p+1), where n is the number of observations and p the number of explanatory variables. The vector \beta has dimensions (p+1) \times 1 and contains the coefficients of the linear combination.
The maximum likelihood method is a widely recognized technique for estimating model parameters in the PRM. The log-likelihood function for the PRM is provided below.
\ell(\beta) = \sum_{i=1}^{n} \left[ y_i x_i^T \beta - \exp(x_i^T \beta) - \log(y_i!) \right].
The PMLE is determined by calculating the first derivative of Equation (2) and equating it to zero. This process can be expressed as follows:
S(\beta) = \frac{\partial \ell(\beta; y)}{\partial \beta} = \sum_{i=1}^{n} \left[ y_i - \exp(x_i^T \beta) \right] x_i = 0,
where Equation (3) is nonlinear in \beta. To overcome this nonlinearity, the iteratively weighted least squares (IWLS) algorithm can be employed, which estimates the Poisson regression parameters as follows:
\hat{\beta}_{PMLE} = H^{-1} X^T \hat{W} \hat{s},
where H = X^T \hat{W} X, \hat{s} is an n-dimensional vector whose ith element is \hat{s}_i = \log(\hat{\mu}_i) + (y_i - \hat{\mu}_i)/\hat{\mu}_i, and \hat{W} = \mathrm{diag}(\hat{\mu}_i).
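The IWLS scheme just described can be sketched in a few lines of code. This is a minimal illustrative sketch, not the authors' implementation; it assumes numpy is available, and the function name, starting values, and convergence rule are our own choices.

```python
import numpy as np

def poisson_mle_iwls(X, y, tol=1e-8, max_iter=100):
    """Poisson MLE via iteratively weighted least squares (IWLS)."""
    beta = np.zeros(X.shape[1])
    for _ in range(max_iter):
        mu = np.exp(X @ beta)             # fitted means mu_i = exp(x_i' beta)
        s = X @ beta + (y - mu) / mu      # working response s_i = log(mu_i) + (y_i - mu_i)/mu_i
        H = X.T @ (mu[:, None] * X)       # H = X' W X with W = diag(mu_i)
        beta_new = np.linalg.solve(H, X.T @ (mu * s))
        if np.max(np.abs(beta_new - beta)) < tol:
            return beta_new
        beta = beta_new
    return beta
```

Each iteration solves a weighted least-squares problem with weights equal to the current fitted means, which is the standard way GLM software fits the PRM.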
The PMLE is asymptotically normal, with a covariance matrix equal to the inverse of the negative expected second derivative of the log-likelihood:
\mathrm{Cov}(\hat{\beta}_{PMLE}) = \left[ -E\left( \frac{\partial^2 \ell(\beta)}{\partial \beta \, \partial \beta^T} \right) \right]^{-1} = H^{-1}.
The mean square error can be calculated as follows:
\mathrm{MSE}(\hat{\beta}_{PMLE}) = E\left[ (\hat{\beta}_{PMLE} - \beta)^T (\hat{\beta}_{PMLE} - \beta) \right] = \mathrm{trace}(G U^{-1} G^T) = \sum_{j=1}^{p+1} \frac{1}{u_j}.
Here, U = \mathrm{diag}(u_1, u_2, \ldots, u_{p+1}) = G^T H G, where G is the orthogonal matrix whose columns are the eigenvectors of H, and u_j is the jth eigenvalue of H.
High correlation among the predictor variables makes H ill-conditioned, which results in the high variance and instability of the PMLE. The biased estimators proposed in the literature to deal with the multicollinearity problem in the PRM are addressed below.

2.2. Poisson Ridge Regression Estimator (PRRE)

In response to multicollinearity issues in generalized linear models (GLMs), Segerstedt [19], inspired by Hoerl and Kennard [4], introduced the RR estimator. Multicollinearity arises when explanatory variables in the PRM are correlated, which causes problems for the MLE. To address this, Månsson and Shukur [3] proposed the RR estimator for the PRM, thus offering a solution to the multicollinearity challenges. The PRRE is formulated as follows:
\hat{\beta}_{PRRE} = (H + kI)^{-1} H \hat{\beta}_{PMLE}, \quad k > 0.
In Equation (7), k denotes the ridge parameter and I the identity matrix of order (p+1) \times (p+1). When k = 0, the PRRE reduces to the PMLE. The bias vector and covariance matrix of the estimator in Equation (7) are given by the following expressions:
\mathrm{Bias}(\hat{\beta}_{PRRE}) = E(\hat{\beta}_{PRRE}) - \beta = -k G U_k^{-1} \alpha
\mathrm{Cov}(\hat{\beta}_{PRRE}) = E\left[ \left( \hat{\beta}_{PRRE} - E(\hat{\beta}_{PRRE}) \right) \left( \hat{\beta}_{PRRE} - E(\hat{\beta}_{PRRE}) \right)^T \right] = G U_k^{-1} U U_k^{-1} G^T.
The formulation of the MSE and the matrix mean square error (MMSE) related to the PRRE is as follows:
\mathrm{MMSE}(\hat{\beta}_{PRRE}) = \mathrm{Cov}(\hat{\beta}_{PRRE}) + \mathrm{Bias}(\hat{\beta}_{PRRE}) \, \mathrm{Bias}(\hat{\beta}_{PRRE})^T = G U_k^{-1} U U_k^{-1} G^T + k^2 G U_k^{-1} \alpha \alpha^T U_k^{-1} G^T.
In the given expression, U_k is the diagonal matrix \mathrm{diag}(u_1 + k, u_2 + k, \ldots, u_{p+1} + k); that is, k is added to each diagonal element u_j of the original diagonal matrix U. Applying the trace operator \mathrm{trace}(\cdot) to Equation (10), Månsson and Shukur [3] obtained the MSE of the PRRE. This MSE acts as a metric for evaluating the accuracy and precision of the PRRE in estimating the true parameter values, even in the presence of multicollinearity and other influencing factors.
\mathrm{MSE}(\hat{\beta}_{PRRE}) = \mathrm{trace}(\mathrm{MMSE}(\hat{\beta}_{PRRE})) = \sum_{j=1}^{p+1} \frac{u_j}{(u_j + k)^2} + k^2 \sum_{j=1}^{p+1} \frac{\alpha_j^2}{(u_j + k)^2},
where \alpha_j is the jth element of the vector \alpha = (\alpha_1, \alpha_2, \ldots, \alpha_{p+1})^T = G^T \beta, with \beta = (\beta_1, \beta_2, \ldots, \beta_{p+1})^T the parameter vector and G the orthogonal matrix composed of the eigenvectors of H.
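The ridge mapping in Equation (7) and the scalar MSE above translate directly into code. The following is an illustrative sketch under our own naming (not the authors' code), assuming numpy; `numpy.linalg.eigh` supplies the eigenvalues u_j and the eigenvector matrix G.

```python
import numpy as np

def poisson_ridge(H, beta_mle, k):
    """PRRE: shrink the PMLE through (H + kI)^{-1} H."""
    return np.linalg.solve(H + k * np.eye(H.shape[0]), H @ beta_mle)

def prre_mse(H, beta, k):
    """Theoretical MSE of the PRRE: variance term plus squared-bias term."""
    u, G = np.linalg.eigh(H)   # eigenvalues u_j and eigenvector matrix G
    alpha = G.T @ beta         # alpha = G' beta
    return np.sum(u / (u + k) ** 2) + k ** 2 * np.sum(alpha ** 2 / (u + k) ** 2)
```

At k = 0 the MSE reduces to the PMLE's value, \sum_j 1/u_j, and a suitably chosen k > 0 trades a small bias for a large variance reduction.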

2.3. Poisson Liu Estimator (PLE)

Månsson et al. [20] introduced an alternative estimator known as the PLE to address multicollinearity more effectively than the previously mentioned PRRE. The PLE is defined by the following equation:
\hat{\beta}_{PLE} = (H + I)^{-1} (H + dI) \hat{\beta}_{PMLE}, \quad 0 < d < 1.
The behavior of the Liu estimator depends on the value of the shrinkage parameter, d, as follows:
  • When d = 1 , the PLE is the same as the PMLE.
  • When d < 1 , the PLE tends to produce parameter estimates that are closer to zero than the PMLE. This effect helps reduce the impact of multicollinearity in the data.
The formulae for the bias vector, covariance matrix, and MMSE related to the PLE are as follows:
\mathrm{Bias}(\hat{\beta}_{PLE}) = E(\hat{\beta}_{PLE}) - \beta = (d - 1) G (U + I)^{-1} \alpha
\mathrm{Cov}(\hat{\beta}_{PLE}) = E\left[ \left( \hat{\beta}_{PLE} - E(\hat{\beta}_{PLE}) \right) \left( \hat{\beta}_{PLE} - E(\hat{\beta}_{PLE}) \right)^T \right] = G (U + I)^{-1} (U + dI) U^{-1} (U + dI) (U + I)^{-1} G^T
\mathrm{MMSE}(\hat{\beta}_{PLE}) = \mathrm{Cov}(\hat{\beta}_{PLE}) + \mathrm{Bias}(\hat{\beta}_{PLE}) \, \mathrm{Bias}(\hat{\beta}_{PLE})^T = G (U + I)^{-1} (U + dI) U^{-1} (U + dI) (U + I)^{-1} G^T + (d - 1)^2 G (U + I)^{-1} \alpha \alpha^T (U + I)^{-1} G^T.
The MSE attributed to the PLE is expressed as follows:
\mathrm{MSE}(\hat{\beta}_{PLE}) = \mathrm{trace}(\mathrm{MMSE}(\hat{\beta}_{PLE})) = \sum_{j=1}^{p+1} \frac{(u_j + d)^2}{u_j (u_j + 1)^2} + (d - 1)^2 \sum_{j=1}^{p+1} \frac{\alpha_j^2}{(u_j + 1)^2}.
The Liu parameter, denoted as d, can be determined by taking the derivative of Equation (16) with respect to d and setting it equal to zero. This process allows one to find the optimal value for the Liu parameter as follows:
d_{opt} = \frac{\alpha_j^2 - 1}{\frac{1}{u_j} + \alpha_j^2}.
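The PLE mapping and its scalar MSE can likewise be sketched numerically. This is an illustrative sketch under our own naming, assuming numpy, not the authors' code.

```python
import numpy as np

def poisson_liu(H, beta_mle, d):
    """PLE: (H + I)^{-1} (H + d I) applied to the PMLE."""
    I = np.eye(H.shape[0])
    return np.linalg.solve(H + I, (H + d * I) @ beta_mle)

def ple_mse(H, beta, d):
    """Theoretical MSE of the PLE: variance part plus squared-bias part."""
    u, G = np.linalg.eigh(H)
    alpha = G.T @ beta
    var = np.sum((u + d) ** 2 / (u * (u + 1) ** 2))
    bias2 = (d - 1) ** 2 * np.sum(alpha ** 2 / (u + 1) ** 2)
    return var + bias2
```

Setting d = 1 recovers the PMLE exactly, both in the estimator and in its MSE, which matches the first bullet point above.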

2.4. Poisson-Adjusted Liu Estimator (PALE)

Expanding on the work of Lukman et al. [13], Amin et al. [12] proposed a modified version of the Liu estimator designed for the Poisson regression model called the Poisson-adjusted Liu estimator. This adaptation aimed to improve the effectiveness of the original Liu estimator when applied in the context of the Poisson model. The modified estimator is defined as follows:
\hat{\beta}_{PALE} = (H + I)^{-1} (H - d_0 I) \hat{\beta}_{PMLE}, \quad 0 < d_0 < 1.
The parameter d 0 is referred to as the adjusted Liu parameter. This modification significantly enhanced the effectiveness of the new PALE. The recommended estimator consistently achieves a lower MSE compared to the PMLE, PRRE, and PLE. The bias, covariance, and MMSE of the PALE can be expressed as follows:
\mathrm{Bias}(\hat{\beta}_{PALE}) = -(d_0 + 1) G (U + I)^{-1} \alpha
\mathrm{Cov}(\hat{\beta}_{PALE}) = E\left[ \left( \hat{\beta}_{PALE} - E(\hat{\beta}_{PALE}) \right) \left( \hat{\beta}_{PALE} - E(\hat{\beta}_{PALE}) \right)^T \right] = G (U + I)^{-1} (U - d_0 I) U^{-1} (U - d_0 I) (U + I)^{-1} G^T
\mathrm{MMSE}(\hat{\beta}_{PALE}) = \mathrm{Cov}(\hat{\beta}_{PALE}) + \mathrm{Bias}(\hat{\beta}_{PALE}) \, \mathrm{Bias}(\hat{\beta}_{PALE})^T = G (U + I)^{-1} (U - d_0 I) U^{-1} (U - d_0 I) (U + I)^{-1} G^T + (d_0 + 1)^2 G (U + I)^{-1} \alpha \alpha^T (U + I)^{-1} G^T.
The MSE of the PALE can be defined as follows:
\mathrm{MSE}(\hat{\beta}_{PALE}) = \mathrm{trace}(\mathrm{MMSE}(\hat{\beta}_{PALE})) = \sum_{j=1}^{p+1} \frac{(u_j - d_0)^2}{u_j (u_j + 1)^2} + (d_0 + 1)^2 \sum_{j=1}^{p+1} \frac{\alpha_j^2}{(u_j + 1)^2}.
The procedure for determining the optimal value of d 0 involves calculating the partial derivative of Equation (22) with respect to d 0 , setting it equal to zero, and then solving for d 0 . The outcome provides the jth term for the biasing parameter as follows:
d_{0(opt)} = \frac{u_j (1 - \alpha_j^2)}{1 + u_j \alpha_j^2}.
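The PALE and the component-wise optimal d_0 rule can be sketched as follows. This is an illustrative sketch under our own naming (numpy assumed), not the authors' code.

```python
import numpy as np

def poisson_adjusted_liu(H, beta_mle, d0):
    """PALE: (H + I)^{-1} (H - d0 I) applied to the PMLE."""
    I = np.eye(H.shape[0])
    return np.linalg.solve(H + I, (H - d0 * I) @ beta_mle)

def d0_optimal_terms(H, beta):
    """Component-wise optimal d0: u_j (1 - alpha_j^2) / (1 + u_j alpha_j^2)."""
    u, G = np.linalg.eigh(H)
    a2 = (G.T @ beta) ** 2
    return u * (1 - a2) / (1 + u * a2)
```

Note the sign change relative to the PLE: the PALE subtracts d_0 I in the numerator factor rather than adding d I.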

2.5. Proposed Poisson Modified Two-Parameter Liu Estimator (PMTPLE)

Based on the research conducted by Abonazel [21], we propose a modified two-parameter Liu estimator for the PRM. This modified estimator, referred to as the PMTPLE, is constructed using a pair of parameters ( k * , d 0 * ) . The PMTPLE formulation is summarized as follows:
\hat{\beta}_{PMTPLE} = (H + I)^{-1} \left( H - (k^* + d_0^*) I \right) \hat{\beta}_{PMLE}; \quad k^* > 0, \; 0 < d_0^* < 1.
The bias vector, variance–covariance matrix, and MSE matrix for the PMTPLE are given by the following expressions:
\mathrm{Bias}(\hat{\beta}_{PMTPLE}) = -(k^* + d_0^* + 1) G (U + I)^{-1} \alpha
\mathrm{Cov}(\hat{\beta}_{PMTPLE}) = E\left[ \left( \hat{\beta}_{PMTPLE} - E(\hat{\beta}_{PMTPLE}) \right) \left( \hat{\beta}_{PMTPLE} - E(\hat{\beta}_{PMTPLE}) \right)^T \right] = G (U + I)^{-1} \left( U - (k^* + d_0^*) I \right) U^{-1} \left( U - (k^* + d_0^*) I \right) (U + I)^{-1} G^T
\mathrm{MMSE}(\hat{\beta}_{PMTPLE}) = \mathrm{Cov}(\hat{\beta}_{PMTPLE}) + \mathrm{Bias}(\hat{\beta}_{PMTPLE}) \, \mathrm{Bias}(\hat{\beta}_{PMTPLE})^T = G (U + I)^{-1} \left( U - (k^* + d_0^*) I \right) U^{-1} \left( U - (k^* + d_0^*) I \right) (U + I)^{-1} G^T + (k^* + d_0^* + 1)^2 G (U + I)^{-1} \alpha \alpha^T (U + I)^{-1} G^T.
Then, the MSE of the PMTPLE is
\mathrm{MSE}(\hat{\beta}_{PMTPLE}) = \mathrm{trace}(\mathrm{MMSE}(\hat{\beta}_{PMTPLE})) = \sum_{j=1}^{p+1} \frac{\left( u_j - (k^* + d_0^*) \right)^2}{u_j (u_j + 1)^2} + (k^* + d_0^* + 1)^2 \sum_{j=1}^{p+1} \frac{\alpha_j^2}{(u_j + 1)^2}.
Following Abonazel [21], we provide a definition for the optimal values of k * and d 0 * for β ^ PMTPLE by equating the partial differentiation of MSE ( β ^ PMTPLE ) with respect to k * and d 0 * to zero. This yields the following equations:
k^*_{opt} = \frac{u_j (1 - \alpha_j^2)}{1 + u_j \alpha_j^2} - d_0^*
and
d^*_{0(opt)} = \frac{u_j (1 - \alpha_j^2) - k^* (1 + u_j \alpha_j^2)}{1 + u_j \alpha_j^2}.
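The proposed estimator and its theoretical MSE can be sketched in the same style as the earlier estimators. This is an illustrative sketch under our own naming (numpy assumed), not the authors' code.

```python
import numpy as np

def pmtple(H, beta_mle, k_star, d0_star):
    """Proposed PMTPLE: (H + I)^{-1} (H - (k* + d0*) I) applied to the PMLE."""
    I = np.eye(H.shape[0])
    c = k_star + d0_star
    return np.linalg.solve(H + I, (H - c * I) @ beta_mle)

def pmtple_mse(H, beta, k_star, d0_star):
    """Theoretical MSE of the PMTPLE (variance plus squared bias)."""
    u, G = np.linalg.eigh(H)
    alpha = G.T @ beta
    c = k_star + d0_star
    var = np.sum((u - c) ** 2 / (u * (u + 1) ** 2))
    bias2 = (c + 1) ** 2 * np.sum(alpha ** 2 / (u + 1) ** 2)
    return var + bias2
```

Because only the sum k^* + d_0^* enters the mapping, setting k^* = 0 reduces the PMTPLE to the PALE form with d_0 = d_0^*; the extra parameter gives the shrinkage rules more flexibility in how that sum is selected.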

3. Comparison of the Estimators

In this section, we make theoretical comparisons between the above estimators based on the MMSE and extract the main conditions that make the proposed PMTPLE estimator more efficient than other estimators. The following lemma is useful for theoretical comparisons between the proposed PMTPLE estimator and other estimators.
Lemma 1.
Assume two estimators for β: β ^ 1 = C 1 y and β ^ 2 = C 2 y , where C 1 and C 2 are non-stochastic matrices. If the covariance matrix difference D = Cov ( β ^ 1 ) Cov ( β ^ 2 ) > 0 , where Cov ( β ^ 1 ) and Cov ( β ^ 2 ) are the covariance matrices of β ^ 1 and β ^ 2 , respectively, then M M S E ( β ^ 1 ) M M S E ( β ^ 2 ) > 0 if and only if b 2 T [ D + b 1 b 1 T ] 1 b 2 < 1 . Here, M M S E ( β ^ j ) = Cov ( β ^ j ) + b j b j T , with b j representing the bias vector of β ^ j for j = 1 , 2 , as discussed by [22].
Theorem 1.
\mathrm{MMSE}(\hat{\beta}_{PMLE}) - \mathrm{MMSE}(\hat{\beta}_{PMTPLE}) > 0 if and only if A^T [D_1]^{-1} A < 1 and (u_j + 1)^2 - (u_j - k^* - d_0^*)^2 > 0 for k^* > 0 and 0 < d_0^* < 1, where D_1 = \mathrm{Cov}(\hat{\beta}_{PMLE}) - \mathrm{Cov}(\hat{\beta}_{PMTPLE}) and A = -(k^* + d_0^* + 1) G (U + I)^{-1} \alpha.
Proof. 
D_1 = \mathrm{Cov}(\hat{\beta}_{PMLE}) - \mathrm{Cov}(\hat{\beta}_{PMTPLE})
= G \left[ U^{-1} - (U + I)^{-1} \left( U - (k^* + d_0^*) I \right) U^{-1} \left( U - (k^* + d_0^*) I \right) (U + I)^{-1} \right] G^T
= G \, \mathrm{diag}\left\{ \frac{1}{u_j} - \frac{(u_j - k^* - d_0^*)^2}{u_j (u_j + 1)^2} \right\}_{j=1}^{p+1} G^T
= G \, \mathrm{diag}\left\{ \frac{(u_j + 1)^2 - (u_j - k^* - d_0^*)^2}{u_j (u_j + 1)^2} \right\}_{j=1}^{p+1} G^T.
Using Lemma 1, \mathrm{MMSE}(\hat{\beta}_{PMLE}) - \mathrm{MMSE}(\hat{\beta}_{PMTPLE}) > 0 if and only if A^T [D_1]^{-1} A < 1, under the assumption that D_1 is a positive definite (pd) matrix (D_1 > 0). D_1 > 0 holds if U^{-1} - (U + I)^{-1} (U - (k^* + d_0^*) I) U^{-1} (U - (k^* + d_0^*) I) (U + I)^{-1} > 0, and this is achieved when (u_j + 1)^2 - (u_j - k^* - d_0^*)^2 > 0 for k^* > 0 and 0 < d_0^* < 1. This means that \hat{\beta}_{PMTPLE} is favored over \hat{\beta}_{PMLE} if and only if A^T [D_1]^{-1} A < 1 and (u_j + 1)^2 - (u_j - k^* - d_0^*)^2 > 0. □
Theorem 2.
\mathrm{MMSE}(\hat{\beta}_{PRRE}) - \mathrm{MMSE}(\hat{\beta}_{PMTPLE}) > 0 if and only if A^T [D_2 + B B^T]^{-1} A < 1 and u_j^2 (u_j + 1)^2 - (u_j + k)^2 (u_j - k^* - d_0^*)^2 > 0 for k > 0, k^* > 0, and 0 < d_0^* < 1, where D_2 = \mathrm{Cov}(\hat{\beta}_{PRRE}) - \mathrm{Cov}(\hat{\beta}_{PMTPLE}) and B = -k G U_k^{-1} \alpha.
Proof. 
D_2 = \mathrm{Cov}(\hat{\beta}_{PRRE}) - \mathrm{Cov}(\hat{\beta}_{PMTPLE})
= G \left[ U_k^{-1} U U_k^{-1} - (U + I)^{-1} \left( U - (k^* + d_0^*) I \right) U^{-1} \left( U - (k^* + d_0^*) I \right) (U + I)^{-1} \right] G^T
= G \, \mathrm{diag}\left\{ \frac{u_j}{(u_j + k)^2} - \frac{(u_j - k^* - d_0^*)^2}{u_j (u_j + 1)^2} \right\}_{j=1}^{p+1} G^T
= G \, \mathrm{diag}\left\{ \frac{u_j^2 (u_j + 1)^2 - (u_j + k)^2 (u_j - k^* - d_0^*)^2}{u_j (u_j + 1)^2 (u_j + k)^2} \right\}_{j=1}^{p+1} G^T.
Using Lemma 1, \mathrm{MMSE}(\hat{\beta}_{PRRE}) - \mathrm{MMSE}(\hat{\beta}_{PMTPLE}) > 0 if and only if A^T [D_2 + B B^T]^{-1} A < 1, under the assumption that D_2 = \mathrm{Cov}(\hat{\beta}_{PRRE}) - \mathrm{Cov}(\hat{\beta}_{PMTPLE}) > 0. D_2 > 0 holds if U_k^{-1} U U_k^{-1} - (U + I)^{-1} (U - (k^* + d_0^*) I) U^{-1} (U - (k^* + d_0^*) I) (U + I)^{-1} > 0, and this is achieved when u_j^2 (u_j + 1)^2 - (u_j + k)^2 (u_j - k^* - d_0^*)^2 > 0 for k > 0, k^* > 0, and 0 < d_0^* < 1. This means that \hat{\beta}_{PMTPLE} is favored over \hat{\beta}_{PRRE} if and only if A^T [D_2 + B B^T]^{-1} A < 1 and u_j^2 (u_j + 1)^2 - (u_j + k)^2 (u_j - k^* - d_0^*)^2 > 0 for k > 0, k^* > 0, and 0 < d_0^* < 1. □
Theorem 3.
\mathrm{MMSE}(\hat{\beta}_{PLE}) - \mathrm{MMSE}(\hat{\beta}_{PMTPLE}) > 0 if and only if A^T [D_3 + S S^T]^{-1} A < 1 and (u_j + d)^2 - (u_j - k^* - d_0^*)^2 > 0 for k^* > 0 and 0 < d, d_0^* < 1, where D_3 = \mathrm{Cov}(\hat{\beta}_{PLE}) - \mathrm{Cov}(\hat{\beta}_{PMTPLE}) and S = (d - 1) G (U + I)^{-1} \alpha.
Proof. 
D_3 = \mathrm{Cov}(\hat{\beta}_{PLE}) - \mathrm{Cov}(\hat{\beta}_{PMTPLE})
= G \left[ (U + I)^{-1} (U + dI) U^{-1} (U + dI) (U + I)^{-1} - (U + I)^{-1} \left( U - (k^* + d_0^*) I \right) U^{-1} \left( U - (k^* + d_0^*) I \right) (U + I)^{-1} \right] G^T
= G \, \mathrm{diag}\left\{ \frac{(u_j + d)^2}{u_j (u_j + 1)^2} - \frac{(u_j - k^* - d_0^*)^2}{u_j (u_j + 1)^2} \right\}_{j=1}^{p+1} G^T
= G \, \mathrm{diag}\left\{ \frac{(u_j + d)^2 - (u_j - k^* - d_0^*)^2}{u_j (u_j + 1)^2} \right\}_{j=1}^{p+1} G^T.
Using Lemma 1, \mathrm{MMSE}(\hat{\beta}_{PLE}) - \mathrm{MMSE}(\hat{\beta}_{PMTPLE}) > 0 if and only if A^T [D_3 + S S^T]^{-1} A < 1, under the assumption that D_3 > 0. D_3 > 0 holds if (U + I)^{-1} (U + dI) U^{-1} (U + dI) (U + I)^{-1} - (U + I)^{-1} (U - (k^* + d_0^*) I) U^{-1} (U - (k^* + d_0^*) I) (U + I)^{-1} > 0, and this is achieved when (u_j + d)^2 - (u_j - k^* - d_0^*)^2 > 0 for k^* > 0 and 0 < d, d_0^* < 1. This means that \hat{\beta}_{PMTPLE} is favored over \hat{\beta}_{PLE} if and only if A^T [D_3 + S S^T]^{-1} A < 1 and (u_j + d)^2 - (u_j - k^* - d_0^*)^2 > 0 for k^* > 0 and 0 < d, d_0^* < 1. □
Theorem 4.
\mathrm{MMSE}(\hat{\beta}_{PALE}) - \mathrm{MMSE}(\hat{\beta}_{PMTPLE}) > 0 if and only if A^T [D_4 + N N^T]^{-1} A < 1 and (u_j - d_0)^2 - (u_j - k^* - d_0^*)^2 > 0 for k^* > 0 and 0 < d_0, d_0^* < 1, where D_4 = \mathrm{Cov}(\hat{\beta}_{PALE}) - \mathrm{Cov}(\hat{\beta}_{PMTPLE}) and N = -(d_0 + 1) G (U + I)^{-1} \alpha.
Proof. 
D_4 = \mathrm{Cov}(\hat{\beta}_{PALE}) - \mathrm{Cov}(\hat{\beta}_{PMTPLE})
= G \left[ (U + I)^{-1} (U - d_0 I) U^{-1} (U - d_0 I) (U + I)^{-1} - (U + I)^{-1} \left( U - (k^* + d_0^*) I \right) U^{-1} \left( U - (k^* + d_0^*) I \right) (U + I)^{-1} \right] G^T
= G \, \mathrm{diag}\left\{ \frac{(u_j - d_0)^2}{u_j (u_j + 1)^2} - \frac{(u_j - k^* - d_0^*)^2}{u_j (u_j + 1)^2} \right\}_{j=1}^{p+1} G^T
= G \, \mathrm{diag}\left\{ \frac{(u_j - d_0)^2 - (u_j - k^* - d_0^*)^2}{u_j (u_j + 1)^2} \right\}_{j=1}^{p+1} G^T.
Using Lemma 1, \mathrm{MMSE}(\hat{\beta}_{PALE}) - \mathrm{MMSE}(\hat{\beta}_{PMTPLE}) > 0 if and only if A^T [D_4 + N N^T]^{-1} A < 1, under the assumption that D_4 > 0. D_4 > 0 holds if (U + I)^{-1} (U - d_0 I) U^{-1} (U - d_0 I) (U + I)^{-1} - (U + I)^{-1} (U - (k^* + d_0^*) I) U^{-1} (U - (k^* + d_0^*) I) (U + I)^{-1} > 0, and this is achieved when (u_j - d_0)^2 - (u_j - k^* - d_0^*)^2 > 0 for k^* > 0, 0 < d_0^* < 1, and 0 < d_0 < 1. This means that \hat{\beta}_{PMTPLE} is favored over \hat{\beta}_{PALE} if and only if A^T [D_4 + N N^T]^{-1} A < 1 and (u_j - d_0)^2 - (u_j - k^* - d_0^*)^2 > 0 for k^* > 0 and 0 < d_0, d_0^* < 1. □

4. Selection of the Biasing Parameter

4.1. Ridge Parameter

Given the abundance of literature addressing multicollinearity in the PRM, optimal values of k proposed for ridge regression are used, as outlined in the relevant scholarly works.
According to Lukman et al. [23] and Aladeitan et al. [24], the preferred value of k can be determined as follows:
\hat{k}_1 = \frac{1}{\hat{\alpha}_{max}^2}.
Amin et al. [12] proposed an alternative optimal value of k:
\hat{k}_2 = \frac{1}{\sum_{j=1}^{p+1} \hat{\alpha}_j^2}.
Furthermore, Kibria et al. [25]; Ertan and Akay [26]; Akay and Ertan [27]; and Türkan and Özel [28] used the optimal value of k, which is described by the following equation:
\hat{k}_3 = \max_j \left( \frac{\hat{\sigma}^2}{\hat{\alpha}_j^2} \right)^{1/2}; \quad \hat{\sigma}^2 = \frac{\sum_{i=1}^{n} (y_i - \hat{\mu}_i)^2}{n - p - 1}.

4.2. Liu Parameter

According to the research conducted by Akay and Ertan [27], Qasim et al. [29], Amin et al. [12], Abonazel et al. [16], and Lukman et al. [23], the optimal value of d can be determined as follows:
\hat{d}_1 = \max\left( 0, \; \frac{\frac{1}{p+1} \sum_{j=1}^{p+1} \hat{\alpha}_j^2 - 1}{\max_j \left( \frac{1}{u_j} \right) + \max_j (\hat{\alpha}_j^2)} \right)
\hat{d}_2 = \max\left( 0, \; \frac{\min_j (\hat{\alpha}_j^2) - 1}{\max_j \left( \frac{1}{u_j} \right) + \max_j (\hat{\alpha}_j^2)} \right).

4.3. Adjusted Liu Parameter

Lukman et al. [13], Amin et al. [12], and Abonazel et al. [16] determined the most suitable value of d 0 as follows:
\hat{d}_0 = \max\left( 0, \; \min_j \frac{u_j (1 - \hat{\alpha}_j^2)}{1 + u_j \hat{\alpha}_j^2} \right).

4.4. Proposed Estimator Parameters

Based on the research conducted by Abonazel et al. [16] and Abonazel [21], we suggest the following estimator \hat{d}_0^* for the d_0^* parameter in the PMTPLE, obtained by incorporating the value of \hat{k} given below within Equation (29):
\hat{d}_0^* = \max\left( 0, \; \min_j \frac{u_{min} (1 - \hat{\alpha}_{min}^2) - \hat{k} (1 + u_j \hat{\alpha}_j^2)}{1 + u_j \hat{\alpha}_j^2} \right),
where \hat{k} = \min_j \left( \frac{1}{\hat{\alpha}_j^2} \right).
Initially, the optimal ridge values can be used directly as the value of k^* in the modified two-parameter Liu estimator:
\hat{k}_{1, d_0^*} = \hat{k}_1
\hat{k}_{2, d_0^*} = \hat{k}_2.
Given that 0 < d ^ 0 * < 1 , one can substitute d ^ 2 for d ^ 0 * to calculate the value of k * . Therefore, we propose defining the parameter k * as follows:
\hat{k}_{3, d_0^*} = \frac{\min_j (u_j) \left( 1 - \min_j (\hat{\alpha}_j^2) \right)}{1 + \min_j (u_j) \, \mathrm{mean}_j (\hat{\alpha}_j^2)} - \hat{d}_2
\hat{k}_{4, d_0^*} = \frac{p + 1}{\sum_{j=1}^{p+1} \left( \frac{1}{u_j} + 2 \hat{\alpha}_j^2 \right)} - \hat{d}_2
\hat{k}_{5, d_0^*} = \frac{1}{\min_j \left( 1 + u_j \hat{\alpha}_j^2 \right)} - \hat{d}_2.

5. Monte Carlo Simulation

5.1. Simulation Design

In this section, we present the Monte Carlo simulation study that was conducted to evaluate the performance of the different estimators in the context of a Poisson regression model when dealing with multicollinearity. This study involved generating a response variable from a Poisson distribution [12,23] with mean \mu_i = \exp(x_i^T \beta), where i ranges from 1 to n, and \beta consists of the coefficients (\beta_1, \beta_2, \ldots, \beta_{p+1}). The matrix X represents the design matrix, and x_i^T is the ith row of this matrix. The explanatory variables were generated following McDonald and Galarneau [30] according to the following pattern:
x_{ij} = (1 - \rho^2)^{1/2} Q_{ij} + \rho Q_{i, p+1}, \quad i = 1, \ldots, n; \; j = 2, \ldots, p+1.
In this equation, Q_{ij} is drawn from a standard normal distribution, while \rho^2 represents the correlation between the explanatory variables. The impact of \rho was examined by considering the values 0.85, 0.90, 0.95, and 0.99. Mean functions were established for scenarios with 3, 6, 9, and 12 explanatory variables. The intercept (\beta_1) values were set to −1, 0, and 1, which influenced the average intensity of the Poisson process. The slope coefficients were chosen such that \sum_{j=2}^{p+1} \beta_j^2 = 1 and \beta_2 = \cdots = \beta_{p+1}, across various sample sizes (50, 100, 150, 200, 250, 300, and 400). The simulation experiments were conducted using the R programming language. For each replicate, the MSE of the estimators was calculated using the following formula:
\mathrm{MSE}(\beta^*) = \frac{1}{1000} \sum_{l=1}^{1000} (\beta_l^* - \beta)^T (\beta_l^* - \beta).
Here, \beta_l^* represents the vector of estimated values for the lth simulation experiment of one of the five estimators (PMLE, PRRE, PLE, PALE, and PMTPLE), and the estimator with the lowest MSE is considered the most suitable.
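The design-generation step above can be sketched as follows. This is an illustrative sketch of the McDonald–Galarneau scheme in Python with numpy (the paper's own simulations were run in R); the function name and seed handling are ours.

```python
import numpy as np

def simulate_design(n, p, rho, seed=None):
    """Explanatory variables with pairwise correlation rho^2 (McDonald-Galarneau scheme)."""
    rng = np.random.default_rng(seed)
    Q = rng.standard_normal((n, p + 1))
    # x_ij = sqrt(1 - rho^2) * Q_ij + rho * Q_{i, p+1}: every column shares
    # the common factor Q[:, p], inducing correlation rho^2 between columns.
    return np.sqrt(1 - rho ** 2) * Q[:, :p] + rho * Q[:, [p]]
```

Because every column loads on the same common factor with weight \rho, the theoretical correlation between any two generated columns is \rho^2, which is how the simulation dials the severity of multicollinearity.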

5.2. Simulation Results

To organize the presentation of the simulation results, Table 1, Table 2, Table 3, Table 4, Table 5 and Table 6 show the MSE values for each estimator in the case of p = 3 and p = 6, while the results for p = 9 and p = 12 are listed in Tables S1–S6 in the Supplementary Material section. The smallest MSE value in each row is highlighted in bold. The results of the simulation study provide a comprehensive analysis of the performance of the different estimators for the PRM when dealing with multicollinearity, offering valuable insights into the challenges and potential solutions for parameter estimation in complex regression scenarios. It is important to note that the PMLE consistently underperformed in these scenarios, indicating the need for caution or alternative approaches when multicollinearity is a concern.

One of the key findings from this study was the significant impact of multicollinearity (\rho) on estimation accuracy. As the level of multicollinearity increased, all the estimators experienced an observable increase in MSE. This confirmed the well-established notion that multicollinearity adversely affects the precision of parameter estimation, and researchers should consider strategies to mitigate its influence on their models.

An increasing sample size (n) was identified as a critical factor for improving estimation accuracy, as larger sample sizes consistently led to a decrease in the MSE across all estimators. This emphasized the importance of having ample data when aiming for precise parameter estimates. Another noteworthy observation was the impact of the number of explanatory variables (p) on the simulated MSE: as the number of explanatory variables increased from 3 to 12, the MSE rose across all estimators, suggesting that additional variables introduce higher levels of multicollinearity.
While the effect of changing the intercept value (\beta_1) was modest compared to that of other factors, shifting \beta_1 from −1 to +1 resulted in a decrease in the MSE. Furthermore, the proposed estimator consistently outperformed the PMLE, PRRE, PLE, and PALE across various sample sizes (n), numbers of explanatory variables (p), levels of multicollinearity (\rho), and intercept values (\beta_1). The proposed PMTPLE demonstrated superior performance in the presence of multicollinearity. Additionally, even when \hat{k}_{1, d_0^*} and \hat{k}_{2, d_0^*} were used for the proposed estimator, the approach still offered more favorable MSE values, making it a reliable choice for researchers facing similar regression challenges. It is also worth noting that the PMTPLE performs well for all values of k^* (\hat{k}_{1, d_0^*}, \hat{k}_{2, d_0^*}, \hat{k}_{3, d_0^*}, \hat{k}_{4, d_0^*}, and \hat{k}_{5, d_0^*}). Interestingly, \hat{k}_{3, d_0^*} emerged as the most effective choice across the various scenarios, as it consistently yielded the minimum MSE. This highlights the robust and optimal performance of \hat{k}_{3, d_0^*}, which stems from its derivation from the MSE of the PMTPLE.

5.3. Relative Efficiency

An alternative metric for evaluating the performance of biased estimators is relative efficiency (RE). Relative efficiency offers an objective means of comparing the precision and accuracy of different estimators in a given context, providing a quantitative measure of the effectiveness of one estimator relative to another. The MSE is a crucial factor in calculating relative efficiency as it captures both bias and variance; a lower MSE indicates a more favorable performance when evaluating estimators comprehensively. In many cases, the reference estimator \hat{\beta}_{PMLE} is used as a benchmark due to its desirable asymptotic properties, including efficiency. The results of Table 1, Table 2, Table 3, Table 4, Table 5 and Table 6 and Tables S1–S6 were used to calculate the relative efficiency, which involves determining the MSE for both the reference estimator and the other estimators, as described in Equation (42). The formula for calculating relative efficiency can be found in authoritative references such as [31,32]:
RE(\hat{\beta}^*) = \frac{\mathrm{MSE}(\hat{\beta}_{PMLE})}{\mathrm{MSE}(\hat{\beta}^*)},
where β ^ * denotes either the PRRE, PLE, PALE, or PMTPLE.
In Figure 1, Figure 2, Figure 3 and Figure 4, the RE of the various estimators is plotted against different variables, such as the sample size (n), correlation between explanatory variables ( ρ ), explanatory variable count (p), and intercept value ( β 1 ). The PMTPLE consistently demonstrated the highest RE values across various n, ρ , β 1 , and p levels. This finding highlighted the superior efficiency of the PMTPLE compared to the other four estimators under evaluation. In essence, the PMTPLE emerged as the preferred choice in various scenarios, including k ^ 3 , d 0 * , k ^ 4 , d 0 * , and k ^ 5 , d 0 * . It consistently outperformed the other estimators in terms of precision and efficiency, with k ^ 3 , d 0 * showing the highest RE across all cases.

6. Applications

In this section, we present the two real-world applications used to evaluate the effectiveness of the proposed estimator.

6.1. Mussel Data

In this subsection, the empirical study conducted by Batool et al. [33] is considered. The researchers employed a mussel dataset and Poisson inverse Gaussian regression modeling to address the issue of multicollinearity. The mussel dataset, initially introduced by Sepkoski and Rex [34], contains valuable insights into the factors that influence the count of mussel species in coastal rivers within the southeastern United States. This dataset includes 35 observations (after removing outlier values), with a single response variable and six explanatory variables. These variables are as follows: the number of mussel species (y); the area of drainage basins (A, x 1 ); the count of stepping stones to four major species–source river systems, namely the Alabama-Coosa River System (AC, x 2 ), the Apalachicola River (AP, x 3 ), the St. Lawrence River (SL, x 4 ), and the Savannah River (SV, x 5 ); and the natural logarithm of the area of drainage basins (ln(A), x 6 ).
The data show that the mean of y (10.285) is approximately equal to the variance of y (10.327). Anderson–Darling and Pearson chi-square goodness-of-fit tests were conducted to evaluate the response variable’s probability distribution. Both tests indicated that the response variable y followed a Poisson distribution. The computed statistics (p-values) for the Anderson–Darling and Pearson chi-square tests were 0.604 (0.643) and 2.409 (0.996), respectively.
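The Pearson chi-square check described above can be sketched as follows. This is a generic implementation of a chi-square goodness-of-fit test against a Poisson distribution with the mean estimated from the data; the counts below are synthetic stand-ins, since the actual mussel observations are not reproduced in this paper:

```python
import numpy as np
from scipy import stats

def pearson_poisson_gof(y, max_count=None):
    """Pearson chi-square test of H0: y ~ Poisson(lambda), with lambda estimated by the sample mean."""
    y = np.asarray(y)
    lam = y.mean()                       # ML estimate of the Poisson mean
    kmax = int(max_count if max_count is not None else y.max())
    ks = np.arange(kmax + 1)
    # Expected cell counts; the last cell absorbs the upper tail P(Y >= kmax).
    probs = stats.poisson.pmf(ks, lam)
    probs[-1] += stats.poisson.sf(kmax, lam)
    expected = len(y) * probs
    observed = np.array([(y == k).sum() for k in ks])
    observed[-1] = (y >= kmax).sum()
    stat = ((observed - expected) ** 2 / expected).sum()
    dof = len(ks) - 1 - 1                # one extra df lost for the estimated mean
    return stat, stats.chi2.sf(stat, dof)

rng = np.random.default_rng(0)
y = rng.poisson(10.3, size=35)           # synthetic stand-in for the 35 mussel counts
stat, pval = pearson_poisson_gof(y, max_count=20)
```

In practice, low-expected-count cells would be pooled before computing the statistic; the sketch omits this for brevity.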
The correlation plot in Figure 5 provides a visual representation of the data, revealing significant intercorrelation among most of the variables. A careful examination of the plot showed that the variables were highly correlated with each other, which raised concerns about potential multicollinearity issues within the data. To validate the presence of multicollinearity, two commonly used measures were employed: the variance inflation factor (VIF) and the condition number (CN). Both the VIF and CN evaluate the strength of the linear association (correlation) between the predictor variables in a regression model. The CN was calculated by taking the square root of the ratio between the maximum eigenvalue (3.2 × 10^10) and the minimum eigenvalue (0.15984), thus resulting in a value of 445,737. In addition, the VIFs for the explanatory variables were 8.804, 271.929, 132.136, 59.234, 2.369, and 6.423. The correlation matrix, CN, and VIFs collectively indicated the presence of multicollinearity among the explanatory variables.
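The two diagnostics used above can be computed directly from the design matrix. A minimal sketch follows, using a synthetic matrix with two nearly collinear columns (the mussel data themselves are not reproduced here); the CN definition matches the one in the text, the square root of the ratio of the extreme eigenvalues of X'X:

```python
import numpy as np

def multicollinearity_diagnostics(X):
    """Condition number and VIFs for a design matrix X (columns = predictors).

    CN = sqrt(max eigenvalue / min eigenvalue) of X'X, as in the text.
    VIF_j = 1 / (1 - R_j^2), obtained from the diagonal of the inverse
    correlation matrix of the predictors.
    """
    eig = np.linalg.eigvalsh(X.T @ X)
    cn = np.sqrt(eig.max() / eig.min())
    corr = np.corrcoef(X, rowvar=False)
    vif = np.diag(np.linalg.inv(corr))
    return cn, vif

# Synthetic example: the first two predictors are almost identical.
rng = np.random.default_rng(1)
x1 = rng.normal(size=100)
X = np.column_stack([x1, x1 + 0.01 * rng.normal(size=100), rng.normal(size=100)])
cn, vif = multicollinearity_diagnostics(X)
# A large CN and VIFs far above 10 flag severe multicollinearity.
```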
Table 7 provides a summary of the various estimators, including their associated biasing parameters, intercepts, coefficients for predictor variables x 1 through x 6 , and MSEs. Below is a brief overview of the results presented in the table:
  • PMLE: This estimator had an intercept of 2.09378 and coefficient values for each predictor variable ( x 1 through x 6 ). The associated MSE was 6.27306.
  • PRRE: We considered three variations of the PRRE, each with a different biasing parameter— k ^ 1 , k ^ 2 , and k ^ 3 . These estimators had varying intercepts and coefficient values for the predictor variables. The MSE values ranged from 0.15981 to 0.97892.
  • PLE: We considered two variations of the PLE, with biasing parameters d ^ 1 and d ^ 2 . Both variations had identical intercepts and coefficient values for the predictor variables, thereby resulting in an identical MSE of 0.13671.
  • PALE: This estimator, represented by d ^ 0 , had consistent intercept and coefficient values for the predictor variables. The MSE was 0.13671, thus matching the PLE.
  • PMTPLE: We considered five variations of the PMTPLE, denoted by k ^ 1 , d 0 * through k ^ 5 , d 0 * . These estimators had varying intercepts and coefficient values for the predictor variables. The MSE values ranged from 0.01931 to 0.88043.
Overall, the table provides a comprehensive comparison of the different estimation techniques in the context of the mussel data. It allows for an assessment of the biasing parameters, and it shows that the PMLE had the highest MSE. The PLE yielded a lower MSE than the PRRE for all of the PRRE’s biasing parameters, and the PALE matched the PLE’s MSE in this case. The PMTPLE demonstrated the lowest MSE of all the estimators, especially for k ^ 1 , d 0 * , k ^ 2 , d 0 * , and k ^ 3 , d 0 * . The optimal MSE was observed at k ^ 3 , d 0 * for the PMTPLE, which supports the simulation results.

6.2. Recreation Demand Data

An additional practical application is presented to demonstrate the enhanced performance of the proposed estimator. This application’s dataset pertains to recreation demand, a field that has previously been explored by Abonazel et al. [16]. These data, originally introduced by Cameron and Trivedi [35], were employed to estimate a function representing recreation demand. The dataset was collected through a survey conducted in 1980, which gathered information about the number of recreational boating trips taken to Somerville Lake in East Texas. The main focus of our analysis was the response variable denoted as y, which represents the count of recreational boating excursions. This dataset consists of 179 observations (after removing outlier values) and includes three explanatory variables, x 1 , x 2 , and x 3 , which represent the expenditure (measured in USD) associated with visiting Lake Conroe, Lake Somerville, and Lake Houston, respectively.
The analysis of the data indicated that the mean of the response variable (y) was 2.788, which was nearly equal to its variance of 2.864. To evaluate the goodness of fit of the probability distribution of the response variable, two statistical tests were performed: the Anderson–Darling and Pearson chi-square tests. Both tests yielded similar results, indicating that the response variable y closely followed a Poisson distribution; the computed statistics (p-values) were 10.909 (0.164) for the Anderson–Darling test and 4.064 (0.760) for the Pearson chi-square test, thus confirming the compatibility of y with a Poisson distribution. The correlation coefficients between the explanatory variables x 1 : x 2 , x 1 : x 3 , and x 2 : x 3 were 0.97, 0.98, and 0.94, respectively, thereby indicating a strong interrelationship among the variables and raising concerns about potential multicollinearity within the dataset. To confirm the presence of multicollinearity, two commonly used measures were employed: the variance inflation factor (VIF) and the condition number (CN). Both are diagnostic tools for assessing the strength of the linear association among the predictor variables in a regression model. The CN was computed by taking the square root of the ratio between the maximum eigenvalue (1147.54) and the minimum eigenvalue (1.006), thereby resulting in a value of 33.771. Additionally, the VIFs for the explanatory variables were computed as 43.840, 14.619, and 24.636, respectively. Overall, the correlation coefficients, CN, and VIFs provided substantial evidence for the presence of multicollinearity among the explanatory variables.
Table 8 provides an insightful overview of the various estimators and their performance when modeling recreation demand data. To summarize the results, the first estimator, PMLE, yielded an intercept of 1.20525 and coefficients for the three predictor variables ( x 1 , x 2 , and x 3 ), with an associated MSE of 1.22355 and no biasing parameter. The PRRE estimator was divided into three variations ( k ^ 1 , k ^ 2 , and k ^ 3 ), each offering distinct intercepts, coefficients, and MSE values ranging from 0.57949 to 0.92717. The PLE, represented by two variations ( d ^ 1 and d ^ 2 ), maintained consistent intercepts, coefficients, and identical MSE values of 0.54352. The PALE ( d ^ 0 ) also resulted in the same intercepts, coefficients, and MSE as the PLE. The PMTPLE, with five variations ( k ^ 1 , d 0 * through k ^ 5 , d 0 * ), exhibited varying intercepts, coefficients, and MSE values between 0.49054 and 0.60474. In summary, the table serves as a valuable resource for comparing the estimation techniques when using recreation demand data, demonstrating that the PMLE had the highest MSE; the PRRE outperformed the PMLE for all biasing parameters; the PALE matched the PLE’s MSE; and the PMTPLE, particularly k ^ 2 , d 0 * and k ^ 3 , d 0 * , showed the lowest MSE, thus supporting the simulation results and indicating their optimality.
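The ridge-type shrinkage compared in Table 8 can be approximated in practice with an L2-penalized Poisson GLM. The sketch below uses scikit-learn's `PoissonRegressor`, whose penalty weight `alpha` plays a role analogous to (but not identical with) the ridge parameter k; the exact PRRE/PMTPLE formulas are not reproduced here, and the three correlated cost variables are synthetic stand-ins for the survey data:

```python
import numpy as np
from sklearn.linear_model import PoissonRegressor

rng = np.random.default_rng(2)
n = 179
# Three highly correlated predictors, mimicking the structure (not the
# values) of the three lake-expenditure variables.
base = rng.normal(size=n)
X = np.column_stack([base + 0.1 * rng.normal(size=n) for _ in range(3)])
y = rng.poisson(np.exp(0.5 + 0.3 * X[:, 0]))

# Essentially unpenalized fit (stands in for the PMLE) versus an
# L2-penalized fit (ridge-type shrinkage).
mle_like = PoissonRegressor(alpha=1e-6, max_iter=300).fit(X, y)
ridge = PoissonRegressor(alpha=1.0, max_iter=300).fit(X, y)
# The penalty pulls the correlated coefficients toward zero, stabilising
# them at the cost of some bias, which is the trade-off studied above.
```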
Through two applications, we verified the theoretical conditions of Theorems 1–4 and found that all conditions were satisfied on both datasets. This explained the superiority of the proposed PMTPLE estimator over the rest of the estimators.

7. Conclusions

This article introduced a new Liu estimator (PMTPLE) with two shrinkage parameters for the Poisson regression model to address the multicollinearity challenges in the Poisson maximum likelihood estimator (PMLE). The proposed PMTPLE was compared to four existing estimators (the PMLE, PRRE, PLE, and PALE) through a Monte Carlo simulation study and two empirical applications with a focus on evaluating the MSE. This study revealed that higher levels of multicollinearity ( ρ ) led to less accurate parameter estimates, while larger sample sizes (n) resulted in more reliable estimates. Additionally, increasing the number of explanatory variables (p) also increased the estimation errors due to multicollinearity. However, varying the intercept value β 1 from −1 to +1 led to a decrease in the MSE. Both the Monte Carlo simulation and the empirical applications consistently demonstrated the superiority of the proposed PMTPLE over the PMLE, PRRE, PLE, and PALE in various scenarios. Therefore, we strongly recommend that practitioners use the k ^ 3 , d 0 * shrinkage parameter when applying the PMTPLE to the Poisson regression model with significant multicollinearity. This recommendation holds practical significance and highlights the importance of the PMTPLE as the preferred choice for accurate parameter estimation in the presence of multicollinearity-related issues in statistical modeling. Researchers and practitioners are encouraged to consider k ^ 3 , d 0 * as the optimal choice for enhancing parameter estimation accuracy in such scenarios.
In future studies, we will integrate cross-validation and model selection techniques with the proposed PMTPLE, especially in cases where many predictors are involved in the model. Furthermore, to increase the efficiency of the PMTPLE, the generalized cross-validation (GCV) criterion will be used to select the biasing parameters.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/axioms13010046/s1, Table S1: Computed MSE for different estimators p = 9 and β 0 = −1; Table S2: Computed MSE for different estimators p = 9 and β 0 = 0; Table S3: Computed MSE for different estimators p = 9 and β 0 = 1; Table S4: Computed MSE for different estimators p = 12 and β 0 = −1; Table S5: Computed MSE for different estimators p = 12 and β 0 = 0; Table S6: Computed MSE for different estimators p = 12 and β 0 = 1.

Author Contributions

Conceptualization, M.M.A., M.R.A. and A.T.H.; methodology, M.R.A., A.T.H. and A.M.E.-M.; software, M.R.A. and A.T.H.; validation, M.M.A. and M.R.A.; formal analysis, M.M.A., M.R.A. and A.T.H.; investigation, M.M.A., M.R.A. and A.M.E.-M.; resources, M.R.A. and A.M.E.-M.; data curation, A.T.H.; writing—original draft preparation, A.T.H. and A.M.E.-M.; writing—review and editing, M.M.A., M.R.A. and A.T.H.; visualization, A.M.E.-M.; supervision, M.M.A. and M.R.A.; project administration, M.M.A. and M.R.A.; funding acquisition, M.M.A. and M.R.A. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported and funded by the Deanship of Scientific Research at Imam Mohammad Ibn Saud Islamic University (IMSIU) (grant number IMSIU-RP23097).

Data Availability Statement

The datasets used are described within the paper.

Acknowledgments

This work was supported and funded by the Deanship of Scientific Research at Imam Mohammad Ibn Saud Islamic University (IMSIU).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Silva, J.S.; Tenreyro, S. The log of gravity. Rev. Econ. Stat. 2006, 88, 641–658. [Google Scholar] [CrossRef]
  2. Manning, W.G.; Mullahy, J. Estimating log models: To transform or not to transform? J. Health Econ. 2001, 20, 461–494. [Google Scholar] [CrossRef] [PubMed]
  3. Månsson, K.; Shukur, G. A Poisson ridge regression estimator. Econ. Model. 2011, 28, 1475–1481. [Google Scholar] [CrossRef]
  4. Hoerl, A.E.; Kennard, R.W. Ridge regression: Biased estimation for nonorthogonal problems. Technometrics 1970, 12, 55–67. [Google Scholar] [CrossRef]
  5. Alkhamisi, M.; Khalaf, G.; Shukur, G. Some modifications for choosing ridge parameters. Commun. Stat. Theory Methods 2006, 35, 2005–2020. [Google Scholar] [CrossRef]
  6. Alkhamisi, M.A.; Shukur, G. Developing ridge parameters for SUR model. Commun. Stat. Theory Methods 2008, 37, 544–564. [Google Scholar] [CrossRef]
  7. Khalaf, G.; Shukur, G. Choosing ridge parameter for regression problems. Commun. Stat. Theory Methods 2005, 34, 1177–1182. [Google Scholar] [CrossRef]
  8. Kibria, B.G. Performance of some new ridge regression estimators. Commun. Stat. Simul. Comput. 2003, 32, 419–435. [Google Scholar] [CrossRef]
  9. Muniz, G.; Kibria, B.G. On some ridge regression estimators: An empirical comparisons. Commun. Stat. Simul. Comput. 2009, 38, 621–630. [Google Scholar] [CrossRef]
  10. Kejian, L. A new class of biased estimate in linear regression. Commun. Stat. Theory Methods 1993, 22, 393–402. [Google Scholar] [CrossRef]
  11. Månsson, K.; Kibria, B.G.; Sjölander, P.; Shukur, G. Improved Liu estimators for the Poisson regression model. Int. J. Stat. Probab. 2012, 1, 2. [Google Scholar] [CrossRef]
  12. Amin, M.; Akram, M.N.; Kibria, B.G. A new adjusted Liu estimator for the Poisson regression model. Concurr. Comput. Pract. Exp. 2021, 33, e6340. [Google Scholar] [CrossRef]
  13. Lukman, A.F.; Kibria, B.G.; Ayinde, K.; Jegede, S.L. Modified one-parameter Liu estimator for the linear regression model. Model. Simul. Eng. 2020, 2020, 1–17. [Google Scholar] [CrossRef]
  14. Algamal, Z.Y.; Abonazel, M.R. Developing a Liu-type estimator in beta regression model. Concurr. Comput. Pract. Exp. 2022, 34, e6685. [Google Scholar] [CrossRef]
  15. Yang, H.; Chang, X. A new two-parameter estimator in linear regression. Commun. Stat. Theory Methods 2010, 39, 923–934. [Google Scholar] [CrossRef]
  16. Abonazel, M.R.; Awwad, F.A.; Tag Eldin, E.; Kibria, B.G.; Khattab, I.G. Developing a two-parameter Liu estimator for the COM—Poisson regression model: Application and simulation. Front. Appl. Math. Stat. 2023, 9, 956963. [Google Scholar] [CrossRef]
  17. Omara, T.M. Modifying two-parameter ridge Liu estimator based on ridge estimation. Pak. J. Stat. Oper. Res. 2019, 15, 881–890. [Google Scholar] [CrossRef]
  18. Abonazel, M.R.; Algamal, Z.Y.; Awwad, F.A.; Taha, I.M. A new two-parameter estimator for beta regression model: Method, simulation, and application. Front. Appl. Math. Stat. 2022, 7, 780322. [Google Scholar] [CrossRef]
  19. Segerstedt, B. On ordinary ridge regression in generalized linear models. Commun. Stat. Theory Methods 1992, 21, 2227–2246. [Google Scholar] [CrossRef]
  20. Månsson, K.; Kibria, B.G.; Sjölander, P.; Shukur, G.; Sweden, V. New Liu Estimators for the Poisson Regression Model: Method and Application; Technical Report; HUI Research: Stockholm, Sweden, 2011. [Google Scholar]
  21. Abonazel, M.R. New modified two-parameter Liu estimator for the Conway—Maxwell Poisson regression model. J. Stat. Comput. Simul. 2023, 93, 1976–1996. [Google Scholar] [CrossRef]
  22. Trenkler, G.; Toutenburg, H. Mean squared error matrix comparisons between biased estimators—An overview of recent results. Stat. Pap. 1990, 31, 165–179. [Google Scholar] [CrossRef]
  23. Lukman, A.F.; Aladeitan, B.; Ayinde, K.; Abonazel, M.R. Modified ridge-type for the Poisson regression model: Simulation and application. J. Appl. Stat. 2022, 49, 2124–2136. [Google Scholar] [CrossRef] [PubMed]
  24. Aladeitan, B.B.; Adebimpe, O.; Lukman, A.F.; Oludoun, O.; Abiodun, O.E. Modified Kibria-Lukman (MKL) estimator for the Poisson Regression Model: Application and simulation. F1000Research 2021, 10, 548. [Google Scholar] [CrossRef]
  25. Kibria, B.G.; Månsson, K.; Shukur, G. A simulation study of some biasing parameters for the ridge type estimation of Poisson regression. Commun. Stat. Simul. Comput. 2015, 44, 943–957. [Google Scholar] [CrossRef]
  26. Ertan, E.; Akay, K.U. A new class of Poisson Ridge-type estimator. Sci. Rep. 2023, 13, 4968. [Google Scholar] [CrossRef] [PubMed]
  27. Akay, K.U.; Ertan, E. A new improved Liu-type estimator for Poisson regression models. Hacet. J. Math. Stat. 2022, 51, 1484–1503. [Google Scholar] [CrossRef]
  28. Türkan, S.; Özel, G. A new modified Jackknifed estimator for the Poisson regression model. J. Appl. Stat. 2016, 43, 1892–1905. [Google Scholar] [CrossRef]
  29. Qasim, M.; Kibria, B.; Månsson, K.; Sjölander, P. A new Poisson Liu regression estimator: Method and application. J. Appl. Stat. 2020, 47, 2258–2271. [Google Scholar] [CrossRef]
  30. McDonald, G.C.; Galarneau, D.I. A Monte Carlo evaluation of some ridge-type estimators. J. Am. Stat. Assoc. 1975, 70, 407–416. [Google Scholar] [CrossRef]
  31. Farghali, R.A.; Qasim, M.; Kibria, B.G.; Abonazel, M.R. Generalized two-parameter estimators in the multinomial logit regression model: Methods, simulation and application. Commun. Stat. Simul. Comput. 2023, 52, 3327–3342. [Google Scholar] [CrossRef]
  32. Abonazel, M.R.; Taha, I.M. Beta ridge regression estimators: Simulation and application. Commun. Stat. Simul. Comput. 2023, 52, 4280–4292. [Google Scholar] [CrossRef]
  33. Batool, A.; Amin, M.; Elhassanein, A. On the performance of some new ridge parameter estimators in the Poisson-inverse Gaussian ridge regression. Alex. Eng. J. 2023, 70, 231–245. [Google Scholar] [CrossRef]
  34. Sepkoski, J.J., Jr.; Rex, M.A. Distribution of freshwater mussels: Coastal rivers as biogeographic islands. Syst. Biol. 1974, 23, 165–188. [Google Scholar] [CrossRef]
  35. Cameron, A.C.; Trivedi, P.K. Regression Analysis of Count Data; Cambridge University Press: Cambridge, UK, 2013; Volume 53. [Google Scholar]
Figure 1. The RE when applied to different estimators and when classified by sample size.
Figure 2. The RE when applied to different estimators and when classified by correlation between the explanatory variables.
Figure 3. The RE when applied to different estimators and when classified by the number of explanatory variables.
Figure 4. The RE when applied to different estimators and when classified by the intercept value.
Figure 5. Correlation matrix for the six explanatory variables in the mussel dataset.
Table 1. Computed MSE for the different estimators, p = 3 and β 1 = −1.
ρ  n  PMLE  PRRE(k^1)  PRRE(k^2)  PRRE(k^3)  PLE(d^1)  PLE(d^2)  PALE  PMTPLE(k^1,d0*)  PMTPLE(k^2,d0*)  PMTPLE(k^3,d0*)  PMTPLE(k^4,d0*)  PMTPLE(k^5,d0*)
0.85  50   0.67563  0.49253  0.56180  0.27644  0.38134  0.36917  0.36789  0.27391  0.31304  0.25728  0.27598  0.26304
0.85  100  0.21563  0.19059  0.19935  0.15853  0.17618  0.17612  0.17599  0.15769  0.16454  0.14582  0.15531  0.15755
0.85  150  0.12519  0.11485  0.11923  0.10758  0.11057  0.11057  0.11056  0.10309  0.10644  0.10075  0.10316  0.10660
0.85  200  0.09303  0.08927  0.09018  0.08341  0.08617  0.08617  0.08619  0.08347  0.08415  0.08068  0.08252  0.08334
0.85  250  0.07535  0.07235  0.07341  0.06904  0.07060  0.07060  0.07060  0.06839  0.06923  0.06736  0.06814  0.06959
0.85  300  0.06565  0.06348  0.06410  0.06055  0.06194  0.06194  0.06194  0.06033  0.06082  0.05892  0.05989  0.06081
0.85  400  0.03713  0.03636  0.03666  0.03562  0.03593  0.03593  0.03593  0.03540  0.03562  0.03534  0.03540  0.03581
0.90  50   0.60222  0.44454  0.50659  0.29022  0.36222  0.35466  0.35295  0.26536  0.30353  0.25249  0.26896  0.26101
0.90  100  0.25841  0.21998  0.23369  0.18755  0.20016  0.19998  0.19983  0.17182  0.18256  0.15578  0.16867  0.17247
0.90  150  0.19755  0.17716  0.18407  0.15815  0.16576  0.16570  0.16563  0.14947  0.15524  0.13616  0.14643  0.14932
0.90  200  0.08574  0.08042  0.08289  0.07862  0.07916  0.07916  0.07911  0.07497  0.07698  0.07327  0.07512  0.07728
0.90  250  0.07492  0.07149  0.07258  0.06902  0.06973  0.06973  0.06973  0.06676  0.06772  0.06315  0.06590  0.06715
0.90  300  0.05873  0.05630  0.05732  0.05550  0.05556  0.05556  0.05556  0.05350  0.05439  0.05214  0.05334  0.05479
0.90  400  0.04105  0.03990  0.04029  0.03925  0.03937  0.03937  0.03937  0.03838  0.03872  0.03731  0.03814  0.03886
0.95  50   1.78215  1.05867  1.30207  0.30009  0.60241  0.51879  0.51679  0.28278  0.37181  0.29233  0.29500  0.19406
0.95  100  0.56284  0.42416  0.47810  0.29171  0.35681  0.35361  0.35313  0.25844  0.29863  0.23054  0.25796  0.25004
0.95  150  0.29915  0.25237  0.26873  0.21524  0.22922  0.22900  0.22878  0.19247  0.20603  0.16943  0.18743  0.19110
0.95  200  0.21915  0.19498  0.20219  0.17047  0.18077  0.18076  0.18076  0.16060  0.16690  0.14052  0.15506  0.15808
0.95  250  0.17503  0.15909  0.16346  0.14222  0.14950  0.14950  0.14950  0.13594  0.13981  0.12062  0.13138  0.13474
0.95  300  0.16159  0.14822  0.15181  0.13406  0.14021  0.14021  0.14021  0.12866  0.13188  0.11463  0.12455  0.12757
0.95  400  0.08576  0.08187  0.08282  0.07799  0.07944  0.07944  0.07944  0.07595  0.07682  0.07062  0.07442  0.07562
0.99  50   7.82841  3.98900  4.75012  0.22122  0.93033  0.36081  0.36061  0.21201  0.21070  0.19707  0.16603  1.51426
0.99  100  2.44535  1.34157  1.69238  0.26887  0.65236  0.50147  0.50084  0.22091  0.32303  0.25868  0.24074  0.10096
0.99  150  1.36222  0.81568  1.01303  0.35674  0.53125  0.48827  0.48789  0.25109  0.34941  0.26533  0.27042  0.16967
0.99  200  1.00792  0.64721  0.78655  0.36658  0.47747  0.46099  0.46077  0.26361  0.34925  0.25663  0.27779  0.21929
0.99  250  0.80647  0.54583  0.64374  0.34037  0.42061  0.41277  0.41276  0.25668  0.32187  0.23404  0.26079  0.22371
0.99  300  0.75357  0.52857  0.61419  0.33920  0.41702  0.40992  0.40992  0.26792  0.32745  0.24116  0.26984  0.24271
0.99  400  0.40385  0.32501  0.35402  0.26892  0.28710  0.28663  0.28663  0.22520  0.24930  0.19351  0.21939  0.22089
Table 2. Computed MSE for the different estimators, p = 3 and β 1 = 0.
ρ  n  PMLE  PRRE(k^1)  PRRE(k^2)  PRRE(k^3)  PLE(d^1)  PLE(d^2)  PALE  PMTPLE(k^1,d0*)  PMTPLE(k^2,d0*)  PMTPLE(k^3,d0*)  PMTPLE(k^4,d0*)  PMTPLE(k^5,d0*)
0.85  50   0.23444  0.18802  0.20481  0.20504  0.19318  0.19314  0.18613  0.15169  0.16770  0.12830  0.14820  0.16812
0.85  100  0.07919  0.07386  0.07516  0.07599  0.07451  0.07451  0.07384  0.06940  0.07068  0.06188  0.06725  0.07151
0.85  150  0.04793  0.04591  0.04636  0.04691  0.04616  0.04616  0.04590  0.04421  0.04466  0.04097  0.04326  0.04520
0.85  200  0.03396  0.03301  0.03321  0.03347  0.03314  0.03314  0.03301  0.03221  0.03241  0.03055  0.03173  0.03262
0.85  250  0.02696  0.02630  0.02645  0.02663  0.02640  0.02640  0.02630  0.02576  0.02591  0.02464  0.02544  0.02611
0.85  300  0.02403  0.02352  0.02363  0.02377  0.02359  0.02359  0.02352  0.02310  0.02321  0.02218  0.02284  0.02333
0.85  400  0.01378  0.01362  0.01365  0.01370  0.01364  0.01364  0.01362  0.01349  0.01351  0.01318  0.01339  0.01358
0.90  50   0.20725  0.17038  0.18223  0.18547  0.17414  0.17414  0.16989  0.14086  0.15225  0.11652  0.13522  0.15259
0.90  100  0.08739  0.07921  0.08120  0.08316  0.08006  0.08006  0.07915  0.07232  0.07428  0.06235  0.06923  0.07532
0.90  150  0.06653  0.06250  0.06336  0.06446  0.06292  0.06292  0.06248  0.05904  0.05989  0.05292  0.05714  0.06064
0.90  200  0.03367  0.03274  0.03287  0.03318  0.03280  0.03280  0.03274  0.03190  0.03202  0.03001  0.03129  0.03229
0.90  250  0.02643  0.02580  0.02587  0.02611  0.02584  0.02584  0.02580  0.02523  0.02530  0.02384  0.02478  0.02545
0.90  300  0.02054  0.02016  0.02021  0.02036  0.02019  0.02019  0.02016  0.01982  0.01986  0.01897  0.01954  0.02000
0.90  400  0.01513  0.01491  0.01494  0.01503  0.01493  0.01493  0.01491  0.01472  0.01474  0.01422  0.01456  0.01482
0.95  50   0.60222  0.37357  0.45932  0.40730  0.36263  0.36212  0.34440  0.19653  0.26673  0.18277  0.20958  0.23705
0.95  100  0.19604  0.16306  0.17314  0.17548  0.16599  0.16599  0.16253  0.13572  0.14554  0.11000  0.12915  0.14732
0.95  150  0.10857  0.09741  0.10019  0.10288  0.09863  0.09863  0.09734  0.08802  0.09076  0.07459  0.08395  0.09198
0.95  200  0.07946  0.07352  0.07491  0.07641  0.07423  0.07423  0.07349  0.06849  0.06987  0.05999  0.06595  0.07077
0.95  250  0.06981  0.06537  0.06641  0.06756  0.06596  0.06596  0.06535  0.06166  0.06269  0.05501  0.05969  0.06348
0.95  300  0.05926  0.05584  0.05660  0.05752  0.05629  0.05629  0.05583  0.05297  0.05373  0.04757  0.05137  0.05435
0.95  400  0.03273  0.03177  0.03193  0.03226  0.03187  0.03187  0.03177  0.03093  0.03108  0.02899  0.03033  0.03129
0.99  50   2.90423  1.51650  1.83488  0.41766  0.65613  0.51551  0.50512  0.23699  0.29268  0.22018  0.21521  0.10493
0.99  100  0.92609  0.53679  0.67077  0.50122  0.45370  0.44848  0.43317  0.20899  0.30452  0.19153  0.22778  0.22738
0.99  150  0.51184  0.33516  0.40087  0.38637  0.32999  0.32965  0.31630  0.19304  0.24986  0.16783  0.19875  0.23356
0.99  200  0.35921  0.25431  0.29190  0.29619  0.26033  0.26033  0.25014  0.17194  0.20673  0.14042  0.16928  0.20104
0.99  250  0.29711  0.21978  0.24665  0.25231  0.22602  0.22602  0.21714  0.15876  0.18423  0.12803  0.15371  0.18264
0.99  300  0.26960  0.20616  0.22816  0.23364  0.21196  0.21196  0.20448  0.15572  0.17678  0.12623  0.15029  0.17711
0.99  400  0.13770  0.11897  0.12393  0.12817  0.12082  0.12082  0.11883  0.10323  0.10813  0.08401  0.09745  0.10979
Table 3. Computed MSE for the different estimators, p = 3 and β 1 = 1.
ρ  n  PMLE  PRRE(k^1)  PRRE(k^2)  PRRE(k^3)  PLE(d^1)  PLE(d^2)  PALE  PMTPLE(k^1,d0*)  PMTPLE(k^2,d0*)  PMTPLE(k^3,d0*)  PMTPLE(k^4,d0*)  PMTPLE(k^5,d0*)
0.85  50   0.08701  0.08415  0.08474  0.08428  0.08228  0.08228  0.08228  0.07962  0.08019  0.07507  0.07823  0.07947
0.85  100  0.02843  0.02816  0.02822  0.02819  0.02799  0.02799  0.02799  0.02774  0.02780  0.02727  0.02761  0.02773
0.85  150  0.01670  0.01660  0.01662  0.01662  0.01653  0.01653  0.01653  0.01644  0.01646  0.01626  0.01639  0.01648
0.85  200  0.01203  0.01198  0.01199  0.01199  0.01195  0.01195  0.01195  0.01191  0.01192  0.01182  0.01188  0.01190
0.85  250  0.01004  0.01001  0.01002  0.01002  0.00999  0.00999  0.00999  0.00996  0.00996  0.00990  0.00994  0.00996
0.85  300  0.00915  0.00912  0.00913  0.00913  0.00910  0.00910  0.00910  0.00908  0.00908  0.00903  0.00906  0.00908
0.85  400  0.00508  0.00507  0.00507  0.00507  0.00507  0.00507  0.00507  0.00506  0.00506  0.00505  0.00506  0.00506
0.90  50   0.07791  0.07528  0.07592  0.07572  0.07375  0.07375  0.07375  0.07130  0.07190  0.06729  0.07018  0.07096
0.90  100  0.03374  0.03317  0.03330  0.03331  0.03282  0.03282  0.03282  0.03228  0.03240  0.03126  0.03199  0.03228
0.90  150  0.02611  0.02581  0.02588  0.02590  0.02564  0.02564  0.02564  0.02536  0.02542  0.02480  0.02521  0.02530
0.90  200  0.01143  0.01137  0.01139  0.01139  0.01134  0.01134  0.01134  0.01129  0.01130  0.01120  0.01127  0.01132
0.90  250  0.01052  0.01047  0.01049  0.01049  0.01045  0.01045  0.01045  0.01041  0.01042  0.01033  0.01039  0.01041
0.90  300  0.00773  0.00771  0.00772  0.00772  0.00770  0.00770  0.00770  0.00767  0.00768  0.00763  0.00766  0.00768
0.90  400  0.00613  0.00612  0.00612  0.00612  0.00611  0.00611  0.00611  0.00610  0.00610  0.00607  0.00609  0.00610
0.95  50   0.22924  0.20445  0.21130  0.20646  0.19009  0.19009  0.19009  0.16878  0.17493  0.14596  0.16179  0.16588
0.95  100  0.07318  0.07065  0.07124  0.07099  0.06916  0.06916  0.06916  0.06675  0.06732  0.06257  0.06557  0.06642
0.95  150  0.04282  0.04196  0.04214  0.04216  0.04143  0.04143  0.04143  0.04060  0.04077  0.03900  0.04014  0.04050
0.95  200  0.02881  0.02840  0.02848  0.02849  0.02814  0.02814  0.02814  0.02775  0.02783  0.02694  0.02752  0.02768
0.95  250  0.02430  0.02402  0.02407  0.02407  0.02384  0.02384  0.02384  0.02357  0.02362  0.02300  0.02340  0.02352
0.95  300  0.02043  0.02022  0.02025  0.02025  0.02008  0.02008  0.02008  0.01987  0.01991  0.01943  0.01974  0.01984
0.95  400  0.01185  0.01178  0.01180  0.01180  0.01174  0.01174  0.01174  0.01168  0.01169  0.01155  0.01164  0.01167
0.99  50   1.06803  0.67705  0.82267  0.62114  0.48609  0.46723  0.46723  0.26336  0.34949  0.25867  0.27559  0.20956
0.99  100  0.33953  0.28229  0.30176  0.29023  0.25408  0.25405  0.25405  0.20759  0.22432  0.17519  0.19984  0.20337
0.99  150  0.18695  0.16795  0.17346  0.17254  0.15746  0.15746  0.15746  0.14072  0.14576  0.12185  0.13537  0.13851
0.99  200  0.13293  0.12331  0.12575  0.12531  0.11757  0.11757  0.11757  0.10877  0.11107  0.09651  0.10511  0.10789
0.99  250  0.10748  0.10102  0.10250  0.10225  0.09698  0.09698  0.09698  0.09098  0.09239  0.08154  0.08813  0.08990
0.99  300  0.10041  0.09500  0.09624  0.09598  0.09159  0.09159  0.09159  0.08654  0.08772  0.07838  0.08410  0.08582
0.99  400  0.04886  0.04741  0.04773  0.04782  0.04654  0.04654  0.04654  0.04514  0.04545  0.04257  0.04441  0.04503
Table 4. Computed MSE for the different estimators, p = 6 and β 1 = −1.
ρ  n  PMLE  PRRE(k^1)  PRRE(k^2)  PRRE(k^3)  PLE(d^1)  PLE(d^2)  PALE  PMTPLE(k^1,d0*)  PMTPLE(k^2,d0*)  PMTPLE(k^3,d0*)  PMTPLE(k^4,d0*)  PMTPLE(k^5,d0*)
0.85  50   0.80197  0.57417  0.68879  0.45770  0.49293  0.49212  0.48369  0.36389  0.43277  0.31065  0.33939  0.34642
0.85  100  0.22659  0.20213  0.21501  0.20324  0.19864  0.19864  0.19746  0.17866  0.18947  0.15702  0.17117  0.17976
0.85  150  0.14479  0.13280  0.13985  0.13615  0.13310  0.13310  0.13164  0.12328  0.12915  0.11528  0.12126  0.12756
0.85  200  0.07315  0.06990  0.07198  0.07134  0.07036  0.07036  0.06979  0.06784  0.06946  0.06669  0.06775  0.06954
0.85  250  0.05306  0.05108  0.05232  0.05200  0.05131  0.05131  0.05102  0.04984  0.05075  0.04913  0.04973  0.05038
0.85  300  0.03916  0.03820  0.03884  0.03872  0.03838  0.03838  0.03820  0.03765  0.03813  0.03749  0.03770  0.03810
0.85  400  0.03175  0.03097  0.03151  0.03149  0.03121  0.03121  0.03097  0.03056  0.03101  0.03040  0.03062  0.03116
0.90  50   1.12141  0.76514  0.93949  0.55861  0.61347  0.61154  0.59975  0.42040  0.52349  0.35882  0.38761  0.38033
0.90  100  0.29695  0.25809  0.27788  0.25954  0.25113  0.25113  0.24901  0.21869  0.23584  0.18261  0.20498  0.21982
0.90  150  0.18946  0.17102  0.18126  0.17548  0.17053  0.17053  0.16850  0.15471  0.16374  0.13882  0.14961  0.15908
0.90  200  0.09711  0.09123  0.09512  0.09452  0.09260  0.09260  0.09114  0.08754  0.09093  0.08428  0.08740  0.09052
0.90  250  0.07230  0.06897  0.07116  0.07087  0.06967  0.06967  0.06895  0.06687  0.06874  0.06515  0.06681  0.06819
0.90  300  0.05186  0.05018  0.05131  0.05125  0.05063  0.05063  0.05017  0.04919  0.05017  0.04836  0.04922  0.05010
0.90  400  0.04217  0.04090  0.04176  0.04176  0.04127  0.04127  0.04089  0.04013  0.04091  0.03928  0.04010  0.04095
0.95  50   2.85687  1.75849  2.22230  0.70323  0.96611  0.93172  0.92351  0.58271  0.75047  0.53481  0.52713  0.45352
0.95  100  0.74779  0.56633  0.66820  0.53800  0.54028  0.54025  0.53167  0.39950  0.48332  0.32826  0.37892  0.39633
0.95  150  0.31988  0.28327  0.30162  0.28705  0.27726  0.27726  0.27633  0.24573  0.26206  0.20753  0.23073  0.24526
0.95  200  0.24748  0.22899  0.23656  0.22655  0.22247  0.22247  0.22238  0.20577  0.21278  0.17208  0.19171  0.20201
0.95  250  0.20025  0.18557  0.19291  0.18731  0.18376  0.18376  0.18356  0.17028  0.17716  0.14818  0.16261  0.17105
0.95  300  0.18864  0.17507  0.18196  0.17711  0.17370  0.17370  0.17348  0.16117  0.16766  0.14071  0.15420  0.16179
0.95  400  0.09557  0.09271  0.09385  0.09290  0.09189  0.09189  0.09189  0.08921  0.09029  0.08241  0.08661  0.08905
0.99  50   13.77855  7.86315  9.75343  1.40175  1.57778  1.03150  1.03115  0.64196  0.79388  0.66938  0.56857  0.69815
0.99  100  3.52336  2.14009  2.79976  0.86163  1.20927  1.15367  1.15129  0.62455  0.91354  0.58597  0.59507  0.26733
0.99  150  1.46477  1.01114  1.24147  0.90276  0.83549  0.83476  0.83300  0.54535  0.70626  0.45521  0.49879  0.44746
0.99  200  1.12699  0.83745  0.98329  0.77492  0.72988  0.72981  0.72923  0.52049  0.63366  0.39522  0.46612  0.45326
0.99  250  0.94826  0.73310  0.84070  0.69584  0.66053  0.66053  0.66008  0.49464  0.58280  0.36218  0.44110  0.44895
0.99  300  0.87367  0.68528  0.77701  0.64652  0.61715  0.61715  0.61708  0.47137  0.54672  0.34603  0.41658  0.42715
0.99  400  0.41535  0.36723  0.38799  0.36640  0.35024  0.35024  0.35024  0.30768  0.32671  0.23729  0.27760  0.29651
Table 5. Computed MSE for the difference estimators p = 6 and β 1 = 0 .
Table 5. Computed MSE for the difference estimators p = 6 and β 1 = 0 .
| ρ | n | PMLE | PRRE k̂1 | PRRE k̂2 | PRRE k̂3 | PLE d̂1 | PLE d̂2 | PALE | PMTPLE k̂1, d0* | PMTPLE k̂2, d0* | PMTPLE k̂3, d0* | PMTPLE k̂4, d0* | PMTPLE k̂5, d0* |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0.85 | 50 | 0.28086 | 0.23937 | 0.25257 | 0.25645 | 0.24056 | 0.24056 | 0.23835 | 0.20345 | 0.21592 | 0.14606 | 0.16879 | 0.20730 |
| | 100 | 0.07930 | 0.07608 | 0.07661 | 0.07798 | 0.07625 | 0.07625 | 0.07608 | 0.07312 | 0.07365 | 0.06171 | 0.06773 | 0.07359 |
| | 150 | 0.05237 | 0.05098 | 0.05117 | 0.05182 | 0.05106 | 0.05106 | 0.05098 | 0.04970 | 0.04990 | 0.04425 | 0.04719 | 0.04994 |
| | 200 | 0.02576 | 0.02546 | 0.02548 | 0.02565 | 0.02547 | 0.02547 | 0.02546 | 0.02517 | 0.02520 | 0.02372 | 0.02454 | 0.02521 |
| | 250 | 0.01995 | 0.01976 | 0.01978 | 0.01988 | 0.01977 | 0.01977 | 0.01976 | 0.01958 | 0.01960 | 0.01866 | 0.01918 | 0.01961 |
| | 300 | 0.01449 | 0.01440 | 0.01441 | 0.01446 | 0.01440 | 0.01440 | 0.01440 | 0.01431 | 0.01432 | 0.01385 | 0.01412 | 0.01433 |
| | 400 | 0.01137 | 0.01131 | 0.01131 | 0.01135 | 0.01131 | 0.01131 | 0.01131 | 0.01125 | 0.01125 | 0.01092 | 0.01111 | 0.01127 |
| 0.90 | 50 | 0.39222 | 0.31824 | 0.34398 | 0.34611 | 0.31887 | 0.31887 | 0.31544 | 0.25500 | 0.27880 | 0.18136 | 0.20674 | 0.26062 |
| | 100 | 0.11072 | 0.10457 | 0.10573 | 0.10825 | 0.10487 | 0.10487 | 0.10456 | 0.09893 | 0.10008 | 0.07967 | 0.08944 | 0.09978 |
| | 150 | 0.07117 | 0.06865 | 0.06903 | 0.07020 | 0.06877 | 0.06877 | 0.06864 | 0.06631 | 0.06669 | 0.05714 | 0.06192 | 0.06670 |
| | 200 | 0.03428 | 0.03375 | 0.03379 | 0.03409 | 0.03376 | 0.03376 | 0.03375 | 0.03323 | 0.03327 | 0.03067 | 0.03211 | 0.03331 |
| | 250 | 0.02650 | 0.02619 | 0.02621 | 0.02639 | 0.02620 | 0.02620 | 0.02619 | 0.02589 | 0.02591 | 0.02437 | 0.02523 | 0.02593 |
| | 300 | 0.01859 | 0.01845 | 0.01845 | 0.01854 | 0.01845 | 0.01845 | 0.01845 | 0.01830 | 0.01831 | 0.01755 | 0.01798 | 0.01833 |
| | 400 | 0.01513 | 0.01502 | 0.01502 | 0.01509 | 0.01502 | 0.01502 | 0.01502 | 0.01491 | 0.01491 | 0.01433 | 0.01466 | 0.01493 |
| 0.95 | 50 | 1.00552 | 0.67187 | 0.81047 | 0.73113 | 0.63010 | 0.63010 | 0.61868 | 0.39638 | 0.50501 | 0.31232 | 0.32656 | 0.38278 |
| | 100 | 0.27364 | 0.24175 | 0.25108 | 0.25643 | 0.24316 | 0.24316 | 0.24164 | 0.21341 | 0.22251 | 0.15479 | 0.18131 | 0.21736 |
| | 150 | 0.11715 | 0.11097 | 0.11212 | 0.11475 | 0.11127 | 0.11127 | 0.11096 | 0.10528 | 0.10643 | 0.08599 | 0.09569 | 0.10612 |
| | 200 | 0.09170 | 0.08814 | 0.08872 | 0.09028 | 0.08833 | 0.08833 | 0.08813 | 0.08484 | 0.08542 | 0.07176 | 0.07878 | 0.08528 |
| | 250 | 0.07554 | 0.07317 | 0.07352 | 0.07459 | 0.07331 | 0.07331 | 0.07317 | 0.07098 | 0.07133 | 0.06155 | 0.06674 | 0.07135 |
| | 300 | 0.07209 | 0.06993 | 0.07024 | 0.07122 | 0.07005 | 0.07005 | 0.06993 | 0.06793 | 0.06824 | 0.05924 | 0.06400 | 0.06829 |
| | 400 | 0.03427 | 0.03378 | 0.03383 | 0.03409 | 0.03380 | 0.03380 | 0.03378 | 0.03332 | 0.03337 | 0.03101 | 0.03232 | 0.03339 |
| 0.99 | 50 | 4.75108 | 2.69461 | 3.39960 | 1.10682 | 1.17844 | 1.10666 | 1.10119 | 0.60571 | 0.81190 | 0.57948 | 0.51748 | 0.43081 |
| | 100 | 1.25466 | 0.83275 | 1.02009 | 0.90362 | 0.76519 | 0.76498 | 0.75562 | 0.46510 | 0.61340 | 0.36067 | 0.39647 | 0.43821 |
| | 150 | 0.53003 | 0.42140 | 0.46291 | 0.48222 | 0.42280 | 0.42280 | 0.41828 | 0.32837 | 0.36704 | 0.23605 | 0.26959 | 0.33750 |
| | 200 | 0.40430 | 0.33756 | 0.36083 | 0.37631 | 0.34013 | 0.34013 | 0.33697 | 0.27955 | 0.30190 | 0.19201 | 0.23080 | 0.28718 |
| | 250 | 0.34735 | 0.29931 | 0.31500 | 0.32730 | 0.30161 | 0.30161 | 0.29907 | 0.25704 | 0.27231 | 0.17664 | 0.21539 | 0.26301 |
| | 300 | 0.31563 | 0.27373 | 0.28676 | 0.29818 | 0.27579 | 0.27579 | 0.27352 | 0.23685 | 0.24955 | 0.16429 | 0.19818 | 0.24186 |
| | 400 | 0.15906 | 0.14864 | 0.15084 | 0.15536 | 0.14910 | 0.14910 | 0.14863 | 0.13903 | 0.14120 | 0.10799 | 0.12401 | 0.14032 |
Table 6. Computed MSE for the different estimators with p = 6 and β1 = 1.
| ρ | n | PMLE | PRRE k̂1 | PRRE k̂2 | PRRE k̂3 | PLE d̂1 | PLE d̂2 | PALE | PMTPLE k̂1, d0* | PMTPLE k̂2, d0* | PMTPLE k̂3, d0* | PMTPLE k̂4, d0* | PMTPLE k̂5, d0* |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0.85 | 50 | 0.10046 | 0.09758 | 0.09850 | 0.09872 | 0.09638 | 0.09638 | 0.09638 | 0.09368 | 0.09456 | 0.08595 | 0.09036 | 0.09287 |
| | 100 | 0.03002 | 0.02980 | 0.02987 | 0.02992 | 0.02971 | 0.02971 | 0.02971 | 0.02951 | 0.02958 | 0.02887 | 0.02926 | 0.02948 |
| | 150 | 0.01915 | 0.01907 | 0.01910 | 0.01912 | 0.01903 | 0.01903 | 0.01903 | 0.01895 | 0.01898 | 0.01871 | 0.01885 | 0.01896 |
| | 200 | 0.00974 | 0.00972 | 0.00972 | 0.00973 | 0.00971 | 0.00971 | 0.00971 | 0.00969 | 0.00970 | 0.00965 | 0.00968 | 0.00970 |
| | 250 | 0.00756 | 0.00754 | 0.00755 | 0.00755 | 0.00754 | 0.00754 | 0.00754 | 0.00753 | 0.00754 | 0.00752 | 0.00753 | 0.00753 |
| | 300 | 0.00536 | 0.00535 | 0.00535 | 0.00535 | 0.00535 | 0.00535 | 0.00535 | 0.00534 | 0.00535 | 0.00534 | 0.00534 | 0.00535 |
| | 400 | 0.00439 | 0.00439 | 0.00439 | 0.00439 | 0.00439 | 0.00439 | 0.00439 | 0.00438 | 0.00438 | 0.00438 | 0.00438 | 0.00439 |
| 0.90 | 50 | 0.14216 | 0.13574 | 0.13791 | 0.13835 | 0.13314 | 0.13314 | 0.13314 | 0.12716 | 0.12923 | 0.11181 | 0.12031 | 0.12570 |
| | 100 | 0.03968 | 0.03922 | 0.03936 | 0.03948 | 0.03902 | 0.03902 | 0.03902 | 0.03858 | 0.03872 | 0.03708 | 0.03799 | 0.03854 |
| | 150 | 0.02554 | 0.02535 | 0.02541 | 0.02546 | 0.02527 | 0.02527 | 0.02527 | 0.02509 | 0.02515 | 0.02448 | 0.02485 | 0.02508 |
| | 200 | 0.01316 | 0.01312 | 0.01314 | 0.01315 | 0.01311 | 0.01311 | 0.01311 | 0.01307 | 0.01308 | 0.01295 | 0.01302 | 0.01308 |
| | 250 | 0.00980 | 0.00977 | 0.00978 | 0.00979 | 0.00977 | 0.00977 | 0.00977 | 0.00974 | 0.00975 | 0.00968 | 0.00972 | 0.00975 |
| | 300 | 0.00700 | 0.00699 | 0.00699 | 0.00700 | 0.00698 | 0.00698 | 0.00698 | 0.00697 | 0.00698 | 0.00694 | 0.00696 | 0.00698 |
| | 400 | 0.00551 | 0.00550 | 0.00550 | 0.00550 | 0.00550 | 0.00550 | 0.00550 | 0.00549 | 0.00549 | 0.00547 | 0.00548 | 0.00550 |
| 0.95 | 50 | 0.35674 | 0.31622 | 0.33179 | 0.33064 | 0.29985 | 0.29985 | 0.29985 | 0.26510 | 0.27906 | 0.20885 | 0.23590 | 0.25514 |
| | 100 | 0.09696 | 0.09419 | 0.09511 | 0.09537 | 0.09309 | 0.09309 | 0.09309 | 0.09043 | 0.09133 | 0.08250 | 0.08719 | 0.08994 |
| | 150 | 0.04210 | 0.04160 | 0.04176 | 0.04188 | 0.04139 | 0.04139 | 0.04139 | 0.04090 | 0.04105 | 0.03926 | 0.04025 | 0.04083 |
| | 200 | 0.03497 | 0.03467 | 0.03475 | 0.03483 | 0.03453 | 0.03453 | 0.03453 | 0.03424 | 0.03432 | 0.03316 | 0.03382 | 0.03415 |
| | 250 | 0.02700 | 0.02681 | 0.02687 | 0.02691 | 0.02673 | 0.02673 | 0.02673 | 0.02654 | 0.02660 | 0.02585 | 0.02628 | 0.02649 |
| | 300 | 0.02463 | 0.02447 | 0.02452 | 0.02456 | 0.02440 | 0.02440 | 0.02440 | 0.02424 | 0.02428 | 0.02363 | 0.02400 | 0.02419 |
| | 400 | 0.01224 | 0.01220 | 0.01221 | 0.01222 | 0.01218 | 0.01218 | 0.01218 | 0.01214 | 0.01215 | 0.01200 | 0.01209 | 0.01214 |
| 0.99 | 50 | 1.69802 | 1.11129 | 1.38139 | 1.14727 | 0.83216 | 0.83040 | 0.83040 | 0.51670 | 0.67948 | 0.43894 | 0.45385 | 0.36404 |
| | 100 | 0.45699 | 0.39560 | 0.42234 | 0.42134 | 0.37421 | 0.37421 | 0.37421 | 0.32145 | 0.34541 | 0.24669 | 0.28641 | 0.30745 |
| | 150 | 0.18779 | 0.17653 | 0.18058 | 0.18293 | 0.17205 | 0.17205 | 0.17205 | 0.16156 | 0.16541 | 0.13635 | 0.15043 | 0.15828 |
| | 200 | 0.14978 | 0.14319 | 0.14542 | 0.14689 | 0.14042 | 0.14042 | 0.14042 | 0.13414 | 0.13630 | 0.11644 | 0.12675 | 0.13213 |
| | 250 | 0.12425 | 0.11984 | 0.12127 | 0.12222 | 0.11791 | 0.11791 | 0.11791 | 0.11366 | 0.11506 | 0.10064 | 0.10836 | 0.11229 |
| | 300 | 0.11627 | 0.11235 | 0.11360 | 0.11446 | 0.11062 | 0.11062 | 0.11062 | 0.10684 | 0.10806 | 0.09510 | 0.10203 | 0.10561 |
| | 400 | 0.05642 | 0.05549 | 0.05578 | 0.05607 | 0.05511 | 0.05511 | 0.05511 | 0.05419 | 0.05448 | 0.05104 | 0.05297 | 0.05393 |
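The simulation design behind these tables can be sketched in a few lines: draw an equicorrelated design with a chosen ρ, generate Poisson counts, fit the PMLE, and then shrink it. The sketch below is illustrative only: the function names, the fixed (k, d) values, and the coefficient vector are assumptions, and the shrinkage uses the generic two-parameter Liu-type form (X'ŴX + kI)⁻¹(X'ŴX − dI)β̂_PMLE from the Liu literature; the paper's PMTPLE belongs to this family but defines its own data-driven choices k̂1–k̂5 and d0*.

```python
import numpy as np

def fit_poisson_mle(X, y, iters=50):
    """Poisson MLE via Newton-Raphson / IRLS (log link, no intercept)."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        mu = np.exp(np.clip(X @ beta, -30, 30))   # clip for numerical safety
        XtWX = X.T @ (mu[:, None] * X)            # observed/Fisher information
        beta = beta + np.linalg.solve(XtWX, X.T @ (y - mu))
    return beta

def two_parameter_liu(X, beta_mle, k, d):
    """Generic two-parameter Liu-type shrinkage of a Poisson MLE (assumed form):
    (X'WX + kI)^{-1} (X'WX - dI) beta_mle."""
    mu = np.exp(np.clip(X @ beta_mle, -30, 30))
    S = X.T @ (mu[:, None] * X)
    p = X.shape[1]
    return np.linalg.solve(S + k * np.eye(p), (S - d * np.eye(p)) @ beta_mle)

# One illustrative simulation cell (p = 6, rho = 0.95, n = 200).
rng = np.random.default_rng(42)
n, p, rho = 200, 6, 0.95
Sigma = np.full((p, p), rho) + (1.0 - rho) * np.eye(p)  # equicorrelated design
X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)
beta_true = np.full(p, 0.1)   # weak signal chosen here for stability; the
                              # paper's simulation uses its own coefficients
y = rng.poisson(np.exp(X @ beta_true))

beta_mle = fit_poisson_mle(X, y)
beta_liu = two_parameter_liu(X, beta_mle, k=0.5, d=0.1)  # arbitrary (k, d)

mse = lambda b: float(np.sum((b - beta_true) ** 2))
print(mse(beta_mle), mse(beta_liu))
```

Averaging `mse(...)` over many replications of this loop, for each (ρ, n) cell and each candidate (k, d) rule, is what produces entries like those in Tables 5 and 6.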
Table 7. The coefficients and MSE for different estimators in the mussel dataset.
| Estimator | Biasing parameter | Intercept | x1 | x2 | x3 | x4 | x5 | x6 | MSE |
|---|---|---|---|---|---|---|---|---|---|
| PMLE | - | 2.09378 | 0.00000 | −0.10527 | 0.06779 | −0.03462 | 0.00105 | 0.21088 | 6.27306 |
| PRRE | k̂1 | 0.86573 | −0.00001 | −0.06838 | 0.04946 | −0.01300 | 0.00185 | 0.26679 | 0.95335 |
| | k̂2 | 0.87691 | −0.00001 | −0.06872 | 0.04963 | −0.01320 | 0.00184 | 0.26629 | 0.97892 |
| | k̂3 | 0.34802 | −0.00001 | −0.05218 | 0.04128 | −0.00365 | 0.00218 | 0.28910 | 0.15981 |
| PLE | d̂1 | 0.29910 | −0.00001 | −0.05051 | 0.04041 | −0.00271 | 0.00221 | 0.29093 | 0.13671 |
| | d̂2 | 0.29910 | −0.00001 | −0.05051 | 0.04041 | −0.00271 | 0.00221 | 0.29093 | 0.13671 |
| PALE | d̂0 | 0.29910 | −0.00001 | −0.05051 | 0.04041 | −0.00271 | 0.00221 | 0.29093 | 0.13671 |
| PMTPLE | k̂1, d0* | −0.11367 | −0.00001 | −0.03791 | 0.03411 | 0.00463 | 0.00247 | 0.30934 | 0.04146 |
| | k̂2, d0* | −0.10463 | −0.00001 | −0.03819 | 0.03424 | 0.00447 | 0.00247 | 0.30893 | 0.03827 |
| | k̂3, d0* | 0.03867 | −0.00001 | −0.04256 | 0.03643 | 0.00192 | 0.00237 | 0.30254 | 0.01931 |
| | k̂4, d0* | −0.75974 | −0.00002 | −0.01820 | 0.02425 | 0.01612 | 0.00289 | 0.33815 | 0.88043 |
| | k̂5, d0* | −0.52938 | −0.00002 | −0.02523 | 0.02776 | 0.01202 | 0.00274 | 0.32788 | 0.44298 |
Table 8. The coefficients and MSE for the different estimators when using the recreation demand data.
| Estimator | Biasing parameter | Intercept | x1 | x2 | x3 | MSE |
|---|---|---|---|---|---|---|
| PMLE | - | 1.20525 | 0.39634 | 0.12524 | −0.80482 | 1.22355 |
| PRRE | k̂1 | 1.19475 | 0.13945 | 0.15493 | −0.56726 | 0.57949 |
| | k̂2 | 1.19993 | 0.23078 | 0.15145 | −0.66065 | 0.92717 |
| | k̂3 | 1.19795 | 0.19001 | 0.15443 | −0.62082 | 0.76366 |
| PLE | d̂1 | 1.19393 | 0.12888 | 0.15445 | −0.55530 | 0.54352 |
| | d̂2 | 1.19393 | 0.12888 | 0.15445 | −0.55530 | 0.54352 |
| PALE | d̂0 | 1.19393 | 0.12888 | 0.15445 | −0.55530 | 0.54352 |
| PMTPLE | k̂1, d0* | 1.18350 | −0.11742 | 0.18135 | −0.32552 | 0.60474 |
| | k̂2, d0* | 1.18895 | 0.01122 | 0.16730 | −0.44553 | 0.49054 |
| | k̂3, d0* | 1.18884 | 0.00859 | 0.16759 | −0.44307 | 0.49107 |
| | k̂4, d0* | 1.18642 | −0.04861 | 0.17384 | −0.38970 | 0.52129 |
| | k̂5, d0* | 1.18608 | −0.05654 | 0.17470 | −0.38231 | 0.52828 |
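Shrinkage pays off in both applications because the design matrices are ill-conditioned. A quick way to verify this on any dataset before choosing an estimator is the condition index of the column-scaled cross-product matrix; this is a standard rule-of-thumb diagnostic, not a procedure taken from the paper, and the data below are synthetic.

```python
import numpy as np

def condition_index(X):
    """Square root of the ratio of the largest to smallest eigenvalue of the
    column-equilibrated X'X; values above roughly 30 are a common rule-of-thumb
    signal of harmful multicollinearity."""
    Z = X / np.linalg.norm(X, axis=0)      # scale each column to unit length
    eig = np.linalg.eigvalsh(Z.T @ Z)      # eigenvalues in ascending order
    return float(np.sqrt(eig[-1] / eig[0]))

# Synthetic demonstration: one nearly collinear design, one independent design.
rng = np.random.default_rng(7)
n = 100
x1 = rng.normal(size=n)
x2 = x1 + 0.05 * rng.normal(size=n)        # nearly a copy of x1
x3 = rng.normal(size=n)
X_collinear = np.column_stack([x1, x2, x3])
X_independent = rng.normal(size=(n, 3))

print(condition_index(X_collinear), condition_index(X_independent))
```

A large condition index for the real-data design matrices is what motivates replacing the PMLE with the ridge- and Liu-type estimators compared in Tables 7 and 8.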
Abdelwahab, M.M.; Abonazel, M.R.; Hammad, A.T.; El-Masry, A.M. Modified Two-Parameter Liu Estimator for Addressing Multicollinearity in the Poisson Regression Model. Axioms 2024, 13, 46. https://doi.org/10.3390/axioms13010046