Article

The Concavity of Conditional Maximum Likelihood Estimation for Logit Panel Data Models with Imputed Covariates

1 Department of Statistics, Beijing University of Technology, Beijing 100124, China
2 Department of Statistics and Computational Mathematics, The Technical University of Kenya, Nairobi P.O. Box 52428-00200, Kenya
* Author to whom correspondence should be addressed.
Mathematics 2023, 11(20), 4338; https://doi.org/10.3390/math11204338
Submission received: 17 July 2023 / Revised: 1 September 2023 / Accepted: 8 September 2023 / Published: 18 October 2023
(This article belongs to the Special Issue New Advances in Statistics and Econometrics)

Abstract: In estimating logistic regression models, convergence of the maximization algorithm is critical; however, it may fail. Numerous bias correction methods for maximum likelihood estimates of the parameters have been developed for complete data sets, as well as for longitudinal models. Balanced data sets yield consistent estimates from conditional logit estimators for binary response panel data models. When faced with missing covariates, researchers adopt various imputation techniques to complete the data and, without loss of generality, consistent estimates still obtain asymptotically. For maximum likelihood estimates of the parameters of logistic regression with imputed covariates, the optimal choice of an imputation technique that yields estimates with minimum bias and variance remains elusive. This paper examines the behaviour of the Hessian matrix with optimal values of the imputed covariate vector, which makes the Newton–Raphson algorithm converge faster through a reduced absolute value of the product of the score function and the inverse Fisher information component. We focus on a method that modifies the conditional likelihood function through the partitioning of the covariate matrix. We also confirm that positive moduli of the Hessian for conditional estimators are sufficient for the concavity of the log-likelihood function, resulting in optimal parameter estimates. An increased Hessian modulus ensures faster convergence of the parameter estimates. Simulation results reveal that model-based imputations outperform classical imputation techniques, yielding estimates with smaller bias and higher precision for the conditional maximum likelihood estimation of nonlinear panel models.

1. Introduction

Parameter estimation is a key goal of inferential statistics, and most researchers attempt to fit data into models that would produce the best of all possible parameter estimates. The motivation behind parameter estimation is to make inferences about a study population using sample information, and this calls for very well-spelled-out ways of ensuring that unbiased and precise estimates are achieved in every parameter estimation technique applied. During sample data collection, researchers encounter missing values in the study variables, a problem that leads to complications in statistical analyses through inaccurate estimates that may eventually lead to incorrect inferences and policy actions.
Specifically, when the response variable is binary, problems of missing covariates are further compounded by the nonlinear treatment of the model specification. Studies on missingness and parameter estimation have shown that the most frequent imputation techniques result in biased estimates with significant loss of power [1,2,3]. This problem cuts across every model, with no exception made for the logit model for binary choice response variables; several studies have attempted to devise reliable imputation techniques for missing observations so as to reduce the estimates’ bias. For example, Fang and Shao [4] proposed a procedure for estimating the parameters of a generalized linear model (GLM) with missing dependent and independent variables, known as iterative imputation estimation (IIE). This iterative method proved to be computationally faster and easier than maximum likelihood estimation (MLE) or weighted estimating equations, and it was therefore recommended for large samples with multiple covariates with missing values. IIE, however, proved to be less efficient than MLE, since it does not incorporate the present covariate values that correspond to missing response values when simplified computation is required. Another study, by Horton and Laird [5], gave an in-depth review of the method of weights for GLMs, as developed by Ibrahim [6] for missing discrete covariates. They also acknowledged that if the nuisance parameter distribution is incorrectly specified, then the method of weights does not yield unbiased estimates of the regression model.
In narrowing down to a characterization of the association between dichotomous outcome variables and other model covariates, we often use logistic regression approaches. In a broad sense, the maximum likelihood estimation technique produces the parameter estimates that yield the highest probability of achieving the observed data set, and it is the general method for estimating logistic regression parameters. When maximum likelihood estimates do not exist, however, the maximum likelihood (ML) technique is vulnerable to convergence problems. Assessing the behavior of parameter estimates for a logistic regression model under MLE is therefore of great importance, and the applications of the logistic model stretch far and wide across research disciplines. Numerous works discuss the convergence problem for the logistic regression model (Cox et al. [7]) and bias reduction (Firth [8]; Anderson and Richardson [9]). Other studies outline assumptions regarding the distributions of ML estimates resulting from the bias reduction technique, and the impact of varying sample size on MLE [10,11].
The asymptotic characteristics of the maximum likelihood estimator are crucial for statistical inference based on the logistic regression model, according to Lee [12]. For the logistic regression parameters, the sampling distribution of the ML estimators is therefore asymptotically normal and unbiased under large-sample scenarios. In small samples, on the other hand, the estimates may be biased, and the asymptotic properties of maximum likelihood estimation may not hold [12,13]. Other studies, by Kyeongjun and Jung-In, reveal that specific estimators, such as the pivot-based estimator, yield plausible mean square errors (MSE) and biases compared to MLEs and weighted least-squares estimators [14]. Saeid et al. similarly compared the maximum likelihood estimates and the Bayes estimates for the Gompertz distribution, but with no assumption of missing covariates [15]. Therefore, privy to the fact that MLE may not always be best for all types of distributions and models, this study limits its aim to investigating the performance of conditional MLE in panel data models. Firth’s method has been introduced as a penalization technique to minimize the small-sample bias of the ML estimators for the linear regression model [8,13]. Lee [12] performed a comparison of the performance of the standard MLE and that of Firth’s penalized MLE; the results showed that the asymptotic MLE performed better than the penalized MLE in terms of statistical power.
To prevent the problem of having to deal with extreme biases in estimates resulting from imputation of missing covariates, there is a need to try and establish the best imputation technique among those proposed in the literature. We propose the use of the Hessian matrix from the log-likelihood function to establish whether or not the used imputation technique yields parameter estimates that would maximize the conditional likelihood function of a logistic panel data model.
The present paper, therefore, aims to evaluate the susceptibility of the Hessian matrix to different imputation techniques by comparing the magnitudes of the determinants obtained from the Hessian matrix of the log-likelihood function with the imputed covariate vector.
In a bid to curb the incidental parameter problem, especially for logistic regression panel models, we adopt a conditional maximum likelihood estimator which analytically eliminates the individual fixed effects from the estimation algorithm. This we do in this first section, wherein we also lay down the basics of panel data econometrics.
After the introductory section, Section 2 of this paper gives the specification of the nonlinear binary choice panel data models, under the assumption that the response variable is dichotomous. Section 3 highlights the incidental parameter problem in estimating the logistic panel data model and shows how the conditional maximum likelihood approach circumvents it. In Section 4, we discuss parameter estimation for a logit panel data model, in which the covariate vector is partitioned into sample present values and missing or imputed values. This discerns the impact of missingness on the Hessian of the proposed estimator of the binary choice logistic panel model. In addition, we also present results from a Monte Carlo simulation which evaluates the effect of the imputation of the covariate vector on the determinants of the Hessian matrix and the parameter estimates. Section 5 concludes by summarizing the study’s findings and offering recommendations for more research based on its main findings.

2. Model Specification

2.1. Panel Data

Observing experimental subjects or units over a repeated number of times produces a set of data referred to as panel data, which provides two kinds of information: cross-sectional and time series. This unique characteristic allows panel data to account for individual differences that are time-invariant, through regression methods which adequately utilize all the information in the data set.
The logit panel data model then develops from logistic regression to model binary choice response variables, which have wide applications in almost all research fields that conduct pretest and posttest studies with the aim of discerning the impact of the test. As such, for $N$ units each observed $T$ times, we have a total of $N \times T$ observations.

2.2. The Logit Panel Data Model

Suppose that a stream of $T$ binary responses is observed for every unit $i$ of a population of size $N$, and that all observations are made. We can define a $T \times 1$ vector of the dichotomous variable $Y$ as $Y_i = (Y_{i1}, Y_{i2}, \ldots, Y_{iT})'$, where $Y_{it} \in \{0,1\}$, taking the value 1 if an event is successful and 0 otherwise. It then follows that $Y_{it} \sim \mathrm{Bernoulli}(p_{it})$ and $E(Y_{it}) = p_{it}$. Similarly, for each time $t$, let $Y_{it}$ be predicted by a corresponding $1 \times K$ vector of covariates $x_{it} = \left(x_{(1)it}, x_{(2)it}, \ldots, x_{(K)it}\right)$ and an individual-specific, time-invariant parameter $c_i$. Given the binary nature of the response variable, it is plausible to model the association between $Y_{it}$ and the vector of covariates $x_{it}$ using the logistic regression function:

$$p_{it} = \Pr(Y_{it} = 1) = E(Y_{it} \mid x_{it}, c_i) = F(x_{it}\beta + c_i), \tag{1}$$

where $\beta = (\beta_1, \beta_2, \ldots, \beta_K)'$ denotes the $K \times 1$ vector of regression coefficients of the covariate matrix $x_{it}$. This is so because $F(\cdot)$, as a link function that relates the binary outcome to the functional forms of the explanatory variables, is a probability model.
Under random sampling, $\Pr(Y_{it} = 1 \mid x_{it}, c_i) = E(Y_{it} \mid x_{it}, c_i; \beta)$, and when the binary response model is specified correctly, we can in turn specify the linear probability model (LPM):

$$\Pr(Y_{it} = 1 \mid x_{it}, c_i) = x_{it}\beta + c_i, \qquad \Pr(Y_{it} = 0 \mid x_{it}, c_i) = 1 - (x_{it}\beta + c_i). \tag{2}$$

Adopting a linear probability model risks the absurdity of predicting the “probabilities” of the response variable as either less than zero or greater than one. This shortfall is, however, addressed by specifying a monotonically increasing function $F$ such that $F(\cdot): \mathbb{R} \to [0,1]$ and

$$\Pr(Y_{it} = 1 \mid x_{it}, c_i) \to 1 \ \text{as} \ x_{it}\beta + c_i \to \infty, \qquad \Pr(Y_{it} = 1 \mid x_{it}, c_i) \to 0 \ \text{as} \ x_{it}\beta + c_i \to -\infty. \tag{3}$$
This study adopts the logistic distribution as a nonlinear functional form of $F$:

$$F(x_{it}\beta + c_i) = \frac{e^{x_{it}\beta + c_i}}{1 + e^{x_{it}\beta + c_i}}, \tag{4}$$

such that $0 \leq F \leq 1$ for all values of $x_{it}\beta + c_i$. Equation (4) now depicts the (cumulative) logistic distribution function, which overcomes the LPM’s drawbacks. Since $F$, as specified in (4), is nonlinear in both the parameters and the covariates, OLS estimators for the parameters would be inappropriate. However, by taking the log of the odds ratio, we can linearize Equation (4) to give the logit panel data regression model:

$$\ln\left[\frac{\Pr(Y_{it} = 1 \mid x_{it}, c_i)}{\Pr(Y_{it} = 0 \mid x_{it}, c_i)}\right] = x_{it}\beta + c_i. \tag{5}$$
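The link between Equations (4) and (5) can be checked numerically; a minimal sketch (the value of the index $x_{it}\beta + c_i$ below is illustrative):

```python
import math

def logistic_cdf(z):
    """F(z) = e^z / (1 + e^z), the logistic distribution function of Equation (4)."""
    return math.exp(z) / (1.0 + math.exp(z))

def log_odds(p):
    """Log of the odds ratio, ln[p / (1 - p)], the logit link of Equation (5)."""
    return math.log(p / (1.0 - p))

z = 1.3                  # an arbitrary index value x_it*beta + c_i
p = logistic_cdf(z)      # a probability strictly between 0 and 1
print(p)
print(log_odds(p))       # recovers z: the logit link inverts F
```

Applying `log_odds` to `logistic_cdf(z)` returns `z` for any real index, which is exactly the linearization that Equation (5) expresses.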
There are several methods for estimating fixed-effects models, including (a) demeaning variables, (b) unconditional maximum likelihood estimation (also known as least-squares dummy variables, or LSDV), and (c) conditional maximum likelihood estimation, which is the method of choice for logistic regressions. The methods used to estimate panel data models with fixed effects and the continuous dependent variable Y i t aim to compensate for the fixed effects, c i , through elimination, so as to estimate the covariate coefficients, β . The conditional maximum likelihood estimation partials or “conditions” the fixed effects out of the likelihood function for categorical dependent variables when certain nonlinear functions that preserve the structure of the dependent variable are taken into account. This is achieved by relating the probability of the regressand to the total number of events observed for each category [16].
When the panel data sets are unbalanced due to cases of missing covariates, the estimation methods become computationally complicated and produce inefficient parameter estimates [17]. Several factors, such as delayed enrollment, early withdrawal, or intermittent non-response from a study unit, have been identified as causes of missingness in the literature. As a result, in these situations, methods for handling missing observations that have been presented in the literature are applicable. In this study, we examine the effects of missing data on the nonlinear panel data models’ conditional maximum likelihood estimation methods, in an effort to determine the most effective method.

3. Incidental Parameter Problem and MLE

3.1. Incidental Parameter Problem

As specified in model (1), the presence of the individual effects $c_i$ greatly complicates the computation of parameter estimates; thus, to obtain consistent parameter estimates for static linear models, we simply difference out the fixed effects. The number of parameters $c_i$ increases with the sample size, a notion attributed to Neyman, Scott and Lancaster [18,19] and referred to as the incidental parameter problem. For example, in model (1) we have $K + N$ parameters to estimate, and the $c_i$, being dependent on $N$, become incidental for the panel data model, so we may not obtain consistent estimates of them for nonlinear panels [20]. However, consistent estimators of the covariate coefficients for linear panel data models can be obtained after eliminating the incidental parameters through the difference $y_{it} - E(Y_{it})$. This is so because, for the linear panel model, the MLEs of $c_i$ and $\beta$ are asymptotically independent [21].
Chamberlain demonstrated that this differencing does not work for panel models with dichotomous response variables, even when $T$ is fixed [16]. In other studies, Greene [22] was able to force the MLE of the incidental parameters by using an equivalent number of dummy variables.

3.2. The Unconditional Log Likelihood Function

In logistic regression, the maximum likelihood estimation of parameters finds the specific vector $\hat{\beta}_{ML}$ that yields the highest likelihood of obtaining the observed sample outcomes $y_1, y_2, \ldots$, given an observed vector of explanatory variables $x$.
By assumption, $p_{it} = \Pr(Y_{it} = 1) = E(Y_{it} \mid x_{it}, c_i) = F(x_{it}\beta + c_i)$ and $1 - p_{it} = \Pr(Y_{it} = 0) = 1 - F(x_{it}\beta + c_i)$; therefore, the likelihood function for the entire sample is

$$L(y \mid x; \beta) = \prod_{i=1}^{N} F(x_{it}\beta + c_i)^{y_i}\left[1 - F(x_{it}\beta + c_i)\right]^{1 - y_i}. \tag{6}$$

This gives the log likelihood function for the sample:

$$\ln L(y \mid x; \beta) = \sum_{i=1}^{N} \left\{ y_i \ln F(x_{it}\beta + c_i) + (1 - y_i)\ln\left[1 - F(x_{it}\beta + c_i)\right] \right\}. \tag{7}$$

The maximum likelihood estimator, $\hat{\beta}_{ML}$, maximizes Equation (7).

3.3. Conditional Log Likelihood Function for the Logistic Panel Data Model

Limiting the function $F$ to the logistic distribution function (4), Equation (7) becomes the log likelihood function for the logistic panel data model, now expressed as:

$$\ln L(y \mid x; \beta) = \sum_{i=1}^{N} \left\{ y_i \ln\left[\frac{e^{x_{it}\beta + c_i}}{1 + e^{x_{it}\beta + c_i}}\right] + (1 - y_i)\ln\left[\frac{1}{1 + e^{x_{it}\beta + c_i}}\right] \right\}. \tag{8}$$
The logistic model is preferred over the alternative probit model because, beyond strict exogeneity, it imposes no assumptions on the relationship between $c_i$ and $x_{it}$ in order to yield a consistent estimator of $\beta$. For the logistic distribution function, we are able to eliminate the fixed effects from the likelihood function by conditioning on the “minimal sufficient statistic” for the incidental parameters, $c_i$. The newly obtained likelihood function is the conditional likelihood function to be maximized.
Considering the conditional probabilities when $T = 2$, we know that:

$$\Pr(Y_{i1} + Y_{i2} = 1 \mid x_{i1}, x_{i2}, c_i) = \frac{e^{x_{i1}\beta + c_i} + e^{x_{i2}\beta + c_i}}{\left(1 + e^{x_{i1}\beta + c_i}\right)\left(1 + e^{x_{i2}\beta + c_i}\right)}, \tag{9}$$

$$\Pr(Y_{i1} = 0, Y_{i2} = 1 \mid x_{i1}, x_{i2}, c_i, Y_{i1} + Y_{i2} = 1) = \frac{e^{x_{i2}\beta + c_i}}{e^{x_{i1}\beta + c_i} + e^{x_{i2}\beta + c_i}} = \frac{e^{(x_{i2} - x_{i1})\beta}}{1 + e^{(x_{i2} - x_{i1})\beta}}, \tag{10}$$

and

$$\Pr(Y_{i1} = 1, Y_{i2} = 0 \mid x_{i1}, x_{i2}, Y_{i1} + Y_{i2} = 1) = \frac{1}{1 + e^{(x_{i2} - x_{i1})\beta}}. \tag{11}$$
Note that conditioning is applied to $y_{i1} + y_{i2} = 1$, for which $y_{it}$ changes between the two time periods, ensuring that the $c_i$’s are eliminated; therefore, $\sum_t y_{it}$ is a sufficient statistic for the fixed effects. Probabilities (10) and (11) are conditional on $y_{i1} + y_{i2} = 1$ and do not depend on $c_i$.
We have the joint probability distribution function:
$$\Pr(Y_{i1}, Y_{i2} \mid x_{i1}, x_{i2}, Y_{i1} + Y_{i2} = 1) = \begin{cases} 1 & \text{if } (y_{i1}, y_{i2}) = (0,0) \text{ or } (1,1), \\[4pt] \dfrac{1}{1 + e^{(x_{i2} - x_{i1})\beta}} & \text{if } (y_{i1}, y_{i2}) = (1,0), \\[8pt] \dfrac{e^{(x_{i2} - x_{i1})\beta}}{1 + e^{(x_{i2} - x_{i1})\beta}} & \text{if } (y_{i1}, y_{i2}) = (0,1). \end{cases} \tag{12}$$
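The elimination of $c_i$ in probabilities (10) and (11) can be verified numerically: the conditional probability computed from the full logistic model coincides with the $c_i$-free expression for any value of the fixed effect. A sketch with illustrative index values:

```python
import math

def pr_y(y, xb, c):
    """Pr(Y_it = y) under the logistic model F(x_it*beta + c_i) of Equation (4)."""
    p1 = math.exp(xb + c) / (1.0 + math.exp(xb + c))
    return p1 if y == 1 else 1.0 - p1

def pr_01_given_sum1(x1b, x2b, c):
    """Pr(Y_i1 = 0, Y_i2 = 1 | Y_i1 + Y_i2 = 1), from the joint probabilities."""
    joint_01 = pr_y(0, x1b, c) * pr_y(1, x2b, c)
    joint_10 = pr_y(1, x1b, c) * pr_y(0, x2b, c)
    return joint_01 / (joint_01 + joint_10)

x1b, x2b = 0.4, 1.1   # illustrative values of x_i1*beta and x_i2*beta
# the c-free expression of Equation (10), depending only on (x_i2 - x_i1)*beta
c_free = math.exp(x2b - x1b) / (1.0 + math.exp(x2b - x1b))

for c in (-5.0, 0.0, 3.7):   # any fixed effect gives the same conditional probability
    assert abs(pr_01_given_sum1(x1b, x2b, c) - c_free) < 1e-12
```

The assertions pass for every value of `c`, which is precisely the cancellation that makes $\sum_t y_{it}$ a sufficient statistic for $c_i$.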
Using the joint probability distribution function (12) in Equation (8), we express the conditional log-likelihood function as:

$$\ln L = \sum_{i=1}^{N} \left\{ d_{01i} \ln\left[\frac{e^{(x_{i2} - x_{i1})\beta}}{1 + e^{(x_{i2} - x_{i1})\beta}}\right] + d_{10i} \ln\left[\frac{1}{1 + e^{(x_{i2} - x_{i1})\beta}}\right] \right\}, \tag{13}$$
where d 01 i chooses the units whose dependent variable changed from 0 to 1, while d 10 i chooses units wherein the response variable changed from 1 to 0.
Hence, by maximizing the conditional log likelihood function (13), we obtain consistent estimates of $\beta$, regardless of whether $c_i$ and $x_{it}$ are correlated. Generally, across several imputation techniques, the conditional logit estimator yields smaller biases and root mean square errors than the unconditional logit estimator, especially for large $N$ [23].
The trick is thus to condition the likelihood on the outcome series $(y_{i1}, y_{i2})$; in the more general case, the conditional probability of the response series $(y_{i1}, y_{i2}, \ldots, y_{iT})$, given $\sum_t y_{it}$, is

$$\Pr\left(Y_{i1}, Y_{i2}, \ldots, Y_{iT} \,\middle|\, X_i, \sum_t Y_{it}\right) = \frac{e^{\sum_t y_{it} x_{it}\beta}}{\sum_{d \in D_i} e^{\sum_t d_{it} x_{it}\beta}}, \tag{14}$$

where $D_i = \left\{ (d_{i1}, d_{i2}, \ldots, d_{iT}) \,\middle|\, d_{it} \in \{0,1\} \ \text{and} \ \sum_t d_{it} = \sum_t y_{it} \right\}$.
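Equation (14) can be evaluated by enumerating the set $D_i$ of all 0/1 sequences with the same sum as the observed series; a sketch in pure Python ($T$ and the index values are illustrative):

```python
import math
from itertools import product

def cond_prob(y, xb):
    """Equation (14): Pr(y | X_i, sum_t y_t) for one unit.
    y  : observed 0/1 sequence (y_i1, ..., y_iT)
    xb : the indices x_it * beta, one per period
    """
    s = sum(y)
    num = math.exp(sum(yt * xbt for yt, xbt in zip(y, xb)))
    # D_i: all 0/1 sequences d of length T with sum_t d_t equal to sum_t y_t
    den = sum(math.exp(sum(dt * xbt for dt, xbt in zip(d, xb)))
              for d in product((0, 1), repeat=len(y)) if sum(d) == s)
    return num / den

xb = [0.2, -0.5, 1.0]   # illustrative x_it * beta values, T = 3
# the conditional probabilities over all sequences with a given sum add up to one
seqs = [d for d in product((0, 1), repeat=3) if sum(d) == 2]
print(sum(cond_prob(y, xb) for y in seqs))  # 1.0 (up to rounding)
```

For $T = 2$ and sum equal to 1, `cond_prob` reduces to the closed forms (10) and (11), so the enumeration agrees with the two-period case derived above.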

4. Parameter Estimation with the Imputed Covariate Sub-Matrix

4.1. Partitioned Covariate Matrix

In the presence of missing observations in the covariate vector $x_{it}$, we express it as a sum of two vectors, $x_{it}^{s}$ and $x_{it}^{I}$, for the sample’s present covariate values and the missing (imputed) covariate values, respectively. Therefore, we have the conditional probabilities (10) and (11):

$$\Pr(Y_{i1} = 0, Y_{i2} = 1 \mid x_{i1}, x_{i2}, Y_{i1} + Y_{i2} = 1) = \frac{e^{\Delta x_i^{I}\beta}}{e^{\Delta x_i^{s}\beta} + e^{\Delta x_i^{I}\beta}}, \tag{15}$$

and

$$\Pr(Y_{i1} = 1, Y_{i2} = 0 \mid x_{i1}, x_{i2}, Y_{i1} + Y_{i2} = 1) = \frac{e^{\Delta x_i^{s}\beta}}{e^{\Delta x_i^{s}\beta} + e^{\Delta x_i^{I}\beta}}, \tag{16}$$

respectively, where $\Delta x_i^{I} = x_{i2}^{I} - x_{i1}^{I}$ and $\Delta x_i^{s} = x_{i2}^{s} - x_{i1}^{s}$.

Equations (15) and (16), when used in Equation (13), now give the conditional log-likelihood function with imputed covariates:

$$\ln L = \sum_{i=1}^{N} \left\{ d_{01i} \ln\left[\frac{e^{\Delta x_i^{I}\beta}}{e^{\Delta x_i^{s}\beta} + e^{\Delta x_i^{I}\beta}}\right] + d_{10i} \ln\left[\frac{e^{\Delta x_i^{s}\beta}}{e^{\Delta x_i^{s}\beta} + e^{\Delta x_i^{I}\beta}}\right] \right\}. \tag{17}$$
Consistent estimates of the parameters of Equation (17) are solved for with an iterative technique using the Newton–Raphson algorithm.

4.2. Newton–Raphson Algorithm and Hessian Matrix Optimization of the Log Likelihood Function

Given a differentiable function $f$, Newton and Raphson proposed a non-analytic method of obtaining the roots of the function $f$ through iterative approximations using the relation:

$$x_{h+1} = x_h - \frac{f(x_h)}{f'(x_h)}, \tag{18}$$

where $x_{h+1}$ is the $(h+1)$th iterate. The goal of this method is to make the approximated root as close as possible to the exact root. If $f$ is defined as the gradient function (score vector), then the first derivative of $f$ gives the Hessian matrix, the matrix of second-order derivatives of the likelihood function.
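A minimal illustration of iteration (18) on a scalar function, here $f(x) = x^2 - 2$, whose positive root is $\sqrt{2}$ (an assumed example, not from the paper):

```python
def newton_raphson(f, fprime, x0, tol=1e-12, max_iter=50):
    """Iterate x_{h+1} = x_h - f(x_h)/f'(x_h) until |f(x_h)| < tol."""
    x = x0
    for _ in range(max_iter):
        x = x - f(x) / fprime(x)
        if abs(f(x)) < tol:
            break
    return x

root = newton_raphson(lambda x: x * x - 2.0, lambda x: 2.0 * x, x0=1.0)
print(root)  # ≈ 1.41421356..., the positive root of x^2 - 2
```

Replacing $f$ with the score vector and $f'$ with the Hessian gives the MLE version of the algorithm described next.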
The Newton–Raphson algorithm for MLE involves fixing an initial estimate $\beta^{(0)}$ and using steps $h$ to iterate to the next value:

$$\beta^{(h+1)} = \beta^{(h)} + \left[J\left(\beta^{(h)}\right)\right]^{-1} s\left(\beta^{(h)}\right), \tag{19}$$

in which $s(\beta) = \frac{\partial \ln L}{\partial \beta}$ is the score or gradient vector of the log likelihood function (17), and $J(\beta) = -\frac{\partial^2 \ln L}{\partial \beta \, \partial \beta'}$ is the observed information matrix, obtained as the negative of the computed Hessian matrix.

The score vector and observed information matrix from the log likelihood function are, respectively,

$$s(\beta) = \frac{\partial \ln L}{\partial \beta} = \sum_{i=1}^{N} \left[ d_{01i}\left(\Delta x_i^{I} - D\right) + d_{10i}\left(\Delta x_i^{s} - D\right) \right], \tag{20}$$

$$J(\beta) = -\frac{\partial^2 \ln L}{\partial \beta \, \partial \beta'} = \sum_{i=1}^{N} \left( d_{01i} + d_{10i} \right) \left[ \frac{\Delta x_i^{s} \Delta x_i^{s\prime}\, e^{\Delta x_i^{s}\beta} + \Delta x_i^{I} \Delta x_i^{I\prime}\, e^{\Delta x_i^{I}\beta}}{e^{\Delta x_i^{s}\beta} + e^{\Delta x_i^{I}\beta}} - D D' \right], \tag{21}$$

where $D = \dfrac{\Delta x_i^{s}\, e^{\Delta x_i^{s}\beta} + \Delta x_i^{I}\, e^{\Delta x_i^{I}\beta}}{e^{\Delta x_i^{s}\beta} + e^{\Delta x_i^{I}\beta}}$.
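For a single covariate ($K = 1$), the score (20), the information (21) and iteration (19) all reduce to scalars, and the conditional logit MLE can be sketched as follows (the switching units and their $\Delta x_i^{s}$, $\Delta x_i^{I}$ values below are illustrative, not from the paper’s simulation):

```python
import math

def score_info(beta, data):
    """Scalar versions of the score (20) and observed information (21).
    data: list of (d01, d10, dx_s, dx_I) tuples, one per switching unit."""
    s, J = 0.0, 0.0
    for d01, d10, dx_s, dx_I in data:
        ea, eb = math.exp(dx_s * beta), math.exp(dx_I * beta)
        D = (dx_s * ea + dx_I * eb) / (ea + eb)        # the quantity D above
        M = (dx_s ** 2 * ea + dx_I ** 2 * eb) / (ea + eb)
        s += d01 * (dx_I - D) + d10 * (dx_s - D)
        J += (d01 + d10) * (M - D * D)
    return s, J

def newton_raphson(data, beta0=0.0, tol=1e-10, max_iter=100):
    """Iteration (19): beta_{h+1} = beta_h + J(beta_h)^{-1} s(beta_h)."""
    beta = beta0
    for _ in range(max_iter):
        s, J = score_info(beta, data)
        step = s / J
        beta += step
        if abs(step) < tol:
            break
    return beta

# illustrative switching units: (d01, d10, delta_x_s, delta_x_I)
data = [(1, 0, 0.5, 0.8), (0, 1, -0.3, 0.4), (1, 0, 1.1, -0.2), (0, 1, 0.9, -0.7)]
beta_hat = newton_raphson(data)
s_hat, J_hat = score_info(beta_hat, data)
print(beta_hat, s_hat, J_hat)  # score ~ 0 and information > 0 at the optimum
```

At convergence the score is numerically zero and the observed information is positive, which is the scalar analogue of a negative semi-definite Hessian at the maximum.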
For well-defined parameter estimates of the log likelihood function, it is sufficient that (a) the log likelihood function is concave, indicating that the model is identified; and (b) the Hessian matrix is negative semi-definite, yielding a negative curvature of the log likelihood surface. This means that we can equally depict the general Gaussian curvature of the likelihood function by evaluating the determinant of the Hessian matrix at a critical point of the function. The concavity of the log-likelihood function is easily established when all eigenvalues of its Hessian are negative. Therefore, a non-negative determinant of the Hessian matrix of the likelihood function is a necessary condition for concavity.
In this study, we confirm that the conditional log likelihood function of the logit panel data model preserves its concavity, even when different imputation techniques are applied to the missing covariates matrix X. Establishing the concavity or convexity of the log-likelihood function becomes a necessary condition to help know whether the solutions or parameter estimates are optimally local or global. For the nonlinear logit panel data model, the maximum likelihood estimates are yielded when the Hessian matrix is negative semi-definite, resulting from a strictly concave log-likelihood function.
We use simulations to assess the relationship between the Hessian modulus and the properties of the parameter estimates for the conditional MLE of the logit panel data model with various imputation techniques for missing covariates.

4.3. Simulation Study

To investigate the concavity of the log-likelihood function through the behavior of the Hessian matrix when different imputation techniques are used to fill in the missing covariates, we present Monte Carlo simulation results for a logistic panel data set. In this section, we focus on the N-R maximization of Equation (17), and use simulation results to compare properties of the Hessian matrices of the conditional log-likelihood function resulting from the new data sets obtained after imputation.

The simulation compares different sets of panel data generated by imputing covariates with imposed missingness patterns. This is achieved through substitution of the imputed covariate vector $x_{it}^{I}$ into Equation (17), for which both item-based and model-based imputation methods are used to fill in the missing covariates. We consider a binary response variable specified by the model:

$$y_{it} = \mathbf{1}\left[x_{it}\beta + c_i + \varepsilon_{it} \geq 0\right], \qquad i = 1, 2, \ldots, n; \quad t = 1, 2, \ldots, T, \tag{22}$$

$$y_{it} = \mathbf{1}\left[c_i + \beta_1 x_{(1)it} + \beta_2 x_{(2)it} + \beta_3 x_{(3)it} + \beta_4 x_{(4)it} + \beta_5 x_{(5)it} + \varepsilon_{it} \geq 0\right]. \tag{23}$$

The covariate vector $x_{it}$ contains five different variables, each with values drawn from normal, uniform or binomial distributions, as shown in Table 1. $\varepsilon_{it}$ is a disturbance term with a logistic distribution, generated as $\varepsilon_{it} = \ln\left(\frac{u_{it}}{1 - u_{it}}\right)$ with $u_{it} \sim U(0,1)$. The parameters $\beta_1$ to $\beta_5$ were each fixed at 1. We simulated the fixed effects $c_i$ such that they depend partly on the sum of the first covariate $x_{(1)}$ and the time period $T$, as $c_i = \frac{T \sum_t x_{(1)it}}{n} + \alpha_i$ with $\alpha_i \sim N(0,1)$.
To establish the sample sizes, we imposed an expected probability of success of $\Pr(y_{it} = 1 \mid x_{it}, c_i) = 0.5$ and acceptable coefficient of variation values of $CoV = 0.2$, $CoV = 0.14$ and $CoV = 0.09$, respectively, in the relation $N \geq \dfrac{1}{\Pr(Y) \times CoV^2}$. These gave three different values of $N$ ($N = 50$, $N = 100$ and $N = 250$), which were used for all sets of data fitted to the models, to enable detailed comparisons and to evaluate the impact of varying $N$ on the determinant of the Hessian matrix of the log likelihood function. Further, to evaluate the impact of the proportion of missingness, we used two missingness proportions, 10% and 30%, randomly inserting $NA$’s for the desired proportion of observations in the data set and imputing them back accordingly for each value of $N$.
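The sample sizes can be reproduced with a short calculation (assuming the relation $N \geq 1/(\Pr(Y) \cdot CoV^2)$, which matches the reported magnitudes; the paper rounds the last two values):

```python
import math

def required_n(p_success, cov):
    """Smallest integer N satisfying N >= 1 / (Pr(Y) * CoV^2)."""
    return math.ceil(1.0 / (p_success * cov ** 2))

for cov in (0.2, 0.14, 0.09):
    print(cov, required_n(0.5, cov))
# CoV = 0.2 gives N = 50; CoV = 0.14 and 0.09 give about 103 and 247,
# rounded in the study to the working sample sizes N = 100 and N = 250
```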
For each data set specified, we found the determinants of the Hessian matrices and plotted them against the corresponding data code for ease of comparison across sample sizes. We used the determinant of the Hessian matrix as a generalization of the second derivative test for univariate functions, where a positive determinant indicates an optimum value, showing that the log likelihood is a concave function. The imputation techniques used herein are mean imputation; median imputation; last value carried forward; and Bayesian (Multiple Imputation with Chained Equations, MICE) imputation (Table 2, Table 3, Table 4, Table 5 and Table 6).
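The item-based imputation techniques compared in the simulation can be sketched in a few lines of pure Python (MICE, being model-based, is omitted here; it is typically run with a dedicated package such as R’s mice):

```python
def mean_impute(x):
    """Replace missing values (None) with the mean of the observed values."""
    obs = [v for v in x if v is not None]
    m = sum(obs) / len(obs)
    return [m if v is None else v for v in x]

def median_impute(x):
    """Replace missing values with the median of the observed values."""
    obs = sorted(v for v in x if v is not None)
    n = len(obs)
    med = obs[n // 2] if n % 2 else (obs[n // 2 - 1] + obs[n // 2]) / 2
    return [med if v is None else v for v in x]

def locf_impute(x):
    """Last value carried forward: fill each gap with the last observed value."""
    out, last = [], None
    for v in x:
        last = v if v is not None else last
        out.append(last)
    return out

x = [2.0, None, 4.0, None, 6.0]   # a covariate series with missing entries
print(mean_impute(x))    # [2.0, 4.0, 4.0, 4.0, 6.0]
print(median_impute(x))  # [2.0, 4.0, 4.0, 4.0, 6.0]
print(locf_impute(x))    # [2.0, 2.0, 4.0, 4.0, 6.0]
```

Each technique produces a completed covariate series that can then be differenced and substituted into Equation (17) as $\Delta x_i^{I}$.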

5. Discussion, Conclusions, and Recommendations

When used to fit the logit panel model, the simulated data produced conditional maximum likelihood estimates which, under complete data, followed finite sample distributions, as shown in Figure 1 and Figure 2. We note that the conditional MLE values from the complete data set are asymptotically normally distributed. By using different sample sizes, our results validate the asymptotic nature of the parameter bias. Similarly, the results show that the parameter estimates improve with increasing sample size (Figure 3). The precision of the estimates increases asymptotically, making them more statistically significant.
The key objectives of this study were to focus on a method used to modify the conditional likelihood function through the partitioning of the covariate matrix in a bid to curb the incidental parameter problem and to assess the susceptibility of the Hessian matrix of the log likelihood function to the imputation techniques employed in completing a panel data set with missing covariates.
Undeniably, of all the classical imputation techniques, mean and median imputation do not introduce much undue bias into the data set, and therefore perform relatively better than the last-value-carried-forward and mode imputation techniques. However, a model-based imputation technique like MICE yields even better estimates, with further reduced bias and improved precision [24]. Figure 1 shows the varying and reducing trends of the parameter estimates across the sample sizes and across the imputation methods used in this study.
The value of $\Delta x_i^{I}$ inversely impacts the elements of the Hessian matrix and, consequently, its determinant. This study revealed that the smaller the determinant, the larger the parameter estimates, signifying increased bias for smaller sample sizes. This indicates that by increasing the determinant of the Hessian matrix through a reduction in $\Delta x_i^{I}$ values, we drive the product $\left[J\left(\beta^{(h-1)}\right)\right]^{-1} s\left(\beta^{(h-1)}\right)$ towards zero.

From the N-R algorithm (19), therefore, the inverse of the information matrix $J$ serves to reduce the product $\left[J\left(\beta^{(h-1)}\right)\right]^{-1} s\left(\beta^{(h-1)}\right)$, yielding convergence in the iterations of $\beta^{(h)}$. An increasing Hessian modulus therefore ensures faster convergence of the parameter estimates with more precision, as seen from Table 7 and Figure 4. The positive moduli of the Hessian for the conditional MLEs are sufficient for the concavity of the log likelihood function that gives the optimum estimates of the parameters.
Deriving estimators is crucial for improving their theoretical comprehension, as well as for lowering the computational complexity involved in estimating logit panel data models. Unbalancedness in a data set leads to biased parameter estimates, as seen from the Monte Carlo results, and the various imputation methods used in this study affect the concavity of the log-likelihood function differently, which in turn affects the estimates’ bias and efficiency.
We can see from this study that when the within estimator becomes analytically cumbersome to use, the conditional maximum likelihood estimator becomes preferable over the unconditional MLE, since we are able to eliminate the fixed effects from the estimation process, thereby limiting our concentration on the parameter estimates only.
For further development of this study, we recommend consideration of panel models with multiple fixed effects, and panel data sets with study units observed over T > 2 time periods. Real data from social and industrial settings can also be used to validate the findings herein.

Author Contributions

Conceptualization, O.P.O.; Methodology, O.P.O. and W.C.; Resources, W.C.; Writing—original draft, O.P.O.; Supervision, W.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding. The APC was funded by Beijing University of Technology.

Data Availability Statement

The data used in this research were simulated in R (version 4.2.2), and the code used is available at https://drive.google.com/file/d/1EwvKG-Zb1N0WX1QqkJbq0gNyUiv3_tOO/view?usp=drive_link.

Conflicts of Interest

The authors declare no conflict of interest.

Figure 1. Empirical distributions of parameter estimates by conditional MLE for a complete unimputed panel data set.
Figure 2. Densities of parameter estimates by conditional MLE for a complete unimputed panel data set.
Figure 3. Comparative parameter estimates by conditional MLE for different imputed panel data sets with varying sample sizes and proportions of missingness.
Figure 4. Comparative determinants of the Hessian matrices across different imputed panel data sets with varying sample sizes.
Table 1. Description of variables.

Variable       X(1)         X(2)         X(3)          X(4)             X(5)
Type           continuous   continuous   continuous    discrete         discrete
Distribution   N(0, 1)      U(0, 1)      N(0.5, 0.5)   B(nT, 2, 0.65)   Bernoulli
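The covariate design of Table 1 can be reproduced as follows. This is a Python sketch (the original simulations used R 4.2.2); the Bernoulli success probability and the reading of N(0.5, 0.5) as mean 0.5, standard deviation 0.5 are assumptions, since the table does not pin them down:

```python
import numpy as np

rng = np.random.default_rng(2023)
n, T = 100, 2                      # illustrative panel dimensions (units x periods)
N = n * T
X = np.column_stack([
    rng.normal(0.0, 1.0, N),       # X(1): N(0, 1), continuous
    rng.uniform(0.0, 1.0, N),      # X(2): U(0, 1), continuous
    rng.normal(0.5, 0.5, N),       # X(3): N(0.5, 0.5), sd = 0.5 assumed
    rng.binomial(2, 0.65, N),      # X(4): nT draws from Binomial(2, 0.65), discrete
    rng.binomial(1, 0.5, N),       # X(5): Bernoulli, p = 0.5 assumed
])
```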
Table 2. Parameter estimates by conditional MLE for a complete unimputed panel data set.

Complete Unimputed Data
                            β1         β2         β3         β4         β5
10% Missingness   n = 50    1.097358   −1.1006    1.162922   1.088189   1.135867
                  n = 100   1.063353   −1.04996   1.060443   1.042268   1.07011
                  n = 250   1.026624   −1.02073   1.020552   1.020266   1.006623
30% Missingness   n = 50    1.118411   −1.06917   1.112667   1.122587   1.10587
                  n = 100   1.05757    −1.02386   1.074032   1.033003   1.052396
                  n = 250   1.027265   −1.01406   1.008524   1.015673   0.993208
Table 3. Parameter estimates by conditional MLE for a mean imputed panel data set.

Mean Imputed Data
                            β1         β2          β3         β4         β5
10% Missingness   n = 50    1.069795   −1.07215    1.12953    1.059192   1.245338
                  n = 100   1.032826   −1.00675    1.023471   1.006381   1.154124
                  n = 250   1.000421   −0.9956     0.988925   0.994209   1.120467
30% Missingness   n = 50    1.043838   −0.966003   0.968262   1.008104   1.329941
                  n = 100   1.004478   −0.93376    0.970724   0.929846   1.262566
                  n = 250   0.980142   −0.93882    0.922916   0.913546   1.238532
Table 4. Parameter estimates by conditional MLE for an LVCF imputed panel data set.

LVCF Imputed Data
                            β1         β2         β3         β4         β5
10% Missingness   n = 50    0.943744   −0.92664   0.977888   0.923632   1.126666
                  n = 100   0.895114   −0.87648   0.889337   0.878934   1.083107
                  n = 250   0.869159   −0.8563    0.853653   0.864803   1.03423
30% Missingness   n = 50    0.682111   −0.60787   0.626598   0.665022   0.967133
                  n = 100   0.660872   −0.60855   0.620407   0.604995   0.92563
                  n = 250   0.633169   −0.59185   0.594952   0.592881   0.911108
Table 5. Parameter estimates by conditional MLE for a median imputed panel data set.

Median Imputed Data
                            β1         β2         β3         β4         β5
10% Missingness   n = 50    1.098609   −1.06006   1.12113    1.042217   1.090476
                  n = 100   1.062593   −0.99428   1.017271   0.988661   1.025071
                  n = 250   1.030955   −0.98289   0.97916    0.974852   0.977667
30% Missingness   n = 50    1.104232   −0.94469   0.951333   0.965304   1.058915
                  n = 100   1.061597   −0.92397   0.956362   0.890659   0.958509
                  n = 250   1.034481   −0.92603   0.908687   0.879251   0.930022
Table 6. Parameter estimates by conditional MLE for a Bayesian (MICE) imputed panel data set.

Bayesian (MICE) Imputed Data
                            β1         β2         β3         β4         β5
10% Missingness   n = 50    1.067832   −1.07822   1.27407    1.062909   1.141502
                  n = 100   1.028601   −1.00793   1.02702    1.010346   1.026907
                  n = 250   0.998693   −0.99862   0.994132   0.999532   1.000275
30% Missingness   n = 50    0.966065   −1.04179   0.986995   0.99603    0.878896
                  n = 100   0.946108   −0.94759   0.95725    0.931001   0.834477
                  n = 250   0.917728   −0.93417   0.934447   0.921759   0.862901
Table 7. Comparative parameter biases and determinants of the Hessian matrices across different imputed panel data sets with varying sample sizes.

                                         Det. of Hessian   Parameter Bias
                                         (×10⁶)            β1          β2          β3          β4          β5
Complete Unimputed Data       n = 50     10.33104          0.118411    0.069170    0.112667    0.122587    0.105870
                              n = 100    13.67689          0.057570    0.023855    0.074032    0.033003    0.052396
                              n = 250    15.00124          0.027265    0.014057    0.008524    0.015673    −0.006792
Mean Imputed Data             n = 50     2.509932          0.043838    −0.033998   −0.031738   0.008104    0.329941
                              n = 100    3.186349          0.004478    −0.066237   −0.029276   −0.070154   0.262566
                              n = 250    4.677663          −0.019858   −0.061184   −0.077084   −0.086455   0.238532
LVCF Imputed Data             n = 50     11.67492          −0.317889   −0.392132   −0.373402   −0.334978   −0.032867
                              n = 100    18.52447          −0.339128   −0.391447   −0.379594   −0.395006   −0.074370
                              n = 250    20.43683          −0.366831   −0.408155   −0.405048   −0.407119   −0.088892
Median Imputed Data           n = 50     4.156453          0.104232    −0.055310   −0.048667   −0.034696   0.058915
                              n = 100    4.049792          0.061597    −0.076035   −0.043638   −0.109342   −0.041491
                              n = 250    6.079974          0.034481    −0.073969   −0.091313   −0.120749   −0.069978
Bayesian (MICE) Imputed Data  n = 50     1.510900          −0.033935   0.041789    −0.013006   −0.003970   −0.121104
                              n = 100    1.968110          −0.053892   −0.052413   −0.042750   −0.068999   −0.165523
                              n = 250    3.926050          −0.082272   −0.065831   −0.065554   −0.078241   −0.137099
Otieno, O.P.; Cheng, W. The Concavity of Conditional Maximum Likelihood Estimation for Logit Panel Data Models with Imputed Covariates. Mathematics 2023, 11, 4338. https://doi.org/10.3390/math11204338
