Article

A New Extension of Thinning-Based Integer-Valued Autoregressive Models for Count Data

School of Mathematics, Jilin University, 2699 Qianjin Street, Changchun 130012, China
* Author to whom correspondence should be addressed.
Entropy 2021, 23(1), 62; https://doi.org/10.3390/e23010062
Submission received: 29 October 2020 / Revised: 28 December 2020 / Accepted: 28 December 2020 / Published: 31 December 2020
(This article belongs to the Special Issue Time Series Modelling)

Abstract

Thinning operators play an important role in the analysis of integer-valued autoregressive models, and the most widely used is binomial thinning. Inspired by the theory of extended Pascal triangles, a new thinning operator, named extended binomial thinning, is introduced, which includes binomial thinning as a special case. Compared with the binomial thinning operator, the extended binomial thinning operator has two parameters and is more flexible in modeling. Based on the proposed operator, a new integer-valued autoregressive model is introduced, which can accurately and flexibly capture the dispersion features of counting time series. Two-step conditional least squares (CLS) estimation is investigated for the innovation-free case, and conditional maximum likelihood estimation is also discussed. We also obtain the asymptotic properties of the two-step CLS estimator. Finally, three overdispersed or underdispersed real data sets are considered to illustrate the superior performance of the proposed model.

1. Introduction

Counting time series naturally occur in many contexts, including actuarial science, epidemiology, finance, economics, etc. The last few years have witnessed the rapid development of modeling time series of counts. One of the most common approaches for modeling integer-valued autoregressive (INAR) time series is based on thinning operators. In order to fit different kinds of situations, many corresponding operators have been developed; see [1] for a detailed discussion on thinning-based INAR models.
The most popular thinning operator is the binomial thinning operator introduced by [2]. Let X be a non-negative integer-valued random variable and $\alpha \in (0,1)$; the binomial thinning operator "$\circ$" is defined as
$$\alpha \circ X = \sum_{i=1}^{X} B_i, \quad X > 0, \qquad (1)$$
and 0 otherwise, where $\{B_i\}$ is a sequence of independent, identically distributed (i.i.d.) Bernoulli random variables with fixed success probability $\alpha$, and $B_i$ is independent of X. Based on the binomial thinning operator, [3,4] independently proposed the following INAR(1) model:
$$X_t = \alpha \circ X_{t-1} + \epsilon_t, \quad t \in \mathbb{Z}, \qquad (2)$$
where $\{\epsilon_t\}$ is a sequence of i.i.d. integer-valued random variables with finite mean and variance. Since this seminal work, INAR-type models have received considerable attention; for recent literature on this topic, see [5,6], among others.
Note that $B_i$ in (1) follows a Bernoulli distribution, so $\alpha \circ X$ is always less than or equal to X; in other words, the first term on the right-hand side of (2) cannot be greater than $X_{t-1}$, which limits the flexibility of the model. Despite this shortcoming, the operator's simple form makes the parameter easy to estimate, and it shares many properties with the multiplication operator in the continuous case. For this reason, there have been many extensions of the binomial thinning operator since its introduction. Zhu and Joe [7] proposed the expectation thinning operator, a generalization of binomial thinning from the perspective of the probability generating function (pgf). Although this extension is very successful, its estimation procedure is somewhat complicated; compared with it, the thinning operator we propose is simpler and more intuitive. Among more recent developments, Yang et al. [8] proposed the generalized Poisson (GP) thinning operator, defined by replacing $B_i$ with a GP counting series. Although the GP thinning operator is flexible and adaptable, we argue that it has a potential drawback: the GP distribution is not a strict probability distribution in the conventional sense. Recently, Aly and Bouzar [9] introduced a two-parameter expectation thinning operator based on a linear fractional probability generating function, which can be regarded as a general case of at least nine thinning operators. Kang et al. [10] proposed a new flexible thinning operator named GSC after the three initiators of the counting series: Gómez-Déniz, Sarabia and Calderín-Ojeda.
Although the binomial thinning operator is very popular, it may not perform well for counting time series with large values: in such settings the predictions are often volatile, and the data are more likely to be non-stationary. We therefore aim to establish a new thinning operator that meets the following requirements: (i) it is an extension of the binomial thinning operator; (ii) it contains two parameters to achieve flexibility; (iii) it has a simple structure and is easy to implement.
Based on the above considerations, we propose a new thinning operator based on the extended binomial (EB) distribution. The operator has two parameters, a real-valued $\alpha$ and an integer-valued m ($0 \le \alpha \le 1$, $m \ge 2$), which makes it more flexible than single-parameter thinning operators, and the binomial thinning operator (1) can be regarded as the special case m = 2 of EB thinning. The case m > 2 usually performs better than m = 2 on some data sets with large values; in other words, EB thinning alleviates the main defect of binomial thinning to some extent. Since EB thinning is not a special case of the expectation thinning in [9], it further extends the framework of thinning-based INAR models and provides a new option for practical applications. We therefore propose an INAR(1) model based on the EB thinning operator, which extends model (2) and can more accurately and flexibly capture the dispersion features of real data.
This paper is organized as follows. In Section 2, we review the properties of the EB distribution, introduce the EB thinning operator and, based on the new operator, propose a new INAR(1) model. In Section 3, two-step conditional least squares estimation is investigated for the innovation-free case of the model and the asymptotic property of the estimator is obtained; conditional maximum likelihood estimation is also discussed, together with numerical simulations. In Section 4, we focus on forecasting and introduce two criteria to compare the prediction results on three overdispersed or underdispersed real data sets, which illustrate the better performance of the proposed model. In Section 5, we give some conclusions and related discussions.

2. A New INAR(1) Model

The EB distribution comes from the theory of extended Pascal triangles and can be regarded as a multivariate generalization of the binomial distribution; see [11] for more details. Based on this distribution, we introduce the EB thinning operator and propose a corresponding INAR(1) model.

2.1. EB Distribution

An EB random variable $X_n(m,\alpha)$, denoted by $EB(m,n,\alpha)$, is defined as follows:
$$P(X_n(m,\alpha) = r) = C_m(n,r)\,\alpha^r \beta^{(m-1)n-r}, \quad 0 \le r \le (m-1)n, \qquad (3)$$
where m and n are both integers satisfying $m \ge 2$ and $n \ge 1$; $C_m(n,r)$ can be calculated as
$$C_m(n,r) = \sum_{s=0}^{s_1} (-1)^s \binom{n}{s} \binom{r+n-sm-1}{n-1},$$
where $s_1 = \min\{n, \lfloor r/m \rfloor\}$; and $\alpha$ and $\beta$ in (3) satisfy the following restriction:
$$\beta^{m-1} + \alpha\beta^{m-2} + \alpha^2\beta^{m-3} + \cdots + \alpha^{m-1} = 1, \quad 0 \le \alpha \le 1, \; 0 \le \beta \le 1. \qquad (4)$$
The above restriction is equivalent to $\beta^m - \alpha^m = \beta - \alpha$. The mean and variance of the EB random variable $X_n(m,\alpha)$ are
$$E(X_n(m,\alpha)) = n\alpha\,\frac{1-m\alpha^{m-1}}{\beta-\alpha}, \qquad \mathrm{Var}(X_n(m,\alpha)) = n\alpha\beta\,\frac{1-m^2(\alpha\beta)^{m-1}}{(\beta-\alpha)^2},$$
respectively. The pgf of $X_n(m,\alpha)$ can be written as $G(t) = E(t^{X_n(m,\alpha)}) = \left(\frac{\beta^m - \alpha^m t^m}{\beta - \alpha t}\right)^n$. As $X_n(m,\alpha)$ can be expressed as a convolution, the EB distribution has the reproductive property. Specifically, if $Y_1, Y_2, \ldots, Y_k$ are independent random variables with $Y_i \sim EB(m, n_i, \alpha)$ as in (3), then $\sum_{i=1}^{k} Y_i \sim EB(m, \sum_{i=1}^{k} n_i, \alpha)$. Notice that a random variable $Y_i \sim EB(2,1,\alpha)$ is equivalent to a Bernoulli random variable satisfying $P(Y_i = 1) = 1 - P(Y_i = 0) = \alpha$.
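Because of the reproductive property, the pmf (3) can also be evaluated numerically by convolving the one-trial pmf n times, which is often more convenient than computing $C_m(n,r)$ directly. The following is a minimal sketch in base R; eb_pmf is an illustrative name (not code from the paper), and the value of $\beta$ for m = 3 uses the closed form given later in Remark 1.

```r
# A minimal sketch (base R): the EB(m, n, alpha) pmf via the reproductive
# property, convolving the one-trial pmf n times. convolve() is FFT-based,
# so tiny numerical noise (~1e-16) may appear in the result.
eb_pmf <- function(m, n, alpha, beta) {
  p1 <- alpha^(0:(m - 1)) * beta^((m - 1):0)   # P(U = z) = alpha^z * beta^(m-1-z)
  p  <- 1                                      # pmf of the empty sum (point mass at 0)
  for (i in seq_len(n)) p <- convolve(p, rev(p1), type = "open")
  p                                            # P(X = r) for r = 0, ..., (m-1)*n
}

alpha <- 0.3
beta  <- (-alpha + sqrt(4 - 3 * alpha^2)) / 2  # restriction (4) for m = 3 (Remark 1 below)
p <- eb_pmf(3, 2, alpha, beta)
sum(p)                                         # should be 1
sum((0:4) * p)                                 # should equal n*alpha*(1 - m*alpha^(m-1))/(beta - alpha)
```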

2.2. EB Thinning Operator

According to the discussion in Section 2.1, we construct the EB thinning operator based on the configuration n = 1. Let $\{U_i(m,\alpha)\}$ be a sequence of i.i.d. random variables with common distribution $EB(m,1,\alpha)$, i.e., $P(U_i(m,\alpha) = z) = \alpha^z \beta^{(m-1)-z}$, $z = 0, \ldots, m-1$, where $\alpha$ and $\beta$ satisfy (4). Note that the mean and variance of $U_i$ are
$$\mu = E(U_i) := \alpha\,\frac{1-m\alpha^{m-1}}{\beta-\alpha}, \qquad \sigma^2 = \mathrm{Var}(U_i) := \alpha\beta\,\frac{1-m^2(\alpha\beta)^{m-1}}{(\beta-\alpha)^2}. \qquad (5)$$
One can easily see that $\mu < \sigma^2$ if and only if
$$m\alpha^{m-2}\left(m\beta^m - \beta + \alpha\right) < 1. \qquad (6)$$
For any $3 \le m < \infty$, the left-hand side of (6) approaches 0 as $\alpha \to 0$ and m as $\alpha \to 1$, respectively. Hence, both $\mu < \sigma^2$ and $\mu \ge \sigma^2$ are possible. When m = 2, which corresponds to the binomial distribution, (5) gives $\mu = \alpha$ and $\sigma^2 = \alpha\beta$ with $\alpha + \beta = 1$, so $\mu > \sigma^2$ for all $0 < \alpha < 1$. When $\alpha < \beta$ and $m \to \infty$, one can easily show that $\mu = \theta/(1-\theta)$ and $\sigma^2 = \theta/(1-\theta)^2$, where $\theta = \alpha/\beta$; therefore, $\mu \le \sigma^2$ for all $0 < \theta < 1$ in this case.
Based on the reproductive property of the EB distribution, we define the EB thinning operator "⊚" as follows: for a non-negative integer-valued random variable X,
$$(m,\alpha) \circledcirc X = \sum_{i=1}^{X} U_i(m,\alpha), \quad X > 0,$$
where $(m,\alpha) \circledcirc X = 0$ if X = 0. Note that the EB thinning operator reduces to the binomial thinning operator (1) when m = 2. Since $(m,\alpha) \circledcirc X$ may be less than or equal to X, or greater than X, the EB thinning operator is quite flexible when dealing with overdispersed or underdispersed data sets.
Remark 1.
The computation of $(\alpha, m)$ for given $(\mu, \sigma^2)$ is based on (4) and (5). The solution can be obtained by solving these nonlinear equations. When m = 3, $\beta = \left(-\alpha + \sqrt{4 - 3\alpha^2}\right)/2$, and when m = 4,
$$\beta = \sqrt[3]{-\left(\frac{10\alpha^3}{27} - \frac{1}{2}\right) + \sqrt{\left(\frac{10\alpha^3}{27} - \frac{1}{2}\right)^2 + \left(\frac{2\alpha^2}{9}\right)^3}} + \sqrt[3]{-\left(\frac{10\alpha^3}{27} - \frac{1}{2}\right) - \sqrt{\left(\frac{10\alpha^3}{27} - \frac{1}{2}\right)^2 + \left(\frac{2\alpha^2}{9}\right)^3}} - \frac{\alpha}{3}.$$
For more complex cases ($m \ge 5$), the solution $(\alpha, \beta, m)$ can be derived by solving these nonlinear systems numerically, as sketched below; a more detailed calculation procedure is given in Section 3.3.
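For any m, restriction (4) can be solved for $\beta$ with a one-dimensional root finder, since its left-hand side is increasing in $\beta$ on [0, 1], below 1 at $\beta = 0$ and above 1 at $\beta = 1$. A minimal R sketch follows (solve_beta and eb_moments are illustrative names, not from the paper):

```r
# A minimal sketch (base R): solve restriction (4) for beta given (m, alpha),
# then evaluate (mu, sigma^2) from (5).
solve_beta <- function(m, alpha) {
  g <- function(b) sum(alpha^(0:(m - 1)) * b^((m - 1):0)) - 1   # left side of (4) minus 1
  uniroot(g, c(0, 1), tol = 1e-12)$root
}
eb_moments <- function(m, alpha) {
  beta <- solve_beta(m, alpha)
  # Note: (5) is singular (0/0) exactly where m * alpha^(m-1) = 1, i.e. beta = alpha;
  # avoid evaluating at that point.
  c(mu     = alpha * (1 - m * alpha^(m - 1)) / (beta - alpha),
    sigma2 = alpha * beta * (1 - m^2 * (alpha * beta)^(m - 1)) / (beta - alpha)^2,
    beta   = beta)
}

# Agreement with the m = 3 closed form, and a quick dispersion check via (5)/(6):
alpha <- 0.3
c(numeric = solve_beta(3, alpha), closed = (-alpha + sqrt(4 - 3 * alpha^2)) / 2)
eb_moments(4, 0.2)   # compare mu and sigma2 to see over-/underdispersion
```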

2.3. EB-INAR(1) Model

Based on the EB thinning operator, we define the EB-INAR(1) model as follows:
$$X_t = (m,\alpha) \circledcirc X_{t-1} + \epsilon_t, \quad t = 1, 2, \ldots, \qquad (7)$$
where $\alpha \in (0,1)$; $\{X_t\}$ is a sequence of non-negative integer-valued random variables; the innovation process $\{\epsilon_t\}$ is a sequence of i.i.d. integer-valued random variables with finite mean and variance; and $\epsilon_t$ is independent of $\{X_s, s < t\}$.
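To illustrate, here is a minimal R sketch that simulates a path from (7) with Poisson innovations; r_eb_thin and sim_eb_inar1 are illustrative names, and solve_beta is the hypothetical helper sketched after Remark 1.

```r
# A minimal sketch (base R): simulate the EB-INAR(1) model (7) with Poisson
# innovations. Requires solve_beta() from the sketch after Remark 1.
r_eb_thin <- function(x, m, alpha, beta) {
  if (x == 0) return(0)
  probs <- alpha^(0:(m - 1)) * beta^((m - 1):0)   # one-trial EB pmf on {0, ..., m-1}
  sum(sample(0:(m - 1), size = x, replace = TRUE, prob = probs))
}
sim_eb_inar1 <- function(n, m, alpha, delta, burnin = 200) {
  beta <- solve_beta(m, alpha)
  x <- numeric(n + burnin)
  x[1] <- rpois(1, delta)
  for (t in 2:(n + burnin))
    x[t] <- r_eb_thin(x[t - 1], m, alpha, beta) + rpois(1, delta)
  x[-seq_len(burnin)]                             # drop burn-in
}

set.seed(1)
x <- sim_eb_inar1(400, m = 3, alpha = 0.2, delta = 1)  # configuration (A1) of Section 3.3
c(mean = mean(x), var = var(x))
```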
In order to obtain the estimation equations, we give some conditional or unconditional moments of the EB-INAR(1) model in the following proposition.
Proposition 1.
Suppose $\{X_t\}$ is a stationary process defined by (7) and let $\mu < 1$; then for $t \ge 1$,
1. $E(X_t \mid X_{t-1}) = \mu X_{t-1} + \mu_\epsilon$;
2. $E(X_t) = \dfrac{\mu_\epsilon}{1-\mu}$;
3. $\mathrm{Var}(X_t \mid X_{t-1}) = \sigma^2 X_{t-1} + \sigma_\epsilon^2$;
4. $\mathrm{Var}(X_t) = \dfrac{\sigma^2 \mu_\epsilon + \sigma_\epsilon^2 (1-\mu)}{(1-\mu)^2 (1+\mu)}$;
5. $\mathrm{Cov}(X_t, X_{t-h}) = \mu^h\,\mathrm{Var}(X_{t-h})$ and $\mathrm{Corr}(X_t, X_{t-h}) = \mu^h$, for $h = 0, 1, 2, \ldots$,
where $\mu_\epsilon$ and $\sigma_\epsilon^2$ are the expectation and variance of the innovation $\epsilon_t$, respectively.
Proofs of some of these properties are given in Appendix A.
Remark 2.
Inspired by the INAR(p) model in [12], we can further extend this model to INAR(p); the EB-INAR(p) model is defined as follows:
$$X_t = (m,\alpha_1) \circledcirc X_{t-1} + \cdots + (m,\alpha_p) \circledcirc X_{t-p} + \epsilon_t, \quad t = 2, 3, \ldots,$$
where $\alpha_1, \ldots, \alpha_p \in (0,1)$; m is an integer satisfying $m \ge 2$; $\{X_t\}$ is a sequence of non-negative integer-valued random variables; the innovation process $\{\epsilon_t\}$ is a sequence of i.i.d. integer-valued random variables with finite mean and variance; and $\epsilon_t$ is independent of $\{X_s, s < t\}$.
We will show that the new model can accurately and flexibly capture the dispersion features of real data in Section 4.

3. Estimation

We use the two-step conditional least squares estimation proposed by [13] to investigate the innovation-free case, and the asymptotic properties of the estimators are obtained. Conditional maximum likelihood estimation for the parametric case is also discussed. Finally, we demonstrate the finite-sample performance via simulation studies.

3.1. Two-Step Conditional Least Squares Estimation

Denote $\theta_1 = (\mu, \mu_\epsilon)^\top$, $\theta_2 = (\sigma^2, \sigma_\epsilon^2)^\top$ and $\theta = (\theta_1^\top, \theta_2^\top)^\top$. The two-step CLS estimation is conducted in the following two steps.
Step 1.1. The estimator for $\theta_1$.
Let $g_1(\theta_1, X_{t-1}) = E(X_t \mid X_{t-1}) = \mu X_{t-1} + \mu_\epsilon$ and $q_{1t}(\theta_1) = (X_t - g_1(\theta_1, X_{t-1}))^2$. Let
$$Q_1(\theta_1) = \sum_{t=1}^{n} q_{1t}(\theta_1)$$
be the CLS criterion function. Then the CLS estimator $\hat\theta_{1,CLS} := (\hat\mu_{CLS}, \hat\mu_{\epsilon,CLS})^\top$ of $\theta_1$ can be obtained by solving the score equation $\partial Q_1(\theta_1)/\partial \theta_1 = 0$, which yields the closed-form solution
$$\hat\theta_{1,CLS} = \begin{pmatrix} \sum_{t=1}^{n} X_{t-1}^2 & \sum_{t=1}^{n} X_{t-1} \\ \sum_{t=1}^{n} X_{t-1} & n \end{pmatrix}^{-1} \begin{pmatrix} \sum_{t=1}^{n} X_t X_{t-1} \\ \sum_{t=1}^{n} X_t \end{pmatrix}.$$
Step 1.2. The estimator for $\theta_2$.
Let $Y_t = X_t - E(X_t \mid X_{t-1})$ and $g_2(\theta_2, X_{t-1}) = \mathrm{Var}(X_t \mid X_{t-1}) = \sigma^2 X_{t-1} + \sigma_\epsilon^2$. Then
$$E(Y_t^2 \mid X_{t-1}) = E\big((X_t - E(X_t \mid X_{t-1}))^2 \mid X_{t-1}\big) = \mathrm{Var}(X_t \mid X_{t-1}) = g_2(\theta_2, X_{t-1}).$$
Let $q_{2t}(\theta_2) = (Y_t^2 - g_2(\theta_2, X_{t-1}))^2$; then the CLS criterion function for $\theta_2$ can be written as
$$Q_2(\theta_2) = \sum_{t=1}^{n} q_{2t}(\theta_2).$$
By solving the score equation $\partial Q_2(\theta_2)/\partial \theta_2 = 0$, we obtain the CLS estimator $\hat\theta_{2,CLS} := (\hat\sigma^2_{CLS}, \hat\sigma^2_{\epsilon,CLS})^\top$ of $\theta_2$, which also has a closed-form solution:
$$\hat\theta_{2,CLS} = \begin{pmatrix} \sum_{t=1}^{n} X_{t-1}^2 & \sum_{t=1}^{n} X_{t-1} \\ \sum_{t=1}^{n} X_{t-1} & n \end{pmatrix}^{-1} \begin{pmatrix} \sum_{t=1}^{n} Y_t^2 X_{t-1} \\ \sum_{t=1}^{n} Y_t^2 \end{pmatrix}.$$
Step 2. Estimating parameters $(m, \alpha)$ via the method of moments.
The estimator $(\hat m, \hat\alpha)$ of $(m, \alpha)$, which is called the two-step CLS estimator, can be obtained by solving the following estimation equations:
$$\hat\mu_{CLS} = \alpha\,\frac{1-m\alpha^{m-1}}{\beta-\alpha}, \qquad \hat\sigma^2_{CLS} = \alpha\beta\,\frac{1-m^2(\alpha\beta)^{m-1}}{(\beta-\alpha)^2}, \qquad (8)$$
where $\alpha$ and $\beta$ satisfy (4).
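A minimal R sketch of the full two-step procedure follows. two_step_cls is an illustrative name, and eb_moments is the hypothetical helper sketched after Remark 1. Instead of the BB package used later in Section 3.3, this sketch handles the integer parameter by scanning m over a finite range (the cap m_max = 10 is arbitrary) and solving for $\alpha$ by one-dimensional minimization.

```r
# A minimal sketch (base R): two-step CLS. Steps 1.1-1.2 use the closed forms
# above; Step 2 matches (mu_hat, sigma2_hat) to equations (8) by scanning m
# and optimizing over alpha. Requires eb_moments() from the earlier sketch.
two_step_cls <- function(x, m_max = 10) {
  n  <- length(x) - 1
  x0 <- x[1:n]; x1 <- x[2:(n + 1)]                 # pairs (X_{t-1}, X_t)
  A  <- matrix(c(sum(x0^2), sum(x0), sum(x0), n), 2, 2)
  th1 <- solve(A, c(sum(x1 * x0), sum(x1)))        # (mu_hat, mu_eps_hat)
  y2  <- (x1 - th1[1] * x0 - th1[2])^2             # squared CLS residuals Y_t^2
  th2 <- solve(A, c(sum(y2 * x0), sum(y2)))        # (sigma2_hat, sigma2_eps_hat)
  obj <- function(a, m) {                          # squared distance to equations (8)
    mo <- eb_moments(m, a)
    v  <- (mo[["mu"]] - th1[1])^2 + (mo[["sigma2"]] - th2[1])^2
    if (is.finite(v)) v else 1e10                  # guard the singular point of (5)
  }
  fits <- sapply(2:m_max, function(m) {
    op <- optimize(obj, c(1e-4, 1 - 1e-4), m = m)
    c(m = m, alpha = op$minimum, loss = op$objective)
  })
  best <- fits[, which.min(fits["loss", ])]
  list(theta1 = th1, theta2 = th2,
       m = unname(best["m"]), alpha = unname(best["alpha"]))
}
```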
The resulting CLS estimator is therefore $\hat\Theta_{CLS} = (\hat m_{CLS}, \hat\alpha_{CLS}, \hat\mu_{\epsilon,CLS}, \hat\sigma^2_{\epsilon,CLS})$. To study the asymptotic behaviour of the estimator, we make the following assumptions:
Assumption 1.
$\{X_t\}$ is a stationary and ergodic process;
Assumption 2.
$E X_t^4 < \infty$.
Proposition 2.
Under Assumptions 1 and 2, the CLS estimator $\hat\theta_{1,CLS}$ is strongly consistent and asymptotically normal:
$$\sqrt{n}\,(\hat\theta_{1,CLS} - \theta_{1,0}) \xrightarrow{L} N(0,\, V_1^{-1} W_1 V_1^{-1}),$$
where $V_1 := E\big(\partial_{\theta_1} g_1(\theta_{1,0}, X_0)\, \partial_{\theta_1}^\top g_1(\theta_{1,0}, X_0)\big)$, $W_1 := E\big(q_{11}(\theta_{1,0})\, \partial_{\theta_1} g_1(\theta_{1,0}, X_0)\, \partial_{\theta_1}^\top g_1(\theta_{1,0}, X_0)\big)$, and $\theta_{1,0} = (\mu_0, \mu_{\epsilon 0})^\top$ denotes the true value of $\theta_1$.
To obtain the asymptotic normality of $\hat\theta_{2,CLS}$, we make a further assumption:
Assumption 3.
$E X_t^6 < \infty$.
Then we have the following proposition.
Proposition 3.
Under Assumptions 1 and 3, the CLS estimator $\hat\theta_{2,CLS}$ is strongly consistent and asymptotically normal:
$$\sqrt{n}\,(\hat\theta_{2,CLS} - \theta_{2,0}) \xrightarrow{L} N(0,\, V_2^{-1} W_2 V_2^{-1}),$$
where $V_2 := E\big(\partial_{\theta_2} g_2(\theta_{2,0}, X_0)\, \partial_{\theta_2}^\top g_2(\theta_{2,0}, X_0)\big)$, $W_2 := E\big(q_{21}(\theta_{2,0})\, \partial_{\theta_2} g_2(\theta_{2,0}, X_0)\, \partial_{\theta_2}^\top g_2(\theta_{2,0}, X_0)\big)$, and $\theta_{2,0} = (\sigma_0^2, \sigma_{\epsilon 0}^2)^\top$ denotes the true value of $\theta_2$.
Based on Propositions 2 and 3 and Theorem 3.2 in [14], we have the following proposition.
Proposition 4.
Under Assumptions 1 and 3, the CLS estimator $\hat\theta_{CLS} = (\hat\theta_{1,CLS}^\top, \hat\theta_{2,CLS}^\top)^\top$ is strongly consistent and asymptotically normal:
$$\sqrt{n}\,(\hat\theta_{CLS} - \theta_0) \xrightarrow{L} N(0, \Omega),$$
where
$$\Omega = \begin{pmatrix} V_1^{-1} W_1 V_1^{-1} & V_1^{-1} M V_2^{-1} \\ V_2^{-1} M^\top V_1^{-1} & V_2^{-1} W_2 V_2^{-1} \end{pmatrix},$$
$M = E\big(q_{11}(\theta_{1,0})\, q_{21}(\theta_{2,0})\, \partial_{\theta_1} g_1(\theta_{1,0}, X_0)\, \partial_{\theta_2}^\top g_2(\theta_{2,0}, X_0)\big)$, and $\theta_0 = (\theta_{1,0}^\top, \theta_{2,0}^\top)^\top$ denotes the true value of $\theta$.
We make the following preparation to establish Proposition 5. Based on (5), solve the equations for $(m, \alpha)$ and denote the solution by $(h_1(\mu, \sigma^2), h_2(\mu, \sigma^2))$. Let
$$D = D(\mu, \sigma^2) = \begin{pmatrix} \partial h_1/\partial\mu & \partial h_1/\partial\sigma^2 \\ \partial h_2/\partial\mu & \partial h_2/\partial\sigma^2 \end{pmatrix}. \qquad (9)$$
Based on Proposition 4, we state the strong consistency and asymptotic normality of $(\hat m, \hat\alpha)$ in the following proposition.
Proposition 5.
Under Assumptions 1 and 3, the CLS estimator $(\hat m_{CLS}, \hat\alpha_{CLS})$ is strongly consistent and asymptotically normal:
$$\sqrt{n}\begin{pmatrix} \hat m_{CLS} - m_0 \\ \hat\alpha_{CLS} - \alpha_0 \end{pmatrix} \xrightarrow{L} N(0,\, D \Sigma D^\top),$$
where D is given in (9), $\Sigma = \mathrm{diag}(I V_1^{-1} W_1 V_1^{-1} I^\top,\, I V_2^{-1} W_2 V_2^{-1} I^\top)$ with $I = (1, 0)$, and $m_0$ and $\alpha_0$ denote the true values of m and $\alpha$, respectively.
The brief proofs of Propositions 2–5 are given in Appendix A.

3.2. Conditional Maximum Likelihood Estimation

We maximize the likelihood function with respect to the model parameters $\theta = (m, \alpha, \delta)$ to obtain the conditional maximum likelihood (CML) estimate in the parametric case:
$$L(X_1 = x_1, \ldots, X_N = x_N \mid \theta) = P_\theta(X_1 = x_1) \prod_{i=2}^{N} P_\theta(X_i = x_i \mid X_{i-1} = x_{i-1}, \ldots, X_1 = x_1) = P_\theta(X_1 = x_1) \prod_{i=2}^{N} P_\theta(X_i = x_i \mid X_{i-1} = x_{i-1}),$$
where $\delta$ is the parameter of $\epsilon_i$, $P_\theta(X_1 = x_1)$ is the pmf of $X_1$ and $P_\theta(X_i = x_i \mid X_{i-1} = x_{i-1})$ is the conditional pmf. Since the marginal distribution is difficult to obtain in general, a simple approach is to condition on the observed $X_1$: we essentially ignore the dependency on the initial value and take the CML estimate given $X_1$ as an estimate for $\theta$ by maximizing the conditional log-likelihood
$$\ell(X_1 = x_1, \ldots, X_N = x_N \mid \theta) = \sum_{i=2}^{N} \log P_\theta(X_i \mid X_{i-1})$$
over $\Theta$; we denote the CML estimate by $\hat\theta = (\hat m, \hat\alpha, \hat\delta)$. The log-likelihood function is as follows:
$$\ell(X_1 = x_1, \ldots, X_N = x_N \mid \theta) = \sum_{i=2}^{N} \log\left\{ \sum_{w=0}^{\min\{(m-1)x_{i-1},\, x_i\}} C_m(x_{i-1}, w)\, \alpha^w \beta^{(m-1)x_{i-1} - w}\, P(\epsilon_i = x_i - w) \right\},$$
where $\alpha$ and $\beta$ satisfy (4), and $\epsilon_i$ follows a non-negative discrete distribution with parameter $\delta$. In what follows, we consider the two cases m = 3 and m = 4.
Case 1: For m = 3 with Poisson innovations, i.e., $\epsilon_t \sim P(\delta)$,
$$\ell(X_1 = x_1, \ldots, X_N = x_N \mid \theta) = \sum_{i=2}^{N} \log\Bigg\{ \sum_{w=0}^{\min\{2x_{i-1},\, x_i\}} \; \sum_{t_1 = \max\{0,\, x_{i-1} - w\}}^{\lfloor x_{i-1} - w/2 \rfloor} \binom{x_{i-1}}{t_1} \binom{x_{i-1} - t_1}{2x_{i-1} - 2t_1 - w}\, (\beta^2)^{t_1} (\alpha\beta)^{2x_{i-1} - 2t_1 - w} (\alpha^2)^{t_1 - x_{i-1} + w}\, \frac{\delta^{x_i - w}}{(x_i - w)!}\, e^{-\delta} \Bigg\},$$
where $\beta$ is given in Remark 1.
Case 2: For m = 4 with geometric innovations, i.e., $\epsilon_t \sim Ge(\delta)$ with $P(\epsilon_t = k) = (1-\delta)^k \delta$ for $k = 0, 1, 2, \ldots$,
$$\ell(X_1 = x_1, \ldots, X_N = x_N \mid \theta) = \sum_{i=2}^{N} \log\Bigg\{ \sum_{w=0}^{\min\{3x_{i-1},\, x_i\}} \; \sum_{t_1 = \max\{0,\, x_{i-1} - w\}}^{\lfloor x_{i-1} - w/3 \rfloor} \; \sum_{t_2 = \max\{0,\, 2x_{i-1} - 2t_1 - w\}}^{\lfloor (3x_{i-1} - 3t_1 - w)/2 \rfloor} \binom{x_{i-1}}{t_1} \binom{x_{i-1} - t_1}{t_2} \binom{x_{i-1} - t_1 - t_2}{3x_{i-1} - 3t_1 - 2t_2 - w}\, (\beta^3)^{t_1} (\alpha\beta^2)^{t_2} (\alpha^2\beta)^{3x_{i-1} - 3t_1 - 2t_2 - w} (\alpha^3)^{2t_1 + t_2 - 2x_{i-1} + w}\, (1-\delta)^{x_i - w} \delta \Bigg\},$$
where $\beta$ is given in Remark 1. For higher-order m, the formula is somewhat tedious and is omitted here. For EB-INAR(p), CML estimation is too complicated, but two-step CLS estimation remains quite feasible, with a procedure similar to the case p = 1. For this reason, we only consider EB-INAR(1) in the simulation studies.
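For completeness, here is a minimal R sketch of the conditional log-likelihood computed numerically, by convolving the $EB(m, x_{i-1}, \alpha)$ pmf with the innovation pmf rather than using the closed-form multinomial expansions of Cases 1 and 2. eb_pmf and solve_beta are the hypothetical helpers sketched earlier; the paper itself maximizes the likelihood with the maxLik package (see Section 3.3).

```r
# A minimal sketch (base R): conditional log-likelihood of EB-INAR(1) with
# Poisson(delta) innovations, evaluated numerically. Requires eb_pmf() and
# solve_beta() from the earlier sketches.
eb_inar1_loglik <- function(par, x, m) {          # par = c(alpha, delta)
  alpha <- par[1]; delta <- par[2]
  beta  <- solve_beta(m, alpha)
  ll <- 0
  for (i in 2:length(x)) {
    thin <- if (x[i - 1] > 0) eb_pmf(m, x[i - 1], alpha, beta) else 1
    w  <- 0:(length(thin) - 1)                    # possible values of the thinned part
    ok <- w <= x[i]
    ll <- ll + log(sum(thin[ok] * dpois(x[i] - w[ok], delta)))
  }
  ll
}

# Maximization over (alpha, delta) for fixed m, e.g. with base R's optim():
# optim(c(0.2, 1), function(p) -eb_inar1_loglik(p, x, m = 3),
#       method = "L-BFGS-B", lower = c(1e-3, 1e-3), upper = c(0.99, Inf))
```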

3.3. Simulation

A Monte Carlo simulation study was conducted to evaluate the finite-sample performance of the estimators. For CLS estimation, we used the R package BB, which solves and optimizes large-scale nonlinear systems, to solve Equations (4) and (8). For CML estimation, we used the R package maxLik to maximize the log-likelihood function.
We considered the following configurations of the parameters:
  • Poisson INAR(1) models with $\theta = (m, \alpha, \delta)$:
    (A1) = (3, 0.2, 1); (A2) = (3, 0.1, 0.5); (A3) = (4, 0.2, 1); (A4) = (4, 0.1, 0.5);
  • Geometric INAR(1) models with $\theta = (m, \alpha, \delta)$:
    (B1) = (3, 0.3, 0.5); (B2) = (3, 0.4, 2/3); (B3) = (4, 0.3, 0.5); (B4) = (4, 0.4, 2/3).
In the simulations, we chose sample sizes n = 100, 200 and 400, with M = 500 replications for each choice of parameters. The root mean squared error (RMSE) was calculated to evaluate the performance of the estimators according to the formula $\mathrm{RMSE} = \sqrt{\frac{1}{M-1} \sum_{j=1}^{M} (\hat\xi_j - \xi_0)^2}$, where $\hat\xi_j$ is the estimate of $\xi_0$ in the jth replication.
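In R this criterion is a one-liner; a minimal sketch (rmse is an illustrative name):

```r
# A minimal sketch: RMSE over M replications, normalized by M - 1 as defined above.
rmse <- function(xi_hat, xi0) sqrt(sum((xi_hat - xi0)^2) / (length(xi_hat) - 1))
```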
For the CLS estimate, the solutions of (4) and (8) are sensitive to $\hat\mu$ and $\hat\sigma^2$, so we adopted the following estimation procedure: first, calculate 500 groups of $\hat\mu$ and $\hat\sigma^2$ estimates; then use the mean values of $\hat\mu$ and $\hat\sigma^2$ to solve Equations (4) and (8). The simulation results for CLS are summarized in Table 1. We found that the estimates move closer to the true values, and the RMSE values gradually decrease, as the sample size increases.
As it is somewhat difficult to estimate the parameter m by CML, we treated m as known. The simulation results for the CML estimators are given in Table 2. In all cases, the estimates generally show small RMSE values, which gradually decrease as the sample size increases.

4. Real Data Examples

In this section, three real data sets, covering overdispersed and underdispersed settings, are considered to illustrate the better performance of the proposed model. The first example is overdispersed crime data from Pittsburgh; the second is overdispersed stock data from the New York Stock Exchange (NYSE); and the third is underdispersed crime data from Pittsburgh, which was also analyzed by [15]. As is well known, forecasting is very important for model evaluation in time series analysis. We first introduce two forecasting criteria, together with other preparations.

4.1. Forecasting

Before introducing the evaluation criteria, we briefly describe the basic procedure: first, we divide the $n_1 + n_2$ observations into two parts, a training set with the first $n_1$ observations and a prediction set with the last $n_2$ observations. The training set is used to estimate the parameters and evaluate the fit of the model. We can then evaluate the efficiency of each model by comparing the following criteria between the predicted data and the real data in the prediction set.
Similar to the procedure in [16], which performs an out-of-sample experiment to compare the forecasting performance of two model-based bootstrap approaches, we introduce the forecasting procedure as follows. For each $t = n_1 + 1, \ldots, n_1 + n_2 - 5$, we estimate an INAR(1) model on the data $x_1, \ldots, x_t$ and then use the fitted result to generate the next five forecasts, called the 5-step-ahead forecasts $x_{t+1}^F, \ldots, x_{t+5}^F$, where $x_t^F$ is the forecast at time t. In this way we obtain many sequences of 1, 2, …, 5 step-ahead forecasts; finally, we replicate the whole procedure P times. We can then evaluate the point forecast accuracy by the forecast mean square error (FMSE), defined as
$$\mathrm{FMSE} = \frac{1}{P} \sum_{i=n_1+1}^{n_1+n_2} (x_i - \bar{x}_i^F)^2,$$
and the forecast mean absolute error (FMAE), defined as
$$\mathrm{FMAE} = \frac{1}{P} \sum_{i=n_1+1}^{n_1+n_2} |x_i - \bar{x}_i^F|,$$
where $x_i$ is the true value of the data, $\bar{x}_i^F$ is the mean of all the forecasts at time i, and P is the number of replicates.
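Given a matrix of replicated forecasts, the two criteria are straightforward to compute; a minimal R sketch follows (forecast_criteria is an illustrative name), keeping the 1/P normalization of the definitions above.

```r
# A minimal sketch (base R): FMSE and FMAE from P replicated forecast rows,
# one column per index i in the prediction set, as defined above.
forecast_criteria <- function(x_true, x_fore) {   # x_fore: P x length(x_true) matrix
  P    <- nrow(x_fore)
  xbar <- colMeans(x_fore)                        # mean forecast at each i
  c(FMSE = sum((x_true - xbar)^2) / P,
    FMAE = sum(abs(x_true - xbar)) / P)
}
```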

4.2. Overdispersed Cases

We consider two overdispersed data sets. The first contains 144 observations and represents monthly tallies of crime data from the Forecasting Principles website (http://www.forecastingprinciples.com); these crimes were reported in police car beats in Pittsburgh from January 1990 to December 2001. The second is the Empire District Electric Company (EDE) data set from the Trades and Quotes (TAQ) database of the NYSE, which contains 300 observations and was also analyzed by [17].

4.2.1. P1V Data

The 45th P1V (Part 1 Violent Crimes) data set contains crimes of murder, rape, robbery and other kinds; see the data dictionary on the Forecasting Principles website for more details. Figure 1 shows the time series plot, the autocorrelation function (ACF) and the partial autocorrelation function (PACF) of the 45th P1V series. The maximum value of the data is 15 and the minimum is 0; the mean is 4.3333 and the variance is 7.4685. From the ACF plot, we find that the data are dependent. From the PACF plot, we can see that only the first lag is significant, which strongly suggests an INAR(1) model.
First, we divided the data set into two parts: the training set with the first $n_1 = 134$ observations and the prediction set with the last $n_2 = 10$ observations. We fit the training set with the following models: the expectation thinning INAR(1) (ETINAR(1)) model of [9], the GSC thinning INAR(1) (GSCINAR(1)) model of [10], the binomial thinning INAR(1) model, and EB-INAR(1) models with m = 3, 4. According to the mean and variance of the P1V data, we used one of the most common settings, the geometric distribution, as the innovation distribution in the above models.
To compare the effectiveness of the models, we consider the following evaluation criteria: (1) AIC; (2) the mean and standard error of the Pearson residual $r_t$ and its related Ljung–Box statistic, where the Pearson residuals are defined as
$$r_t = \frac{X_t - \hat\mu X_{t-1} - \hat\mu_\epsilon}{\left[\hat\sigma^2 X_{t-1} + \hat\sigma_\epsilon^2\right]^{1/2}}, \quad t = 1, 2, \ldots,$$
where $\hat\mu$ and $\hat\sigma^2$ are the estimated expectation and variance for the related thinning operators, respectively; (3) three goodness-of-fit statistics: RMS (root mean square error), MAE (mean absolute error) and MdAE (median absolute error), where the error is defined by $X_t - E(X_t \mid X_{t-1})$, $t = 1, \ldots, n_1$; (4) the mean $\hat{\bar{x}}$ of the data on the training set, calculated from the estimated results.
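These diagnostics are easy to reproduce; a minimal R sketch follows (pearson_resid is an illustrative name), with the Ljung–Box statistic from base R's Box.test.

```r
# A minimal sketch (base R): Pearson residuals from the fitted conditional
# moments, and the Ljung-Box test at 15 lags as reported in Tables 3-5.
pearson_resid <- function(x, mu_hat, mu_eps_hat, sig2_hat, sig2_eps_hat) {
  n <- length(x)
  (x[2:n] - mu_hat * x[1:(n - 1)] - mu_eps_hat) /
    sqrt(sig2_hat * x[1:(n - 1)] + sig2_eps_hat)
}

# r <- pearson_resid(x, mu_hat, mu_eps_hat, sig2_hat, sig2_eps_hat)
# c(mean(r), sd(r))
# Box.test(r, lag = 15, type = "Ljung-Box")
```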
Next, focusing on forecasting, we generated P = 100 replicates based on the training set for each model. Then we calculated the FMSE and FMAE for each model.
All results of the fitted models are given in Table 3. There is no evidence of any correlation within the residuals of the five models, which is also supported by the Ljung–Box statistic based on 15 lags (because $\chi^2_{0.05}(14) = 23.6847$). There were no significant differences in the RMS, MAE, MdAE and $\hat{\bar{x}}$ values (the true mean of the 134-observation training set was 4.3880) among the models. In other words, no model performed best in terms of these four criteria, so we also considered AIC. Since the CML estimator cannot be adopted for GSCINAR(1), only the other criteria can be compared for it.
Considering the fit on the training set, EB-INAR(1) with m = 3 has the smallest AIC, and EB-INAR(1) with m = 4 has almost the same AIC. For the forecasting results, EB-INAR(1) with m = 4 has the smallest FMSE and the second smallest FMAE among all models, while EB-INAR(1) with m = 3 has the second smallest FMSE and the smallest FMAE. Based on these results, we conclude that EB-INAR(1) with m = 3, 4 performs better than INAR(1), ETINAR(1) and GSCINAR(1).

4.2.2. Stock Data

We analyzed another overdispersed data set, for the Empire District Electric Company (EDE), from the Trades and Quotes (TAQ) database of the NYSE. The data are the numbers of trades in 5 min intervals between 9:45 a.m. and 4:00 p.m. in the first quarter of 2005 (3 January–31 March 2005, 61 trading days). Here we analyze the portion of the data covering the first to fourth trading days. As there are 75 five-minute intervals per day, the sample size was T = 300.
Figure 2 shows the time series plot, the ACF and the PACF of the EDE series. The maximum value of the data is 25 and the minimum is 0; the mean is 4.6933 and the variance is 14.1665. Based on the time series plot, the series does not appear to be completely stationary, with several outliers or influential observations. Zhu et al. [18] analyzed Poisson autoregression for stock transaction data with extreme values, which could be considered in the current setting. From the ACF plot, we find that the data are dependent. From the PACF plot, we can see that only the first lag is significant, which strongly suggests an INAR(1) model. We used the same procedures and criteria as before, with the geometric distribution as the innovation distribution in the above models.
We first divided the data set into two parts: the training set with the first $n_1 = 270$ observations and the prediction set with the last $n_2 = 30$ observations. All results of the fitted models are given in Table 4. Among all models, EB-INAR(1) with m = 4 has the smallest AIC, and there is no evidence of any correlation within the residuals of the five models, which is also supported by the Ljung–Box statistic based on 15 lags. There are no significant differences in the RMS, MAE, MdAE and $\hat{\bar{x}}$ values (the true mean of the 270-observation training set was 4.3407) among the considered models. For the prediction results, EB-INAR(1) with m = 4 has the smallest FMSE and FMAE among all models. Based on the above results, we conclude that EB-INAR(1) with m = 4 performs best for this data set.

4.3. Underdispersed Case

The 11th FAMVIOL data set contains crimes of family violence and can also be obtained from the Forecasting Principles website. Figure 3 shows the time series plot, the ACF and the PACF of the 11th FAMVIOL series. The maximum value of the data is 3 and the minimum is 0; the mean is 0.4027 and the variance is 0.3820. We use the procedures and criteria of Section 4.2.1 to compare the different models. According to the mean and variance of the FAMVIOL data, we use one of the most common settings, the Poisson distribution, as the innovation distribution in the above models.
All results of the fitted models are given in Table 5. There is no evidence of any correlation within the residuals of the five models, which is also supported by the Ljung–Box statistic based on 15 lags. There are no significant differences among the models in the fit and forecasting criteria. ETINAR(1), with the largest AIC, performed worst among these models.
Let us now give a brief summary. For the P1V data and the stock data, which are overdispersed with moderately large counts, EB-INAR(1) with m > 2 is clearly better than m = 2. For the FAMVIOL data, which are underdispersed with small counts, EB-INAR(1) with m > 2 is also competitive.

5. Conclusions

This paper proposes an EB-INAR(1) model based on the newly constructed EB thinning operator, which extends the family of thinning-based INAR models. We gave estimation methods for the parameters and established the asymptotic properties of the estimators for the innovation-free case. Based on the simulations and real data analysis, the EB-INAR(1) model can accurately and flexibly capture the dispersion features of the data, which shows its effectiveness and practicality. Compared with other models, such as ETINAR(1) and GSCINAR(1), our model is competitive.
We point out that many existing integer-valued models can be generalized by replacing the binomial thinning operator with the EB thinning operator, such as the models in [19,20,21,22,23]. In addition, the first-order INAR model considered here can be extended to higher orders. More research along these lines will be pursued in the future.

Author Contributions

Conceptualization, Z.L. and F.Z.; methodology, F.Z.; software, Z.L.; validation, Z.L. and F.Z.; formal analysis, Z.L.; investigation, Z.L. and F.Z.; resources, F.Z.; data curation, Z.L.; writing—original draft preparation, Z.L.; writing—review and editing, F.Z.; visualization, Z.L.; supervision, F.Z.; project administration, F.Z.; funding acquisition, F.Z. All authors have read and agreed to the published version of the manuscript.

Funding

Zhu's work is supported by the National Natural Science Foundation of China (grant numbers 11871027 and 11731015) and the Cultivation Plan for Excellent Young Scholar Candidates of Jilin University.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Proof of Proposition 1. 
Since 1–4 are easy to verify, we only prove 5. By the law of total covariance, we have
$$\begin{aligned}
\mathrm{Cov}(X_t, X_{t-h}) &= \mathrm{Cov}\big(E(X_t \mid X_{t-1}, \ldots),\, E(X_{t-h} \mid X_{t-1}, \ldots)\big) + E\big(\mathrm{Cov}(X_t, X_{t-h} \mid X_{t-1}, \ldots)\big) \\
&= \mathrm{Cov}(\mu X_{t-1} + \mu_\epsilon,\, X_{t-h}) + E\Big(E\big((X_t - E(X_t \mid X_{t-1}, \ldots))(X_{t-h} - E(X_{t-h} \mid X_{t-1}, \ldots)) \mid X_{t-1}, \ldots\big)\Big) \\
&= \mu\,\mathrm{Cov}(X_{t-1}, X_{t-h}) + 0 = \cdots = \mu^h\,\mathrm{Var}(X_{t-h}).
\end{aligned}$$
Thus, the autocorrelation function $\mathrm{Corr}(X_t, X_{t-h}) = \mu^h$. □
Proof of Propositions 2 and 3. 
Propositions 2 and 3 are similar to Theorems 1 and 2 in [8] and can be proved by verifying the regularity conditions of Theorems 3.1 and 3.2 in [24]. For instance, in the proof of Proposition 2, the requirement in [24] that the partial derivatives $\partial g(\alpha, \mathcal{F}_{m-1})/\partial \alpha_i$ have finite fourth moments corresponds to Assumption 2 in Section 3.1, and $u_m^2(\alpha)$ in [24] corresponds to $q_{1t}(\theta_1)$ in Step 1.1. Hence, Proposition 2 can be regarded as a direct consequence of Theorem 3.2 in [24]. The proof of Proposition 3 is similar to that of Proposition 2; the procedure is almost the same as that of Theorem 3.2 in [24]. □
Proof of Proposition 4. 
Similar to the theorem in [25], based on Theorem 3.2 in [14], we have
$$\sqrt{n}\,(\hat\theta_{CLS} - \theta_0) \xrightarrow{L} N(0, \Omega),$$
where
$$\Omega = \begin{pmatrix} V_1^{-1} W_1 V_1^{-1} & V_1^{-1} M V_2^{-1} \\ V_2^{-1} M^\top V_1^{-1} & V_2^{-1} W_2 V_2^{-1} \end{pmatrix}.$$
In the proof of the theorem in [25], $q_{1t}(\theta_1)$ corresponds to $u_t$ and $q_{2t}(\theta_2)$ corresponds to $U_t$ in [25]. Based on the results for $V_1$, $W_1$ in Proposition 2 and $V_2$, $W_2$ in Proposition 3, we obtain $M = E\big(q_{11}(\theta_{1,0})\, q_{21}(\theta_{2,0})\, \partial_{\theta_1} g_1(\theta_{1,0}, X_0)\, \partial_{\theta_2}^\top g_2(\theta_{2,0}, X_0)\big)$. □
Proof of Proposition 5. 
Since the solutions $(h_1(\mu, \sigma^2), h_2(\mu, \sigma^2))$ for $(m, \alpha)$ in Section 3.1 are real-valued and have nonzero differentials, Proposition 5 is an application of the $\delta$-method; see, for example, Theorem A on p. 122 of [26]. □

References

  1. Weiß, C.H. An Introduction to Discrete-Valued Time Series; John Wiley & Sons, Inc.: Hoboken, NJ, USA, 2018. [Google Scholar]
  2. Steutel, F.W.; van Harn, K. Discrete analogues of self-decomposability and stability. Ann. Probab. 1979, 7, 893–899. [Google Scholar] [CrossRef]
  3. McKenzie, E. Some simple models for discrete variate time series. Water Resour. Bull. 1985, 21, 645–650. [Google Scholar] [CrossRef]
  4. Al-Osh, M.; Alzaid, A. First-order integer-valued autoregressive (INAR(1)) processes. J. Time Ser. Anal. 1987, 8, 261–275. [Google Scholar] [CrossRef]
  5. Schweer, S. A goodness-of-fit test for integer-valued autoregressive processes. J. Time Series Anal. 2016, 37, 77–98. [Google Scholar] [CrossRef]
  6. Chen, C.W.S.; Lee, S. Bayesian causality test for integer-valued time series models with applications to climate and crime data. J. R. Stat. Soc. Ser. C. Appl. Stat. 2017, 66, 797–814. [Google Scholar] [CrossRef]
  7. Zhu, R.; Joe, H. Negative binomial time series models based on expectation thinning operators. J. Statist. Plann. Inference 2010, 140, 1874–1888. [Google Scholar] [CrossRef]
  8. Yang, K.; Kang, Y.; Wang, D.; Li, H.; Diao, Y. Modeling overdispersed or underdispersed count data with generalized Poisson integer-valued autoregressive processes. Metrika 2019, 82, 863–889. [Google Scholar] [CrossRef]
  9. Aly, E.; Bouzar, N. Expectation thinning operators based on linear fractional probability generating functions. J. Indian Soc. Probab. Stat. 2019, 20, 89–107. [Google Scholar] [CrossRef] [Green Version]
  10. Kang, Y.; Wang, D.; Yang, K.; Zhang, Y. A new thinning-based INAR(1) process for underdispersed or overdispersed counts. J. Korean Statist. Soc. 2020, 49, 324–349. [Google Scholar] [CrossRef]
  11. Balasubramanian, K.; Viveros, R.; Balakrishnan, N. Some discrete distributions related to extended Pascal triangles. Fibonacci Quart. 1995, 33, 415–425. [Google Scholar]
  12. Du, J.; Li, Y. The integer-valued autoregressive (INAR(p)) model. J. Time Ser. Anal. 1991, 12, 129–142. [Google Scholar]
  13. Li, Q.; Zhu, F. Mean targeting estimator for the integer-valued GARCH(1,1) model. Statist. Pap. 2020, 61, 659–679. [Google Scholar] [CrossRef]
  14. Nicholls, D.F.; Quinn, B.G. Random Coefficient Autoregressive Models: An Introduction; Springer: New York, NY, USA, 1982. [Google Scholar]
  15. Bakouch, H.S.; Ristić, M.M. A mixed thinning based geometric INAR(1) model. Metrika 2010, 72, 265–280. [Google Scholar] [CrossRef]
  16. Bisaglia, L.; Gerolimetto, M. Model-based INAR bootstrap for forecasting INAR(p) models. Comput. Statist. 2019, 34, 1815–1848. [Google Scholar] [CrossRef]
  17. Jung, R.C.; Liesenfeld, R.; Richard, J.-F. Dynamic factor models for multivariate count data: An application to stock-market trading activity. J. Bus. Econom. Statist. 2011, 29, 73–85. [Google Scholar] [CrossRef] [Green Version]
  18. Zhu, F.; Liu, S.; Shi, L. Local influence analysis for Poisson autoregression with an application to stock transaction data. Stat. Neerl. 2016, 70, 4–25. [Google Scholar] [CrossRef]
  19. Zhang, H.; Wang, D.; Zhu, F. Inference for INAR(p) processes with signed generalized power series thinning operator. J. Statist. Plann. Inference 2010, 140, 667–683. [Google Scholar] [CrossRef]
  20. Zhang, H.; Wang, D.; Zhu, F. Generalized RCINAR(1) process with signed thinning operator. Comm. Statist. Theory Methods 2012, 41, 1750–1770. [Google Scholar] [CrossRef]
  21. Qi, X.; Li, Q.; Zhu, F. Modeling time series of count with excess zeros and ones based on INAR(1) model with zero-and-one inflated Poisson innovations. J. Comput. Appl. Math. 2019, 346, 572–590. [Google Scholar] [CrossRef]
  22. Liu, Z.; Li, Q.; Zhu, F. Random environment binomial thinning integer-valued autoregressive process with Poisson or geometric marginal. Braz. J. Probab. Stat. 2020, 34, 251–272. [Google Scholar] [CrossRef]
  23. Qian, L.; Li, Q.; Zhu, F. Modelling heavy-tailedness in count time series. Appl. Math. Model. 2020, 82, 766–784. [Google Scholar] [CrossRef]
  24. Klimko, L.A.; Nelson, P.I. On conditional least squares estimation for stochastic processes. Ann. Statist. 1978, 6, 629–642. [Google Scholar] [CrossRef]
  25. Zhu, F.; Wang, D. Estimation of parameters in the NLAR(p) model. J. Time Ser. Anal. 2008, 29, 619–628. [Google Scholar] [CrossRef]
  26. Serfling, R.J. Approximation Theorems of Mathematical Statistics; John Wiley & Sons, Inc.: New York, NY, USA, 1980. [Google Scholar]
Figure 1. The data, autocorrelation function (ACF) and partial autocorrelation function (PACF) of the 45th P1V series.
Figure 2. The data, ACF and PACF of the first to fourth trading days of the EDE series.
Figure 3. The data, ACF and PACF of the 11th data set of the FAMVIOL series.
Table 1. Means of estimates, RMSEs (within parentheses) by CLS.

Case | n | $\hat m$ | $\hat\alpha$ | $\hat\delta$ (RMSE)
A1 | 100 | 2.9100 | 0.1758 | 1.0220 (0.1595)
A1 | 200 | 2.9219 | 0.1931 | 1.0142 (0.1148)
A1 | 400 | 2.9995 | 0.1984 | 1.0090 (0.0771)
A2 | 100 | 2.7628 | 0.0886 | 0.5014 (0.0893)
A2 | 200 | 2.8169 | 0.0873 | 0.5049 (0.0668)
A2 | 400 | 2.9132 | 0.0942 | 0.5004 (0.0449)
A3 | 100 | 3.7885 | 0.1713 | 1.0265 (0.1594)
A3 | 200 | 3.8450 | 0.1902 | 1.0165 (0.1127)
A3 | 400 | 3.8957 | 0.1952 | 1.0113 (0.0789)
A4 | 100 | 3.8421 | 0.0912 | 0.5059 (0.0912)
A4 | 200 | 3.8483 | 0.0912 | 0.5031 (0.0603)
A4 | 400 | 3.9590 | 0.0981 | 0.5012 (0.0439)
B1 | 100 | 2.9074 | 0.3122 | 0.4981 (0.0511)
B1 | 200 | 2.9588 | 0.3115 | 0.5008 (0.0365)
B1 | 400 | 2.9858 | 0.3087 | 0.4984 (0.0270)
B2 | 100 | 3.0719 | 0.3841 | 0.6578 (0.0553)
B2 | 200 | 3.0523 | 0.3877 | 0.6569 (0.0400)
B2 | 400 | 3.0986 | 0.3924 | 0.6600 (0.0307)
B3 | 100 | 3.6127 | 0.2703 | 0.4962 (0.0536)
B3 | 200 | 3.7937 | 0.2961 | 0.4980 (0.0404)
B3 | 400 | 3.9217 | 0.2984 | 0.4970 (0.0269)
B4 | 100 | 3.9575 | 0.3935 | 0.6417 (0.0653)
B4 | 200 | 4.0388 | 0.3929 | 0.6556 (0.0478)
B4 | 400 | 4.0027 | 0.3953 | 0.6606 (0.0353)

Note: RMSE, root mean squared error.
Table 2. Means of estimates, RMSEs (within parentheses) by CML.

Case | Parameter | n = 100 | n = 200 | n = 400
A1 | $\hat\alpha$ | 0.1858 (0.0656) | 0.1927 (0.0476) | 0.1970 (0.0347)
A1 | $\hat\delta$ | 1.0070 (0.1420) | 1.0043 (0.1057) | 1.0057 (0.0806)
A2 | $\hat\alpha$ | 0.1041 (0.0575) | 0.1027 (0.0453) | 0.0973 (0.0357)
A2 | $\hat\delta$ | 0.4939 (0.0804) | 0.4964 (0.0593) | 0.4973 (0.0422)
A3 | $\hat\alpha$ | 0.1827 (0.0592) | 0.1923 (0.0406) | 0.1966 (0.0306)
A3 | $\hat\delta$ | 1.0186 (0.1358) | 1.0030 (0.1059) | 1.0002 (0.0789)
A4 | $\hat\alpha$ | 0.0938 (0.0500) | 0.0940 (0.0414) | 0.0987 (0.0342)
A4 | $\hat\delta$ | 0.4993 (0.0790) | 0.5052 (0.0583) | 0.4967 (0.0420)
B1 | $\hat\alpha$ | 0.2982 (0.0456) | 0.2980 (0.0329) | 0.2992 (0.0237)
B1 | $\hat\delta$ | 0.5086 (0.0469) | 0.5013 (0.0323) | 0.5016 (0.0224)
B2 | $\hat\alpha$ | 0.3888 (0.0455) | 0.3949 (0.0331) | 0.3980 (0.0222)
B2 | $\hat\delta$ | 0.6685 (0.0501) | 0.6681 (0.0374) | 0.6664 (0.0256)
B3 | $\hat\alpha$ | 0.2904 (0.0441) | 0.2958 (0.0276) | 0.2990 (0.0219)
B3 | $\hat\delta$ | 0.5039 (0.0480) | 0.5003 (0.0338) | 0.5006 (0.0264)
B4 | $\hat\alpha$ | 0.3867 (0.0348) | 0.3940 (0.0240) | 0.3965 (0.0163)
B4 | $\hat\delta$ | 0.6650 (0.0539) | 0.6674 (0.0382) | 0.6662 (0.0271)

Note: RMSE, root mean squared error.
Table 3. Fitting results, AIC and some characteristics of P1V data.

Model | Estimates | AIC | $\bar{r}_t$ | std($r_t$) | Ljung–Box | RMS | MAE | MdAE | $\hat{\bar{x}}$ | FMSE | FMAE
INAR(1) | $\hat\alpha$ = 0.3955, $\hat\delta$ = 0.2741, $\hat q$ = 0.4942 | 641.9136 | 0.0020 | 0.8008 | 12.5545 | 2.6384 | 2.0236 | 1.5828 | 4.3811 | 0.3988 | 0.1809
ETINAR(1) | $\hat r$ = 0.3739, $\hat\delta$ = 0.3112 | 639.3243 | 0.0134 | 0.8392 | 16.6092 | 2.6736 | 2.0628 | 1.6958 | 4.3758 | 0.3847 | 0.1821
GSCINAR(1) | $\hat\gamma$ = 0.8944, $\hat\delta$ = 0.2515 | – | 0.0005 | 0.6805 | 12.7469 | 2.6304 | 2.0027 | 1.7395 | 4.3838 | 0.4115 | 0.1935
EB-INAR(1), m = 3 | $\hat\alpha$ = 0.3299, $\hat\delta$ = 0.3050 | 636.2386 | 0.0097 | 0.8525 | 15.7722 | 2.6666 | 2.0563 | 1.7167 | 4.3771 | 0.3770 | 0.1748
EB-INAR(1), m = 4 | $\hat\alpha$ = 0.3164, $\hat\delta$ = 0.3156 | 636.2887 | 0.0145 | 0.8526 | 17.0785 | 2.6789 | 2.0673 | 1.6815 | 4.3756 | 0.3589 | 0.1791
Table 4. Fitting results, AIC and some characteristics of EDE data.

Model | Estimates | AIC | $\bar{r}_t$ | std($r_t$) | Ljung–Box | RMS | MAE | MdAE | $\hat{\bar{x}}$ | FMSE | FMAE
INAR(1) | $\hat\alpha$ = 0.2457, $\hat\delta$ = 0.2341, $\hat q$ = 0.3204 | 1341.648 | 0.0003 | 0.8590 | 13.0696 | 3.3407 | 2.4817 | 2.0075 | 4.3358 | 10.2602 | 1.2368
ETINAR(1) | $\hat r$ = 0.4125, $\hat\delta$ = 0.2533 | 1339.397 | 0.0041 | 0.8811 | 16.4070 | 3.3570 | 2.4910 | 2.0534 | 4.3357 | 10.6103 | 1.2688
GSCINAR(1) | $\hat\gamma$ = 0.9578, $\hat\delta$ = 0.2284 | – | −0.0003 | 0.8027 | 13.1596 | 3.3397 | 2.4812 | 2.0409 | 4.3361 | 10.4577 | 1.2362
EB-INAR(1), m = 3 | $\hat\alpha$ = 0.2200, $\hat\delta$ = 0.2448 | 1338.071 | 0.0018 | 0.8801 | 14.5878 | 3.3477 | 2.4863 | 2.0299 | 4.3352 | 10.1038 | 1.2317
EB-INAR(1), m = 4 | $\hat\alpha$ = 0.2194, $\hat\delta$ = 0.2486 | 1337.554 | 0.0027 | 0.8838 | 15.3238 | 3.3514 | 2.4880 | 2.0224 | 4.3350 | 10.0463 | 1.2279
Table 5. Fitting results, AIC and some characteristics of FAMVIOL data.

Model | Estimates | AIC | $\bar{r}_t$ | std($r_t$) | Ljung–Box | RMS | MAE | MdAE | $\hat{\bar{x}}$ | FMSE | FMAE
INAR(1) | $\hat\alpha$ = 0.1750, $\hat\delta$ = 0.3101, $\hat q$ = 0.1774 | 206.5983 | −0.0005 | 0.9157 | 8.6672 | 0.5596 | 0.4940 | 0.4851 | 0.3759 | 0.1111 | 0.0790
ETINAR(1) | $\hat r$ = 0.0267, $\hat\delta$ = 0.3092 | 208.5826 | −0.0003 | 0.9122 | 8.6344 | 0.5596 | 0.4939 | 0.4866 | 0.3759 | 0.1205 | 0.0837
GSCINAR(1) | $\hat\gamma$ = 0.9741, $\hat\delta$ = 0.3045 | – | 0.0023 | 0.8609 | 8.3735 | 0.5595 | 0.4932 | 0.4944 | 0.3759 | 0.1020 | 0.0814
EB-INAR(1), m = 3 | $\hat\alpha$ = 0.1505, $\hat\delta$ = 0.3068 | 206.5335 | 0.0007 | 0.9005 | 8.5485 | 0.5595 | 0.4935 | 0.4900 | 0.3757 | 0.1135 | 0.0792
EB-INAR(1), m = 4 | $\hat\alpha$ = 0.1371, $\hat\delta$ = 0.3131 | 206.8414 | −0.0014 | 0.8943 | 8.4928 | 0.5597 | 0.4944 | 0.4802 | 0.3759 | 0.1149 | 0.0808