Article

The Consistency of Estimators in a Heteroscedastic Partially Linear Model with $\rho^-$-Mixing Errors

State Key Laboratory of Mechanics and Control of Mechanical Structures and Department of Mathematics, Nanjing University of Aeronautics and Astronautics, Nanjing 210016, China
* Author to whom correspondence should be addressed.
Symmetry 2020, 12(7), 1188; https://doi.org/10.3390/sym12071188
Submission received: 21 May 2020 / Revised: 8 July 2020 / Accepted: 9 July 2020 / Published: 17 July 2020
(This article belongs to the Section Computer)

Abstract: This paper studies a heteroscedastic partially linear model with $\rho^-$-mixing random errors that are stochastically dominated and have zero mean. Under suitable conditions, the strong consistency and $p$-th ($p > 0$) mean consistency of the least squares (LS) and weighted least squares (WLS) estimators of the unknown parameter are investigated, as are the strong consistency and $p$-th ($p > 0$) mean consistency of the estimators of the non-parametric component. These results include the corresponding ones for independent, negatively associated (NA), and $\rho^*$-mixing random errors as special cases. Finally, two simulations are presented to support the theoretical results.

1. Introduction

Consider the following heteroscedastic partially linear model:
$$y^{(t)}(x_{in}, z_{in}) = z_{in}\beta + h(x_{in}) + \sigma_{in}\varepsilon^{(t)}(x_{in}), \quad 1 \le t \le r,\ 1 \le i \le n, \tag{1}$$
where $\sigma_{in}^2 = f(u_{in})$; $z_{in} \in \mathbb{R}$, $x_{in} \in \mathbb{R}^p$, and $u_{in} \in \mathbb{R}^p$, with $(x_{in}, z_{in}, u_{in})$ known and nonrandom design points; $\beta$ is an unknown parameter; $f(\cdot)$ and $h(\cdot)$ are unknown functions defined on a compact set $M \subset \mathbb{R}^p$; $y^{(t)}(x_{in}, z_{in})$ denotes the $t$-th observable variable at the point $(x_{in}, z_{in})$; and $\{\varepsilon^{(t)}(x_{in}), 1 \le t \le r, 1 \le i \le n\}$ are random errors.
In order to analyze the effect of temperature on electricity usage, Engle et al. [1] proposed the partially linear model
$$y_i = x_i\beta + h(z_i) + \varepsilon_i, \quad 1 \le i \le n. \tag{2}$$
Since then, partially linear regression models have been studied by many statisticians. The model (2) was further investigated by Heckman [2], Speckman [3], Gao [4], Härdle et al. [5], Hu et al. [6], Zeng and Liu [7], and others, and several applications of the model have been given. Inspired by the model (2), Gao et al. [8] proposed a more general model:
$$y_i = x_i\beta + h(z_i) + \sigma_i\varepsilon_i, \quad 1 \le i \le n. \tag{3}$$
Gao et al. [8] established the asymptotic normality of the least squares (LS) and weighted least squares (WLS) estimators of $\beta$ in the model (3), based on a family of non-parametric estimators of $h(\cdot)$ and $f(\cdot)$. Baek and Liang [9] investigated the asymptotic properties of the estimators in the model (3) with negatively associated errors. Zhou and Hu [10] derived the moment consistency of the estimators of $\beta$ and $h(\cdot)$ in the model (3) under negatively associated samples. Hu [11] proposed a new partially linear model
$$y^{(t)}(x_{in}, z_{in}) = z_{in}\beta + h(x_{in}) + \varepsilon^{(t)}(x_{in}), \quad 1 \le t \le r,\ 1 \le i \le n, \tag{4}$$
and derived the strong and moment consistencies of the estimators with independent and $\varphi$-mixing errors. Li and Yang [12,13] studied the moment and strong consistencies of the estimators of $\beta$ and $h(\cdot)$ in the model (4) based on negatively associated samples. Wang et al. [14] and Wu and Wang [15] discussed the moment and strong consistencies of the LS and WLS estimators of $\beta$ and $h(\cdot)$ with $\rho^*$-mixing errors. In the present paper, we investigate the model (1), which generalizes the model (4). The model (1) can be used in hydrology, biology, and so on (see [16]).
Now, let us recall some concepts of dependence structures. Let $\mathbb{N}$ denote the set of natural numbers, and let $S, T \subset \mathbb{N}$ be two non-empty disjoint sets. Define $\mathrm{dist}(S, T) = \min_{i \in S, j \in T} |i - j|$.
Definition 1 
([17]). Random variables $\{X_n, n \ge 1\}$ are called negatively associated (NA) if for each pair of disjoint finite subsets $B_1, B_2$ of $\mathbb{N}$,
$$\mathrm{Cov}\{g_1(X_k, k \in B_1),\, g_2(X_l, l \in B_2)\} \le 0,$$
where g 1 and g 2 are any coordinate-wise non-decreasing functions such that the covariance exists.
Definition 2 
([18]). A random sequence $\{X_n, n \ge 1\}$ is said to be $\rho^*$-mixing if
$$\rho^*(u) = \sup\{\rho(T, U) : T, U \subset \mathbb{N},\ \mathrm{dist}(T, U) \ge u\} \to 0 \quad (u \to \infty),$$
where
$$\rho(T, U) = \sup\left\{\frac{|E(XY) - E(X)E(Y)|}{\sqrt{\mathrm{Var}(X)\,\mathrm{Var}(Y)}} : X \in L_2(\sigma(T)),\ Y \in L_2(\sigma(U))\right\},$$
$\sigma(T)$ and $\sigma(U)$ are the $\sigma$-fields generated by $\{X_j, j \in T\}$ and $\{X_k, k \in U\}$, respectively, $L_2(\sigma(T))$ is the space of all square-integrable and $\sigma(T)$-measurable random variables, and $L_2(\sigma(U))$ is defined in the same way.
Definition 3 
([19,20]). A sequence $\{X_n, n \ge 1\}$ is said to be $\rho^-$-mixing if
$$\rho^-(u) = \sup\{\rho^-(T, U) : T, U \subset \mathbb{N},\ \mathrm{dist}(T, U) \ge u\} \to 0 \quad (u \to \infty),$$
where
$$\rho^-(T, U) = 0 \vee \sup\left\{\frac{\mathrm{Cov}\big(g_1(X_i, i \in T),\, g_2(X_j, j \in U)\big)}{\sqrt{\mathrm{Var}\big(g_1(X_i, i \in T)\big)\,\mathrm{Var}\big(g_2(X_j, j \in U)\big)}} : g_1, g_2 \in H\right\},$$
$0 \vee x = \max\{0, x\}$, and $H$ is the set of coordinate-wise non-decreasing functions.
We can easily see that $\rho^-(u) \le \rho^*(u)$ and that a $\rho^-$-mixing sequence is NA (in particular, independent) if and only if $\rho^-(1) = 0$. Therefore, $\rho^-$-mixing sequences include $\rho^*$-mixing sequences and NA sequences as special cases, whereas a $\rho^-$-mixing sequence need not be NA or $\rho^*$-mixing. Zhang and Wang [19] constructed the following example of a sequence that is $\rho^-$-mixing but neither NA nor $\rho^*$-mixing.
Example 1 
([19]). Assume that $\{\xi_n, n \ge 1\}$, $\{\eta_n, n \ge 1\}$, and $\{\zeta_n, n \ge 1\}$ are three independent sequences of independent and identically distributed (i.i.d.) standard normal random variables. Denote
$$X_n = \begin{cases} \xi_t, & n = 2t - 1, \\ -\xi_t, & n = 2t, \end{cases} \qquad Y_n = \begin{cases} \eta_t, & n = 2^{2t-1}, \\ -\eta_t, & n = 2^{2t}, \\ \zeta_n, & \text{else}, \end{cases}$$
and $Z_n = X_n^2 + Y_n$. Then, $\{Z_n, n \ge 1\}$ is $\rho^-$-mixing with $\rho^-(2) = 0$. However, $\{Z_n, n \ge 1\}$ is neither NA nor $\rho^*$-mixing.
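Example 1 is concrete enough to simulate. The following sketch is our own illustration (the reconstruction $Z_n = X_n^2 + Y_n$ above, and all names in the code, are assumptions): it generates a stretch of the sequence and checks empirically that adjacent pairs $(Z_{2t-1}, Z_{2t})$, which share the factor $\xi_t^2$, have positive sample covariance, which is the reason $\{Z_n\}$ fails to be NA.

```python
import numpy as np

def simulate_Z(N, rng):
    """Simulate Z_1, ..., Z_N from Example 1 (as reconstructed above)."""
    xi = rng.standard_normal(N // 2 + 2)
    eta = rng.standard_normal(64)              # far more eta_t than any realistic N needs
    zeta = rng.standard_normal(N + 1)
    Z = np.empty(N + 1)
    for n in range(1, N + 1):
        t = (n + 1) // 2
        X = xi[t] if n % 2 == 1 else -xi[t]    # X_n = xi_t (n = 2t-1), -xi_t (n = 2t)
        Y = zeta[n]                            # default branch of Y_n
        tt = 1
        while 2 ** (2 * tt - 1) <= N:          # sparse pairs n = 2^{2t-1}, 2^{2t}
            if n == 2 ** (2 * tt - 1):
                Y = eta[tt]
            elif n == 2 ** (2 * tt):
                Y = -eta[tt]
            tt += 1
        Z[n] = X ** 2 + Y                      # Z_n = X_n^2 + Y_n
    return Z[1:]

rng = np.random.default_rng(0)
Z = simulate_Z(10_000, rng)
pairs = Z.reshape(-1, 2)                       # rows are (Z_{2t-1}, Z_{2t})
print(np.cov(pairs[:, 0], pairs[:, 1])[0, 1])  # clearly positive, so {Z_n} is not NA
```

The far-apart pairs $(Z_{2^{2t-1}}, Z_{2^{2t}})$ carry the negative correlation $\mathrm{Cov}(\eta_t, -\eta_t)$ at unbounded distances, which is what destroys the $\rho^*$-mixing property while leaving $\rho^-(2) = 0$ intact.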
On the one hand, NA sequences have been widely applied in reliability theory and multivariate statistical analysis (see [21,22]). On the other hand, some Markov chains and moving average processes are $\rho^*$-mixing (see [23]), and the concept of $\rho^*$-mixing sequences is important in many areas, for instance, finance, economics, and other sciences (see [24]). Therefore, studying $\rho^-$-mixing sequences is of considerable significance.
Since Zhang and Wang [19] proposed the concept of $\rho^-$-mixing sequences, many results on them have been established. One can refer to Zhang and Wang [19], Wang and Lu [25], and Yuan and Wu [26] for moment inequalities and limiting behavior; Zhang [20] and Zhang [27] for central limit theorems; Chen et al. [28] for the complete convergence of weighted sums of $\rho^-$-mixing sequences; Zhang [29] for the complete moment convergence of partial sums of $\rho^-$-mixing moving average processes; Wu and Jiang [30] for the almost sure convergence of $\rho^-$-mixing sequences; and Xu and Wu [31] for an almost sure central limit theorem for self-normalized partial sums.
However, we have not found studies of the model (1) under $\rho^-$-mixing random errors in the literature. In the present paper, we study the estimation problem for the model (1) under the assumption that the errors form a $\rho^-$-mixing sequence that is stochastically dominated and has zero mean. The strong consistency and mean consistency of the LS and WLS estimators of $\beta$ and $h(\cdot)$ are established under suitable conditions. The results obtained in this paper cover independent errors as well as dependent errors as special cases.
Next, we will recall the definition of stochastic domination.
Definition 4 
([32]). A random sequence $\{Y_i, i \ge 1\}$ is stochastically dominated by a random variable $Y$ if
$$P(|Y_i| > y) \le c\,P(|Y| > y)$$
for some $c > 0$, every $y \ge 0$, and each $i \ge 1$.
The remainder of this paper is organized as follows. The LS and WLS estimators of $\beta$, based on a family of non-parametric estimators of $h(\cdot)$, together with some conditions, are introduced in Section 2. We give the main results in Section 3. Several lemmas are given in Section 4. We provide the proofs of the main results in Section 5. Two simulations are carried out in Section 6. We conclude the paper in Section 7. Throughout the paper, $C$ denotes a positive constant whose value may differ from place to place, "i.i.d." stands for independent and identically distributed, and $\|\cdot\|$ stands for the Euclidean norm.

2. Estimation and Conditions

Assume that $\{y^{(t)}(x_{in}, z_{in}),\ z_{in} \in \mathbb{R},\ x_{in} \in M,\ u_{in} \in M,\ 1 \le t \le r,\ 1 \le i \le n\}$ satisfies the model (1) and that $W_{nj}(x) = W_{nj}(x; x_1, x_2, \ldots, x_n)$ is a weight function that is measurable on the compact set $M$. For simplicity and convenience, the model (1) can be written as
$$y_i^{(t)} = z_i\beta + h(x_i) + \sigma_i\varepsilon_i^{(t)}, \quad 1 \le t \le r,\ 1 \le i \le n. \tag{5}$$
We denote $\tilde{z}_j = z_j - \sum_{i=1}^n W_{ni}(x_j) z_i$, $\tilde{y}_j^{(k)} = y_j^{(k)} - \frac{1}{r}\sum_{t=1}^r \sum_{i=1}^n W_{ni}(x_j) y_i^{(t)}$, $\gamma_i = 1/\sigma_i^2$, $1 \le k \le r$, $1 \le j \le n$, $\tilde{T}_n^2 = \sum_{i=1}^n \tilde{z}_i^2$, and $\tilde{U}_n^2 = \sum_{i=1}^n \gamma_i \tilde{z}_i^2$.
For the model (5), one can get from E ( ε i ( t ) ) = 0 that h ( x i ) = E ( y i ( t ) z i β ) for 1 t r , 1 i n . Thus, for any given β , we can define the non-parametric estimator of h ( ) in terms of
h r , n ( x , β ) = 1 r t = 1 r i = 1 n W n i ( x ) ( y i ( t ) z i β ) .
Hence, the LS estimators of β can be defined by
β ^ r , n ( L S ) = arg min β t = 1 r i = 1 n [ y i ( t ) z i β h r , n ( x i , β ) ] 2 .  
By (7), we have
β ^ r , n ( L S ) = 1 r t = 1 r i = 1 n z ˜ i y ˜ i ( t ) / T ˜ n 2 .  
When the random errors are heteroscedastic, we modify $\hat{\beta}_{r,n}(LS)$ to a WLS estimator, defined by
$$\hat{\beta}_{r,n}(WLS) = \arg\min_{\beta} \sum_{t=1}^r \sum_{i=1}^n \big[(y_i^{(t)} - z_i\beta - h_{r,n}(x_i, \beta))/\sigma_i\big]^2. \tag{9}$$
By (9), we derive that
$$\hat{\beta}_{r,n}(WLS) = \frac{1}{r}\sum_{t=1}^r \sum_{i=1}^n \gamma_i \tilde{z}_i \tilde{y}_i^{(t)} \Big/ \tilde{U}_n^2. \tag{10}$$
Plugging $\hat{\beta}_{r,n}(LS)$ and $\hat{\beta}_{r,n}(WLS)$ into (6), we define the corresponding estimators of $h(\cdot)$:
$$\hat{h}_{r,n}(x) = \frac{1}{r}\sum_{t=1}^r \sum_{i=1}^n W_{ni}(x)\,(y_i^{(t)} - z_i\hat{\beta}_{r,n}(LS)) \tag{11}$$
and
$$\tilde{h}_{r,n}(x) = \frac{1}{r}\sum_{t=1}^r \sum_{i=1}^n W_{ni}(x)\,(y_i^{(t)} - z_i\hat{\beta}_{r,n}(WLS)). \tag{12}$$
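To make formulas (8) and (10)–(12) concrete, here is a small sketch of the estimators for a generic weight matrix. This is our own illustrative code, not the authors': the array layout and names are assumptions, with `y` an $r \times n$ array of responses, `z` the covariates, `W[j, i]` $= W_{ni}(x_j)$, and `gamma` $= 1/\sigma_i^2$.

```python
import numpy as np

def estimate(y, z, W, gamma=None):
    """Sketch of the LS/WLS estimators (8), (10) and the plug-in estimators (11), (12)."""
    r, n = y.shape
    if gamma is None:
        gamma = np.ones(n)                      # homoscedastic case: WLS reduces to LS
    z_tilde = z - W @ z                         # z~_j = z_j - sum_i W_{ni}(x_j) z_i
    y_bar = y.mean(axis=0)                      # average over the r replicates
    y_tilde = y - W @ y_bar                     # y~_j^{(t)}, broadcast over t
    beta_ls = (z_tilde * y_tilde).sum() / (r * (z_tilde**2).sum())                   # (8)
    beta_wls = (gamma * z_tilde * y_tilde).sum() / (r * (gamma * z_tilde**2).sum())  # (10)
    h_ls = W @ (y_bar - z * beta_ls)            # (11) evaluated at the design points
    h_wls = W @ (y_bar - z * beta_wls)          # (12) evaluated at the design points
    return beta_ls, beta_wls, h_ls, h_wls

# Noise-free sanity check: with h = 0 and zero errors, both estimators recover beta.
n, beta = 10, 2.5
z = np.linspace(-1.0, 1.0, n)
W = np.full((n, n), 1.0 / n)                    # crude uniform weights, for illustration only
y = np.tile(beta * z, (4, 1))                   # r = 4 replicates
b_ls, b_wls, h_ls, h_wls = estimate(y, z, W)
print(b_ls, b_wls)
```

Note that centering the responses by the smoothed replicate average implements the profiling step (6) exactly, so the parametric and non-parametric parts are estimated with a single pass.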
In order to obtain the relevant theorems, several important conditions are given below.
$(C_1)$
(i) $\lim_{n\to\infty} \tilde{T}_n^2/n = C$;
(ii) $0 < s_0 \le \inf_{u \in M} f(u) \le \sup_{u \in M} f(u) \le S_0 < \infty$;
(iii) $f(\cdot)$ and $h(\cdot)$ are continuous functions on the compact set $M$.
$(C_2)$
(i) $\sup_{x \in M} \sum_{i=1}^n |W_{ni}(x)| = O(1)$;
(ii) $\sup_{i \ge 1, x \in M} |W_{ni}(x)| = O(n^{-\alpha})$ for some $\alpha > 0$.
$(C_3)$
(i) $\sup_{x \in M} \big|\sum_{i=1}^n W_{ni}(x) - 1\big| = o(1)$;
(ii) $\sup_{x \in M} \sum_{i=1}^n |W_{ni}(x)|\, I(\|x_i - x\| > \delta) = o(1)$ for any $\delta > 0$.
$(C_4)$
$\sup_{x \in M} \big|\sum_{i=1}^n W_{ni}(x) z_i\big| = O(1)$.
Remark 1.
Conditions $(C_1)$(i)–(ii) are regular conditions that are often imposed in studies of LS and WLS estimators in heteroscedastic partially linear models; one can refer to [5,8,9] and so on. $(C_1)$(iii) is mild and holds for most commonly used functions, such as polynomial and trigonometric functions (see [9]). Conditions $(C_2)$–$(C_4)$ are often applied to investigate strong consistency (see [9,33]) and mean consistency (see [10,16]). $(C_2)$(ii) is weaker than the corresponding conditions of [16] and [33]. Thus, the above conditions are very mild. Moreover, by $(C_1)$(i)–(ii), one can get that
$$\tilde{T}_n^{-2} \sum_{i=1}^n |\tilde{z}_i| \le C \tag{13}$$
and
$$\tilde{U}_n^{-2} \sum_{i=1}^n |\gamma_i \tilde{z}_i| \le C. \tag{14}$$

3. Main Results

In this paper, let $\{\varepsilon_i^{(t)}, 1 \le t \le r, 1 \le i \le n\}$ be a $\rho^-$-mixing sequence with zero mean that is stochastically dominated by a random variable $\varepsilon$.
Theorem 1.
Assume that $(C_1)$–$(C_3)$ hold. If $E|\varepsilon|^p < \infty$ for some $p > 2$, then
$$\hat{\beta}_{r,n}(LS) \xrightarrow{a.s.} \beta \tag{15}$$
and
$$\hat{\beta}_{r,n}(WLS) \xrightarrow{a.s.} \beta \tag{16}$$
as $\min(r, n) \to \infty$.
Theorem 2.
Under the conditions of Theorem 1, if in addition $(C_4)$ holds, then
$$\sup_{x \in M} |\hat{h}_{r,n}(x) - h(x)| \xrightarrow{a.s.} 0 \tag{17}$$
and
$$\sup_{x \in M} |\tilde{h}_{r,n}(x) - h(x)| \xrightarrow{a.s.} 0 \tag{18}$$
as $\min(r, n) \to \infty$.
Theorem 3.
Assume that $(C_1)$–$(C_3)$ hold. If $E|\varepsilon|^p < \infty$ for some $p \ge 2$, then
$$\lim_{\min(r,n)\to\infty} E|\hat{\beta}_{r,n}(LS) - \beta|^p = 0 \tag{19}$$
and
$$\lim_{\min(r,n)\to\infty} E|\hat{\beta}_{r,n}(WLS) - \beta|^p = 0. \tag{20}$$
Theorem 4.
Under the conditions of Theorem 3, if in addition $(C_4)$ holds, then
$$\lim_{\min(r,n)\to\infty} \sup_{x \in M} E|\hat{h}_{r,n}(x) - h(x)|^p = 0 \tag{21}$$
and
$$\lim_{\min(r,n)\to\infty} \sup_{x \in M} E|\tilde{h}_{r,n}(x) - h(x)|^p = 0. \tag{22}$$
Remark 2.
Since $\rho^-$-mixing sequences include NA (in particular, independent) and $\rho^*$-mixing sequences, Theorems 1–4 also apply to NA and $\rho^*$-mixing sequences.

4. Some Lemmas

From the definition of $\rho^-$-mixing sequences, we can get the first lemma.
Lemma 1.
If $\{X_i, i \ge 1\}$ is a $\rho^-$-mixing sequence with mixing coefficients $\rho^-(u)$, then $\{f_i(X_i), i \ge 1\}$ is still a $\rho^-$-mixing sequence with mixing coefficients not greater than $\rho^-(u)$, where $f_1, f_2, \ldots$ are all non-decreasing (or all non-increasing) functions.
Lemma 2
(Rosenthal-type inequality, [25,29]). If $\{X_i, i \ge 1\}$ is a $\rho^-$-mixing sequence with zero mean and $E|X_i|^p < \infty$ for some $p \ge 2$, then there exists a constant $C > 0$ depending only on $p$ and $\rho^-(\cdot)$ such that
$$E\Big(\max_{1 \le i \le n} |S_i|^p\Big) \le C\left\{\sum_{i=1}^n E|X_i|^p + \Big(\sum_{i=1}^n E X_i^2\Big)^{p/2}\right\}$$
for any $n \ge 1$, where $S_i = \sum_{j=1}^i X_j$.
Lemma 3
([32]). If $\{X_i, i \ge 1\}$ is a random sequence that is stochastically dominated by a random variable $X$, then for every $a > 0$ and $\beta > 0$, we have
$$E|X_i|^\beta I(|X_i| \le a) \le c_1\big[E|X|^\beta I(|X| \le a) + a^\beta P(|X| > a)\big],$$
$$E|X_i|^\beta I(|X_i| > a) \le c_2\,E|X|^\beta I(|X| > a).$$
Therefore,
$$E|X_i|^\beta \le c\,E|X|^\beta,$$
where $c_1$, $c_2$, and $c$ are positive constants.
Lemma 4.
Let $\{\varepsilon_i^{(t)}, 1 \le t \le r, 1 \le i \le n\}$ be a $\rho^-$-mixing sequence with zero mean. Suppose that $\{c_{ni}(v), 1 \le i \le n, n \ge 1\}$ is an array of functions defined on a compact set $M$ such that $\sup_{v \in M} \sum_{i=1}^n |c_{ni}(v)| = O(1)$ and $\sup_{i \ge 1, v \in M} |c_{ni}(v)| = O(n^{-\gamma})$ for some $\gamma > 0$. If $E|\varepsilon|^p < \infty$ for some $p > 2$, then
$$\sup_{v \in M} \Big|\frac{1}{r}\sum_{t=1}^r \sum_{i=1}^n c_{ni}(v)\varepsilon_i^{(t)}\Big| \xrightarrow{a.s.} 0 \tag{23}$$
as min ( r , n ) .
Proof. 
Denote
$$\varepsilon_{1i}^{(t)} = -r^{1/p} I(\varepsilon_i^{(t)} < -r^{1/p}) + \varepsilon_i^{(t)} I(|\varepsilon_i^{(t)}| \le r^{1/p}) + r^{1/p} I(\varepsilon_i^{(t)} > r^{1/p}),$$
$$\varepsilon_{2i}^{(t)} = \varepsilon_i^{(t)} - \varepsilon_{1i}^{(t)} = (\varepsilon_i^{(t)} + r^{1/p}) I(\varepsilon_i^{(t)} < -r^{1/p}) + (\varepsilon_i^{(t)} - r^{1/p}) I(\varepsilon_i^{(t)} > r^{1/p}),$$
$$\varepsilon_i'^{(t)} = \varepsilon_{1i}^{(t)} - E\varepsilon_{1i}^{(t)},$$
and
$$\varepsilon_i''^{(t)} = \varepsilon_{2i}^{(t)} - E\varepsilon_{2i}^{(t)}.$$
Without loss of generality, one can suppose that $c_{ni}(v) \ge 0$. Hence, we know by Lemma 1 that $\{c_{ni}(v)\varepsilon_i^{(t)}\}$, $\{c_{ni}(v)\varepsilon_i'^{(t)}\}$, and $\{c_{ni}(v)\varepsilon_i''^{(t)}\}$, $1 \le t \le r$, $1 \le i \le n$, are also $\rho^-$-mixing sequences with zero mean. Note that $\varepsilon_i^{(t)} = \varepsilon_i'^{(t)} + \varepsilon_i''^{(t)}$. Hence, for any $v \in M$ and any $\varepsilon > 0$, we have
$$A_{r,n} := P\Big(\Big|\frac{1}{r}\sum_{t=1}^r \sum_{i=1}^n c_{ni}(v)\varepsilon_i^{(t)}\Big| > \varepsilon\Big) \le P\Big(\Big|\sum_{t=1}^r \sum_{i=1}^n c_{ni}(v)\varepsilon_i'^{(t)}\Big| > \frac{r\varepsilon}{2}\Big) + P\Big(\Big|\sum_{t=1}^r \sum_{i=1}^n c_{ni}(v)\varepsilon_i''^{(t)}\Big| > \frac{r\varepsilon}{2}\Big) =: A_{r,n}^{(1)} + A_{r,n}^{(2)}. \tag{24}$$
The proof of (24) is similar to that of Lemma 3.3 in Zhou and Hu [34]. By Lemma 3, we have $E|\varepsilon_i^{(t)}|^p \le C E|\varepsilon|^p < \infty$. Hence, for every $s > p > 2$, by the Markov inequality, Lemma 2, and $E|\varepsilon|^p < \infty$, we get that
$$A_{r,n}^{(1)} \le C r^{-s} E\Big|\sum_{t=1}^r \sum_{i=1}^n c_{ni}(v)\varepsilon_i'^{(t)}\Big|^s \le C r^{-s}\left\{\sup_{v \in M}\sum_{t=1}^r \sum_{i=1}^n E|c_{ni}(v)\varepsilon_i'^{(t)}|^s + \sup_{v \in M}\Big(\sum_{t=1}^r \sum_{i=1}^n E(c_{ni}(v)\varepsilon_i'^{(t)})^2\Big)^{s/2}\right\} \le C r^{-s}\big(r^{s/p} n^{-\gamma(s-1)} + r^{s/2} n^{-\gamma s/2}\big) \le C r^{-s/2} \tag{25}$$
and
$$A_{r,n}^{(2)} \le C r^{-p} E\Big|\sum_{t=1}^r \sum_{i=1}^n c_{ni}(v)\varepsilon_i''^{(t)}\Big|^p \le C r^{-p}\left\{\sup_{v \in M}\sum_{t=1}^r \sum_{i=1}^n E|c_{ni}(v)\varepsilon_i''^{(t)}|^p + \sup_{v \in M}\Big(\sum_{t=1}^r \sum_{i=1}^n E(c_{ni}(v)\varepsilon_i''^{(t)})^2\Big)^{p/2}\right\} \le C r^{-p}\big(r n^{-\gamma(p-1)} + r^{p/2} n^{-\gamma p/2}\big) \le C\big(r^{-p+1} + r^{-p/2}\big). \tag{26}$$
Hence, it follows from (24)–(26) and $s > p > 2$ that
$$\sum_{r=1}^\infty A_{r,n} \le C \sum_{r=1}^\infty \big(r^{-s/2} + r^{-p+1} + r^{-p/2}\big) < \infty.$$
By the Borel–Cantelli lemma, we obtain for any $v \in M$ that
$$\Big|\frac{1}{r}\sum_{t=1}^r \sum_{i=1}^n c_{ni}(v)\varepsilon_i^{(t)}\Big| \xrightarrow{a.s.} 0$$
as $\min(r, n) \to \infty$. Therefore, (23) follows. □
Lemma 5.
Let $\{\varepsilon_i^{(t)}, 1 \le t \le r, 1 \le i \le n\}$ be a $\rho^-$-mixing random sequence with zero mean. Suppose that $\{c_{ni}(v), 1 \le i \le n, n \ge 1\}$ is an array of functions defined on a compact set $M$ such that $\sup_{v \in M} \sum_{i=1}^n |c_{ni}(v)| = O(1)$ and $\sup_{i \ge 1, v \in M} |c_{ni}(v)| = O(n^{-\gamma})$ for some $\gamma > 0$. If $E|\varepsilon|^p < \infty$ for some $p \ge 2$, then
$$\lim_{\min(r,n)\to\infty} \sup_{v \in M} E\Big|\frac{1}{r}\sum_{t=1}^r \sum_{i=1}^n c_{ni}(v)\varepsilon_i^{(t)}\Big|^p = 0. \tag{27}$$
Proof. 
Using the notation of the proof of Lemma 4 and the $C_p$ inequality (for a random sequence $\{X_i, i \ge 1\}$, $E|\sum_{i=1}^n X_i|^p \le n^{p-1}\sum_{i=1}^n E|X_i|^p$ for $p > 1$), one gets that
$$E\Big|\frac{1}{r}\sum_{t=1}^r \sum_{i=1}^n c_{ni}(v)\varepsilon_i^{(t)}\Big|^p \le 2^{p-1}\left(E\Big|\frac{1}{r}\sum_{t=1}^r \sum_{i=1}^n c_{ni}(v)\varepsilon_i'^{(t)}\Big|^p + E\Big|\frac{1}{r}\sum_{t=1}^r \sum_{i=1}^n c_{ni}(v)\varepsilon_i''^{(t)}\Big|^p\right) =: 2^{p-1}\big(G_{r,n}^{(1)} + G_{r,n}^{(2)}\big). \tag{28}$$
For every $s > p \ge 2$, by Lemma 2, Lemma 3, and $E|\varepsilon|^p < \infty$, we derive that
$$\sup_{v \in M} G_{r,n}^{(1)} \le \Big(\sup_{v \in M} E\Big|\frac{1}{r}\sum_{t=1}^r \sum_{i=1}^n c_{ni}(v)\varepsilon_i'^{(t)}\Big|^s\Big)^{p/s} \le C r^{-p}\left(\sup_{v \in M}\sum_{t=1}^r \sum_{i=1}^n E|c_{ni}(v)\varepsilon_i'^{(t)}|^s + \sup_{v \in M}\Big(\sum_{t=1}^r \sum_{i=1}^n E(c_{ni}(v)\varepsilon_i'^{(t)})^2\Big)^{s/2}\right)^{p/s} \le C r^{-p}\big(r^{s/p} n^{-\gamma(s-1)} + r^{s/2} n^{-\gamma s/2}\big)^{p/s} \le C r^{-p/2} \tag{29}$$
and
$$\sup_{v \in M} G_{r,n}^{(2)} \le C r^{-p}\left(\sup_{v \in M}\sum_{t=1}^r \sum_{i=1}^n E|c_{ni}(v)\varepsilon_i''^{(t)}|^p + \sup_{v \in M}\Big(\sum_{t=1}^r \sum_{i=1}^n E(c_{ni}(v)\varepsilon_i''^{(t)})^2\Big)^{p/2}\right) \le C r^{-p}\big(r n^{-\gamma(p-1)} + r^{p/2} n^{-\gamma p/2}\big) \le C r^{-p/2}. \tag{30}$$
Therefore, (27) follows from (28)–(30). □

5. Proofs of the Main Results

By (5), (8), and (10), we derive that
$$\hat{\beta}_{r,n}(LS) - \beta = \tilde{T}_n^{-2}\left(\frac{1}{r}\sum_{t=1}^r \sum_{i=1}^n \tilde{z}_i \tilde{e}_i^{(t)} + \sum_{i=1}^n \tilde{z}_i \bar{h}(x_i)\right) \tag{31}$$
and
$$\hat{\beta}_{r,n}(WLS) - \beta = \tilde{U}_n^{-2}\left(\frac{1}{r}\sum_{t=1}^r \sum_{i=1}^n \gamma_i \tilde{z}_i \tilde{e}_i^{(t)} + \sum_{i=1}^n \gamma_i \tilde{z}_i \bar{h}(x_i)\right), \tag{32}$$
where $\tilde{e}_j^{(k)} = e_j^{(k)} - \frac{1}{r}\sum_{t=1}^r \sum_{i=1}^n W_{ni}(x_j) e_i^{(t)}$, $\bar{h}(x) = h(x) - \sum_{j=1}^n W_{nj}(x) h(x_j)$, and $e_i^{(t)} = \sigma_i\varepsilon_i^{(t)}$, $1 \le k \le r$, $1 \le j \le n$.
Proof of Theorem 1.
We only need to prove (16), since the proof of (15) is analogous. By (32), we can get that
$$\hat{\beta}_{r,n}(WLS) - \beta = \tilde{U}_n^{-2}\left[\frac{1}{r}\sum_{t=1}^r \sum_{i=1}^n \gamma_i \tilde{z}_i \sigma_i \varepsilon_i^{(t)} - \sum_{j=1}^n \gamma_j \tilde{z}_j \Big(\frac{1}{r}\sum_{t=1}^r \sum_{i=1}^n W_{ni}(x_j)\sigma_i \varepsilon_i^{(t)}\Big) + \sum_{i=1}^n \gamma_i \tilde{z}_i \bar{h}(x_i)\right] =: I_{r,n}^{(1)} - I_{r,n}^{(2)} + I_{r,n}^{(3)}, \tag{33}$$
so that $|\hat{\beta}_{r,n}(WLS) - \beta| \le |I_{r,n}^{(1)}| + |I_{r,n}^{(2)}| + |I_{r,n}^{(3)}|$. Observe that $I_{r,n}^{(1)} = \frac{1}{r}\sum_{t=1}^r \sum_{i=1}^n (\tilde{U}_n^{-2}\gamma_i \sigma_i \tilde{z}_i)\varepsilon_i^{(t)} =: \frac{1}{r}\sum_{t=1}^r \sum_{i=1}^n a_{ni}\varepsilon_i^{(t)}$. Hence, it follows from $(C_1)$(i)–(ii) and (14) that
$$\max_{1 \le i \le n} |a_{ni}| \le C \max_{1 \le i \le n} |\gamma_i \tilde{z}_i| \tilde{U}_n^{-1} \cdot \tilde{U}_n^{-1} = O(n^{-1/2}) \tag{34}$$
and
$$\sum_{i=1}^n |a_{ni}| \le C \sum_{i=1}^n |\gamma_i \tilde{z}_i| \tilde{U}_n^{-2} = O(1). \tag{35}$$
Thus, by Lemma 4, we have
$$|I_{r,n}^{(1)}| \xrightarrow{a.s.} 0 \tag{36}$$
as $\min(r, n) \to \infty$. Note that
$$I_{r,n}^{(2)} = \frac{1}{r}\sum_{t=1}^r \sum_{i=1}^n \Big(\sum_{j=1}^n \tilde{U}_n^{-2}\gamma_j \sigma_i \tilde{z}_j W_{ni}(x_j)\Big)\varepsilon_i^{(t)} =: \frac{1}{r}\sum_{t=1}^r \sum_{i=1}^n a_{ni}'\varepsilon_i^{(t)}.$$
Hence, it follows from $(C_1)$(ii), $(C_2)$, and (14) that
$$\max_{1 \le i \le n} |a_{ni}'| \le C \sup_{i \ge 1, x \in M} |W_{ni}(x)| \sum_{j=1}^n |\gamma_j \tilde{z}_j|\, \tilde{U}_n^{-2} = O(n^{-\alpha}) \tag{37}$$
and
$$\sum_{i=1}^n |a_{ni}'| \le C \sup_{x \in M} \sum_{i=1}^n |W_{ni}(x)| \sum_{j=1}^n |\gamma_j \tilde{z}_j|\, \tilde{U}_n^{-2} = O(1), \tag{38}$$
where $\alpha$ is the same as in $(C_2)$(ii). Thus, by Lemma 4, one can get that
$$|I_{r,n}^{(2)}| \xrightarrow{a.s.} 0 \tag{39}$$
as $\min(r, n) \to \infty$. By (14), we derive that
$$|I_{r,n}^{(3)}| \le \sup_{x \in M} |\bar{h}(x)| \sum_{i=1}^n |\gamma_i \tilde{z}_i| \Big/ \tilde{U}_n^2 \le C \sup_{x \in M} |\bar{h}(x)|. \tag{40}$$
By $(C_1)$(iii), $(C_2)$(i), and $(C_3)$, we obtain that
$$\sup_{x \in M} |\bar{h}(x)| \le \sup_{x \in M} \Big|\sum_{j=1}^n W_{nj}(x) - 1\Big|\,|h(x)| + \sup_{x \in M} \sum_{j=1}^n |W_{nj}(x)|\,|h(x) - h(x_j)|\, I(\|x - x_j\| > \delta) + \sup_{x \in M} \sum_{j=1}^n |W_{nj}(x)|\,|h(x) - h(x_j)|\, I(\|x - x_j\| \le \delta) = o(1). \tag{41}$$
Thus, by (40) and (41), we have
$$|I_{r,n}^{(3)}| \to 0 \tag{42}$$
as $\min(r, n) \to \infty$. Therefore, (16) follows from (33), (36), (39), and (42). □
Proof of Theorem 2.
We only need to prove (18), since the proof of (17) is analogous. In light of (12), we have
$$\tilde{h}_{r,n}(x) - h(x) = \frac{1}{r}\sum_{t=1}^r \sum_{i=1}^n W_{ni}(x)\big(z_i\beta + h(x_i) + \sigma_i\varepsilon_i^{(t)} - z_i\hat{\beta}_{r,n}(WLS)\big) - h(x) = \sum_{i=1}^n W_{ni}(x) z_i \big(\beta - \hat{\beta}_{r,n}(WLS)\big) - \bar{h}(x) + \frac{1}{r}\sum_{t=1}^r \sum_{i=1}^n W_{ni}(x)\sigma_i\varepsilon_i^{(t)}. \tag{43}$$
Hence,
$$\sup_{x \in M} |\tilde{h}_{r,n}(x) - h(x)| \le \sup_{x \in M} \Big|\sum_{i=1}^n W_{ni}(x) z_i\Big|\,\big|\beta - \hat{\beta}_{r,n}(WLS)\big| + \sup_{x \in M} |\bar{h}(x)| + \sup_{x \in M} \Big|\frac{1}{r}\sum_{t=1}^r \sum_{i=1}^n W_{ni}(x)\sigma_i\varepsilon_i^{(t)}\Big| =: J_{r,n}^{(1)} + J_{r,n}^{(2)} + J_{r,n}^{(3)}. \tag{44}$$
By (16) and $(C_4)$, we have
$$J_{r,n}^{(1)} \xrightarrow{a.s.} 0 \tag{45}$$
as $\min(r, n) \to \infty$. From (41), it follows that
$$J_{r,n}^{(2)} \to 0 \tag{46}$$
as $\min(r, n) \to \infty$. By $(C_1)$(ii) and $(C_2)$, we can get that
$$\sup_{i \ge 1, x \in M} |W_{ni}(x)\sigma_i| \le C \sup_{i \ge 1, x \in M} |W_{ni}(x)| = O(n^{-\alpha})$$
and
$$\sup_{x \in M} \sum_{i=1}^n |W_{ni}(x)\sigma_i| \le C \sup_{x \in M} \sum_{i=1}^n |W_{ni}(x)| = O(1).$$
Hence, from Lemma 4, it follows that
$$J_{r,n}^{(3)} \xrightarrow{a.s.} 0 \tag{47}$$
as $\min(r, n) \to \infty$. Therefore, (18) follows from (44)–(47). □
Proof of Theorem 3.
We only need to prove (20), since the proof of (19) is analogous. By (33), we have
$$\hat{\beta}_{r,n}(WLS) - \beta = \tilde{U}_n^{-2}\left[\frac{1}{r}\sum_{t=1}^r \sum_{i=1}^n \gamma_i \tilde{z}_i \sigma_i \varepsilon_i^{(t)} - \sum_{j=1}^n \gamma_j \tilde{z}_j \Big(\frac{1}{r}\sum_{t=1}^r \sum_{i=1}^n W_{ni}(x_j)\sigma_i \varepsilon_i^{(t)}\Big) + \sum_{i=1}^n \gamma_i \tilde{z}_i \bar{h}(x_i)\right] =: I_{r,n}^{(1)} - I_{r,n}^{(2)} + I_{r,n}^{(3)}. \tag{48}$$
Hence, it follows by the $C_p$ inequality that
$$E|\hat{\beta}_{r,n}(WLS) - \beta|^p \le 3^{p-1}\big(E|I_{r,n}^{(1)}|^p + E|I_{r,n}^{(2)}|^p + E|I_{r,n}^{(3)}|^p\big).$$
The rest of the proof is similar to that of (16), with Lemma 5 applied in place of Lemma 4, so we omit the details here. □
Proof of Theorem 4.
We only need to prove (22), since the proof of (21) is analogous. By (43), we derive that
$$\tilde{h}_{r,n}(x) - h(x) = \sum_{i=1}^n W_{ni}(x) z_i \big(\beta - \hat{\beta}_{r,n}(WLS)\big) - \bar{h}(x) + \frac{1}{r}\sum_{t=1}^r \sum_{i=1}^n W_{ni}(x)\sigma_i\varepsilon_i^{(t)} =: J_{r,n}^{(1)} - J_{r,n}^{(2)} + J_{r,n}^{(3)}.$$
Hence, by the $C_p$ inequality, we derive that
$$E|\tilde{h}_{r,n}(x) - h(x)|^p \le 3^{p-1}\big(E|J_{r,n}^{(1)}|^p + E|J_{r,n}^{(2)}|^p + E|J_{r,n}^{(3)}|^p\big). \tag{49}$$
Since $|J_{r,n}^{(1)}|^p \le \big(\sup_{x \in M} |\sum_{i=1}^n W_{ni}(x) z_i|\big)^p \big|\beta - \hat{\beta}_{r,n}(WLS)\big|^p$, together with (20) and $(C_4)$, we can get that
$$\lim_{\min(r,n)\to\infty} \sup_{x \in M} E|J_{r,n}^{(1)}|^p = 0. \tag{50}$$
From (41), it follows that
$$\lim_{\min(r,n)\to\infty} \sup_{x \in M} E|J_{r,n}^{(2)}|^p = 0. \tag{51}$$
By $(C_1)$(ii) and $(C_2)$, we can get that
$$\sup_{i \ge 1, x \in M} |W_{ni}(x)\sigma_i| \le C \sup_{i \ge 1, x \in M} |W_{ni}(x)| = O(n^{-\alpha})$$
and
$$\sup_{x \in M} \sum_{i=1}^n |W_{ni}(x)\sigma_i| \le C \sup_{x \in M} \sum_{i=1}^n |W_{ni}(x)| = O(1).$$
Hence, by Lemma 5, one can get that
$$\lim_{\min(r,n)\to\infty} \sup_{x \in M} E|J_{r,n}^{(3)}|^p = 0. \tag{52}$$
Therefore, (22) follows from (49)–(52). □

6. Numerical Simulations

In this section, we will verify the validity of the theoretical results by two simulations.

6.1. Simulation 1

We will simulate a partially linear model
$$y_i^{(t)} = z_i\beta + h(x_i) + \sigma_i\varepsilon_i^{(t)}, \quad r = n/2,\ 1 \le i \le n, \tag{53}$$
where $\beta = 2.5$, $h(x) = \sin(2\pi x)$, $z_i = (-1)^i\, i/n$, $\sigma_i = 1$, $1 \le i \le n$, and the random errors $\{\varepsilon_i^{(t)}\}$ have the same distribution as $\{Z_n\}$ in Example 1 of Section 1. Then, $\{\varepsilon_i^{(t)}\}$ is a $\rho^-$-mixing sequence that is neither NA nor $\rho^*$-mixing.
In particular, we take the weight function $W_{ni}(\cdot)$ to be the following nearest neighbor weight function (see [11,35]). Without loss of generality, let $M = [0, 1]$ and $x_i = i/n$ $(1 \le i \le n)$. For each $x \in M$, we rearrange
$$|x_1 - x|, |x_2 - x|, \ldots, |x_n - x|$$
in ascending order:
$$|x_{R_1(x)} - x| \le |x_{R_2(x)} - x| \le \cdots \le |x_{R_n(x)} - x|.$$
Take $k_n = \lfloor n^{0.8} \rfloor$ and define the nearest neighbor weight function as
$$W_{ni}(x) = \begin{cases} 1/k_n, & \text{if } |x_i - x| \le |x_{R_{k_n}(x)} - x|, \\ 0, & \text{else}. \end{cases}$$
The sample sizes are taken as $n = 100, 600, 1200, 1900, 2700$, and $3600$, and the points $x$ are taken as $x = 0.2, 0.4, 0.6$, and $0.8$, respectively. We compute $\hat{\beta}_{r,n}(LS) - \beta$ and $\hat{h}_{r,n}(x) - h(x)$ 1000 times each. The boxplots of $\hat{\beta}_{r,n}(LS) - \beta$ are provided in Figures 1–4, the violin plots of $\hat{h}_{r,n}(x) - h(x)$ are provided in Figures 5–8, the curves of $h(x)$ and $\hat{h}_{r,n}(x)$ are provided in Figure 9, and the mean squared errors (MSE) of $\hat{\beta}_{r,n}(LS)$ and $\hat{h}_{r,n}(x)$ are presented in Table 1 and Table 2, respectively.
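The nearest neighbor weight function can be sketched in a few lines. This is our own illustration (the function name is an assumption, and ties in the distances are resolved by simply taking the first $k_n$ indices of a stable sort, a detail the definition leaves open):

```python
import numpy as np

def nn_weights(x, n):
    """Nearest neighbor weights W_{ni}(x) on M = [0, 1] with x_i = i/n and k_n = floor(n^0.8)."""
    xi = np.arange(1, n + 1) / n
    kn = int(np.floor(n ** 0.8))
    order = np.argsort(np.abs(xi - x), kind="stable")  # R_1(x), R_2(x), ..., R_n(x)
    w = np.zeros(n)
    w[order[:kn]] = 1.0 / kn                           # the k_n nearest design points
    return w

w = nn_weights(0.5, 100)
print(w.sum(), int((w > 0).sum()))                     # weights sum to 1 over k_n points
```

With $k_n = \lfloor n^{0.8} \rfloor$ these weights satisfy $(C_2)$(ii) with $\alpha = 0.8$, and $k_n/n \to 0$ gives the localization needed for $(C_3)$(ii).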

6.2. Simulation 2

We will simulate a partially linear model
$$y_i^{(t)} = z_i\beta + h(x_i) + \sigma_i\varepsilon_i^{(t)}, \quad r = n/2,\ 1 \le i \le n, \tag{54}$$
where $\beta = 3.5$, $h(x) = \cos(\pi x)$, $z_i = (-1)^i\, i/n$, $\sigma_i = 1$, $1 \le i \le n$, and the random errors $\{\varepsilon_i^{(t)}\}$ have the same distribution as $\{Z_n\}$ in Example 1 of Section 1. Then, $\{\varepsilon_i^{(t)}\}$ is a $\rho^-$-mixing sequence that is neither NA nor $\rho^*$-mixing.
Using the same estimation method as for the model (53), we compute $\hat{\beta}_{r,n}(LS) - \beta$ and $\hat{h}_{r,n}(x) - h(x)$ 1000 times in the model (54) under different values of $n$. The boxplots of $\hat{\beta}_{r,n}(LS) - \beta$ are provided in Figures 10–13, the violin plots of $\hat{h}_{r,n}(x) - h(x)$ are provided in Figures 14–17, the curves of $h(x)$ and $\hat{h}_{r,n}(x)$ are provided in Figure 18, and the MSEs of $\hat{\beta}_{r,n}(LS)$ and $\hat{h}_{r,n}(x)$ are presented in Table 3 and Table 4, respectively.
It can be seen from Figures 1–8 and Figures 10–17 that, regardless of the value of $x$, $\hat{\beta}_{r,n}(LS) - \beta$ and $\hat{h}_{r,n}(x) - h(x)$ fluctuate around zero, and their ranges decrease as $n$ increases. From Tables 1–4, one can see that, regardless of the value of $x$, the MSEs decrease gradually as $n$ increases. Hence, the estimators get closer and closer to the true values as $n$ increases. Figure 9 and Figure 18 further show that the estimators of the function $h(x)$ perform well. The simulation results directly reflect our theoretical results.
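The Monte Carlo procedure above can be sketched as follows. This is our own hedged reconstruction of Simulation 1 (all names are ours; for brevity we draw i.i.d. $N(0,1)$ errors instead of errors distributed as $\{Z_n\}$ in Example 1, which does not change the qualitative conclusion that the MSE of $\hat{\beta}_{r,n}(LS)$ shrinks as $n$ grows):

```python
import numpy as np

def mse_beta(n, reps, rng):
    """Monte Carlo MSE of the LS estimator in model (53) (illustrative reconstruction)."""
    beta, r = 2.5, n // 2
    i = np.arange(1, n + 1)
    x = i / n
    z = (-1.0) ** i * i / n                    # z_i = (-1)^i i/n, as in Simulation 1
    h = np.sin(2 * np.pi * x)
    kn = int(np.floor(n ** 0.8))
    W = np.zeros((n, n))                       # W[j, i] = W_{ni}(x_j): nearest neighbor weights
    for j in range(n):
        order = np.argsort(np.abs(x - x[j]), kind="stable")
        W[j, order[:kn]] = 1.0 / kn
    z_tilde = z - W @ z
    err = np.empty(reps)
    for k in range(reps):
        y = z * beta + h + rng.standard_normal((r, n))   # i.i.d. errors for brevity
        y_tilde = y - W @ y.mean(axis=0)
        err[k] = (z_tilde * y_tilde).sum() / (r * (z_tilde**2).sum()) - beta   # (8)
    return float(np.mean(err**2))

rng = np.random.default_rng(1)
m100, m400 = mse_beta(100, 200, rng), mse_beta(400, 200, rng)
print(m100, m400)                              # the MSE shrinks markedly as n grows
```

Since $r = n/2$, the variance of $\hat{\beta}_{r,n}(LS)$ scales roughly like $1/(r\,\tilde{T}_n^2) \asymp n^{-2}$, which is why even a modest increase in $n$ produces a clear drop in MSE.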

7. Conclusions

In this paper, we mainly investigated the asymptotic properties of the estimators of the unknown parameter and the non-parametric component in the heteroscedastic partially linear model (1). Many authors have derived the asymptotic properties of the estimators in partially linear models with independent random errors (see [4,5,6,8,33]). In many applications, however, the random errors are not independent. Here, we assumed that the random errors are $\rho^-$-mixing, which includes independent, NA, and $\rho^*$-mixing random variables as special cases. Under suitable conditions, the strong consistency and $p$-th ($p > 0$) mean consistency of the LS and WLS estimators of the unknown parameter $\beta$ were investigated, and the strong consistency and $p$-th ($p > 0$) mean consistency of the estimators of the non-parametric component $h(\cdot)$ were also studied. The results obtained in this paper include the corresponding ones for independent random errors, NA random errors (see [16]), and $\rho^*$-mixing random errors as special cases. Furthermore, for the model (1), we carried out simulations to study the numerical performance of the estimators of the unknown parameter and the non-parametric component for the first time. $\rho^-$-mixing sequences are a widely used class of dependent sequences; therefore, investigating the limit properties of the estimators in regression models under $\rho^-$-mixing errors is an interesting subject for future studies.

Author Contributions

Methodology, Software, Writing—original draft, and Writing—review and editing, Y.Z.; Funding acquisition, Supervision, and Project administration, X.L.; Validation, Y.Z. and X.L. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China (61374183) and the Project of Guangxi Education Department (2017KY0720).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Engle, R.; Granger, C.; Rice, J.; Weiss, A. Nonparametric estimates of the relation between weather and electricity sales. J. Am. Stat. Assoc. 1986, 81, 310–320.
  2. Heckman, N. Spline smoothing in partly linear models. J. R. Stat. Soc. Ser. B 1986, 48, 244–248.
  3. Speckman, P. Kernel smoothing in partial linear models. J. R. Stat. Soc. Ser. B 1988, 50, 413–436.
  4. Gao, J.T. Consistency of estimation in a semiparametric regression model (I). J. Syst. Sci. Math. Sci. 1992, 12, 269–272.
  5. Härdle, W.; Liang, H.; Gao, J.T. Partially Linear Models; Physica-Verlag: Heidelberg, Germany, 2000.
  6. Hu, H.C.; Zhang, Y.; Pan, X. Asymptotic normality of DHD estimators in a partially linear model. Stat. Papers 2016, 57, 567–587.
  7. Zeng, Z.; Liu, X.D. Asymptotic normality of difference-based estimator in partially linear model with dependent errors. J. Inequal. Appl. 2018, 2018, 267.
  8. Gao, J.T.; Chen, X.R.; Zhao, L.C. Asymptotic normality of a class of estimators in partial linear models. Acta Math. Sin. 1994, 37, 256–268.
  9. Baek, J.; Liang, H.Y. Asymptotics of estimators in semi-parametric model under NA samples. J. Stat. Plan. Inference 2006, 136, 3362–3382.
  10. Zhou, X.C.; Hu, S.H. Moment consistency of estimators in semiparametric regression model under NA samples. Pure Appl. Math. 2010, 6, 262–269.
  11. Hu, S.H. Consistency estimate for a new semiparametric regression model. Acta Math. Sci. 1997, 40, 527–536.
  12. Li, J.; Yang, S.C. Moment consistency of estimators for semiparametric regression. Acta Math. Appl. Sin. 2004, 17, 257–262.
  13. Li, J.; Yang, S.C. Strong consistency of estimators for semiparametric regression. J. Math. Study 2004, 37, 431–437.
  14. Wang, X.J.; Deng, X.; Xia, F.X.; Hu, S.H. The consistency for the estimators of semiparametric regression model based on weakly dependent errors. Stat. Papers 2017, 58, 303–318.
  15. Wu, Y.; Wang, X.J. A note on the consistency for the estimators of semiparametric regression model. Stat. Papers 2018, 59, 1117–1130.
  16. Zhou, X.C.; Liu, X.S.; Hu, S.H. Moment consistency of estimators in partially linear models under NA samples. Metrika 2010, 72, 415–432.
  17. Joag-Dev, K.; Proschan, F. Negative association of random variables with applications. Ann. Stat. 1983, 11, 286–295.
  18. Bradley, R.C. On the spectral density and asymptotic normality of weakly dependent random fields. J. Theor. Probab. 1992, 5, 355–373.
  19. Zhang, L.X.; Wang, X.Y. Convergence rates in the strong laws of asymptotically negatively associated random fields. Appl. Math. J. Chin. Univ. 1999, 14, 406–416.
  20. Zhang, L.X. A functional central limit theorem for asymptotically negatively dependent random fields. Acta Math. Hung. 2000, 86, 237–259.
  21. Liang, H.Y. Complete convergence for weighted sums of negatively associated random variables. Stat. Probab. Lett. 2000, 48, 317–325.
  22. Wu, Q.Y. Strong consistency of M estimator in linear model for negatively associated samples. J. Syst. Sci. Complex. 2006, 19, 592–600.
  23. Thanh, L.V.; Yin, G.G.; Wang, L.Y. State observers with random sampling times and convergence analysis of double-indexed and randomly weighted sums of mixing processes. SIAM J. Control Optim. 2011, 49, 106–124.
  24. Tang, X.F.; Xi, M.M.; Wu, Y.; Wang, X.J. Asymptotic normality of a wavelet estimator for asymptotically negatively associated errors. Stat. Probab. Lett. 2018, 140, 191–201.
  25. Wang, J.F.; Lu, F.B. Inequalities of maximum partial sums and weak convergence for a class of weak dependent random variables. Acta Math. Sin. 2006, 22, 693–700.
  26. Yuan, D.M.; Wu, X.S. Limiting behavior of the maximum of the partial sum for asymptotically negatively associated random variables under residual Cesàro alpha-integrability assumption. J. Stat. Plan. Inference 2010, 140, 2395–2402.
  27. Zhang, L.X. Central limit theorems for asymptotically negatively associated random fields. Acta Math. Sin. 2000, 16, 691–710.
  28. Chen, Z.; Lu, C.; Shen, Y.; Wang, R.; Wang, X.J. On complete and complete moment convergence for weighted sums of ANA random variables and applications. J. Stat. Comput. Simul. 2019, 89, 2871–2898.
  29. Zhang, Y. Complete moment convergence for moving average process generated by $\rho^-$-mixing random variables. J. Inequal. Appl. 2015, 2015, 245.
  30. Wu, Q.Y.; Jiang, Y.Y. Some limiting behavior for asymptotically negatively associated random variables. Probab. Eng. Inf. Sci. 2018, 32, 58–66.
  31. Xu, F.; Wu, Q.Y. Almost sure central limit theorem for self-normalized partial sums of $\rho^-$-mixing sequences. Stat. Probab. Lett. 2017, 129, 17–27.
  32. Shen, A.T.; Zhang, Y.; Volodin, A. Applications of the Rosenthal-Type inequality for negatively superadditive dependent random variables. Metrika 2015, 78, 295–311. [Google Scholar] [CrossRef]
  33. Chen, M.H.; Ren, Z.; Hu, S.H. Strong consistency of a class of estimators in partial linear model. Acta Math. Sin. 1998, 41, 429–439. [Google Scholar]
  34. Zhou, X.C.; Hu, S.H. Strong consistency of estimators in partially linear models under NA samples. J. Syc. Sci. Math. Sci. 2010, 30, 60–71. [Google Scholar]
  35. Hu, S.H. Fixed-Design semiparametric regression for linear time series. Acta Math. Sci. 2006, 26, 74–82. [Google Scholar] [CrossRef]
Figure 1. Boxplots of β̂_{r,n}^{(LS)} − β with β = 2.5 and x = 0.2.
Figure 2. Boxplots of β̂_{r,n}^{(LS)} − β with β = 2.5 and x = 0.4.
Figure 3. Boxplots of β̂_{r,n}^{(LS)} − β with β = 2.5 and x = 0.6.
Figure 4. Boxplots of β̂_{r,n}^{(LS)} − β with β = 2.5 and x = 0.8.
Figure 5. Violin plots of ĥ_{r,n}(x) − h(x) with β = 2.5 and x = 0.2.
Figure 6. Violin plots of ĥ_{r,n}(x) − h(x) with β = 2.5 and x = 0.4.
Figure 7. Violin plots of ĥ_{r,n}(x) − h(x) with β = 2.5 and x = 0.6.
Figure 8. Violin plots of ĥ_{r,n}(x) − h(x) with β = 2.5 and x = 0.8.
Figure 9. Curves of h(x) = sin(2πx) and ĥ_{r,n}(x) with β = 2.5 and n = 1200.
Figure 10. Boxplots of β̂_{r,n}^{(LS)} − β with β = 3.5 and x = 0.2.
Figure 11. Boxplots of β̂_{r,n}^{(LS)} − β with β = 3.5 and x = 0.4.
Figure 12. Boxplots of β̂_{r,n}^{(LS)} − β with β = 3.5 and x = 0.6.
Figure 13. Boxplots of β̂_{r,n}^{(LS)} − β with β = 3.5 and x = 0.8.
Figure 14. Violin plots of ĥ_{r,n}(x) − h(x) with β = 3.5 and x = 0.2.
Figure 15. Violin plots of ĥ_{r,n}(x) − h(x) with β = 3.5 and x = 0.4.
Figure 16. Violin plots of ĥ_{r,n}(x) − h(x) with β = 3.5 and x = 0.6.
Figure 17. Violin plots of ĥ_{r,n}(x) − h(x) with β = 3.5 and x = 0.8.
Figure 18. Curves of h(x) = cos(πx) and ĥ_{r,n}(x) with β = 3.5 and n = 1200.
Table 1. The MSEs of β̂_{r,n}^{(LS)} with β = 2.5 and h(x) = sin(2πx).

x     n = 100    n = 600     n = 1200    n = 1900    n = 2700     n = 3600
0.2   0.029564   0.0040429   0.0027393   0.0017879   0.0011778    0.00096205
0.4   0.027563   0.0048107   0.002557    0.0013618   0.0012225    0.00083697
0.6   0.032418   0.0045437   0.0030247   0.0017816   0.001016     0.0007695
0.8   0.026715   0.0049648   0.0026033   0.0014911   0.00099622   0.00083525
Table 2. The MSEs of ĥ_{r,n}(x) with β = 2.5 and h(x) = sin(2πx).

x     n = 100     n = 600      n = 1200    n = 1900     n = 2700     n = 3600
0.2   0.072899    0.0132846    0.0064122   0.00450755   0.00408198   0.00309342
0.4   0.064407    0.012605     0.0061076   0.0040173    0.0037297    0.0027017
0.6   0.0651945   0.01143844   0.0061647   0.0048246    0.0031893    0.002914
0.8   0.067254    0.0134688    0.0701644   0.00515616   0.00397691   0.0030646
Table 3. The MSEs of β̂_{r,n}^{(LS)} with β = 3.5 and h(x) = cos(πx).

x     n = 100    n = 600     n = 1200    n = 1900    n = 2700    n = 3600
0.2   0.028735   0.0046935   0.002547    0.0017209   0.0010944   0.00085433
0.4   0.032703   0.0053621   0.0022595   0.0015934   0.001206    0.0008074
0.6   0.027853   0.0048229   0.0025756   0.0014096   0.0011848   0.00084765
0.8   0.03042    0.004848    0.002352    0.0014649   0.0011824   0.00084588
Table 4. The MSEs of ĥ_{r,n}(x) with β = 3.5 and h(x) = cos(πx).

x     n = 100    n = 600     n = 1200    n = 1900    n = 2700    n = 3600
0.2   0.025536   0.0068733   0.0035259   0.0025282   0.0019082   0.0015027
0.4   0.029859   0.0061945   0.0032347   0.0022321   0.002002    0.0016778
0.6   0.02415    0.005475    0.0040859   0.0026873   0.0017436   0.0014589
0.8   0.027164   0.0055904   0.0036775   0.0024326   0.0017396   0.0014935
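The pattern in Tables 1–4 (MSEs shrinking as n grows for every design point x) is what the consistency theorems predict. As a rough illustration only, the sketch below runs a Monte Carlo experiment on a simplified, homoscedastic version of model (1) with i.i.d. Gaussian errors, using a Robinson-type profiled least-squares estimator of β (kernel smoothing to partial out the nonparametric component). The error process, bandwidth, and estimator here are stand-in assumptions, not the authors' exact weight construction or ρ̃-mixing error design.

```python
import numpy as np

rng = np.random.default_rng(0)

def nw_smooth(x0, x, v, bw):
    # Nadaraya-Watson smoother with a Gaussian kernel (bandwidth bw is an assumption)
    w = np.exp(-0.5 * ((x0[:, None] - x[None, :]) / bw) ** 2)
    return (w @ v) / w.sum(axis=1)

def simulate_once(n, beta=2.5, bw=0.05):
    # Simplified stand-in for model (1): scalar z, h(x) = sin(2*pi*x), i.i.d. errors
    x = np.sort(rng.uniform(0.0, 1.0, n))
    z = rng.normal(0.0, 1.0, n)
    eps = rng.normal(0.0, 0.5, n)
    y = z * beta + np.sin(2 * np.pi * x) + eps
    # Robinson-type profiling: partial the smooth component out of z and y,
    # then regress residuals on residuals to estimate beta
    z_res = z - nw_smooth(x, x, z, bw)
    y_res = y - nw_smooth(x, x, y, bw)
    return (z_res @ y_res) / (z_res @ z_res)

# Monte Carlo MSE of beta_hat for two sample sizes; MSE should shrink with n
for n in (100, 600):
    est = np.array([simulate_once(n) for _ in range(200)])
    print(n, np.mean((est - 2.5) ** 2))
```

Under this setup the empirical MSE decreases with n, mirroring the qualitative behavior reported in Tables 1 and 3.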

Zhang, Y.; Liu, X. The Consistency of Estimators in a Heteroscedastic Partially Linear Model with ρ-Mixing Errors. Symmetry 2020, 12, 1188. https://doi.org/10.3390/sym12071188
