Article

SIMEX Estimation of Partially Linear Multiplicative Regression Model with Mismeasured Covariates

1 School of Zhangjiagang, Jiangsu University of Science and Technology, Zhangjiagang 215600, China
2 Suzhou Institute of Technology, Jiangsu University of Science and Technology, Zhangjiagang 215600, China
* Author to whom correspondence should be addressed.
Symmetry 2023, 15(10), 1833; https://doi.org/10.3390/sym15101833
Submission received: 23 August 2023 / Revised: 22 September 2023 / Accepted: 25 September 2023 / Published: 27 September 2023
(This article belongs to the Special Issue Mathematical Models and Methods in Various Sciences)

Abstract

In many practical applications, such as studies of financial and biomedical data, the response variable is usually positive, and the commonly used criteria are based on absolute errors, which is not always desirable; rather, relative errors are of more concern. We consider statistical inference for a partially linear multiplicative regression model when covariates in the linear part are measured with error. Simulation–extrapolation (SIMEX) estimators of the parameters of interest are proposed based on the least product relative error criterion and B-spline approximation, where two kinds of relative errors are both introduced and symmetry emerges in the loss function. Extensive simulation studies show that the proposed method can effectively eliminate the bias caused by the measurement errors. Under some mild conditions, the asymptotic normality of the proposed estimator is established. Finally, a real example is analyzed to illustrate the practical use of the proposed method.

1. Introduction

In many applications, such as studies of financial and biomedical data, the response variable is usually positive. To model the relationship between a positive response and a set of explanatory variables, a natural idea is to first apply an appropriate transformation to the response, e.g., the logarithmic transformation, and then employ common regression models, such as linear regression or quantile regression, on the transformed data. As argued by [1], the least-squares and least absolute deviation criteria are both based on absolute errors, which is not desirable in many practical applications; rather, relative errors are of more concern.
In the early literature, many authors contributed fruitfully to this issue; see [2,3,4], where the relative error is defined as the ratio of the error to the target value. Since the work of [1], in which the ratios of the error relative to both the target value and the predictor are introduced into the loss function, called the least absolute relative error (LARE) criterion, more attention has been focused on the multiplicative regression (MR) model, and various extensions have been investigated. For example, Ref. [5] considered the estimation problem of the nonparametric MR model; see also [6,7] and references therein. In particular, some semi-parametric MR models have been studied. When estimating the nonparametric function g(z) in these models, such as the partially linear MR model ([8,9,10]), the single-index MR model ([11,12,13]), the varying-coefficient MR model ([14]), and others ([15]), almost all researchers use the local linear smoothing technique and approximate the function in a neighborhood of z, where a good choice of bandwidth is quite critical, and the performance of the resulting estimation and inference is possibly sensitive to its value. Additionally, because the value of the function at every observation of z is estimated separately, the optimal bandwidth may differ across observations. Thus, the computation is cumbersome and the numerical problem becomes intractable when the sample size is large. As a result, researchers have had to compromise and assume that the bandwidths used for estimating the nonparametric function are all the same.
When solving nonparametric regression, spline-based methods, such as regression splines, smoothing splines, and penalized splines, are popular and applied extensively in many fields. Recently, Ref. [16] proposed multiplicative additive models based on the least product relative error criterion (LPRE), where the B-spline basis functions are used to estimate the nonparametric functions. Simulation studies have demonstrated that their approach performs well. It is worth noting that the loss function based on LPRE is smooth enough and differentiable with respect to the regression parameter, in contrast to that based on LARE. Moreover, LPRE inherits the symmetry between the two kinds of relative errors presented in LARE. Using this symmetry makes the computation and derivation of asymptotic properties easier.
A common feature of the above-mentioned literature is the presumption that all variables in the model are precisely observed. However, in many applications, some covariates cannot be measured exactly due to various limitations; see [17] for such examples in econometrics, biology, nutrition, and toxicology. Extensive studies have been conducted in the measurement error setup, including quantile regression and other traditional robust statistical inference procedures. Only recently has interest emerged in applying multiplicative regression when the covariates are contaminated with measurement errors. Ref. [18] developed simulation–extrapolation (SIMEX) estimation methods based on the LPRE criterion for the linear and varying-coefficient multiplicative models, respectively, where the covariates are measured with additive error and the measurement error is assumed to follow a normal distribution; under certain conditions, the large-sample properties of the resulting estimators are established.
The SIMEX estimation procedure was first developed by [19] to reduce the estimation bias in the presence of additive measurement errors. Since then, the SIMEX method has gained more attention in the literature, and it has become a standard tool for analyzing complex regression models. A significant feature of SIMEX is that one can rely on standard inferential procedures to estimate the unknown parameters. Since its conception, researchers have extended the SIMEX method to various applications. Ref. [20] considered statistical inference for additive partial linear models when the linear covariate is measured with error, using attenuation-to-correction and SIMEX methods. Ref. [21] proposed graphical proportional hazards measurement error models and developed SIMEX procedures for the parameters of interest with complex structured covariates. To the best of our knowledge, there are few studies on the partially linear multiplicative regression model with measurement error. To fill this gap, we address this problem in detail in this paper.
This paper is organized as follows. In Section 2, we first introduce in detail the simulation–extrapolation method for the partially linear multiplicative regression model with measurement errors. Combining the B-spline approximation and the LPRE criterion, a new estimation method is proposed, and some remarks on the selection of the number and location of knots, together with the asymptotic properties of the proposed estimator, are presented. Simulation studies assessing the finite-sample performance of our method are reported in Section 3. A real example illustrating the practical use of the proposed method is analyzed in Section 4. Finally, Section 5 concludes the paper with some discussion.

2. Methodology

In this section, we propose the simulation–extrapolation estimation for regression parameters and the nonparametric function in the partially linear multiplicative regression model, where the covariates in the parametric part are measured with additive measurement errors. Computation details are presented, and some asymptotic results are also established.

2.1. Notations and Model

Let Y denote the positive response variable, which satisfies the following partially linear multiplicative regression model:
Y = exp(X⊤β + g(Z)) ϵ,  (1)
where X is a p-dimensional vector of covariates associated with the regression parameter vector β, Z is a continuous univariate covariate, ϵ is a positive error independent of (X, Z), and g(·) is an unknown smooth link function.
Due to some practical limitations, the covariate X cannot be observed precisely. Instead, its surrogate W, given by the additive covariate measurement error structure
W = X + U,  (2)
is available, where U is the measurement error with mean zero and covariance matrix Σ_u, independent of (X, Z) and ϵ. Assume that Σ_u is known; otherwise, it can be estimated through replication experiments, as argued in much of the literature, such as [17]. When some components of X are error-free, the corresponding entries of Σ_u are set to zero. In particular, when Σ_u is a zero matrix, i.e., U is zero, there is no measurement error.
We combine Models (1) and (2) and refer to the combination as the partially linear multiplicative regression measurement error (PLMR-ME) model. Let (Y_i, X_i, Z_i, W_i), i = 1, …, n, be independent and identically distributed copies of (Y, X, Z, W).

2.2. SIMEX Estimation of PLMR-ME Model

In general, the SIMEX method consists of a simulation step, an estimation step, and an extrapolation step. Before the detailed introduction of our method, we must specify two kinds of parameters: one is the number of simulated samples (the simulation times), denoted by n0, and the other is the grid of added-error levels, denoted by λ ∈ Λ = {λ1, …, λM}. Oftentimes, equally spaced values with λ1 = 0 and λM = 2 are adopted, M ranges from 10 to 20, and n0 is a given integer lying in [50, 200].
In our method, we use the SIMEX algorithm, B-spline approximation, and the LPRE criterion to estimate β and g(·). First, we approximate g(·) by a B-spline function, i.e., g(z) ≈ ∑_{j=1}^{K_n} α_j B_j(z), where B_j(·) is the B-spline basis function of order d with k_n internal knots and K_n = d + k_n. Then, Model (1), as in [22,23], can be rewritten as the spline model
Y = exp(X⊤β + B⊤α) ε,
where B = B(Z) = (B_1(Z), …, B_{K_n}(Z))⊤ and α = (α_1, …, α_{K_n})⊤ is the corresponding vector of spline coefficients. In this way, the estimation of the unknown function g(·) is transformed into the estimation of α. Next, we employ the LPRE criterion to estimate β and α. Explicitly, the proposed SIMEX algorithm proceeds as follows.
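To make the spline approximation concrete, the following sketch builds the B-spline design matrix B(Z) by the Cox–de Boor recursion (NumPy is used for illustration; the function name and knot layout are our own, not from the paper):

```python
import numpy as np

def bspline_design(z, knots, degree):
    """Evaluate all B-spline basis functions of the given degree at points z
    via the Cox-de Boor recursion. `knots` must be a clamped knot vector
    (boundary knots repeated degree + 1 times); the result is a
    (len(z), K) design matrix with K = len(knots) - degree - 1 columns."""
    z = np.asarray(z, dtype=float)
    t = np.asarray(knots, dtype=float)
    # Degree-0 bases: indicators of the half-open knot spans [t_j, t_{j+1}).
    B = np.zeros((len(z), len(t) - 1))
    for j in range(len(t) - 1):
        B[:, j] = (t[j] <= z) & (z < t[j + 1])
    B[z == t[-1], len(t) - degree - 2] = 1.0  # include the right endpoint
    # Raise the degree one step at a time.
    for d in range(1, degree + 1):
        B_new = np.zeros((len(z), len(t) - d - 1))
        for j in range(len(t) - d - 1):
            left = (z - t[j]) / (t[j + d] - t[j]) if t[j + d] > t[j] else 0.0
            right = ((t[j + d + 1] - z) / (t[j + d + 1] - t[j + 1])
                     if t[j + d + 1] > t[j + 1] else 0.0)
            B_new[:, j] = left * B[:, j] + right * B[:, j + 1]
        B = B_new
    return B
```

The columns form a partition of unity on the support, so each row of the design matrix sums to one.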
(1)
Simulation step.
For each λ ∈ Λ, generate n0 independent random samples of size n from N(0, Σ_u). That is to say, for the b-th sample, generate a sequence of pseudo-predictors
W_ib(λ) = W_i + √λ V_ib,  i = 1, …, n,  b = 1, …, n0,
where V_ib ~ N(0, Σ_u). Note that the covariance matrix of W_ib(λ) given X_i is
Var(W_ib(λ) | X_i) = λ Σ_u + Var(W_i | X_i) = (1 + λ) Σ_u.
Thus, when λ = −1, it follows that Var(W_ib(λ) | X_i) = 0. Combining this with the fact that E(W_ib(λ) | X_i) = X_i, the conditional mean squared error of W_ib(λ), defined as E[‖W_ib(λ) − X_i‖² | X_i], converges to zero as λ → −1.
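As an illustration of the simulation step, the following hypothetical helper generates the n0 pseudo-predictor samples W_ib(λ) = W_i + √λ V_ib for a given error level λ (the function name and array layout are our own):

```python
import numpy as np

def simulate_pseudo_predictors(W, Sigma_u, lam, n0, rng):
    """Simulation step of SIMEX: for an added-error level lam >= 0, return n0
    contaminated copies W_b(lam) = W + sqrt(lam) * V_b with V_b ~ N(0, Sigma_u),
    so that Var(W_b(lam) | X) = (1 + lam) * Sigma_u."""
    n, p = W.shape
    # One independent N(0, Sigma_u) draw per observation and per replicate b.
    V = rng.multivariate_normal(np.zeros(p), Sigma_u, size=(n0, n))
    return W[None, :, :] + np.sqrt(lam) * V  # shape (n0, n, p)
```

At λ = 0 the pseudo-predictors coincide with the observed surrogates, which yields the naive estimator.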
(2)
Estimation step.
For a fixed λ , based on the b-th random sample ( Y i , W i b ( λ ) , Z i ) i = 1 , , n , one can obtain the estimator of ( β , α ) , denoted by ( β ^ b ( λ ) , α ^ b ( λ ) ) , which is the minimizer of the objective function
L_nb(β, α; λ) = ∑_{i=1}^n [ |Y_i − exp(W_ib(λ)⊤β + B_i⊤α)| / Y_i ] × [ |Y_i − exp(W_ib(λ)⊤β + B_i⊤α)| / exp(W_ib(λ)⊤β + B_i⊤α) ] = ∑_{i=1}^n [ Y_i exp(−W_ib(λ)⊤β − B_i⊤α) + Y_i^{−1} exp(W_ib(λ)⊤β + B_i⊤α) − 2 ],
where B_i = B(Z_i). Then, define the final estimates of (β, α) as the averages of (β̂_b(λ), α̂_b(λ)) over b = 1, …, n0, namely β̂(λ) = ∑_{b=1}^{n0} β̂_b(λ)/n0 and α̂(λ) = ∑_{b=1}^{n0} α̂_b(λ)/n0, for each λ ∈ Λ. Furthermore, the corresponding estimator of g(z) is ĝ(z; λ) = B(z)⊤ α̂(λ).
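The estimation step reduces, for each b and λ, to minimizing a smooth convex loss in θ = (β⊤, α⊤)⊤. Below is a minimal sketch using `scipy.optimize.minimize`, where the design matrix D stacks the pseudo-covariates W_ib(λ) and the spline columns B_i (the function name and argument layout are illustrative):

```python
import numpy as np
from scipy.optimize import minimize

def lpre_fit(y, D):
    """Estimation step: minimize the LPRE objective
        sum_i [ y_i * exp(-d_i' theta) + y_i^{-1} * exp(d_i' theta) - 2 ],
    where the rows d_i of D stack the (pseudo-)covariates and the spline
    columns, so theta = (beta', alpha')'. The loss is smooth and convex in
    theta, so a gradient-based optimizer suffices."""
    def loss(theta):
        m = D @ theta
        return np.sum(y * np.exp(-m) + np.exp(m) / y - 2.0)

    def grad(theta):
        m = D @ theta
        return D.T @ (-y * np.exp(-m) + np.exp(m) / y)

    res = minimize(loss, np.zeros(D.shape[1]), jac=grad, method="BFGS")
    return res.x
```

Supplying the analytic gradient keeps the per-λ, per-b fits fast, which matters because the SIMEX loop repeats this minimization M × n0 times.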
(3)
Extrapolation step.
Consider two extrapolation models: linear and quadratic. Without loss of generality, denote the extrapolation function by Ψ(λ, Γ), where Γ is the regression parameter vector. The linear extrapolation function is Ψ(λ, Γ) = γ0 + γ1 λ, and the quadratic one is Ψ(λ, Γ) = γ0 + γ1 λ + γ2 λ². For the two sequences {(λ, β̂(λ)), λ ∈ Λ} and {(λ, ĝ(z; λ)), λ ∈ Λ}, we fit a regression model to each of them from
β ^ ( λ ) = Ψ 1 ( λ , Γ 1 ) + ε 1 , g ^ ( z ; λ ) = Ψ 2 ( λ , Γ 2 ) + ε 2
respectively, where ε 1 and ε 2 are random errors. Using the least-squares method, one can obtain the estimates of Γ 1 and Γ 2 and denote them as Γ ^ 1 and Γ ^ 2 , respectively. Then, the SIMEX estimator of β is defined as the predicted value
β̂_SIMEX = Ψ1(−1, Γ̂1).
Meanwhile, the naive estimator of β reduces to Ψ1(0, Γ̂1). As for β above, the nonparametric term g(·) can be estimated in the same way. Denote the SIMEX estimator of g(z) by ĝ_SIMEX(z) = Ψ2(−1, Γ̂2).
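The extrapolation step is ordinary polynomial least squares in λ followed by evaluation at λ = −1. A compact sketch with `numpy.polyfit` (degree 1 gives SIMEX1, degree 2 gives SIMEX2; the function name is ours):

```python
import numpy as np

def simex_extrapolate(lams, estimates, degree=2):
    """Extrapolation step: regress each coordinate of the lambda-indexed
    estimates on lambda with a polynomial (degree 1 = SIMEX1, 2 = SIMEX2)
    and predict at lambda = -1, the no-measurement-error point."""
    estimates = np.asarray(estimates, dtype=float)
    if estimates.ndim == 1:
        estimates = estimates[:, None]           # treat a vector as one coordinate
    coefs = np.polyfit(lams, estimates, degree)  # shape (degree + 1, p)
    return np.array([np.polyval(coefs[:, j], -1.0)
                     for j in range(estimates.shape[1])])
```

Evaluating the same fitted polynomial at λ = 0 instead of λ = −1 recovers the naive estimator, as noted above.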

2.3. Asymptotic Results

To derive the asymptotic normality of the SIMEX estimator β̂_SIMEX, the following regularity conditions are needed.
(A1)
E[(ϵ − ϵ^{−1}) | X, Z] = 0.
(A2)
E(X X⊤) is a positive definite matrix.
(A3)
There exists δ > 0 such that E[(ϵ + ϵ^{−1}) exp(δ‖X‖)] < ∞, E[(ϵ + ϵ^{−1})² exp(δ‖X‖)] < ∞, and E[(ϵ + ϵ^{−1})² (X_j X_k X_l)² exp(δ‖X‖)] < ∞ for j, k, l = 1, …, p.
(A4)
g(·) ∈ H = {g ∈ C^r[a, b] : ‖g^{(j)}‖_∞ ≤ M0, j = 1, …, r, |g^{(r)}(z1) − g^{(r)}(z2)| ≤ M1 |z1 − z2|}, where M0 and M1 are positive constants, ‖·‖_∞ is the supremum norm, and 0 ≤ r ≤ d.
Condition (A1) is an identification condition for the LPRE estimation, which is similar to the zero-mean error condition in classical linear mean regression. Conditions (A2) and (A3) are common moment requirements in the study of MR models, and (A4) is a smoothness condition commonly used in spline approximation theory.
Before presenting our result, some notation needs to be introduced. Let β̂(Λ) = (β̂(λ1)⊤, …, β̂(λM)⊤)⊤ and Γ = (Γ11⊤, …, Γ1p⊤)⊤, where Γ1j is the true parameter vector estimated in the extrapolation step for the j-th component of β̂(λ). Define G(λk, Γ) = (Ψ(λk, Γ11), …, Ψ(λk, Γ1p))⊤ and G(Λ, Γ) = (G(λ1, Γ)⊤, …, G(λM, Γ)⊤)⊤. Let Γ̂ be the minimizer of Res(Γ)⊤Res(Γ), where Res(Γ) = β̂(Λ) − G(Λ, Γ). According to least-squares theory, Γ̂ satisfies s(Γ̂)⊤Res(Γ̂) = 0, where s(Γ) = ∂Res(Γ)/∂Γ⊤. Denote D(Γ) = s(Γ)⊤s(Γ) and Ġ(λ, Γ) = ∂G(λ, Γ)/∂Γ⊤.
Theorem 1.
Assume that the extrapolation function is theoretically exact. Under conditions (A1)–(A4), as n → ∞, we have
√n (β̂_SIMEX − β) →d N(0, Ġ(−1, Γ) Σ(Γ) Ġ(−1, Γ)⊤),
with Σ(Γ) given in the proof below.
Proof. 
Assume that β(λ) is the true value based on the model Y_i = exp(W_ib(λ)⊤β + g(Z_i)) ϵ̃_i. Using a method similar to that of Theorem 2 in [16], we have
√n (β̂_b(λ) − β(λ)) = √n [K(β(λ), λ)]^{−1} J_n(β(λ), λ) + o_P(1),
where
K(β(λ), λ) = E{[Y exp(−W(λ)⊤β(λ) − g(Z)) + Y^{−1} exp(W(λ)⊤β(λ) + g(Z))] W(λ) W(λ)⊤},
J_n(β(λ), λ) = (1/n) ∑_{i=1}^n [−Y_i exp(−W_ib(λ)⊤β(λ) − g(Z_i)) + Y_i^{−1} exp(W_ib(λ)⊤β(λ) + g(Z_i))] W_ib(λ),
and W(λ) = X + √λ V. Because β̂(λ) = ∑_{b=1}^{n0} β̂_b(λ)/n0, it follows that
√n (β̂(λ) − β(λ)) = [K(β(λ), λ)]^{−1} n^{1/2} J_n^B(β(λ), λ) + o_P(1),
where J_n^B(β(λ), λ) = (1/(n n0)) ∑_{i=1}^n ∑_{b=1}^{n0} η_ib(β(λ), λ) with η_ib(β(λ), λ) = [−Y_i exp(−W_ib(λ)⊤β(λ) − g(Z_i)) + Y_i^{−1} exp(W_ib(λ)⊤β(λ) + g(Z_i))] W_ib(λ). Define Σ(λ) = Cov(n^{1/2} J_n^B(β(λ), λ)). Some algebraic calculations show that
Σ(λ) = (1/n0) Var(η_i1(β(λ), λ)) + ((n0² − n0)/n0²) Cov(η_i1(β(λ), λ), η_i2(β(λ), λ)).
Then, according to the central limit theorem, it holds that
√n (β̂(λ) − β(λ)) →d N(0, [K(β(λ), λ)]^{−1} Σ(λ) [K(β(λ), λ)]^{−1}).
Write Σ(Λ) = diag(Σ(λ1), …, Σ(λM)). In the following, using the standard derivation of the SIMEX method and the definition of Γ̂, we have
√n (Γ̂ − Γ) →d N(0, Σ(Γ)),
where Σ(Γ) = D(Γ)^{−1} s(Γ)⊤ Σ(Λ) s(Γ) D(Γ)^{−1}. Finally, using the Delta method and noting that β̂_SIMEX = β̂(−1) = Ψ1(−1, Γ̂) and β = Ψ1(−1, Γ), the desired result is established. □

3. Simulation Studies

Numerical studies were conducted to evaluate the finite-sample performance of our proposed SIMEX estimators under various situations. To compare the SIMEX estimator fairly with the naive estimator, which ignores measurement errors, and the true estimator, based on data without measurement errors, we set the degree of the spline basis to q = 2 and the number of internal knots to k_n = round(n^{1/3}) + 1, with knots located at equally spaced quantiles, for all methods. All results below are based on 500 replicates, with M = 11, λ1 = 0, λ2 = 0.2, …, λM = 2, n0 = 50, and sample sizes n = 50, 100, and 200. All simulations were implemented in the software R.
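The knot rule above can be sketched as follows (an illustrative helper, not the authors' code; knots at equally spaced quantiles as stated):

```python
import numpy as np

def interior_knots(z, n):
    """Knot rule used in the simulations: k_n = round(n^(1/3)) + 1 interior
    knots placed at equally spaced quantiles of the observed z values."""
    k_n = int(round(n ** (1.0 / 3.0))) + 1
    probs = np.linspace(0.0, 1.0, k_n + 2)[1:-1]  # drop the two boundary quantiles
    return np.quantile(z, probs)
```

For n = 100 this gives k_n = 6 interior knots, matching the rule in the text.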
Now, generate ( Y i , X i , Z i , W i ) from the following model
Y_i = exp(β1 X_1i + β2 X_2i + sin(π Z_i / 2)) ϵ_i,  W_i = X_i + U_i,  i = 1, …, n,
where β1 = 1.5, β2 = 1, X_i = (X_1i, X_2i)⊤, U_i = (U_1i, U_2i)⊤, X_1i ~ N(0, 1), X_2i ~ Binom(1, 0.5), and Z_i ~ Unif(−2, 2) and is independent of the error ϵ_i, with log ϵ_i ~ Unif(−2, 2). Further, we assume U_2i = 0, which means that X_2i is error-free. For U_1i, however, three measurement error distributions are considered, namely,
  • Case 1: U 1 i N ( 0 , 0.09 ) ;
  • Case 2: U 1 i N ( 0 , 0.36 ) ;
  • Case 3: U 1 i N ( 0 , 0.81 ) .
These represent light-level, moderate-level, and heavy-level measurement error, respectively. In the extrapolation step, we consider both the linear and quadratic extrapolation functions and denote the corresponding methods by SIMEX1 and SIMEX2, respectively.
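For concreteness, the data-generating design above can be sketched as follows (a sketch under the stated design; β2 = 1 as printed, and log ϵ_i ~ Unif(−2, 2) is our reading of the error specification):

```python
import numpy as np

def generate_sample(n, sigma_u2, rng):
    """One replicate from the simulation design of Section 3:
    Y = exp(beta1*X1 + beta2*X2 + sin(pi*Z/2)) * eps, with X1 mismeasured
    as W1 = X1 + U1, U1 ~ N(0, sigma_u2), and X2 observed exactly."""
    beta1, beta2 = 1.5, 1.0
    X1 = rng.standard_normal(n)                       # X1 ~ N(0, 1)
    X2 = rng.binomial(1, 0.5, size=n).astype(float)   # X2 ~ Binom(1, 0.5)
    Z = rng.uniform(-2.0, 2.0, size=n)
    eps = np.exp(rng.uniform(-2.0, 2.0, size=n))      # log eps ~ Unif(-2, 2)
    Y = np.exp(beta1 * X1 + beta2 * X2 + np.sin(np.pi * Z / 2.0)) * eps
    W1 = X1 + rng.normal(0.0, np.sqrt(sigma_u2), size=n)
    return Y, np.column_stack([W1, X2]), Z
```

Setting sigma_u2 to 0.09, 0.36, or 0.81 reproduces cases 1–3.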
For estimators of (β1, β2), we record their empirical bias (BIAS), sample standard deviation (SD), and mean squared error (MSE). For the nonparametric part, we use the averaged integrated absolute bias (IABIAS) and mean integrated squared error (MISE), where ĝ_j (j = 1, …, 500) denotes the estimator obtained from the j-th replicate,
IABIAS = (1/500) ∑_{j=1}^{500} (1/n_grid) ∑_{k=1}^{n_grid} |ĝ_j(u_k) − g(u_k)|,
MISE = (1/500) ∑_{j=1}^{500} (1/n_grid) ∑_{k=1}^{n_grid} [ĝ_j(u_k) − g(u_k)]²,
at fixed grid points {u_k} equally spaced in [−2, 2] with n_grid = 401. The values reported in parentheses below them are the associated sample standard deviations.
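The two summary criteria can be computed directly from the fitted curves on the grid (an illustrative helper; the function name is ours):

```python
import numpy as np

def iabias_mise(g_hats, g_true):
    """Monte Carlo summaries for the nonparametric part: g_hats is an
    (n_rep, n_grid) array of fitted curves evaluated on a fixed grid and
    g_true the true curve on the same grid; returns (IABIAS, MISE)."""
    diff = g_hats - g_true[None, :]
    iabias = np.abs(diff).mean(axis=1).mean()   # average over grid, then replicates
    mise = (diff ** 2).mean(axis=1).mean()
    return iabias, mise
```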
Table 1, Table 2 and Table 3 report the simulation results for the different estimators of the regression parameters and the nonparametric function under cases 1–3 with different sample sizes. For β1, we can see that when the measurement error is small, as in Table 1, all methods behave similarly and the proposed SIMEX method gains no obvious advantage over the naive method. Not surprisingly, when the measurement error becomes moderate, as in Table 2, the naive estimates are substantially biased and have a larger mean squared error (MSE), while the SIMEX estimates, especially when the quadratic function is applied, are unbiased and have a smaller MSE. When the measurement error is large, as in Table 3, all methods except the true one are slightly biased, but the performance of the SIMEX methods is still relatively better than that of the naive method. For β2 and g(·), the corresponding covariates X_2i and Z_i are error-free, and under the same measurement error level and sample size, the naive and SIMEX estimates have similar performance in terms of the sample standard deviation (SD) and MSE for β2, and the integrated absolute bias (IABIAS) and mean integrated squared error (MISE) for g(·).
On the other hand, for each method and given case, the SD and MSE of the estimates of (β1, β2) and the IABIAS and MISE of the estimates of g(·) decrease as the sample size increases. Although the MSE of SIMEX2 is smaller than that of SIMEX1 for β1, the ordering of their SDs is reversed. Figure 1 shows the Q–Q plots of the estimates of (β1, β2) in case 2 with a sample size of n = 100. All points lie close to the line, which indicates that the resulting SIMEX estimator is asymptotically normal; this finding is in accordance with the theoretical result in Theorem 1. Figure 2 and Figure 3 display the boxplots of the estimators of β1 = 1.5 and β2 = 1 in cases 2 and 3 with a sample size of n = 100, respectively, and they lead to conclusions similar to those above. Figure 4 presents the average estimated curves, which are very close to the true one. Similar plots were obtained in the other cases and are omitted to save space.

4. Real Data Analysis

To illustrate the proposed procedure, an application to body fat data is provided. These data are available at http://lib.stat.cmu.edu/datasets/bodyfat (accessed on 1 January 2020) and have been analyzed by several authors in different contexts; see [8,10,24]. There are 252 observations and several variables, including the percentage of body fat as the response variable Y, and 13 explanatory variables: age (X1), weight (X2), height (X3), neck (X4), chest (X5), abdomen (X6), hip (X7), thigh (X8), knee (X9), ankle (X10), biceps (X11), forearm (X12), and wrist (X13). As in [10], we deleted all possible outliers, leaving a sample of size 248. Following [8], we selected chest (X5) as the nonlinear effect U, and the other 12 covariates were treated as the linear component X in Model (1). Motivated by the suggestion in [24], weight (X2) was presumed to be mismeasured, and the others were presumed to be error-free. Similar to [10], before the computation, the nonparametric covariate U was transformed onto [0, 1] and the other covariates were standardized.
Estimation results for the regression coefficients β using the naive method and the SIMEX methods with linear or quadratic extrapolation functions are shown in Table 4 and Table 5, together with the results presented in [8] (local linear LARE estimator) and [10] (local linear LPRE estimator); the methods are denoted by Naive, SIMEX1, SIMEX2, ZW, and CL, respectively. To evaluate the impact of the measurement error level σ² and the number of interior knots k_n, the variance of the measurement error was set to 0.1 and 0.3 and the number of knots to 3 and 6, so four cases were considered in total. In each specific case, the estimates of the regression coefficients were close to each other. However, the estimates of the coefficient associated with weight (X2) varied greatly. In particular, the sign of the naive estimate of the coefficient of X2 was negative, while the SIMEX estimates were both positive, although their absolute values were small. As the level of measurement error increased, the SIMEX estimates changed steadily.
The estimated curves of g(·) are plotted in Figure 5 and Figure 6. All curves have a similar trend: g(U) first increases until around U = 0.4 and then decreases. This phenomenon was also found in [10], but their figures are less clear than ours. For a fixed number of knots, the level of measurement error had little effect on the estimated curves. In contrast, the difference between Figure 5 and Figure 6 is relatively large, which may be caused by overfitting when k_n is 6 and underfitting when k_n is 3. It is worth noting that the SIMEX estimates were less sensitive than the naive estimate in all cases.

5. Conclusions

In this study, we used the simulation–extrapolation method to estimate the regression parameters and the nonparametric function in the partially linear multiplicative regression model given by Models (1) and (2), based on the LPRE loss function and B-spline approximation, where covariates in the linear part are measured with additive errors but the nonparametric part is exactly observed. Under some regularity conditions, the SIMEX estimates are asymptotically normal, with a more complex covariance matrix structure than that of the naive estimates. Furthermore, extensive numerical studies show that our proposed method performs better than the naive estimator when the measurement error is moderate or heavy and is comparable to it when the measurement error is light. Since the covariate in the nonparametric component is error-free, the resulting estimates of the nonparametric function always fit well.
As indicated in Section 1, the approach proposed in this paper may be adapted to other, more general models, such as the partially linear additive model as in [20], or single-index or varying-coefficient multiplicative regression models. Our future work will also consider extensions to settings with measurement errors in all covariates, censored data, or longitudinal data, which are meaningful for practitioners. As indicated by one referee, Models (1) and (2) assume that the measurement error occurs only in the linear part. In fact, the nonlinear part may also be measured with error. For the latter case, our method can still be implemented as in [20] with some minor modifications; however, the asymptotic theory becomes more involved. Furthermore, as in [16], how to identify which covariates belong to the linear part and which to the nonlinear part is an interesting question. Additionally, when the dimension of the covariates is high, how to effectively select the truly important variables deserves thorough study. All these issues will be investigated in the future.

Author Contributions

Conceptualization, W.C. and M.W.; methodology, W.C.; software, W.C.; validation, M.W.; writing—original draft preparation, W.C.; writing—review and editing, W.C. and M.W.; visualization, M.W.; supervision, W.C.; project administration, W.C. All authors have read and agreed to the published version of the manuscript.

Funding

This work was partly funded by a grant from Natural Science Foundation of Jiangsu Province of China (Grant No. BK20210889).

Data Availability Statement

Not applicable.

Acknowledgments

This work was partly supported by the start-up fund for doctoral research of Jiangsu University of Science and Technology. The authors also thank Lecturer Feng-ling Ren, School of Computer and Engineering, Xinjiang University of Finance & Economics, for helpful discussions.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
PLMR-ME    Partially linear multiplicative regression measurement error (model)
LARE    Least absolute relative error
LPRE    Least product relative error
SIMEX    Simulation–extrapolation

References

  1. Chen, K.; Guo, S.; Lin, Y.; Ying, Z. Least absolute relative error estimation. J. Am. Stat. Assoc. 2010, 105, 1104–1112. [Google Scholar] [CrossRef] [PubMed]
  2. Khoshgoftaar, T.M.; Bhattacharyya, B.B.; Richardson, G.D. Predicting software errors, during development, using nonlinear regression models: A comparative study. IEEE Trans. Reliab. 1992, 41, 390–395. [Google Scholar] [CrossRef]
  3. Narula, S.C.; Wellington, J.F. Prediction, linear regression and the minimum sum of relative errors. Technometrics 1977, 19, 185–190. [Google Scholar] [CrossRef]
  4. Park, H.; Stefanski, L.A. Relative-error prediction. Statist. Probab. Lett. 1998, 40, 227–236. [Google Scholar] [CrossRef]
  5. Chen, W.; Wan, M. Penalized Spline Estimation for Nonparametric Multiplicative Regression Models. J. Appl. Stat. 2023. submitted. [Google Scholar]
  6. Chen, K.; Lin, Y.; Wang, Z.; Ying, Z. Least product relative error estimation. J. Multivar. Anal. 2016, 144, 91–98. [Google Scholar] [CrossRef]
  7. Hirose, K.; Masuda, H. Robust relative error estimation. Entropy 2018, 20, 632. [Google Scholar] [CrossRef]
  8. Zhang, Q.; Wang, Q. Local least absolute relative error estimating approach for partially linear multiplicative model. Stat. Sinica 2013, 23, 1091–1116. [Google Scholar] [CrossRef]
  9. Zhang, J.; Feng, Z.; Peng, H. Estimation and hypothesis test for partial linear multiplicative models. Comput. Stat. Data Anal. 2018, 128, 87–103. [Google Scholar] [CrossRef]
  10. Chen, Y.; Liu, H. A new relative error estimation for partially linear multiplicative model. Commun. Stat. Simul. Comput. 2021, 1–19. [Google Scholar] [CrossRef]
  11. Liu, H.; Xia, X. Estimation and empirical likelihood for single-index multiplicative models. J. Stat. Plan. Inference 2018, 193, 70–88. [Google Scholar] [CrossRef]
  12. Zhang, J.; Zhu, J.; Feng, Z. Estimation and hypothesis test for single-index multiplicative models. Test 2019, 28, 242–268. [Google Scholar] [CrossRef]
  13. Zhang, J.; Cui, X.; Peng, H. Estimation and hypothesis test for partial linear single-index multiplicative models. Ann. Inst. Stat. Math. 2020, 72, 699–740. [Google Scholar] [CrossRef]
  14. Hu, D.H. Local least product relative error estimation for varying coefficient multiplicative regression model. Acta Math. Appl. Sin. Engl. Ser. 2019, 35, 274–286. [Google Scholar] [CrossRef]
  15. Chen, Y.; Liu, H.; Ma, J. Local least product relative error estimation for single-index varying-coefficient multiplicative model with positive responses. J. Comput. Appl. Math. 2022, 415, 114478. [Google Scholar] [CrossRef]
  16. Ming, H.; Liu, H.; Yang, H. Least product relative error estimation for identification in multiplicative additive models. J. Comput. Appl. Math. 2022, 404, 113886. [Google Scholar] [CrossRef]
  17. Carroll, R.J.; Ruppert, D.; Stefanski, L.A.; Crainiceanu, C.M. Measurement Error in Nonlinear Models: A Modern Perspective, 2nd ed.; Chapman and Hall/CRC: New York, NY, USA, 2006. [Google Scholar]
  18. Tian, Y. Simulation-Extrapolation Estimation for Multiplicative Regression Model with Measurement Error. Master’s Thesis, Shanxi Normal University, Xi’an, China, 2020. [Google Scholar]
  19. Cook, J.R.; Stefanski, L.A. Simulation-extrapolation estimation in parametric measurement error models. J. Am. Stat. Assoc. 1994, 89, 1314–1328. [Google Scholar] [CrossRef]
  20. Liang, H.; Thurston, S.W.; Ruppert, D.; Apanasovich, T.; Hauser, R. Additive partial linear models with measurement errors. Biometrika 2008, 95, 667–678. [Google Scholar] [CrossRef]
  21. Chen, L.P.; Yi, G.Y. Analysis of noisy survival data with graphical proportional hazards measurement error models. Biometrics 2021, 77, 956–969. [Google Scholar] [CrossRef]
  22. Afzal, A.R.; Dong, C.; Lu, X. Estimation of partly linear additive hazards model with left-truncated and right-censored data. Stat. Model. 2017, 6, 423–448. [Google Scholar] [CrossRef]
  23. Chen, W.; Ren, F. Partially linear additive hazards regression for clustered and right censored data. Bull. Inform. Cybern. 2022, 54, 1–14. [Google Scholar] [CrossRef]
  24. Zhang, J.; Zhu, J.; Zhou, Y.; Cui, X.; Lu, T. Multiplicative regression models with distortion measurement errors. Stat. Pap. 2020, 61, 2031–2057. [Google Scholar] [CrossRef]
Figure 1. Q-Q plots of various estimators of β1 = 1.5 (left panel) and β2 = 1 (right panel) in case 2 with sample size n = 100.
Figure 2. Boxplots of various estimators of β1 = 1.5 (left panel) and β2 = 1 (right panel) in case 2 with sample size n = 100.
Figure 3. Boxplots of various estimators of β1 = 1.5 (left panel) and β2 = 1 (right panel) in case 3 with sample size n = 100.
Figure 4. Average estimated curves of g(z) = sin(πz/2) in case 2 with sample size n = 100. The gray dashed line is the true curve; the solid (black), dotted (red), and dot-dashed (green and blue) lines correspond to the oracle, naive, SIMEX1, and SIMEX2 estimators, respectively.
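The nonparametric component g(z) = sin(πz/2) shown in Figure 4 is approximated with B-splines in the paper. A minimal sketch of a cubic B-spline least-squares approximation, built from the Cox–de Boor recursion (the knot count and placement here are illustrative choices, not the paper's):

```python
import numpy as np

def bbasis(x, t, k, i):
    """Cox-de Boor recursion: i-th B-spline basis of degree k on knot vector t."""
    if k == 0:
        cond = (t[i] <= x) & (x < t[i + 1])
        if t[i] < t[i + 1] and t[i + 1] == t[-1]:
            cond = cond | (x == t[-1])  # close the last interval on the right
        return np.where(cond, 1.0, 0.0)
    left = right = 0.0
    if t[i + k] != t[i]:
        left = (x - t[i]) / (t[i + k] - t[i]) * bbasis(x, t, k - 1, i)
    if t[i + k + 1] != t[i + 1]:
        right = (t[i + k + 1] - x) / (t[i + k + 1] - t[i + 1]) * bbasis(x, t, k - 1, i + 1)
    return left + right

z = np.linspace(-1.0, 1.0, 201)
g = np.sin(np.pi * z / 2)

k = 3                                        # cubic splines
interior = np.linspace(-1.0, 1.0, 7)[1:-1]   # 5 equally spaced interior knots
t = np.r_[[-1.0] * (k + 1), interior, [1.0] * (k + 1)]  # clamped knot vector
n_basis = len(t) - k - 1

# Least-squares projection of g onto the spline space
B = np.column_stack([bbasis(z, t, k, i) for i in range(n_basis)])
coef, *_ = np.linalg.lstsq(B, g, rcond=None)
g_hat = B @ coef
```

With these knots the spline approximation error for sin(πz/2) is far below the Monte Carlo noise in the simulations, so the curves in Figure 4 mainly reflect estimation error, not approximation error.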
Figure 5. Estimated curves of g(U) when k_n = 3. The left (right) panel corresponds to the case with σ² = 0.1 (σ² = 0.3). The solid line (black), the dotted line (red), and the dashed lines (green) correspond to the naive estimator, SIMEX1 estimator, and SIMEX2 estimator, respectively.
Figure 6. Estimated curves of g(U) when k_n = 6. The left (right) panel corresponds to the case with σ² = 0.1 (σ² = 0.3). The solid line (black), the dotted line (red), and the dashed lines (green) correspond to the naive estimator, SIMEX1 estimator, and SIMEX2 estimator, respectively.
Table 1. Results for case 1 with different sample sizes (×10²).

  n    Method      β1: BIAS     SD      MSE      β2: BIAS     SD      MSE      g(·): IABIAS      MISE
  50   True          −0.71    17.53     3.07       −2.02    31.73    10.09     39.71 (28.80)     12.63 (21.81)
       Naive         −1.87    17.69     3.15       −1.96    32.14    10.35     40.15 (29.46)     12.87 (21.97)
       SIMEX1        −0.68    17.82     3.17       −2.00    32.15    10.35     40.14 (29.43)     12.84 (21.77)
       SIMEX2        −0.68    17.87     3.19       −1.93    32.15    10.35     40.17 (29.51)     12.94 (22.07)
  100  True          −0.51    11.17     1.24       −0.77    21.80     4.74     26.80 (12.31)      7.36 (6.67)
       Naive         −1.71    11.27     1.29       −0.66    21.95     4.81     27.07 (12.50)      7.34 (6.72)
       SIMEX1        −0.55    11.36     1.29       −0.67    21.97     4.82     27.05 (12.48)      7.32 (6.70)
       SIMEX2        −0.53    11.37     1.29       −0.71    21.91     4.79     27.15 (12.56)      7.32 (6.73)
  200  True          −0.01     7.06     0.49        0.38    15.01     2.25     18.44 (5.72)       4.47 (2.69)
       Naive         −1.22     7.11     0.51        0.42    15.29     2.33     18.68 (5.87)       4.55 (2.79)
       SIMEX1        −0.06     7.15     0.51        0.42    15.30     2.33     18.67 (5.87)       4.55 (2.79)
       SIMEX2        −0.03     7.20     0.52        0.41    15.33     2.34     18.71 (5.89)       4.55 (2.79)
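For a Monte Carlo summary over replications, the reported MSE should equal BIAS² + SD² up to rounding (all entries in these tables are scaled by 10²), which gives a quick consistency check on the numbers. A sketch using the n = 50 "True" row of Table 1 for β1:

```python
# Table 1, case 1, n = 50, oracle ("True") row for beta_1; table entries are x10^2
bias, sd, mse_reported = -0.71e-2, 17.53e-2, 3.07e-2

# For a Monte Carlo summary over replications, MSE = BIAS^2 + SD^2.
mse_check = bias**2 + sd**2
print(round(mse_check * 100, 2))  # 3.08, matching the reported 3.07 up to rounding
```

The same identity holds (to rounding) across the other rows, e.g. the Naive row of Table 2 at n = 50: 0.1844² + 0.1750² ≈ 0.0646 against the reported 6.46 × 10⁻².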
Table 2. Results for case 2 with different sample sizes (×10²).

  n    Method      β1: BIAS     SD      MSE      β2: BIAS     SD      MSE      g(·): IABIAS      MISE
  50   True          −1.37    17.07     2.92        0.43    34.57    11.93     39.42 (29.02)     12.59 (23.99)
       Naive        −18.44    17.50     6.46        0.56    38.76    15.00     44.81 (37.48)     14.15 (34.49)
       SIMEX1        −6.97    19.12     4.13        0.68    38.63    14.90     45.14 (38.02)     14.32 (35.95)
       SIMEX2        −2.72    20.13     4.11        1.04    38.91    15.12     45.87 (39.14)     14.76 (38.13)
  100  True          −0.65    10.39     1.08        0.33    22.84     5.20     26.40 (11.89)      7.24 (6.37)
       Naive        −17.34    11.22     4.26        0.58    26.67     7.10     30.60 (16.05)      8.23 (8.98)
       SIMEX1        −5.86    12.23     1.83        0.67    26.95     7.25     30.83 (16.28)      8.33 (9.10)
       SIMEX2        −1.27    13.06     1.71        0.54    27.13     7.34     31.28 (16.73)      8.33 (9.06)
  200  True          −0.22     7.02     0.49       −0.43    14.58     2.10     18.64 (5.86)       4.79 (2.85)
       Naive        −17.39     7.55     3.59       −0.23    16.46     2.71     21.57 (7.79)       5.38 (3.72)
       SIMEX1        −5.86     8.27     1.02       −0.15    16.58     2.74     21.73 (7.90)       5.39 (3.76)
       SIMEX2        −1.28     8.99     0.82       −0.46    16.80     2.82     22.13 (8.17)       5.50 (3.90)
Table 3. Results for case 3 with different sample sizes (×10²).

  n    Method      β1: BIAS     SD      MSE      β2: BIAS     SD      MSE      g(·): IABIAS      MISE
  50   True          −1.37    17.07     2.92        0.43    34.57    11.93     39.42 (29.02)     12.59 (23.99)
       Naive        −59.52    18.00    38.65        0.13    49.02    23.98     55.64 (57.13)     17.20 (50.82)
       SIMEX1       −43.99    21.21    23.83        0.29    49.61    24.56     56.79 (59.36)     17.55 (56.63)
       SIMEX2       −23.50    27.37    12.99        1.18    51.52    26.51     60.71 (67.76)     19.19 (71.43)
  100  True          −0.65    10.39     1.08        0.33    22.84     5.20     26.40 (11.89)      7.24 (6.37)
       Naive        −59.08    12.08    36.36        0.79    33.22    11.01     39.29 (26.38)     10.77 (15.19)
       SIMEX1       −43.37    14.20    20.82        1.03    34.19    11.67     40.13 (27.60)     11.19 (15.94)
       SIMEX2       −23.01    17.99     8.52        1.34    36.43    13.26     43.08 (31.79)     12.44 (18.27)
  200  True          −0.22     7.01     0.49       −0.43    14.48     2.09     18.64 (5.85)       4.78 (2.84)
       Naive        −59.53     8.43    36.15       −0.29    21.40     4.57     28.50 (13.44)      7.10 (6.41)
       SIMEX1       −44.01     9.86    20.33       −0.00    21.77     4.72     29.15 (14.07)      7.16 (6.67)
       SIMEX2       −23.57    12.63     7.15       −0.42    23.53     5.52     31.55 (16.46)      7.86 (7.73)
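All estimators in Tables 1–3 are built on the least product relative error (LPRE) criterion described in the abstract, which multiplies the two relative errors |y − ŷ|/y and |y − ŷ|/ŷ and is symmetric between them. A minimal sketch of the criterion for a one-covariate multiplicative model (an illustration of the loss only, not the paper's partially linear SIMEX procedure; all names and values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
n, beta_true = 500, 0.8
x = rng.normal(size=n)
eps = np.exp(rng.normal(scale=0.3, size=n))  # positive multiplicative error, median 1
y = np.exp(beta_true * x) * eps              # multiplicative model: positive response

def lpre(b):
    # Product of the two relative errors |y - yhat|/y and |y - yhat|/yhat with
    # yhat = exp(b*x) reduces to y*exp(-b*x) + exp(b*x)/y - 2.
    return np.sum(y * np.exp(-b * x) + np.exp(b * x) / y - 2.0)

# The criterion is smooth and convex in b, so a 1-D Newton iteration suffices.
b = 0.0
for _ in range(50):
    grad = np.sum(-x * y * np.exp(-b * x) + x * np.exp(b * x) / y)
    hess = np.sum(x**2 * y * np.exp(-b * x) + x**2 * np.exp(b * x) / y)
    b -= grad / hess
```

The reduction of the product of relative errors to y·e^(−xb) + e^(xb)/y − 2 is what makes the loss symmetric in the two relative errors and differentiable, unlike absolute-relative-error criteria.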
Table 4. Estimation results for the body fat data when k_n = 3.

                         σ² = 0.1                σ² = 0.3
  Variable    Naive      SIMEX1     SIMEX2      SIMEX1     SIMEX2      CLZW
  Age          0.0677     0.0722     0.0729      0.0724     0.0731      0.0702, 0.1476
  Weight      −0.0719     0.0128     0.0138      0.0136     0.0141     −0.1346, −0.3945
  Height      −0.0028    −0.0094    −0.0072     −0.0093    −0.0074      0.0066, 0.1050
  Neck        −0.0745    −0.0792    −0.0793     −0.0795    −0.0794     −0.0698, −0.0660
  Abdomen      0.5496     0.5352     0.5350      0.5333     0.5321      0.5432, 0.8309
  Hip         −0.0809    −0.1026    −0.1020     −0.1037    −0.1024     −0.0996, −0.1936
  Thigh        0.0881     0.0817     0.0787      0.0824     0.0792      0.1257, 0.1665
  Knee         0.0004    −0.0007     0.0016     −0.0002     0.0020     −0.0013, −0.0259
  Ankle        0.0061     0.0013     0.0027      0.0012     0.0029      0.0153, 0.0407
  Biceps       0.0195     0.0185     0.0200      0.0182     0.0199      0.0292, 0.1103
  Forearm      0.0297     0.0258     0.0249      0.0255     0.0249      0.0377, 0.0723
  Wrist       −0.0944    −0.1011    −0.1047     −0.1014    −0.1047     −0.0838, −0.0860
Table 5. Estimation results for the body fat data when k_n = 6.

                         σ² = 0.1                σ² = 0.3
  Variable    Naive      SIMEX1     SIMEX2      SIMEX1     SIMEX2      CLZW
  Age          0.0786     0.0733     0.0716      0.0731     0.0723      0.0702, 0.1476
  Weight      −0.2005     0.0110     0.0113      0.0113     0.0120     −0.1463, −0.3945
  Height       0.0381    −0.0085    −0.0014     −0.0087    −0.0014      0.0066, 0.1050
  Neck        −0.0861    −0.0858    −0.0962     −0.0854    −0.0956     −0.0698, −0.0660
  Abdomen      0.5309     0.5309     0.5330      0.5317     0.5285      0.5432, 0.8309
  Hip          0.0254    −0.1039    −0.1098     −0.1033    −0.1077     −0.0996, −0.1936
  Thigh        0.1036     0.0873     0.0881      0.0867     0.0860      0.1257, 0.1665
  Knee        −0.0346    −0.0010     0.0010     −0.0013     0.0024     −0.0013, −0.0259
  Ankle        0.0168    −0.0019    −0.0108     −0.0019    −0.0106      0.0153, 0.0407
  Biceps       0.0193     0.0175     0.0129      0.0179     0.0127      0.0292, 0.1103
  Forearm      0.0161     0.0269     0.0373      0.0268     0.0374      0.0377, 0.0723
  Wrist       −0.0827    −0.1012    −0.1013     −0.1009    −0.1040     −0.0838, −0.0860
Chen, W.; Wan, M. SIMEX Estimation of Partially Linear Multiplicative Regression Model with Mismeasured Covariates. Symmetry 2023, 15, 1833. https://doi.org/10.3390/sym15101833