Article

Weighted Competing Risks Quantile Regression Models and Variable Selection

1 College of Science, North China University of Technology, Beijing 100144, China
2 School of Mathematics, University of Manchester, Manchester M13 9PL, UK
3 Department of Physics, Astronomy and Mathematics, School of Physics, Engineering & Computer Science, University of Hertfordshire, Hatfield AL10 9EU, UK
4 Department of Mathematics, College of Engineering, Design and Physical Sciences, Brunel University, Uxbridge UB8 3PH, UK
5 School of Business and Economics, Humboldt-Universität zu Berlin, 10117 Berlin, Germany
6 School of Statistics and Mathematics, Shanghai Lixin University of Accounting and Finance, Shanghai 201620, China
7 Center for Applied Statistics, School of Statistics, Renmin University of China, Beijing 100872, China
* Author to whom correspondence should be addressed.
Mathematics 2023, 11(6), 1295; https://doi.org/10.3390/math11061295
Submission received: 23 January 2023 / Revised: 24 February 2023 / Accepted: 2 March 2023 / Published: 8 March 2023
(This article belongs to the Special Issue Statistical Methods and Models for Survival Data Analysis)

Abstract: The proportional subdistribution hazards (PSH) model is widely used to analyze competing risks data. Censored quantile regression provides an important complement to it, and variable selection methods are needed because of the large numbers of irrelevant covariates encountered in practice. In this paper, we study variable selection procedures based on penalized weighted quantile regression for competing risks models, which researchers can apply conveniently. Asymptotic properties of the proposed estimators are established, including consistency and asymptotic normality of the non-penalized estimator and consistency of variable selection. Monte Carlo simulation studies show that the proposed methods are considerably stable and efficient. Real data on bone marrow transplants (BMT) are also analyzed to illustrate the application of the proposed procedure.

1. Introduction

In survival analysis, a subject may fail from the cause of interest or from one of several other causes, known as competing risks. Consider, for example, the bone marrow transplant (BMT) dataset in [1], which includes 177 patients who received a stem cell transplant for acute leukemia. Whereas 56 patients in this dataset relapsed (REL), considered the event of interest, 75 patients died from causes related to the transplant (transplant related mortality, TRM), which is considered a competing risk, as it precludes the occurrence of leukemia relapse. The other 46 patients are regarded as censored due to the end of the study. In the analysis of such a dataset, treating the competing risk (TRM) as censoring and using the usual Cox model may be inaccurate, as the competing risks are probably affected by covariates. To deal with such competing risks data, Ref. [2] proposed a semiparametric proportional hazards model for the subdistribution, or PSH model, which directly analyzes the effect of covariates on the marginal probability function, or cumulative incidence function (CIF). Competing risks data often arise in clinical trials containing large numbers of covariates, among which only a few have a significant or essential influence on the response; this generates variable selection issues, addressed, for example, by the general penalized log-partial likelihood method proposed by [3].
Quantile regression, introduced by [4], is widely known to describe the conditional distribution of the response on covariates more comprehensively. Existing work on competing risks quantile regression includes [5], which first transforms the competing risks quantile regression model into an accelerated failure time model and uses an estimating equation procedure for estimation. In addition, Ref. [6] discussed quantile regression for competing risks data with a missing cause of failure. Then [7,8] developed variable selection procedures for competing risks quantile regression models based on unbiased estimating equations with group structures and penalization methods.
In this paper, instead of the estimating equation method, we propose a more general approach for competing risks quantile regression, extending the weighted procedures by adapting the redistribution-of-mass method [9] to the PSH model. By transforming the responses, we can rewrite the competing risks quantile formulation as a general quantile regression objective function and then apply the constructed weights. After proving the unbiasedness of the subgradient of this weighted objective function at the true cumulative incidence function and coefficient, we establish the consistency and asymptotic normality of the penalty-free estimators under regularity conditions. To realize variable selection, penalization methods such as the least absolute shrinkage and selection operator (LASSO) proposed by [10] and the adaptive LASSO (ALASSO) developed by [11] are applied to the weighted objective function, which can be implemented easily with an R package. The consistency of the variable selection procedure is also established, and Monte Carlo simulations are performed to illustrate the efficiency and stability of our proposed procedures. Real data on bone marrow transplants are analyzed using our methods.
The paper is organized as follows. Our proposed weighted competing risks quantile regression model and its penalized versions are developed in Section 2, with asymptotic properties presented in Section 3. Simulation studies as well as the application to the BMT data are reported in Section 4 to illustrate the performance of the proposed methods.

2. Models

We adopt the formulation of competing risks quantile regression in [5]. In the competing risks setting, assume there exist K causes of failure, denoted by an observable indicator ϵ ∈ {1, …, K}, following the notation of [2]. Without loss of generality, we set K = 2. Let T and C denote the failure and censoring times, respectively; we observe X = min(T, C) and the censoring or risk indicator δ = I(T ≤ C), where I(·) is the indicator function. Denote a p × 1 bounded time-independent covariate vector by Z̃ and let Z = (1, Z̃⊤)⊤. Assume that {Xᵢ, δᵢϵᵢ, Zᵢ}, i = 1, …, n, are independent and identically distributed observations.
Ref. [2] modeled the CIF for failure from cause 1 conditionally on the covariates, F₁(t|Z) = P(T ≤ t, ϵ = 1|Z). They proposed the PSH model based on the subdistribution hazard, which is defined as
λ₁(t|Z) = lim_{Δt→0} (1/Δt) P{t < T ≤ t + Δt, ϵ = 1 | (T ≥ t) ∪ (T ≤ t, ϵ ≠ 1), Z} = {dF₁(t|Z)/dt} / {1 − F₁(t|Z)}
in [12]. Analogous to the usual definition of a quantile, we define the conditional quantile as Q_k(τ|Z) = inf{t : F_k(t|Z) ≥ τ}, k = 1, …, K, where F_k(t|Z) = P{T ≤ t, ϵ = k | Z} is the CIF for cause k; for more details, refer to [5]. For τ ∈ [τ_L, τ_U], consider Q₁(τ|Z) to be modeled as
Q₁(τ|Z) = g{Z⊤β₀(τ)},    (1)
where β₀(τ) is a (p + 1) × 1 coefficient vector, g(·) is a known monotone increasing, continuously differentiable, and bounded link function, and 0 < τ_L ≤ τ_U < 1. Following [2], if we denote T₁* = I(ϵ = 1) × T + {1 − I(ϵ = 1)} × ∞, then T₁* has a distribution function equal to F₁(t|Z) for t < ∞ and a point mass P(T₁* = ∞|Z) = P(T < ∞, ϵ ≠ 1|Z) = 1 − F₁(∞|Z) at t = ∞. Then, for τ < F₁(∞|Z), the τ-quantile of T₁* equals F₁⁻¹(τ|Z) = Q₁(τ|Z) = g{Z⊤β₀(τ)} under the formulation of (1).
Remark 1.
According to the construction of T₁*, when τ ≥ F₁(∞|Z), the τ-quantile of T₁* becomes ∞. This is obvious from the definition F₁(t|Z) = P(T ≤ t, ϵ = 1|Z) ≤ P(T ≤ ∞, ϵ = 1|Z) = F₁(∞|Z) and the fact that g(·) is monotone increasing. This observation guides the choice of τ_U.
With reference to [13], for proper τ, β₀(τ) is the minimizer of the following expected loss function with respect to β(τ):
β₀(τ) = argmin_{β(τ)} E[ρ_τ(g⁻¹(T₁*) − Z⊤β(τ))],    (2)
where E denotes expectation and ρ_τ(u) = u{τ − I(u < 0)} is the “check” function.
Given a sample, we obtain the estimator β̂(τ) of β₀(τ) by minimizing the following objective function:
min_{β(τ)} Σᵢ₌₁ⁿ ρ_τ(g⁻¹(T₁,ᵢ*) − Zᵢ⊤β(τ)).    (3)
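To build intuition for this objective, the following Python sketch (ours, not from the paper; function names are illustrative) verifies numerically that minimizing the average check loss over a constant recovers the empirical τ-quantile:

```python
import numpy as np

def check_loss(u, tau):
    """Quantile "check" function: rho_tau(u) = u * (tau - I(u < 0))."""
    u = np.asarray(u, dtype=float)
    return u * (tau - (u < 0))

# Minimizing the average check loss over a constant recovers the empirical
# tau-quantile, which is the fact the objective function above exploits.
rng = np.random.default_rng(0)
x = rng.normal(size=1000)
grid = np.linspace(-3.0, 3.0, 601)
avg_loss = np.array([check_loss(x - c, 0.25).mean() for c in grid])
c_star = grid[np.argmin(avg_loss)]  # close to np.quantile(x, 0.25)
```

The same mechanism works with covariates: replacing the constant by Zᵢ⊤β(τ) yields the regression quantile.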

2.1. Weighted Competing Risks Quantile Regression

Similar to [5], we first consider the case in which there are no missing data (i.e., there is no censoring). As a result, X = T, δ = 1, and δϵ = ϵ. As mentioned above, we can estimate β₀(τ) via the minimization problem (3). Because T₁,ᵢ* is not observed, we modify (3) to
Σᵢ₌₁ⁿ [I(ϵᵢ = 1) ρ_τ(g⁻¹(Xᵢ) − Zᵢ⊤β(τ)) + I(ϵᵢ ≠ 1) ρ_τ(g⁻¹(X⁺) − Zᵢ⊤β(τ))],    (4)
where X⁺ is any value large enough that g⁻¹(X⁺) exceeds all Zᵢ⊤β(τ). Then, it is not difficult to derive the negative subgradient of (4) with respect to β(τ).
For the censoring case, we aim to construct such a weighted quantile objective function to estimate β 0 ( τ ) as follows:
Q(β(τ), w₀) = Σᵢ₌₁ⁿ [w₀ᵢ ρ_τ(g⁻¹(Xᵢ) − Zᵢ⊤β(τ)) + (1 − w₀ᵢ) ρ_τ(g⁻¹(X⁺) − Zᵢ⊤β(τ))].    (5)
The weight function is constructed by analogy with [14], adapted to competing risks, as follows:
w₀ᵢ = 1 if δᵢϵᵢ = 1; w₀ᵢ = 0 if δᵢϵᵢ ≠ 1 and F₁(Cᵢ|Zᵢ) > τ; w₀ᵢ = {τ − F₁(Cᵢ|Zᵢ)}/{1 − F₁(Cᵢ|Zᵢ)} if δᵢϵᵢ ≠ 1 and F₁(Cᵢ|Zᵢ) ≤ τ.    (6)
Remark 2.
In our competing risks quantile regression, each point contributes to the subgradient condition only through the sign of g⁻¹(T₁,ᵢ*) − Zᵢ⊤β₀(τ). For data with δᵢϵᵢ = 1, we know Xᵢ = Tᵢ ≤ Cᵢ and ϵᵢ = 1, i.e., Xᵢ = T₁,ᵢ*, so I(g⁻¹(T₁,ᵢ*) − Zᵢ⊤β₀(τ) < 0) is observed, and we assign a weight of 1. For data with δᵢϵᵢ ≠ 1 and F₁(Cᵢ|Zᵢ) > τ, either Tᵢ > Cᵢ, or Tᵢ ≤ Cᵢ with ϵᵢ = 2. In the first scenario, T₁,ᵢ* ≥ Tᵢ > Xᵢ = Cᵢ > g(Zᵢ⊤β₀(τ)), so I(g⁻¹(T₁,ᵢ*) − Zᵢ⊤β₀(τ) < 0) = 0; in the second scenario, T₁,ᵢ* = ∞, so again I(g⁻¹(T₁,ᵢ*) − Zᵢ⊤β₀(τ) < 0) = 0; in both cases we assign a weight of 0. The ambiguous situation is δᵢϵᵢ ≠ 1 with F₁(Cᵢ|Zᵢ) ≤ τ, i.e., Cᵢ ≤ F₁⁻¹(τ|Zᵢ) = g(Zᵢ⊤β₀(τ)). If δᵢ = 1 and ϵᵢ = 2, then Xᵢ = Tᵢ ≤ Cᵢ ≤ g(Zᵢ⊤β₀(τ)), so I{g⁻¹(Xᵢ) − Zᵢ⊤β₀(τ) < 0} = 1; if δᵢ = 0, then Xᵢ = Cᵢ ≤ g(Zᵢ⊤β₀(τ)), so again I{g⁻¹(Xᵢ) − Zᵢ⊤β₀(τ) < 0} = 1. However, I(T₁,ᵢ* − g(Zᵢ⊤β₀(τ)) < 0) itself cannot be observed.
Thus, we assign the weight wᵢ(F₁) = {τ − F₁(Cᵢ|Zᵢ)}/{1 − F₁(Cᵢ|Zᵢ)} for this case, since given (Zᵢ, Cᵢ),
E[I(g⁻¹(T₁,ᵢ*) − Zᵢ⊤β₀(τ) < 0) | δᵢϵᵢ ≠ 1, Zᵢ] = {P(ϵᵢ = 1, Tᵢ < g(Zᵢ⊤β₀(τ)) | Zᵢ) − P(ϵᵢ = 1, Tᵢ < Cᵢ | Zᵢ)} / {1 − P(Tᵢ ≤ Cᵢ, ϵᵢ = 1 | Zᵢ)} = {τ − F₁(Cᵢ|Zᵢ)}/{1 − F₁(Cᵢ|Zᵢ)}.    (7)
We can show that a subgradient of the weighted quantile objective function (5) with respect to β ( τ )
Mₙ(β(τ), w₀) = Σᵢ₌₁ⁿ Zᵢ{τ − w₀ᵢ I(g⁻¹(Xᵢ) < Zᵢ⊤β(τ))}    (8)
is an unbiased estimating function for β₀(τ):
E[w₀ᵢ I{g⁻¹(Xᵢ) < Zᵢ⊤β₀(τ)} | Zᵢ] = E[I{δᵢϵᵢ = 1} w₀ᵢ I{g⁻¹(Xᵢ) < Zᵢ⊤β₀(τ)} | Zᵢ] + E[I{δᵢϵᵢ ≠ 1, F₁(Cᵢ|Zᵢ) > τ} w₀ᵢ I{g⁻¹(Xᵢ) < Zᵢ⊤β₀(τ)} | Zᵢ] + E[I{δᵢϵᵢ ≠ 1, F₁(Cᵢ|Zᵢ) ≤ τ} w₀ᵢ I{g⁻¹(Xᵢ) < Zᵢ⊤β₀(τ)} | Zᵢ] = P(ϵᵢ = 1, g⁻¹(Tᵢ) < Zᵢ⊤β₀(τ) | Zᵢ) = τ.
Although the unbiasedness of (8) is proved with F₁(Cᵢ|Zᵢ) in w₀ᵢ, the underlying distribution F₁(t|Z), and hence w₀ᵢ, is unknown in practice. Here we use the IPCW [15] estimator proposed by [5] to estimate F₁(t|Z):
F̂₁(x|Z) = (1/n) Σᵢ₌₁ⁿ I(Xᵢ ≤ x, δᵢϵᵢ = 1) / {1 − Ĝ(Xᵢ|Zᵢ)},    (9)
where 1 − G(t|Z) is the survival function of C given Z, which can be estimated semiparametrically or nonparametrically. For simplicity, as in [2], we assume the independence of C and (T, ϵ, Z), so the Kaplan–Meier estimator in [16] can be used. Such a computation-friendly estimator (9) has been shown to behave quite well in simulations, and it could be further improved by combining it with more efficient estimators of F₁(t|Z).
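The IPCW construction can be sketched in Python as follows (our own helper functions, assuming completely independent censoring; ties and left limits at the censoring times are handled crudely, so this is only an illustration):

```python
import numpy as np

def km_censoring_survival(x, delta):
    """Kaplan-Meier estimate of the censoring survival P(C > t), evaluated at
    each X_i. Censoring (delta == 0) plays the role of the event here."""
    order = np.argsort(x)
    surv = np.ones(len(x))
    for i in range(len(x)):
        s = 1.0
        for j in order:                        # walk through times in order
            if x[j] > x[i]:
                break
            if delta[j] == 0:                  # a censoring "event" at x[j]
                s *= 1.0 - 1.0 / np.sum(x >= x[j])
        surv[i] = s
    return surv

def ipcw_cif(t, x, delta_eps, cens_surv):
    """IPCW estimator of F1(t): each observed cause-1 event is inversely
    weighted by the estimated probability of remaining uncensored."""
    return np.mean(((x <= t) & (delta_eps == 1)) / cens_surv)

x = np.array([1.0, 2.0, 3.0, 4.0])
delta_eps = np.array([1, 0, 2, 1])             # 0 = censored, 2 = competing risk
G_surv = km_censoring_survival(x, (delta_eps > 0).astype(int))
F1_hat = ipcw_cif(2.5, x, delta_eps, G_surv)   # only the event at t = 1 counts
```

With these four observations, the single censoring at time 2 gives a censoring-survival factor of 2/3 from then on, and F̂₁(2.5) = 0.25.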
By plugging (9) into the expression of w₀ᵢ, we get the estimated weights wᵢ(F̂₁):
wᵢ(F̂₁) = 1 if δᵢϵᵢ = 1; wᵢ(F̂₁) = 0 if δᵢϵᵢ ≠ 1 and F̂₁(Cᵢ|Zᵢ) > τ; wᵢ(F̂₁) = {τ − F̂₁(Cᵢ|Zᵢ)}/{1 − F̂₁(Cᵢ|Zᵢ)} if δᵢϵᵢ ≠ 1 and F̂₁(Cᵢ|Zᵢ) ≤ τ,    (10)
where F ^ 1 is as in (9) or replaced with other consistent estimators. Then, we obtain the weighted censoring quantile regression estimator β ^ ( τ ) by minimizing the weighted objective function,
Q(β(τ), F̂₁) = Σᵢ₌₁ⁿ [wᵢ(F̂₁) ρ_τ(g⁻¹(Xᵢ) − Zᵢ⊤β(τ)) + (1 − wᵢ(F̂₁)) ρ_τ(g⁻¹(X⁺) − Zᵢ⊤β(τ))].    (11)
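The estimated weights admit a short Python sketch (function and argument names are ours; the F̂₁ values at the censoring times are taken as given):

```python
import numpy as np

def crq_weights(delta_eps, F1_at_C, tau):
    """Redistribution weights w_i(F1_hat) (sketch).

    delta_eps : delta_i * eps_i; 1 means the cause-1 event was observed.
    F1_at_C   : estimated CIF F1_hat(C_i | Z_i) at the censoring times.
    tau       : quantile level.
    """
    F1_at_C = np.asarray(F1_at_C, dtype=float)
    w = np.zeros_like(F1_at_C)                   # default: weight 0
    event = np.asarray(delta_eps) == 1
    w[event] = 1.0                               # observed cause-1 events
    ambiguous = ~event & (F1_at_C <= tau)        # censored before the quantile
    w[ambiguous] = (tau - F1_at_C[ambiguous]) / (1.0 - F1_at_C[ambiguous])
    return w

w = crq_weights(np.array([1, 2, 0]), np.array([0.5, 0.1, 0.4]), tau=0.3)
```

In the weighted objective, an observation with weight w contributes mass w at its own response and mass 1 − w at the large pseudo-value X⁺.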

2.2. Variable Selection Procedure

To select important variables, a penalty function is added to the weighted objective function (11) to obtain the penalized estimator β ˜ ( τ ) :
Q_p(β(τ), wᵢ(F̂₁)) = Σᵢ₌₁ⁿ [wᵢ(F̂₁) ρ_τ(g⁻¹(Xᵢ) − Zᵢ⊤β(τ)) + (1 − wᵢ(F̂₁)) ρ_τ(g⁻¹(X⁺) − Zᵢ⊤β(τ))] + Σⱼ₌₁ᵖ p_λ(|βⱼ(τ)|),    (12)
where p λ ( · ) can be LASSO, adaptive LASSO, and so on.
For the LASSO and ALASSO penalties, we can write p_λ(|βⱼ|) = λₙ|βⱼ|/|β̂ⱼ|^γ, where β̂ⱼ is the jth element of an initial consistent unpenalized estimator. We choose γ = 0 for LASSO and γ = 1 for ALASSO. The minimization of (12) and (11) can be carried out directly with the R package quantreg without additional programming, making our proposed methods conveniently applicable tools.
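To make the penalized objective concrete, here is a Python sketch that merely evaluates it for given data (names and interface are ours; in practice the minimization itself is done with the R package quantreg):

```python
import numpy as np

def rho(u, tau):
    """Check function rho_tau(u) = u * (tau - I(u < 0))."""
    u = np.asarray(u, dtype=float)
    return u * (tau - (u < 0))

def penalized_wqr_loss(beta, y, Z, w, y_plus, tau, lam, gamma=0.0, beta_init=None):
    """Evaluate the penalized weighted objective (sketch): weighted check
    losses at the observed responses and at the pseudo-value y_plus, plus a
    LASSO (gamma = 0) or adaptive LASSO (gamma = 1) penalty on the
    non-intercept coefficients."""
    beta = np.asarray(beta, dtype=float)
    fit = Z @ beta
    loss = np.sum(w * rho(y - fit, tau) + (1.0 - w) * rho(y_plus - fit, tau))
    adapt = 1.0 if beta_init is None else 1.0 / np.abs(np.asarray(beta_init)[1:]) ** gamma
    return loss + lam * np.sum(np.abs(beta[1:]) * adapt)

# Tiny hand-checkable example: two observations, one covariate.
val = penalized_wqr_loss(
    beta=[0.0, 1.0],
    y=np.array([3.0, 0.0]),
    Z=np.array([[1.0, 2.0], [1.0, -1.0]]),
    w=np.array([1.0, 0.5]),
    y_plus=10.0, tau=0.5, lam=0.1,
)
```

For these inputs the weighted check losses sum to 3.5 and the LASSO penalty adds 0.1, so the objective equals 3.6.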

3. Theoretical Property

To establish the asymptotic results in this paper, we require the following assumptions:
A1
The covariate Z is bounded in probability. There exists a constant K_z such that E‖Z‖³ ≤ K_z, and E(ZZ⊤) is a positive definite (p + 1) × (p + 1) matrix.
A2
The functions F 1 ( t | Z ) and G ( t ) have first derivatives with respect to t, denoted as f 1 ( t | Z ) and g 0 ( t ) , which are uniformly bounded away from infinity. Additionally, F 1 ( t | Z ) and G ( t ) have bounded (uniformly in t) second-order partial derivatives with respect to Z.
A3
For β in a neighborhood of β₀(τ), E[ZZ⊤ g′(Z⊤β) f₁(g(Z⊤β)|Z){1 − G(g(Z⊤β))}] and E[ZZ⊤ g′(Z⊤β) g₀(g(Z⊤β))] are positive definite.
Assumption A1 states some tail and moment conditions on the covariate Z, which are standard for the quantile regression. Assumption A2 is needed for the local Kaplan–Meier estimator. It allows us to obtain the local expansions of F 1 ( t | z ) and G ( t ) in the neighborhood of Z β 0 ( τ ) in order to obtain the uniform consistency and the linear representation of F ^ 1 ( t | Z ) . Assumption A3 ensures that the expectation of the estimating function E { M n ( β , F 1 ) } has a unique zero at β 0 ( τ ) , and it is needed to establish the asymptotic distribution of β ^ ( τ ) .
C1
There exists ν > 0 such that P(C = ν) > 0 and P(C > ν) = 0.
C2
β₀(τ) is Lipschitz continuous for τ ∈ [τ_L, τ_U].
C3
P ( ϵ = 1 | Z ) < 1 a.s.
Assumptions C1 and C2 are regularity conditions for competing risks quantile regression. Assumption C3 is easily satisfied in the competing risks setting; otherwise, the model reduces to a standard Cox model.
Theorem 1.
Assume that the triples {Zᵢ, Xᵢ, δᵢϵᵢ}, i = 1, …, n, constitute an i.i.d. multivariate random sample and that the censoring variable Cᵢ is independent of Tᵢ conditionally on the covariate Zᵢ. Under model (1) and Assumptions A1–A3 and C1–C3,
β̂(τ) → β₀(τ) in probability as n → ∞.
Theorem 2.
Under the assumptions of Theorem 1 and r < 1 / 4 , we have
n^{1/2}(β̂(τ) − β₀(τ)) →_d N(0, Γ⁻¹VΓ⁻¹),
where
Γ = E[ZZ⊤ g′(Z⊤β₀(τ)) {1 − G(g(Z⊤β₀(τ)))} f₁(g(Z⊤β₀(τ))|Z)],
and
V = Cov(mᵢ(β₀, F₁) + (1 − τ)ϕᵢ),
with mᵢ(β₀, F₁) = Zᵢ{τ − wᵢ(F₁) I(Xᵢ < g(Zᵢ⊤β₀(τ)))} and ϕᵢ defined in Equation (A9).
Theorems 1 and 2 establish the consistency and asymptotic normality of the unpenalized estimator β̂(τ). We next establish the variable selection consistency of the proposed penalized estimator β̃(τ). Let A(τ) = {j : β₀ⱼ(τ) ≠ 0} and A^c(τ) = {j : β₀ⱼ(τ) = 0}.
Theorem 3.
If A1–A3 and C1–C3 hold, and if n^{−1/2}λₙ → 0 and n^{(γ−1)/2}λₙ → ∞, then
P[{j : β̃ⱼ(τ) ≠ 0} = A(τ)] → 1 as n → ∞.
Theorem 3 states that the proposed procedure is able to select the correct model with probability approaching one. By the remark of Theorem 2 of [14], the oracle properties are satisfied by the proposed estimators.
The proofs are presented in Appendix A.

4. Numerical Studies

4.1. Monte Carlo Simulation

We conduct Monte Carlo simulations to evaluate the performance of the proposed methods, adopting the data-generating scheme of [5] with a larger covariate dimension.
We generate (T, ϵ) satisfying P(ϵ = 1|Z) = p₀I(Z₂ = 0) + p₁I(Z₂ = 1), P(T ≤ t | ϵ = 1, Z) = Φ(log t − γ₀⊤Z), and P(T ≤ t | ϵ = 2, Z) = Φ(log t − α₀⊤Z), where Φ(·) denotes the standard normal distribution function, p₀ = 0.8, p₁ = 0.6, and γ₀ and α₀ are the true parameters of the model above. Set γ₀ = (2, 2.5, 2, 2.4, 0, …, 0)⊤ and α₀ = −γ₀. Then
log Q₁(τ|Z) = Φ⁻¹(τ/p₀) + γ₀⁽¹⁾Z₁ + {γ₀⁽²⁾ + Φ⁻¹(τ/p₁) − Φ⁻¹(τ/p₀)}Z₂ + γ₀⁽³⁾Z₃ + γ₀⁽⁴⁾Z₄,
where Z j is the jth component of covariate Z, and γ 0 ( j ) is the jth component of γ 0 . Then the estimated coefficient in model (1) is
β₀(τ) = (Φ⁻¹(τ/p₀), γ₀⁽¹⁾, γ₀⁽²⁾ + Φ⁻¹(τ/p₁) − Φ⁻¹(τ/p₀), γ₀⁽³⁾, γ₀⁽⁴⁾, 0, …, 0)⊤.
Thus the true number of non-zero coefficients is 5 for τ ≠ 0.4 and 4 for τ = 0.4, since Φ⁻¹(0.4/p₀) = 0.
In the simulations, we set the number of predictors to s = 30 and the sample size to n = 200. For the covariance structure of the covariates, we consider Σ₁ with off-diagonal entries Σ₁,ᵢⱼ = ρ (i ≠ j), where ρ = 0, 0.25, 0.5, 0.75.
We generate the covariate vector Z = (Z₁, Z₂, Z₃, …, Z_p)⊤ as follows: Z₁ ∼ Unif(0, 1), Z₂ ∼ Bernoulli(0.5), and (Z₃, …, Z_p)⊤ ∼ N(0, Σ). For each scenario, the simulation is repeated 500 times. The average censoring rate is 36%.
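The data-generating process above can be sketched in Python as follows (our own helper; we read the cause-2 location as −γ₀⊤Z, i.e., α₀ = −γ₀, and censoring is omitted for brevity):

```python
import numpy as np

def simulate_competing_risks(n, gamma0, p0=0.8, p1=0.6, seed=0):
    """Sketch of the Section 4.1 design: cause 1 occurs with probability p0
    (Z2 = 0) or p1 (Z2 = 1); given the cause, log T is normal with mean
    +gamma0'Z (cause 1) or -gamma0'Z (cause 2, assuming alpha0 = -gamma0)."""
    rng = np.random.default_rng(seed)
    gamma0 = np.asarray(gamma0, dtype=float)
    Z = np.column_stack([
        rng.uniform(size=n),                    # Z1 ~ Unif(0, 1)
        rng.binomial(1, 0.5, size=n),           # Z2 ~ Bernoulli(0.5)
        rng.normal(size=(n, len(gamma0) - 2)),  # remaining covariates
    ])
    p_cause1 = np.where(Z[:, 1] == 0, p0, p1)
    eps = np.where(rng.uniform(size=n) < p_cause1, 1, 2)
    mean = Z @ gamma0
    T = np.exp(np.where(eps == 1, mean, -mean) + rng.normal(size=n))
    return T, eps, Z

T, eps, Z = simulate_competing_risks(500, [2.0, 2.5, 2.0, 2.4, 0.0, 0.0])
```

About 70% of the generated subjects fail from cause 1, matching the average of p₀ and p₁.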
We use the following criteria to evaluate performance: the ratio of the number of relevant variables correctly selected to the true number of relevant variables, TPr = #{j : β̂ⱼ(τ) ≠ 0 and β₀ⱼ(τ) ≠ 0} / #{j : β₀ⱼ(τ) ≠ 0}; the ratio of the number of irrelevant variables incorrectly selected to the true number of irrelevant variables, FPr = #{j : β̂ⱼ(τ) ≠ 0 and β₀ⱼ(τ) = 0} / #{j : β₀ⱼ(τ) = 0}; the absolute error P1 = Σⱼ₌₁ᵖ |β̂ⱼ(τ) − β₀ⱼ(τ)|; and the squared error P2 = Σⱼ₌₁ᵖ (β̂ⱼ(τ) − β₀ⱼ(τ))². The closer TPr is to 1 and FPr to 0, the better. Both TPr and FPr range from 0 to 1, so we present them together in Figure 1 for comparison.
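These four criteria can be computed with a small helper (a sketch with our own names):

```python
import numpy as np

def selection_metrics(beta_hat, beta_true, tol=1e-8):
    """TPr, FPr, P1 and P2 as defined in Section 4.1."""
    beta_hat = np.asarray(beta_hat, dtype=float)
    beta_true = np.asarray(beta_true, dtype=float)
    sel = np.abs(beta_hat) > tol           # variables the procedure keeps
    rel = np.abs(beta_true) > tol          # truly relevant variables
    tpr = np.sum(sel & rel) / np.sum(rel)
    fpr = np.sum(sel & ~rel) / np.sum(~rel)
    p1 = np.sum(np.abs(beta_hat - beta_true))      # absolute error
    p2 = np.sum((beta_hat - beta_true) ** 2)       # squared error
    return tpr, fpr, p1, p2

tpr, fpr, p1, p2 = selection_metrics([1.0, 0.0, 0.5, 0.2], [1.0, 2.0, 0.0, 0.3])
```

In this toy call, two of the three relevant variables are kept (TPr = 2/3) and the single irrelevant variable is wrongly selected (FPr = 1).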
We compare our proposed weighted estimators with the estimator of the competing risks quantile regression model proposed in [8], denoted wcqr and cqr, respectively, according to whether the weighting method is used. In the simulation tables, cqr.l and cqr.a represent the cqr estimators with the LASSO and ALASSO penalties, respectively. Similarly, our estimators are denoted wcqri.l and wcqri.a, where i = 0 stands for administrative censoring, in which C is known, and i = 1 for random right censoring, in which X is used in place of C; wcqr2 uses a different weight:
wᵢ(F₁) = 1 if δᵢϵᵢ = 1; wᵢ(F₁) = {τ − F̂₁(Cᵢ)}/{1 − F̂₁(Cᵢ) − F̂₂(Cᵢ)} if δᵢ = 0 and F̂₁(Cᵢ) < τ; wᵢ(F₁) = 0 otherwise.
As the weight above involves the estimation of F₂(t|Z) = P(T ≤ t, ϵ = 2|Z), which may be complicated in practice, we use it only for comparison in the simulations. In wcqr2, we estimate F₂ by a method similar to that used for F₁.
Although our theoretical results are not based on these two estimators of wᵢ, most simulation results show that wcqr0 and wcqr1 are considerably close, as their weights differ only when δ = 1; this suggests good estimates even under heavy censoring. Research on massive competing risks data with numerous censored observations will appear in our future work.
Before the variable selection, we also conduct simulations for the unpenalized estimators. In this case, we use γ₀ = (1, 1.5, 0.5)⊤, p₀ = 0.8, p₁ = 0.6, and ρ = 0. We repeat the simulation 1000 times and compare the empirical bias (EmpBias) and the average coverage probabilities of 95% confidence intervals computed with the empirical variance. The results are summarized in Table 1: at lower quantiles, the cqr method performs extremely well, whereas at high quantiles it displays some instability. The weighted methods, though inferior to cqr at lower quantiles, still behave well in most simulations, especially wcqr1 and wcqr2. The average coverage probabilities display similar patterns; cqr behaves well for τ < 0.4. At relatively high quantiles such as τ = 0.5, wcqr2 behaves best for most coefficients.
With a moderate dimension of covariates (s = 30), Figure 1 presents the TPr and FPr values for n = 200, τ ∈ (0, 0.6), and the four values of ρ for Σ₁. Generally speaking, almost all selection performances decline as τ increases, and with higher TPr and lower FPr, the ALASSO penalized methods are overall superior to the LASSO methods. Specifically, at quantiles below 0.4, both cqr and wcqr with the ALASSO penalty identify the important variables well, with TPr close to 1. Compared with ALASSO, the LASSO methods have higher FPr values and tend to select many more irrelevant variables. In Figure 1, the wcqr estimators display performance comparable to the cqr estimators, with high TPr and low FPr, at moderate τ ∈ (0, 0.35). At higher quantiles, although slightly inferior to the cqr estimators in TPr, the wcqr estimators have very low FPr values despite a rapid increase in FPr for the cqr estimators; that is, the wcqr estimators retain a strong ability to drop irrelevant variables and select the correct ones even where the cqr estimators almost fail. We note that in all simulations, the wcqr estimators are quite stable at higher quantiles and in higher dimensions. The decline in performance with increasing τ can be explained by τ approaching the probability P(ϵ = 1|Z), which induces larger biases. In addition, it is notable that TPr decreases only slightly as ρ increases, except for cqr.l, which decreases substantially, since identification becomes more difficult when the correlation among covariates increases. Even when ρ = 0.75, the wcqr estimators with ALASSO behave quite well.
Figure 2 shows the P1 and P2 performances of the eight methods; the two values for the cqr estimators are too large to be displayed in the plot. In contrast, the errors of the wcqr estimators decrease steadily as τ rises from 0.1 to about 0.27 and increase from 0.3 to 0.6. This can be explained as follows: at low quantiles, few ambiguous cases enter the estimation, causing insufficient use of information, whereas at high quantiles, where more ambiguous observations are weighted, the accuracy of the weights affects the estimation performance. Improving the estimation of w₀ᵢ is a topic for future investigation.
We also present further simulation results in Figures S1–S11 in the supplementary material. Figures S1 and S2 show the TPr, FPr, P1, and P2 for s = 20 and 50, respectively. In Figure S1, the TPr of the wcqr estimators with ALASSO is above 0.9 except at very high quantiles, indicating stability with low FPr compared with the cqr estimators; in Figure S2, the tendency remains but the selection performance is inferior, although TPr still stays above 0.8 at τ = 0.4. We also consider the case s = 100, n = 100, in which the number of predictors exceeds the sample size. In this case, the cqr estimators, including their ALASSO versions, fail because of a singular design matrix. We find, surprisingly, that our wcqr estimators still work and behave quite well, as illustrated in Figure S3. Numerical studies for s = 10 and γ₀ = (2, 2.5, 0.5, 0, …, 0)⊤ are also discussed in the supplementary material, illustrated by Figures S4–S11. For the covariance structure, we consider another setup: Σ₂,ᵢⱼ = ρ^|i−j|. We also consider a different choice of p₀ and p₁, namely 0.6 and 0.45, respectively, to test the performance under a different probability P(ϵᵢ = 1). In addition, we simulate the heavy-tailed t(3) distribution instead of the Gaussian distribution for P(T ≤ t | ϵ = 1, Z). Figures S4–S7 show that the ALASSO penalty significantly decreases the FP for both estimators, suggesting the superiority of ALASSO. Our estimators behave fairly close to the cqr estimators in most cases. Although the TP of our estimators is slightly worse, the FP shows relatively better performance. Not only does wcqr show smaller deviation in the estimated coefficients, but it also shows great stability, especially with the ALASSO penalty, at the higher quantile τ = 0.5. This demonstrates the meaningful application of our estimators at high quantiles.
Figure S8 represents the case of n = 400 , where the performances of all criteria are greatly improved.
Figure S9 shows the performance for Σ₂, which presents slightly better results than the case of Σ₁. Figure S10 considers a different pair (p₀, p₁) = (0.6, 0.45), with τ ranging from 0 to 0.4; τ = 0.3 turns out to be the quantile with 3 nonzero coefficients, which fits our simulation results. Figure S11 simulates the t(3) distribution in place of the standard normal distribution, showing that our estimators behave remarkably well for heavy-tailed distributions.
To conclude, the wcqr estimators behave comparably with the cqr estimators, with slightly worse TP but better FP. Interestingly, under higher correlations, higher quantiles, and heavy-tailed distributions, the superior performance of the wcqr estimators shows good potential applicability to more complex data and higher quantiles.

4.2. Real Data Analysis

In this subsection, we apply our methods to the BMT dataset in [1]. As the simulations illustrate, the wcqr estimators are more robust to the complexity of the data and to high quantiles than the existing cqr estimators, which motivates us to analyze these data with our methods.
In this dataset, a total of 177 patients received a stem cell transplant for acute leukemia. The failure event is relapse (REL, 56 patients), and death from causes related to the transplant (transplant related mortality, TRM, 75 patients) is the competing risk. Forty-six patients are censored, so the censoring rate is 26%. Covariates that may affect REL and TRM include sex, disease (lymphoblastic or myeloblastic leukemia), phase at transplant (Relapse, CR1, CR2, CR3), source of stem cells (bone marrow and peripheral blood, coded BM+PB, or peripheral blood, coded PB), and age. The link function is assumed to be exponential.
Figure 3, Figure 4 and Figure 5 report the numbers of selected variables as well as coefficient estimates by our weighted estimators compared with penalized quantile estimating equations proposed by [8] and the penalty-free methods with τ ranging from 0 to 0.4 .
From the figures we can see that our estimators select numbers of variables similar to the cqr estimators at lower quantiles, whereas at higher quantiles the wcqr estimators lie between cqr-LASSO and cqr-ALASSO. For the intercept at lower quantiles, the five estimators appear coincident with one another, although the cqr-ALASSO estimators tend to be unstable, whereas the wcqr estimators show stability. For age, all estimators regard this variable as unimportant, except that the two LASSO estimators probably overestimate its importance. For sex:F, almost all estimators shrink the corresponding coefficients to zero. The ALASSO estimators tend to treat D:AML as an unimportant variable, except at quantiles around 0.1. For phase:CR1 and phase:CR2, all estimators tend to select them at lower quantiles, but wcqr tends to select phase:CR1 at quantiles above 0.21 while dropping phase:CR2 from 0.22 to 0.27. For phase:CR3, all estimators show analogous performance, with slight shifts. For source:PB, the wcqr estimators perform more stably than cqr across all quantiles. The estimates of F₁ based on the five methods are shown in Figure 6.
To conclude, our wcqr estimators are stable and perform similarly to the cqr estimators. More importantly, our weighted estimators provide a relatively general objective function, allowing researchers to apply them directly with R packages.

5. Conclusions

In this paper, we proposed a weighted method for competing risks quantile regression models that transforms the estimating equation into a common weighted objective function, and we applied LASSO and ALASSO penalization for variable selection. We established the consistency and asymptotic normality of the penalty-free estimators as well as the consistency of variable selection. Monte Carlo simulations conducted for several scenarios show good variable selection performance and stability. Finally, a real dataset was used to illustrate the application of our methods, which are comparable with existing ones.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/math11061295/s1. Figure S1: Case n = 200, s = 20, p₀ = 0.8, p₁ = 0.6. Reports of TPr, FPr, P1 and P2 at different τ. Four colors are used to represent the methods: red for cqr, green for wcqr0, blue for wcqr1 and purple for wcqr2. Light colors represent the LASSO penalty and dark colors the ALASSO penalty. Figure S2: Case n = 200, s = 50, p₀ = 0.8, p₁ = 0.6. Reports of TPr, FPr, P1 and P2 at different τ; colors as in Figure S1. Figure S3: Case n = 100, s = 100, p₀ = 0.8, p₁ = 0.6. Reports of TPr, FPr, P1 and P2 at different τ. Various line colors represent the eight methods. Figure S4: Case n = 200, p₀ = 0.8, p₁ = 0.6. Comparison of TP and FP for four levels of ρ. In each subplot, the Y axis reports the TP values at different τ. Various line colors represent the eight methods. Figure S5: Case n = 200, p₀ = 0.8, p₁ = 0.6. Plots of FP for four levels of ρ. In each subplot, the Y axis reports the FP values at different τ. Various line colors represent the eight methods. Figure S6: Plots of P1 for four levels of ρ. In each subplot, the Y axis reports the P1 values at different τ. Various line colors represent the eight methods. n = 200, p₀ = 0.8, p₁ = 0.6. Figure S7: Plots of P2 for four levels of ρ. In each subplot, the Y axis reports the P2 values at different τ. Various line colors represent the eight methods. n = 200, p₀ = 0.8, p₁ = 0.6. Figure S8: Case n = 400, ρ = 0.5, p₀ = 0.8, p₁ = 0.6. Reports of TP, FP, P1 and P2 at different τ. Various line colors represent the eight methods. Figure S9: Case n = 400, ρ = 0.5, p₀ = 0.8, p₁ = 0.6, Σ₂. Reports of TP, FP, P1 and P2 at different τ. Various line colors represent the eight methods. Figure S10: Case n = 400, ρ = 0.5, p₀ = 0.6, p₁ = 0.45, Σ₁. Reports of TP, FP, P1 and P2 at different τ. Various line colors represent the eight methods. Figure S11: Case n = 400, ρ = 0.5, p₀ = 0.8, p₁ = 0.6, Σ₂, t(3). Reports of TP, FP, P1 and P2 at different τ. Various line colors represent the eight methods.

Author Contributions

Conceptualization, E.L., J.P., M.T. (Manlai Tang), K.Y., W.K.H., X.D. and M.T. (Maozai Tian); Methodology, E.L.; Software, E.L.; Validation, E.L.; Formal analysis, E.L.; Investigation, E.L.; Resources, E.L.; Data curation, E.L., J.P., M.T. (Manlai Tang), K.Y., W.K.H., X.D. and M.T. (Maozai Tian); Writing—original draft, E.L.; Writing—review & editing, E.L., J.P., M.T. (Manlai Tang), K.Y., W.K.H., X.D. and M.T. (Maozai Tian); Visualization, E.L.; Supervision, J.P., M.T. (Manlai Tang), K.Y., W.K.H. and M.T. (Maozai Tian); Project administration, E.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China (Grant No. 12101015), the Scientific Research Foundation of North China University of Technology (No. 110051360002), the Fundamental Research Funds for Beijing Universities, NCUT (No. 110052971921/007), the National Natural Science Foundation of China (No. 11861042), and the China Statistical Research Project (No. 2020LZ25).

Data Availability Statement

Publicly available datasets were analyzed in this study. These data can be found here: https://doi.org/10.1038/bmt.2009.359.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Technical Details of Proofs

To simplify the presentation, we omit $\tau$ in expressions such as $\beta(\tau)$.
Since the weights $w_i$ depend on $F_1^*$, we write them as $w_i(F_1^*)$. Additionally, we define $M_n(\beta, F_1^*) = n^{-1}\sum_{i=1}^{n} m_i(\beta, F_1^*)$ as the subgradient of the weighted quantile objective function (11), where
$$
\begin{aligned}
m_i(\beta, F_1^*) ={}& Z_i\big\{\tau - w_i(F_1^*)\, I\big(g^{-1}(X_i) \le Z_i^\top \beta\big)\big\} \\
={}& Z_i\Big[\tau - I\{\epsilon_i = 1,\, T_i \le C_i,\, C_i \le g(Z_i^\top \beta)\} - I\{\epsilon_i = 1,\, T_i \le C_i,\, g^{-1}(T_i) \le Z_i^\top \beta,\, C_i > g(Z_i^\top \beta)\} \\
&\quad - \frac{\tau - F_1^*(C_i)}{1 - F_1^*(C_i)}\, I\{F_1^*(C_i) \le \tau,\, C_i \le g(Z_i^\top \beta)\}\big(1 - I\{T_i \le C_i,\, \epsilon_i = 1\}\big) \\
&\quad - \frac{\tau - F_1^*(C_i)}{1 - F_1^*(C_i)}\, I\{\epsilon_i = 2,\, F_1^*(C_i \mid Z_i) \le \tau,\, T_i \le g(Z_i^\top \beta),\, C_i > g(Z_i^\top \beta)\}\Big].
\end{aligned}
$$
Let $M(\beta, F_1^*) = E\{m_i(\beta, F_1^*)\} = E\big[Z\{\tau - H(g(Z^\top \beta) \mid Z) - R(\beta, F_1^*) - J(\beta, F_1^*)\}\big]$, where
$$
\begin{aligned}
H(t \mid Z) &= \int_0^{t} F_1(u \mid Z)\, g_0(u)\,du + \big(1 - G(t)\big) F_1(t \mid Z), \\
R(\beta, F_1^*) &= E_{C \mid Z}\bigg[\frac{\tau - F_1^*(C)}{1 - F_1^*(C)}\, I\{F_1^*(C) \le \tau,\, C \le g(Z^\top \beta)\}\big(1 - I\{T \le C,\, \epsilon = 1\}\big)\bigg] \\
&= \int_0^{g(Z^\top \beta)} g_0(u)\, I\{F_1^*(u) \le \tau\}\big(1 - F_1(u \mid Z)\big) \frac{\tau - F_1^*(u)}{1 - F_1^*(u)}\,du, \\
J(\beta, F_1^*) &= E_{C \mid Z}\bigg[I\{\epsilon = 2,\, F_1^*(C) \le \tau,\, T \le g(Z^\top \beta),\, C > g(Z^\top \beta)\}\, \frac{\tau - F_1^*(C)}{1 - F_1^*(C)}\bigg] \\
&= \big(F_0(g(Z^\top \beta) \mid Z) - F_1(g(Z^\top \beta) \mid Z)\big) \int_{g(Z^\top \beta)}^{\infty} I\{F_1^*(u) \le \tau\}\, \frac{\tau - F_1^*(u)}{1 - F_1^*(u)}\, g_0(u)\,du,
\end{aligned}
$$
where $g_0(u)$ is the conditional density of the censoring variable $C$ given $Z$, and $F_0(t \mid Z) = P(T \le t \mid Z)$. It is noteworthy that $J(\beta_0, F_1) = 0$, and it is then easy to derive that $M(\beta_0, F_1) = 0$.
Lemma A1.
Assume that Assumptions A1–A3 and C1–C3 hold. Then
$$
\|\hat F_1 - F_1\|_H \doteq \sup_t \sup_z \big|\hat F_1(t \mid z) - F_1(t \mid z)\big| = o_p(n^{-1/2 + r})
$$
for every $r > 0$.
Remark A1.
Lemma A1 directly guarantees the consistency of the estimated weight $w_i(\hat F_1)$ for $w_i(F_1)$, which is the $w_{0i}$ in Equation (6).
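For intuition only, the redistribution-of-mass weighting can be sketched as below; this is our reading of the scheme (the exact definition is Equation (6) in the main text), with hypothetical argument names:

```python
def crq_weight(tau, F1_at_C, type1_event_observed):
    """Sketch of a redistribution-of-mass weight w_i for one subject.

    type1_event_observed: the cause-1 event was observed (delta_i = 1, eps_i = 1).
    F1_at_C: plug-in value F1*(C_i | Z_i) at the censoring/competing-risk time.
    """
    if type1_event_observed:
        return 1.0  # fully observed cause-1 events keep full weight
    if F1_at_C < tau:
        # partial weight; the remaining 1 - w_i is shifted to the pseudo-value X+
        return (tau - F1_at_C) / (1.0 - F1_at_C)
    return 0.0
```

Plugging in the estimate $\hat F_1$ gives $w_i(\hat F_1)$, whose consistency for $w_i(F_1)$ is exactly what the remark asserts.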
Proof. 
By Conditions C1 and A1 and [17], Ref. [5] showed that for every $r > 0$, $\sup_{t < \nu} |\hat G(t) - G(t)| = o(n^{-1/2 + r})$, a.s. This, coupled with C2, implies that
$$
\sup_x \bigg| n^{-1}\sum_{i=1}^n \frac{I\{X_i \le x\}\, I(\delta_i \epsilon_i = 1)}{1 - \hat G(X_i)} - n^{-1}\sum_{i=1}^n \frac{I\{X_i \le x\}\, I(\delta_i \epsilon_i = 1)}{1 - G(X_i)} \bigg| = o(n^{-1/2 + r}), \quad \text{a.s.}
$$
At the same time, for $t < \nu$, $1 - G(t)$ is uniformly bounded away from 0; thus, by Chebyshev's inequality, for every $r > 0$,
$$
P\bigg( n^{1/2 - r} \bigg| n^{-1}\sum_{i=1}^n \frac{I\{X_i \le x\}\, I(\delta_i \epsilon_i = 1)}{1 - G(X_i)} - n^{-1}\sum_{i=1}^n E\bigg[\frac{I\{X_i \le x\}\, I(\delta_i \epsilon_i = 1)}{1 - G(X_i)} \,\bigg|\, Z_i\bigg] \bigg| \ge \varepsilon \bigg) \le \frac{n^{-2r}\, \mathrm{Var}\big(I\{X_i \le x\}\, I(\delta_i \epsilon_i = 1) \mid Z_i\big)}{\varepsilon^2} \to 0, \quad n \to \infty,
$$
which holds for any $x$; that is,
$$
\sup_{x, z} \bigg| n^{-1}\sum_{i=1}^n \frac{I\{X_i \le x\}\, I(\delta_i \epsilon_i = 1)}{1 - G(X_i)} - F_1(x \mid Z_i) \bigg| = o_p(n^{-1/2 + r}).
$$
Combining Equations (A2) and (A3), we have
$$
\sup_{x, z} \big| \hat F_1(x \mid Z) - F_1(x \mid Z) \big| = o_p(n^{-1/2 + r})
$$
uniformly in $Z$; that is,
$$
\|\hat F_1 - F_1\|_H \doteq \sup_t \sup_z \big|\hat F_1(t \mid z) - F_1(t \mid z)\big| = o_p(n^{-1/2 + r}). \qquad \square
$$
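The estimator $\hat F_1$ behind this lemma, an inverse-probability-of-censoring-weighted average with a Kaplan–Meier plug-in for $G$, can be sketched numerically as follows. This is an illustrative reimplementation under our own conventions (simplistic tie handling, a guarded denominator), not the authors' code:

```python
import numpy as np

def km_censoring_cdf_at_obs(X, delta):
    """Kaplan-Meier estimate of the censoring CDF G, evaluated at each X_i-."""
    order = np.argsort(X)
    n = len(X)
    G_at = np.empty(n)
    surv = 1.0          # running censoring survival S_C(t-)
    at_risk = n
    for idx in order:
        G_at[idx] = 1.0 - surv
        if delta[idx] == 0:            # a censoring event occurs at X_idx
            surv *= 1.0 - 1.0 / at_risk
        at_risk -= 1
    return G_at

def f1_hat(x_grid, X, delta, eps):
    """IPCW estimate of the cause-1 cumulative incidence F1 on a grid."""
    G_at = km_censoring_cdf_at_obs(X, delta)
    cause1 = (delta == 1) & (eps == 1)
    contrib = cause1 / np.maximum(1.0 - G_at, 1e-10)   # guard the denominator
    return np.array([np.mean(contrib * (X <= x)) for x in x_grid])
```

With no censoring, this collapses to the empirical cause-1 subdistribution, which is the baseline against which the lemma's uniform $o_p(n^{-1/2+r})$ rate can be checked in simulation.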
Lemma A2.
For all positive values $\varepsilon_n = o(1)$, we have
$$
\sup_{\|\beta - \beta_0\| \le \varepsilon_n,\ \|F_1^* - F_1\|_H \le \varepsilon_n} \big\| M_n(\beta, F_1^*) - M(\beta, F_1^*) - M_n(\beta_0, F_1) \big\| = o_p(n^{-1/2}).
$$
Proof. 
Let $Z_{ij}$ and $m_{ij}$ denote the $j$th coordinates of $Z_i$ and $m_i$, respectively. For notational simplicity, in the following we omit the subscript $i$ in expressions such as $Z_i$, $Z_{ij}$, $T_i$, $C_i$. Let $K_j$, $j = 1, \ldots, 5$, be positive constants. Note that for $j = 1, \ldots, p$, and for two pairs $(\beta, F_1^*)$ and $(\beta', F_1^{*\prime})$,
$$
\big| m_j(\beta', F_1^{*\prime}) - m_j(\beta, F_1^*) \big|^2 \le 4\,(B_1 + B_2 + B_3 + B_4),
$$
where
$$
\begin{aligned}
B_1 &= Z_j^2 \big| I\{\epsilon = 1,\, T \le C,\, C \le g(Z^\top \beta)\} - I\{\epsilon = 1,\, T \le C,\, C \le g(Z^\top \beta')\} \big|, \\
B_2 &= Z_j^2 \big| I\{\epsilon = 1,\, T \le C,\, T \le g(Z^\top \beta),\, C > g(Z^\top \beta)\} - I\{\epsilon = 1,\, T \le C,\, T \le g(Z^\top \beta'),\, C > g(Z^\top \beta')\} \big|, \\
B_3 &= Z_j^2 \bigg| \frac{\tau - F_1^*(C)}{1 - F_1^*(C)}\, I\{F_1^*(C) \le \tau,\, C \le g(Z^\top \beta)\}\big(1 - I\{T \le C,\, \epsilon = 1\}\big) - \frac{\tau - F_1^{*\prime}(C)}{1 - F_1^{*\prime}(C)}\, I\{F_1^{*\prime}(C) \le \tau,\, C \le g(Z^\top \beta')\}\big(1 - I\{T \le C,\, \epsilon = 1\}\big) \bigg|, \\
B_4 &= Z_j^2 \bigg| \frac{\tau - F_1^*(C)}{1 - F_1^*(C)}\, I\{\epsilon = 2,\, F_1^*(C \mid Z) \le \tau,\, T \le g(Z^\top \beta),\, C > g(Z^\top \beta)\} - \frac{\tau - F_1^{*\prime}(C)}{1 - F_1^{*\prime}(C)}\, I\{\epsilon = 2,\, F_1^{*\prime}(C) \le \tau,\, T \le g(Z^\top \beta'),\, C > g(Z^\top \beta')\} \bigg|.
\end{aligned}
$$
It is easy to verify that
$$
\sup_{\beta': \|\beta' - \beta\| \le \varepsilon_n} \big| I(g(Z^\top \beta') < C) - I(g(Z^\top \beta) < C) \big| \le I\big(g(Z^\top \beta) - \varepsilon_n \|Z\| < C\big) - I\big(g(Z^\top \beta) + \varepsilon_n \|Z\| < C\big),
$$
possibly multiplied by a constant, by Assumption C3. Therefore, by Assumptions A1 and A2,
$$
E \sup_{\beta': \|\beta' - \beta\| \le \varepsilon_n} B_1 = E \sup_{\beta': \|\beta' - \beta\| \le \varepsilon_n} Z_j^2 \big| I\{C \le g(Z^\top \beta)\} - I\{C \le g(Z^\top \beta')\} \big| \le E\Big[ \|Z\|^3 \big\{ G(g(Z^\top \beta) + \varepsilon_n) - G(g(Z^\top \beta) - \varepsilon_n) \big\} \Big] \le K_1 \varepsilon_n.
$$
Following similar arguments, we can show that
$$
\begin{aligned}
E \sup_{\beta': \|\beta' - \beta\| \le \varepsilon_n} B_2 &= E \sup_{\beta': \|\beta' - \beta\| \le \varepsilon_n} Z_j^2\, I(\epsilon = 1)\, \big| I\{T \le g(Z^\top \beta),\, C > g(Z^\top \beta)\} - I\{T \le g(Z^\top \beta'),\, C > g(Z^\top \beta')\} \big| \\
&\le E\Big[ \|Z\|^3 \big\{ G(g(Z^\top \beta) + \varepsilon_n) - G(g(Z^\top \beta) - \varepsilon_n) \big\} + \|Z\|^3 \big\{ F_1(g(Z^\top \beta) + \varepsilon_n) - F_1(g(Z^\top \beta) - \varepsilon_n) \big\} \Big] \le K_2 \varepsilon_n.
\end{aligned}
$$
Note that
$$
B_3 \le Z_j^2 \bigg| \bigg(1 - \frac{1 - \tau}{1 - F_1^*(C)}\bigg) I\{F_1^*(C) \le \tau\} - \bigg(1 - \frac{1 - \tau}{1 - F_1^{*\prime}(C)}\bigg) I\{F_1^{*\prime}(C) \le \tau\} \bigg| + Z_j^2 \big| I\{C \le g(Z^\top \beta)\} - I\{C \le g(Z^\top \beta')\} \big| \doteq B_{31} + B_{32}.
$$
Similarly to $B_1$, it is easy to verify that $E \sup_{\beta': \|\beta' - \beta\| \le \varepsilon_n} B_{32} \le K_1 \varepsilon_n$. Then
$$
\begin{aligned}
B_{31} ={}& Z_j^2\, I\{F_1^*(C) < \tau,\, F_1^{*\prime}(C) < \tau\}\, \frac{(1 - \tau)\big| F_1^{*\prime}(C) - F_1^*(C) \big|}{\big(1 - F_1^*(C)\big)\big(1 - F_1^{*\prime}(C)\big)} + Z_j^2\, I\{F_1^*(C) < \tau < F_1^{*\prime}(C)\} \bigg(1 - \frac{1 - \tau}{1 - F_1^*(C)}\bigg) \\
&+ Z_j^2\, I\{F_1^{*\prime}(C) < \tau < F_1^*(C)\} \bigg(1 - \frac{1 - \tau}{1 - F_1^{*\prime}(C)}\bigg) \\
\le{}& Z_j^2\, \frac{\big| F_1^{*\prime}(C) - F_1^*(C) \big|}{1 - \tau} + Z_j^2\, I\{F_1^*(C) < \tau < F_1^{*\prime}(C)\} + Z_j^2\, I\{F_1^{*\prime}(C) < \tau < F_1^*(C)\}.
\end{aligned}
$$
Since
$$
E \sup_{F_1^{*\prime}: \|F_1^{*\prime} - F_1^*\|_H \le \varepsilon_n} I\{F_1^*(C) < \tau < F_1^{*\prime}(C)\} \le P\{F_1^*(C) < \tau < F_1^*(C) + \varepsilon_n\} \le G\{F_1^{*-1}(\tau)\} - G\{F_1^{*-1}(\tau - \varepsilon_n)\} \le K_3 \varepsilon_n,
$$
it follows by Assumption A1 that $E \sup B_{31} \le K_4 \varepsilon_n$. Consequently,
$$
E \sup_{\|\beta' - \beta\| \le \varepsilon_n,\ \|F_1^{*\prime} - F_1^*\|_H \le \varepsilon_n} B_3 \le K_5 \varepsilon_n.
$$
By arguments similar to those for $B_3$, adding and subtracting $\frac{\tau - F_1^*(C)}{1 - F_1^*(C)}\, I\{\epsilon = 2,\, F_1^*(C) \le \tau,\, T \le g(Z^\top \beta'),\, C > g(Z^\top \beta')\}$ yields
$$
B_4 \le Z_j^2 \bigg| \frac{\tau - F_1^*(C)}{1 - F_1^*(C)}\, I\{F_1^*(C) \le \tau\} - \frac{\tau - F_1^{*\prime}(C)}{1 - F_1^{*\prime}(C)}\, I\{F_1^{*\prime}(C) \le \tau\} \bigg| + Z_j^2 \big| I\{T \le g(Z^\top \beta),\, C > g(Z^\top \beta)\} - I\{T \le g(Z^\top \beta'),\, C > g(Z^\top \beta')\} \big| \doteq B_{41} + B_{42}.
$$
By the proofs for $B_{31}$ and $B_2$, we easily obtain $E \sup B_{41} \le K_4 \varepsilon_n$ and $E \sup B_{42} \le K_2 \varepsilon_n$; thus $E \sup B_4 \le K_5 \varepsilon_n$.
Therefore, condition (3.2) of [18] holds with $r = 2$ and $s_j = 1/2$, and condition (3.3) is satisfied by Remark 3(ii) of their paper. Thus, Lemma A2 follows by applying Theorem 3 of [18]. □
Proof of Theorem 1.
Note that $F_1(t \mid Z) < \tau$ is equivalent to $t < g(Z^\top \beta_0)$, and that $F_1(g(Z^\top \beta_0) \mid Z) = \tau$. Therefore, plugging the true $\beta_0$ and $F_1$ into $M$, we get
$$
M(\beta_0, F_1) = E\big[ Z\big\{ \tau - H(g(Z^\top \beta_0) \mid Z) - R(\beta_0, F_1) - J(\beta_0, F_1) \big\} \big] = 0,
$$
so $\beta_0$ solves $M(\beta, F_1) = 0$, with $M(\beta, F_1)$ a continuous function of $\beta$ on a compact parameter neighborhood $\mathcal{B}$.
Therefore, the consistency of $\hat\beta$ follows directly from Theorem 1 of [18]; we need only verify conditions (1.1), (1.2), and (1.5′) of their paper, as (1.3) is trivially satisfied and (1.4) follows from Lemma A1.
(1.1)
By the subgradient condition of quantile regression [13], there exists a vector $v$ with coordinates $|v_i| \le 1$ such that
$$
M_n(\hat\beta, \hat w) = n^{-1} \sum_{i \in \Xi} Z_i v_i = o_p(n^{-1/2})
$$
by Assumption A1, where $\Xi$ denotes a $(p+1)$-element subset of $\{1, 2, \ldots, n\}$.
(1.2)
For any $\varepsilon > 0$ and $\beta \in \mathcal{B}$,
$$
\inf_{\|\beta - \beta_0\| \ge \varepsilon} \big\| M(\beta, F_1) \big\| = \inf_{\|\beta - \beta_0\| \ge \varepsilon} \big\| M(\beta, F_1) - M(\beta_0, F_1) \big\| \ge \inf_{\|\beta - \beta_0\| \ge \varepsilon} \Big\| E\big[ Z Z^\top (\beta - \beta_0)\, g'(\xi^*)\, \big\{ 1 - G(g(\xi^*) \mid Z) \big\}\, f_1(g(\xi^*) \mid Z) \big] \Big\|,
$$
which is strictly positive under Assumptions A1 and A3. Here, $\xi^*$ is some value between $Z^\top \beta$ and $Z^\top \beta_0$.
(1.5′)
Let $\{a_n\}$ be a sequence of positive numbers approaching zero as $n \to \infty$. Note that $E\big\{ \| Z_i w_i I(X_i \le g(Z_i^\top \beta)) \|^2 \big\} \le E(\|Z_i\|^2) \le K_z$ under Assumption A1. It then follows from Chebyshev's inequality that
$$
\sup_{\beta \in \mathcal{B},\ \|F_1^* - F_1\|_H \le a_n} \big\| M_n(\beta, F_1^*) - M(\beta, F_1^*) \big\| = o_p(1).
$$
The proof of Theorem 1 is then complete by the conclusion of Theorem 1 of [18]. □
Proof of Theorem 2.
The asymptotic normality of $\hat\beta$ relies on Theorem 2 of [18]; we need to verify conditions (2.1)–(2.4), (2.5′), and (2.6′) of their paper. Conditions (2.1), (2.4), and (2.5′) hold directly by (A5), Lemma A1, and Lemma A2, respectively.
Note that, for any $C_i$ lying above the $\tau$th conditional quantile $g(Z_i^\top \beta_0)$, the quantile fit is not affected by assigning the entire weight to either $(Z_i, C_i)$ or $(Z_i, X^{+})$. We then obtain
$$
\Gamma_1(\beta_0, F_1) = \frac{\partial M(\beta, F_1)}{\partial \beta}\bigg|_{\beta = \beta_0} = -E\Big[ Z Z^\top g'(Z^\top \beta_0)\, \big\{ 1 - G(g(Z^\top \beta_0) \mid Z) \big\}\, f_1(g(Z^\top \beta_0) \mid Z) \Big],
$$
which is continuous at $\beta_0$ and of full rank under Assumption A3. For all $\beta \in \mathcal{B}$, we define the functional derivative of $M(\beta, F_1^*)$ at $F_1$ in the direction $[F_1^* - F_1]$ as
$$
\Gamma_2(\beta, F_1)[F_1^* - F_1] = \lim_{\varepsilon \to 0} \frac{1}{\varepsilon}\Big[ M\{\beta, F_1 + \varepsilon(F_1^* - F_1)\} - M\{\beta, F_1\} \Big] = \lim_{\varepsilon \to 0} \frac{1}{\varepsilon}\, E\Big[ Z\big\{ R(\beta, F_1) - R(\beta, F_{1\varepsilon}) + J(\beta, F_1) - J(\beta, F_{1\varepsilon}) \big\} \Big],
$$
where $F_{1\varepsilon} = F_1 + \varepsilon(F_1^* - F_1)$. Since
$$
\lim_{\varepsilon \to 0} \frac{1}{\varepsilon}\, E\big[ Z\{ R(\beta, F_1) - R(\beta, F_{1\varepsilon}) \} \big] = E\big[ Z\{ A_1(\beta) + A_2(\beta) \} \big] + (1 - \tau)\, E\bigg[ Z \int_0^{g(Z^\top \beta)} g_0(u)\, I\{F_1(u \mid Z) \le \tau\}\, \frac{F_1^*(u \mid Z) - F_1(u \mid Z)}{1 - F_1(u \mid Z)}\,du \bigg],
$$
where
$$
\begin{aligned}
A_1(\beta) &= \lim_{\varepsilon \to 0} \frac{1}{\varepsilon} \int_0^{g(Z^\top \beta)} g_0(u)\big(1 - F_1(u)\big)\big[ I\{F_1(u \mid Z) \le \tau\} - I\{F_{1\varepsilon}(u \mid Z) \le \tau\} \big]\,du, \\
A_2(\beta) &= \lim_{\varepsilon \to 0} \frac{1}{\varepsilon} \int_0^{g(Z^\top \beta)} g_0(u)\big(1 - F_1(u)\big)(1 - \tau)\, \frac{I\{F_{1\varepsilon}(u \mid Z) \le \tau\} - I\{F_1(u \mid Z) \le \tau\}}{1 - F_{1\varepsilon}(u \mid Z)}\,du.
\end{aligned}
$$
Similarly, we can derive
$$
\lim_{\varepsilon \to 0} \frac{1}{\varepsilon}\, E\big[ Z\{ J(\beta, F_1) - J(\beta, F_{1\varepsilon}) \} \big] = E\big[ Z\{ A_3(\beta) + A_4(\beta) \} \big] + (1 - \tau)\, E\bigg[ Z \big( F_0(g(Z^\top \beta) \mid Z) - F_1(g(Z^\top \beta) \mid Z) \big) \int_{g(Z^\top \beta)}^{\infty} g_0(u)\, I\{F_1(u \mid Z) \le \tau\}\, \frac{F_1^*(u \mid Z) - F_1(u \mid Z)}{\big(1 - F_1(u \mid Z)\big)^2}\,du \bigg],
$$
where
$$
\begin{aligned}
A_3(\beta) &= \big( F_0(g(Z^\top \beta) \mid Z) - F_1(g(Z^\top \beta) \mid Z) \big) \lim_{\varepsilon \to 0} \frac{1}{\varepsilon} \int_{g(Z^\top \beta)}^{\infty} g_0(u)\big[ I\{F_1(u \mid Z) \le \tau\} - I\{F_{1\varepsilon}(u \mid Z) \le \tau\} \big]\,du, \\
A_4(\beta) &= \big( F_0(g(Z^\top \beta) \mid Z) - F_1(g(Z^\top \beta) \mid Z) \big)(1 - \tau) \lim_{\varepsilon \to 0} \frac{1}{\varepsilon} \int_{g(Z^\top \beta)}^{\infty} g_0(u)\, \frac{I\{F_{1\varepsilon}(u \mid Z) \le \tau\} - I\{F_1(u \mid Z) \le \tau\}}{1 - F_{1\varepsilon}(u \mid Z)}\,du.
\end{aligned}
$$
For $\beta$ such that $g(Z^\top \beta) < g(Z^\top \beta_0)$, $A_1(\beta) = 0$ and $A_2(\beta) = 0$. For sufficiently small $\varepsilon$, $F_{1\varepsilon}^{-1}(\tau) > g(Z^\top \beta)$; then
$$
\begin{aligned}
A_3(\beta) &= \big( F_0(g(Z^\top \beta) \mid Z) - F_1(g(Z^\top \beta) \mid Z) \big) \lim_{\varepsilon \to 0} \frac{1}{\varepsilon}\big[ G(g(Z^\top \beta_0)) - G(F_{1\varepsilon}^{-1}(\tau \mid Z)) \big], \\
A_4(\beta) &= (1 - \tau)\big( F_0(g(Z^\top \beta) \mid Z) - F_1(g(Z^\top \beta) \mid Z) \big) \lim_{\varepsilon \to 0} \frac{1}{\varepsilon}\big[ \tilde G(F_{1\varepsilon}^{-1}(\tau \mid Z)) - \tilde G(g(Z^\top \beta_0) \mid Z) \big],
\end{aligned}
$$
where $\frac{d \tilde G(u \mid Z)}{d u} = \frac{g_0(u)}{1 - F_1(u \mid Z)}$.
For $\beta$ such that $g(Z^\top \beta) > g(Z^\top \beta_0)$, $A_3(\beta) = 0$ and $A_4(\beta) = 0$. For sufficiently small $\varepsilon$, $F_{1\varepsilon}^{-1}(\tau) < g(Z^\top \beta)$; then
$$
A_1(\beta) = \lim_{\varepsilon \to 0} \frac{1}{\varepsilon}\big[ \breve G(g(Z^\top \beta_0) \mid Z) - \breve G(F_{1\varepsilon}^{-1}(\tau \mid Z)) \big],
$$
where $\frac{d \breve G(u \mid Z)}{d u} = g_0(u)\big(1 - F_1(u \mid Z)\big)$, and
$$
A_2(\beta) = (1 - \tau) \lim_{\varepsilon \to 0} \frac{1}{\varepsilon}\big[ G(F_{1\varepsilon}^{-1}(\tau \mid Z)) - G(g(Z^\top \beta_0)) \big].
$$
For $\beta = \beta_0$, note that $I\{F_1(t \mid Z) < \tau\} = 1$ for $t \in (0, g(Z^\top \beta))$; then
$$
\begin{aligned}
A_1(\beta) &= \lim_{\varepsilon \to 0} \frac{1}{\varepsilon}\big[ \breve G(g(Z^\top \beta_0) \mid Z) - \breve G(F_{1\varepsilon}^{-1}(\tau \mid Z)) \big], \\
A_2(\beta) &= (1 - \tau) \lim_{\varepsilon \to 0} \frac{1}{\varepsilon}\big[ G(F_{1\varepsilon}^{-1}(\tau \mid Z)) - G(g(Z^\top \beta_0)) \big],
\end{aligned}
$$
and $I\{F_1(t \mid Z) < \tau\} = 0$ for $t \in (g(Z^\top \beta), \infty)$; then
$$
\begin{aligned}
A_3(\beta) &= \big( F_0(g(Z^\top \beta) \mid Z) - F_1(g(Z^\top \beta) \mid Z) \big) \lim_{\varepsilon \to 0} \frac{1}{\varepsilon}\big[ G(g(Z^\top \beta_0)) - G(F_{1\varepsilon}^{-1}(\tau \mid Z)) \big], \\
A_4(\beta) &= (1 - \tau)\big( F_0(g(Z^\top \beta) \mid Z) - F_1(g(Z^\top \beta) \mid Z) \big) \cdot \lim_{\varepsilon \to 0} \frac{1}{\varepsilon}\big[ \tilde G(F_{1\varepsilon}^{-1}(\tau \mid Z)) - \tilde G(g(Z^\top \beta_0) \mid Z) \big].
\end{aligned}
$$
By expanding $\tilde G(F_{1\varepsilon}^{-1}(\tau \mid Z))$ (treated as a function of $\varepsilon$) around $\varepsilon = 0$, and using the fact that $\frac{d}{d\varepsilon} F_{1\varepsilon}^{-1}(\tau \mid Z)\big|_{\varepsilon = 0} = \frac{\tau - F_1^*(g(Z^\top \beta_0))}{f_1(g(Z^\top \beta_0))}$ (Example 20.5 in [19]), we obtain
$$
G(F_{1\varepsilon}^{-1}(\tau \mid Z)) = G(g(Z^\top \beta_0)) + g_0(g(Z^\top \beta_0))\, \frac{\tau - F_1^*(g(Z^\top \beta_0))}{f_1(g(Z^\top \beta_0))}\, \varepsilon + O(\varepsilon^2).
$$
Similarly, we have
$$
\begin{aligned}
\breve G(F_{1\varepsilon}^{-1}(\tau \mid Z)) &= \breve G(g(Z^\top \beta_0)) + g_0(g(Z^\top \beta_0))\big(1 - F_1(g(Z^\top \beta_0))\big) \cdot \frac{\tau - F_1^*(g(Z^\top \beta_0))}{f_1(g(Z^\top \beta_0))}\, \varepsilon + O(\varepsilon^2), \\
\tilde G(F_{1\varepsilon}^{-1}(\tau \mid Z)) &= \tilde G(g(Z^\top \beta_0)) + \frac{g_0(g(Z^\top \beta_0))}{1 - F_1(g(Z^\top \beta_0))} \cdot \frac{\tau - F_1^*(g(Z^\top \beta_0))}{f_1(g(Z^\top \beta_0))}\, \varepsilon + O(\varepsilon^2).
\end{aligned}
$$
Therefore, for $\beta$ such that $g(Z^\top \beta) < g(Z^\top \beta_0)$,
$$
A_3(\beta) + A_4(\beta) = \big( F_0(g(Z^\top \beta) \mid Z) - F_1(g(Z^\top \beta) \mid Z) \big)\, g_0(g(Z^\top \beta_0))\, \frac{\tau - F_1^*(g(Z^\top \beta_0))}{f_1(g(Z^\top \beta_0))} \cdot \frac{F_1(g(Z^\top \beta_0)) - \tau}{1 - F_1(g(Z^\top \beta_0))} = 0,
$$
and for $\beta$ such that $g(Z^\top \beta) \ge g(Z^\top \beta_0)$,
$$
A_1(\beta) + A_2(\beta) = g_0(g(Z^\top \beta_0))\big( F_1(g(Z^\top \beta_0)) - \tau \big)\, \frac{\tau - F_1^*(g(Z^\top \beta_0))}{f_1(g(Z^\top \beta_0))} = 0.
$$
That is,
$$
\begin{aligned}
\Gamma_2(\beta_0, F_1)[F_1^* - F_1] ={}& (1 - \tau)\, E\bigg[ Z \int_0^{g(Z^\top \beta_0)} g_0(u)\, I\{F_1(u \mid Z) \le \tau\}\, \frac{F_1^*(u \mid Z) - F_1(u \mid Z)}{1 - F_1(u \mid Z)}\,du \bigg] \\
&+ (1 - \tau)\, E\bigg[ Z \big( F_0(g(Z^\top \beta_0) \mid Z) - F_1(g(Z^\top \beta_0) \mid Z) \big) \int_{g(Z^\top \beta_0)}^{\infty} g_0(u)\, I\{F_1(u \mid Z) \le \tau\}\, \frac{F_1^*(u \mid Z) - F_1(u \mid Z)}{\big(1 - F_1(u \mid Z)\big)^2}\,du \bigg].
\end{aligned}
$$
With this Taylor expansion, we can verify condition (2.3) of [18] under Assumptions A1 and A2.
Next, we verify condition (2.6′). Combining (A6) and the analysis above, we have
$$
\Gamma_2(\beta_0, F_1)[\hat F_1 - F_1] = (1 - \tau)\, E\bigg[ Z \int_0^{g(Z^\top \beta_0)} g_0(u)\, \frac{\hat F_1(u \mid Z) - F_1(u \mid Z)}{1 - F_1(u \mid Z)}\,du \bigg].
$$
Denote $F_1^G(t \mid Z) = \frac{1}{n}\sum_{i=1}^n \frac{I\{X_i \le t,\, \delta_i \epsilon_i = 1\}}{1 - G(X_i)}$, $N_i^G(t) = I(X_i \le t,\, \delta_i \epsilon_i = 0)$, $Y_i(t) = I(X_i \ge t)$, $y(t) = P(X \ge t)$, $\lambda^G(t) = \lim_{\Delta \to 0} \Delta^{-1} P\{X \in (t, t + \Delta) \mid X \ge t\}$, $\Lambda^G(t) = \int_0^t \lambda^G(s)\,ds$, and $M_i^G(t) = N_i^G(t) - \int_0^t Y_i(s)\,d\Lambda^G(s)$. Following the proof in [5],
$$
\sup_{t \in [0, \nu)} \bigg| n^{1/2}\{\hat G(t) - G(t)\} - n^{-1/2}\sum_{i=1}^n \{1 - G(t)\} \int_0^t y(s)^{-1}\,dM_i^G(s) \bigg| \to 0
$$
by [17], and $n^{-1}\sum_{i=1}^n Y_i(t)\, I\{X_i \le x\}\, I(\delta_i \epsilon_i = 1)\{1 - G(X_i)\}^{-1}$ converges to $\pi(x, t)$ uniformly in both $x \in \mathbb{R}$ and $t \in [0, \nu)$, where $\pi(x, t) = E\big[ Y_i(t)\, I\{X_i \le x\}\, I(\delta_i \epsilon_i = 1)\{1 - G(X_i)\}^{-1} \big]$. Then
$$
\begin{aligned}
\hat F_1(x \mid Z) - F_1(x \mid Z) &= \big\{ F_1^G(x \mid Z) - F_1(x \mid Z) \big\} + \big\{ \hat F_1(x \mid Z) - F_1^G(x \mid Z) \big\} \\
&= \frac{1}{n}\sum_{i=1}^n \xi_{1,i}(x) + \frac{1}{n}\sum_{i=1}^n \frac{\hat G(X_i) - G(X_i)}{\{1 - \hat G(X_i)\}\{1 - G(X_i)\}}\, I(X_i \le x)\, I(\delta_i \epsilon_i = 1) \\
&\approx \frac{1}{n}\sum_{i=1}^n \xi_{1,i}(x) + \frac{1}{n}\sum_{i=1}^n \frac{n^{-1}\sum_{j=1}^n \int_0^{X_i} y(s)^{-1}\,dM_j^G(s)}{1 - G(X_i)}\, I(X_i \le x)\, I(\delta_i \epsilon_i = 1) \\
&= \frac{1}{n}\sum_{i=1}^n \xi_{1,i}(x) + \frac{1}{n}\sum_{i=1}^n \int_0^{\infty} \bigg\{ n^{-1}\sum_{j=1}^n \frac{Y_j(s)\, I(X_j \le x)\, I(\delta_j \epsilon_j = 1)}{1 - G(X_j)} \bigg\} \frac{dM_i^G(s)}{y(s)} \\
&\approx \frac{1}{n}\sum_{i=1}^n \xi_{1,i}(x) + \frac{1}{n}\sum_{i=1}^n \int_0^{\infty} \pi(x, s)\, \frac{dM_i^G(s)}{y(s)} = \frac{1}{n}\sum_{i=1}^n \big\{ \xi_{1,i}(x) + \xi_{2,i}(x) \big\},
\end{aligned}
$$
where $\approx$ denotes asymptotic equivalence uniformly in $\tau \in [\tau_L, \tau_U]$, $\xi_{1,i}(x) = I(X_i \le x)\, I(\delta_i \epsilon_i = 1)\{1 - G(X_i)\}^{-1} - F_1(x \mid Z)$, and $\xi_{2,i}(x) = \int_0^{\infty} \pi(x, s)\, y(s)^{-1}\,dM_i^G(s)$, $i = 1, \ldots, n$. As similarly derived in [5], $\int_0^{\infty} \pi(x, s)\, y(s)^{-1}\,dM_i^G(s)$ is Lipschitz in $x$, and $\hat F_1(x \mid Z) - F_1(x \mid Z)$ converges weakly to a mean-zero Gaussian process with covariance function $\Sigma(x) = E\{\xi_1(x)\xi_1(x)^\top\}$.
Then, by (A7),
$$
\Gamma_2(\beta_0, F_1)[\hat F_1 - F_1] \approx (1 - \tau)\, n^{-1}\sum_{i=1}^n E_z\bigg[ Z \int_0^{g(Z^\top \beta_0)} g_0(u)\, \frac{\xi_{1,i}(u) + \xi_{2,i}(u)}{1 - F_1(u \mid Z)}\,du \bigg] = (1 - \tau)\, n^{-1}\sum_{i=1}^n \phi_i,
$$
where
$$
\phi_i = E_z\bigg[ Z \int_0^{g(Z^\top \beta_0)} g_0(u)\, \frac{\xi_{1,i}(u) + \xi_{2,i}(u)}{1 - F_1(u \mid Z)}\,du \bigg]
$$
is a random vector with mean 0 and $E\|\phi_i\|^2 < \infty$ by Assumptions A1–A3.
Recall that $M_n(\beta_0, F_1) = n^{-1}\sum_{i=1}^n m_i(\beta_0, F_1)$ is an average of independent mean-zero random vectors, with
$$
\begin{aligned}
m_i(\beta_0, F_1) ={}& Z_i\Big[ \tau - I\{\epsilon_i = 1,\, T_i \le C_i,\, C_i \le g(Z_i^\top \beta_0)\} - I\{\epsilon_i = 1,\, T_i \le C_i,\, g^{-1}(T_i) \le Z_i^\top \beta_0,\, C_i > g(Z_i^\top \beta_0)\} \\
&\quad - \frac{\tau - F_1(C_i)}{1 - F_1(C_i)}\, I\{F_1(C_i) \le \tau,\, C_i \le g(Z_i^\top \beta_0)\}\big(1 - I\{T_i \le C_i,\, \epsilon_i = 1\}\big) \Big] \\
\doteq{}& Z_i\,[\tau - D_1 - D_2 - D_3].
\end{aligned}
$$
Since $E\,m_i(\beta_0, F_1) = 0$ and $D_k D_l = 0$ for $k \ne l$, it is easy to verify that
$$
\mathrm{Cov}\{m_i(\beta_0, F_1)\} = E_{Z, C}\bigg[ Z_i Z_i^\top \bigg\{ \tau(1 - \tau)\, I\big(C_i > g(Z_i^\top \beta_0)\big) + I\big(C_i \le g(Z_i^\top \beta_0)\big)\, \frac{F_1(C_i)(1 - \tau)^2}{1 - F_1(C_i)} \bigg\} \bigg] \doteq d_1.
$$
Then applying the central limit theorem gives
$$
n^{1/2}\big\{ M_n(\beta_0, F_1) + \Gamma_2(\beta_0, F_1)[\hat F_1 - F_1] \big\} \xrightarrow{\;D\;} N(0, V),
$$
where
$$
V = \mathrm{Cov}\big\{ m_i(\beta_0, F_1) + (1 - \tau)\phi_i \big\} \doteq d_1 + d_2 + d_2^\top + d_3, \qquad d_2 = (1 - \tau)\, E\big\{ m_i(\beta_0, F_1)\, \phi_i^\top \big\}, \quad d_3 = (1 - \tau)^2\, E\big\{ \phi_i \phi_i^\top \big\}.
$$
The proof of (14) is thus complete by Theorem 2 of [18]. □
Proof of Theorem 3.
Let $\hat{\mathcal{A}}_n = \{j : \tilde\beta_j \ne 0\}$. We first show that for any $j \notin \mathcal{A}$, $P(j \in \hat{\mathcal{A}}_n) \to 0$ as $n \to \infty$. Suppose there exists a $k \in \mathcal{A}^c$ such that $|\tilde\beta_k| \ne 0$, and let $\beta^*$ be the vector obtained by replacing $\tilde\beta_k$ with 0 in $\tilde\beta$. For simplicity, write $\hat w_i = w_i(\hat F_1)$. Note that $|\rho_\tau(a) - \rho_\tau(b)| \le |a - b| \max\{\tau, 1 - \tau\} < |a - b|$. Therefore, for large enough $n$,
$$
\begin{aligned}
Q_p(\tilde\beta, \hat w_i) - Q_p(\beta^*, \hat w_i) ={}& \sum_{i=1}^n \hat w_i \big\{ \rho_\tau\big(g^{-1}(X_i) - Z_i^\top \tilde\beta\big) - \rho_\tau\big(g^{-1}(X_i) - Z_i^\top \beta^*\big) \big\} \\
&+ \sum_{i=1}^n (1 - \hat w_i)\big\{ \rho_\tau\big(g^{-1}(X^{+}) - Z_i^\top \tilde\beta\big) - \rho_\tau\big(g^{-1}(X^{+}) - Z_i^\top \beta^*\big) \big\} + p_{\lambda_n}(|\tilde\beta_k|) \\
\ge{}& -2\sum_{i=1}^n \|Z_i\| \cdot |\tilde\beta_k| + \frac{\lambda_n}{|\hat\beta_k|^{\gamma}}\, |\tilde\beta_k|.
\end{aligned}
$$
By Theorem 1, $\hat\beta_k - \beta_k = O_p(n^{-1/2})$ and $\beta_k = 0$; thus $\hat\beta_k = O_p(n^{-1/2})$. Since $n^{-1}\sum_{i=1}^n \|Z_i\| = O_p(1)$ and $n^{-1}\lambda_n |\hat\beta_k|^{-\gamma} \asymp n^{\gamma/2 - 1}\lambda_n \to \infty$, this yields
$$
Q_p(\tilde\beta, \hat w_i) - Q_p(\beta^*, \hat w_i) \ge -2\sum_{i=1}^n \|Z_i\| \cdot |\tilde\beta_k| + \frac{\lambda_n}{|\hat\beta_k|^{\gamma}}\, |\tilde\beta_k| \ge |\tilde\beta_k|\, n\big\{ -O_p(1) + c^* n^{\gamma/2 - 1}\lambda_n \big\} > 0, \quad \text{as } n \to \infty,
$$
where $c^*$ is any positive constant. This contradicts the fact that $Q_p(\tilde\beta, \hat w_i) \le Q_p(\beta^*, \hat w_i)$.
We next show that for any $j \in \mathcal{A}$, $P(j \notin \hat{\mathcal{A}}_n) \to 0$. Write $b_{\mathcal{A}} = (b_j, j \in \mathcal{A})$ for any vector $b \in \mathbb{R}^p$, and let $B_{\mathcal{A}\mathcal{A}}$ denote the submatrix of a $(p+1) \times (p+1)$ matrix $B$ with both row and column indices in $\mathcal{A}$. By Taylor expansion,
$$
M_n(\beta_{\mathcal{A}}, F_1^*) = M_n(\beta_{0\mathcal{A}}, F_1) + \Gamma_{1\mathcal{A}\mathcal{A}}\,(\beta_{\mathcal{A}} - \beta_{0\mathcal{A}}) + \Gamma_{2\mathcal{A}\mathcal{A}}(\beta_{0\mathcal{A}}, F_1)[F_1^* - F_1] + o_p(n^{-1/2})
$$
uniformly over $\beta_{\mathcal{A}}$ and $F_1^*$ such that $\|\beta_{\mathcal{A}} - \beta_{0\mathcal{A}}\| = O(n^{-1/2})$ and $\|F_1^* - F_1\|_H = o(n^{-1/2 + r})$. Letting $\beta_{\mathcal{A}} - \beta_{0\mathcal{A}} = n^{-1/2} u$, we have
$$
n\, u^\top M_n(\beta_{\mathcal{A}}, \hat F_1) = n\, u^\top \big\{ M_n(\beta_{0\mathcal{A}}, F_1) + \Gamma_{2\mathcal{A}\mathcal{A}} \big\} + n^{1/2}\, u^\top \Gamma_{1\mathcal{A}\mathcal{A}}\, u + o_p(n^{1/2}),
$$
where $\Gamma_{2\mathcal{A}\mathcal{A}} = \Gamma_{2\mathcal{A}\mathcal{A}}(\beta_{0\mathcal{A}}, F_1)[\hat F_1 - F_1]$. Therefore, with probability tending to 1,
$$
\big| n\, u^\top M_n(\beta_{\mathcal{A}}, \hat F_1) \big| \ge \big| n^{1/2}\, u^\top \Gamma_{1\mathcal{A}\mathcal{A}}\, u \big| - \big| n\, u^\top \{ M_n(\beta_{0\mathcal{A}}, F_1) + \Gamma_{2\mathcal{A}\mathcal{A}} \} + o(n^{1/2}) \big| \ge k_0\, n^{1/2 + r}
$$
for some positive $k_0$ and $r > 0$. However, the subgradient condition (A5) requires that
$$
\big| n\, u^\top M_n(\beta_{\mathcal{A}}, \hat F_1) \big| \le \lambda_n \sum_{j \in \mathcal{A}} |\hat\beta_j|^{-\gamma}\, \big| \tau - I(\tilde\beta_j < 0) \big| + O_p\Big( \max_i \|Z_i\| \Big).
$$
When $\lambda_n = o(n^{1/2})$ and Assumption A1 holds, (A13) and (A14) imply that the subgradient condition cannot hold if $\|\tilde\beta_{\mathcal{A}} - \beta_{0\mathcal{A}}\| = K n^{-1/2}$ for some positive $K$. Using the monotonicity argument in [20], the subgradient condition also cannot hold if $\|\tilde\beta_{\mathcal{A}} - \beta_{0\mathcal{A}}\| > K n^{-1/2}$. Therefore, $\|\tilde\beta_{\mathcal{A}} - \beta_{0\mathcal{A}}\| \le K n^{-1/2}$ with probability tending to 1; equivalently, for all $j \in \mathcal{A}$, $P(j \in \hat{\mathcal{A}}_n) \to 1$, i.e., $P(j \notin \hat{\mathcal{A}}_n) \to 0$. The proof of Theorem 3 is thus complete. □
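As a practical footnote to Theorem 3: the ALASSO penalty weights coefficient $j$ by $|\hat\beta_j|^{-\gamma}$, and a standard trick in the style of [11] lets any plain LASSO routine solve the adaptive problem by rescaling the design matrix and mapping the solution back. The sketch below uses our own function names and is not tied to the authors' implementation:

```python
import numpy as np

def alasso_rescale(Z, beta_init, gamma=1.0, eps=1e-8):
    """Column-rescale Z so that a plain LASSO on Z_tilde solves the adaptive LASSO.

    With beta_j = scale_j * b_j, the fit Z beta equals Z_tilde b and the penalty
    lambda * sum_j |beta_j| / |beta_init_j|^gamma becomes lambda * sum_j |b_j|.
    """
    scale = np.abs(beta_init) ** gamma + eps   # eps guards exact zeros
    return Z * scale, scale

def alasso_map_back(b_tilde, scale):
    # map the plain-LASSO solution back to the adaptive-LASSO scale
    return b_tilde * scale
```

Components with small initial estimates $\hat\beta_j$ get small column scales, so the plain LASSO shrinks them to zero more readily, which is the mechanism behind the selection consistency proved above.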

References

  1. Scrucca, L.; Santucci, A.; Aversa, F. Regression modeling of competing risk using R: An in-depth guide for clinicians. Bone Marrow Transpl. 2010, 45, 1388–1395.
  2. Fine, J.P.; Gray, R.J. A proportional hazards model for the subdistribution of a competing risk. J. Am. Stat. Assoc. 1999, 94, 496–509.
  3. Fu, Z.; Parikh, C.R.; Zhou, B.J. Penalized variable selection in competing risks regression. Lifetime Data Anal. 2017, 23, 353–376.
  4. Koenker, R.W.; Bassett, G. Regression quantiles. Econometrica 1978, 46, 33–50.
  5. Peng, L.; Fine, J.P. Competing risks quantile regression. J. Am. Stat. Assoc. 2009, 104, 1440–1453.
  6. Sun, Y.Q.; Wang, H.; Gilbert, P. Quantile regression for competing risks data with missing cause of failure. Stat. Sin. 2012, 22, 703–728.
  7. Ahn, K.W.; Kim, S. Variable selection with group structure in competing risks quantile regression. Stat. Med. 2018, 37, 1577–1586.
  8. Li, E.; Tian, M.; Tang, M. Variable selection in competing risks models based on quantile regression. Stat. Med. 2019, 38, 4670–4685.
  9. Wang, H.J.; Wang, L. Locally weighted censored quantile regression. J. Am. Stat. Assoc. 2009, 104, 1117–1128.
  10. Tibshirani, R. Regression shrinkage and selection via the lasso. J. R. Stat. Soc. Ser. B 1996, 58, 267–288.
  11. Zou, H. The adaptive lasso and its oracle properties. J. Am. Stat. Assoc. 2006, 101, 1418–1429.
  12. Gray, R.J. A class of K-sample tests for comparing the cumulative incidence of a competing risk. Ann. Stat. 1988, 16, 1141–1154.
  13. Koenker, R. Quantile Regression; Cambridge University Press: New York, NY, USA, 2005.
  14. Wang, H.J.; Zhou, J.; Li, Y. Variable selection for censored quantile regression. Stat. Sin. 2013, 23, 145–167.
  15. Robins, J.M.; Rotnitzky, A. Recovery of information and adjustment for dependent censoring using surrogate markers. In AIDS Epidemiology: Methodological Issues; Jewell, N., Dietz, K., Farewell, V., Eds.; Birkhäuser: Boston, MA, USA, 1992; pp. 24–33.
  16. Kaplan, E.L.; Meier, P. Nonparametric estimation from incomplete observations. J. Am. Stat. Assoc. 1958, 53, 457–481.
  17. Pepe, M.S. Inference for events with dependent risks in multiple endpoint studies. J. Am. Stat. Assoc. 1991, 86, 770–778.
  18. Chen, X.; Linton, O.; Van Keilegom, I. Estimation of semiparametric models when the criterion function is not smooth. Econometrica 2003, 71, 1591–1608.
  19. van der Vaart, A.W. Asymptotic Statistics; Cambridge University Press: Cambridge, UK, 1998.
  20. Jurečková, J. Asymptotic relations of M-estimates and R-estimates in linear regression. Ann. Stat. 1977, 5, 464–472.
Figure 1. Case n = 200 , s = 30 , p 0 = 0.8 , p 1 = 0.6 . Comparison of TPr and FPr for four levels of ρ . In each subplot, the Y axis reports the TPr and FPr values at different τ . The solid line is TPr and the dashed line is FPr. Four colors are used to represent the methods: red for wcqr, green for wcqr0, blue for wcqr1, and purple for wcqr2. Light colors represent the LASSO penalty and dark colors are the ALASSO penalty for all methods.
Figure 2. Case n = 200 , s = 30 , p 0 = 0.8 , p 1 = 0.6 . Comparison of P1 and P2 for four levels of ρ . In each subplot, the Y axis reports the P1 and P2 values at different τ . The solid line is P1 and the dashed line is P2. Four colors are used to represent the methods: red for cqr, green for wcqr0, blue for wcqr1, and purple for wcqr2. Light colors represent the LASSO penalty and dark colors are the ALASSO penalty for all methods.
Figure 3. Variable selection and estimation results for intercept and β Age . The Y axis reports the coefficient values at different τ . Various colors of lines represent eight methods.
Figure 4. Estimation results for β Sex : F , β D : AML , and β Phase : CR 1 . The Y axis reports the coefficient values at different τ . Various colors of lines represent eight methods.
Figure 5. Estimation results for β Phase : CR 2 , β Phase : CR 3 and β Source : PB . The Y axis reports the coefficient values at different τ . Various colors of lines represent eight methods.
Figure 6. Estimation of F 1 . The Y axis reports the estimated values of F 1 ( t ) at different t. Various colors of lines represent eight methods.
Table 1. Bias and empirical coverage; n = 300, ρ = 0, p0 = 0.8, p1 = 0.6.

 τ    Method   Bias                                   Empirical coverage
               β1       β2       β3       β4          β1      β2      β3      β4
 0.1  cqr     −0.012   −0.003    0.003    0.003      0.952   0.953   0.953   0.958
      wcqr1   −0.027   −0.026    0.031    0.014      0.950   0.952   0.950   0.949
      wcqr2   −0.027   −0.032    0.036    0.018      0.948   0.952   0.946   0.950
 0.2  cqr      0.000   −0.015   −0.007    0.008      0.946   0.950   0.943   0.952
      wcqr1   −0.024   −0.061    0.042    0.032      0.948   0.944   0.949   0.943
      wcqr2   −0.019   −0.081    0.055    0.040      0.949   0.940   0.952   0.940
 0.3  cqr     −0.010   −0.023    0.008    0.014      0.947   0.949   0.956   0.953
      wcqr1   −0.042   −0.108    0.095    0.055      0.938   0.952   0.944   0.943
      wcqr2   −0.027   −0.131    0.104    0.065      0.939   0.949   0.934   0.941
 0.4  cqr      0.219   −1.244   −0.069    0.316      0.999   0.999   0.999   0.999
      wcqr1   −0.065   −0.203    0.168    0.121      0.947   0.943   0.931   0.937
      wcqr2    0.007   −0.153    0.111    0.095      0.949   0.939   0.941   0.922
 0.5  cqr     −5.878  −25.526  −12.526   10.139      0.975   0.957   0.967   0.953
      wcqr1   −0.313   −0.632    0.594    0.280      0.965   0.943   0.963   0.914
      wcqr2    0.144    0.034    0.041   −0.007      0.913   0.955   0.952   0.953
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

Li, E.; Pan, J.; Tang, M.; Yu, K.; Härdle, W.K.; Dai, X.; Tian, M. Weighted Competing Risks Quantile Regression Models and Variable Selection. Mathematics 2023, 11, 1295. https://doi.org/10.3390/math11061295

